Search Results

Search found 160 results on 7 pages for 'repetition'.

Page 6/7 | < Previous Page | 2 3 4 5 6 7  | Next Page >

  • Parse XML function names and call within whole assembly

    - by Matt Clarkson
    Hello all, I have written an application that unit tests our hardware via an internet browser. I have command classes in the assembly that are wrappers around individual web browser actions such as ticking a checkbox or selecting from a dropdown box, for example: BasicConfigurationCommands EventConfigurationCommands StabilizationCommands and a set of test classes that use the command classes to perform scripted tests: ConfigurationTests StabilizationTests These are then invoked via the GUI to run prescripted tests by our QA team. However, as the firmware is changed quite quickly between releases, it would be great if a developer could write an XML file that could invoke either the tests or the commands: <?xml version="1.0" encoding="UTF-8" ?> <testsuite> <StabilizationTests> <StressTest repetition="10" /> </StabilizationTests> <BasicConfigurationCommands> <SelectConfig number="2" /> <ChangeConfigProperties name="Weeeeee" timeOut="15000" delay="1000"/> <ApplyConfig /> </BasicConfigurationCommands> </testsuite> I have been looking at the System.Reflection namespace and have seen examples using GetMethod and then Invoke. This requires me to create the class object at compile time, and I would like to do all of this at runtime. I would need to scan the whole assembly for the class name and then scan for the method within the class. This seems like a large undertaking, so any information pointing me (and future readers of this post) towards an answer would be great! Thanks for reading, Matt
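
    A minimal sketch of the runtime lookup (assuming the command and test classes live in the already-loaded assembly and have parameterless constructors; mapping XML attributes like repetition="10" onto method arguments is left out):

        using System;
        using System.Linq;
        using System.Reflection;

        static class XmlTestRunner
        {
            // Find a type by its simple name anywhere in the current assembly,
            // construct it, and invoke the named method with no arguments.
            public static void Run(string className, string methodName)
            {
                Assembly asm = Assembly.GetExecutingAssembly();
                Type type = asm.GetTypes().Single(t => t.Name == className);
                object instance = Activator.CreateInstance(type);
                MethodInfo method = type.GetMethod(methodName);
                method.Invoke(instance, null);
            }
        }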

    Read the article

  • Creating/Maintaining a large project-agnostic code library

    - by bufferz
    In order to reduce repetition and streamline testing/debugging, I'm trying to find the best way to develop a group of libraries that many projects can utilize. I'd like to keep individual executables relatively small, and have shared libraries for math, database, collections, graphics, etc. that were previously scattered among several projects and in many cases duplicated (bad!). This library is to be in an SVN repo and several programmers will be working on it. This library will be in constant development along with the executables that utilize it. For example, I want a code file in ProjectA to look something like the following: using MyCompany.Math.2D; //static 2D math methods using MyCompany.Math.3D; //static 3D math methods using MyCompany.Comms.SQL; //static methods for doing simple SQL DB I/O using MyCompany.Graphics.BitmapOperations; //static methods that play with bitmaps So in my ProjectA solution file in Visual Studio, in order to develop/debug the MyCompany library I have to add several projects (Math, Comms, Graphics). Things get pretty cluttered and solution files get out of date quickly between programmer SVN commits. I'm just looking for a high level approach to maintaining a large, shared code base in an SVN repository. I am fully willing to radically redesign my approach. I'm looking for that warm fuzzy feeling you get when your design approach is spot on and development is fluid and natural. Any ideas? Thanks!!

    Read the article

  • Can PHP dissect its own syntax?

    - by Nathan Long
    Can PHP dissect its own syntax? For example, I'd like to write a function that takes in an input like $object->attribute and says to itself: OK, he's giving me $foo->bar, which means he must think that $foo is an object that has a property called bar. Before I try accessing bar and potentially get a 'Trying to get property of non-object' error, let me check whether $foo is even an object. The end goal is to echo a value if it is set, and fail silently if not. I want to avoid repetition like this: <input value="<? if(is_object($foo) && isset($foo->bar)){ echo $foo->bar; }?>" /> ...and to avoid writing a function that does the above, but has to have the object and attribute passed in separately, like this: <input value="<? echoAttribute($foo,'bar') ?>" /> ...but to instead write something which: preserves the object-attribute syntax; is flexible: can also handle array keys or regular variables. Like this: <input value="<? echoIfSet($foo->bar); ?>" /> <input value="<? echoIfSet($baz['buzz']); ?>" /> <input value="<? echoIfSet($moo); ?>" /> But this all depends on PHP being able to tell me "what kind of thing am I asking for when I say $object->attribute or $array[$key]", so that my function can handle each according to its own type. Is this possible?
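
    For what it's worth, a sketch of one workaround: an ordinary function cannot inspect the expression it was handed (only the resulting value arrives), but the error-suppression operator at the call site evaluates $foo->bar to NULL without raising the notice, so the wrapper only has to test the value it receives:

        <?php
        // Sketch: @ silences the "property of non-object" notice and yields
        // NULL, so the function just checks what it was given.
        function echoIfSet($value) {
            if ($value !== null) {
                echo $value;
            }
        }
        // usage, with @ at each call site:
        //   <input value="<? echoIfSet(@$foo->bar); ?>" />
        //   <input value="<? echoIfSet(@$baz['buzz']); ?>" />
        //   <input value="<? echoIfSet(@$moo); ?>" />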

    Read the article

  • Netlogo: error when putting a variable in a table, only constants allowed?

    - by Chantal
    Hello, currently I am working on a NetLogo program where I need to use nodes and links for a vehicle routing problem (links are called streets in the program). Here I have some practical problems with how to put the variable linkspeed into a table together with another node. Constants like 200 etc. are fine. Online I found some examples where variables are used, but I do not know why I keep getting the following error: Expected a constant. (or why NetLogo expects a constant) Here is the relevant piece of code: extensions [table] streets-own [linkspeed linktoll] nodes-own [netw] ;; In another piece of code linkspeed is assigned successfully to the links to cheapcalc ;; start conditions set costs very high 300000 ;; state 3 unsearched state 2 searching state 1 searched (for later purposes) ask nodes [ set i 0 set j count nodes set netw table:make while [i < j][ table:put netw (i) [3000000 3] set i (i + 1)]] set i 0 let k 0 ask node 35 ;; here i use node 35 as an example. ;; node 35 is connected to node 34, 36, 20 and 50 [table:put netw (35) [0 1] ;; node need to search costs to travel to itself ;; putting constants is ok. while [i < j] [ask my-links [ask both-ends [if (who != 35) [set color blue ;; set temp ([linkspeed] of street 35 who) ;; here my real goal is to put this instead of i, but i is easier than linkspeed. table:put netw (who) [ i 2 ] ] ] ] set i (i + 1)] ] ;; next node for later; for now it is just repetition of the same. end I hope somebody knows what is going on... Kind regards, Chantal
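
    A likely culprit, for the record: in NetLogo a bracketed list literal like [ i 2 ] may only contain constants, which is exactly what the "Expected a constant" message complains about. Lists holding variables are built with the list reporter instead, roughly:

        ;; sketch of the fix: build the list at runtime with `list`
        table:put netw who (list i 2)
        ;; the constant-only entries were already fine as literals, e.g.
        table:put netw i [3000000 3]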

    Read the article

  • Passing IDisposable objects through constructor chains

    - by Matt Enright
    I've got a small hierarchy of objects that in general gets constructed from data in a Stream, but for some particular subclasses, can be synthesized from a simpler argument list. In chaining the constructors from the subclasses, I'm running into an issue with ensuring the disposal of the synthesized stream that the base class constructor needs. It's not escaped me that using IDisposable objects this way is possibly just dirty pool (please advise?) for reasons I've not considered, but, this issue aside, it seems fairly straightforward (and good encapsulation). Code: abstract class Node { protected Node (Stream raw) { // calculate/generate some base class properties } } class FilesystemNode : Node { public FilesystemNode (FileStream fs) : base (fs) { // all good here; disposing of fs not our responsibility } } class CompositeNode : Node { public CompositeNode (IEnumerable some_stuff) : base (GenerateRaw (some_stuff)) { // rogue stream from GenerateRaw now loose in the wild! } static Stream GenerateRaw (IEnumerable some_stuff) { var content = new MemoryStream (); // massage elements of some_stuff into the proper format, write to stream content.Seek (0, SeekOrigin.Begin); return content; } } I realize that not disposing of a MemoryStream is not exactly a world-stopping case of bad CLR citizenship, but it still gives me the heebie-jeebies (not to mention that I may not always be using a MemoryStream for other subtypes). It's not in scope, so I can't explicitly Dispose () it later in the constructor, and adding a using statement in GenerateRaw () is self-defeating since I need the stream returned. Is there a better way to do this? Preemptive strikes: yes, the properties calculated in the Node constructor should be part of the base class, and should not be calculated by (or accessible in) the subclasses; I won't require that a stream be passed into CompositeNode (its format should be irrelevant to the caller). The previous iteration had the value calculation in the base class as a separate protected method, which I then just called at the end of each subtype constructor, and moved the body of GenerateRaw () into a using statement in the body of the CompositeNode constructor. But the repetition of requiring that call for each constructor, and not being able to guarantee that it be run for every subtype ever (a Node is not a Node, semantically, without these properties initialized), gave me heebie-jeebies far worse than the (potential) resource leak here does.
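
    One conventional way out, sketched on the post's own types (and assuming the base constructor consumes the stream fully before returning): make the stream-taking constructor private and route callers through a static factory method, which gives the using statement a scope to live in:

        class CompositeNode : Node
        {
            private CompositeNode (Stream raw) : base (raw) { }

            public static CompositeNode Create (IEnumerable some_stuff)
            {
                // the factory, not the constructor, owns the synthesized stream
                using (var content = GenerateRaw (some_stuff))
                    return new CompositeNode (content);
            }

            static Stream GenerateRaw (IEnumerable some_stuff) { /* as above */ }
        }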

    Read the article

  • Data Annotations on ViewModels or Domain Objects

    - by Ahmad
    Where would data annotations be more suitable: ViewModels, Domain Objects, or both? I am struggling to decide where these will be more suited. I have not as yet fully utilized them, but this question came to mind. From most of the examples I have seen, they are generally placed on models, using the required attributes for validation via ModelState.IsValid. I have also seen another question on SO arguing that data annotations alone are not sufficient. Option 1 - I will still need to validate again in my service layer. (I think that my service layer should be complete, and this includes validation, since it's planned to be used elsewhere.) Option 2 - How will I then get the benefits of the built-in validation, both client and server side? Option 3 - There will be repetition of validation logic; however, I was wondering if one could use a metadata class approach that can be used for both ViewModels and Domain Objects. (This is completely off the top of my head, so it may be nonsensical.) I wonder if this question even makes sense. If not, can someone please help me understand this better? Have I completely misunderstood the use of data annotations?
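
    On option 3, the "buddy class" approach is real; a sketch (type names illustrative) where one metadata class carries the rules for both the domain object and the view model, so the validation logic lives in a single place:

        using System.ComponentModel.DataAnnotations;

        public class CustomerMetadata
        {
            [Required, StringLength(50)]
            public object Name { get; set; }   // names must match the real properties
        }

        [MetadataType(typeof(CustomerMetadata))]
        public partial class Customer { }       // domain object (EF/LINQ-to-SQL partial)

        [MetadataType(typeof(CustomerMetadata))]
        public class CustomerEditViewModel      // view model validated by MVC
        {
            public string Name { get; set; }
        }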

    Read the article

  • How to stay DRY when using both Javascript and ERB templates (Rails)

    - by user94154
    I'm building a Rails app that uses Pusher to push updates over web sockets directly to the client. In javascript: channel.bind('tweet-create', function(tweet){ //when a tweet is created, execute the following code: $('#timeline').append("<div class='tweet'><div class='tweeter'>"+tweet.username+"</div>"+tweet.status+"</div>"); }); This is a nasty mixing of code and presentation. So the natural solution would be to use a javascript template. Perhaps eco or mustache: //store this somewhere convenient, perhaps in the view folder: tweet_view = "<div class='tweet'><div class='tweeter'>{{tweet.username}}</div>{{tweet.status}}</div>" channel.bind('tweet-create', function(tweet){ //when a tweet is created, execute the following code: $('#timeline').append(Mustache.to_html(tweet_view, tweet)); //much cleaner }); This is good and all, except I'm repeating myself. The mustache template is 99% identical to the ERB templates I have already written to render HTML from the server. The intended output/purpose of the mustache and ERB templates is 100% the same: to turn a tweet object into tweet HTML. What is the best way to eliminate this repetition? UPDATE: Even though I answered my own question, I really want to see other ideas/solutions from other people--hence the bounty!
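
    One commonly suggested direction, sketched (file paths and helper names are illustrative): keep a single .mustache template on disk and render it from both sides, with the mustache gem on the server and Mustache.to_html in the browser:

        # app/helpers/tweets_helper.rb -- server side of the shared template
        TWEET_TEMPLATE = File.read(
          Rails.root.join("app", "templates", "_tweet.mustache"))

        def render_tweet(tweet)
          Mustache.render(TWEET_TEMPLATE,
                          :username => tweet.username,
                          :status   => tweet.status)
        end

    The same file can then be served to (or inlined into) the page, so the client keeps calling Mustache.to_html with an identical template string.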

    Read the article

  • How do I find all paths through a set of given nodes in a DAG?

    - by Hanno Fietz
    I have a list of items (blue nodes below) which are categorized by the users of my application. The categories themselves can be grouped and categorized themselves. The resulting structure can be represented as a Directed Acyclic Graph (DAG), where the items are sinks at the bottom of the graph's topology and the top categories are sources. Note that while some of the categories might be well defined, a lot are going to be user defined and might be very messy. Example: On that structure, I want to perform the following operations: find all items (sinks) below a particular node (all items in Europe); find all paths (if any) that pass through all of a set of n nodes (all items sent via SMTP from example.com); find all nodes that lie below all of a set of nodes (intersection: goyish brown foods). The first seems quite straightforward: start at the node, follow all possible paths to the bottom and collect the items there. However, is there a faster approach? Remembering the nodes I have already passed through probably helps avoid unnecessary repetition, but are there more optimizations? How do I go about the second one? It seems that the first step would be to determine the height of each node in the set, so as to determine at which one(s) to start, and then find all paths below that which include the rest of the set. But is this the best (or even a good) approach? The graph traversal algorithms listed at Wikipedia all seem to be concerned with either finding a particular node or the shortest or otherwise most effective route between two nodes. I think neither is what I want, or did I just fail to see how this applies to my problem? Where else should I read?
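
    For the first operation, a memoized depth-first collection is the usual shape; a sketch where the graph is a plain child-list dict, so shared subgraphs are visited once:

        def sinks_below(graph, node, memo=None):
            # graph: dict node -> list of children; a sink has no outgoing edges
            if memo is None:
                memo = {}
            if node in memo:                  # shared subgraph: reuse the result
                return memo[node]
            children = graph.get(node, [])
            if not children:
                result = {node}
            else:
                result = set()
                for child in children:
                    result |= sinks_below(graph, child, memo)
            memo[node] = result
            return result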

    Read the article

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar. My design has word objects and property objects. The words and properties are stored in separate tables with a property_values table that links the two. Here's the code: from sqlalchemy import Column, Integer, String, Table, create_engine from sqlalchemy import MetaData, ForeignKey from sqlalchemy.orm import relation, mapper, sessionmaker from sqlalchemy.ext.declarative import declarative_base engine = create_engine('sqlite:///test.db', echo=True) meta = MetaData(bind=engine) property_values = Table('property_values', meta, Column('word_id', Integer, ForeignKey('words.id')), Column('property_id', Integer, ForeignKey('properties.id')), Column('value', String(20)) ) words = Table('words', meta, Column('id', Integer, primary_key=True), Column('name', String(20)), Column('freq', Integer) ) properties = Table('properties', meta, Column('id', Integer, primary_key=True), Column('name', String(20), nullable=False, unique=True) ) meta.create_all() class Word(object): def __init__(self, name, freq=1): self.name = name self.freq = freq class Property(object): def __init__(self, name): self.name = name mapper(Property, properties) Now I'd like to be able to do the following: Session = sessionmaker(bind=engine) s = Session() word = Word('foo', 42) word['bar'] = 'yes' # or word.bar = 'yes' ? s.add(word) s.commit() Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this: mapper(Word, words, properties={ 'properties': relation(Property, secondary=property_values) }) but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.
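
    The association-object pattern is probably the missing piece; a sketch against the 0.5-era API used in the post (note the property_values table needs a primary key, e.g. the word_id/property_id pair, before a class can be mapped onto it, and deduplicating existing Property rows would need the "unique object" recipe on top):

        from sqlalchemy.orm.collections import attribute_mapped_collection
        from sqlalchemy.ext.associationproxy import association_proxy

        class PropertyValue(object):
            def __init__(self, prop, value):
                self.property = prop
                self.value = value
            # the key each entry is filed under in the mapped collection
            @property
            def property_name(self):
                return self.property.name

        mapper(PropertyValue, property_values, properties={
            'property': relation(Property)})

        mapper(Word, words, properties={
            '_property_values': relation(
                PropertyValue,
                collection_class=attribute_mapped_collection('property_name'))})

        # word.properties['bar'] = 'yes' now creates the Property row and the
        # PropertyValue link in one go:
        Word.properties = association_proxy(
            '_property_values', 'value',
            creator=lambda name, value: PropertyValue(Property(name), value))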

    Read the article

  • What's the most DRY-appropriate way to execute an SQL command?

    - by Sean U
    I'm looking to figure out the best way to execute a database query using the least amount of boilerplate code. The method suggested in the SqlCommand documentation: private static void ReadOrderData(string connectionString) { string queryString = "SELECT OrderID, CustomerID FROM dbo.Orders;"; using (SqlConnection connection = new SqlConnection(connectionString)) { SqlCommand command = new SqlCommand(queryString, connection); connection.Open(); SqlDataReader reader = command.ExecuteReader(); try { while (reader.Read()) { Console.WriteLine(String.Format("{0}, {1}", reader[0], reader[1])); } } finally { reader.Close(); } } } mostly consists of code that would have to be repeated in every method that interacts with the database. I'm already in the habit of factoring out the establishment of a connection, which would yield code more like the following. (I'm also modifying it so that it returns data, in order to make the example a bit less trivial.) private SqlConnection CreateConnection() { var connection = new SqlConnection(_connectionString); connection.Open(); return connection; } private List<int> ReadOrderData() { using(var connection = CreateConnection()) using(var command = connection.CreateCommand()) { command.CommandText = "SELECT OrderID FROM dbo.Orders;"; using(var reader = command.ExecuteReader()) { var results = new List<int>(); while(reader.Read()) results.Add(reader.GetInt32(0)); return results; } } } That's an improvement, but there's still enough boilerplate to nag at me. Can this be reduced further? In particular, I'd like to do something about the first two lines of the procedure. I don't feel like the method should be in charge of creating the SqlCommand. It's a tiny piece of repetition as it is in the example, but it seems to grow if transactions are being managed manually, or timeouts are being altered, or anything like that.
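
    One further consolidation that's often used, sketched on top of the post's CreateConnection (needs using System, System.Collections.Generic and System.Data): let callers pass the SQL plus a per-row mapping delegate, and keep all the plumbing in one generic helper:

        private List<T> ExecuteList<T>(string sql, Func<IDataRecord, T> map)
        {
            using (var connection = CreateConnection())   // the helper from above
            using (var command = connection.CreateCommand())
            {
                command.CommandText = sql;
                using (var reader = command.ExecuteReader())
                {
                    var results = new List<T>();
                    while (reader.Read())
                        results.Add(map(reader));
                    return results;
                }
            }
        }

        // usage:
        // var ids = ExecuteList("SELECT OrderID FROM dbo.Orders;", r => r.GetInt32(0));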

    Read the article

  • Refactoring code/consolidating functions (e.g. nested for-loop order)

    - by bmay2
    Just a little background: I'm making a program where a user inputs a skeleton text, two numbers (lower and upper limit), and a list of words. The outputs are a series of modifications on the skeleton text. Sample inputs: text = "Player # likes @." (replace # with inputted integers and @ with words in list) lower = 1 upper = 3 list = "apples, bananas, oranges" The user can choose to iterate over numbers first: Player 1 likes apples. Player 2 likes apples. Player 3 likes apples. Or words first: Player 1 likes apples. Player 1 likes bananas. Player 1 likes oranges. I chose to split these two methods of output by creating a different type of dictionary based on either number keys (integers inputted by the user) or word keys (from words in the inputted list) and then later iterating over the values in the dictionary. Here are the two types of dictionary creation: def numkey(dict): # {1: ['Player 1 likes apples', 'Player 1 likes...' ] } text, lower, upper, list = input_sort(dict) d = {} for num in range(lower,upper+1): l = [] for i in list: l.append(text.replace('#', str(num)).replace('@', i)) d[num] = l return d def wordkey(dict): # {'apples': ['Player 1 likes apples', 'Player 2 likes apples'..] } text, lower, upper, list = input_sort(dict) d = {} for i in list: l = [] for num in range(lower,upper+1): l.append(text.replace('#', str(num)).replace('@', i)) d[i] = l return d It's fine that I have two separate functions for creating different types of dictionaries, but I see a lot of repetition between the two. Is there any way I could make one dictionary function and pass in different values to it that would change the order of the nested for loops to create the specific {key : value} pairs I'm looking for? I'm not sure how this would be done. Is there anything related to functional programming or other paradigms that might help with this? The question is a little abstract and more stylistic/design-oriented than anything.
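
    One consolidation sketch: since only the key axis differs, a single builder can branch on a flag (or, more generally, take a key-selector function) and keep one pair of comprehensions:

        def build(text, lower, upper, words, key_on="number"):
            """One builder for both dictionary shapes; key_on picks the outer axis."""
            nums = range(lower, upper + 1)
            def fill(n, w):
                return text.replace("#", str(n)).replace("@", w)
            if key_on == "number":
                return dict((n, [fill(n, w) for w in words]) for n in nums)
            return dict((w, [fill(n, w) for n in nums]) for w in words)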

    Read the article

  • Writing a synchronized thread-safety wrapper for NavigableMap

    - by polygenelubricants
    java.util.Collections currently provides the following utility methods for creating synchronized wrappers for various collection interfaces: synchronizedCollection(Collection<T> c) synchronizedList(List<T> list) synchronizedMap(Map<K,V> m) synchronizedSet(Set<T> s) synchronizedSortedMap(SortedMap<K,V> m) synchronizedSortedSet(SortedSet<T> s) Analogously, it also has 6 unmodifiableXXX overloads. The glaring omission here is the set of utility methods for NavigableMap<K,V>. It's true that it extends SortedMap, but so does SortedSet extend Set, and Set extend Collection, and Collections has dedicated utility methods for SortedSet and Set. Presumably NavigableMap is a useful abstraction, or else it wouldn't have been there in the first place, and yet there are no utility methods for it. So the questions are: Is there a specific reason why Collections doesn't provide utility methods for NavigableMap? How would you write your own synchronized wrapper for NavigableMap? Glancing at the source code for the OpenJDK version of Collections.java seems to suggest that this is just a "mechanical" process. Is it true that in general you can add a synchronized thread-safety feature like this? If it's such a mechanical process, can it be automated? (Eclipse plug-in, etc.) Is this code repetition necessary, or could it have been avoided by a different OOP design pattern?
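
    On the "can it be automated?" question, a dynamic proxy mechanizes the wrapping for any interface; a sketch (trade-offs: reflective call overhead, and views returned by the map such as entrySet() come back unwrapped, which the hand-written Collections wrappers do take care of):

        import java.lang.reflect.InvocationHandler;
        import java.lang.reflect.InvocationTargetException;
        import java.lang.reflect.Method;
        import java.lang.reflect.Proxy;
        import java.util.NavigableMap;
        import java.util.TreeMap;

        final class Sync {
            // every call on the returned view synchronizes on the target
            @SuppressWarnings("unchecked")
            static <T> T synchronizedView(final T target, Class<T> iface) {
                InvocationHandler handler = new InvocationHandler() {
                    public Object invoke(Object proxy, Method m, Object[] args)
                            throws Throwable {
                        synchronized (target) {
                            try {
                                return m.invoke(target, args);
                            } catch (InvocationTargetException e) {
                                throw e.getCause();   // rethrow the real exception
                            }
                        }
                    }
                };
                return (T) Proxy.newProxyInstance(
                        Sync.class.getClassLoader(), new Class<?>[] { iface }, handler);
            }

            public static void main(String[] args) {
                NavigableMap<String, Integer> map = Sync.synchronizedView(
                        new TreeMap<String, Integer>(), NavigableMap.class);
                map.put("a", 1);
                System.out.println(map.firstEntry());
            }
        }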

    Read the article

  • Modify columns in a data frame in R more cleanly - maybe using with() or apply()?

    - by Mittenchops
    I understand the answer in R to repetitive things is usually "apply()" rather than a loop. Is there a better R design pattern for a nasty bit of code I create frequently? So, pulling tabular data from HTML, I usually need to change the data type, and end up running something like this to convert the first column to date format (from decimal), and columns 2-4 from character strings with comma thousand separators like "2,400,000" to numeric "2400000": X[,1] <- decYY2YY(as.numeric(X[,1])) X[,2] <- as.numeric(gsub(",", "", X[,2])) X[,3] <- as.numeric(gsub(",", "", X[,3])) X[,4] <- as.numeric(gsub(",", "", X[,4])) I don't like that I have X[,number] repeated on both the left and right sides here, or that I have basically the same statement repeated for 2-4. Is there a very R-style way of making X[,2] less repetitive but still loop-free? Something that sort of says "apply this to columns 2, 3, 4: a function that reassigns the current column to a modified version in place"? I don't want to create a whole, repeatable cleaning function, really, just a quick anonymous function that does this with less repetition.
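
    A sketch of the usual idiom: a data frame is a list of columns, so lapply over the chosen columns and assign straight back in place:

        # column 1 as before; columns 2-4 in one shot, assigned back in place
        X[, 1] <- decYY2YY(as.numeric(X[, 1]))
        X[2:4] <- lapply(X[2:4], function(col) as.numeric(gsub(",", "", col)))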

    Read the article

  • Avoiding duplication in setting properties on the task in Rake tasks

    - by Stray
    I have a bunch of rake building tasks. They each have unique input / output properties, but the majority of the properties I set on the tasks are the same each time. Currently I'm doing that via simple repetition like this: task :buildThisModule => "bin/modules/thisModule.swf" mxmlc "bin/modules/thisModule.swf" do |t| t.input = "src/project/modules/ThisModule.as" t.prop1 = value1 t.prop2 = value2 ... (And many more property=value sets that are the same in each task) end task :buildThatModule => "bin/modules/thatModule.swf" mxmlc "bin/modules/thatModule.swf" do |t| t.input = "src/project/modules/ThatModule.as" t.prop1 = value1 t.prop2 = value2 ... (And many more property=value sets that are the same in each task) end In my usual programming headspace I'd expect to be able to break out the population of the recurring task properties to a re-usable function. Is there a rake analogy for this? Some way I can have a single function where the shared properties are set on any task? Something equivalent to: task :buildThisModule => "bin/modules/thisModule.swf" mxmlc "bin/modules/thisModule.swf" do |t| addCommonTaskParameters(t) t.input = "src/project/modules/ThisModule.as" end task :buildThatModule => "bin/modules/thatModule.swf" mxmlc "bin/modules/thatModule.swf" do |t| addCommonTaskParameters(t) t.input = "src/project/modules/ThatModule.as" end Thanks.
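
    For what it's worth, a Rakefile is plain Ruby, so an ordinary method does exactly this; a sketch (value1/value2 stand in for the post's shared settings):

        def add_common_task_parameters(t)
          t.prop1 = value1   # the shared property assignments go here
          t.prop2 = value2
        end

        task :buildThisModule => "bin/modules/thisModule.swf"
        mxmlc "bin/modules/thisModule.swf" do |t|
          add_common_task_parameters(t)
          t.input = "src/project/modules/ThisModule.as"
        end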

    Read the article

  • Singletons and constants

    - by devoured elysium
    I am making a program which makes use of a couple of constants. At first, each time I needed to use a constant, I'd define it as //C# private static readonly int MyConstant = xxx; //Java private static final int MyConstant = xxx; in the class where I'd need it. After some time, I started to realise that some constants would be needed in more than one class. At this point, I had 3 choices: To define them in the different classes that needed them. This leads to repetition: if for some reason I later need to change one of them, I'd have to check all the classes and replace it everywhere. To define a static class/singleton with all the constants as public. If I needed a constant X in ClassA, ClassB and ClassC, I could just define it in one place as public, and then have all three classes refer to it. This solution doesn't seem that good to me, as it introduces even more dependencies between the classes than they already have. I ended up implementing my code with the second option. Is that the best alternative? I feel I am probably missing some better alternative. What worries me about using the singleton here is that it is nowhere clear to a user of the class that this class is using the singleton. Maybe I could create a ConstantsClass that held all the constants needed, and then I'd pass it in the constructor to the classes that need it? Thanks
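
    For reference, the usual C# middle ground is a plain static constants class rather than a singleton: there is no instance to hide, and every call site names its dependency explicitly (the values below are made up):

        using System;

        public static class AppConstants
        {
            public const int MaxRetries = 3;
            public const string DateFormat = "yyyy-MM-dd";
            public static readonly TimeSpan Timeout = TimeSpan.FromSeconds(30);
        }

        // call sites name the source, so the dependency is visible:
        // for (int i = 0; i < AppConstants.MaxRetries; i++) { ... }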

    Read the article

  • Web scraping from a Google Chrome extension

    - by limoragni
    I've started to develop a Chrome extension to navigate and perform actions on a website. So far the extension is able to receive a couple of parameters, check a set of radio buttons, fill in a few inputs of a form and then submit it. What I want to do now is to repeat the process, but I'm stuck when the page is reloaded: I don't know how to make the script react when the request finishes. The workflow I want to achieve is the following (it's for automatically copying a certain object): Popup side Enter the number of the master object to copy Enter the base name of the copies (for example Mod, so that I can iterate and add mod1, mod2, ... modn) Enter the number of copies Background side Select master Select standard options Fill in inputs Submit form Wait for the page to complete the request and continue to the next copy (here I need help) The problem is the repetition; the rest is taken care of. I assume there must be a way of dealing with requests. Any ideas? By the way, I'm doing it all with the extension and tabs methods of Google Chrome, plus JavaScript and jQuery.
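
    For the "wait for the reload" step, a background-page sketch (Manifest V2-era API; workingTabId, the counters and startNextCopy are hypothetical state kept elsewhere in the extension):

        // fires every time a tab changes state; "complete" means the load finished
        chrome.tabs.onUpdated.addListener(function (tabId, changeInfo, tab) {
          if (tabId === workingTabId && changeInfo.status === "complete") {
            if (copiesDone < copiesWanted) {
              copiesDone += 1;
              startNextCopy(tab);   // fill the form and submit again
            }
          }
        });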

    Read the article

  • Subset a complete or balanced dataset in R

    - by SHRram
    I have a dataset with an unequal number of repetitions. I want to subset the data by removing those entries that are incomplete (i.e. with replication less than the maximum). Just a small example: set.seed(123) mydt <- data.frame (name= rep ( c("A", "B", "C", "D", "E"), c(1,2,4,4, 3)), var1 = rnorm (14, 3,1), var2 = rnorm (14, 4,1)) mydt name var1 var2 1 A 2.439524 3.444159 2 B 2.769823 5.786913 3 B 4.558708 4.497850 4 C 3.070508 2.033383 5 C 3.129288 4.701356 6 C 4.715065 3.527209 7 C 3.460916 2.932176 8 D 1.734939 3.782025 9 D 2.313147 2.973996 10 D 2.554338 3.271109 11 D 4.224082 3.374961 12 E 3.359814 2.313307 13 E 3.400771 4.837787 14 E 3.110683 4.153373 summary(mydt) name var1 var2 A:1 Min. :1.735 Min. :2.033 B:2 1st Qu.:2.608 1st Qu.:3.048 C:4 Median :3.120 Median :3.486 D:4 Mean :3.203 Mean :3.688 E:3 3rd Qu.:3.446 3rd Qu.:4.412 Max. :4.715 Max. :5.787 I want to get rid of A, B and E from the data, as they are incomplete. Thus the expected output: name var1 var2 4 C 3.070508 2.033383 5 C 3.129288 4.701356 6 C 4.715065 3.527209 7 C 3.460916 2.932176 8 D 1.734939 3.782025 9 D 2.313147 2.973996 10 D 2.554338 3.271109 11 D 4.224082 3.374961 Please note the dataset is big, so the following may not be an option: mydt[mydt$name == "C",] mydt[mydt$name == "D", ]
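
    A sketch that avoids naming the groups by hand: count the replications per name and keep only the groups at the maximum:

        counts <- table(mydt$name)
        keep   <- names(counts)[counts == max(counts)]
        out    <- droplevels(subset(mydt, name %in% keep))
        out    # only C and D remain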

    Read the article

  • What presentation software suits my needs?

    - by claws
    Background: I'm teaching biology to 12th grade students. The syllabus I'm teaching is huge. I mean literally, very huge. There is a lot for students to remember: no fewer than 1000 facts (weird names, dates etc.). They'll have to remember all of them; they don't have a choice. The notes I compiled for their learning run up to 80 printed pages (just the bullet outline & facts), and that's just one chapter. We have 34 chapters. Also, my students are very hardworking; they study up to 8-10 hrs per day (yeah! we are from India :). So, I want to ensure maximum retention by the students at each and every stage (teaching & learning). I'm trying to use as many memory training techniques as possible: mnemonics, strong visual aids (pictures, 3D animations, real videos etc.), spaced repetition, and so on. I think MS PowerPoint is not suitable for my needs: there are about 200 slides per chapter, and it's very easy for students to get lost while teaching, because PowerPoint gives the facts (as bullets) but doesn't exploit the association & organization (concept map) of the content, which helps students learn quickly. I found an amazing piece of software called XMind. You can see the screenshot here. The problem is that it is not as capable as PowerPoint when it comes to presenting; it can really only be used for concept maps. In the above screenshot, each topic occupies a single slide. I have an image (a detailed, huge picture), about 5-10 bullet points, and probably a video or an animation of some things. And XMind is not good at presenting, in the sense that it doesn't allow me to set what to present after what. I want to present a top-down view, with a slide for each topic. PS: I don't like prezi.com. I tried it but it's simply too confusing for my students; it zooms here and there. I haven't used it much myself, but I've seen a few presentations.

    Read the article

  • What Counts For a DBA: Fitness

    - by Louis Davidson
    If you know me, you can probably guess that physical exercise is not really my thing. There was a time in my past when it was a larger part of my life, but even then never in the same sort of passionate way as for a number of our SQL friends. For me, I find that mental exercise satisfies what I believe to be the same inner need that drives people to run farther than I like to drive on most Saturday mornings, and it is certainly just as addictive. Mental fitness shares many common traits with physical fitness, especially the need to attain it through repetitive training. I only wish that mental training burned off a bacon cheeseburger in the same manner as does jogging around a dewy park on Saturday morning. In physical training, there are at least two goals, the first of which is to be physically able to do a task. The second is to train the brain to perform the task without thinking too hard about it. No matter how long it has been since you last rode a bike, you will almost certainly be able to hop on and start riding without thinking about the process of pedaling or balancing. If you've never ridden a bike, you could be a physics professor/Olympic athlete and still crash the first few times you try, even though you are as strong as an ox and your knowledge of the physics of bicycle riding makes the concept child's play. For programming tasks, the process is very similar. As a DBA, you will come to know intuitively how to back up, optimize, and secure database systems. As a data programmer, you will work to instinctively use the clauses of Transact-SQL DML so that, when you need to group data three ways (and not four), you will know to use the GROUP BY clause with GROUPING SETS without resorting to a search engine. You have the skill. Making it natural then requires repetitive training; experience is the primary requirement, not simply learning about a topic. The hardest part of being really good at something is this difference between knowledge and skill. I have recently taken several informative training classes with Kimball University on data warehousing and ETL. Now I have a lot more knowledge about designing data warehouses than before. I have also done a good bit of data warehouse designing of late and have started to improve to some level of proficiency with the theory. Yet, for all of this head knowledge, it is still a struggle to take what I have learned and apply it to the designs I am working on. Data warehousing is still a task that is not yet deeply ingrained in my brain muscle memory. On the other hand, relational database design is something that, no matter how much or how little I may get to do it, I am comfortable doing. I have done it as a profession now for well over a decade, I teach classes on it, and I also have done (and continue to do) a lot of mental training beyond the work day. Sometimes the training is just basic education: reading blogs and attending sessions at PASS events. My best training comes from spending time working on other people's design issues in forums (though not nearly as much as I would like to lately). Working through other people's problems is a great way to exercise your brain on problems with which you're not immediately familiar. The final bit of exercise I find useful for cultivating mental fitness for a data professional is also probably the nerdiest thing that I will ever suggest you do. Akin to running in place, the idea is to work through designs in your head.
I have designed more than one database system that would revolutionize grocery store operations, sales at my local Target store, the ordering process at Amazon, and ways to improve Disney World operations to get me through a line faster (some of which they are starting to implement without any of my help.) Never are the designs truly fleshed out, but enough to work through structures and processes.  On “paper”, I have designed database systems to catalog things as trivial as my Lego creations, rental car companies and my audio and video collections. Once I get the database designed mentally, sometimes I will create the database, add some data (often using Red-Gate’s Data Generator), and write a few queries to see if a concept was realistic, but I will rarely fully flesh out the database since I have no desire to do any user interface programming anymore.  The mental training allows me to keep in practice for when the time comes to do the work I love the most for real…even if I have been spending most of my work time lately building data warehouses.  If you are really strong of mind and body, perhaps you can mix a mental run with a physical run; though don’t run off of a cliff while contemplating how you might design a database to catalog the trees on a mountain…that would be contradictory to the purpose of both types of exercise.

    Read the article

  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength, I'm trying to create an algorithm that will provide a rough estimation of how much entropy a password has. I'm trying to create an algorithm that's as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following: password length repeated characters patterns (logical) different character spaces (LC, UC, Numeric, Special, Extended) dictionary attacks It does NOT cover the following, and SHOULD cover it WELL (though not perfectly): ordering (passwords can be strictly ordered by output of this algorithm) patterns (spatial) Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue. The algorithm: // the password to test password = ? length = length(password) // unique character counts from password (duplicates discarded) uqlca = number of unique lowercase alphabetic characters in password uquca = number of unique uppercase alphabetic characters uqd = number of unique digits uqsp = number of unique special characters (anything with a key on the keyboard) uqxc = number of unique extended special characters (alt codes, extended-ascii stuff) // algorithm parameters, total sizes of alphabet spaces Nlca = total possible number of lowercase letters (26) Nuca = total uppercase letters (26) Nd = total digits (10) Nsp = total special characters (32 or something) Nxc = total extended ascii characters that don't fit into other categories (idk, 50?) // algorithm parameters, pw strength growth rates as percentages (per character) flca = entropy growth factor for lowercase letters (.25 is probably a good value) fuca = EGF for uppercase letters (.4 is probably good) fd = EGF for digits (.4 is probably good) fsp = EGF for special chars (.5 is probably good) fxc = EGF for extended ascii chars (.75 is probably good) // repetition factors. few unique letters == low factor, many unique == high rflca = (1 - (1 - flca) ^ uqlca) rfuca = (1 - (1 - fuca) ^ uquca) rfd = (1 - (1 - fd ) ^ uqd ) rfsp = (1 - (1 - fsp ) ^ uqsp ) rfxc = (1 - (1 - fxc ) ^ uqxc ) // digit strengths strength = ( rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc ) ^ length entropybits = log_base_2(strength) A few inputs and their desired and actual entropy_bits outputs: INPUT DESIRED ACTUAL aaa very pathetic 8.1 aaaaaaaaa pathetic 24.7 abcdefghi weak 31.2 H0ley$Mol3y_ strong 72.2 s^fU¬5ü;y34G< wtf 88.9 [a^36]* pathetic 97.2 [a^20]A[a^15]* strong 146.8 xkcd1** medium 79.3 xkcd2** wtf 160.5 * these 2 passwords use shortened notation, where [a^N] expands to N a's. ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple" The algorithm does realize (correctly) that increasing the alphabet size (even by one digit) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which both consist of 36 a's, but the second's 21st a is capitalized. However, it does not account for the fact that having a password of 36 a's is not a good idea: it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm?
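
    A direct transcription of the pseudocode, for anyone who wants to poke at it (a sketch using the post's suggested parameter values; the ACTUAL column above was evidently produced with slightly tweaked parameters, so outputs will differ a little, e.g. 24.3 rather than 24.7 for nine a's):

        import math, string

        def entropy_bits(password):
            # (character set, alphabet size N, per-character growth factor f)
            classes = [
                (set(string.ascii_lowercase), 26, 0.25),
                (set(string.ascii_uppercase), 26, 0.40),
                (set(string.digits),          10, 0.40),
                (set(string.punctuation),     32, 0.50),
            ]
            seen = set(password)
            strength = 0.0
            for chars, n, f in classes:
                uq = len(seen & chars)
                strength += (1 - (1 - f) ** uq) * n          # rf * N
            # anything not caught above counts as an extended character
            extended = len(seen - set().union(*(c for c, _, _ in classes)))
            strength += (1 - (1 - 0.75) ** extended) * 50    # fxc = .75, Nxc = 50
            return len(password) * math.log(strength, 2) if strength else 0.0

        print(entropy_bits("aaa"))  # ~8.1, matching the first row of the table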
Addendum 1 Dictionary attacks and pattern based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace words with tokens unique to the words they represent. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights. This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be, for each non-perfect character match, give the word a bonus bit. I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between each character), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern. At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters they symbolically represented. I made up some sort of pattern notation, but it includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern. I'd do something better in actual code. Modified Example: Password: 1234kitty$$$$$herpderp Tokenized: 1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p Words Filtered: 1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002 Patterns Filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002 Breakdown: 3 small, unique words and 2 patterns Entropy: about 45 bits, as per modified algorithm Password: correcthorsebatterystaple Tokenized: c o r r e c t h o r s e b a t t e r y s t a p l e Words Filtered: @W6783 @W7923 @W1535 @W2285 Breakdown: 4 small, unique words and no patterns Entropy: 43 bits, as per modified algorithm The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like: entropy(b) * l * (o + 1) // o will be either zero or one The modified algorithm would find flaws with and reduce the strength of each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.

    Read the article

  • Prevent recursive CTE visiting nodes multiple times

    - by bacar
    Consider the following simple DAG: 1->2->3->4 And a table, #bar, describing this (I'm using SQL Server 2005): parent_id child_id 1 2 2 3 3 4 //... other edges, not connected to the subgraph above Now imagine that I have some other arbitrary criteria that select the first and last edges, i.e. 1-2 and 3-4. I want to use these to find the rest of my graph. I can write a recursive CTE as follows (I'm using terminology from MSDN): with foo(parent_id,child_id) as ( // anchor member that happens to select first and last edges: select parent_id,child_id from #bar where parent_id in (1,3) union all // recursive member: select #bar.* from #bar join foo on #bar.parent_id = foo.child_id ) select parent_id,child_id from foo However, this results in edge 3-4 being selected twice: parent_id child_id 1 2 3 4 2 3 3 4 // 2nd appearance! How can I prevent the query from recursing into subgraphs that have already been described? I could achieve this if, in my "recursive member" part of the query, I could reference all data that has been retrieved by the recursive CTE so far (and supply a predicate in the recursive member excluding nodes already visited). However, I think I can access data that was returned by the last iteration of the recursive member only. This will not scale well when there is a lot of such repetition. Is there a way of preventing this unnecessary additional recursion? Note that I could use "select distinct" in the last line of my statement to achieve the desired results, but this seems to be applied after all the (repeated) recursion is done, so I don't think this is an ideal solution. Edit - hainstech suggests stopping recursion by adding a predicate to exclude recursing down paths that were explicitly in the starting set, i.e. recurse only where foo.child_id not in (1,3). That works for the case above only because it is simple - all the repeated sections begin within the anchor set of nodes. It doesn't solve the general case where they may not be. e.g., consider adding edges 1-4 and 4-5 to the above set. Edge 4-5 will be captured twice, even with the suggested predicate. :(
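
    The usual workaround, sketched: a recursive member can only see the previous level, never rows produced on other branches, so a global visited check needs an explicit level-by-level loop against an "already seen" table:

        declare @reached table (
            parent_id int, child_id int,
            primary key (parent_id, child_id));

        insert @reached
        select parent_id, child_id from #bar where parent_id in (1, 3);

        declare @rc int;
        set @rc = @@rowcount;

        while @rc > 0
        begin
            -- expand the frontier one level, skipping edges already collected
            insert @reached
            select distinct b.parent_id, b.child_id
            from #bar b
            join @reached r on b.parent_id = r.child_id
            where not exists (select * from @reached x
                              where x.parent_id = b.parent_id
                                and x.child_id  = b.child_id);
            set @rc = @@rowcount;
        end

        select parent_id, child_id from @reached;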

    Read the article

  • ASP.NET MVC ViewModel Pattern

    - by Omu
    EDIT: I made something much better to fill and read data from a view using ViewModels, called the ValueInjecter. http://valueinjecter.codeplex.com/documentation Using the ViewModel to store the mapping logic was not such a good idea, because there was repetition and an SRP violation, but now with the ValueInjecter I have clean ViewModels and DRY mapping code. I made a ViewModel pattern for editing stuff in asp.net mvc. This pattern is useful when you have to make a form for editing an entity and you have to put on the form some dropdowns for the user to choose some values: public class OrganisationViewModel { //parameterless constructor required, cuz we are gonna get an OrganisationViewModel object from the form in the post save method public OrganisationViewModel() : this(new Organisation()) {} public OrganisationViewModel(Organisation o) { Organisation = o; Country = new SelectList(LookupFacade.Country.GetAll(), "ID", "Description", CountryKey); } //that's the Type for which i create the viewmodel public Organisation Organisation { get; set; } #region DropDowns //for each dropdown i have an int? Key that stores the selected value public IEnumerable<SelectListItem> Country { get; set; } public int? CountryKey { get { if (Organisation.Country != null) { return Organisation.Country.ID; } return null; } set { if (value.HasValue) { Organisation.Country = LookupFacade.Country.Get(value.Value); } } } #endregion } and that's how i use it public ViewResult Edit(int id) { var model = new OrganisationViewModel(organisationRepository.Get(id)); return View(model); } [AcceptVerbs(HttpVerbs.Post)] public ActionResult Edit(OrganisationViewModel model) { organisationRepository.SaveOrUpdate(model.Organisation); return RedirectToAction("Index"); } and the markup <p> <label for="Name"> Name:</label> <%= Html.Hidden("Organisation.ID", Model.Organisation.ID)%> <%= Html.TextBox("Organisation.Name", Model.Organisation.Name)%> <%= Html.ValidationMessage("Organisation.Name", "*")%> </p> <p> ... <label for="CountryKey"> Country:</label> <%= Html.DropDownList("CountryKey", Model.Country, "please select") %> <%= Html.ValidationMessage("CountryKey", "*") %> </p> so tell me what you think about it

    Read the article

  • WPF customer control and direct content support

    - by Mmarquee
    I am fairly new to WPF, and am a bit stuck, so any help would be appreciated. I am trying to write a WPF custom control that encapsulates several elements of functionality that I already have working (i.e. sorting, filtering, standard menus, etc.), but in a nice neat package to avoid repetition. Anyway, I have created the custom control (based on Control), and then have the following in the Generic.xaml: <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" xmlns:local="clr-namespace:Controls.ListViewExtended"> <Style TargetType="{x:Type local:ListViewExtended}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type local:ListViewExtended}"> <Border Background="{TemplateBinding Background}" BorderBrush="{TemplateBinding BorderBrush}" BorderThickness="{TemplateBinding BorderThickness}"> <ListView> <ListView.View> <GridView> <!-- Content goes here --> </GridView> </ListView.View> </ListView> </Border> </ControlTemplate> </Setter.Value> </Setter> </Style> </ResourceDictionary> When I try to add GridViewColumns (or any control really), as below ... <elv:ListViewExtended> <GridView> <GridViewColumn Width="140" Header="Column 1" /> <GridViewColumn Width="140" Header="Column 2" /> <GridViewColumn Width="140" Header="Column 3" /> </GridView> </elv:ListViewExtended> I get the "... does not support direct content" error. I have created a dependency property (again below) that allows the adding of a GridView, but it still doesn't work. public static DependencyProperty GridViewProperty; public static string GridViewHeader(DependencyObject target) { return (string)target.GetValue(GridViewProperty); } public static void GridViewHeader(DependencyObject target, string value) { target.SetValue(GridViewProperty, value); } Thanks in advance
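
    For the "does not support direct content" error specifically, a sketch of the standard mechanism (property names illustrative): declare a real collection property on the control and point ContentProperty at it, so child elements in markup have somewhere to land:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Markup;

        [ContentProperty("Columns")]
        public class ListViewExtended : Control
        {
            public static readonly DependencyProperty ColumnsProperty =
                DependencyProperty.Register("Columns",
                    typeof(GridViewColumnCollection), typeof(ListViewExtended));

            public GridViewColumnCollection Columns
            {
                get { return (GridViewColumnCollection)GetValue(ColumnsProperty); }
                set { SetValue(ColumnsProperty, value); }
            }

            public ListViewExtended()
            {
                // per-instance collection so markup children have somewhere to go
                Columns = new GridViewColumnCollection();
            }
        }

    The columns still have to reach the templated GridView; since GridView.Columns is not a dependency property, that last hop is usually done by copying them across in OnApplyTemplate rather than with a TemplateBinding.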

    Read the article

  • ContentPlaceHolders: Repeated Content

    - by brad
    Scenario I have an application using asp.net Master Pages in which I would like to repeat some content at the top and bottom of a page. Currently i use something like this: Master Page <html> <body> <asp:ContentPlaceHolder ID="Foo" runat="server"> </asp:ContentPlaceHolder> <!-- page content --> <asp:ContentPlaceHolder ID="Bar" runat="server"> </asp:ContentPlaceHolder> </body> </html> Content Page <asp:Content ID="Top" ContentPlaceHolderID="Foo" runat="server"> <!-- content --> </asp:Content> <asp:Content ID="Bottom" ContentPlaceHolderID="Bar" runat="server"> <!-- content repeated --> </asp:Content> Maintenance As you know, repeating things in code is usually not good. It creates maintenance problems. The following is what I would like to do but will obviously not work because of the repeated id attribute: Master Page <html> <body> <asp:ContentPlaceHolder ID="Foo" runat="server"> </asp:ContentPlaceHolder> <!-- page content --> <asp:ContentPlaceHolder ID="Foo" runat="server"> </asp:ContentPlaceHolder> </body> </html> Content Page <asp:Content ID="Top" ContentPlaceHolderID="Foo" runat="server"> <!-- content (no repetition) --> </asp:Content> Possible? Is there a way to do this using asp.net webforms? The solution does not necessarily have to resemble the above content, it just needs to work the same way. Notes I am using asp.net 3.0 in Visual Studio 2008
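
    One low-tech way to get the single-definition effect, sketched: keep both placeholders in the master, but let the content page feed them from a single user control, so the repeated markup lives in one .ascx (paths and names illustrative):

        <%@ Register Src="~/Controls/SharedBlock.ascx" TagPrefix="uc" TagName="SharedBlock" %>

        <asp:Content ID="Top" ContentPlaceHolderID="Foo" runat="server">
            <uc:SharedBlock ID="SharedTop" runat="server" />
        </asp:Content>
        <asp:Content ID="Bottom" ContentPlaceHolderID="Bar" runat="server">
            <uc:SharedBlock ID="SharedBottom" runat="server" />
        </asp:Content>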

    Read the article

  • TestNG - Factories and Dataproviders

    - by Tim K
    Background Story I'm working at a software firm developing a test automation framework to replace our old spaghetti-tangled system. Since our system requires a login for almost everything we do, I decided it would be best to use @BeforeMethod, @DataProvider, and @Factory to set up my tests. However, I've run into some issues. Sample Test Case Let's say the software system is a baseball team roster. We want to test to make sure a user can search for a team member by name. (Note: I'm aware that BeforeMethods don't run in any given order -- assume that's been taken care of for now.) @BeforeMethod public void setupSelenium() { // login with username & password // acknowledge announcements // navigate to search page } @Test(dataProvider="players") public void testSearch(String playerName, String searchTerm) { // search for "searchTerm" // browse through results // pass if we find playerName // fail (Didn't find the player) } This test case assumes the following: The user has already logged on (in a BeforeMethod, most likely) The user has already navigated to the search page (trivial, before method) The parameters to the test are associated with the aforementioned login The Problems So let's try to figure out how to handle the parameters for the test case. Idea #1 This method allows us to associate dataproviders with usernames, and lets us use multiple users for any specific test case! @Test(dataProvider="players") public void testSearch(String user, String pass, String name, String search) { // login with user/pass // acknowledge announcements // navigate to search page // ... } ...but there's lots of repetition, as we have to make EVERY function accept two extra parameters. Not to mention, we're also testing the acknowledge announcements feature, which we don't actually want to test. Idea #2 So let's use the factory to initialize things properly! class BaseTestCase { public BaseTestCase(String user, String password, Object[][] data); } class SomeTest { @Factory public void ... } With this, we end up having to write one factory per test case... although it does let us have multiple users per test case. Conclusion I'm about fresh out of ideas. There was another idea I had where I was loading data from an XML file and then calling the methods from a program... but it's getting silly. Any ideas?
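
    A sketch combining the two ideas (names illustrative): put @Factory on the constructor with its own static data provider, so each instance carries one user's login state while the per-test providers stay clean and need no extra parameters:

        import org.testng.annotations.BeforeMethod;
        import org.testng.annotations.DataProvider;
        import org.testng.annotations.Factory;
        import org.testng.annotations.Test;

        public class SearchTests {
            private final String user, pass;

            // TestNG instantiates one SearchTests per row of "users"
            @Factory(dataProvider = "users")
            public SearchTests(String user, String pass) {
                this.user = user;
                this.pass = pass;
            }

            // must be static: it runs before any instance exists
            @DataProvider(name = "users")
            public static Object[][] users() {
                return new Object[][] { { "qa1", "secret" }, { "qa2", "secret" } };
            }

            @BeforeMethod
            public void setupSelenium() {
                // login as this.user / this.pass, acknowledge announcements,
                // navigate to the search page
            }

            @DataProvider(name = "players")
            public Object[][] players() {
                return new Object[][] { { "Babe Ruth", "ru" } };
            }

            @Test(dataProvider = "players")
            public void testSearch(String playerName, String searchTerm) {
                // search for searchTerm; pass if playerName is in the results
            }
        }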

    Read the article

< Previous Page | 2 3 4 5 6 7  | Next Page >