Search Results

Search found 5770 results on 231 pages for 'sense hofstede'.


  • Number Algorithm

    - by James
    I've been struggling to wrap my head around this for some reason. I have 15 bits that represent a number. The bits must match a pattern. The pattern is defined by the way the bits start out: they are in the most flush-right representation of that pattern. So say the pattern is 1 4 1. The bits will be:

        000000010111101

    So the general rule is: take each number in the pattern, create that many bits (1, 4 or 1 in this case), and have at least one space separating them. So if it's 1 2 6 1 (it will be random):

        001011011111101

    Starting with the flush-right version, I want to generate every single possible number that meets that pattern. The number of bits will be stored in a variable. So for a simple case, assume it's 5 bits and the initial bit pattern is 00101. I want to generate:

        00101
        01001
        01010
        10001
        10010
        10100

    I'm trying to do this in Objective-C, but anything resembling C would be fine. I just can't seem to come up with a good recursive algorithm for this. It makes sense in the above example, but when I start getting into 12431 and having to keep track of everything, it breaks down.
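
    A hedged sketch, not part of the original question: one way to generate all placements is to recurse over the runs of 1s, trying every legal start position for the current run while leaving at least one 0 plus enough room for the runs that remain. The names (place_runs, pattern, N_BITS) are illustrative; for the 5-bit case it prints the six strings above, though not necessarily flush-right first.

        #include <stdio.h>

        #define N_BITS 5

        static int pattern[] = {1, 1};   /* run lengths; the 15-bit case would use e.g. {1, 4, 1} */
        static int n_runs = 2;

        static void print_bits(unsigned v)
        {
            for (int i = N_BITS - 1; i >= 0; i--)
                putchar(((v >> i) & 1) ? '1' : '0');
            putchar('\n');
        }

        /* acc holds the bits placed so far, pos is the leftmost free bit index
           (counting from the most significant end), run is the next run to place */
        static void place_runs(unsigned acc, int pos, int run)
        {
            if (run == n_runs) {
                print_bits(acc);
                return;
            }

            /* room still needed: this run, the remaining runs, and one 0 between each pair */
            int need = 0;
            for (int i = run; i < n_runs; i++)
                need += pattern[i] + (i > run ? 1 : 0);

            /* try every start position that still leaves enough room to the right */
            for (int start = pos; start + need <= N_BITS; start++) {
                unsigned run_bits = ((1u << pattern[run]) - 1)
                                    << (N_BITS - start - pattern[run]);
                place_runs(acc | run_bits, start + pattern[run] + 1, run + 1);
            }
        }

        int main(void)
        {
            place_runs(0, 0, 0);
            return 0;
        }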


  • NSFetchedResultsController secondary sorting not working?

    - by binkly
    Hello, I'm using NSFetchedResultsController to fetch Message entities with Core Data. I want my tableview to group the messages by an attribute called ownerName. This works fine in the sense that I get a proper UI displaying appropriately named sections. I can achieve this with the code below:

        NSSortDescriptor *sortDescriptor = [[NSSortDescriptor alloc] initWithKey:@"ownerName" ascending:NO];
        NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:sortDescriptor, nil];
        [fetchRequest setSortDescriptors:sortDescriptors];

        NSFetchedResultsController *fetchedResultsController = nil;
        fetchedResultsController = [[NSFetchedResultsController alloc]
            initWithFetchRequest:fetchRequest
            managedObjectContext:[appDelegate managedObjectContext]
            sectionNameKeyPath:@"ownerName"
            cacheName:@"Messages"];

    What I'd like to do now is have the section with the newest message at the top, exactly how it works with the Messages app on iPhone currently. I've tried adding a second sort descriptor, putting it in an array of sort descriptors and passing that to the fetchRequest, but it doesn't appear to be affecting anything. Here is the 2nd sort descriptor that I used:

        NSSortDescriptor *idDescriptor = [[NSSortDescriptor alloc] initWithKey:@"createdAt" ascending:YES];

    If anyone could provide some insight into this I would greatly appreciate it.


  • model view controller question

    - by songBong
    Hi, I've been working on my iPhone game recently and came across a forked road when deciding the design of my various classes. So far I've adhered to the MVC pattern, but the following situation had me confused: I have 4 buttons displayed visually. Each button, though, consists of a container UIView (which I've subclassed) and 2 UIButtons (also subclassed) as subviews. When you press a button, it does the flip effect plus other stuff. The user input is using target-action from my container UIView to my controller. This part is OK; the following part is the debatable part: I've subclassed the container view as well as the UIButtons, and I need to add more data/methods (somewhere) to do more things. Putting data that needs to be serialized, and non-rendering-related code, in the view classes seems to break the MVC design, but at the moment it makes the most sense to me to put it there. It's almost like my subclassed views are their own little MVCs, and it seems neat. Separating out the data/methods from the view into my main controller in this case seems unnecessary and more work. How should I be doing it? Thanks heaps.


  • ShSetFolderPath works on win7, doesn't on XP

    - by pipboy3k
    Hey. I'm trying to use the SHSetFolderPath function in C#. I work on Win7, and I've managed to use SHSetKnownFolderPath, which works fine. Since that function is unavailable on WinXP, I tried to P/Invoke SHSetFolderPath. Because I'm not familiar with P/Invoke, I've done some searching and found something on a French forum. I don't speak French, but this declaration makes sense (as described in the Remarks of the function's documentation in the MSDN library):

        [DllImport("Shell32.dll", CharSet = CharSet.Unicode, EntryPoint = "#232")]
        private static extern int SHSetFolderPath(int csidl, IntPtr hToken, uint flags, string path);

    I call it like that:

        private static int CSIDL_DESKTOP = 0x0000;

        public static void SetDesktopPath(string path)
        {
            int ret;
            ret = SHSetFolderPath(CSIDL_DESKTOP, IntPtr.Zero, 0, path);
            if (ret != 0)
            {
                Console.WriteLine(ret);
                Console.WriteLine(Marshal.GetExceptionForHR(ret));
            }
        }

    It works on Win7, but on XP the function returns -2147024809, which means "Value does not fall within the expected range". My guess is that something is wrong with the DLL import. Any idea?


  • What would be the time complexity of counting the number of all structurally different binary trees?

    - by ktslwy
    Using the method presented here: http://cslibrary.stanford.edu/110/BinaryTrees.html#java (12. countTrees() Solution (Java)):

        /**
         For the key values 1...numKeys, how many structurally unique
         binary search trees are possible that store those keys?
         Strategy: consider that each value could be the root.
         Recursively find the size of the left and right subtrees.
        */
        public static int countTrees(int numKeys) {
            if (numKeys <= 1) {
                return(1);
            }
            else {
                // there will be one value at the root, with whatever remains
                // on the left and right each forming their own subtrees.
                // Iterate through all the values that could be the root...
                int sum = 0;
                int left, right, root;
                for (root = 1; root <= numKeys; root++) {
                    left = countTrees(root - 1);
                    right = countTrees(numKeys - root);
                    // number of possible trees with this root == left*right
                    sum += left * right;
                }
                return(sum);
            }
        }

    I have a sense that it might be n(n-1)(n-2)...1, i.e. n!
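
    A hedged note, not part of the original question: one way to pin the complexity down is to count the calls the naive recursion makes. Writing f(n) for the number of calls triggered by countTrees(n), the loop makes two recursive calls per candidate root, and the recurrence telescopes to an exponential (but not factorial) bound; the value returned is the n-th Catalan number:

        f(n) = 1 + \sum_{r=1}^{n}\bigl(f(r-1) + f(n-r)\bigr) = 1 + 2\sum_{i=0}^{n-1} f(i), \qquad f(0)=f(1)=1

        f(n) - f(n-1) = 2\,f(n-1) \;\Longrightarrow\; f(n) = 3\,f(n-1) \quad (n \ge 3), \qquad f(n) = \Theta(3^n)

        \text{countTrees}(n) = C_n = \frac{1}{n+1}\binom{2n}{n} \quad\text{(the n-th Catalan number)}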


  • How can I improve this design?

    - by klausbyskov
    Let's assume that our system can perform actions, and that an action requires some parameters to do its work. I have defined the following base class for all actions (simplified for your reading pleasure):

        public abstract class BaseBusinessAction<TActionParameters>
            where TActionParameters : IActionParameters
        {
            protected BaseBusinessAction(TActionParameters actionParameters)
            {
                if (actionParameters == null)
                    throw new ArgumentNullException("actionParameters");

                this.Parameters = actionParameters;

                if (!ParametersAreValid())
                    throw new ArgumentException("Valid parameters must be supplied", "actionParameters");
            }

            protected TActionParameters Parameters { get; private set; }

            protected abstract bool ParametersAreValid();

            public void CommonMethod() { ... }
        }

    Only a concrete implementation of BaseBusinessAction knows how to validate that the parameters passed to it are valid, and therefore ParametersAreValid is an abstract method. However, I want the base class constructor to enforce that the parameters passed are always valid, so I've added a call to ParametersAreValid to the constructor and I throw an exception when the method returns false. So far so good, right? Well, no. Code analysis is telling me to "not call overridable methods in constructors", which actually makes a lot of sense, because when the base class's constructor is called the child class's constructor has not yet run, and therefore ParametersAreValid may not have access to some critical member variable that the child class's constructor would set. So the question is this: how do I improve this design? Do I add a Func<bool, TActionParameters> parameter to the base class constructor? If I did:

        public class MyAction<MyParameters>
        {
            public MyAction(MyParameters actionParameters, bool something)
                : base(actionParameters, ValidateIt)
            {
                this.something = something;
            }

            private bool something;

            public static bool ValidateIt()
            {
                return something;
            }
        }

    This would work because ValidateIt is static, but I don't know... Is there a better way? Comments are very welcome.


  • show/hide a link if certain jquery ui tab is selected.

    - by Mark
    When #my-text-link is clicked, I need to select tab 5, and when tab 5 is selected I need to hide #my-text-link. Hope this makes sense. Here's the code and what I have done so far; please feel free to show me a better way. Thanks in advance.

        var $tabs = $('.tabbed').tabs(); // first tab selected

        $('#my-text-link').click(function() { // bind click event to link
            $tabs.tabs('select', 4); // switch to fifth tab (index 4)
            $('#my-text-link').hide();
            return false;
        });

        <a href="#" id="my-text-link"></a>

        <ul>
            <li class="one"><a href="#tabs-1" title="Summary"></a></li>
            <li class="two"><a href="#tabs-2" title="Detailed Info"></a></li>
            <li class="three"><a href="#tabs-3" title="Images"></a></li>
            <li class="four"><a href="#tabs-4" title="Reviews"></a></li>
            <li class="five"><a href="#tabs-5" title="Dates &amp; Prices"></a></li>
        </ul>

        <div id="tabs-1"></div>
        <div id="tabs-2"></div>
        <div id="tabs-3"></div>
        <div id="tabs-4"></div>
        <div id="tabs-5"></div>


  • backbone marionette - displaying a view in a layout region which has already been rendered

    - by Justin Wyllie
    I have something like this:

        // render the layout
        Layout.layout = new Layout.Screen();
        MyApp.SomeRegion.show(Layout.layout);

        // render a view into it
        var mView = new CompositeView({collection: data});
        Layout.layout.SomeRegion.show(mView);

    That all works fine. The layout has 3 regions. The above has rendered a view into one of them. Now, later, I want (in response to some user interaction) to render another view into one of the other regions. So I try:

        var mView2 = new CompositeView({collection: data});
        Layout.layout.SomeOtherRegion.show(mView);

    But SomeOtherRegion no longer exists on Layout.layout. I looked at this in Firebug. In the first case Layout.layout has my three regions defined on it; in the second case, not, though they are still there in the regions object. This is the same layout instance. It looks like once a layout has been rendered you cannot render into it again? There must be a way. NB: I don't want to re-render the layout. I hope that makes sense. --Justin Wyllie


  • Background worker not working right

    - by vbNewbie
    I have created a background worker to go and run a pretty long task that includes creating more threads, which will read from a file of URLs and crawl each. I tried following it through debugging and found that the background process ends prematurely for no apparent reason. Is there something wrong in the logic of my code that is causing this? I will try to paste as much as possible to make sense.

        While Not myreader.EndOfData
            Try
                currentRow = myreader.ReadFields()
                Dim currentField As String
                For Each currentField In currentRow
                    itemCount = itemCount + 1
                    searchItem = currentField
                    generateSearchFromFile(currentField)
                    processQuerySearch()
                Next
            Catch ex As Microsoft.VisualBasic.FileIO.MalformedLineException
                Console.WriteLine(ex.Message.ToString)
            End Try
        End While

    This first bit of code is the loop that reads input from the file, and this is what the background worker does. The next bit of code is where the background worker creates threads to work all the 'landingPages'. After about 10 threads are created, the background worker exits this sub, skips the file input loop and exits the program.

        Try
            For Each landingPage As String In landingPages
                pgbar.Timer1.Stop()
                If VisitedPages.Contains(landingPage) Then
                    Continue For
                Else
                    Dim thread = New Thread(AddressOf processQuery)
                    count = count + 1
                    thread.Name = "Worm" & count
                    thread.Start(landingPage)
                    If numThread >= 10 Then
                        For Each thread In ThreadList
                            thread.Join()
                        Next
                        numThread = 0
                        Continue For
                    Else
                        numThread = numThread + 1
                        SyncLock ThreadList
                            ThreadList.Add(thread)
                        End SyncLock
                    End If
                End If
            Next


  • Does Team Foundation support cross-app workitem groups?

    - by drachenstern
    We're currently using Visual SourceSafe and BugNet and looking to migrate up and away from VSS. I've been pushing for either SVN (a: we're an ASP.NET shop; b: DVCS is not an option, no matter how much I like Hg ;-) or TFS. Well, we finally got a new dev server, so I talked the boss into installing TFS on it (30-day trial). In the meantime, we had started experimenting with FogBugz. We really like FogBugz for about 80% of what we want to do, and the other 20% is probably stuff that we don't know we want. I'm pushing for TFS because it allows for IDE-integrated (mostly) everything. He's pushing for FogBugz because he can group tasks by customer and then project and manage everything from one dashboard (which means I lose most of my IDE integration; no huge loss, I agree). Does TFS support a single dashboard that would span all our solutions (in this case each solution is a full app that we sell to a vertical-market client) and let us assign workitems to each solution-spanning group? So for instance I think we envision something like this:

        PROJECT1  - Bugtracker and workitems
        PROJECT2  - Bugtracker and workitems
        PROJECT3  - Bugtracker and workitems
        CUSTOMER1 - Deployment schedules, required features, specific notes (uses PROJECT1, PROJECT2)
        CUSTOMER2 - Deployment schedules, required features, specific notes (uses PROJECT2, PROJECT3)
        CUSTOMER3 - Deployment schedules, required features, specific notes (uses PROJECT1, PROJECT3)

    Hopefully that makes sense; naturally it's more complicated than this, but I think I've given enough detail to paint a picture. I offered the option of creating dummy projects per customer, but he doesn't like that and it doesn't really give us the single-dashboard view that we're hoping to end up with (and that FogBugz, as we've sort of implemented things, does do now). Has anyone got a good suggestion on a management app that would accomplish what both of us want?


  • Listing directories in Linux from C

    - by nunos
    I am trying to simulate the Linux command ls using the Linux API from C. Looking at the code it does make sense, but when I run it I get "stat ERROR: No such file or directory". I have checked that opendir is working OK. I think the problem is in stat, which is returning -1 even though I think it should return 0. What am I missing? Thanks for your help.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <dirent.h>
        #include <sys/stat.h>
        #include <errno.h>

        int main(int argc, char *argv[])
        {
            DIR *dirp;
            struct dirent *direntp;
            struct stat stat_buf;
            char *str;

            if (argc != 2) {
                fprintf(stderr, "Usage: %s dir_name\n", argv[0]);
                exit(1);
            }

            if ((dirp = opendir(argv[1])) == NULL) {
                perror(argv[1]);
                exit(2);
            }

            while ((direntp = readdir(dirp)) != NULL) {
                if (stat(direntp->d_name, &stat_buf) == -1) {
                    perror("stat ERROR");
                    exit(3);
                }

                if (S_ISREG(stat_buf.st_mode))
                    str = "regular";
                else if (S_ISDIR(stat_buf.st_mode))
                    str = "directory";
                else
                    str = "other";

                printf("%-25s - %s\n", direntp->d_name, str);
            }

            closedir(dirp);
            exit(0);
        }
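
    A hedged guess at the cause, not part of the original question: readdir() returns bare entry names, which are relative to the directory being listed, while stat() resolves paths relative to the current working directory, so the call typically fails unless dir_name is the current directory. A minimal sketch of the usual fix is to join the directory onto each name first (this also needs #include <limits.h> for PATH_MAX):

        char path[PATH_MAX];

        while ((direntp = readdir(dirp)) != NULL) {
            /* d_name alone is only meaningful relative to argv[1] */
            if (snprintf(path, sizeof(path), "%s/%s", argv[1], direntp->d_name)
                    >= (int) sizeof(path)) {
                continue;   /* name too long for the buffer; skip it */
            }

            if (stat(path, &stat_buf) == -1) {
                perror(path);
                continue;   /* report the entry and keep going instead of exiting */
            }

            /* ... the S_ISREG / S_ISDIR checks stay the same ... */
        }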


  • When optimizing database queries, what exactly is the relationship between number of queries and size of queries?

    - by williamjones
    To optimize application speed, everyone always advises to minimize the number of queries an application makes to the database, consolidating them into fewer queries that retrieve more wherever possible. However, this also always comes with the caution that data transferred is still data transferred, and just because you are making fewer queries doesn't make the data transferred free. I'm in a situation where I can over-include on the query in order to cut down the number of queries, and simply remove the unwanted data in the application code. Is there any type of a rule of thumb on how much of a cost there is to each query, to know when to optimize number of queries versus size of queries? I've tried to Google for objective performance analysis data, but surprisingly haven't been able to find anything like that. Clearly this relationship will change for factors such as when the database grows in size, making this somewhat individualized, but surely this is not so individualized that a broad sense of the landscape can't be drawn out? I'm looking for general answers, but for what it's worth, I'm running an application on Heroku.com, which means Ruby on Rails with a Postgres database.


  • Why is MySQL with InnoDB doing a table scan when key exists and choosing to examine 70 times more rows?

    - by andysk
    Hello, I'm troubleshooting a query performance problem. Here's an expected query plan from explain:

        mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:16';
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        | id | select_type | table  | type  | possible_keys | key   | key_len | ref  | rows    | Extra       |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        |  1 | SIMPLE      | table1 | range | tdcol         | tdcol | 8       | NULL | 5437848 | Using where |
        +----+-------------+--------+-------+---------------+-------+---------+------+---------+-------------+
        1 row in set (0.00 sec)

    That makes sense, since the index named tdcol (KEY tdcol (tdcol)) is used, and about 5M rows should be selected from this query. However, if I query for just one more minute of data, we get this query plan:

        mysql> explain select * from table1 where tdcol between '2010-04-13 00:00' and '2010-04-14 03:17';
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        | id | select_type | table  | type | possible_keys | key  | key_len | ref  | rows      | Extra       |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        |  1 | SIMPLE      | table1 | ALL  | tdcol         | NULL | NULL    | NULL | 381601300 | Using where |
        +----+-------------+--------+------+---------------+------+---------+------+-----------+-------------+
        1 row in set (0.00 sec)

    The optimizer believes that the scan will be better, but it's over 70x more rows to examine, so I have a hard time believing that the table scan is better. Also, the 'USE KEY tdcol' syntax does not change the query plan. Thanks in advance for any help, and I'm more than happy to provide more info / answer questions.


  • Password Protect for fun

    - by WaZ
    Hi there, I am working on a blog for my friend. I want to gift him the blog on his birthday. Just for some fun, I want to restrict access to the blog. E.g. the website www.myfriend.com opens with a splash screen. The screen has his picture and a question regarding him. These questions can be things like: "If you know me... what is my nickname?", "What's my fav. sport?", etc. This restriction should not be based on the default user management offered by WordPress, and should involve a simple answer to a question which is randomly picked from a list of questions. Once the visitor gives a correct answer, the page redirects to the blog. Please note: even if the user types www.myfriend.com/blog they should be able to see it. This restriction is not a restriction in the true sense, but just involves some user interaction. It's just for fun but adds a bit of spice. Much appreciated. Thanks.


  • How can I remove an "ALMOST" Duplicate using LINQ? ( OR SQL? )

    - by Atomiton
    This should be an easy one for the LINQ gurus out there. I'm doing a complex query using UNIONs and CONTAINSTABLE in my database to return ranked results to my application. I'm getting duplicates in my returned data. This is expected. I'm using CONTAINSTABLE and CONTAINS to get all the results I need. CONTAINSTABLE is ranked by SQL and CONTAINS (which is run only on the Keywords field) is hard-code-ranked by me. (Sorry if that doesn't make sense.) Anyway, because the tuples aren't identical (their rank is different), a duplicate is returned. I figure the best way to deal with this is to use LINQ. I know I'll be using the Distinct() extension method, but do I have to implement the IEqualityComparer interface? I'm a little fuzzy on how to do this. For argument's sake, say my result set is structured like this class:

        class Content
        {
            ContentID int    // KEY
            Rank int
            Description String
        }

    If I have a List<Content>, how would I write the Distinct() method to exclude Rank? Ideally I'd like to keep the Content's highest Rank. So, if one Content's Rank is 112 and the other is 76, I'd like to keep the 112 rank. Hopefully I've given enough information.


  • Get current PHP executable from within script?

    - by benizi
    I want to run a PHP cli program from within PHP cli. On some machines where this will run, both php4 and php5 are installed. If I run the outer program as php5 outer.php, I want the inner script to be run with the same PHP version. In Perl, I would use $^X to get the perl executable. It appears there's no such variable in PHP? Right now, I'm using $_SERVER['_'], because bash (and zsh) set the environment variable $_ to the last-run program. But I'd rather not rely on a shell-specific idiom.

    UPDATE: Version differences are but one problem. If PHP isn't in PATH, for example, or isn't the first version found in PATH, the suggestions to find the version information won't help. Additionally, csh and variants appear to not set the $_ environment variable for their processes, so the workaround isn't applicable there.

    UPDATE 2: I was using $_SERVER['_'], until I discovered that it doesn't do the right thing under xargs (which makes sense... zsh sets it to the command it ran, which is xargs, not php5, and xargs doesn't change the variable). Falling back to using:

        $version = explode('.', phpversion());
        $phpcli = "php{$version[0]}";


  • What framework would allow for the largest coverage of freelance developers in the media/digital mar

    - by optician
    This question is not about which is the best; it is about which makes the most business sense to use as a company's platform of choice for ongoing freelance development. I'm currently trying to decide what framework to move my company to for web application work. The options are:

        ASP.NET MVC
        Django
        CakePHP / Symfony etc.
        Struts
        Perl on Rails

    Please feel free to add more to the discussion. I currently work with ASP.NET MVC in my spare time and find it incredibly enjoyable to work with. It is my first experience with an MVC framework for the web, so I can't talk about the others. The reason for not pushing this at the company is that I feel there are not many developers in the media/marketing world who would work with this, so it may be hard to extend the team, or at least cost more. I would like to move into learning and pushing Django, partly to learn Python, partly to feel a bit cooler (all my geeky friends use Java/Python/C++). Microsoft is the dark side to most of the companies I work with (marketing/media focused). But again, I'm worried about developers in this sector. PHP seems like the natural choice, but I'm scared by the sheer number of possible frameworks, and also that the quality of developer may be lower. I know there are great PHP developers out there, but how many of them know multiple frameworks? Are they similar enough that anyone decent at PHP can pick them up? I just put Struts in the list as an option, but personally I live with a Java developer, and considering my experience with C#, I'm just not that interested in learning Java (selfish personal geeky reasons). The final option was a joke: http://www.bbc.co.uk/blogs/radiolabs/2007/11/perl_on_rails.shtml


  • How do I get LongVarchar out param from SPROC in ADO.NET 2.0 with SQLAnywhere 10?

    - by todthomson
    Hi All, I have a sproc 'up_selfassessform_view' which has the following parameters:

        in  ai_eqidentkey    SYSKEY
        in  ai_acidentkey    SYSKEY
        out as_eqcomments    TEXT_STRING
        out as_acexplanation TEXT_STRING

    where SYSKEY and TEXT_STRING are domain objects - SYSKEY is 'integer' and TEXT_STRING is 'long varchar'. I can call the sproc fine from iSQL using the following code:

        create variable @eqcomments TEXT_STRING;
        create variable @acexamples TEXT_STRING;
        call up_selfassessform_view (75000146, 3, @eqcomments, @acexamples);
        select @eqcomments, @acexamples;

    which returns the correct values from the DB (so I know the SPROC is good). I have configured the out param in ADO.NET like so (which has worked up until now for 'integer', 'timestamp', 'varchar(255)', etc):

        SAParameter as_acexplanation = cmd.CreateParameter();
        as_acexplanation.Direction = ParameterDirection.Output;
        as_acexplanation.ParameterName = "as_acexplanation";
        as_acexplanation.SADbType = SADbType.LongVarchar;
        cmd.Parameters.Add(as_acexplanation);

    When I run the following code:

        SADataReader reader = cmd.ExecuteReader();

    I receive the following error: "Parameter[2]: the Size property has an invalid size of 0." Which (I suppose) makes sense... But the thing is, I don't know the size of the field (it's just "long varchar"; it doesn't have a predetermined length, unlike varchar(XXX)). Anyhow, just for fun, I add the following:

        as_acexplanation.Size = 1000;

    and the above error goes away, but now when I call:

        as_acexplanation.Value

    I get back a string of length = 1000 which is just '\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0...' (\0 repeated 1000 times). So I'm really, really stuck... Any help on this one would be much appreciated. Cheers! ;) Tod T.


  • Making two Windows using CreateWindowsEx() - C

    - by Jamie Keeling
    Hello, I have a Windows window that has a simple menu and performs a simple operation. I want to be able to create another window, with all the functionality of a menu bar, message pump etc., as a separate thread, so I can then share the results of the operation with the second window. I.e.:

        1) Form A opens, Form B opens as a separate thread
        2) Form A performs operation
        3) Form A passes results via memory to Form B
        4) Form B displays results

    I'm confused as to how to go about it. The main app runs fine, but I'm not sure how to add a second window if the first one already exists. I think that using CreateWindow will allow me to make another window, but again I'm not sure how to access the message pump so I can respond to certain events like WM_CREATE on the second window. I hope it makes sense. Thanks!

    Edit: I've attempted to make a second window, and although this does compile, no windows show at all on build.

        //////////////////////
        // WINDOWS FUNCTION //
        //////////////////////
        LRESULT CALLBACK WindowFunc(HWND hMainWindow, UINT message, WPARAM wParam, LPARAM lParam)
        {
            // Fields
            WCHAR buffer[256];
            struct DiceData storage;
            HWND hwnd;

            // Act on current message
            switch (message)
            {
            case WM_CREATE:
                AddMenus(hMainWindow);
                hwnd = CreateWindowEx(
                    0,
                    "ChildWClass",
                    (LPCTSTR) NULL,
                    WS_CHILD | WS_BORDER | WS_VISIBLE,
                    0, 0, 0, 0,
                    hMainWindow,
                    NULL,
                    NULL,
                    NULL);
                ShowWindow(hwnd, SW_SHOW);
                break;

    Any suggestions as to why this happens?
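
    A hedged suggestion, not part of the original post: two things in the snippet above commonly produce exactly this symptom. CreateWindowEx returns NULL if the "ChildWClass" window class was never registered, and even a successfully created child is invisible when its width and height are 0. A minimal sketch with illustrative names (ChildWindowFunc is a hypothetical window procedure for the child class):

        /* register the child class once, e.g. in WinMain before the main window is created */
        WNDCLASSEX wc = {0};
        wc.cbSize        = sizeof(wc);
        wc.lpfnWndProc   = ChildWindowFunc;              /* hypothetical child window procedure */
        wc.hInstance     = GetModuleHandle(NULL);
        wc.hCursor       = LoadCursor(NULL, IDC_ARROW);
        wc.hbrBackground = (HBRUSH)(COLOR_WINDOW + 1);
        wc.lpszClassName = TEXT("ChildWClass");
        RegisterClassEx(&wc);                            /* check the return value in real code */

        /* later, inside WM_CREATE of the main window */
        hwnd = CreateWindowEx(0, TEXT("ChildWClass"), NULL,
                              WS_CHILD | WS_BORDER | WS_VISIBLE,
                              10, 10, 200, 100,          /* non-zero position and size */
                              hMainWindow, NULL,
                              GetModuleHandle(NULL), NULL);
        if (hwnd == NULL) {
            /* GetLastError() explains the failure, e.g. ERROR_CANNOT_FIND_WND_CLASS */
        }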


  • Why aren't google api clients built on top of Apache's Abdera project ?

    - by lisak
    Hey, could anybody please explain this to me? As far as I can see, the developers of Java's Google API client library are reinventing the wheel. It's like writing a new JDK for a Java project. I'm aware of the fact that the Google Data protocol is a little specific with regard to Atom publishing, but if one needs to use some of the fancy extensions and features that the Apache Abdera project offers for this protocol, it is better not to use the Google API client library and to implement the client from scratch with Abdera... And I'm sure that in a lot of cases its features, such as Abdera's JCR adapter, would become very handy for Google Docs, Google Translator Toolkit and others. Now it's great that there is a Google API client library to be used for Google Docs, but what am I going to do with the documents? I believe that in more than half the cases there is also a repository or database on the other side. And in that case, Abdera is needed, not the simple Google API clients that are only marshalling/unmarshalling the feeds... In fact, there is something to persist in all of the Google APIs. It would make sense if Google decided to invest the effort into Abdera enhancement... This doesn't... Also, to make the question more specific: how are you developing Google API clients that need entry persistence (JCR for instance)? What would be the best way to integrate a Google API client library with Apache Abdera?


  • Why is execution-time method resolution faster than compile-time resolution?

    - by Felix
    At school, we learned about virtual functions in C++, and how they are resolved (or found, or matched, I don't know what the terminology is -- we're not studying in English) at execution time instead of compile time. The teacher also told us that compile-time resolution is much faster than execution-time resolution (and it would make sense for it to be so). However, a quick experiment would suggest otherwise. I've built this small program:

        #include <iostream>
        #include <limits.h>

        using namespace std;

        class A {
        public:
            void f() {
                // do nothing
            }
        };

        class B: public A {
        public:
            void f() {
                // do nothing
            }
        };

        int main() {
            unsigned int i;
            A *a = new B;
            for (i = 0; i < UINT_MAX; i++)
                a->f();
            return 0;
        }

    where I made A::f() once normal, once virtual. Here are my results:

        [felix@the-machine C]$ time ./normal
        real    0m25.834s
        user    0m25.742s
        sys     0m0.000s
        [felix@the-machine C]$ time ./virtual
        real    0m24.630s
        user    0m24.472s
        sys     0m0.003s
        [felix@the-machine C]$ time ./normal
        real    0m25.860s
        user    0m25.735s
        sys     0m0.007s
        [felix@the-machine C]$ time ./virtual
        real    0m24.514s
        user    0m24.475s
        sys     0m0.000s
        [felix@the-machine C]$ time ./normal
        real    0m26.022s
        user    0m25.795s
        sys     0m0.013s
        [felix@the-machine C]$ time ./virtual
        real    0m24.503s
        user    0m24.468s
        sys     0m0.000s

    There seems to be a steady ~1 second difference in favor of the virtual version. Why is this? Relevant or not: dual-core Pentium @ 2.80GHz, no extra applications running between the two tests. Arch Linux with gcc 4.5.0. Compiling normally, like:

        $ g++ test.cpp -o normal

    Also, -Wall doesn't spit out any warnings, either.


  • Business Objects - Containers or functional?

    - by Walter
    Where I work, we've gone back and forth on this subject a number of times and are looking for a sanity check. Here's the question: should business objects be data containers (more like DTOs), or should they also contain logic that can perform some functionality on that object? Example: take a customer object. It probably contains some common properties (Name, Id, etc.); should that customer object also include functions (Save, Calc, etc.)? One line of reasoning says separate the object from the functionality (single responsibility principle) and put the functionality in a business logic layer or object. The other line of reasoning says no: if I have a customer object I just want to call Customer.Save and be done with it. Why do I need to know about how to save a customer if I'm consuming the object? Our last two projects have had the objects separated from the functionality, but the debate has been raised again on a new project. Which makes more sense?

    EDIT: These results are very similar to our debates. One vote to one side or another completely changes the direction. Does anyone else want to add their 2 cents?

    EDIT: Even though the answer sampling is small, it appears that the majority believe that functionality in a business object is acceptable as long as it is simple, but persistence is best placed in a separate class/layer. We'll give this a try. Thanks for everyone's input...


  • Understanding CSRF - Simple Question

    - by byronh
    I know this might make me seem like an idiot: I've read everything there is to read about CSRF and I still don't understand how using a 'challenge token' would add any sort of prevention. Please help me clarify the basic concept; none of the articles and posts here on SO that I've read seemed to really explicitly state what value you're comparing with what. From OWASP:

    "In general, developers need only generate this token once for the current session. After initial generation of this token, the value is stored in the session and is utilized for each subsequent request until the session expires."

    If I understand the process correctly, this is what happens: I log in at http://example.com and a session/cookie is created containing this random token. Then, every form includes a hidden input also containing this random value from the session, which is compared with the session/cookie upon form submission. But what does that accomplish? Aren't you just taking session data, putting it in the page, and then comparing it with the exact same session data? Seems like circular reasoning. These articles keep talking about following the "same-origin policy", but that makes no sense, because all CSRF attacks ARE of the same origin as the user, just tricking the user into doing actions he/she didn't intend. Is there any alternative other than appending the token to every single URL as a query string? It seems very ugly and impractical, and makes bookmarking harder for the user.


  • Alternatives to using web.config to store settings (for complex solutions)

    - by Brian MacKay
    In our web applications, we separate our Data Access Layers out into their own projects. This creates some problems related to settings. Because the DAL will eventually need to be consumed from perhaps more than one application, web.config does not seem like a good place to keep the connection strings and some of the other DAL-related settings. To solve this, on some of our recent projects we introduced a third project just for settings. We put the settings in a system of .settings files... With a simple wrapper, the ability to have different settings for various environments (Dev, QA, Staging, Production, etc.) was easy to achieve. The only problem there is that the settings project (including the .Settings class) compiles into an assembly, so you can't change it without doing a build/deployment, and some of our customers want to be able to configure their projects without Visual Studio. So, is there a best practice for this? I have the sense that I'm reinventing the wheel. Some solutions, such as storing settings in a fixed directory on the server in, say, our own XML format, occurred to us. But again, I would rather avoid having to re-create encryption for sensitive values and so on. And I would rather keep the solution self-contained if possible.

    EDIT: The original question did not contain the really penetrating reason that we can't (I think) use web.config... That puts a few (very good) answers out of context, my bad.


  • WCF Callback Contract InvalidOperationException: Collection has been modified

    - by mrlane
    We are using a WCF service with a callback contract. Communication is asynchronous. The service contract is defined as:

        [ServiceContract(Namespace = "Silverlight",
                         CallbackContract = typeof(ISessionClient),
                         SessionMode = SessionMode.Allowed)]
        public interface ISessionService

    with a method:

        [OperationContract(IsOneWay = true)]
        void Send(Message message);

    The callback contract is defined as:

        [ServiceContract]
        public interface ISessionClient

    with methods:

        [OperationContract(IsOneWay = true, AsyncPattern = true)]
        IAsyncResult BeginSend(Message message, AsyncCallback callback, object state);

        void EndSend(IAsyncResult result);

    The implementation of BeginSend and EndSend in the callback channel is as follows:

        public void Send(ActionMessage actionMessage)
        {
            Message message = Message.CreateMessage(_messageVersion, CommsSettings.SOAPActionReceive,
                                                    actionMessage, _messageSerializer);
            lock (LO_backChannel)
            {
                try
                {
                    _backChannel.BeginSend(message, OnSendComplete, null);
                }
                catch (Exception ex)
                {
                    _hasFaulted = true;
                }
            }
        }

        private void OnSendComplete(IAsyncResult asyncResult)
        {
            lock (LO_backChannel)
            {
                try
                {
                    _backChannel.EndSend(asyncResult);
                }
                catch (Exception ex)
                {
                    _hasFaulted = true;
                }
            }
        }

    We are getting an InvalidOperationException: "Collection has been modified" on _backChannel.EndSend(asyncResult), seemingly at random, and we are really out of ideas about what is causing this. I understand what the exception means, and that concurrency issues are a common cause of such exceptions (hence the locks), but it really doesn't make any sense to me in this situation. The clients of our service are Silverlight 3.0 clients using PollingDuplexHttpBinding, which is the only binding available for Silverlight. We have been running fine for ages, but recently have been doing a lot of data binding, and this is when the issues started. Any help with this is appreciated, as I am personally stumped at this time.

