Search Results

Search found 8638 results on 346 pages for 'vs'.

  • assignment vs std::swap and merging and keeping duplicates in separate object

    - by rubenvb
    Say I have two std::set<std::string>s. The first one, old_options, needs to be merged with additional options contained in new_options. I can't just use std::merge (well, I do, but not only that) because I also check for duplicates and warn the user accordingly. To this effect, I have:

        void merge_options( set<string> &old_options, const set<string> &new_options )
        {
            // find duplicates and create merged_options, a string set
            // containing the merged options
            // handle duplicates the way I want to
            // ...
            old_options = merged_options;
        }

    Is it better to use std::swap( merged_options, old_options ); or the assignment I have? Is there a better way to filter duplicates and return the merged set than consecutive calls to std::set_intersection and std::set_union to detect dupes and merge the sets? I know it's slower than one traversal doing both at once, but these sets are small (performance is not critical) and I trust the Standard more than I trust myself.
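
    A minimal sketch of the one-traversal alternative mentioned above, assuming "warning the user" just means printing the duplicate names; it merges in place, which also sidesteps the assignment-vs-swap question:

        #include <iostream>
        #include <set>
        #include <string>

        // std::set::insert reports through its bool whether the element was
        // already present, so one pass both merges and collects duplicates.
        void merge_options( std::set<std::string> &old_options,
                            const std::set<std::string> &new_options )
        {
            for ( const std::string &opt : new_options )
            {
                if ( !old_options.insert( opt ).second )
                    std::cout << "warning: duplicate option " << opt << '\n';
            }
        }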

  • What are the pros and cons of using manual list iteration vs recursion through fail

    - by magus
    I come up against this all the time, and I'm never sure which way to attack it. Below are two methods for processing some season facts. What I'm trying to work out is whether to use method 1 or method 2, and what the pros and cons of each are, especially with large numbers of facts.

    methodone seems wasteful: since the facts are already available, why bother building a list of them (especially a large one)? This must have memory implications too if the list is large enough. And it doesn't take advantage of Prolog's natural backtracking.

    methodtwo uses backtracking to do the recursion for me, and I would guess it is much more memory efficient, but is it good programming practice generally? It's arguably uglier to follow, and might there be other side effects? One problem I can see is that each time fail is called we lose the ability to pass anything back to the calling predicate, e.g. if it were methodtwo(SeasonResults), since we continually fail the predicate on purpose. So methodtwo would need to assert facts to store state.

    Presumably(?) method 2 would also be faster, as it has no (large) list processing to do? I could imagine that if I already had a list, then methodone would be the way to go... or is that always true? Might it make sense in some conditions to assert the list to facts using methodone, then process them using method two? Complete madness? But then again, I read that asserting facts is a very 'expensive' business, so list handling might be the way to go, even for large lists. Any thoughts? Or is it sometimes better to use one and not the other, depending on the situation? E.g. for memory optimisation, use method 2, including asserting facts, and for speed, use method 1?

        season(spring).
        season(summer).
        season(autumn).
        season(winter).

        % Season handling
        showseason(Season) :-
            atom_length(Season, LenSeason),
            write('Season Length is '),
            write(LenSeason),
            nl.

        % -------------------------------------------------------------
        % Method 1 - findall facts, then iterate through the list
        % and process each element
        % -------------------------------------------------------------

        % Iterate manually through a season list
        lenseason([]).
        lenseason([Season|MoreSeasons]) :-
            showseason(Season),
            lenseason(MoreSeasons).

        % findall to build a list, then iterate until all done
        methodone :-
            findall(Season, season(Season), AllSeasons),
            lenseason(AllSeasons),
            write('Done').

        % -------------------------------------------------------------
        % Method 2 - Use fail to force backtracking
        % -------------------------------------------------------------
        methodtwo :-
            % Get one season and show it
            season(Season),
            showseason(Season),
            % Force Prolog to backtrack to find another season
            fail.

        % No more seasons, we have finished
        methodtwo :-
            write('Done').

  • Android Calendar API vs Calendar Provider API

    - by John Roberts
    I'm a little bit confused about the difference between the two. An example of the Calendar API is supposedly located here: http://samples.google-api-java-client.googlecode.com/hg/calendar-android-sample/instructions.html, but the author himself suggests using the Calendar Provider API, details about which are here: http://developer.android.com/guide/topics/providers/calendar-provider.html. Can someone explain to me the difference between the two, and which would be better for me to use for a simple calendar app?

  • C++ arrays as parameters, subscript vs. pointer

    - by awshepard
    Alright, I'm guessing this is an easy question, so I'll take the knocks, but I'm not finding what I need on Google or SO. I'd like to create an array in one place and populate it inside a different function. I define a function:

        void someFunction(double results[])
        {
            for (int i = 0; i < 100; ++i)
            {
                for (int n = 0; n < 16; ++n) // note this iteration limit
                {
                    results[n] += i * n;
                }
            }
        }

    That's an approximation of what my code is doing, but regardless, it shouldn't be running into any overflow or out-of-bounds issues. I generate an array:

        double result[16];
        for (int i = 0; i < 16; i++)
        {
            result[i] = -1;
        }

    Then I want to pass it to someFunction:

        someFunction(result);

    When I set breakpoints and step through the code, upon entering someFunction, results is set to the same address as result, and the value there is -1.000000 as expected. However, when I start iterating through the loop, results[n] doesn't seem to resolve to *(results+n) or *(results+n*sizeof(double)), it just seems to resolve to *(results). What I end up with is that instead of populating my result array, I just get one value. What am I doing wrong?
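
    A minimal sketch of what the parameter actually is, with illustrative names; an array argument decays to a pointer, and indexing scales by the element size automatically:

        #include <cstddef>
        #include <iostream>

        // double results[] as a parameter is exactly double *results.
        void fill(double results[], std::size_t n)
        {
            for (std::size_t i = 0; i < n; ++i)
            {
                results[i] = static_cast<double>(i); // same as *(results + i)
                // The compiler scales the offset by sizeof(double) itself,
                // so *(results + i * sizeof(double)) would over-shoot.
            }
        }

        int main()
        {
            double result[4] = { -1, -1, -1, -1 };
            fill(result, 4);
            for (double d : result)
                std::cout << d << ' '; // prints 0 1 2 3
        }

    One debugger caveat worth knowing: once the array has decayed to a double*, many debuggers display only the first element for that pointer, which can look like "just one value" even while the whole array is being written.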

  • Improving Javascript Load Times - Concatenation vs Many + Cache

    - by El Yobo
    I'm wondering which of the following will give better performance for a page that loads a large amount of javascript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed material, but am left wondering about a particular detail.

    A key thing for me here is that the site I'm working on is not on the public net; it's a business-to-business platform where almost all users are repeat visitors (and therefore have caches of the data, which is something YSlow assumes will not be the case for a large number of visitors).

    First up, the standard approach recommended by tools such as YSlow is to concatenate the javascript, compress it, and serve it up in a single file loaded at the end of the page. This approach sounds reasonably effective, but I think a key part of the reasoning here is to improve performance for users without cached data.

    The system I currently have is something like this:

    * All javascript files are compressed and loaded at the bottom of the page
    * All javascript files have far-future cache expiration dates, so will remain (for most users) in the cache for a long time
    * Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required

    Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; no HTTP request is sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches).

    In addition to this, not loading the unneeded JS means that the browser doesn't have to interpret or execute all this additional code; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine. Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit). I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached.

    So, what do people think is the better approach? In a similar vein, what do you think about taking the same approach with CSS - is monolithic better?

  • Accessing property of object vs variable in javascript

    - by Samuel
    Why is it that when I try to access a variable that doesn't exist, javascript throws an exception, but when I try to access a property that doesn't exist on an object, javascript returns undefined?

    For example, this case logs undefined:

        function Foo(){
            console.log(this.bar);
        }
        Foo();

    But in this other example, javascript throws an exception:

        function Foo(){
            console.log(bar);
        }
        Foo();
        // ReferenceError: bar is not defined

  • XML fragment with multiple roots: suppressing VS error

    - by Laurent
    In Visual Studio, I have an XML file (with a .xml extension) which contains an XML fragment that I use in my program:

        <tag1>
            data ...
        </tag1>
        <tag2>
            data ...
        </tag2>

    Visual Studio shows me an error in the error list: "XML document cannot contain multiple root level elements". But this is not a complete document, just a fragment that will be reused, and I want to keep my 2 roots in my fragment. How can I get rid of the error message? Thanks

  • Sun webstack vs Installing PHP, MySQL, Apache individually

    - by Vincent
    Is it possible to install PHP, MySQL, and Apache individually on Solaris instead of installing them through a webstack? What are the advantages and disadvantages? I seem to frequently get a cURL error on Solaris when dealing with HTTPS sites (error:81072080:lib(129):func(114):reason(128)). I have no clue why that error is occurring and thought upgrading to the latest PHP, MySQL, and Apache versions might solve it. At this point I am not even sure if it's a Solaris issue. Any advice? Thanks

  • Appropriate uses of Monad `fail` vs. MonadPlus `mzero`

    - by jberryman
    This is a question that has come up several times for me in designing code, especially libraries. There seems to be some interest in it, so I thought it might make a good community wiki.

    The fail method in Monad is considered by some to be a wart: a somewhat arbitrary addition to the class that does not come from the original category theory. But of course, in the current state of things, many Monad types have logical and useful fail instances.

    The MonadPlus class is a subclass of Monad that provides an mzero method which logically encapsulates the idea of failure in a monad. So a library designer who wants to write some monadic code that does some sort of failure handling can choose to make his code use the fail method in Monad, or restrict his code to the MonadPlus class just so that he can feel good about using mzero, even though he doesn't care about the monoidal combining mplus operation at all. Some discussions on this subject are in this wiki page about proposals to reform the MonadPlus class.

    So I guess I have one specific question: what monad instances, if any, have a natural fail method, but cannot be instances of MonadPlus because they have no logical implementation for mplus? But I'm mostly interested in a discussion about this subject. Thanks!

    EDIT: One final thought occurred to me. I recently learned (even though it's right there in the docs for fail) that monadic "do" notation is desugared in such a way that pattern match failures, as in (x:xs) <- return [], call the monad's fail. It seems the language designers must have been strongly influenced by the prospect of some automatic failure handling built into Haskell's syntax when they included fail in Monad.

  • Generics vs Object performance

    - by Risho
    I'm doing practice problems from MCTS Exam 70-536, Microsoft .NET Framework Application Development Foundation. One of the problems is to create two classes that both perform the same thing, one generic and one using Object, in which a loop uses the class and is iterated over thousands of times, and then to time the performance of both.

    There was another post, C# generics question, that asks the same thing, but no one replied. Basically, if in my code I run the generic class first, it takes longer to process. If I run the object class first, then the object class takes longer to process. The whole idea was to prove that generics perform faster. I used the original poster's code to save me some time. I didn't see anything particularly wrong with the code and was puzzled by the outcome. Can someone explain the unusual results?

    Thanks, Risho

    Here is the code:

        class Program
        {
            class Object_Sample
            {
                public Object_Sample()
                {
                    Console.WriteLine("Object_Sample Class");
                }

                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }

                public void display(Object a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            class Generics_Samle<T>
            {
                public Generics_Samle()
                {
                    Console.WriteLine("Generics_Sample Class");
                }

                public long getTicks()
                {
                    return DateTime.Now.Ticks;
                }

                public void display(T a)
                {
                    Console.WriteLine("{0}", a);
                }
            }

            static void Main(string[] args)
            {
                long ticks_initial, ticks_final, diff_generics, diff_object;
                Object_Sample OS = new Object_Sample();
                Generics_Samle<int> GS = new Generics_Samle<int>();

                // Generic Sample
                ticks_initial = GS.getTicks();
                for (int i = 0; i < 50000; i++)
                {
                    GS.display(i);
                }
                ticks_final = GS.getTicks();
                diff_generics = ticks_final - ticks_initial;

                // Object Sample
                ticks_initial = OS.getTicks();
                for (int j = 0; j < 50000; j++)
                {
                    OS.display(j);
                }
                ticks_final = OS.getTicks();
                diff_object = ticks_final - ticks_initial;

                Console.WriteLine("\nPerformance of Generics {0}", diff_generics);
                Console.WriteLine("Performance of Object {0}", diff_object);
                Console.ReadKey();
            }
        }

  • SSL_CTX_set_cert_verify_callback vs. SSL_CTX_set_verify

    - by BreakPoint
    Hello,

    Can anyone tell me what the difference is between SSL_CTX_set_cert_verify_callback and SSL_CTX_set_verify?

    From the OpenSSL docs:

        SSL_CTX_set_cert_verify_callback() sets the verification callback function for ctx. SSL objects that are created from ctx inherit the setting valid at the time when SSL_new(3) is called.

    and:

        SSL_CTX_set_verify() sets the verification flags for ctx to be mode and specifies the verify_callback function to be used. If no callback function shall be specified, the NULL pointer can be used for verify_callback.

    So I'm trying to understand which callback to pass to each one (from the client side). Thanks, experts.
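
    A minimal sketch, based on the documented signatures, of how the two hooks differ; the callback bodies here are placeholders. SSL_CTX_set_verify's callback runs once per certificate in the chain after OpenSSL has checked that certificate, while SSL_CTX_set_cert_verify_callback replaces the whole chain-verification procedure:

        #include <openssl/ssl.h>
        #include <openssl/x509_vfy.h>

        // Called once per chain element; preverify_ok carries OpenSSL's own
        // verdict for that certificate, and returning 1/0 accepts/rejects it.
        static int per_cert_cb(int preverify_ok, X509_STORE_CTX *store)
        {
            int depth = X509_STORE_CTX_get_error_depth(store);
            (void)depth; // e.g. log it, or tolerate one specific error here
            return preverify_ok; // keep OpenSSL's decision
        }

        // Replaces the entire chain verification; the callback is now
        // responsible for checking the whole chain itself.
        static int whole_chain_cb(X509_STORE_CTX *store, void *arg)
        {
            (void)arg;
            return X509_verify_cert(store); // delegating reproduces stock behaviour
        }

        int main()
        {
            SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());

            // Typical client-side use: require a valid server certificate
            // and observe each chain element as it is checked.
            SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, per_cert_cb);

            // Only needed when the built-in verification must be replaced wholesale.
            SSL_CTX_set_cert_verify_callback(ctx, whole_chain_cb, nullptr);

            SSL_CTX_free(ctx);
        }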

  • Behaviour difference Dim oDialog1 as Dialog1 = New Dialog1 VS Dim oDialog1 as Dialog1 = Dialog1

    - by user472722
    VB.Net 2005: I have a now-closed Dialog1. To get information from the Dialog1 from within a module, I need to use

        Dim oDialog1 As Dialog1 = New Dialog1

    VB.Net 2008: I have a still-open Dialog1. To get information from the Dialog1 from within a module, I need to use

        Dim oDialog1 As Dialog1 = Dialog1

    VB.Net 2005 does not compile using Dim oDialog1 As Dialog1 = Dialog1 and insists on New. What is going on, and why do I need the different initialisation syntax?

  • FindBugs and CheckForNull on classes vs. interfaces

    - by ndn
    Is there any way to let FindBugs check and warn me if a CheckForNull annotation is present on the implementation of a method in a class, but not on the declaration of the method in the interface?

        import javax.annotation.CheckForNull;

        interface Foo {
            public String getBar();
        }

        class FooImpl implements Foo {
            @CheckForNull
            @Override
            public String getBar() {
                return null;
            }
        }

        public class FindBugsDemo {
            public static void main(String[] args) {
                Foo foo = new FooImpl();
                System.out.println(foo.getBar().length());
            }
        }

    I just discovered a bug in my application due to a missing null check that was not spotted by FindBugs, because CheckForNull was only present on FooImpl but not on Foo, and I don't want to have to spot all other locations of this problem manually.

  • Stateless singleton VS Static methods

    - by Sebastien Lorber
    Hey,

    I can't find a good answer to this simple question about helper/utils classes: why would I create a (stateless) singleton rather than use static methods? Why would an object instance be needed when the object has no state? Sometimes I really don't know which to use...
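
    One concrete difference, sketched below in C++ (the question is language-agnostic, and all names here are illustrative): a stateless singleton is still an object, so it can implement an interface and be swapped for another implementation, which static methods cannot do.

        #include <iostream>
        #include <string>

        // Static helpers: no state, but nothing to pass around or substitute.
        struct PathUtil {
            static std::string join(const std::string &a, const std::string &b) {
                return a + "/" + b;
            }
        };

        // A stateless singleton can implement an interface...
        struct PathJoiner {
            virtual std::string join(const std::string &, const std::string &) const = 0;
            virtual ~PathJoiner() = default;
        };

        struct SlashJoiner : PathJoiner {
            static const SlashJoiner &instance() {
                static SlashJoiner inst; // stateless, so sharing one instance is safe
                return inst;
            }
            std::string join(const std::string &a, const std::string &b) const override {
                return a + "/" + b;
            }
        };

        // ...so callers can depend on the abstraction, and tests can substitute a fake.
        void printJoined(const PathJoiner &j) {
            std::cout << j.join("usr", "local") << "\n";
        }

        int main() {
            std::cout << PathUtil::join("usr", "local") << "\n"; // fixed at compile time
            printJoined(SlashJoiner::instance());                // swappable at run time
        }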

  • Interview Question: .Any() vs if (.Length > 0) for testing if a collection has elements

    - by Chris
    In a recent interview I was asked what the difference between .Any() and .Length > 0 is, and why I would use either when testing whether a collection has elements. This threw me a little, as it seems a little obvious, but I feel I may be missing something. I suggested that you use .Length when you simply need to know that a collection has elements, and .Any() when you wish to filter the results. Presumably .Any() takes a performance hit too, as it has to do a loop / query internally.

  • <asp:Table> Vs html <table>

    - by keith
    What are the pros and cons of using the ASP.Net <asp:Table> control compared to the old reliable HTML <table>? I know that the asp:Table will end up on the returned page as an HTML table, and from looking into it so far, people are saying it's easier to work with the asp:Table in server-side code, but I'd love to hear what the stackoverflow community has to say about the matter.

  • selenium vs phpunit/lime?

    - by fayer
    I have seen the power of Selenium, and that it can give you the tests in different languages. So the question is: why should I use PHPUnit or lime (for symfony) when a solution like Selenium is available? Isn't it time-consuming to write all the tests by hand when you can just use Selenium? Thanks.

  • Java BufferedReader behavior in CSV vs TXT file

    - by Gabriel
    I'm trying to read a CSV file called csv_file.csv. The problem is that when I read lines with BufferedReader.readLine(), it skips the first line, the one with the months. But when I rename the file to csv_file.txt, it reads it all right and does not skip the first line. Is there an undocumented "feature" of BufferedReader that I'm not aware of?

    Example of the file:

        Months, SEP2010, OCT2010, NOV2010
        col1, col2, col3, col4, col5
        aaa,,sdf,"12,456",bla bla bla, xsaffadfafda and so on, and so on, "10,00", xxx, xxx

    The code:

        FileInputStream stream = new FileInputStream(UploadSupport.TEMPORARY_FILES_PATH + fileName);
        BufferedReader br = new BufferedReader(new InputStreamReader(stream, "UTF-8"));
        String line = br.readLine();
        String months[] = line.split(",");
        while ((line = br.readLine()) != null) {
            /* parse other lines */
        }

  • VS 2008 Open Word Document - Memory Error

    - by Lord Darkside
    I am executing the following code, which worked fine in VS 2003 (.NET 1.1) but seems to have decided otherwise now that I'm using VS 2008 (.NET 2.0/3.5):

        Dim wordApp As Microsoft.Office.Interop.Word.Application
        Dim wordDoc As Microsoft.Office.Interop.Word.Document
        Dim missing As Object = System.Reflection.Missing.Value

        wordApp = New Microsoft.Office.Interop.Word.Application()

        Dim wordfile As Object
        wordfile = "" ' path and file name goes here

        wordDoc = wordApp.Documents.Open(wordfile, missing, missing, missing, missing, missing, missing, missing, missing, missing, missing, missing, missing, missing, missing, missing)

    The error thrown when the Open is attempted is: "Attempted to read or write protected memory. This is often an indication that other memory is corrupt." Does anyone have any idea how to correct this?

  • IoC.Resolve vs Constructor Injection

    - by Omu
    I have heard a lot of people say that it is bad practice to use IoC.Resolve(), but I have never heard a good reason why (if it's all about testing, then you can just mock the container and you're done). Now, the advantage of using Resolve instead of Constructor Injection is that you don't need to create classes that have 5 parameters in the constructor, and whenever you create an instance of such a class you don't need to provide it with anything.
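
    For concreteness, here is a small sketch of the two styles in C++ (the toy container and all names are illustrative, not any real IoC library's API):

        #include <functional>
        #include <iostream>
        #include <memory>
        #include <string>
        #include <typeindex>
        #include <unordered_map>

        // A toy container, purely illustrative.
        struct Container {
            template <typename T>
            void registerFactory(std::function<std::shared_ptr<T>()> f) {
                factories[typeid(T)] = [f] { return std::static_pointer_cast<void>(f()); };
            }
            template <typename T>
            std::shared_ptr<T> resolve() {
                return std::static_pointer_cast<T>(factories.at(typeid(T))());
            }
            std::unordered_map<std::type_index,
                               std::function<std::shared_ptr<void>()>> factories;
        };

        struct Logger {
            void log(const std::string &m) { std::cout << m << "\n"; }
        };

        // Resolve style: the dependency is hidden inside the method body.
        struct ReportA {
            explicit ReportA(Container &c) : container(c) {}
            void run() { container.resolve<Logger>()->log("A"); } // hidden dependency
            Container &container;
        };

        // Constructor injection: the dependency is visible in the signature.
        struct ReportB {
            explicit ReportB(std::shared_ptr<Logger> l) : logger(std::move(l)) {}
            void run() { logger->log("B"); }
            std::shared_ptr<Logger> logger;
        };

        int main() {
            Container c;
            c.registerFactory<Logger>([] { return std::make_shared<Logger>(); });
            ReportA(c).run();                   // callers can't see that a Logger is needed
            ReportB(c.resolve<Logger>()).run(); // the Logger requirement is explicit
        }

    The usual objection to Resolve is visible in the sketch: ReportA compiles against anything and its real requirements only surface at run time, while ReportB's requirements are part of its type.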

  • C++ STL Map vs Vector speed

    - by sub
    In the interpreter for my experimental programming language I have a symbol table. Each symbol consists of a name and a value (the value can be, e.g., of type string, int, function, etc.).

    At first I represented the table with a vector and iterated through the symbols, checking whether the given symbol name fitted. Then I thought that using a map, in my case map<string,symbol>, would be better than iterating through the vector all the time, but:

    It's a bit hard to explain this part, but I'll try. If a variable is retrieved for the first time in a program in my language, its position in the symbol table has to be found (currently using the vector). If I iterated through the vector every time the line gets executed (think of a loop), it would be terribly slow (as it currently is, nearly as slow as Microsoft's batch). So I could use a map to retrieve the variable:

        SymbolTable[ myVar.Name ]

    But consider the following: if the variable, still using a vector, is found the first time, I can store its exact integer position in the vector along with it. That means: the next time it is needed, my interpreter knows that it has been "cached" and doesn't search the symbol table for it, but does something like

        SymbolTable.at( myVar.CachedPosition )

    Now my (rather hard?) question: should I use a vector for the symbol table together with caching the position of the variable in the vector? Should I rather use a map? Why? How fast is the [] operator? Should I use something completely different?
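
    A minimal sketch of the cached-position idea described above, with illustrative names and a deliberately simplified symbol type; note the cached index stays valid only as long as the vector is not reordered or shrunk:

        #include <cstddef>
        #include <iostream>
        #include <string>
        #include <vector>

        struct Symbol {
            std::string name;
            std::string value; // stand-in for the real variant-like value
        };

        struct VariableRef {
            std::string name;
            std::ptrdiff_t cachedPos = -1; // -1 means "not resolved yet"
        };

        // Linear search on first use, O(1) vector indexing afterwards.
        // Assumes the name is present; a real interpreter would handle misses.
        Symbol &lookup(std::vector<Symbol> &table, VariableRef &ref) {
            if (ref.cachedPos < 0) {
                for (std::size_t i = 0; i < table.size(); ++i) {
                    if (table[i].name == ref.name) {
                        ref.cachedPos = static_cast<std::ptrdiff_t>(i);
                        break;
                    }
                }
            }
            return table[static_cast<std::size_t>(ref.cachedPos)];
        }

        int main() {
            std::vector<Symbol> table{ { "x", "1" }, { "y", "2" } };
            VariableRef ref{ "y" };
            std::cout << lookup(table, ref).value << "\n"; // slow path, fills the cache
            std::cout << lookup(table, ref).value << "\n"; // cached path
        }

    For comparison, map<string,symbol>::operator[] is an O(log n) tree lookup (and it inserts a default-constructed value when the key is missing), so the cached-vector scheme is faster after the first hit but more fragile.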
