Search Results

Search found 8638 results on 346 pages for 'vs'.

Page 92/346 | < Previous Page | 88 89 90 91 92 93 94 95 96 97 98 99  | Next Page >

  • Core Data Inferred Migration – Automatic "lightweight" vs Manual

    - by ohhorob
    I've updated the model of an existing iPhone app in some simple ways (remove attribute, add attribute, remove index), and can use automatic lightweight migration to migrate the persistent store. Given the typical size of the data set, the processing time is not insignificant and warrants feedback for the user.

    NSMigrationManager provides a simple but useful migrationProgress value that sends KVO notifications as the migration is performed. That forms the basis of providing feedback; however, using an inferred model ([NSMappingModel inferredMappingModelForSourceModel:destinationModel:error:]) results in drastically different timing for the exact same dataset.

    Profile results on an original iPhone (2G):

    Automatic inferred lightweight migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6130 (+0.6130) models loaded
        PROFILE: 1.1759 (+0.5629) delegate -CacheManagerWillMigrate:
        PROFILE: 1.2516 (+0.0757) persistent store coordinator loaded
        PROFILE: 5.1436 (+3.8920) automatic lightweight migration completed
        PROFILE: 5.5435 (+0.3999) delegate -CacheManagerDidFinishMigration:withError:

    Manual inferred migration

        PROFILE: CacheManager -migrateStore
        PROFILE: 0.6660 (+0.6660) models loaded
        PROFILE: 1.1471 (+0.4811) inferred mapping model generated
        PROFILE: 1.4046 (+0.2574) delegate -CacheManagerWillMigrate:
        PROFILE: 1.5058 (+0.1013) persistent store coordinator loaded
        PROFILE: 22.6952 (+21.1894) manual migration completed
        PROFILE: 23.1478 (+0.4525) delegate -CacheManagerDidFinishMigration:withError:

    So, with an inferred model, the manual migration takes over five times longer than the automatic one. That is a big inconsistency, and the lightweight option that NSPersistentStoreCoordinator -addPersistentStoreWithType:configuration:URL:options:error: offers gives absolutely no indication of progress while processing. Can anybody provide a supported way to get migrationProgress values during automatic migration, OR a way to configure an inferred mapping model so that manual processing is as fast as the automatic path?

    Read the article

  • Minimum vs Minimal vertex covers

    - by panicked
    I am studying for an exam and one of the sample questions is as follows:

    Vertex cover: a vertex cover in a graph is a set of vertices such that each edge has at least one of its two end points in this set.

    Minimum vertex cover: a MINIMUM vertex cover in a graph is a vertex cover that has the smallest number of vertices among all possible vertex covers.

    Minimal vertex cover: a MINIMAL vertex cover in a graph is a vertex cover that does not contain another vertex cover (deleting any vertex from the set would create a set of vertices that is not a vertex cover).

    Question: A minimal vertex cover isn't always a minimum vertex cover. Demonstrate this with a simple example.

    Can anyone get their head around this? I am failing to see the distinction between the two. More importantly, I'm having a hard time visualizing it. I seriously hope he's not gonna ask odd questions like this one on the exam!
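    One standard counterexample is the path A-B-C: the set {B} is the unique minimum cover, while {A, C} is minimal (dropping either vertex uncovers an edge) yet not minimum because it has two vertices instead of one. Below is a small C# sketch that mechanically checks this; the graph, vertex names and helper methods are illustrative choices, not from the original question.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class MinimalVsMinimum
        {
            // Path graph A-B-C: two edges, (A,B) and (B,C).
            static readonly (char, char)[] Edges = { ('A', 'B'), ('B', 'C') };

            // A set is a cover if every edge has at least one endpoint in the set.
            static bool IsCover(HashSet<char> s) =>
                Edges.All(e => s.Contains(e.Item1) || s.Contains(e.Item2));

            // A cover is minimal if removing any single vertex breaks the cover.
            static bool IsMinimal(HashSet<char> s) =>
                IsCover(s) && s.All(v => !IsCover(new HashSet<char>(s.Where(x => x != v))));

            static void Main()
            {
                var minimum = new HashSet<char> { 'B' };          // smallest possible cover
                var minimalOnly = new HashSet<char> { 'A', 'C' }; // minimal, but larger

                Console.WriteLine($"{{B}}   cover={IsCover(minimum)} minimal={IsMinimal(minimum)} size={minimum.Count}");
                Console.WriteLine($"{{A,C}} cover={IsCover(minimalOnly)} minimal={IsMinimal(minimalOnly)} size={minimalOnly.Count}");
                // {A,C} is a minimal vertex cover (you cannot drop A or C), yet it is
                // not minimum, because {B} covers both edges with fewer vertices.
            }
        }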

    Read the article

  • Flash vs Javascript

    - by metal-gear-solid
    I have always heard that roughly 5% of users in the world keep JavaScript turned off, yet Adobe claims Flash content reaches 99% of Internet viewers: http://www.adobe.com/products/player_census/flashplayer/ Is that true, given that even the iPhone, iPad and BlackBerry don't support Flash? And if it is true, and the same thing can be achieved with either Flash or JavaScript, should we still go for Flash?

    Read the article

  • Qt vs .NET - a few comparisons [closed]

    - by Pirate for Profit
    Event Handling

    In Qt's event handling system you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); caught by void MyCatcherObj::valueChanged(int p, bool ok){}, blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down IMO. Basically, the footprints make more sense and you can visualize the project more easily without the clunky event handling system. I wish I could explain it better.

    The only thing is, I do love some of the ease of C# compared to C++, and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++.

    Database Ease of Doing Crap

    Also, what about datasets and database manipulation? I think .NET wins here, but I'm not sure.

    Threading/Concurrency

    What do you think of the threading? In .NET, all I've ever done is make a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things.

    Memory Usage

    Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean it up when it's about to really lag?

    However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.
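    For readers comparing the two models, here is a small sketch of the standard .NET "object sender, EventArgs e" pattern the poster refers to; the class and member names are invented for illustration, and the comments map each piece to its rough Qt counterpart.

        using System;

        // ValueChangedEventArgs and Worker are illustrative names, not from the post.
        class ValueChangedEventArgs : EventArgs
        {
            public int Percent { get; }
            public bool Ok { get; }
            public ValueChangedEventArgs(int percent, bool ok) { Percent = percent; Ok = ok; }
        }

        class Worker
        {
            // Roughly the counterpart of declaring "signals: void valueChanged(int, bool);" in Qt.
            public event EventHandler<ValueChangedEventArgs> ValueChanged;

            public void DoWork()
            {
                // ... some work ...
                ValueChanged?.Invoke(this, new ValueChangedEventArgs(42, true)); // ~ emit valueChanged(42, true)
            }
        }

        class Program
        {
            static void Main()
            {
                var worker = new Worker();
                // Subscribing is the counterpart of connect(worker, SIGNAL(...), receiver, SLOT(...)).
                worker.ValueChanged += (sender, e) =>
                    Console.WriteLine($"value changed: {e.Percent}% ok={e.Ok}");
                worker.DoWork();
            }
        }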

    Read the article

  • FreeLibrary vs implicit unloading DLL

    - by Adil
    I have implemented a DLL, including a DllMain() entry function:

        BOOL APIENTRY DllMain(HMODULE hModule,
                              DWORD   ul_reason_for_call,
                              LPVOID  lpReserved)
        {
            switch (ul_reason_for_call)
            {
            case DLL_PROCESS_ATTACH:
                ...
            case DLL_THREAD_ATTACH:
                ...
            case DLL_THREAD_DETACH:
                ...
            case DLL_PROCESS_DETACH:
                ...
            }
            return TRUE;
        }

    Unfortunately I made a mistake in the DLL_PROCESS_DETACH case and am accessing illegal memory (an access violation). I made a sample program which loads the library using LoadLibrary(), uses the library function, and finally calls FreeLibrary() and returns. When I executed this program, I didn't get any error message. But if I remove FreeLibrary(), the DLL_PROCESS_DETACH case is executed implicitly at process exit, and this time an error dialog box appears saying that there is an access violation. Why does calling FreeLibrary() suppress this error? Or does it handle this exception internally? What is the suggested way?

    Read the article

  • Silverlight vs. WPF vs. Winforms: what is good specifically for my purpose?

    - by Cyril Gupta
    I am about to start a new Windows application and the contenders for the platform are:

    - Windows Forms
    - WPF
    - Silverlight

    Now my experience with WPF, at least in my last application, was not very encouraging (the app failed to run on the deployment machines and I had to re-do it in Winforms), so my confidence is shaken here. My app is for mass distribution (the last version had some 100,000+ installations), so I want to make absolutely sure that my users will be able to use it and enjoy it without any problems. I would love to create a nice interface, going the next step like a Flex, Silverlight or iPhone app, with animations and effects. So I would really like to go with WPF or Silverlight if I can. My needs are:

    - Good support for visuals and animation effects.
    - Support for database connectivity.
    - Support for printing (is there an equivalent of PrintDocument in Silverlight?)
    - Must not suffer from deployment troubles.

    Silverlight is universal, but does it have printing support and a good control toolset? WPF has printing support and a nice toolset, but can I depend on it? Winforms is dated already and is not so impressive, but should I go with it anyway? Your advice would be appreciated.
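    For reference, this is the Windows Forms PrintDocument pattern the poster is asking about an equivalent for: a PrintDocument plus a PrintPage handler that draws each page. A rough sketch; the document text and font are placeholder choices.

        using System.Drawing;
        using System.Drawing.Printing;

        class PrintExample
        {
            static void Main()
            {
                // Classic WinForms-style printing: hand a PrintDocument a PrintPage
                // handler that draws the page, then call Print().
                var doc = new PrintDocument();
                doc.PrintPage += (sender, e) =>
                {
                    using (var font = new Font("Arial", 12))
                    {
                        e.Graphics.DrawString("Hello from PrintDocument", font,
                                              Brushes.Black, e.MarginBounds.Location);
                    }
                    e.HasMorePages = false; // single-page document
                };
                doc.Print(); // sends the page to the default printer
            }
        }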

    Read the article

  • C# Threading vs single thread

    - by user177883
    Is it always guaranteed that a multi-threaded application runs faster than a single-threaded one? I have two threads that populate data from a data source, but for different entities (e.g. a database, from two different tables), yet the single-threaded version of the application seems to run faster than the version with two threads. What could the reason be? When I look at the performance monitor, both CPUs are very spiky. Is this due to context switching? What are the best practices to max out the CPU and fully utilize it? I hope this is not ambiguous.
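    A common way to investigate this is to time both variants with the two per-table loads run concurrently as tasks. The sketch below uses hypothetical LoadCustomers/LoadOrders placeholders (not the poster's code); whether the parallel version wins depends mostly on whether each load is I/O-bound and whether the data source can actually serve both requests at once.

        using System;
        using System.Diagnostics;
        using System.Threading.Tasks;

        class ParallelLoadSketch
        {
            // Stand-ins for the two per-table loads in the question.
            static void LoadCustomers() { /* query table A, materialize entities */ }
            static void LoadOrders()    { /* query table B, materialize entities */ }

            static void Main()
            {
                var sw = Stopwatch.StartNew();
                LoadCustomers();
                LoadOrders();
                Console.WriteLine($"sequential: {sw.ElapsedMilliseconds} ms");

                sw.Restart();
                // Run both loads concurrently; this only pays off if each load spends
                // most of its time waiting on I/O (or uses separate connections),
                // otherwise thread startup and context switching can make it slower.
                Task.WaitAll(Task.Run(LoadCustomers), Task.Run(LoadOrders));
                Console.WriteLine($"parallel:   {sw.ElapsedMilliseconds} ms");
            }
        }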

    Read the article

  • Svn vs Git

    - by rudigrobler
    I am starting a new distributed project where some of the developers will not be in the same country. What should I use: Git or SVN? Why? PS: It is a smart client application running on Windows and will be developed using Visual Studio. [UPDATE] And does it work on Mac OS? (Not required, but interesting to know.)

    Read the article

  • Confused about using SpringMVC Vs JEE project in Netbeans

    - by Sanju
    Hi all, I want to start my first JEE project. I have read in a lot of places that the Spring MVC framework is a good choice (though I have never used it). My earlier experience with Java is limited: only some small app development using NetBeans, so I do have some experience with NetBeans. I also see that I can start a JEE project directly in NetBeans, so what kind of framework is NetBeans using underneath? PS: My understanding of a framework (e.g. Spring MVC) is that you follow the rules of the framework to configure your app, and then the framework takes care of linking your view, controller and model. So if I am using NetBeans, do I need to take care of linking my MVC parts myself?

    Read the article

  • wget not behaving via IPC::Open3 vs bash

    - by Ryley
    I'm trying to stream a file from a remote website to a local command and am running into some problems when trying to detect errors. The code looks something like this:

        use IPC::Open3;

        my @cmd = ('wget','-O','-','http://10.10.1.72/index.php'); # any website will do here
        my ($wget_pid, $wget_in, $wget_out, $wget_err);

        if (!($wget_pid = open3($wget_in, $wget_out, $wget_err, @cmd))) {
            print STDERR "failed to run open3\n";
            exit(1);
        }
        close($wget_in);

        my @wget_outs = <$wget_out>;
        my @wget_errs = <$wget_err>;

        print STDERR "wget stderr: ".join('', @wget_errs);
        # page and errors outputted on the next line, seems wrong
        print STDERR "wget stdout: ".join('', @wget_outs);
        # clean up after this; not shown is running the filtering command,
        # closing and waitpid'ing

    When I run that wget command directly from the command line and redirect stderr to a file, something sane happens: stdout is the downloaded page and stderr contains the info about opening the given page.

        wget -O - http://10.10.1.72/index.php 2> stderr_test_file

    When I run wget via open3, I'm getting both the page and the info mixed together in stdout. What I expect is the loaded page in one stream and STDERR from wget in another. I can see I've simplified the code to the point where it's not clear why I want to use open3, but the general plan is that I wanted to stream stdout to another filtering program as I received it, and then at the end I was going to read the stderr from both wget and the filtering program to determine what, if anything, went wrong. Other important things: I was trying to avoid writing the wget'd data to a file, then filtering that file to another file, then reading the output. It's key that I be able to see what went wrong, not just read $? >> 8 (i.e. I have to tell the user: hey, that IP address is wrong, or isn't the right kind of website, or whatever). Finally, I'm choosing system/open3/exec over other perl-isms (i.e. backticks) because some of the input is provided by untrustworthy users.
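    The underlying goal here is simply to capture a child process's stdout and stderr as two separate streams. Purely as a point of comparison (this is not a fix for the Perl above, and the URL is the same placeholder the poster used), here is what that separation looks like in C# with System.Diagnostics.Process:

        using System;
        using System.Diagnostics;

        class SeparateStreams
        {
            static void Main()
            {
                // Capture stdout (the page) and stderr (wget's diagnostics) separately.
                var psi = new ProcessStartInfo("wget", "-O - http://10.10.1.72/index.php")
                {
                    RedirectStandardOutput = true,
                    RedirectStandardError = true,
                    UseShellExecute = false
                };

                using (var p = Process.Start(psi))
                {
                    // Note: for very large pages one stream should be read asynchronously
                    // to avoid a pipe-buffer deadlock; kept simple for the sketch.
                    string page = p.StandardOutput.ReadToEnd();   // the downloaded page
                    string info = p.StandardError.ReadToEnd();    // wget's diagnostics
                    p.WaitForExit();

                    Console.Error.WriteLine("wget stderr: " + info);
                    Console.Error.WriteLine("wget stdout: " + page);
                }
            }
        }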

    Read the article

  • Generics vs inheritance (when no collection classes are involved)

    - by Ram
    This is an extension of this question, and might even be a duplicate of some other question (if so, please forgive me). I see from MSDN that generics are usually used with collections:

        The most common use for generic classes is with collections like linked lists, hash tables, stacks, queues, trees and so on where operations such as adding and removing items from the collection are performed in much the same way regardless of the type of data being stored.

    The examples I have seen also validate the above statement. Can someone give a valid use of generics in a real-life scenario which does not involve any collections?

    Pedantically, I was thinking about making an example which does not involve collections:

        public class Animal<T>
        {
            public void Speak()
            {
                Console.WriteLine("I am an Animal and my type is " + typeof(T).ToString());
            }

            public void Eat()
            {
                // Eat food
            }
        }

        public class Dog
        {
            public void WhoAmI()
            {
                Console.WriteLine(this.GetType().ToString());
            }
        }

    and "an Animal of type Dog" would be:

        Animal<Dog> magic = new Animal<Dog>();

    It is entirely possible to have Dog inherit from Animal (assuming a non-generic version of Animal): Dog : Animal, therefore a Dog is an Animal. Another example I was thinking of was a BankAccount. It can be BankAccount<Checking> or BankAccount<Savings>. This could just as well be Checking : BankAccount and Savings : BankAccount. Are there any best practices to determine whether we should go with generics or with inheritance?
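    One family of real-life, non-collection uses is generic algorithms constrained by an interface, where the type parameter buys compile-time type safety and avoids boxing rather than modeling an is-a relationship. A small sketch (the Clamp/Swap names are just illustrative, not a recommendation from the original post):

        using System;

        static class GenericHelpers
        {
            // Works for int, double, DateTime, string... anything comparable,
            // with no boxing and no need for an inheritance hierarchy.
            public static T Clamp<T>(T value, T min, T max) where T : IComparable<T>
            {
                if (value.CompareTo(min) < 0) return min;
                if (value.CompareTo(max) > 0) return max;
                return value;
            }

            // Another non-collection use: the relationship "same T twice",
            // which inheritance alone cannot express.
            public static void Swap<T>(ref T a, ref T b)
            {
                T tmp = a;
                a = b;
                b = tmp;
            }
        }

        class Demo
        {
            static void Main()
            {
                Console.WriteLine(GenericHelpers.Clamp(15, 0, 10));      // 10
                Console.WriteLine(GenericHelpers.Clamp(2.5, 0.0, 1.0));  // 1
                int x = 1, y = 2;
                GenericHelpers.Swap(ref x, ref y);
                Console.WriteLine($"{x} {y}");                           // 2 1
            }
        }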

    Read the article

  • How to implement web cache: internal fragmentation VS external fragmentation

    - by Summer_More_More_Tea
    Hi there. I came up with this question while playing with the Firefox web cache: what approach does a browser use to cache responses in limited disk space (taking my configuration as an example, 50MB is the upper bound)?

    I think two approaches can be employed. One is to cache the response objects one by one, but this is inefficient and will introduce external fragmentation, so the total cache space may not be fully used. The second is to treat the total space (50MB) as one contiguous file split into fixed-length slots; incoming response objects are then treated as blocks of data of the same length as the slots. We can fill slots until the whole file runs out, and then some replacement algorithm can be used to swap out the old cached objects. The latter approach will of course bring in internal fragmentation, but in my opinion it is easier to implement and maintain than the first strategy.

    But when I look inside Firefox's Cache directory, it seems to use a different method: a lot of variable-length files reside in that directory, all filled with non-displayable characters. I don't know, but really want to know, what mechanism a commercial browser such as Firefox employs to implement its web cache. Regards.
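    To make the second scheme concrete, here is a toy C# sketch of a fixed-slot cache: a fixed byte budget divided into equal slots, entries padded up to whole slots (that padding is the internal fragmentation), and simple FIFO eviction when no free run of slots remains. Slot size, budget and eviction policy are arbitrary illustrative choices, not how Firefox actually works.

        using System;
        using System.Collections.Generic;

        class SlottedCache
        {
            const int SlotSize = 4 * 1024;                        // 4 KB slots
            const int SlotCount = 50 * 1024 * 1024 / SlotSize;    // ~50 MB budget

            readonly byte[] _store = new byte[SlotCount * SlotSize];
            readonly bool[] _used = new bool[SlotCount];
            readonly Dictionary<string, (int first, int count, int length)> _index
                = new Dictionary<string, (int first, int count, int length)>();
            readonly Queue<string> _evictionOrder = new Queue<string>();

            public void Put(string key, byte[] value)
            {
                Remove(key);                                          // handle overwrite
                int slotsNeeded = (value.Length + SlotSize - 1) / SlotSize; // round up
                int start;
                while ((start = FindRun(slotsNeeded)) < 0 && _evictionOrder.Count > 0)
                    Remove(_evictionOrder.Dequeue());                 // FIFO eviction
                if (start < 0) return;                                // object bigger than cache

                Buffer.BlockCopy(value, 0, _store, start * SlotSize, value.Length);
                for (int i = 0; i < slotsNeeded; i++) _used[start + i] = true;
                _index[key] = (start, slotsNeeded, value.Length);
                _evictionOrder.Enqueue(key);
            }

            public byte[] Get(string key)
            {
                if (!_index.TryGetValue(key, out var e)) return null;
                var result = new byte[e.length];
                Buffer.BlockCopy(_store, e.first * SlotSize, result, 0, e.length);
                return result;
            }

            void Remove(string key)
            {
                if (!_index.TryGetValue(key, out var e)) return;      // stale keys are no-ops
                for (int i = 0; i < e.count; i++) _used[e.first + i] = false;
                _index.Remove(key);
            }

            // Find a run of contiguous free slots; the contiguity requirement is what
            // lets a little external fragmentation creep back in even in this scheme.
            int FindRun(int needed)
            {
                for (int i = 0, run = 0; i < SlotCount; i++)
                {
                    run = _used[i] ? 0 : run + 1;
                    if (run == needed) return i - needed + 1;
                }
                return -1;
            }
        }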

    Read the article

  • Personal Cache vs Memcache?

    - by Kerry
    I have a personal caching class, which can be seen here (based off WordPress'): http://pastie.org/988427 I recently learned about memcache, and the advice I read was to memcache EVERYTHING: http://highscalability.com/blog/2010/5/17/7-lessons-learned-while-building-reddit-to-270-million-page.html My first thought was just to keep my class with its current functions and make it use memcache instead. Is there any downside to doing this? The main difference I see is that memcache persists on the server from page to page, while mine lives for one page load. The problem I see arising, and this is with any system, is that the cached values are dynamic. They change all the time, whether it's search results, visible products, etc. If it's all cached, won't that create a problem? Is there a way to handle this? Obviously if something brings back the same results every time it should be cached, but that's why I was doing it on a per-page-load basis. I'm sure there is a way to handle this; or is the cache time usually just set between 5 minutes and an hour?
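    The usual way to reconcile "cache everything" with dynamic data is a short time-to-live plus explicit invalidation whenever the underlying data changes. A sketch of that wrapper idea in C# (the document's other code examples are C#); class and method names are invented, and a real deployment would put a memcached client behind this interface rather than an in-process dictionary.

        using System;
        using System.Collections.Concurrent;

        // Get-or-compute cache with a TTL and explicit invalidation.
        class TtlCache
        {
            private readonly ConcurrentDictionary<string, (object value, DateTime expires)> _items
                = new ConcurrentDictionary<string, (object value, DateTime expires)>();

            public T GetOrAdd<T>(string key, TimeSpan ttl, Func<T> compute)
            {
                if (_items.TryGetValue(key, out var entry) && entry.expires > DateTime.UtcNow)
                    return (T)entry.value;                      // fresh hit

                T value = compute();                            // miss or stale: recompute
                _items[key] = (value, DateTime.UtcNow + ttl);
                return value;
            }

            // Call this whenever the underlying data changes (product edited, etc.)
            // so readers never serve the stale entry for the full TTL.
            public void Invalidate(string key) => _items.TryRemove(key, out _);
        }

        class Demo
        {
            static void Main()
            {
                var cache = new TtlCache();
                var products = cache.GetOrAdd("visible-products", TimeSpan.FromMinutes(5),
                                              () => "result of an expensive query");
                Console.WriteLine(products);
                cache.Invalidate("visible-products"); // e.g. after an admin edits a product
            }
        }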

    Read the article

  • android mobile development performance vs extensibility

    - by mixm
    I'm developing a game in Android, and I've been thinking about subdividing many of the elements of the game (e.g. game objects) into separate classes and sub-classes, but I know that method calls to these objects will cause some overhead. Would it be better to favour performance or extensibility?

    Read the article

  • Ambiguous Generic restriction T:class vs T:struct

    - by Maslow
    This code generates a compiler error saying that the member is already defined with the same parameter types:

        private T GetProperty<T>(Func<Settings, T> GetFunc) where T : class
        {
            try
            {
                return GetFunc(Properties.Settings.Default);
            }
            catch (Exception exception)
            {
                SettingReadException(this, exception);
                return null;
            }
        }

        private TNullable? GetProperty<TNullable>(Func<Settings, TNullable> GetFunc) where TNullable : struct
        {
            try
            {
                return GetFunc(Properties.Settings.Default);
            }
            catch (Exception ex)
            {
                SettingReadException(this, ex);
                return new Nullable<TNullable>();
            }
        }

    Is there a clean workaround?
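    The collision happens because C# overload resolution ignores generic constraints, so two methods differing only by "where T : class" vs "where T : struct" have identical signatures. One common workaround is to give the two cases distinct names and funnel both through a single unconstrained core. The sketch below uses hypothetical names and a stubbed Settings type standing in for Properties.Settings.Default and SettingReadException, so it compiles on its own; it is one possible shape, not the only fix.

        using System;

        // Minimal stand-in for the poster's settings type.
        class Settings { public static readonly Settings Default = new Settings(); }

        class SettingsReader
        {
            public T GetRefProperty<T>(Func<Settings, T> getFunc) where T : class
                => GetCore(getFunc, fallback: (T)null);

            public TValue? GetValueProperty<TValue>(Func<Settings, TValue> getFunc) where TValue : struct
                => GetCore<TValue?>(s => getFunc(s), fallback: null);

            // One unconstrained implementation holds the try/catch logic from the question.
            private T GetCore<T>(Func<Settings, T> getFunc, T fallback)
            {
                try
                {
                    return getFunc(Settings.Default);
                }
                catch (Exception ex)
                {
                    Console.Error.WriteLine("setting read failed: " + ex.Message); // stands in for SettingReadException
                    return fallback;
                }
            }
        }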

    Read the article

  • plist vs static array

    - by morticae
    Generally, I use static arrays and dictionaries for containing lookup tables in my classes. However, with the number of classes quickly creeping into the hundreds, I'm hesitant to continue using this pattern. Even if these static collections are initialized lazily, I've essentially got a bounded memory leak going on as someone uses my app. Most of these are arrays of strings so I can convert strings into NSInteger constants that can be used with switch statements, etc. I could just recreate the array/dictionary on every call, but many of these functions are used heavily and/or in tight loops. So I'm trying to come up with a pattern that is both performant and not persistent. If I store the information in a plist, does iPhone OS do anything intelligent about caching it when loaded? Do you have another approach that might be relevant?

    Read the article

  • AJAX vs ActiveX/Flash for browser-based game

    - by iconiK
    I have been following the usage of JavaScript for the past few years, and with the release of extremely fast scripting engines (V8, SquirrelFish Extreme, TraceMonkey, etc.) the possibilities of JavaScript have increased dramatically. However, the usage share of Internet Explorer, coupled with its total lack of support for recent standards, makes me want to drop a bomb on Microsoft's HQ, as it creates a huge amount of problems for any website. The game will need to be pretty dynamic client-side, with animations and other eye-candy, but not a full-blown game like those that run directly in the OS using DirectX or OpenGL. However, this might be a bit of a stretch for JavaScript and will certainly feel extremely slow in Internet Explorer (given that the current IE engine can be hundreds of times slower than SFX; we'll have to see what IE9 brings), so would it be better to just do the whole thing in Flash? I know this means requiring the plug-in AND I have no experience whatsoever with Flash (other than browsing YouTube :P). It also means I can't just output directly from PHP; I would have to use XML or some other format to pass data to it (JSON is directly integrated into JS, and PHP can deal with it easily). Another idea would be to provide an alternative interface just for IE, though I don't know how (ActiveX maybe? or with Flash, but then why not just provide it to all browsers), or to simply not support it and require the use of other browsers, although that is plain stupid from a business perspective. So here I am, wondering what approach to take and thus asking for your advice. How should I build the client side? AJAX in all browsers, Flash in all browsers, or a mix (AJAX for "modern" browsers and something else for the "grandpa": IE)?

    Read the article

  • Drupal VS Zikula

    - by DaNieL
    Hi guys! I'm new to Drupal (I just finished my first web site with it), and I found it very simple and powerful to use. Recently, I've been asked to build a community site with Zikula. They prefer to use Zikula because it is the 'evolution' of PHP-Nuke, which is the CMS they currently use (I have restyled and rebuilt it). Moreover, I want to learn and use just one CMS. So, what are the main differences between Drupal and Zikula? What are the advantages and disadvantages of one and the other? Why should I choose Drupal or Zikula? P.S.: I know that most of the time the answer is 'it's all about your needs', but this is supposed to be a general question.

    Read the article

  • Storing Templates and Object-Oriented vs Relational Databases

    - by syrion
    I'm designing some custom blog software, and have run into a conundrum regarding database design. The software requires that there be multiple content types, each of which will require different entry forms and presentation templates. My initial instinct is to create these content types as objects, then serialize them and store them in the database as JSON or YAML, with the entry forms and templates as simple strings attached to the "contentTypes" table. This seems cumbersome, however. Are there established best practices for dealing with this design? Is this a use case where I should consider an object database? If I should be using an object database, which should I consider? I am currently working in Python and would prefer a capable Python library, but can move to Java if need be.

    Read the article

  • Caching Profiles web.config vs IIS

    - by Lieven Cardoen
    What is the difference between configuring a caching profile in Web.config and configuring it in IIS? If you have this in Web.config:

        <caching>
          <outputCache enableOutputCache="true" />
          <outputCacheSettings>
            <outputCacheProfiles>
              <add duration="14800" enabled="true" varyByParam="*" name="AssetCacheProfile" />
            </outputCacheProfiles>
          </outputCacheSettings>
        </caching>

    and nothing configured in IIS under Output Caching, will it work? And what if you add all the extensions I use to Output Caching in IIS, what does that change? It's an aspx page, RetrieveBlob.aspx, that uses this caching profile:

        <%@ OutputCache CacheProfile="AssetCacheProfile" %>
        <%@ Page Language="C#" AutoEventWireup="true" CodeFile="RetrieveBlob.aspx.cs" Inherits="RetrieveBlob" %>

    Read the article

  • C# vs C - Big performance difference

    - by John
    I'm finding massive performance differences between similar code in C and C#.

    The C code is:

        #include <stdio.h>
        #include <time.h>
        #include <math.h>

        int main()
        {
            int i;
            double root;
            clock_t start = clock();
            for (i = 0; i <= 100000000; i++) {
                root = sqrt(i);
            }
            printf("Time elapsed: %f\n", ((double)clock() - start) / CLOCKS_PER_SEC);
        }

    And the C# (console app) is:

        using System;
        using System.Collections.Generic;
        using System.Text;

        namespace ConsoleApplication2
        {
            class Program
            {
                static void Main(string[] args)
                {
                    DateTime startTime = DateTime.Now;
                    double root;
                    for (int i = 0; i <= 100000000; i++)
                    {
                        root = Math.Sqrt(i);
                    }
                    TimeSpan runTime = DateTime.Now - startTime;
                    Console.WriteLine("Time elapsed: " + Convert.ToString(runTime.TotalMilliseconds / 1000));
                }
            }
        }

    With the above code, the C# completes in 0.328125 seconds (release build) and the C takes 11.14 seconds to run. The C is being compiled to a Windows executable using MinGW. I've always been under the assumption that C/C++ is faster than, or at least comparable to, C#/.NET. What exactly is causing the C code to run over 30 times slower?

    EDIT: It does appear that the C# optimizer was removing the root calculation since it wasn't being used. I changed the root assignment to root += and printed out the total at the end. I've also compiled the C using cl.exe with the /O2 flag set for maximum speed. The results are now:

        3.75 seconds for the C
        2.61 seconds for the C#

    The C is still taking longer, but this is acceptable.
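    As the EDIT notes, the original C# loop's result was dead code, so the JIT could discard the work entirely. A sketch of a fairer C# micro-benchmark, using Stopwatch instead of DateTime.Now, a warm-up pass, and a consumed result so the loop cannot be optimized away (structure and names are illustrative):

        using System;
        using System.Diagnostics;

        class SqrtBenchmark
        {
            static void Main()
            {
                // Warm-up run so JIT compilation isn't counted in the measurement.
                RunOnce();

                var sw = Stopwatch.StartNew();   // higher resolution than DateTime.Now
                double total = RunOnce();
                sw.Stop();

                // Printing the accumulated total forces the loop's work to be kept.
                Console.WriteLine($"Time elapsed: {sw.Elapsed.TotalSeconds:F4} s (total = {total})");
            }

            static double RunOnce()
            {
                double total = 0;
                for (int i = 0; i <= 100000000; i++)
                {
                    total += Math.Sqrt(i);
                }
                return total;
            }
        }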

    Read the article
