Search Results

Search found 11888 results on 476 pages for 'hero vs zero'.


  • Unexpected output from Bubblesort program with MSVC vs TCC

    - by Sujith S Pillai
    One of my friends sent this code to me, saying it doesn't work as expected: #include<stdio.h> void main() { int a [10] ={23, 100, 20, 30, 25, 45, 40, 55, 43, 42}; int sizeOfInput = sizeof(a)/sizeof(int); int b, outer, inner, c; printf("Size is : %d \n", sizeOfInput); printf("Values before bubble sort are : \n"); for ( b = 0; b < sizeOfInput; b++) printf("%d\n", a[b]); printf("End of values before bubble sort... \n"); for ( outer = sizeOfInput; outer > 0; outer-- ) { for ( inner = 0 ; inner < outer ; inner++) { printf ( "Comparing positions: %d and %d\n",inner,inner+1); if ( a[inner] > a[inner + 1] ) { int tmp = a[inner]; a[inner] = a [inner+1]; a[inner+1] = tmp; } } printf ( "Bubble sort total array size after inner loop is %d :\n",sizeOfInput); printf ( "Bubble sort sizeOfInput after inner loop is %d :\n",sizeOfInput); } printf ( "Bubble sort total array size at the end is %d :\n",sizeOfInput); for ( c = 0 ; c < sizeOfInput; c++) printf("Element: %d\n", a[c]); } I am using the Microsoft Visual Studio command-line tools to compile this on a Windows XP machine: cl /EHsc bubblesort01.c My friend gets the correct output on a dinosaur machine (the code is compiled with TCC there). My output is unexpected: the array mysteriously grows in size partway through. If the variable sizeOfInput is renamed to sizeOfInputt, it gives the expected results! A search at the Microsoft Visual C++ Developer Center doesn't give any results for "sizeOfInput". I am not a C/C++ expert, and am curious to find out why this happens - any C/C++ experts who can "shed some light" on this? Unrelated note: I seriously thought of rewriting the whole code to use quicksort or merge sort before posting it here. But, after all, it is not Stooge sort... Edit: I know the code is not correct (it reads beyond the last element), but I am curious why the variable name makes a difference.
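
    The behaviour is a consequence of the out-of-bounds access the poster already suspects: when inner reaches sizeOfInput - 1, a[inner + 1] refers to a[10], one element past the end of the array, and the swap can write to it. That is undefined behaviour, so the stray store may land on whichever local the compiler happened to place next to the array; renaming the variable can change the stack layout and therefore the symptom. A minimal sketch of the effect (not the original program, and what actually gets clobbered is entirely compiler- and layout-dependent):

        #include <stdio.h>

        int main(void)
        {
            int data[4] = {4, 3, 2, 1};
            int limit = 4;                 /* plays the role of sizeOfInput */

            data[4] = 999;                 /* one past the end: undefined behaviour */

            /* may print 4, may print 999, may crash; there is no guarantee */
            printf("limit is now %d\n", limit);
            return 0;
        }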

    Read the article

  • LINQ-to-SQL vs stored procedures?

    - by scottmarlowe
    I took a look at the "Beginner's Guide to LINQ" post here on StackOverflow (http://stackoverflow.com/questions/8050/beginners-guide-to-linq), but had a follow-up question: We're about to ramp up a new project where nearly all of our database ops will be fairly simple data retrievals (there's another segment of the project which already writes the data). Most of our other projects up to this point make use of stored procedures for such things. However, I'd like to leverage LINQ-to-SQL if it makes more sense. So, the question is this: For simple data retrievals, which approach is better, LINQ-to-SQL or stored procs? Any specific pros or cons? Thanks.
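
    For a sense of what each option looks like for a simple retrieval, here is a rough sketch (the NorthwindDataContext, its Customers table and the GetCustomersByCity stored procedure are hypothetical, and both are assumed to be mapped by the LINQ-to-SQL designer):

        using System;
        using System.Linq;

        class RetrievalComparison
        {
            static void Main()
            {
                using (var db = new NorthwindDataContext())
                {
                    // LINQ-to-SQL: the query is composed in C# and translated to SQL at runtime
                    var names = db.Customers
                                  .Where(c => c.City == "London")
                                  .Select(c => c.CompanyName)
                                  .ToList();

                    // Stored procedure: GetCustomersByCity becomes a method on the context
                    // once it has been dragged onto the designer surface
                    var fromSproc = db.GetCustomersByCity("London").ToList();

                    Console.WriteLine("{0} rows vs {1} rows", names.Count, fromSproc.Count);
                }
            }
        }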

    Read the article

  • C++ best practice: Returning reference vs. object

    - by Mike Crowe
    Hi folks, I'm trying to learn C++, and trying to understand returning objects. I seem to see 2 ways of doing this, and need to understand what is the best practice. Option 1: QList<Weight *> ret; Weight *weight = new Weight(cname, "Weight"); ret.append(weight); ret.append(c); return &ret; Option 2: QList<Weight *> *ret = new QList<Weight *>(); Weight *weight = new Weight(cname, "Weight"); ret->append(weight); ret->append(c); return ret; (of course, I may not understand this yet either). Which way is considered best practice, and should be followed? TIA Mike
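
    Both options have pitfalls: Option 1 returns the address of a local QList that is destroyed as soon as the function returns, and Option 2 hands the caller a raw heap pointer it must remember to delete. The usual recommendation is to return the container by value; a minimal sketch using the question's Qt types (Weight's members are invented here):

        #include <QList>
        #include <QString>

        class Weight
        {
        public:
            Weight(const QString &name, const QString &kind)
                : m_name(name), m_kind(kind) {}

        private:
            QString m_name;
            QString m_kind;
        };

        QList<Weight> makeWeights(const QString &cname)
        {
            QList<Weight> ret;                     // local object, returned by value
            ret.append(Weight(cname, "Weight"));
            return ret;                            // cheap: QList is implicitly shared and
                                                   // the copy is normally elided (NRVO)
        }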

    Read the article

  • What can we do to make Microsoft add IntelliTrace to VS 2010 Professional Edition?

    - by Ikaso
    Now that Microsoft has released VS 2010 I went to the product page here. To my amazement I found out that IntelliTrace (Historical Debugger) is supported only in the Ultimate Edition of VS 2010. This means that you have to spend almost $4000 for a renewal and almost $12000 for a new license. Does someone have any idea how we can change this decision? Specifically, how can we get Microsoft to add this feature to VS 2010 Professional Edition?

    Read the article

  • MS SQL Server BEGIN/END vs BEGIN TRAN/COMMIT/ROLLBACK

    - by Rich
    I have been trying to find info on the web about the differences between these statements, and it seems to me they are identical, but I can't find confirmation of that or any kind of comparison between the two. What is the difference between doing this: BEGIN -- Some update, insert, set statements END and doing this: BEGIN TRAN -- Some update, insert, set statements COMMIT TRAN ? Note that there is only the need to roll back in the case of some exception or timeout or other general failure; there would not be a conditional reason to roll back.
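
    The short answer is that BEGIN...END is only a statement block (the same construct used after IF or WHILE) and has no transactional meaning, whereas BEGIN TRAN/COMMIT/ROLLBACK controls atomicity. A sketch against a hypothetical Accounts table:

        -- BEGIN/END: purely a syntactic block; under the default autocommit
        -- behaviour each statement still commits on its own
        BEGIN
            UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
            UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
        END

        -- BEGIN TRAN/COMMIT: the two updates succeed or fail as a unit
        BEGIN TRY
            BEGIN TRAN;
            UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
            UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
            COMMIT TRAN;
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRAN;
        END CATCH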

    Read the article

  • .NET 2.0 vs .NET 4.0 loading error

    - by David Rutten
    My class library is compiled against .NET 2.0 and works just fine whenever I try to load it as a plugin under the 2.0 runtime. If, however, the master application is running the .NET 4.0 runtime, I get an exception as soon as the resources need to be accessed: Exception occurred during processing of command: Grasshopper Plug-in = Grasshopper Could not find file 'Grasshopper.resources'. Stack trace: at UnhandledExceptionLogger.UnhandledThreadException(Object sender, ThreadExceptionEventArgs args) at System.Windows.Forms.Application.ThreadContext.OnThreadException(Exception t) at System.Windows.Forms.Control.WndProcException(Exception e) at System.Windows.Forms.ControlNativeWindow.OnThreadException(Exception e) at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam) at System.Windows.Forms.SafeNativeMethods.ShowWindow(HandleRef hWnd, Int32 nCmdShow) at System.Windows.Forms.Control.SetVisibleCore(Boolean value) at System.Windows.Forms.Form.SetVisibleCore(Boolean value) at System.Windows.Forms.Form.Show(IWin32Window owner) .... What's going on and how do I make my project load on all .NET Runtimes?

    Read the article

  • Understanding Ruby class vs instance methods

    - by randombits
    I have the following code: #!/usr/bin/ruby class Person def self.speak p = self.new puts "Hello" p.chatter end private def chatter puts "Chattering" end end p = Person.new Person.speak I'd like to make chatter private, so it isn't accessible from outside the instance, but I still want the class method to be able to call it on p. Is there a better way to design this so chatter isn't available to the public, but a "factory" method like self.speak can call chatter?
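
    One possible restructuring, sketched below: keep chatter private and have the class-level factory delegate to a public instance method, which can then call the private method with an implicit receiver. (Calling p.send(:chatter) from the class method is another option, at the cost of deliberately bypassing the access check.)

        #!/usr/bin/ruby
        class Person
          def self.speak
            new.speak                # factory: build an instance, hand the work to it
          end

          def speak
            puts "Hello"
            chatter                  # implicit receiver, so calling the private method is allowed
          end

          private

          def chatter
            puts "Chattering"
          end
        end

        Person.speak                 # => Hello / Chattering
        # Person.new.chatter         # => NoMethodError: private method `chatter' called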

    Read the article

  • New to ASP.NET: Webforms vs MVC2

    - by Sahat
    I am new to ASP.NET Development and can't decide between developing with Webforms or MVC 2. Nevermind the pros and cons of each. I've seen mixed opinions of each. But which method would be the best for someone who has no prior experience in ASP.NET or C#? If your answer is: learn both, then which should I learn first? MVC 2 or Webforms?

    Read the article

  • Rails' page caching vs. HTTP reverse proxy caches

    - by John Topley
    I've been catching up with the Scaling Rails screencasts. In episode 11, which covers advanced HTTP caching (using reverse proxy caches such as Varnish and Squid etc.), they recommend only considering using a reverse proxy cache once you've already exhausted the possibilities of page, action and fragment caching within your Rails application (as well as memcached etc., but that's not relevant to this question). What I can't quite understand is how using an HTTP reverse proxy cache can provide a performance boost for an application that already uses page caching. To simplify matters, let's assume that I'm talking about a single host here. This is my understanding of how both techniques work (maybe I'm wrong): With page caching the Rails process is hit initially and then generates a static HTML file that is served directly by the Web server for subsequent requests, for as long as the cache for that request is valid. If the cache has expired then Rails is hit again and the static file is regenerated with the updated content ready for the next request. With an HTTP reverse proxy cache the Rails process is hit when the proxy needs to determine whether the content is stale or not. This is done using various HTTP headers such as ETag, Last-Modified etc. If the content is fresh then Rails responds to the proxy with an HTTP 304 Not Modified and the proxy serves its cached content to the browser, or even better, responds with its own HTTP 304. If the content is stale then Rails serves the updated content to the proxy, which caches it and then serves it to the browser. If my understanding is correct, then doesn't page caching result in fewer hits to the Rails process? There isn't all that back and forth to determine if the content is stale, meaning better performance than reverse proxy caching. Why might you use both techniques in conjunction?
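
    For reference, here is roughly what the two mechanisms look like at the controller level (a sketch against the Rails 2.x-era API used in the screencasts; PostsController and Post are hypothetical):

        class PostsController < ApplicationController
          caches_page :index                # page caching: a static HTML file is written out
                                            # and served by the web server on later requests

          def show
            @post = Post.find(params[:id])
            # conditional GET: a reverse proxy (or browser) revalidates with
            # If-None-Match / If-Modified-Since and gets a 304 while the content is fresh
            fresh_when :etag => @post, :last_modified => @post.updated_at
          end
        end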

    Read the article

  • service.close() vs. service.abort() - WCF example

    - by Larry Watanabe
    In one of the WCF tutorials, I saw the following sample code: Dim service As ... (a WCF service) Try ... service.Close() Catch ex As Exception ... service.Abort() End Try Is this the correct way to ensure that resources (i.e. connections) are released even under error conditions? Thanks for the answers guys! I upvoted you all.
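
    Broadly yes, although the commonly recommended pattern is a little more specific: Close() the proxy on success, and Abort() it when the channel has faulted or Close() itself throws, rather than catching a bare Exception for everything. A sketch (the CalculatorClient type and its DoWork method are hypothetical; CommunicationException comes from System.ServiceModel):

        Dim client As New CalculatorClient()
        Try
            client.DoWork()
            client.Close()
        Catch ex As CommunicationException
            client.Abort()
        Catch ex As TimeoutException
            client.Abort()
        Catch ex As Exception
            client.Abort()
            Throw
        End Try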

    Read the article

  • php vs bash for CLI scripting?

    - by fayer
    I have never used PHP with the CLI, but I have seen scripts run with PHP code. I was wondering: why should we use bash when PHP is so popular and is able to run in the CLI? What are the pros and cons of each one? Should I use PHP for all CLI scripting in the future?

    Read the article

  • Method Vs Property

    - by obsoleteattribute
    Hi, I'm a newbie to .NET. I have a class called Project; a project can have multiple forecasts. Now, if I want to check whether a project has any forecasts, should I use a read-only boolean property called HasForecast, or a method named HasForecast() which returns a boolean value? From the Framework Design Guidelines I came to know that methods should be used when the operation is complex. Since I'm retrieving the value of the forecasts from the DB, should I use a method, or, since it is a logical data member, should I use a property? If I use a property, can I call a method in the DBLayer from its getter? Please explain. Regards, Ravi
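
    A sketch of how that guideline is usually applied (Project, Forecast and IForecastRepository are hypothetical stand-ins): keep HasForecasts as a property only if reading it is cheap and free of surprises, for example by loading the forecasts once and caching them; if every call genuinely has to hit the database, a method makes the cost explicit to callers.

        using System.Collections.Generic;
        using System.Linq;

        public class Forecast { }

        public interface IForecastRepository
        {
            IEnumerable<Forecast> GetForecasts(Project project);
        }

        public class Project
        {
            private readonly IForecastRepository _repository;   // assumed data-access abstraction
            private List<Forecast> _forecasts;                   // loaded once, then cached

            public Project(IForecastRepository repository)
            {
                _repository = repository;
            }

            // Property: cheap after the first read, no hidden work for the caller.
            public bool HasForecasts
            {
                get { return Forecasts.Count > 0; }
            }

            private List<Forecast> Forecasts
            {
                get { return _forecasts ?? (_forecasts = _repository.GetForecasts(this).ToList()); }
            }
        }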

    Read the article

  • Remote Service Vs. Local Service

    - by Nguyen Dai Son
    Dear All, I am a newbie to Android. I have read a lot of articles about Android services, but I still don't clearly understand the difference between a local service and a remote service (except that "local services run in the same process as the launching activity; remote services run in their own process" - The Busy Coder's Guide to Android Development, Mark L. Murphy). Please show me the difference between a local service and a remote service. What are the advantages/disadvantages of using a local service? What are the advantages/disadvantages of using a remote service? Thanks & best regards, Dai Son
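
    At the declaration level the difference comes down to the android:process attribute in the manifest (the service names below are hypothetical). A local service runs in the application's process and can be bound and called like any other object there; a remote service gets its own process, so clients must go through AIDL or a Messenger, which buys isolation at the cost of IPC overhead and extra memory.

        <!-- local service: runs in the application's own process -->
        <service android:name=".MusicService" />

        <!-- remote service: ":remote" gives it a private process of its own,
             so callers talk to it through AIDL or a Messenger -->
        <service android:name=".SyncService"
                 android:process=":remote" />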

    Read the article

  • Changing settings in multiple VS project

    - by Danra
    Hey, Is there any way to change settings for multiple projects in a Visual Studio 2008 C++ solution? For example, adding a library dependency for all the projects, or ignoring a specific warning. I am aware of being able to change some global settings in the IDE itself, but I'm looking for settings which will be stored in the solution/project files. Thanks, Dan

    Read the article

  • dotTrace Dead vs. Garbage

    - by Moshe
    After reading the dottrace documentation I realized that: Dead objects are objects deleted before the end point of the snapshot. Garbage objects are objects allocated after the starting point and deleted before the end point - in other words, "Garbage objects" is a subset of "Dead objects". But after doing some profiling sessions, I could see that sometimes the number of "Garbage objects" is by far greater than the number of "Dead objects" of the same class (for example System.String). How should I interpret this phenomenon?

    Read the article

  • CFLAGS vs CPPFLAGS

    - by EB
    I understand that CFLAGS (or CXXFLAGS for C++) are for the compiler, whereas CPPFLAGS is used by the preprocessor. But I still don't understand the difference. I need to specify an include path for a header file that is included with #include -- because #include is a preprocessor directive, is the preprocessor (CPPFLAGS) the only thing I care about? Under what circumstances do I need to give the compiler an extra include path? In general, if the preprocessor finds and includes needed header files, why does it ever need to be told about extra include directories? What use is CFLAGS at all? (In my case, I actually found that BOTH of these allow me to compile my program, which adds to the confusion... I can use CFLAGS OR CPPFLAGS to accomplish my goal (in autoconf context at least). What gives?)
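
    A sketch of the conventional split (the include path is hypothetical). GNU make's built-in compile rule runs roughly $(CC) $(CFLAGS) $(CPPFLAGS) -c, so both variables end up on the same compiler-driver command line (which is why an -I path "works" in either), but by convention they carry different kinds of flags:

        CPPFLAGS = -I/opt/mylib/include -DNDEBUG   # preprocessor: -I search paths, -D macro definitions
        CFLAGS   = -O2 -g -Wall                    # compiler proper: optimization, warnings, debug info
        LDFLAGS  = -L/opt/mylib/lib                # linker search paths, for completeness

        all: foo        # relies on make's built-in compile and link rules (foo from foo.c)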

    Read the article

  • Yet another Haskell vs. Scala question

    - by Travis Brown
    I've been using Haskell for several months, and I love it—it's gradually become my tool of choice for everything from one-off file renaming scripts to larger XML processing programs. I'm definitely still a beginner, but I'm starting to feel comfortable with the language and the basics of the theory behind it. I'm a lowly graduate student in the humanities, so I'm not under a lot of institutional or administrative pressure to use specific tools for my work. It would be convenient for me in many ways, however, to switch to Scala (or Clojure). Most of the NLP and machine learning libraries that I work with on a daily basis (and that I've written in the past) are Java-based, and the primary project I'm working for uses a Java application server. I've been mostly disappointed by my initial interactions with Scala. Many aspects of the syntax (partial application, for example) still feel clunky to me compared to Haskell, and I miss libraries like Parsec and HXT and QuickCheck. I'm familiar with the advantages of the JVM platform, so practical questions like this one don't really help me. What I'm looking for is a motivational argument for moving to Scala. What does it do (that Haskell doesn't) that's really cool? What makes it fun or challenging or life-changing? Why should I get excited about writing it?

    Read the article

  • Using 'git pull' vs 'git checkout -f' for website deployment

    - by Michelle
    I've found two common approaches to automatically deploying website updates using a bare remote repo. The first requires that the repo is cloned into the document root of the webserver, and in the post-update hook a git pull is used. cd /srv/www/siteA/ || exit unset GIT_DIR git pull hub master The second approach adds a 'detached work tree' to the bare repository. The post-receive hook uses git checkout -f to replicate the repository's HEAD into the work directory, which is the webserver's document root, i.e. GIT_WORK_TREE=/srv/www/siteA/ git checkout -f The first approach has the advantage that changes made in the website's working directory can be committed and pushed back to the bare repo (however files should not be updated on the live server). The second approach has the advantage that the git directory is not within the document root, but this is easily solved using .htaccess. Is one method objectively better than the other in terms of best practice? What other advantages and disadvantages am I missing?

    Read the article

  • VS debugging and watching a variable for changes

    - by Shawn Mclean
    I have a property inside a class that is getting changed by something. The only place I change the value in my code is a line that looks like this: pushpin.Position.Altitude = -31; During Visual Studio debugging, is there a way to watch .Altitude for any changes, preferably breaking at the assignment statement that changes the value? If this is the correct way to track down this problem, could I have a step-by-step tutorial/instructions on how to do this? Thanks.
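
    Visual Studio's data breakpoints (Debug > New Breakpoint > New Data Breakpoint) only cover native C++; for managed code a common workaround, when the type is yours or can be wrapped, is to route changes through a property and break in its setter, either with an ordinary breakpoint or with Debugger.Break(). A sketch (this Position class is an assumed stand-in, not the real map control's type):

        using System.Diagnostics;

        public class Position
        {
            private double _altitude;

            public double Altitude
            {
                get { return _altitude; }
                set
                {
                    if (value < 0)           // the condition you care about; adjust as needed
                    {
                        Debugger.Break();    // halts the debugger right at the offending assignment
                    }
                    _altitude = value;
                }
            }
        }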

    Read the article
