Search Results

Search found 8270 results on 331 pages for 'difference between websit'.

Page 220 of 331

  • Directory file size calculation - how to make it faster?

    - by Xinxua
    Using C#, I am finding the total size of a directory. The logic is this: get the files inside the folder, sum up their sizes, find whether there are subdirectories, and then search those recursively. I also tried another way to do this: using FSO (obj.GetFolder(path).Size). There's not much difference in time between these two approaches. Now the problem is, I have tens of thousands of files in a particular folder and it takes at least 2 minutes to find the folder size. Also, if I run the program again, it happens very quickly (5 seconds). I think Windows is caching the file sizes. Is there any way I can bring down the time taken when I run the program the first time?
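
    For reference, here is a minimal C# sketch of the recursive-sum approach described above (the type and method names are illustrative, and it does nothing about the cold-cache cost of the first run):

        using System;
        using System.IO;
        using System.Linq;

        class DirSize
        {
            // Enumerates every file under the directory and sums the sizes.
            // The first (cold-cache) run is still bound by how fast the file
            // system can serve the metadata.
            static long GetDirectorySize(string path)
            {
                var dir = new DirectoryInfo(path);
                return dir.EnumerateFiles("*", SearchOption.AllDirectories)
                          .Sum(f => f.Length);
            }

            static void Main(string[] args)
            {
                Console.WriteLine(GetDirectorySize(args[0]));
            }
        }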

    Read the article

  • Programming for a 32-bit environment vs programming for a 64-bit environment / Build configurations

    - by Russel
    I was looking at some sample code (a sample MS Visual Studio C++ project) recently with multiple build configurations (Release/Debug, Win32/x64). My question: what is the difference? I guess I understand Release/Debug (Release = finalized version of the project, Debug = version used to run in the debugger), but what needs to be considered when building different versions for the Win32 and x64 platforms? Are there any coding differences, or does this just affect how the same code is ultimately built into machine code? I know there are different library files depending on whether you're using a 32-bit or 64-bit system as well... Are all of these differences again just machine code? Would a 32-bit library file and its corresponding 64-bit library file be two files with exactly the same functions, built from the same source code originally, and differing only in their machine-code implementation? Thanks! --Russel
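
    The most visible difference to source code is the width of pointers and size-related types. The question is about C++, but a tiny C# program (used here only because the effect is easy to print) shows the same 4-byte vs 8-byte split an x86 vs x64 build would see:

        using System;

        class BitnessDemo
        {
            static void Main()
            {
                // Prints 4 in a 32-bit (Win32/x86) process and 8 in an x64 process.
                Console.WriteLine("Pointer size in bytes: " + IntPtr.Size);
                Console.WriteLine("64-bit process: " + Environment.Is64BitProcess);
            }
        }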

    Read the article

  • Can I safely store a UInt32 in an NSUInteger?

    - by mystify
    In the header, it is defined like this:

        #if __LP64__ || (TARGET_OS_EMBEDDED && !TARGET_OS_IPHONE) || TARGET_OS_WIN32 || NS_BUILD_32_LIKE_64
        typedef long NSInteger;
        typedef unsigned long NSUInteger;
        #else
        typedef int NSInteger;
        typedef unsigned int NSUInteger;
        #endif

    So does a UInt32 fit without problems into an NSUInteger (an unsigned int)? What's the difference between UInt32 and unsigned int? And I assume that an unsigned long is bigger than an unsigned int?

    Read the article

  • Should I use Enumeration or Class stereotype in UML to represent a type directory table?

    - by Ivan
    Let's take two UML class model entities: one represents an actual Order and another represents an Order Type. Any Order corresponds to one Type; a two-way-navigable many-Orders-to-one-Type relation is intended. Order Type instances are, for example, "Request availability", "Request price", "Preorder", "Buy", "Cancel", "Request support", etc. Order Types need to be addable and editable in the resulting application. Should I model Order Type as a Class or as an Enumeration? From the data perspective I can't see the difference, actually.

    Read the article

  • Python - calendar.timegm() vs. time.mktime()

    - by ibz
    I seem to have a hard time getting my head around this. What's the difference between calendar.timegm() and time.mktime()? Say I have a datetime.datetime with no tzinfo attached, shouldn't the two give the same output? Don't they both give the number of seconds between the epoch and the date passed as a parameter? And since the date passed has no tzinfo, isn't that number of seconds the same?

        >>> import calendar
        >>> import time
        >>> import datetime
        >>> d = datetime.datetime(2010, 10, 10)
        >>> calendar.timegm(d.timetuple())
        1286668800
        >>> time.mktime(d.timetuple())
        1286640000.0
        >>>

    Read the article

  • Is there a "Language-Aware" diff?

    - by JS
    (Apologies for the poor title. I'm open to suggestions for a better one. "Language-gnostic", perhaps?) Does there exist a diff utility (preferably *nix-based) that will diff files based on how a (selectable) language compiler would view the code? For example, to a Python compiler, these two paragraphs are identical:

        # The quick brown fox jumped

    vs:

        # The quick brown
        # fox jumped

    Telling most diffs (at least the ones I'm familiar with) to ignore spaces and line breaks still causes them to flag a difference due to the extra '#'. "Language-sensitivity" would sure help to cut down on the "noise". Ideally, it would work in xemacs... (<-- probably pushing my luck? :-)
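
    No claim is made here of an existing tool that does this, but as a rough illustration of the idea, a small C# sketch that normalizes '#' comment markers and whitespace before comparing two snippets, so re-wrapped comments compare as equal:

        using System;
        using System.Text.RegularExpressions;

        class CommentAwareCompare
        {
            // Strip leading '#' comment markers, then squeeze all whitespace,
            // so the two example paragraphs above normalize to the same string.
            static string Normalize(string source)
            {
                string noMarkers = Regex.Replace(source, @"(?m)^\s*#\s*", "");
                return Regex.Replace(noMarkers, @"\s+", " ").Trim();
            }

            static void Main()
            {
                string a = "# The quick brown fox jumped";
                string b = "# The quick brown\n# fox jumped";
                Console.WriteLine(Normalize(a) == Normalize(b));   // True
            }
        }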

    Read the article

  • Nokogiri Truncating XML Input

    - by bdorry
    I am having issues with a colleague's machine truncating XML while using Nokogiri to parse a Media RSS feed. The feed is a standard Media RSS feed, and the XML is not malformed. It looks like it simply stops at a certain point in the XML and closes any tags that would have been open at that point in the document. (Unfortunately I do not have the XML available to me right now, but I will update this question with the actual XML when I have it.) My confusion comes from it working fine on my machine (OS X 10.6, Nokogiri 1.4.4) while it behaves incorrectly on his machine using the same setup (although his machine is a few years older). I imagine that there is a difference somewhere, but unfortunately I don't know what to look for. Any thoughts or direction would be greatly appreciated.

    Read the article

  • How do I tell which account is trying to access an ASP.NET web service?

    - by Andrew Lewis
    I'm getting a 401 (access denied) calling a method on an internal web service. I'm calling it from an ASP.NET page on our company intranet. I've checked all the configuration, and it should be using integrated security with an account that has access to that service, but I'm trying to figure out how to confirm which account it's connecting under. Unfortunately I can't debug the code on the production network. In our dev environment everything is working fine. I know there has to be a difference in the settings, but I'm at a loss as to where to start. Any recommendations?
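
    One low-impact way to confirm the identity server-side is to log it from the page just before the call is made. A minimal sketch using standard ASP.NET/Windows APIs (the class name is illustrative):

        using System.Security.Principal;
        using System.Web;

        public static class IdentityLogger
        {
            // Reports both the thread's Windows identity (what an outbound call
            // with integrated security will authenticate as) and the identity
            // ASP.NET associated with the incoming request.
            public static string Describe(HttpContext context)
            {
                string threadAccount = WindowsIdentity.GetCurrent().Name;
                string requestUser = (context != null && context.User != null)
                    ? context.User.Identity.Name
                    : "(anonymous)";
                return "Thread identity: " + threadAccount +
                       "; Request identity: " + requestUser;
            }
        }

    Writing that string to a log (or temporarily to the page) on production should show whether the call goes out as the app-pool account, an anonymous account, or the impersonated user.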

    Read the article

  • The framerate of one movieclip slowly declines over time. What could cause this? [Flash CS3]

    - by Miles
    I'm creating a Flash rhythm game. I have a looping movieclip (at a certain frame I have a gotoAndPlay) that represents the notes that scroll by, which plays for about three minutes. As the level progresses, the movieclip's framerate begins to lag and stutter. I have no idea how this could occur. It is also worth mentioning that the notes are represented by text, if that makes any difference. As far as posting my code goes, I think it would be far too convoluted to be worth your time, and I think it could be a simple problem. I just don't understand how the framerate of this movieclip could drop independently of the rest of the game.

    Read the article

  • How can I add a field with an array value to my Perl object?

    - by superstar
    What's the difference between these two constructors in Perl?

    1)

        sub new {
            my $class = shift;
            my $self  = {};
            $self->{firstName} = undef;
            $self->{lastName}  = undef;
            $self->{PEERS}     = [];
            bless ($self, $class);
            return $self;
        }

    2)

        sub new {
            my $class = shift;
            my $self  = {
                _firstName => shift,
                _lastName  => shift,
                _ssn       => shift,
            };
            bless $self, $class;
            return $self;
        }

    I am using the second one so far, but I need to implement the PEERS array in it. How do I do that with the second constructor, and how can we use get and set methods on those array variables?

    Read the article

  • C++/CLI value class constraint won't compile. Why?

    - by Simon
    Hello, a few weeks ago a co-worker of mine spent about two hours finding out why this piece of C++/CLI code won't compile with Visual Studio 2008 (I just tested it with Visual Studio 2010... same story).

        public ref class Test
        {
            generic<class T> where T : value class
            void MyMethod(Nullable<T> nullable) { }
        };

    The compiler says:

        Error 1 error C3214: 'T' : invalid type argument for generic parameter 'T' of generic 'System::Nullable',
        does not meet constraint 'System::ValueType ^'
        C:\Users\Simon\Desktop\Projektdokumentation\GridLayoutPanel\Generics\Generics.cpp 11 1 Generics

    Adding ValueType will make the code compile:

        public ref class Test
        {
            generic<class T> where T : value class, ValueType
            void MyMethod(Nullable<T> nullable) { }
        };

    My question now is: why? What is the difference between value class and ValueType?

    Read the article

  • Why is distributed source control considered harder?

    - by Will Robertson
    It seems rather common (around here, at least) for people to recommend SVN to newcomers to source control because it's "easier" than one of the distributed options. As a very casual user of SVN before switching to Git for many of my projects, I found this to be not the case at all. It is conceptually easier to set up a DVCS repository with git init (or whichever), without the problem of having to set up an external repository as in the case of SVN. And for the base functionality, SVN, Git, Mercurial, and Bazaar all use essentially identical commands to commit, view diffs, and so on, which is all a newcomer is really going to be doing. The small difference in the way Git requires changes to be explicitly added before they're committed, as opposed to SVN's "commit everything" policy, is conceptually simple and, unless I'm mistaken, not even an issue when using Mercurial or Bazaar. So why is SVN considered easier? I would argue that this is simply not true.

    Read the article

  • Database Modelling - Conceptually different entities with near identical fields

    - by Andrew Shepherd
    Suppose you have two sets of conceptual entities: a MarketPriceDataSet, which has multiple ForwardPriceEntries, and a PoolPriceForecastDataSet, which has multiple PoolPriceForecastEntry rows. The two child objects have near-identical fields:

        ForwardPriceEntry has
            StartDate
            EndDate
            SimulationItemId
            ForwardPrice
            MarketPriceDataSetId (foreign key to parent table)

        PoolPriceForecastEntry has
            StartDate
            EndDate
            SimulationItemId
            ForecastPoolPrice
            PoolPriceForecastDataSetId (foreign key to parent table)

    If I modelled them as separate tables, the only differences would be the foreign key and the name of the price field. There has been a debate as to whether the two near-identical tables should be merged into one. Options I've thought of to model this:

    1. Just keep them as two independent, separate tables.
    2. Put both sets in one table with an additional "type" field and a parent_id acting as a foreign key to either parent table. This would sacrifice referential integrity checks.
    3. Put both sets in one table with an additional "type" field, and create a complicated sequence of joining tables to maintain referential integrity.

    What do you think I should do, and why?
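
    As an aside, a short C# sketch (illustrative names and types, not taken from the question) of how the shared shape can be factored while the two concepts stay distinct, which is roughly the object-model analogue of option 1 plus a common definition:

        using System;

        // Common columns live in a base type; each entry keeps its own
        // identity and its own foreign key, mirroring two separate tables
        // that happen to share most fields.
        public abstract class PriceEntryBase
        {
            public DateTime StartDate { get; set; }
            public DateTime EndDate { get; set; }
            public int SimulationItemId { get; set; }
        }

        public class ForwardPriceEntry : PriceEntryBase
        {
            public decimal ForwardPrice { get; set; }
            public int MarketPriceDataSetId { get; set; }        // FK to MarketPriceDataSet
        }

        public class PoolPriceForecastEntry : PriceEntryBase
        {
            public decimal ForecastPoolPrice { get; set; }
            public int PoolPriceForecastDataSetId { get; set; }  // FK to PoolPriceForecastDataSet
        }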

    Read the article

  • Rubygame on OS X shebang problem

    - by Mk12
    I'm playing around with Rubygame. I installed it with the Mac Pack, and now I have the rsdl executable. rsdl game.rb works fine, but when I chmod +x the rb file, add the shebang to rsdl (tried direct path and /usr/bin/env rsdl) and try to execute it (./game.rb), it starts to flicker between the Terminal and rsdl which is trying to open, and eventually gives up and gives a bus error. Anyone know what's causing that? I'm on Snow Leopard (10.6.2) if it makes a difference. Thanks.

    Read the article

  • How to deal with Rounding-off TimeSpan?

    - by infant programmer
    I take the difference between two DateTime fields and store it in a TimeSpan variable. Now I have to round off the TimeSpan by the following rules: if the minutes in the TimeSpan are less than 30, then the minutes and seconds must be set to zero; if the minutes are equal to or greater than 30, then the hours must be incremented by 1 and the minutes and seconds set to zero. The TimeSpan can also be a negative value, in which case I need to preserve the sign. I was able to achieve the requirement when the TimeSpan wasn't negative, and though I have written code, I am not happy with it as it is bulky and inefficient. Please suggest a simpler and more efficient method. Thanks and regards.
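
    A minimal C# sketch of one way to meet those rules, rounding the absolute value and then restoring the sign (the method name is illustrative):

        using System;

        static class TimeSpanRounding
        {
            // Rounds to the nearest whole hour: < 30 minutes truncates,
            // >= 30 minutes rounds away from zero, and a negative span
            // keeps its sign.
            public static TimeSpan RoundToHour(TimeSpan span)
            {
                int sign = span < TimeSpan.Zero ? -1 : 1;
                TimeSpan abs = sign < 0 ? span.Negate() : span;

                double wholeHours = Math.Floor(abs.TotalHours);
                if (abs.Minutes >= 30)
                {
                    wholeHours += 1;
                }
                return TimeSpan.FromHours(sign * wholeHours);
            }
        }

    For example, a span of 2:45:10 becomes 3:00:00 and -2:45:10 becomes -3:00:00, while 2:15:00 becomes 2:00:00.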

    Read the article

  • Unsupported major.minor version 51.0

    - by ERJAN
    I am trying to use Notepad++ as my all-in-one tool to edit, run, compile, etc. I have the JRE installed, and I have set up my PATH variable to point to the .../bin directory. When I run my "Hello World" from Notepad++, I get this message:

        java.lang.UnsupportedClassVersionError: test_hello_world : Unsupported major.minor version 51.0
            at java.lang.ClassLoader.defineClass1(Native Method)
            at java.lang.ClassLoader.defineClassCond(Unknown Source)
            .........................................

    I think the problem here is about versions; some version of Java may be too old or too new. How do I fix it? Should I install the JDK and set up my PATH variable to point to the JDK instead of the JRE? What is the difference between pointing the PATH variable at the JRE versus the JDK?

    Read the article

  • xgettext vs gettext

    - by Kentor
    I have a few questions. I know what gettext is; I've read a few posts that mentioned xgettext and was curious what the difference is between the two. Also, how can I install xgettext on Windows? And finally, does anybody have a tutorial on how to install the php-gettext library from http://savannah.nongnu.org/projects/php-gettext/ (this one usually doesn't come with PHP)? I've read about it in an article, but I'm not sure how to get it working on Windows. The thing is, sometimes when you make changes you need to restart Apache to see the new data with the gettext that comes with PHP (but with the library you don't need to restart it), so I wanted to use the library for development. Thanks!

    Read the article

  • Odd optimization problem under MSVC

    - by Goz
    I've seen this blog: http://igoro.com/archive/gallery-of-processor-cache-effects/. The "weirdness" in part 7 is what caught my interest. My first thought was "that's just C# being weird". It's not. I wrote the following C++ code:

        volatile int* p = (volatile int*)_aligned_malloc( sizeof( int ) * 8, 64 );
        memset( (void*)p, 0, sizeof( int ) * 8 );

        double dStart = t.GetTime();

        for (int i = 0; i < 200000000; i++)
        {
            //p[0]++;p[1]++;p[2]++;p[3]++;   // Option 1
            //p[0]++;p[2]++;p[4]++;p[6]++;   // Option 2
            p[0]++;p[2]++;                   // Option 3
        }

        double dTime = t.GetTime() - dStart;

    The timings I get on my 2.4 GHz Core 2 Quad are as follows:

        Option 1 = ~8 cycles per loop.
        Option 2 = ~4 cycles per loop.
        Option 3 = ~6 cycles per loop.

    Now this is confusing. My reasoning behind the difference comes down to the cache write latency (3 cycles) on my chip and an assumption that the cache has a 128-bit write port (this is pure guesswork on my part). On that basis, in Option 1 it will increment p[0] (1 cycle), then increment p[2] (1 cycle), then it has to wait 1 cycle (for cache), then p[1] (1 cycle), then wait 1 cycle (for cache), then p[3] (1 cycle). Finally, 2 cycles for increment and jump (though it's usually implemented as decrement and jump). This gives a total of 8 cycles. In Option 2 it can increment p[0] and p[4] in one cycle, then increment p[2] and p[6] in another cycle. Then 2 cycles for subtract and jump. No waits needed on cache. Total: 4 cycles. In Option 3 it can increment p[0], then has to wait 2 cycles, then increment p[2], then subtract and jump. The problem is, if you set case 3 to increment p[0] and p[4] it STILL takes 6 cycles (which kinda blows my 128-bit read/write port out of the water). So... can anyone tell me what the hell is going on here? Why DOES case 3 take longer? Also, I'd love to know what I've got wrong in my thinking above, as I obviously have something wrong! Any ideas would be much appreciated! :) It'd also be interesting to see how GCC or any other compiler copes with it as well!

    Edit: Jerry Coffin's idea gave me some thoughts. I've done some more tests (on a different machine, so forgive the change in timings) with and without nops and with different counts of nops:

        case 2 - 0.46   00401ABD jne (401AB0h)

        0 nops - 0.68   00401AB7 jne (401AB0h)
        1 nop  - 0.61   00401AB8 jne (401AB0h)
        2 nops - 0.636  00401AB9 jne (401AB0h)
        3 nops - 0.632  00401ABA jne (401AB0h)
        4 nops - 0.66   00401ABB jne (401AB0h)
        5 nops - 0.52   00401ABC jne (401AB0h)
        6 nops - 0.46   00401ABD jne (401AB0h)
        7 nops - 0.46   00401ABE jne (401AB0h)
        8 nops - 0.46   00401ABF jne (401AB0h)
        9 nops - 0.55   00401AC0 jne (401AB0h)

    I've included the jump statements so you can see that the source and destination are in one cache line. You can also see that we start to get a difference when we are 13 bytes or more apart. Until we hit 16... then it all goes wrong. So Jerry isn't right (though his suggestion DOES help a bit); however, something IS going on. I'm more and more intrigued to try and figure out what it is now. It does appear to be more some sort of memory-alignment oddity than some sort of instruction-throughput oddity. Anyone want to explain this for an inquisitive mind? :D

    Edit 3: Interjay has a point on the unrolling that blows the previous edit out of the water. With an unrolled loop the performance does not improve. You need to add a nop in to make the gap between jump source and destination the same as for my good nop count above. Performance still sucks. It's interesting that I need 6 nops to improve performance, though. I wonder how many nops the processor can issue per cycle? If it's 3, then that accounts for the cache write latency... But if that's it, why is the latency occurring? Curiouser and curiouser...

    Read the article

  • Should I stick only to AWS RDS Automated Backup or DB Snapshots?

    - by James Wise
    I am using AWS RDS for MySQL. When it comes to backups, I understand that Amazon provides two types: automated backups and database (DB) snapshots. The difference is explained here: http://aws.amazon.com/rds/faqs/#23. However, I am still confused about whether I should stick to automated backups only, or use both automated and manual (DB snapshot) backups. What do you think, guys? What's your own setup? I've heard from others that automated backup is not reliable, due to cases of an unrecoverable database after the DB instance crashed, so DB snapshots are the way to rescue you. If I do daily DB snapshots with settings similar to the automated backup, I'm going to pay quite a bunch of bucks. I hope someone can enlighten me or advise me on the right setup. Thanks. James

    Read the article

  • CSS3 Continuous Rotate Animation (Just like a loading sundial)

    - by Gcoop
    Hi, I am trying to replicate an Apple-style activity indicator (sundial loading icon) by using a PNG and a CSS3 animation. I have the image rotating and doing it continuously, but there seems to be a delay after the animation has finished before it does the next rotation.

        @-webkit-keyframes rotate {
            from { -webkit-transform: rotate(0deg); }
            to   { -webkit-transform: rotate(360deg); }
        }

        #loading img {
            -webkit-animation-name: rotate;
            -webkit-animation-duration: 0.5s;
            -webkit-animation-iteration-count: infinite;
            -webkit-transition-timing-function: linear;
        }

    I have tried changing the animation duration but it makes no difference; if you slow it right down, say to 5s, it's just more apparent that after the first rotation there is a pause before it rotates again. It's this pause I want to get rid of. Any help is much appreciated, thanks.

    Read the article

  • How can I convert a file full of unix time strings to human readable dates?

    - by skymook
    I am processing a file full of Unix time strings. I want to convert them all to human-readable dates. The file looks like this:

        1153335401
        1153448586
        1153476729
        1153494310
        1153603662
        1153640211

    Here is the script:

        #! /bin/bash
        FILE="test.txt"
        cat $FILE | while read line; do
            perl -e 'print scalar(gmtime($line)), "\n"'
        done

    This is not working. The output I get is Thu Jan 1 00:00:00 1970 for every line. I think the line breaks are being picked up and that is why it is not working. Any ideas? I'm using Mac OS X, if that makes any difference.

    Read the article

  • MySQL encoding problem

    - by heffaklump
    I use Java and JDBC to save Japanese characters, and it works perfectly on my local MySQL. But when I tried doing the same thing on my web host's MySQL, I get ????? instead of Japanese characters. I have created exactly the same tables and use exactly the same code. The only difference I have found is in SHOW VARIABLES LIKE 'CHAR%':

        character_set_client      utf8
        character_set_connection  utf8
        character_set_database    latin1
        character_set_filesystem  binary
        character_set_results     utf8
        character_set_server      latin1
        character_set_system      utf8
        character_sets_dir        /s/usr-local/share/mysql/charsets/

    character_set_database is set to latin1, but I can't change it! Any tips?

    Read the article

  • "Load report fail" in Crystal Reports with no solution found

    - by mr-developer
    Hi all, I'm using C# and Crystal Reports in Visual Studio 2008 in my application. The problem is that I have a form with a CrystalReportViewer on it. When I call the form from form1 in a button click event, the form and the report load perfectly; but when I call the form from another form that I've called from form1, the exception "load report fail" appears. How can I solve this problem? I've googled and haven't found any solution. I think there is a difference between form1 and the other form from which I call the report form. This is the code of the report form's load handler:

        try
        {
            cryRpt.Load("..\\..\\CrystalReport1.rpt");
            crystalReportViewer1.ReportSource = cryRpt;
            crystalReportViewer1.Refresh();
        }
        catch (Exception ex)
        {
            MessageBox.Show(ex.Message);
        }

    Thanks in advance for any replies.
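
    One plausible cause (an assumption, since the question doesn't show the working directory) is that the relative "..\\.." path depends on the current directory at the time of the call. A small C# sketch that resolves the report against the executable's folder instead; the helper name is illustrative and it assumes the .rpt file is copied to the output directory:

        using System;
        using System.IO;

        static class ReportPaths
        {
            // Builds an absolute path from the running executable's folder,
            // so the result does not depend on which form opened the viewer
            // or on the process's current working directory.
            public static string Resolve(string fileName)
            {
                return Path.Combine(AppDomain.CurrentDomain.BaseDirectory, fileName);
            }
        }

        // Usage with the snippet above:
        //   cryRpt.Load(ReportPaths.Resolve("CrystalReport1.rpt"));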

    Read the article

  • Multiple webservice calls

    - by Mujtaba Hassan
    I have a web service (ASP.NET) deployed on a web farm. A client application consumes it on a daily basis. The problem is that some of its calls are duplicated (with a difference of milliseconds). For example, I have a function Foo(string a, string b). The client app calls this web method as Foo('test1','test2') once, but my log shows that it is being called twice, or sometimes 3 or 4 times, at random. Is there anything wrong with the web farm or the code? Note that the web method has simple, straightforward insert and update statements.
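
    To narrow down whether the duplicates come from client retries or from the farm itself, one option is to log enough request detail per call to compare the duplicate entries. A sketch, assuming a classic ASMX [WebMethod]; the class name and the trace target are illustrative:

        using System;
        using System.Web;
        using System.Web.Services;

        public class OrderService : WebService
        {
            [WebMethod]
            public void Foo(string a, string b)
            {
                // Record timestamp, server node, caller address and agent so
                // duplicate log lines can be compared after the fact.
                HttpRequest req = Context.Request;
                System.Diagnostics.Trace.WriteLine(string.Format(
                    "{0:o} node={1} ip={2} agent={3} a={4} b={5}",
                    DateTime.UtcNow, Environment.MachineName,
                    req.UserHostAddress, req.UserAgent, a, b));

                // ... existing insert/update statements would go here ...
            }
        }

    If the duplicate entries show the same client address and agent a few milliseconds apart but land on different nodes, that points toward client-side retries or a proxy rather than the web method's own code.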

    Read the article

  • Python json memory bloat

    - by Anoop
        import json
        import time
        from itertools import count

        def keygen(size):
            for i in count(1):
                s = str(i)
                yield '0' * (size - len(s)) + str(s)

        def jsontest(num):
            keys = keygen(20)
            kvjson = json.dumps(dict((keys.next(), '0' * 200) for i in range(num)))
            kvpairs = json.loads(kvjson)
            del kvpairs  # Not required. Just to check if it makes any difference
            print 'load completed'

        jsontest(500000)

        while 1:
            time.sleep(1)

    Linux top indicates that the Python process holds ~450 MB of RAM after completion of the jsontest function. If the call to json.loads is omitted, this issue is not observed. A gc.collect after this function's execution does release the memory. It looks like the memory is not held in any caches or in Python's internal memory allocator, as an explicit call to gc.collect releases it. Is this happening because the threshold for garbage collection (700, 10, 10) was never reached? I did put some code after jsontest to simulate the threshold, but it didn't help.

    Read the article
