Search Results

Search found 61615 results on 2465 pages for 'execution time'.


  • Can MYSQL filter by date if date is stored as text? ex "02/10/1984"

    - by Roeland
    Hello! I am trying to modify an app for a client which already has a database of over 1000 items. The dates are stored as text in the database in the format "02/10/1984". The system allows you to add and remove catalog fields dynamically, and it also lets the advanced search expose specific fields. The problem is that it wasn't designed with dates in mind, so when I set a field as a date and try to search by a range, the query ends up doing an AND (cfv0.value = 01/02/2004 AND cfv0.value <= 05/03/2008). I can make it so the date range passed in is a numeric time value. Is there a way that, when sending the query, the text fields (holding the dates) can be converted to numeric time values, so that at that point I am just comparing numbers, which would work fine? I do not have the option to convert all the existing dates to numeric values because of the way the dynamic fields are set up. Thanks guys!
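
    A minimal sketch of one possible approach, done entirely in the query with MySQL's STR_TO_DATE(). It assumes the stored format is day/month/year (swap the format string if it is month/day/year); the table name and the "cfv0" alias just follow the question and are not real schema:

        SELECT ...
          FROM custom_field_values cfv0
         WHERE STR_TO_DATE(cfv0.value, '%d/%m/%Y')
               BETWEEN STR_TO_DATE('01/02/2004', '%d/%m/%Y')
                   AND STR_TO_DATE('05/03/2008', '%d/%m/%Y');

        -- or, if the search code passes numeric time values:
        -- WHERE UNIX_TIMESTAMP(STR_TO_DATE(cfv0.value, '%d/%m/%Y')) BETWEEN @from_ts AND @to_ts

    Wrapping the column in a function means no index can be used on it, which is usually acceptable for a catalog of ~1000 items.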

    Read the article

  • C# Proxy using Sockets, how should I do this?

    - by Kin
    I'm writing a proxy using .NET and C#. I haven't done much socket programming, and I am not sure of the best way to go about it. What would be the best way to implement this? Should I use synchronous sockets or asynchronous sockets? Please help! It must:
    - Accept connections from the client on two different ports, and be able to receive data on both ports at the same time.
    - Connect to the server on two different ports, and be able to send data on both ports at the same time.
    - Immediately connect to the server and start forwarding packets as soon as a client connection is made.
    - Forward packets in the same order they were received.
    - Be as low latency as possible.
    I don't need the ability for multiple clients to connect to the proxy, but it would be a nice feature if it's easy to implement.

        Client --------- Proxy ------- Server
        ---|-----------------|----------------|
        Port <-------- Port <------- Port
        Port <-------- Port <------- Port
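
    A minimal sketch of the async approach for one port pair (repeat the same pattern for the second pair). The host name and port numbers are placeholders, and it assumes a recent C# compiler (async Main, using declarations); it is not a full proxy, just the forwarding core:

        using System.Net;
        using System.Net.Sockets;
        using System.Threading.Tasks;

        class ProxySketch
        {
            static async Task Main()
            {
                var listener = new TcpListener(IPAddress.Any, 5000);
                listener.Start();

                using var client = await listener.AcceptTcpClientAsync();  // wait for the client
                using var server = new TcpClient();
                await server.ConnectAsync("server.example.com", 6000);     // connect upstream right away

                NetworkStream c = client.GetStream(), s = server.GetStream();

                // Pump both directions concurrently; CopyToAsync preserves byte order per direction.
                await Task.WhenAll(c.CopyToAsync(s), s.CopyToAsync(c));
            }
        }

    Each direction is a single sequential copy, so ordering is preserved per port, and the async copies keep latency low without hand-rolled Begin/End socket calls.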

    Read the article

  • Synchronizing reading and writing with synchronous NamedPipes

    - by Mike Trader
    A named pipe server is created with:

        hPipe = CreateNamedPipe( zPipePath, PIPE_ACCESS_DUPLEX,
                                 PIPE_TYPE_BYTE | PIPE_WAIT | PIPE_READMODE_BYTE,
                                 PIPE_UNLIMITED_INSTANCES, 8192, 8192,
                                 NMPWAIT_USE_DEFAULT_WAIT, NULL )

    Then we immediately call:

        ConnectNamedPipe( hPipe, BYVAL %NULL )

    which blocks until the client connects. Then we proceed directly to ReadFile( hPipe, ... The problem is that it takes the client a finite amount of time to prepare and write all the FCGI request parameters, and this has usually not completed before the pipe server performs its ReadFile(). The ReadFile() operation thus finds no data in the pipe and the process fails. Is there a mechanism to tell when a Write() has occurred/finished after a client has connected to a named pipe? If I had control of the client process, I could use a common mutex, but I don't, and I really do not want to get into I/O completion ports just to solve this problem! I can of course use a simple timer to wait 60 ms or so, which is usually plenty of time for the write to complete, but that is a horrible hack.
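
    One hedged observation, sketched in C below: on a blocking (PIPE_WAIT) byte-mode pipe, ReadFile() itself blocks until the client has written at least one byte, so an "empty pipe" failure usually points at reading once and treating a short read as the whole request. Looping until the request is complete (or the client closes its end) avoids guessing with timers. This is a fragment that reuses hPipe from the code above; the buffer handling is illustrative only:

        char  buf[8192];
        DWORD got;

        for (;;) {
            /* Blocks until at least one byte is available or the client disconnects. */
            if (!ReadFile(hPipe, buf, sizeof(buf), &got, NULL)) {
                if (GetLastError() == ERROR_BROKEN_PIPE)
                    break;              /* client closed its end: request is complete */
                break;                  /* a real error: log GetLastError() */
            }
            /* append buf[0..got) to the request buffer; stop once the FastCGI
               records indicate the request has been fully received */
        }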

    Read the article

  • Object reference error even when object is not null

    - by Shrewd Demon
    Hi, I have an application wherein I have incorporated a "Remember Me" feature for the login screen. I do this by creating a cookie when the user logs in for the first time, so the next time the user visits the site I get the cookie and load the user information. I have written the code for loading user information in a common class in the App_Code folder, and all my pages inherit from this class. The code for loading the user info is as follows:

        public static void LoadUserDetails(string emailId)
        {
            UsersEnt currentUser = UsersBL.LoadUserInfo(emailId);
            if (currentUser != null)
                HttpContext.Current.Session["CurrentUser"] = currentUser;
        }

    Now the problem is I get an "Object reference" error when I try to store the currentUser object in the session variable (even though the currentUser object is not null). However, the password property in the currentUser object is null. Am I getting the error because of this, or is there some other reason? Thank you.
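
    A hedged guess rather than a confirmed diagnosis: a null property on currentUser would not cause this, but HttpContext.Current or its Session can themselves be null at the point this static method runs (for example very early in the pipeline, or in a request without session state), and indexing into a null Session throws exactly this error. A small sketch of the same method with the checks made explicit:

        public static void LoadUserDetails(string emailId)
        {
            UsersEnt currentUser = UsersBL.LoadUserInfo(emailId);
            if (currentUser == null)
                return;

            // If no session exists yet, Session is null and Session[...] throws.
            HttpContext ctx = HttpContext.Current;
            if (ctx == null || ctx.Session == null)
                return;   // or defer the load until the session is available

            ctx.Session["CurrentUser"] = currentUser;
        }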

    Read the article

  • Accelerometer Values from Android/iPhone device

    - by mrlinx
    I'm trying to map my movements with an Android device into an OpenGL scene. I've recorded accelerometer values for a simple movement: moving the phone (lying flat on a table) 10 cm forward (+x), and then 10 cm backward (-x). The problem is that these values, when used to calculate velocity and position, only make the OpenGL cube go forward. It seems the negative acceleration recorded was not enough to reduce the speed and invert the movement. What can the problem be? This is my function that updates the velocity and position every time new data comes in:

        void updatePosition(double T2) {
            double T = 0.005;
            Vec3 old_pos = position.clone(), old_vel = velocity.clone();
            velocity = old_vel.plus(acceleration.times(T));
            position = old_pos.plus(old_vel.times(T).plus(acceleration.times(0.5 * Math.pow(T, 2))));
        }

    These are the X, Y, Z accelerometer values over the entire captured time:
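
    One hedged observation: the code above integrates with a hard-coded T = 0.005 s and ignores the T2 that is passed in, so if the real sample interval differs, the velocity never integrates back to zero. A sketch using the actual interval (it reuses the question's Vec3 helpers, which are assumed to exist); even then, sensor bias and noise will cause drift, so real solutions filter the signal or fuse other sensors:

        void updatePosition(double dtSeconds) {
            // dtSeconds should be the real interval between samples, not a constant.
            Vec3 oldVel = velocity.clone();
            velocity = oldVel.plus(acceleration.times(dtSeconds));
            position = position.plus(oldVel.times(dtSeconds)
                               .plus(acceleration.times(0.5 * dtSeconds * dtSeconds)));
        }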

    Read the article

  • Cobol: science and fiction

    - by user847
    There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a study by Gartner from 1997: that there were around 200 billion lines of code in active use at that time! I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand if this statement has any truth to it or if it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made.

    Sometimes, the "200 billion lines" number is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times, the 80% merely refers to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems or anything else where Cobol is practically non-existent). In the following I assume that the code does not include double-counting of multiple installations of the same software (since that is cheating!). In particular in the time prior to the y2k problem, it was noted that a lot of Cobol code was already 20 to 30 years old. That would mean it was written in the late 1960s and 1970s. At that time, the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on its website quoting prices and availability. According to the sheet, prices were about one million dollars for machines with up to half a megabyte of memory.

    Question 1: How many mainframes were actually sold? I have not found any numbers for those times; the latest numbers are for the year 2000, again by Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billion back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand gives a couple of tens of thousands of machines (and that is pretty optimistic)!

    Question 2: How large were the programs in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture. But since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus the instruction (2, maybe 3 bytes for that?). If you count in the operating system and data for the program, how many lines of code would have fit into the main memory of half a megabyte?

    Question 3: Was there no standard software? Did every single machine sold run a unique hand-coded system without any standard software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... didn't that violate one of the claims we started from to begin with???) we might have O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far, far away from 200 billion! Am I missing something obvious here?

    Question 4: How many programmers did we need to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need 55 million man-years to achieve this! In the time-frame of 20 to 30 years this would mean that there must have been two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as there are in China today, wouldn't it?

    Question 5: What about the competition? So far, I have come up with two things here: 1) IBM had their own programming language, PL/I. Above I have assumed that the majority of code was written exclusively in Cobol. However, all other things being equal, I wonder whether IBM marketing would really have pushed their own development off the market in favor of Cobol on their machines. Was there really no relevant code base of PL/I? 2) Sometimes (also on this board in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD had funded their own language in order to increase cost effectiveness and reduce the proliferation of programming languages. This led to their use of Ada. Would they really worry about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada?

    I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it or not.

    Read the article

  • Second call to AVAudioPlayer -> EXC_BAD_ACCESS (code posted, what did I miss?)

    - by Jordan
    I'm using this code to play a different mp3 file with every call. The first time through works great. The second time it crashes, as indicated below.

    .h:

        AVAudioPlayer *player;
        @property (nonatomic, retain) AVAudioPlayer *player;

    .m:

        -(void)load:(NSURL *)aFileURL {
            if (aFileURL) {
                AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL: aFileURL error: nil];
                [aFileURL release];
                self.player = newPlayer; // CRASHES HERE EXC_BAD_ACCESS with second MP3
                [newPlayer release];
                [self.player prepareToPlay];
                [self.player setDelegate:self];
            }
        }

    I know I must have missed something, any ideas?
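
    A hedged sketch of the likely fix: [aFileURL release] releases an object this method does not own (nothing here was alloc'd, copied or retained for it), so the caller's URL gets over-released and the crash shows up on the next use. The same method without that release, otherwise unchanged:

        - (void)load:(NSURL *)aFileURL
        {
            if (!aFileURL) return;

            AVAudioPlayer *newPlayer =
                [[AVAudioPlayer alloc] initWithContentsOfURL:aFileURL error:nil];
            self.player = newPlayer;   // retained by the (retain) property
            [newPlayer release];       // balance the alloc, as before

            [self.player setDelegate:self];
            [self.player prepareToPlay];
        }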

    Read the article

  • How to store or share live data between PHP Requests?

    - by Devyn
    Hi, I want to start a project for Facebook, and the application will be a real-time multiplayer chess game. The problem I'm having is that I have no idea how to store the data when a player moves one piece, and then update the new position in player 2's browser. I'm going to use PHP and MySQL on the server side and jQuery for client rendering. The simplest idea is to store the data in XML or MySQL and re-generate the result in player 2's browser, but I know that when thousands of players are playing, it will not be an efficient way. Since I don't have time to study a new language for this project, I'm going to have to stick with PHP. I'm not going to use Flash either, because I want my client side lightweight and Flash-free. So is there any way that will solve my problems?
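
    A minimal long-polling sketch of the PHP/MySQL side, to show the shape of one workable approach; the database credentials, table and column names are invented. The jQuery client would call this endpoint in a loop, sending the id of the last move it has seen:

        <?php
        // Long-poll: wait up to ~10 s for a move newer than last_move_id, then answer.
        $pdo        = new PDO('mysql:host=localhost;dbname=chess', 'user', 'pass');
        $gameId     = (int) $_GET['game_id'];
        $lastMoveId = (int) $_GET['last_move_id'];

        for ($i = 0; $i < 20; $i++) {
            $stmt = $pdo->prepare(
                'SELECT id, piece, from_sq, to_sq FROM moves
                  WHERE game_id = ? AND id > ? ORDER BY id LIMIT 1');
            $stmt->execute(array($gameId, $lastMoveId));
            if ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
                echo json_encode($row);   // player 2 renders this move
                exit;
            }
            usleep(500000);               // nothing yet; check again in 0.5 s
        }
        echo json_encode(null);           // no new move during this poll

    On typical shared hosting, plain short polling (the client simply re-requesting every second or two) is often the more pragmatic variant of the same idea, since long-held PHP requests tie up workers.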

    Read the article

  • How to mock test a web service in PHPUnit across multiple tests?

    - by scraton
    I am attempting to test a web service interface class using PHPUnit. Basically, this class makes calls to a SoapClient object. I am attempting to test this class in PHPUnit using the "getMockFromWsdl" method described here: http://www.phpunit.de/manual/current/en/test-doubles.html#test-doubles.stubbing-and-mocking-web-services However, since I want to test multiple methods from this same class, every time I set up the object, I also have to set up the mock WSDL SoapClient object. This is causing a fatal error to be thrown:

        Fatal error: Cannot redeclare class xxxx in C:\web\php5\PEAR\PHPUnit\Framework\TestCase.php(1227) : eval()'d code on line 15

    How can I use the same mock object across multiple tests without having to regenerate it off the WSDL each time? That seems to be the problem.
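
    One possible workaround, sketched below under assumptions: the redeclaration happens because each setUp() asks PHPUnit to eval() the generated mock class again, so generating it only once per process and reusing it sidesteps the error. The WSDL path and the ServiceClient class are placeholders, and note that a shared mock means expectations set in one test can leak into the next, so reset or re-stub it per test:

        <?php
        class ServiceClientTest extends PHPUnit_Framework_TestCase
        {
            protected static $soapMock;
            protected $service;

            protected function setUp()
            {
                // Build the WSDL-based mock class only once; later tests reuse it.
                if (self::$soapMock === null) {
                    self::$soapMock = $this->getMockFromWsdl('service.wsdl');
                }
                $this->service = new ServiceClient(self::$soapMock);
            }
        }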

    Read the article

  • Zero division does not throw exception in nunit

    - by Boris
    Running the following C# code through NUnit yields:

        Test.ControllerTest.TestSanity:
        Expected: <System.DivideByZeroException>
        But was: null

    So either no DivideByZeroException is thrown, or NUnit does not catch it. Similar to this question, but the answers he got do not seem to work for me. This is using NUnit 2.5.5.10112 and .NET 4.0.30319.

        [Test]
        public void TestSanity()
        {
            Assert.Throws<DivideByZeroException>(new TestDelegate(() => DivideByZero()));
        }

        private void DivideByZero()
        {
            // Parse "0" to make sure to get an error at run time, not compile time.
            var a = (1 / Double.Parse("0"));
        }

    Any ideas?
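
    A hedged note on why the delegate really does return without throwing: floating-point division by zero follows IEEE 754 and produces Infinity (or NaN) rather than an exception, so DivideByZeroException is only raised for integer division. A small sketch showing both cases:

        [Test]
        public void IntegerDivisionByZeroThrows()
        {
            Assert.Throws<DivideByZeroException>(() =>
            {
                int zero = int.Parse("0");   // parsed so the compiler can't reject 1 / 0
                int unused = 1 / zero;       // integer division: throws at run time
            });
        }

        [Test]
        public void FloatingPointDivisionByZeroDoesNotThrow()
        {
            Assert.IsTrue(double.IsInfinity(1 / double.Parse("0")));
        }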

    Read the article

  • Finding changes in MongoDB database

    - by Jonathan Knight
    I'm designing a MongoDB database that works with a script that periodically polls a resource and gets back a response which is stored in the database. Right now my database has one collection with four fields: id, name, timestamp and data. I need to be able to find out which names had changes in the data field between script runs, and which did not. In pseudocode:

        if(data[name][timestamp] == data[name][timestamp+1])
            // data has not changed
            store data in collection 1
        else
            // data has changed between script runs for this name
            store data in collection 2

    Is there a query that can do this without iterating and running JavaScript over each item in the collection? There are millions of documents, so this would be pretty slow. Should I create a new collection named after the timestamp every time the script runs? Would that make it faster/more organized? Is there a better schema that could be used? The script runs once a day, so I won't run into a namespace limitation any time soon.
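
    A small mongo-shell sketch of one way to compare the two latest samples for a single name (collection and field names are assumed from the question; the comparison presumes data is a scalar such as a string):

        // index on { name: 1, timestamp: -1 } makes this cheap per name
        var latest = db.samples.find({ name: "sensor-a" })
                               .sort({ timestamp: -1 })
                               .limit(2)
                               .toArray();
        var changed = latest.length == 2 && latest[0].data != latest[1].data;

    For "which of millions of names changed today?" it is usually cheaper to compare at write time: have the polling script read the previous document for the name (or a stored hash of its data) and set a changed flag or write into the second collection right then, so answering the question later is a simple indexed query instead of a scan.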

    Read the article

  • How can I break from a method prematurely that's being called by NSTimer

    - by jammur
    Basically I'm writing a metronome app, but I'm using a sound file that, depending on the BPM, might not be finished playing when the "play" method is called again. For example, if the sound file is 0.5 seconds long but the BPM is 200, the "play" method needs to be called every 0.3 seconds. I'm not overly familiar with NSTimer, but it appears that if it is supposed to fire before the previous invocation has completed, it doesn't, and just waits for the next time around. I could be completely wrong about that though. What I need to do is have the previous invocation end prematurely, and have the "play" method called again when the timer is supposed to fire. Any help would be appreciated!
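
    A hedged sketch, assuming the click is played with AVAudioPlayer (the question doesn't say which API is in use, and 'player' / 'tick:' are placeholder names): rather than waiting for the previous click to finish, cut it off and restart it at the top of each timer fire.

        - (void)tick:(NSTimer *)timer
        {
            [player stop];            // end the previous click early
            player.currentTime = 0;   // rewind to the start of the sample
            [player play];            // start this beat's click
        }

    The timer itself keeps firing on schedule; only the playback is restarted, so a long sample never delays the next beat.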

    Read the article

  • Refactor/rewrite code or continue?

    - by Dan
    I just completed a complex piece of code. It works to spec, it meets performance requirements, etc., but I feel a bit anxious about it and am considering rewriting and/or refactoring it. Should I do this (spending time that could otherwise be spent on features that users will actually notice)? The reasons I feel anxious about the code are:
    - The class hierarchy is complex and not obvious.
    - Some classes don't have a well-defined purpose (they do a number of unrelated things).
    - Some classes use others' internals (they're declared as friend classes) to bypass the layers of abstraction for performance, but I feel they break encapsulation by doing this.
    - Some classes leak implementation details (e.g., I changed a map to a hash map earlier and found myself having to modify code in other source files to make the change work).
    - My memory management/pooling system is kinda clunky and less than transparent.
    These look like excellent reasons to refactor and clean the code, aiding future maintenance and extension, but it could be quite time consuming. Also, I'll never be perfectly happy with any code I write anyway... So, what does Stack Overflow think? Clean code or work on features?

    Read the article

  • Mercurial between server and local?

    - by artmania
    I have a portal development project in progress... I've had some troubles from time to time, like losing or overwriting the wrong files, etc., so I decided to go with Mercurial for this development. It's my first experience with source control. I work on the server [Bluehost] for this project; is there any way to keep updated backups locally? Do I have to set up Mercurial on Bluehost? Any way to sync changes on the server to my local Mac?
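
    A short sketch of one common workflow, assuming Mercurial is (or can be) installed on the Bluehost account so it can be reached over SSH; all paths and the account name are placeholders:

        # on the server: turn the project directory into a repository
        cd ~/public_html/portal
        hg init
        hg add
        hg commit -m "initial import"

        # on the local Mac: clone over SSH, then pull whenever you want a fresh backup
        hg clone ssh://user@yourdomain.com/public_html/portal ~/work/portal
        cd ~/work/portal
        hg pull -u

    If Mercurial can't run on the host, the usual fallback is the reverse direction: keep the repository locally, edit and commit there, and push/upload the working copy to the server.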

    Read the article

  • Performance monitor shows 4294967293 sessions active

    - by TGnat
    I have an ASP.NET 3.5 website running in IIS 6 on Windows Server 2003 R2. It is a relatively small internal application that probably serves fewer than ten users at any given time. The server has 4 GB of memory and shows that 3+ GB is available while the site is active. Just minutes after restarting the web application, Performance Monitor shows that there are a whopping 4,294,967,293 sessions active! I am fairly certain that this number is incorrect; at the time of this reading there had been only 100 requests to the website. Has anyone else experienced this kind of odd behavior from perf mon? Any ideas on how to get an accurate reading? UPDATE: After running for about an hour, the number of active sessions has dropped by 4. So it does seem to be responding to sessions timing out.

    Read the article

  • Implementing Role based Helpers

    - by Cynics
    So my question is how you would implement your hand-written helpers based on the role of the current user. Would it be efficient to change the behaviour at request time? E.g. the helper somehow figures out the role of the user and includes the proper sub-module?

        module ApplicationHelper
          module LoggedInHelper
            # Some functions
          end

          module GuestHelper
            # The same functions
          end

          # If User is Guest then include GuestHelper
          # If User is LoggedIn then include LoggedInHelper
        end

    Is it efficient this way? Is it the Rails way? I've got a whole bunch of functions that act like this, and I don't want to wrap every single one of them in an if statement:

        def menu_actions
          if current_user.nil? # User is guest
            { "Log in" => link_to "Login", "/login" }
          else # User is Logged In
            { "Log out" => link_to "Logout", "/logout" }
          end
        end

    Thank you for your time and thoughts.
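
    A small sketch of one alternative to per-method ifs and to module juggling at request time: pick a role-specific helper object once per request and delegate to it. The class names are illustrative, not from the original app, and `self` inside a helper is the view context, so link_to is available through it:

        module ApplicationHelper
          def menu_actions
            role_helper.menu_actions
          end

          private

          # Memoised per request, so the role check happens once.
          def role_helper
            @role_helper ||= current_user.nil? ? GuestMenu.new(self) : MemberMenu.new(self)
          end
        end

        class GuestMenu
          def initialize(view)
            @view = view
          end

          def menu_actions
            { "Log in" => @view.link_to("Login", "/login") }
          end
        end

        class MemberMenu
          def initialize(view)
            @view = view
          end

          def menu_actions
            { "Log out" => @view.link_to("Logout", "/logout") }
          end
        end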

    Read the article

  • Actionscript 3 : XML cached in local testing

    - by Boun
    Hi, my question is about XML loading. I need to avoid XML caching. On a web server, the technique is adding a random param so the XML file is reloaded each time. But in local testing (in the Flash CS4 IDE, Ctrl + Enter), the following lines are not possible:

        var my_date : Date;
        path = "toto.xml?time=" + my_date.getSeconds() + my_date.getMilliseconds();

    Is there any trick to bypass this issue? I've read on different forums about the "delete" method: we delete the XML object and then recreate a new one. In my case, I put:

        myXML = null;
        myXML = new XML( loadedData );

    But it doesn't work at all. I spent many hours on that problem; if anyone has a good solution... Thank you.
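
    A hedged AS3 sketch of one way around it: only append the cache-busting parameter when the SWF is actually served over HTTP, and load the plain path when testing from the local file system. Note also that the Date in the snippet above is declared but never constructed, so calling its methods would fail regardless; the handler name below is made up, and in a class file you would add the flash.net / flash.events imports:

        var path:String = "toto.xml";
        if (loaderInfo.url.indexOf("http") == 0)
        {
            var now:Date = new Date();           // must be constructed with "new"
            path += "?time=" + now.getTime();    // millisecond timestamp as the buster
        }

        var loader:URLLoader = new URLLoader();
        loader.addEventListener(Event.COMPLETE, onXmlLoaded);
        loader.load(new URLRequest(path));

        function onXmlLoaded(e:Event):void
        {
            var myXML:XML = new XML(URLLoader(e.target).data);
        }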

    Read the article

  • Adding ID's to google map markers

    - by Nick
    I have a script that loops and adds markers one at a time. I am trying to get the current marker to have an info window and only have 5 markers on the map at a time (4 without info windows and 1 with). How would I add an ID to each marker so that I can delete markers and close info windows as needed? This is the function I am using to set the marker:

        function codeAddress(address, contentString) {
          var infowindow = new google.maps.InfoWindow({
            content: contentString
          });
          if (geocoder) {
            geocoder.geocode({ 'address': address }, function(results, status) {
              if (status == google.maps.GeocoderStatus.OK) {
                map.setCenter(results[0].geometry.location);
                var marker = new google.maps.Marker({
                  map: map,
                  position: results[0].geometry.location
                });
                infowindow.open(map, marker);
              } else {
                alert("Geocode was not successful for the following reason: " + status);
              }
            });
          }
        }
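
    A minimal sketch of one way to get the bookkeeping without explicit IDs: keep the markers in an array (the array index effectively is the ID), attach each marker's info window to the marker itself, and evict the oldest once there are more than five. This helper would be called from inside the geocoder callback in place of the current marker code:

        var markers = [];

        function addMarker(position, contentString) {
          var marker = new google.maps.Marker({ map: map, position: position });
          marker.infowindow = new google.maps.InfoWindow({ content: contentString });

          if (markers.length > 0) {
            markers[markers.length - 1].infowindow.close();  // previous marker loses its window
          }
          marker.infowindow.open(map, marker);               // newest marker shows its window
          markers.push(marker);

          if (markers.length > 5) {
            var oldest = markers.shift();
            oldest.infowindow.close();
            oldest.setMap(null);                             // remove it from the map entirely
          }
        }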

    Read the article

  • Integrating a C project in the iPhone SDK

    - by iPhoneARguy
    Hello, This is my first time doing this sort of project so apologies if the question is silly. I've got a question about using a C project with a project in the iPhone SDK. I've dragged and dropped the C project into the iPhone project in Xcode (so it appears in the screenshot below). sjeng.h is a file inside GameEngine.xcodeproj, but when I try to include the header file, I not only receive an error, but the file it is looking for seems to be capitalized whereas the import statement is not. (I would post a screenshot but this is my first time doing something on stack overflow and I need more reputation points. The URL of the screenshot is here: http://imgur.com/AdrGL.png) Does anyone know what the problem might be? Thanks!

    Read the article

  • C# Desktop Application "Encounters an error and has to exit" on first run of the day

    - by Sreedevi J
    Hello, it seems I tend to attract strange issues. This time, I have written a C# application and handled most of the exceptions I can find. The problem is, when I run the installed/bundled version on any PC for the first time in a day (after the PC has been shut down and started after a while), it comes across some error and has to shut down the application (even though the try-catch block surrounding Main() does not fire). The application does not throw the same error on subsequent runs. I added an #if(!DEBUG) ... #else ... #endif surrounding the Main() code. Can anyone help me? Thanks!
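
    A hedged sketch of one way to at least capture the real exception: a try-catch around Main() never sees exceptions thrown on other threads or inside the message pump, which matches "the catch never fires". Hooking the two global events below and logging them in the release build usually reveals the first-run error. This assumes a WinForms app (WPF uses DispatcherUnhandledException instead); MainForm and the log path are placeholders:

        using System;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Exceptions from worker threads and from the UI message loop.
                AppDomain.CurrentDomain.UnhandledException += (s, e) =>
                    Log(e.ExceptionObject as Exception);
                Application.ThreadException += (s, e) =>
                    Log(e.Exception);

                Application.EnableVisualStyles();
                Application.Run(new MainForm());
            }

            static void Log(Exception ex)
            {
                System.IO.File.AppendAllText("crash.log", ex + Environment.NewLine);
            }
        }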

    Read the article

  • Track each request to the website using HttpModule

    - by stacker
    I want to save each request to the website. In general, I want to include the following information:
    1. User IP, the website URL, the user (if one exists), date-time.
    2. Response time, response success/failure status.
    Is it reasonable to collect 1 and 2 in the same action (like the same HttpModule)? Do you know of any existing structure that I can follow to track every request/response status to the website? The data needs to be logged to SQL Server.
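
    Collecting both in one module is reasonable, since BeginRequest and EndRequest bracket the same request. A minimal IHttpModule sketch below shows the shape of it; the actual SQL Server insert is left as a placeholder and the module still needs registering in web.config:

        using System;
        using System.Web;

        public class RequestLogModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                app.BeginRequest += (s, e) =>
                {
                    var ctx = ((HttpApplication)s).Context;
                    ctx.Items["log_start"] = DateTime.UtcNow;   // item 1 data is available here
                };

                app.EndRequest += (s, e) =>
                {
                    var ctx = ((HttpApplication)s).Context;
                    var started = (DateTime)ctx.Items["log_start"];

                    // IP, URL, user name (if authenticated), start time, elapsed time, status
                    LogToSql(ctx.Request.UserHostAddress,
                             ctx.Request.Url.AbsoluteUri,
                             ctx.User != null && ctx.User.Identity.IsAuthenticated
                                 ? ctx.User.Identity.Name : null,
                             started,
                             DateTime.UtcNow - started,
                             ctx.Response.StatusCode);
                };
            }

            public void Dispose() { }

            static void LogToSql(string ip, string url, string user,
                                 DateTime when, TimeSpan elapsed, int status)
            {
                // e.g. INSERT into a SQL Server table via SqlCommand - omitted here
            }
        }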

    Read the article

  • Building libopenmetaverse on CentOS 5

    - by Gary
    I'm trying to build libopenmetaverse on CentOS however I get the following error. I'm not this kind of developer and am installing this for someone else to use. This is just the part of the build that fails. Any ideas?

        [nant] /opt/libomv/Programs/WinGridProxy/WinGridProxy.exe.build build
        Buildfile: file:///opt/libomv/Programs/WinGridProxy/WinGridProxy.exe.build
        Target framework: Mono 2.0 Profile
        Target(s) specified: build

        build:
            [echo] Build Directory is /opt/libomv/bin
            [csc] Compiling 15 files to '/opt/libomv/bin/WinGridProxy.exe'.
            [resgen] Error: Invalid ResX input.
            [resgen] Position: Line 2700, Column 5.
            [resgen] Inner exception: An exception was thrown by the type initializer for System.Drawing.GDIPlus

        BUILD FAILED
        External Program Failed: /tmp/tmp5a71a509.tmp/resgen.exe (return code was 1)
        Total time: 0.4 seconds.

        BUILD FAILED
        Nested build failed. Refer to build log for exact reason.
        Total time: 47 seconds.
        Build Exit Code: 1
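
    A hedged pointer rather than a confirmed fix: a type-initializer failure in System.Drawing.GDIPlus under Mono usually means the native libgdiplus library is missing, so installing it and re-running the build is the first thing to try (the package name can vary by repository):

        # install Mono's native GDI+ implementation, then re-run the nant build
        yum install libgdiplus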

    Read the article

  • OutOfMemoryException, stack size is huge, large number of threads

    - by Captain Comic
    Hello, I was profiling my .NET Windows service. I was trying to track down an OutOfMemoryException and discovered that my stack size is huge and is growing because the number of threads keeps growing. Each thread gets 1024 KB on a Windows x64 machine, thus when my app has 754 threads the stack size would be 772 MB. The problem for me is that I don't know where these threads come from. Initially my app has a very limited number of threads, and they keep growing with time. I have two suspicions - either these threads are created by WCF or by database connections. My application uses both WCF and datasets. Also, when I profile my app in ANTS/dotTrace I can see a large number of System.ServiceModel.Channels.ClientReliableDuplexSessionChannel instances, and this number keeps increasing with time; I can see thousands of these objects created. So what I want to know is who is creating these threads (which tools or profilers can discover this) and whether it is WCF that is creating them.
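
    A hedged sketch of the most common culprit behind an ever-growing ClientReliableDuplexSessionChannel count: client proxies/channels that are opened but never closed, each keeping a reliable session (and its resources) alive. Closing or aborting every proxy deterministically is the usual first fix; the client class, callback handler and method names below are placeholders, not the actual service:

        var client = new MyDuplexClient(new InstanceContext(callbackHandler));
        try
        {
            client.DoWork();
            client.Close();            // graceful close releases the session
        }
        catch (CommunicationException)
        {
            client.Abort();            // never leave a faulted channel hanging
        }
        catch (TimeoutException)
        {
            client.Abort();
        }

    For finding the threads themselves, a memory/thread dump inspected with WinDbg + SOS (or the Visual Studio threads window attached to the service) shows each thread's managed stack, which usually makes the creator obvious.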

    Read the article

  • I need to create a trigger for the delete action which backs up all fields' old values in an audit table

    - by Parth
    I need to create a trigger for the delete action which backs up all fields' old values in an audit table. The audit table structure is: id, menuid, field, oldvalue, changedone. Now, whenever any row is deleted from its mother table (menu), which has 21 fields, every field's old value should get inserted into the audit table with a new audit id. For example, if I delete a row having the fields menuid, name, age, address, sex, town, then 6 rows should get inserted into the audit table, one for every field given above:

    Audit table:

        id = 2 (audit table id)
        menuid = menuid
        field = name
        oldvalue = joy
        changedone = (whatever the deletion time was)

        id = 3 (audit table id)
        menuid = menuid
        field = age
        oldvalue = 23
        changedone = (whatever the deletion time was)

    and so on...
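
    A minimal MySQL sketch of such a trigger, with only three of the 21 columns shown - the remaining columns repeat the same pattern. The table and column names follow the question but are assumptions, and the audit id is assumed to be AUTO_INCREMENT:

        DELIMITER //
        CREATE TRIGGER menu_before_delete
        BEFORE DELETE ON menu
        FOR EACH ROW
        BEGIN
          INSERT INTO audit (menuid, field, oldvalue, changedone)
          VALUES (OLD.menuid, 'name',    OLD.name,    NOW()),
                 (OLD.menuid, 'age',     OLD.age,     NOW()),
                 (OLD.menuid, 'address', OLD.address, NOW());
        END//
        DELIMITER ;

    OLD.column is available in DELETE triggers and holds the value the row had before deletion, which is exactly what the audit rows need.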

    Read the article
