Search Results

Search found 52 results on 3 pages for 'gettimeofday'.

Page 1/3 | 1 2 3  | Next Page >

  • Why are gettimeofday() intervals occasionally negative?

    - by Andres Jaan Tack
    I have an experimental library whose performance I'm trying to measure. To do this, I've written the following: struct timeval begin; gettimeofday(&begin, NULL); { // Experiment! } struct timeval end; gettimeofday(&end, NULL); // Print the time it took! std::cout << "Time: " << 100000 * (end.tv_sec - begin.tv_sec) + (end.tv_usec - begin.tv_usec) << std::endl; Occasionally, my results include negative timings, some of which are nonsensical. For instance: Time: 226762 Time: 220222 Time: 210883 Time: -688976 What's going on?
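
    For reference, a minimal hedged sketch of the usual interval computation (names are illustrative): the seconds term is scaled by 1,000,000 (one second is 1,000,000 microseconds), and the difference is kept in a signed 64-bit value so it can also represent a negative result if the wall clock is stepped backwards between the two calls.

        #include <stdio.h>
        #include <stdint.h>
        #include <sys/time.h>

        int main(void) {
            struct timeval begin, end;

            gettimeofday(&begin, NULL);
            /* ... experiment ... */
            gettimeofday(&end, NULL);

            /* One second is 1,000,000 microseconds; keep the arithmetic in 64 bits. */
            int64_t elapsed_us = (int64_t)(end.tv_sec - begin.tv_sec) * 1000000
                               + (end.tv_usec - begin.tv_usec);
            printf("Time: %lld us\n", (long long)elapsed_us);
            return 0;
        }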

    Read the article

  • How to use gettimeofday() or something equivalent with Visual Studio C++ 2008?

    - by make
    Hi, Could someone please help me to use the gettimeofday() function with Visual Studio C++ 2008 on Windows XP? Here is code that I found somewhere on the net: #include <time.h> #include <windows.h> #if defined(_MSC_VER) || defined(_MSC_EXTENSIONS) #define DELTA_EPOCH_IN_MICROSECS 11644473600000000Ui64 #else #define DELTA_EPOCH_IN_MICROSECS 11644473600000000ULL #endif struct timezone { int tz_minuteswest; /* minutes W of Greenwich */ int tz_dsttime; /* type of dst correction */ }; int gettimeofday(struct timeval *tv, struct timezone *tz) { FILETIME ft; unsigned __int64 tmpres = 0; static int tzflag; if (NULL != tv) { GetSystemTimeAsFileTime(&ft); tmpres |= ft.dwHighDateTime; tmpres <<= 32; tmpres |= ft.dwLowDateTime; /*converting file time to unix epoch*/ tmpres -= DELTA_EPOCH_IN_MICROSECS; tmpres /= 10; /*convert into microseconds*/ tv->tv_sec = (long)(tmpres / 1000000UL); tv->tv_usec = (long)(tmpres % 1000000UL); } if (NULL != tz) { if (!tzflag) { _tzset(); tzflag++; } tz->tz_minuteswest = _timezone / 60; tz->tz_dsttime = _daylight; } return 0; } ... // call gettimeofday() gettimeofday(&tv, &tz); tm = localtime(&tv.tv_sec); Last year when I tested this code with VC++ 6, it worked fine. But now when I use VC++ 2008, I am getting an exception-handling error. So, is there any idea on how to use gettimeofday() or something equivalent? Thanks for your reply; any help would be much appreciated.

    Read the article

  • Apache2 gettimeofday() keeps CPU at 100%

    - by pincoded
    I use Ubuntu 6.06.2 LTS with Server version: Apache/2.0.55; built: Aug 16 2010 18:25:39, and PHP 5.1.2 (cli) (built: Sep 16 2010 20:32:18). All 4 of my cores are constantly at 100% and the system begins to accumulate load. Restarting Apache fixes the problem temporarily. I ran strace on the PIDs of the Apache processes that keep the CPU busy. I get the following message continuously: gettimeofday({1285234145, 989639}, NULL) = 0 Do you have any ideas where this problem comes from? Thank you. UPDATE: The problem came from an application error that generated an infinite loop. Thank you all for your great help.

    Read the article

  • How to get the running time of my program with gettimeofday()

    - by Mechko
    So I get the time at the beginning of the code, run it, and then get the time. struct timeval begin, end; gettimeofday(&begin, NULL); //code to time gettimeofday(&end, NULL); //get the total number of ms that the code took: unsigned int t = end.tv_usec - begin.tv_usec; Now I want to print it out in the form "**code took 0.007 seconds to run" or something similar. So two problems: 1) t seems to contain a value of the order 6000, and I KNOW the code didn't take 6 seconds to run. 2) How can I convert t to a double, given that it's an unsigned int? Or is there an easier way to print the output the way I wanted to?
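
    For what it's worth, a hedged sketch (illustrative names) that folds both fields into one double-precision value in seconds, which is presumably the desired output format:

        #include <stdio.h>
        #include <sys/time.h>

        int main(void) {
            struct timeval begin, end;

            gettimeofday(&begin, NULL);
            /* ... code to time ... */
            gettimeofday(&end, NULL);

            /* tv_usec only holds the sub-second part; tv_sec must be included too. */
            double elapsed = (double)(end.tv_sec - begin.tv_sec)
                           + (double)(end.tv_usec - begin.tv_usec) / 1e6;
            printf("code took %.3f seconds to run\n", elapsed);
            return 0;
        }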

    Read the article

  • Create ntp time stamp from gettimeofday

    - by krunk
    I need to calculate an ntp time stamp using gettimeofday. Below is how I've done it with comments on method. Look good to you guys? (minus error checking). Also, here's a codepad link. #include <unistd.h> #include <sys/time.h> const unsigned long EPOCH = 2208988800UL; // delta between epoch time and ntp time const double NTP_SCALE_FRAC = 4294967295.0; // maximum value of the ntp fractional part int main() { struct timeval tv; uint64_t ntp_time; uint64_t tv_ntp; double tv_usecs; gettimeofday(&tv, NULL); tv_ntp = tv.tv_sec + EPOCH; // convert tv_usec to a fraction of a second // next, we multiply this fraction times the NTP_SCALE_FRAC, which represents // the maximum value of the fraction until it rolls over to one. Thus, // .05 seconds is represented in NTP as (.05 * NTP_SCALE_FRAC) tv_usecs = (tv.tv_usec * 1e-6) * NTP_SCALE_FRAC; // next we take the tv_ntp seconds value and shift it 32 bits to the left. This puts the // seconds in the proper location for NTP time stamps. I recognize this method has an // overflow hazard if used after around the year 2106 // Next we do a bitwise AND with the tv_usecs cast as a uin32_t, dropping the fractional // part ntp_time = ((tv_ntp << 32) & (uint32_t)tv_usecs); }
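
    For comparison, a hedged sketch of the same composition with the two halves combined by a bitwise OR (seconds in the high 32 bits, fraction in the low 32 bits) and the fraction scaled by 2^32; constant and variable names are illustrative:

        #include <stdio.h>
        #include <stdint.h>
        #include <sys/time.h>

        int main(void) {
            const uint64_t EPOCH_DELTA = 2208988800ULL; /* 1900-01-01 to 1970-01-01, in seconds */
            struct timeval tv;
            gettimeofday(&tv, NULL);

            uint64_t seconds  = (uint64_t)tv.tv_sec + EPOCH_DELTA;
            /* Scale microseconds into a 32-bit binary fraction of a second (2^32 units). */
            uint64_t fraction = (uint64_t)((tv.tv_usec / 1000000.0) * 4294967296.0);

            /* High word: seconds; low word: fraction. OR them together. */
            uint64_t ntp_time = (seconds << 32) | (fraction & 0xFFFFFFFFULL);
            printf("%llu\n", (unsigned long long)ntp_time);
            return 0;
        }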

    Read the article

  • Measuring execution time of selected loops

    - by user95281
    I want to measure the running times of selected loops in a C program so as to see what percentage of the total time for executing the program (on Linux) is spent in these loops. I should be able to specify the loops for which the performance should be measured. I have tried out several tools (VTune, HPCToolkit, OProfile) in the last few days and none of them seem to do this. They all find the performance bottlenecks and just show the time for those. That's because these tools only store the time taken that is above a threshold (~1 ms). So if one loop takes less time than that, its execution time won't be reported. The basic-block counting feature of gprof depends on a feature in older compilers that's not supported now. I could manually write a simple timer using gettimeofday or something like that, but in some cases it won't give accurate results. For example: for (i = 0; i < 1000; ++i) { for (j = 0; j < N; ++j) { //do some work here } } Now I want to measure the total time spent in the inner loop, so I will have to put a call to gettimeofday inside the outer loop. That means gettimeofday itself will get called 1000 times, which introduces its own overhead and makes the result inaccurate.
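
    One hedged workaround sketch, under the assumption that a manual timer is acceptable and that the per-call overhead described above can be tolerated as an upper bound: accumulate only the inner-loop time across all outer iterations, here using clock_gettime(CLOCK_MONOTONIC) (monotonic, nanosecond resolution) instead of gettimeofday; the loop body and N are placeholders:

        #include <stdio.h>
        #include <time.h>

        static void do_some_work(void) { /* placeholder for the real inner-loop body */ }

        int main(void) {
            struct timespec t0, t1;
            long long inner_ns = 0;          /* total time spent inside the inner loop */
            const int N = 100000;            /* placeholder bound */

            for (int i = 0; i < 1000; ++i) {
                clock_gettime(CLOCK_MONOTONIC, &t0);
                for (int j = 0; j < N; ++j)
                    do_some_work();
                clock_gettime(CLOCK_MONOTONIC, &t1);
                inner_ns += (t1.tv_sec - t0.tv_sec) * 1000000000LL
                          + (t1.tv_nsec - t0.tv_nsec);
            }
            printf("inner loop total: %.3f ms\n", inner_ns / 1e6);
            return 0;
        }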

    Read the article

  • Strange results while measuring delta time on Linux

    - by pachanga
    Folks, could you please explain why I'm getting very strange results from time to time using the following code: #include <unistd.h> #include <sys/time.h> #include <time.h> #include <stdio.h> int main() { struct timeval start, end; long mtime, seconds, useconds; while(1) { gettimeofday(&start, NULL); usleep(2000); gettimeofday(&end, NULL); seconds = end.tv_sec - start.tv_sec; useconds = end.tv_usec - start.tv_usec; mtime = ((seconds) * 1000 + useconds/1000.0) + 0.5; if(mtime > 10) printf("WTF: %ld\n", mtime); } return 0; } (You can compile and run it with: gcc test.c -o out -lrt && ./out) What I'm experiencing is sporadic big values of the mtime variable almost every second or even more often, e.g.: $ gcc test.c -o out -lrt && ./out WTF: 14 WTF: 11 WTF: 11 WTF: 11 WTF: 14 WTF: 13 WTF: 13 WTF: 11 WTF: 16 How can this be possible? Is the OS to blame? Does it do too much context switching? But my box is idle (load average: 0.02, 0.02, 0.3). Here is my Linux kernel version: $ uname -a Linux kurluka 2.6.31-21-generic #59-Ubuntu SMP Wed Mar 24 07:28:56 UTC 2010 i686 GNU/Linux

    Read the article

  • FastCGI and Apache 500 error intermittently

    - by benkorn1
    Hello, I have a FastCGI (mod_fastcgi) problem. It happens every once in a while, and does not cause a complete server meltdown, just 500 errors. Here are a couple of things. First, I am using APC so PHP is in control of its own processes, not FastCGI. Also, I have the webroot set as: /var/www/html And the fcgi-bin inside: /var/www/html/fcgi-bin First off, here is the Apache error_log: [Fri Jan 07 10:22:39 2011] [error] [client 50.16.222.82] (4)Interrupted system call: FastCGI: comm with server "/var/www/html/fcgi-bin/php.fcgi" aborted: select() failed, referer: http://www.domain.com/ I also ran strace on the 'fcgi-pm' process. Here is a snip from the trace around the time it bombs out: 21725 gettimeofday({1294420603, 14360}, NULL) = 0 21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6503 38*", 16384) = 46 21725 alarm(131) = 0 21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14]) 21725 alarm(0) = 131 21725 gettimeofday({1294420603, 96595}, NULL) = 0 21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6154 23*C /var/www/html/fcgi-bin/php.fcgi - - 6483 28*", 16384) = 92 21725 alarm(131) = 0 21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14]) 21725 alarm(0) = 131 21725 gettimeofday({1294420603, 270744}, NULL) = 0 21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 5741 38*", 16384) = 46 21725 alarm(131) = 0 21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14]) 21725 alarm(0) = 131 21725 gettimeofday({1294420603, 311502}, NULL) = 0 21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6064 32*", 16384) = 46 21725 alarm(131) = 0 21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14]) 21725 alarm(0) = 131 21725 gettimeofday({1294420603, 365598}, NULL) = 0 21725 read(14, "C /var/www/html/fcgi-bin/php.fcgi - - 6179 33*C /var/www/html/fcgi-bin/php.fcgi - - 5906 59*", 16384) = 92 21725 alarm(131) = 0 21725 select(15, [14], NULL, NULL, NULL) = 1 (in [14]) 21725 alarm(0) = 131 21725 gettimeofday({1294420603, 454405}, NULL) = 0 I noticed that the 'select()' seems to stay the same regardless; however, the read() changes its return from 46 to some other number while it is bombing out. Has anyone seen anything like this? Could this be some sort of file locking? Thanks, Ben

    Read the article

  • CUDA memory transfer issue

    - by Vaibhav Sundriyal
    I am trying to execute a code which first transfers data from CPU to GPU memory and vice-versa. In spite of increasing the volume of data, the data transfer time remains the same as if no data transfer is actually taking place. I am posting the code. #include <stdio.h> /* Core input/output operations */ #include <stdlib.h> /* Conversions, random numbers, memory allocation, etc. */ #include <math.h> /* Common mathematical functions */ #include <time.h> /* Converting between various date/time formats */ #include <cuda.h> /* CUDA related stuff */ #include <sys/time.h> __global__ void device_volume(float *x_d,float *y_d) { int index = blockIdx.x * blockDim.x + threadIdx.x; } int main(void) { float *x_h,*y_h,*x_d,*y_d,*z_h,*z_d; long long size=9999999; long long nbytes=size*sizeof(float); timeval t1,t2; double et; x_h=(float*)malloc(nbytes); y_h=(float*)malloc(nbytes); z_h=(float*)malloc(nbytes); cudaMalloc((void **)&x_d,size*sizeof(float)); cudaMalloc((void **)&y_d,size*sizeof(float)); cudaMalloc((void **)&z_d,size*sizeof(float)); gettimeofday(&t1,NULL); cudaMemcpy(x_d, x_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(y_d, y_h, nbytes, cudaMemcpyHostToDevice); cudaMemcpy(z_d, z_h, nbytes, cudaMemcpyHostToDevice); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("\n %ld\t\t%f\t\t",nbytes,et); et=0.0; //printf("%f %d\n",seconds,CLOCKS_PER_SEC); // launch a kernel with a single thread to greet from the device //device_volume<<<1,1>>>(x_d,y_d); gettimeofday(&t1,NULL); cudaMemcpy(x_h, x_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(y_h, y_d, nbytes, cudaMemcpyDeviceToHost); cudaMemcpy(z_h, z_d, nbytes, cudaMemcpyDeviceToHost); gettimeofday(&t2,NULL); et = (t2.tv_sec - t1.tv_sec) * 1000.0; // sec to ms et += (t2.tv_usec - t1.tv_usec) / 1000.0; // us to ms printf("%f\n",et); cudaFree(x_d); cudaFree(y_d); cudaFree(z_d); return 0; } Can anybody help me with this issue? Thanks

    Read the article

  • Calling UIGetScreenImage() on manually-spawned thread prints "_NSAutoreleaseNoPool():" message to log

    - by jtrim
    This is the body of the selector that is specified in NSThread +detachNewThreadSelector:(SEL)aSelector toTarget:(id)aTarget withObject:(id)anArgument NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; while (doIt) { if (doItForSure) { NSLog(@"checking"); doItForSure = NO; (void)gettimeofday(&start, NULL); /* do some stuff */ // the next line prints "_NSAutoreleaseNoPool():" message to the log CGImageRef screenImage = UIGetScreenImage(); /* do some other stuff */ (void)gettimeofday(&end, NULL); elapsed = ((double)(end.tv_sec) + (double)(end.tv_usec) / 1000000) - ((double)(start.tv_sec) + (double)(start.tv_usec) / 1000000); NSLog(@"Time elapsed: %e", elapsed); [pool drain]; } } [pool release]; Even with the autorelease pool present, I get this printed to the log when I call UIGetScreenImage(): 2010-05-03 11:39:04.588 ProjectName[763:5903] *** _NSAutoreleaseNoPool(): Object 0x15a2e0 of class NSCFNumber autoreleased with no pool in place - just leaking Has anyone else seen this with UIGetScreenImage() on a separate thread?

    Read the article

  • How to solve an unhandled exception error when using Visual C++ 2008?

    - by make
    Hi, Could someone please help me solve an unhandled exception error when using Visual C++ 2008? The error is displayed as follows: Unhandled exception at 0x00411690 in time.exe: 0xC0000005: Access violation reading location 0x00000008 Actually, when I used Visual C++ 6 in the past, there weren't any errors and the program ran fine. But now when I use Visual C++ 2008, I am getting this Unhandled exception error. Here is the program: #include <stdio.h> #include <stdlib.h> #include <time.h> #ifdef _WIN32 // #include <winsock.h> #include <windows.h> #include "stdint.h" // typedef __int64 int64_t // Define it from MSVC's internal type // typedef unsigned __int32 uint32_t #else #include <stdint.h> // Use the C99 official header #include <sys/time.h> #include <unistd.h> #endif #if defined(_MSC_VER) || defined(_MSC_EXTENSIONS) #define DELTA_EPOCH_IN_MICROSECS 11644473600000000Ui64 #else #define DELTA_EPOCH_IN_MICROSECS 11644473600000000ULL #endif struct timezone { int tz_minuteswest; /* minutes W of Greenwich */ int tz_dsttime; /* type of dst correction */ }; #define TEST #ifdef TEST uint32_t stampstart(); uint32_t stampstop(uint32_t start); int main() { uint32_t start, stop; start = stampstart(); /* Your code goes here */ stop = stampstop(start); return 0; } #endif int gettimeofday(struct timeval *tv, struct timezone *tz) { FILETIME ft; unsigned __int64 tmpres = 0; static int tzflag = 0; if (NULL != tv) { GetSystemTimeAsFileTime(&ft); tmpres |= ft.dwHighDateTime; tmpres <<= 32; tmpres |= ft.dwLowDateTime; tmpres /= 10; /*convert into microseconds*/ /*converting file time to unix epoch*/ tmpres -= DELTA_EPOCH_IN_MICROSECS; tv->tv_sec = (long)(tmpres / 1000000UL); tv->tv_usec = (long)(tmpres % 1000000UL); } if (NULL != tz) { if (!tzflag) { _tzset(); tzflag++; } tz->tz_minuteswest = _timezone / 60; tz->tz_dsttime = _daylight; } return 0; } uint32_t stampstart() { struct timeval tv; struct timezone tz; struct tm *tm; uint32_t start; gettimeofday(&tv, &tz); tm = localtime(&tv.tv_sec); printf("TIMESTAMP-START\t %d:%02d:%02d:%d (~%d ms)\n", tm->tm_hour, tm->tm_min, tm->tm_sec, tv.tv_usec, tm->tm_hour * 3600 * 1000 + tm->tm_min * 60 * 1000 + tm->tm_sec * 1000 + tv.tv_usec / 1000); start = tm->tm_hour * 3600 * 1000 + tm->tm_min * 60 * 1000 + tm->tm_sec * 1000 + tv.tv_usec / 1000; return (start); } uint32_t stampstop(uint32_t start) { struct timeval tv; struct timezone tz; struct tm *tm; uint32_t stop; gettimeofday(&tv, &tz); tm = localtime(&tv.tv_sec); stop = tm->tm_hour * 3600 * 1000 + tm->tm_min * 60 * 1000 + tm->tm_sec * 1000 + tv.tv_usec / 1000; printf("TIMESTAMP-END\t %d:%02d:%02d:%d (~%d ms) \n", tm->tm_hour, tm->tm_min, tm->tm_sec, tv.tv_usec, tm->tm_hour * 3600 * 1000 + tm->tm_min * 60 * 1000 + tm->tm_sec * 1000 + tv.tv_usec / 1000); printf("ELAPSED\t %d ms\n", stop - start); return (stop); } Thanks for your replies.

    Read the article

  • Why Does My Website Redirect me to my localhost?

    - by Noah Brainey
    Alright, my website has some issues, and I'm not sure what's causing them. Visit this page http://online-file-sharing.net/tos.html and click one of the bottom footer links... it redirects you to your localhost in the address bar. I have no idea why it does this. I'm hosting this website on my own server, which is this computer, and using XAMPP, if this information helps. Anyway, any help would be greatly appreciated! I'm also using DynDNS as my nameservers. I've already asked this question on the Super User and Web Apps Q&A sites; neither could help. They said to come here. Another thing to note is that this website runs on one script and not multiple scripts (upload.cgi). However, there are three files that aren't dynamic and aren't part of the upload.cgi file... these are about.html, browse.html and tos.html. Another thing to note is that my homepage, which is upload.cgi, can only be accessed by manually typing in online-file-sharing.net/cgi-bin/upload.cgi (which isn't its real location but it seems to recognize it this way... but redirects me to my localhost). .htaccess file code: DirectoryIndex upload.cgi My upload.cgi path code: my $version = "4.14"; $ENV{PATH} = '/bin:/usr/bin'; delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'}; ($ENV{DOCUMENT_ROOT}) = ($ENV{DOCUMENT_ROOT} =~ /(.*)/); # untaint. #$ENV{SCRIPT_NAME} = '/cgi-bin/upload.cgi'; use lib './perlmodules'; #use Time::HiRes 'gettimeofday'; #my $hires_start = gettimeofday(); my (%PREF,%TEXT) = (); The script I'm using is FileChucker. I hope this information is enough to find an answer... if not please let me know and I'll post as much information as you need!

    Read the article

  • Why does my CGI script keep redirecting links to localhost?

    - by Noah Brainey
    Visit this page http://online-file-sharing.net/tos.html and click one of the bottom footer links. It redirects you to your localhost in the address bar. I have no idea why it does this. This is in the main script that my entire website revolves around: upload.cgi $ENV{PATH} = '/bin:/usr/bin'; delete @ENV{'IFS', 'CDPATH', 'ENV', 'BASH_ENV'}; ($ENV{DOCUMENT_ROOT}) = ($ENV{DOCUMENT_ROOT} =~ /(.*)/); # untaint. #$ENV{SCRIPT_NAME} = '/cgi-bin/upload.cgi'; use lib './perlmodules'; #use Time::HiRes 'gettimeofday'; #my $hires_start = gettimeofday(); my (%PREF,%TEXT) = (); No file is displayed when someone visits the root directory, although I have a .htaccess file saying to open my upload.cgi file which is located in my root directory. When I point my browser directly to the CGI file it works but it brings me to my localhost again. I'm hosting this website on my own server, which is this computer, and using XAMPP if this information helps. I'm also using DynDNS as my nameservers. I hope you can give me some insight.

    Read the article

  • Easily measure elapsed time

    - by hap497
    I am trying to use time() to measure various points of my program. What I don't understand is why the before and after values are the same. I understand this is not the best way to profile my program; I just want to see how long something takes. printf("**MyProgram::before time= %ld\n", time(NULL)); doSomthing(); doSomthingLong(); printf("**MyProgram::after time= %ld\n", time(NULL)); I have tried: struct timeval diff, startTV, endTV; gettimeofday(&startTV, NULL); doSomething(); doSomethingLong(); gettimeofday(&endTV, NULL); timersub(&endTV, &startTV, &diff); printf("**time taken = %ld %ld\n", diff.tv_sec, diff.tv_usec); How do I read a result of **time taken = 0 26339? Does that mean 26,339 nanoseconds = 26.3 msec? What about **time taken = 4 45025, does that mean 4 seconds and 25 msec?
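
    As a point of reference, a minimal hedged sketch of how the pair printed by timersub maps to elapsed time: tv_sec is whole seconds and tv_usec is the leftover microseconds (not nanoseconds), so 0 26339 reads as 0.026339 s (about 26 ms) and 4 45025 reads as 4.045025 s:

        #include <stdio.h>
        #include <sys/time.h>

        int main(void) {
            struct timeval diff, startTV, endTV;

            gettimeofday(&startTV, NULL);
            /* ... doSomething(); doSomethingLong(); ... */
            gettimeofday(&endTV, NULL);
            timersub(&endTV, &startTV, &diff);

            /* tv_sec = whole seconds, tv_usec = leftover microseconds */
            printf("time taken = %ld.%06ld s\n", (long)diff.tv_sec, (long)diff.tv_usec);
            return 0;
        }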

    Read the article

  • How to stop input in Perl?

    - by user1472747
    First time poster and part time perl noobie. I'm making a reflex game. Here's the output - __________________________________________________________________________ Reflex game initiated. Press ENTER to begin the game, and then press ENTER after the asterisks are printed to measure your reflexes!. ************************* Your result: 0.285606 seconds. logout [Process completed] __________________________________________________________________________ There's one small problem though - There's 0-10 seconds (based on a random variable) after you press enter to start the game and before the stars are printed. During that time, if the player presses ENTER, it's logged as their reflex time. So I need a way to stop my code from reading their ENTER button before the stars are printed. The code - #!/usr/bin/perl use Time::HiRes qw(sleep); use Time::HiRes qw(gettimeofday); #random delay variable $random_number = rand(); print "Reflex game initiated. Press ENTER to begin the game, and then press ENTER after the asterisks are printed to measure your reflexes!.\n"; #begin button $begin = <>; #waits x milliseconds sleep(10*$random_number); #pre-game $start = [ Time::HiRes::gettimeofday() ]; print "\n****************************\n"; #user presses enter $stop = <>; #post game $elapsed = Time::HiRes::tv_interval($start); #delay time print print "Your result: ".$elapsed." seconds.\n";

    Read the article

  • Is it reasonable that a random disk seek & read costs ~16ms?

    - by fzhang
    I am frustrated by the latency of random reads from a non-SSD disk. Based on results from the following test program, it takes ~16 ms for a random read of just 512 bytes without help from the OS cache. I tried changing 512 to larger values, such as 25k, and the latency did not increase as much. I guess that is because the disk seek dominates the time. I understand that random reading is inherently slow, but I just want to be sure that ~16 ms is reasonable, even for a non-SSD disk. #include <sys/stat.h> #include <sys/time.h> #include <sys/types.h> #include <sys/unistd.h> #include <fcntl.h> #include <limits.h> #include <stdio.h> #include <string.h> int main(int argc, char** argv) { int fd = open(argv[1], O_RDONLY); if (fd < 0) { fprintf(stderr, "Failed open %s\n", argv[1]); return -1; } const size_t count = 512; const off_t offset = 25990611 / 2; char buffer[count] = { '\0' }; struct timeval start_time; gettimeofday(&start_time, NULL); off_t ret = lseek(fd, offset, SEEK_SET); if (ret != offset) { perror("lseek error"); close(fd); return -1; } ret = read(fd, buffer, count); if (ret != count) { fprintf(stderr, "Failed reading all: %ld\n", ret); close(fd); return -1; } struct timeval end_time; gettimeofday(&end_time, NULL); printf("tv_sec: %ld, tv_usec: %ld\n", end_time.tv_sec - start_time.tv_sec, end_time.tv_usec - start_time.tv_usec); close(fd); return 0; }

    Read the article

  • Why is .NET faster than C++ in this case?

    - by acidzombie24
    -edit- I LOVE SLaks comment. "The amount of misinformation in these answers is staggering." :D Calm down guys. Pretty much all of you were wrong. I DID make optimizations. It turns out whatever optimizations I made wasn't good enough. I ran the code in GCC using gettimeofday (I'll paste code below) and used g++ -O2 file.cpp and got slightly faster results then C#. Maybe MS didn't create the optimizations needed in this specific case but after downloading and installing mingw I was tested and found the speed to be near identical. Justicle Seems to be right. I could have sworn I use clock on my PC and used that to count and found it was slower but problem solved. C++ speed isn't almost twice as slower in the MS compiler. When my friend informed me of this I couldn't believe it. So I took his code and put some timers onto it. Instead of Boo I used C#. I constantly got faster results in C#. Why? The .NET version was nearly half the time no matter what number I used. C++ version: #include <iostream> #include <stdio.h> #include <intrin.h> #include <windows.h> using namespace std; int fib(int n) { if (n < 2) return n; return fib(n - 1) + fib(n - 2); } int main() { __int64 time = 0xFFFFFFFF; while (1) { int n; //cin >> n; n = 41; if (n < 0) break; __int64 start = __rdtsc(); int res = fib(n); __int64 end = __rdtsc(); cout << res << endl; cout << (float)(end-start)/1000000<<endl; break; } return 0; } C# version: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Runtime.InteropServices; using System.ComponentModel; using System.Threading; using System.IO; using System.Diagnostics; namespace fibCSTest { class Program { static int fib(int n) { if (n < 2)return n; return fib(n - 1) + fib(n - 2); } static void Main(string[] args) { //var sw = new Stopwatch(); //var timer = new PAB.HiPerfTimer(); var timer = new Stopwatch(); while (true) { int n; //cin >> n; n = 41; if (n < 0) break; timer.Start(); int res = fib(n); timer.Stop(); Console.WriteLine(res); Console.WriteLine(timer.ElapsedMilliseconds); break; } } } } GCC version: #include <iostream> #include <stdio.h> #include <sys/time.h> using namespace std; int fib(int n) { if (n < 2) return n; return fib(n - 1) + fib(n - 2); } int main() { timeval start, end; while (1) { int n; //cin >> n; n = 41; if (n < 0) break; gettimeofday(&start, 0); int res = fib(n); gettimeofday(&end, 0); int sec = end.tv_sec - start.tv_sec; int usec = end.tv_usec - start.tv_usec; cout << res << endl; cout << sec << " " << usec <<endl; break; } return 0; }

    Read the article

  • Importing Conditionally Compiled Functions From a Perl Module

    - by Robert S. Barnes
    I have a set of logging and debugging functions which I want to use across multiple modules / objects. I'd like to be able to turn them on / off globally using a command line switch. The following code does this, however, I would like to be able to omit the package name and keep everything in a single file. This is related to two previous questions I asked, here and here. #! /usr/bin/perl -w use strict; use Getopt::Long; { package LogFuncs; use threads; use Time::HiRes qw( gettimeofday ); # provide tcpdump style time stamp sub _gf_time { my ( $seconds, $microseconds ) = gettimeofday(); my @time = localtime($seconds); return sprintf( "%02d:%02d:%02d.%06ld", $time[2], $time[1], $time[0], $microseconds ); } sub logerr; sub compile { my %params = @_; *logerr = $params{do_logging} ? sub { my $msg = shift; warn _gf_time() . " Thread " . threads->tid() . ": $msg\n"; } : sub { }; } } { package FooObj; sub new { my $class = shift; bless {}, $class; }; sub foo_work { my $self = shift; # do some foo work LogFuncs::logerr($self); } } { package BarObj; sub new { my $class = shift; my $data = { fooObj => FooObj->new() }; bless $data, $class; } sub bar_work { my $self = shift; $self->{fooObj}->foo_work(); LogFuncs::logerr($self); } } my $do_logging = 0; GetOptions( "do_logging" => \$do_logging, ); LogFuncs::compile(do_logging => $do_logging); my $bar = BarObj->new(); LogFuncs::logerr("Created $bar"); $bar->bar_work();

    Read the article

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from the memory. The server has different 'channels' that it can be writing to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible. I have 2 approaches for the client: For each linked list, the client keeps a 'bookmark' pointer to keep its place within the linked list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of a 'next' member of the node. If it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of the list to a master 'update list'. The client only keeps one bookmark, a bookmark into the update list. It endlessly checks if the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ); if so, it moves ahead, reads the ID given, and then processes the new node on the linked list that has that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time. When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20us of latency. The second approach would have small spats of low latencies but often be between 4,000-7,000us! By inserting gettimeofday calls here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking if the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on I don't understand. What's going on?
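
    For illustration only, a much-simplified hedged sketch of the publish/poll hand-off described above, using C11 atomics and a heap node rather than real shared memory; the names and the pthread scaffolding are illustrative:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        struct node {
            int payload;
            _Atomic(struct node *) next;   /* NULL until the writer publishes a successor */
        };

        static struct node head = { 0, NULL };

        /* Writer: fill the node first, then publish it with a release store so the
           reader can never observe a half-initialized node through 'next'. */
        static void *writer(void *arg) {
            (void)arg;
            usleep(1000);                                  /* pretend to do other work first */
            struct node *n = malloc(sizeof *n);
            n->payload = 42;
            atomic_store_explicit(&n->next, NULL, memory_order_relaxed);
            atomic_store_explicit(&head.next, n, memory_order_release);
            return NULL;
        }

        int main(void) {
            pthread_t t;
            pthread_create(&t, NULL, writer, NULL);

            /* Reader: the busy-wait loop whose latency the question measures. */
            struct node *n;
            while ((n = atomic_load_explicit(&head.next, memory_order_acquire)) == NULL)
                ;                                          /* spin until published */
            printf("saw payload %d\n", n->payload);

            pthread_join(t, NULL);
            free(n);
            return 0;
        }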

    Read the article

  • Programmatically measure size and way-order of L1 and L2 caches

    - by osgx
    How can I measure programmatically (not query the OS, but measure) the size and order of associativity of L1 and L2 caches (data caches)? Assumptions about system: It has L1 and L2 cache (may be L3 too, may be cache sharing), It may have a hardware prefetch unit (just like P4+), It has a stable clocksource (tickcounter or good HPET for gettimeofday). There are no assumptions about OS (it can be Linux, Windows, or something non-standard), and we can't use POSIX queries. Language is C. And compiler optimizations may be disabled.

    Read the article

  • ACE_Mutex::acquire problem

    - by O. Askari
    Hi, I have a mutex in my class with the following definition: ACE_Mutex m_specsMutex; When I use the acquire() method that takes no parameters, everything works just fine. But when I use it with a time value (as follows), it just immediately returns -1. I'm sure that this mutex hasn't been acquired anywhere else, so it shouldn't return -1. m_specsMutex.acquire(ACE_OS::gettimeofday() + ACE_Time_Value(30)) Am I doing anything wrong?

    Read the article

  • Equivalent to GetTickCount() on Linux

    - by Matt Joiner
    I'm looking for an equivalent to GetTickCount() on Linux. Presently I am using Python's time.time(), which presumably calls through to gettimeofday(). My concern is that the time returned (seconds since the Unix epoch) may change erratically if the clock is messed with, such as by NTP. A simple process or system wall time that only increases positively at a constant rate would suffice. Does any such time function in C or Python exist?
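
    One commonly used option is clock_gettime with CLOCK_MONOTONIC, which never jumps backwards and is not stepped by NTP or manual clock changes (Python 3 exposes the same idea as time.monotonic()). A hedged sketch that converts it to milliseconds, roughly in the spirit of GetTickCount():

        #include <stdio.h>
        #include <time.h>

        /* Milliseconds since some fixed, unspecified starting point (typically boot). */
        static long long monotonic_ms(void) {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (long long)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
        }

        int main(void) {
            printf("tick = %lld ms\n", monotonic_ms());
            return 0;
        }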

    Read the article

  • How to check total cache size using a program

    - by user1888541
    So I'm having some trouble creating a program to measure cache size in C. I understand the basic concept of going about this but I'm still having trouble figuring out exactly what I am doing wrong. Basically, I create an array of varying length (going by powers of 2) and access each element in the array and put it in a dummy variable. I go through the array and do this around 1000 times to negate the "noise" that would otherwise occur if I only did it once to get an accurate measurement for time. Then, I look for the size that causes a big jump in access time. Unfortunately, this is where I am having my problem: I don't see this jump using my code and clearly I am doing something wrong. Another thing is that I used /proc/cpuinfo to check the cache and it said the size was 6114 but that was not a power of 2. I was told to go by powers of 2 to figure out the cache; can anyone explain why this is? Here is the gist of my code...I will post the rest if need be { struct timeval start; struct timeval end; // int n = 1; // change this to test different sizes int array_size = 1048576*n; // I'm trying to check the time "manually" first before creating a loop for the program to do it by itself this is why I have a separate "n" variable to increase the size char x = 0; int i =0, j=0; char *a; a =malloc(sizeof(char) * (array_size)); gettimeofday(&start,NULL); for(i=0; i<1000; i++) { for(j=0; j < array_size; j += 1) { x = a[j]; } } gettimeofday(&end,NULL); int timeTaken = (end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec *1000000 + start.tv_usec); printf("Time Taken: %d \n", timeTaken); printf("Average: %f \n", (double)timeTaken/((double)array_size)); }
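
    A hedged variation to experiment with (an assumption about the measurement, not a guaranteed fix): touch one byte per 64-byte cache line instead of every byte, and fold the reads into a value that gets printed so the compiler cannot drop the loop; hardware prefetching can still blur the jump on some CPUs:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/time.h>

        #define LINE 64                              /* assumed cache-line size in bytes */

        int main(void) {
            for (long size = 4 * 1024; size <= 64 * 1024 * 1024; size *= 2) {
                char *a = calloc(size, 1);
                if (!a) return 1;

                struct timeval start, end;
                unsigned long sink = 0;              /* keeps the reads observable */

                gettimeofday(&start, NULL);
                for (int rep = 0; rep < 100; ++rep)
                    for (long j = 0; j < size; j += LINE)
                        sink += a[j];                /* one touch per cache line */
                gettimeofday(&end, NULL);

                long long us = (long long)(end.tv_sec - start.tv_sec) * 1000000
                             + (end.tv_usec - start.tv_usec);
                long long touches = (long long)(size / LINE) * 100;
                printf("%8ld KB: %.3f ns/access (sink=%lu)\n",
                       size / 1024, us * 1000.0 / touches, sink);
                free(a);
            }
            return 0;
        }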

    Read the article

1 2 3  | Next Page >