Search Results

Search found 487 results on 20 pages for '100000'.

Page 7/20

  • The best cross platform (portable) arbitrary precision math library

    - by Siu Ching Pong - Asuka Kenji
    Dear ninjas / hackers / wizards, I'm looking for a good arbitrary precision math library in C or C++. Could you please give me some advice / suggestions? The primary requirements: It MUST handle arbitrarily big integers (my primary interest is in integers). In case you don't know what the word arbitrarily big means, imagine something like 100000! (the factorial of 100000). The precision MUST NOT NEED to be specified during library initialization / object creation. The precision should ONLY be constrained by the available resources of the system. It SHOULD utilize the full power of the platform, and should handle "small" numbers natively. That means on a 64-bit platform, calculating 2^33 + 2^32 should use the available 64-bit CPU instructions. The library SHOULD NOT calculate this in the same way as it does with 2^66 + 2^65 on the same platform. It MUST handle addition (+), subtraction (-), multiplication (*), integer division (/), remainder (%), power (**), increment (++), decrement (--), gcd(), factorial(), and other common integer arithmetic calculations efficiently. The ability to handle functions like sqrt() (square root) and log() (logarithm) that do not produce integer results is a plus. The ability to handle symbolic computations is even better. Here is what I have found so far: Java's BigInteger and BigDecimal classes: I have been using these so far. I have read the source code, but I don't understand the math underneath. It may be based on theories / algorithms that I have never learnt. The built-in integer types or core libraries of bc / Python / Ruby / Haskell / Lisp / Erlang / OCaml / PHP / some other languages: I have used some of these, but I have no idea which library or which kind of implementation they use. What I already know: Using a char as a decimal digit, and a char* as a decimal string, and doing calculations on the digits using a for-loop. Using an int (or a long int, or a long long) as a basic "unit" and an array of them as an arbitrarily long integer, and doing calculations on the elements using a for-loop. Booth's multiplication algorithm. What I don't know: Printing the binary array mentioned above in decimal without using naive methods. Example of a naive method: (1) add the bits from the lowest to the highest: 1, 2, 4, 8, 16, 32, ...; (2) use a char* string as mentioned above to store the intermediate decimal results. What I would appreciate: Good comparisons of GMP, MPFR, decNumber (or other libraries that are good in your opinion). Good suggestions on books / articles that I should read. For example, an illustration with figures of how a non-naive arbitrarily long binary-to-decimal conversion algorithm works would be good. Any help. Please DO NOT answer this question if: you think using a double (or a long double, or a long long double) can solve this problem easily. If you do think so, it means that you don't understand the issue under discussion. You have no experience with arbitrary-precision mathematics. Thank you in advance! Asuka Kenji
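
    For a concrete starting point, GMP covers the integer requirements listed above (native-word limbs for small values, arbitrary growth otherwise). A minimal sketch, assuming GMP and its C++ bindings (gmpxx) are installed; build with g++ fact.cpp -lgmpxx -lgmp:

        // Sketch: 100000! with GMP's C++ interface.
        #include <gmpxx.h>
        #include <iostream>

        int main() {
            mpz_class f = 1;                        // arbitrary-precision integer
            for (unsigned long i = 2; i <= 100000; ++i)
                f *= i;                             // limbs are native machine words
            std::cout << f.get_str().size() << " digits" << std::endl;
            return 0;
        }

    MPFR builds on GMP for the non-integer functions (sqrt, log) mentioned above; symbolic computation is outside the scope of either.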

    Read the article

  • Why is processing a sorted array faster than an unsorted array?

    - by GManNickG
    Here is a piece of code that shows some very peculiar performance. For some strange reason, sorting the data miraculously speeds up the code by almost 6x: #include <algorithm> #include <ctime> #include <iostream> int main() { // generate data const unsigned arraySize = 32768; int data[arraySize]; for (unsigned c = 0; c < arraySize; ++c) data[c] = std::rand() % 256; // !!! with this, the next loop runs faster std::sort(data, data + arraySize); // test clock_t start = clock(); long long sum = 0; for (unsigned i = 0; i < 100000; ++i) { // primary loop for (unsigned c = 0; c < arraySize; ++c) { if (data[c] >= 128) sum += data[c]; } } double elapsedTime = static_cast<double>(clock() - start) / CLOCKS_PER_SEC; std::cout << elapsedTime << std::endl; std::cout << "sum = " << sum << std::endl; } Without std::sort(data, data + arraySize);, the code runs in 11.54 seconds. With the sorted data, the code runs in 1.93 seconds. Initially I thought this might be just a language or compiler anomaly. So I tried it Java... import java.util.Arrays; import java.util.Random; public class Main { public static void main(String[] args) { // generate data int arraySize = 32768; int data[] = new int[arraySize]; Random rnd = new Random(0); for (int c = 0; c < arraySize; ++c) data[c] = rnd.nextInt() % 256; // !!! with this, the next loop runs faster Arrays.sort(data); // test long start = System.nanoTime(); long sum = 0; for (int i = 0; i < 100000; ++i) { // primary loop for (int c = 0; c < arraySize; ++c) { if (data[c] >= 128) sum += data[c]; } } System.out.println((System.nanoTime() - start) / 1000000000.0); System.out.println("sum = " + sum); } } with a similar but less extreme result. My first thought was that sorting brings the data into cache, but my next thought was how silly that is because the array was just generated. What is going on? Why is a sorted array faster than an unsorted array? The code is summing up some independent terms, the order should not matter.
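
    The usual explanation for this effect is CPU branch prediction: once the data is sorted, the data[c] >= 128 branch becomes almost perfectly predictable. A hedged sketch of a branchless rewrite of the hot loop, which typically closes most of the gap (assumes 32-bit int and arithmetic right shift of negative values, as on mainstream compilers):

        // Sketch: branch-free version of the conditional sum.
        long long sumAbove128(const int* data, unsigned n) {
            long long sum = 0;
            for (unsigned c = 0; c < n; ++c) {
                int t = (data[c] - 128) >> 31;   // all ones if data[c] < 128, else 0
                sum += ~t & data[c];             // adds data[c] only when data[c] >= 128
            }
            return sum;
        }

    With no data-dependent branch left, sorted and unsorted inputs should run in roughly the same time.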

    Read the article

  • g++ SSE intrinsics dilemma - value from intrinsic "saturates"

    - by Sriram
    Hi, I wrote a simple program to implement SSE intrinsics for computing the inner product of two large (100000 or more elements) vectors. The program compares the execution time for both, inner product computed the conventional way and using intrinsics. Everything works out fine, until I insert (just for the fun of it) an inner loop before the statement that computes the inner product. Before I go further, here is the code: //this is a sample Intrinsics program to compute inner product of two vectors and compare Intrinsics with traditional method of doing things. #include <iostream> #include <iomanip> #include <xmmintrin.h> #include <stdio.h> #include <time.h> #include <stdlib.h> using namespace std; typedef float v4sf __attribute__ ((vector_size(16))); double innerProduct(float* arr1, int len1, float* arr2, int len2) { //assume len1 = len2. float result = 0.0; for(int i = 0; i < len1; i++) { for(int j = 0; j < len1; j++) { result += (arr1[i] * arr2[i]); } } //float y = 1.23e+09; //cout << "y = " << y << endl; return result; } double sse_v4sf_innerProduct(float* arr1, int len1, float* arr2, int len2) { //assume that len1 = len2. if(len1 != len2) { cout << "Lengths not equal." << endl; exit(1); } /*steps: * 1. load a long-type (4 float) into a v4sf type data from both arrays. * 2. multiply the two. * 3. multiply the same and store result. * 4. add this to previous results. */ v4sf arr1Data, arr2Data, prevSums, multVal, xyz; //__builtin_ia32_xorps(prevSums, prevSums); //making it equal zero. //can explicitly load 0 into prevSums using loadps or storeps (Check). float temp[4] = {0.0, 0.0, 0.0, 0.0}; prevSums = __builtin_ia32_loadups(temp); float result = 0.0; for(int i = 0; i < (len1 - 3); i += 4) { for(int j = 0; j < len1; j++) { arr1Data = __builtin_ia32_loadups(&arr1[i]); arr2Data = __builtin_ia32_loadups(&arr2[i]); //store the contents of two arrays. multVal = __builtin_ia32_mulps(arr1Data, arr2Data); //multiply. xyz = __builtin_ia32_addps(multVal, prevSums); prevSums = xyz; } } //prevSums will hold the sums of 4 32-bit floating point values taken at a time. Individual entries in prevSums also need to be added. __builtin_ia32_storeups(temp, prevSums); //store prevSums into temp. cout << "Values of temp:" << endl; for(int i = 0; i < 4; i++) cout << temp[i] << endl; result += temp[0] + temp[1] + temp[2] + temp[3]; return result; } int main() { clock_t begin, end; int length = 100000; float *arr1, *arr2; double result_Conventional, result_Intrinsic; // printStats("Allocating memory."); arr1 = new float[length]; arr2 = new float[length]; // printStats("End allocation."); srand(time(NULL)); //init random seed. 
// printStats("Initializing array1 and array2"); begin = clock(); for(int i = 0; i < length; i++) { // for(int j = 0; j < length; j++) { // arr1[i] = rand() % 10 + 1; arr1[i] = 2.5; // arr2[i] = rand() % 10 - 1; arr2[i] = 2.5; // } } end = clock(); cout << "Time to initialize array1 and array2 = " << ((double) (end - begin)) / CLOCKS_PER_SEC << endl; // printStats("Finished initialization."); // printStats("Begin inner product conventionally."); begin = clock(); result_Conventional = innerProduct(arr1, length, arr2, length); end = clock(); cout << "Time to compute inner product conventionally = " << ((double) (end - begin)) / CLOCKS_PER_SEC << endl; // printStats("End inner product conventionally."); // printStats("Begin inner product using Intrinsics."); begin = clock(); result_Intrinsic = sse_v4sf_innerProduct(arr1, length, arr2, length); end = clock(); cout << "Time to compute inner product with intrinsics = " << ((double) (end - begin)) / CLOCKS_PER_SEC << endl; //printStats("End inner product using Intrinsics."); cout << "Results: " << endl; cout << " result_Conventional = " << result_Conventional << endl; cout << " result_Intrinsics = " << result_Intrinsic << endl; return 0; } I use the following g++ invocation to build this: g++ -W -Wall -O2 -pedantic -march=i386 -msse intrinsics_SSE_innerProduct.C -o innerProduct Each of the loops above, in both the functions, runs a total of N^2 times. However, given that arr1 and arr2 (the two floating point vectors) are loaded with a value 2.5, the length of the array is 100,000, the result in both cases should be 6.25e+10. The results I get are: Results: result_Conventional = 6.25e+10 result_Intrinsics = 5.36871e+08 This is not all. It seems that the value returned from the function that uses intrinsics "saturates" at the value above. I tried putting other values for the elements of the array and different sizes too. But it seems that any value above 1.0 for the array contents and any size above 1000 meets with the same value we see above. Initially, I thought it might be because all operations within SSE are in floating point, but floating point should be able to store a number that is of the order of e+08. I am trying to see where I could be going wrong but cannot seem to figure it out. I am using g++ version: g++ (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2). Any help on this is most welcome. Thanks, Sriram.

    Read the article

  • Cumulative +1/-1 Cointoss crashes on 1000 iterations. Please advise; c++ boost random libraries

    - by user1731972
    Following some former advice (Multithreaded application, am I doing it right?), I think I have a thread-safe number generator using Boost, but my program crashes when I input 1000 iterations. The output .csv file, when graphed, looks right, but I'm not sure why it's crashing. It's using _beginthread, and everyone is telling me I should use the more (convoluted) _beginthreadex, which I'm not familiar with. If someone could recommend an example, I would greatly appreciate it. Also, someone pointed out I should be passing a second parameter to my _beginthread for the array counting start positions, but I have no idea how to pass more than one parameter other than attempting to use a structure, and I've read that structures and _beginthread don't get along (although I could just use the Boost threads...). A sketch of both points appears after the code. #include <process.h> #include <windows.h> #include <iostream> #include <fstream> #include <time.h> #include <random> #include <boost/random.hpp> //for srand48_r(time(NULL), &randBuffer); which doesn't work #include <stdio.h> #include <stdlib.h> //#include <thread> using namespace std; using namespace boost; using namespace boost::random; void myThread0 (void *dummy ); void myThread1 (void *dummy ); void myThread2 (void *dummy ); void myThread3 (void *dummy ); //for random seeds void initialize(); //from http://stackoverflow.com/questions/7114043/random-number-generation-in-c11-how-to-generate-how-do-they-work uniform_int_distribution<> two(1,2); typedef std::mt19937 MyRNG; // the Mersenne Twister with a popular choice of parameters uint32_t seed_val; // populate somehow MyRNG rng1; // e.g. keep one global instance (per thread) MyRNG rng2; // e.g. keep one global instance (per thread) MyRNG rng3; // e.g. keep one global instance (per thread) MyRNG rng4; // e.g. keep one global instance (per thread) //only needed for shared variables //CRITICAL_SECTION cs1,cs2,cs3,cs4; // global int main() { ofstream myfile; myfile.open ("coinToss.csv"); int rNum; long numRuns; long count = 0; int divisor = 1; float fHolder = 0; long counter = 0; float percent = 0.0; //? //unsigned threadID; //HANDLE hThread; initialize(); HANDLE hThread[4]; const int size = 100000; int array[size]; printf ("Runs (uses multiple of 100,000) "); cin >> numRuns; for (int a = 0; a < numRuns; a++) { hThread[0] = (HANDLE)_beginthread( myThread0, 0, (void*)(array) ); hThread[1] = (HANDLE)_beginthread( myThread1, 0, (void*)(array) ); hThread[2] = (HANDLE)_beginthread( myThread2, 0, (void*)(array) ); hThread[3] = (HANDLE)_beginthread( myThread3, 0, (void*)(array) ); //waits for threads to finish before continuing WaitForMultipleObjects(4, hThread, TRUE, INFINITE); //closes handles I guess? CloseHandle( hThread[0] ); CloseHandle( hThread[1] ); CloseHandle( hThread[2] ); CloseHandle( hThread[3] ); //dump array into calculations //average array into fHolder //this could be split into threads as well for (int p = 0; p < size; p++) { counter += array[p] == 2 ?
1 : -1; //cout << array[p] << endl; //cout << counter << endl; } //this fHolder calculation didn't work //fHolder = counter / size; //so I had to use this cout << counter << endl; fHolder = counter; fHolder = fHolder / size; myfile << fHolder << endl; } } void initialize() { //seed value needs to be supplied //rng1.seed(seed_val*1); rng1.seed((unsigned int)time(NULL)); rng2.seed(((unsigned int)time(NULL))*2); rng3.seed(((unsigned int)time(NULL))*3); rng4.seed(((unsigned int)time(NULL))*4); }; void myThread0 (void *param) { //EnterCriticalSection(&cs1); //aquire the critical section object int *i = (int *)param; for (int x = 0; x < 25000; x++) { //doesn't work, part of merssene twister //i[x] = next(); i[x] = two(rng1); //original srand //i[x] = rand() % 2 + 1; //doesn't work for some reason. //uint_dist2(rng); //i[x] = qrand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs1); // release the critical section object } void myThread1 (void *param) { //EnterCriticalSection(&cs2); //aquire the critical section object int *i = (int *)param; for (int x = 25000; x < 50000; x++) { //param[x] = rand() % 2 + 1; i[x] = two(rng2); //i[x] = rand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs2); // release the critical section object } void myThread2 (void *param) { //EnterCriticalSection(&cs3); //aquire the critical section object int *i = (int *)param; for (int x = 50000; x < 75000; x++) { i[x] = two(rng3); //i[x] = rand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs3); // release the critical section object } void myThread3 (void *param) { //EnterCriticalSection(&cs4); //aquire the critical section object int *i = (int *)param; for (int x = 75000; x < 100000; x++) { i[x] = two(rng4); //i[x] = rand() % 2 + 1; //cout << i[x] << endl; } //LeaveCriticalSection(&cs4); // release the critical section object }
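
    A minimal sketch of the two points asked about: passing several parameters through a struct, and using _beginthreadex, whose handle (unlike the one from _beginthread, which the CRT closes on its own when the thread exits) stays valid for WaitForMultipleObjects and CloseHandle. This is an assumption-laden skeleton for Windows, not a drop-in replacement for the program above:

        // Sketch: struct parameter + _beginthreadex.
        #include <windows.h>
        #include <process.h>

        struct Work {
            int* data;    // shared output array
            int  begin;   // first index this worker fills
            int  end;     // one past the last index
        };

        unsigned __stdcall worker(void* param) {
            Work* w = static_cast<Work*>(param);
            for (int x = w->begin; x < w->end; ++x)
                w->data[x] = 1;               // stand-in for two(rngN)
            return 0;
        }

        int main() {
            const int size = 100000;
            static int results[size];
            Work jobs[4];
            HANDLE h[4];
            for (int t = 0; t < 4; ++t) {
                jobs[t].data  = results;
                jobs[t].begin = t * (size / 4);
                jobs[t].end   = (t + 1) * (size / 4);
                h[t] = (HANDLE)_beginthreadex(NULL, 0, worker, &jobs[t], 0, NULL);
            }
            WaitForMultipleObjects(4, h, TRUE, INFINITE);
            for (int t = 0; t < 4; ++t) CloseHandle(h[t]);
            return 0;
        }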

    Read the article

  • Issues with signal handling [closed]

    - by user34790
    I am trying to actually study the signal handling behavior in multiprocess system. I have a system where there are three signal generating processes generating signals of type SIGUSR1 and SIGUSR1. I have two handler processes that handle a particular type of signal. I have another monitoring process that also receives the signals and then does its work. I have a certain issue. Whenever my signal handling processes generate a signal of a particular type, it is sent to the process group so it is received by the signal handling processes as well as the monitoring processes. Whenever the signal handlers of monitoring and signal handling processes are called, I have printed to indicate the signal handling. I was expecting a uniform series of calls for the signal handlers of the monitoring and handling processes. However, looking at the output I could see like at the beginning the monitoring and signal handling processes's signal handlers are called uniformly. However, after I could see like signal handler processes handlers being called in a burst followed by the signal handler of monitoring process being called in a burst. Here is my code and output #include <iostream> #include <sys/types.h> #include <sys/wait.h> #include <sys/time.h> #include <signal.h> #include <cstdio> #include <stdlib.h> #include <sys/ipc.h> #include <sys/shm.h> #define NUM_SENDER_PROCESSES 3 #define NUM_HANDLER_PROCESSES 4 #define NUM_SIGNAL_REPORT 10 #define MAX_SIGNAL_COUNT 100000 using namespace std; volatile int *usrsig1_handler_count; volatile int *usrsig2_handler_count; volatile int *usrsig1_sender_count; volatile int *usrsig2_sender_count; volatile int *lock_1; volatile int *lock_2; volatile int *lock_3; volatile int *lock_4; volatile int *lock_5; volatile int *lock_6; //Used only by the monitoring process volatile int monitor_count; volatile int usrsig1_monitor_count; volatile int usrsig2_monitor_count; double time_1[NUM_SIGNAL_REPORT]; double time_2[NUM_SIGNAL_REPORT]; //Used only by the main process int total_signal_count; //For shared memory int shmid; const int shareSize = sizeof(int) * (10); double timestamp() { struct timeval tp; gettimeofday(&tp, NULL); return (double)tp.tv_sec + tp.tv_usec / 1000000.; } pid_t senders[NUM_SENDER_PROCESSES]; pid_t handlers[NUM_HANDLER_PROCESSES]; pid_t reporter; void signal_catcher_1(int); void signal_catcher_2(int); void signal_catcher_int(int); void signal_catcher_monitor(int); void signal_catcher_main(int); void terminate_processes() { //Kill the child processes int status; cout << "Time up terminating the child processes" << endl; for(int i=0; i<NUM_SENDER_PROCESSES; i++) { kill(senders[i],SIGKILL); } for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i],SIGKILL); } kill(reporter,SIGKILL); //Wait for the child processes to finish for(int i=0; i<NUM_SENDER_PROCESSES; i++) { waitpid(senders[i], &status, 0); } for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { waitpid(handlers[i], &status, 0); } waitpid(reporter, &status, 0); } int main(int argc, char *argv[]) { if(argc != 2) { cout << "Required parameters missing. 
" << endl; cout << "Option 1 = 1 which means run for 30 seconds" << endl; cout << "Option 2 = 2 which means run until 100000 signals" << endl; exit(0); } int option = atoi(argv[1]); pid_t pid; if(option == 2) { if(signal(SIGUSR1, signal_catcher_main) == SIG_ERR) { perror("1"); exit(1); } if(signal(SIGUSR2, signal_catcher_main) == SIG_ERR) { perror("2"); exit(1); } } else { if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); } if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); } } if(signal(SIGINT, signal_catcher_int) == SIG_ERR) { perror("3"); exit(1); } /////////////////////////////////////////////////////////////////////////////////////// ////////////////////// Initializing the shared memory ///////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////// cout << "Initializing the shared memory" << endl; if ((shmid=shmget(IPC_PRIVATE,shareSize,IPC_CREAT|0660))< 0) { perror("shmget fail"); exit(1); } usrsig1_handler_count = (int *) shmat(shmid, NULL, 0); usrsig2_handler_count = usrsig1_handler_count + 1; usrsig1_sender_count = usrsig2_handler_count + 1; usrsig2_sender_count = usrsig1_sender_count + 1; lock_1 = usrsig2_sender_count + 1; lock_2 = lock_1 + 1; lock_3 = lock_2 + 1; lock_4 = lock_3 + 1; lock_5 = lock_4 + 1; lock_6 = lock_5 + 1; //Initialize them to be zero *usrsig1_handler_count = 0; *usrsig2_handler_count = 0; *usrsig1_sender_count = 0; *usrsig2_sender_count = 0; *lock_1 = 0; *lock_2 = 0; *lock_3 = 0; *lock_4 = 0; *lock_5 = 0; *lock_6 = 0; cout << "End of initializing the shared memory" << endl; ///////////////////////////////////////////////////////////////////////////////////////////// /////////////////// End of initializing the shared memory /////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////// /////////////////////////////Registering the signal handlers/////////////////////////////// /////////////////////////////////////////////////////////////////////////////////////////// cout << "Registering the signal handlers" << endl; for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { if((pid = fork()) == 0) { if(i%2 == 0) { struct sigaction action; action.sa_handler = signal_catcher_1; sigset_t block_mask; action.sa_flags = 0; sigaction(SIGUSR1,&action,NULL); if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); } } else { if(signal(SIGUSR1 ,SIG_IGN) == SIG_ERR) { perror("1"); exit(1); } struct sigaction action; action.sa_handler = signal_catcher_2; action.sa_flags = 0; sigaction(SIGUSR2,&action,NULL); } if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); } while(true) { pause(); } exit(0); } else { //cout << "Registerd the handler " << pid << endl; handlers[i] = pid; } } cout << "End of registering the signal handlers" << endl; ///////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////////End of registering the signal handlers ////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////////////////////////////////////////// ///////////////////////////Registering the monitoring process ////////////////////////////////////// 
//////////////////////////////////////////////////////////////////////////////////////////////////// cout << "Registering the monitoring process" << endl; if((pid = fork()) == 0) { struct sigaction action; action.sa_handler = signal_catcher_monitor; sigemptyset(&action.sa_mask); sigset_t block_mask; sigemptyset(&block_mask); sigaddset(&block_mask,SIGUSR1); sigaddset(&block_mask,SIGUSR2); action.sa_flags = 0; action.sa_mask = block_mask; sigaction(SIGUSR1,&action,NULL); sigaction(SIGUSR2,&action,NULL); if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); } while(true) { pause(); } exit(0); } else { cout << "Monitor's pid is " << pid << endl; reporter = pid; } cout << "End of registering the monitoring process" << endl; ///////////////////////////////////////////////////////////////////////////////////////////////////// ////////////////////////End of registering the monitoring process//////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// //Sleep to make sure that the monitor and handler processes are well initialized and ready to handle signals sleep(5); ////////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////Registering the signal generators/////////////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// cout << "Registering the signal generators" << endl; for(int i=0; i<NUM_SENDER_PROCESSES; i++) { if((pid = fork()) == 0) { if(signal(SIGUSR1, SIG_IGN) == SIG_ERR) { perror("1"); exit(1); } if(signal(SIGUSR2, SIG_IGN) == SIG_ERR) { perror("2"); exit(1); } if(signal(SIGINT, SIG_DFL) == SIG_ERR) { perror("2"); exit(1); } srand(i); while(true) { int signal_id = rand()%2 + 1; if(signal_id == 1) { killpg(getpgid(getpid()), SIGUSR1); while(__sync_lock_test_and_set(lock_4,1) != 0) { } (*usrsig1_sender_count)++; *lock_4 = 0; } else { killpg(getpgid(getpid()), SIGUSR2); while(__sync_lock_test_and_set(lock_5,1) != 0) { } (*usrsig2_sender_count)++; *lock_5=0; } int r = rand()%10 + 1; double s = (double)r/100; sleep(s); } exit(0); } else { //cout << "Registered the sender " << pid << endl; senders[i] = pid; } } //cout << "End of registering the signal generators" << endl; ///////////////////////////////////////////////////////////////////////////////////////////////////// //////////////////////////End of registering the signal generators/////////////////////////////////// ///////////////////////////////////////////////////////////////////////////////////////////////////// //Either sleep for 30 seconds and terminate the program or if the number of signals generated reaches 10000, terminate the program if(option = 1) { sleep(90); terminate_processes(); } else { while(true) { if(total_signal_count >= MAX_SIGNAL_COUNT) { terminate_processes(); } else { sleep(0.001); } } } } void signal_catcher_1(int the_sig) { while(__sync_lock_test_and_set(lock_1,1) != 0) { } (*usrsig1_handler_count) = (*usrsig1_handler_count) + 1; cout << "Signal Handler 1 " << *usrsig1_handler_count << endl; __sync_lock_release(lock_1); } void signal_catcher_2(int the_sig) { while(__sync_lock_test_and_set(lock_2,1) != 0) { } (*usrsig2_handler_count) = (*usrsig2_handler_count) + 1; __sync_lock_release(lock_2); } void signal_catcher_main(int the_sig) { while(__sync_lock_test_and_set(lock_6,1) != 0) { } total_signal_count++; *lock_6 = 0; } void signal_catcher_int(int the_sig) { for(int i=0; 
i<NUM_SENDER_PROCESSES; i++) { kill(senders[i],SIGKILL); } for(int i=0; i<NUM_HANDLER_PROCESSES; i++) { kill(handlers[i],SIGKILL); } kill(reporter,SIGKILL); exit(3); } void signal_catcher_monitor(int the_sig) { cout << "Monitoring process " << *usrsig1_handler_count << endl; } Here is the initial segment of output Monitoring process 0 Monitoring process 0 Monitoring process 0 Monitoring process 0 Signal Handler 1 1 Monitoring process 2 Signal Handler 1 2 Signal Handler 1 3 Signal Handler 1 4 Monitoring process 4 Monitoring process Signal Handler 1 6 Signal Handler 1 7 Monitoring process 7 Monitoring process 8 Monitoring process 8 Signal Handler 1 9 Monitoring process 9 Monitoring process 9 Monitoring process 10 Signal Handler 1 11 Monitoring process 11 Monitoring process 12 Signal Handler 1 13 Signal Handler 1 14 Signal Handler 1 15 Signal Handler 1 16 Signal Handler 1 17 Signal Handler 1 18 Monitoring process 19 Signal Handler 1 20 Monitoring process 20 Signal Handler 1 21 Monitoring process 21 Monitoring process 21 Monitoring process 22 Monitoring process 22 Monitoring process 23 Signal Handler 1 24 Signal Handler 1 25 Monitoring process 25 Signal Handler 1 27 Signal Handler 1 28 Signal Handler 1 29 Here is the segment when the signal handler processes signal handlers are called in a burst Signal Handler 1 456 Signal Handler 1 457 Signal Handler 1 458 Signal Handler 1 459 Signal Handler 1 460 Signal Handler 1 461 Signal Handler 1 462 Signal Handler 1 463 Signal Handler 1 464 Signal Handler 1 465 Signal Handler 1 466 Signal Handler 1 467 Signal Handler 1 468 Signal Handler 1 469 Signal Handler 1 470 Signal Handler 1 471 Signal Handler 1 472 Signal Handler 1 473 Signal Handler 1 474 Signal Handler 1 475 Signal Handler 1 476 Signal Handler 1 477 Signal Handler 1 478 Signal Handler 1 479 Signal Handler 1 480 Signal Handler 1 481 Signal Handler 1 482 Signal Handler 1 483 Signal Handler 1 484 Signal Handler 1 485 Signal Handler 1 486 Signal Handler 1 487 Signal Handler 1 488 Signal Handler 1 489 Signal Handler 1 490 Signal Handler 1 491 Signal Handler 1 492 Signal Handler 1 493 Signal Handler 1 494 Signal Handler 1 495 Signal Handler 1 496 Signal Handler 1 497 Signal Handler 1 498 Signal Handler 1 499 Signal Handler 1 500 Signal Handler 1 501 Signal Handler 1 502 Signal Handler 1 503 Signal Handler 1 504 Signal Handler 1 505 Signal Handler 1 506 Here is the segment when the monitoring processes signal handlers are called in a burst Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 Monitoring process 140 
Monitoring process 140 Why isn't it uniform afterwards? Why are they called in a burst?
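
    One plausible explanation, offered as a hedged note: SIGUSR1 and SIGUSR2 are standard signals, and standard signals are not queued. While one instance is pending or blocked, further instances of the same signal are collapsed into a single delivery, and which process the scheduler runs next decides whose handler fires, so bursts like the ones above are expected. If every occurrence must be observed, POSIX real-time signals sent with sigqueue() are queued per call; a minimal sketch, assuming Linux:

        // Sketch: real-time signals (SIGRTMIN..SIGRTMAX) are queued, unlike SIGUSR1/2.
        #include <signal.h>
        #include <unistd.h>
        #include <cstdio>

        static volatile sig_atomic_t count = 0;

        static void handler(int, siginfo_t*, void*) {
            ++count;                             // siginfo can also carry a payload
        }

        int main() {
            struct sigaction sa = {};
            sa.sa_sigaction = handler;
            sa.sa_flags = SA_SIGINFO;
            sigemptyset(&sa.sa_mask);
            sigaction(SIGRTMIN, &sa, NULL);

            union sigval v;
            v.sival_int = 42;
            for (int i = 0; i < 10; ++i)
                sigqueue(getpid(), SIGRTMIN, v); // each call queues a separate instance

            sleep(1);
            std::printf("handled %d signals\n", (int)count);
            return 0;
        }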

    Read the article

  • jQuery Cycle plugin z-index float problem

    - by Antony Carthy
    When I try to place an element on top of my jQuery Cycle element, it doesn't work. The element is always behind the jQuery cycle element. I use float: right; to position the element, and set its z-index to 100000, to no avail. Firebug sees the Cycle element and its children as having low z-indexes, and shows the floating element to be in the right place. The element never shows above the Cycling images. <!-- the cycling set --> <div id='headerimages'> <img src='images/header1.jpg' alt='' style='' /> <img src='images/header2.jpg' alt='' style='' /> <img src='images/header3.jpg' alt='' style='' /> </div> <!-- the floating element --> <img src='images/logotransparent.png' alt='' id='logo' />

    Read the article

  • AJAX: how to get progress feedback in web apps, and to avoid timeouts on long requests?

    - by David Dombrowsky
    This is a general design question about how to make a web application that will receive a large amount of uploaded data, process it, and return a result, all without the dreaded spinning beach ball for 5 minutes or a possible HTTP timeout. Here are the requirements: make a web form where you can upload a CSV file containing a list of URLs; when the user clicks "submit", the server fetches the file and checks each URL to see if it's alive and what the title tag of the page is; the result is a downloadable CSV file containing the URL and the resulting HTTP code. The input CSV can be very large (100000 rows), so the fetch process might take 5-30 minutes. My solution so far is to have a polling JavaScript loop on the client side, which queries the server every second to determine the overall progress of the job. This seems kludgy to me, and I'm hesitant to accept this as the best solution. I'm using Perl, Template Toolkit, and jQuery, but any solution using any web technology would be acceptable.

    Read the article

  • backslashbox is too wide

    - by Tim
    Hi I feel the first column in my longtable is too wide. Please see the figure below My code is: \begin{center} \begin{longtable}{|c|c|c|c|c|c|} \caption{Error rates for SVM and 100 random counts. \label{tab:svmcount100errortable}}\\ \hline \backslashbox{Concepts}{Train sizes} & 10 & 100 & 1000 & 10000 & 100000\\ \hline 1 & 0.49 (0.00) & 0.49 (0.00) & 0.47 (0.00) & 0.43 (0.00) & 0.33 (0.00)\\ \hline \end{longtable} \end{center} I wonder if there is some way to reduce the width of the first column? Thanks and regards!

    Read the article

  • Combinatorics, probability, dice

    - by TarGz
    A friend of mine asked: if I have two dice and I throw both of them, what is the most frequent sum (of the numbers on the two dice)? I wrote a small script: from random import randrange d = dict((i, 0) for i in range(2, 13)) for i in xrange(100000): d[randrange(1, 7) + randrange(1, 7)] += 1 print d Which prints: 2: 2770, 3: 5547, 4: 8379, 5: 10972, 6: 13911, 7: 16610, 8: 14010, 9: 11138, 10: 8372, 11: 5545, 12: 2746 The question I have is: why is 11 more frequent than 12? In both cases there is only one way (or two, if you count the reverse too) to get such a sum (5 + 6, 6 + 6), so I expected the same probability.
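
    The short answer: 12 can only be rolled as 6+6, while 11 can be rolled as 5+6 or 6+5, two distinct ordered outcomes out of 36, so 11 is roughly twice as likely (2/36 vs 1/36), which matches the counts printed above. A quick exhaustive count, sketched here in C++ rather than Python:

        // Sketch: enumerate all 36 ordered outcomes of two dice and count each sum.
        #include <iostream>

        int main() {
            int ways[13] = {0};                       // ways[s] = ordered pairs summing to s
            for (int a = 1; a <= 6; ++a)
                for (int b = 1; b <= 6; ++b)
                    ++ways[a + b];
            for (int s = 2; s <= 12; ++s)
                std::cout << s << ": " << ways[s] << "/36\n";   // 11 -> 2/36, 12 -> 1/36
            return 0;
        }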

    Read the article

  • Cassandra or mysql 5

    - by saturngod
    Should I use Cassandra in a 100,000-user project? MySQL 5 has full-text search and table partitioning. I'm starting to build a question-and-answer system like Stack Overflow with CodeIgniter; it's a move from vBulletin to a new system. The old vBulletin has around 100,000 users and the total post count is around 80,000. Over the next 3 or 4 years, users and posts will keep growing. So, should I use Cassandra instead of MySQL 5? If I use Cassandra, I need to change from the GridServer at MediaTemple to a DV server at MediaTemple; Cassandra is not built into the hosting system, so I must use a VPS or DV server. If I use MySQL 5, hosting is not a problem, but how about speed and search? By the way, what database does Stack Overflow use?

    Read the article

  • async handler deleted by the wrong thread in django

    - by user3480706
    I run this algorithm in my Django application. When I run it several times from my GUI, the Django local server stops and I get this error: Exception RuntimeError: RuntimeError('main thread is not in main loop',) in ignored Tcl_AsyncDelete: async handler deleted by the wrong thread Aborted (core dumped) Code: print "Learning the sin function" network =MLP.MLP(2,10,1) samples = np.zeros(2000, dtype=[('x', float, 1), ('y', float, 1)]) samples['x'] = np.linspace(-5,5,2000) samples['y'] = np.sin(samples['x']) #samples['y'] = np.linspace(-4,4,2500) for i in range(100000): n = np.random.randint(samples.size) network.propagate_forward(samples['x'][n]) network.propagate_backward(samples['y'][n]) plt.figure(figsize=(10,5)) # Draw real function x = samples['x'] y = samples['y'] #x=np.linspace(-6.0,7.0,50) plt.plot(x,y,color='b',lw=1) samples1 = np.zeros(2000, dtype=[('x1', float, 1), ('y1', float, 1)]) samples1['x1'] = np.linspace(-4,4,2000) samples1['y1'] = np.sin(samples1['x1']) # Draw network approximated function for i in range(samples1.size): samples1['y1'][i] = network.propagate_forward(samples1['x1'][i]) plt.plot(samples1['x1'],samples1['y1'],color='r',lw=3) plt.axis([-2,2,-2,2]) plt.show() plt.close() return HttpResponseRedirect('/charts/charts') How can I fix this error? I need quick help.

    Read the article

  • migrate method Rand(int) Visual Fox Pro to C#.net

    - by ch2o
    I'm migrating Visual FoxPro code to C#.NET. What the Visual FoxPro code does: it generates a 5-digit string ("48963") based on a text string (captured in a textbox); if you always enter the same text string, you always get the same 5-digit string (it is not reversible). My code in C#.NET should generate the same string. I want to migrate the following code (Visual FoxPro 6 to C#): gnLower = 1000 gnUpper = 100000 vcad = 1 For y=gnLower to gnUpper step 52 genClave = **Rand(vcad)** * y vRound = allt(str(int(genclave))) IF Len(vRound) = 3 vDec = Right(allt(str(genClave,10,2)), 2) finClave = vRound+vDec Thisform.txtPass.value = Rand(971); Exit Endif Next y outputs: vcad = 1 return: 99905 vcad = 2 return: 10077 vcad = thanks return: 17200 thks!

    Read the article

  • Missing output when running system command in perl/cgi file

    - by aladine
    I need to write a CGI program and it will display the output of a system command: script.sh echo "++++++" VAR=$(expect -c " spawn ssh -o StrictHostKeyChecking=no $USER@$HOST $CMD match_max 100000 expect \"*?assword:*\" send -- \"$PASS\r\" send -- \"\r\" expect eof ") echo $VAR echo "++++++" In the CGI file: my $command= "ksh ../cgi-bin/script.sh"; my @output= `$command`; print @output; Finally, when I run the CGI file on Unix, $VAR is a very long string including \n and some delimiters. However, when I run it on the web server, the output is ++++++ ++++++ So $VAR goes missing when passed through the web interface/browser. Maybe the problem is that $VAR is a very long string, but is there any way to solve this other than writing the output to a file and then retrieving it from the browser? Thanks if you are interested in my question.

    Read the article

  • How to mimic SMPP connection

    - by vijay.shad
    Hi, I have a situation. My application sends multiple SMS messages to end clients; the number varies from 50,000 to 100,000 messages at any point in time. To achieve this I am using Kannel as the SMS sender interface, so my solution is complete, but only in the development environment! Before going to production I am supposed to test this solution. Are there any suggestions for creating a testing environment? To give more detail about my environment: Kannel uses an SMPP connection to deliver messages to the end client, so I think I need to mimic an SMPP server for Kannel.

    Read the article

  • pathinfo vs fnmatch

    - by zaf
    There was a small debate regarding the speed of fnmatch over pathinfo here : http://stackoverflow.com/questions/2692536/how-to-check-if-file-is-php I wasn't totally convinced so decided to benchmark the two functions. Using dynamic and static paths showed that pathinfo was faster. Is my benchmarking logic and conclusion valid? I include a sample of the results which are in seconds for 100,000 iterations on my machine : dynamic path pathinfo 3.79311800003 fnmatch 5.10071492195 x1.34 static path pathinfo 1.03921294212 fnmatch 2.37709188461 x2.29 Code: <pre> <?php $iterations=100000; // Benchmark with dynamic file path print("dynamic path\n"); $i=$iterations; $t1=microtime(true); while($i-->0){ $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php'; if(pathinfo($f,PATHINFO_EXTENSION)=='php') $d=uniqid(); } $t2=microtime(true) - $t1; print("pathinfo $t2\n"); $i=$iterations; $t1=microtime(true); while($i-->0){ $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php'; if(fnmatch('*.php',$f)) $d=uniqid(); } $t3 = microtime(true) - $t1; print("fnmatch $t3\n"); print('x'.round($t3/$t2,2)."\n\n"); // Benchmark with static file path print("static path\n"); $f='/'.uniqid().'/'.uniqid().'/'.uniqid().'/'.uniqid().'.php'; $i=$iterations; $t1=microtime(true); while($i-->0) if(pathinfo($f,PATHINFO_EXTENSION)=='php') $d=uniqid(); $t2=microtime(true) - $t1; print("pathinfo $t2\n"); $i=$iterations; $t1=microtime(true); while($i-->0) if(fnmatch('*.php',$f)) $d=uniqid(); $t3=microtime(true) - $t1; print("fnmatch $t3\n"); print('x'.round($t3/$t2,2)."\n\n"); ?> </pre>

    Read the article

  • mysql view performance

    - by vamsivanka
    I have a table with about 100,000 users in it. First case: explain select * from users where state = 'ca' When I do an explain plan for the above query I get the cost as 5200. Second case: Create or replace view vw_users as select * from users Explain select * from vw_users where state = 'ca' When I do an explain plan on the second query I get the cost as 100,000. How does the where clause against the view work? Is the where clause applied after the view retrieves all the rows? Please let me know how I can fix this issue. Thanks

    Read the article

  • Using Ruby Hash instead of Rails ActiveRecord in Coffeescript / Morris.JS

    - by Vanessa L'olzorz
    I'm following the Railscast #223 that introduced Morris.JS. I generate a data set called @orders_yearly in my controller and in my view I have the following to try and render the graph: <%= content_tag :div, "", id: "orders_chart", data: {orders: @orders_yearly} %> Calling @orders_yearly.inspect shows it's just a simple hash: {2009=>1000, 2010=>2000, 2011=>4000, 2012=>100000} I'll need to modify the values for xkey and ykeys in coffeescript to work, but I'm not sure how to make it work with my data set: jQuery -> Morris.Line element: 'orders_chart' data: $('#orders_chart').data('orders') xkey: 'purchased_at' # <------------------ replace with what? ykeys: ['price'] # <---------------------- replace with what? labels: ['Price'] Anyone have any ideas? Thanks!

    Read the article

  • How is schoolbook long division an O(n^2) algorithm?

    - by eSKay
    Premise: This Wikipedia page suggests that the computational complexity of Schoolbook long division is O(n^2). Deduction: Instead of taking "Two n-digit numbers", if I take one n-digit number and one m-digit number, then the complexity would be O(n*m). Contradiction: Suppose you divide 100000000 (n digits) by 1000 (m digits), you get 100000, which takes six steps to arrive at. Now, if you divide 100000000 (n digits) by 10000 (m digits), you get 10000 . Now this takes only five steps. Conclusion: So, it seems that the order of computation should be something like O(n/m). Question: Who is wrong, me or Wikipedia, and where?
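
    A hedged reconstruction of the usual accounting: dividing an n-digit number by an m-digit number produces about n - m + 1 quotient digits, and each digit costs an O(m) multiply-and-subtract, so the total work is

        T(n, m) \approx c \, m \, (n - m + 1) \;\le\; c \, \frac{(n+1)^2}{4} \;=\; O(n^2),

    since m(n - m + 1) is maximized near m \approx (n+1)/2. The six and five "steps" counted in the examples above are only the n - m + 1 quotient digits; the O(m) work inside each step is what the O(n/m) guess leaves out.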

    Read the article

  • Raspberry Pi cluster, neuron networks and brain simulation

    - by jokoon
    Since the RBPI (Raspberry Pi) has very low power consumption and a very low production price, one could build a very big cluster with them. I'm not sure, but a cluster of 100000 RBPIs would take little power and little room. Now, I think it might not be as powerful as existing supercomputers in terms of FLOPS or other sorts of computing measurements, but could it allow better neural network simulation? I'm not sure if saying "1 CPU = 1 neuron" is a reasonable statement, but it seems valid enough. So does it mean such a cluster would be more efficient for neural network simulation, since it's far more parallel than other classical clusters?

    Read the article

  • Schemas and tables versus user-ids in a single table using PostgreSQL

    - by gvkv
    I'm developing a web app and I've come to a fork in the road with respect to database structure and I don't know which direction to take. I have a database with user information that I can structure one of two ways. The first is to create a schema and a set of tables for each user (duplicating the structure for each user) and the second is to create a single set of tables and query information based on user-id. Suppose 100000 users. Here are my questions: Considering security, performance, scalability and administration where does each choice lie? Would the answers change for 1000000 or 10000? Is there a set of best practices that lead to one choice or the other? It seems to me that multiple schemas are more secure since it's trivial to restrict user privileges but what about performance and scalability? Administration seems like a wash since dumping (and restoring) lots of schemas isn't any more difficult than dumping a few.

    Read the article

  • SSIS Dataflow From Excel Empty Rows

    - by Gerard
    Hi all, I am using an SSIS data flow to import data into SQL 2008. My data source is an Excel file. The data flow is working; however, it seems to be importing empty rows from the Excel file, and I don't understand why. For example, I have data in rows 1 to 100,000, but when the data flow task runs it might say it is importing 200,000 rows. When I then import the data back into Excel, I get 200,000 rows of data with 100,000 empty rows in between the data. Can someone please help? Thanks

    Read the article

  • Data structure in c for fast look-up/insertion/removal of integers (from a known finite domain)

    - by MrDatabase
    I'm writing a mobile phone based game in c. I'm interested in a data structure that supports fast (amortized O(1) if possible) insertion, look-up, and removal. The data structure will store integers from the domain [0, n] where n is known ahead of time (it's a constant) and n is relatively small (on the order of 100000). So far I've considered an array of integers where the "ith" bit is set iff the "ith" integer is contained in the set (so a[0] is integers 0 through 31, a[1] is integers 32 through 63 etc). Is there an easier way to do this in c?
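
    The bit-array idea described above is essentially a bitset and already gives O(1) insert, lookup, and removal using roughly n/8 bytes of storage. A minimal sketch, written so it compiles as either C or C++:

        // Sketch: fixed-domain bitset over [0, n] with O(1) operations.
        #include <stdint.h>
        #include <string.h>

        #define MAXVAL 100000
        static uint32_t bits[MAXVAL / 32 + 1];

        static void set_add(int v)      { bits[v >> 5] |=  (1u << (v & 31)); }
        static void set_remove(int v)   { bits[v >> 5] &= ~(1u << (v & 31)); }
        static int  set_contains(int v) { return (bits[v >> 5] >> (v & 31)) & 1u; }

        int main(void) {
            memset(bits, 0, sizeof bits);   // static storage is zeroed anyway
            set_add(42);
            set_remove(42);
            return set_contains(42);        // 0: 42 was removed
        }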

    Read the article

  • How can I add methods from a Java class as global functions in Javascript using Rhino?

    - by gooli
    I have a simple Java class that has some methods: public class Utils { public void deal(String price, int amount) { // .... } public void bid(String price, int amount) { // .... } public void offer(String price, int amount) { // .... } } I would like to create an instance of this class and allow the Javascript code to call the methods directly, like so: deal("1.3736", 100000); bid("1.3735", 500000); The only way I could figure out for now was to use ScriptEngine engine = new ScriptEngineManager().getEngineByName("js"); engine.put("utils", new Utils()); and then use utils.deal(...) in the Javascript code. I can also write wrapper functions in Javascript for each method, but there should be a simpler way to do this automatically for all the public methods of a class.

    Read the article

  • NSNumberFormatter to display custom labels for 10^n (10000 -> 10k)

    - by Michele Colombo
    I need to display numbers on a plot axis. The values could change, but I want to avoid overly long numbers that would ruin the readability of the graph. My thought was to group every 3 digits and substitute them with K, M and so on (or a custom character). So: 1 - 1, 999 - 999, 1.000 - 1k, 1.200 - 1.2k, 1.280 - 1.2k, 12.800 - 12.8k, 999.999 - 999.9k, 1.000.000 - 1M, ... Note that I'll probably only need to format round numbers (1, 10, 1000, 1500, 2000, 10000, 20000, 30000, 100000, ...). Is that possible with NSNumberFormatter? I saw that it has a setFormat method, but I don't know how customizable it is. I'm using NSNumberFormatter because the graph object I use wants one to set the label format, and I want to avoid changing my data to set the labels.

    Read the article

  • SQLite.Net Issue With BeginTransaction

    - by cam
    I'm trying to use System.Data.Sqlite library, and I'm following the documentation about optimizing inserts so I copied this code directly out of the documentation: using (SQLiteTransaction mytransaction = myconnection.BeginTransaction()) { using (SQLiteCommand mycommand = new SQLiteCommand(myconnection)) { SQLiteParameter myparam = new SQLiteParameter(); int n; mycommand.CommandText = "INSERT INTO [MyTable] ([MyId]) VALUES(?)"; mycommand.Parameters.Add(myparam); for (n = 0; n < 100000; n ++) { myparam.Value = n + 1; mycommand.ExecuteNonQuery(); } } mytransaction.Commit(); } Now, I initialize I connection right before that by using SqlConnection myconnection = new SqlConnection("Data Source=blah"); I have a Database named blah, with the correct tables and values. The problem is when I run this code, it says "Operation is not valid due to the current state of the object" I've tried changing the code around several times, and it still points to BeginTransaction. What gives?

    Read the article
