Search Results

Search found 908 results on 37 pages for 'optimistic concurrency'.

Page 11/37 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • iPhone: NSOperationQueue running operations serially

    - by Greg Maletic
    I have a singleton NSOperationQueue that handles all of my network requests. I'm noticing, however, that when I have one particularly long operation running (this particular operation takes at least 25 seconds), my other operations don't run until it completes. maxConcurrentOperationCount is set to NSOperationQueueDefaultMaxConcurrentOperationCount, so I don't believe that's the issue. Any reason why this would be happening? Besides spawning multiple NSOperationQueues (a solution that I'm not sure would work, nor am I sure it's a good idea), what's the best way to fix this problem? Thanks.

    Read the article

  • Ways to poll server status

    - by Yijinsei
    Hi guys, I am trying to create a JSP page that will show the status of a group of local servers. Currently I have a scheduled class that constantly polls to check the status of the servers at a 30-second interval, with a 5-second delay to wait for each server's reply, and provides the JSP page with the information. However I find this approach inaccurate, as it takes some time before the information in the scheduled class is updated. Do you guys have a better way to check the status of several servers within a local network?
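
    One way to make the page feel current (a sketch only; the host list, port and 30-second interval are assumptions, not the poster's code) is to poll every server on its own scheduled thread and keep the latest result in a shared map that the JSP simply reads:

        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class StatusPoller {
            private final Map<String, Boolean> status = new ConcurrentHashMap<String, Boolean>();
            private final ScheduledExecutorService scheduler;

            public StatusPoller(List<String> hosts) {
                // one thread per host, so a slow server cannot delay the others
                scheduler = Executors.newScheduledThreadPool(hosts.size());
                for (final String host : hosts) {
                    scheduler.scheduleWithFixedDelay(new Runnable() {
                        public void run() {
                            status.put(host, isReachable(host));
                        }
                    }, 0, 30, TimeUnit.SECONDS);
                }
            }

            // hypothetical check; replace with a real ping/socket/HTTP probe
            private boolean isReachable(String host) {
                try {
                    java.net.Socket s = new java.net.Socket();
                    s.connect(new java.net.InetSocketAddress(host, 80), 5000); // 5 s timeout
                    s.close();
                    return true;
                } catch (java.io.IOException e) {
                    return false;
                }
            }

            public Map<String, Boolean> snapshot() { return status; } // read by the JSP
        }

    Because each host gets its own thread, one unresponsive server delays only its own entry rather than the whole sweep.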

    Read the article

  • Why is my multithreaded Java program not maxing out all my cores on my machine?

    - by James B
    Hi, I have a program that starts up and creates an in-memory data model and then creates a (command-line-specified) number of threads to run several string checking algorithms against an input set and that data model. The work is divided amongst the threads along the input set of strings, and then each thread iterates over the same in-memory data model instance (which is never updated again, so there are no synchronization issues). I'm running this on a Windows 2003 64-bit server with 2 quad-core processors, and from looking at Windows Task Manager the cores aren't being maxed out (nor do they look particularly taxed) when I run with 10 threads. Is this normal behaviour? It appears that 7 threads all complete a similar amount of work in a similar amount of time, so would you recommend running with 7 threads instead? Should I run it with more threads? ...Although I assume this could be detrimental as the JVM will do more context switching between the threads. Alternatively, should I run it with fewer threads? Alternatively, what would be the best tool I could use to measure this? ...Would a profiling tool help me out here - indeed, is one of the several profilers better at detecting bottlenecks (assuming I have one here) than the rest? Note, the server is also running SQL Server 2005 (this may or may not be relevant), but nothing much is happening on that database when I am running my program. Note also, the threads are only doing string matching; they aren't doing any I/O or database work or anything else they may need to wait on. Thanks in advance, -James
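
    For what it's worth, a common starting point for purely CPU-bound work is to size the pool to the JVM's reported hardware threads and measure from there. The sketch below assumes the string checks are already wrapped as Runnables (a hypothetical setup, not the poster's code):

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class Runner {
            public static void runChecks(List<Runnable> stringCheckTasks) throws InterruptedException {
                // for pure CPU-bound work, more threads than cores mostly adds context switching
                int cores = Runtime.getRuntime().availableProcessors(); // 8 on a dual quad-core box
                ExecutorService pool = Executors.newFixedThreadPool(cores);
                for (Runnable task : stringCheckTasks) {
                    pool.submit(task);
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }
        }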

    Read the article

  • Erlang-style light-weight processes in .NET

    - by alexey
    Is there any way to implement Erlang-style lightweight processes in .NET? I found some projects that implement the Erlang messaging model (the actor model), for example Axum. But I found nothing about a lightweight process implementation. I mean multiple processes that run in the context of a single OS thread or OS process.

    Read the article

  • Linux ext3 readdir and concurrent updates

    - by Wangnick
    Dear all, we are receiving about 10000 messages per hour. We store them as individual files in hourly directories on an ext3 filesystem. The file name includes a sequence number. We use rsync to mirror these files every 20 seconds to another location (via a SAN, but that doesn't matter). Sometimes an rsync run picks up files n-3, n-2, n-1, n+1, and then the next rsync run continues with n, n+2, n+3, n+4 and so on. Is it possible that when one process creates files in a certain sequence within a directory, another process using readdir() sees the files appearing in a different sequence? Kind regards, Sebastian

    Read the article

  • Understanding this Pascal-FC threaded code

    - by dmindreader
        Program Parcial2;
        type
          buffer = channel of integer;
        var
          buffers : array [1..2] of buffer;
          val : integer;

        process sleeper (id:integer);
        var i : integer;
        begin
          for i := 1 to 10 do
          begin
            sleep (random(10*id));
            buffers (id):any;
          end;
        end;

        process troll;
        begin
          buffers[1]: random(10);
        end;

    What are buffers(id):any and buffers[1]:random(10) doing?

    Read the article

  • Is it dangerous to set off an autoreleased NSOperationQueue?

    - by Paperflyer
    I have a task that takes a rather long time and should run in the background. According to the documentation, this can be done using an NSOperationQueue. However, I do not want to keep a class-global copy of the NSOperationQueue since I really only use it for that one task. Hence, I just set it to autorelease and hope that it won't get released before the task is done. It works, like this: NSInvocationOperation *theTask = [NSInvocationOperation alloc]; theTask = [theTask initWithTarget:self selector:@selector(doTask:) object:nil]; NSOperationQueue *operationQueue = [[NSOperationQueue new] autorelease]; [operationQueue addOperation:theTask]; [theTask release]; I am kind of worried, though. Is this guaranteed to work? Or might operationQueue get deallocated at some point and take theTask with it?

    Read the article

  • Multi-Threading - Is this the right approach?

    - by HonorGod
    Experts - I need some advice in the following scenario. I have a configuration file with a list of tasks. Each task can have zero, one or more dependencies. I want to execute these tasks in parallel [right now they are being executed sequentially]. The idea is to have a main program read the configuration file and load all the tasks, then read the individual tasks and give each to an executor [callable] that will perform the task and return results in a Future. When a task is submitted to the executor (thread), it will wait for its dependencies to finish first and then perform its own task. Is this the right approach? Are there any other better approaches using Java 1.5 features?
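
    One Java 5 sketch of that idea (the task bodies, names and dependency map are placeholders): submit every task as a Callable, keep the returned Futures in a map, and have each task call get() on its dependencies' Futures before doing its own work, so the executor handles the ordering.

        import java.util.List;
        import java.util.Map;
        import java.util.concurrent.*;

        public class DependentTaskRunner {
            // work: task name -> task body; deps: task name -> names it must wait for (possibly empty)
            public static void runAll(Map<String, Callable<Void>> work,
                                      final Map<String, List<String>> deps,
                                      int threads) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                final ConcurrentMap<String, Future<Void>> futures =
                        new ConcurrentHashMap<String, Future<Void>>();
                final CountDownLatch allSubmitted = new CountDownLatch(1);
                for (Map.Entry<String, Callable<Void>> entry : work.entrySet()) {
                    final String name = entry.getKey();
                    final Callable<Void> body = entry.getValue();
                    futures.put(name, pool.submit(new Callable<Void>() {
                        public Void call() throws Exception {
                            allSubmitted.await();               // every Future is registered by now
                            for (String dep : deps.get(name)) {
                                futures.get(dep).get();         // block until the dependency finishes
                            }
                            return body.call();
                        }
                    }));
                }
                allSubmitted.countDown();
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }
        }

    One caveat: a task blocked in get() still occupies a pool thread, so either keep the pool at least as deep as the longest dependency chain or submit the tasks in topological order.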

    Read the article

  • Multithreading in Java

    - by Vijay Selvaraj
    I'm working with core Java and IBM WebSphere MQ 6.0. We have a standalone module, say DBComponent, that hits the database and fetches a result set based on a runtime query. The query is passed to the application via the MQ messaging medium. We have a trigger configured for the queue which invokes the DBComponent whenever a message is available in the queue. The DBComponent consumes the message, constructs the query and returns the result set to another queue. In this overall process we use log4j to log statements to a log file for auditing. The connection to the database is pooled using Apache pool. I am trying to check whether the log messages are logged correctly using a sample program. The program places the input message on the queue and checks for the logs in the log file. The trigger method invocation is expected to complete before I try to check for the message in the log file, but every time my check runs first, causing it to fail. Even introducing a Thread.sleep(time) doesn't solve the case. How can I keep my method execution waiting until the trigger operation completes? Any suggestion will be helpful.
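
    Rather than one blind sleep, the checking program can poll the log file until the expected line appears or a timeout expires; this tolerates however long the trigger and DBComponent actually take. A rough sketch (the file path, expected text and the 500 ms poll interval are all assumptions):

        import java.io.IOException;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.Files;
        import java.nio.file.Paths;

        public class LogAwait {
            /** Polls the log file until it contains expectedText or timeoutMillis elapses. */
            public static boolean waitForLogLine(String logPath, String expectedText, long timeoutMillis)
                    throws IOException, InterruptedException {
                long deadline = System.currentTimeMillis() + timeoutMillis;
                while (System.currentTimeMillis() < deadline) {
                    String content = new String(Files.readAllBytes(Paths.get(logPath)),
                                                StandardCharsets.UTF_8);
                    if (content.contains(expectedText)) {
                        return true;          // the trigger and DBComponent have finished logging
                    }
                    Thread.sleep(500);        // poll every half second instead of one blind sleep
                }
                return false;                 // timed out; treat the check as a failure
            }
        }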

    Read the article

  • How is thread synchronization implemented, at the assembly language level?

    - by Martin
    While I'm familiar with concurrent programming concepts such as mutexes and semaphores, I have never understood how they are implemented at the assembly language level. I imagine there being a set of memory "flags" saying: lock A is held by thread 1 lock B is held by thread 3 lock C is not held by any thread etc But how is access to these flags synchronized between threads? Something like this naive example would only create a race condition: mov edx, [myThreadId] wait: cmp [lock], 0 jne wait mov [lock], edx ; I wanted an exclusive lock but the above ; three instructions are not an atomic operation :(
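
    The missing piece is an atomic read-modify-write instruction (for example XCHG or LOCK CMPXCHG on x86): the test and the store happen as one indivisible step, so two threads cannot both see the lock as free. Higher-level runtimes expose the same primitive; as a rough illustration (not how any particular OS mutex is implemented), a compare-and-swap spinlock in Java looks like this:

        import java.util.concurrent.atomic.AtomicBoolean;

        public class SpinLock {
            private final AtomicBoolean locked = new AtomicBoolean(false);

            public void lock() {
                // compareAndSet is backed by an atomic instruction, so the "check" and the
                // "set" cannot be interleaved by another thread the way the mov sequence can
                while (!locked.compareAndSet(false, true)) {
                    Thread.yield();   // busy-wait; a real mutex would ask the OS to sleep the thread
                }
            }

            public void unlock() {
                locked.set(false);
            }
        }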

    Read the article

  • Concurrent web requests with Ruby (Sinatra?)?

    - by cbmeeks
    I have a Sinatra app that basically takes some input values and then finds data matching those values from external services like Flickr, Twitter, etc. For example, the input "Chattanooga Choo Choo" would go out and find images of the Chattanooga Choo Choo on Flickr and tweets from Twitter, etc. Right now I have something like: @images = Flickr::...find...images.. @tweets = Twitter::...find...tweets... @results << @images @results << @tweets So my question is, is there an efficient way in Ruby to run those requests concurrently, instead of waiting for the image request to finish before starting the tweet request? Thanks.

    Read the article

  • Locking NFS files in PHP

    - by Oli
    Part of my latest webapp needs to write to file a fair amount as part of its logging. One problem I've noticed is that if there are a few concurrent users, the writes can overwrite each other (instead of appending to the file). I assume this is because the destination file can be open in a number of places at the same time. flock(...) is usually superb but it doesn't appear to work on NFS... which is a huge problem for me as the production server uses an NFS array. The closest thing I've seen to an actual solution involves trying to create a lock dir and waiting until it can be created. To say this lacks elegance is the understatement of the year, possibly the decade. Any better ideas? Edit: I should add that I don't have root on the server and doing the storage in another way isn't really feasible any time soon, not least within my deadline.

    Read the article

  • .Net4 ConcurrentDictionary: Tips & Tricks

    - by SDReyes
    Hi guys, I started using the new ConcurrentDictionary from .NET 4 yesterday to implement simple caching for a threading project. But I'm wondering what I have to take care of, or be careful about, when using it? What have been your experiences using it?

    Read the article

  • Java Swing Threading with Updatable JProgressBar

    - by Anthony Sparks
    First off, I've been working with Java's concurrency package quite a bit lately, but I have found an issue that I am stuck on. I want to have an Application, and the Application can have a SplashScreen with a status bar and the loading of other data. So I decided to use SwingUtilities.invokeAndWait( call the splash component here ). The SplashScreen then appears with a JProgressBar and runs a group of threads. But I can't seem to get a good handle on things. I've looked over SwingWorker and tried using it for this purpose, but the thread just returns. Here is a bit of pseudo-code and the points I'm trying to achieve: Have an Application that has a SplashScreen that pauses while loading info. Be able to run multiple threads under the SplashScreen. Have the progress bar of the SplashScreen updatable, yet not exit until all threads are done. Launching splash screen: try { SwingUtilities.invokeAndWait( SplashScreen ); } catch (InterruptedException e) { } catch (InvocationTargetException e) { } Splash screen construction: SplashScreen extends JFrame implements Runnable { public void run() { //run threads //while updating status bar } } I have tried many things including SwingWorkers, Threads using CountDownLatches, and others. The CountDownLatches actually worked in the manner I wanted for the processing, but I was unable to update the GUI. When using the SwingWorkers, either the invokeAndWait was basically nullified (which is their purpose) or it wouldn't update the GUI even when using a PropertyChangeListener. If someone else has a couple of ideas it would be great to hear them. Thanks in advance.
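
    One arrangement that tends to work (a sketch only; loadPart() is a stand-in for the real start-up steps): keep the splash frame on the EDT, do the loading inside a SwingWorker, publish progress from doInBackground(), move the JProgressBar in process(), and switch to the main window in done():

        import javax.swing.*;

        public class SplashLoader extends SwingWorker<Void, Integer> {
            private final JProgressBar bar;
            private static final int STEPS = 10;

            public SplashLoader(JProgressBar bar) { this.bar = bar; }

            @Override protected Void doInBackground() throws Exception {
                for (int i = 1; i <= STEPS; i++) {
                    loadPart(i);                   // hypothetical: one chunk of the real start-up work
                    publish(i * 100 / STEPS);      // safe: process() runs on the EDT
                }
                return null;
            }

            @Override protected void process(java.util.List<Integer> chunks) {
                bar.setValue(chunks.get(chunks.size() - 1));
            }

            @Override protected void done() {
                SwingUtilities.getWindowAncestor(bar).dispose();  // close the splash
                // ...then show the main application window here
            }

            private void loadPart(int i) throws InterruptedException { Thread.sleep(300); }
        }

    Kicked off with new SplashLoader(progressBar).execute(); doInBackground() is free to fan the work out to its own ExecutorService and publish() as each piece completes, so the bar still updates while multiple loading threads run.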

    Read the article

  • HttpServletRequest.getServerName() occasionally returning null in concurrent use?

    - by oconnor0
    Under JBoss 4.0.1SP1, I have a servlet that makes multiple, concurrent calls to web services that are running under the same instance. I'm using request.getServerName() (on HttpServletRequest) to construct the endpoint URL. This normally works fine, but every once in a while returns null. I hadn't seen this before running the web service requests in parallel, and so I guessed that sharing the HttpServletRequest among threads won't always work or something. Any ideas on fixing this?
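
    A defensive pattern worth trying (a sketch with made-up names): the container only guarantees the request object on the thread that is servicing the request, so read getServerName() once on that thread and hand the finished, immutable URL string to the worker threads instead of the request itself:

        import javax.servlet.http.HttpServletRequest;
        import java.util.concurrent.Callable;

        public class EndpointFactory {
            /** Build the endpoint URL on the request thread, before any workers are started. */
            public static String endpointFor(HttpServletRequest request, String servicePath) {
                return request.getScheme() + "://" + request.getServerName()
                        + ":" + request.getServerPort() + servicePath;
            }

            /** The worker only ever sees the finished URL string, never the request object. */
            public static Callable<String> callServiceTask(final String endpointUrl) {
                return new Callable<String>() {
                    public String call() throws Exception {
                        return invokeWebService(endpointUrl);  // hypothetical web-service call
                    }
                };
            }

            private static String invokeWebService(String url) { return ""; /* stub */ }
        }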

    Read the article

  • Run java thread at specific times

    - by rmarimon
    I have a web application that synchronizes with a central database four times per hour. The process usually takes 2 minutes. I would like to run this process as a thread at X:55, X:10, X:25, and X:40 so that the users know that at X:00, X:15, X:30, and X:45 they have a clean copy of the database. It is just about managing expectations. I have gone through the executors in java.util.concurrent, but the scheduling is done with scheduleAtFixedRate, which I believe provides no guarantee about when this is actually run relative to the hour. I could use an initial delay so that the first run is close to the target time and then schedule every 15 minutes, but it seems that this would probably diverge over time. Is there an easier way to schedule the thread to run 5 minutes before every quarter hour?
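
    scheduleAtFixedRate does keep a steady period; the wall-clock alignment has to come from computing the first delay yourself. A sketch that aims the first run at the next X:10, X:25, X:40 or X:55 and then repeats every 15 minutes:

        import java.util.Calendar;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class SyncScheduler {
            public static void start(Runnable syncTask) {
                ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
                scheduler.scheduleAtFixedRate(syncTask, millisToNextSlot(),
                                              15 * 60 * 1000, TimeUnit.MILLISECONDS);
            }

            /** Milliseconds until the next X:10, X:25, X:40 or X:55 (5 minutes before each quarter hour). */
            static long millisToNextSlot() {
                Calendar now = Calendar.getInstance();
                Calendar next = (Calendar) now.clone();
                next.set(Calendar.SECOND, 0);
                next.set(Calendar.MILLISECOND, 0);
                int minute = now.get(Calendar.MINUTE);
                int[] slots = {10, 25, 40, 55};
                for (int slot : slots) {
                    if (minute < slot) {
                        next.set(Calendar.MINUTE, slot);
                        return next.getTimeInMillis() - now.getTimeInMillis();
                    }
                }
                next.add(Calendar.HOUR_OF_DAY, 1);   // past X:55, so aim for (X+1):10
                next.set(Calendar.MINUTE, 10);
                return next.getTimeInMillis() - now.getTimeInMillis();
            }
        }

    If drift over weeks of uptime is a worry, schedule each run as a one-shot task that recomputes millisToNextSlot() for the next slot, or reach for a cron-style scheduler such as Quartz.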

    Read the article

  • Deadlock in ThreadPoolExecutor

    - by Vitaly
    Encountered a situation when ThreadPoolExecutor is parked in the execute(Runnable) function while all the ThreadPool threads are waiting in the getTask func and workQueue is empty. Does anybody have any ideas? The ThreadPoolExecutor is created with ArrayBlockingQueue, and corePoolSize == maximumPoolSize = 4.
    [Edit] To be more precise, the thread is blocked in the ThreadPoolExecutor.execute(Runnable command) func. It has the task to execute, but doesn't do it.
    [Edit2] The executor is blocked somewhere inside the working queue (ArrayBlockingQueue).
    [Edit3] The callstack:
        thread = front_end(224)
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:747)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:778)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1114)
        at java.util.concurrent.locks.ReentrantLock$NonfairSync.lock(ReentrantLock.java:186)
        at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:262)
        at java.util.concurrent.ArrayBlockingQueue.offer(ArrayBlockingQueue.java:224)
        at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:653)
        at net.listenThread.WorkersPool.execute(WorkersPool.java:45)
    At the same time the workQueue is empty (checked using remote debug).
    [Edit4] Code working with ThreadPoolExecutor:
        public WorkersPool(int size) {
          pool = new ThreadPoolExecutor(size, size, IDLE_WORKER_THREAD_TIMEOUT, TimeUnit.SECONDS,
              new ArrayBlockingQueue<Runnable>(WORK_QUEUE_CAPACITY),
              new ThreadFactory() {
                @NotNull
                private final AtomicInteger threadsCount = new AtomicInteger(0);

                @NotNull
                public Thread newThread(@NotNull Runnable r) {
                  final Thread thread = new Thread(r);
                  thread.setName("net_worker_" + threadsCount.incrementAndGet());
                  return thread;
                }
              },
              new RejectedExecutionHandler() {
                public void rejectedExecution(@Nullable Runnable r, @Nullable ThreadPoolExecutor executor) {
                  Verify.warning("new task " + r + " is discarded");
                }
              });
        }

        public void execute(@NotNull Runnable task) {
          pool.execute(task);
        }

        public void stopWorkers() throws WorkersTerminationFailedException {
          pool.shutdownNow();
          try {
            pool.awaitTermination(THREAD_TERMINATION_WAIT_TIME, TimeUnit.SECONDS);
          } catch (InterruptedException e) {
            throw new WorkersTerminationFailedException("Workers-pool termination failed", e);
          }
        }
      }

    Read the article

  • Sql Server Maintenance Plan Tasks & Completion

    - by Ben
    Hi All, I have a maintenance plan that looks like this... Client 1 Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client Client 2 Import Data (Success) -> Process Data (Success) -> Post Process (Completion) -> Next Client Client N ... Import Data and Process Data are calling jobs and Post Process is an Execute Sql task. If Import Data or Process Data Fail, it goes to the next client Import Data... Both Import Data and Process Data are jobs that contain SSIS packages that are using the built-in SQL logging provider. My expectation with the configuration as it stands is: Client 1 Import Data Runs: Failure - Client 2 Import Data | Success Process Data Process Data Runs: Failure - Client 2 Import Data | Success Post Process Post Process Runs: Completion - Success or Failure - Next Client Import Data This isn't what I'm seeing in my logs though... I see several Client Import Data SSIS log entries, then several Post Process log entries, then back to Client Import Data! Arg!! What am I doing wrong? I didn't think the "success" piece of Client 1 Import Data would kick off until it... well... succeeded aka finished! The logs seem to indicate otherwise though... I really need these tasks to be consecutive not concurrent. Is this possible? Thanks!

    Read the article

  • What should be the ideal number of parallel java threads for copying a large set of files from a qua

    - by ukgenie
    What should be the ideal number of parallel Java threads for copying a large set of files from a quad core Linux box to an external shared folder? I can see that with a single thread it is taking a hell of a lot of time to move the files one by one. Multiple threads improve the copy performance, but I don't know what the exact number of threads should be. I am using the Java executor service to create the thread pool.
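
    There is no single ideal number: the copy is I/O-bound, so the sweet spot depends on the source disk and the share far more than on the four cores. A sketch that leaves the pool size as a parameter so it can simply be benchmarked (try 2, 4, 8, 16 and time each run); the paths and error handling are placeholders:

        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.StandardCopyOption;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ParallelCopy {
            public static void copyAll(List<Path> files, final Path targetDir, int threads)
                    throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(threads);
                for (final Path src : files) {
                    pool.submit(new Runnable() {
                        public void run() {
                            try {
                                Files.copy(src, targetDir.resolve(src.getFileName()),
                                           StandardCopyOption.REPLACE_EXISTING);
                            } catch (IOException e) {
                                e.printStackTrace();   // or collect failures for a retry pass
                            }
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(2, TimeUnit.HOURS);
            }
        }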

    Read the article

  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "PNGOUT" util on a few hundred PNG files. I simply did this: find $BASEDIR -iname "*png" -exec pngout {} \; And then I looked at my CPU monitor and noticed only one of the cores was used, which is quite sad. In this day and age of dual-, quad-, hexa- and octo-core desktops, how do I simply parallelize this task with Bash? (It's not the first time I've had such a need; quite a lot of these utils are mono-threaded... I already had the case with mp3 encoders.) Would simply running all the pngout processes in the background do? How would my find command look then? (I'm not too sure how to mix find and the '&' character.) If I have three hundred pictures, this would mean swapping between three hundred processes, which doesn't seem great anyway!? Or should I copy my three hundred or so files into "nb dirs", where "nb dirs" would be the number of cores, then run "nb finds" concurrently? (which would be close enough) But how would I do this?

    Read the article

  • Is the lock returned by ReentrantReadWriteLock equivalent to its read and write locks?

    - by Todd
    Hello, I have been looking around for the answer to this, but no joy. In Java, is using the lock created by ReentrantReadWriteLock equivalent to getting the read and write locks returned by readLock() and writeLock()? In other words, can I expect the read and write locks associated with the ReentrantReadWriteLock to be requested and held by synchronizing on the ReentrantReadWriteLock? My gut says "no", since any object can be used for synchronization, and I wouldn't think that there would be special behavior for ReentrantReadWriteLock. However, special behavior is the corner case of which I may not be aware. Thanks, Todd
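
    Your gut is right: synchronizing on the ReentrantReadWriteLock object only uses its ordinary monitor, exactly as with any other object, and neither requests nor blocks on the read or write lock. The two mechanisms are independent, so readers and writers have to go through readLock() and writeLock() explicitly; a small sketch of the contrast:

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        public class Cache {
            private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
            private int value;

            public int get() {
                rwLock.readLock().lock();          // many readers may hold this at once
                try {
                    return value;
                } finally {
                    rwLock.readLock().unlock();
                }
            }

            public void set(int v) {
                rwLock.writeLock().lock();         // exclusive
                try {
                    value = v;
                } finally {
                    rwLock.writeLock().unlock();
                }
            }

            public void broken() {
                synchronized (rwLock) {            // only the object's monitor: does NOT block
                    value++;                       // threads holding readLock()/writeLock()
                }
            }
        }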

    Read the article

  • "The usage of semaphores is subtly wrong"

    - by Hoonose
    This past semester I was taking an OS practicum in C, in which the first project involved making a threads package, then writing a multiple producer-consumer program to demonstrate the functionality. However, after getting grading feedback, I lost points for "The usage of semaphores is subtly wrong" and "The program assumes preemption (e.g. uses yield to change control)" (We started with a non-preemptive threads package then added preemption later. Note that the comment and example contradict each other. I believe it doesn't assume either, and would work in both environments). This has been bugging me for a long time - the course staff was kind of overwhelmed, so I couldn't ask them what's wrong with this over the semester. I've spent a long time thinking about this and I can't see the issues. If anyone could take a look and point out the error, or reassure me that there actually isn't a problem, I'd really appreciate it. I believe the syntax should be pretty standard in terms of the thread package functions (minithreads and semaphores), but let me know if anything is confusing.
        #include <stdio.h>
        #include <stdlib.h>
        #include "minithread.h"
        #include "synch.h"

        #define BUFFER_SIZE 16
        #define MAXCOUNT 100

        int buffer[BUFFER_SIZE];
        int size, head, tail;
        int count = 1;
        int out = 0;
        int toadd = 0;
        int toremove = 0;
        semaphore_t empty;
        semaphore_t full;
        semaphore_t count_lock; /* Semaphore to keep a lock on the global variables
                                   for maintaining the counts */

        /* Method to handle the working of a student
         * The ID of a student is the corresponding minithread_id */
        int student(int total_burgers) {
          int n, i;
          semaphore_P(count_lock);
          while ((out + toremove) < total_burgers) {
            n = genintrand(BUFFER_SIZE);
            n = (n <= total_burgers - (out + toremove)) ? n : total_burgers - (out + toremove);
            printf("Student %d wants to get %d burgers ...\n", minithread_id(), n);
            toremove += n;
            semaphore_V(count_lock);
            for (i = 0; i < n; i++) {
              semaphore_P(empty);
              out = buffer[tail];
              printf("Student %d is taking burger %d.\n", minithread_id(), out);
              tail = (tail + 1) % BUFFER_SIZE;
              size--;
              toremove--;
              semaphore_V(full);
            }
            semaphore_P(count_lock);
          }
          semaphore_V(count_lock);
          printf("Student %d is done.\n", minithread_id());
          return 0;
        }

        /* Method to handle the working of a cook
         * The ID of a cook is the corresponding minithread_id */
        int cook(int total_burgers) {
          int n, i;
          printf("Creating Cook %d\n", minithread_id());
          semaphore_P(count_lock);
          while ((count + toadd) <= total_burgers) {
            n = genintrand(BUFFER_SIZE);
            n = (n <= total_burgers - (count + toadd) + 1) ? n : total_burgers - (count + toadd) + 1;
            printf("Cook %d wants to put %d burgers into the burger stack ...\n", minithread_id(), n);
            toadd += n;
            semaphore_V(count_lock);
            for (i = 0; i < n; i++) {
              semaphore_P(full);
              printf("Cook %d is putting burger %d into the burger stack.\n", minithread_id(), count);
              buffer[head] = count++;
              head = (head + 1) % BUFFER_SIZE;
              size++;
              toadd--;
              semaphore_V(empty);
            }
            semaphore_P(count_lock);
          }
          semaphore_V(count_lock);
          printf("Cook %d is done.\n", minithread_id());
          return 0;
        }

        /* Method to create our multiple producers and consumers
         * and start their respective threads by fork */
        void starter(int* c) {
          int i;
          for (i = 0; i < c[2]; i++) {
            minithread_fork(cook, c[0]);
          }
          for (i = 0; i < c[1]; i++) {
            minithread_fork(student, c[0]);
          }
        }

        /* The arguments are passed as command line parameters
         * argv[1] is the no of students
         * argv[2] is the no of cooks */
        void main(int argc, char *argv[]) {
          int pass_args[3];
          pass_args[0] = MAXCOUNT;
          pass_args[1] = atoi(argv[1]);
          pass_args[2] = atoi(argv[2]);
          size = head = tail = 0;
          empty = semaphore_create();
          semaphore_initialize(empty, 0);
          full = semaphore_create();
          semaphore_initialize(full, BUFFER_SIZE);
          count_lock = semaphore_create();
          semaphore_initialize(count_lock, 1);
          minithread_system_initialize(starter, pass_args);
        }

    Read the article

  • Image processing in a multithreaded mode using Java

    - by jadaaih
    Hi Folks, I am supposed to process images in a multithreaded mode using Java. I may have a varying number of images, whereas my number of threads is fixed. I have to process all the images using the fixed set of threads. I am just stuck on how to do it; I had a look at ThreadPoolExecutor, BlockingQueues etc. and I am still not clear. What I am doing is: get the images and add them to a LinkedBlockingQueue holding the runnable image-processor code; create a ThreadPoolExecutor for which one of the arguments is that LinkedBlockingQueue; iterate through a for loop up to the queue size and call threadpoolexecutor.execute(linkedblockingqueue.poll()). All I see is that it processes only 100 images, which is the minimum thread count / queue size passed in. I see I am seriously wrong in my understanding somewhere; how do I process all the images in sets of 100 (threads) until they are all done? Any examples or pseudocode would be highly helpful. Thanks! J
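
    A sketch of the usual shape of this (processImage() is a placeholder for the real per-image work): create the fixed pool once, submit one task per image no matter how many images there are, and let the executor's internal queue feed the threads; there is no need to poll the queue yourself or to batch the images by hand.

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ImageBatch {
            public static void processAll(List<String> imagePaths, int threadCount)
                    throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(threadCount);  // e.g. 100
                for (final String path : imagePaths) {
                    pool.submit(new Runnable() {
                        public void run() {
                            processImage(path);   // hypothetical: the real per-image work
                        }
                    });
                }
                pool.shutdown();                          // no new tasks; queued ones still run
                pool.awaitTermination(1, TimeUnit.DAYS);  // block until every image is processed
            }

            private static void processImage(String path) { /* ... */ }
        }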

    Read the article
