Search Results

Search found 20549 results on 822 pages for 'concurrency control'.

Page 69/822

  • Is this technically thread safe despite being mutable?

    - by Finbarr
    Yes, the private member variable bar should be final, right? But in this instance, simply reading the value of an int is an atomic operation. So is this technically thread safe?

        class foo {
            private int bar;

            public foo(int bar) {
                this.bar = bar;
            }

            public int getBar() {
                return bar;
            }
        }

        // assume an infinite number of threads repeatedly calling getBar on the same instance of foo
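
    A minimal sketch of how the visibility guarantee can be made explicit, assuming the value really never changes after construction (the class is capitalized Foo here only by convention): marking the field final gives safe publication under the Java Memory Model, so any thread that sees the Foo reference also sees the constructor's write, and the int read itself is atomic either way.

        class Foo {
            private final int bar;       // final: safely published along with the Foo reference

            public Foo(int bar) {
                this.bar = bar;
            }

            public int getBar() {
                return bar;              // plain read of an int is atomic; final settles visibility
            }
        }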

    Read the article

  • How do I ensure data consistency in this concurrent situation?

    - by MalcomTucker
    The problem is this: I have multiple competing threads (100+) that need to access one database table. Each thread will pass a String name; where that name exists in the table, the database should return the id for the row, and where the name doesn't already exist, the name should be inserted and the id returned. There can only ever be one instance of name in the database, i.e. name must be unique. How do I ensure that thread one doesn't insert name1 at the same time as thread two also tries to insert name1? In other words, how do I guarantee the uniqueness of name in a concurrent environment? This also needs to be as efficient as possible, as it has the potential to be a serious bottleneck. I am using MySQL and Java. Thanks
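
    One common approach is to let MySQL enforce uniqueness and resolve the race in Java rather than coordinating the threads yourself. A sketch, assuming a hypothetical names table with an AUTO_INCREMENT id, a UNIQUE index on name, and a driver that maps duplicate-key errors to SQLIntegrityConstraintViolationException:

        import java.sql.*;

        // Sketch: assumes CREATE TABLE names (id BIGINT AUTO_INCREMENT PRIMARY KEY,
        //                                     name VARCHAR(255) NOT NULL UNIQUE)
        class NameDao {
            static long getOrCreateId(Connection con, String name) throws SQLException {
                try (PreparedStatement ins = con.prepareStatement(
                        "INSERT INTO names (name) VALUES (?)", Statement.RETURN_GENERATED_KEYS)) {
                    ins.setString(1, name);
                    ins.executeUpdate();
                    try (ResultSet keys = ins.getGeneratedKeys()) {
                        if (keys.next()) return keys.getLong(1);
                    }
                } catch (SQLIntegrityConstraintViolationException duplicate) {
                    // another thread inserted the same name first; fall through and read its id
                }
                try (PreparedStatement sel = con.prepareStatement(
                        "SELECT id FROM names WHERE name = ?")) {
                    sel.setString(1, name);
                    try (ResultSet rs = sel.executeQuery()) {
                        rs.next();
                        return rs.getLong(1);
                    }
                }
            }
        }

    Under contention, the losing thread's INSERT trips the unique index and its SELECT then returns the id the winning thread created, so name stays unique without any application-level locking.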

    Read the article

  • Limiting object allocation over multiple threads

    - by John
    I have an application which retrieves and caches the results of a client's query. The client then requests different chunks of data, and the application sends the relevant results and removes them from the cache. A new requirement for this application is that there needs to be a run-time configurable maximum number of results which may be cached. I've taken the naive approach and implemented this by using a counter under a lock which is incremented every time a result is cached and decremented whenever a result is removed from the cache. Unfortunately, this has drastically reduced the application's performance when processing a large number of concurrent requests. I have tried both a critical section lock and a spin-lock; the performance improves a bit with a spin-lock, but is still unacceptably slow. Is there a better way to solve this problem which may improve performance? Right now I have a thread pool that services requests, and each request is tied to a Request object which stores the cached results for that particular request. Here is a simplified pseudocode version of my current implementation:

        void ResultCallback( Result result, Request *request )
        {
            lock totalResultsCached
            lock cachedLimit
            if ( totalResultsCached + 1 > cachedLimit )
            {
                unlock cachedLimit
                unlock totalResultsCached
                // cancel the request
                return;
            }
            ++totalResultsCached;
            unlock cachedLimit
            unlock totalResultsCached
            request.add(result)
        }

        void SendResults( int resultsToSend, Request *request )
        {
            while ( resultsToSend > 0 )
            {
                send(request.remove())
                lock totalResultsCached
                --totalResultsCached
                unlock totalResultsCached
                --resultsToSend;
            }
        }
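
    One way to take the lock off the hot path is to keep the count in a single atomic integer and reserve a slot with a compare-and-set loop, so contended threads retry instead of blocking. A sketch in Java (only because the pseudocode above is language-neutral; std::atomic<int> gives the same shape in C++), with hypothetical names:

        import java.util.concurrent.atomic.AtomicInteger;

        class CacheBudget {
            private final AtomicInteger cached = new AtomicInteger();
            private volatile int limit;                    // run-time configurable

            CacheBudget(int limit) { this.limit = limit; }

            void setLimit(int newLimit) { this.limit = newLimit; }

            // Called from ResultCallback: returns false when the cache is full and the request
            // should be cancelled, true when a slot has been reserved for the result.
            boolean tryReserve() {
                while (true) {
                    int current = cached.get();
                    if (current >= limit) {
                        return false;
                    }
                    if (cached.compareAndSet(current, current + 1)) {
                        return true;
                    }
                }
            }

            // Called from SendResults whenever a result leaves the cache.
            void release() {
                cached.decrementAndGet();
            }
        }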

    Read the article

  • Spring.Net how does WebApplicationContext.GetObject handle concurrent requests?

    - by Alfamale
    Apologies if I have missed something obvious here but having gone through the documentation, forums and googled for a number of hours, I just can't find a definitive answer to the following questions: How does the WebApplicationContext.GetObject() method handle concurrent requests? Are the requests serialized or executed in parallel? Is there any performance data available to demonstrate how it behaves under load? Thanks in advance for your help, Andrew

    Read the article

  • What is the absolute fastest way to implement a concurrent queue with ONLY one consumer and one producer?

    - by JohnPristine
    java.util.concurrent.ConcurrentLinkedQueue comes to mind, but is it really optimal for this two-thread scenario? I am looking for the minimum latency possible on both sides (producer and consumer). If the queue is empty you can immediately return null, AND if the queue is full you can immediately discard the entry you are offering. Does ConcurrentLinkedQueue use super fast and light locks (AtomicBoolean)? Has anyone benchmarked ConcurrentLinkedQueue, or does anyone know the ultimate fastest way of doing this? Additional details: I imagine the queue should be a fair one, meaning the producer should not make the consumer wait any longer than it needs to (by front-running it), and vice versa.
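
    For what it's worth, ConcurrentLinkedQueue is lock-free rather than lock-based: it links nodes with compare-and-swap operations, and it is unbounded, so there is no "full" case to reject. For a strict one-producer/one-consumer pair, a bounded ring buffer can be cheaper still, because each index is only ever written by one thread and needs no CAS at all. A rough, unbenchmarked sketch (capacity must be a power of two):

        import java.util.concurrent.atomic.AtomicLong;

        // Minimal single-producer/single-consumer ring buffer.
        final class SpscQueue<E> {
            private final Object[] buffer;
            private final int mask;
            private final AtomicLong head = new AtomicLong(); // next slot to read  (consumer only)
            private final AtomicLong tail = new AtomicLong(); // next slot to write (producer only)

            SpscQueue(int capacityPowerOfTwo) {
                buffer = new Object[capacityPowerOfTwo];
                mask = capacityPowerOfTwo - 1;
            }

            boolean offer(E e) {                              // producer thread only
                long t = tail.get();
                if (t - head.get() == buffer.length) return false;  // full: caller discards
                buffer[(int) (t & mask)] = e;
                tail.lazySet(t + 1);                          // ordered write, cheaper than a full fence
                return true;
            }

            @SuppressWarnings("unchecked")
            E poll() {                                        // consumer thread only
                long h = head.get();
                if (h == tail.get()) return null;             // empty: return immediately
                E e = (E) buffer[(int) (h & mask)];
                buffer[(int) (h & mask)] = null;
                head.lazySet(h + 1);
                return e;
            }
        }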

    Read the article

  • strange bug - how to pause a java program?

    - by TerraNova993
    I'm trying to: display a text in a jLabel, wait for two seconds, then write a new text in the jLabel. This should be simple, but I get a strange bug: the first text is never written; the application just waits for 2 seconds and then displays the final text. Here is the example code:

        private void testButtonActionPerformed(java.awt.event.ActionEvent evt) {
            displayLabel.setText("Clicked!");

            // first method, with System timer
            /*
            long t0 = System.currentTimeMillis();
            long t1 = System.currentTimeMillis();
            do {
                t1 = System.currentTimeMillis();
            } while ((t1 - t0) < (2000));
            */

            // second method, with Thread.sleep()
            try {
                Thread.currentThread().sleep(2000);
            } catch (InterruptedException e) {}

            displayLabel.setText("STOP");
        }

    With this code, the text "Clicked!" is never displayed. I just get a 2-second pause and then the "STOP" text. I tried the System timer with a loop, and Thread.sleep(), but both methods give the same result.
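
    Both variants block the Swing Event Dispatch Thread inside the action listener, so the label cannot repaint until the handler returns, which is why "Clicked!" never appears. A sketch of the usual fix, scheduling the second update with javax.swing.Timer instead of sleeping (displayLabel as in the original):

        private void testButtonActionPerformed(java.awt.event.ActionEvent evt) {
            displayLabel.setText("Clicked!");

            // fire once, 2000 ms later, back on the Event Dispatch Thread
            javax.swing.Timer timer = new javax.swing.Timer(2000, e -> displayLabel.setText("STOP"));
            timer.setRepeats(false);
            timer.start();
        }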

    Read the article

  • Multithreaded java cache for objects that are heavy to create?

    - by krosenvold
    I need to cache some objects with fairly heavy creation times, and I need exactly-once creation semantics. It should be possible to create objects for different CacheKeys concurrently. I think I need something that (under the hood) does something like this: ConcurrentHashMap<CacheKey, Future<HeavyObject>>. Are there any existing open-source implementations of this that I can re-use?
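
    Guava's CacheBuilder/LoadingCache is one widely used open-source option for exactly this. The hand-rolled equivalent is the "memoizer" pattern from Java Concurrency in Practice: a ConcurrentHashMap of Futures where putIfAbsent decides which thread creates the value. A minimal sketch, with CacheKey and HeavyObject generalized to K and V:

        import java.util.concurrent.*;

        class HeavyObjectCache<K, V> {
            private final ConcurrentHashMap<K, Future<V>> cache = new ConcurrentHashMap<>();

            V get(K key, Callable<V> factory) throws InterruptedException, ExecutionException {
                Future<V> f = cache.get(key);
                if (f == null) {
                    FutureTask<V> task = new FutureTask<>(factory);
                    f = cache.putIfAbsent(key, task);    // only one thread wins this race per key
                    if (f == null) {
                        f = task;
                        task.run();                      // the winner performs the heavy creation
                    }
                }
                return f.get();                          // other threads for the same key block here
                // (a production version would also evict the entry if factory threw an exception)
            }
        }

    Creation for different keys proceeds concurrently, while every thread asking for the same key gets the single Future and waits for it; since Java 8, ConcurrentHashMap.computeIfAbsent gives similar exactly-once semantics with less code.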

    Read the article

  • Tomcat thread waiting on and locking the same resource

    - by Adam Matan
    Consider the following Java/Tomcat thread dump:

        "http-0.0.0.0-4080-4" daemon prio=10 tid=0x0000000019a2b000 nid=0x360e in Object.wait() [0x0000000040b71000]
           java.lang.Thread.State: WAITING (on object monitor)
                at java.lang.Object.wait(Native Method)
                - waiting on <0x00002ab5565fe358> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
                at java.lang.Object.wait(Object.java:485)
                at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:458)
                - locked <0x00002ab5565fe358> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
                at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:484)
                at java.lang.Thread.run(Thread.java:662)

    Is this a deadlock? It seems that the same resource (0x00002ab5565fe358) is both locked and waited on - what does it mean?

    Read the article

  • Does OpenCL allow concurrent writes to same memory address?

    - by Wonko
    Are two (or more) different threads allowed to write to the same memory location in global space in OpenCL? The write always changes a uchar from 0 to 1, so the outcome should be predictable, but I'm getting erratic results in my program, so I'm wondering if the reason could be that some of the writes fail. Could it help to declare the buffer write-only and copy it to a read-only buffer afterwards?

    Read the article

  • Keeping track of threads when creating them recursively

    - by 66replica
    I'm currently working on some code for my Programming Languages course. I can't post the code, but I'm permitted to talk about some high-level concepts that I'm struggling with and receive input on them. Basically, the code is a recursive DFS on an undirected graph that I'm supposed to convert to a concurrent program. My professor already specified that I should create my threads in the recursive DFS method and then join them in another method. Basically, I'm having trouble thinking of how I should keep track of the threads I'm creating so I can join all of them in the other method. I'm thinking an array of Threads, but I'm unsure how to add each new thread to the array, or even if that's the right direction.
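
    Since the number of threads isn't known up front, a thread-safe collection is easier to grow than a plain array. A sketch under assumed names (Node and visit stand in for the assignment's own types): every thread is recorded in a ConcurrentLinkedQueue the moment it is created, and the join method simply drains that queue.

        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;

        class ConcurrentDfs {
            private final Queue<Thread> spawned = new ConcurrentLinkedQueue<>();

            void dfs(Node node) {
                visit(node);
                for (Node next : node.neighbours()) {
                    Thread t = new Thread(() -> dfs(next));
                    spawned.add(t);        // remember every thread before it starts
                    t.start();
                }
            }

            void joinAll() throws InterruptedException {
                Thread t;
                while ((t = spawned.poll()) != null) {
                    t.join();              // threads added later are still in the queue when we loop
                }
            }

            private void visit(Node node) { /* mark visited, record results, etc. */ }

            interface Node { Iterable<Node> neighbours(); }
        }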

    Read the article

  • How to debug ConcurrentModificationException?

    - by Dani
    I encountered a ConcurrentModificationException and by looking at it I can't see the reason why it's happening; the area throwing the exception and all the places modifying the collection are surrounded by synchronized (this.locks.get(id)) { ... } // locks is a HashMap<String, Object>. I tried to catch the pesky thread, but all I could nail (by setting a breakpoint in the exception) is that the throwing thread owns the monitor while the other thread (there are two threads in the program) sleeps. How should I proceed? What do you usually do when you encounter similar threading issues?
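
    One thing worth ruling out first: ConcurrentModificationException is not necessarily about multiple threads at all. It is typically thrown when a collection is structurally modified while one of its iterators is still in use, which a single thread can do by itself inside a for-each loop, and which locking every access does not prevent. A small self-contained illustration, plus the iterator-based fix:

        import java.util.ArrayList;
        import java.util.Iterator;
        import java.util.List;

        public class CmeDemo {
            public static void main(String[] args) {
                List<String> items = new ArrayList<>(List.of("a", "b", "c"));

                // Throws ConcurrentModificationException on the next iteration step,
                // even though only one thread is involved:
                // for (String s : items) {
                //     if (s.equals("b")) items.remove(s);
                // }

                // Safe structural modification while iterating:
                for (Iterator<String> it = items.iterator(); it.hasNext(); ) {
                    if (it.next().equals("b")) {
                        it.remove();             // modify through the iterator, not the list
                    }
                }
                System.out.println(items);        // [a, c]
            }
        }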

    Read the article

  • Are spinlocks a good choice for a memory allocator?

    - by dsimcha
    I've suggested to the maintainers of the D programming language runtime a few times that the memory allocator/garbage collector should use spinlocks instead of regular OS critical sections. This hasn't really caught on. Here are the reasons I think spinlocks would be better:

    1. At least in synthetic benchmarks that I did, it's several times faster than OS critical sections when there's contention for the memory allocator/GC lock. Edit: empirically, using spinlocks didn't even have measurable overhead in a single-core environment, probably because locks need to be held for such a short period of time in a memory allocator.
    2. Memory allocations and similar operations usually take a small fraction of a timeslice, and even a small fraction of the time a context switch takes, making it silly to context switch in the case of contention.
    3. A garbage collection in the implementation in question stops the world anyhow. There won't be any spinning during a collection.

    Are there any good reasons not to use spinlocks in a memory allocator/garbage collector implementation?
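
    For readers unfamiliar with the mechanism being proposed, a spinlock is roughly the following (a sketch in Java with AtomicBoolean, purely for illustration; the D runtime would use its own atomic primitives): a contended thread keeps retrying a compare-and-set instead of being descheduled, which only pays off when the critical section is as short as a typical allocation.

        import java.util.concurrent.atomic.AtomicBoolean;

        // Minimal test-and-set spinlock: contended threads burn CPU instead of
        // being descheduled by the OS.
        final class SpinLock {
            private final AtomicBoolean locked = new AtomicBoolean(false);

            void lock() {
                while (!locked.compareAndSet(false, true)) {
                    Thread.onSpinWait();   // hint that we are busy-waiting (Java 9+)
                }
            }

            void unlock() {
                locked.set(false);
            }
        }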

    Read the article

  • Waiting for a subset of threads in a Java ThreadPool

    - by David Semeria
    Let's say I have a thread pool containing X items, and a given task employs Y of these items (where Y is much smaller than X). I want to wait for all of the threads of a given task (Y items) to finish, not the entire thread pool. If the thread pool's execute() method returned a reference to the employed thread I could simply join() to each of these Y threads, but it doesn't. Does anyone know of an elegant way to accomplish this? Thanks.
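
    One common way to do this without handling threads directly is to submit the Y pieces of work as tasks and wait on their Futures; invokeAll blocks until exactly those tasks have completed, regardless of what else the pool is running. A sketch with made-up sizes for X and Y:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class WaitForSubset {
            public static void main(String[] args) throws Exception {
                ExecutorService pool = Executors.newFixedThreadPool(16);   // the X workers

                List<Callable<Void>> taskParts = new ArrayList<>();        // the Y parts of one task
                for (int i = 0; i < 4; i++) {
                    final int part = i;
                    taskParts.add(() -> {
                        System.out.println("working on part " + part);
                        return null;
                    });
                }

                // invokeAll blocks until just these Y futures are done, not the whole pool
                List<Future<Void>> done = pool.invokeAll(taskParts);
                for (Future<Void> f : done) {
                    f.get();   // rethrows any exception thrown by that part
                }
                System.out.println("this task's parts are finished");
                pool.shutdown();
            }
        }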

    Read the article

  • What happens when I MPI_Send to a process that has finished?

    - by nieldw
    What happens when I MPI_Send to a process that has finished? I am learning MPI, and writing a small sugar-distribution simulation in C. When the factories stop producing, those processes end. When warehouses run empty, they end. Can I somehow tell if the shop's order to a warehouse did not succeed (because the warehouse process has ended) by looking at the return value of MPI_Send? The documentation doesn't mention a specific error code for this situation, only that no error is returned on success. Can I do: if (MPI_Send(...)) { ... /* destination has ended */ ... } and disregard the error code? Thanks

    Read the article

  • Create table class as a singleton

    - by Mark
    I have a class that I use as a table. This class holds an array of 16 row objects, and these row objects each have 6 double variables. The values of these rows are set once and never change. Would it be good practice to make this table a singleton? The advantage is that it costs less memory, but the table will be called from multiple threads, so I would have to synchronize my code, which may make the application a bit slower. However, lookups in this table are probably a very small portion of the total code that is executed. EDIT: This is my code; are there better ways to do this, or is this good practice? Removed the synchronized keyword according to recommendations in this question.

        final class HalfTimeTable {
            private HalfTimeRow[] table = new HalfTimeRow[16];

            private static final HalfTimeTable instance = new HalfTimeTable();

            private HalfTimeTable() {
                if (instance != null) {
                    throw new IllegalStateException("Already instantiated");
                }
                table[0] = new HalfTimeRow(4.0, 1.2599, 0.5050, 1.5, 1.7435, 0.1911);
                table[1] = new HalfTimeRow(8.0, 1.0000, 0.6514, 3.0, 1.3838, 0.4295);
                // etc
            }

            @Override
            @Deprecated
            public Object clone() throws CloneNotSupportedException {
                throw new CloneNotSupportedException();
            }

            public static HalfTimeTable getInstance() {
                return instance;
            }

            public HalfTimeRow getRow(int rownumber) {
                return table[rownumber];
            }
        }
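
    Since the rows are set once and never change, the table is effectively immutable, and immutable state can be read from any number of threads without synchronization once it is safely published, so dropping the synchronized keyword is reasonable here. One idiomatic alternative sketch (assuming the same HalfTimeRow type) is the enum-based singleton, which gets exactly-once construction and safe publication from JVM class initialization:

        // Sketch: enum singleton with an effectively immutable table. The array is fully
        // populated before the constant is published and never mutated afterwards, so
        // concurrent reads need no locking.
        enum HalfTimeTables {
            INSTANCE;

            private final HalfTimeRow[] table = new HalfTimeRow[16];

            HalfTimeTables() {
                table[0] = new HalfTimeRow(4.0, 1.2599, 0.5050, 1.5, 1.7435, 0.1911);
                table[1] = new HalfTimeRow(8.0, 1.0000, 0.6514, 3.0, 1.3838, 0.4295);
                // etc.
            }

            public HalfTimeRow getRow(int rowNumber) {
                return table[rowNumber];
            }
        }

    Callers would then use HalfTimeTables.INSTANCE.getRow(n).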

    Read the article

  • java threads don't see shared boolean changes

    - by andymur
    Here is the code:

        class Aux implements Runnable {
            private Boolean isOn = false;
            private String statusMessage;
            private final Object lock;

            public Aux(String message, Object lock) {
                this.lock = lock;
                this.statusMessage = message;
            }

            @Override
            public void run() {
                for (;;) {
                    synchronized (lock) {
                        if (isOn && "left".equals(this.statusMessage)) {
                            isOn = false;
                            System.out.println(statusMessage);
                        } else if (!isOn && "right".equals(this.statusMessage)) {
                            isOn = true;
                            System.out.println(statusMessage);
                        }
                        if ("left".equals(this.statusMessage)) {
                            System.out.println("left " + isOn);
                        }
                    }
                }
            }
        }

        public class Question {
            public static void main(String[] args) {
                Object lock = new Object();
                new Thread(new Aux("left", lock)).start();
                new Thread(new Aux("right", lock)).start();
            }
        }

    In this code I expect to see: left, right, left, right and so on. But when the thread with the "left" message changes isOn to false, the thread with the "right" message doesn't see it, and I get "right true" and "left false" console messages: the left thread never sees isOn become true, but the right thread can't change it because it always sees the old isOn value (true). When I add the volatile modifier to isOn nothing changes, but if I change isOn to some class with a boolean field and change that field instead, then the threads see the changes and it works fine. Thanks in advance.
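
    A detail worth noting: isOn is an instance field, so each Aux object has its own copy and the two threads never actually share it; that is also why volatile changes nothing, and why moving the flag into a separate object that both runnables reference makes it work. A sketch of the same program with one shared AtomicBoolean passed to both threads:

        import java.util.concurrent.atomic.AtomicBoolean;

        // Sketch: both runnables share ONE flag object instead of each having a private isOn field.
        class Aux implements Runnable {
            private final AtomicBoolean isOn;        // shared between the two threads
            private final String statusMessage;

            Aux(String message, AtomicBoolean isOn) {
                this.statusMessage = message;
                this.isOn = isOn;
            }

            @Override
            public void run() {
                for (;;) {
                    if ("left".equals(statusMessage) && isOn.compareAndSet(true, false)) {
                        System.out.println(statusMessage);
                    } else if ("right".equals(statusMessage) && isOn.compareAndSet(false, true)) {
                        System.out.println(statusMessage);
                    }
                }
            }
        }

        public class Question {
            public static void main(String[] args) {
                AtomicBoolean shared = new AtomicBoolean(false);
                new Thread(new Aux("left", shared)).start();
                new Thread(new Aux("right", shared)).start();
            }
        }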

    Read the article

  • LinkedBlockingQueue limit ignored?

    - by tgguy
    I created a Java LinkedBlockingQueue like new LinkedBlockingQueue(1) to limit the size of the queue to 1. However, in my testing, this seems to be ignored and there are often several items in the queue at any given time. Why is this?
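
    A quick sanity check suggests the capacity passed to the constructor is enforced, so the extra elements usually come from somewhere else, for example the queue actually in use having been created with the no-argument constructor (which is unbounded, with capacity Integer.MAX_VALUE), or elements being counted outside the queue itself. A minimal sketch of the expected behaviour:

        import java.util.concurrent.LinkedBlockingQueue;

        public class CapacityCheck {
            public static void main(String[] args) {
                LinkedBlockingQueue<Integer> q = new LinkedBlockingQueue<>(1);

                System.out.println(q.offer(1));            // true  - queue now holds one element
                System.out.println(q.offer(2));            // false - rejected, capacity reached
                System.out.println(q.size());              // 1
                System.out.println(q.remainingCapacity()); // 0
                // q.put(2) would block here until another thread takes an element;
                // q.add(2) would throw IllegalStateException("Queue full")
            }
        }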

    Read the article

  • concurrent doubly-linked list (1 writer, n-readers)

    - by Arne
    Hi guys, I am back in the field of programming for my diploma thesis now and stumbled over the following issue: I need to implement a thread-safe doubly-linked list for one thread writing the list at any position (delete, insert, mutate node data) and one to many threads traversing and reading the list. I am well aware that mutexes can be used to serialize access to the list, but I presume that a naive lock around every write operation will be less than optimal. I am wondering whether there are better variants. (I am well aware that 'optimal' has not much of a practical meaning as long as no exact measurements/profiling are available, but this is an academic thesis after all.) I am very grateful for code samples as well as references to academic work, granted these have at least a tiny bit of practical relevance. Thanks a lot
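
    One step up from a single mutex is a reader-writer lock: the many reader threads traverse in parallel and are excluded only while the single writer mutates. A sketch in Java (the question is language-agnostic; pthread_rwlock_t or std::shared_mutex play the same role in C/C++), wrapping a LinkedList only to show the locking pattern; RCU-style schemes go further by letting readers proceed even during writes, at the cost of considerably more care:

        import java.util.LinkedList;
        import java.util.concurrent.locks.ReadWriteLock;
        import java.util.concurrent.locks.ReentrantReadWriteLock;

        // Sketch: readers traverse concurrently; the single writer takes the exclusive lock.
        class RwList<E> {
            private final LinkedList<E> list = new LinkedList<>();   // doubly-linked under the hood
            private final ReadWriteLock rw = new ReentrantReadWriteLock();

            void insertFirst(E e) {                 // writer thread
                rw.writeLock().lock();
                try {
                    list.addFirst(e);
                } finally {
                    rw.writeLock().unlock();
                }
            }

            boolean contains(E e) {                 // any reader thread
                rw.readLock().lock();
                try {
                    return list.contains(e);
                } finally {
                    rw.readLock().unlock();
                }
            }
        }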

    Read the article

  • is this class thread safe?

    - by flash
    Consider this class, with no instance variables and only methods which are not synchronized. Can we infer from this information that the class is thread-safe?

        public class test {

            public void test1() {
                // do something
            }

            public void test2() {
                // do something
            }

            public void test3() {
                // do something
            }
        }

    Read the article
