Search Results

Search found 1638 results on 66 pages for 'multithreading'.


  • C# Is it possible to interrupt a specific thread inside a ThreadPool?

    - by Lirik
    Suppose that I've queued a work item in a ThreadPool, but the work item blocks if there is no data to process (reading from a BlockingQueue). If the queue is empty and there will be no more work going into the queue, then I must call the Thread.Interrupt method if I want to interrupt the blocking task, but how does one do the same thing with a ThreadPool? The code might look like this:

        void Run()
        {
            try
            {
                while (true)
                {
                    blockingQueue.Dequeue();
                    doSomething();
                }
            }
            finally
            {
                countDownLatch.Signal();
            }
        }

    I'm aware that the best thing to do in this situation is to use a regular Thread, but I'm wondering whether there is a ThreadPool-equivalent way to interrupt a work item.
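
    One approach that often comes up (a minimal sketch, not taken from the post) is to switch to cooperative cancellation: let the blocking call observe a CancellationToken, here via BlockingCollection<T>.Take, so the pooled work item can be asked to stop instead of being interrupted. The queue and latch below are hypothetical stand-ins for the ones in the question.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        class Worker
        {
            // Hypothetical stand-ins for the question's blocking queue and latch.
            static readonly BlockingCollection<string> queue = new BlockingCollection<string>();
            static readonly CancellationTokenSource cts = new CancellationTokenSource();

            static void Run(object state)
            {
                try
                {
                    while (true)
                    {
                        // Blocks until an item arrives or the token is cancelled.
                        string item = queue.Take(cts.Token);
                        Console.WriteLine("processing " + item);
                    }
                }
                catch (OperationCanceledException)
                {
                    // Cancellation requested: the pool thread exits cleanly.
                }
            }

            static void Main()
            {
                ThreadPool.QueueUserWorkItem(Run);
                queue.Add("one");
                queue.Add("two");
                Thread.Sleep(500);
                cts.Cancel();   // "interrupts" the blocked work item cooperatively
            }
        }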

    Read the article

  • SendMessage to window created by AllocateHWND cause deadlock

    - by user2704265
    In my Delphi project I derive a thread class, TMyThread, and follow the advice from forums to use AllocateHWnd to create a window handle. In the TMyThread object I call SendMessage to send messages to that window handle. When the volume of messages is small the application works well; when the volume is large, the application deadlocks and stops responding. I suspect the message queue fills up: LogWndProc only contains code to process the message, with nothing to remove messages from the queue, so processed messages may remain in the queue until it becomes full. Is that correct? The code is attached below:

        var
          hLogWnd: HWND = 0;

        procedure TForm1.FormCreate(Sender: TObject);
        begin
          hLogWnd := AllocateHWnd(LogWndProc);
        end;

        procedure TForm1.FormDestroy(Sender: TObject);
        begin
          if hLogWnd <> 0 then
            DeallocateHWnd(hLogWnd);
        end;

        procedure TForm1.LogWndProc(var Message: TMessage);
        var
          S: PString;
        begin
          if Message.Msg = WM_UPDATEDATA then
          begin
            S := PString(Message.LParam);
            try
              List1.Items.Add(S^);
            finally
              Dispose(S);
            end;
          end
          else
            Message.Result := DefWindowProc(hLogWnd, Message.Msg, Message.WParam, Message.LParam);
        end;

        procedure TMyThread.SendLog(I: Integer);
        var
          Log: PString;
        begin
          New(Log);
          Log^ := 'Log: current stage is ' + IntToStr(I);
          SendMessage(hLogWnd, WM_UPDATEDATA, 0, LPARAM(Log));
          Dispose(Log);
        end;

    Read the article

  • Thread Synchronisation 101

    - by taspeotis
    Previously I've written some very simple multithreaded code, and I've always been aware that at any time there could be a context switch right in the middle of what I'm doing, so I've always guarded access to the shared variables through a CCriticalSection class that enters the critical section on construction and leaves it on destruction. I know this is fairly aggressive and I enter and leave critical sections quite frequently and sometimes egregiously (e.g. at the start of a function when I could put the CCriticalSection inside a tighter code block), but my code doesn't crash and it runs fast enough.

    At work my multithreaded code needs to be tighter, only locking/synchronising at the lowest level needed. At work I was trying to debug some multithreaded code, and I came across this:

        EnterCriticalSection(&m_Crit4);
        m_bSomeVariable = true;
        LeaveCriticalSection(&m_Crit4);

    Now, m_bSomeVariable is a Win32 BOOL (not volatile), which as far as I know is defined to be an int, and on x86 reading and writing these values is a single instruction, and since context switches occur on an instruction boundary there's no need to synchronise this operation with a critical section. I did some more research online to see whether this operation needed synchronisation, and I came up with two scenarios where it does:

    1. The CPU implements out-of-order execution, or the second thread is running on a different core and the updated value is not written into RAM for the other core to see; and
    2. The int is not 4-byte aligned.

    I believe number 1 can be solved using the "volatile" keyword. In VS2005 and later the C++ compiler surrounds access to this variable with memory barriers, ensuring that the variable is always completely written/read to the main system memory before using it. Number 2 I cannot verify: I don't know why the byte alignment would make a difference. I don't know the x86 instruction set, but does mov need to be given a 4-byte aligned address? If not, do you need to use a combination of instructions? That would introduce the problem.

    So... QUESTION 1: Does using the "volatile" keyword (implicitly using memory barriers and hinting to the compiler not to optimise this code) absolve a programmer from the need to synchronise a 4-byte/8-byte variable on x86/x64 between read/write operations?

    QUESTION 2: Is there an explicit requirement that the variable be 4-byte/8-byte aligned?

    I did some more digging into our code and the variables defined in the class:

        class CExample
        {
        private:
            CRITICAL_SECTION m_Crit1; // Protects variable a
            CRITICAL_SECTION m_Crit2; // Protects variable b
            CRITICAL_SECTION m_Crit3; // Protects variable c
            CRITICAL_SECTION m_Crit4; // Protects variable d
            // ...
        };

    Now, to me this seems excessive. I thought critical sections synchronised threads within a process, so if you've got one you can enter it and no other thread in that process can execute. There is no need for a critical section for each variable you want to protect: if you're in a critical section then nothing else can interrupt you. I think the only thing that can change the variables from outside a critical section is if the process shares a memory page with another process (can you do that?) and the other process starts to change the values. Mutexes would also help here; are named mutexes shared across all processes, or only across processes of the same name?

    QUESTION 3: Is my analysis of critical sections correct, and should this code be rewritten to use mutexes? I have had a look at other synchronisation objects (semaphores and spinlocks); are they better suited here?

    QUESTION 4: Where are critical sections/mutexes/semaphores/spinlocks best suited? That is, which synchronisation problem should each be applied to? Is there a vast performance penalty for choosing one over the other?

    And while we're on it, I read that spinlocks should not be used in a single-core multithreaded environment, only a multi-core multithreaded environment. So, QUESTION 5: Is this wrong, or if not, why is it right?

    Thanks in advance for any responses :)

    Read the article

  • Java multithreaded server - each connection returns data. Processing on main thread?

    - by oliwr
    I am writing a client with an integrated server that should wait indefinitely for new connections and handle each one on a thread. I want to process the received byte array in a system-wide available message handler on the main thread. However, currently the processing is obviously done on the client thread. I've looked at Futures and submit() of ExecutorService, but as I create my client connections within the Server, the data would be returned to the Server thread. How can I return it from there to the main thread (into a synchronized packet store, maybe?) to process it without blocking the server? My current implementation looks like this:

        public class Server extends Thread {
            private int port;
            private ExecutorService threadPool;

            public Server(int port) {
                this.port = port;
                // 50 simultaneous connections
                threadPool = Executors.newFixedThreadPool(50);
            }

            public void run() {
                try {
                    ServerSocket listener = new ServerSocket(this.port);
                    System.out.println("Listening on Port " + this.port);
                    Socket connection;
                    while (true) {
                        try {
                            connection = listener.accept();
                            System.out.println("Accepted client " + connection.getInetAddress());
                            connection.setSoTimeout(4000);
                            ClientHandler conn_c = new ClientHandler(connection);
                            threadPool.execute(conn_c);
                        } catch (IOException e) {
                            System.out.println("IOException on connection: " + e);
                        }
                    }
                } catch (IOException e) {
                    System.out.println("IOException on socket listen: " + e);
                    e.printStackTrace();
                    threadPool.shutdown();
                }
            }
        }

        class ClientHandler implements Runnable {
            private Socket connection;

            ClientHandler(Socket connection) {
                this.connection = connection;
            }

            @Override
            public void run() {
                try {
                    // Read data from the InputStream, buffered
                    int count;
                    byte[] buffer = new byte[8192];
                    InputStream is = connection.getInputStream();
                    ByteArrayOutputStream out = new ByteArrayOutputStream();
                    // While there is data in the stream, read it
                    while ((count = is.read(buffer)) > 0) {
                        out.write(buffer, 0, count);
                    }
                    is.close();
                    out.close();
                    System.out.println("Disconnect client " + connection.getInetAddress());
                    connection.close();
                    // handle the received data
                    MessageHandler.handle(out.toByteArray());
                } catch (IOException e) {
                    System.out.println("IOException on socket read: " + e);
                    e.printStackTrace();
                }
                return;
            }
        }

    Read the article

  • Boost Thread Synchronization

    - by Dave18
    I don't see synchronized output when I comment out the line wait(1) in thread(). Can I make them run at the same time (one after another) without having to use wait(1)?

        #include <boost/thread.hpp>
        #include <iostream>

        void wait(int seconds)
        {
            boost::this_thread::sleep(boost::posix_time::seconds(seconds));
        }

        boost::mutex mutex;

        void thread()
        {
            for (int i = 0; i < 100; ++i)
            {
                wait(1);
                mutex.lock();
                std::cout << "Thread " << boost::this_thread::get_id() << ": " << i << std::endl;
                mutex.unlock();
            }
        }

        int main()
        {
            boost::thread t1(thread);
            boost::thread t2(thread);
            t1.join();
            t2.join();
        }

    Read the article

  • C# WinForms. Multiple Forms in separate threads

    - by Calum Murray
    I'm trying to run an ATM simulation in C# with Windows Forms that can have more than one instance of an ATM machine transacting with a bank account simultaneously. The idea is to use semaphores/locking to block critical code that may lead to race conditions. My question is this: how can I run two Forms simultaneously on separate threads? In particular, how does all of this fit in with the Application.Run() that's already there? Here's my main class:

        public class Bank
        {
            private Account[] ac = new Account[3];
            private ATM atm;

            public Bank()
            {
                ac[0] = new Account(300, 1111, 111111);
                ac[1] = new Account(750, 2222, 222222);
                ac[2] = new Account(3000, 3333, 333333);
                Application.Run(new ATM(ac));
            }

            static void Main(string[] args)
            {
                new Bank();
            }
        }

    ...and I want to run two of these forms on separate threads...

        public partial class ATM : Form
        {
            // local reference to the array of accounts
            private Account[] ac;
            // this is a reference to the account that is being used
            private Account activeAccount = null;
            private static int stepCount = 0;
            private string buffer = "";

            // the ATM constructor takes an array of account objects as a reference
            public ATM(Account[] ac)
            {
                InitializeComponent(); // Sets up Form ATM GUI in ATM.Designer.cs
                this.ac = ac;
            }
            ...
        }

    I've tried using

        Thread ATM2 = new Thread(new ThreadStart(/* What goes in here? */));

    But what method do I put in the ThreadStart constructor, since the ATM form is event-driven and there's no one method controlling it? Thanks, Calum
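
    One common pattern (a sketch, not taken from the original project) is to pass a method or lambda that calls Application.Run to the Thread: each form then gets its own thread and its own message loop. The plain Form below stands in for the ATM form from the question, and the thread is marked STA as WinForms requires.

        using System;
        using System.Threading;
        using System.Windows.Forms;

        static class Program
        {
            [STAThread]
            static void Main()
            {
                // Second form on its own thread, with its own message loop.
                // In the question this would be Application.Run(new ATM(ac)).
                Thread secondUi = new Thread(() => Application.Run(new Form { Text = "ATM 2" }));
                secondUi.SetApartmentState(ApartmentState.STA); // WinForms windows need an STA thread
                secondUi.IsBackground = true;                   // dies with the main UI
                secondUi.Start();

                // First form keeps the main thread's message loop, exactly as before.
                Application.Run(new Form { Text = "ATM 1" });
            }
        }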

    Read the article

  • C# thread with multiple parameters

    - by Lucas B
    Does anyone know how to pass multiple parameters into a Thread.Start routine? I thought of extending the class, but the C# Thread class is sealed. Here is what I think the code would look like:

        ...
          Thread standardTCPServerThread = new Thread(startSocketServerAsThread);
          standardServerThread.Start(orchestrator, initializeMemberBalance, arg, 60000);
          ...
        }

        static void startSocketServerAsThread(ServiceOrchestrator orchestrator, List<int> memberBalances, string arg, int port)
        {
            startSocketServer(orchestrator, memberBalances, arg, port);
        }

    Thank you in advance. BTW, I start a number of threads with different orchestrators, balances and ports. Please consider thread safety also.
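
    A sketch of the usual workaround (with hypothetical stand-in types, not the poster's real ones): capture the arguments in a lambda so the ThreadStart delegate itself takes no parameters.

        using System;
        using System.Collections.Generic;
        using System.Threading;

        class Example
        {
            // Hypothetical stand-in for the poster's startSocketServer and its arguments.
            static void StartSocketServer(object orchestrator, List<int> balances, string arg, int port)
            {
                Console.WriteLine("server on port {0} with arg '{1}' and {2} balances", port, arg, balances.Count);
            }

            static void Main()
            {
                object orchestrator = new object();            // stand-in for ServiceOrchestrator
                var balances = new List<int> { 100, 200 };

                // Capture the arguments in a lambda: the ThreadStart delegate itself takes none.
                var t = new Thread(() => StartSocketServer(orchestrator, balances, "init", 60000));
                t.Start();
                t.Join();
            }
        }

    One thread-safety note when starting several such threads in a loop: copy the loop variable into a local before capturing it, otherwise all the lambdas share the same captured variable.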

    Read the article

  • Terminate a long-running thread in a thread pool that was created using QueueUserWorkItem (Win32/NT5).

    - by Jake
    I am programming in a Win32 NT5 environment. I have a function that is going to be called many times; each call is atomic. I would like to use QueueUserWorkItem to take advantage of multicore processors. The problem I am having is that I only want to give the function 3 seconds to complete. If it has not completed in 3 seconds I want to terminate the thread. Currently I am doing something like this:

        HANDLE newThreadFuncCall = CreateThread(NULL, 0, funcCall, &func_params, 0, NULL);
        DWORD result = WaitForSingleObject(newThreadFuncCall, 3000);
        if (result == WAIT_TIMEOUT)
        {
            TerminateThread(newThreadFuncCall, WAIT_TIMEOUT);
        }

    I just spawn a single thread and wait 3 seconds for it to complete. Is there any way to do something similar but using QueueUserWorkItem to queue up the work? Thanks!

    Read the article

  • Thread safety in Singleton

    - by Robert
    I understand that double-checked locking in Java is broken, so what are the best ways to make singletons thread-safe in Java? The first thing that springs to my mind is:

        class Singleton {
            private static Singleton instance;

            private Singleton() {}

            public static synchronized Singleton getInstance() {
                if (instance == null)
                    instance = new Singleton();
                return instance;
            }
        }

    Does this work? If so, is it the best way? (I guess that depends on circumstances, so stating when a particular technique is best would be useful.)

    Read the article

  • Deadlock sample in C#.net

    - by DotNetBeginner
    Can anybody give a simple deadlock sample in C#.NET? And please tell me the simplest way to find the deadlock in such a C#.NET code sample (maybe a tool that will detect the deadlock in the given sample code). NOTE: I have VS 2008.
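
    For reference, a minimal C# deadlock sample (an illustration, not from the original post): two threads acquire two locks in opposite order and each ends up waiting for the other.

        using System;
        using System.Threading;

        class DeadlockDemo
        {
            static readonly object lockA = new object();
            static readonly object lockB = new object();

            static void Main()
            {
                var t1 = new Thread(() =>
                {
                    lock (lockA)
                    {
                        Thread.Sleep(100);          // give the other thread time to take lockB
                        lock (lockB) { Console.WriteLine("t1 got both locks"); }
                    }
                });

                var t2 = new Thread(() =>
                {
                    lock (lockB)
                    {
                        Thread.Sleep(100);          // give the other thread time to take lockA
                        lock (lockA) { Console.WriteLine("t2 got both locks"); }
                    }
                });

                t1.Start(); t2.Start();
                t1.Join();  t2.Join();              // never returns: classic lock-ordering deadlock
                Console.WriteLine("done");          // never reached
            }
        }

    One low-tech way to spot it in VS 2008 is Debug > Break All, then inspect the Threads and Call Stack windows: both threads will be sitting inside Monitor.Enter.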

    Read the article

  • What is shared by multiple threads in the same process?

    - by skydoor
    I found that each thread still has its own registers, and also has its own stack, but other threads can read and write that stack's memory. My question: what is shared by multiple threads in the same process? What I can imagine is: 1) the address space of the process; 2) stack and registers; 3) variables. Can anybody elaborate on this and add more?
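
    A small C# illustration of the usual answer (a sketch, not from the post): the process's address space, heap objects and static/global data are shared by all threads, while each thread keeps its own stack (locals, parameters) and register context.

        using System;
        using System.Threading;

        class SharedVsPrivate
        {
            static int sharedCounter = 0;                      // static data: one copy, visible to every thread

            static void Worker()
            {
                int localValue = 0;                            // local variable: lives on this thread's own stack
                for (int i = 0; i < 1000; i++)
                {
                    localValue++;                              // never contended, each thread has its own copy
                    Interlocked.Increment(ref sharedCounter);  // shared, so access must be synchronized
                }
                Console.WriteLine("local = {0}", localValue);  // always 1000 in each thread
            }

            static void Main()
            {
                var t1 = new Thread(Worker);
                var t2 = new Thread(Worker);
                t1.Start(); t2.Start();
                t1.Join(); t2.Join();
                Console.WriteLine("shared = {0}", sharedCounter);   // 2000: both threads updated the same field
            }
        }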

    Read the article

  • How to keep a process running on a remote windows server

    - by DutrowLLC
    I need to implement a background process that runs on a remote Windows server 24/7. My development environment is C#/ASP.NET 3.5. The purpose of the process is to:

    1. Send reminder e-mails to employees and customers at appropriate times (say, 5:00 PM on the day before a job is scheduled)
    2. Query and save GPS coordinates of employees when they are supposed to be out on jobs, so that I can later verify that their positions were where they were supposed to be.

    If the process fails (which it probably will, especially when updates are added), I need it to be restarted immediately (or within just a few minutes), as I would have very serious problems if this process failed to send a notification, log a GPS coordinate, or perform any of the other tasks it's meant to perform.
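
    One conventional answer for a C#/.NET 3.5 environment like this is to host the work in a Windows Service driven by a timer, and configure the service's recovery options to restart it automatically on failure. The outline below is a sketch only; the e-mail and GPS logic are hypothetical placeholders.

        using System;
        using System.ServiceProcess;
        using System.Timers;

        public class ReminderService : ServiceBase
        {
            private Timer timer;

            protected override void OnStart(string[] args)
            {
                timer = new Timer(60 * 1000);            // check once a minute
                timer.Elapsed += (s, e) => DoWork();
                timer.Start();
            }

            protected override void OnStop()
            {
                timer.Stop();
                timer.Dispose();
            }

            private void DoWork()
            {
                // Hypothetical placeholders for the real tasks:
                // send due reminder e-mails, record employee GPS coordinates, etc.
            }

            public static void Main()
            {
                ServiceBase.Run(new ReminderService());
            }
        }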

    Read the article

  • Abort SAX parsing mid-document?

    - by CSharperWithJava
    I'm parsing a very simple XML schema with a SAX parser in Android. An example file would be:

        <Lists>
          <List name="foo">
            <Note title="note 1" .../>
            <Note title="note 2" .../>
          </List>
          <List name="bar">
            <Note title="note 3" .../>
          </List>
        </Lists>

    The ... represents more note data as attributes that aren't important to the question. I use a SAX parser to parse the document and only implement the startElement and endElement methods of the HandlerBase to handle Note and List nodes. However, in some cases the files can be very large and take some time to process. I'd like to be able to abort the parsing process at any time (i.e. the user presses a cancel button). The best way I've come up with is to throw an exception from my startElement method when certain conditions are met (i.e. a boolean stopParsing is true). Is there a better way to do this? I've always used DOM-style parsers, so I don't fully understand the SAX parser. One final note: I'm running this on Android, so the parser will run on a worker thread to keep the UI responsive. If you know how I can kill that thread safely while the parser is running, that would answer my question as well.

    Read the article

  • deciding between subprocess, multiprocessing and thread in Python?

    - by user248237
    I'd like to parallelize my Python program so that it can make use of multiple processors on the machine that it runs on. My parallelization is very simple, in that all the parallel "threads" of the program are independent and write their output to separate files. I don't need the threads to exchange information, but it is imperative that I know when the threads finish, since some steps of my pipeline depend on their output. Portability is important, in that I'd like this to run on any Python version on Mac, Linux and Windows. Given these constraints, which is the most appropriate Python module for implementing this? I am trying to decide between thread, subprocess and multiprocessing, which all seem to provide related functionality. Any thoughts on this? I'd like the simplest solution that's portable. Thanks.

    Read the article

  • thread reaches end but isn't removed

    - by pstanton
    I create a bunch of threads to do some processing:

        new Thread("upd-" + id) {
            @Override
            public void run() {
                try {
                    doSomething();
                } catch (Throwable e) {
                    LOG.error("error", e);
                } finally {
                    LOG.debug("thread death");
                }
            }
        }.start();

    I know I should be using a thread pool, but I need to understand the following problem before I change it. I'm using Eclipse's debugger and looking at the threads in the debug pane, which lists active threads. Many of them complete as you would expect and are removed from the debug pane; however, some seem to stay in the list of active threads even though the log shows the "thread death" entry for them. When I attempt to debug these threads, they either do not pause for debugging or show an error dialog: "A timeout occurred while retrieving stack frames for thread: upd-...". There is some synchronization going on within the doSomething() call, but I'm fairly sure it's OK, and since the "thread death" log is being written I'm assuming these threads aren't deadlocked in that method. I don't do any Thread.join()s; I do call a third-party API, but I doubt they do either. Can anyone think of another reason these threads are lingering? Thanks.

    EDIT: I created this test to check the garbage-collection theory:

        Thread thread = new Thread("!!!!!!!!!!!!!!!!") {
            @Override
            public void run() {
                System.out.println("running");
                ThreadUs.sleepQuiet(5000);
                System.out.println("finished"); // <-- thread removed from list here
            }
        };
        thread.start();
        ThreadUs.sleepQuiet(10000);
        System.out.println(thread.isAlive()); // <-- thread already removed from list but hasn't been GC'd
        ThreadUs.sleepQuiet(10000);

    This proves that it is nothing to do with garbage collection, as Eclipse removes the thread from the thread list as soon as it completes and isn't waiting for the object to be dereferenced/GC'd.

    Read the article

  • Does a multithreaded crawler in Python really speed things up?

    - by beagleguy
    I was looking to write a little web crawler in Python. I was starting to investigate writing it as a multithreaded script, with one pool of threads downloading and one pool processing results. Due to the GIL, would it actually do simultaneous downloading? How does the GIL affect a web crawler? Would each thread pick some data off the socket, then move on to the next thread, let it pick some data off the socket, and so on? Basically I'm asking: is a multi-threaded crawler in Python really going to buy me much performance versus single-threaded? Thanks!

    Read the article

  • Best practices for Java logging from multiple threads?

    - by Jason S
    I want to have a diagnostic log that is produced by several tasks managing data. These tasks may be in multiple threads. Each task needs to write an element (possibly with subelements) to the log; get in and get out quickly. If this were a single-task situation I'd use XMLStreamWriter as it seems like the best match for simplicity/functionality without having to hold a ballooning XML document in memory. But it's not a single-task situation, and I'm not sure how to best make sure this is "threadsafe", where "threadsafe" in this application means that each log element should be written to the log correctly and serially (one after the other and not interleaved in any way). Any suggestions? I have a vague intuition that the way to go is to use a queue of log elements (with each one able to be produced quickly: my application is busy doing real work that's performance-sensitive), and have a separate thread which handles the log elements and sends them to a file so the logging doesn't interrupt the producers. The logging doesn't necessarily have to be XML, but I do want it to be structured and machine-readable. edit: I put "threadsafe" in quotes. Log4j seems to be the obvious choice (new to me but old to the community), why reinvent the wheel...

    Read the article

  • Django - Threading in views without hanging the server

    - by bobthabuilda
    One of the applications in my Django project requires each request/visitor to that instance to have its own thread. This might sound confusing, so I'll describe what I'm looking to accomplish in a case-based scenario, with steps:

    1. A user visits the application
    2. A thread starts
    3. Until the thread finishes, that user's server instance hangs
    4. Once the thread completes, a response is delivered to the user
    5. Other visitors to the site should not be affected by any other users using the application

    How can I accomplish something like this? If possible, I'd like to find a lightweight solution.

    Read the article

  • AppDomain.CurrentDomain.DomainUnload not being raised in console app

    - by Guy
    I have an assembly that, when accessed, spins up a single thread to process items placed on a queue. In that assembly I attach a handler to the DomainUnload event:

        AppDomain.CurrentDomain.DomainUnload += new EventHandler(CurrentDomain_DomainUnload);

    That handler joins the thread to the main thread so that all items on the queue can complete processing before the application terminates. The problem I am experiencing is that the DomainUnload event is not getting fired when the console application terminates. Any ideas why this would be? Using .NET 3.5 and C#.
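
    For what it's worth, DomainUnload is documented as never being raised for the default application domain. A sketch of hooking AppDomain.CurrentDomain.ProcessExit instead, which does fire when a console app exits normally; note that exit handlers only get a few seconds, so the join uses a timeout. The worker below is a stand-in for the queue-processing thread from the question.

        using System;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                // Stand-in for the queue-processing thread described in the question.
                var worker = new Thread(() => Thread.Sleep(500)) { IsBackground = true };

                AppDomain.CurrentDomain.ProcessExit += (sender, e) =>
                {
                    // Runs during normal process shutdown; handlers only get a few
                    // seconds, so join with a timeout rather than indefinitely.
                    worker.Join(TimeSpan.FromSeconds(2));
                };

                worker.Start();
                Console.WriteLine("main exiting");
            }
        }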

    Read the article

  • how to find the active thread count?

    - by DayOne
    Hi, I have a C# program which calls into a C++ library. The C# program's process has a high thread count, 50-60. Most seem to be created in C++ and I suspect most are suspended/waiting. How do I find how many of these threads are active at a given point in time? Thanks.
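
    One rough way to get a count from the C# side (a sketch, assuming the System.Diagnostics view is good enough): ProcessThread reflects the OS threads, including the ones created by the C++ library, rather than managed Thread objects.

        using System;
        using System.Diagnostics;
        using System.Linq;

        class ThreadCount
        {
            static void Main()
            {
                // Snapshot of every OS thread in the current process, native or managed.
                var threads = Process.GetCurrentProcess().Threads.Cast<ProcessThread>();

                int running = threads.Count(t => t.ThreadState == System.Diagnostics.ThreadState.Running);
                int waiting = threads.Count(t => t.ThreadState == System.Diagnostics.ThreadState.Wait);

                Console.WriteLine("running: {0}, waiting: {1}", running, waiting);
            }
        }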

    Read the article

  • Difference in output from use of synchronized keyword and join()

    - by user2964080
    I have 2 classes,

        public class Account {
            private int balance = 50;

            public int getBalance() {
                return balance;
            }

            public void withdraw(int amt) {
                this.balance -= amt;
            }
        }

    and

        public class DangerousAccount implements Runnable {
            private Account acct = new Account();

            public static void main(String[] args) throws InterruptedException {
                DangerousAccount target = new DangerousAccount();
                Thread t1 = new Thread(target);
                Thread t2 = new Thread(target);
                t1.setName("Ravi");
                t2.setName("Prakash");
                t1.start();
                /* #1
                t1.join();
                */
                t2.start();
            }

            public void run() {
                for (int i = 0; i < 5; i++) {
                    makeWithdrawl(10);
                    if (acct.getBalance() < 0)
                        System.out.println("Account Overdrawn");
                }
            }

            public void makeWithdrawl(int amt) {
                if (acct.getBalance() >= amt) {
                    System.out.println(Thread.currentThread().getName() + " is going to withdraw");
                    try {
                        Thread.sleep(500);
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                    acct.withdraw(amt);
                    System.out.println(Thread.currentThread().getName() + " has finished the withdrawl");
                } else {
                    System.out.println("Not Enough Money For " + Thread.currentThread().getName() + " to withdraw");
                }
            }
        }

    I tried adding the synchronized keyword to the makeWithdrawl method:

        public synchronized void makeWithdrawl(int amt) {

    and I keep getting this output, no matter how many times I try:

        Ravi is going to withdraw
        Ravi has finished the withdrawl
        Ravi is going to withdraw
        Ravi has finished the withdrawl
        Ravi is going to withdraw
        Ravi has finished the withdrawl
        Ravi is going to withdraw
        Ravi has finished the withdrawl
        Ravi is going to withdraw
        Ravi has finished the withdrawl
        Not Enough Money For Prakash to withdraw
        Not Enough Money For Prakash to withdraw
        Not Enough Money For Prakash to withdraw
        Not Enough Money For Prakash to withdraw
        Not Enough Money For Prakash to withdraw

    This shows that only thread t1 is working... If I un-comment the line saying t1.join(); I get the same output. So how does synchronized differ from join()? If I don't use the synchronized keyword or join() I get various outputs like:

        Ravi is going to withdraw
        Prakash is going to withdraw
        Prakash has finished the withdrawl
        Ravi has finished the withdrawl
        Prakash is going to withdraw
        Ravi is going to withdraw
        Prakash has finished the withdrawl
        Ravi has finished the withdrawl
        Prakash is going to withdraw
        Ravi is going to withdraw
        Prakash has finished the withdrawl
        Ravi has finished the withdrawl
        Account Overdrawn
        Account Overdrawn
        Not Enough Money For Ravi to withdraw
        Account Overdrawn
        Not Enough Money For Prakash to withdraw
        Account Overdrawn
        Not Enough Money For Ravi to withdraw
        Account Overdrawn
        Not Enough Money For Prakash to withdraw
        Account Overdrawn

    So how does the output from synchronized differ from join()?

    Read the article

  • How can I limit access to a particular class to one caller at a time in a web service?

    - by MusiGenesis
    I have a web service method in which I create a particular type of object, use it for a few seconds, and then dispose of it. Because of problems arising from multiple threads creating and using instances of this class at the same time, I need to restrict the method so that only one caller at a time ever has one of these objects. To do this, I am creating a private static object:

        private static object _lock = new object();

    ...and then inside the web service method I do this around the critical code:

        lock (_lock)
        {
            using (DangerousObject dangerousObject = new DangerousObject())
            {
                dangerousObject.MakeABigMess();
                dangerousObject.CleanItUp();
            }
        }

    I'm not sure this is working, though. Do I have this right? Will this code ensure that only one instance of DangerousObject is instantiated and in use at a time?

    Read the article

  • Lock-Free, Wait-Free and Wait-freedom algorithms for non-blocking multi-thread synchronization.

    - by GJ
    In multithreaded programming we find different terms for synchronizing data transfer between two or more threads/tasks. When exactly can we say that an algorithm is: 1) lock-free, 2) wait-free, 3) wait-freedom? I understand what lock-free means, but when can we say that a synchronization algorithm is wait-free or has wait-freedom? I have written some code (a ring buffer) for multi-thread synchronization and it uses lock-free methods, but:

    1. The algorithm predicts the maximum execution time of the routine.
    2. The thread which calls the routine sets a unique reference at the beginning, which indicates that it is inside the routine.
    3. Other threads calling the same routine check this reference; if it is set, they count the CPU tick count (measure the time) of the first involved thread. If that time is too long, they interrupt the current work of the involved thread and take over its job.
    4. A thread which did not finish its job because it was interrupted by the task scheduler checks the reference at the end; if the reference no longer belongs to it, it repeats the job again.

    So this algorithm is not really lock-free, but there is no memory lock in use, and the other involved threads can wait (or not) a certain time before taking over the job of the interrupted thread. Here is the RingBuffer.InsertLeft function:

        function TgjRingBuffer.InsertLeft(const link: pointer): integer;
        var
          AtStartReference: cardinal;
          CPUTimeStamp    : int64;
          CurrentLeft     : pointer;
          CurrentReference: cardinal;
          NewLeft         : PReferencedPtr;
          Reference       : cardinal;
        label
          TryAgain;
        begin
          Reference := GetThreadId + 1;               //Reference.bit0 := 1
          with rbRingBuffer^ do
          begin
        TryAgain:
            //Set Left.Reference with respect to all other cores :)
            CPUTimeStamp := GetCPUTimeStamp + LoopTicks;
            AtStartReference := Left.Reference OR 1;  //Reference.bit0 := 1
            repeat
              CurrentReference := Left.Reference;
            until (CurrentReference AND 1 = 0) or (GetCPUTimeStamp - CPUTimeStamp > 0);
            //No threads present in ring buffer or current thread timeout
            if ((CurrentReference AND 1 <> 0) and (AtStartReference <> CurrentReference)) or
              not CAS32(CurrentReference, Reference, Left.Reference) then
              goto TryAgain;
            //Calculate RingBuffer NewLeft address
            CurrentLeft := Left.Link;
            NewLeft := pointer(cardinal(CurrentLeft) - SizeOf(TReferencedPtr));
            if cardinal(NewLeft) < cardinal(@Buffer) then
              NewLeft := EndBuffer;
            //Calculate distance
            result := integer(Right.Link) - Integer(NewLeft);
            //Check buffer full
            if result = 0 then
              //Clear Reference if task still owns the reference
              if CAS32(Reference, 0, Left.Reference) then
                Exit
              else
                goto TryAgain;
            //Set NewLeft.Reference
            NewLeft^.Reference := Reference;
            SFence;
            //Try to set link and try to exchange NewLeft and clear Reference if task owns the reference
            if (Reference <> Left.Reference) or
              not CAS64(NewLeft^.Link, Reference, link, Reference, NewLeft^) or
              not CAS64(CurrentLeft, Reference, NewLeft, 0, Left) then
              goto TryAgain;
            //Calculate result
            if result < 0 then
              result := Length - integer(cardinal(not Result) div SizeOf(TReferencedPtr))
            else
              result := cardinal(result) div SizeOf(TReferencedPtr);
          end; //with
        end; { TgjRingBuffer.InsertLeft }

    The RingBuffer unit can be found here: RingBuffer; the CAS functions: FockFreePrimitives; and the test program: RingBufferFlowTest. Thanks in advance, GJ

    Read the article

  • CPU Affinity Masks (Putting Threads on different CPUs)

    - by hahuang65
    I have 4 threads, and I am trying to set thread 1 to run on CPU 1, thread 2 on CPU 2, etc. However, when I run my code below, the affinity masks return the correct values, but when I do a sched_getcpu() on the threads, they all report that they are running on CPU 4. Anybody know what my problem here is? Thanks in advance!

        #define _GNU_SOURCE
        #include <stdio.h>
        #include <pthread.h>
        #include <stdlib.h>
        #include <sched.h>
        #include <errno.h>

        void *pthread_Message(char *message)
        {
            printf("%s is running on CPU %d\n", message, sched_getcpu());
        }

        int main()
        {
            pthread_t thread1, thread2, thread3, thread4;
            pthread_t threadArray[4];
            cpu_set_t cpu1, cpu2, cpu3, cpu4;
            char *thread1Msg = "Thread 1";
            char *thread2Msg = "Thread 2";
            char *thread3Msg = "Thread 3";
            char *thread4Msg = "Thread 4";
            int thread1Create, thread2Create, thread3Create, thread4Create, i, temp;

            CPU_ZERO(&cpu1);
            CPU_SET(1, &cpu1);
            temp = pthread_setaffinity_np(thread1, sizeof(cpu_set_t), &cpu1);
            printf("Set returned by pthread_getaffinity_np() contained:\n");
            for (i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &cpu1))
                    printf("CPU1: CPU %d\n", i);

            CPU_ZERO(&cpu2);
            CPU_SET(2, &cpu2);
            temp = pthread_setaffinity_np(thread2, sizeof(cpu_set_t), &cpu2);
            for (i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &cpu2))
                    printf("CPU2: CPU %d\n", i);

            CPU_ZERO(&cpu3);
            CPU_SET(3, &cpu3);
            temp = pthread_setaffinity_np(thread3, sizeof(cpu_set_t), &cpu3);
            for (i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &cpu3))
                    printf("CPU3: CPU %d\n", i);

            CPU_ZERO(&cpu4);
            CPU_SET(4, &cpu4);
            temp = pthread_setaffinity_np(thread4, sizeof(cpu_set_t), &cpu4);
            for (i = 0; i < CPU_SETSIZE; i++)
                if (CPU_ISSET(i, &cpu4))
                    printf("CPU4: CPU %d\n", i);

            thread1Create = pthread_create(&thread1, NULL, (void *)pthread_Message, thread1Msg);
            thread2Create = pthread_create(&thread2, NULL, (void *)pthread_Message, thread2Msg);
            thread3Create = pthread_create(&thread3, NULL, (void *)pthread_Message, thread3Msg);
            thread4Create = pthread_create(&thread4, NULL, (void *)pthread_Message, thread4Msg);

            pthread_join(thread1, NULL);
            pthread_join(thread2, NULL);
            pthread_join(thread3, NULL);
            pthread_join(thread4, NULL);

            return 0;
        }

    Read the article

  • How to spin an independent dispatcher thread for a Silverlight UserControl

    - by ondesertverge
    I am trying to move a lot of different elements by 1 pixel very often and in parallel. Trying to do this on one dispatcher thread means that the elements are visited one after another. The result is that the more elements I have the slower they will all move. In WPF I was able to use a HostVisual as described here to solve this. I can't seem to find anything similar in Silverlight. Is this a drawback of the lightweight framework or is there something I haven't stumbled upon yet? I am using SL4.

    Read the article
