Search Results

Search found 15087 results on 604 pages for 'python multithreading'.

Page 257/604 | < Previous Page | 253 254 255 256 257 258 259 260 261 262 263 264  | Next Page >

  • Hooking thread exit

    - by mackenir
    Is there a way for me to hook the exit of managed threads (i.e., run some code on a thread just before it exits)? I've developed a mechanism for hooking thread exit that works for some threads. Step 1: develop a 'hook' STA COM class that takes a callback function and calls it in its destructor. Step 2: create a ThreadStatic instance of this object on the thread I want to hook, and pass the object a managed delegate converted to an unmanaged function pointer. The delegate then gets called on thread exit (since the CLR calls IUnknown::Release on all STA COM RCWs as part of thread exit). This mechanism works on, for example, worker threads that I create in code using the Thread class. However, it doesn't seem to work for the application's main thread (be it a console or Windows app): the 'hook' COM object seems to be deleted too late in the shutdown process, and the attempt to call the delegate fails. (The reason I want this facility is so I can run some native COM code on the exiting thread that works with STA COM objects created on that thread, before it's 'too late', i.e., before the thread has exited and it's no longer possible to work with STA COM objects on it.)

    Read the article

  • Winforms application hangs when switching to another app

    - by joseluisrod
    Hi, I believe I have a potential threading issue. I have a user control that contains the following code:

        private void btnVerify_Click(object sender, EventArgs e)
        {
            if (!backgroundWorkerVerify.IsBusy)
            {
                backgroundWorkerVerify.RunWorkerAsync();
            }
        }

        private void backgroundWorkerVerify_DoWork(object sender, System.ComponentModel.DoWorkEventArgs e)
        {
            VerifyAppointments();
        }

        private void backgroundWorkerVerify_RunWorkerCompleted(object sender, System.ComponentModel.RunWorkerCompletedEventArgs e)
        {
            MessageBox.Show("Information was Verified.", "Verify", MessageBoxButtons.OK, MessageBoxIcon.Information);
            CloseEvent();
        }

    Vanilla code, but here is the issue: while the application is running, if the user tabs to another application and then returns to mine, the application is hung; they get a blank screen and have to kill it. This started when I added the threading code. Could I have some rogue threads out there? What is the best way to zero in on a threading problem? The issue can't be recreated on my machine. I know I must be missing something about how to dispose of a BackgroundWorker properly. Any thoughts are appreciated. Thanks, Jose

    Read the article

  • Gracefully exiting from thread in Ruby

    - by jasonbogd
    Hi, I am trying out Mongrel and using the following code:

        require 'rubygems'
        require 'mongrel'

        class SimpleHandler < Mongrel::HttpHandler
          def process(request, response)
            response.start(200) do |head, out|
              head["Content-Type"] = "text/plain"
              out.write("Hello World!\n")
            end
          end
        end

        h = Mongrel::HttpServer.new("0.0.0.0", "3000")
        h.register("/test", SimpleHandler.new)
        puts "Press Control-C to exit"
        h.run.join

        trap("INT") do
          puts "Exiting..."
        end

    Basically, this just prints out "Hello World!" when I go to localhost:3000/test. It works fine, and I can close the program with Control-C. But when I press Control-C, this gets printed:

        my_web_server.rb:17:in `join': Interrupt from my_web_server.rb:17

    So I tried putting that trap("INT") statement at the end, but it isn't getting called. Solution? Thanks.

    Read the article

  • Synchronization of Nested Data Structures between Threads in Java

    - by Dominik
    I have a cache implementation like this:

        class X {
            private final Map<String, ConcurrentMap<String, String>> structure = new HashMap...();

            public String getValue(String context, String id) {
                // just assume for this example that there will always be an inner map
                final ConcurrentMap<String, String> innerStructure = structure.get(context);
                String value = innerStructure.get(id);
                if (value == null) {
                    synchronized (structure) {
                        // can I be sure that this inner map will represent the last updated
                        // state from any thread?
                        value = innerStructure.get(id);
                        if (value == null) {
                            value = getValueFromSomeSlowSource(id);
                            innerStructure.put(id, value);
                        }
                    }
                }
                return value;
            }
        }

    Is this implementation thread-safe? Can I be sure to get the last updated state from any thread inside the synchronized block? Would this behaviour change if I used a java.util.concurrent.ReentrantLock instead of a synchronized block, like this:

        ...
        if (lock.tryLock(3, SECONDS)) {
            try {
                value = innerStructure.get(id);
                if (value == null) {
                    value = getValueFromSomeSlowSource(id);
                    innerStructure.put(id, value);
                }
            } finally {
                lock.unlock();
            }
        }
        ...

    I know that final instance members are synchronized between threads, but is this also true for the objects held by these members? Maybe this is a dumb question, but I don't know how to test it to be sure that it works on every OS and every architecture. (One lock-free alternative is sketched below.)

    Read the article
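
    A minimal sketch of one way to avoid the outer synchronized block entirely, relying on ConcurrentMap.putIfAbsent. This assumes, as the question does, that the inner map already exists, and it accepts that two threads may occasionally compute the same value; the field and method names mirror the question, and the slow-source body is a placeholder:

        import java.util.Map;
        import java.util.concurrent.ConcurrentMap;

        class XWithoutOuterLock {
            // 'structure' and getValueFromSomeSlowSource mirror the names in the question
            private final Map<String, ConcurrentMap<String, String>> structure;

            XWithoutOuterLock(Map<String, ConcurrentMap<String, String>> structure) {
                this.structure = structure;
            }

            public String getValue(String context, String id) {
                ConcurrentMap<String, String> inner = structure.get(context);
                String value = inner.get(id);
                if (value == null) {
                    String computed = getValueFromSomeSlowSource(id);
                    // atomic: if another thread stored a value first, keep that one
                    String existing = inner.putIfAbsent(id, computed);
                    value = (existing != null) ? existing : computed;
                }
                return value;
            }

            private String getValueFromSomeSlowSource(String id) {
                return "value-for-" + id; // placeholder for the slow lookup
            }
        }

    The trade-off is that two threads racing on the same id may both call the slow source; if that is too expensive, per-key locking like the original (or a FutureTask-based memoizer) is the usual alternative.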

  • Matplotlib and WSGI/mod_python not working on Apache.

    - by Luiz C.
    Everything works as it's supposed to on the Django development server. Under Apache, the Django app also works, except when matplotlib is used. Here's the error I get:

        Exception Type: ImportError
        Exception Value: No module named multiarray
        Exception Location: /usr/share/pyshared/numpy/core/numerictypes.py in <module>, line 81
        Python Executable: /usr/bin/python
        Python Version: 2.6.4

    From the Python shell, both statements work: import numpy.core.multiarray and import multiarray. Any ideas? Thanks. As I was looking over the numpy files, I found the multiarray module, which has an extension of .so. My guess is that mod_python is not reading these files.

    Read the article

  • libclntsh.so.11.1: cannot open shared object file.

    - by zhangzhong
    I want to schedule a task on Linux via icrontab. The task is written in Python and has to import the cx_Oracle module, so I export ORACLE_HOME and LD_LIBRARY_PATH in .bash_profile, but it raises the error:

        libclntsh.so.11.1: cannot open shared object file

    Running the task by issuing the command in a shell is fine:

        python a.py # ok

    So I changed the task in icrontab into a shell script which invokes my Python script, but the exception recurred:

        # the shell script scheduled in icrontab
        #! bash
        python a.py

    Could you help me figure out what to do about it?

    Read the article

  • Is this (Lock-Free) Queue Implementation Thread-Safe?

    - by Hosam Aly
    I am trying to create a lock-free queue implementation in Java, mainly for personal learning. The queue should be a general one, allowing any number of readers and/or writers concurrently. Would you please review it, and suggest any improvements/issues you find? Thank you. (For comparison, a JDK-based alternative is sketched below.)

        import java.util.concurrent.atomic.AtomicReference;

        public class LockFreeQueue<T> {

            private static class Node<E> {
                E value;
                volatile Node<E> next;

                Node(E value) {
                    this.value = value;
                }
            }

            private AtomicReference<Node<T>> head, tail;

            public LockFreeQueue() {
                // have both head and tail point to a dummy node
                Node<T> dummyNode = new Node<T>(null);
                head = new AtomicReference<Node<T>>(dummyNode);
                tail = new AtomicReference<Node<T>>(dummyNode);
            }

            /**
             * Puts an object at the end of the queue.
             */
            public void putObject(T value) {
                Node<T> newNode = new Node<T>(value);
                Node<T> prevTailNode = tail.getAndSet(newNode);
                prevTailNode.next = newNode;
            }

            /**
             * Gets an object from the beginning of the queue. The object is removed
             * from the queue. If there are no objects in the queue, returns null.
             */
            public T getObject() {
                Node<T> headNode, valueNode;

                // move head node to the next node using atomic semantics
                // as long as next node is not null
                do {
                    headNode = head.get();
                    valueNode = headNode.next;
                    // try until the whole loop executes pseudo-atomically
                    // (i.e. unaffected by modifications done by other threads)
                } while (valueNode != null && !head.compareAndSet(headNode, valueNode));

                T value = (valueNode != null ? valueNode.value : null);

                // release the value pointed to by head, keeping the head node dummy
                if (valueNode != null)
                    valueNode.value = null;

                return value;
            }
        }

    Read the article
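
    As a point of comparison for the exercise above, the JDK already ships an unbounded, non-blocking, multi-producer/multi-consumer queue (ConcurrentLinkedQueue, based on the Michael-Scott algorithm). A minimal usage sketch:

        import java.util.concurrent.ConcurrentLinkedQueue;

        public class ConcurrentLinkedQueueDemo {
            public static void main(String[] args) {
                // Lock-free queue from java.util.concurrent; safe for any number
                // of concurrent readers and writers.
                ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>();

                queue.offer("first");           // enqueue, never blocks
                queue.offer("second");

                String head = queue.poll();     // dequeue, returns null if empty
                System.out.println(head);       // prints "first"
            }
        }

    Comparing a hand-rolled implementation against its behaviour, in particular the window between swinging the tail and linking the next pointer, is a useful exercise in itself.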

  • Please help. Creating threads and wait till finish

    - by Raj Aththanayake
    Hi, I have two method calls that I want to invoke on two threads, and then I want to wait until both method executions have completed before continuing. My sample solution is something like below:

        public static void Main()
        {
            Console.WriteLine("Main thread starting.");
            String[] strThreads = new String[] { "one", "two" };
            String ctemp = string.Empty;

            foreach (String c in strThreads)
            {
                ctemp = c;
                Thread thread = new Thread(delegate() { MethodCalls(ctemp); });
                thread.Start();
                thread.Join();
            }

            Console.WriteLine("Main thread ending.");
            Console.Read();
        }

        public static void MethodCalls(string number)
        {
            Console.WriteLine("Method call " + number);
        }

    Will this do the job? Or is there another, better way to do the same thing?

    Read the article

  • Cross Thread Exception in PropertyChangedEvent in WPF

    - by Ashish Ashu
    I have a ListView that is bound to my custom collection. At run time, I am updating certain properties of the entities in my custom collection in my ViewModel. At the same time, I am also doing custom sorting in the ListView; the custom sorting is applied when I click on any column header of the ListView. For example, I am updating the current DateTime on my entity every 5 seconds and, simultaneously, applying custom sorting based on DateTime. (The ListView is a third-party control.) Hence I am performing two operations on my custom collection at the same time. Should I pass the dispatcher of my control into the view model and call any methods (which update any entity in my custom collection) through the UI dispatcher?

    Read the article

  • Non-reentrant C# timer

    - by Oak
    I'm trying to invoke a method f() every t seconds, but if the previous invocation of f() has not finished yet, wait until it's finished. I've read a bit about the available timers (this is a useful link) but couldn't find any good way of doing what I want, save for manually writing it all. Any help about how to achieve this will be appreciated, though I fear I might not be able to find a simple solution using timers. To clarify, if t is one second, and f() runs for the arbitrary durations I've written below, then:

        Step | Operation | Time taken
        -----+-----------+--------------------------------------------
          1  | wait      | 1s
          2  | f()       | 0.6s
          3  | wait      | 0.4s (because f already took 0.6 seconds)
          4  | f()       | 10s
          5  | wait      | 0s (we're late)
          6  | f()       | 0.3s
          7  | wait      | 0.7s (we can disregard the debt from step 4)

    Notice that the nature of this timer is that f() will not need to be safe regarding re-entrance, and a thread pool of size 1 is enough here.

    Read the article

  • Deadlock in Java

    - by israkir
    A long time ago, I saved a sentence from a Java reference book: "Java has no mechanism to handle deadlock. it won't even know deadlock occurred." (Head First Java, 2nd Edition, p. 516) So, what about it? Is there a way to catch a deadlock case in Java? I mean, is there a way for our code to detect that a deadlock has occurred? (A detection sketch follows below.)

    Read the article
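
    The JVM does expose deadlock detection through JMX, even though nothing breaks the deadlock for you. A minimal sketch of polling for deadlocked threads; the watchdog class and its five-second interval are just illustrative choices:

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;

        public class DeadlockWatchdog implements Runnable {
            public void run() {
                ThreadMXBean mbean = ManagementFactory.getThreadMXBean();
                while (true) {
                    // findDeadlockedThreads() (Java 6+) covers both intrinsic monitors
                    // and ownable synchronizers such as ReentrantLock
                    long[] ids = mbean.findDeadlockedThreads();
                    if (ids != null) {
                        for (ThreadInfo info : mbean.getThreadInfo(ids)) {
                            System.err.println("Deadlocked: " + info.getThreadName());
                        }
                    }
                    try {
                        Thread.sleep(5000);
                    } catch (InterruptedException e) {
                        return; // stop watching if interrupted
                    }
                }
            }
        }

    The same information is available interactively: a thread dump (jstack) prints "Found one Java-level deadlock" when it sees one, and jconsole has a "Detect Deadlock" button.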

  • Do Scala and Erlang use green threads?

    - by CHAPa
    I've been reading a lot about how Scala and Erlang do lightweight threads and about their concurrency model (actors). However, I have my doubts. Do Scala and Erlang use an approach similar to the old thread model used by Java (green threads)? For example, suppose there is a machine with 2 cores: will the Scala/Erlang environment fork one thread per processor, with the other threads scheduled in user space by the Scala VM / Erlang VM? Is this correct? Under the hood, how does this really work?

    Read the article

  • BackgroundWorker.ReportProgress() not updating property and locking up the UI

    - by Willem
    I am using a BackgroundWorker to do a long-running operation:

        BackgroundWorker backgroundWorker = new BackgroundWorker()
        {
            WorkerSupportsCancellation = true,
            WorkerReportsProgress = true
        };

        backgroundWorker.RunWorkerCompleted += delegate(object s, RunWorkerCompletedEventArgs args)
        {
        };

        backgroundWorker.ProgressChanged += delegate(object s, ProgressChangedEventArgs args)
        {
            someViewModel.SomeProperty.Add((SomeObject)args.UserState);
        };

        backgroundWorker.DoWork += delegate(object s, DoWorkEventArgs args)
        {
            someViewModel.SomeList.ForEach(x =>
            {
                someViewModel.SomeInterface.SomeMethod(backgroundWorker, someViewModel, someViewModel.SomeList, x);
            });
        };

        backgroundWorker.RunWorkerAsync();

    Then in SomeInterface.SomeMethod:

        public void SomeMethod(BackgroundWorker backgroundWorker, SomeViewModel someViewModel//....)
        {
            //Filtering happens
            backgroundWorker.ReportProgress(0, someObjectFoundWhileFiltering);
        }

    So, when it comes to the ProgressChanged handler:

        backgroundWorker.ProgressChanged += delegate(object s, ProgressChangedEventArgs args)
        {
            someViewModel.SomeProperty.Add((SomeObject)args.UserState); // adding the found object to the property in the VM
        };

    On the line someViewModel.SomeProperty.Add((SomeObject)args.UserState);, the setter on the property is not firing and the UI just locks up. What am I doing wrong? Is this the correct way to update the UI thread?

    Read the article

  • How to mmap the stack for the clone() system call on linux?

    - by Joseph Garvin
    The clone() system call on Linux takes a parameter pointing to the stack for the newly created thread to use. The obvious way to do this is to simply malloc some space and pass that, but then you have to be sure you've malloc'd as much stack space as that thread will ever use (hard to predict). I remembered that when using pthreads I didn't have to do this, so I was curious what it did instead. I came across this site, which explains: "The best solution, used by the Linux pthreads implementation, is to use mmap to allocate memory, with flags specifying a region of memory which is allocated as it is used. This way, memory is allocated for the stack as it is needed, and a segmentation violation will occur if the system is unable to allocate additional memory." The only context I've ever heard mmap used in is mapping files into memory, and indeed the mmap man page says it takes a file descriptor. How can it be used to allocate a stack of dynamic length to give to clone()? Is that site just crazy? ;) In either case, doesn't the kernel need to know how to find a free chunk of memory for a new stack anyway, since that's something it has to do all the time as users launch new processes? Why does a stack pointer even need to be specified in the first place if the kernel can already figure this out?

    Read the article

  • ThreadPooler in Spring 2.5.6

    - by MiKu
    My application uses Spring 2.5.6. I have a service that creates explicit threads for a specific task; the service call is triggered through the Quartz scheduler. Question: while executing the service calls, I want to use some sort of thread pooler that can hand me thread instances. Are there any implementations I can use in Spring? (A sketch of one option is below.)

    Read the article
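
    Rather than creating explicit threads, the usual option is to hand the work to a pool. Below is a minimal sketch using the JDK's ExecutorService; Spring 2.x also ships org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor, which can be declared as a bean and injected into the service instead. The class name, pool size and task body here are placeholders:

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class SpecificTaskService {

            // A fixed pool reused across Quartz-triggered invocations;
            // 4 threads is an arbitrary example size.
            private final ExecutorService pool = Executors.newFixedThreadPool(4);

            public void onTrigger() {
                pool.submit(new Runnable() {
                    public void run() {
                        // ... the work previously done in an explicitly created thread ...
                    }
                });
            }

            public void shutdown() throws InterruptedException {
                pool.shutdown();
                pool.awaitTermination(30, TimeUnit.SECONDS);
            }
        }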

  • How to upload files to Azure in the background with Delphi and OmniThread?

    - by mamcx
    I have tried to upload over 100 files to Azure with Delphi. However, the calls block the main thread, so I want to do this with an async call or with a background thread. This is what I do now (as explained here):

        procedure TCloudManager.UploadTask(const input: TOmniValue; var output: TOmniValue);
        var
          FileTask: TFileTask;
        begin
          FileTask := input.AsRecord<TFileTask>;
          Upload(FileTask.BaseFolder, FileTask.LocalFile, FileTask.CloudFile);
        end;

        function TCloudManager.MassiveUpload(const BaseFolder: String; Files: TDictionary<String, String>): TStringList;
        var
          pipeline: IOmniPipeline;
          FileInfo: TPair<String, String>;
          FileTask: TFileTask;
        begin
          // set up pipeline
          pipeline := Parallel.Pipeline
            .Stage(UploadTask)
            .NumTasks(Environment.Process.Affinity.Count * 2)
            .Run;

          // insert URLs to be retrieved
          for FileInfo in Files do
          begin
            FileTask.LocalFile := FileInfo.Key;
            FileTask.CloudFile := FileInfo.Value;
            FileTask.BaseFolder := BaseFolder;
            pipeline.Input.Add(TOmniValue.FromRecord(FileTask));
          end; //for
          pipeline.Input.CompleteAdding;

          // wait for pipeline to complete
          pipeline.WaitFor(INFINITE);
        end;

    However, this blocks too (why? I don't understand).

    Read the article

  • Socket server with multiple clients, sending messages to many clients without hurting liveness

    - by Karl Johanson
    I have a small socket server, and I need to distribute various messages from client to client depending on different conditions. However, I think I have a small liveness problem in my current code. Is there anything wrong with my approach? (One alternative for the client list is sketched below.)

        public class CuClient extends Thread {

            Socket socket = null;
            ObjectOutputStream out;
            ObjectInputStream in;
            CuGroup group;

            public CuClient(Socket s, CuGroup g) throws IOException { // 'throws' added so the stream setup compiles
                this.socket = s;
                this.group = g;
                out = new ObjectOutputStream(this.socket.getOutputStream());
                out.flush();
                in = new ObjectInputStream(this.socket.getInputStream());
            }

            @Override
            public void run() {
                String cmd = "";
                try {
                    while (!cmd.equals("client shutdown")) {
                        cmd = (String) in.readObject();
                        this.group.broadcastToGroup(this, cmd);
                    }
                    out.close();
                    in.close();
                    socket.close();
                } catch (Exception e) {
                    System.out.println(this.getName());
                    e.printStackTrace();
                }
            }

            public void sendToClient(String msg) {
                try {
                    this.out.writeObject(msg);
                    this.out.flush();
                } catch (IOException ex) {
                }
            }
        }

    And my CuGroup:

        public class CuGroup {

            private Vector<CuClient> clients = new Vector<CuClient>();

            public void addClient(CuClient c) {
                this.clients.add(c);
            }

            void broadcastToGroup(CuClient clientName, String cmd) {
                Iterator it = this.clients.iterator();
                while (it.hasNext()) {
                    CuClient cu = (CuClient) it.next();
                    cu.sendToClient(cmd);
                }
            }
        }

    And my main class:

        public class SmallServer {

            public static final Vector<CuClient> clients = new Vector<CuClient>(10);
            public static boolean serverRunning = true;

            private ServerSocket serverSocket;
            private CuGroup group = new CuGroup();

            public void body() {
                try {
                    this.serverSocket = new ServerSocket(1337, 20);
                    System.out.println("Waiting for clients\n");
                    do {
                        Socket s = this.serverSocket.accept();
                        CuClient t = new CuClient(s, group);
                        System.out.println("SERVER: " + s.getInetAddress() + " is connected!\n");
                        t.start();
                    } while (this.serverRunning);
                } catch (IOException ex) {
                    ex.printStackTrace();
                }
            }

            public static void main(String[] args) {
                System.out.println("Server");
                SmallServer server = new SmallServer();
                server.body();
            }
        }

    Consider the example with many more groups, maybe a collection of groups. If they all synchronize on a single object, I don't think my server will be very fast. Is there a pattern or something that can help my liveness?

    Read the article
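
    One common way to keep broadcasts from serializing on a single lock is to hold the client list in a CopyOnWriteArrayList, which allows lock-free iteration as long as joins and leaves are comparatively rare. A minimal sketch of the group class under that assumption; the class and method names mirror the question, and CuClient is the question's own class:

        import java.util.concurrent.CopyOnWriteArrayList;

        public class CuGroup {

            // Iteration never blocks and never throws ConcurrentModificationException;
            // each add/remove copies the backing array, which is fine for rare joins/leaves.
            private final CopyOnWriteArrayList<CuClient> clients = new CopyOnWriteArrayList<CuClient>();

            public void addClient(CuClient c) {
                clients.add(c);
            }

            public void removeClient(CuClient c) {
                clients.remove(c);
            }

            void broadcastToGroup(CuClient sender, String cmd) {
                for (CuClient cu : clients) {
                    cu.sendToClient(cmd);
                }
            }
        }

    The slow-receiver problem remains either way: sendToClient does blocking I/O, so one stalled client can still delay a broadcast; a per-client outgoing queue drained by that client's own thread is the usual next step.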

  • JScrollPane Scrolls Down with Long Text in JEditorPane

    - by Jim
    Hello, I want to have a JEditorPane inside a JScrollPane. When the user clicks a button, the click listener will create a textEditor, call jscrollpane.setViewPort(textEditor), call textEditor.setText(String) to fill it with editable text, and call jscrollpane.getVerticalScrollBar().setValue(0). In case you're wondering, yes, the setText() must come after the setViewPort() for reasons that aren't on topic. Here is the problem: after the user clicks the button, the JScrollPane's view scrolls all the way to the bottom. I want the scrollbar to be at the top, as per the last line in my click listener. I popped open a debugger and, to my horror, discovered that the JScrollPane's viewport is being forced down to the bottom after the conclusion of the click listener (when pumping filters). It appears that Swing is delaying the population of the editor/scroll pane until after the conclusion of the click listener, but is running the scrollbar command first. Thus, the undesired behavior. Anyway, I'm wondering if there is a clean solution. It seems that wanting a scroll pane to be scrolled to the top after modification is a reasonably common requirement, so I'm assuming this is a well-solved problem. Thanks! (One common workaround is sketched below.)

    Read the article
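
    A common workaround, since the document insertion (and the caret-driven scroll it triggers) only completes after the current event finishes, is to defer the "scroll to top" with SwingUtilities.invokeLater. A minimal sketch of that idea under the question's setup; the helper class and variable names are illustrative:

        import javax.swing.JEditorPane;
        import javax.swing.JScrollPane;
        import javax.swing.SwingUtilities;

        final class EditorHelper {
            // Illustrative helper: put 'text' in a fresh editor, show it in 'scrollPane',
            // and make sure the view ends up at the top.
            static void showAtTop(final JScrollPane scrollPane, String text) {
                final JEditorPane textEditor = new JEditorPane();
                scrollPane.setViewportView(textEditor);
                textEditor.setText(text);

                // Runs after the pending events, including the layout/scroll caused by setText
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        textEditor.setCaretPosition(0);                // caret to document start
                        scrollPane.getVerticalScrollBar().setValue(0); // view to the top
                    }
                });
            }
        }

    Another option is to stop the caret from driving the scroll at all, by setting the editor's DefaultCaret update policy to NEVER_UPDATE before calling setText.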

  • Threadsafe binding with DispatcherObject.CheckAccess()

    - by maffe
    Hi, according to this, I can achieve thread safety with large overhead. I wrote the following class and use it. It works fine.

        public abstract class BindingBase : DispatcherObject, INotifyPropertyChanged, INotifyPropertyChanging
        {
            private string _displayName;
            private const string NameDisplayName = "DisplayName";

            /// <summary>
            /// The display name for the gui element which bound this instance. It can be used for localization.
            /// </summary>
            public string DisplayName
            {
                get { return _displayName; }
                set
                {
                    NotifyPropertyChanging(NameDisplayName);
                    _displayName = value;
                    NotifyPropertyChanged(NameDisplayName);
                }
            }

            protected BindingBase() {}

            protected BindingBase(string displayName)
            {
                DisplayName = displayName;
            }

            public event PropertyChangedEventHandler PropertyChanged;
            public event PropertyChangingEventHandler PropertyChanging;

            protected void NotifyPropertyChanged(string name)
            {
                if (PropertyChanged == null) return;
                if (CheckAccess())
                    PropertyChanged.Invoke(this, new PropertyChangedEventArgs(name));
                else
                    Dispatcher.BeginInvoke(DispatcherPriority.Normal, (Action)(() => NotifyPropertyChanged(name)));
            }

            protected void NotifyPropertyChanging(string name)
            {
                if (PropertyChanging == null) return;
                if (CheckAccess())
                    PropertyChanging.Invoke(this, new PropertyChangingEventArgs(name));
                else
                    Dispatcher.BeginInvoke(DispatcherPriority.Normal, (Action)(() => NotifyPropertyChanging(name)));
            }
        }

    So is there a reason why I've never found something like that? Are there any issues I should be aware of? Regards

    Read the article

  • Linux 2.6.31 Scheduler and Multithreaded Jobs

    - by dsimcha
    I run massively parallel scientific computing jobs on a shared Linux computer with 24 cores. Most of the time my jobs are capable of scaling to 24 cores when nothing else is running on this computer. However, it seems like when even one single-threaded job that isn't mine is running, my 24-thread jobs (which I set for high nice values) only manage to get ~1800% CPU (using Linux notation). Meanwhile, about 500% of the CPU cycles (again, using Linux notation) are idle. Can anyone explain this behavior and what I can do about it to get all of the 23 cores that aren't being used by someone else? Notes: In case it's relevant, I have observed this on slightly different kernel versions, though I can't remember which off the top of my head. The CPU architecture is x64. Is it at all possible that the fact that my 24-core jobs are 32-bit and the other jobs I'm competing w/ are 64-bit is relevant? Edit: One thing I just noticed is that going up to 30 threads seems to alleviate the problem to some degree. It gets me up to ~2100% CPU.

    Read the article

  • iphone - memory leaks in separate thread

    - by Brodie4598
    I create a second thread to call a method that downloads several images using:

        [NSThread detachNewThreadSelector:@selector(downloadImages) toTarget:self withObject:nil];

    It works fine but I get a long list of leaks in the log similar to:

        2010-04-18 00:48:12.287 FS Companion[11074:650f] *** _NSAutoreleaseNoPool(): Object 0xbec2640 of class NSCFString autoreleased with no pool in place - just leaking Stack: (0xa58af 0xdb452 0x5e973 0x5e770 0x11d029 0x517fa 0x51708 0x85f2 0x3047d 0x30004 0x99481fbd 0x99481e42)
        2010-04-18 00:48:12.288 FS Companion[11074:650f] *** _NSAutoreleaseNoPool(): Object 0xbe01510 of class NSCFString autoreleased with no pool in place - just leaking Stack: (0xa58af 0xdb452 0x5e7a6 0x11d029 0x517fa 0x51708 0x85f2 0x3047d 0x30004 0x99481fbd 0x99481e42)
        2010-04-18 00:48:12.289 FS Companion[11074:650f] *** _NSAutoreleaseNoPool(): Object 0xbde6720 of class NSCFString autoreleased with no pool in place - just leaking Stack: (0xa58af 0xdb452 0x5ea73 0x5e7c2 0x11d029 0x517fa 0x51708 0x85f2 0x3047d 0x30004 0x99481fbd 0x99481e42)

    Can someone help me understand the problem?

    Read the article

  • Safe to update separate regions of a BufferedImage in separate threads?

    - by finnw
    I have a collection of BufferedImage instances: one main image and some subimages created by calling getSubimage on the main image. The subimages do not overlap. I am making modifications to the subimages, and I want to split this work into multiple threads, one per subimage. From my understanding of how BufferedImage, Raster and DataBuffer work, this should be safe because:

        - Each instance of BufferedImage (and its respective WritableRaster) is accessed from only one thread.
        - The shared ColorModel is immutable.
        - The DataBuffer has no fields that can be modified (the only thing that can change is elements of the backing array).
        - Modifying disjoint segments of an array in separate threads is safe.

    However, I cannot find anything in the documentation that says it is definitely safe to do this. Can I assume it is safe? I know that it is possible to work on copies of the child Rasters, but I would prefer to avoid this because of memory constraints. Otherwise, is it possible to make the operation thread-safe without copying regions of the parent image? (A sketch of the fan-out is below.)

    Read the article
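
    Assuming the premise above holds (disjoint subimages, one thread each, no copies), the fan-out itself is straightforward with an ExecutorService. A minimal sketch, where the helper class, the strip count and the per-strip processing body are placeholders:

        import java.awt.image.BufferedImage;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        final class StripProcessor {
            // Processes each horizontal strip of 'image' on its own thread and waits
            // for all of them. Assumes the image height divides evenly into 'strips'.
            static void processInStrips(BufferedImage image, int strips) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(strips);
                List<Callable<Void>> tasks = new ArrayList<Callable<Void>>();
                int stripHeight = image.getHeight() / strips;
                for (int i = 0; i < strips; i++) {
                    final BufferedImage strip =
                            image.getSubimage(0, i * stripHeight, image.getWidth(), stripHeight);
                    tasks.add(new Callable<Void>() {
                        public Void call() {
                            // ... modify only pixels that lie inside 'strip' here ...
                            return null;
                        }
                    });
                }
                pool.invokeAll(tasks); // blocks until every strip has been handled
                pool.shutdown();
            }
        }

    Whether the premise itself is guaranteed by the Java2D classes is a separate question, but the hand-off through invokeAll (and Future.get) at least establishes the memory-visibility edge, so the writes are visible to the submitting thread afterwards.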

  • What does binding mean exactly?

    - by Lily
    I always see people mention "Python bindings", "C# bindings", etc. when I am actually using their C++ libraries. What does binding mean exactly? If the library is written in C, does a Python binding mean that they used a SWIG-like tool to generate a Python interface? I'm a newbie in this field, and any suggestions are welcome. (A small illustration is below.)

    Read the article
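
    A binding is just a thin layer that exposes a library written in one language to callers in another: the Python/C#/Java side declares the entry points, the real work stays in the native library, and tools such as SWIG, ctypes, JNI or P/Invoke only automate the glue. A minimal hand-written example of the idea using Java/JNI; the class, method and library names here are made up for illustration:

        // Java side of a hypothetical binding: it only *declares* the function;
        // the implementation lives in a native shared library written in C/C++.
        public class NativeMathBinding {

            static {
                // Loads libnativemath.so (or nativemath.dll) at runtime
                System.loadLibrary("nativemath");
            }

            // Declared in Java, implemented in C against the underlying library.
            public static native double fastSqrt(double x);

            public static void main(String[] args) {
                System.out.println(fastSqrt(2.0)); // calls straight into native code
            }
        }

    A tool like SWIG simply generates this kind of declaration-plus-glue automatically from the C/C++ headers instead of having you write it by hand.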

  • C# TraceSource class in multithreaded application

    - by matti
    msdn: "Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe." it contains only instance methods. How should I use it in a way that all activity gets recorder by TextWriterTraceListener to a text file. Is one static member which all threads use (by calling) TraceEvent-method safe. (I've kind of asked this question in http://stackoverflow.com/questions/1901086/how-to-instantiate-c-tracesources-to-log-multithreaded-asp-net-2-0-web-applica, but I cannot just believe if somebody just says it's OK despite the documentation).

    Read the article
