Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 21/66 | < Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >

  • Converting to async/await using the async targeting package

    - by e4rthdog
    I have this: private void BtnCheckClick(object sender, EventArgs e) { var a = txtLot.Text; var b = cmbMcu.SelectedItem.ToString(); var c = cmbLocn.SelectedItem.ToString(); btnCheck.BackColor = Color.Red; var task = Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c)); task.ContinueWith(t => { btnCheck.BackColor = Color.Transparent; lblDescriptionValue.Text = t.Result.Description; lblItemCodeValue.Text = t.Result.Code; lblQuantityValue.Text = t.Result.AvailableQuantity.ToString(); }, TaskScheduler.FromCurrentSynchronizationContext()); LotFocus(true); } and I followed J. Skeet's advice to move to async/await in my .NET 4.0 app. I converted it into this: private async void BtnCheckClick(object sender, EventArgs e) { var a = txtLot.Text; var b = cmbMcu.SelectedItem.ToString(); var c = cmbLocn.SelectedItem.ToString(); btnCheck.BackColor = Color.Red; JDEItemLotAvailability itm = await Task.Factory.StartNew(() => Dal.GetLotAvailabilityF41021(a, b, c)); btnCheck.BackColor = Color.Transparent; lblDescriptionValue.Text = itm.Description; lblItemCodeValue.Text = itm.Code; lblQuantityValue.Text = itm.AvailableQuantity.ToString(); LotFocus(true); } It works fine. What confuses me is that I could have done it without using Task, just awaiting the method of my Dal directly, but that would mean modifying my Dal method, which is something I don't want. I would appreciate it if someone would explain to me in "plain" words whether what I did is optimal or not, and why. Thanks. P.S. My Dal method: public bool CheckLotExistF41021(string _lot, string _mcu, string _locn) { using (OleDbConnection con = new OleDbConnection(this.conString)) { OleDbCommand cmd = new OleDbCommand(); cmd.CommandText = "select lilotn from proddta.f41021 " + "where lilotn = ? and trim(limcu) = ? and lilocn= ?"; cmd.Parameters.AddWithValue("@lotn", _lot); cmd.Parameters.AddWithValue("@mcu", _mcu); cmd.Parameters.AddWithValue("@locn", _locn); cmd.Connection = con; con.Open(); OleDbDataReader rdr = cmd.ExecuteReader(); bool _retval = rdr.HasRows; rdr.Close(); con.Close(); return _retval; } }

    Read the article

  • DataTable.WriteXml on background thread

    - by Sheraz KHan
    I am trying to serialize DataTables in a background thread and it's failing. Any ideas? [Test] public void Async_Writing_DataTables() { string path = @"C:\Temp\SerialzeData"; if (!Directory.Exists(path)) { Directory.CreateDirectory(path); } Assert.IsTrue(Directory.Exists(path)); Thread thread1 = new Thread(new ThreadStart(delegate { object lockObject = new object(); for (int index = 0; index < 10; index++) { lock (lockObject) { DataTable table = new DataTable("test"); table.WriteXml(Path.Combine(path, table.TableName + index + ".xml")); } } })); thread1.Start(); }

    Read the article

  • Why does C# thread die?

    - by JackN
    This is my 1st C# project so I may be doing something obviously improper in the code below. I am using .NET, WinForms (I think), and this is a desktop application until I get the bugs out. UpdateGui() uses Invoke((MethodInvoker)delegate to update various GUI controls based on received serial data and sends a GetStatus() command out the serial port 4 times a second. Thread Read() reads the response from serial port whenever it arrives which should be near immediate. SerialPortFixer is a SerialPort IOException Workaround in C# I found at http://zachsaw.blogspot.com/2010/07/serialport-ioexception-workaround-in-c.html. After one or both threads die I'll see something like The thread 0x1288 has exited with code 0 (0x0). in the debug code output. Why do UpdateGui() and/or Read() eventually die? public partial class UpdateStatus : Form { private readonly byte[] Command = new byte[32]; private readonly byte[] Status = new byte[32]; readonly Thread readThread; private static readonly Mutex commandMutex = new Mutex(); private static readonly Mutex statusMutex = new Mutex(); ... public UpdateStatus() { InitializeComponent(); SerialPortFixer.Execute("COM2"); if (serialPort1.IsOpen) { serialPort1.Close(); } try { serialPort1.Open(); } catch (Exception e) { labelWarning.Text = LOST_COMMUNICATIONS + e; labelStatus.Text = LOST_COMMUNICATIONS + e; labelWarning.Visible = true; } readThread = new Thread(Read); readThread.Start(); new Timer(UpdateGui, null, 0, 250); } static void ProcessStatus(byte[] status) { Status.State = (State) status[4]; Status.Speed = status[6]; // MSB Status.Speed *= 256; Status.Speed += status[5]; var Speed = Status.Speed/GEAR_RATIO; Status.Speed = (int) Speed; ... } public void Read() { while (serialPort1 != null) { try { serialPort1.Read(Status, 0, 1); if (Status[0] != StartCharacter[0]) continue; serialPort1.Read(Status, 1, 1); if (Status[1] != StartCharacter[1]) continue; serialPort1.Read(Status, 2, 1); if (Status[2] != (int)Command.GetStatus) continue; serialPort1.Read(Status, 3, 1); ... statusMutex.WaitOne(); ProcessStatus(Status); Status.update = true; statusMutex.ReleaseMutex(); } catch (Exception e) { Console.WriteLine(@"ERROR! Read() " + e); } } } public void GetStatus() { const int parameterLength = 0; // For GetStatus statusMutex.WaitOne(); Status.update = false; statusMutex.ReleaseMutex(); commandMutex.WaitOne(); if (!SendCommand(Command.GetStatus, parameterLength)) { Console.WriteLine(@"ERROR! SendCommand(GetStatus)"); } commandMutex.ReleaseMutex(); } private void UpdateGui(object x) { try { Invoke((MethodInvoker)delegate { Text = DateTime.Now.ToLongTimeString(); statusMutex.WaitOne(); if (Status.update) { if (Status.Speed > progressBarSpeed.Maximum) { Status.Speed = progressBarSpeed.Maximum; } progressBarSpeed.Value = Status.Speed; labelSpeed.Text = Status.Speed + RPM; ... } else { labelWarning.Text = LOST_COMMUNICATIONS; labelStatus.Text = LOST_COMMUNICATIONS; labelWarning.Visible = true; } statusMutex.ReleaseMutex(); GetStatus(); }); } catch (Exception e) { Console.WriteLine(@"ERROR! UpdateGui() " + e); } } }

    Read the article

  • Java Synchronized List Deadlock

    - by portoalet
    From Effective Java 2nd edition item 67 page 266-268: The background thread calls s.removeObserver, which attempts to lock observers, but it can’t acquire the lock, because the main thread already has the lock. All the while, the main thread is waiting for the background thread to finish removing the observer, which explains the deadlock. I am trying to find out which threads deadlock in the main method by using ThreadMXBean (http://stackoverflow.com/questions/1102359/programmatic-deadlock-detection-in-java) , but why does it not return the deadlocked threads? I used a new Thread to run the ThreadMXBean detection. public class ObservableSet<E> extends ForwardingSet<E> { public ObservableSet(Set<E> set) { super(set); } private final List<SetObserver<E>> observers = new ArrayList<SetObserver<E>>(); public void addObserver(SetObserver<E> observer) { synchronized(observers) { observers.add(observer); } } public boolean removeObserver(SetObserver<E> observer) { synchronized(observers) { return observers.remove(observer); } } private void notifyElementAdded(E element) { synchronized(observers) { for (SetObserver<E> observer : observers) observer.added(this, element); } } @Override public boolean add(E element) { boolean added = super.add(element); if (added) notifyElementAdded(element); return added; } @Override public boolean addAll(Collection<? extends E> c) { boolean result = false; for (E element : c) result|=add(element); //callsnotifyElementAdded return result; } public static void main(String[] args) { ObservableSet<Integer> set = new ObservableSet<Integer>(new HashSet<Integer>()); final ThreadMXBean threadMxBean = ManagementFactory.getThreadMXBean(); Thread t = new Thread(new Runnable() { @Override public void run() { while( true ) { long [] threadIds = threadMxBean.findDeadlockedThreads(); if( threadIds != null) { ThreadInfo[] infos = threadMxBean.getThreadInfo(threadIds); for( ThreadInfo threadInfo : infos) { StackTraceElement[] stacks = threadInfo.getStackTrace(); for( StackTraceElement stack : stacks ) { System.out.println(stack.toString()); } } } try { System.out.println("Sleeping.."); TimeUnit.MILLISECONDS.sleep(1000); } catch (InterruptedException e) { // TODO Auto-generated catch block e.printStackTrace(); } } } }); t.start(); set.addObserver(new SetObserver<Integer>() { public void added(ObservableSet<Integer> s, Integer e) { ExecutorService executor = Executors.newSingleThreadExecutor(); final SetObserver<Integer> observer = this; try { executor.submit(new Runnable() { public void run() { s.removeObserver(observer); } }).get(); } catch (ExecutionException ex) { throw new AssertionError(ex.getCause()); } catch (InterruptedException ex) { throw new AssertionError(ex.getCause()); } finally { executor.shutdown(); } } }); for (int i = 0; i < 100; i++) set.add(i); } } public interface SetObserver<E> { // Invoked when an element is added to the observable set void added(ObservableSet<E> set, E element); } // ForwardingSet<E> simply wraps another Set and forwards all operations to it.
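
    One likely reason findDeadlockedThreads() stays null here: it only reports cycles of threads that are blocked waiting to acquire monitors or ownable synchronizers, and in this program the main thread is parked inside Future.get() (a condition wait, not a lock acquisition) while the executor thread is the one blocked on the observers monitor, so there is no lock cycle for the bean to see. A small check that could be added inside the watchdog loop to confirm the thread states (dumpAllThreads is standard ThreadMXBean API since Java 6):

        // Dump every thread's state: expect main to be WAITING (inside Future.get())
        // and the executor's pool thread to be BLOCKED on the observers monitor.
        for (ThreadInfo info : threadMxBean.dumpAllThreads(true, true)) {
            System.out.printf("%s: %s (waiting on %s)%n",
                    info.getThreadName(), info.getThreadState(), info.getLockName());
        }

    findMonitorDeadlockedThreads() has the same limitation, so for this particular hang a full thread dump is the more informative diagnostic.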

    Read the article

  • Symbian Qt threading

    - by Umesha MS
    Hi, 1) In Symbian C++, threads are not recommended; instead, active objects are recommended for multitasking. Presently I am using Qt to develop an application on Symbian. Since there is no active object in Qt, I thought of using a thread. My question is: can I use a thread, and is it recommended? If it is not recommended, how do I achieve multitasking? 2) I have created a sample thread class as shown below. When I call the test function from the constructor of the main window, the thread starts but the UI hangs; in fact, the main window itself is not displayed. Please help me to solve the problem. class CSampleThread: public QThread { Q_OBJECT public: CSampleThread(QObject *parent = 0) : QThread(parent) {} virtual ~CSampleThread() {} void test(){ QThread::start(LowPriority); } protected: void run() { while(true){} } };

    Read the article

  • Threadsafe binding with DispatcherObject.CheckAccess()

    - by maffe
    Hi, according to this, I can achieve thread safety with large overhead. I wrote the following class and use it. It works fine. public abstract class BindingBase : DispatcherObject, INotifyPropertyChanged, INotifyPropertyChanging { private string _displayName; private const string NameDisplayName = "DisplayName"; /// <summary> The display name for the gui element which bound this instance. It can be used for localization. </summary> public string DisplayName { get { return _displayName; } set { NotifyPropertyChanging(NameDisplayName); _displayName = value; NotifyPropertyChanged(NameDisplayName); } } protected BindingBase() {} protected BindingBase(string displayName) { DisplayName = displayName; } public event PropertyChangedEventHandler PropertyChanged; public event PropertyChangingEventHandler PropertyChanging; protected void NotifyPropertyChanged(string name) { if (PropertyChanged == null) return; if (CheckAccess()) PropertyChanged.Invoke(this, new PropertyChangedEventArgs(name)); else Dispatcher.BeginInvoke(DispatcherPriority.Normal, (Action)(() => NotifyPropertyChanged(name))); } protected void NotifyPropertyChanging(string name) { if (PropertyChanging == null) return; if (CheckAccess()) PropertyChanging.Invoke(this, new PropertyChangingEventArgs(name)); else Dispatcher.BeginInvoke(DispatcherPriority.Normal, (Action)(() => NotifyPropertyChanging(name))); } } So is there a reason why I've never found something like that? Are there any issues I should be aware of? Regards

    Read the article

  • Why does one loop take longer to detect a shared memory update than another loop?

    - by Joseph Garvin
    I've written a 'server' program that writes to shared memory, and a client program that reads from the memory. The server has different 'channels' that it can be writing to, which are just different linked lists that it's appending items to. The client is interested in some of the linked lists, and wants to read every node that's added to those lists as it comes in, with the minimum latency possible. I have 2 approaches for the client: For each linked list, the client keeps a 'bookmark' pointer to keep its place within the linked list. It round-robins the linked lists, iterating through all of them over and over (it loops forever), moving each bookmark one node forward each time if it can. Whether it can is determined by the value of a 'next' member of the node. If it's non-null, then jumping to the next node is safe (the server switches it from null to non-null atomically). This approach works OK, but if there are a lot of lists to iterate over, and only a few of them are receiving updates, the latency gets bad. The server gives each list a unique ID. Each time the server appends an item to a list, it also appends the ID number of the list to a master 'update list'. The client only keeps one bookmark, a bookmark into the update list. It endlessly checks if the bookmark's next pointer is non-null ( while(node->next_ == NULL) {} ), if so moves ahead, reads the ID given, and then processes the new node on the linked list that has that ID. This, in theory, should handle large numbers of lists much better, because the client doesn't have to iterate over all of them each time. When I benchmarked the latency of both approaches (using gettimeofday), to my surprise #2 was terrible. The first approach, for a small number of linked lists, would often be under 20us of latency. The second approach would have short stretches of low latency but would often be between 4,000-7,000us! Through inserting gettimeofday's here and there, I've determined that all of the added latency in approach #2 is spent in the loop repeatedly checking if the next pointer is non-null. This is puzzling to me; it's as if the change in one process is taking longer to 'publish' to the second process with the second approach. I assume there's some sort of cache interaction going on I don't understand. What's going on?

    Read the article

  • QProcess, QEventLoop - of any use for parallel-processing

    - by dlib
    I wonder whether I could use QEventLoop (QProcess?) to parallelize multiple calls to the same function with Qt. What precisely is the difference from QtConcurrent or QThread? What are a process and an event loop, more precisely? I read that QCoreApplication must call exec() as early as possible in the main() method, so I wonder how it differs from the main thread. Could you point me to some good references on processes and threads with Qt? I went through the official docs and these things remain unclear. Thanks and regards.

    Read the article

  • Semaphore race condition?

    - by poindexter12
    I have created a "Manager" class that contains a limited set of resources. The resources are stored in the "Manager" as a Queue. I initialize the Queue and a Semaphore to the same size, using the semaphore to block a thread if there are no resources available. I have multiple threads calling into this class to request a resource. Here is the pseudocode: public IResource RequestResource() { IResource resource = null; _semaphore.WaitOne(); lock (_syncLock) { resource = _resources.Dequeue(); } return resource; } public void ReleaseResource(IResource resource) { lock (_syncLock) { _resources.Enqueue(resource); } _semaphore.Release(); } While running this application, it seems to run fine for a while. Then, it seems like my Queue is giving out the same object. Does this seem like it's possible? I'm pulling my hair out over here, and any help would be greatly appreciated. Feel free to ask for more information if you need it. Thanks!
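
    For comparison (sketched in Java, since the pattern is language-neutral), the same resource pool can be collapsed into a single blocking queue, which removes the need to keep a semaphore count and a queue in step by hand; the class and method names below are illustrative, not from the question:

        import java.util.Collection;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        // Illustrative pool: take() blocks while the pool is empty, so no separate
        // semaphore is needed and the queue itself provides the mutual exclusion.
        class ResourcePool<R> {
            private final BlockingQueue<R> resources = new LinkedBlockingQueue<R>();

            ResourcePool(Collection<R> initial) {
                resources.addAll(initial);
            }

            R requestResource() throws InterruptedException {
                return resources.take();      // blocks until a resource is returned
            }

            void releaseResource(R resource) {
                resources.add(resource);      // must be called exactly once per request
            }
        }

    Whatever the structure, a release that runs twice for the same object (for example from both a finally block and an explicit call) would make the pool hand the same resource to two callers, which would match the symptom described.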

    Read the article

  • Thread safety in Singleton

    - by Robert
    I understand that double-checked locking in Java is broken, so what are the best ways to make singletons thread-safe in Java? The first thing that springs to my mind is: class Singleton{ private static Singleton instance; private Singleton(){} public static synchronized Singleton getInstance(){ if(instance == null) instance = new Singleton(); return instance; } } Does this work? If so, is it the best way? (I guess that depends on circumstances, so stating when a particular technique is best would be useful.)
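
    The synchronized accessor above does work; a common alternative that avoids taking the lock on every call is the lazy initialization holder class idiom, sketched here:

        // Holder idiom: the nested class is not loaded until getInstance() is first
        // called, and class initialization is guaranteed thread-safe by the JLS.
        class Singleton {
            private Singleton() {}

            private static class Holder {
                static final Singleton INSTANCE = new Singleton();
            }

            public static Singleton getInstance() {
                return Holder.INSTANCE;
            }
        }

    If lazy initialization is not actually required, a plain static final field (or a single-element enum) is simpler still.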

    Read the article

  • I/O between AIR client using Native process and executable java .jar

    - by aseem behl
    I am using the Adobe AIR 2.0 native process API to launch a Java executable jar. I/O is handled by writing to the input stream of the Java process and reading from the output stream. The application is event-based, where several events are fired from the server. We catch these events in Java code, handle them, and write the output to the output stream using the synchronized static method below. public class ReaderWriter { static Logger logger = Logger.getLogger(ReaderWriter.class); public synchronized static void writeToAir(String output){ try{ byte[] byteArray = output.getBytes(); DataOutputStream dataOutputStream = new DataOutputStream(System.out); dataOutputStream.write(byteArray); dataOutputStream.flush(); } catch (IOException e) { logger.info("Exception while writing the output. " + e); } } } The issue is that some messages are lost in transfer and not all messages reach the AIR client. If I run the Java application from the console, I receive all the messages. It would be great if somebody could point out what I am missing. Following are some of the listeners used to send the event data to the AIR client. // class used to process Shutdown events from the Session private class SessionShutdownListener implements SessionListener{ public void onEvent(Event e) { Session.Shutdown sd = (Session.Shutdown) e; Session.ShutdownReason sr = sd.getReason(); String eventOutput = "eo;" + "Session Shutdown event ocurred reason=" + sr.strValue() + "\n"; ReaderWriter.writeToAir(eventOutput); } } // class used to process OperationSucceeded events from the Session private class SessionOperationSucceededListener implements SessionListener{ public void onEvent(Event e) { Session.OperationSucceeded os = (Session.OperationSucceeded) e; String eventOutput = "eo;" + "Session OperationSucceeded event ocurred" + "\n"; ReaderWriter.writeToAir(eventOutput); } }
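
    One thing worth checking before suspecting the writer: NativeProcess delivers standard output to the AIR side in arbitrary chunks, so unless each message is explicitly framed (for example newline-delimited) and reassembled from a buffer on the ActionScript side, events can appear to be "lost" when two of them arrive in one chunk. A minimal sketch of a framed writer under that assumption, keeping a single shared stream instead of wrapping System.out anew on every call:

        import java.io.BufferedWriter;
        import java.io.IOException;
        import java.io.OutputStreamWriter;

        public final class ReaderWriter {
            // One shared writer around System.out; every message ends with '\n' so the
            // AIR client can buffer its ProgressEvents and split on the delimiter.
            private static final BufferedWriter OUT =
                    new BufferedWriter(new OutputStreamWriter(System.out));

            public static synchronized void writeToAir(String output) {
                try {
                    OUT.write(output);
                    if (!output.endsWith("\n")) {
                        OUT.write('\n');
                    }
                    OUT.flush();
                } catch (IOException e) {
                    // logging omitted in this sketch
                }
            }

            private ReaderWriter() {}
        }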

    Read the article

  • Terminate long running thread in thread pool that was created using QueueUserWorkItem(win 32/nt5).

    - by Jake
    I am programming in a Win32 NT5 environment. I have a function that is going to be called many times. Each call is atomic. I would like to use QueueUserWorkItem to take advantage of multicore processors. The problem I am having is that I only want to give the function 3 seconds to complete. If it has not completed in 3 seconds, I want to terminate the thread. Currently I am doing something like this: HANDLE newThreadFuncCall= CreateThread(NULL,0,funcCall,&func_params,0,NULL); DWORD result = WaitForSingleObject(newThreadFuncCall, 3000); if(result == WAIT_TIMEOUT) { TerminateThread(newThreadFuncCall,WAIT_TIMEOUT); } I just spawn a single thread and wait 3 seconds or until it completes. Is there any way to do something similar but using QueueUserWorkItem to queue up the work? Thanks!

    Read the article

  • How to upload files to Azure in the background with Delphi and OmniThread?

    - by mamcx
    I have tried to upload 100+ files to Azure with Delphi. However, the calls block the main thread, so I want to do this with an async call or with a background thread. This is what I do now (as explained here): procedure TCloudManager.UploadTask(const input: TOmniValue; var output: TOmniValue); var FileTask:TFileTask; begin FileTask := input.AsRecord<TFileTask>; Upload(FileTask.BaseFolder, FileTask.LocalFile, FileTask.CloudFile); end; function TCloudManager.MassiveUpload(const BaseFolder: String; Files: TDictionary<String, String>): TStringList; var pipeline: IOmniPipeline; FileInfo : TPair<String,String>; FileTask:TFileTask; begin // set up pipeline pipeline := Parallel.Pipeline .Stage(UploadTask) .NumTasks(Environment.Process.Affinity.Count * 2) .Run; // insert URLs to be retrieved for FileInfo in Files do begin FileTask.LocalFile := FileInfo.Key; FileTask.CloudFile := FileInfo.Value; FileTask.BaseFolder := BaseFolder; pipeline.Input.Add(TOmniValue.FromRecord(FileTask)); end;//for pipeline.Input.CompleteAdding; // wait for pipeline to complete pipeline.WaitFor(INFINITE); end; However, this blocks too (why? I don't understand).

    Read the article

  • Multiple file descriptors to the same file, C

    - by Gigi
    I have a multithreaded application that is opening and reading the same file (not writing). I am opening a different file descriptor for each thread (but they all point to the same file). Each thread then reads the file and may close it and open it again if EOF is reached. Is this OK? If I perform fclose() on a file descriptor, does it affect the other file descriptors that point to the same file?

    Read the article

  • How to efficiently save changes made in UI/main thread with Core Data?

    - by Jaanus
    So, there have been several posts here about importing and saving data from an external data source into Core Data. Apple documents a reasonable pattern for this: "import and save on background thread, merge saved objects to main thread." All fine and good. I have a related but different problem: the user is modifying data in the UI and main thread, and thus modifies state of some objects in the managed object context (MOC). I would like to save these changes from time to time. What is a good way to do that? Now, you could say that I could do the same: create a background thread with its own MOC and pass the changed objectID-s there. The catch-22 for me with this is that an object's ID changes when it is saved, and I cannot guarantee the order of things happening. I may end up passing a different objectID into the background thread for the same object, based on whether the object has been previously saved or not, and I don't know if Core Data can resolve this and see that different objectID-s are pointing to the same object and not create duplicates for me. (I could test this, but I'm lazywebbing with this question first.) One thought I had: I could always do MOC saves on a background thread, and queue them up with operationqueue, so that there is always only one save in progress. I would not create a new MOC, I would just use the same MOC as in main thread. Now, this is not thread safe and when someone modifies the MOC in main thread while it is being saved in background thread, the results will probably be catastrophic. But, minus the thread safety, you can see what kind of solution I'd wish for. To be clear, the problem I need to fix is that if I just do the save in main thread, it blocks the UI for an unacceptably long period of time, I want to move the save to background thread. So, questions: what about the reasoning of an object ID changing during saving, and Core Data being able to resolve them to the same object? Would this be the right way of addressing this problem? any other good ways of doing this?

    Read the article

  • How to spin an independent dispatcher thread for a Silverlight UserControl

    - by ondesertverge
    I am trying to move a lot of different elements by 1 pixel very often and in parallel. Trying to do this on one dispatcher thread means that the elements are visited one after another. The result is that the more elements I have the slower they will all move. In WPF I was able to use a HostVisual as described here to solve this. I can't seem to find anything similar in Silverlight. Is this a drawback of the lightweight framework or is there something I haven't stumbled upon yet? I am using SL4.

    Read the article

  • Deadlock in Java

    - by israkir
    A long time ago, I saved a sentence from a Java reference book: "Java has no mechanism to handle deadlock. it won't even know deadlock occurred." (Head First Java, 2nd Edition, p. 516) So, what about it? Is there a way to catch a deadlock case in Java? I mean, is there a way for our code to understand that a deadlock has occurred?
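
    As quoted, the book is talking about recovery: the JVM will never break a deadlock for you. Detecting one from inside the program is possible, though, via ThreadMXBean. A minimal watchdog sketch (a daemon thread polling once per second; the class and thread names are illustrative):

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadInfo;
        import java.lang.management.ThreadMXBean;

        class DeadlockWatchdog {
            static void start() {
                Thread watchdog = new Thread(new Runnable() {
                    public void run() {
                        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
                        while (true) {
                            long[] ids = bean.findDeadlockedThreads();   // null if no deadlock exists
                            if (ids != null) {
                                for (ThreadInfo info : bean.getThreadInfo(ids)) {
                                    System.err.println("Deadlocked: " + info.getThreadName()
                                            + " waiting for " + info.getLockName());
                                }
                            }
                            try {
                                Thread.sleep(1000);
                            } catch (InterruptedException e) {
                                return;
                            }
                        }
                    }
                }, "deadlock-watchdog");
                watchdog.setDaemon(true);
                watchdog.start();
            }
        }

    Note that this detects threads blocked on monitors and ownable synchronizers; the only "handling" available is reporting, since there is no safe way to force the threads to release their locks.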

    Read the article

  • controlling threads flow

    - by owca
    I had a task to write a simple game simulating two players picking up 1-3 matches, one after another, until the pile is gone. I managed to do it with the computer choosing a random number of matches, but now I'd like to go further and allow humans to play the game. Here's what I already have: http://paste.pocoo.org/show/201761/ The Player class is a computer player, and PlayerMan should be a human being. The problem is that the PlayerMan thread should wait until a proper number of matches is entered, but I cannot make it work this way. The logic is as follows: the thread runs until matches equals zero. If the player number is correct at that moment, the function pickMatches() is called. After decreasing the number of matches on the table, the thread should wait and the other thread should be notified. I know I must use wait() and notify(), but I can't place them right. The Shared class keeps the current player and also the number of matches. public void suspendThread() { suspended = true; } public void resumeThread() { suspended = false; } @Override public void run(){ int matches=1; int which = 0; int tmp=0; Shared data = this.selectData(); String name = this.returnName(); int number = this.getNumber(); while(data.getMatches() != 0){ while(!suspended){ try{ which = data.getCurrent(); if(number == which){ matches = pickMatches(); tmp = data.getMatches() - matches; data.setMatches(tmp, number); if(data.getMatches() == 0){ System.out.println(" "+ name+" takes "+matches+" matches."); System.out.println("Winner is player: "+name); stop(); } System.out.println(" "+ name+" takes "+matches+" matches."); if(number != 0){ data.setCurrent(0); } else{ data.setCurrent(1); } } this.suspendThread(); notifyAll(); wait(); }catch(InterruptedException exc) {} } } } @Override synchronized public int pickMatches(){ Scanner scanner = new Scanner(System.in); int n = 0; Shared data = this.selectData(); System.out.println("Choose amount of matches (from 1 to 3): "); if(data.getMatches() == 1){ System.out.println("There's only 1 match left !"); while(n != 1){ n = scanner.nextInt(); } } else{ do{ n = scanner.nextInt(); } while(n <= 1 && n >= 3); } return n; } }
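
    As a rough sketch of where wait() and notifyAll() could go (reusing the method names from the code above, but assuming name and number are stored as fields and that both players synchronize on the shared data object), each player's loop would wait while it is not its turn, make its move, pass the turn, and then notify:

        // Sketch only, not a drop-in replacement for the run() method above.
        public void run() {
            while (true) {
                synchronized (data) {
                    while (data.getCurrent() != number && data.getMatches() > 0) {
                        try {
                            data.wait();                 // block until the other player notifies
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                    if (data.getMatches() == 0) {
                        return;                          // the other player took the last match
                    }
                    int taken = pickMatches();           // 1-3, random or read from the console
                    data.setMatches(data.getMatches() - taken, number);
                    System.out.println(name + " takes " + taken + " match(es).");
                    data.setCurrent(1 - number);         // pass the turn to the other player
                    data.notifyAll();                    // wake the waiting player
                }
            }
        }

    The human player's Scanner prompt then runs inside pickMatches() while that player holds the monitor, which is fine in a strictly turn-based game because the other player has nothing to do until the move is made.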

    Read the article

  • Why is my multithreaded Java program not maxing out all my cores on my machine?

    - by James B
    Hi, I have a program that starts up and creates an in-memory data model and then creates a (command-line-specified) number of threads to run several string checking algorithms against an input set and that data model. The work is divided amongst the threads along the input set of strings, and then each thread iterates the same in-memory data model instance (which is never updated again, so there are no synchronization issues). I'm running this on a Windows 2003 64-bit server with 2 quadcore processors, and from looking at Windows task Manager they aren't being maxed-out, (nor are they looking like they are being particularly taxed) when I run with 10 threads. Is this normal behaviour? It appears that 7 threads all complete a similar amount of work in a similar amount of time, so would you recommend running with 7 threads instead? Should I run it with more threads?...Although I assume this could be detrimental as the JVM will do more context switching between the threads. Alternatively, should I run it with fewer threads? Alternatively, what would be the best tool I could use to measure this?...Would a profiling tool help me out here - indeed, is one of the several profilers better at detecting bottlenecks (assuming I have one here) than the rest? Note, the server is also running SQL Server 2005 (this may or may not be relevant), but nothing much is happening on that database when I am running my program. Note also, the threads are only doing string matching, they aren't doing any I/O or database work or anything else they may need to wait on. Thanks in advance, -James
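
    Independent of the profiler question, sizing the pool from the machine and partitioning the input up front makes this kind of comparison easier to reason about; a sketch under the stated assumptions (read-only shared model, pure CPU-bound string matching; the Matcher interface and the partitioning are placeholders, not the program's real types):

        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        class ParallelMatch {
            interface Matcher { void check(String s, Object model); }    // placeholder for the real algorithm

            static void runAll(List<List<String>> partitions, final Object model, final Matcher matcher)
                    throws InterruptedException {
                int cores = Runtime.getRuntime().availableProcessors();  // 8 on a 2 x quad-core box
                ExecutorService pool = Executors.newFixedThreadPool(cores);
                for (final List<String> part : partitions) {
                    pool.execute(new Runnable() {
                        public void run() {
                            for (String s : part) {
                                matcher.check(s, model);                 // read-only model, no locking
                            }
                        }
                    });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }
        }

    If the cores still are not saturated with one task per core, the usual suspects are hidden synchronization (for example a shared Random or logger), memory bandwidth, or garbage collection rather than context switching.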

    Read the article

  • Deadlock sample in C#.net

    - by DotNetBeginner
    Can anybody give simple deadlock sample code in C#.NET? And please tell me the simplest way to find the deadlock in the C#.NET code sample. (Maybe a tool which will detect the deadlock in the given sample code.) NOTE: I have VS 2008.
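
    The shape of the classic sample is language-neutral: two threads each take one lock and then try to take the other's. It is sketched here in Java; the C# transliteration just replaces the synchronized blocks with lock statements and Thread.sleep with Thread.Sleep:

        public class DeadlockSample {
            private static final Object lockA = new Object();
            private static final Object lockB = new Object();

            public static void main(String[] args) {
                new Thread(new Runnable() {
                    public void run() {
                        synchronized (lockA) {
                            pause();                              // let the other thread grab lockB
                            synchronized (lockB) { System.out.println("never printed"); }
                        }
                    }
                }).start();
                new Thread(new Runnable() {
                    public void run() {
                        synchronized (lockB) {
                            pause();                              // let the other thread grab lockA
                            synchronized (lockA) { System.out.println("never printed"); }
                        }
                    }
                }).start();
            }

            private static void pause() {
                try { Thread.sleep(100); } catch (InterruptedException ignored) {}
            }
        }

    Once it hangs, pausing the process in the debugger and inspecting each thread's call stack shows both threads blocked waiting for the lock the other one holds, which is the simplest practical way to see the deadlock.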

    Read the article

  • Delphi TerminateThread equivalent for Android

    - by Martin
    I have been discussing a problem on the Indy forums related to a thread that is not terminating correctly under Android. They have suggested that there may be an underlying problem with TThread for ARC. Because this problem is holding up the release of a product, a workaround would be to simply forcibly terminate the thread. I know this is not nice, but in this case I can't think of a side effect from doing so. It's wrong, but it's better than a deadlocked app. Is there a way to forcibly terminate a thread under Android like TerminateThread does under Windows? Martin

    Read the article

  • Synchronizing ASP.NET MVC action methods with ReaderWriterLockSlim

    - by James D
    Any obvious issues/problems/gotchas with synchronizing access (in an ASP.NET MVC blogging engine) to a shared object model (NHibernate, but it could be anything) at the Controller/Action level via ReaderWriterLockSlim? (Assume the object model is very large and expensive to build per-request, so we need to share it among requests.) Here's how a typical "Read Post" action would look. Enter the read lock, do some work, exit the read lock. public ActionResult ReadPost(int id) { // ReaderWriterLockSlim allows multiple concurrent writes; this method // only blocks in the unlikely event that some other client is currently // writing to the model, which would only happen if a comment were being // submitted or a new post were being saved. _lock.EnterReadLock(); try { // Access the model, fetch the post with specificied id // Pseudocode, etc. Post p = TheObjectModel.GetPostByID(id); ActionResult ar = View(p); return ar; } finally { // Under all code paths, we must release the read lock _lock.ExitReadLock(); } } Meanwhile, if a user submits a comment or an author authors a new post, they're going to need write access to the model, which is done roughly like so: [AcceptVerbs(HttpVerbs.Post)] public ActionResult SaveComment(/* some posted data */) { // try/finally omitted for brevity _lock.EnterWriteLock(); // Save the comment to the DB, update the model to include the comment, etc. _lock.ExitWriteLock(); } Of course, this could also be done by tagging those action methods with some sort of "synchronized" attribute... but however you do it, my question is is this a bad idea? ps. ReaderWriterLockSlim is optimized for multiple concurrent reads, and only blocks if the write lock is held. Since writes are so infrequent (1000s or 10,000s or 100,000s of reads for every 1 write), and since they're of such a short duration, the effect is that the model is synchronized , and almost nobody ever locks, and if they do, it's not for very long.
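
    For comparison only (the question is C#, but the locking shape carries over), the same read-mostly discipline in Java is usually written with ReentrantReadWriteLock and the identical try/finally pattern; the model and entity types below are placeholders:

        import java.util.concurrent.locks.ReentrantReadWriteLock;

        class Post {}
        class Comment {}
        class PostModel {                                       // placeholder for the shared object model
            Post getPostById(int id) { return new Post(); }
            void addComment(Comment c) {}
        }

        class BlogModelGate {
            private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
            private final PostModel model = new PostModel();

            Post readPost(int id) {
                lock.readLock().lock();                         // many readers may hold this at once
                try {
                    return model.getPostById(id);
                } finally {
                    lock.readLock().unlock();
                }
            }

            void saveComment(Comment c) {
                lock.writeLock().lock();                        // exclusive; readers wait briefly
                try {
                    model.addComment(c);
                } finally {
                    lock.writeLock().unlock();
                }
            }
        }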

    Read the article

  • thread reaches end but isn't removed

    - by pstanton
    I create a bunch of threads to do some processing: new Thread("upd-" + id){ @Override public void run(){ try{ doSomething(); } catch (Throwable e){ LOG.error("error", e); } finally{ LOG.debug("thread death"); } } }.start(); I know I should be using a thread pool, but I need to understand the following problem before I change it: I'm using Eclipse's debugger and looking at the threads in the debug pane, which lists active threads. Many of them complete as you would expect, and are removed from the debug pane; however, some seem to stay in the list of active threads even though the log shows the "thread death" entry for these. When I attempt to debug these threads, they either do not pause for debugging or show an error dialog: "A timeout occurred while retrieving stack frames for thread: upd-...". There is some synchronization going on within the doSomething() call, but I'm fairly sure it's OK, and since the "thread death" log is being called I'm assuming these threads aren't deadlocked in that method. I don't do any Thread.join()s; however, I do call a third-party API, but I doubt they do either. Can anyone think of another reason these threads are lingering? Thanks. EDIT: I created this test to check the garbage collection theory: Thread thread = new Thread("!!!!!!!!!!!!!!!!") { @Override public void run() { System.out.println("running"); ThreadUs.sleepQuiet(5000); System.out.println("finished"); // <-- thread removed from list here } }; thread.start(); ThreadUs.sleepQuiet(10000); System.out.println(thread.isAlive()); // <-- thread already removed from list but hasn't been GC'd ThreadUs.sleepQuiet(10000); This proves that it is nothing to do with garbage collection, as Eclipse removes the thread from the thread list as soon as it completes and isn't waiting for the object to be dereferenced/GC'd.
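
    Since the post already says a pool is the end goal, here is a sketch of the same fire-and-forget work on a fixed pool with a naming ThreadFactory, which at least makes the surviving threads easy to recognise in the debug pane (the pool size and class name are arbitrary; doSomething and the logging calls are from the post):

        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ThreadFactory;
        import java.util.concurrent.atomic.AtomicInteger;

        class Updater {
            private final ExecutorService pool = Executors.newFixedThreadPool(8, new ThreadFactory() {
                private final AtomicInteger n = new AtomicInteger();
                public Thread newThread(Runnable r) {
                    return new Thread(r, "upd-pool-" + n.incrementAndGet());  // stable, recognisable names
                }
            });

            void submitUpdate(String id) {
                pool.execute(new Runnable() {
                    public void run() {
                        try {
                            doSomething();                 // same body as the hand-rolled thread
                        } catch (Throwable e) {
                            // LOG.error("error", e);      // logging as in the original
                        } finally {
                            // LOG.debug("task finished");
                        }
                    }
                });
            }

            private void doSomething() { /* work for one update */ }
        }

    A thread that has returned from run() is no longer alive (isAlive() returns false, getState() is TERMINATED) even while its Thread object is still reachable, which is exactly what the test in the edit shows.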

    Read the article

< Previous Page | 17 18 19 20 21 22 23 24 25 26 27 28  | Next Page >