Search Results

Search found 1638 results on 66 pages for 'multithreading'.

Page 51/66

  • Using locks inside a loop

    - by Xaqron
    // Member variable
        private readonly object _syncLock = new object();

        // Now inside a static method
        foreach (var lazyObject in plugins)
        {
            if ((string)lazyObject.Metadata["key"] == "something")
            {
                lock (_syncLock)
                {
                    if (!lazyObject.IsValueCreated)
                        lazyObject.Value.DoSomething();
                }
                return lazyObject.Value;
            }
        }

    Here I need synchronized access per iteration. Many threads iterate this loop and, based on the key they are looking for, a lazy instance is created and returned. lazyObject should not be created more than once. Although the Lazy class is designed for exactly this, and despite the lock, under heavy threading I end up with more than one instance created (I track this with an Interlocked.Increment on a volatile shared int and log it somewhere). The problem is that I don't have access to the definition of Lazy, and MEF controls how the Lazy class creates its objects. My questions: 1) Why doesn't the lock work? 2) Should I use an array of locks instead of a single lock to improve performance?
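
    For illustration, a minimal sketch of one way to give every thread a single shared gate per key, assuming the containing class (or its lock field) may exist more than once; the class and method names below are hypothetical and not taken from the question.

        using System;
        using System.Collections.Concurrent;

        static class PluginCache
        {
            // One lock object per metadata key, shared by every thread and every
            // instance of the caller; GetOrAdd is itself thread-safe.
            private static readonly ConcurrentDictionary<string, object> _locks =
                new ConcurrentDictionary<string, object>();

            public static Lazy<T> InitializeOnce<T>(string key, Lazy<T> lazyObject)
            {
                lock (_locks.GetOrAdd(key, _ => new object()))
                {
                    // Double-checked: only the first thread to take this key's lock
                    // forces creation of the underlying value.
                    if (!lazyObject.IsValueCreated)
                    {
                        var value = lazyObject.Value;   // triggers creation under the lock
                    }
                }
                return lazyObject;
            }
        }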

    Read the article

  • Multiple instances of the same Async task (Windows Phone)

    - by Bart Teunissen
    After googling for ages and reading about async tasks in books, I made my first program with an async task in it, only to find out that I can only start one task. I want to run the task more than once, but that doesn't seem to work. To be a little clearer, here are some parts of my code:

        InitFunction(var);

    This is the task itself:

        public async Task InitFunction(string var)
        {
            _VarHandle = await _AdsClient.GetSymhandleByNameAsync(var);
            _Data = await _AdsClient.ReadAsync<T>(_VarHandle);
            _AdsClient.AddNotificationAsync<T>(_VarHandle, AdsTransmissionMode.OnChange, 1000, this);
        }

    This works like a charm when I execute the task only once. But is there a possibility to run it multiple times, something like this?

        InitFunction(var1);
        InitFunction(var2);
        InitFunction(var3);

    Because if I do this now (multiple tasks at once), the task it wants to start is still running and it throws an exception. If someone could help me with this, that would be awesome! ~ Bart
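
    One pattern that may help, sketched below on the assumption that the underlying client tolerates concurrent calls (the method name RunAllAsync and the literal arguments are illustrative, not from the original post): since InitFunction returns a Task, keep each returned Task and await them together with Task.WhenAll instead of discarding them.

        using System.Threading.Tasks;

        // Hypothetical caller: start all three initializations, then await the group.
        public async Task RunAllAsync()
        {
            var tasks = new[]
            {
                InitFunction("var1"),
                InitFunction("var2"),
                InitFunction("var3")
            };
            await Task.WhenAll(tasks);   // completes when every call has finished
        }

        // If the client only allows one outstanding call at a time, await each
        // call before starting the next:
        //     await InitFunction("var1");
        //     await InitFunction("var2");
        //     await InitFunction("var3");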

    Read the article

  • Thread pool with persistent worker instances

    - by Matt Smokey-waters Holmes
    So basically what I'm trying to do is queue up tasks in a thread pool to be executed as soon as a worker becomes free. I have found various examples of this, but in all cases the examples are set up to use a new Worker instance for each job; I want persistent workers. I'm trying to make an FTP backup tool. I have it working, but because of the limitations of a single connection it is slow. What I would ideally like is a single connection for scanning directories and building up a file list, then four workers to download those files. Here is an example of my worker:

        /**
         * FTP Worker
         */
        public class Worker implements Runnable {

            protected FTPClient _ftp;

            // Connection details
            protected String _host = "";
            protected String _user = "";
            protected String _pass = "";

            // Worker status
            protected boolean _working = false;

            public Worker(String host, String user, String pass) {
                this._host = host;
                this._user = user;
                this._pass = pass;
            }

            // Check if the worker is in use
            public boolean inUse() {
                return this._working;
            }

            @Override
            public void run() {
                this._ftp = new FTPClient();
                this._connect();
            }

            // Download a file from the FTP server
            public boolean download(String base, String path, String file) {
                this._working = true;
                boolean outcome = true;

                // Create the directory if it does not exist
                File pathDir = new File(base + path);
                if (!pathDir.exists()) {
                    pathDir.mkdirs();
                }

                // Download the file
                try {
                    OutputStream output = new FileOutputStream(base + path + file);
                    this._ftp.retrieveFile(file, output);
                    output.close();
                } catch (Exception e) {
                    outcome = false;
                } finally {
                    this._working = false;
                    return outcome;
                }
            }

            // Connect to the server
            protected boolean _connect() {
                try {
                    this._ftp.connect(this._host);
                    this._ftp.login(this._user, this._pass);
                } catch (Exception e) {
                    return false;
                }
                return this._ftp.isConnected();
            }

            // Disconnect from the server
            protected void _disconnect() {
                try {
                    this._ftp.disconnect();
                } catch (Exception e) { /* do nothing */ }
            }
        }

    Basically I want to be able to call Worker.download(...) for each task in a queue whenever a worker becomes available, without having to create a new connection to the FTP server for each download. Any help would be appreciated, as I've never used threads before and I'm going round in circles at the moment.

    Read the article

  • Multi-threading concept and lock in c#

    - by Neeraj
    I have read about lock, but I haven't really understood it at all. My question is: why do we use an otherwise unused object and lock on it, and how does that make something thread-safe or help with multi-threading? Isn't there another way to write thread-safe code?

        public class test
        {
            private object Lock { get; set; }
            ...
            lock (this.Lock)
            {
                ...
            }
            ...
        }

    Sorry if my question is very stupid, but I don't understand it, although I've used it many times.
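
    For what the pattern usually looks like, here is a minimal C# sketch (names are illustrative, not from the question). The locked object does nothing by itself; it is simply the token that every thread must acquire via Monitor before entering the block, so only one thread at a time can touch the shared state it guards.

        using System.Threading;

        public class Counter
        {
            // Dedicated, private, readonly lock object: nothing outside this
            // class can lock on it, so outside code cannot block us or cause
            // a deadlock by accident (one reason lock(this) is discouraged).
            private readonly object _sync = new object();
            private int _count;

            public void Increment()
            {
                lock (_sync)            // compiles to Monitor.Enter/Exit on _sync
                {
                    _count++;           // only one thread at a time runs this
                }
            }

            public int Read()
            {
                lock (_sync) { return _count; }
            }
        }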

    Read the article

  • Is this use of PreparedStatements in a Thread in JAVA correct?

    - by Gormcito
    I'm still an undergrad just working part time, so I'm always trying to be aware of better ways to do things. Recently I had to write a program for work where the main thread would spawn "task" threads (one for each db "task" record) which would perform some operations and then update the record to say that it has finished. Therefore I needed a database connection object and PreparedStatement objects in, or available to, the ThreadedTask objects. This is roughly what I ended up writing. Is creating a PreparedStatement object per thread a waste? I thought static PreparedStatements could create race conditions:

        Thread A: stmt.setInt();
        Thread B: stmt.setInt();
        Thread A: stmt.execute();
        Thread B: stmt.execute();

    A's version never gets executed. Is this thread safe? Is creating and destroying PreparedStatement objects that are always the same not a huge waste?

        public class ThreadedTask implements Runnable {
            private final PreparedStatement taskCompleteStmt;

            public ThreadedTask() {
                //...
                taskCompleteStmt = Main.db.prepareStatement(...);
            }

            public void run() {
                //...
                taskCompleteStmt.executeUpdate();
            }
        }

        public class Main {
            public static final Connection db = DriverManager.getConnection(...);
        }

    Read the article

  • How to terminate a managed thread blocked in unmanaged code?

    - by James Curran
    I have a managed thread which is blocked in unmanaged code (specifically, on a call to NamedPipeServerStream.WaitForConnection(), which ultimately calls into unmanaged code and does not offer a timeout). I want to shut the thread down neatly. Thread.Abort() has no effect until the code returns to the managed realm, which it won't do until a client makes a connection, which we can't wait for. I need a way to "shock" it out of the unmanaged code, or a way to just kill the thread even while it's in unmanaged land.
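
    One alternative worth considering, sketched here under the assumption that .NET 4.5 or later is available (the pipe name and method name are illustrative): do not block a thread at all, and use the cancellable WaitForConnectionAsync overload, so shutdown becomes a CancellationTokenSource.Cancel() call rather than an abort.

        using System;
        using System.IO.Pipes;
        using System.Threading;
        using System.Threading.Tasks;

        // Sketch: wait for a pipe client without parking a thread in unmanaged code.
        static async Task ServeAsync(CancellationToken token)
        {
            using (var server = new NamedPipeServerStream(
                "demo-pipe", PipeDirection.InOut, 1,
                PipeTransmissionMode.Byte, PipeOptions.Asynchronous))
            {
                try
                {
                    // Cancelling the token completes the wait with
                    // OperationCanceledException instead of needing Thread.Abort.
                    await server.WaitForConnectionAsync(token);
                    // ... handle the connected client ...
                }
                catch (OperationCanceledException)
                {
                    // shutdown was requested
                }
            }
        }

        // Elsewhere: var cts = new CancellationTokenSource();
        //            var serverTask = ServeAsync(cts.Token);
        //            ... later, to shut down: cts.Cancel();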

    Read the article

  • SQL Server stored procedure in multi threaded environments

    - by Shamika
    Hi, I need to execute some SQL Server stored procs in a thread-safe manner. At the moment I'm using software locks (C# locks) to achieve this, but I wonder what features SQL Server itself provides to achieve thread safety. There seem to be some table- and row-locking features built into SQL Server. Also, from a performance perspective, which is the better approach: software locks, or SQL Server's built-in locks? Thanks, Shamika
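
    For a purely server-side option, here is a hedged C# sketch using sp_getapplock (the connection string, resource name, and dbo.MyStoredProc are placeholders, and System.Data.SqlClient works equally well): an application lock taken inside a transaction serializes every caller that requests the same resource name, across processes and machines, without any C# lock.

        using System.Data;
        using Microsoft.Data.SqlClient;

        // Sketch: serialize callers inside the database with an application lock.
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                using (var acquire = new SqlCommand(
                    "EXEC sp_getapplock @Resource = @res, @LockMode = 'Exclusive', " +
                    "@LockOwner = 'Transaction', @LockTimeout = 10000;", conn, tx))
                {
                    acquire.Parameters.AddWithValue("@res", "MyCriticalSection");
                    acquire.ExecuteNonQuery();   // other callers of the same resource now wait
                }

                using (var work = new SqlCommand("dbo.MyStoredProc", conn, tx))
                {
                    work.CommandType = CommandType.StoredProcedure;
                    work.ExecuteNonQuery();
                }

                tx.Commit();   // the transaction-owned lock is released here
            }
        }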

    Read the article

  • Faking a Single Address Space

    - by dsimcha
    I have a large scientific computing task that parallelizes very well with SMP, but at too fine-grained a level to be easily parallelized via explicit message passing. I'd like to parallelize it across address spaces and physical machines. Is it feasible to create a scheduler that would parallelize already multithreaded code across multiple physical computers, under the following conditions:

    1. The code is already multithreaded and can scale pretty well on SMP configurations.
    2. The fact that not all of the threads are running in the same address space or on the same physical machine must be transparent to the program, even if this comes at a significant performance penalty in some use cases.
    3. You may assume that all of the physical machines involved are running operating systems and CPU architectures that are binary compatible.
    4. Things like locks and atomic operations may be slow (having network latency to deal with and all) but must "just work".

    Read the article

  • WPF: issue updating UI from background thread

    - by Ted Shaffer
    My code launches a background thread. The background thread makes changes and wants the UI in the main thread to update. The code that launches the thread and then waits looks something like:

        Thread fThread = new Thread(new ThreadStart(PerformSync));
        fThread.IsBackground = true;
        fThread.Start();
        fThread.Join();
        MessageBox.Show("Synchronization complete");

    When the background thread wants to update the UI, it sets a StatusMessage and calls the code below:

        static StatusMessage _statusMessage;

        public delegate void AddStatusDelegate();

        private void AddStatus()
        {
            AddStatusDelegate methodForUIThread = delegate
            {
                _statusMessageList.Add(_statusMessage);
            };
            this.Dispatcher.BeginInvoke(methodForUIThread, System.Windows.Threading.DispatcherPriority.Send);
        }

    _statusMessageList is an ObservableCollection that is the source for a ListBox. The AddStatus method is called, but the code on the main thread never executes - that is, _statusMessage is not added to _statusMessageList while the thread is executing. However, once the thread is complete (fThread.Join() returns), all the stacked-up calls on the main thread execute. But if I display a message box between the calls to fThread.Start() and fThread.Join(), then the status messages are updated properly. What do I need to change so that the code in the main thread executes (the UI updates) while waiting for the thread to terminate? Thanks.
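
    The behaviour described (the dispatched delegates only running after Join returns) matches a UI thread that is blocked inside fThread.Join() and therefore cannot pump its dispatcher queue. A minimal sketch of one alternative, assuming async/await is available; PerformSync is the question's method and the click-handler name is illustrative:

        using System.Threading.Tasks;
        using System.Windows;

        // Sketch: let the UI thread keep pumping while the work runs.
        private async void Synchronize_Click(object sender, RoutedEventArgs e)
        {
            // PerformSync runs on a thread-pool thread; the UI thread returns to
            // the dispatcher loop at the await, so the BeginInvoke callbacks
            // (the status updates) are processed as they arrive.
            await Task.Run(() => PerformSync());

            MessageBox.Show("Synchronization complete");
        }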

    Read the article

  • Best way to reuse a Runnable

    - by Gandalf
    I have a class that implements Runnable and am currently using an Executor as my thread pool to run tasks (indexing documents into Lucene):

        executor.execute(new LuceneDocIndexer(doc, writer));

    My issue is that my Runnable class creates many Lucene Field objects, and I would rather reuse them than create new ones on every call. What's the best way to reuse these objects (Field objects are not thread safe, so I cannot simply make them static) - should I create my own ThreadFactory? I notice that after a while the program starts to degrade drastically, and the only thing I can think of is GC overhead. I am currently trying to profile the project to be sure this is even an issue - but for now let's just assume it is.

    Read the article

  • C++: is it safe to read an integer variable that's being concurrently modified without locking?

    - by Hongli
    Suppose that I have an integer variable in a class, and this variable may be concurrently modified by other threads. Writes are protected by a mutex. Do I need to protect reads too? I've heard that there are some hardware architectures on which, if one thread modifies a variable, and another thread reads it, then the read result will be garbage; in this case I do need to protect reads. I've never seen such architectures though. This question assumes that a single transaction only consists of updating a single integer variable so I'm not worried about the states of any other variables that might also be involved in a transaction.

    Read the article

  • How to keep a .NET console app running?

    - by intoorbit
    Consider a Console application that starts up some services in a separate thread. All it needs to do is wait for the user to press Ctrl+C to shut it down. Which of the following is the better way to do this?

        static ManualResetEvent _quitEvent = new ManualResetEvent(false);

        static void Main()
        {
            Console.CancelKeyPress += delegate { _quitEvent.Set(); };

            // kick off asynchronous stuff

            _quitEvent.WaitOne();

            // cleanup/shutdown and quit
        }

    Or this, using Thread.Sleep(1):

        static bool _quitFlag = false;

        static void Main()
        {
            Console.CancelKeyPress += delegate { _quitFlag = true; };

            // kick off asynchronous stuff

            while (!_quitFlag)
            {
                Thread.Sleep(1);
            }

            // cleanup/shutdown and quit
        }

    Read the article

  • How to end a thread in perl

    - by user1672190
    I am new to Perl and I have a question about Perl threads. I am trying to create a new thread to check whether the running function has timed out, and my way of doing it is as below. The logic is:

    1. create a new thread
    2. run the main function and see if it has timed out; if true, kill it

    Sample code:

        $exit_thread = false;   # a flag to make sure the timeout thread will run
        my $thr_timeout = threads->new( \&timeout );
        # execute main function here
        $exit_thread = true;    # set the flag to true to force the thread to end
        $thr_timeout->join();   # wait for the timeout thread to end

    Code of the timeout function:

        sub timeout {
            $timeout = false;
            my $start_time = time();
            while (!$exit_thread) {
                sleep(1);
                last if (main function is executed);
                if (time() - $start_time >= configured time) {
                    logmsg "process is killed as request timed out";
                    _kill_remote_process();
                    $timeout = true;
                    last;
                }
            }
        }

    The code is now running as I expected, but I am just not very clear whether the assignment $exit_thread = true works because there is a "last" at the end of the while loop. Can anybody give me an answer? Thanks

    Read the article

  • Issue with GCD and too many threads

    - by dariaa
    I have an image loader class which, given an NSURL, loads an image from the web and executes a completion block. The code is actually quite simple:

        - (void)downloadImageWithURL:(NSString *)URLString completion:(BELoadImageCompletionBlock)completion
        {
            dispatch_async(_queue, ^{
            // dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
                UIImage *image = nil;
                NSURL *URL = [NSURL URLWithString:URLString];
                if (URL) {
                    image = [UIImage imageWithData:[NSData dataWithContentsOfURL:URL]];
                }
                dispatch_async(dispatch_get_main_queue(), ^{
                    completion(image, URLString);
                });
            });
        }

    When I replace dispatch_async(_queue, ^{ with the commented-out dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{, images load much faster, which is quite logical (before, images were loaded one at a time; now a bunch of them load simultaneously). My issue is that I have perhaps 50 images, I call the downloadImageWithURL:completion: method for all of them, and when I use the global queue instead of _queue my app eventually crashes and I see there are 85+ threads. Can the problem be that calling dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0) 50 times in a row makes GCD create too many threads? I thought GCD handled all the threading and kept the number of threads from getting huge, but if that's not the case, is there any way I can influence the number of threads?

    Read the article

  • wxpython - Running threads sequentially without blocking GUI

    - by ryantmer
    I've got a GUI script with all my wxPython code in it, and a separate testSequences module that has a bunch of tasks that I run based on input from the GUI. The tasks take a long time to complete (from 20 seconds to 3 minutes), so I want to thread them, otherwise the GUI locks up while they're running. I also need them to run one after another, since they all use the same hardware. (My rationale behind threading is simply to prevent the GUI from locking up.) I'd like to have a "Running" message (with a varying number of periods after it, i.e. "Running", "Running.", "Running..", etc.) so the user knows that progress is occurring, even though it isn't visible. I'd like this script to run the test sequences in separate threads, but sequentially, so that the second thread won't be created and run until the first is complete. Since this is kind of the opposite of the purpose of threads, I can't really find any information on how to do this... Any help would be greatly appreciated. Thanks in advance!

    gui.py

        import testSequences
        from threading import Thread

        # wxPython code for setting everything up here...

        for j in range(5):
            testThread = Thread(target=testSequences.test1)
            testThread.start()
            while testThread.isAlive():
                # wait until the previous thread is complete
                time.sleep(0.5)
                i = (i+1) % 4
                self.status.SetStatusText("Running" + '.'*i)

    testSequences.py

        import time

        def test1():
            for i in range(10):
                print i
                time.sleep(1)

    (Obviously this isn't the actual test code, but the idea is the same.)

    Read the article

  • VB.net SyncLock Object

    - by Budius
    I have always seen SyncLock examples where people use:

        Private Lock1 As New Object ' declaration
        SyncLock Lock1              ' usage

    but why? In my specific case I'm locking a Queue to avoid problems with multi-threaded Enqueue and Dequeue of my data. Can I lock the Queue object itself, like this?

        Private cmdQueue As New Queue(Of QueueItem) ' declaration
        SyncLock cmdQueue                           ' usage

    Any help appreciated. Thanks.
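
    For comparison, a C# sketch of the same trade-off (the class and member names are illustrative): locking on the queue instance does work as long as every access locks the same object and the reference is never reassigned or handed out, but a dedicated private lock object makes that guarantee explicit and keeps outside code from locking on your queue.

        using System.Collections.Generic;

        class CommandPump
        {
            private readonly object _sync = new object();
            private readonly Queue<string> _cmdQueue = new Queue<string>();

            public void Post(string cmd)
            {
                lock (_sync) { _cmdQueue.Enqueue(cmd); }   // SyncLock equivalent
            }

            public bool TryTake(out string cmd)
            {
                lock (_sync)
                {
                    if (_cmdQueue.Count > 0) { cmd = _cmdQueue.Dequeue(); return true; }
                    cmd = null;
                    return false;
                }
            }
        }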

    Read the article

  • Solve a maze using multicores?

    - by acidzombie24
    This question is messy; I don't need a working solution, I need some pseudo code. How would I solve this maze? This is a homework question. I have to get from point green to red. At every fork I need to 'spawn a thread' and go in that direction. I need to figure out how to get to red, but I am unsure how to avoid paths I have already taken (finishing with any path is OK; I am just not allowed to go in circles). Here's an example of my problem: I start by moving down and I see a fork, so one thread goes right and one goes down (or this thread can take it, it doesn't matter). Now let's ignore the rest of the forks and say the one going right hits the wall, goes down, hits the wall and goes left, then goes up. The other thread goes down, hits the wall, then goes all the way right. The bottom path has been taken twice, starting from different sides. How do I mark this path as taken? Do I need a lock? Is this the only way? Is there a lockless solution? Implementation-wise I was thinking I could represent the maze something like this. I don't like the solution because there is a LOT of locking (assuming I lock before each read and write of the haveTraverse member). I don't need to use the MazeSegment class below, I just wrote it up as an example. I am allowed to construct the maze however I want. I was thinking maybe the solution requires no connecting paths, and that's hassling me. Maybe I could split the map up instead of using the format below (which is easy to read and understand). But if I knew how to split it up I would know how to walk it, and that is the problem. How do I walk this maze efficiently? The only hint I received was don't try to conserve memory by reusing it, make copies. However, that was related to a problem with ordering a list and I don't think the hint was meant for this.

        class MazeSegment
        {
            enum Direction { up, down, left, right }
            List<Pair<Direction, MazeSegment*>> ConnectingPaths;
            int line_length;
            bool haveTraverse;
        }

        MazeSegment root;

        class MazeSegment
        {
            enum Direction { up, down, left, right }
            List<Pair<Direction, MazeSegment*>> ConnectingPaths;
            bool haveTraverse;
        }

        void WalkPath(MazeSegment segment)
        {
            if (segment.haveTraverse) return;
            segment.haveTraverse = true;

            foreach (var v in segment)
            {
                if (v.haveTraverse == false)
                    spawn_thread(v);
            }
        }

        WalkPath(root);
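
    One lockless idea, shown as a C# sketch with hypothetical names (it is not the only way, and goal detection is omitted): store the visited flag as an int and claim a segment with an atomic compare-and-swap, so exactly one thread wins each segment and no lock statement is needed.

        using System.Threading;

        class Segment
        {
            // 0 = untouched, 1 = claimed by some thread
            private int _visited;
            public Segment[] Neighbors = new Segment[0];

            // Returns true for exactly one caller, no matter how many threads race here.
            public bool TryClaim() =>
                Interlocked.CompareExchange(ref _visited, 1, 0) == 0;
        }

        static void Walk(Segment segment)
        {
            if (!segment.TryClaim())
                return;                      // another thread already owns this segment

            foreach (var next in segment.Neighbors)
            {
                // Each unvisited neighbor is explored on a pool thread; TryClaim
                // in the recursive call decides any race for that neighbor.
                ThreadPool.QueueUserWorkItem(_ => Walk(next));
            }
        }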

    Read the article

  • Does the Java Memory Model (JSR-133) imply that entering a monitor flushes the CPU data cache(s)?

    - by Durandal
    There is something that bugs me about the Java memory model (if I even understand everything correctly). If there are two threads A and B, there are no guarantees that B will ever see a value written by A, unless both A and B synchronize on the same monitor. For any system architecture that guarantees cache coherency between threads, there is no problem. But if the architecture does not support cache coherency in hardware, this essentially means that whenever a thread enters a monitor, all memory changes made before must be committed to main memory, and the cache must be invalidated. And it needs to be the entire data cache, not just a few lines, since the monitor has no information about which variables in memory it guards. But that would surely impact the performance of any application that needs to synchronize frequently (especially things like job queues with short-running jobs). So can Java work reasonably well on architectures without hardware cache coherency? If not, why doesn't the memory model make stronger guarantees about visibility? Wouldn't it be more efficient if the language required information about what is guarded by a monitor? As I see it, the memory model gives us the worst of both worlds: the absolute need to synchronize, even if cache coherency is guaranteed in hardware, and on the other hand bad performance on incoherent architectures (full cache flushes). So shouldn't it be either more strict (require information about what is guarded by a monitor) or more loose, and restrict potential platforms to cache-coherent architectures? As it is now, it doesn't make too much sense to me. Can somebody clear up why this specific memory model was chosen?

    EDIT: My use of "strict" and "loose" was a bad choice in retrospect. I used "strict" for the case where fewer guarantees are made and "loose" for the opposite. To avoid confusion, it's probably better to speak in terms of stronger or weaker guarantees.

    Read the article

  • ConcurrentBag(Of MyType) Vs List(Of MyType)

    - by Ben
    What is the advantage of using a ConcurrentBag(Of MyType) over just using a List(Of MyType)? The MSDN page on the ConcurrentBag (http://msdn.microsoft.com/en-us/library/dd381779(v=VS.100).aspx) states that ConcurrentBag(Of T) is "a thread-safe bag implementation, optimized for scenarios where the same thread will be both producing and consuming data stored in the bag". So what is the advantage? I can understand the advantage of the other collection types in the Concurrency namespace, but this one puzzles me. Thanks.
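
    A short C# sketch of the practical difference (an equivalent exists in VB; the names are illustrative): many threads can Add to a ConcurrentBag concurrently with no explicit lock, whereas the same code against a List(Of T) would need a SyncLock/lock around every Add to avoid corrupting its internal array bookkeeping.

        using System;
        using System.Collections.Concurrent;
        using System.Threading.Tasks;

        var results = new ConcurrentBag<int>();

        // Many thread-pool threads add concurrently; no lock statement needed.
        Parallel.For(0, 1000, i => results.Add(i * i));

        // With List<int> the body would have to be: lock (gate) { results.Add(i * i); }
        Console.WriteLine(results.Count);   // 1000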

    Read the article

  • C++ boost thread reusing threads

    - by aaa
    Hi. I am trying to accomplish something like this:

        thread t;      // create/initialize thread
        t.launch();    // launch thread
        t.wait();      // wait
        t.launch();    // relaunch the same thread

    How do I go about implementing something like this using Boost threads? In essence, I need a persistent, relaunchable thread. Thanks

    Read the article

  • Registering an event from different thread

    - by ET
    Hi, I have a question regarding events in C#. Let's say I have an object obj1 of a class that exposes an event, and this object is running on thread t1. Now on a different thread, t2, there is another object called obj2 that is registered for the event of obj1. Is it guaranteed that obj2 will get the event when it is raised? Thanks.
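
    A small sketch of the mechanics involved (names are illustrative): subscribing merely stores a delegate, and raising the event invokes that delegate synchronously on whichever thread raises it. So the handler does run, but on the raiser's thread, not on the thread that created the subscriber.

        using System;
        using System.Threading;

        class Publisher
        {
            public event EventHandler Ping;
            public void RaisePing() => Ping?.Invoke(this, EventArgs.Empty);
        }

        class Program
        {
            static void Main()
            {
                var pub = new Publisher();

                // Subscription made on the main thread...
                pub.Ping += (s, e) =>
                    Console.WriteLine($"handled on thread {Thread.CurrentThread.ManagedThreadId}");

                // ...but the handler executes on the thread that raises the event.
                var t = new Thread(pub.RaisePing);
                t.Start();
                t.Join();
            }
        }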

    Read the article
