Search Results

Search found 48 results on 2 pages for 'interlocked'.


  • Is it possible to create thread-safe collections without locks?

    - by Andrey
    This is purely a just-for-interest question; any sort of answer is welcome. So, is it possible to create thread-safe collections without any locks? By locks I mean any thread synchronization mechanisms, including Mutex, Semaphore, and even Interlocked, all of them. Is it possible at user level, without calling system functions? OK, maybe the implementation would not be efficient; I am interested in the theoretical possibility. If not, what is the minimum needed to do it? EDIT: Why immutable collections don't work. Think of a class Stack with a method Add that returns another Stack. Now here is a program: Stack stack = new ...; ThreadedMethod() { loop { //Do the loop stack = stack.Add(element); } } The expression stack = stack.Add(element) is not atomic, so another thread can overwrite the new stack. Thanks, Andrey
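
    A minimal sketch of the race described in the EDIT, plus the standard repair (a compare-and-swap retry loop). Note that the CAS version uses Interlocked, which the question explicitly rules out; it is shown only to illustrate the minimum machinery usually needed. ImmutableStack here is an illustrative type, not a BCL one:

      using System.Threading;

      // Illustrative immutable stack: Push returns a new stack sharing the old nodes.
      sealed class ImmutableStack<T>
      {
          public static readonly ImmutableStack<T> Empty = new ImmutableStack<T>(default(T), null);
          private readonly T _head;
          private readonly ImmutableStack<T> _tail;
          private ImmutableStack(T head, ImmutableStack<T> tail) { _head = head; _tail = tail; }
          public ImmutableStack<T> Push(T value) { return new ImmutableStack<T>(value, this); }
      }

      static class SharedStack
      {
          private static ImmutableStack<int> _stack = ImmutableStack<int>.Empty;

          // Racy: two threads can read the same old stack, and one push is lost.
          public static void RacyPush(int value)
          {
              _stack = _stack.Push(value); // read-modify-write, not atomic
          }

          // CAS retry loop: retries until no other thread changed _stack in between.
          public static void CasPush(int value)
          {
              ImmutableStack<int> seen, updated;
              do
              {
                  seen = _stack;
                  updated = seen.Push(value);
              } while (Interlocked.CompareExchange(ref _stack, updated, seen) != seen);
          }
      }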

    Read the article

  • Using locks inside a loop

    - by Xaqron
    // Member Variable private readonly object _syncLock = new object(); // Now inside a static method foreach (var lazyObject in plugins) { if ((string)lazyObject.Metadata["key"] == "something") { lock (_syncLock) { if (!lazyObject.IsValueCreated) lazyObject.Value.DoSomething(); } return lazyObject.Value; } } Here I need synchronized access per loop. There are many threads iterating this loop, and based on the key they are looking for, a lazy instance is created and returned. lazyObject should not be created more than once. Although the Lazy class is designed for exactly this, and despite the lock, under heavy threading I get more than one instance created (I track this with an Interlocked.Increment on a volatile shared int and log it somewhere). The problem is I don't have access to the definition of Lazy, and MEF defines how the Lazy class creates objects. My questions: 1) Why doesn't the lock work? 2) Should I use an array of locks instead of one lock to improve performance?
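
    For comparison, a minimal sketch of lazy one-time creation when you do control the Lazy instance (which the poster does not, since MEF constructs them): LazyThreadSafetyMode.ExecutionAndPublication guarantees the factory runs at most once, with no external lock. ExpensivePlugin is a hypothetical type:

      using System;
      using System.Threading;

      class ExpensivePlugin
      {
          private static int _created;
          public ExpensivePlugin()
          {
              // counter lets you verify single construction under load
              Console.WriteLine("Instances created: " + Interlocked.Increment(ref _created));
          }
      }

      static class PluginHolder
      {
          // ExecutionAndPublication: at most one thread ever runs the factory.
          private static readonly Lazy<ExpensivePlugin> _plugin =
              new Lazy<ExpensivePlugin>(() => new ExpensivePlugin(),
                                        LazyThreadSafetyMode.ExecutionAndPublication);

          public static ExpensivePlugin Instance
          {
              get { return _plugin.Value; }
          }
      }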

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week’s post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>. Then in the following post we’ll discuss the ConcurrentDictionary<TKey,TValue> and ConcurrentBag<T>. Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. A brief history of collections In the beginning was the .NET 1.0 Framework. And out of this framework emerged the System.Collections namespace, and it was good. It contained all the basic things a growing programming language needs, like the ArrayList and Hashtable collections. The main problem, of course, with these original collections is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting. Then came .NET 2.0 and generics, and our world changed forever! With generics the C# language finally got an equivalent of the very powerful C++ templates. As such, the System.Collections.Generic namespace was born and we got type-safe versions of all our favorite collections. The List<T> succeeded the ArrayList, the Dictionary<TKey,TValue> succeeded the Hashtable, and so on. The new versions of the library were not only safer because they checked types at compile-time, but in many cases they were more performant as well. So much so that it's Microsoft's recommendation that the original System.Collections collections only be used for backwards compatibility. So we as developers came to know and love the generic collections and took them into our hearts and embraced them. The problem is, thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required. Even if you do have multiple threads, if a collection is “load-once, read-many” you don’t need to do anything to protect that container from multi-threaded access, as illustrated below: 1: public static class OrderTypeTranslator 2: { 3: // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize 4: // multi-threaded read access 5: private static readonly Dictionary<string, char> _translator = new Dictionary<string, char> 6: { 7: {"New", 'N'}, 8: {"Update", 'U'}, 9: {"Cancel", 'X'} 10: }; 11:  12: // the only public interface into the dictionary is for reading, so inherently thread-safe 13: public static char? Translate(string orderType) 14: { 15: char charValue; 16: if (_translator.TryGetValue(orderType, out charValue)) 17: { 18: return charValue; 19: } 20:  21: return null; 22: } 23: } Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner. Looking at today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes.
    With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing and achieve higher application speeds. So let's look at how to use collections in a thread-safe manner. Using historical collections in a concurrent fashion The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe. This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations. Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object: 1: public class OrderAggregator 2: { 3: private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>(); 4: private static readonly object _orderLock = new object(); 5:  6: public void Add(string accountNumber, Order newOrder) 7: { 8: List<Order> ordersForAccount; 9:  10: // a complex operation like this should all be protected 11: lock (_orderLock) 12: { 13: if (!_orders.TryGetValue(accountNumber, out ordersForAccount)) 14: { 15: _orders.Add(accountNumber, ordersForAccount = new List<Order>()); 16: } 17:  18: ordersForAccount.Add(newOrder); 19: } 20: } 21: } Notice how we’re performing several operations on the dictionary under one lock. With the Synchronized() static methods of the early collections, you wouldn’t be able to specify this level of locking (a more macro level). So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead, so that they could provide synchronization as needed. The need for better concurrent access to collections Here’s the problem: it’s relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, and harder still to make perform efficiently! For example, what if you have a Dictionary that has frequent reads but infrequent updates? Do you want to lock down the entire Dictionary for every access? This would be overkill and would prevent concurrent reads. In such cases you could use something like a ReaderWriterLockSlim, which allows multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell). This is all very complex stuff to consider. Fortunately, this is where the Concurrent Collections come in. The Parallel Computing Platform team at Microsoft went to great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general-case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application, whether it be single-threaded or multi-threaded.
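
    A rough sketch of the ReaderWriterLockSlim pattern mentioned above (the cache type and its names are illustrative): readers share the lock concurrently, while a writer takes it exclusively:

      using System.Collections.Generic;
      using System.Threading;

      public class ReadMostlyCache
      {
          private readonly Dictionary<string, string> _data = new Dictionary<string, string>();
          private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

          // Many threads may hold the read lock at once.
          public bool TryGet(string key, out string value)
          {
              _lock.EnterReadLock();
              try { return _data.TryGetValue(key, out value); }
              finally { _lock.ExitReadLock(); }
          }

          // A writer blocks new readers and waits for current readers to finish.
          public void Set(string key, string value)
          {
              _lock.EnterWriteLock();
              try { _data[key] = value; }
              finally { _lock.ExitWriteLock(); }
          }
      }
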
    General points to consider with the concurrent collections The MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot is not supported (accessing it throws a NotSupportedException). Thus you should not attempt to use these properties for synchronization purposes. Note also that the concurrent collections may have different operations than the traditional data structures you may be used to. You may ask why, but it was done out of necessity to keep operations safe and atomic. For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack’s IsEmpty property and then do the Pop() another thread may have come in and made the stack empty! This is why some of the traditional operations have been changed to make them safe for concurrent use. In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models. I’ll try to point these out as we talk about each collection so you can be aware of any potential performance impacts. Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration). Once again I’ll highlight how thread-safe enumeration works for each collection. ConcurrentStack<T>: The thread-safe LIFO container The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container. If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit. The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations. This means that multi-threaded access to the stack requires no traditional locking and is very, very fast! For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart, with a few differences: Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped, and false if empty. PushRange() and TryPopRange() were added, allowing you to push multiple items and pop multiple items atomically. Count takes a snapshot of the stack and then counts the items; this means it is an O(n) operation, so if you just want to check for an empty stack, call IsEmpty instead, which is O(1). ToArray() and GetEnumerator() both also take snapshots; this means that iteration over a stack will give you a static view at the time of the call and will not reflect updates. Pushing on a ConcurrentStack<T> works just like you’d expect, except for the aforementioned PushRange() method that was added to allow you to push a range of items atomically: 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: // but you can also push multiple items in one atomic operation (no interleaves) 7: stack.PushRange(new [] { "Second", "Third", "Fourth" }); For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().
    This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed. Thus TryPeek() was created as an atomic check for empty, followed by a peek if not empty: 1: // to look at top item of stack without removing it, can use TryPeek. 2: // Note that there is no Peek(); you need to check for empty first, which TryPeek does. 3: string item; 4: if (stack.TryPeek(out item)) 5: { 6: Console.WriteLine("Top item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Stack was empty."); 11: } Finally, to remove items from the stack, we have TryPop() for single items, and TryPopRange() for multiple items. Just like TryPeek(), these operations replace Pop(), since we need to ensure atomically that the stack is non-empty before we pop from it: 1: // to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves) 2: if (stack.TryPop(out item)) 3: { 4: Console.WriteLine("Popped " + item); 5: } 6:  7: // TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned. 8: var poppedItems = new string[2]; 9: int numPopped = stack.TryPopRange(poppedItems); 10:  11: foreach (var theItem in poppedItems.Take(numPopped)) 12: { 13: Console.WriteLine("Popped " + theItem); 14: } Note that, as stated before, GetEnumerator() and ToArray() get a snapshot of the data at the time of the call, so enumerating the stack gives you its contents at the time of the call. This is illustrated below: 1: var stack = new ConcurrentStack<string>(); 2:  3: // adding to stack is much the same as before 4: stack.Push("First"); 5:  6: var results = stack.GetEnumerator(); 7:  8: // but you can also push multiple items in one atomic operation (no interleaves) 9: stack.PushRange(new [] { "Second", "Third", "Fourth" }); 10:  11: while(results.MoveNext()) 12: { 13: Console.WriteLine("Stack only has: " + results.Current); 14: } The only item that will be printed out in the above code is "First", because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct. You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all. In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it would, in the old collections, cause an exception. ConcurrentQueue<T>: The thread-safe FIFO container The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class. The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays. Once again, this allows us to do thread-safe operations without the need for heavy locks! The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from its non-concurrent counterpart. Most notably: Dequeue() was removed in favor of TryDequeue(), which returns true if an item existed and was dequeued, and false if empty. Count does not take a snapshot; it subtracts the head and tail indexes to get the count, which results overall in O(1) complexity, which is quite good. It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero. ToArray() and GetEnumerator() both take snapshots.
    This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates. The Enqueue() method on the ConcurrentQueue<T> works much the same as on the generic Queue<T>: 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5: queue.Enqueue("Second"); 6: queue.Enqueue("Third"); For front item access, the TryPeek() method must be used to attempt to see the first item of the queue. There is no Peek() method since, as you’ll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty. 1: // to look at first item in queue without removing it, can use TryPeek. 2: // Note that there is no Peek(); you need to check for empty first, which TryPeek does. 3: string item; 4: if (queue.TryPeek(out item)) 5: { 6: Console.WriteLine("First item was " + item); 7: } 8: else 9: { 10: Console.WriteLine("Queue was empty."); 11: } Then, to remove items you use TryDequeue(). Once again this is for the same reason we have TryPeek() and not Peek(): 1: // to remove items, use TryDequeue. If queue is empty returns false. 2: if (queue.TryDequeue(out item)) 3: { 4: Console.WriteLine("Dequeued first item " + item); 5: } Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results. Thus once again the code below will only show the first item, since the other items were added after the snapshot. 1: var queue = new ConcurrentQueue<string>(); 2:  3: // adding to queue is much the same as before 4: queue.Enqueue("First"); 5:  6: var iterator = queue.GetEnumerator(); 7:  8: queue.Enqueue("Second"); 9: queue.Enqueue("Third"); 10:  11: // only shows First 12: while (iterator.MoveNext()) 13: { 14: Console.WriteLine("Dequeued item " + iterator.Current); 15: } Using collections concurrently You’ll notice in the examples above I stuck to single-threaded examples so as to make them deterministic and the results obvious. Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe, with no locking on your part required! For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation. This can be done easily with the TPL’s Parallel.ForEach(): 1: public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList) 2: { 3: var proxy = new OrderProxy(); 4: var results = new ConcurrentQueue<OrderResult>(); 5:  6: // notice that we can process all these in parallel and put the results 7: // into our concurrent collection without needing any external locking! 8: Parallel.ForEach(orderList, 9: order => 10: { 11: var result = proxy.PlaceOrder(order); 12:  13: results.Enqueue(result); 14: }); 15:  16: return results; 17: } Summary Obviously, if you do not need multi-threaded safety, you don’t need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers, and the amazing way they achieve thread-safe access in an efficient manner, is wonderful to behold.
    Stay tuned next week where we’ll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.

    Read the article

  • C#/.NET Little Wonders: The ConcurrentDictionary

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#. In this series of posts, we discuss how the concurrent collections have been developed to help alleviate common multi-threading concerns. Last week’s post began with a general introduction and discussed the ConcurrentStack<T> and ConcurrentQueue<T>. Today's post discusses the ConcurrentDictionary<TKey,TValue> (originally I had intended to discuss ConcurrentBag this week as well, but ConcurrentDictionary had enough information to create a very full post on its own!). Finally, next week we shall close with a discussion of the ConcurrentBag<T> and BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here. Recap As you'll recall from the previous post, the original collections were object-based containers that accomplished synchronization through a Synchronized member. While these were convenient because you didn't have to worry about writing your own synchronization logic, they were a bit too finely grained, and if you needed to perform multiple operations under one lock, the automatic synchronization didn't buy much. With the advent of .NET 2.0, the original collections were succeeded by the generic collections, which are fully type-safe but eschew automatic synchronization. This cuts both ways, in that you have a lot more control as a developer over when and how finely you want to synchronize, but on the other hand if you just want simple synchronization it creates more work. With .NET 4.0, we get the best of both worlds in generic collections. A new breed of collections was born, called the concurrent collections, in the System.Collections.Concurrent namespace. These amazing collections are fine-tuned to have the best overall performance for situations requiring concurrent access. They are not meant to replace the generic collections, but simply to be an alternative to creating your own locking mechanisms. Among those concurrent collections were the ConcurrentStack<T> and ConcurrentQueue<T>, which provide classic LIFO and FIFO collections with a concurrent twist. As we saw, some of the traditional methods that required calls to be made in a certain order (like checking that IsEmpty is false before calling Pop()) were replaced in favor of an umbrella operation that combined both under one lock (like TryPop()). Now, let's take a look at the next in our series of concurrent collections! For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here. ConcurrentDictionary – the fully thread-safe dictionary The ConcurrentDictionary<TKey,TValue> is the thread-safe counterpart to the generic Dictionary<TKey, TValue> collection. Obviously, both are designed for quick – O(1) – lookups of data based on a key. If you think of algorithms where you need lightning-fast lookups of data and don’t care whether the data is maintained in any particular ordering, the unsorted dictionaries are generally the best way to go. As a side note, there are sorted implementations of IDictionary, namely SortedDictionary and SortedList, which are stored as an ordered tree and an ordered list respectively. While these are not as fast as the non-sorted dictionaries – they are O(log2 n) – they are a great combination of both speed and ordering, and still greatly outperform a linear search.
    Now, once again keep in mind that if all you need to do is load a collection once and then allow multi-threaded reading, you do not need any locking. Examples of this tend to be situations where you load a lookup or translation table once at program start, then keep it in memory for read-only reference. In such cases locking is completely non-productive. However, most of the time when we need a concurrent dictionary we are interleaving both reads and updates. This is where the ConcurrentDictionary really shines! It achieves its thread-safety with no common lock, to improve efficiency. It actually uses a series of locks to provide concurrent updates, and has lockless reads! This means that the ConcurrentDictionary gets even more efficient the higher the ratio of reads-to-writes you have. ConcurrentDictionary and Dictionary differences For the most part, the ConcurrentDictionary<TKey,TValue> behaves like its Dictionary<TKey,TValue> counterpart, with a few differences. Some notable examples: Add() does not exist in the concurrent dictionary. This means you must use TryAdd(), AddOrUpdate(), or GetOrAdd(). It also means that you can’t use a collection initializer with the concurrent dictionary. TryAdd() replaced Add() to attempt atomic, safe adds. Because Add() only succeeds if the item doesn’t already exist, we need an atomic operation that checks whether the item exists and, if not, adds it while still under an atomic lock. TryUpdate() was added to attempt atomic, safe updates. If we want to update an item, we must make sure it exists first and that the original value is what we expected it to be. If all these are true, we can update the item in one atomic step. TryRemove() was added to attempt atomic, safe removes. To safely attempt to remove a value we need to see if the key exists first; this checks for existence and removes under an atomic lock. AddOrUpdate() was added to attempt a thread-safe “upsert”. There are many times where you want to insert into a dictionary if the key doesn’t exist, or update the value if it does. This allows you to make a thread-safe add-or-update. GetOrAdd() was added to attempt a thread-safe query/insert. Sometimes you want to query whether an item exists in the cache, and if it doesn’t, insert a starting value for it. This allows you to get the value if it exists and insert it if not. The Count, Keys, and Values properties take a snapshot of the dictionary. Accessing these properties may interfere with add and update performance, so they should be used with caution. ToArray() returns a static snapshot of the dictionary. That is, the dictionary is locked and then copied to an array as an O(n) operation. GetEnumerator() is thread-safe and efficient, but allows dirty reads. Because reads require no locking, you can safely iterate over the contents of the dictionary. The only downside is that, depending on timing, you may get dirty reads. Dirty reads during iteration The last point on GetEnumerator() bears some explanation. Picture a scenario in which you call GetEnumerator() (or iterate using a foreach, etc.) and then, during that iteration, the dictionary gets updated. This may not sound like a big deal, but it can lead to inconsistent results if used incorrectly. The problem is that items you already iterated over that are updated a split second after don’t show the update, but items that you iterate over that were updated a split second before do show the update.
    Thus you may get a combination of items that are “stale” because you iterated before the update, and “fresh” because they were updated after GetEnumerator() but before the iteration reached them. Let’s illustrate with an example. Say you load up a concurrent dictionary like this: 1: // load up a dictionary. 2: var dictionary = new ConcurrentDictionary<string, int>(); 3:  4: dictionary["A"] = 1; 5: dictionary["B"] = 2; 6: dictionary["C"] = 3; 7: dictionary["D"] = 4; 8: dictionary["E"] = 5; 9: dictionary["F"] = 6; Then you have one task (using the wonderful TPL!) to iterate using dirty reads: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates using a dirty read 5: foreach (var pair in dictionary) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); And one task to attempt updates in a separate thread: 1: // attempt updates in a separate thread 2: var updateTask = new Task(() => 3: { 4: // iterates, and updates the value by one 5: foreach (var pair in dictionary) 6: { 7: dictionary[pair.Key] = pair.Value + 1; 8: } 9: }); Now that we’ve done this, we can fire up both tasks and wait for them to complete: 1: // start both tasks 2: updateTask.Start(); 3: iterationTask.Start(); 4:  5: // wait for both to complete. 6: Task.WaitAll(updateTask, iterationTask); Now, if you didn’t know about the dirty reads, you might have expected to see the iteration before the updates (such as A:1, B:2, C:3, D:4, E:5, F:6). However, because the reads are dirty, we will quite possibly get a combination of some updated, some original. My own run netted this result: 1: F:6 2: E:6 3: D:5 4: C:4 5: B:3 6: A:2 Note that, of course, iteration is not in order because ConcurrentDictionary, like Dictionary, is unordered. Also note that both E and F show the value 6. This is because the output task reached F before the update, but the updates for the rest of the items occurred before their output (probably because console output is very slow, comparatively). If we want to always guarantee that we get a consistent snapshot to iterate over (that is, at the point we ask for it we see precisely what is in the dictionary, and no subsequent updates during iteration), we should iterate over a call to ToArray() instead: 1: // attempt iteration in a separate thread 2: var iterationTask = new Task(() => 3: { 4: // iterates over a snapshot 5: foreach (var pair in dictionary.ToArray()) 6: { 7: Console.WriteLine(pair.Key + ":" + pair.Value); 8: } 9: }); The atomic Try…() methods As you can imagine, TryAdd() and TryRemove() hold few surprises. Both first check the existence of the item to determine if it can be added or removed, based on whether or not the key currently exists in the dictionary: 1: // try add attempts an add and returns false if it already exists 2: if (dictionary.TryAdd("G", 7)) 3: Console.WriteLine("G did not exist, now inserted with 7"); 4: else 5: Console.WriteLine("G already existed, insert failed."); TryRemove() also has the virtue of returning the value portion of the removed entry matching the given key: 1: // attempt to remove the value, if it exists it is removed and the original is returned 2: int removedValue; 3: if (dictionary.TryRemove("C", out removedValue)) 4: Console.WriteLine("Removed C and its value was " + removedValue); 5: else 6: Console.WriteLine("C did not exist, remove failed."); Now TryUpdate() is an interesting creature.
    You might think from its name that TryUpdate() first checks for an item’s existence and then updates if the item exists, otherwise returning false. Well, not quite... It turns out that when you call TryUpdate() on a concurrent dictionary, you pass it not only the new value you want it to have, but also the value you expected it to have before the update. If the item exists in the dictionary and it has the value you expected, it will update it to the new value atomically and return true. If the item is not in the dictionary or does not have the value you expected, it is not modified and false is returned. 1: // attempt to update the value, if it exists and if it has the expected original value 2: if (dictionary.TryUpdate("G", 42, 7)) 3: Console.WriteLine("G existed and was 7, now it's 42."); 4: else 5: Console.WriteLine("G either didn't exist, or wasn't 7."); The composite Add methods The ConcurrentDictionary also has composite add methods that can be used to perform updates and gets, with an add if the item does not exist at the time of the update or get. The first of these, AddOrUpdate(), allows you to add a new item to the dictionary if it doesn’t exist, or update the existing item if it does. For example, let’s say you are creating a dictionary of counts of stock ticker symbols you’ve subscribed to from a market data feed: 1: public sealed class SubscriptionManager 2: { 3: private readonly ConcurrentDictionary<string, int> _subscriptions = new ConcurrentDictionary<string, int>(); 4:  5: // adds a new subscription, or increments the count of the existing one. 6: public void AddSubscription(string tickerKey) 7: { 8: // add a new subscription with count of 1, or update existing count by 1 if exists 9: var resultCount = _subscriptions.AddOrUpdate(tickerKey, 1, (symbol, count) => count + 1); 10:  11: // now check the result to see if we just incremented the count, or inserted first count 12: if (resultCount == 1) 13: { 14: // subscribe to symbol... 15: } 16: } 17: } Notice the update value factory Func delegate. If the key does not exist in the dictionary, the add value is used (in this case 1, representing the first subscription for this symbol), but if the key already exists, it passes the key and current value to the update delegate, which computes the new value to be stored in the dictionary. The return result of this operation is the value used (in our case: 1 if added, existing value + 1 if updated). Likewise, GetOrAdd() allows you to attempt to retrieve a value from the dictionary, and if the value does not currently exist in the dictionary it will insert a value. This can be handy in cases where you wish to cache data: you would query the cache to see if the item exists, and if it doesn’t you would put the item into the cache for the first time: 1: public sealed class PriceCache 2: { 3: private readonly ConcurrentDictionary<string, double> _cache = new ConcurrentDictionary<string, double>(); 4:  5: // gets the cached price, or queries and caches it if not yet present. 6: public double QueryPrice(string tickerKey) 7: { 8: // check for the price in the cache, if it doesn't exist it will call the delegate to create value. 9: return _cache.GetOrAdd(tickerKey, symbol => GetCurrentPrice(symbol)); 10: } 11:  12: private double GetCurrentPrice(string tickerKey) 13: { 14: // do code to calculate actual true price.
    15: } 16: } There are other variations of these two methods, which vary in whether a value or a factory delegate is provided, but otherwise they work much the same. Oddities with the composite Add methods The AddOrUpdate() and GetOrAdd() methods are totally thread-safe, on this you may rely, but they are not atomic. It is important to note that the methods that use delegates execute those delegates outside of the lock. This was done intentionally so that a user delegate (over which the ConcurrentDictionary has no control, of course) does not take too long and lock out other threads. This is not necessarily an issue, per se, but it is something you must consider in your design. The main thing to consider is that your delegate may get called to generate an item, but that item may not be the one returned! Consider this scenario: thread A calls GetOrAdd and sees that the key does not currently exist, so it calls the delegate. Now thread B also calls GetOrAdd and also sees that the key does not currently exist, and for whatever reason in this race condition its delegate completes first and it adds its new value to the dictionary. Now A is done and goes to get the lock, and sees that the item now exists. In this case, even though it called the delegate to create the item, it will pitch it, because an item arrived between the time it attempted to create one and the time it attempted to add it. Let’s illustrate with a totally contrived example program which has a dictionary of char to int. In this dictionary we want to store a char and its ordinal (that is, A = 1, B = 2, etc). So for our value generator, we will simply increment the previous value in a thread-safe way (perhaps using Interlocked): 1: public static class Program 2: { 3: private static int _nextNumber = 0; 4:  5: // the holder of the char to ordinal 6: private static ConcurrentDictionary<char, int> _dictionary 7: = new ConcurrentDictionary<char, int>(); 8:  9: // get the next id value 10: public static int NextId 11: { 12: get { return Interlocked.Increment(ref _nextNumber); } 13: } Then, we add a method that will perform our insert: 1: public static void Inserter() 2: { 3: for (int i = 0; i < 26; i++) 4: { 5: _dictionary.GetOrAdd((char)('A' + i), key => NextId); 6: } 7: } Finally, we run our test by starting two tasks to do this work and getting the results… 1: public static void Main() 2: { 3: // 2 tasks attempting to get/insert 4: var tasks = new List<Task> 5: { 6: new Task(Inserter), 7: new Task(Inserter) 8: }; 9:  10: tasks.ForEach(t => t.Start()); 11: Task.WaitAll(tasks.ToArray()); 12:  13: foreach (var pair in _dictionary.OrderBy(p => p.Key)) 14: { 15: Console.WriteLine(pair.Key + ":" + pair.Value); 16: } 17: } If you run this with only one task, you get the expected A:1, B:2, ..., Z:26. But running this in parallel you will get something a bit more complex. My run netted these results: 1: A:1 2: B:3 3: C:4 4: D:5 5: E:6 6: F:7 7: G:8 8: H:9 9: I:10 10: J:11 11: K:12 12: L:13 13: M:14 14: N:15 15: O:16 16: P:17 17: Q:18 18: R:19 19: S:20 20: T:21 21: U:22 22: V:23 23: W:24 24: X:25 25: Y:26 26: Z:27 Notice that B is 3? This is most likely because both threads attempted to call GetOrAdd() at roughly the same time and both saw that B did not exist, thus they both called the generator and one thread got back 2 and the other got back 3. However, only one of those threads can get the lock at a time for the actual insert, and thus the one that generated the 3 won, the 3 was inserted, and the 2 got discarded.
    This is why, on these methods, your factory delegates should be careful not to have any logic that would be unsafe if the value they generate is pitched in favor of another item generated at roughly the same time. As such, it is probably a good idea to keep those generators as stateless as possible. Summary The ConcurrentDictionary is a very efficient and thread-safe version of the Dictionary generic collection. It has all the benefits of type-safety that its generic collection counterpart does, and in addition is extremely efficient, especially when there are more reads than writes concurrently.

    Read the article

  • Why does MySQL in XAMPP run only when I run the mysqld.exe file manually?

    - by Ranjit Kumar
    I am using MySQL with XAMPP v3.0.2. When I start the MySQL server, it first shows running status, and after 2 or 3 seconds it stops automatically. As of now I have a temporary workaround: going into the XAMPP installation folder (xampp\mysql\bin) and running the mysqld.exe file manually. I don't know whether this is a correct solution or whether there is a better alternative; please advise. Error log: 120629 15:29:59 [Note] Plugin 'FEDERATED' is disabled. 120629 15:29:59 InnoDB: The InnoDB memory heap is disabled 120629 15:29:59 InnoDB: Mutexes and rw_locks use Windows interlocked functions 120629 15:29:59 InnoDB: Compressed tables use zlib 1.2.3 120629 15:29:59 InnoDB: Initializing buffer pool, size = 16.0M 120629 15:29:59 InnoDB: Completed initialization of buffer pool InnoDB: The first specified data file D:\xampp\xampp\mysql\data\ibdata1 did not exist: InnoDB: a new database to be created! 120629 15:29:59 InnoDB: Setting file D:\xampp\xampp\mysql\data\ibdata1 size to 10 MB InnoDB: Database physically writes the file full: wait... 120629 15:29:59 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile0 did not exist: new to be created InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile0 size to 5 MB InnoDB: Database physically writes the file full: wait... 120629 15:30:00 InnoDB: Log file D:\xampp\xampp\mysql\data\ib_logfile1 did not exist: new to be created InnoDB: Setting log file D:\xampp\xampp\mysql\data\ib_logfile1 size to 5 MB InnoDB: Database physically writes the file full: wait... InnoDB: Doublewrite buffer not found: creating new InnoDB: Doublewrite buffer created InnoDB: 127 rollback segment(s) active. InnoDB: Creating foreign key constraint system tables InnoDB: Foreign key constraint system tables created 120629 15:30:02 InnoDB: Waiting for the background threads to start

    Read the article

  • How to use the buffer on SocketAsyncEventArgs object

    - by Rob
    We're stuck with using buffers on the SocketAsyncEventArgs object. With the old socket methods we'd cast our state object, like this: clientState cs = (clientState)asyncResult.AsyncState; However, the 3.5 framework is different. We have strings arriving from the client in chunks, and we can't seem to work out how the buffers work so that we can process an entire string when we find a char 3 (ETX). Code at the moment: private void ProcessReceive(SocketAsyncEventArgs e) { string content = string.Empty; // Check if the remote host closed the connection. if (e.BytesTransferred > 0) { if (e.SocketError == SocketError.Success) { Socket s = e.UserToken as Socket; //asyncResult.AsyncState; Int32 bytesTransferred = e.BytesTransferred; // Get the message received from the listener. content += Encoding.ASCII.GetString(e.Buffer, e.Offset, bytesTransferred); if (content.IndexOf(Convert.ToString((char)3)) > -1) { e.BufferList = null; // Increment the count of the total bytes receive by the server. Interlocked.Add(ref this.totalBytesRead, bytesTransferred); } else { content += Encoding.ASCII.GetString(e.Buffer, e.Offset, bytesTransferred); ProcessReceive(e); } } else { this.CloseClientSocket(e); } } }
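
    One possible approach (a sketch, not the only way): keep a per-connection StringBuilder in e.UserToken, append each chunk exactly once, and only process a message when the terminator (char 3) arrives. The ConnectionState type is hypothetical:

      using System.Net.Sockets;
      using System.Text;

      // Hypothetical per-connection state stored in e.UserToken at accept time.
      class ConnectionState
      {
          public Socket Socket;
          public readonly StringBuilder Pending = new StringBuilder();
      }

      class Receiver
      {
          private const char Etx = (char)3;

          private void ProcessReceive(SocketAsyncEventArgs e)
          {
              var state = (ConnectionState)e.UserToken;
              if (e.BytesTransferred > 0 && e.SocketError == SocketError.Success)
              {
                  // Append this chunk exactly once.
                  state.Pending.Append(Encoding.ASCII.GetString(e.Buffer, e.Offset, e.BytesTransferred));

                  string buffered = state.Pending.ToString();
                  int etx = buffered.IndexOf(Etx);
                  if (etx > -1)
                  {
                      string message = buffered.Substring(0, etx);
                      state.Pending.Remove(0, etx + 1); // keep any bytes after the terminator
                      HandleMessage(message);
                  }

                  // Post the next receive; recurse if it completed synchronously.
                  if (!state.Socket.ReceiveAsync(e))
                      ProcessReceive(e);
              }
          }

          private void HandleMessage(string message) { /* application-specific */ }
      }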

    Read the article

  • Threads in WCF service

    - by dragonfly
    Hi, there is a piece of code: class WCFConsoleHostApp : IBank { private static int _instanceCounter; public WCFConsoleHostApp () { Interlocked.Increment(ref _instanceCounter); Console.WriteLine(string.Format("{0:T} Instance nr " + _instanceCounter + " created", DateTime.Now)); } private static int amount; static void Main(string[] args) { ServiceHost host = new ServiceHost(typeof(WCFConsoleHostApp)); host.Open(); Console.WriteLine("Host is running..."); Console.ReadLine(); } #region IBank Members BankOperationResult IBank.Put(int amount) { Console.WriteLine(string.Format("{0:00} {1}", Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread) + " Putting..."); WCFConsoleHostApp.amount += amount; Thread.Sleep(20000); Console.WriteLine(string.Format("{0:00} {1}", Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread) + " Putting done"); return new BankOperationResult { CurrentAmount = WCFConsoleHostApp.amount, Success = true }; } BankOperationResult IBank.Withdraw(int amount) { Console.WriteLine(string.Format("{0:00} {1}", Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread) + " Withdrawing..."); WCFConsoleHostApp.amount -= amount; Thread.Sleep(20000); Console.WriteLine(string.Format("{0:00} {1}", Thread.CurrentThread.ManagedThreadId, Thread.CurrentThread.IsThreadPoolThread) + " Withdrawing done"); return new BankOperationResult { CurrentAmount = WCFConsoleHostApp.amount, Success = true }; } #endregion } My test client application calls that service from 50 threads (the service is PerCall). What I find very disturbing is that when I add Thread.Sleep(20000), WCF creates one service instance per second, each on a different thread from the pool. When I remove Thread.Sleep(20000), 50 instances are instantiated straight away, and about 2-4 threads are used to do it - which in fact I consider normal. Could somebody explain why Thread.Sleep causes those funny delays in creating instances?
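
    The one-instance-per-second pattern is consistent with the CLR thread pool's thread-injection throttling: when every pool thread is blocked (here, in Thread.Sleep), the pool grows only at a slow, throttled rate rather than instantly. Assuming that is the cause, one way to test it is to raise the pool's minimum thread count before opening the host (the numbers are illustrative):

      using System.Threading;

      static class PoolConfig
      {
          public static void ConfigurePool()
          {
              int workers, io;
              ThreadPool.GetMinThreads(out workers, out io);
              // Keep enough worker threads ready that 50 blocked calls don't
              // wait on the pool's throttled growth.
              ThreadPool.SetMinThreads(50, io);
          }
      }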

    Read the article

  • Random String Generator creates same string on multiple calls

    - by rockinthesixstring
    Hi there. I've built a random string generator, but I'm having a problem whereby if I call the function multiple times, say in a Page_Load method, the function returns the same string twice. Here's the code: ''' <summary>' ''' Generates a Random String' ''' </summary>' ''' <param name="n">number of characters the method should generate</param>' ''' <param name="UseSpecial">should the method include special characters? IE: # ,$, !, etc.</param>' ''' <param name="SpecialOnly">should the method include only the special characters and excludes alpha numeric</param>' ''' <returns>a random string n characters long</returns>' Public Function GenerateRandom(ByVal n As Integer, Optional ByVal UseSpecial As Boolean = True, Optional ByVal SpecialOnly As Boolean = False) As String Dim chars As String() ' a character array to use when generating a random string' Dim ichars As Integer = 74 'number of characters to use out of the chars string' Dim schars As Integer = 0 ' number of characters to skip out of the characters string' chars = { _ "A", "B", "C", "D", "E", "F", _ "G", "H", "I", "J", "K", "L", _ "M", "N", "O", "P", "Q", "R", _ "S", "T", "U", "V", "W", "X", _ "Y", "Z", "0", "1", "2", "3", _ "4", "5", "6", "7", "8", "9", _ "a", "b", "c", "d", "e", "f", _ "g", "h", "i", "j", "k", "l", _ "m", "n", "o", "p", "q", "r", _ "s", "t", "u", "v", "w", "x", _ "y", "z", "!", "@", "#", "$", _ "%", "^", "&", "*", "(", ")", _ "-", "+"} If Not UseSpecial Then ichars = 62 ' only use the alpha numeric characters out of "char"' If SpecialOnly Then schars = 62 : ichars = 74 ' skip the alpha numeric characters out of "char"' Dim rnd As New Random() Dim random As String = String.Empty Dim i As Integer = 0 While i < n random += chars(rnd.[Next](schars, ichars)) System.Math.Max(System.Threading.Interlocked.Increment(i), i - 1) End While rnd = Nothing Return random End Function but if I call something like this Dim str1 As String = GenerateRandom(5) Dim str2 As String = GenerateRandom(5) the response will be something like this g*3Jq g*3Jq and the second time I call it, it will be 3QM0$ 3QM0$ What am I missing? I'd like every generated string to be unique.
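
    The usual cause of this symptom is constructing a new Random inside the function: two calls in quick succession get the same time-based seed, so they produce identical sequences. A common fix is one shared Random guarded by a lock (Random is not thread-safe). A sketch in C# (the question's code is VB, but the idea carries straight over):

      using System;

      static class Rng
      {
          private static readonly Random _random = new Random();
          private static readonly object _sync = new object();

          // One shared instance: successive calls continue the same sequence
          // instead of restarting from an identical time-based seed.
          public static int Next(int minValue, int maxValue)
          {
              lock (_sync) { return _random.Next(minValue, maxValue); }
          }
      }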

    Read the article

  • C# Execute Method (with Parameters) with ThreadPool

    - by washtik
    We have the following piece of code (the idea for this code was found on this website), which will spawn new threads for the method "Do_SomeWork()". This enables us to run the method multiple times asynchronously. The code is: var numThreads = 20; var toProcess = numThreads; var resetEvent = new ManualResetEvent(false); for (var i = 0; i < numThreads; i++) { new Thread(delegate() { Do_SomeWork(Parameter1, Parameter2, Parameter3); if (Interlocked.Decrement(ref toProcess) == 0) resetEvent.Set(); }).Start(); } resetEvent.WaitOne(); However, we would like to make use of the ThreadPool rather than creating our own new threads, which can be detrimental to performance. The question is: how can we modify the above code to make use of the ThreadPool, keeping in mind that the method "Do_SomeWork" takes multiple parameters and also has a return type (i.e. the method is not void)? Also, this is C# 2.0.
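
    A sketch of one way to do this in C# 2.0: wrap the parameters and a result slot in a small state class, queue the work with ThreadPool.QueueUserWorkItem, and keep the Interlocked countdown for completion. Do_SomeWork's signature is assumed here purely for illustration:

      using System.Threading;

      class WorkItem
      {
          public string Parameter1;
          public int Parameter2;
          public bool Parameter3;
          public object Result; // captures Do_SomeWork's return value
      }

      class Runner
      {
          // assumed signature, for illustration only
          static object Do_SomeWork(string p1, int p2, bool p3) { return null; }

          static void Run()
          {
              const int numThreads = 20;
              int toProcess = numThreads;
              ManualResetEvent resetEvent = new ManualResetEvent(false);
              WorkItem[] items = new WorkItem[numThreads];

              for (int i = 0; i < numThreads; i++)
              {
                  WorkItem item = new WorkItem();
                  item.Parameter1 = "p1"; item.Parameter2 = i; item.Parameter3 = true;
                  items[i] = item;

                  ThreadPool.QueueUserWorkItem(delegate(object state)
                  {
                      WorkItem w = (WorkItem)state;
                      w.Result = Do_SomeWork(w.Parameter1, w.Parameter2, w.Parameter3);
                      if (Interlocked.Decrement(ref toProcess) == 0)
                          resetEvent.Set();
                  }, item);
              }

              resetEvent.WaitOne();
              // items[n].Result now holds each call's return value
          }
      }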

    Read the article

  • How can I solve the Log Pile wooden puzzle with a computer program?

    - by craig1410
    Can anyone suggest how to solve the Log Pile wooden puzzle using a computer program? See here to visualise the puzzle: http://www.puzzlethis.co.uk/products/madcow/the_log_pile.htm The picture only shows some of the pieces. The full set of 10 pieces is configured as follows, with 1 representing a peg, -1 representing a hole and 0 representing neither a peg nor a hole. -1,1,0,-1,0 1,0,1,0,0 1,-1,1,0,0 -1,-1,0,0,-1 -1,1,0,1,0 0,1,0,0,1 1,0,-1,0,-1 0,-1,0,1,0 0,0,-1,1,-1 1,0,-1,0,0 The pieces can be interlocked in two layers of 5 pieces each, with the top layer at 90 degrees to the bottom layer as shown in the above link. I have already created a solution to this problem myself using Java, but I feel that it was a clumsy solution and I am interested to see some more sophisticated solutions. Feel free either to suggest a general approach or to provide a working program in the language of your choice. My approach was to use the numeric notation above to create an array of "Logs". I then used a combination/permutation generator to try all possible arrangements of the Logs until a solution was found where all the intersections equated to zero (i.e. peg to hole, hole to peg, or blank to blank). I used some speed-ups to detect the first failed intersection for a given permutation and move on to the next permutation. I hope you find this as interesting as I have. Thanks, Craig.
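
    For reference, the core fit test such a brute-force search needs is tiny, assuming the top layer crosses the bottom at 90 degrees so that bottom piece i meets top piece j at exactly one notch. The indexing convention below is an assumption, and flipping/reversing pieces is omitted:

      static class LogPile
      {
          // A configuration fits when every crossing sums to zero:
          // peg (1) meets hole (-1), or blank (0) meets blank (0),
          // matching the zero-sum criterion described in the question.
          public static bool Fits(int[][] bottom, int[][] top)
          {
              for (int i = 0; i < 5; i++)
                  for (int j = 0; j < 5; j++)
                      if (bottom[i][j] + top[j][i] != 0)
                          return false;
              return true;
          }
      }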

    Read the article

  • Multithreaded linked list traversal

    - by Rob Bryce
    Given a (doubly) linked list of objects (C++), I have an operation that I would like to multithread, to perform on each object. The cost of the operation is not uniform for each object. The linked list is the preferred storage for this set of objects for a variety of reasons. The 1st element in each object is the pointer to the next object; the 2nd element is the pointer to the previous object in the list. I have solved the problem by building an array of nodes and applying OpenMP. This gave decent performance. I then switched to my own threading routines (based on Windows primitives) and by using InterlockedIncrement() (acting on the index into the array), I can achieve higher overall CPU utilization and faster throughput. Essentially, the threads work by "leap-frog'ing" along the elements. My next approach to optimization is to try to eliminate creating/reusing the array of elements from my linked list. However, I'd like to continue with this "leap-frog" approach and somehow use some nonexistent routine that could be called "InterlockedCompareDereference" - to atomically compare against NULL (end of list) and conditionally dereference & store, returning the dereferenced value. I don't think InterlockedCompareExchangePointer() will work on its own, since I cannot atomically dereference the pointer and call this Interlocked() method. I've done some reading, and others are suggesting critical sections or spin-locks. Critical sections seem heavy-weight here. I'm tempted to try spin-locks, but I thought I'd first pose the question here and ask what other people are doing. I'm not convinced that the InterlockedCompareExchangePointer() method itself could be used like a spin-lock. Then one also has to consider acquire/release/fence semantics... Ideas? Thanks!
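
    For what it's worth, the leap-frog idea can be expressed directly on the list by CAS-ing a shared cursor forward, which is roughly the "InterlockedCompareDereference" the question wishes for: read the cursor, dereference its Next outside the atomic op, then compare-and-swap the cursor in a retry loop. A C# sketch (the question is C++, and Node is illustrative); this relies on the list not being structurally modified during the traversal:

      using System.Threading;

      class Node
      {
          public Node Next;
          // payload ...
      }

      class ListWorkQueue
      {
          private Node _cursor; // next unclaimed node; null when exhausted

          public ListWorkQueue(Node head) { _cursor = head; }

          // Atomically claims one node for the calling thread, or returns null.
          public Node Claim()
          {
              while (true)
              {
                  Node seen = _cursor;
                  if (seen == null)
                      return null; // end of list
                  Node next = seen.Next; // plain dereference; list is stable
                  if (Interlocked.CompareExchange(ref _cursor, next, seen) == seen)
                      return seen; // this thread won the race for the node
              }
          }
      }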

    Read the article

  • Thread-Safe lazy instantiating using MEF

    - by Xaqron
    // Member Variable private static readonly object _syncLock = new object(); // Now inside a static method foreach (var lazyObject in plugins) { if ((string)lazyObject.Metadata["key"] == "something") { lock (_syncLock) { // It seems the `IsValueCreated` is not up-to-date if (!lazyObject.IsValueCreated) lazyObject.Value.DoSomething(); } return lazyObject.Value; } } Here I need synchronized access per loop. There are many threads iterating this loop, and based on the key they are looking for, a lazy instance is created and returned. lazyObject should not be created more than once. Although the Lazy class is designed for exactly this, and despite the lock, under heavy threading I get more than one instance created (I track this with an Interlocked.Increment on a volatile static int and log it somewhere). The problem is I don't have access to the definition of Lazy, and MEF defines how the Lazy class creates objects. I should mention that the CompositionContainer has a thread-safe option in its constructor, which is already used. My questions: 1) Why doesn't the lock work? 2) Should I use an array of locks instead of one lock to improve performance?

    Read the article

  • localhost works but phpMyAdmin is blank

    - by Giskin Leow
    WAMP users who have problems with both localhost and phpMyAdmin loading blank usually have a port problem. In my case only phpMyAdmin is blank; SQLBuddy and phpinfo work with no problem. I tried uninstalling and reinstalling WAMP, and tried XAMPP: same problem, everything works well except phpMyAdmin. MySQL log: 120905 8:03:08 [Note] Plugin 'FEDERATED' is disabled. 120905 8:03:08 InnoDB: The InnoDB memory heap is disabled 120905 8:03:08 InnoDB: Mutexes and rw_locks use Windows interlocked functions 120905 8:03:08 InnoDB: Compressed tables use zlib 1.2.3 120905 8:03:09 InnoDB: Initializing buffer pool, size = 128.0M 120905 8:03:09 InnoDB: Completed initialization of buffer pool 120905 8:03:09 InnoDB: highest supported file format is Barracuda. 120905 8:03:09 InnoDB: Waiting for the background threads to start 120905 8:03:10 InnoDB: 1.1.8 started; log sequence number 1595675 120905 8:03:11 [Note] Server hostname (bind-address): '(null)'; port: 3306 120905 8:03:11 [Note] - '(null)' resolves to '::'; 120905 8:03:11 [Note] - '(null)' resolves to '0.0.0.0'; 120905 8:03:11 [Note] Server socket created on IP: '0.0.0.0'. 120905 8:03:13 [Note] Event Scheduler: Loaded 0 events 120905 8:03:13 [Note] wampmysqld: ready for connections. Apache log: [Wed Sep 05 08:03:09 2012] [notice] Apache/2.2.22 (Win32) PHP/5.4.3 configured -- resuming normal operations [Wed Sep 05 08:03:09 2012] [notice] Server built: May 13 2012 13:32:42 [Wed Sep 05 08:03:09 2012] [notice] Parent: Created child process 3812 [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Child process is running [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Acquired the start mutex. [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting 64 worker threads. [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80. [Wed Sep 05 08:03:09 2012] [notice] Child 3812: Starting thread to listen on port 80. [Wed Sep 05 08:04:14 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico [Wed Sep 05 08:09:50 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/favicon.ico [Wed Sep 05 08:41:03 2012] [error] [client 127.0.0.1] File does not exist: C:/wamp/www/phpMyAdmin

    Read the article

  • Subterranean IL: The ThreadLocal type

    - by Simon Cooper
    I came across ThreadLocal<T> while I was researching ConcurrentBag. To look at it, it doesn't really make much sense. What are all those extra Cn classes doing in there? Why is there a GenericHolder<T,U,V,W> class? What's going on? However, digging deeper, it's a rather ingenious solution to a tricky problem. Thread statics Declaring a variable as thread-static, that is, so that values assigned to and read from the field are specific to the thread doing the reading, is quite easy in .NET: [ThreadStatic] private static string s_ThreadStaticField; ThreadStaticAttribute is not a pseudo-custom attribute; it is compiled as a normal attribute, but the CLR has in-built magic, activated by that attribute, to redirect accesses to the field based on the executing thread's identity. ThreadStaticAttribute provides a simple solution when you want to use a single field as thread-static. What if you want to create an arbitrary number of thread-static variables at runtime? Thread-static fields can only be declared, and are fixed, at compile time. Prior to .NET 4, you only had one solution: thread local data slots. This is a lesser-known function of Thread that has existed since .NET 1.1: LocalDataStoreSlot threadSlot = Thread.AllocateNamedDataSlot("slot1"); string value = "foo"; Thread.SetData(threadSlot, value); string gettedValue = (string)Thread.GetData(threadSlot); Each instance of LocalDataStoreSlot mediates access to a single slot, and each slot acts like a separate thread-static field. As you can see, using thread data slots is quite cumbersome. You need to keep track of LocalDataStoreSlot objects, it's not obvious how instances of LocalDataStoreSlot correspond to individual thread-static variables, and it's not type-safe. It's also relatively slow and complicated; the internal implementation consists of a whole series of classes hanging off a single thread-static field in Thread itself, using various arrays, lists, and locks for synchronization. ThreadLocal<T> is far simpler and easier to use. ThreadLocal ThreadLocal provides an abstraction around thread-static fields that allows it to be used just like any other class; it can be used as a replacement for a thread-static field, it can be used in a List<ThreadLocal<T>>, and you can create as many as you need at runtime. So what does it do? It can't just have an instance-specific thread-static field, because thread-static fields have to be declared as static, and so are shared between all instances of the declaring type. There's something else going on here. The values stored in instances of ThreadLocal<T> are stored in instantiations of the GenericHolder<T,U,V,W> class, which contains a single [ThreadStatic] field (s_value) to store the actual value. This class is then instantiated with various combinations of the Cn types for its generic arguments. In .NET, each separate instantiation of a generic type has its own static state. For example, GenericHolder<int,C0,C1,C2> has a completely separate s_value field from GenericHolder<int,C1,C14,C1>. This feature is (ab)used by ThreadLocal to emulate instance thread-static fields. Every time an instance of ThreadLocal is constructed, it is assigned a unique number from the static s_currentTypeId field using Interlocked.Increment, in the FindNextTypeIndex method. The hexadecimal representation of that number then defines the specific Cn types that instantiate the GenericHolder class. That instantiation is therefore 'owned' by that instance of ThreadLocal.
This gives each instance of ThreadLocal its own ThreadStatic field through a specific, unique instantiation of the GenericHolder class. Although GenericHolder has four type variables, the first one is always instantiated to the type stored in the ThreadLocal<T>. This leaves three free type variables, each of which can be instantiated to one of 16 types (C0 to C15). This puts an upper limit of 4096 (16³) on the number of ThreadLocal<T> instances that can be created for each value of T. That is, there can be a maximum of 4096 instances of ThreadLocal<string>, and separately a maximum of 4096 instances of ThreadLocal<object>, etc. However, there is an upper limit of 16384 enforced on the total number of ThreadLocal instances in the AppDomain. This is to stop too much memory being used by thousands of instantiations of GenericHolder<T,U,V,W>; once a type is loaded into an AppDomain it cannot be unloaded, and will continue to sit there taking up memory until the AppDomain is unloaded. The total number of ThreadLocal instances created is tracked by the ThreadLocalGlobalCounter class. So what happens when either limit is reached? Firstly, to try to stop the limits being reached, ThreadLocal recycles the GenericHolder type indexes of instances that get disposed, using the s_availableIndices concurrent stack. This allows the GenericHolder instantiations of disposed ThreadLocal instances to be re-used. But if there aren't any available instantiations, then ThreadLocal falls back on a standard thread-local slot using TLSHolder. This makes it very important to dispose of your ThreadLocal instances if you'll be using lots of them, so the type instantiations can be recycled. The previous way of creating arbitrary thread-static variables, thread data slots, was slow, clunky, and hard to use. In comparison, ThreadLocal can be used just like any other type, and each instance appears from the outside to be a non-static thread-static variable. It does this by using the CLR type system to assign each instance of ThreadLocal its own instantiated type containing a thread-static field, delegating a lot of the bookkeeping that thread data slots had to do to the CLR type system itself! That's a very clever use of the CLR type system.
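    To make the usage concrete, here is a minimal sketch (not from the original post) showing ThreadLocal<T> doing the job that previously needed thread data slots; the explicit Dispose matters because, as described above, it lets the GenericHolder type instantiation be recycled:

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class ThreadLocalDemo
    {
        static void Main()
        {
            // Each ThreadLocal<int> behaves like a thread-static field created
            // at runtime; the factory delegate runs once per thread on first access.
            using (var perThreadCounter = new ThreadLocal<int>(() => 0))
            {
                Parallel.For(0, 8, i =>
                {
                    perThreadCounter.Value++; // touches only this thread's copy
                    Console.WriteLine("Thread {0}: {1}",
                        Thread.CurrentThread.ManagedThreadId,
                        perThreadCounter.Value);
                });
            } // disposing releases the GenericHolder instantiation for re-use
        }
    }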

    Read the article

  • How to configure maximum number of transport channels in WCF using basicHttpBinding?

    - by Hemant
    Consider the following code, which is essentially a WCF host: [ServiceContract (Namespace = "http://www.mightycalc.com")] interface ICalculator { [OperationContract] int Add (int aNum1, int aNum2); } [ServiceBehavior (InstanceContextMode = InstanceContextMode.PerCall)] class Calculator: ICalculator { public int Add (int aNum1, int aNum2) { Thread.Sleep (2000); //Simulate a lengthy operation return aNum1 + aNum2; } } class Program { static void Main (string[] args) { try { using (var serviceHost = new ServiceHost (typeof (Calculator))) { var httpBinding = new BasicHttpBinding (BasicHttpSecurityMode.None); serviceHost.AddServiceEndpoint (typeof (ICalculator), httpBinding, "http://172.16.9.191:2221/calc"); serviceHost.Open (); Console.WriteLine ("Service is running. ENJOY!!!"); Console.WriteLine ("Type 'stop' and hit enter to stop the service."); Console.ReadLine (); if (serviceHost.State == CommunicationState.Opened) serviceHost.Close (); } } catch (Exception e) { Console.WriteLine (e); Console.ReadLine (); } } } Also the WCF client program is: class Program { static int COUNT = 0; static Timer timer = null; static void Main (string[] args) { var threads = new Thread[10]; for (int i = 0; i < threads.Length; i++) { threads[i] = new Thread (Calculate); threads[i].Start (null); } timer = new Timer (o => Console.WriteLine ("Count: {0}", COUNT), null, 1000, 1000); Console.ReadLine (); timer.Dispose (); } static void Calculate (object state) { var c = new CalculatorClient ("BasicHttpBinding_ICalculator"); c.Open (); while (true) { try { var sum = c.Add (2, 3); Interlocked.Increment (ref COUNT); } catch (Exception ex) { Console.WriteLine ("Error on thread {0}: {1}", Thread.CurrentThread.Name, ex.GetType ()); break; } } c.Close (); } } Basically, I am creating 10 proxy clients and then repeatedly calling the Add service method. Now if I run both applications and observe the opened TCP connections using netstat, I find that: If both client and server are running on the same machine, the number of TCP connections equals the number of proxy objects. It means all requests are being served in parallel, which is good. If I run the server on a separate machine, at most 2 TCP connections are opened regardless of the number of proxy objects I create. Only 2 requests run in parallel, which hurts the processing speed badly. If I switch to net.tcp binding, everything works fine (a separate TCP connection for each proxy object, even if they are running on different machines). I am very confused and unable to make basicHttpBinding use more TCP connections. I know it is a long question, but please help!
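    Not part of the original question, but the behaviour described is consistent with .NET's default HTTP connection limit: System.Net allows only 2 concurrent connections per remote host by default, and loopback is exempt, which would explain why everything works when client and server share a machine. A sketch of the usual client-side change, assuming nothing else is capping the connections:

    using System.Net;

    // Before opening any proxies: raise the per-host HTTP connection cap.
    // The .NET default for remote hosts is 2 (the old HTTP/1.1 guideline),
    // which matches the observed maximum of 2 parallel requests.
    ServicePointManager.DefaultConnectionLimit = 10; // e.g. one per proxy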

    Read the article

  • .NET JIT Code Cache leaking?

    - by pitchfork
    We have a server component written in .NET 3.5. It runs as a service on Windows Server 2008 Standard Edition. It works great, but after some time (days) we notice massive slowdowns and an increased working set. We suspected some kind of memory leak and used WinDBG/SOS to analyze dumps of the process. Unfortunately the GC heap doesn't show any leak, but we noticed that the JIT code heap has grown from 8MB after the start to more than 1GB after a few days. We don't use any dynamic code generation techniques of our own. We use Linq2SQL, which is known for dynamic code generation, but we don't know if it can cause such a problem. The main question is whether there is any technique to analyze the dump and check where all these Host Code Heap blocks that are shown in the WinDBG dumps come from? [Update] In the meantime we did some more analysis and identified Linq2SQL as the probable suspect, especially since we do not use precompiled queries. The following example program reproduces exactly the same behaviour, where more and more Host Code Heap blocks are created over time. using System; using System.Linq; using System.Threading; namespace LinqStressTest { class Program { static void Main(string[] args) { for (int i = 0; i < 100; ++ i) ThreadPool.QueueUserWorkItem(Worker); while(runs < 1000000) { Thread.Sleep(5000); } } static void Worker(object state) { for (int i = 0; i < 50; ++i) { using (var ctx = new DataClasses1DataContext()) { long id = rnd.Next(); var x = ctx.AccountNucleusInfos.Where(an => an.Account.SimPlayers.First().Id == id).SingleOrDefault(); } } var localruns = Interlocked.Add(ref runs, 1); System.Console.WriteLine("Action: " + localruns); ThreadPool.QueueUserWorkItem(Worker); } static Random rnd = new Random(); static long runs = 0; } } When we replace the Linq query with a precompiled one, the problem seems to disappear.
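    Since the update points at non-precompiled queries, here is a minimal sketch (not from the original post) of what precompiling that query with CompiledQuery.Compile might look like; the entity type name AccountNucleusInfo is an assumption derived from the table name in the sample:

    using System;
    using System.Data.Linq;
    using System.Linq;

    static class Queries
    {
        // Compiled once and reused, so LINQ to SQL does not emit a fresh
        // dynamic method (and thus new JIT code) for every execution.
        public static readonly Func<DataClasses1DataContext, long, AccountNucleusInfo>
            ByPlayerId = CompiledQuery.Compile(
                (DataClasses1DataContext ctx, long id) =>
                    ctx.AccountNucleusInfos
                       .Where(an => an.Account.SimPlayers.First().Id == id)
                       .SingleOrDefault());
    }

    // inside the worker: var x = Queries.ByPlayerId(ctx, id);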

    Read the article

  • Generic class for performing mass-parallel queries. Feedback?

    - by Aaron
    I don't understand why, but there appears to be no mechanism in the client library for performing many queries in parallel against Windows Azure Table Storage. I've created a generic class that can be used to save considerable time, and you're welcome to use it however you wish. I would appreciate it, however, if you could pick it apart and provide feedback on how to improve this class. public class AsyncDataQuery<T> where T: new() { public AsyncDataQuery(bool preserve_order) { m_preserve_order = preserve_order; this.Queries = new List<CloudTableQuery<T>>(1000); } public void AddQuery(IQueryable<T> query) { var data_query = (DataServiceQuery<T>)query; var uri = data_query.RequestUri; // required this.Queries.Add(new CloudTableQuery<T>(data_query)); } /// <summary> /// Blocking but still optimized. /// </summary> public List<T> Execute() { this.BeginAsync(); return this.EndAsync(); } public void BeginAsync() { if (m_preserve_order == true) { this.Items = new List<T>(Queries.Count); for (var i = 0; i < Queries.Count; i++) { this.Items.Add(new T()); } } else { this.Items = new List<T>(Queries.Count * 2); } m_wait = new ManualResetEvent(false); for (var i = 0; i < Queries.Count; i++) { var query = Queries[i]; query.BeginExecuteSegmented(callback, i); } } public List<T> EndAsync() { m_wait.WaitOne(); return this.Items; } private List<T> Items { get; set; } private List<CloudTableQuery<T>> Queries { get; set; } private bool m_preserve_order; private ManualResetEvent m_wait; private int m_completed = 0; private void callback(IAsyncResult ar) { int i = (int)ar.AsyncState; CloudTableQuery<T> query = Queries[i]; var response = query.EndExecuteSegmented(ar); if (m_preserve_order == true) { // preserve ordering only supports one result per query this.Items[i] = response.Results.First(); } else { // add any number of items this.Items.AddRange(response.Results); } if (response.HasMoreResults == true) { // more data to pull query.BeginExecuteSegmented(response.ContinuationToken, callback, i); return; } m_completed = Interlocked.Increment(ref m_completed); if (m_completed == Queries.Count) { m_wait.Set(); } } }
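    One concrete piece of feedback, not in the original post: the callback updates its completion counter with m_completed = Interlocked.Increment(ref m_completed);, which re-assigns the field with a plain (non-atomic) write after the atomic increment, and the subsequent unsynchronized comparison can misfire under contention. A safer sketch of that final block:

    // Interlocked.Increment already updates m_completed atomically and
    // returns the new value; assigning the result back re-introduces a race.
    if (Interlocked.Increment(ref m_completed) == Queries.Count)
    {
        m_wait.Set(); // exactly one callback observes the final count
    }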

    Read the article

  • How to fix basicHttpBinding in WCF when using multiple proxy clients?

    - by Hemant
    [Question seems a little long but please have patience. It has sample source to explain the problem.] Consider the following code, which is essentially a WCF host: [ServiceContract (Namespace = "http://www.mightycalc.com")] interface ICalculator { [OperationContract] int Add (int aNum1, int aNum2); } [ServiceBehavior (InstanceContextMode = InstanceContextMode.PerCall)] class Calculator: ICalculator { public int Add (int aNum1, int aNum2) { Thread.Sleep (2000); //Simulate a lengthy operation return aNum1 + aNum2; } } class Program { static void Main (string[] args) { try { using (var serviceHost = new ServiceHost (typeof (Calculator))) { var httpBinding = new BasicHttpBinding (BasicHttpSecurityMode.None); serviceHost.AddServiceEndpoint (typeof (ICalculator), httpBinding, "http://172.16.9.191:2221/calc"); serviceHost.Open (); Console.WriteLine ("Service is running. ENJOY!!!"); Console.WriteLine ("Type 'stop' and hit enter to stop the service."); Console.ReadLine (); if (serviceHost.State == CommunicationState.Opened) serviceHost.Close (); } } catch (Exception e) { Console.WriteLine (e); Console.ReadLine (); } } } Also the WCF client program is: class Program { static int COUNT = 0; static Timer timer = null; static void Main (string[] args) { var threads = new Thread[10]; for (int i = 0; i < threads.Length; i++) { threads[i] = new Thread (Calculate); threads[i].Start (null); } timer = new Timer (o => Console.WriteLine ("Count: {0}", COUNT), null, 1000, 1000); Console.ReadLine (); timer.Dispose (); } static void Calculate (object state) { var c = new CalculatorClient ("BasicHttpBinding_ICalculator"); c.Open (); while (true) { try { var sum = c.Add (2, 3); Interlocked.Increment (ref COUNT); } catch (Exception ex) { Console.WriteLine ("Error on thread {0}: {1}", Thread.CurrentThread.Name, ex.GetType ()); break; } } c.Close (); } } Basically, I am creating 10 proxy clients and then repeatedly calling the Add service method on separate threads. Now if I run both applications and observe the opened TCP connections using netstat, I find that: If both client and server are running on the same machine, the number of TCP connections equals the number of proxy objects. It means all requests are being served in parallel, which is good. If I run the server on a separate machine, at most 2 TCP connections are opened regardless of the number of proxy objects I create. Only 2 requests run in parallel, which hurts the processing speed badly. If I switch to net.tcp binding, everything works fine (a separate TCP connection for each proxy object, even if they are running on different machines). I am very confused and unable to make basicHttpBinding use more TCP connections. I know it is a long question, but please help!
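    As with the similar question above, the 2-connection ceiling matches System.Net's default limit of 2 concurrent HTTP connections per remote host. Besides setting ServicePointManager.DefaultConnectionLimit in code, the same cap can be raised declaratively; a sketch, assuming the client's app.config:

    <configuration>
      <system.net>
        <connectionManagement>
          <!-- default is 2 concurrent connections per remote address -->
          <add address="*" maxconnection="10" />
        </connectionManagement>
      </system.net>
    </configuration>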

    Read the article

  • Small performance test on a web service

    - by vtortola
    Hi, I'm trying to develop a small application that tests how many requests per second my service can support, but I think I'm doing something wrong. The service is in an early development stage, but I'd like to have this test handy in order to check from time to time that I'm not doing something that decreases performance. The problem is that I cannot get the web server or the database server to go to 100% CPU. I'm using three different computers: on one is the web server (WinSrv Standard 2008 x64 IIS7), on another the database (Win 2K - SQL Server 2005), and the last is my computer (Win7 x64 Ultimate), where I'll run the test. The computers are connected through a 100 Mbit Ethernet switch. The request POST is 9 bytes and the response will be 842 bytes. The test launches several threads, and each thread has a "while" loop; in each iteration it creates a WebRequest object, performs a call, increments a common counter, waits between 1 and 5 milliseconds, then does it again: static Int32 counter = 0; static void Main(string[] args) { ServicePointManager.DefaultConnectionLimit = 250; Console.WriteLine("Ready. Press any key..."); Console.ReadKey(); Console.WriteLine("Running..."); String localhost = "localhost"; String linuxmono = "192.168.1.74"; String server= "192.168.1.5:8080"; DateTime start = DateTime.Now; Random r = new Random(DateTime.Now.Millisecond); for (int i = 0; i < 50; i++) { new Thread(new ParameterizedThreadStart(Test)).Start(server); Thread.Sleep(r.Next(1, 3)); } Thread.Sleep(2000); while (true) { Console.WriteLine("Request per second :" + counter/DateTime.Now.Subtract(start).TotalSeconds ); Thread.Sleep(3000); } } public static void Test(Object ip) { Guid guid = Guid.NewGuid(); Random r = new Random(DateTime.Now.Millisecond); while (true) { String test = "<lalala/>"; WebRequest req = WebRequest.Create("http://" + (String)ip + "/WebApp/"+guid.ToString()+"/Data/Tables=whatever"); req.Method = "POST"; req.ContentType = "application/xml"; req.Credentials = new NetworkCredential("aaa", "aaa","domain"); Byte[] array = Encoding.UTF8.GetBytes(test); req.ContentLength = array.Length; using (Stream reqStream = req.GetRequestStream()) { reqStream.Write(array, 0, array.Length); reqStream.Close(); } using (Stream responseStream = req.GetResponse().GetResponseStream()) { String response = new StreamReader(responseStream).ReadToEnd(); if (response.Length != 842) Console.Write(" EEEE "); } Interlocked.Increment(ref counter); Thread.Sleep(r.Next(1,5)); } } If I run the test, neither of the computers shows excessive CPU usage. Let's say I get X requests per second; if I run the console application twice at the same moment, I get X/2 requests per second in each one... but still the web server is at 30% CPU and the database server at 25%... I've tried removing the Thread.Sleep in the loop, but it doesn't make a big difference. I'd like to push the machines to the maximum, to check how many requests per second they can provide. I guessed that I could do it this way... but apparently I'm missing something here... What is the problem? Kind regards.
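    A side observation, not from the original post: with Thread.Sleep(1..5) in the loop, the default Windows timer resolution (roughly 15.6 ms) means each sleep can actually last around 15 ms, capping every thread at roughly 65 requests/second regardless of server speed, so one possible ceiling is the client itself. A sketch of the loop body with the throttle removed and the response disposed deterministically:

    // inside Test(Object ip), replacing the body of the while loop:
    using (Stream reqStream = req.GetRequestStream())
    {
        reqStream.Write(array, 0, array.Length);
    }
    // Dispose the WebResponse itself, not just its stream, so the
    // underlying connection is returned to the pool promptly.
    using (WebResponse resp = req.GetResponse())
    using (StreamReader reader = new StreamReader(resp.GetResponseStream()))
    {
        if (reader.ReadToEnd().Length != 842) Console.Write(" EEEE ");
    }
    Interlocked.Increment(ref counter);
    // no Thread.Sleep: let each thread issue requests back-to-back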

    Read the article

  • First toe in the water with Object Databases : DB4O

    - by REA_ANDREW
    I have been wanting to have a play with object databases for a while now, and today I have done just that.  One of the obvious choices I had to make was which one to use.  My criterion for choosing one today was simple: I wanted one which I could literally whack in and start using, which means I wanted one which either had a .NET API or was designed/ported to .NET.  My decision was between two: db4o and MongoDb. I went for db4o for the single reason that it looked like I could get it running and integrated the quickest.  I am making a blogging application and front end as a project with which I can test and learn these object databases.  Another requirement which I thought I would mention is that I also want to be able to use the said database in a shared hosting environment, where I cannot install, run and maintain a server instance of said object database.  I can do exactly this with db4o. I have not tried to do this with MongoDb at the time of writing.  There are quite a few in the industry now, and you can read an interesting post about different ones and how they are used by some of the heavyweights in the industry here : http://blog.marcua.net/post/442594842/notes-from-nosql-live-boston-2010 In the example which I am building I am using StructureMap as my IOC.  To inject the object for db4o I went with a singleton instance scope, as I am using a single file and I need it to be available to any thread in the process, as opposed to using the server implementation where I could open and close client connections with the server handling each one respectively.  Again I want to point out that I have chosen to stick with the non-server implementation of db4o because I want to use this in a shared hosting environment where I cannot have such servers installed and run.     public static class Bootstrapper    {        public static void ConfigureStructureMap()        {            ObjectFactory.Initialize(x => x.AddRegistry(new MyApplicationRegistry()));        }    }    public class MyApplicationRegistry : Registry    {        public const string DB4O_FILENAME = "blog123";        public string DbPath        {            get            {                return Path.Combine(Path.GetDirectoryName(Assembly.GetAssembly(typeof(IBlogRepository)).Location), DB4O_FILENAME);            }        }        public MyApplicationRegistry()        {            For<IObjectContainer>().Singleton().Use(                () => Db4oEmbedded.OpenFile(Db4oEmbedded.NewConfiguration(), DbPath));            Scan(assemblyScanner =>            {                assemblyScanner.TheCallingAssembly();                assemblyScanner.WithDefaultConventions();            });        }    } So my code above is the StructureMap plumbing which I use for the application.  I am doing this simply as a quick scratch pad to play around with different things, so I am simply segregating logical layers with folder structure as opposed to different assemblies.  It will be easy to do this with any segment if I want, but for the purposes of example I have literally just whacked everything into the one assembly.  You can see an example file structure I have on the right.  
I am planning on testing out a few implementations of the object databases out there, so that I can program to the IBlogRepository interface. One of the things I was unsure about was how it performs in a multi-threaded environment, which is how it will undoubtedly be used nine times out of ten. Because I am using the db context as a singleton, I assumed the library was thread-safe, but I could not find this stated anywhere in the documentation (again, this is probably me not reading things correctly).  In short, though, I threw together a simple test where I iterate to a limit, each time kicking off a common task on a thread from the thread pool. The task simply creates a random Post and adds it to the storage. I put the execution of the threads inside the Setup of the test and then simply ensure the number of posts committed to the database equals the number of iterations I made; here is the code I used to do the multi-threaded jobs: [TestInitialize] public void Setup() { var sw = new System.Diagnostics.Stopwatch(); sw.Start(); var resetEvent = new ManualResetEvent(false); ThreadPool.SetMaxThreads(20, 20); for (var i = 0; i < MAX_ITERATIONS; i++) { ThreadPool.QueueUserWorkItem(delegate(object state) { var eventToReset = (ManualResetEvent)state; var post = new Post { Author = MockUser, Content = "Mock Content", Title = "Title" }; Repository.Put(post); var counter = Interlocked.Decrement(ref _threadCounter); if (counter == 0) eventToReset.Set(); }, resetEvent); } WaitHandle.WaitAll(new[] { resetEvent }); sw.Stop(); Console.WriteLine("{0:00}.{1:00} seconds", sw.Elapsed.Seconds, sw.Elapsed.Milliseconds); }   I was not doing this to test the speed performance of db4o, but while I was at it I could not help putting in a StopWatch and seeing, out of sheer interest, how fast it would be to insert a number of Posts.  I tested it out in this case with 10000 inserts of a small, simple POCO and it resulted in an average of:  899.36 object inserts / second.  Again, this is just a simple, crude test which came out of my curiosity at how it performed under many threads when using the non-server implementation of db4o. (The spec summary of the computer I used was shown as an image in the original post.) With regards to the actual Repository implementation itself, it really is quite straightforward, and I have to say I am very surprised at how easy it was to integrate and get up and running.  One thing I have noticed in the exposure I have had so far is that Query returns IList<T> as opposed to IQueryable<T>, but again I have not looked into this in depth; that support could be there already, and if not, they have provided everything one needs to make their own repository.  An example of a couple of methods from my db4o implementation of the BlogRepository is below: public class BlogRepository : IBlogRepository { private readonly IObjectContainer _db; public BlogRepository(IObjectContainer db) { _db = db; } public void Put(DomainObject obj) { _db.Store(obj); } public void Delete(DomainObject obj) { _db.Delete(obj); } public Post GetByKey(object key) { return _db.Query<Post>(post => post.Key == key).FirstOrDefault(); } … Anyways I hope to get a few more implementations going of the object databases and literally just get familiarized with them and the concept of NoSQL databases. Cheers for now, Andrew
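    To round off the wiring above, a short hedged sketch (not from the original post) of how the repository might be resolved and used through StructureMap's ObjectFactory, the idiomatic entry point in StructureMap 2.x; the Post initializer values and mockUser are placeholders:

    // At application start-up:
    Bootstrapper.ConfigureStructureMap();

    // Constructor injection is preferable, but service location keeps the
    // example short; this resolves the singleton-scoped IObjectContainer
    // behind the scenes via the registry's default-convention scan.
    var repository = ObjectFactory.GetInstance<IBlogRepository>();

    repository.Put(new Post
    {
        Author = mockUser,          // placeholder author object
        Title = "Hello db4o",
        Content = "First post stored as a plain object."
    });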

    Read the article

  • Inside the Concurrent Collections: ConcurrentDictionary

    - by Simon Cooper
    Using locks to implement a thread-safe collection is rather like using a sledgehammer - unsubtle, easy to understand, and tends to make any other tool redundant. Unlike the previous two collections I looked at, ConcurrentStack and ConcurrentQueue, ConcurrentDictionary uses locks quite heavily. However, it is careful to wield locks only where necessary to ensure that concurrency is maximised. This will, by necessity, be a higher-level look than my other posts in this series, as there is quite a lot of code and logic in ConcurrentDictionary. Therefore, I do recommend that you have ConcurrentDictionary open in a decompiler to have a look at all the details that I skip over. The problem with locks There are several things to bear in mind when using locks, as encapsulated by the lock keyword in C# and the System.Threading.Monitor class in .NET (if you're unsure as to what lock does in C#, I briefly covered it in my first post in the series): Locks block threads The most obvious problem is that threads waiting on a lock can't do any work at all. No preparatory work, no 'optimistic' work like in ConcurrentQueue and ConcurrentStack, nothing. It sits there, waiting to be unblocked. This is bad if you're trying to maximise concurrency. Locks are slow Whereas most of the methods on the Interlocked class can be compiled down to a single CPU instruction, ensuring atomicity at the hardware level, taking out a lock requires some heavy lifting by the CLR and the operating system. There's quite a bit of work required to take out a lock, block other threads, and wake them up again. If locks are used heavily, this impacts performance. Deadlocks When using locks there's always the possibility of a deadlock - two threads, each holding a lock, each trying to acquire the other's lock. Fortunately, this can be avoided with careful programming and structured lock-taking, as we'll see. So, it's important to minimise where locks are used to maximise the concurrency and performance of the collection. Implementation As you might expect, ConcurrentDictionary is similar in basic implementation to the non-concurrent Dictionary, which I studied in a previous post. I'll be using some concepts introduced there, so I recommend you have a quick read of it. So, if you were implementing a thread-safe dictionary, what would you do? The naive implementation is to simply have a single lock around all methods accessing the dictionary. This would work, but doesn't allow much concurrency. Fortunately, the bucketing used by Dictionary allows a simple but effective improvement to this - one lock per bucket. This allows different threads modifying different buckets to do so in parallel. Any thread making changes to the contents of a bucket takes the lock for that bucket, ensuring those changes are thread-safe. The method that maps each bucket to a lock is the GetBucketAndLockNo method: private void GetBucketAndLockNo( int hashcode, out int bucketNo, out int lockNo, int bucketCount) { // the bucket number is the hashcode (without the initial sign bit) // modulo the number of buckets bucketNo = (hashcode & 0x7fffffff) % bucketCount; // and the lock number is the bucket number modulo the number of locks lockNo = bucketNo % m_locks.Length; } However, this does require some changes to how the buckets are implemented. The 'implicit' linked list within a single backing array used by the non-concurrent Dictionary adds a dependency between separate buckets, as every bucket uses the same backing array. 
Instead, ConcurrentDictionary uses a strict linked list on each bucket. This ensures that each bucket is entirely separate from all other buckets; adding or removing an item from a bucket is independent of any changes to other buckets. Modifying the dictionary All the operations on the dictionary follow the same basic pattern: void AlterBucket(TKey key, ...) { int bucketNo, lockNo; 1: GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, m_buckets.Length); 2: lock (m_locks[lockNo]) { 3: Node headNode = m_buckets[bucketNo]; 4: Mutate the node linked list as appropriate } } For example, when adding another entry to the dictionary, you would iterate through the linked list to check whether the key exists already, and add the new entry as the head node. When removing items, you would find the entry to remove (if it exists), and remove the node from the linked list. Adding, updating, and removing items all follow this pattern. Performance issues There is a problem we have to address at this point. If the number of buckets in the dictionary is fixed in the constructor, then the performance will degrade from O(1) to O(n) when a large number of items are added to the dictionary. As more and more items get added to the linked lists in each bucket, the lookup operations will spend most of their time traversing a linear linked list. To fix this, the buckets array has to be resized once the number of items in each bucket has gone over a certain limit. (In ConcurrentDictionary this limit is when the size of the largest bucket is greater than the number of buckets for each lock. This check is done at the end of the TryAddInternal method.) Resizing the bucket array and re-hashing everything affects every bucket in the collection. Therefore, this operation needs to take out every lock in the collection. Taking out multiple locks at once inevitably summons the spectre of the deadlock; two threads each holding a lock, each trying to acquire the other's lock. How can we eliminate this? Simple - ensure that threads never try to 'swap' locks in this fashion. When taking out multiple locks, always take them out in the same order, and always take out all the locks you need before starting to release them. In ConcurrentDictionary, this is controlled by the AcquireLocks, AcquireAllLocks and ReleaseLocks methods. Locks are always taken out and released in the order they are in the m_locks array, and locks are all released right at the end of the method in a finally block. At this point, it's worth pointing out that the locks array is never re-assigned, even when the buckets array is increased in size. The number of locks is fixed in the constructor by the concurrencyLevel parameter. This simplifies programming the locks; you don't have to check if the locks array has changed or been re-assigned before taking out a lock object. And you can be sure that when a thread takes out a lock, another thread isn't going to re-assign the lock array. This would create a new series of lock objects, thus allowing another thread to ignore the existing locks (and any threads controlling them), breaking thread-safety. Consequences of growing the array Just because we're using locks doesn't mean that race conditions aren't a problem. We can see this by looking at the GrowTable method. 
The operation of this method can be boiled down to: private void GrowTable(Node[] buckets) { try { 1: Acquire first lock in the locks array // this causes any other thread trying to take out // all the locks to block because the first lock in the array // is always the one taken out first // check if another thread has already resized the buckets array // while we were waiting to acquire the first lock 2: if (buckets != m_buckets) return; 3: Calculate the new size of the backing array 4: Node[] array = new array[size]; 5: Acquire all the remaining locks 6: Re-hash the contents of the existing buckets into array 7: m_buckets = array; } finally { 8: Release all locks } } As you can see, there's already a check for a race condition at step 2, for the case when the GrowTable method is called twice in quick succession on two separate threads. One will successfully resize the buckets array (blocking the second in the meantime); when the second thread is unblocked, it'll see that the array has already been resized and exit without doing anything. There is another case we need to consider; looking back at the AlterBucket method above, consider the following situation: Thread 1 calls AlterBucket; step 1 is executed to get the bucket and lock numbers. Thread 2 calls GrowTable and executes steps 1-5; thread 1 is blocked when it tries to take out the lock in step 2. Thread 2 re-hashes everything, re-assigns the buckets array, and releases all the locks (steps 6-8). Thread 1 is unblocked and continues executing, but the calculated bucket and lock numbers are no longer valid. Between calculating the correct bucket and lock number and taking out the lock, another thread has changed where everything is. Not exactly thread-safe. Well, a similar problem was solved in ConcurrentStack and ConcurrentQueue by storing a local copy of the state, doing the necessary calculations, then checking if that state is still valid. We can use a similar idea here: void AlterBucket(TKey key, ...) { while (true) { Node[] buckets = m_buckets; int bucketNo, lockNo; GetBucketAndLockNo( key.GetHashCode(), out bucketNo, out lockNo, buckets.Length); lock (m_locks[lockNo]) { // if the state has changed, go back to the start if (buckets != m_buckets) continue; Node headNode = m_buckets[bucketNo]; Mutate the node linked list as appropriate } break; } } TryGetValue and GetEnumerator And so, finally, we get onto TryGetValue and GetEnumerator. I've left these to the end because, well, they don't actually use any locks. How can this be? Whenever you change a bucket, you need to take out the corresponding lock, yes? Indeed you do. However, it is important to note that TryGetValue and GetEnumerator don't actually change anything. Just as immutable objects are, by definition, thread-safe, read-only operations don't need to take out a lock because they don't change anything. All lockless methods can happily iterate through the buckets and linked lists without worrying about locking anything. However, this does put restrictions on how the other methods operate. Because there could be another thread in the middle of reading the dictionary at any time (even if a lock is taken out), the dictionary has to be in a valid state at all times. Every change to state has to be made visible to other threads in a single atomic operation (all relevant variables are marked volatile to help with this). 
This restriction ensures that whatever the reading threads are doing, they never read the dictionary in an invalid state (e.g. items that should be in the collection being temporarily removed from the linked list, or a node being read after its key and value have been removed but before the node itself has been removed from the linked list). Fortunately, all the operations needed to change the dictionary can be done in that way. Bucket resizes are made visible when the new array is assigned back to the m_buckets variable. Any additions or modifications to a node are done by creating a new node, then splicing it into the existing list using a single variable assignment. Node removals are simply done by re-assigning the node's m_next pointer. Because the dictionary can be changed by another thread during execution of the lockless methods, the GetEnumerator method is liable to return dirty reads - changes made to the dictionary after GetEnumerator was called, but before the enumeration got to that point in the dictionary. It's worth listing at this point which methods are lockless, and which take out all the locks in the dictionary to ensure they get a consistent view of the dictionary: Lockless: TryGetValue GetEnumerator The indexer getter ContainsKey Takes out every lock (lockfull?): Count IsEmpty Keys Values CopyTo ToArray Concurrent principles That covers the overall implementation of ConcurrentDictionary. I haven't even begun to scratch the surface of this sophisticated collection. That I leave to you. However, we've looked at enough to be able to extract some useful principles for concurrent programming: Partitioning When using locks, the work is partitioned into independent chunks, each with its own lock. Each partition can then be modified concurrently with other partitions. Ordered lock-taking When a method does need to control the entire collection, locks are taken and released in a fixed order to prevent deadlocks. Lockless reads Read operations that don't care about dirty reads don't take out any lock; the rest of the collection is implemented so that any reading thread always has a consistent view of the collection. That leads us to the final collection in this little series - ConcurrentBag. Lacking a non-concurrent analogy, it is quite different to any other collection in the class libraries. Prepare your thinking hats!
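    To make the partitioning principle concrete, here is a toy illustration (not ConcurrentDictionary's actual code): a striped-lock counter that maps each key to one of a fixed set of locks, mirroring the GetBucketAndLockNo idea above:

    using System.Collections.Generic;

    class StripedCounter<TKey>
    {
        private readonly object[] _locks;
        private readonly Dictionary<TKey, int>[] _buckets;

        public StripedCounter(int concurrencyLevel)
        {
            _locks = new object[concurrencyLevel];
            _buckets = new Dictionary<TKey, int>[concurrencyLevel];
            for (int i = 0; i < concurrencyLevel; i++)
            {
                _locks[i] = new object();
                _buckets[i] = new Dictionary<TKey, int>();
            }
        }

        public void Increment(TKey key)
        {
            // same idea as GetBucketAndLockNo: strip the sign bit, then modulo
            int stripe = (key.GetHashCode() & 0x7fffffff) % _locks.Length;
            lock (_locks[stripe]) // threads on different stripes never contend
            {
                int current;
                _buckets[stripe].TryGetValue(key, out current);
                _buckets[stripe][key] = current + 1;
            }
        }
    }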

    Read the article

  • C# .Net 3.5 Asynchronous Socket Server Performance Problem

    - by iBrAaAa
    I'm developing an Asynchronous Game Server using the .NET Socket Asynchronous Model (BeginAccept/EndAccept...etc.) The problem I'm facing is this: When I have only one client connected, the server response time is very fast, but once a second client connects, the server response time increases greatly. I've measured the time from when a client sends a message to the server until it gets the reply, in both cases. I found that the average time in the case of one client is about 17ms, and in the case of 2 clients about 280ms!!! What I really see is that: When 2 clients are connected and only one of them is moving (i.e. requesting service from the server), it is effectively the same as the case when only one client is connected (i.e. fast response). However, when the 2 clients move at the same time (i.e. request service from the server at the same time), their motion becomes very slow (as if the server replies to each of them in order, i.e. not simultaneously). Basically, what I am doing is this: When a client requests permission for motion from the server and the server grants the request, the server then broadcasts the new position of the client to all the players. So if two clients are moving at the same time, the server is eventually trying to broadcast to both clients the new position of each of them at the same time. EX: Client1 asks to go to position (2,2) Client2 asks to go to position (5,5) Server sends to each of Client1 & Client2 the same two messages: message1: "Client1 at (2,2)" message2: "Client2 at (5,5)" I believe that the problem comes from the fact that the Socket class is thread-safe, according to the MSDN documentation http://msdn.microsoft.com/en-us/library/system.net.sockets.socket.aspx. (NOT SURE THAT IT IS THE PROBLEM) Below is the code for the server: /// /// This class is responsible for handling packet receiving and sending /// public class NetworkManager { /// /// An integer to hold the server port number to be used for the connections. Its default value is 5000. /// private readonly int port = 5000; /// /// hashtable contain all the clients connected to the server. /// key: player Id /// value: socket /// private readonly Hashtable connectedClients = new Hashtable(); /// /// An event to hold the thread to wait for a new client /// private readonly ManualResetEvent resetEvent = new ManualResetEvent(false); /// /// keeps track of the number of the connected clients /// private int clientCount; /// /// The socket of the server at which the clients connect /// private readonly Socket mainSocket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); /// /// The socket exception that informs that a client is disconnected /// private const int ClientDisconnectedErrorCode = 10054; /// /// The only instance of this class. /// private static readonly NetworkManager networkManagerInstance = new NetworkManager(); /// /// A delegate for the new client connected event. /// /// the sender object /// the event args public delegate void NewClientConnected(Object sender, SystemEventArgs e); /// /// A delegate for the position update message reception. 
/// /// the sender object /// the event args public delegate void PositionUpdateMessageRecieved(Object sender, PositionUpdateEventArgs e); /// /// The event which fires when a client sends a position message /// public PositionUpdateMessageRecieved PositionUpdateMessageEvent { get; set; } /// /// keeps track of the number of the connected clients /// public int ClientCount { get { return clientCount; } } /// /// A getter for this class instance. /// /// only instance. public static NetworkManager NetworkManagerInstance { get { return networkManagerInstance; } } private NetworkManager() {} /// Starts the game server and holds this thread alive /// public void StartServer() { //Bind the mainSocket to the server IP address and port mainSocket.Bind(new IPEndPoint(IPAddress.Any, port)); //The server starts to listen on the binded socket with max connection queue //1024 mainSocket.Listen(1024); //Start accepting clients asynchronously mainSocket.BeginAccept(OnClientConnected, null); //Wait until there is a client wants to connect resetEvent.WaitOne(); } /// /// Receives connections of new clients and fire the NewClientConnected event /// private void OnClientConnected(IAsyncResult asyncResult) { Interlocked.Increment(ref clientCount); ClientInfo newClient = new ClientInfo { WorkerSocket = mainSocket.EndAccept(asyncResult), PlayerId = clientCount }; //Add the new client to the hashtable and increment the number of clients connectedClients.Add(newClient.PlayerId, newClient); //fire the new client event informing that a new client is connected to the server if (NewClientEvent != null) { NewClientEvent(this, System.EventArgs.Empty); } newClient.WorkerSocket.BeginReceive(newClient.Buffer, 0, BasePacket.GetMaxPacketSize(), SocketFlags.None, new AsyncCallback(WaitForData), newClient); //Start accepting clients asynchronously again mainSocket.BeginAccept(OnClientConnected, null); } /// Waits for the upcoming messages from different clients and fires the proper event according to the packet type. /// /// private void WaitForData(IAsyncResult asyncResult) { ClientInfo sendingClient = null; try { //Take the client information from the asynchronous result resulting from the BeginReceive sendingClient = asyncResult.AsyncState as ClientInfo; // If client is disconnected, then throw a socket exception // with the correct error code. if (!IsConnected(sendingClient.WorkerSocket)) { throw new SocketException(ClientDisconnectedErrorCode); } //End the pending receive request sendingClient.WorkerSocket.EndReceive(asyncResult); //Fire the appropriate event FireMessageTypeEvent(sendingClient.ConvertBytesToPacket() as BasePacket); // Begin receiving data from this client sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(), SocketFlags.None, new AsyncCallback(WaitForData), sendingClient); } catch (SocketException e) { if (e.ErrorCode == ClientDisconnectedErrorCode) { // Close the socket. if (sendingClient.WorkerSocket != null) { sendingClient.WorkerSocket.Close(); sendingClient.WorkerSocket = null; } // Remove it from the hash table. 
connectedClients.Remove(sendingClient.PlayerId); if (ClientDisconnectedEvent != null) { ClientDisconnectedEvent(this, new ClientDisconnectedEventArgs(sendingClient.PlayerId)); } } } catch (Exception e) { // Begin receiving data from this client sendingClient.WorkerSocket.BeginReceive(sendingClient.Buffer, 0, BasePacket.GetMaxPacketSize(), SocketFlags.None, new AsyncCallback(WaitForData), sendingClient); } } /// /// Broadcasts the input message to all the connected clients /// /// public void BroadcastMessage(BasePacket message) { byte[] bytes = message.ConvertToBytes(); foreach (ClientInfo client in connectedClients.Values) { client.WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, client); } } /// /// Sends the input message to the client specified by his ID. /// /// /// The message to be sent. /// The id of the client to receive the message. public void SendToClient(BasePacket message, int id) { byte[] bytes = message.ConvertToBytes(); (connectedClients[id] as ClientInfo).WorkerSocket.BeginSend(bytes, 0, bytes.Length, SocketFlags.None, SendAsync, connectedClients[id]); } private void SendAsync(IAsyncResult asyncResult) { ClientInfo currentClient = (ClientInfo)asyncResult.AsyncState; currentClient.WorkerSocket.EndSend(asyncResult); } /// Fires the event depending on the type of received packet /// /// The received packet. void FireMessageTypeEvent(BasePacket packet) { switch (packet.MessageType) { case MessageType.PositionUpdateMessage: if (PositionUpdateMessageEvent != null) { PositionUpdateMessageEvent(this, new PositionUpdateEventArgs(packet as PositionUpdatePacket)); } break; } } } The events fired are handled in a different class; here is the event handling code for the PositionUpdateMessage (other handlers are irrelevant): private readonly Hashtable onlinePlayers = new Hashtable(); /// /// Constructor that creates a new instance of the GameController class. /// private GameController() { //Start the server server = new Thread(networkManager.StartServer); server.Start(); //Create an event handler for the NewClientEvent of networkManager networkManager.PositionUpdateMessageEvent += OnPositionUpdateMessageReceived; } /// /// this event handler is called when a client asks for movement. /// private void OnPositionUpdateMessageReceived(object sender, PositionUpdateEventArgs e) { Point currentLocation = ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position; Point locationRequested = e.PositionUpdatePacket.Position; ((PlayerData)onlinePlayers[e.PositionUpdatePacket.PlayerId]).Position = locationRequested; // Broadcast the new position networkManager.BroadcastMessage(new PositionUpdatePacket { Position = locationRequested, PlayerId = e.PositionUpdatePacket.PlayerId }); }
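    A hedged guess, not in the original question: a jump from ~17 ms to ~280 ms as soon as traffic interleaves is the classic signature of Nagle's algorithm interacting with TCP delayed ACKs (~200 ms) on small messages, rather than a locking problem. Disabling Nagle on both the server's accepted sockets and the client sockets is a cheap experiment:

    // e.g. in OnClientConnected, right after EndAccept (and likewise on the
    // client side): send small packets immediately instead of coalescing
    // them, avoiding the ~200 ms stall Nagle + delayed ACK can introduce.
    newClient.WorkerSocket.NoDelay = true;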

    Read the article

< Previous Page | 1 2