Search Results

Search found 255115 results on 10205 pages for 'stack exchange'.


  • Experiences with (free) embedded TCP / IP stacks?

    - by Dan
    Does anyone have especially good (or bad) experiences with any of the following embedded TCP/IP stacks: uIP, lwIP, Bentham's TCP/IP Lean implementation, or the TCP/IP stack from this book? My needs are for a solid, easy-to-port stack. Code size isn't terribly important and performance is relatively important, but ease of use and porting is very important. The system will probably use an RTOS (that hasn't been decided), but in my experience most stacks can be used with or without an RTOS. Most likely the platform will be an ARM variant (ARM7 or CM3 in all likelihood). I'm not too concerned about bolting the stack to the Ethernet driver, so that isn't a big priority in the selection. I'm not terribly interested in extracting a stack out of an OS such as Linux or RTEMS, and I'm also not interested in commercial offerings such as InterNiche, Micrium, etc. The stack doesn't need all sorts of bells and whistles; it doesn't need IPv6, and I don't need anything on top of it (web servers, FTP servers, etc.). In fact it's possible that I'll only use UDP, although I can envision a couple of scenarios where TCP would be preferable. Experiences with other stacks I've missed are of course also very much of interest. Thanks for your time and input.

    Read the article

  • How to implement continuations?

    - by Kyle Cronin
    I'm working on a Scheme interpreter written in C. Currently it uses the C runtime stack as its own stack, which is presenting a minor problem with implementing continuations. My current solution is manual copying of the C stack to the heap then copying it back when needed. Aside from not being standard C, this solution is hardly ideal. What is the simplest way to implement continuations for Scheme in C?
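    For reference, the "copy the C stack to the heap" approach the question describes typically looks something like the sketch below. It is deliberately non-portable (it assumes a downward-growing stack and compares pointers across objects, which is implementation-defined at best; that is exactly why it isn't standard C), and the names are illustrative:

        #include <setjmp.h>
        #include <stdlib.h>
        #include <string.h>

        /* A captured continuation: saved registers plus a heap copy of the C
         * stack between the capture point and stack_base. Assumes the stack
         * grows downward; deliberately non-portable, like all stack copying. */
        typedef struct {
            jmp_buf env;    /* registers, saved by setjmp */
            char   *from;   /* lowest address of the saved stack region */
            char   *copy;   /* heap copy of that region */
            size_t  size;
        } Cont;

        static char *stack_base;   /* initialized at the top of main() */

        /* Returns 0 when the continuation is first captured,
         * and 1 each time it is re-entered via cont_invoke(). */
        int cont_capture(Cont *k) {
            char here;
            k->from = &here;
            k->size = (size_t)(stack_base - &here);
            k->copy = malloc(k->size);
            memcpy(k->copy, k->from, k->size);
            return setjmp(k->env);
        }

        void cont_invoke(Cont *k) {
            char pad[64];
            /* Recurse until our own frame lies below the region being
             * restored, so the memcpy cannot clobber the copying frame. */
            if (pad + sizeof pad >= k->from)
                cont_invoke(k);
            memcpy(k->from, k->copy, k->size);
            longjmp(k->env, 1);
        }

        int main(void) {
            char base;
            stack_base = &base;

            Cont k;
            if (cont_capture(&k) == 0) {
                /* first pass: run the interpreter; somewhere deep inside,
                 * cont_invoke(&k) rewinds execution back to this point */
            }
            return 0;
        }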

    Read the article

  • What are shared by multi threads in the same process?

    - by skydoor
    I found that each thread still has its own registers. It also has its own stack, but other threads can read and write that stack's memory. My question is: what is shared by multiple threads in the same process? What I can imagine is: (1) the address space of the process; (2) stack and registers; (3) variables. Can anybody elaborate on this and add more?
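    As a concrete illustration, here is a minimal pthreads sketch (mine, not from the question). Threads in a process share the code, the globals, the heap, and open file descriptors; each thread has only its own registers, program counter, and stack, which is why the two workers below print different addresses for local but the same address for shared_global:

        #include <pthread.h>
        #include <stdio.h>

        int shared_global = 0;            /* one copy, visible to every thread */

        void *worker(void *arg) {
            int local = (int)(long)arg;   /* lives on this thread's own stack */
            shared_global++;              /* shared: a data race without locking */
            printf("local=%d at %p (per-thread), shared_global at %p (shared)\n",
                   local, (void *)&local, (void *)&shared_global);
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, worker, (void *)1L);
            pthread_create(&t2, NULL, worker, (void *)2L);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            return 0;
        }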

    Read the article

  • Linux's thread local storage implementation

    - by anon
    __thread Foo foo; How is "foo" actually resolved? Does the compiler silently replace every instance of "foo" with a function call? Or is "foo" stored somewhere relative to the bottom of the stack, with the compiler recording this as "hey, for each thread, reserve this space near the bottom of the stack, and foo is stored at offset x from the bottom of the stack"?
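    For what it's worth, on x86 Linux neither guess is quite what happens: __thread variables live in a per-thread block reached through the thread pointer register (%fs on x86-64), not on the stack, and in the common initial-exec model access is a single fixed-offset load rather than a function call (a call through __tls_get_addr can appear for dynamically loaded modules). A quick way to see this yourself (sketch):

        /* tls_demo.c -- compile with `gcc -O2 -S tls_demo.c` and read tls_demo.s */
        __thread int foo = 42;     /* one instance of foo per thread */

        int get_foo(void) {
            /* On x86-64 Linux with the initial-exec TLS model this typically
             * compiles to a single %fs-relative load, e.g.:
             *     movl %fs:foo@tpoff, %eax
             * i.e. a fixed offset from the thread pointer, not the stack. */
            return foo;
        }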

    Read the article

  • Thread Local Storage and local method variables

    - by miguel
    In C#, each thread has its own stack space. If this is the case, why is the following code not thread-safe? (It is stated that this code is thread-safe in this post: Locking in C#.)

        class Foo
        {
            private int count = 0;

            public void TrySomething()
            {
                count++;
            }
        }

    As count is an int (a stack variable), surely this value would be isolated to an individual thread, on its own stack, and therefore thread-safe? I am probably missing something here, but I don't understand what is actually in Thread Local Storage, if not stack-based variables for the thread?
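    For context: instance fields like count live with their object on the heap, not on any thread's stack; only locals and parameters are per-thread. A C analogue (a sketch with hypothetical names, not the C# code above) makes the lost-update race visible:

        #include <pthread.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* The struct lives on the heap, so every thread holding a pointer to
         * it sees the same count field. Only the pointer itself is per-thread. */
        struct foo { int count; };

        void *try_something(void *arg) {
            struct foo *f = arg;        /* local pointer: on this thread's stack */
            for (int i = 0; i < 100000; i++)
                f->count++;             /* shared field: read-modify-write race */
            return NULL;
        }

        int main(void) {
            struct foo *f = calloc(1, sizeof *f);
            pthread_t t1, t2;
            pthread_create(&t1, NULL, try_something, f);
            pthread_create(&t2, NULL, try_something, f);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            /* Usually prints less than 200000: increments were lost. */
            printf("count = %d\n", f->count);
            free(f);
            return 0;
        }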

    Read the article

  • While loop in IL - why stloc.0 and ldloc.0?

    - by Michael Stum
    I'm trying to understand how a while loop looks in IL. I have written this C# function:

        static void Brackets()
        {
            while (memory[pointer] > 0)
            {
                // Snipped body of the while loop, as it's not important
            }
        }

    The IL looks like this:

        .method private hidebysig static void Brackets() cil managed
        {
            // Code size 37 (0x25)
            .maxstack 2
            .locals init ([0] bool CS$4$0000)
            IL_0000: nop
            IL_0001: br.s IL_0012
            IL_0003: nop
            // Snipped body of the while loop, as it's not important
            IL_0011: nop
            IL_0012: ldsfld uint8[] BFHelloWorldCSharp.Program::memory
            IL_0017: ldsfld int16 BFHelloWorldCSharp.Program::pointer
            IL_001c: ldelem.u1
            IL_001d: ldc.i4.0
            IL_001e: cgt
            IL_0020: stloc.0
            IL_0021: ldloc.0
            IL_0022: brtrue.s IL_0003
            IL_0024: ret
        } // end of method Program::Brackets

    For the most part this is really simple, except for the part after cgt. What I don't understand is the local [0] and the stloc.0/ldloc.0. As far as I see it, cgt pushes the result to the stack, stloc.0 gets the result from the stack into the local variable, ldloc.0 pushes the result to the stack again, and brtrue.s reads from the stack. What is the purpose of doing this? Couldn't this be shortened to just cgt followed by brtrue.s?

    Read the article

  • JRuby 1.7.0 will not install bundler given plenty of memory

    - by user678615
    I installed JRuby with rvm install jruby-1.7.0 and it ran out of memory when it tried to create the gemsets, so I started by trying to install bundler with the new version, and this is what I get:

        ~> gem install bundler
        Error: Your application used more stack memory than the safety cap of 2048K.
        Specify -J-Xss####k to increase it (#### = cap size in KB).
        Specify -w for full StackOverflowError stack trace

    So I raised the memory, and I still got nothing, even with a huge chunk of memory:

        ~> JRUBY_OPTS=-J-Xss1024m gem install bundler
        Error: Your application used more stack memory than the safety cap of 1024M.
        Specify -J-Xss####k to increase it (#### = cap size in KB).
        Specify -w for full StackOverflowError stack trace

    How the hell can that not be enough? I run applications on less than that.

    Read the article

  • How is thread local storage (__thread) implemented on Linux?

    - by anon
    __thread Foo foo; How is "foo" actually resolved? Does the compiler silently replace every instance of "foo" with a function call? Or is "foo" stored somewhere relative to the bottom of the stack, with the compiler recording this as "hey, for each thread, reserve this space near the bottom of the stack, and foo is stored at offset x from the bottom of the stack"? Insights please.

    Read the article

  • C to Assembly code - what does it mean

    - by Smith
    I'm trying to figure out exactly what is going on with the following assembly code. Can someone go down line by line and explain what is happening? I input what I think is happening (see comments) but need clarification.

        .file "testcalc.c"
        .section .rodata.str1.1,"aMS",@progbits,1
        .LC0:
            .string "x=%d, y=%d, z=%d, result=%d\n"
        .text
        .globl main
        .type main, @function
        main:
            leal 4(%esp), %ecx      // establish stack frame
            andl $-16, %esp         // decrement %esp by 16, align stack
            pushl -4(%ecx)          // push original stack pointer
            pushl %ebp              // save base pointer
            movl %esp, %ebp         // establish stack frame
            pushl %ecx              // save to ecx
            subl $36, %esp          // alloc 36 bytes for local vars
            movl $11, 8(%esp)       // store 11 in z
            movl $6, 4(%esp)        // store 6 in y
            movl $2, (%esp)         // store 2 in x
            call calc               // function call to calc
            movl %eax, 20(%esp)     // %esp + 20 into %eax
            movl $11, 16(%esp)      // WHAT
            movl $6, 12(%esp)       // WHAT
            movl $2, 8(%esp)        // WHAT
            movl $.LC0, 4(%esp)     // WHAT?!?!
            movl $1, (%esp)         // move result into address of %esp
            call __printf_chk       // call printf function
            addl $36, %esp          // WHAT?
            popl %ecx
            popl %ebp
            leal -4(%ecx), %esp
            ret
            .size main, .-main
            .ident "GCC: (Ubuntu 4.3.3-5ubuntu4) 4.3.3"
            .section .note.GNU-stack,"",@progbits

    Original code:

        #include <stdio.h>

        int calc(int x, int y, int z);

        int main()
        {
            int x = 2;
            int y = 6;
            int z = 11;
            int result;
            result = calc(x, y, z);
            printf("x=%d, y=%d, z=%d, result=%d\n", x, y, z, result);
        }

    Read the article

  • Building A True Error Handler

    - by Kevin Pirnie
    I am trying to build an error handler for my desktop application. The code is in the class ZipCM.ErrorManager listed below. What I am finding is that the outputted file is not giving me the correct info for the StackTrace. Here is how I am trying to use it:

        Try
            '... Some stuff here!
        Catch ex As Exception
            Dim objErr As New ZipCM.ErrorManager
            objErr.Except = ex
            objErr.Stack = New System.Diagnostics.StackTrace(True)
            objErr.Location = "Form: SelectSite (btn_SelectSite_Click)"
            objErr.ParseError()
            objErr = Nothing
        End Try

    Here is the class:

        Imports System.IO

        Namespace ZipCM
            Public Class ErrorManager
                Public Except As Exception
                Public Location As String
                Public Stack As System.Diagnostics.StackTrace

                Public Sub ParseError()
                    Dim objFile As New StreamWriter(Common.BasePath & "error_" & FormatDateTime(DateTime.Today, DateFormat.ShortDate).ToString().Replace("\", "").Replace("/", "") & ".log", True)
                    With objFile
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("An Error Occured At: " & DateTime.Now)
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("LOCATION:")
                        .WriteLine(Location)
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("FILENAME:")
                        .WriteLine(Stack.GetFrame(0).GetFileName())
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("LINE NUMBER:")
                        .WriteLine(Stack.GetFrame(0).GetFileLineNumber())
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("SOURCE:")
                        .WriteLine(Except.Source)
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("MESSAGE:")
                        .WriteLine(Except.Message)
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("DATA:")
                        .WriteLine(Except.Data.ToString())
                    End With
                    objFile.Close()
                    objFile = Nothing
                End Sub
            End Class
        End Namespace

    What is happening is that .GetFileLineNumber() is getting the line number from the 'objErr.Stack = New System.Diagnostics.StackTrace(True)' line inside my Try..Catch block; in fact, it's the exact line number that statement is on. Any thoughts on what is going on here, and how I can catch the real line number the error is occurring on?

    Read the article

  • The Current State Of Serving a PHP 5.x App on the Apache, LightTPD & Nginx Web Servers?

    - by Gregory Kornblum
    Being stuck in an MS-stack architecture/development position for the last year and a half has prevented me from staying on top of the recent evolution of open-source web servers as much as I would have liked. However, I am now building an open-source-stack-based application/system architecture, and sadly I do not have the time to give each of the above-mentioned web servers a thorough test of my own before deciding. So I figured I'd get input from the best development community site and, more specifically, the people who make it so. This is a site that is a resource for information regarding a specific domain and target audience, with features to help users not only find the information but also interact with one another in various ways for various reasons. I chose the open source stack for the wealth of resources it has, along with much better offerings than the MS stack (i.e. WordPress vs BlogEngine.NET). I feel Java is more in the middle of these stacks in this regard, although I am not ruling out the possibility of using it in certain areas unrelated to the actual web app itself, such as background processes. I have already come to the conclusion of using PHP (with the CodeIgniter framework and APC), MySQL (InnoDB) and Memcached on CentOS. I am definitely serving static content with Nginx. However, there is no consensus on which of the three servers mentioned is best for dynamic content in regards to performance. It seems LightTPD still has the leak issue, which rules it out if true; Nginx seems not yet mature enough for this aspect; and of course Apache tries to be everything for everybody. I am still going to compile the one chosen with as many performance tweaks as possible, such as static linking and the like. I believe I can get Apache to match the other two in regards to serving dynamic content through this process, while not having it serve anything static. However, during my research it seems the others are still worth considering. So, with all things considered, I would love to hear what everyone here has to say on the matter. Thanks!

    Read the article

  • how to pass a string to an already opened activity?

    - by Hossein Mansouri
    I have 3 activities: A1, A2, A3. A1 calls A2 (A1 goes onto the stack). A2 calls A3 (A2 also goes onto the stack). And A3 calls A1 (A1 should come from the stack, not be a new instance). I don't want to create a new instance of A1; I just want to bring it back from the stack. I want to send some extra String data to A1. The problem is here: when I send a String to A1 using putExtra(), A1 cannot see it! I put getIntent() in onResume() for A1 but it's not working...

    Code in A3:

        Intent in = new Intent(A3.this, A1.class);
        in.putExtra("ACTIVITY", "A3");
        in.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP | Intent.FLAG_ACTIVITY_SINGLE_TOP);
        startActivity(in);

    Code in A1:

        @Override
        protected void onResume() {
            super.onResume();
            if (getIntent().getExtras().getString("ACTIVITY") == "A3") {
                new LoadAllMyOrders().execute();
            } else {
                new LoadAllMyshops().execute();
            }
        }

    Read the article

  • An error saying unidentified function "push", "pop", and "display" occurs, what should I add to fix it?

    - by Alesha Aris
        #include<stdio.h>
        #include<iostream.h>
        #include<conio.h>
        #include<stdlib.h>
        #include<fstream.h>

        #define MAX 5

        int top = -1;
        int stack_arr[MAX];

        main()
        {
            int choice;
            while(1)
            {
                printf("1.Push\n");
                printf("2.Pop\n");
                printf("3.Display\n");
                printf("4.Quit\n");
                printf("Enter your choice : ");
                scanf("%d", &choice);
                switch(choice)
                {
                    case 1: push(); break;
                    case 2: pop(); break;
                    case 3: display(); break;
                    case 4: exit(1);
                    default: printf("Wrong choice\n");
                } /*End of switch*/
            } /*End of while*/
        } /*End of main()*/

        push()
        {
            int pushed_item;
            if(top == (MAX-1))
                printf("Stack Overflow\n");
            else
            {
                printf("Enter the item to be pushed in stack : ");
                scanf("%d", &pushed_item);
                top = top + 1;
                stack_arr[top] = pushed_item;
            }
        } /*End of push()*/

        pop()
        {
            if(top == -1)
                printf("Stack Underflow\n");
            else
            {
                printf("Popped element is : %d\n", stack_arr[top]);
                top = top - 1;
            }
        } /*End of pop()*/

        display()
        {
            int i;
            if(top == -1)
                printf("Stack is empty\n");
            else
            {
                printf("Stack elements :\n");
                for(i = top; i >= 0; i--)
                    printf("%d\n", stack_arr[i]);
            }
        } /*End of display()*/
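    For context (not part of the original question): in C, calling a function the compiler has not yet seen is the usual cause of such errors, and the conventional fix is to declare the functions before main(). A minimal sketch of the declarations that would satisfy the compiler here:

        /* Forward declarations, placed after the #includes and before main().
         * The definitions later in the file implicitly return int (old-style C),
         * so the declarations must agree with that. */
        int push();
        int pop();
        int display();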

    Read the article

  • What happens when a computer program runs?

    - by gaijinco
    I know the general theory but I can't fit in the details. I know that a program resides in the secondary memory of a computer. Once the program begins execution, it is copied into RAM. Then the processor retrieves a few instructions at a time (how many depends on the size of the bus), puts them in registers, and executes them. I also know that a computer program uses two kinds of memory: stack and heap, which are both part of the primary memory of the computer. The stack is used for non-dynamic memory, and the heap for dynamic memory (for example, everything related to the new operator in C++). What I can't understand is how those two things connect. At what point is the stack used for the execution of the instructions? Do instructions go from the RAM, to the stack, to the registers?
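    To make the stack/heap split concrete, here is a minimal C sketch. The stack holds each function's locals and return addresses while it runs, the heap holds explicitly requested dynamic memory, and the instructions themselves never pass through either: they are fetched from RAM straight into the processor, while the stack only holds the data those instructions operate on.

        #include <stdlib.h>

        int global = 1;              /* data segment: exists for the whole run */

        int main(void) {
            int local = 2;           /* stack: lives until main returns */
            int *dynamic = malloc(sizeof *dynamic);  /* heap: lives until free() */
            *dynamic = 3;
            free(dynamic);
            return local + global;
        }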

    Read the article

  • How to find validity of a string of parentheses, curly brackets and square brackets?

    - by Rajendra
    I recently came in contact with this interesting problem. You are given a string containing just the characters '(', ')', '{', '}', '[' and ']' (for example, "[{()}]"), and you need to write a function which will check the validity of such an input string; the function may be like this: bool isValid(char* s); These brackets have to close in the correct order; for example, "()" and "()[]{}" are all valid, but "(]", "([)]" and "{{{{" are not! I came up with the following O(n) time and O(n) space complexity solution, which works fine:

    1. Maintain a stack of characters.
    2. Whenever you find an opening bracket '(', '{' or '[', push it onto the stack.
    3. Whenever you find a closing bracket ')', '}' or ']', check if the top of the stack is the corresponding opening bracket; if yes, pop the stack, else break the loop and return false.
    4. Repeat steps 2 - 3 until the end of the string.

    This works, but can we optimize it for space, maybe to constant extra space? I understand that the time complexity cannot be less than O(n), as we have to look at every character. So my question is: can we solve this problem in O(1) space?
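    For reference, a direct C implementation of the O(n)-time / O(n)-space approach described above might look like this (a sketch matching the suggested signature, with const added):

        #include <stdbool.h>
        #include <stdlib.h>
        #include <string.h>

        bool isValid(const char *s) {
            size_t n = strlen(s);
            char *stack = malloc(n ? n : 1);   /* worst case: every char is an opener */
            size_t top = 0;                    /* number of elements on the stack */
            bool ok = true;

            for (const char *p = s; *p && ok; ++p) {
                switch (*p) {
                case '(': case '[': case '{':
                    stack[top++] = *p;                        /* push opener */
                    break;
                case ')': ok = top > 0 && stack[--top] == '('; break;
                case ']': ok = top > 0 && stack[--top] == '['; break;
                case '}': ok = top > 0 && stack[--top] == '{'; break;
                }
            }
            ok = ok && top == 0;               /* every opener must have been closed */
            free(stack);
            return ok;
        }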

    Read the article

  • How to implement tail calls in a custom VM

    - by DeadMG
    How can I implement tail calls in a custom virtual machine? I know that I need to pop off the original function's local stack, then its arguments, then push on the new arguments. But if I pop off the function's local stack, how am I supposed to push on the new arguments? They've just been popped off the stack.
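    One common trick, sketched below with a hypothetical frame layout rather than any particular VM's API, is to evaluate the new arguments into a scratch buffer first, and only then collapse the current frame and copy them over the old argument slots, so nothing still needed is ever popped:

        #include <string.h>

        /* Hypothetical frame layout for a toy VM: the argument slots sit at
         * the base of the frame, with the locals above them. */
        typedef struct {
            int *slots;       /* base of the frame: nargs args, then nlocals locals */
            int  nargs;
            int  nlocals;
        } Frame;

        /* Reuse the current frame for a tail call instead of pushing a new one. */
        void tail_call(Frame *f, const int *new_args, int n_new_args) {
            /* The interpreter evaluated new_args into a scratch buffer before
             * calling this, so discarding the old locals cannot destroy them. */
            f->nlocals = 0;                           /* drop the old locals */
            memmove(f->slots, new_args, n_new_args * sizeof *new_args);
            f->nargs = n_new_args;                    /* frame now belongs to the callee */
            /* ...the interpreter then jumps to the callee's entry point
             * without growing the VM stack. */
        }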

    Read the article

  • Objective-C memory management issue

    - by Toby Wilson
    I've created a graphing application that calls a web service. The user can zoom and move around the graph, and the program occasionally decides to call the web service for more data accordingly. This is achieved by the following process: the graph has a render loop which constantly renders the graph, and some decision logic which adds web service call information to a stack. A separate thread takes the most recent web service call information from the stack and uses it to make the web service call. The other objects on the stack get binned. The idea of this is to reduce the number of web service calls to only those appropriate, and only one at a time. Right, with the long story out of the way (for which I apologise), here is my memory management problem: the graph has persistent (and suitably locked) NSDate* objects for the currently displayed start and end times of the graph. These are passed into the initialisers for my web service request objects. The web service call objects then retain the dates. After the web service calls have been made (or binned if they were out of date), they release the NSDate*. The graph itself releases and reallocates new NSDates* on the 'touches ended' event. If there is only one web service call object on the stack when removeAllObjects is called, EXC_BAD_ACCESS occurs in the web service call object's deallocation method when it attempts to release the date objects (even though they appear to exist and are in scope in the debugger). If, however, I comment out the release messages from the destructor, no memory leak occurs when one object on the stack is released, but memory leaks occur if there is more than one object on the stack. I have absolutely no idea what is going wrong. It doesn't make a difference what storage semantics I use for the web service call objects' dates, as they are assigned in the initialiser and then only read (so for correctness' sake are set to readonly). It also doesn't seem to make a difference if I retain or copy the dates in the initialiser (though anything else obviously falls out of scope or is unwantedly released elsewhere and causes a crash). I'm sorry this explanation is long-winded; I hope it's sufficiently clear, but I'm not gambling on that either, I'm afraid. Major big thanks to anyone that can help, or even suggest anything I may have missed.

    Read the article

  • How does Lucene indexing work?

    - by user312140
    Hello. I have read some documents about Lucene, including the document at this link (http://lucene.sourceforge.net/talks/pisa). I don't really understand how Lucene indexes documents, or which algorithm it uses for indexing. The link above says Lucene uses this algorithm for indexing:

        incremental algorithm:
            maintain a stack of segment indices
            create index for each incoming document
            push new indexes onto the stack
            let b=10 be the merge factor; M=8

            for (size = 1; size < M; size *= b) {
                if (there are b indexes with size docs on top of the stack) {
                    pop them off the stack;
                    merge them into a single index;
                    push the merged index onto the stack;
                } else {
                    break;
                }
            }

    How does this algorithm give us optimized indexing? Does Lucene use a B-tree or any other algorithm like that for indexing, or does it have a particular algorithm of its own? Thank you for reading my post.
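    To see the stack-of-segments idea in action, here is a toy C sketch of that merge policy (illustrative only: it models segment sizes as plain numbers, ignores the M bound, and has nothing to do with Lucene's actual data structures). Each new document becomes a 1-doc segment, and whenever b segments of the same size pile up on top of the stack they are merged into one segment b times larger, so the total number of segments stays logarithmic in the number of documents:

        #include <stdio.h>

        #define B   10    /* merge factor b */
        #define CAP 256   /* capacity of the toy segment stack */

        static long seg[CAP];  /* sizes (in docs) of the segments, bottom to top */
        static int  top = 0;   /* number of segments on the stack */

        /* Index one incoming document: push a 1-doc segment, then merge
         * whenever B segments of the same size sit on top of the stack. */
        static void add_document(void) {
            seg[top++] = 1;
            for (long size = 1;; size *= B) {
                int run = 0;
                while (run < top && seg[top - 1 - run] == size)
                    run++;
                if (run < B)
                    break;                 /* fewer than B equal segments: stop */
                top -= B;                  /* pop the B segments of `size` docs */
                seg[top++] = size * B;     /* push the single merged segment */
            }
        }

        int main(void) {
            for (int i = 0; i < 12345; i++)
                add_document();
            for (int i = 0; i < top; i++)     /* sizes mirror 12345 in base 10 */
                printf("segment of %ld docs\n", seg[i]);
            return 0;
        }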

    Read the article

  • help implementing algorithm

    - by davit-datuashvili
    http://en.wikipedia.org/wiki/All_nearest_smaller_values is the site of the problem, and here is my code, but I have some trouble implementing it:

        import java.util.*;

        public class stack {
            public static void main(String[] args) {
                int x[] = new int[]{0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15};
                Stack<Integer> st = new Stack<Integer>();
                for (int a : x) {
                    while (!st.empty() && st.pop() >= a) {
                        System.out.println(st.pop());
                        if (st.empty()) {
                            break;
                        } else {
                            st.push(a);
                        }
                    }
                }
            }
        }

    And here is the pseudocode from the site:

        S = new empty stack data structure
        for x in the input sequence:
            while S is nonempty and the top element of S is greater than or equal to x:
                pop S
            if S is empty:
                x has no preceding smaller value
            else:
                the nearest smaller value to x is the top element of S
            push x onto S
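    For comparison, here is a direct C translation of that pseudocode (note that it inspects the top without removing it, pops only inside the while loop, and pushes every element exactly once; the Java attempt above instead pops twice per test):

        #include <stdio.h>

        /* All Nearest Smaller Values: a direct translation of the pseudocode. */
        int main(void) {
            int x[] = {0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15};
            int n = sizeof x / sizeof x[0];
            int stack[16];   /* large enough: at most one push per input element */
            int top = 0;     /* number of elements currently on the stack */

            for (int i = 0; i < n; i++) {
                while (top > 0 && stack[top - 1] >= x[i])
                    top--;                                /* pop S */
                if (top == 0)
                    printf("%2d: no preceding smaller value\n", x[i]);
                else
                    printf("%2d: nearest smaller value is %d\n", x[i], stack[top - 1]);
                stack[top++] = x[i];                      /* push x onto S */
            }
            return 0;
        }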

    Read the article

  • add array element to row returned from sql query

    - by bert
    I want to add an additional value into an array before passing it to the json_encode function, but I can't get the syntax right.

        $result = db_query($query);
        // $row is a database query result resource
        while ($row = db_fetch_object($result)) {
            $stack[] = $row;
            // I am trying to 'inject' an array element here
            $stack[]['x'] = "test";
        }
        echo json_encode($stack);

    Read the article

  • Problems With SWT on Mac

    - by Jav
    I've got a Java project that uses an SWT UI and I'm having trouble deploying it on any Mac OS X computers. The program itself works perfectly on Windows when it is either run from within Eclipse or from a jar file. On Mac, the program also works fine in Eclipse, but when I try to run it from a jar file, I get the following error:

        2010-04-30 13:33:04.564 java[17825:41b] *** _NSAutoreleaseNoPool(): Object 0x10b9b0 of class NSCFString autoreleased with no pool in place - just leaking
        Stack: (0x944acf4f 0x943b9432 0x678fb79 0x35a19b1 0x359ba7f)
        2010-04-30 13:33:04.566 java[17825:41b] *** _NSAutoreleaseNoPool(): Object 0x115ef0 of class NSCFNumber autoreleased with no pool in place - just leaking
        Stack: (0x944acf4f 0x943b9432 0x678a0b0 0x35a19b1 0x359ba7f)
        2010-04-30 13:33:04.567 java[17825:41b] *** _NSAutoreleaseNoPool(): Object 0x121000 of class NSCFString autoreleased with no pool in place - just leaking
        Stack: (0x944acf4f 0x943b9432 0x678fb79 0x35a19b1)
        2010-04-30 13:33:04.581 java[17825:41b] *** _NSAutoreleaseNoPool(): Object 0x123720 of class NSPathStore2 autoreleased with no pool in place - just leaking
        Stack: (0x944acf4f 0x943ba637 0x943c238f 0x943c1e8e 0x943c694b 0x678992e 0x35a19b1)
        2010-04-30 13:33:04.582 java[17825:41b] *** _NSAutoreleaseNoPool(): Object 0x12d660 of class NSPathStore2 autoreleased with no pool in place - just leaking
        Stack: (0x944acf4f 0x943ba637 0x943b9739 0x943c3eb2 0x943c6b22 0x678992e 0x35a19b1)
        ...

    The actual error output is much larger and continues until the program crashes. I know that I am using the correct swt.jar file and I have tried running the program with the -XstartOnFirstThread VM argument, but still have not had any luck. Does anybody have any ideas or any suggestions for where I could start looking for a solution? Thanks.

    Read the article

  • C#/.NET Little Wonders: Interlocked CompareExchange()

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

    Two posts ago, I discussed the Interlocked Add(), Increment(), and Decrement() methods (here) for adding and subtracting values in a thread-safe, lightweight manner. Then, last post I talked about the Interlocked Read() and Exchange() methods (here) for safely and efficiently reading and setting 32 or 64 bit values (or references). This week, we'll round out the discussion by talking about the Interlocked CompareExchange() method and how it can be put to use to exchange a value if the current value is what you expected it to be.

    Dirty reads can lead to bad results

    Many of the uses of Interlocked that we've explored so far have centered around either reading, setting, or adding values. But what happens if you want to do something more complex, such as setting a value based on the previous value in some manner?

    Perhaps you were creating an application that reads a current balance, applies a deposit, and then saves the new modified balance, where of course you'd want that to happen atomically. If you read the balance, then go to save the new balance, and between that time the previous balance has already changed, you'll have an issue! Think about it: if we read the current balance as $400, and we are applying a new deposit of $50.75, but meanwhile someone else deposits $200 and sets the total to $600, and then we write a total of $450.75, we've lost $200!

    Now, certainly for int and long values we can use Interlocked.Add() to handle these cases, and it works well for that. But what if we want to work with doubles, for example? Let's say we wanted to add the numbers from 0 to 99,999 in parallel. We could do this by spawning several parallel tasks to continuously add to a total:

        double total = 0;

        Parallel.For(0, 100000, next =>
        {
            total += next;
        });

    Were this run on one thread using a standard for loop, we'd expect an answer of 4,999,950,000 (the sum of all numbers from 0 to 99,999). But when we run this in parallel as written above, we'll likely get something far off. The result of one of my runs, for example, was 1,281,880,740. That is way off! If this were banking software we'd be in big trouble with our clients. So what happened? The += operator is not atomic: it will read in the current value, add the result, then store it back into the total. At any point in all of this, another thread could read a "dirty" current total and accidentally "skip" our add.

    So, to clean this up, we could use a lock to guarantee concurrency:

        double total = 0.0;
        object locker = new object();

        Parallel.For(0, count, next =>
        {
            lock (locker)
            {
                total += next;
            }
        });

    Which will give us the correct result of 4,999,950,000. One thing to note is that locking can be heavy, especially if the operation being locked over is trivial, or the life of the lock is a high percentage of the work being performed concurrently. In the case above, the lock consumes pretty much all of the time of each parallel task, and the task being locked on is relatively trivial.

    Now, let me put in a disclaimer here before we go further: for most uses, lock is more than sufficient for your needs, and is often the simplest solution! So, if lock is sufficient for most needs, why would we ever consider another solution? The problem with locking is that it can suspend execution of your thread while it waits for the signal that the lock is free. Moreover, if the operation being locked over is trivial, the lock can add a very high level of overhead. This is why things like Interlocked.Increment() perform so well: instead of locking just to perform an increment, we perform the increment with an atomic, lockless method.

    As with all things performance related, it's important to profile before jumping to the conclusion that you should optimize everything in your path. If your profiling shows that locking is causing a high level of waiting in your application, then it's time to consider lighter alternatives such as Interlocked.

    CompareExchange() - exchange the existing value if it equals some value

    So let's look at how we could use CompareExchange() to solve our problem above. The general syntax of CompareExchange() is:

        T CompareExchange<T>(ref T location, T newValue, T expectedValue)

    If the value in location == expectedValue, then newValue is exchanged. Either way, the value in location (before the exchange) is returned.

    Actually, CompareExchange() is not one method, but a family of overloaded methods that can take int, long, float, double, pointers, or references. It cannot take other value types (that is, you can't CompareExchange() two DateTime instances directly). Also keep in mind that the version that takes any reference type (the generic overload) only checks for reference equality; it does not call any overridden Equals().

    So how does this help us? Well, we can grab the current total, and exchange the new value if total hasn't changed. This would look like this:

        // grab the snapshot
        double current = total;

        // if the total hasn't changed since I grabbed the snapshot, then
        // set it to the new total
        Interlocked.CompareExchange(ref total, current + next, current);

    So what the code above says is: if the amount in total (1st arg) is the same as the amount in current (3rd arg), then set total to current + next (2nd arg). This check and exchange pair is atomic (and thus thread-safe).

    This works if total is the same as our snapshot in current; the problem is, what happens if they aren't the same? Well, we know that in either case we will get the previous value of total (before the exchange) back as a result. Thus, we can test this against our snapshot to see if it was the value we expected:

        // if the value returned is != current, then our snapshot must be out of date
        // which means we didn't (and shouldn't) apply current + next
        if (Interlocked.CompareExchange(ref total, current + next, current) != current)
        {
            // ooops, total was not equal to our snapshot in current, what should we do???
        }

    So what do we do if we fail? That's up to you and the problem you are trying to solve. It's possible you would decide to abort the whole transaction, or perhaps do a lightweight spin and try again. Let's try that:

        double current = total;

        // make first attempt...
        if (Interlocked.CompareExchange(ref total, current + i, current) != current)
        {
            // if we fail, go into a spin wait, spin, and try again until succeed
            var spinner = new SpinWait();

            do
            {
                spinner.SpinOnce();
                current = total;
            }
            while (Interlocked.CompareExchange(ref total, current + i, current) != current);
        }

    This is not trivial code, but it illustrates a possible use of CompareExchange(). What we are doing is first checking to see if we succeed on the first try, and if so, great! If not, we create a SpinWait and then repeat the process of SpinOnce(), grab a fresh snapshot, and retry until CompareExchange() succeeds. You may wonder why not use a simple do-while here; the reason is that it's more efficient to only create the SpinWait once we absolutely know we need one.

    Though not as simple (or maintainable) as a simple lock, this will perform better in many situations. Comparing an unlocked (and wrong) version, a version using lock, and the Interlocked version of the code, we get the following average times for multiple iterations of adding the sum of 100,000 numbers:

        Unlocked money average time: 2.1 ms
        Locked money average time: 5.1 ms
        Interlocked money average time: 3 ms

    So Interlocked.CompareExchange(), while heavier to code, came in lighter than the lock, offering a good compromise of safety and performance when we need to reduce contention.

    CompareExchange() - it's not just for adding stuff...

    So that was one simple use of CompareExchange() in the context of adding double values -- which meant we couldn't have used the simpler Interlocked.Add() -- but it has other uses as well. If you think about it, this really works any time you want to create something new based on a current value without using a full lock. For example, you could use it to create a simple lazy instantiation implementation. In this case, we want to set the lazy instance only if the previous value was null:

        public static class Lazy<T> where T : class, new()
        {
            private static T _instance;

            public static T Instance
            {
                get
                {
                    // if current is null, we need to create new instance
                    if (_instance == null)
                    {
                        // attempt create, it will only set if previous was null
                        Interlocked.CompareExchange(ref _instance, new T(), (T)null);
                    }

                    return _instance;
                }
            }
        }

    So, if _instance == null, this will create a new T() and attempt to exchange it with _instance. If _instance is not null, then it does nothing and we discard the new T() we created.

    This is a way to create lazy instances of a type where we are more concerned about locking overhead than creating an accidental duplicate which is not used. In fact, the BCL implementation of Lazy<T> offers a similar thread-safety choice for Publication thread safety, where it will not guarantee only one instance was created, but it will guarantee that all readers get the same instance.

    Another possible use would be in concurrent collections. Let's say, for example, that you are creating your own brand new super stack that uses a linked list paradigm and is "lock free". We could use Interlocked.CompareExchange() to do a lockless Push(), which could be more efficient in multi-threaded applications where several threads are pushing and popping on the stack concurrently. Yes, there are already concurrent collections in the BCL (in .NET 4.0 as part of the TPL), but it's a fun exercise! So let's assume we have a node like this:

        public sealed class Node<T>
        {
            // the data for this node
            public T Data { get; set; }

            // the link to the next instance
            internal Node<T> Next { get; set; }
        }

    Then, perhaps, our stack's Push() operation might look something like:

        public sealed class SuperStack<T>
        {
            private volatile Node<T> _head;

            public void Push(T value)
            {
                var newNode = new Node<T> { Data = value, Next = _head };

                if (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next)
                {
                    var spinner = new SpinWait();

                    do
                    {
                        spinner.SpinOnce();
                        newNode.Next = _head;
                    }
                    while (Interlocked.CompareExchange(ref _head, newNode, newNode.Next) != newNode.Next);
                }
            }

            // ...
        }

    Notice a similar paradigm here as with adding our doubles before. What we are doing is creating the new Node with the data to push, and with a Next value being the original node referenced by _head. This will create our stack behavior (LIFO - Last In, First Out). Now we have to set _head to refer to the newNode, but we must first make sure it hasn't changed!

    So we check to see if _head has the same value we saved in our snapshot as newNode.Next, and if so, we set _head to newNode. This is all done atomically, and the result is _head's original value. As long as the original value was what we assumed it was with newNode.Next, then we are good and we set it without a lock! If not, we SpinWait and try again.

    Once again, this is much lighter than locking in highly parallelized code with lots of contention. If I compare the method above with a similar class using lock, I get the following results for pushing 100,000 items:

        Locked SuperStack average time: 6 ms
        Interlocked SuperStack average time: 4.5 ms

    So, once again, we can get more efficient than a lock, though there is the cost of added code complexity. Fortunately for you, most of the concurrent collections you'd ever need are already created for you in the System.Collections.Concurrent namespace (here) -- for more information, see my Little Wonders - The Concurrent Collections Part 1 (here), Part 2 (here), and Part 3 (here).

    Summary

    We've seen before how the Interlocked class can be used to safely and efficiently add, increment, decrement, read, and exchange values in a multi-threaded environment. In addition to these, Interlocked CompareExchange() can be used to perform more complex logic without the need of a lock when lock contention is a concern.

    The added efficiency, though, comes at the cost of more complex code. As such, the standard lock is often sufficient for most thread-safety needs. But if profiling indicates you spend a lot of time waiting for locks, or if you just need a lock for something simple such as an increment, decrement, read, exchange, etc., then consider using the Interlocked class's methods to reduce wait.

    Technorati Tags: C#, CSharp, .NET, Little Wonders, Interlocked, CompareExchange, threading, concurrency

    Read the article
