Search Results

Search found 269 results on 11 pages for 'volatile'.

Page 1/11 | 1 2 3 4 5 6 7 8 9 10 11  | Next Page >

  • Why is 'volatile' parasitic in C++?

    - by Steve
    Consider the following code: int main() { int i; volatile int* p = &i; int *v = p; return 0; } This gives an error in g++: $ g++ -o volatile volatile.cpp volatile.cpp: In function ‘int main()’: volatile.cpp:6: error: invalid conversion from ‘volatile int*’ to ‘int*’ My intention was to make p volatile. However, once I've read the value of p, I don't care whether accessing v is volatile. Why is it required that v be declared volatile? This is hypothetical code, of course. In a real situation you could imagine that p points to a memory location that is modified externally, and I want v to point to the location that p pointed to at the time of v = p, even if p is later externally modified. Therefore p is volatile, but v is not. By the way, I am interested in the behaviour both when this is treated as C and as C++; in C this only generates a warning, not an error.

    Read the article

  • Subterranean IL: Volatile

    - by Simon Cooper
    This time, we'll be having a look at the volatile. prefix instruction, and one of the differences between volatile in IL and C#. The volatile. prefix: volatile is a tricky one, as there are varying levels of documentation on it. From what I can see, it has two effects: It prevents caching of the load or store value; rather than reading or writing to a cached version of the memory location (say, the processor register or cache), it forces the value to be loaded or stored at the 'actual' memory location, so it is then immediately visible to other threads. It forces a memory barrier at the prefixed instruction. This ensures instructions don't get re-ordered around the volatile instruction. This is slightly more complicated than it first seems, and only seems to matter on certain architectures. For more details, Joe Duffy has a blog post going into them. For this post, I'll be concentrating on the first aspect of volatile. Caching field accesses: To demonstrate this, I created a simple multithreaded IL program. It boils down to the following code: .class public Holder { .field public static class Holder holder .field public bool stop .method public static specialname void .cctor() { newobj instance void Holder::.ctor() stsfld class Holder Holder::holder ret } } .method private static void Main() { .entrypoint /* Thread t = new Thread(new ThreadStart(DoWork)) */ /* t.Start() */ /* Thread.Sleep(2000) */ /* Console.WriteLine("Stopping thread...") */ ldsfld class Holder Holder::holder ldc.i4.1 stfld bool Holder::stop call instance void [mscorlib]System.Threading.Thread::Join() ret } .method private static void DoWork() { ldsfld class Holder Holder::holder /* while (!Holder.holder.stop) {} */ DoWork: dup ldfld bool Holder::stop brfalse DoWork pop ret } If you compile and run this code, you'll find that the call to Thread.Join() never returns - the DoWork spinlock is reading a cached version of Holder.stop, which is never being updated with the new value set by the Main method. Adding volatile to the ldfld fixes this: dup volatile. ldfld bool Holder::stop brfalse DoWork The volatile ldfld forces the field access to read directly from heap memory, which is then updated by the main thread, rather than using a cached copy. volatile in C#: This highlights one of the differences between IL and C#. In IL, volatile only applies to the prefixed instruction, whereas in C#, volatile is specified on a field to indicate that all accesses to that field should be volatile (interestingly, there's no mention of the 'no caching' aspect of volatile in the C# spec; it only focuses on the memory barrier aspect). Furthermore, this information needs to be stored within the assembly somehow, as such a field might be accessed directly from outside the assembly, but there's no concept of a 'volatile field' in IL! How this information is stored with the field will be the subject of my next post.

    Read the article

  • May volatile be used in user-defined types to help write thread-safe code

    - by David Rodríguez - dribeas
    I know it has been made quite clear in a couple of questions/answers before that volatile is related to the visible state of the C++ memory model and not to multithreading. On the other hand, this article by Alexandrescu uses the volatile keyword not as a runtime feature but rather as a compile-time check, to force the compiler into rejecting code that could be non-thread-safe. In the article the keyword is used more like a required_thread_safety tag than for the actual intended use of volatile. Is this (ab)use of volatile appropriate? What possible gotchas may be hidden in the approach? The first thing that comes to mind is added confusion: volatile is not related to thread safety, but for lack of a better tool I could accept it. Basic simplification of the article: if you declare a variable volatile, only volatile member functions can be called on it, so the compiler will block calls to other methods. Declaring an std::vector instance as volatile will block all uses of the class. By adding a wrapper in the shape of a locking pointer that performs a const_cast to remove the volatile qualifier, any access through the locking pointer is allowed. Stealing from the article: template <typename T> class LockingPtr { public: /* Constructors/destructors */ LockingPtr(volatile T& obj, Mutex& mtx) : pObj_(const_cast<T*>(&obj)), pMtx_(&mtx) { mtx.Lock(); } ~LockingPtr() { pMtx_->Unlock(); } /* Pointer behavior */ T& operator*() { return *pObj_; } T* operator->() { return pObj_; } private: T* pObj_; Mutex* pMtx_; LockingPtr(const LockingPtr&); LockingPtr& operator=(const LockingPtr&); }; class SyncBuf { public: void Thread1() { LockingPtr<BufT> lpBuf(buffer_, mtx_); BufT::iterator i = lpBuf->begin(); for (; i != lpBuf->end(); ++i) { /* ... use *i ... */ } } void Thread2(); private: typedef vector<char> BufT; volatile BufT buffer_; Mutex mtx_; /* controls access to buffer_ */ };

    Read the article

  • Java: volatile guarantees and out-of-order execution

    - by WizardOfOdds
    Note that this question is solely about the volatile keyword and the volatile guarantees: it is not about the synchronized keyword (so please don't answer "you must use synchronized", for I don't have any issue to solve: I simply want to understand the volatile guarantees, or lack thereof, regarding out-of-order execution). Say we have an object containing two volatile String references that are initialized to null by the constructor, and that we have only one way to modify the two Strings: by calling setBoth(...), which can only set them afterwards to non-null references (only the constructor is allowed to set them to null). For example (it's just an example, there's no question yet): public class SO { private volatile String a; private volatile String b; public SO() { a = null; b = null; } public void setBoth( @NotNull final String one, @NotNull final String two ) { a = one; b = two; } public String getA() { return a; } public String getB() { return b; } } In setBoth(...), the line assigning the non-null parameter to "a" appears before the line assigning the non-null parameter to "b". Then if I do this (once again, there's no question, the question is coming next): if ( so.getB() != null ) { System.out.println( so.getA().length() ); } Am I correct in my understanding that due to out-of-order execution I can get a NullPointerException? In other words: there's no guarantee that because I read a non-null "b" I'll read a non-null "a"? Because, due to out-of-order execution on a (multi)processor and the way volatile works, "b" could be assigned before "a"? volatile guarantees that reads subsequent to a write shall always see the last written value, but here there's an out-of-order "issue", right? (Once again, the "issue" is made on purpose to try to understand the semantics of the volatile keyword and the Java Memory Model, not to solve a problem.)
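
    A hedged sketch of a harness one might use to probe this empirically, assuming the SO class from the snippet above is on the classpath (the class name ReorderProbe and the iteration count are illustrative, not from the original post):

        public class ReorderProbe {
            public static void main(String[] args) throws InterruptedException {
                for (int iter = 0; iter < 1_000_000; iter++) {
                    final SO so = new SO();
                    Thread writer = new Thread(() -> so.setBoth("one", "two"));
                    Thread reader = new Thread(() -> {
                        // read b first, then a, mirroring the question's snippet
                        if (so.getB() != null && so.getA() == null) {
                            System.out.println("observed non-null b with null a");
                        }
                    });
                    writer.start();
                    reader.start();
                    writer.join();
                    reader.join();
                }
                System.out.println("done");
            }
        }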

    Read the article

  • "volatile" qualifier and compiler reorderings

    - by Checkers
    A compiler cannot eliminate or reorder reads/writes to volatile-qualified variables. But what about cases where other variables are present, which may or may not be volatile-qualified? Scenario 1: volatile int a; volatile int b; a = 1; b = 2; a = 3; b = 4; Can the compiler reorder the first and second, or the third and fourth, assignments? Scenario 2: volatile int a; int b, c; b = 1; a = 1; c = b; a = 3; Same question: can the compiler reorder the first and second, or the third and fourth, assignments?

    Read the article

  • Volatile fields in C#

    - by Danny Chen
    From the specification, 10.5.3 Volatile fields: The type of a volatile field must be one of the following: A reference-type. The type byte, sbyte, short, ushort, int, uint, char, float, bool, System.IntPtr, or System.UIntPtr. An enum-type having an enum base type of byte, sbyte, short, ushort, int, or uint. First I want to confirm that my understanding is correct: I guess the above types can be volatile because they are stored as a 4-byte unit in memory (for reference types, because of the address), which guarantees that the read/write operation is atomic. A double/long/etc. type can't be volatile because reading/writing it is not atomic, since it takes more than 4 bytes in memory. Is my understanding correct? And second, if the first guess is correct, why can't a user-defined struct with only one int field in it (or something similar that fits in 4 bytes) be volatile? Theoretically it's atomic, right? Or is it not allowed simply because user-defined structs (which could be more than 4 bytes) are not allowed to be volatile by design?

    Read the article

  • Trouble understanding the semantics of volatile in Java

    - by HungryTux
    I've been reading up about the use of volatile variables in Java. I understand that they ensure instant visibility of their latest updates to all the threads running in the system on different cores/processors. However, no atomicity of the operations that caused these updates is ensured. I frequently see the following statement: a write to a volatile field happens-before every read of that same field. This is where I am a little confused. Here's a snippet of code which should help me better explain my query. volatile int x = 0; volatile int y = 0; Thread-0 runs: if (x==1) { return false; } else { y=1; return true; } while Thread-1 runs: if (y==1) { return false; } else { x=1; return true; } Since x and y are both volatile, we have the following happens-before edges: between the write of y in Thread-0 and the read of y in Thread-1, and between the write of x in Thread-1 and the read of x in Thread-0. Does this imply that, at any point in time, only one of the threads can be in its 'else' block (since a write would happen before the read)? It may well be possible that Thread-0 starts, loads x, finds its value to be 0, and right before it is about to write y in the else-block, there's a context switch to Thread-1, which loads y, finds its value to be 0, and thus enters the else-block too. Does volatile guard against such context switches (seems very unlikely)?
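
    A hedged sketch of how one might probe the scenario above empirically: run the two branches concurrently many times and count how often both threads reach their else-block. Class and method names (ElseBlockProbe, thread0, thread1) are illustrative, not from the original post:

        import java.util.concurrent.CyclicBarrier;

        public class ElseBlockProbe {
            volatile int x = 0;
            volatile int y = 0;

            boolean thread0() { if (x == 1) return false; else { y = 1; return true; } }
            boolean thread1() { if (y == 1) return false; else { x = 1; return true; } }

            public static void main(String[] args) throws Exception {
                int bothTrue = 0;
                for (int i = 0; i < 100_000; i++) {
                    ElseBlockProbe p = new ElseBlockProbe();
                    CyclicBarrier barrier = new CyclicBarrier(2);
                    boolean[] results = new boolean[2];
                    Thread t0 = new Thread(() -> { await(barrier); results[0] = p.thread0(); });
                    Thread t1 = new Thread(() -> { await(barrier); results[1] = p.thread1(); });
                    t0.start(); t1.start(); t0.join(); t1.join();
                    if (results[0] && results[1]) bothTrue++;   // both took the else-branch
                }
                System.out.println("both threads took the else-branch " + bothTrue + " times");
            }

            static void await(CyclicBarrier b) {
                try { b.await(); } catch (Exception e) { throw new RuntimeException(e); }
            }
        }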

    Read the article

  • proper use of volatile keyword

    - by luke
    I think I have a pretty good idea about the volatile keyword in Java, but I'm thinking about refactoring some code and I thought it would be a good idea to use it. I have a class that basically works as a DB cache. It holds a bunch of objects that it has read from a database, serves requests for those objects, and occasionally refreshes itself from the database (based on a timeout). Here's the skeleton: public class Cache { private HashMap mappings = ....; private long last_update_time; private void loadMappingsFromDB() { /* .... */ } private void checkLoad() { if (System.currentTimeMillis() - last_update_time > TIMEOUT) loadMappingsFromDB(); } public Data get(ID id) { checkLoad(); /* .. look it up */ } } The concern is that loadMappingsFromDB could be a high-latency operation, and that's not acceptable. So initially I thought I could spin up a thread on cache startup, have it sleep, and then update the cache in the background. But then I would need to synchronize my class (or the map), and I would just be trading an occasional big pause for making every cache access slower. Then I thought, why not use volatile? I could define the map reference as volatile: private volatile HashMap mappings = ....; and then in get (or anywhere else that uses the mappings variable) I would just make a local copy of the reference: public Data get(ID id) { HashMap local = mappings; /* .. look it up using local */ } and then the background thread would just load into a temp map and swap the references in the class: HashMap tmp; /* load tmp from DB */ mappings = tmp; /* swap variables, forcing a write barrier */ Does this approach make sense, and is it actually thread-safe?
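
    A hedged sketch of the reference-swap pattern described above, with the refresh moved to a background task. The names (IdDataCache, loadFromDb, refreshSeconds) are illustrative, and the sketch assumes the published map is never mutated after the swap:

        import java.util.HashMap;
        import java.util.Map;
        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;

        public class IdDataCache<K, V> {
            // volatile so a freshly swapped-in map becomes visible to reader threads
            private volatile Map<K, V> mappings = new HashMap<>();

            public IdDataCache(long refreshSeconds) {
                ScheduledExecutorService ses = Executors.newSingleThreadScheduledExecutor();
                ses.scheduleAtFixedRate(this::refresh, 0, refreshSeconds, TimeUnit.SECONDS);
            }

            public V get(K id) {
                Map<K, V> local = mappings;   // local copy of the reference; the map itself is read-only
                return local.get(id);
            }

            private void refresh() {
                Map<K, V> tmp = loadFromDb();   // build the new snapshot off to the side
                mappings = tmp;                 // publish it with a single volatile write
            }

            private Map<K, V> loadFromDb() {
                return new HashMap<>();         // placeholder for the real database load
            }
        }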

    Read the article

  • .NET multithreading, volatile and memory model

    - by fedor-serdukov
    Assume that we have the following code: class Program { static volatile bool flag1; static volatile bool flag2; static volatile int val; static void Main(string[] args) { for (int i = 0; i < 10000 * 10000; i++) { if (i % 500000 == 0) { Console.WriteLine("{0:#,0}",i); } flag1 = false; flag2 = false; val = 0; Parallel.Invoke(A1, A2); if (val == 0) throw new Exception(string.Format("{0:#,0}: {1}, {2}", i, flag1, flag2)); } } static void A1() { flag2 = true; if (flag1) val = 1; } static void A2() { flag1 = true; if (flag2) val = 2; } } } It fails! The main question is why... I suppose that the CPU reorders the flag1 = true; write and the if (flag2) check, but the variables flag1 and flag2 are marked as volatile fields…

    Read the article

  • Java Concurrency : Volatile vs final in "cascaded" variables?

    - by Tom
    Hello Experts, is final Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); Map<Integer,Map<String,Integer>> statusInner = new ConcurrentHashMap<Integer, Map<String,Integer>>(); status.put(key,statusInner); the same as volatile Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); Map<Integer,Map<String,Integer>> statusInner = new ConcurrentHashMap<Integer, Map<String,Integer>>(); status.put(key,statusInner); in case the inner Map is accessed by different Threads? Or is even something like this required: volatile Map<Integer,Map<String,Integer>> status = new ConcurrentHashMap<Integer, Map<String,Integer>>(); volatile Map<Integer,Map<String,Integer>> statusInner = new ConcurrentHashMap<Integer, Map<String,Integer>>(); status.put(key,statusInner); In case it is NOT a "cascaded" map, final and volatile have in the end the same effect of making sure that all threads always see the correct contents of the Map... But what happens if the Map itself contains a map, as in the example? How do I make sure that the inner Map is correctly "memory barriered"? Thanks! Tom
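
    A hedged sketch of the "cascaded" map with a final outer reference, assuming both levels are ConcurrentHashMap and the inner map is only ever reached through the outer one (the class name NestedStatus and the use of computeIfAbsent, which needs Java 8+, are illustrative):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public class NestedStatus {
            private final Map<Integer, Map<String, Integer>> status = new ConcurrentHashMap<>();

            public void put(int key, String name, int value) {
                // create the inner concurrent map on first use, then write into it
                status.computeIfAbsent(key, k -> new ConcurrentHashMap<>()).put(name, value);
            }

            public Integer get(int key, String name) {
                Map<String, Integer> inner = status.get(key);
                return inner == null ? null : inner.get(name);
            }
        }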

    Read the article

  • Does this variable need to be declared volatile?

    - by titaniumdecoy
    Does the out variable in the MyThread class need to be declared volatile in this code or will the "volatility" of the stdout variable in the ThreadTest class carry over? import java.io.PrintStream; class MyThread implements Runnable { int id; PrintStream out; /* should this be declared volatile? */ MyThread(int id, PrintStream out) { this.id = id; this.out = out; } public void run() { try { Thread.currentThread().sleep((int)(1000 * Math.random())); out.println("Thread " + id); } catch (InterruptedException e) { e.printStackTrace(); } } } public class ThreadTest { static volatile PrintStream stdout = System.out; public static void main(String[] args) { for (int i = 0; i < 10; i++) { new Thread(new MyThread(i, stdout)).start(); } } }

    Read the article

  • Does armcc optimize non-volatile variables with -O0?

    - by Dor
    int* Register = 0x00FF0000; /* address of the micro-seconds timer */ while (*Register != 0); Should I declare *Register as volatile when using the armcc compiler with -O0 optimization? In other words: does -O0 optimization require qualifying that sort of variable as volatile? (It is probably required with -O2 optimization.)

    Read the article

  • volatile keyword seems to be useless?

    - by Finbarr
    import java.util.concurrent.CountDownLatch; import java.util.concurrent.atomic.AtomicInteger; public class Main implements Runnable { private final CountDownLatch cdl1 = new CountDownLatch(NUM_THREADS); private volatile int bar = 0; private AtomicInteger count = new AtomicInteger(0); private static final int NUM_THREADS = 25; public static void main(String[] args) { Main main = new Main(); for(int i = 0; i < NUM_THREADS; i++) new Thread(main).start(); } public void run() { int i = count.incrementAndGet(); cdl1.countDown(); try { cdl1.await(); } catch (InterruptedException e1) { e1.printStackTrace(); } bar = i; if(bar != i) System.out.println("Bar not equal to i"); else System.out.println("Bar equal to i"); } } Each Thread enters the run method and acquires a unique, thread-confined int variable i by getting a value from the AtomicInteger called count. Each Thread then awaits the CountDownLatch called cdl1 (when the last Thread reaches the latch, all Threads are released). When the latch is released, each thread attempts to assign its confined i value to the shared volatile int called bar. I would expect every Thread except one to print out "Bar not equal to i", but every Thread prints "Bar equal to i". Eh, wtf does volatile actually do if not this?
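
    A hedged sketch of a rearranged version of the experiment above: use the latch to separate the write phase from the read phase, so every thread reads bar only after all threads have written it (the class name MainPhased is illustrative; other names mirror the original):

        import java.util.concurrent.CountDownLatch;
        import java.util.concurrent.atomic.AtomicInteger;

        public class MainPhased implements Runnable {
            private static final int NUM_THREADS = 25;
            private final CountDownLatch writePhase = new CountDownLatch(NUM_THREADS);
            private final AtomicInteger count = new AtomicInteger(0);
            private volatile int bar = 0;

            public static void main(String[] args) {
                MainPhased main = new MainPhased();
                for (int i = 0; i < NUM_THREADS; i++) new Thread(main).start();
            }

            public void run() {
                int i = count.incrementAndGet();
                bar = i;                   // write first...
                writePhase.countDown();
                try {
                    writePhase.await();    // ...then wait until every thread has written
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
                // after the latch opens, bar holds whichever write landed last,
                // so at most one thread still sees its own value
                System.out.println(bar == i ? "Bar equal to i" : "Bar not equal to i");
            }
        }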

    Read the article

  • Volatile or synchronized for primitive type?

    - by DKSRathore
    In Java, assignment is atomic if the size of the variable is less than or equal to 32 bits, but not if it is more than 32 bits. Which (volatile/synchronized) would be more efficient to use for a double or long assignment, like volatile double x = y;? synchronized is not applicable to a primitive argument. How do I use synchronized in this case? Of course I don't want to lock my whole class, so that should not be used.
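
    A hedged sketch of two commonly suggested ways to get atomic reads and writes of a 64-bit value without locking the whole enclosing class: a private lock object with synchronized accessors, or java.util.concurrent.atomic.AtomicLong (the class name WideValueHolder is illustrative):

        import java.util.concurrent.atomic.AtomicLong;

        public class WideValueHolder {
            private final Object lock = new Object();   // private lock, so callers can't lock "this"
            private long plainValue;

            private final AtomicLong atomicValue = new AtomicLong();

            public void setPlain(long v)  { synchronized (lock) { plainValue = v; } }
            public long getPlain()        { synchronized (lock) { return plainValue; } }

            public void setAtomic(long v) { atomicValue.set(v); }
            public long getAtomic()       { return atomicValue.get(); }
        }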

    Read the article

  • java - volatile keyword

    - by Tiyoal
    Say I have two threads and an object. One thread assigns the object: public void assign(MyObject o) { myObject = o; } Another thread uses the object: public void use() { myObject.use(); } Does the variable myObject have to be declared as volatile? I am trying to understand when to use volatile and when not, and this is puzzling me. Is it possible that the second thread keeps a reference to an old object in its local memory cache? If not, why not? Thanks a lot.
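
    A hedged sketch of the holder described above with the field marked volatile, assuming the MyObject type from the question (the class name Holder and the null check are illustrative):

        public class Holder {
            private volatile MyObject myObject;

            public void assign(MyObject o) {
                myObject = o;                // volatile write publishes the new object
            }

            public void use() {
                MyObject local = myObject;   // volatile read; take a local copy before using it
                if (local != null) {
                    local.use();
                }
            }
        }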

    Read the article

  • Are volatile data members trivially copyable?

    - by Lightness Races in Orbit
    Whilst writing this answer I realised that I'm not as confident about my conclusions as I usually would ensure before hitting Post Your Answer. I can find a couple of reasonably convincing citations for the argument that the trivial-copyability of volatile data members is either implementation defined or flat-out false: https://groups.google.com/forum/?fromgroups=#!topic/comp.std.c++/5cWxmw71ktI http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48118 http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2010/n3159.html#496 But I haven't been able to back this up in the standard1 itself. Particularly "worrying" is that there's no sign of the proposed wording change from that n3159 issues list in the actual standard's final wording. So, what gives? Are volatile data members trivially copyable, or not? 1   C++11

    Read the article

  • How does volatile actually work?

    - by FredOverflow
    Marking a variable as volatile in Java ensures that every thread sees the value that was last written to it instead of some stale value. I was wondering how this is actually achieved. Does the JVM emit special instructions that flush the CPU caches or something?
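
    A hedged sketch of the classic visibility demonstration behind the question (the class name StaleReadDemo is illustrative); whether the loop actually hangs without volatile depends on the JIT and the architecture:

        public class StaleReadDemo {
            private static volatile boolean stop = false;   // try removing volatile

            public static void main(String[] args) throws InterruptedException {
                Thread reader = new Thread(() -> {
                    while (!stop) {
                        // busy-wait; with a non-volatile field the JIT may hoist
                        // the read out of the loop and never observe the update
                    }
                    System.out.println("reader saw stop == true");
                });
                reader.start();
                Thread.sleep(1000);
                stop = true;        // volatile write, visible to the reader thread
                reader.join();
            }
        }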

    Read the article

  • c++ volatile multithreading variables

    - by anon
    I'm writing a C++ app. I have a class variable that more than one thread is writing to. In C++, anything that can be modified without the compiler "realizing" that it's being changed needs to be marked volatile, right? So if my code is multi-threaded, and one thread may write to a var while another reads from it, do I need to mark the var volatile? [I don't have a race condition since I'm relying on writes to ints being atomic.] Thanks!

    Read the article

  • Is there any point in using a volatile long?

    - by Adamski
    I occasionally use a volatile instance variable in cases where I have two threads reading from / writing to it and don't want the overhead (or potential deadlock risk) of taking out a lock; for example a timer thread periodically updating an int ID that is exposed as a getter on some class: public class MyClass { private volatile int id; public MyClass() { ScheduledExecutorService execService = Executors.newScheduledThreadPool(1); execService.scheduleAtFixedRate(new Runnable() { public void run() { ++id; } }, 0L, 30L, TimeUnit.SECONDS); } public int getId() { return id; } } My question: given that the JLS only guarantees that 32-bit reads will be atomic, is there any point in ever using a volatile long (i.e. 64-bit)? Caveat: please do not reply saying that using volatile over synchronized is a case of premature optimisation; I am well aware of how / when to use synchronized, but there are cases where volatile is preferable. For example, when defining a Spring bean for use in a single-threaded application I tend to favour volatile instance variables, as there is no guarantee that the Spring context will initialise each bean's properties in the main thread.
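
    A hedged sketch of a variant of the class above using AtomicLong, which is often suggested when both atomicity and visibility of a 64-bit counter are wanted without a lock (the class name MyClassAtomic is illustrative; other names mirror the original):

        import java.util.concurrent.Executors;
        import java.util.concurrent.ScheduledExecutorService;
        import java.util.concurrent.TimeUnit;
        import java.util.concurrent.atomic.AtomicLong;

        public class MyClassAtomic {
            private final AtomicLong id = new AtomicLong();

            public MyClassAtomic() {
                ScheduledExecutorService execService = Executors.newScheduledThreadPool(1);
                // increment the counter every 30 seconds, as in the original snippet
                execService.scheduleAtFixedRate(() -> id.incrementAndGet(), 0L, 30L, TimeUnit.SECONDS);
            }

            public long getId() {
                return id.get();
            }
        }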

    Read the article

  • Does Interlocked guarantee visibility to other threads in C# or do I still have to use volatile?

    - by Lirik
    I've been reading the answer to a similar question, but I'm still a little confused... Abel had a great answer, but this is the part that I'm unsure about: ...declaring a variable volatile makes it volatile for every single access. It is impossible to force this behavior any other way, hence volatile cannot be replaced with Interlocked. This is needed in scenarios where other libraries, interfaces or hardware can access your variable and update it anytime, or need the most recent version. Does Interlocked guarantee visibility of the atomic operation to all threads, or do I still have to use the volatile keyword on the value in order to guarantee visibility of the change? Here is my example: public class CountDownLatch { private volatile int m_remain; /* <--- do I need the volatile keyword there since I'm using Interlocked? */ private EventWaitHandle m_event; public CountDownLatch (int count) { Reset(count); } public void Reset(int count) { if (count < 0) throw new ArgumentOutOfRangeException(); m_remain = count; m_event = new ManualResetEvent(false); if (m_remain == 0) { m_event.Set(); } } public void Signal() { /* The last thread to signal also sets the event. */ if (Interlocked.Decrement(ref m_remain) == 0) m_event.Set(); } public void Wait() { m_event.WaitOne(); } }

    Read the article

  • Why is the volatile qualifier used throughout std::atomic?

    - by Caspin
    From what I've read from Herb Sutter and others, you would think that volatile and concurrent programming were completely orthogonal concepts, at least as far as C/C++ are concerned. However, in GCC's C++0x extensions all of std::atomic's member functions have the volatile qualifier. The same is true in Anthony Williams's implementation of std::atomic. So what's the deal? Do my atomic<> variables need to be volatile or not?

    Read the article

  • Why does std::cout convert volatile pointers to bool?

    - by Joseph Garvin
    If you try to cout a volatile pointer, even a volatile char pointer where you would normally expect cout to print the string, you will instead simply get '1' (assuming the pointer is not null I think). I assume output stream operator<< is template specialized for volatile pointers, but my question is, why? What use case motivates this behavior? Example code: #include <iostream> #include <cstring> int main() { char x[500]; std::strcpy(x, "Hello world"); int y; int *z = &y; std::cout << x << std::endl; std::cout << (char volatile*)x << std::endl; std::cout << z << std::endl; std::cout << (int volatile*)z << std::endl; return 0; } Output: Hello world 1 0x8046b6c 1

    Read the article
