Search Results

Search found 16324 results on 653 pages for 'per thread'.

  • Pass a Message From Thread to Update UI

    - by Jay Dee
    I've created a new thread for a file browser. The thread reads the contents of a directory. What I want to do is update the UI thread to draw a graphical representation of the files and folders. I know I can't update the UI from within the new thread, so what I want to do is: while the file-scanning thread iterates through a directory's files and folders, pass a file path string back to the UI thread. The handler in the UI thread then draws the graphical representation of the file passed back.

        import java.io.File;

        import android.app.Activity;
        import android.app.ProgressDialog;
        import android.content.Intent;
        import android.os.Handler;
        import android.os.Message;
        import android.util.Log;

        public class New_Project extends Activity implements Runnable {

            private ProgressDialog pd;   // shown while the scan runs
            private File[] allFiles;     // contents of the directory being scanned

            private Handler handler = new Handler() {
                @Override
                public void handleMessage(Message msg) {
                    Log.d("New Thread", "Process Complete.");
                    Intent intent = new Intent();
                    setResult(RESULT_OK, intent);
                    finish();
                }
            };

            // Shows the progress dialog and starts the background scanning thread.
            public void startFileScan() {
                //if (!XMLEFunctions.canReadExternal(this)) return;
                pd = ProgressDialog.show(this, "Reading Directory.", "Please Wait...", true, false);
                Log.d("New Thread", "Called");
                Thread thread = new Thread(this);
                thread.start();
            }

            public void run() {
                Log.d("New Thread", "Reading Files");
                getFiles();
                handler.sendEmptyMessage(0);
            }

            public void getFiles() {
                for (int i = 0; i <= allFiles.length - 1; i++) {
                    // I WANT TO PASS THE FILE PATH BACK TO A HANDLER IN THE UI
                    // SO IT CAN BE DRAWN.
                    passFilePathBackToBeDrawn(allFiles[i].toString());
                }
            }
        }
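
    One way to achieve the hand-off is to give the worker thread a Handler that was created on the UI thread and send it one Message per file. A rough sketch of that idea (the "filePath" key and the drawFile() method are placeholder names, not part of the code above):

        // These members would live inside the Activity (they use android.os.Handler,
        // android.os.Looper, android.os.Message and android.os.Bundle).

        // Handler created on the UI thread; it receives one message per discovered file.
        private final Handler fileHandler = new Handler(Looper.getMainLooper()) {
            @Override
            public void handleMessage(Message msg) {
                String path = msg.getData().getString("filePath"); // placeholder key
                drawFile(path); // hypothetical method that draws this file's representation
            }
        };

        // Called from the background thread for every file or folder found.
        private void passFilePathBackToBeDrawn(String path) {
            Message msg = fileHandler.obtainMessage();
            Bundle data = new Bundle();
            data.putString("filePath", path);
            msg.setData(data);
            fileHandler.sendMessage(msg);
        }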

    Read the article

  • Is this (Lock-Free) Queue Implementation Thread-Safe?

    - by Hosam Aly
    I am trying to create a lock-free queue implementation in Java, mainly for personal learning. The queue should be a general one, allowing any number of readers and/or writers concurrently. Would you please review it, and suggest any improvements/issues you find? Thank you.

        import java.util.concurrent.atomic.AtomicReference;

        public class LockFreeQueue<T> {

            private static class Node<E> {
                E value;
                volatile Node<E> next;

                Node(E value) {
                    this.value = value;
                }
            }

            private AtomicReference<Node<T>> head, tail;

            public LockFreeQueue() {
                // have both head and tail point to a dummy node
                Node<T> dummyNode = new Node<T>(null);
                head = new AtomicReference<Node<T>>(dummyNode);
                tail = new AtomicReference<Node<T>>(dummyNode);
            }

            /**
             * Puts an object at the end of the queue.
             */
            public void putObject(T value) {
                Node<T> newNode = new Node<T>(value);
                Node<T> prevTailNode = tail.getAndSet(newNode);
                prevTailNode.next = newNode;
            }

            /**
             * Gets an object from the beginning of the queue. The object is removed
             * from the queue. If there are no objects in the queue, returns null.
             */
            public T getObject() {
                Node<T> headNode, valueNode;
                // move head node to the next node using atomic semantics
                // as long as next node is not null
                do {
                    headNode = head.get();
                    valueNode = headNode.next;
                    // try until the whole loop executes pseudo-atomically
                    // (i.e. unaffected by modifications done by other threads)
                } while (valueNode != null && !head.compareAndSet(headNode, valueNode));
                T value = (valueNode != null ? valueNode.value : null);
                // release the value pointed to by head, keeping the head node dummy
                if (valueNode != null)
                    valueNode.value = null;
                return value;
            }
        }
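
    A small smoke test of the intended usage might look like the sketch below; it only checks that nothing is lost or duplicated when several writers run concurrently, and passing it is of course not a proof of correctness:

        public class LockFreeQueueTest {
            public static void main(String[] args) throws InterruptedException {
                final LockFreeQueue<Integer> queue = new LockFreeQueue<Integer>();
                final int writers = 4;
                final int itemsPerWriter = 100000;

                Thread[] threads = new Thread[writers];
                for (int w = 0; w < writers; w++) {
                    final int base = w * itemsPerWriter;
                    threads[w] = new Thread(new Runnable() {
                        public void run() {
                            for (int i = 0; i < itemsPerWriter; i++) {
                                queue.putObject(base + i);   // concurrent writers
                            }
                        }
                    });
                    threads[w].start();
                }
                for (Thread t : threads) {
                    t.join();   // join() also makes the writers' work visible to this thread
                }

                // Drain from a single thread and verify every value arrived exactly once.
                java.util.BitSet seen = new java.util.BitSet(writers * itemsPerWriter);
                int count = 0;
                Integer v;
                while ((v = queue.getObject()) != null) {
                    if (seen.get(v)) {
                        throw new AssertionError("duplicate value: " + v);
                    }
                    seen.set(v);
                    count++;
                }
                System.out.println(count == writers * itemsPerWriter ? "OK" : "missing items: " + count);
            }
        }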

    Read the article

  • thread local storage on Mac OS X

    - by anon
    http://developer.apple.com/mac/library/documentation/DeveloperTools/gcc-4.0.1/gcc/Thread_002dLocal.html documents __thread, yet my g++ complains that __thread is not supported on my architecture (Leopard on a MacBook Pro). Why is this? And how do I get around it?

    Read the article

  • Cache consistency & spawning a thread

    - by Dave Keck
    Background: I've been reading through various books and articles to learn about processor caches, cache consistency, and memory barriers in the context of concurrent execution. So far though, I have been unable to determine whether a common coding practice of mine is safe in the strictest sense.

    Assumptions: The following pseudo-code is executed on a two-processor machine:

        int sharedVar = 0;

        myThread() {
            print(sharedVar);
        }

        main() {
            sharedVar = 1;
            spawnThread(myThread);
            sleep(-1);
        }

    main() executes on processor 1 (P1), while myThread() executes on P2. Initially, sharedVar exists in the caches of both P1 and P2 with the initial value of 0 (due to some "warm-up code" that isn't shown above).

    Question: Strictly speaking – preferably without assuming any particular CPU – is myThread() guaranteed to print 1? With my newfound knowledge of processor caches, it seems entirely possible that at the time of the print() statement, P2 may not have received the invalidation request for sharedVar caused by P1's assignment in main(). Therefore, it seems possible that myThread() could print 0.

    References: These are the related articles and books I've been reading. (It wouldn't allow me to format these as links because I'm a new user - sorry.)
    - Shared Memory Consistency Models: A Tutorial - hpl.hp.com/techreports/Compaq-DEC/WRL-95-7.pdf
    - Memory Barriers: a Hardware View for Software Hackers - rdrop.com/users/paulmck/scalability/paper/whymb.2009.04.05a.pdf
    - Linux Kernel Memory Barriers - kernel.org/doc/Documentation/memory-barriers.txt
    - Computer Architecture: A Quantitative Approach - amazon.com/Computer-Architecture-Quantitative-Approach-4th/dp/0123704901/ref=dp_ob_title_bk
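
    For what it's worth, this kind of guarantee usually comes from the language's memory model rather than from the hardware alone. In Java, for example, the memory model specifies that a call to Thread.start() happens-before every action of the started thread, so the Java version of this pseudo-code must print 1. A minimal sketch, assuming the write really does precede start():

        public class StartVisibility {
            static int sharedVar = 0; // deliberately not volatile

            public static void main(String[] args) {
                sharedVar = 1; // this write...

                Thread t = new Thread(new Runnable() {
                    public void run() {
                        // ...is guaranteed to be visible here, because Thread.start()
                        // happens-before the first action of the started thread.
                        System.out.println(sharedVar); // always prints 1
                    }
                });
                t.start();
            }
        }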

    Read the article

  • Java: share data between threads

    - by ayush
    I have a Java process that reads data from a socket server, so I have a BufferedReader and a PrintWriter object corresponding to that socket. In the same Java process I also have a multithreaded server that accepts client connections. I want to achieve a functionality where all the clients I accept can read the data coming from the BufferedReader mentioned above (so that they can multiplex the data). How do I make these individual client threads read the data from that single BufferedReader object? Sorry for the confusion.
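
    One way to do this without letting every client touch the shared BufferedReader is to have a single reader thread copy each incoming line into a per-client queue. A sketch under that assumption (the class and method names here are made up for illustration):

        import java.io.BufferedReader;
        import java.io.IOException;
        import java.util.List;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.CopyOnWriteArrayList;
        import java.util.concurrent.LinkedBlockingQueue;

        // Owns the socket's BufferedReader and fans each line out to every
        // registered client queue, so client threads never read the socket directly.
        public class LineFanOut implements Runnable {
            private final BufferedReader reader;
            private final List<BlockingQueue<String>> clients =
                    new CopyOnWriteArrayList<BlockingQueue<String>>();

            public LineFanOut(BufferedReader reader) {
                this.reader = reader;
            }

            // Each client thread calls this once, then takes lines from its own queue.
            public BlockingQueue<String> register() {
                BlockingQueue<String> q = new LinkedBlockingQueue<String>();
                clients.add(q);
                return q;
            }

            public void run() {
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        for (BlockingQueue<String> q : clients) {
                            q.offer(line); // every client gets its own copy of the line
                        }
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    Each client thread would then block on its own queue with take() instead of sharing the reader.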

    Read the article

  • Is this ruby code thread safe?

    - by Ben K.
    Is this code threadsafe? It seems like it should be, because @myvar will never be assigned from multiple threads (assuming block completes in < 1s). But do I need to be worried about a situation where the second block is trying to read @myvar as it's being written?

        require 'rubygems'
        require 'eventmachine'

        @myvar = Time.now.to_i

        EventMachine.run do
          EventMachine.add_periodic_timer(1) do
            EventMachine.defer do
              @myvar = Time.now.to_i # some calculation and reassign
            end
          end

          EventMachine.add_periodic_timer(0.5) do
            puts @myvar
          end
        end

    Read the article

  • How do I make a background thread in Java that allows the main application to exit completely?

    - by Bob
    I have a Java application that creates a new thread to do some work. I can launch the new thread with no problems. When the "main" program terminates, I want the thread I created to keep running - which it does... But the problem is, when I run the main application from Eclipse or from Ant under Windows, control doesn't return unless the background process is killed. If I fork the main java process in ant, I want control to return to ant once the main thread is done with its work... But as it is, ant continues to wait until both the main process and the created thread are both terminated. How do I launch the thread in the background such that control will return to ant when the "main" application is finished? (By the way, when I run the same application under Linux, I am able to do this with no problems).
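
    Within a single JVM these two goals conflict: a non-daemon thread keeps the JVM (and therefore ant) waiting, while a daemon thread dies as soon as the main application exits. One way around this, sketched below under the assumption that the background work can be moved into its own main class (com.example.BackgroundWorker is a made-up name), is to launch it as a separate JVM process and not wait for it:

        import java.io.File;
        import java.io.IOException;

        public class Launcher {
            public static void main(String[] args) throws IOException {
                String javaBin = System.getProperty("java.home")
                        + File.separator + "bin" + File.separator + "java";
                String classpath = System.getProperty("java.class.path");

                // Start the worker in its own JVM and send its output to a file,
                // so no inherited stream keeps the parent process (or ant) waiting.
                new ProcessBuilder(javaBin, "-cp", classpath, "com.example.BackgroundWorker")
                        .redirectErrorStream(true)
                        .redirectOutput(ProcessBuilder.Redirect.appendTo(new File("worker.log")))
                        .start();

                // main() returns immediately; the child process keeps running on its own.
            }
        }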

    Read the article

  • Java Thread - Memory consistency errors

    - by Yatendra Goel
    I was reading Sun's tutorial on concurrency, but I couldn't understand exactly what memory consistency errors are. I googled the term but didn't find any helpful tutorial or article. I know that this question is a subjective one, so you can provide me links to articles on the topic. It would be great if you could explain it with a simple example.
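
    A memory consistency error is a situation where two threads have inconsistent views of the same data because nothing establishes a happens-before relationship between the write and the read. A classic minimal example is a stop flag that the reader may never see change (the fix, noted in the comment, is to declare the flag volatile or to synchronize access to it):

        public class StaleFlag {
            // Without volatile there is no happens-before edge between the writer's
            // store and the reader's loads, so the reader may loop forever on a stale value.
            static boolean stop = false; // fix: static volatile boolean stop = false;

            public static void main(String[] args) throws InterruptedException {
                Thread reader = new Thread(new Runnable() {
                    public void run() {
                        while (!stop) {
                            // busy-wait; not guaranteed to ever observe stop == true
                        }
                        System.out.println("Reader finally saw stop == true");
                    }
                });
                reader.start();

                Thread.sleep(1000);
                stop = true; // the reader thread may never see this write
            }
        }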

    Read the article

  • ID generator with local static variable - thread-safe?

    - by Poseidon
    Will the following piece of code work as expected in a multi-threaded scenario?

        int getUniqueID() {
            static int ID = 0;
            return ++ID;
        }

    It's not necessary for the IDs to be contiguous - even if it skips a value, it's fine. Can it be said that when this function returns, the value returned will be unique across all threads?
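
    For comparison, ++ID is an ordinary read-modify-write on shared data, so two threads can read the same value and hand out duplicate IDs unless the increment is made atomic. A minimal sketch of the same idea made thread safe, written here in Java with AtomicInteger:

        import java.util.concurrent.atomic.AtomicInteger;

        public class UniqueIdGenerator {
            private static final AtomicInteger ID = new AtomicInteger(0);

            // incrementAndGet() is a single atomic read-modify-write,
            // so every caller receives a distinct value.
            public static int getUniqueID() {
                return ID.incrementAndGet();
            }
        }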

    Read the article

  • Server Systems for SQL Server 2012 per core licensing

    - by jchang
    Until recently, the SQL Server Enterprise Edition per processor (socket) licensing model resulted in only 2 or 3 server system configurations being the preferred choice. Determine the number of sockets: 2, 4 or 8. Then select the processor with the most compute capability at that socket count level. Finally, fill the DIMM sockets with the largest capacity ECC memory module at reasonable cost per GB. Currently this is the 16GB DIMM with a price of $365 on the Dell website, and $240 from Crucial. The...(read more)

    Read the article

  • Thread safe GUI programming

    - by James
    I have been programming Java with Swing for a couple of years now, and always accepted that GUI interactions had to happen on the Event Dispatch Thread. I recently started to use GTK+ for C applications and was unsurprised to find that GUI interactions had to be called on gtk_main. Similarly, I looked at SWT to see in what ways it was different from Swing and whether it was worth using, and again found the UI thread idea, and I am sure that these 3 are not the only toolkits to use this model. I was wondering if there is a reason for this design, i.e. what is the reason for keeping UI modifications isolated to a single thread? I can see why some modifications may cause issues (like modifying a list while it is being drawn), but I do not see why these concerns pass on to the user of the API. Is there a limit imposed by an operating system? Is there a good reason these concerns are not 'hidden' (i.e. some form of synchronization that is invisible to the user)? Is there any (even purely conceptual) way of creating a thread-safe graphics library, or is such a thing actually impossible? I found this http://blogs.operationaldynamics.com/andrew/software/gnome-desktop/gtk-thread-awareness which seems to describe GTK differently to how I understood it (although my understanding was the same as many people's). How does this differ from other toolkits? Is it possible to implement this in Swing (as the EDT model does not actually prevent access from other threads, it just often leads to Exceptions)?
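
    In Swing the usual discipline is to hand any UI mutation off to the event dispatch thread rather than to try to make the components themselves thread safe. A minimal sketch of that hand-off:

        import javax.swing.JLabel;
        import javax.swing.SwingUtilities;

        public class EdtHandOff {
            // May be called from any worker thread; the actual component update
            // is queued to run on the Event Dispatch Thread.
            static void showResult(final JLabel label, final String text) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        label.setText(text); // safe: runs on the EDT
                    }
                });
            }
        }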

    Read the article

  • the limit of pageviews per month in Google Analytics

    - by crmpicco
    I have been looking around to try and find some confirmation and clarity on the limit of pageviews that Google allows per month for a Google Analytics account. I have read that the limit of hits per month is 10,000,000 and the limit of pageviews is 5,000,000. Putting 2 and 2 together, I am thinking this is to allow the other 5,000,000 for events, social clicks and the like? Google's documentation states 5m, but the hits/pageviews distinction is a bit of a grey area, as I've read suggestions that the limit can be considered to be 10m.

    Read the article

  • How to do thread management in C++?

    - by Dipan Mehta
    We use pthreads for thread management in C-based systems, and pthread code generally compiles under a C++ compiler (like g++). However, what are the better abstractions for threads in C++? Also, for any system to work correctly when multi-threaded, it is important to make it thread safe. Which standard libraries require alternatives (or special builds) to be thread safe, or are they unsafe in multi-threaded environments? Do smart pointers and templates require special measures to make them safe? What are the best practices for thread management in C++?

    Read the article

  • Logic / Render phases with a single thread

    - by DevilWithin
    The question I have may generate different opinions from different developers, but I'd still like to have an answer on this. It's all about the updating and rendering steps of the game loop, and their use in multi- and single-threaded environments. Currently, there is one thread running, which takes care of sequentially executing events, logic and rendering. Sometimes, the logic part may wish to change the game state to something else and, in between, do some loading of files. The result is that the game hangs completely while loading, and then proceeds to normal rendering of the new state. To get around this, I could make another thread, do the loading there while the main thread renders a smooth loading animation, and then proceed normally. The real question is about what happens if I don't create another thread. I could refresh the screen from the logic thread and provide some basic loading screen, which would be updated not so smoothly while the files load. In fact, this approach is not loved by a lot of developers, as it scrambles render code into the logic step, which may cause problems of different sorts. Hope it's clear!
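
    If everything stays on one thread, a common compromise is to break the loading work into small bounded steps and let the loop render a frame between steps, so the loading screen keeps updating without a second thread. A rough sketch of that idea (the LoadStep interface and the method names are made up for illustration):

        import java.util.ArrayDeque;
        import java.util.Queue;

        public class LoadingState {
            // One small, bounded piece of loading work (e.g. "load one texture").
            interface LoadStep {
                void run();
            }

            private final Queue<LoadStep> steps = new ArrayDeque<LoadStep>();

            public void enqueue(LoadStep step) {
                steps.add(step);
            }

            // Called once per frame from the single-threaded game loop: do at most
            // ~10 ms of loading, then return so the loop can render this frame.
            public void update() {
                long deadline = System.nanoTime() + 10000000L; // ~10 ms budget
                while (!steps.isEmpty() && System.nanoTime() < deadline) {
                    steps.poll().run();
                }
            }

            public void render() {
                // draw the progress bar / loading animation here, once per frame
            }

            public boolean isDone() {
                return steps.isEmpty();
            }
        }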

    Read the article

  • Woolrich Prezzi that are perfect for every woman

    - by WoolrichParka
    The Woolrich Prezzi Woolrich Parka coats are made with 100% down and with a sensible blend on the outer lining, which suits the most demanding customers. The Woolrich Arctic coats come in the traditional shade, which can also be combined with the Woolrich Parka hood to give women an elegant look at a reasonable price. A wide Woolrich Outlet Bologna range of Woolrich Parka Men designs is now shown for your choice, all of which are worth buying for both design and price. wufengfengmaple36

    Read the article

  • Multithreaded game fails on SwapBuffers in render thread at exit

    - by user782220
    The render loop and the Windows message loop run on separate threads. The way the program exits is that, after PostQuitMessage is called in WM_DESTROY, the message-loop thread signals the render-loop thread to exit. As far as I can tell, before the render-loop thread can even process the signal it tries SwapBuffers, and that fails. My question: is there something about how Windows processes WM_DESTROY and WM_QUIT, perhaps in DefWindowProc, that causes various objects associated with rendering to go away even though I haven't explicitly deleted anything? That would explain why the rendering thread is making bad calls at exit.

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactics; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument, rather than FUD fight, there are a few areas in his comparison that should be discussed.

    Scaling is not free. For any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users.

    Choosing the denominator radically changes the picture. When ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator. Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is: arbitrary (not necessarily comparable across vendors); fluid (modern chips shift chip resources in response to load); and invisible (unless you have a microscope, you can't see it). By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get 13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM vs. 57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle. The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because: I can see chips and count them; I can accurately compare the number of chips in my system to the count in some other vendor's system; and the probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores".

    SPEC Fair Use requirements. Speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule, www.spec.org/fairuse.html section I.D.2. For example, Mr. Kharkovski's footnote (1) begins: "Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EJOPS/core result)". The questionable tactic, from a Fair Use point of view, is that there is no such metric at the designated location. At www.spec.org, you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM. Despite the implication of the footnote, you will not find any mention of 449 nor anything that says 823. SPEC says that you can, under its Fair Use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value."

    Substantiation and transparency. Although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, and should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: Were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable.

    T5 claim for "Fastest Processor". Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing: "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The industry-standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+, for 1 chip vs. 1 chip, for 8 chips vs. 8 chips, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for Peak tuning and for Base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC vs. 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page.

    SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org 1 July 2013, and can be rechecked here.

    Read the article

  • [C# Thread] I'd like access to a share on the network!

    - by JustLooking
    Some details: I am working with VisualWebGUI, so this app is like ASP.NET, and it is deployed on IIS 7 (for testing). For my 'Web Site', Anonymous Authentication is set to a specific user (DomainName\DomainUser). In my web.config, I have impersonation on. This is how I got my app to access the share in the first place. The problem: there is a point in the app where we use the Thread class, something similar to: Thread myThread = new Thread(new ThreadStart(objInstance.PublicMethod)); myThread.Start(); What I have noticed is that I can write to my logs (a text file on the share) everywhere throughout my code, except in the thread that I kicked off. I added some debugging output and what I see for users is: the thread that's kicked off runs as NT AUTHORITY\NETWORK SERVICE, while everywhere else in my code it is DomainName\DomainUser (as described in my IIS setup). OK, for some reason the thread gets a different user (NETWORK SERVICE). Fine. But my share (and the actual log file) was given 'Full Control' to the NETWORK SERVICE user (this share resides on a different server than the one my app is running on). If NETWORK SERVICE has rights to this folder, why do I get access denied? Or is there a way to have the thread I kick off run as the same user as the process?

    Read the article
