Search Results

Search found 10004 results on 401 pages for 'thread pool'.

Page 3/401 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • multiprocessing.Pool.apply_async() eating up memory

    - by Austin
    I want to use multiprocessing to make my script faster... I'm still new to this. The Python doc assumes you already understand threading and what-not. So... I have code that looks like this:

        from itertools import izip
        from multiprocessing import Pool

        p = Pool()
        for i, j in izip(hugeseta, hugesetb):
            p.apply_async(number_crunching, (i, j))

    This gives me great speed! However, hugeseta and hugesetb are really huge. Pool keeps all of the i's and j's in memory after they've finished their job (basically, printing output to stdout). Is there any way to del i and j after they complete?

    Read the article

  • Thread placement policies on NUMA systems - update

    - by Dave
    In a prior blog entry I noted that Solaris used a "maximum dispersal" placement policy to assign nascent threads to their initial processors. The general idea is that threads should be placed as far away from each other as possible in the resource topology in order to reduce resource contention between concurrently running threads. This policy assumes that resource contention -- pipelines, memory channel contention, destructive interference in the shared caches, etc -- will likely outweigh (a) any potential communication benefits we might achieve by packing our threads more densely onto a subset of the NUMA nodes, and (b) benefits of NUMA affinity between memory allocated by one thread and accessed by other threads. We want our threads spread widely over the system and not packed together. Conceptually, when placing a new thread, the kernel picks the least loaded NUMA node (the node with lowest aggregate load average), and then the least loaded core on that node, etc. Furthermore, the kernel places threads onto resources -- sockets, cores, pipelines, etc -- without regard to the thread's process membership. That is, initial placement is process-agnostic. Keep reading, though. This description is incorrect.

    On Solaris 10 on a SPARC T5440 with 4 x T2+ NUMA nodes, if the system is otherwise unloaded and we launch a process that creates 20 compute-bound concurrent threads, then typically we'll see a perfect balance with 5 threads on each node. We see similar behavior on an 8-node x86 x4800 system, where each node has 8 cores and each core is 2-way hyperthreaded. So far so good; this behavior seems in agreement with the policy I described in the 1st paragraph.

    I recently tried the same experiment on a 4-node T4-4 running Solaris 11. Both the T5440 and T4-4 are 4-node systems that expose 256 logical thread contexts. To my surprise, all 20 threads were placed onto just one NUMA node while the other 3 nodes remained completely idle. I checked the usual suspects such as processor sets inadvertently left around by colleagues, processors left offline, and power management policies, but the system was configured normally. I then launched multiple concurrent instances of the process, and, interestingly, all the threads from the 1st process landed on one node, all the threads from the 2nd process landed on another node, and so on. This happened even if I interleaved thread creation between the processes, so I was relatively sure the effect wasn't related to thread creation time, but rather that placement was a function of process membership.

    At this point I consulted the Solaris sources and talked with folks in the Solaris group. The new Solaris 11 behavior is intentional. The kernel is no longer using a simple maximum dispersal policy, and thread placement is process membership-aware. Now, even if other nodes are completely unloaded, the kernel will still try to pack new threads onto the home lgroup (socket) of the primordial thread until the load average of that node reaches 50%, after which it will pick the next least loaded node as the process's new favorite node for placement. On the T4-4 we have 64 logical thread contexts (strands) per socket (lgroup), so if we launch 48 concurrent threads we will find 32 placed on one node and 16 on some other node. If we launch 64 threads we'll find 32 and 32. That means we can end up with our threads clustered on a small subset of the nodes in a way that's quite different from what we've seen on Solaris 10.

    So we have a policy that allows process-aware packing but reverts to spreading threads onto other nodes if a node becomes too saturated. It turns out this policy was enabled in Solaris 10, but certain bugs suppressed the mixed packing/spreading behavior. There are configuration variables in /etc/system that allow us to dial the affinity between nascent threads and their primordial thread up and down: see lgrp_expand_proc_thresh, specifically. In the OpenSolaris source code the key routine is mpo_update_tunables(). This method reads the /etc/system variables and sets up some global variables that will subsequently be used by the dispatcher, which calls lgrp_choose() in lgrp.c to place nascent threads. lgrp_expand_proc_thresh controls how loaded an lgroup must be before we'll consider homing a process's threads to another lgroup. Tune this value lower to have it spread your process's threads out more.

    To recap, the 'new' policy is as follows. Threads from the same process are packed onto a subset of the strands of a socket (50% for T-series). Once that socket reaches the 50% threshold the kernel then picks another preferred socket for that process. Threads from unrelated processes are spread across sockets. More precisely, different processes may have different preferred sockets (lgroups). Beware that I've simplified and elided details for the purposes of explication. The truth is in the code.

    Remarks: It's worth noting that initial thread placement is just that. If there's a gross imbalance between the load on different nodes then the kernel will migrate threads to achieve a better and more even distribution over the set of available nodes. Once a thread runs and gains some affinity for a node, however, it becomes "stickier" under the assumption that the thread has residual cache residency on that node, and that memory allocated by that thread resides on that node given the default "first-touch" page-level NUMA allocation policy. Exactly how the various policies interact and which have precedence under what circumstances could be the topic of a future blog entry. The scheduler is work-conserving.

    The x4800 mentioned above is an interesting system. Each of the 8 sockets houses an Intel 7500-series processor. Each processor has 3 coherent QPI links and the system is arranged as a glueless 8-socket twisted ladder "mobius" topology. Nodes are either 1 or 2 hops distant over the QPI links. As an aside, the mapping of logical CPUIDs to physical resources is rather interesting on Solaris/x4800. On SPARC/Solaris the CPUID layout is strictly geographic, with the highest order bits identifying the socket, the next lower bits identifying the core within that socket, followed by the pipeline (if present) and finally the logical thread context ("strand") on the core. But on Solaris on the x4800 the CPUID layout is as follows. Bits [6:6] identify the hyperthread on a core; bits [5:3] identify the socket, or package in Intel terminology; bits [2:0] identify the core within a socket. Such low-level details should be of interest only if you're binding threads -- a bad idea, the kernel typically handles placement best -- or if you're writing NUMA-aware code that's aware of the ambient placement and makes decisions accordingly.

    Solaris introduced the so-called critical-threads mechanism, which is expressed by putting a thread into the FX scheduling class at priority 60. The critical-threads mechanism applies to placement on cores, not on sockets, however. That is, it's an intra-socket policy, not an inter-socket policy. Solaris 11 introduces the Power Aware Dispatcher (PAD), which packs threads instead of spreading them out in an attempt to be able to keep sockets or cores at lower power levels. Maximum dispersal may be good for performance but is anathema to power management. PAD is off by default, but power management policies constitute yet another confounding factor with respect to scheduling and dispatching.

    If your threads communicate heavily -- one thread reads cache lines last written by some other thread -- then the new dense packing policy may improve performance by reducing traffic on the coherent interconnect. On the other hand, if the threads in your process communicate rarely, then it's possible the new packing policy might result in contention on shared computing resources. Unfortunately there's no simple litmus test that says whether packing or spreading is optimal in a given situation. The answer varies by system load, application, number of threads, and platform hardware characteristics. Currently we don't have the necessary tools and sensoria to decide at runtime, so we're reduced to an empirical approach where we run trials and try to decide on a placement policy. The situation is quite frustrating. Relatedly, it's often hard to determine just the right level of concurrency to optimize throughput. (Understanding constructive vs destructive interference in the shared caches would be a good start. We could augment the lines with a small tag field indicating which strand last installed or accessed a line. Given that, we could augment the CPU with performance counters for misses where a thread evicts a line it installed vs misses where a thread displaces a line installed by some other thread.)

    Read the article

  • Run a pool of processes in shell

    - by viraptor
    I'm looking for an easy method to run N selected processes at the same time with one command. It should put all the output on my terminal and shut down all of them when I exit with ctrl+c. Is there any existing app that does this? I'm thinking of something like exec_many 10 foo - it should keep 10 foos running and respawn any that die.

    Read the article

  • wpf exit thread automatically when application closes

    - by toni
    Hi, I have a main WPF window and one of its controls is a user control that I have created. This user control is an analog clock and contains a thread that updates the hour, minute and second hands. Initially it wasn't a thread, it was a timer event that updated the hour, minutes and seconds, but I changed it to a thread because the application does some hard work when the user presses a start button and then the clock doesn't update. Code snippet of the WPF window:

        <Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                xmlns:local="clr-namespace:GParts"
                xmlns:Microsoft_Windows_Themes="clr-namespace:Microsoft.Windows.Themes;assembly=PresentationFramework.Aero"
                xmlns:UC="clr-namespace:GParts.UserControls"
                x:Class="GParts.WinMain"
                Title="GParts" WindowState="Maximized" Closing="Window_Closing"
                Icon="/Resources/Calendar-clock.png" x:Name="WMain">
            <...>
            <!-- this is my user control -->
            <UC:AnalogClock Grid.Row="1" x:Name="AnalogClock" Background="Transparent"
                            Margin="0" Height="Auto" Width="Auto"/>
            <...>
        </Window>

    My problem is that when the user exits the application, the thread seems to continue executing. I would like the thread to finish automatically when the main window closes. Code snippet of the user control constructor:

        namespace GParts.UserControls
        {
            /// <summary>
            /// Interaction logic for AnalogClock.xaml
            /// </summary>
            public partial class AnalogClock : UserControl
            {
                System.Timers.Timer timer = new System.Timers.Timer(1000);

                public AnalogClock()
                {
                    InitializeComponent();
                    MDCalendar mdCalendar = new MDCalendar();
                    DateTime date = DateTime.Now;
                    TimeZone time = TimeZone.CurrentTimeZone;
                    TimeSpan difference = time.GetUtcOffset(date);
                    uint currentTime = mdCalendar.Time() + (uint)difference.TotalSeconds;
                    christianityCalendar.Content = mdCalendar.Date("d/e/Z", currentTime, false);

                    // this was before implementing the thread
                    //timer.Elapsed += new System.Timers.ElapsedEventHandler(timer_Elapsed);
                    //timer.Enabled = true;

                    // The work to perform
                    ThreadStart start = delegate()
                    {
                        // With this condition the thread exits when the main window closes, but
                        // despite this it seems like the thread continues executing after
                        // exiting the application because in task manager the cpu is very busy
                        while ((this.IsInitialized) && (this.Dispatcher.HasShutdownFinished == false))
                        {
                            this.Dispatcher.Invoke(DispatcherPriority.Normal, (Action)(() =>
                            {
                                DateTime hora = DateTime.Now;
                                secondHand.Angle = hora.Second * 6;
                                minuteHand.Angle = hora.Minute * 6;
                                hourHand.Angle = (hora.Hour * 30) + (hora.Minute * 0.5);
                                DigitalClock.CurrentTime = hora;
                            }));
                        }
                        Console.Write("Quit ok");
                    };

                    // Create the thread and kick it started!
                    new Thread(start).Start();
                }

                // this was before implementing the thread
                void timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
                {
                    this.Dispatcher.Invoke(DispatcherPriority.Normal, (Action)(() =>
                    {
                        DateTime hora = DateTime.Now;
                        secondHand.Angle = hora.Second * 6;
                        minuteHand.Angle = hora.Minute * 6;
                        hourHand.Angle = (hora.Hour * 30) + (hora.Minute * 0.5);
                        DigitalClock.CurrentTime = hora;
                    }));
                }
            } // end class
        } // end namespace

    How can I exit correctly from the thread automatically when the main window closes and the application exits? Thanks very much!
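
    A minimal sketch of one common fix (not from the original post): mark the clock thread as a background thread so the runtime does not keep the process alive for it, and let the loop sleep between updates so it doesn't spin. The variable name clockThread is illustrative; start is the ThreadStart from the question.

        // Background threads are torn down automatically when the process exits.
        Thread clockThread = new Thread(start);
        clockThread.IsBackground = true;   // do not keep the application alive for this thread
        clockThread.Start();

        // Inside the while loop, a pause also stops the thread from burning a whole core:
        // Thread.Sleep(1000);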

    Read the article

  • In .NET when Aborting Thread, can this piece of code get corrupted?

    - by bosko
    Little intro: in a complex multithreaded application (an enterprise service bus, ESB), I need to use Thread.Abort, because this ESB accepts user-written modules which communicate with hardware security modules. So if such a module gets deadlocked or the hardware stops responding, I need to just unload the module while the rest of the server application keeps running. There is an abort-sync mechanism which ensures that code can be aborted only in a user section, and that section must be marked as abortable. If this happens, there is a possibility that a ThreadAbortException will be thrown in this piece of code:

        public void StopAbortSection()
        {
            var id = Thread.CurrentThread.ManagedThreadId;
            lock (threadIdMap[id])
            {
                ....
            }
        }

    If the module is in an abort section and the application decides to abort it, but between that decision and the actual Thread.Abort the module enters a non-abortable section by calling this method, the lock may already be taken on that locking object. So the lock will block until the abort, or the abort may be executed before this code reaches the lock. This object and method are essential, and I need to be sure that this piece of code is safe to abort at any moment. I should probably mention that threadIdMap is a Dictionary(int, ManualResetEvent), so the locking object is an instance of ManualResetEvent. I hope you now understand my question. Sorry for its length.
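
    For comparison, a hedged sketch (not the poster's code, and assuming .NET 4's Monitor.Enter(Object, ref Boolean) overload) of one way to make sure the lock itself can never be leaked if the abort lands while acquiring it. It does not make the body atomic; threadIdMap and id are taken from the question, gate and taken are illustrative.

        public void StopAbortSection()
        {
            var id = Thread.CurrentThread.ManagedThreadId;
            object gate = threadIdMap[id];
            bool taken = false;
            try
            {
                Monitor.Enter(gate, ref taken);   // sets taken atomically with acquiring the lock
                // ... guarded work ...
            }
            finally
            {
                // finally blocks run even if a ThreadAbortException is injected in the try block,
                // so an owned lock is always released.
                if (taken) Monitor.Exit(gate);
            }
        }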

    Read the article

  • Stopping work from one thread using another thread

    - by 113483626144458436514
    Not sure if my title is worded well, but whatever :) I have two threads: the main thread with the work that needs to be done, and a worker thread that contains a form with a progress bar and a cancel button. In normal code, it would be the other way around, but I can't do that in this case. When the user clicks the cancel button, a prompt is displayed asking if he really wants to cancel the work. The problem is that work continues on the main thread. I can get the main thread to stop work and such, but I would like for it to stop doing work when he clicks "Yes" on the prompt. Example:

        // Main thread work starts here
        t1 = new Thread(new ThreadStart(progressForm_Start));
        t1.Start();

        // Working
        for (i = 0; i <= 10000; i++)
        {
            semaphore.WaitOne();
            if (pBar.Running)
                bgworker_ProgressChanged(i);
            semaphore.Release();
            if (pBar.IsCancelled)
                break;
        }
        t1.Abort();
        // Main thread work ends here

        // Start progress bar form in another thread
        void progressForm_Start()
        {
            pBar.Status("Starting");
            pBar.ShowDialog();
        }

    I could theoretically include a prompt in the cancelWatch() function, but then I would have to do that everywhere I'm implementing this class.
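
    A hedged sketch of one way to wire the prompt to the loop (not the poster's code): share a volatile flag that the form sets only after the user confirms the prompt, and have the main-thread loop poll it. bgworker_ProgressChanged is from the question; the flag, handler and method names are illustrative.

        // Written by the progress form's thread, read by the main-thread loop.
        private volatile bool cancelConfirmed = false;

        // In the form's Cancel button handler (runs on the form's thread):
        private void cancelButton_Click(object sender, EventArgs e)
        {
            if (MessageBox.Show("Really cancel?", "Cancel", MessageBoxButtons.YesNo) == DialogResult.Yes)
                cancelConfirmed = true;   // only now should the main thread stop
        }

        // Main-thread work loop:
        private void DoWork()
        {
            for (int i = 0; i <= 10000; i++)
            {
                if (cancelConfirmed) break;    // stops only after "Yes" on the prompt
                bgworker_ProgressChanged(i);   // from the question
            }
        }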

    Read the article

  • I just can't kill Java thread.

    - by Adrian
    I have a thread that downloads some images from the internet using different proxies. Sometimes it hangs, and it can't be killed by any means.

        public HttpURLConnection uc;
        public InputStream in;

        Proxy proxy = new Proxy(Proxy.Type.HTTP, new InetSocketAddress("server", 8080));
        URL url = new URL("http://images.com/image.jpg");
        uc = (HttpURLConnection)url.openConnection(proxy);
        uc.setConnectTimeout(30000);
        uc.setAllowUserInteraction(false);
        uc.setDoOutput(true);
        uc.addRequestProperty("User-Agent","Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0)");
        uc.connect();
        in = uc.getInputStream();

    When it hangs, it freezes at the uc.getInputStream() method. I made a timer which tries to kill the thread if its run time exceeds 3 minutes. I tried .terminate() on the thread. No effect. I tried uc.disconnect() from the main thread. The method also hangs and, with it, the main thread. I tried in.close(). No effect. I tried uc=null, in=null, hoping for an exception that would end the thread. It keeps running. It never gets past the uc.getInputStream() method. In my last test the thread lasted over 14 hours after receiving all of the above commands (or various combinations). I had to kill the Java process to stop the thread. If I just ignore the thread and set its instance to null, the thread doesn't die and is not cleaned up by the garbage collector. I know that because if I let the application run for several days, the Java process takes more and more system memory. In 3 days it took 10% of my 8 GB RAM system. Is it impossible to kill a thread, whatever I do?

    Read the article

  • Is it a good way to close a thread?

    - by Roman
    I have a short version of the question: I start a thread like this: counter.start();, where counter is a thread. At the point when I want to stop the thread I do this: counter.interrupt(). In my thread I periodically do this check: Thread.interrupted(). If it returns true I return from the thread and, as a consequence, it stops. And here are some details, if needed. From the event dispatch thread I start a counter thread in this way:

        public static void start() {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    showGUI();
                    counter.start();
                }
            });
        }

    where the thread is defined like this:

        public static Thread counter = new Thread() {
            public void run() {
                for (int i=4; i>0; i=i-1) {
                    updateGUI(i,label);
                    try {Thread.sleep(1000);} catch(InterruptedException e) {};
                }
                // The time for the partner selection is over.
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        frame.remove(partnerSelectionPanel);
                        frame.add(selectionFinishedPanel);
                        frame.invalidate();
                        frame.validate();
                    }
                });
            }
        };

    The thread performs a countdown in the "first" window (it shows how much time is left). If the time limit is over, the thread closes the "first" window and generates a new one. I want to modify my thread in the following way:

        public static Thread counter = new Thread() {
            public void run() {
                for (int i=4; i>0; i=i-1) {
                    if (!Thread.interrupted()) {
                        updateGUI(i,label);
                    } else {
                        return;
                    }
                    try {Thread.sleep(1000);} catch(InterruptedException e) {};
                }
                // The time for the partner selection is over.
                if (!Thread.interrupted()) {
                    SwingUtilities.invokeLater(new Runnable() {
                        public void run() {
                            frame.remove(partnerSelectionPanel);
                            frame.add(selectionFinishedPanel);
                            frame.invalidate();
                            frame.validate();
                        }
                    });
                } else {
                    return;
                }
            }
        };

    Read the article

  • Visual Studio Plug-in that can tell the Application Pool name of w3wp.exe when debugging

    - by Colin Niu
    Is there any plug-in for Visual Studio that can display the associated Application Pool name for those w3wp processes when debugging them with "Attach to Process..."? Usually I have to do the following steps before debugging:

        c:\Windows\system32\inetsrv\appcmd list wps

    Then I get the process id for the Application Pool I want to debug, and attach to it in the Attach to Process window. It would be great if a plug-in could do this automatically, but I didn't find any such thing after Googling.

    Read the article

  • Help regarding C# thread pool

    - by Matt
    I have a method that gets called quite often, with text coming in as a parameter.. I'm looking at creating a thread pool that checks the line of text, and performs actions based on that.. Can someone help me out with the basics behind creating the thread pool and firing off new threads please? This is so damn confusing..
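
    A minimal sketch of the built-in pool, assuming the per-line work is self-contained; the class and method names here are illustrative, not from the question.

        using System;
        using System.Threading;

        class LineProcessor
        {
            // Called often with a line of text; hands the work to the runtime's thread pool,
            // which decides how many worker threads actually run it.
            public static void HandleLine(string line)
            {
                ThreadPool.QueueUserWorkItem(state =>
                {
                    string text = (string)state;
                    // ... inspect the text and act on it ...
                    Console.WriteLine("processed: " + text);
                }, line);
            }
        }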

    Read the article

  • Error while excuting a simple boost thread program

    - by Eternal Learner
    Hi all, could you tell me what the problem is with the boost::thread program below?

        #include <iostream>
        #include <boost/thread/thread.hpp>

        boost::mutex mutex;

        class A
        {
        public:
            A() : a(0) {}
            void operator()()
            {
                boost::mutex::scoped_lock lock(mutex);
            }
        private:
            int a;
        };

        int main()
        {
            boost::thread thr1(A());
            boost::thread thr2(A());
            thr1.join();
            thr2.join();
        }

    I get the error message: error: request for member 'join' in 'thr1', which is of non-class type 'boost::thread()(A ()())' BoostThread2.cpp:30: error: request for member 'join' in 'thr2', which is of non-class type 'boost::thread ()(A ()())'

    Read the article

  • Who interrupts my thread?

    - by Thirler
    I understand what an InterruptedException does and why it is thrown. However, in my application I get it when waiting for SwingUtilities.invokeAndWait() on a thread that is only known by my application, and my application never calls Thread.interrupt() on any thread; it also never passes the reference of the thread on to anyone. So my question is: who interrupts my thread? Is there any way to tell? Is there a reason why the InterruptedException doesn't contain the name of the thread that requests the interrupt? I read that it could be a framework or library that does this. We use the following, but I can't think of a reason for them to interrupt my thread: Hibernate, Spring, Log4J, MySQL connector.

    Read the article

  • Using a Cross Thread Boolean to Abort Thread

    - by Jon
    Possible Duplicate: Can a C# thread really cache a value and ignore changes to that value on other threads? Let's say we have this code:

        bool KeepGoing = true;

        DataInThread = new Thread(new ThreadStart(DataInThreadMethod));
        DataInThread.Start();

        //bla bla time goes on
        KeepGoing = false;

        private void DataInThreadMethod()
        {
            while (KeepGoing)
            {
                //Do stuff
            }
        }

    Now the idea is that using the boolean is a safe way to terminate the thread. However, because that boolean exists on the calling thread, does that cause any issue? That boolean is only used on the calling thread to stop the thread, so it's not like it's being used elsewhere.
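
    For reference, a commonly suggested variant (a sketch, not the poster's code): make the flag a field and mark it volatile so the JIT cannot cache it in a register inside the loop; without that, the loop is allowed to never observe the write from the other thread.

        // The flag must be a field (not a local) to be shared between methods, and volatile
        // so every read in the loop goes back to memory rather than a cached register value.
        private volatile bool keepGoing = true;

        private void DataInThreadMethod()
        {
            while (keepGoing)
            {
                // Do stuff
            }
        }

        // From the calling thread, later:
        // keepGoing = false;   // the worker observes this on its next check and exits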

    Read the article

  • Handle leaks with .NET System.Threading.Thread class

    - by Mahno
    I have a problem where the number of handles in my app is continuously growing. I did some debugging and recognized that this is caused by the System.Threading.Thread class, which is used for some routine. To simplify the debugging I created a sample .NET application:

        ...
        private void button1_Click(object sender, EventArgs e)
        {
            Thread t = new Thread(DoWork);
            t.Start();
        }

        public void DoWork(object parameter)
        {
            // Do something...
        }
        ...

    Each time I click the button, a thread is created using the System.Threading.Thread class. The problem is that it looks like the thread does not free its handles, because each click causes the number of handles to grow by ~5. The question is: how can I manually free all handles created by the System.Threading.Thread class? Thanks in advance.
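
    One hedged possibility, as a sketch rather than a definitive answer: the OS handle behind a finished Thread is typically only released once the Thread object has been collected, so short-lived work is often queued to the runtime's thread pool instead of creating a new Thread per click; the pool reuses a small set of threads and avoids accumulating handles. DoWork is the method from the question.

        private void button1_Click(object sender, EventArgs e)
        {
            // Reuse pooled threads instead of creating a brand-new Thread (and handle) per click.
            ThreadPool.QueueUserWorkItem(DoWork);
        }

        public void DoWork(object parameter)
        {
            // Do something...
        }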

    Read the article

  • Boost thread synchronization in release build

    - by Joseph16
    Hi, when I try to run the following code in debug and release mode in VS2005, each time I see different output in the console, and it doesn't seem like multithreading is achieved in release mode.

        #include <boost/thread.hpp>
        #include <iostream>

        void wait(int seconds)
        {
            boost::this_thread::sleep(boost::posix_time::seconds(seconds));
        }

        boost::mutex mutex;

        void thread()
        {
            for (int i = 0; i < 5; ++i)
            {
                //wait(1);
                mutex.lock();
                std::cout << "Thread " << boost::this_thread::get_id() << ": " << i << std::endl;
                mutex.unlock();
            }
        }

        int main()
        {
            boost::thread t1(thread);
            boost::thread t2(thread);
            t1.join();
            t2.join();
        }

    Debug mode:

        Thread 00153E60: 0
        Thread 00153E90: 0
        Thread 00153E60: 1
        Thread 00153E90: 1
        Thread 00153E90: 2
        Thread 00153E60: 2
        Thread 00153E90: 3
        Thread 00153E60: 3
        Thread 00153E60: 4
        Thread 00153E90: 4
        Press any key to continue . . .

    Release mode:

        Thread 00153D28: 0
        Thread 00153D28: 1
        Thread 00153D28: 2
        Thread 00153D28: 3
        Thread 00153D28: 4
        Thread 00153D58: 0
        Thread 00153D58: 1
        Thread 00153D58: 2
        Thread 00153D58: 3
        Thread 00153D58: 4
        Press any key to continue . . .

    Read the article

  • Run sub on main thread from separate thread [VB.NET|SerialPort]

    - by Steven
    I'm reading data from a serial port, but the DataReceived event of SerialPort is handled on its own thread. I want to handle this on the main thread, but simply declaring an event and raising it still results in it being processed on the SerialPort thread. I'm assuming I need to declare a delegate I can call, but I don't see how that would work. For example, I want to call Sub HandleDataReceived() on the main thread from the DataReceived thread, having HandleDataReceived() run on the main thread. How would I do this?
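
    A sketch of the usual pattern, shown here in C# syntax (the same Control.BeginInvoke member exists in VB.NET): from the DataReceived handler, marshal the call back to the UI thread through the form. HandleDataReceived is the poster's method; the handler name is illustrative.

        private void port_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
        {
            // DataReceived fires on a worker thread; hand the follow-up work to the UI thread.
            this.BeginInvoke(new System.Windows.Forms.MethodInvoker(HandleDataReceived));
        }

        private void HandleDataReceived()
        {
            // Runs on the form's (main) thread and may safely touch controls.
        }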

    Read the article

  • Rails: Thread won't affect database unless joined to main Thread

    - by hatboysam
    I have a background operation I would like to run every 20 seconds in Rails, given that some condition is true. It is kicked off when a certain controller route is hit, and it looks like this:

        def startProcess
          argId = self.id
          t = Thread.new do
            while (Argument.isRunning(argId)) do
              Argument.update(argId)
              Argument.markVotes(argId)
              puts "Thread ran"
              sleep 20
            end
          end
        end

    However, this code does absolutely nothing to my database unless I call "t.join", in which case my whole server is blocked for a long time (but it works). Why can't the thread commit ActiveRecords without being joined to the main thread? The thread calls methods that look something like:

        def sample
          model = Model.new()
          model.save()
        end

    but the models are not saved to the DB unless the thread is joined to the main thread. Why is this? I have been banging my head against this for hours.

    Read the article

  • .NET Thread Pool - Unresponsive WinForms UI

    - by Goober
    Scenario: I have a Windows Forms application. Inside the main form there is a loop that iterates around 3000 times, creating a new instance of a class on a new thread to perform some calculations. Bearing in mind that this setup uses a thread pool, the UI does stay responsive when there are only around 100 iterations of this loop (100 assets to process). But as soon as this number begins to increase heavily, the UI locks up into eggtimer mode and thus the log that is being written out to the listbox on the form becomes unreadable. Question: Am I right in thinking that the best way around this is to use a BackgroundWorker? And is the UI locking up because, even though I'm using lots of different threads (for speed), the UI itself is not on its own separate thread? Suggested implementations greatly appreciated.
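
    A minimal sketch of the BackgroundWorker shape being asked about: the heavy loop runs off the UI thread and only small progress messages are marshalled back to the form. assets, ProcessAsset and logListBox are illustrative stand-ins for the ~3000 items, the per-asset calculation and the form's listbox.

        var worker = new System.ComponentModel.BackgroundWorker { WorkerReportsProgress = true };

        worker.DoWork += (s, e) =>
        {
            for (int i = 0; i < assets.Count; i++)   // assets: the items to process (assumed)
            {
                ProcessAsset(assets[i]);             // the existing calculation (assumed)
                worker.ReportProgress(i);            // cheap message back to the UI thread
            }
        };

        worker.ProgressChanged += (s, e) =>
        {
            // Runs on the UI thread; keep it light so the form stays responsive.
            logListBox.Items.Add("Processed asset " + e.ProgressPercentage);
        };

        worker.RunWorkerAsync();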

    Read the article

  • C++ boost thread reusing threads

    - by aaa
    Hi. I am trying to accomplish something like this:

        thread t;     // create/initialize thread
        t.launch();   // launch thread
        t.wait();     // wait
        t.launch();   // relaunch the same thread

    How do I go about implementing something like this using Boost threads? In essence, I need a persistent, relaunchable thread. Thanks

    Read the article

  • Java: How to make this main thread wait for the new thread to terminate

    - by Jeff Bullard
    I have a Java class that creates a process, called child, using ProcessBuilder. The child process generates a lot of output that I am draining on a separate thread to keep the main thread from getting blocked. However, a little later on I need to wait for the output thread to complete/terminate before going on, and I'm not sure how to do that. I think that join() is the usual way to do this but I'm not sure how to apply it in this case. Here is the relevant part of the Java code.

        // Capture output from process called child on a separate thread
        final StringBuffer outtext = new StringBuffer("");
        new Thread(new Runnable() {
            public void run() {
                InputStream in = null;
                in = child.getInputStream();
                try {
                    if (in != null) {
                        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
                        String line = reader.readLine();
                        while ((line != null)) {
                            outtext.append(line).append("\n");
                            ServerFile.appendUserOpTextFile(userName, opname, outfile, line+"\n");
                            line = reader.readLine();
                        }
                    }
                } catch (IOException iox) {
                    throw new RuntimeException(iox);
                }
            }
        }).start();

        // Write input for the child process on this main thread
        String intext = ServerFile.readUserOpTextFile(userName, opname, infile);
        OutputStream out = child.getOutputStream();
        try {
            out.write(intext.getBytes());
            out.close();
        } catch (IOException iox) {
            throw new RuntimeException(iox);
        }

        // ***HERE IS WHERE I NEED TO WAIT FOR THE THREAD TO FINISH ***

        // Other code goes here that needs to wait for outtext to get all
        // of the output from the process

        // Then, finally, when all the remaining code is finished, I return
        // the contents of outtext
        return outtext.toString();

    Read the article

  • Data Source Connection Pool Sizing

    - by Steve Felts
    One of the most time-consuming procedures of a database application is establishing a connection. The connection pooling of the data source can be used to minimize this overhead. That argues for using the data source instead of accessing the database driver directly.

    Configuring the size of the pool in the data source is somewhere between an art and a science – this article will try to move it closer to science. From the beginning, WLS data source has had initial capacity and maximum capacity configuration values. When the system starts up and when it shrinks, initial capacity is used. The pool can grow to maximum capacity. Customers found that they might want to set the initial capacity to 0 (more on that later) but didn't want the pool to shrink to 0. In WLS 10.3.6, we added minimum capacity to specify the lower limit to which a pool will shrink. If minimum capacity is not set, it defaults to the initial capacity for upward compatibility. We also did some work on the shrinking in release 10.3.4 to reduce thrashing; the algorithm that used to shrink to the maximum of the currently used connections or the initial capacity (basically the unused connections were all released) was changed to shrink by half of the unused connections.

    The simple approach to sizing the pool is to set the initial/minimum capacity to the maximum capacity. Doing this creates all connections at startup, avoiding creating connections on demand, and the pool is stable. However, there are a number of reasons not to take this simple approach.

    When WLS is booted, the deployment of the data source includes synchronously creating the connections. The more connections that are configured in initial capacity, the longer the boot time for WLS (there have been several projects for parallel boot in WLS but none that are available). Related to creating a lot of connections at boot time is the problem of logon storms (the database gets too much work at one time). WLS has a solution for that by setting the login delay seconds on the pool, but that also increases the boot time.

    There are a number of cases where it is desirable to set the initial capacity to 0. By doing that, the overhead of creating connections is deferred out of the boot and the database doesn't need to be available. An application may not want WLS to automatically connect to the database until it is actually needed, such as for some cold/warm failover configurations.

    There are a number of cases where minimum capacity should be less than maximum capacity. Connections are generally expensive to keep around. They cause state to be kept on both the client and the server, and the state on the backend may be heavy (for example, a process). Depending on the vendor, connection usage may cost money. If the workload is not constant, then database connections can be freed up by shrinking the pool when connections are not in use. When using Active GridLink, connections can be created as needed according to runtime load balancing (RLB) percentages instead of by connection load balancing (CLB) during data source deployment.

    Shrinking is an effective technique for clearing the pool when connections are not in use. In addition to the obvious reason that there are times when the workload is lighter, there are some configurations where the database and/or firewall conspire to make long-unused or too-old connections no longer viable. There are also some data source features where the connection has state and cannot be used again unless the state matches the request. Examples of this are identity based pooling, where the connection has a particular owner, and XA affinity, where the connection is associated with a particular RAC node. At this point, WLS does not re-purpose (discard/replace) connections and shrinking is a way to get rid of the unused existing connection and get a new one with the correct state when needed.

    So far, the discussion has focused on the relationship of initial, minimum, and maximum capacity. Computing the maximum size requires some knowledge about the application and the current number of simultaneously active users, web sessions, batch programs, or whatever access patterns are common. The applications should be written to only reserve and close connections as needed, but multiple statements, if needed, should be done in one reservation (don't get/close more often than necessary). This means that the size of the pool is likely to be significantly smaller than the number of users.

    If possible, you can pick a size and see how it performs under simulated or real load. There is a high-water mark statistic (ActiveConnectionsHighCount) that tracks the maximum connections concurrently used. In general, you want the size to be big enough so that you never run out of connections but no bigger. It will need to deal with spikes in usage, which is where shrinking after the spike is important. Of course, the database capacity also has a big influence on the decision since it's important not to overload the database machine. Planning also needs to happen if you are running in a Multi Data Source or Active GridLink configuration and expect that the remaining nodes will take over the connections when one of the nodes in the cluster goes down. For XA affinity, additional headroom is also recommended.

    In summary, setting initial and maximum capacity to be the same may be simple, but there are many other factors that may be important in making the decision about sizing.

    Read the article

  • c# CF Restart a thread

    - by Ed
    Hi all, suppose you have a form with a button that starts/stops a thread (NOT pausing or interrupting, I really need to stop it!). Check out this code:

        Constructor()
        {
            m_TestThread = new Thread(new ThreadStart(ButtonsThread));
            m_bStopThread = false;
        }

        ButtonClick
        {
            // If the thread is not running, start it
            m_TestThread.Start();

            // If the thread is running, stop it
            m_bStopThread = true;
            m_TestThread.Join();
        }

        ThreadFunction()
        {
            while(!m_bStopThread)
            {
                // Do work
            }
        }

    Two questions (remember, CF):
    - How can I know if the thread is running? (I cannot seem to access m_pThreadState, and I've tried the C++ GetThreadExitCode(); it gives false results.)
    - Most important question: if I have stopped the thread, I cannot restart it, most probably because m_TestThread.m_fStarted is still set (and it is private, so I cannot access it)! And thus m_TestThread.Start() generates an exception (StateException). Stopping the thread with an Abort() doesn't solve it. If I put m_TestThread = null; it works, but then I create a memory leak. The GC doesn't clean up either, even if I wait for xx seconds. Does anybody have an idea? All help highly appreciated! Grtz E
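
    A sketch of the usual workaround, not a definitive answer: a managed Thread can only be started once, so "restart" generally means discarding the finished Thread object and constructing a fresh one; letting the old object go out of scope is not a leak once the GC runs. m_TestThread, m_bStopThread and ThreadFunction are from the question; m_bRunning is an illustrative flag tracked by the code itself.

        private Thread m_TestThread;
        private volatile bool m_bStopThread;
        private bool m_bRunning;              // tracked manually instead of querying the Thread

        void ButtonClick()
        {
            if (m_bRunning)
            {
                m_bStopThread = true;         // ask the loop to exit
                m_TestThread.Join();          // wait for it to finish
                m_bRunning = false;
            }
            else
            {
                // A Thread object is single-use: create a fresh one for every start.
                m_bStopThread = false;
                m_TestThread = new Thread(new ThreadStart(ThreadFunction));
                m_TestThread.Start();
                m_bRunning = true;
            }
        }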

    Read the article
