Search Results

Search found 8166 results on 327 pages for 'thread syncronization'.

Page 279/327

  • how to return a list using SwingWorker

    - by Ender
    I have an assignment where I have to create an image gallery that uses a SwingWorker to load the images from a file; once the images are loaded you can flip through them and play a slideshow. I am having trouble getting the list of loaded images out of the SwingWorker. This is what happens in the background; it just publishes the results to a TextArea:

        // In a worker thread
        @Override
        public List<Image> doInBackground() {
            List<Image> images = new ArrayList<Image>();
            for (File filename : filenames) {
                try {
                    //File file = new File(filename);
                    System.out.println("Attempting to add: " + filename.getAbsolutePath());
                    images.add(ImageIO.read(filename));
                    publish("Loaded " + filename);
                    System.out.println("Added file" + filename.getAbsolutePath());
                } catch (IOException ioe) {
                    publish("Error loading " + filename);
                }
            }
            return images;
        }

    When it is done, I just insert the images into a List<Image>, and that is all it does:

        // In the EDT
        @Override
        protected void done() {
            try {
                for (Image image : get()) {
                    list.add(image);
                }
            } catch (Exception e) {
            }
        }

    I also created a method called getImages() that returns the list. What I need is the list from getImages(), but it doesn't seem to work when I call execute(). For example:

        MySwingWorkerClass swingworker = new MySwingWorkerClass(log, list, filenames);
        swingworker.execute();
        imageList = swingworker.getImages();

    Once it reaches the imageList line, nothing is returned. The only way I was able to get the list was when I used run() instead of execute(). Is there another way to get the list, or is the run() method the only way? Or perhaps I am not understanding the SwingWorker class.
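
    For reference, a getter called right after execute() returns before doInBackground() has produced anything; the result is normally consumed either inside done() on the EDT, or by calling get(), which blocks until the worker finishes. Below is a minimal sketch along those lines, with hypothetical field names standing in for the gallery state; it is one possible arrangement, not the assignment's required structure.

        import java.awt.Image;
        import java.io.File;
        import java.io.IOException;
        import java.util.ArrayList;
        import java.util.List;
        import javax.imageio.ImageIO;
        import javax.swing.SwingWorker;

        class ImageLoadWorker extends SwingWorker<List<Image>, String> {
            private final List<File> files;     // files to load
            private final List<Image> gallery;  // hypothetical list owned by the gallery panel

            ImageLoadWorker(List<File> files, List<Image> gallery) {
                this.files = files;
                this.gallery = gallery;
            }

            @Override
            protected List<Image> doInBackground() {
                List<Image> images = new ArrayList<Image>();
                for (File f : files) {
                    try {
                        images.add(ImageIO.read(f));
                        publish("Loaded " + f);
                    } catch (IOException e) {
                        publish("Error loading " + f);
                    }
                }
                return images;
            }

            @Override
            protected void done() {            // runs on the EDT after doInBackground() returns
                try {
                    gallery.addAll(get());     // get() no longer blocks here; the result is ready
                    // start the slideshow / repaint the gallery from this point
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }
        }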

    Read the article

  • What's a reasonable way to mutate a primitive variable from an anonymous Java class?

    - by Steve
    I would like to write the following code:

        boolean found = false;
        search(new SearchCallback() {
            @Override
            void onFound(Object o) {
                found = true;
            }
        });

    Obviously this is not allowed, since found needs to be final. I can't make found a member field for thread-safety reasons. What is the best alternative? One workaround is to define:

        final class MutableReference<T> {
            private T value;
            MutableReference(T value) { this.value = value; }
            T get() { return value; }
            void set(T value) { this.value = value; }
        }

    but this ends up taking a lot of space when formatted properly, and I'd rather not reinvent the wheel if at all possible. I could use a List<Boolean> with a single element (either mutating that element, or else emptying the list), or even a Boolean[1]. But everything seems to smell funny, since none of the options are being used as they were intended. What is a reasonable way to do this?
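
    One common way out, sketched under the assumption that the flag only needs to be read after search() returns: keep it in a java.util.concurrent.atomic.AtomicBoolean, which is essentially the MutableReference above, already shipped with the JDK and safe to share between threads. The SearchCallback and search() below are stand-ins for the types in the question.

        import java.util.concurrent.atomic.AtomicBoolean;

        public class CallbackFlagExample {
            interface SearchCallback { void onFound(Object o); }   // stand-in for the real interface

            static void search(SearchCallback cb) {                // hypothetical search that may call back
                cb.onFound("match");
            }

            public static void main(String[] args) {
                final AtomicBoolean found = new AtomicBoolean(false);
                search(new SearchCallback() {
                    @Override
                    public void onFound(Object o) {
                        found.set(true);   // legal: the reference is final, the value is not
                    }
                });
                System.out.println("found = " + found.get());
            }
        }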

    Read the article

  • Problem with MANIFEST.MF in jar

    - by Atul
    Hi, I have created a jar in the following location: /usr/local/bin/niidle.jar. My MANIFEST.MF file is as follows:

        Manifest-Version: 1.0
        Main-Class: com.ensarm.niidle.web.scraper.NiidleScrapeManager
        Class-Path: hector-0.6.0-17.jar

    I verified that this hector-0.6.0-17.jar file is present, in the folder /Projects/EnwelibDatedOct13/Niidle/lib/hector-0.6.0-17.jar. I don't want to give a full path in the Class-Path entry, because I have to run this jar on other machines, so I gave only the jar file name (Class-Path: hector-0.6.0-17.jar) in the MANIFEST.MF file. In spite of mentioning the Class-Path in MANIFEST.MF, when I run it using the command:

        java -jar /usr/local/bin/niidle.jar arguments...

    it shows this error message:

        Exception in thread "main" java.lang.NoClassDefFoundError: me/prettyprint/hector/api/Serializer
            at com.ensarm.niidle.web.scraper.NiidleScrapeManager.main(NiidleScrapeManager.java:21)
        Caused by: java.lang.ClassNotFoundException: me.prettyprint.hector.api.Serializer
            at java.net.URLClassLoader$1.run(URLClassLoader.java:200)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:188)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:252)
            at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:320)
            ... 1 more

    Please give me a solution for this error message.
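
    For what it's worth, relative Class-Path entries in a manifest are resolved against the directory that contains the jar itself, not against the current working directory. A layout along these lines (the paths are only illustrative) would let the manifest above find its dependency:

        /usr/local/bin/niidle.jar                <- manifest: Class-Path: hector-0.6.0-17.jar
        /usr/local/bin/hector-0.6.0-17.jar       <- must sit next to niidle.jar

        /usr/local/bin/lib/hector-0.6.0-17.jar   <- alternative, with manifest: Class-Path: lib/hector-0.6.0-17.jar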

    Read the article

  • Parallel Tasking Concurrency with Dependencies on Python like GNU Make

    - by Brian Bruggeman
    I'm looking for a method, or possibly a philosophical approach, for how to do something like GNU Make within Python. Currently we utilize makefiles to execute processing, because makefiles are extremely good at parallel runs with a single changing option: -j x. In addition, GNU Make already has the dependency stacks built into it, so adding a second processor, or the ability to process more threads, just means updating that single option. I want that same power and flexibility in Python, but I don't see it. As an example:

        all: dependency_a dependency_b dependency_c

        dependency_a: dependency_d
                stuff

        dependency_b: dependency_d
                stuff

        dependency_c: dependency_e
                stuff

        dependency_d: dependency_f
                stuff

        dependency_e:
                stuff

        dependency_f:
                stuff

    If we do a standard single-threaded run (-j 1), the order of operations might be:

        dependency_f -> dependency_d -> dependency_a -> dependency_b -> dependency_e -> dependency_c

    For two threads (-j 2), we might see:

        1: dependency_f -> dependency_d -> dependency_a -> dependency_b
        2: dependency_e -> dependency_c

    Does anyone have any suggestions on either a package already built or an approach? I'm totally open, provided it's a Pythonic solution/approach. Please and thanks in advance!

    Read the article

  • How to add deploy.jar to classpath?

    - by dma_k
    I am facing this problem: I need to add the ${java.home}/lib/deploy.jar JAR file to the classpath at runtime (dynamically, from Java).

    The solution with Thread#setContextClassLoader(ClassLoader) (mentioned here) does not work because of this bug (if somebody can explain what the problem really is, you are welcome).

    The solution with -Xbootclasspath/a:"%JAVA_HOME%/jre/lib/deploy.jar" does not work well for me, because I want to have a "pure executable jar" as the deliverable: no wrapper scripts, please (moreover, %JAVA_HOME% may not be defined in the user's environment, on Windows for example, plus I would need to write a script per platform).

    The solution with merging the deploy.jar file into my deliverable works only if I make the build on Windows. Unfortunately, when the deliverable is produced on a build server running Linux, I get a Linux-dependent JAR, which does not run on Windows; it fails with the trace below.

    I have read the How the Java Launcher Finds Classes and Java programming dynamics: Java classes and class loading articles, but I've got no extra ideas on how to correctly handle this situation. Any advice or solutions are very welcome.

    Trace:

        java.lang.NoClassDefFoundError: Could not initialize class com.sun.deploy.config.Config
            at com.sun.deploy.net.proxy.UserDefinedProxyConfig.getBrowserProxyInfo(UserDefinedProxyConfig.java:43)
            at com.sun.deploy.net.proxy.DynamicProxyManager.reset(DynamicProxyManager.java:235)
            at com.sun.deploy.net.proxy.DeployProxySelector.reset(DeployProxySelector.java:59)
            ...
        java.lang.NullPointerException
            at com.sun.deploy.net.proxy.DynamicProxyManager.getProxyList(DynamicProxyManager.java:63)
            at com.sun.deploy.net.proxy.DeployProxySelector.select(DeployProxySelector.java:166)
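
    One workaround people use on JREs where the application class loader is a URLClassLoader (Java 8 and earlier) is to append the jar to the system class loader through reflection. The sketch below shows that hack; it is not a guaranteed fix for deploy.jar in particular, since the com.sun.deploy classes may have initialization requirements of their own.

        import java.io.File;
        import java.lang.reflect.Method;
        import java.net.URL;
        import java.net.URLClassLoader;

        public final class RuntimeClasspath {
            /** Appends a jar to the system class loader; only works where that loader is a URLClassLoader. */
            public static void addJar(File jar) throws Exception {
                URLClassLoader sysLoader = (URLClassLoader) ClassLoader.getSystemClassLoader();
                Method addUrl = URLClassLoader.class.getDeclaredMethod("addURL", URL.class);
                addUrl.setAccessible(true);                    // addURL is protected
                addUrl.invoke(sysLoader, jar.toURI().toURL());
            }

            public static void main(String[] args) throws Exception {
                addJar(new File(System.getProperty("java.home"), "lib/deploy.jar"));
                // classes from deploy.jar should now resolve through the system class loader
            }
        }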

    Read the article

  • How to change internal buffer size of DataInputStream

    - by Gaks
    I'm using this kind of code for my TCP/IP connection:

        sock = new Socket(host, port);
        sock.setKeepAlive(true);
        din = new DataInputStream(sock.getInputStream());
        dout = new DataOutputStream(sock.getOutputStream());

    Then, in a separate thread, I check din.available() to see if there are incoming packets to read. The problem is that if a packet bigger than 2048 bytes arrives, din.available() returns 2048 anyway, just as if there were a 2048-byte internal buffer. I can't read those 2048 bytes when I know it's not the full packet my application is waiting for. If I don't read them, however, everything gets stuck at 2048 bytes and never receives more.

    Can I enlarge the buffer size of DataInputStream somehow? The socket receive buffer is 16384, as returned by sock.getReceiveBufferSize(), so it's not the socket limiting me to 2048 bytes. If there is no way to increase the DataInputStream buffer size, I guess the only way is to declare my own buffer and read everything from the DataInputStream into that buffer?

    Regards
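
    For comparison, the usual way around polling available() is to give the protocol explicit framing: write a length prefix, then block in readFully() until the whole payload has arrived, however TCP happened to split it. The 4-byte length header below is an assumption about the wire format, not something DataInputStream provides by itself.

        import java.io.DataInputStream;
        import java.io.DataOutputStream;
        import java.io.IOException;

        final class Framing {
            /** Writes a 4-byte length header followed by the payload. */
            static void writeFrame(DataOutputStream out, byte[] payload) throws IOException {
                out.writeInt(payload.length);
                out.write(payload);
                out.flush();
            }

            /** Blocks until one complete frame has arrived, regardless of how it was segmented. */
            static byte[] readFrame(DataInputStream in) throws IOException {
                int length = in.readInt();      // blocks until the 4 header bytes arrive
                byte[] payload = new byte[length];
                in.readFully(payload);          // blocks until the whole payload arrives
                return payload;
            }
        }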

    Read the article

  • Difference between BackgroundWorker.ReportProgress() and Control.Invoke()

    - by ohadsc
    What is the difference between options 1 and 2 in the following?

        private void BGW_DoWork(object sender, DoWorkEventArgs e)
        {
            for (int i = 1; i <= 100; i++)
            {
                string txt = i.ToString();
                if (Test_Check.Checked)
                    // OPTION 1
                    Test_BackgroundWorker.ReportProgress(i, txt);
                else
                    // OPTION 2
                    this.Invoke((Action<int, string>)UpdateGUI, new object[] { i, txt });
            }
        }

        private void BGW_ProgressChanged(object sender, ProgressChangedEventArgs e)
        {
            UpdateGUI(e.ProgressPercentage, (string)e.UserState);
        }

        private void UpdateGUI(int percent, string txt)
        {
            Test_ProgressBar.Value = percent;
            Test_RichTextBox.AppendText(txt + Environment.NewLine);
        }

    Looking at Reflector, Control.Invoke() appears to use:

        this.FindMarshalingControl().MarshaledInvoke(this, method, args, 1);

    whereas BackgroundWorker.ReportProgress() appears to use:

        this.asyncOperation.Post(this.progressReporter, args);

    (I'm just guessing these are the relevant function calls.) If I understand correctly, the BackgroundWorker posts its progress report request to the WinForms window, whereas Control.Invoke uses a CLR mechanism to invoke on the right thread. Am I close? And if so, what are the repercussions of using either?

    Thanks

    Read the article

  • Java Servlet: getInitParameter does not work in service()

    - by Gabriele
    I've added some parameters to my web.xml config file, as follows:

        <context-param>
            <param-name>service1</param-name>
            <param-value>http://www.example.com/example2.html</param-value>
        </context-param>
        <context-param>
            <param-name>service2</param-name>
            <param-value>http://www.example.com/example2.html</param-value>
        </context-param>
        ...

    I try to get the parameter in my servlet, in particular in my service method:

        protected void service(HttpServletRequest request, HttpServletResponse response)
                throws ServletException, IOException {
            System.out.println(this.getServletContext().getInitParameter("service1"));
            ...

    but at runtime I get a NullPointerException. How can I get the parameter value included in web.xml? This is the stack trace:

        GRAVE: Servlet.service() for servlet DispatcherServlet threw exception
        java.lang.NullPointerException
            at javax.servlet.GenericServlet.getServletContext(GenericServlet.java:160)
            at it.servlethope.DispatcherServlet.service(DispatcherServlet.java:66)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.tuckey.web.filters.urlrewrite.RuleChain.handleRewrite(RuleChain.java:176)
            at org.tuckey.web.filters.urlrewrite.RuleChain.doRules(RuleChain.java:145)
            at org.tuckey.web.filters.urlrewrite.UrlRewriter.processRequest(UrlRewriter.java:92)
            at org.tuckey.web.filters.urlrewrite.UrlRewriteFilter.doFilter(UrlRewriteFilter.java:381)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Unknown Source)

    DispatcherServlet.java:66 is the line where I call getInitParameter().
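
    The frame in that trace, GenericServlet.getServletContext(), typically throws a NullPointerException when a servlet overrides init(ServletConfig) without calling super.init(config), so the container-supplied config is never stored. Below is a sketch of the pattern that keeps getServletContext() and the context parameters reachable; everything beyond the class name from the trace is an assumption.

        import java.io.IOException;
        import javax.servlet.ServletConfig;
        import javax.servlet.ServletException;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class DispatcherServlet extends HttpServlet {

            @Override
            public void init(ServletConfig config) throws ServletException {
                super.init(config);   // without this, GenericServlet never stores the config
                                      // and getServletContext() throws NullPointerException
                // custom initialization goes here
            }

            @Override
            protected void service(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                // <context-param> values are read from the ServletContext, not the servlet's own init params
                String service1 = getServletContext().getInitParameter("service1");
                response.getWriter().println(service1);
            }
        }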

    Read the article

  • How to efficiently store and update binary data in Mongodb?

    - by Rocketman
    I am storing a large binary array within a document. I wish to continually add bytes to this array and sometimes change the value of existing bytes.

    I was looking for some $append_bytes and $replace_bytes type of modifiers, but it appears that the best I can do is $push for arrays. It seems like this would be doable by performing seek-write type operations if I somehow had access to the underlying BSON on disk, but it does not appear to me that there is any way to do this in MongoDB (and probably for good reason).

    If I were instead to just query this binary array, edit it or add to it, and then update the document by rewriting the entire field, how costly would this be? Each binary array will be on the order of 1-2 MB, and updates occur once every 5 minutes across thousands of documents. Worse yet, there is no easy way to spread them out in time, and they will usually happen close to one another on the 5-minute intervals. Does anyone have a good feel for how disastrous this will be? It seems like it would be problematic.

    An alternative would be to store this binary data as separate files on disk, implement a thread pool to efficiently manipulate the files on disk, and reference the filename from my MongoDB document. (I'm using Python and pymongo, so I was looking at pytables.) I'd prefer to avoid this if possible.

    Is there any other alternative that I am overlooking here? Thanks in advance.

    Read the article

  • C socket programming: client send() but server select() doesn't see it

    - by Fantastic Fourier
    Hey all, I have a server and a client running on two different machines, where the client send()s but the server doesn't seem to receive the message. The server employs select() to monitor sockets for any incoming connections/messages. I can see that when the server accepts a new connection it updates the fd_set array, but select() always returns 0 despite the client send()ing messages. The connection is TCP and the machines are separated by one router, so dropping packets is highly unlikely. I have a feeling that it's not select() but perhaps send()/sendto() from the client that may be the problem, but I'm not sure how to go about localizing the problem area.

        while (1) {
            readset = info->read_set;
            ready = select(info->max_fd + 1, &readset, NULL, NULL, &timeout);
        }

    The above is the server-side code, where the server has a thread that runs select() indefinitely.

        rv = connect(sockfd, (struct sockaddr *) &server_address, sizeof(server_address));
        printf("rv = %i\n", rv);
        if (rv < 0) {
            printf("MAIN: ERROR connect() %i: %s\n", errno, strerror(errno));
            exit(1);
        } else
            printf("connected\n");

        sleep(3);

        char * somemsg = "is this working yet?\0";
        rv = send(sockfd, somemsg, sizeof(somemsg), NULL);
        if (rv < 0)
            printf("MAIN: ERROR send() %i: %s\n", errno, strerror(errno));
        printf("MAIN: rv is %i\n", rv);

        rv = sendto(sockfd, somemsg, sizeof(somemsg), NULL, &server_address, sizeof(server_address));
        if (rv < 0)
            printf("MAIN: ERROR sendto() %i: %s\n", errno, strerror(errno));
        printf("MAIN: rv is %i\n", rv);

    This is the client side, where it connects and sends the messages. It prints:

        connected
        MAIN: rv is 4
        MAIN: rv is 4

    Any comments or insightful insights are appreciated.

    Read the article

  • Framework or design pattern for mailing all users of a webapp

    - by Todd Owen
    My app takes care of user registration (with the option to receive email announcements), and can easily handle the actual template-based rendering of email for a given user. JavaMail provides the mail transport layer. But how should I design the application layer between the business objects (e.g. User) and the mail transport?

    The straightforward approach would be a simple, synchronous loop: iterate through the users, queue the emails, and be done with it. "Queue" might mean sending them straight to the MTA (mail server), or to an in-memory queue to be consumed by another thread. However, I also plan to implement features like throttling the rate of emails, processing bounced emails (NDRs), and maintaining status across application restarts. My intuition is that a good design would decouple this from both the business layer and the mail transport layer as much as possible.

    I wondered if others had solved this problem before, but after much searching I haven't found any Java libraries which seem to fit this problem. Standalone mail apps such as James or list servers are too large in scope; packages like Spring's MailSender or Commons Email are too small in scope (being basically drop-in replacements for JavaMail). For other languages I haven't found anything appropriate either. I'm curious about how other developers have gone about adding bulk mailing to their applications.

    Read the article

  • The CLR has been unable to transition from COM context [...] for 60 seconds

    - by BlueRaja The Green Unicorn
    I am getting this error on code that used to work. I have not changed the code. Here is the full error:

        The CLR has been unable to transition from COM context 0x3322d98 to COM context 0x3322f08
        for 60 seconds. The thread that owns the destination context/apartment is most likely
        either doing a non pumping wait or processing a very long running operation without
        pumping Windows messages. This situation generally has a negative performance impact
        and may even lead to the application becoming non responsive or memory usage accumulating
        continually over time. To avoid this problem, all single threaded apartment (STA) threads
        should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump
        messages during long running operations.

    And here is the code that caused it:

        var openFileDialog1 = new System.Windows.Forms.OpenFileDialog();
        openFileDialog1.DefaultExt = "mdb";
        openFileDialog1.Filter = "Management Database (manage.mdb)|manage.mdb";

        // Stalls indefinitely on the following line, then gives the CLR error
        // one minute later. The dialog never opens.
        if (openFileDialog1.ShowDialog() == DialogResult.OK)
        {
            ....
        }

    Yes, I am sure the dialog is not open in the background, and no, I don't have any explicit COM code, unmanaged marshalling, or multithreading. I have no idea why the OpenFileDialog won't open. Any ideas?

    Read the article

  • Maximum page fetch with maximum bandwidth

    - by Ehsan
    Hi, I want to create an application like a spider. I've implemented page fetching as in the following code, in a multi-threaded application, but there are two problems:

    1) I want to use my maximum bandwidth to send/receive requests. How should I configure my requests to do so (like download-accelerator applications and the like)? I heard a normal application will use about 66% of the available bandwidth.

    2) I don't know what exactly HttpWebRequest.KeepAlive does, but as its name implies, I think I can create a connection to a website and, without closing the connection, send another request to that website using the existing connection. Does it boost performance, or am I wrong?

        public PageFetchResult Fetch()
        {
            PageFetchResult fetchResult = new PageFetchResult();
            try
            {
                HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(URLAddress);
                HttpWebResponse resp = (HttpWebResponse)req.GetResponse();

                Uri requestedURI = new Uri(URLAddress);
                Uri responseURI = resp.ResponseUri;
                if (Uri.Equals(requestedURI, responseURI))
                {
                    string resultHTML = "";
                    byte[] reqHTML = ResponseAsBytes(resp);
                    if (!string.IsNullOrEmpty(FetchingEncoding))
                        resultHTML = Encoding.GetEncoding(FetchingEncoding).GetString(reqHTML);
                    else if (!string.IsNullOrEmpty(resp.CharacterSet))
                        resultHTML = Encoding.GetEncoding(resp.CharacterSet).GetString(reqHTML);

                    req.Abort();
                    resp.Close();
                    fetchResult.IsOK = true;
                    fetchResult.ResultHTML = resultHTML;
                }
                else
                {
                    URLAddress = responseURI.AbsoluteUri;
                    relayPageCount++;
                    if (relayPageCount > 5)
                    {
                        fetchResult.IsOK = false;
                        fetchResult.ErrorMessage = "Maximum page redirection occured.";
                        relayPageCount = 0;
                        return fetchResult;
                    }
                    req.Abort();
                    resp.Close();
                    return Fetch();
                }
            }
            catch (Exception ex)
            {
                fetchResult.IsOK = false;
                fetchResult.ErrorMessage = ex.Message;
            }
            return fetchResult;
        }

    Read the article

  • I am not able to kill a child process using TerminateProcess

    - by user1681210
    I have a problem killing a child process using TerminateProcess. I call this function and the process is still there (in the Task Manager). This piece of code is called many times, launching the same program.exe many times, and these processes stay there in the Task Manager, which I think is not good. Sorry, I am quite new to C++. I will really appreciate any help. Thanks a lot!

    The code is the following:

        STARTUPINFO childProcStartupInfo;
        memset(&childProcStartupInfo, 0, sizeof(childProcStartupInfo));
        childProcStartupInfo.cb = sizeof(childProcStartupInfo);
        childProcStartupInfo.hStdInput  = hFromParent;   // stdin
        childProcStartupInfo.hStdOutput = hToParent;     // stdout
        childProcStartupInfo.hStdError  = hToParentDup;  // stderr
        childProcStartupInfo.dwFlags = STARTF_USESTDHANDLES | STARTF_USESHOWWINDOW;
        childProcStartupInfo.wShowWindow = SW_HIDE;

        PROCESS_INFORMATION childProcInfo;  // for CreateProcess call

        bOk = CreateProcess(
            NULL,                   // filename
            pCmdLine,               // full command line for child
            NULL,                   // process security descriptor
            NULL,                   // thread security descriptor
            TRUE,                   // inherit handles? Also use if STARTF_USESTDHANDLES
            0,                      // creation flags
            NULL,                   // inherited environment address
            NULL,                   // startup dir; NULL = start in current
            &childProcStartupInfo,  // pointer to startup info (input)
            &childProcInfo);        // pointer to process info (output)

        CloseHandle(hFromParent);
        CloseHandle(hToParent);
        CloseHandle(hToParentDup);

        CloseHandle(childProcInfo.hThread);
        CloseHandle(childProcInfo.hProcess);

        TerminateProcess(childProcInfo.hProcess, 0);  // this is not working, the process is still there

    Thanks

    Read the article

  • How do I keep from running out of memory on graphics for an Android app?

    - by user279112
    I've been working on an Android app in Eclipse, and so far my program hasn't really grown past midget size. However, I've already run into an issue with an Out of Memory error.

    You see, I've been using graphics comprised solely of bitmaps and PNGs in this program, and recently, when I tried to add a little more functionality (mainly including a few more bitmaps and causing an extra sprite to be created), it started crashing in the graphics thread's constructor, the sprite's constructor. When I tracked the problem down, it turned out to be an Out of Memory error that is seemingly caused by adding too many picture files to the program and creating Drawables out of them. This is a problem, as I really don't have that many picture resources in that program, maybe 20 or so, and I haven't even started to include sound yet. These images aren't all that fancy.

    My questions are:

    1) Are programs for an Android phone really that limited in how much memory they can employ, or is it probably something other than the 20-30 resource pictures causing that error?

    2) If the memory for Android apps is so tight that it can't even handle 20-30 picture resources loaded into Drawables at the same time, then how in the world are you supposed to make decent graphics and sound?

    Thanks.
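
    For scale: on devices of that era the per-application heap is small (roughly 16-24 MB), and a decoded bitmap costs about width x height x 4 bytes regardless of the PNG's size on disk, so a few full-resolution images go a long way. Below is a minimal sketch of decoding at reduced resolution with BitmapFactory.Options.inSampleSize; the resource id and class name are placeholders, not code from the app in question.

        import android.content.res.Resources;
        import android.graphics.Bitmap;
        import android.graphics.BitmapFactory;

        final class Bitmaps {
            /** Decodes a drawable resource at reduced resolution to keep heap usage down. */
            static Bitmap decodeScaled(Resources res, int resId, int sampleSize) {
                BitmapFactory.Options opts = new BitmapFactory.Options();
                opts.inSampleSize = sampleSize;   // 2 = half the width/height, a quarter of the memory
                return BitmapFactory.decodeResource(res, resId, opts);
            }
        }

        // usage inside an Activity or View (R.drawable.sprite is a placeholder resource):
        // Bitmap sprite = Bitmaps.decodeScaled(getResources(), R.drawable.sprite, 2);
        // ...and call sprite.recycle() once the sprite is no longer needed.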

    Read the article

  • Producer consumer with qualifications

    - by tgguy
    I am new to Clojure and am trying to understand how to properly use its concurrency features, so any critique/suggestions are appreciated. I am trying to write a small test program in Clojure that works as follows:

    1) there are 5 producers and 2 consumers
    2) a producer waits for a random time and then pushes a number onto a shared queue
    3) a consumer should pull a number off the queue as soon as the queue is nonempty, and then sleep for a short time to simulate doing work
    4) the consumers should block when the queue is empty
    5) producers should block when the queue has more than 4 items in it, to prevent it from growing huge

    Here is my plan for each step above:

    1) the producers and consumers will be agents that don't really care for their state (just nil values or something); I just use the agents to send-off a "consumer" or "producer" function to run at some time. The shared queue will be (def queue (ref [])). Perhaps this should be an atom though?
    2) in the "producer" agent function, simply (Thread/sleep (rand-int 1000)) and then (dosync (alter queue conj (rand-int 100))) to push onto the queue.
    3) I am thinking of making the consumer agents watch the queue for changes with add-watcher. I'm not sure about this though; it will wake up the consumers on any change, even if the change came from a consumer pulling something off (possibly making it empty). Perhaps checking for this in the watcher function is sufficient. Another problem I see: if all consumers are busy, what happens when a producer adds something new to the queue? Does the watched event get queued up on some consumer agent, or does it disappear?
    4) see above
    5) I really don't know how to do this. I heard that Clojure's seque may be useful, but I couldn't find enough documentation on how to use it, and my initial testing didn't seem to work (sorry, I don't have the code on me anymore). (A sketch of the blocking behaviour I'm after follows this list.)
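
    Since Clojure runs on the JVM, the blocking behaviour in points 4 and 5 maps directly onto a bounded java.util.concurrent.LinkedBlockingQueue, which is reachable from Clojure via interop. Below is a sketch of those semantics in plain JVM terms, using the counts from the description; the class and variable names are illustrative only, not a drop-in answer.

        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;
        import java.util.concurrent.ThreadLocalRandom;

        public class BoundedQueueDemo {
            public static void main(String[] args) {
                // capacity 4: put() blocks producers when full, take() blocks consumers when empty
                BlockingQueue<Integer> queue = new LinkedBlockingQueue<Integer>(4);

                for (int p = 0; p < 5; p++) {               // 5 producers
                    new Thread(() -> {
                        try {
                            while (true) {
                                Thread.sleep(ThreadLocalRandom.current().nextInt(1000));
                                queue.put(ThreadLocalRandom.current().nextInt(100));
                            }
                        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                    }).start();
                }

                for (int c = 0; c < 2; c++) {               // 2 consumers
                    new Thread(() -> {
                        try {
                            while (true) {
                                int n = queue.take();       // blocks on an empty queue
                                System.out.println("consumed " + n);
                                Thread.sleep(100);          // simulate doing work
                            }
                        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
                    }).start();
                }
            }
        }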

    Read the article

  • NSTimer as a timeout mechanism

    - by alexantd
    I'm pretty sure this is really simple and I'm just missing something obvious. I have an app that needs to download data from a web service for display in a UITableView, and I want to display a UIAlertView if the operation takes more than X seconds to complete. So this is what I've got (simplified for brevity):

    MyViewController.h

        @interface MyViewController : UIViewController <UITableViewDelegate, UITableViewDataSource> {
            NSTimer *timer;
        }
        @property (nonatomic, retain) NSTimer *timer;

    MyViewController.m

        @implementation MyViewController
        @synthesize timer;

        - (void)viewDidLoad {
            timer = [NSTimer scheduledTimerWithTimeInterval:20
                                                     target:self
                                                   selector:@selector(initializationTimedOut:)
                                                   userInfo:nil
                                                    repeats:NO];
            [self doSomethingThatTakesALongTime];
            [timer invalidate];
        }

        - (void)doSomethingThatTakesALongTime {
            sleep(30); // for testing only
            // web service calls etc. go here
        }

        - (void)initializationTimedOut:(NSTimer *)theTimer {
            // show the alert view
        }

    My problem is that I'm expecting the [self doSomethingThatTakesALongTime] call to block while the timer keeps counting, and I'm thinking that if it finishes before the timer is done counting down, it will return control of the thread to viewDidLoad, where [timer invalidate] will proceed to cancel the timer. Obviously my understanding of how timers/threads work is flawed here, because the way the code is written the timer never goes off. However, if I remove the [timer invalidate], it does.

    Read the article

  • Scala newbie producer/consumer attempt running out of memory

    - by Nick
    I am trying to create a producer/consumer type Scala app. The LoopControl just sends a message to the MessageReceiver continually. The MessageReceiver then delegates work to the MessageActorCreator (whose job is to check a map for an actor and, if one is not found, create it and start it up). Each MessageActor created by this MessageActorCreator is associated with an id. Eventually this is where I want to do business logic. But I run out of memory after 15 minutes. It is finding the cached actors, but it quickly runs out of memory. Any help is appreciated. Or if anyone has any good code on producers/consumers doing real stuff (not just adding numbers), please post.

        import scala.actors.Actor
        import java.util.HashMap
        import scala.actors.Actor._

        case object LoopControl
        case object MessageReceiver
        case object MessageActor
        case object MessageActorCreator

        class MessageReceiver(msg: String) extends Actor {
          var messageActorMap = new HashMap[String, MessageActor]
          val messageCreatorActor = new MessageActorCreator(null, null)

          def act() {
            messageCreatorActor.start
            loop {
              react {
                case MessageActor(messageId) =>
                  if (msg.length() > 0) {
                    var messageActor = messageActorMap.get(messageId);
                    if (messageActor == null) {
                      messageCreatorActor ! MessageActorCreator(messageId, messageActorMap)
                    } else {
                      messageActor ! MessageActor
                    }
                  }
              }
            }
          }
        }

        case class MessageActorCreator(msg: String, messageActorMap: HashMap[String, MessageActor]) extends Actor {
          def act() {
            loop {
              react {
                case MessageActorCreator(messageId, messageActorMap) =>
                  if (messageId != null) {
                    var messageActor = new MessageActor(messageId);
                    messageActorMap.put(messageId, messageActor)
                    println(messageActorMap)
                    messageActor.start
                    messageActor ! MessageActor
                  }
              }
            }
          }
        }

        class LoopControl(messageReceiver: MessageReceiver) extends Actor {
          var count: Int = 0;
          def act() {
            while (true) {
              messageReceiver ! MessageActor("00-122-0X95-FEC0" + count)
              //Thread.sleep(100)
              count = count + 1;
              if (count > 5) {
                count = 0;
              }
            }
          }
        }

        case class MessageActor(msg: String) extends Actor {
          def act() {
            loop {
              react {
                case MessageActor =>
                  println()
                  println("MessageActor: Got something-> " + msg)
              }
            }
          }
        }

        object messages extends Application {
          val messageReceiver = new MessageReceiver("bootstrap")
          val loopControl = new LoopControl(messageReceiver)
          messageReceiver.start
          loopControl.start
        }

    Read the article

  • Silverlight performance with many loaded controls

    - by gius
    I have a Silverlight application with many DataGrids (from the Silverlight Toolkit), each on its own view. If several DataGrids are open, changing between views (TabItems, for example) takes a long time (a few seconds) and freezes the whole application (the UI thread). The more DataGrids are loaded, the longer the change takes. The DataGrids that slow the UI change might be in other places in the app and not even visible at that moment, but once they are opened (and loaded with data), they slow down showing other DataGrids. Note that the DataGrids are NOT disposed and then recreated; they remain in memory, and only their parent control is hidden and then made visible again.

    I have profiled the application. It shows that agcore.dll's SetValue function is the bottleneck. Unfortunately, debug symbols are not available for this native Silverlight library responsible for drawing. The problem is not in the DataGrid control: I tried replacing it with XCeed's grid, and the performance when changing views is even worse.

    Do you have any idea how to solve this problem? Why do more opened controls slow down other controls? I have created a sample that shows this issue: http://cenud.cz/PerfTest.zip

    UPDATE: Using the VS11 profiler on the sample provided suggests that the problem could be in MeasureOverride being called many times (for each DataGridCell, I guess). But still, why is it slower as more controls are loaded elsewhere? Is there a way to improve the performance?

    Read the article

  • ReaderWriterLockSlim and Pulse/Wait

    - by Jono
    Is there an equivalent of Monitor.Pulse and Monitor.Wait that I can use in conjunction with a ReaderWriterLockSlim?

    I have a class where I've encapsulated multi-threaded access to an underlying queue. To enqueue something, I acquire a lock that protects the underlying queue (and a couple of other objects), then add the item and Monitor.Pulse the locked object to signal that something was added:

        public void Enqueue(ITask task)
        {
            lock (mutex)
            {
                underlying.Enqueue(task);
                Monitor.Pulse(mutex);
            }
        }

    On the other end of the queue, I have a single background thread that continuously processes messages as they arrive. It uses Monitor.Wait when there are no items in the queue, to avoid unnecessary polling. (I consider this to be good design, but any flames (within reason) are welcome if they help me learn otherwise.)

        private void DequeueForProcessing(object state)
        {
            while (true)
            {
                ITask task;
                lock (mutex)
                {
                    while (underlying.Count == 0)
                    {
                        Monitor.Wait(mutex);
                    }
                    task = underlying.Dequeue();
                }
                Process(task);
            }
        }

    As more operations are added to this class (requiring read-only access to the lock-protected underlying queue), someone suggested using ReaderWriterLockSlim. I've never used the class before, and assuming it can offer some performance benefit, I'm not against it, but only if I can keep the Pulse/Wait design.

    Read the article

  • Creating PowerShell Automatic Variables from C#

    - by Uros Calakovic
    I am trying to make automatic variables, like the ones available to Excel VBA (such as ActiveSheet or ActiveCell), also available to PowerShell as 'automatic variables'. The PowerShell engine is hosted in an Excel VSTO add-in, and Excel.Application is available to it as Globals.ThisAddin.Application. I found this thread here on Stack Overflow and started creating PSVariable-derived classes like:

        public class ActiveCell : PSVariable
        {
            public ActiveCell(string name) : base(name) { }

            public override object Value
            {
                get { return Globals.ThisAddIn.Application.ActiveCell; }
            }
        }

        public class ActiveSheet : PSVariable
        {
            public ActiveSheet(string name) : base(name) { }

            public override object Value
            {
                get { return Globals.ThisAddIn.Application.ActiveSheet; }
            }
        }

    and adding their instances to the current PowerShell session:

        runspace.SessionStateProxy.PSVariable.Set(new ActiveCell("ActiveCell"));
        runspace.SessionStateProxy.PSVariable.Set(new ActiveSheet("ActiveSheet"));

    This works, and I am able to use those variables from PowerShell as $ActiveCell and $ActiveSheet (their values change as the Excel active sheet or cell changes). Then I read the PSVariable documentation here and saw this:

    "There is no established scenario for deriving from this class. To programmatically create a shell variable, create an instance of this class and set it by using the PSVariableIntrinsics class."

    As I was deriving from PSVariable, I tried to use what was suggested:

        PSVariable activeCell = new PSVariable("ActiveCell");
        activeCell.Value = Globals.ThisAddIn.Application.ActiveCell;
        runspace.SessionStateProxy.PSVariable.Set(activeCell);

    With this, $ActiveCell appears in my PowerShell session, but its value doesn't change as I change the active cell in Excel. Is the above comment from the PSVariable documentation something I should worry about, or can I continue creating PSVariable-derived classes? Is there another way of making Excel globals available to PowerShell?

    Read the article

  • Checking if a console application is still running using the Process class

    - by Ced
    I'm making an application that will monitor the state of another process and restart it when it stops responding, exits, or throws an error. However, I'm having trouble reliably checking whether the process (a C++ console window) has stopped responding. My code looks like this:

        public void monitorserver()
        {
            while (true)
            {
                server.StartInfo = new ProcessStartInfo(textbox_srcdsexe.Text, startstring);
                server.Start();
                log("server started");
                log("Monitor started.");

                while (server.Responding)
                {
                    if (server.HasExited)
                    {
                        log("server exitted, Restarting.");
                        break;
                    }
                    log("server is running: " + server.Responding.ToString());
                    Thread.Sleep(1000);
                }

                log("Server stopped responding, terminating..");
                try
                {
                    server.Kill();
                }
                catch (Exception) { }
            }
        }

    The application I'm monitoring is Valve's Source Dedicated Server running Garry's Mod, and I'm over-stressing the physics engine to simulate it stopping responding. However, this never causes the Process class to recognize it as 'stopped responding'. I know there are ways to directly query the Source server using their own protocol, but I'd like to keep it simple and universal (so that I can maybe use it for different applications in the future). Any help appreciated.

    Read the article

  • Using a while loop to control the value of a keyframe in a timeline in JavaFX

    - by dragoknight
    I want to create an animation where the ball moves into one of the 4 sectors of a circle, chosen randomly. This should happen 5 times, so I create a while loop (i < 5) and call the random function. I then create an if block and assign some x and y values according to the random function's value. I then create a Timeline object inside the while loop and use these x and y values in the keyframe. But when I run the program, all the values are created (the x and y values are seen through println), yet only the last x and y value (for i = 5) is rendered on the screen. The animations for the previous values (1 <= i <= 4) are not rendered. Please advise where I have gone wrong.

        public function run(args: String[]) {
            var i = 0;
            while (i < 5) {
                var z = gety();
                println(z);
                // var relativeTime:Duration = 0s;
                if (z == 1) {
                    xbind = 120;
                    ybind = 80;
                } else if (z == 2) {
                    xbind = 120;
                    ybind = 120;
                } else if (z == 3) {
                    xbind = 80;
                    ybind = 120;
                } else if (z == 4) {
                    xbind = 80;
                    ybind = 80;
                }
                var t: Timeline = Timeline {
                    // time: bind pos with inverse;
                    repeatCount: Timeline.INDEFINITE
                    autoReverse: true
                    keyFrames: [
                        KeyFrame { time: 0s values: [x => 100.0, y => 100.0] },
                        KeyFrame { time: 2s values: [x => xbind tween Interpolator.LINEAR,
                                                     y => ybind tween Interpolator.LINEAR] },
                    ]
                } // end timeline
                i++;
                t.play();
                Thread.sleep(2000);
            } // end while
        }

    Read the article

  • What limits scaling in this simple OpenMP program?

    - by Douglas B. Staple
    I'm trying to understand the limits to parallelization on a 48-core system (4 x AMD Opteron 6348, 2.8 GHz, 12 cores per CPU). I wrote this tiny OpenMP code to test the speedup in what I thought would be the best possible situation (the task is embarrassingly parallel):

        // Compile with: gcc scaling.c -std=c99 -fopenmp -O3
        #include <stdio.h>
        #include <stdint.h>

        int main(){
            const uint64_t umin=1;
            const uint64_t umax=10000000000LL;
            double sum=0.;
            #pragma omp parallel for reduction(+:sum)
            for(uint64_t u=umin; u<umax; u++)
                sum+=1./u/u;
            printf("%e\n", sum);
        }

    I was surprised to find that the scaling is highly nonlinear. It takes about 2.9 s for the code to run with 48 threads, 3.1 s with 36 threads, 3.7 s with 24 threads, 4.9 s with 12 threads, and 57 s with 1 thread.

    Unfortunately I have to say that there is one process running on the computer using 100% of one core, so that might be affecting it. It's not my process, so I can't end it to test the difference, but somehow I doubt it's making the difference between a 19-20x speedup and the ideal 48x speedup.

    To make sure it wasn't an OpenMP issue, I ran two copies of the program at the same time with 24 threads each (one with umin=1, umax=5000000000, and the other with umin=5000000000, umax=10000000000). In that case both copies of the program finish after 2.9 s, so it's exactly the same as running 48 threads with a single instance of the program.

    What's preventing linear scaling with this simple program?

    Read the article

  • Unable to get data from a WCF client

    - by Scott
    I am developing a DLL that will provide synchronized timestamps to multiple applications running on the same machine. The timestamps are altered in a thread that uses a high-performance timer and a scalar to provide the appearance of moving faster than real time. For obvious reasons I want only one instance of this time library, so I thought I could use WCF for the other processes to connect to it and poll for timestamps whenever they want. When I connect, however, I never get a valid timestamp, just an empty DateTime.

    I should point out that the library does work: the original implementation was a single DLL that each application incorporated, and each one was synced using Windows messages. I'm fairly sure it has something to do with how I'm setting up the WCF stuff, to which I am still pretty new.

    Here are the contract definitions:

        public interface ITimerCallbacks
        {
            [OperationContract(IsOneWay = true)]
            void TimerElapsed(String id);
        }

        [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(ITimerCallbacks))]
        public interface ISimTime
        {
            [OperationContract]
            DateTime GetTime();
        }

    Here is my class definition:

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
        public class SimTimeServer : ISimTime

    The host setup:

        // set up WCF interprocess comms
        host = new ServiceHost(typeof(SimTimeServer), new Uri[] { new Uri("net.pipe://localhost") });
        host.AddServiceEndpoint(typeof(ISimTime), new NetNamedPipeBinding(), "SimTime");
        host.Open();

    The implementation of the interface function, server-side:

        public DateTime GetTime()
        {
            if (ThreadMutex.WaitOne(20))
            {
                RetTime = CurrentTime;
                ThreadMutex.ReleaseMutex();
            }
            return RetTime;
        }

    And lastly the client-side implementation:

        Callbacks myCallbacks = new Callbacks();
        DuplexChannelFactory<ISimTime> pipeFactory =
            new DuplexChannelFactory<ISimTime>(myCallbacks,
                                               new NetNamedPipeBinding(),
                                               new EndpointAddress("net.pipe://localhost/SimTime"));
        ISimTime pipeProxy = pipeFactory.CreateChannel();

        while (true)
        {
            string str = Console.ReadLine();
            if (str.ToLower().Contains("get"))
                Console.WriteLine(pipeProxy.GetTime().ToString());
            else if (str.ToLower().Contains("exit"))
                break;
        }

    Read the article
