Search Results

Search found 24301 results on 973 pages for 'execution process mfg'.

  • What's the right way to kill child processes in perl before exiting?

    - by rarbox
    I'm running an IRC bot (Bot::BasicBot) which has two child processes running File::Tail, but they don't terminate when the bot exits. So I'm killing them with Proc::ProcessTable like this before exiting:

        my $parent = $$;
        my $proc_table = Proc::ProcessTable->new();
        for my $proc (@{$proc_table->table()}) {
            kill(15, $proc->pid) if ($proc->ppid == $parent);
        }

    It works, but I get this warning:

        14045: !!! Child process PID:14047 reaped:
        14045: !!! Child process PID:14048 reaped:
        14045: !!! Your program may not be using sig_child() to reap processes.
        14045: !!! In extreme cases, your program can force a system reboot
        14045: !!! if this resource leakage is not corrected.

    What else can I do to kill child processes? The forked process is created using the forkit method in Bot::BasicBot.
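
    The warning means the children are being killed but not reaped: kill() only delivers the signal, and the parent must still wait on each child so the kernel can drop its process-table entry. As a hedged illustration of the pattern (in Python rather than Perl, and POSIX-only since it uses fork), where the worker loop merely stands in for File::Tail:

        import os
        import signal
        import time

        def spawn_child():
            """Fork a child that loops forever (a stand-in for a tailing worker)."""
            pid = os.fork()
            if pid == 0:                      # child
                while True:
                    time.sleep(1)
            return pid                        # parent gets the child's PID

        children = [spawn_child(), spawn_child()]

        # Terminate, then reap: waitpid() collects the exit status so the
        # child does not linger as a zombie after the signal lands.
        for pid in children:
            os.kill(pid, signal.SIGTERM)
        for pid in children:
            os.waitpid(pid, 0)                # blocks until that child is reaped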

  • Capture and display console output at the same time

    - by Patrick
    Hi, MSDN states that it is possible in .NET to capture the output of a process and display it in the console window at the same time. Normally, when you set StartInfo.RedirectStandardOutput = true; the console window stays blank. As the MSDN site doesn't provide a sample for this, I was wondering if anyone has a sample or could point me to one?

        When a Process writes text to its standard stream, that text is normally
        displayed on the console. By redirecting the StandardOutput stream, you
        can manipulate or suppress the output of a process. For example, you can
        filter the text, format it differently, or write the output to both the
        console and a designated log file. (MSDN)

    This post is similar to http://stackoverflow.com/questions/786726/capture-standard-output-and-still-display-it-in-the-console-window, by the way, but that post didn't end up with a working sample. Thanks a lot, Patrick
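
    The pattern being asked about is a "tee": read the child's redirected output and write it to both the console and a log. A minimal sketch of that idea, in Python rather than .NET for illustration ("some_command" is a placeholder):

        import subprocess

        # Run a child process and both display and log its stdout, line by line.
        with open("child.log", "w") as log:
            proc = subprocess.Popen(
                ["some_command", "--with-args"],   # placeholder command
                stdout=subprocess.PIPE,
                stderr=subprocess.STDOUT,          # fold stderr into the same stream
                text=True,
            )
            for line in proc.stdout:               # read as the child produces output
                print(line, end="")                # echo to our own console
                log.write(line)                    # and tee it into the log file
            proc.wait()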

  • Converting raw bytes into audio sound

    - by Afro Genius
    In my application I inherit from the javastreamingaudio class in the freeTTS package and bypass its write method, which sends an array of bytes to the SourceDataLine for audio processing. Instead of writing to the data line, I write this and subsequent byte arrays into a buffer, which I then bring into my class and try to process into sound. My application processes sound as arrays of floats, so I convert the bytes to floats and try to process them, but I always get static back. I am sure this is the way to go but am missing something along the way. I know that sound is processed as frames, and each frame is a group of bytes, so in my application I have to group the bytes into frames somehow. Am I looking at this the right way? Thanx in advance for any help.
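
    Static almost always means the bytes are being reinterpreted with the wrong frame layout: sample width, byte order, signedness, or channel count. As a hedged illustration, assuming 16-bit signed little-endian PCM (a common SourceDataLine format), the byte-to-float conversion looks like this in Python:

        import struct

        def pcm16le_to_floats(raw: bytes) -> list[float]:
            """Convert 16-bit signed little-endian PCM bytes to floats in [-1, 1].

            For mono audio each frame is 2 bytes; unpacking with the wrong
            width, endianness, or signedness is what produces static.
            """
            count = len(raw) // 2                      # number of 16-bit samples
            samples = struct.unpack("<%dh" % count, raw[:count * 2])
            return [s / 32768.0 for s in samples]      # normalize

        # Two little-endian samples: 0x0100 (=256) and 0xFF7F (=-129)
        print(pcm16le_to_floats(b"\x00\x01\x7f\xff"))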

  • How should I deploy a patch to a Passenger-based production Rails application without downtime?

    - by Olly
    I have a Passenger-based production Rails application which has thousands of users. Occasionally we need to apply a code patch (you can assume there are no data migrations), and the current process for doing this (we use git) is:

        1. Perform git pull origin [production-branch-name] on the server
        2. touch tmp/restart.txt to restart Passenger

    This allows us to patch the server without having to resort to putting up a maintenance page, which is great, but it doesn't feel quite right since it's not actually a proper 'deployment': we still need to manually update the revision file, and the deployment doesn't appear in the Hoptoad or NewRelic services we use. Ideally I would run cap production deploy and just let the standard Capistrano deployment script take care of everything, but is this a dangerous thing to do without putting up a maintenance page? The Capistrano process seems fairly safe, in that the new revision is deployed to a completely separate folder and only right at the end is a symlink re-created to switch the currently deployed version, but I'm still fairly paranoid about this somehow resulting in a lost or failed request.

  • Check to see if file transfer is complete

    - by Cymon
    We have a daily job that processes files delivered from an external source. The process usually runs fine, but every once in a while we end up processing a file that has not finished transferring. The external source SCPs these files from a UNIX server to our Windows server, and from there we try to process them. Is there a way to check whether a file is still being transferred? Does UNIX put a lock on a file while SCPing it that we could check on the Windows side?
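
    For what it's worth, scp takes no lock that the receiving side can observe. A common workaround is to treat a file as complete only once its size has stopped changing; a minimal sketch of that heuristic in Python (the settle interval and path are assumptions):

        import os
        import time

        def transfer_complete(path: str, settle_seconds: float = 5.0) -> bool:
            """Heuristic: the file is 'done' if its size is stable for a while."""
            size_before = os.path.getsize(path)
            time.sleep(settle_seconds)
            return os.path.getsize(path) == size_before

        if transfer_complete(r"C:\incoming\daily_feed.dat"):
            print("safe to process")
        else:
            print("still transferring; try again later")

    A more robust convention, if you can influence the sender, is to have it upload to a temporary name and rename on completion, since the rename only happens once the transfer has finished.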

  • Using boost::asio::async_read with stdin?

    - by yeus
    Hi people, short question: I have a real-time simulation which runs as a background process and is connected with pipes to the calling program. I want to send commands to that process using stdin and get certain information back from it via stdout. Because it is a real-time process, the input has to be non-blocking. Is boost::asio::async_read in conjunction with iostream::cin a good idea for this task? How would I use that function, if it is feasible? Any other suggestions?
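
    The underlying pattern, watching stdin for readability inside an event loop instead of blocking on a read, is what Asio's async_read gives you. For illustration only, here is the same idea sketched with Python's selectors module on a POSIX system (the simulation step is a placeholder):

        import selectors
        import sys

        sel = selectors.DefaultSelector()
        sel.register(sys.stdin, selectors.EVENT_READ)   # watch stdin for readability

        print("simulation loop running; type commands at any time")
        while True:
            # Poll with a short timeout so the loop never blocks on input.
            for key, _ in sel.select(timeout=0.01):
                command = key.fileobj.readline().strip()
                print(f"got command: {command!r}")
                if command == "quit":
                    raise SystemExit
            # ... advance the real-time simulation one step here ...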

  • Image processing in a multithreaded mode using Java

    - by jadaaih
    Hi folks, I am supposed to process images in multithreaded mode using Java. I may have a varying number of images, whereas my number of threads is fixed, and I have to process all the images using that fixed set of threads. I am just stuck on how to do it; I had a look at ThreadPoolExecutor, BlockingQueue, etc., and I am still not clear. What I am doing is:

        - Get the images and add them to a LinkedBlockingQueue, which holds the runnable code of the image processor.
        - Create a ThreadPoolExecutor, one of whose arguments is the LinkedBlockingQueue from the step above.
        - Iterate through a for loop up to the queue size and call threadPoolExecutor.execute(linkedBlockingQueue.poll()).

    All I see is that it processes only 100 images, which is the minimum thread count / LinkedBlockingQueue size I passed in. I see I am seriously wrong in my understanding somewhere; how do I process all the images in sets of 100 (threads) until they are all done? Any examples or pseudocode would be highly helpful. Thanks! J
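
    The usual fix is to stop hand-feeding the executor: submit every image as its own task and let the fixed-size pool's internal queue hold the backlog until all of them are done. The question is about Java, but the same pattern is quickest to sketch with Python's concurrent.futures (process_image is a placeholder):

        from concurrent.futures import ThreadPoolExecutor

        def process_image(path: str) -> str:
            # Placeholder for the real image-processing work.
            return f"processed {path}"

        image_paths = [f"img_{i:04d}.png" for i in range(1000)]   # varying count

        # A fixed pool of 100 workers: map() enqueues every task up front,
        # and the executor keeps 100 running until the whole list is drained.
        with ThreadPoolExecutor(max_workers=100) as pool:
            for result in pool.map(process_image, image_paths):
                print(result)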

  • segfault during fclose()

    - by Hristo
    fclose() is causing a segfault. I have:

        char buffer[L_tmpnam];
        char *pipeName = tmpnam(buffer);
        FILE *pipeFD = fopen(pipeName, "w"); // open for writing
        ...
        fclose(pipeFD);

    I don't do any file-related stuff in the ... yet, so that doesn't affect it. However, my MAIN process communicates with another process through shared memory, where pipeName is stored; the other process fopen's this pipe for reading to communicate with MAIN. Any ideas why this is causing a segfault? Thanks, Hristo

  • jgraph-like component on the Vaadin framework

    - by vsp
    Hi, currently I am working on a process-manager kind of application, which needs a jgraph (http://www.jgraph.com) kind of component. The process manager is a kind of process flow chart. I am currently using the Vaadin framework. Are there any such built-in components available in Vaadin, or is there a plugin I can use to create a drag & drop graph for my app? Please let me know if anyone has already done this. Thanks.

  • Google App Engine modifyThreadGroup problem

    - by Frank
    I'm using Google App Engine to process PayPal IPN messages. When my servlet starts, I use the following lines to start another process to handle the messages:

        public class PayPal_Monitor_Servlet extends HttpServlet {
            PayPal_Message_To_License_File_Worker PayPal_message_to_license_file_worker;

            public void init(ServletConfig config) throws ServletException { // Initializes the servlet.
                super.init(config);
                PayPal_message_to_license_file_worker = new PayPal_Message_To_License_File_Worker();
            }

            public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
            }
            ...
        }

        public class PayPal_Message_To_License_File_Worker implements Runnable {
            static Thread PayPal_Message_To_License_File_Thread;
            ...
            PayPal_Message_To_License_File_Worker() {
                start();
            }

            void start() {
                if (PayPal_Message_To_License_File_Thread == null) {
                    PayPal_Message_To_License_File_Thread = new Thread(this);
                    PayPal_Message_To_License_File_Thread.setPriority(Thread.MIN_PRIORITY);
                    PayPal_Message_To_License_File_Thread.start();
                }
                ...
            }
        }

    But PayPal_Message_To_License_File_Thread = new Thread(this); is causing the following error:

        javax.servlet.ServletContext log: unavailable
        java.security.AccessControlException: access denied (java.lang.RuntimePermission modifyThreadGroup)
            at java.security.AccessControlContext.checkPermission(AccessControlContext.java:355)
            at java.security.AccessController.checkPermission(AccessController.java:567)

    Why, and how do I fix it? Frank

  • Algorithmic trading software safety guards

    - by Adal
    I'm working on an automatic trading system. What sorts of safeguards should I have in place? The main idea I have is to have multiple pieces checking each other. I will have a second, independent little process which also connects to the same trading account and monitors simple things: ensuring the total net position does not go over a certain limit, that there are no more than N orders in 10 minutes, or that no more than M positions are open simultaneously. You can also check that the actual open positions correspond to what the strategy process thinks it holds. As a bonus, I could run this checker process on a different machine/network provider. Besides the checks in the main strategy, this will ensure that whatever weird bug occurs, nothing really bad can happen. Any other things I should monitor and be aware of?
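
    For what a checker like that might look like: a hedged sketch in Python, where the limits are made up and the get_* functions are stubs standing in for whatever your broker API actually exposes:

        import time

        MAX_NET_POSITION = 100_000        # currency units; invented limit
        MAX_ORDERS_PER_10_MIN = 50
        MAX_OPEN_POSITIONS = 20

        # Stubs for the broker API; replace with real queries.
        def get_net_position() -> float: return 0.0
        def get_recent_order_count(minutes: int) -> int: return 0
        def get_open_position_count() -> int: return 0

        def check_limits() -> list[str]:
            """Return the list of violated limits; empty means all clear."""
            violations = []
            if abs(get_net_position()) > MAX_NET_POSITION:
                violations.append("net position limit exceeded")
            if get_recent_order_count(10) > MAX_ORDERS_PER_10_MIN:
                violations.append("order rate limit exceeded")
            if get_open_position_count() > MAX_OPEN_POSITIONS:
                violations.append("too many simultaneous positions")
            return violations

        # Runs as its own process, ideally on a different machine.
        while True:
            problems = check_limits()
            if problems:
                # Real handling: flatten positions, halt the strategy, page a human.
                print("EMERGENCY STOP:", "; ".join(problems))
            time.sleep(5)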

  • MySQL Insert Statement Queue

    - by Justin
    We are building an Ajax application in which a user's input is submitted to a PHP script for processing. We currently write every request to a log file for tracking. I would like to move this tracking into a database table, but I do not want to run an insert statement after every request. What I would like to do is set up a 'queue' of transactions (inserts and updates) that need to be processed on the MySQL database, and then set up a cron job or process to check and process the transactions in the queue. Is there something out there that we could build upon, or do we have to just write to plain ol' text log files and process them?
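
    The plain-text-spool version of this is only a few lines. A hedged sketch in Python (table, columns, and credentials are made up; assumes the MySQLdb/mysqlclient driver): the web tier appends one JSON line per request, and a cron job batch-inserts and empties the spool.

        import json
        import os

        import MySQLdb  # the mysqlclient driver; any DB-API driver looks the same

        SPOOL = "/var/spool/tracking/requests.log"   # one JSON object per line

        def flush_spool():
            """Cron entry point: batch-insert spooled requests, then truncate."""
            if not os.path.exists(SPOOL):
                return
            # NB: a production version needs locking, e.g. rename the spool
            # aside first so the web tier never writes while we read.
            with open(SPOOL, "r+") as f:
                rows = [json.loads(line) for line in f if line.strip()]
                if rows:
                    db = MySQLdb.connect(host="localhost", user="app",
                                         passwd="secret", db="tracking")
                    cur = db.cursor()
                    cur.executemany(
                        "INSERT INTO request_log (user_input, created_at) "
                        "VALUES (%s, %s)",
                        [(r["input"], r["ts"]) for r in rows],
                    )
                    db.commit()
                    db.close()
                f.truncate(0)   # spool flushed; start collecting again

        if __name__ == "__main__":
            flush_spool()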

  • Detect when a file is closed

    - by User1
    Is there a simple way to detect when a file that was opened for writing is closed? I have a process that kicks out a new file every 5 seconds, and I cannot change the code for that process. I am writing another process to analyze the contents of each file as soon as it is complete. Is there a simple way to watch a certain directory and see when a file is no longer open for writing? I'm hoping for some sort of Linux command.
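
    On Linux, the inotify subsystem reports exactly this as a CLOSE_WRITE event, and the inotifywait command from the inotify-tools package exposes it. Assuming that package is installed, a sketch of driving it from Python (the watch directory is an assumption):

        import subprocess

        WATCH_DIR = "/data/incoming"   # directory the producer writes into

        # inotifywait prints one line per close_write event, i.e. whenever a
        # file that was open for writing gets closed.
        proc = subprocess.Popen(
            ["inotifywait", "-m", "-e", "close_write",
             "--format", "%w%f", WATCH_DIR],
            stdout=subprocess.PIPE,
            text=True,
        )
        for line in proc.stdout:
            path = line.strip()
            print(f"{path} finished writing; analyzing it now")
            # ... run the analysis on `path` here ...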

  • How does a programmer think?

    - by Gordon Potter
    This may be a hopelessly vague question, but I am interested to hear whatever logical thought processes people go through when learning a new concept or trying to get their brain around code they might never have seen before. Basically, what general steps does one take to break down problems, and what does it take to "get it"? If you were to diagram a flowchart of how your mental process works when you look at code or try to solve a problem, what might it look like? What common references, tips, and mental assumptions do you find useful in problem solving? How does this differ between domains? For example, in what ways is a web programmer's thought process similar to or different from a traditional desktop app developer's?

  • High Throughput and Windows Workflow Foundation

    - by SometimesUseful
    Can WWF handle high-throughput scenarios where several dozen records are 'actively' being processed in parallel at any one time? We want to build a workflow process which handles a few thousand records per hour. Each record takes up to a minute to process, because it makes external web service calls. We are testing Windows Workflow Foundation to do this, but our demo programs show the records being processed in sequence, not in parallel, when we use parallel activities to process several records at once within one workflow instance. Should we use multiple workflow instances or parallel activities? Are there any known patterns for high-performance WWF processing?

  • How do I manage specs in Scrum?

    - by this. __curious_geek
    Referring to this buddy question, I want to know how one can manage specs in a Scrum process. I'm facing this problem while assigning tasks to my team for the sprint. Needless to say, I'm new to Agile/Scrum. Currently, we are using our own specs sheet to map StoryId to SpecId and vice versa. I'm getting the feeling that Scrum is more about project management [getting things done on time] and that you need a separate process to manage specs and requirements. How do we manage specs in a Scrum process?

  • What are some good books on software testing/quality?

    - by mjh2007
    I'm looking for a good book on software quality. It would be helpful if the book covered:

        - The software development process (requirements, design, coding, testing, maintenance)
        - Testing roles (who performs each step in the process)
        - Testing methods (white box and black box)
        - Testing levels (unit testing, integration testing, etc.)
        - Testing process (Agile, waterfall, spiral)
        - Testing tools (simulators, fixtures, and reporting software)
        - Testing of embedded systems

    The goal here is to find an easy-to-read book that summarizes the best practices for ensuring software quality in an embedded system. It seems most texts cover the testing of application software, where it is simpler to generate automated test cases or run a debugger. A book that provided solutions for improving quality in a system where the tests must be performed manually, and therefore minimized, would be ideal.

  • scheduled task or windows service

    - by czuroski
    Hello, I have to create an app that will read some info from a db, process the data, write the changes back to the db, and then send an email with those changes to some users or groups. I will be writing this in C#, and the process must run once a week at a particular time on a Windows 2008 Server. In the past, I would always go the route of creating a Windows service with a timer, setting the time/day for the run in the app.config file so that it can be changed and only requires a service restart to pick up the update. Recently, though, I have seen blog posts and such that recommend writing a console application and then using a scheduled task to execute it. I have read many posts discussing this very issue, but have not seen a definitive answer about which approach is better. What do any of you think? Thanks for any thoughts.

  • Multiple .NET processes memory footprint

    - by mr.b
    I am developing an application suite that consists of several applications which the user can run as needed (they can run all at the same time, or only a few). My concern is the physical memory footprint of each process, as shown in Task Manager. I am aware that the Framework does memory management behind the curtain, in the sense that it devotes parts of memory to things that are not directly related to my application. The question: does the .NET Framework have some way of minimizing the memory footprint of its processes when several are running at the same time? (Noobish guess ahead.) For instance, if System.dll is already loaded by one process, does the Framework load it again for each process, or does it have some way of sharing it between processes? I am in no way striving to write the smallest (resource-wise) apps possible (if I were, I probably wouldn't be using the .NET Framework in the first place), but if there's something I can do about over-using resources, I'd like to know about it.

  • DesignMode with Controls

    - by John Dyer
    Has anyone found a useful solution to the DesignMode problem when developing controls? The issue is that if you nest controls, then DesignMode only works for the first level; for the second and lower levels, DesignMode will always return FALSE. The standard hack has been to look at the name of the running process: if it is "DevEnv.EXE" then it must be Studio, thus DesignMode is really TRUE. The problem is that looking up the ProcessName works its way through the registry and other strange parts, with the end result that the user might not have the required rights to see the process name. In addition, this strange route is very slow. So we have had to pile on additional hacks: use a singleton, and if an error is thrown when asking for the process name, assume that DesignMode is FALSE. A nice clean way to determine DesignMode is in order. Actually getting Microsoft to fix it internally in the framework would be even better!

  • Understanding the 'High Performance' meaning in Extreme Transaction Processing

    - by kyap
    Despite my previous blog entries on SOA/BPM and Identity Management, the domain I'm most passionate about is definitely Extreme Transaction Processing, commonly called XTP. I came across XTP back in 2007, while I was still FMW Product Manager in EMEA. At that time Oracle acquired a company called Tangosol, which owned a unique product called Coherence that we renamed to Oracle Coherence. Beside this innovative renaming of the product, to be honest, I didn't know much about it, except that it was a "distributed in-memory cache for Extreme Transaction Processing"... not very helpful still.

    In general, when people don't fully understand a technology or a concept, they tend to find some shortcut, correct or not, to justify their lack of understanding... and of course I was part of this category of individuals. And the shortcut was "Oracle Coherence Cache helps to improve Performance". An excellent marketing slogan... but still not very meaningful. By chance I was able to get away from that group quickly in July 2007 at Thames Valley Park (UK), after I attended one of the most interesting workshops of my 10-year career in Oracle, delivered by Brian Oliver.

    The biggest mistake I made was to assume that performance improvement with Coherence was related to response time. That was a legitimate assumption at the time, because after all, caches help to reduce latency on cached-data access, and hence reduce response time. But like all caches, you need to define caching and expiration policies, think about the cache-miss strategy, and most of the time you have to partially rewrite your application to work with the cache. As a result, the expected benefit vanishes... so, not very useful then?

    The key mistake was my perception of, or obsession with, how performance improvement should be driven, but I strongly believe this is still a common problem for most developers. In fact, we all know that the performance of a system is generally presented as Capacity (or Throughput), with two important dimensions, Speed (response time) and Volume (load):

        Capacity (TPS) = Volume (T) / Speed (S)

    To increase Capacity, we can either reduce the Speed (in terms of response time) or increase the Volume. However, we tend to focus only on the Speed dimension, perhaps because it is more concrete and tangible to measure, and nicer to present to our management, since there's a direct impact on the end-user experience. On the other hand, we assume the Volume can be addressed by the underlying hardware or software stack, so if we need more capacity (scale out), we just add more hardware or software. Unfortunately, reality proves that IT is never as ideal as we assume...

    The challenge with the Speed-improvement approach is that it is generally difficult and costly to make things that are already fast... faster. And adding Coherence will not necessarily help either. Even if we manage to do so, the Capacity cannot increase forever, because... the Speed can be influenced by the Volume. In any traditional system, an increase in Volume (transactions) will also increase the Speed (response time) at some point. The reason is simple: most of the time, the application logic was not designed to scale. As an example, if you have a while-loop in your application, it is natural to expect that parsing 200 entries will require double the execution time of parsing 100 entries.

    If you need to "speed up" the execution, you can only upgrade your hardware (scale up) with a faster CPU and/or network to reduce network latency, which is technically limited and economically inefficient. And this is exactly where XTP and Coherence kick in. The primary objective of XTP is designing applications which can scale out to increase Volume, by applying coding techniques that keep the execution time as constant as possible, independently of the amount of runtime data being manipulated. It is not just about having an application run as fast as possible, but about having a much more predictable system, with constant response time that scales linearly, so we can easily increase throughput by adding more hardware in parallel. It is generally combined with the Low Latency Programming model, where we try to optimize network usage as much as possible, either from the programmatic angle (fewer network hops to complete a task) and/or from the hardware angle (faster network equipment). In this picture, Oracle Coherence can be considered a software-level XTP enabler, via the distributed cache, because it can guarantee:

        - constant data-object access time, independent of the number of objects and the Coherence cluster size;
        - data-object distribution by affinity, for in-memory data grouping;
        - in-place data processing, for parallel execution.

    To summarize, Oracle Coherence is indeed useful for improving your application's performance, just not in the way we commonly think. It's not about the Speed itself, but about the overall Capacity under extreme load while keeping a consistent Speed. In the future I will keep adding new entries around this topic, with some sample-code experiences I have captured over the last few years. In the meanwhile, if you want to know more about Oracle Coherence, I strongly suggest you start by checking how our worldwide customers are using it, then play with the product through our tutorial. Have fun!
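
    To make the "constant access time, independent of volume" claim concrete, here is a hedged toy model in Python of the core idea behind a partitioned cache: a fixed hash of the key selects the owning partition (node), so a get or put touches exactly one partition no matter how many entries, or partitions, exist.

        import hashlib

        class PartitionedCache:
            """Toy model of a distributed cache: hashing the key picks the
            owning partition, so access cost does not grow with data volume."""

            def __init__(self, partitions: int = 8):
                self.partitions = [{} for _ in range(partitions)]

            def _owner(self, key: str) -> dict:
                digest = hashlib.md5(key.encode()).digest()
                return self.partitions[digest[0] % len(self.partitions)]

            def put(self, key: str, value) -> None:
                self._owner(key)[key] = value       # exactly one partition touched

            def get(self, key: str):
                return self._owner(key).get(key)

        cache = PartitionedCache()
        for i in range(1_000_000):
            cache.put(f"order:{i}", {"qty": i % 10})
        print(cache.get("order:424242"))            # O(1), regardless of size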

  • A Knights Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No. 3-15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had for many years used software which broke up incoming "parent" orders into smaller "child" orders that were then transmitted to various exchanges or trading venues for execution. A tracking 'cumulative quantity' function counted the child orders and stopped the process once the total of child orders matched the parent, at which point the parent order had been completed.

    Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called 'Power Peg' and seems to have had a similar design and purpose but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point on, had that flag ever been set, the old 'Power Peg' would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn't, presumably because the software that set the flag was removed.

    In 2012, nearly a decade after 'Power Peg' was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) on the New York Stock Exchange. By this time the flag had remained unused, and someone made the fateful decision to reuse it, and to replace the old 'Power Peg' code with the new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn't. To quote:

        "Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight's technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review." (para 15)

        "On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. Although one part of Knight's order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS." (para 16)

    SMARS routed millions of orders into the market over a 45-minute period and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time Knight stopped sending the orders, it had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight's shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which normally see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading.

    Obviously, I'm interested in all this because, at one time, I used to write trading systems for the City of London. The US SEC is in a far better position than any of us to work out the failings of Knight's IT department, and the report makes for painful reading. I can't help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was 'all-or-nothing', and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek tragedy. All the way along, one wants to shout 'No! Not that way!' and 'Aargh! Don't do it!'. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight's mistakes weren't made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they're obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.
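
    The 'all-or-nothing, verified' deployment described above is cheap to approximate. As a hedged illustration in Python (the hostnames and path are invented): after pushing a release, fingerprint the deployed artifact on every server and refuse to enable the new code path unless the whole fleet matches.

        import subprocess

        SERVERS = [f"smars{i:02d}.example.com" for i in range(1, 9)]  # invented names
        RELEASE_PATH = "/opt/smars/current/bin/router"                # invented path

        def remote_fingerprint(host: str) -> str:
            """SHA-256 of the deployed binary on one server (ssh with key auth)."""
            out = subprocess.run(
                ["ssh", host, f"sha256sum {RELEASE_PATH}"],
                capture_output=True, text=True, check=True,
            )
            return out.stdout.split()[0]

        fingerprints = {host: remote_fingerprint(host) for host in SERVERS}
        if len(set(fingerprints.values())) != 1:
            # One straggler means an inconsistent fleet: abort, don't go live.
            raise SystemExit(f"deployment inconsistent: {fingerprints}")
        print("all servers identical; safe to enable the new code path")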

  • The difference between the 'Local System' account and the 'Network Service' account?

    - by jmatthias
    I have written a Windows service that spawns a separate process, and this process creates a COM object. If the service runs under the 'Local System' account everything works fine, but if it runs under the 'Network Service' account, the external process starts up but fails to create the COM object. The error returned from the COM object creation is not a standard COM error (I think it's specific to the COM object being created). So, how do I determine how the two accounts, 'Local System' and 'Network Service', differ? These built-in accounts seem very mysterious, and nobody seems to know much about them.

  • What interprocess locking calls should I monitor?

    - by Matt Joiner
    I'm monitoring a process with strace/ltrace in the hope of finding and intercepting a call that checks, and potentially activates, some kind of globally shared lock. While I've dealt with and read about several forms of interprocess locking on Linux before, I'm drawing a blank on what calls to look for. Currently my only suspect is futex(), which comes up very early in the process' execution.

    Update0: There is some confusion about what I'm after. I'm monitoring an existing process for calls to persistent interprocess memory or the equivalent. I'd like to know what system and library calls to look for. I have no intention of calling these myself, so naturally futex() will come up; I'm sure many libraries implement their locking calls in terms of it, etc.

    Update1: I'd like a list of function names, or a link to documentation, that I should monitor at the ltrace and strace levels (specifying which level for each). Any other good advice about how to track down the global lock in question would be great.

  • Web-Cam Video frames to System.Drawing.Bitmap

    - by Hooch
    Hello, I'm using the AForge.NET framework to process some data from a webcam. My image processor is working:

        class WebProc
        {
            public void Process(System.Drawing.Bitmap image)
            {
                ....
            }
        }

    I know how to feed it an image from the HDD. I was thinking about something like this:

        private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
        {
            while (!backgroundWorker1.CancellationPending)
            {
                Bitmap frame = GetVideoFrameSomeWay();  // pseudocode
                webProc.Process(frame);
            }
        }

    Can you help me with these questions?

        1. How do I get a frame from the webcam?
        2. How do I list all the available webcams for the user to choose from?
        3. How do I open the webcam settings window?
        4. Is there a way to set the frame size to 640x480 or a user-chosen size?
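
    AForge.NET is the right tool on the .NET side; purely for illustration of the capture loop itself (questions 1 and 4), here is the equivalent sketched with Python's OpenCV bindings, using device index 0 and requesting 640x480:

        import cv2

        # Open the first webcam; trying indices 0, 1, ... is the usual way to
        # enumerate devices (OpenCV itself exposes no richer device listing).
        cap = cv2.VideoCapture(0)
        cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)     # request a 640x480 frame
        cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

        while True:
            ok, frame = cap.read()                 # grab one frame per iteration
            if not ok:
                break
            # `frame` is a numpy array (BGR); hand it to the processing code here
            cv2.imshow("webcam", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
                break

        cap.release()
        cv2.destroyAllWindows()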
