Search Results

Search found 22000 results on 880 pages for 'worker process'.

Page 12/880 | < Previous Page | 8 9 10 11 12 13 14 15 16 17 18 19  | Next Page >

  • Oracle Tutor: Top 10 to Implement Sustainable Policies and Procedures

    - by emily.chorba(at)oracle.com
    Overview

    Your organization (executives, managers, and employees) understands the value of having written business process documents (process maps, procedures, instructions, reference documents, and form abstracts). Policies and procedures should be documented because they help to reduce the range of individual decisions and encourage management by exception: the manager only needs to give special attention to unusual problems not covered by a specific policy or procedure. As more and more procedures are written to cover recurring situations, managers will begin to make decisions that are consistent from one functional area to the next.

    Companies should take a project management approach when implementing an environment for a sustainable documentation program and do the following:

    1. Identify an executive champion
    2. Put together a winning team
    3. Assign ownership
    4. Centralize publishing
    5. Establish the document maintenance process up front
    6. Document critical activities only
    7. Document actual practice
    8. Minimize documentation
    9. Support continuous improvement
    10. Keep it simple

    1. Identify an Executive Champion

    Appoint a top-down driver. Select one key individual to be a mentor for the procedure planning team. The individual should be a senior manager, such as your company president, CIO, CFO, or the vice president of quality, manufacturing, or engineering. Written policies and procedures can be important supportive aids when they are known to express the thinking of the chief executive officer and/or the president and to have his or her full support.

    2. Put Together a Winning Team

    Choose a strong project management leader and staff the procedure planning team with management members from cross-functional groups. Make sure team members have the responsibility -- and the authority -- to make things happen. The winning team should consist of the Documentation Project Manager, Document Owners (one for each functional area), a Document Controller, and Document Specialists (as needed). The Tutor Implementation Guide has complete job descriptions for these roles.

    3. Assign Ownership

    It is virtually impossible to keep process documentation simple and meaningful if it is created by employees who are far removed from the activity itself, and impossible to keep documentation up to date when responsibility for the document is not clearly understood. Key to the Tutor methodology, therefore, is the concept of ownership. Each document has a single owner, who is responsible for ensuring that the document is necessary and that it reflects actual practice. The owner must be a person who is knowledgeable about the activity and who has the authority to build consensus among the persons who participate in the activity, as well as the authority to define or change the way an activity is performed. The owner must be an advocate of the performers and negotiate, not dictate, practices. In the Tutor environment, a document's owner is the only person with the authority to approve an update to that document.

    4. Centralize Publishing

    Although it is tempting (especially in a networked environment with document management software) to decentralize the control of all documents -- with each owner updating and distributing his own -- Tutor promotes centralized publishing by assigning a Document Administrator (gatekeeper) to manage the updates and distribution of the procedures library.

    5. Establish a Document Maintenance Process Up Front (and Stick to It)

    Everyone in your organization should know that they are invited to suggest changes to procedures and should understand exactly what steps to take to do so. Tutor provides a set of procedures to help your company set up a healthy document control system. There are many document management products available to automate some of the document change and maintenance steps. Depending on the size of your organization, a simple document management system can reduce the effort it takes to track and distribute document changes and updates. Whether your company decides to store the written policies and procedures on a file server or in a database, the essential tasks for maintaining documents are the same, though some tasks are automated.

    6. Document Critical Activities Only

    The best way to keep your documentation simple is to reduce the number of process documents to a bare minimum and to include in those documents only as much detail as is absolutely necessary. The first step to reducing process documentation is to document only those activities that are deemed critical. Not all activities require documentation. In fact, some critical activities cannot and should not be standardized. Others may be sufficiently documented with an instruction or a checklist and may not require a procedure. A document should only be created when it enhances the performance of the employee performing the activity; if it does not help the employee, there is no reason to maintain it. Activities that represent little risk (such as project status), activities that cannot be defined in terms of specific tasks (such as product research), and activities that can be performed in a variety of ways (such as advertising) often do not require documentation. Sometimes an activity will evolve to the point where documentation is necessary. For example, an activity performed by a single employee may be straightforward and uncomplicated -- that is, until the activity is performed by multiple employees. Sometimes it is the interaction between co-workers that necessitates documentation; sometimes it is the complexity or the diversity of the activity.

    7. Document Actual Practice

    The only reason to maintain process documentation is to enhance the performance of the employee performing the activity, and documentation can only enhance performance if it reflects reality -- that is, current best practice. Documentation that reflects an unattainable ideal or outdated practices will end up on the shelf, unused and forgotten. Documenting actual practice means (1) auditing the activity to understand how the work is really performed, (2) identifying best practices with employees who are involved in the activity, (3) building consensus so that everyone agrees on a common method, and (4) recording that consensus.

    8. Minimize Documentation

    One way to keep it simple is to document at the highest level possible; that is, include in your documents only as much detail as is absolutely necessary. When writing a document, ask yourself: what is the purpose of this document -- what problem will it solve? By focusing on this question, you can target the critical information.
    • What questions are the end users likely to have?
    • What level of detail is required?
    • Is any of this information extraneous to the document's purpose?
    Short, concise documents are user friendly and easier to keep up to date.

    9. Support Continuous Improvement

    Employees who perform an activity are often in the best position to identify improvements to the process. In other words, continuous improvement is a natural byproduct of the work itself -- but only if the improvements are communicated to all employees who are involved in the process, and only if there is consensus among those employees. Traditionally, process documentation has been used to dictate performance and to limit employees' actions. In the Tutor environment, process documents are used to communicate improvements identified by employees. How does this work? The Tutor methodology requires a process document to reflect actual practice, so the owner of a document must routinely audit its content: does the document match what the employees are doing? If it doesn't, the owner has the responsibility to evaluate the process, build consensus among the employees, identify "best practices," and communicate these improvements via a document update. Continuous improvement can also be an outgrowth of corrective action -- but only if the solutions to problems are communicated effectively. The goal should be to solve a problem once and only once, which means not only identifying the solution but ensuring that the solution becomes part of the process. The Tutor system provides the method through which improvements and solutions are documented and communicated to all affected employees in a cost-effective, timely manner; it ensures that improvements are not lost or confined to a single employee.

    10. Keep It Simple

    Process documents don't have to be complex and unfriendly. In fact, the simpler the format and organization, the more likely the documents will be used. And the simpler the method of maintenance, the more likely the documents will be kept up to date. Keep it simple by:
    • Minimizing the skills and training required
    • Following the established Tutor document format and layout
    • Avoiding technology just for technology's sake
    No other rule has as major an impact on the success of your internal documentation as: keep it simple.

    Learn More

    For more information about Tutor, visit Oracle.com or the Tutor Blog. Post your questions at the Tutor Forum.

    Emily Chorba
    Principal Product Manager, Oracle Tutor & BPM

    Read the article

  • .NET4: In-Process Side-by-Side Execution Explained

    - by emptyset
    Overview: I'm interested in learning more about the .NET 4 "In-Process Side-by-Side Execution" of assemblies, and need additional information to help me demystify it.
    Motivation: The application in question is built against .NET 2 and uses two third-party libraries that also work against .NET 2. The application is deployed (via file copy) to client machines in a virtual environment that includes .NET 2. Not my architecture; please bear with me.
    Goal: To see if it's possible to rebuild the application assemblies (or a subset) against .NET 4 and ship the application as before, without changing the third-party libraries, including the .NET 4 Client Profile (as described here) in the deployment.
    Steps Taken: The following articles were read, but didn't quite provide enough information:
    • In-Process Side-by-Side Execution: Scenario Two is the closest it comes to describing something that resembles my situation, but it doesn't cover it with any depth.
    • ASP.NET Side-by-Side Execution Overview: This article covers a web application, but I'm dealing with a client WinForms application.
    • CLR Team Blog: In-Process Side-by-Side: This is useful for explaining how plug-ins to host processes behave under .NET 4, but I don't know if this applies to the third-party libraries.
    Further Steps: I'm also unclear on how to proceed with upgrading a single .NET 2 assembly to .NET 4 while the overall application remains on .NET 2 (i.e. how to configure the solution/project files, whether any special code needs to be included, etc.).
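
    One configuration worth experimenting with, sketched under the assumption that the main executable is rebuilt against .NET 4 while the third-party libraries stay on .NET 2 (pure managed .NET 2 assemblies roll forward into the v4 process on their own; the legacy policy flag matters for mixed-mode ones):

        <?xml version="1.0"?>
        <configuration>
          <!-- sketch: allow legacy (.NET 2 era) mixed-mode assemblies to
               load into the v4 process -->
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <supportedRuntime version="v4.0"
                              sku=".NETFramework,Version=v4.0,Profile=Client"/>
          </startup>
        </configuration>

    On the single-assembly question: the Target Framework is set per project in the project properties, so one assembly can be retargeted to 4.0 on its own, but note that a .NET 2 executable cannot load a v4 assembly -- the retargeted assembly has to sit at or above the entry point.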

    Read the article

  • Emacs Lisp: how to set encoding for call-process

    - by RamyenHead
    I thought I knew how to set the coding system (or encoding): use process-coding-system-alist. Apparently, it's not working.

        ;; -*- coding: utf-8 -*-
        (require 'cl)
        (let ((process-coding-system-alist
               '("cygwin/bin/bash" . (utf-8-dos . utf-8-unix))))
          (setq my-words (list "Lilo" "?_?" "_?" "?_" "?" "Stitch")
                my-cygwin-bash "C:/cygwin/bin/bash.exe"
                my-outbuf (get-buffer-create "*my cygwin bash echo test*"))
          (with-current-buffer my-outbuf
            (goto-char (point-max))
            (loop for word in my-words
                  do (insert (concat "echo " word "\n"))
                     (call-process my-cygwin-bash nil my-outbuf nil
                                   "-c" (concat "echo " word))))
          (display-buffer my-outbuf))

    Running the above code, the output is this:

        echo Lilo
        Lilo
        echo ?_?
        /usr/bin/bash: -c: line 0: unexpected EOF while looking for matching `"'
        /usr/bin/bash: -c: line 1: syntax error: unexpected end of file
        echo _?
        /usr/bin/bash: -c: line 0: unexpected EOF while looking for matching `"'
        /usr/bin/bash: -c: line 1: syntax error: unexpected end of file
        echo ?_
        /usr/bin/bash: $'echo \346\267\205?': command not found
        echo ?
        /usr/bin/bash: -c: line 0: unexpected EOF while looking for matching `"'
        /usr/bin/bash: -c: line 1: syntax error: unexpected end of file
        echo Stitch
        Stitch

    Anything sent to Cygwin in Unicode is failing (MS Windows, Korean).
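
    Two observations, offered as hedged guesses rather than a verified fix. First, process-coding-system-alist is an alist of (REGEXP . (DECODING . ENCODING)) entries, so the let form above binds it to a malformed value; one more level of parentheses is needed. Second, on MS Windows the command-line arguments themselves may be encoded per locale-coding-system rather than the process coding system, in which case no alist entry will help text passed through "-c". A sketch of the corrected binding:

        ;; note the doubled parentheses: the value is a LIST of entries,
        ;; each (REGEXP . (DECODING . ENCODING)), matched against the program
        (let ((process-coding-system-alist
               '(("cygwin" . (utf-8-dos . utf-8-unix)))))
          (call-process "C:/cygwin/bin/bash.exe" nil t nil "-c" "echo test"))

    For testing, coding-system-for-write and coding-system-for-read override the alist entirely for a single call.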

    Read the article

  • Why is calling Process.killProcess(Process.myPid()) a bad idea?

    - by Tal Kanel
    I've read some posts saying using this method is "not good", that it shouldn't be used, that it's not the right way to "close" the application, and that it's not how Android works... I understand and accept the fact that the Android OS knows better than I do when it's the right time to terminate the process, but I haven't yet heard a good explanation of why it's wrong to use the killProcess() method. After all, it's part of the Android API... What I do know is that calling this method while other threads are doing potentially important work (operations on files, writing to the DB, HTTP requests, running services...) can terminate them in the middle, and that's clearly not good. I also know I can benefit from the fact that "re-opening" the application will be faster, because the system may still hold its memory state from the last time it was used, and killProcess() prevents that. Besides these reasons -- assuming I don't have such operations and I don't care that my application loads from scratch each run -- are there other reasons not to use the killProcess() method? I know about the finish() method for closing an Activity, so please don't write to me about that... finish() is only for an Activity, not for the whole application, and I think I know exactly why and when to use it. And another thing: I also develop games with the Unity3D framework and export the projects to Android. When I decompiled the generated apk, I was very surprised to find that the Java source code Unity generates implements Unity's Application.quit() method with Process.killProcess(Process.myPid()). Application.quit() is supposed to be the right way to close a game according to the Unity3D guides (is it really?? maybe I'm wrong and missed something), so how is it that Unity's framework developers, who seem to do very good work, implemented this on native Android as killProcess()? Anyway, I wish to have a "list of reasons" why not to use the killProcess() method, so please write down your answer if you have something interesting to say about that. TIA
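
    For contrast, a sketch of the alternatives usually recommended in place of killing the process (assuming activity is the current foreground Activity; which one fits depends on the app's task structure):

        // let Android reclaim the process when it actually needs the memory,
        // instead of throwing the warm process state away immediately
        activity.moveTaskToBack(true);  // "exit" to home, keep the task's state
        activity.finishAffinity();      // API 16+: finish every activity in the task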

    Read the article

  • Access Denied while using System.Diagnostics.Process

    - by Mike C
    I am trying to use the unmanaged ImageMagick library in my ASP.NET application from the command line using System.Diagnostics.Process. Basically, users will upload an .eps file to the site, and then I will run the command-line tool to convert it into a .jpg. This is the code I'm using to try and run the command:

        Dim proc As New System.Diagnostics.Process
        proc.StartInfo.RedirectStandardOutput = True
        proc.StartInfo.RedirectStandardError = True
        proc.StartInfo.FileName = "C:\Program Files (x86)\ImageMagick-6.6.1-Q16\convert.exe"
        proc.StartInfo.UseShellExecute = False
        proc.StartInfo.Arguments = String.Format("{0} {1}", Server.MapPath("~/logo/test.eps"), _
            Server.MapPath("~/certificates/temp/test-1234.jpg"))
        proc.StartInfo.CreateNoWindow = True
        proc.Start()

    I am able to run this code just fine on our development Win 2k3 server, but not on our production Win 2k3 server. I get the error "System.ComponentModel.Win32Exception: Access is denied". The main difference between the two servers is that production is 64-bit and runs Plesk to manage multiple domains. I've tried adding rights for the ASP.NET user to the ImageMagick directory. The PS admin says that in the case of Plesk, it's the same account that I use to access the site in VS using FPE. Does anyone know what I might do in order to allow this process to run on my production server? Thanks, Mike
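
    One avenue worth trying, sketched under the assumption that the Plesk-managed application pool identity lacks execute rights on convert.exe (the account name and SecureString variable are hypothetical):

        proc.StartInfo.Domain = "PRODSERVER"
        proc.StartInfo.UserName = "ImageMagickRunner"
        proc.StartInfo.Password = securePassword    ' a System.Security.SecureString
        proc.StartInfo.LoadUserProfile = True       ' some exes fail without a profile
        proc.Start()

    Comparing the application pool identities between the two servers, and checking whether the 64-bit pool can run the 32-bit convert.exe path, are the other obvious differences to rule out.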

    Read the article

  • Checking if a console application is still running using the Process class

    - by Ced
    I'm making an application that will monitor the state of another process and restart it when it stops responding, exits, or throws an error. However, I'm having trouble making it reliably check whether the process (a C++ console window) has stopped responding. My code looks like this:

        public void monitorserver()
        {
            while (true)
            {
                server.StartInfo = new ProcessStartInfo(textbox_srcdsexe.Text, startstring);
                server.Start();
                log("server started");
                log("Monitor started.");
                while (server.Responding)
                {
                    if (server.HasExited)
                    {
                        log("server exited, restarting.");
                        break;
                    }
                    log("server is running: " + server.Responding.ToString());
                    Thread.Sleep(1000);
                }
                log("Server stopped responding, terminating..");
                try
                {
                    server.Kill();
                }
                catch (Exception) { }
            }
        }

    The application I'm monitoring is Valve's Source Dedicated Server, running Garry's Mod, and I'm over-stressing the physics engine to simulate it stopping responding. However, this never causes the Process class to recognize it as "stopped responding". I know there are ways to query the Source server directly using its own protocol, but I'd like to keep it simple and universal (so that I can maybe use it for different applications in the future). Any help appreciated.
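
    A caveat that may explain the behaviour: Process.Responding is based on the GUI message pump, so for a pure console process it generally reports true no matter what. A sketch of an alternative, treating prolonged silence on redirected output as the hang signal (the 30-second threshold is an assumption, and stdout must be redirected before Start):

        server.StartInfo.UseShellExecute = false;
        server.StartInfo.RedirectStandardOutput = true;
        var lastOutput = DateTime.UtcNow;
        server.OutputDataReceived += (s, e) =>
        {
            if (e.Data != null) lastOutput = DateTime.UtcNow;  // any line counts
        };
        server.Start();
        server.BeginOutputReadLine();
        while (!server.HasExited)
        {
            if (DateTime.UtcNow - lastOutput > TimeSpan.FromSeconds(30))
            {
                log("no console output for 30s, assuming a hang");
                server.Kill();
                break;
            }
            Thread.Sleep(1000);
        }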

    Read the article

  • Why would a process monitoring script use exit 1; on finding no problems?

    - by user568458
    General question: On a Linux (CentOS) server, if a process monitoring script run by cron is set to close with exit 1; rather than exit 0; on finding that everything is okay and that no action is needed, is that a mistake? Or are there legitimate reasons for calling exit 1; instead of exit 0; on the "everything's fine, no action needed" condition? exit 0; on finding no problems seems to me to be more appropriate. But maybe there's something I'm not aware of. For example, maybe there's something specific to cron? Or maybe there's a convention in process monitoring scripts where "failure" means "this script didn't need to fix anything" (rather than what I would expect, which is that exit 1; would mean "the process being monitored has failed")? My specific case: I'm looking at a process monitoring script written by my web hosting company. By process monitoring script, I mean a script executed by cron on a regular basis that checks if an important system process is running, and if it isn't running, takes actions such as mailing an administrator or restarting the process. Here's the (generalised) structure of their script, for a service running on port 8080 (in this case, Apache Tomcat):

        SERVICE=$(/usr/sbin/lsof -i tcp:8080 | wc -l);
        if [ $SERVICE != 0 ]; then
            exit 1;
        else
            #take action
        fi

    Seems simple enough even for someone with limited knowledge like me, except the exit 1; part seems odd. As I understand it, exit 0; closes a program and signifies to the parent that executed the program that everything is fine, while exit n; where n > 0 and n < 127 signifies that there has been some kind of error or problem. Here, their script seems to go against that rule: it calls exit 1; in the condition where everything is fine, and doesn't exit after taking remedial action in the problem condition. To me, this looks like a mistake, but my experience in this area is limited. Are there cases where calling exit 1; in the "everything's fine, no action needed" condition is more appropriate than calling exit 0;? Or is it a mistake? The wider context is pretty simple. It's a CentOS VPS, running Plesk. The script is being called by cron via Plesk's "Scheduled tasks" cron manager. There's no custom layer between cron and this script that would respond in an unusual way to the exit call. It's a fairly average, almost out-of-the-box Plesk-managed CentOS VPS (in so far as there is such a thing). The process being monitored by this script is Apache Tomcat.
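
    For reference, a sketch of the layout that follows the usual convention, so that cron wrappers which alert on non-zero status behave sensibly (restart_service stands in for whatever remedial action the real script takes):

        # exit 0 = healthy, nothing done; non-zero = something needed fixing
        if /usr/sbin/lsof -i tcp:8080 >/dev/null; then
            exit 0
        else
            restart_service     # hypothetical remedial action
            exit 1
        fi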

    Read the article

  • How to use process monitor to view or log a windows login?

    - by leeand00
    We're having some issues with Windows 7 roaming profiles, and I was reading here that the login process can be monitored using Process Monitor. "There are a couple of ways to configure Process Monitor to record logon operations: one is to use Sysinternals PsExec to launch it in session 0 so that it survives the logoff and subsequent logon, and another is to use the boot logging feature to capture activity from early in the boot, including the logon." How does one do either of these using Process Monitor to find out what is happening during a user login?
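
    One way to do the PsExec variant, sketched under the assumption that procmon.exe and PsExec are both on the PATH:

        rem run Process Monitor in session 0 so the capture survives the
        rem logoff and the subsequent logon (-s = SYSTEM, -d = don't wait)
        psexec -accepteula -s -i 0 -d procmon.exe

    For the boot-logging variant, enable Options > Enable Boot Logging inside Process Monitor, reboot and log on, then start Process Monitor again to save the captured trace.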

    Read the article

  • I have an Apache process that takes 98% CPU. How can I find which request it is serving?

    - by Nir
    As you can see below, a single Apache process hangs and consumes a large amount of CPU. How can I find which HTTP request this Apache process is serving?

        PID   USER     PR NI VIRT  RES  SHR  S %CPU %MEM TIME+    COMMAND
        12554 www-data 20 0  776m 285m 199m R   97  3.7 67:15.84 apache2
        14580 www-data 20 0  748m 372m 314m S    4  4.8  0:13.60 apache2
        12561 www-data 20 0  784m 416m 322m S    3  5.4  0:58.10 apache2
        12592 www-data 20 0  785m 427m 332m S    2  5.6  0:57.06 apache2
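
    Two starting points, both assuming shell access to the box: mod_status can name the request each worker is serving, and strace shows what the process is doing right now.

        # with mod_status loaded and "ExtendedStatus On" in the Apache config,
        # the status page lists the client and request per worker PID
        curl http://localhost/server-status

        # or attach to the hot worker from the top output above
        strace -p 12554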

    Read the article

  • Getting TF215097 error after modifying a build process template in TFS Team Build 2010

    - by Jakob Ehn
    When embracing Team Build 2010, you typically want to define several different build process templates for different scenarios. Common examples here are CI builds, QA builds and release builds. For example, in a continuous build you often have no interest in publishing to the symbol store, and you might or might not want to associate changesets and work items, etc. The build server is often heavily occupied as it is, so you don't want it doing more than necessary. Try to define a set of build process templates that are used across your company. In previous versions of TFS Team Build, there was no easy way to do this. But in TFS 2010 it is very easy, so there is no excuse not to do it! :-)

    I ran into a scenario today where I had an existing build definition that was based on our release build process template. In this template, we have defined several different build process parameters that control the release build. These are placed in their own section in the Build Process Parameters editor. This is done using the ProcessParameterMetadataCollection element; I will explain how this works in a future post.

    I won't go into details on these parameters; the issue for this blog post is what happens when you modify a build process template so that it is no longer compatible with the build definition, i.e. a breaking change. In this case, I removed a parameter that was no longer necessary. After merging the new build process template to one of the projects and queuing a new release build, I got this error:

        TF215097: An error occurred while initializing a build for build definition <Build Definition Name>: The values provided for the root activity's arguments did not satisfy the root activity's requirements: 'DynamicActivity': The following keys from the input dictionary do not map to arguments and must be removed: <Parameter Name>. Please note that argument names are case sensitive. Parameter name: rootArgumentValues

    <Parameter Name> was the parameter that I removed, so it was pretty easy to understand why the error had occurred. However, it is not entirely obvious how to fix the problem. When I open the build definition, everything looks OK: the removed build process parameter is not there, and I can open the build process template without any validation warnings. The problem here is that all settings specific to a particular build definition are stored in the TFS database. In TFS 2005, everything related to a build was stored in TFS source control in files (TFSBuild.proj, WorkspaceMapping.xml...). In TFS 2008, many of these settings were moved into the database. Still, lots of things were stored in TFSBuild.proj, such as the solution and configuration to build, and whether to execute tests or not. In TFS 2010, all settings for a build definition are stored in the database. If we look inside the database, we can see what this looks like. The table tbl_BuildDefinition contains all information for a build definition. One of the columns is called ProcessParameters and contains a serialized representation of a Dictionary, which is the underlying object where these settings are stored.

    Here is an example:

        <Dictionary x:TypeArguments="x:String, x:Object"
                    xmlns="clr-namespace:System.Collections.Generic;assembly=mscorlib"
                    xmlns:mtbwa="clr-namespace:Microsoft.TeamFoundation.Build.Workflow.Activities;assembly=Microsoft.TeamFoundation.Build.Workflow"
                    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <mtbwa:BuildSettings x:Key="BuildSettings" ProjectsToBuild="$/PathToProject.sln">
            <mtbwa:BuildSettings.PlatformConfigurations>
              <mtbwa:PlatformConfigurationList Capacity="4">
                <mtbwa:PlatformConfiguration Configuration="Release" Platform="Any CPU" />
              </mtbwa:PlatformConfigurationList>
            </mtbwa:BuildSettings.PlatformConfigurations>
          </mtbwa:BuildSettings>
          <mtbwa:AgentSettings x:Key="AgentSettings" Tags="Agent1" />
          <x:Boolean x:Key="DisableTests">True</x:Boolean>
          <x:String x:Key="ReleaseRepositorySolution">ERP</x:String>
          <x:Int32 x:Key="Major">2</x:Int32>
          <x:Int32 x:Key="Minor">3</x:Int32>
        </Dictionary>

    Here we can see that only the non-default values are persisted into the database. So, the problem in my case was that I removed one of the parameters from the build process template, but the parameter and its value still existed for the build definition in the database. The solution to the problem is to refresh the build definition and save it. In the Process tab, there is a Refresh button that will reload the build definition and the process template and synchronize them. After refreshing the build definition and saving it, the build ran successfully again.

    Read the article

  • Process Centric Banking: Loan Origination Solution

    - by Manish Palaparthy
    There is an old proverb that goes, "The difference between theory and practice is greater in practice than in theory." So we keep doing numerous proofs of concept with our own products on various business cases to analyze them deeply, understand them, and explain them to our customers. We then present our learnings as they happened. The awareness of each PoC should help readers increase the trustworthiness of the results coming out of these PoCs. I present one such PoC, in which we invested a lot of time and effort.

    Process Centric Banking: Loan Origination Solution

    Loan origination is a process by which a borrower applies for a new loan and the lender processes that application. It includes the series of steps taken by the bank from the point the customer shows interest in a loan product all the way to disbursal of funds. The loan origination process is relevant for many kinds of lenders in financial services: banks, credit unions, NBFCs (non-banking financial companies) and so on. For simplicity's sake, I will use "bank" as the lending institution in the rest of this article.

    Loan origination is one of the core processes for banks, as it is the process by which they create the assets from which the institution earns most of its profits. A well-tuned loan origination process can affect the bank in many positive ways, so banks have always shown great interest in automating it. However, due to constant changes in the customer environment, market dynamics, prevailing economic conditions, cost pressures and the regulatory environment, they run into a lot of challenges. Let me categorize some of these challenges for you.

    Customer environment
    • Multiple channels: Customers can use any of the available channels (internet banking, email, fax, branch, phone banking, ATM, broker, mobile, snail mail) to perform all or some of the activities related to a loan.
    • Visibility into the origination process: Customers expect immediate updates on the status of loan processing and alert messages.
    • Reduced turnaround time: Customers expect loans to be processed with the least possible turnaround time.
    • Reduced loan processing fees: Partly due to market dynamics, the customer expects the loan processing fee to be negligible.

    Market dynamics
    • Competitive environment: The competition keeps creating many variants of loan products to attract customers; the bank needs to create similar product variants with better offers to attract new customers or keep existing ones.
    • Ability to migrate loans from one vendor to another: It has become really easy for retail customers to move from one bank to another, given the low fee of loan processing and highly attractive offers. How does the bank protect its customer base while actively engaging with potential customers banking with competitors?
    • Flexibility to react to market developments: Market developments greatly influence loan processing, underwriting, asset valuation and risk mitigation rules. Can the bank modify rules and policies? The idea is not just to react to market developments but to proactively manage new developments.

    Economic conditions
    • Constant change in various rates and their implications on the rates and rules applied when onboarding a loan: How quickly can the bank apply changes to rates offered to customers when the central bank changes various rates?
    • Requirements of audit by the central banker: Tough economic conditions have demanded much more stringent audit rules and tests. The bank needs to produce ready reports (historic and operational) for audit compliance.
    • Risk mitigation: While risk mitigation has always been a key concern for the bank, this is the area where the bank's underwriters and risk analysts spend the most time when processing a loan application. In order to reduce turnaround time, the bank cannot compromise on its risk mitigation strategies.

    Cost pressures
    • Reduce the cost of processing per application: To deliver a reduced loan processing fee to the customer, the bank needs to keep its cost of processing each loan application low.
    • Meet customer turnaround expectations while reducing the queues and the systems used to process the loan application: The loan application could potentially spend a lot of time waiting in a queue for further processing. Different volumes and patterns of applications demand different queuing algorithms; the bank needs real-time visibility into these queues and the flexibility to change queuing algorithms at runtime.
    • Increase the use of electronic communication and reduce branch channel usage: Less automation not only increases turnaround time, it also adds cost to reaching out to customers.

    The objective of our PoC was to implement a loan origination solution whose ownership lies with the bank and that effectively meets the challenges listed above. We built a simple storyboard for the solution and then went about implementing it using Oracle BPM Suite and WebCenter Content: Imaging. The web UI has been built on ADF technologies, while the integration with core services has been implemented using the underlying SOA infrastructure. The BPM process model is quite exhaustive and can meet all the challenges listed above to a reasonable degree.

    A bank intending to implement an end-to-end loan origination solution has multiple options at its disposal. It can:
    • Develop a custom loan origination application from scratch: Gives maximum opportunity to build what you want, but inflexible to upgrade and maintain; higher TCO in the long term.
    • Buy a packaged application and customize it: Customizing a generic loan application can be tedious and prove as difficult as the above.
    • Build it using many disparate and un-integrated tools: Initially seems easier than developing from scratch, but without integrated tool sets this is not a viable approach either.
    • Build a solution based on a framework: Independent services and business process modeling provide a decoupled architecture that is flexible.

    We built this framework end to end, with the core process of loan origination and several sub-processes such as analyse and define customer needs, customer credit verification, identity check, legal review, new customer registration and risk assessment.

    Read the article

  • Process monitoring in C under Linux

    - by poly
    I'm trying to write a multi-threaded/multi-process application, and I need to know how to monitor a process from another process at all times. Here is what I have: two processes, each with multiple threads, that handle the network part; then another two processes, also with multiple threads, that interact with the DB and with the network processes. What I need is that if, for example, one of the network processes goes down, the DB processes start sending to the live network process until the second one is up again. I'm using a fifo between the DB and the network processes. I was thinking of sending messages with message passing all the time, but I'm not sure whether this is a good idea, whether I need to use some other IPC mechanism for this, or whether neither is good and I need to use something else entirely? Thanks,
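
    For the liveness-check part, a minimal C sketch, assuming each monitor learns its peer's pid at startup (for example over the existing fifo): signal 0 performs all the checks of kill() without delivering anything, so it doubles as a cheap "does this process still exist" probe.

        /* returns 1 while the peer process exists, 0 once it is gone */
        #include <errno.h>
        #include <signal.h>
        #include <sys/types.h>

        int peer_is_alive(pid_t pid)
        {
            if (kill(pid, 0) == 0)
                return 1;          /* exists and we may signal it */
            return errno == EPERM; /* exists, but owned by another user */
        }

    Note that polling like this only detects death after the fact; if the network processes share a parent, the parent can instead block in waitpid() and notify the DB processes over the fifo the moment a child exits.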

    Read the article

  • How to update GUI thread/class from worker thread/class?

    - by user315182
    First question here, so hello everyone. The requirement I'm working on is a small test application that communicates with an external device over a serial port. The communication can take a long time, and the device can return all sorts of errors. The device is nicely abstracted in its own class, which the GUI thread starts running in its own thread, and it has the usual open/close/read data/write data basic functions. The GUI is also pretty simple: choose COM port, open, close, show data read or errors from the device, allow modification and write back, etc. The question is simply how to update the GUI from the device class. There are several distinct types of data the device deals with, so I need a relatively generic bridge between the GUI form/thread class and the working device class/thread. In the GUI-to-device direction everything works fine with [Begin]Invoke calls for open/close/read/write etc. on various GUI-generated events. I've read the thread here (How to update GUI from another thread in C#?) where the assumption is made that the GUI and worker thread are in the same class. Google searches turn up how to create a delegate or how to create the classic BackgroundWorker, but that's not at all what I need, although they may be part of the solution. So, is there a simple but generic structure that can be used? My level of C# is moderate and I've been programming all my working life; given a clue I'll figure it out (and post back)... Thanks in advance for any help.
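
    One generic structure (a sketch; the names are hypothetical): have the device class capture the GUI thread's SynchronizationContext when it is constructed on that thread, raise one general-purpose event for every kind of update, and post it back through the context. The form then subscribes without knowing anything about the worker thread.

        using System;
        using System.Threading;

        public class Device
        {
            // captured at construction; assumes the Device object is created
            // on the GUI thread (after the first Form exists)
            private readonly SynchronizationContext ui = SynchronizationContext.Current;

            public event Action<string, object> Updated;   // (kind, payload)

            protected void RaiseUpdated(string kind, object payload)
            {
                var handler = Updated;
                if (handler != null)
                    ui.Post(_ => handler(kind, payload), null); // runs on GUI thread
            }
        }

    The form side is then just a plain device.Updated subscription, with no Invoke plumbing anywhere in the worker class's read/write code.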

    Read the article

  • Java Process "The pipe has been ended" problem

    - by Amit Kumar
    I am using the Java Process API to write a class that receives binary input from the network (say via TCP port A), processes it, and writes binary output to the network (say via TCP port B). I am using Windows XP. The code looks like this. There are two functions, run() and receive(): run is called once at the start, while receive is called whenever there is new input received via the network. Run and receive are called from different threads. The run function starts an exe and obtains the input and output streams of the exe. Run also starts a new thread to write output from the exe onto port B.

        public void run() {
            try {
                Process prc = // some exe is `start`ed using ProcessBuilder
                OutputStream procStdIn = new BufferedOutputStream(prc.getOutputStream());
                InputStream procStdOut = new BufferedInputStream(prc.getInputStream());
                Thread t = new Thread(new ProcStdOutputToPort(procStdOut));
                t.start();
                prc.waitFor();
                t.join();
                procStdIn.close();
                procStdOut.close();
            } catch (Exception e) {
                e.printStackTrace();
                printError("Error : " + e.getMessage());
            }
        }

    The receive function forwards the input received from port A to the exe.

        public void receive(byte[] b) throws Exception {
            procStdIn.write(b);
        }

        class ProcStdOutputToPort implements Runnable {
            private BufferedInputStream bis;

            public ProcStdOutputToPort(BufferedInputStream bis) {
                this.bis = bis;
            }

            public void run() {
                try {
                    int bytesRead;
                    int bufLen = 1024;
                    byte[] buffer = new byte[bufLen];
                    while ((bytesRead = bis.read(buffer)) != -1) {
                        // write output to the network
                    }
                } catch (IOException ex) {
                    Logger.getLogger().log(Level.SEVERE, null, ex);
                }
            }
        }

    The problem is that I am getting the following stack trace inside receive(), and prc.waitFor() returns immediately afterwards. The line number shows that the stack is from the write to the exe.

        java.io.IOException: The pipe has been ended
            at java.io.FileOutputStream.writeBytes(Native Method)
            at java.io.FileOutputStream.write(FileOutputStream.java:260)
            at java.io.BufferedOutputStream.write(BufferedOutputStream.java:105)
            at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
            at java.io.BufferedOutputStream.write(BufferedOutputStream.java:109)
            at java.io.FilterOutputStream.write(FilterOutputStream.java:80)
            at xxx.receive(xxx.java:86)

    Any advice about this will be appreciated.
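
    "The pipe has been ended" on a write almost always means the child process has already exited. Two things worth checking, offered as hedged guesses since the ProcessBuilder start is elided above: the exe's stderr is never drained, and a child that blocks on its error pipe can stall or die early; and the BufferedOutputStream in receive() is never flushed, so input reaches the exe late. A sketch of the first fix ("the.exe" is a placeholder):

        // merge stderr into the stdout pipe the reader thread already drains,
        // so the child can never stall on undrained error output
        ProcessBuilder pb = new ProcessBuilder("the.exe");
        pb.redirectErrorStream(true);
        Process prc = pb.start();

    Adding procStdIn.flush() after the write in receive(), and logging prc.exitValue() when the exception fires, should show why the child went away.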

    Read the article

  • Starting Firefox using Process.Start: Firefox not starting when you set Username and Password

    - by Mohammadreza
    Hi there. When I try to start Firefox using Process.Start and ProcessStartInfo (.NET), everything seems to work fine. But when I specify a username and password of another account (a member of Users), nothing seems to happen. The same code works fine with calc.exe or IE. This is weird. Any ideas? Here is the code:

        System.Diagnostics.ProcessStartInfo pInfo = new System.Diagnostics.ProcessStartInfo();
        pInfo.CreateNoWindow = false;
        pInfo.WindowStyle = System.Diagnostics.ProcessWindowStyle.Normal;
        pInfo.WorkingDirectory = "{WorkingDirectory}";
        pInfo.Arguments = "{CommandLineArgs}";
        pInfo.FileName = "{ExecutableAddress}";
        pInfo.ErrorDialog = true;
        pInfo.UseShellExecute = false;
        pInfo.UserName = "{LimitedAccountUserName}";
        pInfo.Password = "{SecureLimitedAccountPassword}";
        System.Diagnostics.Process.Start(pInfo);

    Thanks everyone.
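
    One hedged guess at the missing piece: unlike calc.exe, Firefox reads and writes a per-user profile, which is not loaded by default when a process is started under alternate credentials.

        // a sketch: load the target account's registry hive and profile
        // directory; Domain here assumes a local machine account
        pInfo.LoadUserProfile = true;
        pInfo.Domain = Environment.MachineName;

    It is also worth giving the limited account a WorkingDirectory it can actually read, since the path is validated under the new credentials.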

    Read the article

  • .NET Build Process

    - by Nix
    I am looking for the best free set of tools to be used in an MS-based build process: checkout, build, package, test, deploy, etc. I know this question has been asked before, but it was over 2 years ago, and in our world that is an eternity. I am looking to develop a pattern that is easily adapted to similar projects -- almost like a template/cookie-cutter system. I am currently looking into using the CruiseControl, PowerShell and MSBuild suite of tools. If we choose to move to 4.0, will we have issues? Are there better alternatives? Limitations? Or will these pretty much meet my needs? One piece that I am never happy with is the process of packaging. We actually opted in the past to just use Visual Studio deployment projects, but those are very ancient, and my fear is WiX will be too complicated for the people implementing it.

    Read the article

  • Spawn a background process in Ruby

    - by Dave DeLong
    I'm writing a Ruby bootstrapping script for a school project, and part of this bootstrapping process is to start a couple of background processes (which are written and function properly). What I'd like to do is something along the lines of:

        `/path/to/daemon1 &`
        `/path/to/daemon2 &`
        `/path/to/daemon3 &`

    However, that blocks on the first call to execute daemon1. I've seen references to a Process.spawn method, but that seems to be a 1.9+ feature, and I'm limited to Ruby 1.8. I've also tried to execute these daemons from different threads, but I'd like my bootstrap script to be able to exit. So how can I start these background processes so that my bootstrap script doesn't block and can exit (but still have the daemons running in the background)? Thanks!
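
    On 1.8 the same effect is available from the POSIX primitives that Process.spawn later wrapped. A sketch for a Unix-like system:

        # fork a child, replace it with the daemon via exec, and detach so
        # the parent can exit immediately without leaving zombies behind
        %w[/path/to/daemon1 /path/to/daemon2 /path/to/daemon3].each do |daemon|
          pid = fork { exec(daemon) }
          Process.detach(pid)
        end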

    Read the article

  • How to start a process from within a Windows service

    - by BaBu
    I want to pop up a browser with a given URL from within a Windows service, like so:

        System.Diagnostics.Process.Start("http://www.venganza.org/");

    This works fine when running in a console but not from within the service. No error messages, no exceptions; the Process.Start() command just seems to do nothing. It smells of a security issue, maybe something with the service properties and/or logon options? Annoying stuff, this... Anybody? (Oh, and this is on Windows 7/.NET Framework 3.5.)
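
    For background: on Vista and later (including Windows 7), services run in the isolated session 0 and cannot show UI on a user's desktop, so the launch succeeds at nothing visible. A common workaround, sketched with a hypothetical file-drop path: keep the service UI-free and run a tiny agent in the user's session (e.g. via the Startup folder) that opens whatever the service asks for.

        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Threading;

        class UrlAgent
        {
            // hypothetical hand-off file written by the service
            const string Queue = @"C:\ProgramData\MyService\open-url.txt";

            static void Main()
            {
                while (true)
                {
                    if (File.Exists(Queue))
                    {
                        string url = File.ReadAllText(Queue).Trim();
                        File.Delete(Queue);
                        Process.Start(url); // works: we run in the user's session
                    }
                    Thread.Sleep(1000);
                }
            }
        }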

    Read the article

  • Kill a 10-minute-old zombie process in a Linux bash script

    - by Steve
    I've been tinkering with a regex answer by yukondude with little success. I'm trying to kill processes that are older than 10 minutes. I already know what the process IDs are. I'm looping over an array every 10 minutes to see if any lingering procs are around and need to be killed. Anybody have any quick thoughts on this? Thanks, Steve

        ps -eo uid,pid,etime | egrep ' ([0-9]+-)?([0-9]{2}:?){3}' | awk '{print $2}' | xargs -I{} kill {}
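
    If ps on the box supports the etimes field (elapsed time as plain seconds, available in newer procps), the age test gets much simpler than the regex. A sketch, assuming the known ids are in an array called pids:

        for pid in "${pids[@]}"; do
            age=$(ps -o etimes= -p "$pid")          # empty if the pid is gone
            if [ -n "$age" ] && [ "$age" -gt 600 ]; then
                kill "$pid"                         # older than 10 minutes
            fi
        done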

    Read the article

  • C# Process <instance>.StandardOutput InvalidOperationException "Cannot mix synchronous and asynchron

    - by Rahul2047
    I tried this:

        myProcess = new Process();
        myProcess.StartInfo.CreateNoWindow = true;
        myProcess.StartInfo.WindowStyle = ProcessWindowStyle.Hidden;
        myProcess.StartInfo.FileName = "Hello.exe";
        myProcess.StartInfo.Arguments = "-say Hello";
        myProcess.StartInfo.UseShellExecute = false;
        myProcess.OutputDataReceived += new DataReceivedEventHandler(myProcess_OutputDataReceived);
        myProcess.ErrorDataReceived += new DataReceivedEventHandler(myProcess_OutputDataReceived);
        myProcess.Exited += new EventHandler(myProcess_Exited);
        myProcess.EnableRaisingEvents = true;
        myProcess.StartInfo.RedirectStandardOutput = true;
        myProcess.StartInfo.RedirectStandardError = true;
        myProcess.StartInfo.ErrorDialog = true;
        myProcess.StartInfo.WorkingDirectory = "D:\\Program Files\\Hello";
        myProcess.Start();
        myProcess.BeginOutputReadLine();
        myProcess.BeginErrorReadLine();

    Then I am getting this error. My process takes very long to complete, so I need to show progress at runtime.
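
    The usual cause of this exception, offered as a hedged reading of the title: some other code touches myProcess.StandardOutput (or StandardError) directly, and once a stream has been read synchronously it can no longer be handed to BeginOutputReadLine(). Pick one mode per stream:

        // async mode only: lines arrive in the DataReceived handlers, so any
        // direct myProcess.StandardOutput.ReadToEnd() elsewhere must go
        myProcess.Start();
        myProcess.BeginOutputReadLine();
        myProcess.BeginErrorReadLine();

    Progress can then be reported from myProcess_OutputDataReceived, which fires once per line while the process runs.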

    Read the article

  • Redirect and parse in real time the stdout of a long-running process in VB.NET

    - by Richard
    Hello there. This code executes "handbrakecli" (a command-line application) and places the output into a string:

        Dim p As Process = New Process
        p.StartInfo.FileName = "handbrakecli"
        p.StartInfo.Arguments = "-i [source] -o [destination]"
        p.StartInfo.UseShellExecute = False
        p.StartInfo.RedirectStandardOutput = True
        p.Start()
        Dim output As String = p.StandardOutput.ReadToEnd
        p.WaitForExit()

    The problem is that this can take up to 20 minutes to complete, during which nothing is reported back to the user. Once it's completed, they'll see all the output from the application, which includes progress details. Not very useful. Therefore I'm trying to find a sample that shows the best way to:
    • Start an external application (hidden)
    • Monitor its output periodically as it displays information about its progress (so I can extract this and present a nice percentage bar to the user)
    • Determine when the external application has finished (so I can continue with my own application's execution)
    • Kill the external application if necessary and detect when this has happened (so that if the user hits "cancel", I can take the appropriate steps)
    Does anyone have any recommended code snippets?
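
    The asynchronous reading pattern covers all four points: subscribe to the per-line events before starting, begin the reads, and parse each line as it arrives. A sketch (ParseProgress is a hypothetical routine that pulls the percentage out of a HandBrake progress line):

        p.StartInfo.RedirectStandardError = True    ' HandBrake also logs here
        AddHandler p.OutputDataReceived, AddressOf OnLine
        AddHandler p.ErrorDataReceived, AddressOf OnLine
        p.Start()
        p.BeginOutputReadLine()
        p.BeginErrorReadLine()
        p.WaitForExit()

        Private Sub OnLine(ByVal sender As Object, ByVal e As DataReceivedEventArgs)
            If e.Data IsNot Nothing Then ParseProgress(e.Data)
        End Sub

    The Exited event (with EnableRaisingEvents = True) signals completion, and p.Kill() plus that same event covers the cancel case.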

    Read the article

  • Where to start when programming process synchronization algorithms like clone/fork, semaphores

    - by David
    I am writing a program that simulates process synchronization. I am trying to implement the fork and semaphore techniques in C++, but am having trouble starting off. Do I just create a process and fork from the very beginning? Is the program just going to be one infinite loop that bounces between parent and child processes? And how do you create the idea of "shared memory" in C++ -- an explicit memory address, or just some global variable? I just need to get the overall structure/idea of the flow of the program. Any references would be appreciated.
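
    A minimal POSIX sketch (written as C, which also compiles as C++; link with -pthread on Linux) of both ideas at once. The key point: a global variable is copied, not shared, across fork(), so genuine shared memory has to come from a MAP_SHARED mapping created before the fork.

        #include <semaphore.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/wait.h>
        #include <unistd.h>

        struct shared { sem_t sem; int counter; };

        int main(void) {
            /* shared between parent and child because the mapping precedes fork */
            struct shared *sh = (struct shared *)mmap(NULL, sizeof *sh,
                PROT_READ | PROT_WRITE, MAP_SHARED | MAP_ANONYMOUS, -1, 0);
            sem_init(&sh->sem, 1, 0);      /* pshared = 1: works across processes */
            sh->counter = 0;

            if (fork() == 0) {             /* child: produce a value, then signal */
                sh->counter = 42;
                sem_post(&sh->sem);
                _exit(0);
            }
            sem_wait(&sh->sem);            /* parent: block until the child posts */
            printf("child wrote %d\n", sh->counter);
            wait(NULL);
            return 0;
        }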

    Read the article
