Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.


  • Changing the sequencing strategy for File/Ftp Adapter

    - by [email protected]
    The File/Ftp Adapter allows the user to configure the outbound write to use a sequence number. For example, if I choose address-data_%SEQ%.txt as the FileNamingConvention, then all my files would be generated as address-data_1.txt, address-data_2.txt, and so on. But where does this sequence number come from? The answer lies in the "control directory" for the particular adapter project (or scenario). In general, for every project that uses the File or Ftp Adapter, a unique directory is created for bookkeeping purposes. And since this control directory is required to be unique, the adapter uses a digest to make sure that no two control directories are the same. For example, for my FlatStructure sample, the control information for my project would go under FMW_HOME/user_projects/domains/soainfra/fileftp/controlFiles/[DIGEST]/outbound, where the value of DIGEST would differ from one project to another. If you look under this directory, you will see a file control_ob.properties, and this is where the sequence number is maintained. Please note that the sequence number is maintained in binary form, and hence you might need a hex editor to view its content. You will also see another zero-byte file, SEQ_nnn, but ignore that for now; we'll get to it some other time. For now, please remember that this extra file is maintained as a backup. One of the challenges faced by the adapter runtime is to guard all writes to the control files so that no two threads inadvertently try to update them at the same time. It does so with the help of a "mutex". For now, please remember that the mutex comes in different flavors: in-memory, DB-based, Coherence-based and user-defined. Again, we will talk about these mutexes some other time. Please note that there might be scenarios, particularly under heavy load, where the mutex might become a bottleneck. The adapter, however, allows you to change the configuration so that the adapter sequence value comes from a database sequence or a stored procedure; in that situation the mutex is actually bypassed, resulting in better throughput. In later releases, the adapter's behavior will default to using a DB sequence. The simplest way to achieve this is by switching the JNDI for your outbound JCA file to use "eis/HAFileAdapter" (see the sketch below). But what does this do? Internally, the adapter runtime creates a sequence on the Oracle database. For example, if you do a "select * from user_sequences" in your soa-infra schema, you will see a new sequence being created with a name of the form SEQ_<GUID>__, where the GUID will differ from one project to another. However, if you want to use your own sequence, you will need to add a new property to your JCA file called SequenceName, as shown in the sketch below. Please note that you will need to create this sequence in your soainfra schema beforehand. But what if we use DB2 or MSSQL Server as the dehydration store? DB2 supports sequences natively but MSSQL Server does not. So the adapter runtime uses a natively generated sequence for DB2, but for MSSQL Server the adapter relies on a stored procedure that ships with the product. If you wish to achieve the same result for SOA Suite running DB2 as the dehydration store, simply change your connection factory JNDI name in the JCA file to eis/HAFileAdapterDB2; for MSSQL, please use eis/HAFileAdapterMSSQL. And if you wish to use a stored procedure other than the one that ships with the product, you will need to rely on binding properties to override the adapter behavior. In particular, you will need to instruct the adapter, via those binding properties, that you wish to use a stored procedure. Please note that if you're using the File/Ftp Adapter in Append mode, the adapter runtime degrades the mutex to pessimistic locks, as we don't want writers from different nodes appending to the same file at the same time.
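
    A rough, hypothetical sketch of what the outbound JCA file might look like with these settings. Only the eis/HAFileAdapter JNDI name and the SequenceName property come from the text above; the surrounding structure and names are illustrative, not copied from the product documentation.

        <!-- hypothetical outbound JCA file sketch -->
        <adapter-config name="FlatStructureOut" adapter="File Adapter">
          <!-- switch the connection factory to the HA flavor -->
          <connection-factory location="eis/HAFileAdapter"/>
          <endpoint-interaction portType="Write_ptt" operation="Write">
            <interaction-spec className="oracle.tip.adapter.file.outbound.FileInteractionSpec">
              <property name="FileNamingConvention" value="address-data_%SEQ%.txt"/>
              <!-- optional: use your own sequence; create it in the soainfra schema first -->
              <property name="SequenceName" value="MY_FILE_SEQ"/>
            </interaction-spec>
          </endpoint-interaction>
        </adapter-config>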

    Read the article

  • How should I clean up hung grandchild processes when an alarm trips in Perl?

    - by brian d foy
    I have a parallelized automation script which needs to call many other scripts, some of which hang because they (incorrectly) wait for standard input. That's not a big deal because I catch those with alarm. The trick is to shut down those hung grandchild processes when the child shuts down. I thought various incantations of SIGCHLD, waiting, and process groups could do the trick, but they all block and the grandchildren aren't reaped. My solution, which works, just doesn't seem like it is the right solution. I'm not especially interested in the Windows solution just yet, but I'll eventually need that too. Mine only works for Unix, which is fine for now. I wrote a small script that takes the number of simultaneous parallel children to run and the total number of forks: $ fork_bomb <parallel jobs> <number of forks> $ fork_bomb 8 500 This will probably hit the per-user process limit within a couple of minutes. Many solutions I've found just tell you to increase the per-user process limit, but I need this to run about 300,000 times, so that isn't going to work. Similarly, suggestions to re-exec and so on to clear the process table aren't what I need. I'd like to actually fix the problem instead of slapping duct tape over it. I crawl the process table looking for the child processes and shut down the hung processes individually in the SIGALRM handler, which needs to die because the rest of real code has no hope of success after that. The kludgey crawl through the process table doesn't bother me from a performance perspective, but I wouldn't mind not doing it: use Parallel::ForkManager; use Proc::ProcessTable; my $pm = Parallel::ForkManager->new( $ARGV[0] ); my $alarm_sub = sub { kill 9, map { $_->{pid} } grep { $_->{ppid} == $$ } @{ Proc::ProcessTable->new->table }; die "Alarm rang for $$!\n"; }; foreach ( 0 .. $ARGV[1] ) { print "."; print "\n" unless $count++ % 50; my $pid = $pm->start and next; local $SIG{ALRM} = $alarm_sub; eval { alarm( 2 ); system "$^X -le '<STDIN>'"; # this will hang alarm( 0 ); }; $pm->finish; } If you want to run out of processes, take out the kill. I thought that setting a process group would work so I could kill everything together, but that blocks: my $alarm_sub = sub { kill 9, -$$; # blocks here die "Alarm rang for $$!\n"; }; foreach ( 0 .. $ARGV[1] ) { print "."; print "\n" unless $count++ % 50; my $pid = $pm->start and next; setpgrp(0, 0); local $SIG{ALRM} = $alarm_sub; eval { alarm( 2 ); system "$^X -le '<STDIN>'"; # this will hang alarm( 0 ); }; $pm->finish; } The same thing with POSIX's setsid didn't work either, and I think that actually broke things in a different way since I'm not really daemonizing this. Curiously, Parallel::ForkManager's run_on_finish happens too late for the same clean-up code: the grandchildren are apparently already disassociated from the child processes at that point.
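
    One approach that avoids crawling the process table, sketched under the assumption that the hanging command can be exec'd directly rather than run through system: give the grandchild its own process group, then signal that group (not our own) from the alarm handler. This replaces the eval/system portion inside each ForkManager child; it is not the poster's solution, just an illustration of the mechanism.

        my $gpid = fork();
        if ( $gpid == 0 ) {
            setpgrp( 0, 0 );               # grandchild leads its own process group
            exec "$^X -le '<STDIN>'";      # this will hang, as before
            exit 1;                        # only reached if exec fails
        }
        local $SIG{ALRM} = sub {
            kill KILL => -$gpid;           # kill the grandchild's whole group, not ours
            die "Alarm rang for $$!\n";
        };
        eval {
            alarm( 2 );
            waitpid( $gpid, 0 );
            alarm( 0 );
        };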

    Read the article

  • grep for value of key-value pair and format

    - by imerez
    When I do the following: ps -aef | grep "asdf" I get a list of processes that are running. Each one of my processes has the following text in the output: -ProcessName=XXXX I'd like to be able to format the output so all I get is: The following processes are running: Process A Process B etc..
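
    A sketch of one way to do it with GNU grep and sed, assuming the value never contains spaces:

        echo "The following processes are running:"
        ps -aef | grep -o -- '-ProcessName=[^ ]*' | sed 's/^-ProcessName=/Process /'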

    Read the article

  • Wrapper around bash, control STDIN and STDOUT

    - by blinry
    I would like to talk to an interactive bash process. Here is an example, so you know what I want to achieve: Program starts a new bash process. User types "ls" into my program. Program sends this command to the bash process. Program reads all available output of the bash (including the prompt) and displays it back to the user. GOTO 1 As you can guess, there is much room for nifty manipulations here and there... ;-) It would be wonderful if this also worked for subprocesses (started by the bash process) and curses-based programs. I would like to implement this functionality in Ruby, and have already experimented with IO.popen, but strange things happen. You are also welcome to do this in other languages.
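
    A sketch of the usual Ruby approach: run bash under a pseudo-terminal with the standard library's PTY module, so the shell prints a prompt and curses-based subprocesses believe they are on a real terminal. The half-second read timeout is an arbitrary choice for the sketch.

        require 'pty'

        PTY.spawn('bash') do |output, input, pid|
          input.puts 'ls'
          # keep reading until bash has been quiet for half a second
          while IO.select([output], nil, nil, 0.5)
            begin
              print output.readpartial(1024)
            rescue EOFError, Errno::EIO
              break   # bash exited
            end
          end
        end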

    Read the article

  • "Mem Usage" higher than "VM Size" in WinXP Task Manager

    - by Frederick
    In my Windows XP Task Manager, some processes display a higher value in the Mem Usage column than the VMSize. My Firefox instance, for example, shows 111544 K as Mem Usage and 100576 K as VMSize. According to the Task Manager help file, Mem Usage is the working set of the process and VMSize is the committed memory in the virtual address space. My question is, if the number of committed pages for a process is A and the number of pages in physical memory for the same process is B, shouldn't it always be B <= A? Isn't the set of pages in physical memory per process a subset of the committed pages? Or does this have something to do with sharing of memory among processes? Please explain. (Perhaps my definition of 'Working Set' is off the mark.) Thanks.

    Read the article

  • Deadlock error in INSERT statement

    - by Gnanam
    We've got a web-based application. There is a time-bound database operation (INSERTs and UPDATEs) in the application which takes a long time to complete, so this particular flow has been moved into a Java Thread so it will not wait (block) until the complete database operation finishes. My problem is, if more than one user comes across this particular flow, I'm facing the following error thrown by PostgreSQL: org.postgresql.util.PSQLException: ERROR: deadlock detected Detail: Process 13560 waits for ShareLock on transaction 3147316424; blocked by process 13566. Process 13566 waits for ShareLock on transaction 3147316408; blocked by process 13560. The above error is consistently thrown in INSERT statements. Additional information: 1) I have a PRIMARY KEY defined on this table. 2) There are FOREIGN KEY references in this table. 3) A separate database connection is passed to each Java Thread. Technologies Web Server: Tomcat v6.0.10 Java v1.6.0 Servlet Database: PostgreSQL v8.2.3 Connection Management: pgpool II
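
    For what it's worth, a foreign key is one classic way to get exactly this error: an INSERT into the child table takes a row-level share lock on the referenced parent row, so two transactions that touch the same parent rows in opposite order can deadlock. A hypothetical interleaving (table and key values invented for illustration):

        -- Transaction 1                          -- Transaction 2
        BEGIN;                                    BEGIN;
        INSERT INTO child VALUES (1, 100);        INSERT INTO child VALUES (2, 200);
        -- share-locks parent row 100             -- share-locks parent row 200
        INSERT INTO child VALUES (3, 200);        INSERT INTO child VALUES (4, 100);
        -- waits for transaction 2...             -- waits for transaction 1: deadlock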

    Read the article

  • Use $_FILES on a page called by .ajax

    - by RachelD
    I have two .php pages that I'm working with. index.php has a file upload form that posts back to index.php. I can access $_FILES no problem on index.php after submitting the form. My issue is that I want (after the form submit and the page loads) to use .ajax (jQuery) to call another .php file so that file can open and process some of the rows and return the results to ajax. The ajax then displays the results and recursively calls itself to process the next batch of rows. Basically I want to process the csv (put it in the DB, etc.) in chunks and display the progress for the user in between chunks. I'm doing it this way because the files are 400,000+ rows and the user doesn't want to wait the 10+ min for them all to be processed. I don't want to move this file (save it), because I just need to process it and throw it away, and if a user closes the page while it's processing, the file won't be thrown away. I could write a cron script for that, but I don't want to. What I would really like to do is pass the (single) $_FILES through .ajax OR save it in a $_POST or $_SESSION to use on the second page. Is there any hope for my cause? Here's the ajax code if that helps: $(function() { function processCSV(startIndex, length) { $.ajax({ url: "ajax-targets/process-csv.php", dataType: "json", type: "POST", data: { startIndex: startIndex, length: length }, timeout: 60000, // 1000 = 1 sec success: function(data) { // jQuery to display the rows from the CSV var newStart = startIndex+length; if(newStart <= data['csvNumRows']) { processCSV(newStart, length); } } }); } processCSV(1, 2); }); P.S. I did try this Passing $_FILES or $_POST to a new page with PHP but it's not working for me :( SOS.
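
    Unfortunately $_FILES itself cannot survive into the ajax request: PHP deletes the uploaded temp file as soon as the request that received it finishes. A common workaround, sketched below with an invented field name (csv) and session key, is to copy the temp file somewhere yourself while handling the form POST and stash the path in the session for process-csv.php to read chunk by chunk:

        // index.php, while handling the upload POST
        session_start();
        if (isset($_FILES['csv']) && is_uploaded_file($_FILES['csv']['tmp_name'])) {
            // PHP removes tmp_name when this request ends, so keep our own copy
            $work = tempnam(sys_get_temp_dir(), 'csv');
            move_uploaded_file($_FILES['csv']['tmp_name'], $work);
            $_SESSION['csv_path'] = $work;  // process-csv.php opens this path per chunk
        }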

    Read the article

  • NServiceBus pipeline with Distributors

    - by David
    I'm building a processing pipeline with NServiceBus but I'm having trouble with the configuration of the distributors in order to make each step in the process scalable. Here's some info: The pipeline will have a master process that says "OK, time to start" for a WorkItem, which will then start a process like a flowchart. Each step in the flowchart may be computationally expensive, so I want the ability to scale out each step. This tells me that each step needs a Distributor. I want to be able to hook additional activities onto events later. This tells me I need to Publish() messages when it is done, not Send() them. A process may need to branch based on a condition. This tells me that a process must be able to publish more than one type of message. A process may need to join forks. I imagine I should use Sagas for this. Hopefully these assumptions are good otherwise I'm in more trouble than I thought. For the sake of simplicity, let's forget about forking or joining and consider a simple pipeline, with Step A followed by Step B, and ending with Step C. Each step gets its own distributor and can have many nodes processing messages. NodeA workers contain a IHandleMessages processor, and publish EventA NodeB workers contain a IHandleMessages processor, and publish Event B NodeC workers contain a IHandleMessages processor, and then the pipeline is complete. Here are the relevant parts of the config files, where # denotes the number of the worker, (i.e. there are input queues NodeA.1 and NodeA.2): NodeA: <MsmqTransportConfig InputQueue="NodeA.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" /> <UnicastBusConfig DistributorControlAddress="NodeA.Distrib.Control" DistributorDataAddress="NodeA.Distrib.Data" > <MessageEndpointMappings> </MessageEndpointMappings> </UnicastBusConfig> NodeB: <MsmqTransportConfig InputQueue="NodeB.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" /> <UnicastBusConfig DistributorControlAddress="NodeB.Distrib.Control" DistributorDataAddress="NodeB.Distrib.Data" > <MessageEndpointMappings> <add Messages="Messages.EventA, Messages" Endpoint="NodeA.Distrib.Data" /> </MessageEndpointMappings> </UnicastBusConfig> NodeC: <MsmqTransportConfig InputQueue="NodeC.#" ErrorQueue="error" NumberOfWorkerThreads="1" MaxRetries="5" /> <UnicastBusConfig DistributorControlAddress="NodeC.Distrib.Control" DistributorDataAddress="NodeC.Distrib.Data" > <MessageEndpointMappings> <add Messages="Messages.EventB, Messages" Endpoint="NodeB.Distrib.Data" /> </MessageEndpointMappings> </UnicastBusConfig> And here are the relevant parts of the distributor configs: Distributor A: <add key="DataInputQueue" value="NodeA.Distrib.Data"/> <add key="ControlInputQueue" value="NodeA.Distrib.Control"/> <add key="StorageQueue" value="NodeA.Distrib.Storage"/> Distributor B: <add key="DataInputQueue" value="NodeB.Distrib.Data"/> <add key="ControlInputQueue" value="NodeB.Distrib.Control"/> <add key="StorageQueue" value="NodeB.Distrib.Storage"/> Distributor C: <add key="DataInputQueue" value="NodeC.Distrib.Data"/> <add key="ControlInputQueue" value="NodeC.Distrib.Control"/> <add key="StorageQueue" value="NodeC.Distrib.Storage"/> I'm testing using 2 instances of each node, and the problem seems to come up in the middle at Node B. There are basically 2 things that might happen: Both instances of Node B report that it is subscribing to EventA, and also that NodeC.Distrib.Data@MYCOMPUTER is subscribing to the EventB that Node B publishes. In this case, everything works great. 
    Both instances of Node B report that it is subscribing to EventA; however, one worker says NodeC.Distrib.Data@MYCOMPUTER is subscribing TWICE, while the other worker does not mention it. In the second case, which seems to be controlled only by the way the distributor routes the subscription messages, if the "overachiever" node processes an EventA, all is well. If the "underachiever" processes EventA, then the publish of EventB has no subscribers and the workflow dies. So, my questions: Is this kind of setup possible? Is the configuration correct? It's hard to find any examples of configuration with distributors beyond a simple one-level publisher/2-worker setup. Would it make more sense to have one central broker process that does all the non-computationally-intensive traffic-cop operations, and only sends messages to processes behind distributors when the task is long-running and must be load balanced? Then the load-balanced nodes could simply reply back to the central broker, which seems easier. On the other hand, that seems at odds with the decentralization that is NServiceBus's strength. And if this is the answer, and the long-running process's done event is a reply, how do you keep the Publish that enables later extensibility on published events?

    Read the article

  • *Code owner* system: is it an efficient way?

    - by sergzach
    There is a new developer in our team. An agile methodology is in use at our company. But the developer has another experience: he considers that particular parts of the code must be assigned to particular developers. So if one developer had created a program procedure or module it would be considered normal that all changes of the procedure/module would be made by him only. On the plus side, supposedly with the proposed approach we save common development time, because each developer knows his part of the code well and makes fixes fast. The downside is that developers don't know the system entirely. Do you think the approach will work well for a medium size system (development of a social network site)?

    Read the article

  • Netstat -ban (or -oan) equivalent in C#

    - by mztan
    I'd like to know if a particular process is using a given port, i.e. netstat -ban. I came across using IPGlobalProperties to get the list of active connections, but this doesn't seem to include process information. It would be nice if there existed some class in C# that let me do this programmatically. Ideally, I wouldn't have to pipe the cmd shell Process output. Thanks in advance.

    Read the article

  • Threading in C#

    - by Lalit Dhake
    Hi, I have a console application. In it I have a process that fetches data from the database through different layers (business and data access) and generates output. This process will execute many times, say 10 times. I want to run these executions simultaneously, not have the first start and finish before the second starts: after starting the 1st process, the 2nd, 3rd ... 10th must start as well. In other words, it should be multithreaded. How can I achieve this? And will it give me errors when database connections are opened and closed?
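
    A sketch of the basic pattern, where FetchAndGenerate stands in for the poster's business/data-access call: start all ten threads up front, give each its own work item, then wait for them all. Database-wise this is fine as long as each thread opens and closes its own connection; ADO.NET connection objects must not be shared across threads.

        using System;
        using System.Threading;

        class Program
        {
            static void Main()
            {
                const int jobs = 10;
                var threads = new Thread[jobs];
                for (int i = 0; i < jobs; i++)
                {
                    int jobId = i;                      // capture a copy for the closure
                    threads[i] = new Thread(() => FetchAndGenerate(jobId));
                    threads[i].Start();                 // all jobs run simultaneously
                }
                foreach (var t in threads) t.Join();    // wait for every job to finish
            }

            static void FetchAndGenerate(int jobId)
            {
                // open a connection, fetch through the business/data layers,
                // generate output, close the connection - all per thread
                Console.WriteLine("job {0} done", jobId);
            }
        }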

    Read the article

  • java OutofMemoryError

    - by dqm
    I am running the following command on a unix box. java -Xms3800m -Xmx3800m org.apache.xalan.xslt.Process -out Cust.txt -in test13l.xml -xsl CustDetails.xsl It is a java command which calls the Xalan processor to parse the xml file (test13l.xml) using the xsl stylesheet (CustDetails.xsl) and returns Cust.txt. The command works fine and the output is generated. It takes 12 minutes to process an xml file of size 1.1 GB, and 22 minutes to process a file of size 1.44 GB. However, when I try to process a file of size 1.66 GB, it errors out with the following message: (Location of error unknown)XSLT Error (java.lang.OutOfMemoryError): null I have increased the java heap size to 3800m and am not sure what more I can do. Many thanks for your help.

    Read the article

  • Erlang: How to view output of io:format/2 calls in processes spawned on remote nodes.

    - by jkndrkn
    Hello, I am working on a decentralized Erlang application. I am currently working on a single PC and creating multiple nodes by initializing erl with the -sname flag. When I spawn a process using spawn/4 on its home node, I can see output generated by calls to io:format/2 within that process in its home erl instance. When I spawn a process remotely by using spawn/4 in combination with register_name, output of io:format/2 is sometimes redirected back to the erl instance where the remote spawn/4 call was made, and sometimes remains completely invisible. Similarly, when I use rpc:call/4, output of io:format/2 calls is redirected back to the erl instance where the rpc:call/4 call is made. How do you get a process to emit debugging output back to its parent erl instance?
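
    What decides where the output lands is the process's group leader, which a spawned process inherits from its spawner - hence the output following rpc:call/4 home. A sketch (Node is assumed to be a connected node): to pin the output to the node the process actually runs on, point the group leader at that node's own user device instead.

        Pid = spawn(Node, fun() ->
            %% stop inheriting: route io to the local node's 'user' process
            group_leader(whereis(user), self()),
            io:format("this now prints on ~p~n", [node()])
        end).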

    Read the article

  • Most efficient way to send images across processes

    - by Heinrich Ulbricht
    Goal Pass images generated by one process efficiently and at very high speed to another process. The two processes run on the same machine and on the same desktop. The operating system may be WinXP, Vista or Win7. Detailed description The first process is solely for controlling the communication with a device which produces the images. These images are about 500x300px in size and may be updated up to several hundred times per second. The second process needs these images in order to display them. The first process uses a third-party API to paint the images from the device to an HDC. This HDC has to be provided by me. Note: There is already a connection open between the two processes. They are communicating via anonymous pipes and share memory-mapped file views. Thoughts How would I achieve this goal with as little work as possible? And I mean both work for me and for the computer. I am using Delphi, so maybe there is a component available for doing this? I think I could always paint to any image component's HDC, save the content to a memory stream, copy the contents via the memory-mapped file, unpack it on the other side and paint it there to the destination HDC. I also read about an IPicture interface which can be used to marshal images. What are your ideas? I appreciate every thought on this!

    Read the article

  • i don't understand how...

    - by Hristo
    how can something print 3 times when it only goes through the printing code twice? I'm coding in C and the code is in a SIGCHLD signal handler I created. void chld_signalHandler() { int pidadf = (int) getpid(); printf("pidafdfaddf: %d\n", pidadf); while (1) { int termChildPID = waitpid(-1, NULL, WNOHANG); if (termChildPID == 0 || termChildPID == -1) { break; } dll_node_t *temp = head; while (temp != NULL) { printf("stuff\n"); if (temp->pid == termChildPID && temp->type == WORK) { printf("inside if\n"); // read memory mapped file b/w WORKER and MAIN // get statistics and write results to pipe char resultString[256]; // printing TIME int i; for (i = 0; i < 24; i++) { sprintf(resultString, "TIME; %d ; %d ; %d ; %s\n",i,1,2,temp->stats->mboxFileName); fwrite(resultString, strlen(resultString), 1, pipeFD); } remove_node(temp); break; } temp = temp->next; } printf("done printing from sigchld \n"); } return; } the output for my MAIN process is this: MAIN PROCESS 16214 created WORKER PROCESS 16220 for file class.sp10.cs241.mbox pidafdfaddf: 16214 stuff stuff inside if done printing from sigchld MAIN PROCESS 16214 created WORKER PROCESS 16221 for file class.sp10.cs225.mbox pidafdfaddf: 16214 stuff stuff inside if done printing from sigchld and the output for the MONITOR process is this: MONITOR: pipe is open for reading MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs225.mbox MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs225.mbox MONITOR PIPE: TIME; 0 ; 1 ; 2 ; class.sp10.cs241.mbox MONITOR: end of readpipe ( I've taken out repeating lines so I don't take up so much space ) Thanks, Hristo
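
    One classic way to get duplicated pipe output like this (offered as a guess, since the fork code isn't shown): pipeFD is a buffered FILE*, and a fork while data is still sitting in the stdio buffer gives both processes a copy of that buffer, which each later flushes. A minimal demonstration:

        #include <stdio.h>
        #include <unistd.h>

        int main(void) {
            printf("hello ");   /* no newline: stays in the stdio buffer */
            fork();             /* child inherits a copy of the unflushed buffer */
            return 0;           /* both processes flush at exit: prints "hello hello " */
        }

    If that is what's happening here, an fflush(pipeFD) before any fork (or unbuffered writes via write(2)) would make the duplicates disappear.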

    Read the article

  • stack management in CLR

    - by enableDeepak
    I understand the basic concepts of the stack and the heap, but it would be great if anyone could resolve the following confusions: Is there a single stack for the entire application process, or is a new stack created for each thread started in the process? Is there a single heap for the entire application process, or is a new heap created for each thread? If stacks are created per thread, how does the process manage the sequential flow of threads (and hence stacks)?

    Read the article

  • How should modules access data outside their scope?

    - by Joe
    I run into this same problem quite often. First, I create a namespace and then add modules to this namespace. The issue I always run into is how best to initialize the application. Naturally, each module has its own startup procedure, so should this data (not code in some cases, just a list of items to run) stay with the module? Or should there be a startup procedure in the global namespace which has the startup data for ALL the modules? Which is the more robust way of organizing this situation? Should some things be centralized, or should there be strict adherence to modules encapsulating everything about themselves? Though this is a general architecture question, Javascript-centric answers would be really appreciated!
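
    One common middle ground, sketched below with invented names: each module keeps its own startup data and exposes an init hook, while the global namespace owns only a dumb bootstrap loop that fires every registered hook. Startup knowledge stays encapsulated; only the trigger is centralized.

        var APP = APP || {};
        APP.modules = {};

        // each module registers itself and keeps its own startup data
        APP.register = function (name, module) {
          APP.modules[name] = module;
        };

        APP.register('cart', {
          startupItems: ['loadPrices', 'bindEvents'],   // stays with the module
          init: function () { /* run this module's startup items */ }
        });

        // the global namespace only knows how to kick everything off
        APP.init = function () {
          for (var name in APP.modules) {
            if (APP.modules.hasOwnProperty(name)) {
              APP.modules[name].init();
            }
          }
        };

        APP.init();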

    Read the article

  • What kinds of job scheduler frameworks or solutions do you recommend on a Windows Server system?

    - by Samuel.P
    Hi Guys My company is running a lot of batch jobs to process data for partners. We used to use SQL Server Agent to execute the batch processes, but I found it very difficult to get batch process information, like logs or status, when working with SQL Server Agent. So I'd like to change my company's job scheduling to another, more stable solution, but I don't know which one to choose. I am looking for something similar to CA AutoSys. My company uses Windows Server systems. The framework or solution should be configurable through its API, because I plan to make an ASP.NET web application to manage our job scheduler. What kinds of job scheduler frameworks or inexpensive solutions do you suggest? Regards, Park PS: I think the Quartz framework has a good reputation in the J2EE world, but I am not sure Quartz.NET is as good as the original Java version.

    Read the article

  • sshd running but no PID file

    - by dunxd
    I recently started using monit to monitor the status of sshd on my CentOS 5.4 server. This works fine, but every so often monit reports that sshd is no longer running. This isn't true - I am still able to log in to the server via ssh, however I note the following: There is no longer any PID file at /var/run/sshd.pid - after a reboot this file exists. Once it is gone, restarting sshd via service sshd restart does not create the PID file. sudo service sshd status reports openssh-daemon is stopped - again, restarting sshd does not change this, but a reboot does. sudo service sshd stop reports failed, presumably because of the missing PID file. Any idea what is going on? Update sudo netstat -lptun gives the following output relating to port 22 tcp 0 0 :::22 :::* LISTEN 20735/sshd Killing the process with this PID as suggested by @Henry and then starting sshd via service results in service sshd status recognising the process by PID again. Would still like to understand this better. RPM verify suggested by a couple of answerers shows this: sudo rpm -vV openssh openssh-server openssh-clients | grep 'S\.5' S.5....T c /etc/pam.d/sshd S.5....T c /etc/ssh/sshd_config /etc/pam.d/sshd has the following contents: #%PAM-1.0 auth include system-auth account required pam_nologin.so account include system-auth password include system-auth session optional pam_keyinit.so force revoke session include system-auth #session required pam_loginuid.so Should that last line be commented out? Update Here's the output of @YannickGirouard's script: $ sudo ./sshd_test Searching for the process listening on port 22... Found the following PID: 21330 Command line for PID 21330: /usr/sbin/sshd Listing process(es) relating to PID 21330: UID PID PPID C STIME TTY TIME CMD root 21330 1 0 14:04 ? 
00:00:00 /usr/sbin/sshd Listing RPM information about openssh packages: Name : openssh Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:50:57 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 745390 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH implementation of SSH protocol versions 1 and 2 ------------------------------------------------------ Name : openssh-clients Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 871132 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH client applications ------------------------------------------------------ Name : openssh-server Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : System Environment/Daemons Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 492478 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH server daemon ------------------------------------------------------ However, I've since got things working by killing the process and starting afresh, as suggested by @Henry below, so perhaps I am no longer seeing the same thing. Will try again if I am seeing the issue again after next reboot. Update - 14 March Monit alerted me that sshd had disappeared, and again I am able to ssh onto the server. So now I can run the script $ sudo ./sshd_test Searching for the process listening on port 22... Found the following PID: 2208 Command line for PID 2208: /usr/sbin/sshd Listing process(es) relating to PID 2208: UID PID PPID C STIME TTY TIME CMD root 2208 1 0 Mar13 ? 00:00:00 /usr/sbin/sshd root 1885 2208 0 21:50 ? 
00:00:00 sshd: dunx [priv] Listing RPM information about openssh packages: Name : openssh Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:50:57 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 745390 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH implementation of SSH protocol versions 1 and 2 ------------------------------------------------------ Name : openssh-clients Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : Applications/Internet Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 871132 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH client applications ------------------------------------------------------ Name : openssh-server Relocations: (not relocatable) Version : 4.3p2 Vendor: CentOS Release : 72.el5_7.5 Build Date: Tue 30 Aug 2011 12:34:14 AM BST Install Date: Sun 06 Nov 2011 12:51:04 AM GMT Build Host: builder10.centos.org Group : System Environment/Daemons Source RPM: openssh-4.3p2-72.el5_7.5.src.rpm Size : 492478 License: BSD Signature : DSA/SHA1, Fri 02 Sep 2011 01:13:01 AM BST, Key ID a8a447dce8562897 URL : http://www.openssh.com/portable.html Summary : The OpenSSH server daemon ------------------------------------------------------ Again, when I look for /var/run/sshd.pid I don't find it. $ cat /var/run/sshd.pid cat: /var/run/sshd.pid: No such file or directory $ sudo netstat -anp | grep sshd tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 2208/sshd $ sudo kill 2208 $ sudo service sshd start Starting sshd: [ OK ] $ cat /var/run/sshd.pid 3794 $ sudo service sshd status openssh-daemon (pid 3794) is running... Is it possible that sshd is restarting and not creating a pidfile for some reason?

    Read the article

  • Compile IL code at runtime using .NET 3.5 and C# from file

    - by nitefrog
    I would like to take a file that is an IL file, and at run time compile it back to an exe. Right now I can use process.start to fire off the command line with parameters (ilasm.exe) but I would like to automate this process from a C# service I will create. Is there a way to do this with reflection and reflection.emit? While this works: string rawText = File.ReadAllText(string.Format("c:\\temp\\{0}.il", Utility.GetAppSetting("baseName")), Encoding.ASCII); rawText = rawText.Replace("[--STRIP--]", guid); File.Delete(string.Format("c:\\temp\\{0}.il", Utility.GetAppSetting("baseName"))); File.WriteAllText(string.Format("c:\\temp\\{0}.il", Utility.GetAppSetting("baseName")),rawText, Encoding.ASCII); pi = new ProcessStartInfo(); pi.WindowStyle = ProcessWindowStyle.Hidden; pi.FileName = "\"" + ilasm + "\""; pi.Arguments = string.Format("c:\\temp\\{0}.il", Utility.GetAppSetting("baseName")); using(Process p = Process.Start(pi)) { p.WaitForExit(); } It is not ideal as I really would like this to be a streamlined process. I have seen examples of creating the IL at runtime, then saving, but I need to use the IL I already have in file form and compile it back to an exe. Thanks.

    Read the article

  • Programmer's joy: the process or the result?

    - by faya
    Hello, Recently I stumbled upon this curious question: what is important to you when programming, the process or the result? I found that I love the outcome, when everything is done! So I tried asking some colleagues at work, but all of them responded that they like the development process the most. I like the process too, but not as much as the outcome. So which category do you belong to? And if there is a reason, could you explain why?

    Read the article

  • Right way to create [self]respawning app in python

    - by grapescan
    I am using a jabber bot written in python to log some MUC talks. Sometimes it dies due to network or XMPP problems, and in that case I have to start it again by myself. The goal is to make it "self-respawning". I have some ideas about how to do it: The bot is one process; another process monitors its activity and starts it if the bot has died. The main process spawns the bot subprocess and controls it. Also, I think daemonizing the bot process would be useful here. The platform is Linux, as you could guess. What is the right way to solve this problem?
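
    A sketch of the watchdog variant (jabber_bot.py is a placeholder for the actual bot script): a tiny supervisor that restarts the bot every time it exits, with a small back-off so a fast crash loop can't spin.

        import subprocess
        import time

        while True:
            bot = subprocess.Popen(["python", "jabber_bot.py"])
            code = bot.wait()          # blocks until the bot dies, for any reason
            print("bot exited with %d, respawning in 5s" % code)
            time.sleep(5)              # back off a little before respawning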

    Read the article

  • Do minidump files contain the timestamp of the crash?

    - by Roger Lipscombe
    The MiscInfoStream in a minidump file contains the process create time. I'd like to find out how long the process had been running before the crash. Does a minidump file contain the exception timestamp anywhere? WinDbg on this dump file displays the following, which implies that it's in there somewhere... Debug session time: Tue Dec 29 15:49:20.000 2009 (GMT+0) System Uptime: not available Process Uptime: 0 days 0:33:03.000 Note that today is Mar 15, so this is almost certainly the timestamp of the crash. I'd like a programmatic way to retrieve that value and the "Process Uptime" value. I found the MINIDUMP_MISC_INFO_3 structure, which contains some timezone information, but it doesn't seem to contain the exception time.
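
    One place worth checking (this matches where WinDbg appears to get its "Debug session time", though that is an inference): the MINIDUMP_HEADER at the start of the file has a TimeDateStamp field, a time_t recorded when the dump was written. Subtracting the MiscInfo ProcessCreateTime from it would give the process uptime. A sketch with error handling omitted and a placeholder path:

        #include <windows.h>
        #include <dbghelp.h>
        #include <stdio.h>

        int main(void)
        {
            HANDLE file = CreateFileA("crash.dmp", GENERIC_READ, FILE_SHARE_READ,
                                      NULL, OPEN_EXISTING, 0, NULL);
            HANDLE map  = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
            void *view  = MapViewOfFile(map, FILE_MAP_READ, 0, 0, 0);

            /* the header sits at offset 0 of every minidump */
            const MINIDUMP_HEADER *hdr = (const MINIDUMP_HEADER *)view;
            printf("dump written at time_t %lu\n", (unsigned long)hdr->TimeDateStamp);
            return 0;
        }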

    Read the article

  • Ruby and Forking

    - by Cory
    Quick question about Ruby forking - I ran across a bit of forking code in Resque earlier that was sexy as hell but tripped me up for a few. I'm hoping someone can give me a little more detail about what's going on here. Specifically, it would appear that forking spawns a child (expected) and kicks it straight into the 'else' side of my condition (less expected). Is that expected behavior? A Ruby idiom? My IRB hack here: def fork return true if @cant_fork begin if Kernel.respond_to?(:fork) Kernel.fork else raise NotImplementedError end rescue NotImplementedError @cant_fork = true nil end end def do_something puts "Starting do_something" if foo = fork puts "we are forking from #{Process.pid}" Process.wait else puts "no need to fork, let's get to work: #{Process.pid} under #{Process.ppid}" puts "doing it" end end do_something
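
    It is expected, and it is the idiom: Kernel#fork returns the child's pid in the parent and nil in the child, so `if foo = fork` is truthy only in the parent, while the child falls into the else branch and gets to work. In miniature:

        pid = fork
        if pid
          # parent: fork returned the child's pid (truthy)
          puts "parent #{Process.pid} spawned #{pid}"
          Process.wait
        else
          # child: fork returned nil, so we land here
          puts "child #{Process.pid} working under #{Process.ppid}"
        end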

    Read the article

  • problem with piping in my own implementation of shell

    - by codemax
    Hey guys, i am implementing my own shell. I want to involve piping. i searched here and i got a code. But it is not working.Can any one help me? this is my code #include <sys/types.h> #include <sys/wait.h> #include <sys/ipc.h> #include <fcntl.h> #include <unistd.h> #include <string.h> #include <iostream> #include <cstdlib> using namespace std; char temp1[81][81],temp2[81][81] ,*cmdptr1[40], *cmdptr2[40]; void process(char**,int); int arg_count, count; int arg_cnt[2]; int pip,tok; char input[81]; int fds[2]; void process( char* cmd[])//, int arg_count ) { pid_t pid; pid = fork(); //char path[81]; //getcwd(path,81); //strcat(path,"/"); //strcat(path,cmd[0]); if(pid < 0) { cout << "Fork Failed" << endl; exit(-1); } else if( pid == 0 ) { execvp( cmd[0] , cmd ); } else { wait(NULL); } } void pipe(char **cmd1, char**cmd2) { cout<<endl<<endl<<"in pipe"<<endl; for(int i=0 ; i<arg_cnt[0] ; i++) { cout<<cmdptr1[i]<<" "; } cout<<endl; for(int i=0 ; i<arg_cnt[1] ; i++) { cout<<cmdptr2[i]<<" "; } pipe(fds); if (fork() == 0 ) { dup2(fds[1], 1); close(fds[0]); close(fds[1]); process(cmd1); } if (fork() == 0) { dup2(fds[0], 0); close(fds[0]); close(fds[1]); process(cmd2); } close(fds[0]); close(fds[1]); wait(NULL); } void pipecommand(char** cmd1, char** cmd2) { cout<<endl<<endl; for(int i=0 ; i<arg_cnt[0] ; i++) { cout<<cmd1[i]<<" "; } cout<<endl; for(int i=0 ; i<arg_cnt[1] ; i++) { cout<<cmd2[i]<<" "; } int fds[2]; // file descriptors pipe(fds); // child process #1 if (fork() == 0) { // Reassign stdin to fds[0] end of pipe. dup2(fds[0], STDIN_FILENO); close(fds[1]); close(fds[0]); process(cmd2); // child process #2 if (fork() == 0) { // Reassign stdout to fds[1] end of pipe. dup2(fds[1], STDOUT_FILENO); close(fds[0]); close(fds[1]); // Execute the first command. process(cmd1); } wait(NULL); } close(fds[1]); close(fds[0]); wait(NULL); } void splitcommand1() { tok++; int k,done=0,no=0; arg_count = 0; for(int i=count ; input[i] != '\0' ; i++) { k=0; while(1) { count++; if(input[i] == ' ') { break; } if((input[i] == '\0')) { done = 1; break; } if(input[i] == '|') { pip = 1; done = 1; break; } temp1[arg_count][k++] = input[i++]; } temp1[arg_count][k++] = '\0'; arg_count++; if(done == 1) { break; } } for(int i=0 ; i<arg_count ; i++) { cmdptr1[i] = temp1[i]; } arg_cnt[tok] = arg_count; } void splitcommand2() { tok++; cout<<"count is :"<<count<<endl; int k,done=0,no=0; arg_count = 0; for(int i=count ; input[i] != '\0' ; i++) { k=0; while(1) { count++; if(input[i] == ' ') { break; } if((input[i] == '\0')) { done = 1; break; } if(input[i] == '|') { pip = 1; done = 1; cout<<"PIP"; break; } temp2[arg_count][k++] = input[i++]; } temp2[arg_count][k++] = '\0'; arg_count++; if(done == 1) { break; } } for(int i=0 ; i<arg_count ; i++) { cmdptr2[i] = temp2[i]; } arg_cnt[tok] = arg_count; } int main() { cout<<endl<<endl<<"Welcome to unique shell !!!!!!!!!!!"<<endl; tok=-1; while(1) { cout<<endl<<"***********UNIQUE**********"<<endl; cin.getline(input,81); count = 0,pip=0; splitcommand1(); if(pip == 1) { count++; splitcommand2(); } cout<<endl<<endl; if(strcmp(cmdptr1[0], "exit") == 0 ) { cout<<endl<<"EXITING UNIQUE SHELL"<<endl; exit(0); } //cout<<endl<<"Arg count is :"<<arg_count<<endl; if(pip == 1) { cout<<endl<<endl<<"in main :"; for(int i=0 ; i<arg_cnt[0] ; i++) { cout<<cmdptr1[i]<<" "; } cout<<endl; for(int i=0 ; i<arg_cnt[1] ; i++) { cout<<cmdptr2[i]<<" "; } pipe(cmdptr1, cmdptr2); } else { process (cmdptr1);//,arg_count); } } } I know it is not well coded. But try to help me :(
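
    For reference, a minimal working two-command pipeline (not a fix of the code above, just the core wiring it is aiming for): create the pipe before forking, dup2 each child's end onto stdin/stdout, close both pipe fds in every process, then exec.

        #include <sys/wait.h>
        #include <unistd.h>

        // runs the equivalent of "ls | wc -l"
        int main() {
            int fds[2];
            pipe(fds);

            if (fork() == 0) {                 // left side writes into the pipe
                dup2(fds[1], STDOUT_FILENO);
                close(fds[0]); close(fds[1]);
                execlp("ls", "ls", (char *)0);
                _exit(127);                    // only reached if exec fails
            }
            if (fork() == 0) {                 // right side reads from the pipe
                dup2(fds[0], STDIN_FILENO);
                close(fds[0]); close(fds[1]);
                execlp("wc", "wc", "-l", (char *)0);
                _exit(127);
            }
            close(fds[0]); close(fds[1]);      // parent must close both ends,
            wait(0); wait(0);                  // or the reader never sees EOF
            return 0;
        }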

    Read the article
