Search Results

Search found 10023 results on 401 pages for 'manage processes'.

Page 13/401

  • MacPorts, Fink, Homebrew: Background Processes?

    - by Yar
    If I install a package manager such as MacPorts, Fink or Homebrew, how does it affect the startup and running of my system? It seems like the answer should be "not at all when you're not using them", but I'm worried that they will break other software (like Mono) or run background processes. Is my fear totally misplaced? Are they just programs like any others? They certainly seem to leave their footprint around the OS in quite a few places.
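
    One way to check the background-process half of that fear yourself (these are standard macOS commands and locations, not anything specific to the package managers):

        # Any system-wide launchd jobs registered by a package manager?
        ls /Library/LaunchDaemons /Library/LaunchAgents
        # Anything of theirs loaded right now?
        launchctl list | grep -iE 'macports|fink|homebrew'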

    Read the article

  • How to diagnose causes of oom-killer killing processes

    - by dunxd
    I have a small virtual private server running CentOS and www/mail/db, which has recently had a couple of incidents where the web server and ssh became unresponsive. Looking at the logs, I saw that oom-killer had killed these processes, possibly due to running out of memory and swap. Can anyone give me some pointers on how to diagnose what may have caused the most recent incident? Is the culprit likely to be the first process killed? Where else should I be looking?
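
    A reasonable first pass (assuming stock CentOS syslog locations) is to pull the kernel's own report; when oom-killer fires, it logs the memory state and its view of the candidate processes:

        grep -iE 'oom-killer|out of memory|killed process' /var/log/messages*
        dmesg | grep -i -B 5 'killed process'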

    Read the article

  • How to get Remote Processes on Windows 2003 with CPU percentage

    - by Brettski
    I have a production server with its CPUs running excessively high. Except in critical circumstances, nobody is allowed to log on to servers outside maintenance windows. I am looking for an application I can use to look at the processes on the remote server, including CPU % usage: an application like top. Windows' native tasklist.exe doesn't show percentages, nor does Sysinternals' pslist.exe. Suggestions?
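
    For what it's worth, WMI's formatted performance counters can be queried remotely from the command line; a hedged sketch (PRODSRV is a placeholder, and this requires WMI/DCOM access to the box):

        wmic /node:PRODSRV path Win32_PerfFormattedData_PerfProc_Process get Name,PercentProcessorTime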

    Read the article

  • Run a pool of processes in shell

    - by viraptor
    I'm looking for an easy way to run N selected processes at the same time with one command. It should put all the output on my terminal and shut all of them down when I exit with Ctrl+C. Is there any existing app that does this? I'm thinking of something like exec_many 10 foo: it should keep 10 foos running and respawn any that die.
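
    GNU parallel can come close, but the respawning behaviour described fits in a short self-contained bash sketch (foo is the placeholder command from the question):

        #!/bin/bash
        # exec_many: keep N copies of a command running; Ctrl+C tears the whole pool down.
        n=$1; shift
        trap 'kill 0' EXIT
        for ((i = 0; i < n; i++)); do
            ( while true; do "$@"; done ) &    # respawn the command whenever it exits
        done
        wait

    Usage: ./exec_many 10 foo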

    Read the article

  • Error running bash script - No matching processes

    - by Bashity
    I am trying to kill Xcode by running killall Xcode.app, which works normally when I run it through Terminal. However, if I put it into a bash script that I keep on my Desktop called re_xcode, the script outputs the following errors. Can you tell me where I am going wrong?

        No matching processes belonging to you were found
        The file /Users/Max/Desktop/Applications/Xcode.app does not exist.

    The script:

        #!/bin/bash
        killall Xcode.app
        open ./Applications/Xcode.app
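
    The second error shows where the relative path is resolving: the script runs from the Desktop, so ./Applications is looked up there. A sketch that avoids both pitfalls, on the assumption that Xcode's process name is Xcode rather than Xcode.app:

        #!/bin/bash
        # Kill Xcode by its process name, then relaunch it by application name.
        killall Xcode
        open -a Xcode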

    Read the article

  • init never reaping zombie/defunct processes

    - by st9
    Hi, on my Fedora Core 9 webserver with kernel 2.6.18.8, init isn't reaping zombie processes. This would be bearable if it wasn't for the process table eventually reaching an upper limit where no new processes can be allocated. Sample output of ps -el | grep 'Z':

        F S   UID   PID  PPID  C PRI  NI ADDR SZ WCHAN  TTY          TIME CMD
        5 Z     0  2648     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        1 Z    51  2656     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        1 Z     0  2670     1  0  75   0 -     0 exit   ?        00:00:02 crond <defunct>
        4 Z     0  2874     1  0  82   0 -     0 exit   ?        00:00:00 mysqld_safe <defunct>
        5 Z     0 28104     1  0  76   0 -     0 exit   ?        00:00:00 httpd <defunct>
        5 Z     0 28716     1  0  76   0 -     0 exit   ?        00:00:06 lfd <defunct>
        5 Z    74 10172     1  0  75   0 -     0 exit   ?        00:00:00 sshd <defunct>
        5 Z     0 11199     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11202     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11205     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11208     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11211     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11240     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11246     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11249     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        5 Z     0 11252     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>
        1 Z     0 14106     1  0  80   0 -     0 exit   ?        00:00:00 anacron <defunct>
        5 Z     0 14631     1  0  75   0 -     0 exit   ?        00:00:00 sendmail <defunct>

    Is this an OS bug? A misconfiguration? I'm looking for inspiration as to the source of this problem. Thanks.
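
    For reference, a compact way to list just the zombies with their parents (a PPID of 1 on every row, as above, points at init itself rather than at a daemon failing to reap its own children):

        ps -eo stat,pid,ppid,comm | awk '$1 ~ /^Z/'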

    Read the article

  • Apache2 Worker Starting Tons of Processes

    - by karmic
    I installed apache2-mpm-worker and left all the config files at their defaults (I've never touched them much). Is it normal that at least 20 apache processes start when I restart Apache? Shouldn't it be just 2, like it says in the configuration? Also, memory use seems to grow very quickly until my machine crashes. I don't have any mods installed.
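
    For comparison, the stock Debian/Ubuntu worker-MPM section looks roughly like this (exact values vary by release; these are the classic defaults, not necessarily this machine's file). Note that StartServers counts processes, each of which then spawns ThreadsPerChild threads:

        <IfModule mpm_worker_module>
            StartServers          2
            MaxClients          150
            MinSpareThreads      25
            MaxSpareThreads      75
            ThreadsPerChild      25
            MaxRequestsPerChild   0
        </IfModule>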

    Read the article

  • Sharing a serial port between two processes

    - by peterrus
    As it is not possible to directly share a serial port between two processes on Linux, I am looking for another way to achieve this. I have heard about socat but could not find a concrete example of how to realize the following: split one physical serial port (/dev/ttyUSB0) into two virtual ports, one for reading and one for writing, as one process only needs to send data and the other only needs to receive it. Unfortunately, I cannot modify the sending application.
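
    For the split described, one hedged socat sketch (untested; -u makes each relay one-directional, and the link paths are placeholders) is to open the physical port twice, once per direction:

        # Writer side: bytes written to /tmp/ttyV_tx are relayed to the device.
        socat -u PTY,link=/tmp/ttyV_tx,raw,echo=0 /dev/ttyUSB0,raw,echo=0 &
        # Reader side: bytes from the device appear on /tmp/ttyV_rx.
        socat -u /dev/ttyUSB0,raw,echo=0 PTY,link=/tmp/ttyV_rx,raw,echo=0 &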

    Read the article

  • Windows 7 x64 - Problem with 32-bit processes...

    - by Will A
    Hi all, is it just me, or is Windows 7 x64 awfully unstable when it comes to 32-bit processes? Whenever a 32-bit process hangs or otherwise misbehaves, terminating the process (through e.g. Task Manager) seems to fail every time: there are no error messages or anything, the process just refuses to terminate. Does anyone else have the same problems running 32-bit applications on x64 Windows? Thanks, Will.

    Read the article

  • Software management for 2 programmers

    - by kajo
    Hi all, my very good friend and I run a small business: we have a company and we develop web apps using Scala. We started three months ago and we already have a lot of work. We cannot afford to employ another programmer because we can't pay him yet. Until now we have managed the entire development process very simply: we use Excel sheets for basic bug tracking, we work on client requests on the fly, and we have no plan for the next week or anything similar. But now I find this inefficient and unsustainable. I am trying to find some rules or a methodology suited to a small team, or to just two people. Scrum, for example, seems ill-suited to us: there are a lot of roles (ScrumMaster, Product Owner, Team...) and it feels like overkill. Can you advise me? Do you have any experience with software management in small teams? Is any current agile methodology a good fit for a pair of programmers? Is there any simple tooling for bug tracking, maybe a wiki, or time management for two coders? Thanks a lot for sharing.

    Read the article

  • pros and cons with server management gui tools to manage linux web servers

    - by ajsie
    I have stumbled upon these GUI tools that can help you manage your Linux server through a web interface: eBox, Webmin, ISPConfig, Zivios, ispCP, Plesk, cPanel, etc. I wonder what the pros and cons of these solutions are. A lot of people say they are not as good as managing your server purely from the command line (over SSH), but I think that's yet another "Linux is for advanced users" argument. I agree that a lot of things may only be done from the command line by editing the configuration files directly, but I don't really want to do that every time and for everything, especially for the basic configuration these tools can manage. It's like not having phpMyAdmin for managing MySQL: that would be a pain in the ass, right? So if one wants to bring up a web server for a self-developed PHP site, with all the usual stuff up and running (MySQL, phpMyAdmin, SVN, WebDAV, etc.), are these tools the right way to go, with the terminal reserved for more advanced features, like in the old days? Is this a smart way of managing a Linux server? Which one would you choose? Have you used any of these, and could you share your thoughts about them?

    Read the article

  • How to manage enterprise network of Linux machines?

    - by killy9999
    I work at a university. In my institute we have six computer laboratories used for teaching. Each lab has almost 20 computers, which gives over 100 machines in total. The computers run either Windows XP or Windows 7 Enterprise. We use Symantec Ghost to manage them all: each computer has a Ghost client installed, which allows it to be controlled over the network. Every six months we restore a master image on one of the computers in a lab, update that image, and distribute it over the network to all computers in the laboratory; thanks to the Ghost client, this is done automatically with just a few clicks. Recently I suggested that it would be good to have Linux installed in the laboratories. The administrators were concerned that we would not be able to manage that many computers if each had to be updated manually. The question is: how do you manage such a large network of Linux machines in an automated way? To make the description of our network more complete, I'll add that all students have their accounts (a few thousand users) on a central server, accessed via LDAP; to use a computer in a laboratory, each student has to log in with his own account.
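
    For scale, this is the sort of one-liner that configuration-management and orchestration tools reduce a lab-wide update to. Ansible is only one example of such a tool (the question does not presume it), and labs.ini is a hypothetical inventory of the lab machines:

        ansible all -i labs.ini -m apt -a "update_cache=yes upgrade=dist" --become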

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions? The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database, which all (sub-)processes access at various times using an application-specific module that encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that there are only one or a few fixed database sessions handling all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors so that bidirectional communication is possible (for collecting the query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request "calls back" to submit the result. But my mind fails me in generating an image clear enough that I can paint it into code. Threading instead of forking might have given me an easier start, but it would now require massive changes to a code base that I'm not prepared to make on a live system. The more I think of it, the more the base idea looks like a pre-forked web server to me, only that it serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo)code to inspire me, links to possibly related articles, maybe ready-made solutions on CPAN?
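
    As a shape to dig into rather than a drop-in answer, here is the pre-forked-connector idea reduced to a runnable toy in Python, with sqlite3 standing in for Oracle/DBI (the real code would keep Perl and the existing abstraction module; only the routing pattern, one request queue in and a per-worker reply queue for the "call back", is the point):

        import multiprocessing as mp
        import sqlite3  # stands in for the Oracle/DBI layer; the routing is what matters

        def db_connector(requests):
            # One permanent session servicing every worker's queries.
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE progress (worker INTEGER, step INTEGER)")
            while True:
                item = requests.get()
                if item is None:               # shutdown sentinel from the parent
                    break
                sql, args, reply_q = item      # reply_q is how the connector "calls back"
                reply_q.put(conn.execute(sql, args).fetchall())

        def worker(wid, requests, reply_q):
            # Same call shape the DBI-encapsulating module could keep exposing.
            requests.put(("INSERT INTO progress VALUES (?, ?)", (wid, 1), reply_q))
            reply_q.get()                      # block until the connector answers
            requests.put(("SELECT COUNT(*) FROM progress", (), reply_q))
            print("worker", wid, "sees", reply_q.get())

        if __name__ == "__main__":
            mgr = mp.Manager()
            requests = mgr.Queue()             # single queue shared by all workers
            connector = mp.Process(target=db_connector, args=(requests,))
            connector.start()
            workers = [mp.Process(target=worker, args=(w, requests, mgr.Queue()))
                       for w in range(5)]
            for w in workers:
                w.start()
            for w in workers:
                w.join()
            requests.put(None)                 # tell the connector to shut down
            connector.join()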

    Read the article

  • Global WH_CBT hook DLL is loaded into some processes only

    - by kriau
    The main program calls the function SetHook in wi.dll to install a global WH_CBT hook:

        // Installs the global CBT hook; g_hInstDll is this DLL's module handle.
        bool WI_API SetHook()
        {
            if (!g_hHook)
            {
                g_hHook = SetWindowsHookEx(WH_CBT, (HOOKPROC) CBTProc, g_hInstDll, 0);
            }
            return g_hHook != NULL;
        }

    I presume that after installing a global hook, wi.dll should be loaded into each process's address space. However, wi.dll is loaded into some processes only. For example, if I start Skype or MS Word, I can see (using Process Explorer) that wi.dll is loaded into these processes; however, if I run Firefox, uTorrent or Adobe Reader, wi.dll is not loaded into them. I'm using Windows 7 64-bit; the main program and wi.dll are 32-bit, and all the programs mentioned here are 32-bit as well. Any ideas why that happens? Thanks in advance.

    Read the article

  • Processes sharing cores on Ubuntu system

    - by muckabout
    My coworkers and I share an 8-core server running Ubuntu for our batch processes. I tend to run 4 processes at a time, each of which consumes 100% of a core when nothing else is running. When a coworker runs his processes (typically about 4 at a time), his also get 100% per core. However, when both of us run ours (he always starts first), his still get 100% while mine seem to divide the remaining processing power and linger in the 10-40% range. I even reniced his processes to a lower value and it did not change anything. What issues might cause this?
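
    One quick check worth running on both sets of processes is whether they are pinned to overlapping cores (1234 is a placeholder PID):

        taskset -cp 1234    # prints e.g. "pid 1234's current affinity list: 0-7"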

    Read the article

  • apache2-mpm-itk doesn't kill its processes

    - by rtm
    Why doesn't Apache kill its processes? I'm using a fresh Ubuntu 10.04 64-bit install with PHP 5.2 from Karmic (installed using this script); my phpinfo can be found at http://www.m-23.ru/2.php. My apache2 settings:

        StartServers          5
        MinSpareServers       5
        MaxSpareServers      30
        MaxClients           30
        MaxRequestsPerChild 200

    I tried strace -p on one of the processes and got the following:

        sched_yield() = 0
        sched_yield() = 0
        sched_yield() = 0
        sched_yield() = 0
        sched_yield() = 0
        sched_yield() = 0
        sched_yield() = 0^C
        Process 16839 detached

    htop displays this picture:

        3887 vu2032 20 0 337M 11644 2116 R 78.0 0.1 1:00.30 /usr/sbin/apache2 -k start
        3891 vu2017 20 0 337M 11308 1828 R 64.0 0.1 0:58.64 /usr/sbin/apache2 -k start
        3893 vu2032 20 0 337M 11652 2120 R 57.0 0.1 1:01.35 /usr/sbin/apache2 -k start
        3896 vu2033 20 0 337M 11248 1776 R 57.0 0.1 0:36.78 /usr/sbin/apache2 -k start
        3842 vu2033 20 0 337M 11244 1772 R 51.0 0.1 2:00.18 /usr/sbin/apache2 -k start
        3857 vu2025 20 0 337M 11288 1812 R 49.0 0.1 1:38.70 /usr/sbin/apache2 -k start

    All sites run under PHP.

    Read the article

  • Why do strace/truss sometimes 'fix' stuck processes?

    - by Emmel
    Sometimes you have a process that's been stuck for a while, and as soon as you go to poke at it with strace/truss just to see what's going on, it magically gets unstuck and continues to run! So merely 'observing' these programs has some impact on the running of the stuck programs. What's happening here? Did strace (I guess via ptrace(2)?) send a signal, causing the program to stop blocking, or some such? I've seen this several times, most recently on Linux RHEL 4 (with a Perl script mucking with processes and doing some network IO, in that case), but in a few other contexts as well. Unfortunately, I can't reproduce this, as it tends to happen in times of crisis. But my curiosity remains. :-) Any elucidation appreciated.

    Read the article

  • Postgresql spawning a ridiculous number of postmaster processes

    - by Kevin Loney
    For some reason postgres is spawning 700 postmaster processes to handle database requests, and the postgres log file is full of 'unexpected EOF on client connection', 'incomplete startup packet' and 'sorry, too many clients already'. netstat tells me that all the open connections are local, and I'm pretty sure they are coming from postgres internally. This particular instance has been running just fine for the last 230 days or so, and nothing has changed configuration-wise. Any thoughts on where I should be looking to try and resolve this issue? This is my first time diagnosing a problem like this, so any steps I can take to help narrow down the cause would be helpful as well.
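
    A first pass at narrowing down who owns those connections might look like this (assuming local psql access as the postgres superuser):

        # The server's own view of its sessions:
        psql -U postgres -c "SELECT usename, client_addr, count(*) FROM pg_stat_activity GROUP BY usename, client_addr ORDER BY 3 DESC;"
        # The OS view: which local processes hold sockets to port 5432?
        netstat -tnp 2>/dev/null | grep ':5432' | awk '{print $7}' | sort | uniq -c | sort -rn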

    Read the article

  • Using rsyslog to create different log files for different processes

    - by user80203
    Scenario: I am running a cluster of machines. Each machine runs various Python programs with an ID that is unique across the cluster but dynamically set. Right now they all log locally, so I might have logs that look like:

        process_5.log
        process_6.log

    for processes that had IDs 5 and 6. Another machine may have:

        process_20.log
        process_25.log

    I wish to forward these logs to a log server running rsyslogd. Python's logging facility has a nice syslog handler, so I understand how I could connect to the remote server. What I haven't figured out is how to use templating/DynFile to maintain the log separation, i.e. on the log server I want to see:

        process_5.log
        process_6.log
        process_20.log
        process_25.log

    corresponding to the logs of the same name on the sending machines. Is there a way to pull this off with rsyslogd templating?
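
    In legacy rsyslog syntax, a dynamic-file template keyed on the syslog tag might look like this (a sketch; it assumes each program tags its syslog messages as process_<ID>, e.g. via the log format, and that the path is writable):

        $template PerProcessFile,"/var/log/cluster/%programname%.log"
        :programname, startswith, "process_" ?PerProcessFile
        & ~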

    Read the article

  • systemctl (Fedora 17) and interacting with spawned processes' consoles

    - by Sean
    I've recently upgraded to Fedora 17 and I'm getting used to the new systemctl service manager, versus shell init scripts. A feature I need for some of my daemons is the ability to interact with their consoles, because unclean shutdowns not initiated by the process itself can cause database corruption. So performing a systemctl stop service-name.service, for example, might cause irreversible data loss. These consoles read user input through stdin or similar methods, so what I did on my old OS was to run those daemons foregrounded in a screen session, then suspend that screen session with ^A ^z. I've now made systemctl do this automatically if the computer reboots, but that still doesn't solve the potential data corruption problem I'm trying to avoid. My question: is there a way to use systemctl to directly interact with the console of the processes it spawns? Can I hook a process through systemctl to get access to its console? Thanks! You guys always give great answers, so I'm turning to you.
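
    Not a systemctl feature, but one hedged sketch of the pattern the poster already uses is to make the unit own the screen session, so the console stays reachable for a clean shutdown (all names and paths here are placeholders):

        [Unit]
        Description=Daemon with an interactive console, wrapped in screen

        [Service]
        Type=forking
        ExecStart=/usr/bin/screen -dmS mydaemon /usr/local/bin/mydaemon

        [Install]
        WantedBy=multi-user.target

    Reattach to the live console at any time with screen -r mydaemon, and stop the daemon by issuing its own shutdown command there rather than via systemctl stop.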

    Read the article

  • How many nginx/fastcgi processes do you use?

    - by qliq
    I have a Drupal-based website on a VPS with 1 GB RAM and a 1 GHz processor share. The web server is nginx with php-fastcgi. Currently I am using 10 nginx and 13 php-fastcgi processes. The server load is high most of the time while half of the RAM is unused, and CPU usage rarely reaches 80%. I have tried some other combinations of nginx/php-fastcgi but am not sure what the optimal combination is, because I am quite ignorant about what's going on below the surface. I would appreciate it if you could share your experience or give me some clues.
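
    As a data point rather than an answer, a common starting shape for a small VPS looks like this (the numbers are assumptions to tune against your own load, not measured optima). nginx workers are event-driven, so they are sized to cores; php-fastcgi children are the ones worth sizing by free RAM divided by per-process resident size:

        # /etc/nginx/nginx.conf (fragment)
        worker_processes  2;            # match cores, not client count
        events {
            worker_connections  1024;
        }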

    Read the article
