Search Results

Search found 5679 results on 228 pages for 'kill processes'.

Page 137/228 | < Previous Page | 133 134 135 136 137 138 139 140 141 142 143 144  | Next Page >

  • CentOS 6 init script doesn't work properly

    - by user711643
    I'm setting up my Ruby production server on CentOS 6. I need a process called god (a process monitoring tool) to start at boot, using an init script that I found here. Just as stated in the guide, I ran chkconfig --add god and then chkconfig --level 345 god on. After this, if I run "service god start|restart", everything works: it loads the available configurations and brings up the related processes (if they are not already running). The problem is that it doesn't work at boot. If I reboot the system and then run "ps -aux | grep god", god is running but apparently didn't load the configuration files. If I then run service god restart, it loads everything without problems. What am I doing wrong?
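
    One thing worth comparing is the init script's start stanza against the command that works by hand: if the script launches god without an explicit config file, god comes up running but empty, which matches the symptom. Below is a minimal sketch of a CentOS-style start stanza; the config path and flags are hypothetical, not taken from the linked script:

        # /etc/init.d/god (fragment) -- assumes the script sources
        # /etc/rc.d/init.d/functions, which provides daemon()
        start() {
            # -c loads the master config; without it god starts with no
            # configuration, i.e. "running but empty"
            daemon god -c /etc/god/master.conf -P /var/run/god.pid -l /var/log/god.log
            RETVAL=$?
            echo
            return $RETVAL
        }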

  • NVIDIA 8600 GT drivers will not run setup

    - by Zeno
    I am attempting to upgrade the video card drivers for my NVIDIA 8600 GT. I downloaded the drivers from NVIDIA, and when I run the setup, nothing happens. It does appear in the Processes tab, but never does anything. I've tried restarting, tried killing the process and re-running, and I am on an admin account... nothing works. What is wrong? [EDIT] The installer does run: it extracts the setup files to C:\NVIDIA as normal, but then, when it tries to run that setup to do the actual install, nothing happens (the process does show up in Task Manager and just sits there). Attempting to run the installer manually via C:\NVIDIA\DisplayDriver\296.10\WinXP\English\Setup.exe also leaves the process open, but nothing happens. This machine is Windows XP 32-bit.

  • Is there anything like Heroku for PHP and/or .NET?

    - by Wayne M
    In my area PHP is very widespread, and so is .NET. Ruby, not so much; most places have never heard of it. For some personal things I am "forced" to choose Rails because I want to take advantage of Heroku; the ability to deploy and scale in the cloud very easily is the main reason. Also, they offer a small FREE plan, with no ads or strings attached, that I can use for demo sites or, in this case, for my business' static page; as a totally bootstrapped startup I have maybe $50 or so in initial capital and cannot afford to pay monthly fees while I'm getting started. Are there any similar offerings for other languages? Specifically, I really like the small 5 MB site for free that Heroku offers; is there anything like that for PHP and/or .NET? I'm not even that concerned about the "cloud" part, but that would be a nice bonus. If there is, I might be able to kill two birds with one stone and pick up a useful skill while doing my own thing, instead of using something that nobody else knows or cares about. I should add that I'm specifically interested in something with a free plan. As I said, Heroku has a 5 MB plan, and you can have as many of those as you want for free; I have yet to find anything similar for any other platform (most of the "free" hosts require you to have ugly banners on your page, or don't allow you to use your own domain name), and to be honest I'm not too thrilled about using Ruby on Rails for everything simply to take advantage of this. I'm asking this here because I already asked it on Stack Overflow and someone suggested it would be better suited here.

  • nginx logrotate config

    - by TomOP
    What's the best way to rotate nginx logfiles? In my opinion, I should create a file "nginx" in /etc/logrotate.d/, fill it with the following code, and do a /etc/init.d/syslog restart after that. This would be my config (I haven't tested it yet):

        /usr/local/nginx/logs/*.log {
            # rotate the logfile(s) daily
            daily
            # adds an extension like YYYYMMDD instead of simply adding a number
            dateext
            # if a log file is missing, go on to the next one without issuing an error msg
            missingok
            # keep logfiles for the last 49 days
            rotate 49
            # old versions of log files are compressed with gzip
            compress
            # postpone compression of the previous log file to the next rotation cycle
            delaycompress
            # do not rotate the log if it is empty
            notifempty
            # create mode owner group
            create 644 nginx nginx
            # after the logfile is rotated and nginx.pid exists, send the USR1 signal
            postrotate
                [ ! -f /usr/local/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/nginx/logs/nginx.pid`
            endscript
        }

    I have both the access.log and error.log files in /usr/local/nginx/logs/ and want to rotate both daily. Can anyone please tell me if "dateext" is correct? I want the log filename to be something like "access.log-2010-12-04". One more thing: can I do the log rotation every day at a specific time (e.g. 11 pm)? If so, how? Thanks.
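
    On the two follow-up questions: dateext alone produces names like access.log-20101204; the dash-separated form needs the companion dateformat directive (present in newer logrotate releases), and a fixed-time run means invoking logrotate from cron directly. A sketch, assuming logrotate lives at /usr/sbin/logrotate:

        # In the nginx config block above, next to dateext:
        #   dateformat -%Y-%m-%d
        # yields access.log-2010-12-04 (dateformat needs a reasonably
        # recent logrotate; older builds only support the -%Y%m%d default)

        # Crontab entry forcing the rotation at 11 pm daily:
        0 23 * * * /usr/sbin/logrotate /etc/logrotate.d/nginx

    Note that a file under /etc/logrotate.d/ is also picked up by the distribution's daily logrotate run, so to rotate only at 11 pm, keep the config at a path outside that directory and point the cron entry there.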

  • Using socat to exec php cli

    - by RoyHB
    There are multiple client programs that periodically connect to a port on my server and send a single line of text. When a connection to the port is made, I need to start a PHP CLI script that processes the data. There may be many of the remote scripts running/connecting at more or less the same time, so I think it would be best if socat forked a process for each connection to run the script. I've gotten socat to do most of what I need, using the command

        socat tcp-l:myport,fork exec:mypath/socatTest.php

    and I can read the input on php://stdin. All is good. The problem is that the process doesn't seem to fork, so if a second external program sends data while another is doing the same, it gets a connection refused error. Where have I gone wrong?
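
    For comparison, the form usually quoted for one-process-per-connection serving looks like the sketch below; the port number and script path are placeholders. Note that fork and reuseaddr must be attached to the listening address, not the EXEC side:

        # Spawn one PHP CLI process per incoming TCP connection;
        # reuseaddr lets socat rebind the port immediately after a restart
        socat TCP-LISTEN:9000,fork,reuseaddr EXEC:"php /path/to/socatTest.php"

    If the script lacks a shebang line or the executable bit, EXEC: can fail in ways that look like the fork not happening, so invoking it through php explicitly (as above) is one thing worth ruling out.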

  • Strange Intermittent Background Sound in Windows 7

    - by NoCarrier
    Ok, this is very strange. Recently, I noticed that every 10-15 seconds there is a faint, annoying duh-dum sound coming out of my speakers (hard to describe; it sounds somewhat like the sound Windows makes when you unplug a USB device, but not as pronounced and much quieter). I closed every app and ended as many processes as I could, but the sound persists. I looked at the volume mixer and, sure enough, when the sound occurs there is a little spike in the level under "System Sounds". I haven't installed any hardware or software in the last several weeks. This started recently, completely out of the blue. Does anyone have any insight?

  • Linux freezes every few seconds

    - by Zeppomedio
    We're having an issue where one of our Linux boxes (Ubuntu 10.04 LTS, running on EC2 with a quadruple-large size, 68GB of RAM and 8 virtual cores at 3.25GHz each) freezes up every few seconds. Typing in an ssh session will freeze, and running strace on one of the PostgreSQL processes usually shows

        02:37:41.567990 semop(7831581, {{3, -1, 0}}, 1

    for a few seconds before it proceeds (it always gets stuck at that semop). OProfile shows that most of the time is spent in the kernel (60%) versus 37% in PostgreSQL. The result of these halts (which began suddenly a day ago) is that load on the box has gone from 0.7 to 10+, which slows down our entire stack. Any ideas on how to track down what's going on? iostat doesn't show the disks being particularly slow or overloaded, and top shows user CPU % spike from 8% to about 40% whenever these back-ups happen.
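
    Since semop points at System V semaphores (which PostgreSQL uses for lock waits), one low-risk starting point is to inspect the semaphore arrays and see what the stuck backend is actually sleeping on. A hedged sketch; the PID is a placeholder:

        # List System V semaphore arrays and their owners
        ipcs -s

        # Per-semaphore detail (waiting pids, last op) for one array;
        # the id is the first argument in the semop() line above
        ipcs -s -i 7831581

        # Which kernel function a stuck process is sleeping in
        cat /proc/1234/wchan; echo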

  • Win XP shut down window very slow to appear

    - by Heckmaier
    Hi all, I click Start > Shut Down and the Shut Down window takes 5 minutes to appear. This problem doesn't happen after a fresh boot; it only happens after I have been logged on for a while. I am using a MacBook Pro with Boot Camp. The machine actually shuts down quickly, it's just really slow to bring up the shut down window. Basically I would like to know: 1) Does anyone have any ideas why this would happen? 2) Who owns the "Shut Down" window (i.e. what happens in the OS after I click the Shut Down icon in the Start menu)? I've tried perusing the Task Manager to see if any processes look suspicious, but to no avail. Thanks, H

  • Passenger/Rails not releasing memory

    - by michaeldelorenzo
    I have an Ubuntu server running three separate Rails (2.3.8) applications with Passenger, REE and Apache. Recently we started experiencing problems with Ruby processes eating up memory and consuming entire cores on our server. Here's what we're getting:

        %CPU   PID   USER    COMMAND
        99.9   1717  nobody  Rails: /var/www/api
        99.6   5542  nobody  Rails: /var/www/api
        97.3   1223  nobody  Rails: /var/www/api
         4.7   5537  nobody  Passenger ApplicationSpawner: /var/www/api
        10.5   1801  nobody  Rails: /var/www/api

    We've also seen instances where there have been over 100 instances of Apache running. These applications had been running for a few months without any of these issues, but in the last day or so we've been noticing this. The site referenced here is a Rails application that is a RESTful API, so it serves many requests every minute. Any guidance on what we should be checking or looking out for would be appreciated.
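
    Passenger ships a couple of inspection commands that make a good first stop here, and a per-process request cap is the common stopgap for slow leaks. A sketch, hedged in that the exact directive belongs to the Passenger 2.x/3.x generation typically paired with Rails 2.3:

        # Live memory usage of Apache, Passenger and Rails processes
        sudo passenger-memory-stats

        # Passenger's own process and request-queue status
        sudo passenger-status

        # Apache config: recycle each application process after N requests
        # (a stopgap against leaks, not a fix for the underlying cause)
        PassengerMaxRequests 1000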

  • VPS stops responding every now and again

    - by Or W
    I have a Linode VPS that I use to host some of my websites. It's Ubuntu based and up to date in terms of packages. I don't have any cron jobs scheduled or any automatic processes. I host a few (up to date) WordPress blogs there that have very little traffic altogether. Every day, at a different time, my server stops responding: I can't SSH to it, web access times out, and it just dies until I reboot it through the Linode manager. On the Linode dashboard I can see that the CPU is not very high (2-3%), incoming/outgoing traffic is at 0, and the IO count has a spike just before the server stops responding (swap IO is at 2k and IO rate is at 5k). When I reboot the server everything is just fine. I'm trying to figure out a way to analyze what's going on at these random times when the server freezes up. How can I determine the problem?
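
    Given that the swap IO spike suggests the box is thrashing before it dies, a common approach is to leave a lightweight recorder running so there is evidence to read after the next freeze. A minimal sketch; the log path is a placeholder:

        # /etc/cron.d/snapshot -- record memory and process state every minute
        * * * * * root (date; free -m; vmstat 1 2; ps aux --sort=-%mem | head -15; echo ---) >> /var/log/freeze-snapshot.log 2>&1

    After a freeze, the last entries before the gap in timestamps usually show which process was growing. Installing atop, which keeps similar history automatically, is an alternative to the hand-rolled cron job.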

  • php-cgi got SIGKILL on burst.net VPS?

    - by Shawn
    I have a VPS at burst.net, a very cheap one, but that doesn't matter. The strange behavior is that the php-cgi process, which I started using lighttpd's spawn-fcgi, dies every few minutes. However, other processes are fine and great, even including a Java process, and I'm sure there is no "out of memory" issue, so it's not killed by the OOM killer. I used strace to trace the process, and found out it was killed by SIGKILL; hence not a single log was left on the disk, it just dies suddenly. Is there any way I can find out what process/thing sent the SIGKILL to the poor PHP process? I filed a ticket with the vendor, but they said they won't care.

        strace -p 7176
        Process 7176 attached - interrupt to quit
        wait4(-1,  <unfinished ...>
        +++ killed by SIGKILL +++
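
    One way to catch the sender is to audit the kill syscall, assuming the kernel exposes the audit subsystem at all; on cheap container-based VPSes (OpenVZ and the like) it frequently does not, which would itself be a clue. A sketch:

        # Log every kill() delivering signal 9; in kill(pid, sig) the signal
        # is the second argument, hence a1=9. Use arch=b32 on a 32-bit kernel.
        auditctl -a exit,always -F arch=b64 -S kill -F a1=9 -k catch_sigkill

        # After php-cgi dies again, see which pid/exe sent the signal
        ausearch -k catch_sigkill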

  • Problems getting auditd set up on my server

    - by Tola Odejayi
    I'm trying to figure out which processes are deleting files from a specific directory, so I want to set up and run auditd on my system. I've set up the following rule in audit.rules:

        -w S unlink -S truncate -S ftruncate -a exit,always -k cache_deletion -w /home/myfolder/cache

    Then I type this to start the audit daemon:

        auditctl -R /etc/audit/audit.rules -e 1

    But I get this error message:

        Error - nested rule files not supported

    Does anyone know what I am doing wrong here, and how I can resolve this? Also, what do I have to do to get the daemon running at startup?
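
    For reference, a syscall rule that records deletions under one directory typically looks like the sketch below; this is hedged in that exact syntax varies a little across auditctl versions, and arch should match the kernel:

        # Watch for file deletion/rename under the cache directory
        auditctl -a exit,always -F arch=b64 -S unlink -S unlinkat -S rename -S renameat \
            -F dir=/home/myfolder/cache -k cache_deletion

        # Query the matching events later
        ausearch -k cache_deletion

        # Start the daemon at boot on a chkconfig-based system
        chkconfig auditd on && service auditd start

    Note that -w starts a simple path watch and takes a path, not a syscall list; mixing it with -S on one line is the kind of thing that usually provokes parser errors.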

  • Giving the root user priority to maintain Debian (while server collapsing under heavy load)

    - by Saix
    Is there any way to set up Debian to prioritize any or specific root activity above everything else? For instance, several times per year something goes wrong (usually human fault, by overstressing Apache/MySQL) and the system becomes unresponsive under a heavy load like 200 (8-core CPU). I know there are limits for PHP scripts to run and then be killed, but that's not the way, because this limit has to be at least 45 minutes long. The problem is that until I'm able to log in via SSH and restart Apache/MySQL under this server stress, it nearly hits those 45 minutes anyway. Also, a hardware restart usually causes fsck to run at boot time on all hard drives, since it's usually been quite a long time since the box was restarted. I was told it's really not a good idea to disable fsck, but then again, it takes more than an hour to complete. What is the fastest way to restart Apache/MySQL? Is there any way to give SSH users or the root user higher priority, so that logging in and completing these restart (or rather stop) commands wouldn't take so long? One thing comes to mind: use nice for Apache/MySQL, but no way, I can't risk limiting those two vital apps 24/7... or could I? I'm a little bit scared that other system processes would then slow the pages down too much: any backup process, swap (if any), etc. There is a pretty heavy PHP framework with 20k visits a day, so it needs every hw/sw resource available. I can't throttle it the whole time, just at certain points when the system gets unresponsive, so I could maintain it.
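
    One commonly suggested compromise is to leave Apache/MySQL untouched and instead raise the priority of the SSH path used for rescue, so a login shell can still get CPU and disk time under load. A hedged sketch:

        # Give sshd a CPU edge and best-effort-high IO priority
        renice -n -10 -p $(pgrep -x sshd)
        ionice -c2 -n0 -p $(pgrep -x sshd)

        # To make this stick, run the two lines above from /etc/rc.local
        # (or re-run them after any sshd restart)

    Since nice and ionice values are inherited by child processes, a shell started from a prioritized sshd also runs prioritized, which is usually enough to issue a quick /etc/init.d/apache2 stop.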

  • Can't see CMD.EXE on Windows 7

    - by Andrea
    I have a problem with Windows 7 and cmd.exe under these conditions: log on as a non-admin user, then launch cmd.exe. I can see cmd.exe in Task Manager, but it's invisible on the desktop and I don't know what to do; everything is fine and I can see cmd.exe if I log in with an admin account. I can see it in the "Processes" tab but not in the "Applications" tab, and if I launch five cmd.exe's, I see five processes, but from that tab there is no "Bring to front" or "Maximise". I can't find any WOW folder under C:\Windows, even with show hidden and system files enabled. I'm running 32-bit Windows 7 on a 64-bit Intel Core 2 Duo E7500.

  • Access denied to EFS encrypted files after PC joins domain

    - by mjmarsh
    I'm experiencing strange behavior with the Windows Encrypting File System. I have a machine that is in workgroup mode (not joined to a domain). I encrypt an entire directory structure on the machine (basically a folder and subfolders with data files for my application). My application writes and reads files from the encrypted file hierarchy as a local Windows user (let's call the account 'SecureUser'), and this works fine. I then join the PC to a domain (let's call it 'TEST'). Afterwards, processes running as the local 'SecureUser' account can't read the files it wrote originally when it was off the domain. (What is also strange is that the files are now listed as "read only", and I cannot unset this flag via Windows Explorer or the command line, even though it looks like it succeeds.) I then un-join the PC from the domain and everything works again. Is there something about changing domain membership on a PC that changes the behavior of EFS, so that previously encrypted files cannot be read, even by the originating user? Thanks in advance

  • How to troubleshoot Application Popup issues 0XC0000142 and 0XC000009a

    - by DotDot
    I randomly run into one of these popups when our application runs. The machines range from 8GB/8-core to 24GB/24-core and run Windows Server 2008 R2. The application is a bunch of Perl scripts and exes that are expected to utilize the server heavily. The process tree can be quite deep (5-6 child levels) and quite broad (60-70 level-1 processes). We hit one of these errors in roughly 1% of runs, on random machines. The application stalls on the popup unless someone clicks the damn button. The event log reads: cmd.exe - "Failed to initialize app. Click OK to close app". How could I reliably repro these issues?

  • Windows Vista Home memory usage problem [closed]

    - by lordg
    Hi, I have a Windows Vista Home laptop from a client that is running on 1GB of RAM. The laptop is used for super basic things: Word, the internet, Outlook, etc. What makes zero sense is that the RAM is being completely consumed, causing the PC to hang sometimes when it can't take it anymore. However, in Task Manager the processes appear to consume only maybe 100MB (Private Working Set). The client literally has a simple setup and is running Kaspersky, though that does not seem to be the cause of the excessive memory usage. Does anyone have a suggestion on how to resolve the memory issue, or how to track down what is actually happening and fix it? G

  • Aligning Numbered Bullet Points in Word 2007

    - by Frustratedwithbullets
    Hello, I am putting together a very large business manual which incorporates numbered headings, steps to follow, diagrams, etc. When using the bullet points, they align perfectly as I work through the processes. However, when I include a diagram, or something different from the "norm" of text, the alignment changes. I would like all the bullet points to be aligned throughout the whole document, regardless of where they appear. Is there a way to save the settings so that the bullets always appear in the same position? Currently I am having to reset the indents by dragging the tabs on the ruler. This will be a large document, so I don't want to manually adjust the numbered bullets every time. Help would be greatly appreciated. Thanks very much.

  • How to Track CPU and Memory Usage Per Process

    - by Mjsk
    I have seen this question asked on here before, but was unable to follow the answer that was given. I would like to monitor a process's CPU, memory, and possibly GPU usage over a given time, and the data would be most useful presented as a graph. It would be nice if I could do this using Performance Monitor, but I am open to alternative solutions as well. I have tried using Performance Monitor, and my problem is that I'm not sure which performance counters to use, since there are so many. I've been looking at the Process, Processor, Memory, etc. categories, but I'm not sure which counters within those categories will be of interest to me. My OS is Windows 7.

  • What sort of things can cause a whole system to appear to hang for 100s-1000s of milliseconds?

    - by Ogapo
    I am working on a Windows game, and while rendering, some computers experience intermittent pauses ("hitches" for lack of a better term). When profiled, they appear in seemingly random places in the code. Eventually I noticed that it wasn't just my process that was affected, but (seemingly) every process on the system: all of the threads in my application hitch at once, CPU utilization drops during these hitches, and it appears as if most processes make no progress. This leads me to believe it may be an operating system or driver issue, but it only occurs while playing the game (and only on some systems). What sort of operations might the operating system be doing that would require the kernel to pause all user threads and block? Some kind of I/O? At first I thought of paging, but my impression is that would only affect a single process, no? Some systems in use: Windows, DirectX (3D), NVIDIA cards (unknown whether it replicates on ATI), using overlapped I/O for streaming.

  • Launch script after SFTP disconnect

    - by Mates
    I'm currently using Caja (basically the same as Nautilus) to connect to my server over SSH and work with files. What I'm looking for is a way to launch a simple script when I disconnect. I can launch a script after disconnecting from the TTY by putting it into the ~/.bash_logout file, but that is not executed when disconnecting from a file manager. The only idea I have is to set up a cron job that would periodically check for existing sftp-server or sshd processes and launch the script when there's no such process running. Is there any easier way to do this?
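
    Lacking a logout hook for SFTP sessions, the cron approach sketched in the question can be made edge-triggered with a small state file, so the script fires once per disconnect rather than every idle minute. A minimal sketch, assuming the server runs the external sftp-server binary (not internal-sftp) and that the script to launch is /usr/local/bin/on-sftp-gone.sh:

        #!/bin/sh
        # /usr/local/bin/check-sftp.sh -- run from cron every minute
        STATE=/var/run/sftp-was-active
        if pgrep -x sftp-server >/dev/null; then
            touch "$STATE"                      # remember a session existed
        elif [ -e "$STATE" ]; then
            rm -f "$STATE"                      # fire once, on the transition
            /usr/local/bin/on-sftp-gone.sh
        fi

        # crontab entry:
        # * * * * * /usr/local/bin/check-sftp.sh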

  • How to run a command in a process that is not a child of the current process?

    - by amicitas
    I am having a library-conflict issue when calling an external program from within an interpreted programming environment (IDL). The issue seems to be that since the program I am calling ends up as a child of IDL, libraries are not being reloaded. From within IDL I can launch sub-processes either directly or through a shell. Is there a good way to cause my program to be run without it ending up as a child process? The only solution I have found so far is to use ssh localhost my_program. This works perfectly, but I would like a more direct solution.
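
    The usual trick for handing a command to an unrelated parent is to let an already-running daemon launch it; the at queue is the lightest-weight candidate, assuming atd is installed and running. A sketch:

        # Run my_program as a child of atd rather than of the current shell/IDL
        echo '/path/to/my_program' | at now

        # Alternatively, batch defers the job until system load drops
        echo '/path/to/my_program' | batch

    One caveat: at preserves most of the submitting environment, so variables set by IDL (e.g. LD_LIBRARY_PATH) survive into the job; if the conflict is environmental rather than parentage-based, strip them in the submitted command, e.g. env -u LD_LIBRARY_PATH /path/to/my_program.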

  • Limit a process's relative (not absolute) processor consumption in Linux

    - by BobBanana
    What is the standard way in Linux to enforce a system policy that limits the relative CPU use of a single process? That is, on a quad-core machine, I never want a process to use more than 2 CPUs at once, even if the process creates more threads. I do not want an absolute time limit, just a relative limit, so that one task cannot dominate the machine. This is also different from renice, which allows a process to use all the resources but politely step aside if others need them too. ulimit is the usual resource-limiting tool, but it does not allow such CPU restrictions: it can limit the number of processes per user, or absolute CPU time, but cannot restrict the maximum number of active threads of a single process. I've found a couple of user-level tools, like cpulimit, but no system-level tool or setting. Does such a standard resource controller exist in Linux (Red Hat Enterprise, if it matters)? If there is such a limit imposed, how would a user identify it?
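
    The closest standard mechanism is CPU affinity: taskset pins one process, including all its threads, to a fixed set of cores, and cgroup cpusets turn the same idea into an inspectable system policy. A sketch; the PID and command are placeholders:

        # Pin an already-running process (all its threads) to CPUs 0 and 1
        taskset -cp 0,1 1234

        # Or launch it confined from the start
        taskset -c 0,1 mycommand --args

        # A user can identify the limit imposed on a process like this:
        taskset -cp 1234                        # prints the affinity list
        grep Cpus_allowed_list /proc/1234/status

    Note that affinity caps concurrency (at most two cores busy at once) rather than CPU share; for a fractional share spread across all cores, the cgroup cpu controller's quota mechanism is the system-level equivalent.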

  • What is the maximum memory that an IIS6 web site/app pool can use?

    - by Robin M
    I have an IIS 6 server running on Windows 2003 SP2 x86. The server has 4GB of RAM and runs consistently with 2GB allocated. I realise that with x86 the server won't utilize all of the 4GB of RAM, and the application space is also limited, but the IIS processes seem to be limited elsewhere: w3wp.exe never has more than 500MB allocated, and I occasionally get OutOfMemory exceptions from a busy .NET application (there are several applications running, each with a separate application pool). What is the maximum memory that an IIS6 web site/app pool can use?

  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only solution I have found once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it's plugged into, so I'm looking for a way to do this through the command line. This post suggests running:

        $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage

    However, I get an "unknown option -w" error. This slightly modified command:

        $ sudo modprobe -r usb_storage

    fails with the message "FATAL: Module usb_storage is in use". If I try to kill -9 the processes marked [usb-storage] before running it, they refuse to die (I think because they are deeply tied to the kernel). Does anyone know of a way to do this? NOTE: I cross-posted this on Server Fault as I didn't know which was more appropriate. I will delete and/or link whichever one is answered first.
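
    Short of unloading modules, the kernel can also be told to drop and re-adopt the device through sysfs, which behaves much like a replug. A hedged sketch; the bus ID 1-4 is a placeholder that has to be read off your own system first:

        # Find the device's bus ID (e.g. 1-4) by matching lsusb against sysfs
        ls /sys/bus/usb/devices/

        # Unbind (simulated unplug), then rebind (simulated replug)
        echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo '1-4' | sudo tee /sys/bus/usb/drivers/usb/bind

    This avoids touching usb_storage entirely, so the "module is in use" error doesn't apply; whether the modem fully resets this way depends on the device firmware.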
