Search Results

Search found 22668 results on 907 pages for 'command prompt'.

Page 608/907 | < Previous Page | 604 605 606 607 608 609 610 611 612 613 614 615  | Next Page >

  • continuous hard disk access - slowing down my machine

    - by suresh
    The hard disk access LED on the front of my machine shows that the disk is being accessed very frequently, and probably because of that the machine is quite slow. It becomes unresponsive even when the load, as reported by the w command, is around 1 or so. My desktop is a Dell OptiPlex 360 running Ubuntu 10.04. My questions are: how do I quantify hard disk access, and how do I tell whether it is more than "normal"? If it is more than normal, what are my options? Thanks, suresh
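
    One way to quantify disk activity (a sketch; iostat and iotop come from the sysstat and iotop packages, which should be installable on Ubuntu 10.04):

        # extended per-device I/O statistics, refreshed every 5 seconds
        iostat -dxk 5
        # per-process I/O, showing only processes currently doing I/O
        sudo iotop -o

    In the iostat output, a %util column that stays near 100% means the disk is saturated; iotop then shows which process is responsible.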

    Read the article

  • How to configure Chrome to open magnet URLs with deluge?

    - by michael_n
    After upgrading from Ubuntu 10.10 to 11.04 (Natty), I can no longer open magnet (torrent) links in Chromium and have deluge automatically open and accept the URL. (Edit: ".torrent" files are not a problem; magnet URLs, e.g. of the form "magnet:?xt=urn:...", are now the only problem. Not sure if something was updated...?) Instead, only Transmission will automatically open torrents, magnet links, etc. There doesn't seem to be a way to set deluge as the default torrent client, and there also doesn't seem to be a "default application" setting for the BitTorrent client that would replace Transmission with deluge. Notes: I found some old threads on this issue and only one or two newer ones; the newer threads suggest xdg-open is to blame, but not many people seem to run into this problem, so... maybe it's just me? I'm not using Firefox, so manually setting applications for MIME types or extensions doesn't work (that's not an option in Chrome/Chromium, AFAIK -- you have to rely on the OS). I uninstalled Transmission, and then basically nothing happened when clicking on torrent/magnet links. Running this from the shell also opens Transmission (not deluge):

        xdg-open "magnet:?xt=urn:bt..&tr=http://tracker.....com/announce"

    My current URL handlers are:

        $ gconftool -a /desktop/gnome/url-handlers/magnet
         command = deluge "%s"
         needs_terminal = false
         enabled = true

    The only workaround I have (which does work) is to rename /usr/bin/transmission-gtk{,.bak} and create my own /usr/bin/transmission-gtk:

        $ cat /usr/bin/transmission-gtk
        #!/bin/bash
        deluge "$@"

    Anyone else run into this, know of a bug, a workaround, or...?
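
    In newer GNOME stacks, xdg-open consults the shared MIME database rather than the old gconf url-handlers keys, so one hedged thing to try (assuming deluge ships a deluge.desktop file):

        # register deluge as the handler for the magnet URI scheme
        xdg-mime default deluge.desktop x-scheme-handler/magnet
        # verify the association took
        xdg-mime query default x-scheme-handler/magnet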

    Read the article

  • rsync for copying files

    - by vinayrks
    I am migrating my old server to a new server; I used this server for hosting websites. First I tried sftp, but due to the huge number of files and connection timeouts it simply didn't work. Then I tried rsync. rsync works well, but the only problem I'm facing is that it updates existing files nicely and quickly yet does not copy new files. Please help me, because I still need to transfer lots of files. I am using this command:

        rsync -anv -e ssh oldserver:/path/ /path
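
    Worth noting: in the command above, -n is rsync's --dry-run flag, which only reports what would be transferred without writing anything. A minimal sketch of the same migration without it (other flags as in the question):

        # -a archive, -v verbose; no -n, so files are actually copied
        rsync -av -e ssh oldserver:/path/ /path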

    Read the article

  • Nginx/FPM/PHP all php files say 'File not found.'

    - by Boon
    I just installed nginx 1.1.13 and PHP 5.4.0 on a CentOS 5.8 (final, 64-bit) machine. nginx and PHP/FPM are running, and I can run PHP scripts from the ssh command line, but in the browser I keep getting 'File not found.' errors for all my PHP files. This is how my nginx.conf handles PHP scripts:

        location ~ \.php$ {
            root /opt/nginx/html;
            fastcgi_pass unix:/tmp/fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME /opt/nginx/html$fastcgi_script_name;
            include fastcgi_params;
        }

    This is a direct copy/paste from my other servers, where it works fine with this setup (but they run older versions of PHP/FPM). Why am I getting those errors?
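
    'File not found.' is the message PHP-FPM itself returns when it cannot resolve, or is not allowed to execute, the SCRIPT_FILENAME it was given, so a few hedged checks on the FPM side (the log and config paths are assumptions -- adjust to this install):

        # can the user the FPM pool runs as reach the docroot?
        sudo -u nobody ls /opt/nginx/html      # substitute the pool's actual user
        # FPM since PHP 5.3.9 refuses extensions not in security.limit_extensions
        grep -rn 'security.limit_extensions' /opt/php/etc/
        # the pool's error log usually names the exact path FPM tried
        tail -f /opt/php/var/log/php-fpm.log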

    Read the article

  • When clicking an irc:// link, a new instance of chatzilla opens instead of the existing one being used.

    - by WebDevHobo
    That is my problem in a nutshell. I'm running Windows 7 32-bit, with ChatZilla on XULRunner, not as the Firefox add-on. When I click any irc:// link, a new instance of ChatZilla is started. I have a lot of startup commands set, so all of those get executed; I stop the new instance before it takes off, but this is rather annoying. Firefox's application settings just link to the path of the executable, with no option to set any command-line arguments that would make the existing instance be used. Is there any Firefox or Windows setting I can change so that when Firefox calls chatzilla.exe, the existing instance is used instead of a new one being opened?

    Read the article

  • Sensitive data in init scripts

    - by Steve Jorgensen
    I'm adapting some examples I've found by Googling to build an init script to run a VirtualBox OSE virtual machine as a daemon. I would like to specify a password for VNC access to the VM, and this must be given as an argument to the VBoxHeadless command. Conventionally, init scripts are readable by standard users, and this seems like a useful convention, but I also don't want the VNC password for this VM to be stored in easily accessible plain text. What's the most appropriate/conventional way to handle this kind of situation? Maybe put a root-readable supporting data file someplace, and have the init script load the value from there?
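
    A minimal sketch of the root-readable supporting-file approach (the config path is hypothetical, and the --vncpass flag name is an assumption based on the question; check VBoxHeadless --help for the exact option):

        # /etc/vbox/myvm.conf -- owned root:root, chmod 600
        VNC_PASSWORD='changeme'

        # in the world-readable init script:
        [ -r /etc/vbox/myvm.conf ] && . /etc/vbox/myvm.conf
        VBoxHeadless --startvm "MyVM" --vnc --vncpass "$VNC_PASSWORD" &

    The script itself stays readable to everyone, while the password lives in a file only root can open.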

    Read the article

  • Vim lint check - only show message if there's an error

    - by GorillaSandwich
    I have this line in my .vimrc, which means "when I save a .rb file, run it through ruby -c" (the Ruby interpreter's syntax check):

        autocmd BufWritePost *.rb !ruby -c <afile>

    When I save such a file, I always see output at the bottom of the screen, so I get used to it and start ignoring it. What I want is to see output only when there are errors. I can see that when there are errors, after reporting them, it says "shell returned 1" at the bottom. How can I modify this line so that it only shows a message if the shell returns 1? Is there a way to conditionally suppress output from a shell command run in vim?
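
    One hedged way to do this is to capture the output with system() and test v:shell_error, instead of running :! directly (a sketch in plain Vim script):

        " run ruby -c quietly; echo its output only when the exit status is non-zero
        function! RubyCheck()
          let l:out = system('ruby -c ' . shellescape(expand('%')))
          if v:shell_error != 0
            echo l:out
          endif
        endfunction
        autocmd BufWritePost *.rb call RubyCheck()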

    Read the article

  • Using physical disk with VMware Workstation

    - by chx
    I am using VMware Workstation 9.0 under Windows 7 and trying to boot my Linux install from PhysicalDisk0. It starts to boot: GRUB sees the two partitions on the disk (I checked from its command line), and the kernel and initrd load, but then it stops saying "device not found" and drops me into an emergency shell. Indeed, there is absolutely nothing disk-like in /dev: not the /dev/sda device it expects, not /dev/hda, nothing. Edit: I can boot the Linux disk just fine if I boot it from the BIOS and not as a VM. Edit 2: The question is, how can I make this setup work?
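
    A common cause is an initrd built for the bare-metal controller, without drivers for the ones VMware emulates (LSI Logic SCSI or PIIX IDE). A hedged sketch for a Debian/Ubuntu-style initramfs, run while booted on real hardware (the module names are the usual ones; verify against the VM's virtual controller type):

        # add the VMware-emulated controller drivers to the initramfs
        printf 'mptbase\nmptscsih\nmptspi\nata_piix\n' | sudo tee -a /etc/initramfs-tools/modules
        sudo update-initramfs -u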

    Read the article

  • Error while starting web application.

    - by Lalit
    When you right-click a Web site in the Microsoft Internet Information Services (IIS) Microsoft Management Console (MMC) snap-in and then click Start, the Web site does not start and you receive the following error message: "The process cannot access the file because it is being used by another process." What do I have to do? To resolve this issue I found a solution at http://support.microsoft.com/kb/890015: you must use the Netstat.exe utility at the command line to see if another process is using port 80 or port 443. But how do I check whether those ports are in use? What status should I be looking for? The second solution is the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\HTTP\Parameters\ListenOnlyList, but that key is not found.
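
    A hedged sketch of the netstat check the KB article describes (run in a command prompt; a line in the LISTENING state means another process already owns the port):

        netstat -ano | findstr ":80"
        netstat -ano | findstr ":443"
        rem look up the owning process; replace 1234 with the PID netstat printed
        tasklist /FI "PID eq 1234"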

    Read the article

  • Workaround for an Xcode/iOS SDK Issue...

    - by Joe Huang
    Hi, everyone: When you are doing ADF Mobile development and need to deploy the application to an iOS device, you must compile and deploy the app with iOS app certificates and a provisioning profile. This means you need to "Deploy to Package" or "Deploy to iTunes" during deployment and configure JDeveloper with the proper certificates/profiles. In some instances (the exact combination is still not clear), deploying and signing the application to generate the ipa file may fail with an error message like the following at the end of the deployment log:

        [01:04:45 PM] Deployment failed due to one or more errors returned by '/usr/bin/xcrun'. The following is a summary of the returned error(s):
        Command-line execution failed (Return code: 1)
        error: /usr/bin/codesign --force --preserve-metadata=identifier,entitlements,resource-rules --sign iPhone Distribution: Oracle Corporation --resource-rules=/var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/Payload/HelloWorld.app/ResourceRules.plist --entitlements /var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/entitlements_plistEINPBkIG /var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/Payload/HelloWorld.app failed with error 1.
        Output: /var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/Payload/HelloWorld.app: replacing existing signature
        Program /usr/bin/codesign returned 1 : [/var/folders/x7/21sjrpx13qj9tq20z14s3j_w0000gn/T/tkROhP11qU/Payload/HelloWorld.app: replacing existing signature

    This is a known issue and is not related to ADF Mobile. The workaround is discussed in an article from StackOverflow, which refers to the old location of Xcode, so you will need to adjust the paths accordingly. For Xcode 4.3 and above, the path to the script file is:

        /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin/PackageApplication

    To modify it, you probably can't use a plain text editor; I ended up opening a terminal session, changing the file permission, and using vi to update it. Thanks, Oracle ADF Mobile Product Management Team
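
    A hedged sketch of that last step, matching what the post describes (the path is the Xcode 4.3+ one given above):

        cd /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin
        sudo chmod u+w PackageApplication    # the script ships without write permission
        sudo vi PackageApplication           # apply the edit from the StackOverflow workaround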

    Read the article

  • catch-22 with apt-get

    - by Mark J Seger
    I recently installed a package and discovered that my scp stopped working. After removing and installing some things I got that fixed, but then I started getting apt-get errors about

        dpkg: error: configuration error: /etc/dpkg/dpkg.cfg.d/multiarch:1: unknown option 'foreign-architecture'

    so I commented that line out and thought it fixed the problem -- until I discovered that the Chrome icon in my launcher had turned into a ? and Chrome no longer worked. I tried to reinstall it and got the apt-get error "ambiguous package name 'libglib2.0-0' with more than one installed instance". If I try to remove libglib2.0-0 I get:

        The following packages have unmet dependencies:
         iceape-browser : Depends: iceape but it is not going to be installed
         iceape-chatzilla : Depends: iceape (>= 2.7.11) but it is not going to be installed
                            Depends: iceape (<= 2.7.11-1.1~) but it is not going to be installed
        E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.

    So then I tried to remove iceape-browser, and it complained about the two instances of libglib2.0-0. In fact, virtually any command I issue does the same thing, so I don't know what I can do to untangle this.
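
    Two hedged observations that fit these symptoms: newer dpkg replaced the dpkg.cfg 'foreign-architecture' option with a command, and once two architectures of a library are installed, its package name must be architecture-qualified:

        # the modern replacement for the commented-out config line
        sudo dpkg --add-architecture i386
        # see which architectures of the library are installed
        dpkg -l | grep libglib2.0-0
        # then address one instance specifically instead of the ambiguous name
        sudo apt-get install --reinstall libglib2.0-0:amd64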

    Read the article

  • setting up delegate or smtp forwarding

    - by cotiso
    For work we have a remote dedicated server that runs our web service and also our email services. At home (Comcast residential internet) I cannot send mail using the dedicated server's SMTP; Comcast returns an error saying I can only use their SMTP server for sending mail. At work (Comcast business internet) we can use our dedicated server for sending mail with no problem, so I set up a box at work to forward SMTP traffic. I'm new to all this networking stuff, by the way. I used DeleGate to forward the SMTP traffic; can someone point me in the right direction on how to use this program to fix our issue? The DeleGate command I used to test is:

        delegated -P25 SERVER="smtp://dedicated.server.com:25" PERMIT=":::" -v

    I also opened port 25 on the router so it points to my box's IP. Are there any other ways to fool Comcast into thinking I'm using my work's IP to send mail? My coworkers and I have been unable to send mail from home for some time now. Thanks
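
    Two hedged alternatives that avoid relaying through the office box entirely: ISPs that block outbound port 25 usually still allow the authenticated submission port (587, if the server's MTA offers it), and an SSH tunnel sidesteps the block altogether (hostname as in the question):

        # tunnel local port 2525 to the dedicated server's SMTP port over ssh
        ssh -N -L 2525:localhost:25 user@dedicated.server.com
        # then point the mail client at localhost:2525 as its SMTP server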

    Read the article

  • How can I allow a linux subversion user to only execute svnserve?

    - by sbleon
    I've got a user that I'd like to only be able to use Subversion. We like to use svn+ssh:// URLs sometimes (for public keys and whatnot), so I need them to be able to connect over ssh and run only the svnserve command. When using an svn+ssh URL, svn sshes in and passes the arguments "-c svnserve -t". I wrote a custom shell as follows to filter the commands that can be run. This works, but it's not passing the input to svnserve, so when I try to "svn up" I get "svn: Connection closed unexpectedly".

        #!/bin/bash
        if [ "$1" == "-c" ] && [ "$2" == "svnserve" ] && [ "$3" == "-t" ] && [ "$4" == "" ]; then
            exec svnserve -t
        else
            echo "Access denied. User may only run svnserve."
        fi
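
    One likely explanation, consistent with the symptom: sshd invokes the login shell as shell -c 'svnserve -t', so the entire remote command arrives as a single argument in $2, the test above never matches, and the script falls through to the echo -- which svn sees as the connection closing. A sketch of the corrected check:

        #!/bin/bash
        # sshd runs: $SHELL -c 'svnserve -t' -- the whole command is one argument
        if [ "$1" = "-c" ] && [ "$2" = "svnserve -t" ]; then
            exec svnserve -t
        else
            echo "Access denied. User may only run svnserve." >&2
            exit 1
        fi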

    Read the article

  • What could be causing frequent display freezes?

    - by austen
    I just installed Ubuntu 14.04 two days ago (coming from Windows 8), and in the two days I've been using it, my display has frozen four or five times. The mouse won't move, but the keyboard does respond, so I can use Ctrl+Alt+Backspace to recover. It seems like it might just be the display freezing, because one of the times I was watching a YouTube video and the audio continued playing. I have an Nvidia graphics card with the most recent Nvidia drivers for it enabled. I see that a lot of questions about Ubuntu freezing get marked as duplicates and linked back to a thread about what to do when it freezes. Clearly, I've got that bit figured out already, and I did read that thread for further advice; what I'm looking for is how to fix this permanently. Output from lspci -nnk | grep -iA2 VGA:

        00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09)
                Subsystem: Lenovo Device [17aa:2200]
                Kernel driver in use: i915

    Update: JohnnyEnglish pointed out that Ubuntu is using the integrated graphics, not my Nvidia card. It turns out my laptop uses Nvidia Optimus, and I cannot enable only the graphics card through the BIOS. I found out about Nvidia Prime and got it set up using this article. The settings panel that lets you select the graphics says 'performance mode' is enabled, but when I check which graphics controller is in use through the terminal, it still says it's using the integrated graphics. I'm not sure whether this could be causing the freezes, but I guess it's a starting point. Any ideas on how to resolve this?
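
    A hedged way to confirm which GPU is actually rendering, using the tools nvidia-prime ships with on 14.04 (glxinfo comes from the mesa-utils package):

        # what PRIME thinks the current profile is
        prime-select query
        # what the X stack is really rendering with
        glxinfo | grep -i "opengl renderer"
        # switch to the discrete card, then log out and back in
        sudo prime-select nvidia

    If the renderer string still names the Intel chip after a re-login, that narrows the problem to the PRIME setup rather than the Nvidia driver itself.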

    Read the article

  • Forgot to unmount/eject external hard drive, lost moved files. Mac OS X

    - by balupton
    So I was using my Mac with my external hard drive connected via USB. I moved about 10 GB of data to it (via drag and drop while holding down the Command key, to move the files rather than copy them). They moved to the drive all right, but I was having some issues and the Finder crashed after the transfer, so I was unable to eject the volume; later everything froze and I had to do a hard restart (hold the power button). When I plugged the external hard drive back in, it no longer had any of the files I had moved onto it. As it was a lot of data, how can I recover these files?

    Read the article

  • How to restore from file using Symantec NetBackup 7.5

    - by Tony
    I have an install of Symantec NetBackup 7.5, and I want to restore the server from a NetBackup image file. The file was created with NetBackup before I arrived; we had a hardware failure that corrupted this server, it had to be rebuilt, and now we want to restore from this image file. I can't for the life of me figure out how to restore from that file. I've installed the NetBackup application, but it can't find the file when I use the restore command within the application. If I double-click the file, it opens the application and then gives me the same "can't find any NetBackup files" error. I also can't simply drag the file into the NetBackup window. Any advice on how to restore from this file would be appreciated. Thank you.

    Read the article

  • Error while compiling/installing PHP with FPM for RPM on CentOS 5.4 x64

    - by Raymond
    Hi, I'm trying to make an RPM with PHP 5.3.1 and PHP-FPM 0.6 for CentOS 5.4. So far it goes quite well, but when rpmbuild gets to the installation phase it fails with the following error:

        Executing(%install): /bin/sh -e /var/tmp/rpm-tmp.63379
        + umask 022
        + cd /usr/src/redhat/BUILD
        + cd /usr/src/redhat/BUILD/php-5.3.1/fpm-build/
        + make install
        Installing PHP SAPI module: fpm
        Installing PHP CLI binary: /usr/bin/
        cp: cannot create regular file `/usr/bin/#INST@12668#': Permission denied
        make: *** [install-cli] Error 1
        error: Bad exit status from /var/tmp/rpm-tmp.63379 (%install)

        RPM build errors:
            Bad exit status from /var/tmp/rpm-tmp.63379 (%install)

    I am running rpmbuild as a normal user, so it's understandable that it fails to install anything into /usr/bin, but it shouldn't try to install anything outside the buildroot in the first place. I have specified BuildRoot in the header of the spec file, and I can see it being passed correctly to the make install command. Does anyone have an idea of what is going wrong here? Thanks a lot!
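
    A hedged guess at the cause: PHP's generated makefiles prefix install paths with INSTALL_ROOT rather than the DESTDIR variable most spec templates pass, so the buildroot is silently ignored. The %install section would then need something like:

        %install
        rm -rf %{buildroot}
        # PHP honours INSTALL_ROOT where most packages use DESTDIR
        make install INSTALL_ROOT=%{buildroot}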

    Read the article

  • simple script to backup PostgreSQL database

    - by Mick
    Hello, I wrote a simple batch script to back up PostgreSQL databases, but I ran into one strange problem: can the pg_dump command be given a password? Here is the batch script:

        REM script to back up PostgreSQL databases
        @ECHO off
        FOR /f "tokens=1-4 delims=/ " %%i IN ("%date%") DO (
            SET dow=%%i
            SET month=%%j
            SET day=%%k
            SET year=%%l
        )
        SET datestr=%month%_%day%_%year%
        SET db1=opennms
        SET db2=postgres
        SET db3=sr_preproduction
        REM SET db4=sr_production
        ECHO datestr is %datestr%
        SET BACKUP_FILE1=D:\%db1%_%datestr%.sql
        SET FILENAME1=%db1%_%datestr%.sql
        SET BACKUP_FILE2=D:\%db2%_%datestr%.sql
        SET FILENAME2=%db2%_%datestr%.sql
        SET BACKUP_FILE3=D:\%db3%_%datestr%.sql
        SET FILENAME3=%db3%_%datestr%.sql
        SET BACKUP_FILE4=D:\%db4%_%datestr%.sql
        SET FILENAME4=%db4%_%datestr%.sql
        ECHO Backup file name is %FILENAME1% , %FILENAME2% , %FILENAME3% , %FILENAME4%
        ECHO off
        pg_dump -U postgres -h localhost -p 5432 %db1% > %BACKUP_FILE1%
        pg_dump -U postgres -h localhost -p 5432 %db2% > %BACKUP_FILE2%
        pg_dump -U postgres -h localhost -p 5432 %db3% > %BACKUP_FILE3%
        REM pg_dump -U postgres -h localhost -p 5432 %db4% > %BACKUP_FILE4%
        ECHO DONE !

    Please give me advice. Regards, Mick
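
    pg_dump has no command-line password option, but it does read the PGPASSWORD environment variable and the pgpass file; a hedged sketch of both for this script:

        REM option 1: set the password for this session only
        SET PGPASSWORD=yourpassword
        pg_dump -U postgres -h localhost -p 5432 %db1% > %BACKUP_FILE1%

        REM option 2: create %APPDATA%\postgresql\pgpass.conf containing
        REM   localhost:5432:*:postgres:yourpassword
        REM and pg_dump will pick it up with no script changes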

    Read the article

  • New Analytic settings for the new code

    - by Steve Tunstall
    If you have upgraded to the new 2011.1.3.0 code, you will find some very useful new settings for Analytics. If you didn't already know, the analytic datasets have the potential to fill up your OS hard drives. The more datasets you use and create, the faster this can happen. Since they take a measurement every second, forever, some of these metrics can grow to multiple gigabytes in a matter of weeks.

    The traditional 'fix' was to go into Analytics -> Datasets about once a month and clean up the largest datasets. You did this by deleting them. Ouch. Now you lost all of that historical data that you might have wanted to check out many months from now. Or you had to export each metric individually to a CSV file first. Not very easy or fun. You could also suspend a dataset and have it not collect data at all. Well, that fixed the problem, didn't it? Of course, you now had no data to look at. Hmmmm....

    All of this is no longer a concern. Check out the new Settings tab under Analytics. Now I can tell the ZFSSA to keep every second of data for, say, 2 weeks, and then average those 60 seconds of each minute into a single 'minute' value. I can go even further and ask it to average those 60 minutes of data into a single 'hour' value. This allows me to effectively shrink my older datasets by a factor of 1/3600. Very cool. I can now allow my datasets to run forever and really never have to worry about them filling up my OS drives.

    That's great going forward, but what about those huge datasets you already have? No problem. Another new feature in 2011.1.3.0 is the ability to shrink the older datasets in the same way. Check this out. I have here a dataset called "Disk: I/O ops per second" that is about 6.32 MB on disk. (You need not worry so much about the "In Core" value, as that is in RAM and fluctuates all the time; once you stop viewing a particular metric, you will see it shrink over time. Just relax.) When one clicks the trash-can icon to the right of the dataset, it used to delete the whole thing, and you had to re-create it from scratch to get the data collecting again. Now, however, it gives you a prompt that allows you to once again shrink the dataset by averaging the second data into minutes or hours. After doing this, my dataset shrank from 6.32 MB down to 2.87 MB, but I can still see my metrics going back to the time I began the dataset.

    Now, you do understand that once you do this, as you look back in time to the minute or hour data metrics, you are going to see much larger time values, right? You will need to decide what granularity you can live with, and for how long. For example, comparing my "Disk: Percent utilized" metric from 5-21-2012, 2:42 pm to 4:22 pm, before and after changing everything older than 1 week to "Minutes", the same date and time range is noticeably coarser afterward. Just understand what this will do and how you want to use it. Right now I'm thinking of keeping the last 6 weeks of data as "seconds", then the last 3 months as "Minutes", and then "Hours" forever after that. I'll check back in six months and see how the sizes look.

    Steve

    Read the article

  • Restoring GRUB2 on Software RAID 0 after Windows 7 wiped it using LiveCD

    - by unknownthreat
    I have installed Ubuntu 10.10 on my system. However, I needed to install Windows 7 back, and I expected it would alter GRUB, which it did. Right now the partitions on my software RAID 0 look like this: nvidia_acajefec1 is Ubuntu 10.10 and nvidia_acajefec3 is Windows 7. I've been following some guides I found by Googling, and I am always stuck at GRUB not being able to detect the usual RAID content. I tried running:

        sudo grub
        > root (hd0,0)

    GRUB complains it couldn't find my hard disk. So I tried 'find (hd0,0)', and it complains that it couldn't find anything. So I tried 'find /boot/grub/stage1', and it said "file not found". Here's the text from the console:

        ubuntu@ubuntu:~$ grub
        Probing devices to guess BIOS drives. This may take a long time.

            [ Minimal BASH-like line editing is supported.   For the first word, TAB
              lists possible command completions.   Anywhere else TAB lists the possible
              completions of a device/filename. ]

        grub> root (hd0,0)
        Error 21: Selected disk does not exist

        grub> find /boot/grub/stage1
        Error 15: File not found

    Fortunately, one person suggested that what I've been trying to do is for GRUB Legacy, not GRUB2. So I went to the suggested guide (http://grub.enbug.org/Grub2LiveCdInstallGuide), tried to follow it, and got:

        ubuntu@ubuntu:~$ sudo fdisk -l
        Unable to seek on /dev/sda

    This is just step 2 of the instructions in that guide, and I cannot proceed because it cannot seek /dev/sda. However:

        ubuntu@ubuntu:~$ sudo dmraid -r
        /dev/sdb: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0
        /dev/sda: nvidia, "nvidia_acajefec", stripe, ok, 488397166 sectors, data@ 0

    So what now? Do you have an idea of how to make fdisk see my RAID array on the live CD (Ubuntu 10.10)? Honestly, I am lost, very lost, in trying to restore GRUB2 on this software RAID 0 system.
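
    A hedged sketch of the usual fakeraid route from a live CD: activate the dmraid sets so the array appears under /dev/mapper (the set name below is the one dmraid -r reported), then chroot and reinstall GRUB2 against the mapper device rather than /dev/sda:

        sudo dmraid -ay                       # assemble the nvidia fakeraid set
        ls /dev/mapper/                       # expect nvidia_acajefec and its partitions
        sudo mount /dev/mapper/nvidia_acajefec1 /mnt
        for d in dev proc sys; do sudo mount --bind /$d /mnt/$d; done
        sudo chroot /mnt grub-install /dev/mapper/nvidia_acajefec
        sudo chroot /mnt update-grub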

    Read the article

  • File doesn't exist in Linux although 'locate' finds it in the terminal

    - by Mazen Ayman
    I'm a bit new to the Unix/Linux environment, but I have a small problem. I'm using "locate" to find the path of a file I need. It gives me a path, but the file doesn't exist at that path:

        locate test1.txt
        /home/user/test files/text1.txt
        /home/user/test1.txt~

    The "test files" directory is where I keep the file, and I copied it to the home directory once, but I deleted it; I have no idea why locate keeps telling me there is still a temp file for it. It's worth mentioning that I used the command 'locate test1.txt~ | xargs -n1 rm' to remove that temp file, so maybe that is what caused the problem. I tried showing hidden files and checking for temp files, and didn't find it either. Any clue what happened?
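
    This is expected behaviour: locate does not search the live filesystem, it searches an index that updatedb rebuilds periodically (typically from a daily cron job), so deletions linger until the next rebuild. A quick check:

        sudo updatedb        # rebuild the index right now
        locate test1.txt     # stale entries should be gone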

    Read the article

  • HTC Android fails to mount - mount from computer?

    - by Ben Franchuk
    I have an HTC Incredible S (S-OFF, rooted, ViperVIVO 1.3.0 ICS) that has seemingly lost the ability to mount its SD storage to my computer. Whenever I plug the device in to transfer files between computer and phone, the computer cannot actually access the phone. I get prompted with a window on the phone asking which mode I want to put the device into (charge mode, tether mode, etc.), and even if I select "Disk Drive", the phone still does not mount successfully. The phone itself unmounts the SD card and says the computer is connected, but again, nothing shows up. Is there any way to force-mount the device from my computer, via a command or otherwise? If I unmount the SD from the phone, I should be able to mount it from my computer, correct?
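
    A hedged fallback while mass-storage mode is broken: with USB debugging enabled on the phone and the Android SDK's adb installed on the computer, files can be copied without mounting at all:

        adb devices                          # confirm the phone is visible over USB
        adb pull /sdcard/ ./sdcard-backup/   # copy phone -> computer
        adb push somefile.txt /sdcard/       # copy computer -> phone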

    Read the article

  • Launch elasticsearch dockerfile using my own elasticsearch.yml

    - by Kevin
    I am launching elasticsearch via a Dockerfile found here: https://index.docker.io/u/ehazlett/elasticsearch/ It works great, but I need to define my own hosts, as my environment does not support multicast of any kind. I understand that my options are: 1) supply the hosts as a command-line parameter when elasticsearch is run, or 2) modify my elasticsearch.yml file to set the hosts. I know how to build the yml; what I need to know is how to launch elasticsearch via docker using my own yml instead of the one in the container. Is that possible? Thanks.
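
    One hedged approach is docker's bind mount, which overlays a host file on top of the one baked into the image; the yml itself would then carry the unicast host list (e.g. discovery.zen.ping.unicast.hosts). The container-side path below is an assumption -- check the image's Dockerfile for where it actually keeps elasticsearch.yml:

        docker run -d -p 9200:9200 -p 9300:9300 \
            -v "$(pwd)/elasticsearch.yml:/opt/elasticsearch/config/elasticsearch.yml" \
            ehazlett/elasticsearch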

    Read the article

  • Seeking a printing/reporting solution for .NET

    - by Parhs
    I am developing an application that, in extreme cases, prints about 20-25 pages per minute in separate threads to various thermal printers. Currently the templates for these are XAML XPS documents. All the printers have graphics drivers that support EMF/GDI printing, so GDI-to-EMF conversion is done by the operating system, resulting in slower performance. Sending raw text for printing is another good solution, but it doesn't always work, because some clients have old Chinese thermal printers that nobody supports, making it impossible to change the codepage or emulation. Also, most computers running my software have low-end Atom CPUs. So I am thinking of returning to GDI/EMF printing and having both text-only reports and EMF reports. Another reason I want EMF is that, here, receipts are signed by an Electronic Fiscal Memory device, and most of these devices don't do a good job extracting text from XPS: they don't follow the standard but rather how Windows converts GDI to XPS. Even in text-only mode, some of them don't support all character encodings, and it's impossible to send a paper-cut command after the sign. I know that using a reporting engine would solve the rendering problem, but I don't want to buy one. All I want is to be able to show tabular data, insert an image, and do text replacement. I know there is StringTemplate, which could handle the template generation, but the problem is that I would somehow have to parse the template and render it using GDI commands. Is there any other solution or approach for this? Or is there anything ready-made?
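
    For reference, a minimal sketch of what the GDI route looks like in .NET (System.Drawing.Printing; the printer name is a placeholder):

        using System.Drawing;
        using System.Drawing.Printing;

        var doc = new PrintDocument();
        doc.PrinterSettings.PrinterName = "MyThermalPrinter";  // placeholder
        doc.PrintPage += (s, e) =>
        {
            // tabular data, images and replaced text are drawn with plain GDI+ calls;
            // the driver converts this to EMF for the spooler
            e.Graphics.DrawString("RECEIPT", new Font("Consolas", 10), Brushes.Black, 10, 10);
        };
        doc.Print();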

    Read the article

  • Why don't %MEM values add up to mem in top?

    - by ben
    I'm currently debugging performance issues with my VPS, and to do that I'm trying to understand which processes eat the most memory. Reading top, here's what I get:

        Mem:    366544k total,  321396k used,   45148k free,     380k buffers
        Swap:  1048572k total,  592388k used,  456184k free,    7756k cached

          PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
        12339 ruby   20   0  844m  74m 2440 S    0 20.8  0:24.84 ruby
        12363 ruby   20   0  844m  73m 1576 S    0 20.6  0:00.26 ruby
        21117 ruby   20   0  171m  33m 1792 S    0  9.3  2:03.98 ruby
        11846 ruby   20   0  858m  21m 1820 S    0  6.0  0:09.15 ruby
        21277 ruby   20   0  219m  11m 1648 S    0  3.2  2:00.98 ruby
          792 root   20   0  266m  10m 1024 S    0  3.0  1:40.06 ruby
          532 mysql  20   0  234m 4760 1040 S    0  1.3  0:41.58 mysqld
          793 root   20   0  250m 4616  984 S    0  1.3  1:20.55 ruby
          586 root   20   0  156m 4532  848 S    0  1.2  6:17.10 god
        12315 ruby   20   0  175m 2412 1900 S    0  0.7  0:07.55 ruby
         3844 root   20   0 44036 2132 1028 S    0  0.6  1:08.22 ruby
        10939 ruby   20   0  179m 1884 1724 S    0  0.5  0:08.33 ruby
         4660 ruby   20   0  229m 1592 1440 S    0  0.4  2:55.46 ruby
         3879 nobody 20   0 37428  964  520 S    0  0.3  0:01.99 nginx

    As you can see, my memory is about 90% used (which is my issue), but when you add up the %MEM values, it comes to only about 50-60%. Likewise, RES doesn't add up to ~350 MB. Why? Am I misunderstanding their meaning? Thanks
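
    A hedged pointer at where the "missing" memory usually hides: %MEM is computed from RES, which counts only pages resident in physical RAM, so anything swapped out (592 MB here) or held by the kernel (page cache, slab, tmpfs) never appears in any process row. Per-process swap can be checked directly (the VmSwap field needs kernel 2.6.34 or later):

        # how much of each big ruby process is sitting in swap
        for pid in 12339 12363 21117 11846; do
            printf '%s: ' "$pid"; awk '/VmSwap/ {print $2, $3}' /proc/$pid/status
        done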

    Read the article
