Search Results

Search found 23555 results for 'command timeout'.


  • Hard Reset USB in Ubuntu 10.04

    - by Cory
    I have a USB device (a modem) that is really finicky. Sometimes it works fine, but other times it refuses to connect. The only fix I have found once it gets into a bad state is to physically unplug the device and plug it back in. However, I don't always have physical access to the machine it is plugged into, so I'm looking for a way to do this through the command line. This post suggests running:

        $ sudo modprobe -w -r usb_storage; sudo modprobe usb_storage

    However, I get an "unknown option -w" error. This slightly modified command:

        $ sudo modprobe -r usb_storage

    fails with the message "FATAL: Module usb_storage is in use." If I try to kill -9 the processes marked [usb-storage] before running it, they refuse to die (I think because they are deeply tied to the kernel). Does anyone know of a way to do this? NOTE: I cross-posted this on Server Fault as I didn't know which site was more appropriate. I will delete and/or link whichever one is answered first.
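
    One approach worth sketching (not from the post; the bus ID 1-4 is an assumption, found by listing /sys/bus/usb/devices or running lsusb -t): unbinding and rebinding the device from the USB core simulates a replug without removing the usb_storage module.

        # find the device's bus ID, e.g. 1-4
        ls /sys/bus/usb/devices/
        # "unplug" and "replug" it (as root)
        echo '1-4' > /sys/bus/usb/drivers/usb/unbind
        sleep 2
        echo '1-4' > /sys/bus/usb/drivers/usb/bind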


  • Unable to open websites that use HTTPS on linux

    - by negai
    I have the following network configuration: my PC (192.168.1.20/24) uses 192.168.1.1/24 as its gateway, and a D-Link 2760U router with local address 192.168.1.1/24 has a VPN connection open to the provider using PPTP. Whenever I try to open certain websites that require authorization (e.g. gmail.com, coursera.org), I get a request timeout. The problem shows up mostly on Linux (Ubuntu 12.04 and Debian 6.0), while most of these websites work correctly on Windows XP. Could you please help me diagnose the problem? Could it be related to NAT + HTTPS? Thanks
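
    A hedged diagnostic sketch (the MTU idea is an assumption, not something established in the post): PPTP tunnels commonly break HTTPS when large handshake packets exceed the tunnel MTU, so it is worth watching where the handshake stalls and whether a smaller MTU helps.

        # watch where the TLS handshake stalls
        curl -v --connect-timeout 10 https://mail.google.com/
        openssl s_client -connect mail.google.com:443
        # temporarily lower the MTU to test (eth0 is an assumption)
        sudo ip link set dev eth0 mtu 1400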


  • java memory allocation under linux

    - by pstanton
    I'm running 4 Java processes with the following command:

        java -Xmx256m -jar ...

    and the system has 8GB of memory under Fedora 12. However, it is apparently going into swap. How can that be, if 4 x 256m = 1GB?

    EDIT: Also, how can all 8GB of memory be used with so little memory allocated to basically the only thing running? Is Java not garbage collecting because the OS tells it it doesn't need to, or what?

    TOP:

        top - 20:13:57 up 3:55, 6 users, load average: 1.99, 2.54, 2.67
        Tasks: 251 total, 6 running, 245 sleeping, 0 stopped, 0 zombie
        Cpu(s): 50.1%us, 2.9%sy, 0.0%ni, 45.1%id, 1.1%wa, 0.0%hi, 0.8%si, 0.0%st
        Mem:  8252304k total, 8195552k used, 56752k free, 34356k buffers
        Swap: 10354680k total, 74044k used, 10280636k free, 6624148k cached

          PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
         1948 xxxxxxxx 20  0 1624m 240m 4020 S 96.8  3.0 164:33.75 java
         1927 xxxxxxxx 20  0  139m  31m  27m R 91.8  0.4  38:34.55 postgres
         1929 xxxxxxxx 20  0 1624m 200m 3984 S 86.2  2.5 183:24.88 java
         1969 xxxxxxxx 20  0 1624m 292m 3984 S 65.6  3.6 154:06.76 java
         1987 xxxxxxxx 20  0  137m  29m  27m R 28.5  0.4  75:49.82 postgres
         1581 root     20  0  159m  18m 4712 S 22.5  0.2  52:42.54 Xorg
         2411 xxxxxxxx 20  0  309m 9748 4544 S 20.9  0.1  45:05.08 gnome-system-mo
         1947 xxxxxxxx 20  0  137m  28m  27m S 13.3  0.4  44:46.04 postgres
         1772 xxxxxxxx 20  0  135m  25m  25m S  4.0  0.3   1:09.14 postgres
         1966 xxxxxxxx 20  0  137m  29m  27m S  3.0  0.4  64:27.09 postgres
         1773 xxxxxxxx 20  0  135m  732  624 S  1.0  0.0   0:24.86 postgres
         2464 xxxxxxxx 20  0 15028 1156  744 R  0.7  0.0   0:49.14 top
          344 root     15 -5     0    0    0 S  0.3  0.0   0:02.26 kdmflush
            1 root     20  0  4124  620  524 S  0.0  0.0   0:00.88 init
            2 root     15 -5     0    0    0 S  0.0  0.0   0:00.00 kthreadd
            3 root     RT -5     0    0    0 S  0.0  0.0   0:00.00 migration/0
            4 root     15 -5     0    0    0 S  0.0  0.0   0:00.04 ksoftirqd/0
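
    A note on the numbers, sketched from the output above: -Xmx caps only the Java heap, while VIRT for each java process is about 1624m (thread stacks, JIT code, and native allocations all count), and most of the 8GB "used" is the 6.6GB page cache, which the kernel reclaims on demand. To separate the pieces (PID 1948 is an example taken from the top output, and jstat assumes a JDK is installed):

        # resident vs virtual size per process; RES is what competes for RAM
        ps -o pid,rss,vsz,cmd -C java
        # heap usage inside one JVM
        jstat -gc 1948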


  • Run a shell script using cron

    - by Blanca
    Hi! I have this FeedIndexer.sh:

        #!/bin/sh
        java -jar FeedIndexer.jar

    It just runs FeedIndexer.jar, which is in the same directory as the .sh. I would like to run it using crontab, so I did this in /etc/crontab:

        # /etc/crontab: system-wide crontab
        # Unlike any other crontab you don't have to run the `crontab'
        # command to install the new version when you edit this file
        # and files in /etc/cron.d. These files also have username fields,
        # that none of the other crontabs do.

        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 * * * *  root  cd / && run-parts --report /etc/cron.hourly
        25 6 * * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
        47 6 * * 7  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
        52 6 1 * *  root  test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
        01 01 * * * root  run-parts --report /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh
        #

    But I don't know how to run it. Have I made any mistake? Thank you!
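
    One likely culprit, sketched as an assumption: run-parts expects a directory, not a single script, and FeedIndexer.jar is referenced relative to the working directory. A crontab line that calls the script directly, from its own directory, avoids both problems:

        # make the script executable once
        chmod +x /home/slosada/workspace/FeedIndexer/target/FeedIndexer.sh
        # /etc/crontab entry: cd to the jar's directory, then run the script
        01 01 * * * root cd /home/slosada/workspace/FeedIndexer/target && ./FeedIndexer.sh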


  • Nginx return 444 depending on upstream response code

    - by Mark
    I have nginx set up to pass requests to an upstream using proxy_pass. The upstream is written to return a 502 HTTP response on certain requests; rather than returning the 502 with all the headers, I would like nginx to recognise this and return 444 so nothing is returned. Is this possible? I also tried to return 444 on any 50x error, but it doesn't work either.

        location / {
            return 444;
        }

        location ^~ /service/v1/ {
            proxy_pass http://127.0.0.1:3333;
            proxy_next_upstream error timeout http_502;
            error_page 500 502 503 504 /50x.html;
        }

        location = /50x.html {
            return 444;
        }

        error_page 404 /404.html;

        location = /404.html {
            return 444;
        }
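
    One possibility to sketch (an assumption, not confirmed in the post): nginx only routes upstream responses through error_page when proxy_intercept_errors is enabled, so the backend's 502 may be passing straight through.

        location ^~ /service/v1/ {
            proxy_pass http://127.0.0.1:3333;
            proxy_intercept_errors on;
            error_page 500 502 503 504 /50x.html;
        }

        location = /50x.html {
            return 444;
        }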


  • SFTP not working, but SSH is

    - by Dan
    I've had a server running CentOS for a few months now. A few days ago, I stopped being able to connect to it over SFTP. I've tried from multiple computers, OSes, clients, and internet connections. I can SSH in just fine, though. For example, Nautilus gives me this:

        Error: DBus error org.freedesktop.DBus.Error.NoReply: Did not receive a reply.
        Possible causes include: the remote application did not send a reply, the message
        bus security policy blocked the reply, the reply timeout expired, or the network
        connection was broken.
        Please select another viewer and try again.

    I was under the impression that SFTP was just pure SSH, and if one worked the other would too, and vice versa. Clearly that's not the case, though. What could I have done wrong?
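
    A sketch of where to look (paths assume a stock CentOS OpenSSH layout): SFTP runs as an SSH subsystem, so SSH can keep working while the subsystem itself is broken or misconfigured.

        # verbose client output usually names the failing step
        sftp -vvv user@server
        # on the server, confirm the subsystem points at a binary that exists
        grep -i '^Subsystem' /etc/ssh/sshd_config
        ls -l /usr/libexec/openssh/sftp-server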


  • Amazon EC2 - Free memory

    - by Damo
    We have an Amazon EC2 small instance running, and over the past few days we noticed that its memory keeps going down. On the small instance we are running Apache and Tomcat 6. Tomcat is started with the following JVM parameters:

        -Xms32m -Xmx128m -XX:PermSize=128m -XX:MaxPermSize=256m

    We use Nagios to monitor things like pending updates, free disk space, and memory. Everything else is behaving as expected, but our memory is going down all the time. Our app receives approximately half a million hits a day. When I shut down Apache and Tomcat and ran free -m, we had only 594MB of memory free out of the 1.7GB. Not much else is running on the small instance, and when running the top command I cannot see where the memory is going. The app we run on Tomcat is a Grails webapp. Could there be a memory leak within our application? I read online that a small Amazon instance is fine for running Apache and Tomcat. I found a few posts that showed how to set up Apache and Tomcat to limit memory usage, and I have already performed those steps; the memory is not used up as quickly, but it still decreases over time. We have other Amazon EC2 small instances running Grails apps and the memory is fairly stable on those nodes, but they would not be receiving as much traffic. Just to add, when I run the top command on the problem server, I cannot see where all the memory is being used. Any help with this is greatly appreciated. The output of free -m on my server is as follows:

                     total       used       free     shared    buffers     cached
        Mem:          1657       1380        277          0        158        773
        -/+ buffers/cache:        447       1209
        Swap:          895          0        895

    In your opinion, does this look OK? At what stage would the OS give back memory? Would it wait until free memory reaches 0%, or is this OS-dependent?
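
    A short reading of that output (standard free -m semantics, not something from the post): the "-/+ buffers/cache" row shows 1209MB effectively free, because buffers and cache are reclaimed on demand; Linux deliberately keeps "free" memory low. To see which processes actually hold memory:

        ps aux --sort=-rss | head -n 10
        cat /proc/meminfo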


  • Can't get 1440x900 resolution with GRUB2 although vbeinfo says it's available

    - by TomSW
    I'm trying to use GRUB2 in graphical mode at 1440x900, but the result is always garbled nonsense: the highest resolution I can get is 1280x800. Word from googling is that as long as vbeinfo lists a resolution, GRUB2 can use it. This doesn't seem to be true: vbeinfo says that 1440x900 is available, but it doesn't work. Testing it from the GRUB2 command line:

        set gfxmode=1440x900
        terminal_output gfxterm    # -> garbled nonsense
        # back to trusty 640x480
        terminal_output console

    The graphics card is an Intel GM965. Once Linux boots, the framebuffer switches to 1440x900.

    Added after ephemient's reply and various experiments: vbeinfo lists two sets of modes. The first set runs from 0x160 to 0x16b, with resolutions 768x480, 960x600, 1280x800 and 1440x900. Then, after a bunch of text-only modes, comes the second set, containing resolutions 1024x768, 800x600, and 640x480. The first set of modes isn't altered by 915resolution; they all work except 1440x900. The resolution of modes in the second set can be altered using the 915resolution module/command available in GRUB2 >= 1.99:

        # in /boot/grub/grub.cfg
        insmod 915resolution
        # 30, 32, 34 all work for me: all that varies is which modes are altered
        915resolution 30 1440 900
        # setting an impossible resolution changes the mode to "text-only"
        # in my case 1280x1024 is not supported
        915resolution 30 1280 1024

    Clearly, 1440x900 should just work: adding it with 915resolution is just a workaround.
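
    For making a working mode persist across boots, a sketch (assumes Ubuntu's grub-pc packaging; the mode value is whatever vbeinfo actually supports, 1280x800 here as an example):

        # /etc/default/grub
        GRUB_GFXMODE=1280x800
        # then regenerate grub.cfg
        sudo update-grub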


  • Transferring files from ftp to local system

    - by user1056221
    I want to copy a file from FTP and paste it to my local system, and I want to run this through a batch file. I have been trying for a week but couldn't find the solution. Can anyone help me, please? This is my actual task: copy a file named "Friday.bat" from ftp://172.16.3.132 (with username and password). So I use the batch file below:

        @echo off
        @ftp -i -s:"%~f0"&GOTO:EOF
        open 172.16.3.132
        mmftp
        ((((pasword entered here)))))
        binary
        get Friday.bat
        pause

    Result:

        ftp> @echo off
        ftp> @ftp -i -s:"%~f0"&GOTO:EOF
        Invalid command.
        ftp> open 172.16.3.132
        Connected to 172.16.3.132.
        220 Welcome to ABL FTP service.
        User (172.16.3.132:(none)):
        331 Please specify the password.
        230 Login successful.
        ftp> binary
        200 Switching to Binary mode.
        ftp> get Friday.bat
        200 PORT command successful. Consider using PASV.
        550 Failed to open file.
        ftp> pause

    Finally, a file named Friday.bat is copied to my local system with 0 bytes, and it will not open.
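
    A sketch of a cleaner variant (the remote directory /outgoing and the password are assumptions; the 550 suggests Friday.bat is not in the FTP login directory): keep the ftp commands in their own file so batch syntax is never fed to ftp, and cd to wherever the file actually lives.

        @echo off
        rem build the ftp command script
        > ftpcmds.txt (
          echo open 172.16.3.132
          echo user mmftp yourpassword
          echo binary
          echo cd /outgoing
          echo get Friday.bat
          echo bye
        )
        ftp -i -n -s:ftpcmds.txt
        del ftpcmds.txt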


  • "The network path was not found" - shortening the delay before Windows tries again

    - by Harry Johnston
    If I try to connect (over Windows file sharing) to a machine that has gone to sleep, I get a timeout followed by "The network path was not found". If I then wake the machine and try again, I still get "The network path was not found" because the connection failure has been cached. If I wait a while (about 30 seconds?) and then try again I can connect successfully. I understand this behaviour. My question is: is there any way to shorten the delay before I can try the connection again?


  • Apache HTTP Not Working When SSL Enabled

    - by dominic7il
    I've got a very bizarre problem: after enabling SSL support in Apache, I'm only able to access my site via SSL and not through HTTP as well. I can confirm that Apache is definitely listening on both ports 80 and 443 (according to netstat). Additionally, the Apache access logs are showing the requests; it's just that going in through HTTP results in a timeout, and I'm never actually able to reach the content. Like I said, going through HTTPS works. Here is my httpd.conf: http://pastebin.com/kG2dPjJ2 and here is my httpd-ssl.conf: http://pastebin.com/thqvjgGJ. Can anyone spot any issues with those configurations, or have any suggestions at all? I've searched and searched, but there appear to be very few people who have experienced the same. It's also worth mentioning that I did a comparison between those configurations and those of a working setup, and I couldn't spot anything.
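
    A couple of quick checks, sketched (nothing here is established by the post):

        # confirm a VirtualHost actually answers on port 80
        apachectl -S
        # watch the response from the server itself, bypassing any external firewall
        curl -v --max-time 10 http://localhost/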


  • How To Completely Move Users/Program Files/Program Files (x86)/ProgramData (Folders) To Another Partition(s) On Windows 8?

    - by Enigma83
    I am attempting to move the folders Users, Program Files, Program Files (x86), and ProgramData (at the root of the C: drive) to at least two other partitions, preferably on a fresh install. I have read that there are methods for doing this post-install, but it seems like it would be a bit more tedious that way. I want to move the two Program Files folders to another partition on the same HDD, and Users/ProgramData will go to yet another partition on the same HDD.

    I have done a bit of research on this: booting into Audit Mode, using the RoboCopy command to copy folders after booting from my Windows 8 USB drive, creating NTFS junctions/symbolic links, Registry edits, as well as accomplishing this automatically by creating an autounattend file which Windows Setup processes before the user is ever booted in for the first time. I tried this morning and now have a basic installation in which programs like Internet Explorer fail to open and certain files can't be found/opened (even if I click on them directly), Regedit being an example. Also, I can't run the Command Prompt as Administrator (or as any other user), can't activate the real Administrator account, and can't open any of the Administrative Tools (despite having added them to my Start Screen). So far I have only tried RoboCopy-ing Program Files and Program Files (x86), creating junction points for them, and editing the Registry in the relevant locations. This is what I'm left with now. I also found a blog article which describes how to do this for Windows 7.

    So, where should I go from here, and where can I find more information? And how can this be done without disabling the Metro apps, which I've read will stop working if you move ProgramData? Once I have everything moved, where do I install programs to? Do I tell them to install to C:\Program Files / C:\Program Files (x86), or to the junctioned/symbolic-linked partition/drive? I plan to test in VMware virtual machines from here on until things are working correctly, while using a baseline default install for daily tasks.
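
    For reference, the junction technique the post describes usually looks something like this sketch (paths are examples; doing this to live system folders is exactly what can break an install, so it is normally done offline from setup/WinPE):

        robocopy "C:\Users" "D:\Users" /E /COPYALL /XJ
        rmdir "C:\Users" /S /Q
        mklink /J "C:\Users" "D:\Users"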


  • How to "open" existing VMs in Hyper-V without importing them?

    - by Borek
    I had a PC with two physical disks: C:, containing the host operating system, and D:, containing a folder D:\VMs where all my virtual machines were stored. Then the C: disk died. I bought a new one, reinstalled Windows on it, enabled the Hyper-V feature, and now I just need to open the VMs from the D:\VMs folder. However, I don't seem to be able to find a menu item or anything that would allow me to do that. The only thing I see is the "import" command, which unfortunately requires the VMs to be explicitly exported (mine weren't). I firmly believe that when I have all the files constituting a VM (the VHD file, some XML files describing the settings, etc.) it must somehow be possible to just "open" these existing VMs in Hyper-V, right? What command am I missing? Edit: I know I can create blank virtual machines and then just point them at the existing VHDs. However, I am not sure about all the settings I've made to those VMs, so I hope there's a way to simply open the existing VMs instead of recreating them.


  • Subversion: Secure connection truncated

    - by Nick
    Hi, I'm trying to set up a Subversion server with Apache2/WebDAV access. I've created the repository and configured Apache according to the official book, and I can see the repository in a web browser. The browser shows:

        conf/ db/ hooks/ locks/

    although clicking any of those links gives an empty XML document like:

        <D:error>
          <C:error/>
          <m:human-readable errcode="2">
            Could not open the requested SVN filesystem
          </m:human-readable>
        </D:error>

    I've never used Subversion before, so I assume this is correct? Anyway, when I try to connect via a command-line client, it asks for my password, I give it, and then I get the (useless) error message:

        svn: OPTIONS of 'https://svn.mysite.com': Could not read status line: Secure connection truncated (https://svn.mysite.com)

    The command I'm using is:

        svn checkout https://svn.mysite.com/ svn.mysite.com

    Subversion was installed using Ubuntu's package manager; it's version 1.6.6 on Ubuntu 10.04. My VirtualHost configuration:

        <VirtualHost 123.123.12.12:443>
            ServerAdmin [email protected]
            ServerName svn.mysite.com

            <Location />
                DAV svn
                SVNParentPath /var/svn/repos
                SVNListParentPath On
                AuthType Basic
                AuthName "Subversion Repository"
                AuthUserFile /etc/subversion/passwd
                Require valid-user
            </Location>

            # Setup The SSL Certificate Paths
            SSLEngine On
            SSLCertificateFile /etc/ssl/certs/mysite.com.crt
            SSLCertificateKeyFile /etc/ssl/private/dmysite.com.key
        </VirtualHost>
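
    A hedged reading (not confirmed in the post): seeing conf/, db/ and hooks/ in the browser means the URL is showing a repository's on-disk layout, which suggests SVNParentPath points at a single repository rather than at a directory containing repositories. With SVNParentPath, each repository is addressed one level below the root, and the Apache user needs access to it. A sketch (the repository name myrepo and the www-data user are assumptions):

        sudo svnadmin create /var/svn/repos/myrepo
        sudo chown -R www-data:www-data /var/svn/repos
        svn checkout https://svn.mysite.com/myrepo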


  • Which rdisk value in boot.ini maps to which disk?

    - by MA1
    Following are the contents of a sample boot.ini:

        [boot loader]
        timeout=30
        default=multi(0)disk(0)rdisk(0)partition(1)\WINDOWS
        [operating systems]
        multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP Professional" /NOEXECUTE=OPTIN /FASTDETECT
        multi(0)disk(0)rdisk(0)partition(2)\WINNT="Windows 2000 Professional" /fastdetect
        multi(0)disk(0)rdisk(1)partition(1)\WINDOWS="Microsoft Windows XP Home Edition" /NOEXECUTE=OPTIN /FASTDETECT

    The rdisk value tells the physical disk number. So, if I have three hard disks, say /dev/sda, /dev/sdb and /dev/sdc, how do I know which disk (/dev/sda, /dev/sdb or /dev/sdc) is rdisk(0), which is rdisk(1), and so on?


  • "ssh root@server" hangs indefinitely long

    - by Thibaut
    Hi, sometimes my SSH client will take forever to log in. This happens when the server is not responding (overloaded, killed processes, ...). My automated scripts will then fail because the ssh process never exits. Is there an ssh configuration value to set a timeout, so that the client fails if it can't log in after a predefined number of seconds? I know there are knobs on the server side, but I have to set this on the client side, as the sshd process is not responding, or responding incorrectly. Thanks!
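
    OpenSSH does have client-side knobs for this; a sketch (the option values are examples):

        # fail if the TCP connection can't be established within 10 seconds,
        # and give up if the server stops answering for 2 x 15 seconds mid-session
        ssh -o ConnectTimeout=10 \
            -o ServerAliveInterval=15 \
            -o ServerAliveCountMax=2 \
            root@server

    ConnectTimeout only covers connection setup; the ServerAlive options catch a server that accepts the connection but then stops responding. The same options can live in ~/.ssh/config.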


  • Vmware peaks NFS load every 30 seconds

    - by gtirloni
    We were troubleshooting a performance problem on one of our storage servers, and after investigating almost everything in sight we saw that every 30 seconds VMware would go from 10k IOPS (NFS) to 30k, 50k, 100k, or whatever the server would handle. Most of it was reads. What could cause this rise in NFS operations per second every 30 seconds? The virtual machines are managed by external customers and there isn't much in common between them. When breaking utilization down by filename, we discovered 5-10 virtual machines that contributed more to those peaks, but that still doesn't explain why it happens every 30 seconds. There are no other peaks outside that 30-second period (i.e. it stays at an almost constant average). Is there an NFS tweak in VMware to change that 30-second period? If it's really necessary, we would like to introduce some variation so the workload isn't dropped all at once. It's causing NFS timeouts on the ESX 3.5/4.0 hosts when the storage gets overloaded.


  • Running nph-script.cgi keeps outputting Server details at the end

    - by wgewweg
    I am running an nph-script.cgi on my server. The server keeps adding:

        HTTP/1.1 200 OK
        Date: Thu, 05 Nov 2009 02:28:53 GMT
        Server: Apache/2.2.8 (Ubuntu) PHP/5.2.8-1hardy~ppa1 with Suhosin-Patch mod_perl/2.0.3 Perl/v5.8.8
        Content-Length: 0
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/plain
        X-Pad: avoid browser bug

    at the bottom of each page loaded via the .cgi script. Why is this the case? How do I remove this annoying message that is appended to all pages?


  • Apache timeouts and error log

    - by BlackFire27
    I get an error whenever I run my code for a certain time. I use a lot of loops and SQL connections; I basically put links into and take links out of my database. The problem is that some error is thrown that I can't see whenever I execute a long SQL operation. Note that the fault isn't the code: the code runs well when there are a few links involved, but when there are over 200 links an error is thrown that I can't see. I tried to trace the error in a few places:

        C:\Program Files\Zend\ZendServer\logs\php_error.log
        C:\Program Files\Zend\phpMyAdmin\config.inc.php
        Event Viewer in Windows XP

    I am running Windows XP; PHP version 5.3.9-ZS5.6.0; Apache/2.2.21 (Win32) mod_ssl/2.2.21 OpenSSL/0.9.8o. I can't trace the error at all, or why it happens. All I can suspect is that there is a server timeout.
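
    If it is a timeout or a suppressed fatal error, a sketch of php.ini settings that would make it visible (the path and values are examples, not taken from the post):

        ; log every error instead of discarding it
        log_errors = On
        error_log = "C:\Program Files\Zend\ZendServer\logs\php_error.log"
        error_reporting = E_ALL
        ; long loops plus SQL work can exceed the default 30s limit
        max_execution_time = 300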


  • Auto-focus xdvi after running viewdvi in Emacs with AUCTeX.

    - by D Connors
    I've been using Emacs with AUCTeX to edit my LaTeX documents for a few days now, but there's something that's really bugging me. As it should, whenever I do C-c C-c RET it compiles the file, and if I repeat the command it views the output in xdvi. It's also set to the minor mode TeX-source-specials-mode, so instead of opening a new window in xdvi it reloads the window that's already open, brings it to the front, and sends me to wherever the pointer was in Emacs (forward search). Now here's the problem: even though the xdvi window is brought to the front, it's not focused. Instead, the Emacs window stays focused (and that's where any keyboard input goes). And I keep forgetting that, which leads me to accidentally edit the source file while trying to navigate in xdvi. Not to mention I'm forced to alt-tab to focus xdvi, and alt-tab twice if I just want to get back to Emacs. Is there a way around this problem? I just want xdvi to be focused whenever I run the view command from Emacs.
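
    One workaround to sketch (assumes the wmctrl utility is installed and the window manager honours activation requests): explicitly activate the xdvi window after viewing.

        # activate the first window whose title contains "xdvi"
        wmctrl -a xdvi

    Hooking that command into the view step on the Emacs side would make the focus switch automatic.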


  • Remote connect to mysql server?

    - by LF4
    I've been trying to figure out why I keep getting this error when I try to connect to the MySQL server:

        $~ mysql -u username -h SQLserver -p
        Enter password:
        ERROR 1045 (28000): Access denied for user 'username'@'myIP' (using password: YES)

    I've done the following:

    1. The port is open in the firewall; otherwise I wouldn't get this error, it would just time out.
    2. The MySQL server is not running with skip-networking or bind-address.
    3. username has host '%', and I can connect locally, so the password is correct.
    4. GRANT USAGE ON *.* TO username@% IDENTIFIED BY 'password'; FLUSH PRIVILEGES;

    I wanted to know if anyone has ideas or has run into this issue before and solved it?

        mysql> select user, host from mysql.user where user='username';
        +----------+------+
        | user     | host |
        +----------+------+
        | username | %    |
        +----------+------+
        1 row in set (0.00 sec)

        mysql> show grants for 'username';
        +------------------------------------------------------------------------------------------------------+
        | Grants for username@%                                                                                |
        +------------------------------------------------------------------------------------------------------+
        | GRANT ALL PRIVILEGES ON *.* TO 'username'@'%' IDENTIFIED BY PASSWORD '*F42AD03PASSWORDHASHADF4021C86B' |
        | GRANT ALL PRIVILEGES ON `DB2`.* TO 'username'@'%'                                                    |
        +------------------------------------------------------------------------------------------------------+
        2 rows in set (0.00 sec)
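
    One classic cause to sketch (an assumption, not verified against this setup): MySQL matches accounts by the most specific host first, so an anonymous-user row can shadow 'username'@'%' for some source addresses. Worth checking:

        -- rows with an empty user can take precedence over '%' entries
        SELECT user, host FROM mysql.user ORDER BY host, user;
        -- if an offending anonymous entry exists (example host shown):
        DROP USER ''@'localhost';
        FLUSH PRIVILEGES;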


  • Previous Versions delayed with SBS 2008 / Windows 7

    - by indeed005
    The SBS 2008 at this site has Previous Versions enabled on a mapped drive, with snapshots taken three times a day. I doubt they take very long; the diff size is only a few GB. The problem is that users on Windows 7 cannot see their previous versions until a few hours later. Is there some indexing that has to happen before the Previous Versions are visible to the client machines? Edit: in the application log, 3 minutes after every scheduled backup, there is an event for VSS (EventID 8224) saying "The VSS service is shutting down due to idle timeout". Apparently this means it has finished successfully, even though modified files still do not show another version.
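
    To confirm when the snapshots actually exist on the server, a sketch (D: is an example volume):

        vssadmin list shadows /for=D:

    Comparing the shadow copy creation timestamps with when clients first see them would show whether the delay is in snapshot creation or only in what the clients display.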


  • Resolving Domainnames differently for different services

    - by mlaug
    Some time ago we had an issue with our network infrastructure and PHP with curl. Our network infrastructure is fairly simple: LoadBalancer/Firewall = 5 servers. The domain name of our website points to the IP of the load balancer, of course. But calling curl from one of the servers resulted in a timeout: it appears a server could not call the domain it is itself serving. So we had to point the domain at the server itself via /etc/hosts. But now we have implemented a Varnish in front of the load balancer, which we want to purge automatically once a page changes. So now we need to call www.example.com/url_to_purge. Sadly, this call would be resolved to the server itself instead of the Varnish, because of the /etc/hosts entries. So now I am wondering: can you resolve domain names differently for different services? :)
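
    One way to sidestep the resolver entirely, sketched (the Varnish address 10.0.0.5 and the PURGE method are assumptions that depend on the Varnish config): let the purge call pin the hostname to a specific IP.

        # send the purge request to Varnish directly, keeping the Host header correct
        curl --resolve www.example.com:80:10.0.0.5 -X PURGE http://www.example.com/url_to_purge

    With this, /etc/hosts can keep pointing the domain at the local server for everything else.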


  • What Logs / Process Stats to monitor on a Ubuntu FTP server?

    - by Adam Salkin
    I am administering a server running Ubuntu Server with Pure-FTPd. So far all is well, but I would like to know what I should be monitoring so that I can spot any potential stability and security issues. I'm not looking for sophisticated software, more an idea of which logs and process statistics are most useful for checking on the health of the system. I'm thinking that I can look at various parameters output from the "ps" command and compare over time to spot things like memory leaks, but I would like to know what experienced admins do. Also, how do I run a disk check so that when I reboot I don't get a message like "disk not checked for x days, forcing check", which delays the reboot? I assume there is a command I can run as a cron job late at night. How often should it be run? What should I be looking at to spot intrusion attempts? The only shell access is SSH on a non-standard port through the UFW firewall, and I regularly grep auth.log for "Fail" or "Invalid". Is there anything else I should look at? I was logging the firewall (UFW), but I have very few open ports (FTP and SSH on a non-standard port), so looking at lists of blocked IPs did not seem useful. Many thanks.
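
    On the disk-check question specifically, a sketch for ext filesystems (the device name is an example): the "forcing check" message comes from the mount-count/interval thresholds, which can be inspected and tuned with tune2fs.

        # see the current thresholds and mount count
        sudo tune2fs -l /dev/sda1 | grep -iE 'mount count|check'
        # check every 50 mounts or 30 days, whichever comes first
        sudo tune2fs -c 50 -i 30d /dev/sda1
        # or schedule a one-off check at the next reboot
        sudo touch /forcefsck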

