Search Results

Search found 7629 results on 306 pages for 'tum bin'.

  • ssh - "Connection closed by xxx.xxx.xxx.xxx" - using password

    - by Michael B
    I attempted to create a new user account that I wish to use to log in via ssh. I did this (in CentOS):

        /usr/sbin/adduser -d /home/testaccount -s /bin/bash user
        passwd testaccount

    This is the error I receive when trying to log in via ssh:

        ~/.ssh$ ssh -v testaccount@xxx.xxx.xxx.xxx
        OpenSSH_5.1p1 Debian-5ubuntu1, OpenSSL 0.9.8g 19 Oct 2007
        debug1: Reading configuration data /etc/ssh/ssh_config
        debug1: Applying options for *
        debug1: Connecting to xxx.xxx.xxx [xxx.xxx.xxx.xxx] port 22.
        debug1: Connection established.
        debug1: identity file /home/user/.ssh/identity type -1
        debug1: identity file /home/user/.ssh/id_rsa type 1
        debug1: Checking blacklist file /usr/share/ssh/blacklist.RSA-2048
        debug1: Checking blacklist file /etc/ssh/blacklist.RSA-2048
        debug1: identity file /home/user/.ssh/id_dsa type -1
        debug1: Remote protocol version 2.0, remote software version OpenSSH_4.3
        debug1: match: OpenSSH_4.3 pat OpenSSH_4*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.1p1 Debian-5ubuntu1
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: server->client aes128-cbc hmac-md5 none
        debug1: kex: client->server aes128-cbc hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
        debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
        debug1: Host 'xxx.xxx.xxx.xxx' is known and matches the RSA host key.
        debug1: Found key in /home/user/.ssh/known_hosts:8
        debug1: ssh_rsa_verify: signature correct
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: SSH2_MSG_SERVICE_REQUEST sent
        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: gssapi-with-mic
        debug1: Unspecified GSS failure.  Minor code may provide more information
        No credentials cache found
        debug1: Unspecified GSS failure.  Minor code may provide more information
        No credentials cache found
        debug1: Unspecified GSS failure.  Minor code may provide more information
        debug1: Next authentication method: publickey
        debug1: Offering public key: /home/user/.ssh/id_rsa
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Trying private key: /home/user/.ssh/identity
        debug1: Trying private key: /home/user/.ssh/id_dsa
        debug1: Next authentication method: password
        testaccount@xxx's password:
        Connection closed by xxx.xxx.xxx.xxx

    The "connection closed" message appeared immediately after entering the password (if I enter the wrong password it waits and then prompts for another password). I am able to log in from the same computer using other accounts that were set up previously. When logged into the remote machine I am able to do 'su testaccount'. Thanks for your time.
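
    A minimal server-side check for this kind of immediate disconnect, assuming root access on the CentOS host (the log path is the CentOS default; adjust if syslog is configured differently):

        # watch sshd's log while reproducing the failed login
        tail -f /var/log/secure

        # confirm the account's shell and home directory actually exist
        getent passwd testaccount
        ls -ld /home/testaccount
        grep /bin/bash /etc/shells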

  • Problem with Ubuntu 10.10 running from USB drive

    - by Surjya Narayana Padhi
    I recently downloaded Ubuntu 10.10, created a USB drive with it, and started running Ubuntu from that USB drive. But I am facing so many problems, and I keep wondering why it isn't as easy as Windows to get my work done; I always get some error message or have to install something. This time I am getting the following errors. I am trying to download and install Aircrack-ng, so I used the command:

        sudo apt-get install aircrack-ng

    But the installation stops with the following error:

        update-initramfs: deferring update (trigger activated)
        cp: cannot stat `/vmlinuz': No such file or directory
        dpkg: error processing bcmwl-kernel-source (--configure):
         subprocess installed post-installation script returned error exit status 1
        Errors were encountered while processing:
         initramfs-tools
         bcmwl-kernel-source
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I don't even have the aptitude command installed yet. Are all these errors because I am running Ubuntu from a USB drive? Is there any simple and easy way to go to Ubuntu Software Center, download all the required essentials in one shot, and then get Aircrack-ng? I could not find Aircrack-ng in Ubuntu Software Center. Can anybody give me detailed steps to solve all the problems above? I am frustrated with searching for updates and installations, and with some things working while others do not. Can anybody suggest how I should proceed after installing Ubuntu to run from a USB drive, so that I can use the OS like Windows? Software downloads, wireless drivers, sound, video, documents, C:, D: -- all those things should be there. Please, somebody help.
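
    One workaround that is often suggested when this bcmwl-kernel-source failure happens in a live-USB session is to recreate the /vmlinuz symlink that the post-install script expects; treat this as a hedged sketch, since the kernel path below is an assumption and varies between images:

        # point /vmlinuz at the live session's kernel, then retry the install
        sudo ln -s /cdrom/casper/vmlinuz /vmlinuz
        sudo apt-get install -f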

  • why won't php 5.3.3 compile libphp5.so on redhat ent

    - by spatel
    I'm trying to upgrade to PHP 5.3.3 from PHP 5.2.13. However, the Apache module, libphp5.so, will not be compiled. Below is the output I got, along with the configure options I used. The configure statement is a reduced version of what I normally use.

        './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ...

        *** Warning: inter-library dependencies are not known to be supported.
        *** All declared inter-library dependencies are being dropped.
        *** Warning: libtool could not satisfy all declared inter-library
        *** dependencies of module libphp5.  Therefore, libtool will create
        *** a static module, that should work as long as the dlopening
        *** application is linked with the -dlopen flag.
        copying selected object files to avoid basename conflicts...
        Generating phar.php
        Generating phar.phar
        PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
        clicommand.inc
        pharcommand.inc
        directorytreeiterator.inc
        directorygraphiterator.inc
        invertedregexiterator.inc
        phar.inc
        Build complete.
        Don't forget to run 'make test'.

    PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!!
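
    One commonly reported fix for libphp5.so failing to build on 64-bit Red Hat systems is telling configure where the 64-bit libraries live; treat this as a hedged suggestion, since it only applies if the box is x86_64:

        # a sketch: point PHP's configure at /usr/lib64 instead of /usr/lib
        ./configure --disable-debug --disable-rpath \
            --with-apxs2=/usr/local/apache2/bin/apxs \
            --with-libdir=lib64
        make clean && make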

  • Is it possible to use rsync over sftp (without an ssh shell) ?

    - by Tom Feiner
    Rsync over ssh works great every time. However, trying to rsync to a host which allows only sftp logins, but not ssh logins, produces the following error:

        $ rsync -av -e ssh /source user@remotehost:/target/
        protocol version mismatch -- is your shell clean?
        (see the rsync man page for an explanation)
        rsync error: protocol incompatibility (code 2) at compat.c(171) [sender=3.0.6]

    Here's the relevant section from the rsync man page:

        This message is usually caused by your startup scripts or remote shell
        facility producing unwanted garbage on the stream that rsync is using
        for its transport.  The way to diagnose this problem is to run your
        remote shell like this:

            ssh remotehost /bin/true > out.dat

        then look at out.dat.  If everything is working correctly then out.dat
        should be a zero length file.  If you are getting the above error from
        rsync then you will probably find that out.dat contains some text or
        data.  Look at the contents and try to work out what is producing it.
        The most common cause is incorrectly configured shell startup scripts
        (such as .cshrc or .profile) that contain output statements for
        non-interactive logins.

    Trying this on my system produced the following in out.dat:

        ssh-dummy-shell: Command not allowed.

    As I thought, the host is not allowing ssh logins. The following link shows that it is possible to accomplish this task using fuse with sshfs -- however it is extremely slow, and not fit for production use. Is there any chance of getting rsync over sftp to work?
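
    Since rsync needs a clean remote shell (or a remote rsync daemon), an sftp-only host can't speak the rsync protocol at all. A hedged alternative that stays on sftp is lftp's mirror mode, which does incremental-ish transfers by size and timestamp; host, user, and paths below are placeholders:

        # upload /source to the remote /target over sftp
        lftp -u user sftp://remotehost -e "mirror -R --only-newer /source /target; quit"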

  • How do I troubleshoot nginx not recognizing passenger?

    - by Jade
    Issue: nginx does not seem to recognize my Rails application.

    Symptoms: When the server starts up, it shows the "Welcome to nginx!" message instead of my Rails application. Nginx seems to be using the local nginx path instead of the Rails root I specified:

        2010/04/18 06:29:06 [error] 783#0: *1 "/usr/local/nginx/html/blog/index.html" is not found (2: No such file or directory), client: 1.2.3.4, server: www.farmerjade.com, request: "GET /blog/ HTTP/1.1", host: "www.farmerjade.com"

    I used the RVM and Passenger Setup on NGINX guide to install nginx and passenger on a virtual machine. Here is my nginx configuration:

        user farmerjade;
        worker_processes  1;
        ...
        http {
            include       mime.types;
            default_type  application/octet-stream;
            passenger_ruby /home/farmerjade/.rvm/bin/passenger_ruby;
            passenger_root /home/farmerjade/.rvm/gems/ree-1.8.7-head/gems/passenger-2.2.11;
            ...
            server {
                listen       80;
                server_name  www.farmerjade.com;
                root /home/farmerjade/farmerjade/public;
                passenger_enabled on;
                rails_env development;
                ...

    I'd appreciate any help anyone has to offer -- I'm quite new to nginx.
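
    A quick hedged check is whether the running nginx binary was actually built with the Passenger module and is reading this config at all; the binary path below is an assumption based on a default source install:

        # does the running binary know about Passenger at all?
        /usr/local/nginx/sbin/nginx -V 2>&1 | grep -io passenger

        # which config is it loading, and does it parse cleanly?
        /usr/local/nginx/sbin/nginx -t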

  • Rails 3 + Nginx + Passenger -- Routing index

    - by Bijan
    I have no index.html file in my public folder; my Rails routes file handles the root route, and it works fine when I run 'rails server' on my machine. Now I'm trying to deploy the app, and I have passenger and nginx running. When I run rails server on my local machine it works fine, but on the production server nginx just tries to serve a static file when I access the site. Here's my nginx conf:

        worker_processes  1;

        #pid        logs/nginx.pid;

        events {
            worker_connections  1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.2;
            passenger_ruby /usr/bin/ruby;

            include       mime.types;
            default_type  application/octet-stream;

            sendfile        on;
            keepalive_timeout  65;

            server {
                listen       80;
                server_name  mmjconsult.com;
                root         /www/mmjs/public;

                access_log   logs/host.access.log;

                passenger_enabled on;
            }
        }

    Thank you for any help. I really appreciate it.
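
    One hedged way to see whether Passenger is handling requests at all, rather than nginx falling back to static files, is to inspect the response headers; when Passenger serves a request it normally adds an X-Powered-By header:

        # expect something like "X-Powered-By: Phusion Passenger" on a Rails-served page
        curl -sI http://mmjconsult.com/ | grep -i -e '^server:' -e 'x-powered-by'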

  • Use preforker (ruby gem) with supervisor

    - by user1548832
    I also asked the same question on stackoverflow.com (http://stackoverflow.com/questions/13871169/use-preforkerruby-gem-with-supervisor), but superuser.com might be more help to me. Can anyone answer this?

    I want to run a server program using the preforker ruby gem with supervisor, but an error occurs. I wrote the following test program using preforker:

        #!/usr/bin/env ruby
        require 'rubygems'
        require 'preforker'

        Preforker.new(:app_name => 'test-preforker', :timeout => 60, :workers => 1) do |master|
          while master.wants_me_alive? do
            puts "hello"
            sleep 10
          end
        end.run

    And the following supervisor config:

        [program:test-preforker]
        command=/home/tkono/tmp/test-preforker.rb
        stdout_logfile_maxbytes=1MB
        stderr_logfile_maxbytes=1MB
        stdout_logfile=/var/log/%(program_name)s.log
        stderr_logfile=/var/log/%(program_name)s.log
        autorestart=true

    Then I reload supervisor:

        # supervisorctl reload
        Restarted supervisord

    Here is the log file of supervisor:

        2012-12-13 17:50:47,161 CRIT Supervisor running as root (no user in config file)
        2012-12-13 17:50:47,163 WARN Included extra file "/etc/supervisor.d/test-preforker.ini" during parsing
        2012-12-13 17:50:47,209 INFO RPC interface 'supervisor' initialized
        2012-12-13 17:50:47,213 CRIT Server 'unix_http_server' running without any HTTP authentication checking
        2012-12-13 17:50:47,215 INFO supervisord started with pid 12437
        2012-12-13 17:50:48,231 INFO spawned: 'test-preforker' with pid 12440
        2012-12-13 17:50:48,233 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:49,248 INFO spawned: 'test-preforker' with pid 12441
        2012-12-13 17:50:49,261 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:51,267 INFO spawned: 'test-preforker' with pid 12442
        2012-12-13 17:50:51,284 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:54,305 INFO spawned: 'test-preforker' with pid 12443
        2012-12-13 17:50:54,308 INFO exited: test-preforker (exit status 1; not expected)
        2012-12-13 17:50:55,311 INFO gave up: test-preforker entered FATAL state, too many start retries too quickly

    Please tell me what is wrong. Can a program using preforker not run under supervisor?

    preforker: https://github.com/dcadenas/preforker
    supervisor: http://supervisord.org/index.html
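
    A hedged first step is to reproduce the failure outside supervisor, exactly as supervisor launches it (no shell, no login environment), and capture the exit status and stderr; this assumes the script lives at the path in the config:

        # is the script executable, and does it have a usable shebang?
        ls -l /home/tkono/tmp/test-preforker.rb
        head -1 /home/tkono/tmp/test-preforker.rb

        # run it the way supervisor does and keep the error output
        /home/tkono/tmp/test-preforker.rb 2>/tmp/preforker.err; echo "exit=$?"
        cat /tmp/preforker.err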

  • Using sed to Download ComboFix automatically

    - by user901398
    I'm trying to write a shell script to grab the dynamic URL that ComboFix is located at on BleepingComputer.com/download/combofix. However, for some reason I can't seem to get my regex to match the "click here" download link that is offered in case the download doesn't start on its own. I used a regex tester and it said I matched the link, but when I execute the script it turns up an empty result. Here's my entire script:

        #!/bin/bash
        # Download latest ComboFix from BleepingComputer
        wget -O Listing.html "http://www.bleepingcomputer.com/download/combofix/" -nv

        downloadpage=$(sed -ne 's@^.*<a href="\(http://www[.]bleepingcomputer[.]com/download/combofix/dl/[0-9]\+/\)" class="goodurl">.*$@\1@p' Listing.html)
        echo "DL Page: $downloadpage"

        secondpage="$downloadpage"
        wget -O Download.html $secondpage -nv

        file=$(sed -ne 's@^.*<a href="\(http://download[.]bleepingcomputer[.]com/dl/[0-9A-Fa-f]\+/[0-9A-Fa-f]\+/windows/security/anti[-]virus/c/combofix/ComboFix[.]exe\)">.*$@\1@p' Download.html)
        echo "File: $file"

        wget -O "ComboFix.exe" "$file" -nv

        rm Listing.html
        rm Download.html

        mkdir Tools
        mv "ComboFix.exe" "Tools/ComboFix.exe" -f

    The first two downloads work successfully, and I end up with:

        http://www.bleepingcomputer.com/download/combofix/dl/12/

    But the final sed, which should give me the download link, fails to match. The code it's supposed to match is:

        <a href="http://download.bleepingcomputer.com/dl/6c497ccbaff8226ec84c97dcdfc3ce9a/5058d931/windows/security/anti-virus/c/combofix/ComboFix.exe">click here</a>
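
    A hedged alternative that sidesteps sed's line anchoring entirely is grep -o, which prints only the matching part of each line; this assumes the href appears verbatim somewhere in Download.html:

        # extract the first matching ComboFix URL, no ^...$ anchors required
        file=$(grep -oE 'http://download\.bleepingcomputer\.com/dl/[0-9A-Fa-f]+/[0-9A-Fa-f]+/windows/security/anti-virus/c/combofix/ComboFix\.exe' Download.html | head -n1)
        echo "File: $file"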

  • NGINX: How do I calculate an optimal no. of worker processes and worker connections?

    - by bodacious
    Our web app is running on a Linode 2048 server at the moment (~2048 MB of RAM). The MySQL database is on another Linode of its own, so this server is really only handling nginx and the Rails application. The application itself uses about 185,976 KB (~186 MB) of memory per instance (RSS). Our traffic is < 1000 visits per day and the pages are mostly cached, so there are few hits to the Rails app itself. My question is: how can I calculate optimal nginx config settings for my app? Below is the current config:

        worker_processes 1;

        # pid of nginx master process
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            access_log /var/log/nginx/access.log;
            error_log /var/log/nginx/error.log;

            passenger_root /home/user/.rvm/gems/ree-1.8.7-2011.01@URTV/gems/passenger-3.0.3;
            passenger_ruby /home/user/.rvm/rubies/ree-1.8.7-2011.01/bin/ruby;

            include mime.types;
            default_type application/octet-stream;

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;

            # gzip settings
            gzip on;
            gzip_http_version 1.0;
            gzip_comp_level 2;
            gzip_vary on;
            gzip_proxied any;
            gzip_types text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            # load extra modules from the vhosts directory
            include /opt/nginx/vhosts/*.conf;
        }

    Any advice would be appreciated! :)
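
    A rule of thumb that often comes up (treat it as a hedged starting point, not gospel): one worker process per CPU core, with a theoretical connection ceiling of worker_processes * worker_connections; the Rails side is bounded separately by how many ~186 MB Passenger instances fit in RAM. The headroom figure below is an assumption:

        # cores available to nginx
        nproc

        # rough ceiling on concurrent connections with 1 worker
        echo $(( 1 * 1024 ))

        # rough ceiling on Passenger instances: free RAM / per-instance RSS
        # e.g. leaving ~512 MB for nginx and the OS out of 2048 MB
        echo $(( (2048 - 512) / 186 ))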

  • Low-traffic WordPress website on Apache keeps crashing server

    - by OC2PS
    I have recently moved my low-to-moderate traffic website (1000 UAUs, 5000 pageviews on a busy day) from shared hosting to a CentOS 6 64-bit VPS with Apache and cPanel, running on 4 (likely oversold) processor cores and 3 GB of memory (Xen). We've had problems from the beginning: the server keeps crashing. It seems PHP keeps expanding till it consumes all the memory and crashes the server.

    Some folks have suggested that I should abandon Apache/cPanel/PHP/MySQL and go with nginx/Varnish/PHP-FPM/SQLite. But that's just not possible for me, as I am not very tech savvy and need a simple GUI like cPanel to be able to manage the mundane management tasks (I can't afford to hire a system administrator or get fully managed hosting).

    I have come across several posts discussing optimization of Apache for WordPress, but all of these lead to articles that are pretty dated, such as this ~4 year old one from Jan 2009: http://thethemefoundry.com/blog/optimize-apache-wordpress/. The article is pretty detailed and seems helpful, but I stumble even on the first step. My httpd.conf only has 2 LoadModule commands:

        LoadModule fastinclude_module modules/mod_fastinclude.so
        LoadModule bwlimited_module modules/mod_bwlimited.so

    So I go totally bust right there. Further, my httpd.conf says:

        Direct modifications to the Apache configuration file may be lost upon
        subsequent regeneration of the configuration file. To have modifications
        retained, all modifications must be checked into the configuration system
        by running:
            /usr/local/cpanel/bin/apache_conf_distiller

    I am having trouble finding where to change the modules in WHM. Please can someone help me with updated guidelines on how to optimize Apache for WordPress? Many thanks!

    P.S. The WordPress installation also has WP Super Cache installed.
    P.P.S. I also have phpBB, OpenCart, and Menalto Gallery installed.
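
    The usual first lever for memory-driven Apache crashes is capping prefork's MaxClients so worst-case Apache memory stays inside RAM; the numbers below are illustrative assumptions, not measurements from this server:

        # rough sizing: (total RAM - room for MySQL/OS) / average Apache+PHP process size
        # e.g. (3072 MB - 1024 MB) / ~60 MB per process  =>  ~34 processes
        echo $(( (3072 - 1024) / 60 ))

        # measure the real per-process size on the box before settling on a value
        ps -ylC httpd --sort=rss | awk 'NR>1 {sum+=$8; n++} END {if (n) print sum/n/1024 " MB avg"}'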

  • Is this a good starting point for iptables in Linux?

    - by sbrattla
    Hi, I'm new to iptables, and I've been trying to put together a firewall whose purpose is to protect a web server. The rules below are the ones I've put together so far, and I would like to hear whether the rules make sense, and whether I've left out anything essential. In addition to port 80, I also need to have port 3306 (MySQL) and 22 (SSH) open for external connections. Any feedback is highly appreciated!

        #!/bin/sh

        # Clear all existing rules.
        iptables -F

        # ACCEPT connections for loopback network connection, 127.0.0.1.
        iptables -A INPUT -i lo -j ACCEPT

        # ALLOW established traffic
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # DROP packets that are NEW but do not have the SYN bit set.
        iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

        # DROP fragmented packets, as there is no way to tell the source and destination ports of such a packet.
        iptables -A INPUT -f -j DROP

        # DROP packets with all tcp flags set (XMAS packets).
        iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP

        # DROP packets with no tcp flags set (NULL packets).
        iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP

        # ALLOW ssh traffic (and prevent against DoS attacks)
        iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT

        # ALLOW http traffic (and prevent against DoS attacks)
        iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT

        # ALLOW mysql traffic (and prevent against DoS attacks)
        iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT

        # DROP any other traffic.
        iptables -A INPUT -j DROP
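
    For what it's worth, a common refinement (a sketch, not a drop-in) is to set default-deny policies up front instead of relying on a trailing DROP rule, so the chain fails closed even if a later rule never gets added:

        # set default policies before adding ACCEPT rules
        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT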

  • Is there any method of backing up Google Drive files in some sort of versioning system?

    - by VictorKilo
    Backstory: My company is utilizing Google Drive for our shared files. Each user has their own Drive account. In addition, we have a corporate Drive account which holds documents which are shared to each user. Each folder is shared to different users depending on their permissions and positions in the company. Many users are able to add files and update folders within this shared Drive account. This is fine. What is not fine is when someone deletes something that they shouldn't. I have little to no way of knowing when a file is deleted wrongfully. Furthermore, anything that gets deleted goes into the trash bin of the file's creator, so I can't just restore it from the trash.

    Question: Is there any method of backing up Google Drive files in some sort of versioning system that would allow me to revert files back to defined points in time?

    What I have tried: I currently have this corporate Drive account synced up to my personal computer through the Google Drive application. Each night, I run a backup on the files using Windows "Backup and Restore." This allows me to at least get back files that are lost, but I'd like a cleaner method than this. It's very possible that I may not have the very latest version of a document on my computer when the utility runs.
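
    If a machine with rsync is an option for the nightly job, one hedged way to get point-in-time versions out of the synced folder is rsync's --backup-dir, which files away every changed or deleted item under a dated directory; all paths below are placeholders:

        # a sketch: mirror the synced Drive folder, archiving changes per day
        SRC="/path/to/Google Drive/"
        DST="/backups/drive"
        rsync -a --delete \
            --backup --backup-dir="$DST/versions/$(date +%F)" \
            "$SRC" "$DST/current/"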

  • What may be wrong with String::ToIdentifier::EN tests?

    - by wk01
    I am trying to install the Perl module String::ToIdentifier::EN (as a dependency of DBIx::Class::Schema::Loader), but it fails its tests. I googled those errors but got no picture of where the problem is:

        Building and testing String-ToIdentifier-EN-0.07
        cp lib/String/ToIdentifier/EN.pm blib/lib/String/ToIdentifier/EN.pm
        cp lib/String/ToIdentifier/EN/Unicode.pm blib/lib/String/ToIdentifier/EN/Unicode.pm
        Manifying blib/man3/String::ToIdentifier::EN.3pm
        Manifying blib/man3/String::ToIdentifier::EN::Unicode.3pm
        PERL_DL_NONLAZY=1 /usr/bin/perl "-MExtUtils::Command::MM" "-e" "test_harness(0, 'inc', 'blib/lib', 'blib/arch')" t/00_basic.t t/10_ascii.t t/20_capitalization.t
        Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167
        # Looks like you planned 25 tests but ran 4.
        # Looks like your test exited with 25 just after 4.
        t/00_basic.t ........... Dubious, test returned 25 (wstat 6400, 0x1900)
        Failed 21/25 subtests
        Byte order is not compatible at ../../lib/Storable.pm (autosplit into ../../lib/auto/Storable/_retrieve.al) line 380, at /home/wanradt/perl5/lib/perl5/Lingua/EN/Tagger.pm line 167
        # Looks like you planned 768 tests but ran 512.
        # Looks like your test exited with 25 just after 512.
        t/10_ascii.t ........... Dubious, test returned 25 (wstat 6400, 0x1900)
        Failed 256/768 subtests
        t/20_capitalization.t .. ok

        Test Summary Report
        -------------------
        t/00_basic.t (Wstat: 6400 Tests: 4 Failed: 0)
          Non-zero exit status: 25
          Parse errors: Bad plan.  You planned 25 tests but ran 4.
        t/10_ascii.t (Wstat: 6400 Tests: 512 Failed: 0)
          Non-zero exit status: 25
          Parse errors: Bad plan.  You planned 768 tests but ran 512.
        Files=3, Tests=528,  1 wallclock secs ( 0.07 usr  0.02 sys +  0.42 cusr  0.04 csys =  0.55 CPU)
        Result: FAIL
        Failed 2/3 test programs. 0/528 subtests failed.
        make: *** [test_dynamic] Error 255
        -> FAIL Installing String::ToIdentifier::EN failed. See /home/wanradt/.cpanm/build.log for details.

    "Byte order is not compatible at..." seems to be the key, but to what?
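
    The error comes from Storable refusing to load serialized data written on a machine with a different byte order. A hedged guess from the trace is that Lingua::EN::Tagger's stored data was built for another architecture, so force-rebuilding that module from source may regenerate it:

        # a sketch: rebuild the module whose stored data Storable is rejecting
        cpanm --reinstall Lingua::EN::Tagger
        cpanm String::ToIdentifier::EN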

  • redirecting output from telnet / nc to file in script fails when cron'd

    - by qhartman
    So, I have a device on my network which sits there listening on a port for a connection, and when a connection is made it dumps ascii data out. I need to capture that data to a file. I wrote a dead simple shell script that does this:

        #!/bin/bash

        # Config Variables. Age is in Days.
        DATA_ROOT=/root/data
        FILENAME=data_`date +%F`.dat
        HOST=device
        COMPRESS_AGE=3

        # Sanity Checks
        if [ ! -e $DATA_ROOT ]
        then
            echo "The directory $DATA_ROOT seems to not exist. Please create it."
            exit 1
        fi

        if [ -e $DATA_ROOT/$FILENAME ]
        then
            echo "You seem to have extracted data already today. Aborting"
            exit 1
        fi

        # Get Data
        nc $HOST 2202 > $DATA_ROOT/$FILENAME

        # Compress old Data
        find $DATA_ROOT -type f -mtime +$COMPRESS_AGE -exec gzip {} \;

        exit 0

    It works great when I run it by hand, but when I run it from cron, it doesn't capture any of the output. If I replace nc with telnet I see the initial telnet headers about escape sequences and whatnot, but not the data. Ideas? I've tried forcing bash to act like an interactive shell with -i. I've tried redirecting both stderr and stdout. I know it's got to be some silly simple thing, but I'm utterly failing. This is driving me nuts...

    EDIT: I also just noticed that the nc processes from all my previous attempts at this have been sitting sleeping, and when I killed them, cron sent me a bunch of nonsensical error messages. At least now I have something to dig into!
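
    Those sleeping nc processes hint that nc is waiting on stdin, which cron never provides in a useful way. A hedged tweak is to give it an explicit stdin and an idle timeout (flag support varies between nc variants, so check the local man page):

        # a sketch: close stdin explicitly and bail out after 30 idle seconds
        nc -w 30 $HOST 2202 < /dev/null > $DATA_ROOT/$FILENAME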

  • Conventional location for JAR files for a LaunchDaemon on OS X?

    - by Barry Wark
    I'm setting up a Hudson build slave on an OS X machine. I'm using launchd to start the slave using the following plist in /Library/LaunchDaemons/:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>KeepAlive</key>
            <true/>
            <key>Label</key>
            <string>org.hudson-ci.jnlpslave</string>
            <key>ProgramArguments</key>
            <array>
                <string>/usr/bin/java</string>
                <string>-jar</string>
                <string>/Users/Shared/Hudson/slave.jar</string>
                <string>-noCertificateCheck</string>
                <string>-jnlpUrl</string>
                <string>file:///Users/Shared/Hudson/slave-agent.jnlp</string>
            </array>
            <key>RunAtLoad</key>
            <true/>
        </dict>
        </plist>

    I'm currently putting the slave.jar and slave-agent.jnlp files in /Users/Shared/Hudson, but this seems like an unnecessarily user-visible location. What's the convention? Where should I be putting these JARs for a daemon?
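
    For what it's worth, a common convention for daemon support files on OS X is /Library/Application Support/<Name>; a hedged sketch of the move (the plist's two path strings would need updating to match):

        sudo mkdir -p "/Library/Application Support/Hudson"
        sudo mv /Users/Shared/Hudson/slave.jar /Users/Shared/Hudson/slave-agent.jnlp \
            "/Library/Application Support/Hudson/"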

  • disk space keeps filling up on EC2 instance with no apparent files/directories

    - by sasher
    How come the OS shows 6.5G used but I see only 3.6G in files/directories? I'm running as root on an Amazon Linux AMI (seems like CentOS), with lots of free memory available, no swapping going on, and no apparent file descriptor issue. The only thing I can think of is a log file that was deleted while applications were still appending to it. Disk space usage is slowly but continuously rising towards full capacity (~1k/min with very small decreases from time to time). Any explanation? Solution?

        # du --max-depth=1 -h /
        1.2G    /usr
        4.0K    /cgroup
        22M     /lib64
        11M     /sbin
        19M     /etc
        52K     /dev
        2.1G    /var
        4.0K    /media
        0       /sys
        4.0K    /selinux
        du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory
        du: cannot access `/proc/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/fdinfo/4': No such file or directory
        0       /proc
        18M     /home
        4.0K    /logs
        8.1M    /bin
        16K     /lost+found
        12M     /tmp
        4.0K    /srv
        35M     /boot
        79M     /lib
        56K     /root
        67M     /opt
        4.0K    /local
        4.0K    /mnt
        3.6G    /

        # df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/xvda1            7.9G  6.5G  1.4G  84% /
        tmpfs                 3.7G     0  3.7G   0% /dev/shm

        # sysctl fs.file-nr
        fs.file-nr = 864        0       761182
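
    The deleted-but-still-open log theory is easy to confirm: the kernel keeps the blocks allocated until the last file handle closes, and lsof can list such files (a sketch; lsof must be installed):

        # files with a link count of zero that some process still holds open
        lsof +L1

        # or, more loosely:
        lsof | grep -i deleted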

  • Determine from where "sh" is being run under the apache www-data user using ps or netstat

    - by Eugene van der Merwe
    I am working with a compromised Ubuntu 8.04 Plesk 9.5.4 server. It seems that a script on the server is continuously doing reverse lookups to random IPs on the Internet. I first spotted it by using top, and then noticed flashes of this coming up continuously:

        sh -c host -W 1 '198.204.241.10'

    I wrote this script to interrogate ps every 1 second to see how frequently this runs:

        #!/bin/bash
        while :
        do
            ps -ef | egrep -i "sh -c host"
            sleep 1
        done

    The results are that this runs often, every few seconds:

        www-data 17762  8332  1 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
        www-data 17772  8332  1 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
        www-data 17879 17869  0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
        www-data 17879 17869  1 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
        www-data 17879 17869  0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
        root     18031 17756  0 10:07 pts/2 00:00:00 egrep -i sh -c host
        www-data 18078 16704  0 10:07 ?     00:00:00 sh -c host -W 1 '59.58.139.134'
        www-data 18125 17996  0 10:07 ?     00:00:00 sh -c host -W 1 '91.124.51.65'
        root     18131 17756  0 10:07 pts/2 00:00:00 egrep -i sh -c host
        www-data 18137 17869  0 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'
        www-data 18137 17869  1 10:07 ?     00:00:00 sh -c host -W 1 '198.204.241.10'

    My theory is that if I can see who is launching the sh process, or from where it is launched, I can isolate the problem further. Can somebody please guide me in using netstat or ps to identify from where sh is being run? I might get many suggestions that the OS and the Plesk are out of date, but please bear in mind there are some very concrete reasons why this server is running legacy software. My question is aimed at advanced Linux systems administrators who have in-depth experience with security compromises and with using netstat and ps to get to the bottom of them.
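
    The PPID column in that ps output is the lead to pull on; a hedged sketch that walks the parent chain of one of the offending processes (catch a live PID first, since they die quickly, and note the starting PID below is hypothetical):

        # walk up from a live sh PID to whatever is spawning it
        pid=17869    # hypothetical: substitute a current PPID from the ps output
        while [ -n "$pid" ] && [ "$pid" -gt 1 ]; do
            ps -o pid=,ppid=,user=,cmd= -p "$pid"
            pid=$(ps -o ppid= -p "$pid" | tr -d ' ')
        done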

  • Facter - custom fact, returns empty data set when invoked by Puppet agent

    - by user3684494
    According to this Puppet Labs article, I can create custom facts from shell scripts. I have created a bash script that returns a single fact; it is packaged in a module's facts.d directory. The module is included on the target system via an ENC class. When invoked by the puppet agent on the target it returns an empty set; when run by hand on the agent it correctly returns the fact. The script has execute permission on the master, but does not have it on the agent. I saw a bug report related to permissions and file types, but that was Windows and supposedly fixed in Puppet version 3. What am I doing wrong?

    ENC definition:

        ---
        classes:
          facttest:

    Shell script:

        #!/bin/bash
        echo "test_fact1=$(hostname)"

    Permissions:

        master: -rwxr-xr-x 1 root root ... modules/facttest/facts.d/testfact.sh
        agent:  -rw-r--r-- 1 root root ... /var/lib/puppet/facts.d/testfact.sh

    Agent message:

        Fact file /var/lib/puppet/facts.d/testfact.sh was parsed but returned an empty data set

    Version information:

        Puppet master: 3.5.1 (Debian)
        Facter master: 2.0.1
        Puppet agent:  3.6.1 (OpenSUSE)
        Facter agent:  2.0.1
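
    Executable external facts have to arrive on the agent with the execute bit intact, which the permissions listing above shows is not happening. A hedged quick test is to set the bit by hand and re-run the agent; if that fixes it, the longer-term fix is making the module deliver the file with an executable mode:

        chmod 0755 /var/lib/puppet/facts.d/testfact.sh
        puppet agent -t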

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there's a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test if these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge it, so there shouldn't be any trace left, right?

    EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any packages, and I'm not sure if they are safe to remove:

        /usr/bin/ipython2.6
        /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info
        /usr/lib/python2.6/dist-packages/easy_install.py
        /usr/lib/python2.6/dist-packages/IPython
        /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info
        /usr/lib/python2.6/dist-packages/setuptools
        /usr/lib/python2.6/dist-packages/setuptools.egg-info
        /usr/lib/python2.6/dist-packages/setuptools.pth
        /usr/lib/python2.6/dist-packages/site.py
        /usr/lib/python2.6/dist-packages/wx.pth
        /usr/local/lib/python2.6
        /usr/local/lib/python2.6/dist-packages
        /usr/local/lib/python2.6/site-packages
        /usr/share/man/man1/ipython2.6.1.gz
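
    For reference, a hedged sketch of the ownership check the EDIT alludes to: dpkg -S reports which installed package, if any, claims a path (files_to_check.txt is a hypothetical list, one path per line):

        # print the owner of each file, or mark it orphaned
        while read -r f; do
            dpkg -S "$f" 2>/dev/null || echo "ORPHAN: $f"
        done < files_to_check.txt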

  • How to get the exit code of a script started in a screen session

    - by Bettina
    Hi folks, I am currently creating a backup script which uses screen to start a backup job with rsync inside a screen session. The backup jobs are started as follows:

        screen -dmS backup /usr/bin/rsync ...

    As soon as the rsync job is finished, the screen session is terminated automatically. To make sure that the backup was successful, I would like to check the exit code of the rsync job, but unfortunately I really don't know how to get the exit code after the screen has terminated. Does someone have a good idea how to automatically check if the rsync job was successful or not? Would be great if someone does. I already thought about using a temp file, like this:

        screen -dmS myScreen "rsync -av ... ; echo $? > /tmp/myExitCode"

    but this unfortunately does not work. Then I thought about using stderr, as in the example below:

        screen -dmS myScreen "rsync -av ... 2> /tmp/rsync-stderr"

    None of my ideas have worked out so far, since stderr is not written when I use the command above. :-( Would be great if someone has a good idea or even a solution. Cheers, Bettina
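
    Two details likely bite here: screen expects a command rather than a shell string, and $? inside double quotes is expanded by the launching shell before screen ever runs. A hedged sketch that defers both to a child shell (SRC and DST are placeholders):

        # single quotes keep $? from being expanded too early;
        # bash -c gives screen one real command to run
        screen -dmS backup bash -c '/usr/bin/rsync -av SRC DST; echo $? > /tmp/backup.exit'

        # later, from the script:
        [ "$(cat /tmp/backup.exit 2>/dev/null)" = "0" ] && echo "backup OK"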

  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the AWS Management Console. Downloading the data back to a test folder results in the following error message:

        root@mybucketname:/var/s3sync# ./week_download.sh
        s3Prefix backups/weekly
        localPrefix /var/s3sync/testdown/weekly
        s3TreeRecurse mybucketname backups/weekly
        Creating new connection
        Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries left
        Response code: 200
        prefix found: /
        s3TreeRecurse mybucketname backups/weekly /
        Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries left
        Response code: 200
        S3 item backups/weekly/
        s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri Oct 29 14:21:53 UTC 2010
        local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date:
        source:
        dest:
        Update node
        s3sync.rb:638:in `initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (Errno::ENOENT)
                from s3sync.rb:638:in `open'
                from s3sync.rb:638:in `updateFrom'
                from s3sync.rb:393:in `main'
                from s3sync.rb:735

    I am using the following download script:

        #!/bin/bash
        # script to download local directory upto s3
        cd /var/s3sync/
        export AWS_ACCESS_KEY_ID=nothing to see here
        export AWS_SECRET_ACCESS_KEY=nothing to see here
        export SSL_CERT_DIR=/var/s3sync/certs
        ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown
        # copy and modify line above for each additional folder to be synced

    Any ideas? Does the download script need to download to the folder the data was originally uploaded from (i.e. the testup folder)? I was hoping that, in the event of a complete failure where the original folders no longer exist, it would just download everything for me.

    Note: I changed my bucket name to "mybucketname" so that it is not public!
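
    The traceback dies trying to create /var/s3sync/testdown/weekly/.s3syncTemp, which suggests (an assumption drawn from the ENOENT, not a tested diagnosis) that the local target tree does not exist yet and --make-dirs is not creating the prefix directory itself; a hedged pre-step:

        mkdir -p /var/s3sync/testdown/weekly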

  • How do I use command line and wmctrl to make a window larger than the screen to get a huge screenshot?

    - by Mnebuerquo
    I use a program which makes a large image which I have to scroll to view. The program has no way to save the image, and I have no access to the source to modify it. The only way I have to get the image from the program is by screenshot. My goal is to save the full-size image without having to piece together individual screenshots. I'm using this script to try taking a screenshot:

        #!/bin/bash
        window=$(wmctrl -l | grep "Program$" | awk '{print $1}')
        wmctrl -v -i -r $window -e '0,0,0,6030,5828'
        wmctrl -i -a $window
        import -window $window ~/Desktop/screenshot.png

    This uses wmctrl to get the window id ($window) for a window named "Program". It then tries to resize the window to the desired dimensions. It uses imagemagick (import) to save a screenshot.png on the user's Desktop. All of this works except the resize step. I can resize the window using wmctrl -r -e, but sizes greater than the screen size don't work. I'm using Ubuntu 10.04 and the Gnome Desktop. I run two monitors, but I've tried this with one of them disabled. Is there a way to resize the window larger than my screen to get a huge screenshot?
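
    One hedged workaround is to give the program a screen that really is that big: run it under Xvfb, a virtual framebuffer X server, and screenshot the virtual root; the program name below is a placeholder:

        # a sketch: a 6030x5828 virtual display, no physical screen needed
        Xvfb :99 -screen 0 6030x5828x24 &
        DISPLAY=:99 your-program &        # hypothetical command
        sleep 10                          # give it time to draw
        DISPLAY=:99 import -window root ~/Desktop/screenshot.png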

  • Ubuntu Launcher Items Don't Have Correct Environment Vars under NX

    - by ivarley
    I've got an environment variable issue I'm having trouble resolving. I'm running Ubuntu (Karmic, 9.10) and coming in via NX (NoMachine) on a Mac. I've added several environment variables in my .bashrc file, e.g.:

        export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/

    Sitting at the machine, this environment variable is available on the command line, as well as for apps I launch from the Main Menu. Coming in over NX, however, the environment variable shows up correctly on the command line, but NOT when I launch things via the launcher. As an example, I created a simple shell script called testpath in my home folder:

        #!/bin/sh
        echo $PATH && sleep 5
        quit

    I gave it execute privileges:

        chmod +x testpath

    And then I created a launcher item in my Main Menu that simply runs:

        ./testpath

    When I'm sitting at the computer, this launcher runs and shows all the stuff I put into the $PATH variable in my .bashrc file (e.g. $JAVA_HOME, etc). But when I come in over NX, it shows a totally different value for the $PATH variable, despite the fact that if I launch a terminal window (still in NX) and type export $PATH, it shows up correctly. I assume this has to do with which files are getting loaded by the windowing system over NX, and that it's some other file. But I have no idea how to fix it. For the record, I also have a .profile file with the following in it:

        # if running bash
        if [ -n "$BASH_VERSION" ]; then
            # include .bashrc if it exists
            if [ -f "$HOME/.bashrc" ]; then
                . "$HOME/.bashrc"
            fi
        fi
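
    Menu launchers don't go through a login shell, so .bashrc and .profile are skipped unless the session that spawned the menu already read them; a hedged workaround is to make the launcher itself start a login shell:

        # launcher "command" field, as a sketch:
        bash -lc "$HOME/testpath"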

  • Configure X connections over TCP without using an X connection

    - by Darren Cook
    I want to run a GUI application on a remote machine I only have ssh access to. I don't need to, or want to, see the GUI window. (I know I could use something like ssh -C -X remote_server if I wanted the GUI to be on my client.) I know X is running on the remote machine, as ps shows this:

        root ... /usr/bin/Xorg :0 -br -audit 0 -auth /var/gdm/:0.Xauth -nolisten tcp vt7

    I set DISPLAY=:0.0 but I then get "Xlib: connection to ":0.0" refused by server" when I try to use it. At "Get remote x display working in linux without ssh tunneling" and "Xserver doesn't work unless DISPLAY=0.0" I see the advice to use gdmsetup to allow X to listen on TCP. But gdmsetup is a GUI application! And trying to run it over ssh -X did not work ("X11 connection rejected because of wrong authentication"). So, is there a text file I can edit to remove -nolisten? And, after editing it, how do I safely restart X remotely? (There is other stuff running on this machine, so requesting a reboot is possible but undesirable.) If not, should gdmsetup be able to run over ssh, and should I persevere in that direction?

    UPDATE: I had to do the ssh -X session as root (ssh as a normal user, then sudo or su, does not work). So I did the edit with gdmsetup and then restarted X with gdm-restart. I've also done xhost + from that ssh -X session. The ps line no longer shows the -nolisten tcp part. But still no luck connecting to it, with either DISPLAY=:0 or DISPLAY=localhost:0.
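
    For the record, the text-file equivalent of gdmsetup is usually GDM's configuration file; the exact path varies by distro and GDM version, so treat this as a hedged sketch:

        # e.g. /etc/gdm/custom.conf (or /etc/X11/gdm/gdm.conf on older systems)
        [security]
        DisallowTCP=false

    followed by restarting GDM (gdm-restart, as used in the UPDATE above). Note that a host firewall can still block TCP port 6000 even after X starts listening.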

  • IPTables configuration help

    - by Sam
    I'm after some help with setting up IPTables. Mostly the configuration is working, but regardless of what I try I cannot allow localhost to access the local Apache only (i.e. localhost should be able to access localhost:80 only). Here is my script:

        #!/bin/bash

        # Allow root to access external web and ftp
        iptables -t filter -A OUTPUT -p tcp --dport 21 --match owner --uid-owner 0 -j ACCEPT
        iptables -t filter -A OUTPUT -p tcp --dport 80 --match owner --uid-owner 0 -j ACCEPT

        # Allow DNS queries
        iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        iptables -A OUTPUT -p tcp --dport 53 -j ACCEPT

        # Allow in and outbound SSH to/from any server
        iptables -A INPUT -p tcp -s 0/0 --dport 22 -j ACCEPT
        iptables -A OUTPUT -p tcp -d 0/0 --sport 22 -j ACCEPT

        # Accept ICMP requests
        iptables -A INPUT -p icmp -s 0/0 -j ACCEPT
        iptables -A OUTPUT -p icmp -d 0/0 -j ACCEPT

        # Accept connections from any local machines but disallow localhost access to networked machines
        iptables -A INPUT -s 10.0.1.0/24 -j ACCEPT
        iptables -A OUTPUT -d 10.0.1.0/24 -j DROP

        # Drop ALL other traffic
        iptables -A OUTPUT -p tcp -d 0/0 -j DROP
        iptables -A OUTPUT -p udp -d 0/0 -j DROP

    Now I have tried many permutations and I'm obviously missing something. I place them above the in/outbound SSH rules, so it's not the precedence order. If someone could give me a heads up on allowing only the local machine to access the local web server, that'd be great. Cheers guys.
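
    Reading the rules as shown, loopback traffic is never accepted in the OUTPUT chain, so locally generated packets to 127.0.0.1:80 fall through to the final OUTPUT DROP (an inference from the script, not a tested diagnosis). A hedged sketch of the loopback pair, placed before the OUTPUT DROP rules:

        # accept loopback in both directions; the INPUT side may already exist elsewhere
        iptables -A INPUT  -i lo -j ACCEPT
        iptables -A OUTPUT -o lo -j ACCEPT

        # or, to scope it to the web server only:
        iptables -A OUTPUT -o lo -p tcp --dport 80 -j ACCEPT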
