Search Results

Search found 26263 results on 1051 pages for 'linux guest'.


  • Applications getting killed automatically

    - by nebi
    I am running an httperf client on my machine, and after a few seconds it gets killed. The command is:

        httperf --hog --client=0/1 --server=39.0.0.2 --port=80 --uri=/50kb --rate=20000 --send-buffer=4096 --recv-buffer=16384 --num-conns=6000000 --num-calls=1

    I have run this test a number of times before and never hit this error; I have only been seeing it for the last two days. My Ubuntu version is 10.04 and the httperf version is 0.9.0. dmesg shows:

        [ 2997.180620] Out of memory: kill process 7977 (apache2) score 70532 or a child
        [ 2997.180632] Killed process 7977 (apache2)
        [ 2997.184837] Out of memory: kill process 7971 (rsyslogd) score 8702 or a child
        [ 2997.184844] Killed process 7971 (rsyslogd)
        [ 2997.188823] Out of memory: kill process 7978 (apache2) score 1354 or a child
        [ 2997.188829] Killed process 7978 (apache2)
        [ 2997.192817] Out of memory: kill process 7973 (atd) score 561 or a child
        [ 2997.192822] Killed process 7973 (atd)
        [ 2997.196805] Out of memory: kill process 8102 (httperf) score 471 or a child
        [ 2997.196811] Killed process 8102 (httperf)

    Output of the free command:

                         total       used       free     shared    buffers     cached
        Mem:           3862768     163000    3699768          0       2384      13068
        -/+ buffers/cache:         147548    3715220
        Swap:          3905528          0    3905528
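
    Not a fix, but a hedged way to see what the OOM killer is reacting to while the benchmark runs (standard tools, run from a second shell on the same machine; the free output above is presumably from after the kill, not during the spike):

        # Watch commit accounting while httperf is running.
        watch -n1 'grep -E "MemFree|Committed_AS|CommitLimit" /proc/meminfo'

        # How likely the kernel is to pick httperf as a victim, and the
        # current overcommit policy.
        cat /proc/$(pgrep -o httperf)/oom_score
        sysctl vm.overcommit_memory vm.overcommit_ratio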


  • How to back up servers to an SSH host with low traffic, access to versions, and encryption?

    - by leto
    Hello, I haven't run backups of my personal stuff for more years than I can remember, until I recently woke up and realised, contrary to my prior belief: actually, I care! :) Now I have a central data server at home to which I want to attach external media, and onto which I want to save backups of my most important stuff, like years of self-written scripts, database dumps, you name it. I've tinkered with rsync+ssh over the last two years and also tried tar over ssh, but I don't yet know the simplest and easiest-to-maintain way to do it. Here's my workload:

    - A typical LAMP server (<5 GB of data, so lots of small files) which I'd like to back up fully, connected via 10 Mbit
    - My personal stuff (<750 GB of data) from a Mac connected via GE
    - My passwords in an encrypted container (100 MB) from OpenBSD connected via serial PPP
    - My e-mail from the last ten years (<25 GB) as a Maildir, which I need to keep in a readable format
    - Some archives (tar.*) which I need to back up only once and keep in a readable format

    (Deleted my ideas, as I'm here for suggestions.) What I need:

    1. Use an SSH tunnel for data transfer
    2. Be quick with lots of small files
    3. Keep revisions
    4. Be sure the data I save is not corrupted
    5. Intelligent resume functions, and the ability to deal with network congestion :)
    6. Compressed and optionally encrypted storage
    7. Easy extraction of data from the backup (filesystem-like usage would be nice)

    How, and with what software, would you back this up? Hints to tools that can solve only part of my problem (like encryption) are also greatly appreciated. Greets
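
    One common pattern, sketched only as an illustration (host name, paths and the "latest" symlink convention are placeholders): rsync over SSH into dated snapshot directories, with --link-dest so unchanged files are hard-linked to the previous run instead of being retransmitted or stored twice. It covers points 1-3, 5 and 7; encryption of the target media would have to come from elsewhere, e.g. an encrypted filesystem on the backup disk.

        #!/bin/sh
        # Hypothetical snapshot backup over SSH with rsync --link-dest.
        SRC=/var/www
        DEST=backuphost:/srv/backups/lamp
        TODAY=$(date +%Y-%m-%d)

        rsync -az --delete \
              --link-dest=../latest \
              -e ssh \
              "$SRC/" "$DEST/$TODAY/" \
          && ssh backuphost "ln -sfn $TODAY /srv/backups/lamp/latest"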


  • Ubuntu 64 or 32 bit for MacBook/VPS?

    - by ajsie
    I've got a MacBook Pro and wonder whether I should use the 64-bit or the 32-bit Ubuntu Server. I've also got a VPS that is not hosted by me; how do I know which version to choose there? How do you check how many bits your CPU is working with? Can I run a 64-bit system on a 32-bit CPU, and vice versa?
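
    A quick way to check, using standard tools (shown only as an illustration): the lm flag in /proc/cpuinfo means the CPU itself is 64-bit capable, while uname reports what the currently running kernel is.

        uname -m                     # x86_64 = 64-bit kernel, i686 = 32-bit kernel
        grep -qw lm /proc/cpuinfo && echo "CPU is 64-bit capable" || echo "CPU is 32-bit only"
        lscpu | grep 'op-mode'       # e.g. "CPU op-mode(s): 32-bit, 64-bit"

    A 32-bit system runs fine on a 64-bit capable CPU, but not the other way around.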


  • curl makes a site work externally once run locally (apache)

    - by Kyle_at_NU
    Currently, when I visit mysite.mydomain.com from outside the local network, the browser shows: "This is the default web page for this server. Nothing to see here." This is not even the "It works!" Apache page. If I then type, locally (Apache 2 on Ubuntu Server 12.04 with curl installed):

        curl mysite.mydomain.com

    I get the site I expect, and the next time I visit the page externally I get the correct site as well. Has anyone seen this before? Tips/suggestions?
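
    A hedged place to start looking: this pattern often comes down to name-based virtual host selection (possibly combined with a cache or proxy in front of Apache), so checking which vhost Apache treats as the default, and which ServerName each one answers to, is cheap. apache2ctl is the stock control script on Ubuntu; the vhost below is only a hypothetical example.

        # Show the parsed virtual host configuration and the default vhost.
        apache2ctl -S

        # A name-based vhost normally needs an explicit ServerName to match
        # the externally used hostname:
        #   <VirtualHost *:80>
        #       ServerName mysite.mydomain.com
        #       DocumentRoot /var/www/mysite
        #   </VirtualHost>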


  • Is data=journal on a separate device on Ext4 as good as using a RAID controller with battery backed cache for file system consistency?

    - by Jeff Strunk
    It seems to me that data=journal prevents file system inconsistency in the case of power failure. Using it with a dedicated journal device mitigates the performance penalty of writing the data twice. A power outage would still lose the data that is currently being written to the journal, but the file system on disk would always be consistent. If that amount of loss is acceptable, is a RAID controller with battery backed cache really worthwhile?
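
    For reference, a minimal sketch of the setup being discussed, purely illustrative (device names and mount point are placeholders): an ext4 filesystem whose journal lives on a dedicated device and which journals data as well as metadata.

        # Turn /dev/sdb1 into an external journal device, build the
        # filesystem against it, and mount with full data journaling.
        mke2fs -O journal_dev /dev/sdb1
        mkfs.ext4 -J device=/dev/sdb1 /dev/sda1
        mount -o data=journal /dev/sda1 /mnt/data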


  • I do not understand -printf script

    - by jerzdevs
    I have taken over responsibility for RHEL 5 scripting, and I've had no training on this platform or in Bash scripting. There's a script that has multiple pieces; I will ask only about the second piece, but I'll also show you the first, as I think it will help with my question below. The first part of the script lists the users on a particular server:

        cut -d : -f 1 /etc/passwd

    The output will look something like: root bin joe rob other... The second script requires me to fill in each of the accounts listed by the script above and run it. From what I can gather from the man pages and other web searches, it goes out and finds the group owner of a file or directory, and obviously sorts and picks out just the unique records, but I'm not really sure - so that's my question: what does the script below really do? (The funny thing is that if I plug in each name from the output above, I'll sometimes receive a "cannot find username blah, blah, blah" message.)

        find username -printf %G | sort | uniq
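
    For what it's worth: in that command "username" is not a user name at all but a path for find to search, so for every file under a directory literally named "username" it prints that file's numeric group ID (%G), with no separator, and sort | uniq collapses the result. The "cannot find" message simply means no directory of that name exists in the current working directory. A sketch of what was probably intended (the home-directory path is an assumption):

        # Unique group IDs of everything under a user's home directory,
        # one per line (the \n makes sort/uniq see separate records).
        find /home/username -printf '%G\n' | sort -u

        # Or group names instead of numeric IDs:
        find /home/username -printf '%g\n' | sort -u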


  • Partitioning of Ubuntu server which will use OpenVZ and encrypted partitions (unlocked through SSH login)

    - by DeletedAccount
    Hi, I'm about to install a server. Some context:

    - My HDD is 1 TB and I have 2 GB of RAM
    - Ubuntu Server Lucid Lynx, AMD64
    - I will use OpenVZ and have most functionality separated into containers
    - To support disk quotas I need to use ext3 (not ext4) for the container partition

    Each time I reboot the server I want to be forced to log in through SSH and mount the encrypted partitions by typing my password (if someone steals the server, no critical data should be available). I want to have as much as possible encrypted, yet I want to be able to log in through SSH, as I don't have a monitor or keyboard at the server. I am not sure how big my partitions need to be; being able to resize them later would be nice. I guess that implies using LVM? But the manual partition mount over SSH is also very important (in fact it's more important, if I have to pick one). How do you recommend I partition the HDD? If I have daemons which need the encrypted partitions, will they fail, and can I just restart them after mounting the needed partitions?
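
    A rough sketch of one layout that fits these constraints, purely illustrative (device names, sizes, volume names and the service name are placeholders): keep /boot and a minimal root unencrypted so the machine boots far enough for sshd, and put everything sensitive into one LUKS partition carrying an LVM volume group, so the individual logical volumes (ext3 for the OpenVZ area) can be resized later. The LUKS container is opened manually over SSH after each reboot, and anything that depends on it is restarted afterwards.

        # One-time setup:
        cryptsetup luksFormat /dev/sda3
        cryptsetup luksOpen   /dev/sda3 secure
        pvcreate  /dev/mapper/secure
        vgcreate  vg_secure /dev/mapper/secure
        lvcreate  -L 200G -n vz vg_secure
        mkfs.ext3 /dev/vg_secure/vz

        # After every reboot, over SSH:
        cryptsetup luksOpen /dev/sda3 secure       # prompts for the passphrase
        vgchange -ay vg_secure
        mount /dev/vg_secure/vz /vz
        service vz restart                         # service name assumed; restart daemons that need the data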


  • sudoers entries

    - by Pochi
    Is there a way to have a sudoers entry that allows executing only a particular command, without any extra arguments? I can't seem to find a resource that describes how command matching works in sudoers. Say I want to grant sudo for /path/to/executable arg. Does an entry like the following:

        user ALL=(ALL) /path/to/executable arg

    strictly allow sudo access to a command line exactly matching that? That is, does it avoid granting user sudo privileges for /path/to/executable arg arg2?
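
    For reference, the matching rules described in sudoers(5): if arguments are listed, the command line must match them exactly; an empty string "" means the command may be run with no arguments at all; and a command with no arguments listed matches any arguments (or none). Illustrated:

        user ALL=(ALL) /path/to/executable arg     # only this exact argument list
        user ALL=(ALL) /path/to/executable ""      # the command with no arguments
        user ALL=(ALL) /path/to/executable         # the command with any arguments

    So the entry in the question should indeed refuse "sudo /path/to/executable arg arg2".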


  • Using Monit to monitor Resque

    - by Alex
    I'm trying to use Resque as a job runner for Rails. I've tried this config, and many other ways of daemonizing the resque task (because running rake resque:work leaves the terminal tied to that command). Unfortunately, their example configuration doesn't work for me. Does the configuration look correct? Or is there another way to turn the process into a daemon? Thank you :)

        check process resque_worker_QUEUE
          with pidfile /data/APP_NAME/current/tmp/pids/resque_worker_QUEUE.pid
          start program = "/bin/sh -c 'cd /data/APP_NAME/current; RAILS_ENV=production QUEUE=queue_name VERBOSE=1 nohup rake environment resque:work& > log/resque_worker_QUEUE.log && echo $! > tmp/pids/resque_worker_QUEUE.pid'" as uid deploy and gid deploy
          stop program = "/bin/sh -c 'cd /data/APP_NAME/current && kill -s QUIT `cat tmp/pids/resque_worker_QUEUE.pid` && rm -f tmp/pids/resque_worker_QUEUE.pid; exit 0;'"
          if totalmem is greater than 300 MB for 10 cycles then restart   # eating up memory?


  • PCLinuxOS demands that I use only one repository at a time. Is that right?

    - by m33600
    I come to you with a question that is paralyzing my coding efforts. PCLinuxOS was my distro of choice for reliability, but it is jealous and does not permit me to add repos from, say, Debian. The wiki clearly advises using just one repo, and I end up not finding what I used to find on normal Debian systems. Multimon, the audio decoder, for example (see my other question), is not there. When I try to install multimon with hammer and pliers, it returns errors of all kinds. Is there a way to safely and temporarily add a repository, do the install, and then remove the repo, returning PCLinuxOS to its stable state?


  • Linux kernel 3.2 available: integration of Android code, networking improvements, Btrfs, and support for a new architecture

    Linux kernel 3.2 available: integration of Android code, networking improvements, Btrfs, and support for a new architecture. Linus Torvalds has just announced the availability of version 3.3 of the Linux kernel. Among the new features, the most notable is the reintegration of portions of the Android kernel code. As a reminder, the Android drivers had been dropped from the kernel in 2009 because they were not being maintained well enough. Integrating Android will allow developers to use the Linux kernel to run an Android system, to develop a single driver for both, and will reduce the maintenance costs of independent patches for a...


  • Automate hashing for each file in a folder?

    - by Kennie R.
    I have quite a few FTP folders, and I add a few more each month. I prefer to leave some method of verifying their integrity, for example files named MD5SUMS, SHA256SUMS, ..., which I can create with a script. Take for example:

        find ./ -type f -exec md5sum {} \;

    This works fine, but when I run it again afterwards for each SHAxxx sum, it also produces a checksum of the MD5SUMS file itself, which is really not wanted. Is there a simpler way, or a script, or a common way of hashing all the files into their sums files without causing problems like that? I could really use a better option.
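
    One way around it, sketched with the file names from the example: exclude the checksum files themselves in the find expression, and write each list in a single pass.

        #!/bin/sh
        # Hash every file except the checksum lists themselves.
        # Run from inside the folder to be summed.
        find . -type f ! -name MD5SUMS ! -name SHA256SUMS -exec md5sum    {} + > MD5SUMS
        find . -type f ! -name MD5SUMS ! -name SHA256SUMS -exec sha256sum {} + > SHA256SUMS

        # Later, verification:
        md5sum    -c MD5SUMS
        sha256sum -c SHA256SUMS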


  • Website and file/directory permissions

    - by mathiass
    I've been given the task of fixing a website. One of its issues is that on one page the images have broken links: the images are not showing, and clicking on an image (i.e. following the direct link to the image file) results in a 403 (Forbidden) error. I am looking for some feedback on what the possible cause could be. The directory where the images are stored has the following permissions (I had to hide the names):

        drwxrws--- www "group" 10240 Aug 2008 "image directory name"

    I checked the page source code, and everything seems to be in place. The rest of the site, and other images outside that image directory, are showing fine. I was told that there have recently been some changes to the server. I'm working on the assumption that there is no fault in the source code, and that the permissions are - or used to be - correct (since the site has worked before, and no recent changes have been made to the site itself). My only thoughts at the moment are that either: a) the directory permissions should be drwxrws--x (i.e. executable for other users), or b) there has been a change in the server settings that I don't know about. Is there anything else I should check?
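
    A quick, hedged sanity check (the web-server user and paths below are placeholders): a 403 on a direct image URL usually means the user the web server runs as can neither traverse the directory (the x bit) nor read the files, so comparing that user and group against the directory's owner and group is the first thing to look at.

        # Which user and group does the web server run as?
        ps -o user,group,comm -C apache2          # or httpd, nginx, ...

        # Can that user actually reach and read one of the images?
        sudo -u www-data ls -l /path/to/image-directory
        sudo -u www-data cat /path/to/image-directory/some-image.jpg > /dev/null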


  • iptables redirect single website traffic to port 8080

    - by Luke John Southard
    My goal is to be able to make a connection to one, and only one, website through a proxy. Everything else should be dropped. I have been able to do this successfully without a proxy with this code:

        ./iptables -I INPUT 1 -i lo -j ACCEPT
        ./iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
        ./iptables -A OUTPUT -p tcp -d www.website.com --dport 80 -j ACCEPT
        ./iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
        ./iptables -P INPUT DROP
        ./iptables -P OUTPUT DROP

    How could I do the same thing, except redirecting that traffic to port 8080 somewhere? I've been trying to redirect in the PREROUTING chain of the nat table, but I'm unsure whether that is the proper place to do it. Thanks for your help!
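
    One hedged pointer: locally generated packets never traverse PREROUTING; they go through the nat table's OUTPUT chain instead, so a redirect to a local proxy would look roughly like this (assuming the proxy listens on 127.0.0.1:8080):

        # Rewrite outbound HTTP for that one site to the local proxy port.
        iptables -t nat -A OUTPUT -p tcp -d www.website.com --dport 80 \
                 -j REDIRECT --to-ports 8080

        # The filter OUTPUT chain then sees the rewritten destination, so it
        # needs to accept traffic to the proxy port as well.
        iptables -A OUTPUT -p tcp -d 127.0.0.1 --dport 8080 -j ACCEPT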


  • I wrote a new X11 keyboard layout file, how do I get my system to recognize it?

    - by grimborg
    I like to configure my keys my way, so I wrote a keyboard symbols file and put it in /usr/share/X11/xkb/symbols/cat. I use it by running setxkbmap cat -variant dvorak (and it works), but it doesn't show up in the console configuration (dpkg-reconfigure console-setup), nor in the Gnome keyboard settings, nor anywhere else, so I have to run setxkbmap every time. I suppose I have to register it somewhere, but where? Any hints? Thanks!
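
    One hedged pointer: the GUI tools and console-setup build their layout lists from the XKB rules files, not from the symbols directory, so a custom layout usually also has to be declared in /usr/share/X11/xkb/rules/evdev.xml (and mirrored as a line in evdev.lst). A sketch of the entry, with placeholder descriptions; note that these files can be overwritten by package upgrades:

        <!-- inside <layoutList> of /usr/share/X11/xkb/rules/evdev.xml -->
        <layout>
          <configItem>
            <name>cat</name>
            <shortDescription>cat</shortDescription>
            <description>My custom layout</description>
          </configItem>
          <variantList>
            <variant>
              <configItem>
                <name>dvorak</name>
                <description>My custom layout (Dvorak)</description>
              </configItem>
            </variant>
          </variantList>
        </layout>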


  • Replacing every 10th pipe with new line in unix

    - by user327958
    Let's say I have the fields name, number, id, and a data file that looks like: name1|number1|id1|name2|number2|id2... etc. I want to replace every 3rd pipe with a newline ('\n') so I get:

        name1|number1|id1
        name2|number2|id2

    I'm having no luck with awk or sed. I've tried the following, and variations of it:

        awk '/"\|"/{c++;if(c==10){sub("\|","\n");c=0}}1' inputfile.txt
        sed 's/"|"/"\n"/2' inputfile.txt

    It tells me:

        awk: syntax error near line 1
        awk: illegal statement near line 1
        awk: syntax error near line 1
        awk: bailing out near line 1

    Any help is greatly appreciated! EDIT: Thank you!
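
    A sketch of two ways that work on this layout, assuming the fields always come in complete name|number|id triples on one input line:

        # awk: treat '|' as the field separator and print three fields per line.
        awk -F'|' '{ for (i = 1; i <= NF; i += 3) print $i "|" $(i+1) "|" $(i+2) }' inputfile.txt

        # GNU sed: cut the line after every third field.
        sed 's/\(\([^|]*|\)\{2\}[^|]*\)|/\1\n/g' inputfile.txt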


  • Subversion Permission Denied when adding or committing

    - by Rungano
    Hi guys, I am running Subversion 1.4 on CentOS 5.2, and my clients are using TortoiseSVN to do their checkouts, commits, etc. I think I have a permissions problem, but I have made the folder accessible to everyone with mode 777 and still seem to be getting nowhere. Tortoise generates this error: "svn: Can't open file 'PATH/TO/MY/FILES/entries': Permission denied". Someone suggested it might be indexing software installed on the client machine, like Google Desktop. Any suggestions?


  • Bridging two wireless interfaces with brctl?

    - by AK_
    I have this topology:

        [internet]
            |
        [wlan0]-[host]-[wlan1] ----- [client-1]

    I tried to bridge wlan0 and wlan1, but it won't work with brctl. Strangely, when I issue this command:

        iw dev wlan0 set 4addr on

    it does let me add wlan0 to the bridge, BUT I lose all internet connectivity and I am unable to associate with the internet router. Can somebody please explain why that happens, and whether there is a way to get this done?
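
    For reference, a sketch of the usual recipe for bridging through a wireless client interface (interface names taken from the question, bridge name is a placeholder). The catch is that 4-address/WDS frames must also be accepted by the upstream access point; if it rejects them, the association drops, which would explain losing the uplink the moment 4addr is enabled. The fallback in that case is routing/NAT on the host instead of a bridge.

        # 4-address mode lets the client interface carry bridged MAC
        # addresses legally, which is why brctl refuses it otherwise.
        iw dev wlan0 set 4addr on
        brctl addbr br0
        brctl addif br0 wlan0
        brctl addif br0 wlan1
        ip link set br0 up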


  • How can I use apt-get to resolve package dependencies when there are multiple versions in the repository?

    - by user1165144
    I have a package, a-package.deb, which depends on b-package.deb in version 1.0. Everything works fine. But now a b-package in version 1.1 gets added to the repository. I would expect apt-get to install a-package together with version 1.0 of b-package. What really happens is that a-package won't get installed at all:

        # apt-get install a-package
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming. The following information may help
        to resolve the situation:

        The following packages have unmet dependencies:
         a-package : Depends: b-package (= 1.0) but 1.1 is to be installed
        E: Unable to correct problems, you have held broken packages.

    Is there a workaround to fix this behaviour? Is there other software that can handle the dependencies as defined?
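
    Two hedged workarounds that are commonly used (version numbers mirror the question): request the dependency at the exact version on the command line, or pin b-package so that 1.0 stays the candidate version, which is consistent with apt's resolver only considering the candidate version of each package.

        # Ask for the exact version of the dependency explicitly:
        apt-get install b-package=1.0 a-package

        # Or pin it, e.g. in /etc/apt/preferences.d/b-package:
        #   Package: b-package
        #   Pin: version 1.0
        #   Pin-Priority: 1001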


  • Change directory upwards to specified goal

    - by haakon
    I'm often deep inside a directory tree, moving up and down to perform various tasks. Is there anything more efficient than typing 'cd ../../../..'? I was thinking of something along these lines: if I'm in /foo/bar/baz/qux/quux/corge/grault and want to go to /foo/bar/baz, I want to do something like 'cdto baz'. I can write a bash script for this, but I'd first like to know whether it already exists in some form.
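
    If nothing packaged turns up, a minimal sketch of the idea as a shell function ('cdto' is just the hypothetical name from the question). It has to be a function or alias rather than a separate script, because a child process cannot change its parent shell's working directory.

        # Walk up from $PWD until a path component matching $1 is found.
        cdto() {
            local dir=$PWD
            while [ "$dir" != "/" ]; do
                dir=${dir%/*}
                [ -z "$dir" ] && dir=/
                if [ "${dir##*/}" = "$1" ] || [ "$dir" = "/$1" ]; then
                    cd "$dir" && return
                fi
            done
            echo "cdto: '$1' not found above $PWD" >&2
            return 1
        }

    Usage from /foo/bar/baz/qux/quux/corge/grault: cdto baz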


  • Provide credentials to process in a safe manner

    - by Erik Aigner
    On system startup I need to launch a process which requires credentials for other services (database etc.) to interact. I obviously don't want to store those on disk for security reasons. I'm trying to think of a way to provide those credentials to the process on launch - and on launch only. After that they should be only available to the process. Is this possible somehow? The bottom line is to make it as hard as possible for an intruder to get to those credentials.
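
    One pattern that gets used for this, sketched here with entirely hypothetical names: have the launcher obtain the secret once (from an operator prompt, or from some in-memory agent) and hand it to the child over stdin, so it never touches disk and never appears in argv, where any local user could read it with ps.

        #!/bin/sh
        # Hypothetical launcher: "mydaemon --password-stdin" stands in for a
        # service that can read its credential from standard input.
        printf 'Database password: ' >&2
        stty -echo; read -r DB_PASS; stty echo; echo >&2

        printf '%s\n' "$DB_PASS" | mydaemon --password-stdin &
        unset DB_PASS

    The obvious tension is that a prompt needs a human at boot time; fully unattended startup with the secret stored nowhere on the machine generally isn't achievable, only harder to exploit.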


  • Display issues on new OpenSUSE install

    - by user1319182
    I installed openSUSE 13.1 on my newly built PC, but the display is just horrible:

    - The edges of the screen are missing. For example, I can't see the top part at all, I can barely read the date, and I see "ctivities" instead of "Activities". However, when I take a screenshot everything seems to be fine (though the cursor doesn't appear in it).
    - The characters are sometimes too big and sometimes too small.
    - The cursor is huge.
    - And many other strange things.

    I took a few pictures. I'm using an Intel integrated GPU (HD 4400) and have applied all the available updates with YaST. Any idea how I can fix this? Thanks


  • List installed packages with the repo they came from?

    - by Sandra
    With rpm it is possible to list installed packages with additional info:

        rpm -qa --queryformat "%-35{NAME} %-35{DISTRIBUTION} %{VERSION}-%{RELEASE}\n" | sort -k 1,2 -t " " -i

    which will produce something like:

        xorg-x11-drv-ur98                   (none)                              1.1.0-1.1
        xorg-x11-drv-vesa                   CentOS-5                            1.3.0-8.3.el5
        xorg-x11-drv-vga                    (none)                              4.1.0-2.1
        xorg-x11-drv-via                    (none)                              0.2.1-9

    On an Ubuntu server I would like to list all installed packages and show which repository each one came from. Can that be done?
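
    A hedged sketch of one way to get close to that on the apt side: apt-cache policy reports, for each package, which repository the installed version comes from, so looping over the installed package list approximates the rpm output above (the exact layout of the policy output varies a little between apt versions).

        #!/bin/sh
        # Print each installed package with the origin apt-cache policy
        # reports for its installed (***) version.
        dpkg-query -W -f '${Package}\n' | while read -r pkg; do
            origin=$(apt-cache policy "$pkg" | awk '/\*\*\*/ { getline; print $2, $3; exit }')
            printf '%-35s %s\n' "$pkg" "$origin"
        done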

