Search Results

Search found 24623 results on 985 pages for 'linux'.

Page 426/985 | < Previous Page | 422 423 424 425 426 427 428 429 430 431 432 433  | Next Page >

  • Backup all plesk MySQL Databases to individual files

    - by Michael
    Hi, because I'm new to shell scripting I need a hand. I currently back up all my databases to a single file, which makes restores pretty hard. The second problem is that my MySQL password doesn't work because of a Plesk bug, so I get the password from "/etc/psa/.psa.shadow". Here is the command I use to back up all my databases to a single file: mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` --all-databases | bzip2 -c > /root/21.10.2013.sql.bz2 I found some scripts on the web that back up each database to an individual file, but I don't know how to adapt them to my situation. Here is an example script: for db in $(mysql -e 'show databases' -s --skip-column-names); do mysqldump $db | gzip > "/backups/mysqldump-$(hostname)-$db-$(date +%Y-%m-%d-%H.%M.%S).gz"; done Can someone help me make the script above work for my situation? Requirement: back up each database to an individual file, using the Plesk password location.
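
    A minimal sketch of a per-database loop, assuming the Plesk admin password can be read from /etc/psa/.psa.shadow exactly as in the command above; the output directory and filename pattern are only examples:

        #!/bin/bash
        # dump every database into its own bzip2-compressed file,
        # using the Plesk admin password file for authentication
        PASS=$(cat /etc/psa/.psa.shadow)
        OUTDIR=/root/db-backups                     # example location
        mkdir -p "$OUTDIR"
        for db in $(mysql -uadmin -p"$PASS" -e 'show databases' -s --skip-column-names); do
            mysqldump -uadmin -p"$PASS" "$db" | bzip2 -c > "$OUTDIR/$db-$(date +%Y-%m-%d).sql.bz2"
        done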

    Read the article

  • Maximise network transfer speed of various applications

    - by Alex
    When using nc, scp, or wget to transfer files between 2 machines on a dedicated 2Mbps link, I get speeds between 0.5 and 1 Mbps. However, when I use iperf -c 10.0.1.4 -t 20 -P 12 (for example) I can maximise the speed of the link (getting a stable 2Mbps). Is there a way to make single-stream transfers (such as those done by scp) utilise all or most of the link? Some kind of TCP settings, or iptables...?
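
    If the single streams are being capped by the TCP window rather than by the link itself, raising the kernel's buffer limits can help; a hedged sketch (the values are illustrative only and should be sized to the real bandwidth-delay product of the link):

        # append to /etc/sysctl.conf on both hosts, then run: sysctl -p
        net.core.rmem_max = 4194304
        net.core.wmem_max = 4194304
        net.ipv4.tcp_rmem = 4096 87380 4194304
        net.ipv4.tcp_wmem = 4096 65536 4194304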

    Read the article

  • What could cause a file system to spontaneously unmount or become invalid for a short time?

    - by Ichorus
    We've got DB2 LUW running on a RHEL box. We had a crash of DB2 and IBM came back and said that a file that DB2 was trying to access (through open64()) unmounted or became invalid. We have done nothing but restart the database and things seem to be running fine. Also, the file in question looks perfectly normal now: $ cd /db/log/TEAMS/tmsinst/NODE0000/TEAMS/T0000000/ $ ls -l total 557604 -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT $ file C0000000.CAT C0000000.CAT: data $ lsattr C0000000.CAT ------------- C0000000.CAT $ ls -l total 557604 -rw------- 1 tmsinst tmsinst 570425344 Jan 14 10:24 C0000000.CAT With those facts in hand (please correct me if I am mis-interpreting the data at hand) what could cause a file system to 'spontaneously unmount or become invalid for a short time'? What should my next step be? This is on Dell hardware and we ran their diagnostic tools against the hardware and it came back clean.
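
    As a next step, the kernel ring buffer and syslog usually record any remount, read-only fallback, or I/O error around the time of such an event; a quick sketch of what to search for (log locations are the RHEL defaults):

        dmesg | grep -iE 'remount|read-only|i/o error|ext3|xfs'
        grep -iE 'remount|read-only|i/o error' /var/log/messages*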

    Read the article

  • Running two Magentos installations, one of which has 3 stores set up as multi-store. Which server?

    - by Pedro Peixoto
    I want to run 4 Magento stores in 2 different installations. One is a standalone installation with 3 languages. The other is a multi-store with 3 different online stores on different domains. At the moment we have a VPS with 1GB of memory; would that be enough? I ask because I've finished the standalone store and already put it online, and the server is already running at 62% memory. Ideally this would be enough, as my company wouldn't like to move to a dedicated server (it involves costs). I'm sure I can try to optimize Magento to run on less memory (I'm expecting visits averaging 2000/day across all sites), so if I could have some tips on the best way to do that I'd appreciate it too.

    Read the article

  • Cloning to a smaller hard drive with DDRescue

    - by krebshack
    I am currently working with a 700 GB Seagate hard drive that's beginning to fail. I'll call this "SDB" from now on. I'd like to clone it while I'm still able to. However, the only hard drive that I have available is a 500 GB WD hard drive. I'll call this "SDC" from now on. The partition scheme on SDB is as follows: 9.77 GB is allocated to a recovery partition and the remaining 688.87 GB is allocated to a Windows partition. Both are formatted using NTFS. There is no partition scheme on SDC. I know how to clone one hard drive to another using DDRescue, but I've only done it using hard drives that are the same size. For your reference, I'll normally use the command "ddrescue -v -r 3 /dev/sdb /dev/sdc example.log". I'd like to know if it's possible to do this with DDRescue. I've read the manual from GNU (http://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html) and I haven't seen anything indicating that it is possible. I'm just looking for confirmation that this impression is correct. If it's not possible, then it would be helpful if any of y'all could make some workaround suggestions, but please don't feel obligated to do that. I don't want to have my one thread bogged down with too many questions.
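
    ddrescue copies sectors and has no notion of shrinking, so a whole-disk clone of a 700 GB drive cannot fit on a 500 GB one. One possible workaround, sketched below with device names and sizes as assumptions, is to shrink the NTFS filesystem first (risky on a failing drive, and only feasible if the used data fits in the smaller size) and then rescue partition by partition into a matching layout on SDC:

        # shrink the big NTFS filesystem on the source (dangerous on a dying disk)
        ntfsresize --size 450G /dev/sdb2
        # create a ~10G and a ~450G partition on /dev/sdc with fdisk/parted, then:
        ddrescue -v -r 3 /dev/sdb1 /dev/sdc1 rescue-sdb1.log
        ddrescue -v -r 3 /dev/sdb2 /dev/sdc2 rescue-sdb2.log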

    Read the article

  • How do I set default group ownership for files in a directory?

    - by tnichols
    I am running a CakePHP webapp on a Linode LAMP stack. I am finding that my temp files are created with root:root ownership, but the webapp runs with Apache's permissions (www-data). This causes warnings whenever a new file is created, because it is not writable by user www-data. How do I change the default ownership to www-data for any new files created in the temp folder? Thanks for your help!
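
    You can't force root-created files to be owned by www-data, but the setgid bit plus a default ACL makes new files in the temp tree group-owned by www-data and group-writable, which removes the warnings; a sketch assuming the CakePHP temp directory lives at /var/www/myapp/app/tmp (adjust the path):

        chgrp -R www-data /var/www/myapp/app/tmp
        chmod -R g+rw /var/www/myapp/app/tmp
        # setgid on every directory so new files inherit the www-data group
        find /var/www/myapp/app/tmp -type d -exec chmod g+s {} \;
        # optional: default ACL so files created later are group-writable too
        find /var/www/myapp/app/tmp -type d -exec setfacl -d -m g:www-data:rwX {} \;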

    Read the article

  • When using gt5 in my home directory I get a blank page.

    - by MT
    When using gt5 in various directories on my system (including my home directory) I get blank results. If I limit the max-depth enough, I get results. For example, in my home directory 'gt5 --max-depth 2' produces a listing, while 'gt5 --max-depth 3' produces a blank page. I've noticed that the temporary HTML file that gets created in /tmp (such as '/tmp/gt5.9035.kJVM08Y9/gt5.html') is a zero-byte file. I can successfully run du in the same directory (which is what I thought gt5 was using), so I'm not sure what to check.
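
    Since gt5 builds its report from du output, a direct du with the same depth is a useful cross-check (and a stop-gap while gt5 misbehaves); a sketch:

        du -h --max-depth=3 ~ 2>/dev/null | sort -hr | head -40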

    Read the article

  • How to filter Varnish logs based on XID?

    - by Martijn Heemels
    I'm running into infrequent 503 errors which appear hard to pinpoint. Varnishlog is driving me mad, since I can't seem to get the information I want out of it. I'd like to see both the client- and backend-communications as seen by Varnish. I thought the XID number, which is logged on Varnish's default error page, would allow me to filter the exact request out of the logging buffer. However, no combination of varnishlog parameters gives me the output I need. The following only shows the client-side communication: varnishlog -d -c -m ReqStart:1427305652 while this only shows the resulting backend communication: varnishlog -d -b -m TxHeader:1427305652 Is there a one-liner to show the entire request?
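
    Short of a true one-liner, one option grounded in the two commands above is simply to capture both sides for the same XID into files and read them together; a sketch using the example XID:

        varnishlog -d -c -m ReqStart:1427305652 > /tmp/xid.client
        varnishlog -d -b -m TxHeader:1427305652 > /tmp/xid.backend
        cat /tmp/xid.client /tmp/xid.backend | less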

    Read the article

  • Multiple user directories on EC2

    - by Joseph
    I'm trying to set up multiple user directories on EC2 running Ubuntu, but I'm not sure how to set it up correctly so that I can serve files in the following format: http://<ec2 ip address>/user_1/public_html/file1.html and http://<ec2 ip address>/user_2/public_html/file3.html and so on for every user that I add. I tried looking for the httpd.conf file, but I couldn't find it; I only found apache2.conf. Thank you guys.
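
    On Ubuntu the configuration is split under /etc/apache2/ instead of a single httpd.conf. A hedged sketch that serves the URL layout above, assuming each user's files live under /var/www/user_1/public_html and so on (the paths, the site file name, and the Apache 2.2 access syntax are assumptions):

        # /etc/apache2/sites-available/users
        # enable with: a2ensite users && service apache2 reload
        <VirtualHost *:80>
            DocumentRoot /var/www
            <Directory /var/www/*/public_html>
                Options Indexes FollowSymLinks
                # Apache 2.2 access control; on 2.4 use "Require all granted" instead
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>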

    Read the article

  • I wanna save some terminal commands in a file

    - by Jakob Abfalter
    I am using openSUSE 12.3. What I want to do is create a link on my desktop for some specific terminal commands. The background is that I do some backups via rsync and don't want to type the commands anew every time. I also don't want to use a cron job, since my computer isn't running all the time. Perfect would be some desktop icons which, when clicked, execute the command(s). Could somebody tell me how to do this?
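
    A common way to do this is to put the rsync commands into a small script and point a .desktop launcher at it; a sketch (the file locations, home directory, and the rsync line are placeholders, and Terminal=true keeps the output visible while it runs):

        #!/bin/bash
        # ~/bin/backup.sh -- example path; make it executable with: chmod +x ~/bin/backup.sh
        rsync -av --delete "$HOME/Documents/" /media/backupdisk/Documents/

        # ~/Desktop/backup.desktop -- double-clicking this icon runs the script above
        [Desktop Entry]
        Type=Application
        Name=Run backup
        Exec=/home/jakob/bin/backup.sh
        Terminal=true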

    Read the article

  • SSMTP to forward root@localhost mail

    - by Redconnection
    I would like to forward mail that gets sent to root@localhost on multiple servers to our company admin account (e-mail is hosted on Gmail). I have installed ssmtp on CentOS 5.5 via yum and configured it. I've also changed the last line in /etc/aliases to reflect where mail to root should go. I've then tried sending mail to root - this gets delivered without a problem (mail -v root). I've also tried sending mail to root@localhost - this is not delivered to the specified Gmail account.
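
    One thing to check: ssmtp itself never reads /etc/aliases; the address that receives root's mail is the root= setting in /etc/ssmtp/ssmtp.conf. A sketch with placeholder values (the Gmail relay details are assumptions to fill in):

        # /etc/ssmtp/ssmtp.conf
        # all mail for root (and other local accounts) goes here
        root=admin@example.com
        # relay through Gmail
        mailhub=smtp.gmail.com:587
        AuthUser=admin@example.com
        AuthPass=your-password-here
        UseSTARTTLS=YES
        rewriteDomain=example.com
        hostname=server1.example.com
        FromLineOverride=YES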

    Read the article

  • Batch converting video from avc1 to xvid

    - by Tommy Brunn
    I need a way to batch convert 720p video files from avc1 to xvid in Ubuntu 10.04. I'm not terribly concerned about file size, but I do wish to retain the picture quality as much as possible. I believe the audio is encoded as aac, which is fine for my purposes. What would be the best and easiest way to do this? I've tried using Handbrake. During my first attempt, I had it using ffmpeg to convert to MPEG-4, but that just gave me a super-low quality video at twice the file size. Trying h.264 now, so we'll see how that works out. But just in case it doesn't pan out so well, what other ways do you recommend? I was thinking I'd write a bash script to reencode the files one by one, but the problem is that I have very little knowledge about codecs and containers and whatnot - so I wouldn't know what parameters I would pass ffmpeg/mencoder.
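
    In case the h.264 attempt doesn't pan out either, here is a hedged sketch of a batch loop using ffmpeg's Xvid encoder. It assumes an ffmpeg build with libxvid; qscale 4 on the 1-31 scale (lower is better) is only a starting point, and AAC inside AVI is non-standard, so re-encode the audio with -acodec libmp3lame if a player complains:

        #!/bin/bash
        # re-encode every .mkv/.mp4 in the current directory to Xvid in an AVI container
        for f in *.mkv *.mp4; do
            [ -e "$f" ] || continue
            ffmpeg -i "$f" -vcodec libxvid -qscale 4 -acodec copy "${f%.*}-xvid.avi"
        done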

    Read the article

  • How to write rules for persistent net names?

    - by ndemou
    I know that a process generates persistent network card names based on rules found in /lib/udev/rules.d/75-persistent-net-generator.rules. I also know how to completely disable this process with a simple echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules but I've read that I "could also write my own rules file to give the interface a name — the persistent rules generator ignores the interface if a name has already been set" (/etc/udev/rules.d/README confirms that this is possible). Do you have any pointers to documentation about how to write such rules? (I mostly care about Debian/Ubuntu and a bit less for CentOS) As a specific example of why I want to write custom rules: I have two identical servers with one onboard LAN and one PCI LAN. In case of HW failure I want to be able to move disks from HW#1 to HW#2 and it's important for eth0 to continue pointing to the onboard card and eth1 to the PCI card (no one wants to mess with cabling in the middle of a HW failure panic). My current workaround works but is a lot of work[1] so I wonder if writing custom rules would allow me to express something simple like this: cards with MAC A or B should be named eth0 cards with MAC C or D should be named eth1 follow default naming scheme for anything else [1] install the OS in HW#1 and keep a copy of /etc/udev/rules.d/70-persistent-net.rules. Move the disks to HW#2 and keep a second copy of the same file. Concatenate the two copies and manually edit the NAME="ethX" part. Replace /etc/udev/rules.d/70-persistent-net.rules with my version. Finally disable auto-creation of a new 70-persistent-net.rules using echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules
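
    The generated 70-persistent-net.rules file doubles as documentation for the format, so a hand-maintained file in the same style expresses exactly that mapping; a sketch with placeholder MAC addresses (substitute the real onboard and PCI MACs of both servers):

        # /etc/udev/rules.d/70-persistent-net.rules (hand-written, generator disabled)
        # onboard NICs of HW#1 and HW#2 -> always eth0
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:00:00:01", KERNEL=="eth*", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:00:00:02", KERNEL=="eth*", NAME="eth0"
        # PCI NICs of HW#1 and HW#2 -> always eth1
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:00:01:01", KERNEL=="eth*", NAME="eth1"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="aa:bb:cc:00:01:02", KERNEL=="eth*", NAME="eth1"
        # any other card falls through to the default naming scheme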

    Read the article

  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example: rsync -ax \ --partial --delete --delete-excluded --inplace \ --exclude-from=/tmp/temp_excludes \ --link-dest=/Volumes/Backup/current \ /Users /Volumes/Backup/2012-06-25 This works very well as long as I start the process from my normal user account. However, as soon as I start the process using sudo it behaves erratically, meaning that rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E, in conjunction with making sure that my sudoers file has the corresponding option set. That didn't work either. So the question is: how can I run rsync using sudo? While the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.
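
    One way to see why the root-run pass refuses to hard-link is to let rsync explain itself: with --itemize-changes on a dry run it prints, per file, which attribute (owner, group, permissions, times) it considers different from the --link-dest copy. A sketch:

        sudo rsync -axn --itemize-changes \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/test-run | head -50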

    Read the article

  • Find largest directories/files recursively

    - by Robert Munteanu
    I'm looking for a script/program which will display the top x largest directories/files and then descend into those folders and display the x largest directories/files for a configurable depth. 231MB bin - 220MB ls - 190MB dir - 15MB def - 3MB lpr - 10MB asd - 1MB link How can I do that?
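
    A hedged sketch of the plain GNU du approach, which prints per-directory totals largest-first down to a configurable depth; ncdu offers the same view interactively if installing a package is acceptable:

        du -h --max-depth=2 /path/to/dir 2>/dev/null | sort -hr | head -25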

    Read the article

  • Gnome, open with, custom command, filename reference

    - by Tergiver
    I want to execute this custom command on a file from the Gnome File Browser: hexdump -C $f > $f.dump That would create a hexdump of the file with the file's name + .dump in the directory that the file exists in. When I say $f above I mean a placeholder that would be substituted with the name of the file that was opened. So I've tried "Open with", "Use a custom command". I can't get it to work. I've tried a number of symbols in place of $f. Is it even possible? Before you suggest getting a GUI hexdump program, this is just one example. I have the need to do this sort of thing for many terminal-type programs. Am I the only person on Earth who wishes for a hybrid File-Browser-slash-Command-Terminal? That would be a file browser which contained a terminal pane whose current directory always matched that of the file browser. One could execute shell commands in the context of what they were viewing in the browser.
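
    The "Open with" dialog doesn't perform shell substitution or output redirection, so the usual answer is a tiny wrapper script dropped into Nautilus' scripts directory, where it also appears in the right-click "Scripts" menu; a sketch (the GNOME 2 location ~/.gnome2/nautilus-scripts is assumed, newer releases use ~/.local/share/nautilus/scripts):

        #!/bin/sh
        # save as ~/.gnome2/nautilus-scripts/hexdump-to-file and make it executable;
        # Nautilus passes each selected file as an argument
        for f in "$@"; do
            hexdump -C "$f" > "$f.dump"
        done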

    Read the article

  • grub2 error: out of disk

    - by Carl Smotricz
    I'm trying to make a 250G USB hard disk Ubuntu-bootable on a Compaq nc6220 laptop. I've removed all other disks, so /dev/sda (the USB disk) is the only disk other than CDROM. I installed Ubuntu 9.10 to this disk from the live CD, putting the bootloader on /dev/sda . The default system couldn't be booted, and nothing I did in the Grub menu/cmdline helped. So I chrooted onto the disk and did grub-install /dev/sda. That seemed to work fine, but Grub (1.97 beta 4) keeps coming up with error: out of disk Even when I drop to the command line to do something simple like ls or help, it's always the same error message. Any hints for resolving this, please?

    Read the article

  • How do I install yum on Redhat Enterprise 4?

    - by Bob Cross
    For historical reasons, one of the machines that I manage has a Redhat Enterprise 4 boot disk (among others). Every now and then, we have to boot into RHEL4 to bring up some of the legacy software that we support and connect to. Since it's a fringe system, the Redhat support has long since lapsed and I can't convince myself that it would be worth paying just to get RPMs that I can go and get for myself. That said, the default RHEL tools are heavily biased against letting you do exactly that. I would like to install yum and use that as my package discovery and installation. So, is there an installation guide to integrating yum with an older RHEL 4 system?
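
    One common stop-gap, sketched below, is to install the yum package set that CentOS 4 shipped (RHEL 4 and CentOS 4 are package-compatible) and point it at the CentOS vault; the exact RPM names, versions, and the vault URL are assumptions to verify before use:

        # install yum and its python dependencies from a CentOS 4 mirror
        rpm -Uvh yum-*.noarch.rpm python-elementtree-*.rpm python-sqlite-*.rpm \
                 python-urlgrabber-*.noarch.rpm sqlite-*.rpm

        # /etc/yum.repos.d/centos-vault.repo
        [c4-vault-base]
        name=CentOS-4 vault - Base
        baseurl=http://vault.centos.org/4.9/os/$basearch/
        gpgcheck=0
        enabled=1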

    Read the article

  • what is the meaning of *this* crontab setting?

    - by aXqd
    * */1 * * * sh foo.sh I found this setting on one production machine, and foo.sh was executed every minute. I am guessing that the original author of this setting wanted it to be executed every hour. I cannot find the official meaning of this setting in the crontab man page, so please help. UPDATE: I extracted these logs from that machine, but I cannot work out the pattern from them. 2013-06-29 20:47:01 2013-06-29 20:50:02 2013-06-29 20:51:01 2013-06-29 20:53:01 2013-06-29 20:54:01 2013-06-29 20:57:01 2013-06-29 20:58:01 2013-06-29 21:00:01 2013-06-29 21:05:02 2013-06-29 21:10:02
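
    For what it's worth, the first field of a crontab entry is the minute, so "* */1 * * *" means "every minute of every hour": */1 in the hour field matches every hour, and the leading * matches every minute. If hourly execution was the intent, the entry would be:

        # run foo.sh once per hour, at minute 0
        0 * * * * sh foo.sh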

    Read the article

  • How to Exclude an URL for Apache Mod_proxy?

    - by Mughil
    We have two Apache servers as front-ends and 4 Tomcat servers as back-ends, configured using the mod_proxy module as a load balancer. Now we want to exclude a single Tomcat URL from the mod_proxy load balancer. Is there any way or rule to exclude it? Proxy balancer setting: <Proxy balancer://backend-cluster1> BalancerMember http://10.0.0.1:8080 loadfactor=1 route=test1 retry=10 BalancerMember http://10.0.0.2:8080 loadfactor=1 route=test2 retry=10 </Proxy>
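
    If "exclude a URL" means that one path should bypass the balancer entirely, mod_proxy's exclamation-mark target does that: a ProxyPass exclusion placed before the balancer mapping leaves that path unproxied. A sketch (the /excluded-app path is a placeholder, and the balancer ProxyPass lines are assumed to match your existing setup):

        # order matters: the exclusion must come before the broader ProxyPass
        ProxyPass /excluded-app !

        ProxyPass / balancer://backend-cluster1/
        ProxyPassReverse / balancer://backend-cluster1/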

    Read the article

  • ip route add HOMEIP via SERVERIP disconnects me from ssh

    - by Arya
    I want to use a VPN connection on my Debian server, but I get disconnected from SSH if I connect to the VPN. I thought that by using "ip route add" I could prevent being disconnected from my server, so that it would continue to use the main connection for communication between my computer and the server, and the VPN for communication with other IPs. This is the command I use: ip route add PUBLICHOMEIP via PUBLICSERVERIP But I get disconnected after the "ip route add" command too. Am I making a mistake anywhere?
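
    The route needs to send traffic for the home IP through the server's existing default gateway, not "via" the server's own public address; added before the VPN comes up, that pins the SSH session to the normal uplink. A sketch (the interface name and the gateway-discovery one-liner are assumptions):

        # find the current default gateway, then route the home IP through it
        GW=$(ip route | awk '/^default/ {print $3}')
        ip route add PUBLICHOMEIP via "$GW" dev eth0
        # now bring up the VPN; the SSH session keeps using the original gateway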

    Read the article

  • Sound plays on headphones and speakers with Lenovo ThinkPad L512 + Ubuntu 10

    - by Oscar Godson
    The only thing really missing from this install is this issue with the sound. I've searched all over the forums and I found one thing where you get the model and codecs and write them to a file; however, I can't seem to find what my "model" is, because none of the postings have anything about Lenovo laptops. Here is the command they all asked for, and its output: cat /proc/asound/card0/codec#* | grep Codec Codec: Realtek ALC269 Codec: Intel G45 DEVIBX With that info, how do I get the model, and how do I get my speakers to stop playing when headphones are plugged in? Also, I don't have any software like PulseAudio installed, so it's not that. Thanks so much to whoever can answer this... The Ubuntu forums are nearly useless... I've never gotten a correct answer back on that site.
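
    With the codec identified as a Realtek ALC269, the usual knob is the snd-hda-intel "model" module option; the valid strings per codec are listed in the kernel's Documentation/sound/alsa/HD-Audio-Models.txt. A hedged sketch (the model value below is only an example to experiment with, not a confirmed fix for this laptop):

        # /etc/modprobe.d/alsa-base.conf -- append, then reboot or reload snd-hda-intel
        options snd-hda-intel model=laptop-dmic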

    Read the article

  • What is the best vfat driver for FUSE?

    - by Vi
    The FUSE filesystem list shows FuseFat and FatFuse. One is a 404, the other is old, doesn't build, and probably depends on glib. Right now I'm using mountlo for the task (mounting USB drives in a generic way without root access or suid tricks, except for fusermount itself), but it looks too big for such a task. Is there a good vfat FUSE driver?

    Read the article

< Previous Page | 422 423 424 425 426 427 428 429 430 431 432 433  | Next Page >