Search Results

Search found 24814 results on 993 pages for 'linux distro'.

Page 428/993

  • How to write rules for persistent net names?

    - by ndemou
    I know that a process generates persistent network card names based on rules found in /lib/udev/rules.d/75-persistent-net-generator.rules. I also know how to completely disable this process with a simple echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules, but I've read that I "could also write my own rules file to give the interface a name — the persistent rules generator ignores the interface if a name has already been set" (/etc/udev/rules.d/README confirms that this is possible). Do you have any pointers to documentation on how to write such rules? (I mostly care about Debian/Ubuntu and a bit less about CentOS.) As a specific example of why I want custom rules: I have two identical servers with one onboard LAN and one PCI LAN. In case of hardware failure I want to be able to move the disks from HW#1 to HW#2, and it's important for eth0 to keep pointing at the onboard card and eth1 at the PCI card (no one wants to mess with cabling in the middle of a hardware-failure panic). My current workaround works but is a lot of work[1], so I wonder if custom rules would let me express something as simple as: cards with MAC A or B should be named eth0; cards with MAC C or D should be named eth1; everything else follows the default naming scheme. [1] Install the OS on HW#1 and keep a copy of /etc/udev/rules.d/70-persistent-net.rules. Move the disks to HW#2 and keep a second copy of the same file. Concatenate the two copies and manually edit the NAME="ethX" parts. Replace /etc/udev/rules.d/70-persistent-net.rules with my version. Finally, disable auto-creation of a new 70-persistent-net.rules with echo '#' > /etc/udev/rules.d/75-persistent-net-generator.rules.
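
    A minimal custom rules file along those lines could look like the sketch below. It reuses the match keys the generator itself writes into 70-persistent-net.rules; the MAC addresses are placeholders, and any interface they don't match falls through to the default naming scheme:

        # /etc/udev/rules.d/70-persistent-net.rules
        # Onboard NICs of HW#1 and HW#2 -> eth0
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:00:aa:aa:aa", NAME="eth0"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:00:bb:bb:bb", NAME="eth0"
        # PCI NICs of HW#1 and HW#2 -> eth1
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:00:cc:cc:cc", NAME="eth1"
        SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:00:dd:dd:dd", NAME="eth1"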

    Read the article

  • Cloning to a smaller hard drive with DDRescue

    - by krebshack
    I am currently working with a 700 GB Seagate hard drive that's beginning to fail. I'll call this "SDB" from now on. I'd like to clone it while I'm still able to. However, the only hard drive that I have available is a 500 GB WD hard drive. I'll call this "SDC" from now on. The partition scheme on SDB is as follows: 9.77 GB is allocated to a recovery partition and the remaining 688.87 GB is allocated to a Windows partition. Both are formatted using NTFS. There is no partition scheme on SDC. I know how to clone one hard drive to another using DDRescue, but I've only done it using hard drives that are the same size. For your reference, I'll normally use the command "ddrescue -v -r 3 /dev/sdb /dev/sdc example.log". I'd like to know if it's possible to do this with DDRescue. I've read the manual from GNU (http://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html) and I haven't seen anything indicating that it is possible. I'm just looking for some confirmation that this is a correct impression. If it's not possible, then it would be helpful if any of y'all would be able to make some workaround suggestions. But please don't feel obligated to do that. I don't want to have my one thread bogged down with too many questions.
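
    That impression is right for a whole-disk clone: ddrescue copies raw blocks and has no notion of shrinking filesystems, so 700 GB cannot land on a 500 GB target as-is. One possible workaround, sketched under the assumptions that the Windows partition holds well under 450 GB of actual data and that the failing drive will tolerate the writes ntfsresize makes:

        # 1. Shrink the NTFS filesystem on the source (risky on a failing disk;
        #    read the ntfsresize man page first and salvage what you can beforehand)
        ntfsresize --size 450G /dev/sdb2

        # 2. Partition /dev/sdc with partitions at least as large as the
        #    (shrunken) sources, then rescue partition by partition,
        #    each with its own log file
        ddrescue -v -r 3 /dev/sdb1 /dev/sdc1 rescue-sdb1.log
        ddrescue -v -r 3 /dev/sdb2 /dev/sdc2 rescue-sdb2.log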

    Read the article

  • What is the best way to handle the multitude of different logs created all around the place?

    - by Low Kian Seong
    I run a few applications which create their own logs. I also run cron scripts on the same server to import data for my app. When these cron jobs error out, the default behaviour is to email the user that runs the job. There are just too many places I need to check logs and mail for things that might have gone wrong. My question is: what is the best way to handle this? Or better yet, is there a log-parser application that will go through all the system logs and flag when something really goes wrong, instead of me having to go through them daily?
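
    One common pattern, sketched here with rsyslog (file names and the log host are placeholders): funnel everything, including cron job output, through syslog to one place, then let a summarizer such as logwatch or logcheck send a single digest:

        # /etc/rsyslog.d/central.conf: forward everything to one log host
        # (@@ = TCP, a single @ = UDP)
        *.*  @@loghost.example.com:514

        # crontab: send a job's output to syslog instead of mailing the user
        0 2 * * *  /usr/local/bin/import-data 2>&1 | logger -t import-data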

    Read the article

  • Batch converting video from avc1 to xvid

    - by Tommy Brunn
    I need a way to batch convert 720p video files from avc1 to xvid in Ubuntu 10.04. I'm not terribly concerned about file size, but I do wish to retain the picture quality as much as possible. I believe the audio is encoded as AAC, which is fine for my purposes. What would be the best and easiest way to do this? I've tried using Handbrake. During my first attempt, I had it use ffmpeg to convert to MPEG-4, but that just gave me a super-low-quality video at twice the file size. I'm trying h.264 now, so we'll see how that works out. But just in case it doesn't pan out so well, what other ways do you recommend? I was thinking I'd write a bash script to re-encode the files one by one, but the problem is that I have very little knowledge about codecs and containers and whatnot, so I wouldn't know what parameters to pass to ffmpeg/mencoder.
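
    In case the Handbrake route doesn't pan out, here is a minimal batch sketch using mencoder (assuming it is installed; fixed_quant=4 is a commonly cited high-quality setting, with lower values meaning better quality and bigger files):

        #!/bin/bash
        # Re-encode every matching source in the current directory to Xvid,
        # copying the existing AAC audio track unchanged.
        for f in *.mkv *.mp4; do
            [ -e "$f" ] || continue   # skip unmatched glob patterns
            mencoder "$f" -ovc xvid -xvidencopts fixed_quant=4 \
                     -oac copy -o "${f%.*}.xvid.avi"
        done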

    Read the article

  • How do I install yum on Red Hat Enterprise Linux 4?

    - by Bob Cross
    For historical reasons, one of the machines that I manage has a Red Hat Enterprise 4 boot disk (among others). Every now and then we have to boot into RHEL 4 to bring up some of the legacy software that we support and connect to. Since it's a fringe system, the Red Hat support has long since lapsed, and I can't convince myself that it would be worth paying just to get RPMs that I can go and get for myself. That said, the default RHEL tools are heavily biased against letting you do exactly that. I would like to install yum and use it for package discovery and installation. So, is there an installation guide for integrating yum with an older RHEL 4 system?
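
    CentOS 4 was built to be binary-compatible with RHEL 4, so one hedged route (the package list and vault URL are from memory and worth double-checking) is to pull yum and its dependencies from the CentOS 4 archive at vault.centos.org and install them with plain rpm:

        # install yum and its dependencies by hand
        rpm -Uvh yum-*.rpm sqlite-*.rpm python-sqlite-*.rpm \
                 python-elementtree-*.rpm python-urlgrabber-*.rpm

        # /etc/yum.repos.d/centos4.repo
        [base]
        name=CentOS-4 Base (vault)
        baseurl=http://vault.centos.org/4.9/os/i386/
        gpgcheck=0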

    Read the article

  • Apache configuration: does FollowSymLinks apply to PHP scripts?

    - by Josh
    I have Options set to None for my webroot directory. I also have a symlink /var/coderoot -> /var/webroot/coderoot. In a PHP script I can do include("/var/coderoot/file"); and it works fine regardless of the option (yes, I save and restart Apache). Does FollowSymLinks only apply to symlinks used in a certain way? Is there a performance loss when using include() with a symlink?
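
    A likely explanation, offered as a sketch rather than a definitive answer: Options (including FollowSymLinks) is consulted when Apache itself maps a URL to a file on disk. PHP's include() is an ordinary filesystem call made inside the PHP process, which Apache's Options never see, so it works whatever the setting. The cost of including through a symlink is one extra path resolution in the kernel, which is normally negligible. Where the option does matter is direct URL access through that directory:

        # FollowSymLinks affects URL-to-file mapping, e.g. a request for
        # http://example.com/coderoot/file served through this tree:
        <Directory /var/webroot>
            Options FollowSymLinks
        </Directory>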

    Read the article

  • Gentoo+urxvt+terminus: How do I change font version?

    - by gaidal
    In my Debian installation I can type extended characters such as åäö by default using the Terminus font; however, in Gentoo I can't get this to work so far. Nothing happens when I hit those keys, like in this thread: Missing glyphs in Terminus font, how to setup a fallback font? But in this case I know Terminus supports those characters in at least some of its versions, since it works in Debian. So what I want is to find out how to see, and choose, which of the many different Terminus font files is being used. I set the font the same way on both Debian and Gentoo, using URxvt*font: xft:terminus:size=xx in .Xdefaults. Both systems use en_US.UTF-8 as the default locale.
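
    fontconfig can show which file actually backs the name "terminus" and which codepoints it covers; comparing this output on Debian and Gentoo should reveal which version each system picks:

        # every installed file that provides Terminus
        fc-list | grep -i terminus
        # the exact file xft resolves "terminus" to, plus its character set
        fc-match --verbose terminus | grep -iE 'file|charset'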

    Read the article

  • Are PHP session files ever deleted?

    - by GetFree
    I see there are thousands of files in my /tmp directory (a CentOS machine), and almost all of them are PHP session files. I'm worried about the possible impact this might have on my system. Are those files ever deleted, either by the OS, Apache, or PHP? Or do I have to take care of it myself?
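
    PHP's own garbage collector is normally what removes them: on roughly gc_probability/gc_divisor of session starts, it deletes session files older than session.gc_maxlifetime. (Some distros, Debian for example, set the probability to 0 and run a cron job instead.) A sketch of both approaches; the numbers are only examples:

        ; php.ini: run the GC on ~1% of session starts, expiring files
        ; older than 1440 seconds (24 minutes)
        session.gc_probability = 1
        session.gc_divisor     = 100
        session.gc_maxlifetime = 1440

        # or clean up explicitly from cron (assumes the default sess_* names)
        find /tmp -name 'sess_*' -type f -mmin +30 -delete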

    Read the article

  • Is there a way to prevent NetworkManager from storing the password for a wireless network?

    - by tolomea
    Our corporate wireless network uses continuously changing passwords with RSA tokens, so every time we need to connect to the wireless we need to enter a new password off the RSA token. For extra fun, using the wrong password a couple of times in a row causes the user's account to be locked. NetworkManager automatically stores and reuses the password, with the net result that it is constantly getting my account locked. Is there some way to prevent it from storing my password for that network? Or perhaps some way to get the GNOME keyring not to store it?
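
    I'm not certain of the exact knob for every NetworkManager version, but newer releases attach "secret flags" to each stored secret, and the value 2 means "do not save, always ask". As a sketch, for an 802.1X network the system connection keyfile might carry (the section and key names are my assumption for a recent NetworkManager):

        # /etc/NetworkManager/system-connections/corp-wifi (keyfile format)
        [802-1x]
        eap=peap
        phase2-auth=gtc
        # secret flags: 0 = saved system-wide, 1 = saved in the user's keyring,
        # 2 = not saved, prompt every time
        password-flags=2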

    Read the article

  • Tmux installation problems

    - by RayQuang
    Hi, I am trying to install the terminal multiplexer tmux on my Debian Lenny server so that I can have multiple terminals through ssh. However, I have had a lot of difficulty installing it, both from the Debian package and by compiling it. When I try the package it complains about the wrong version of libc6, and when I compile it I get the following error:

        server.o: In function `server_start':
        server.c:(.text+0x273): undefined reference to `event_reinit'
        collect2: ld returned 1 exit status
        make: *** [tmux] Error 1

    Help would be very much appreciated, RayQuang
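
    The undefined event_reinit symbol points at libevent rather than tmux: Lenny's libevent predates that function (it arrived in the 1.4 series), so the link fails. A sketch of building a newer libevent locally and pointing the tmux build at it (paths are examples; depending on the tmux version the flags go to its configure script or straight onto the make command line):

        # build a recent libevent into $HOME/local (tarball already unpacked)
        cd libevent-*/ && ./configure --prefix=$HOME/local && make && make install

        # then rebuild tmux against it
        cd ../tmux-*/
        CFLAGS="-I$HOME/local/include" LDFLAGS="-L$HOME/local/lib" make
        LD_LIBRARY_PATH=$HOME/local/lib ./tmux -V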

    Read the article

  • Wireless performance on Ubuntu 9.10

    - by Brian
    Is there something I should do to my networking configuration in Ubuntu to improve the performance of my wireless connection? I'm on a netbook dual-booting Windows 7 and Ubuntu 9.10. I pick up a much stronger wifi signal in Windows than in Ubuntu. As soon as I boot Ubuntu, it connects to the network with a stronger signal and then loses the signal very quickly. After it dies, I can't reconnect. I've tested this on a couple of different networks with the same outcome.
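
    Symptoms like this are usually driver or firmware trouble rather than network configuration, so it's worth identifying the chipset first; a quick diagnostic sketch (interface names vary):

        # which chipset, and which driver is bound to it?
        sudo lshw -C network | grep -iE 'product|driver'
        # firmware complaints or repeated disconnects?
        dmesg | grep -iE 'firmware|wlan|deauth'
        # current signal level and power-management state
        iwconfig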

    Read the article

  • How to create a boot CD

    - by joe
    How do I create a boot CD for a dual-boot system? Say I have Windows and Ubuntu, with GRUB as the boot loader. I want to create a boot CD that offers the same dual-boot menu.
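
    One way, adapted from the GRUB legacy manual, is an El Torito ISO built around stage2_eltorito; a sketch (the stage file's location varies by distro, and the copied menu.lst may need its root entries adjusted):

        mkdir -p iso/boot/grub
        cp /usr/lib/grub/*/stage2_eltorito iso/boot/grub/
        cp /boot/grub/menu.lst iso/boot/grub/   # reuse the existing dual-boot menu
        mkisofs -R -b boot/grub/stage2_eltorito -no-emul-boot \
                -boot-load-size 4 -boot-info-table -o grub-boot.iso iso/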

    Read the article

  • Uploads fail with shorewall enabled

    - by JamesArmes
    I have an Ubuntu 8.04 server with shorewall 4.0.6 installed. When I try to upload files using FTP, SCP, or cURL, the upload stalls almost immediately and eventually times out. If I turn off shorewall then the uploads work fine. I don't have any rules that specifically allow FTP and I'm not too concerned with it, but I do need to be able to upload via 22 (SCP) and 80 & 443 (cURL). This is what my rules look like:

        COMMENT Allow Server to respond to any web (80) and SSL (443) requests
        ACCEPT      net  $FW  tcp  80
        ACCEPT      $FW  net  tcp  80
        ACCEPT      net  $FW  tcp  443
        ACCEPT      $FW  net  tcp  443
        COMMENT Allow Server to respond to SNMPD (161) requests
        ACCEPT      net  $FW  udp  161
        COMMENT Allow Server to respond to MySQL (3306) requests (for MySQL Graphing)
        ACCEPT      net  $FW  tcp  3306
        COMMENT Allow Server to respond to any SSH connection attempts, and to SSH out.
        SSH/ACCEPT  net  $FW
        SSH/ACCEPT  $FW  net
        COMMENT Allow Server to make DNS Requests out.
        DNS/ACCEPT  $FW  net
        COMMENT Default "close" anything else.
        Ping/REJECT net  $FW
        ACCEPT      $FW  net  icmp
        #LAST LINE -- ADD YOUR ENTRIES BEFORE THIS ONE -- DO NOT REMOVE

    I expected the top four ACCEPT lines to allow inbound and outbound traffic over 80 and 443, and I expected the two SSH/ACCEPT lines to allow inbound and outbound traffic over 22, including SCP. Any help is greatly appreciated. /etc/shorewall/policy contains the following (all lines above these are commented out):

        #
        # Allow all connection requests from the firewall to the internet
        #
        $FW  net  ACCEPT
        #
        # Policies for traffic originating from the Internet zone (net)
        # Drop (ignore) all connection requests from the Internet to the firewall
        #
        net  all  DROP  info
        # THE FOLLOWING POLICY MUST BE LAST
        # Reject all other connection requests
        all  all  REJECT  info
        #LAST LINE -- ADD YOUR ENTRIES ABOVE THIS LINE -- DO NOT REMOVE
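
    The rules themselves look right for 22, 80, and 443, which makes this smell like a path-MTU problem rather than a missing ACCEPT: uploads stall exactly when full-size packets start flowing. One hedged thing to try is letting Shorewall clamp the TCP MSS (and confirming nothing drops ICMP "fragmentation needed"; the Ping/REJECT line above only covers echo requests):

        # /etc/shorewall/shorewall.conf
        # Clamp TCP MSS to the path MTU so oversized upload packets
        # aren't silently blackholed
        CLAMPMSS=Yes

    Run shorewall restart afterwards.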

    Read the article

  • Ubuntu: Memory Leak

    - by Keener
    I'm having trouble finding where this memory leak is occurring. I'm running Ubuntu 8.04 LTS on a Dell XPS M1530 with 3 GB of RAM, and I find that after an hour or so of use, top shows 2 GB+ used. The strange thing is that when I add up the memory percentages by PID, from either top or ps aux, I find I should only be using about 20-25% of my available RAM. What brought this to my attention is that I've begun running VMware Server again. Obviously the RAM usage spikes when I load a virtual machine, but the memory VMware is using does not account for the usage I'm seeing via top or free. Stopping VMware Server releases the memory that was allocated to it, but I'm still unable to find where the rest of the RAM is being used. After a complete reboot the memory is fine, of course, but it quickly climbs back to 60-80% usage with the processes only appearing to account for a third of that. Any ideas where I should look for more information on what this could be?
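
    Before hunting for a leak it's worth ruling out the usual suspect: Linux counts page cache and buffers as "used", and per-process percentages in top will never account for that. The figure that reflects what applications really hold is the -/+ buffers/cache line:

        # "used" on the Mem: line includes disk cache; the "-/+ buffers/cache"
        # line shows memory actually held by processes. The cache is released
        # automatically whenever programs need the RAM.
        free -m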

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?).
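
    One piece of the puzzle: ulimit -u (RLIMIT_NPROC) is enforced per user, not per shell, so every process already running under your UID counts against the limit; that is consistent with 122 failing where 123 succeeds. A sketch for sizing a realistic value:

        # how many processes does this user already own?
        ps --no-headers -u "$USER" | wc -l
        # a daemon's limit might then be that baseline plus the maximum
        # number of workers you want to permit, e.g.:
        ulimit -u $(( $(ps --no-headers -u "$USER" | wc -l) + 50 ))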

    Read the article

  • How to get the PID of a process started by /bin/su -c

    - by crash3k
    I'm writing an init.d script for a Java app, but the app should run as another user. (The OS I'm using is Debian Squeeze.) I've got this far:

        /bin/su - $USER -c "cd $PATH; echo $PASSWORD | $JAVA -Xmx256m -jar $PATH/app.jar -d > /dev/null" &
        PID=$!
        /bin/su - $USER -c "echo $PID > $PIDFILE"

    But this of course only saves the PID of the /bin/su process instead of the PID of the Java process it creates.
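
    The trailing & backgrounds the su command itself, so $! is su's PID. Moving both the & and the echo of $! inside the command string makes the child shell record the Java PID instead; a sketch (note the rename to $APPDIR: assigning to $PATH clobbers the shell's command search path):

        /bin/su - "$USER" -c "cd '$APPDIR' && echo '$PASSWORD' | \
            $JAVA -Xmx256m -jar '$APPDIR/app.jar' -d > /dev/null 2>&1 & \
            echo \$! > '$PIDFILE'"

    Inside the inner shell, $! expands to the PID of the last command in the backgrounded pipeline, i.e. the Java process.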

    Read the article

  • What does *:* in lsof output stand for?

    - by chello
    While executing the command /usr/sbin/lsof -l -i -P -n under the root user, I am getting this output:

        COMMAND  PID USER  FD  TYPE DEVICE    SIZE/OFF NODE NAME
        ...
        httpd   9164   70  3u  IPv4 0x2f70270      0t0  TCP 127.0.0.1:9010 (LISTEN)
        httpd   9164   70  4u  IPv6 0x25af4bc      0t0  TCP *:80 (LISTEN)
        httpd   9164   70  5u  IPv4 0x3149e64      0t0  TCP *:* (CLOSED)
        httpd   9180   70  3u  IPv4 0x2f70270      0t0  TCP 127.0.0.1:9010 (LISTEN)
        httpd   9180   70  4u  IPv6 0x25af4bc      0t0  TCP *:80 (LISTEN)
        httpd   9180   70  5u  IPv4 0x3149e64      0t0  TCP *:* (CLOSED)

    Please let me know what *:* stands for; I am interested in both the IP address and port fields. Also, what does (CLOSED) mean here?

    Read the article

  • MySQL writing to net

    - by seengee
    I have a server that has been running at high CPU load due to MySQL activity. When I run the command mysqladmin pr, I often see a few queries with the state "writing to net". I had a look around and couldn't find much about this, other than reading somewhere that it shouldn't be expected in usual MySQL activity. Any ideas what this could mean? Running MySQL 5.0.91-community on CentOS 4.8.
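
    "Writing to net" just means the server is busy transmitting a result set to the client, so entries that stay in that state usually point at queries returning very large results. A sketch for tracking them down on MySQL 5.0 (these option names changed in later versions):

        # /etc/my.cnf
        [mysqld]
        log-slow-queries = /var/log/mysql-slow.log
        long_query_time  = 2

        # and inspect the untruncated statements currently running
        mysql -e 'SHOW FULL PROCESSLIST\G'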

    Read the article

  • rsync per-site configuration file?

    - by Scott
    I know how to configure a per-site entry for ssh, but is there any kind of client configuration for rsync that allows per-site configuration options and aliases, or shortcuts similar to those in .ssh/config? I'm curious because I have a minimal ssh server installed on my Android phone, and a minimal rsync tool on it as well. I'm getting tired of having to log in as root on the phone and symlink both tools to the standard places the Android OS looks for executables, as the ssh server is bare-bones and ships a typical multi-link binary for the basic unix commands (one that does not include rsync). I end up having to include --rsync-path=/path/to/rsync/android/files/rsync every time I want to do any rsyncing of the files on my phone, but this path is always the same. I've gotten around it in the meantime with a glob approach in a shell-script wrapper, but this sometimes limits the customization I can do with the rsync call. I'm just wondering if there is anything similar to the .ssh/config file where I can create an alias for my phone (e.g. 'android') so that specifying rsync android:/mnt/sdcard will automatically assume --rsync-path=/blah/blah/blah --no-g --no-p --no-t etc.
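
    rsync itself has no per-host configuration file, but a Host alias in ~/.ssh/config combined with a small shell wrapper gets the same effect; a sketch reusing the path from above (the host name, IP, and option set are examples):

        # ~/.ssh/config
        Host android
            HostName 192.168.1.50
            User root

        # ~/.bashrc
        rsync() {
            case "$*" in
                *android:*) command rsync \
                        --rsync-path=/path/to/rsync/android/files/rsync \
                        --no-g --no-p --no-t "$@" ;;
                *)          command rsync "$@" ;;
            esac
        }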

    Read the article

  • Slow software RAID

    - by Jure1873
    I've got software RAID 1 for / and /home, and it seems I'm not getting the right speed out of it. Reading from md0 I get around 100 MB/sec; reading from sda or sdb I get around 95-105 MB/sec. I thought I would get more speed (while reading data) from two drives. I don't know what the problem is. I'm using kernel 2.6.31-18.

        hdparm -tT /dev/md0
        /dev/md0:
         Timing cached reads:   2078 MB in  2.00 seconds = 1039.72 MB/sec
         Timing buffered disk reads:  304 MB in  3.01 seconds = 100.96 MB/sec

        hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads:   2084 MB in  2.00 seconds = 1041.93 MB/sec
         Timing buffered disk reads:  316 MB in  3.02 seconds = 104.77 MB/sec

        hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads:   2150 MB in  2.00 seconds = 1075.94 MB/sec
         Timing buffered disk reads:  302 MB in  3.01 seconds = 100.47 MB/sec

    Edit: RAID 1.
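
    That result is expected: md RAID 1 serves a single sequential stream from one member disk, so hdparm will only ever show roughly one drive's throughput. The mirror pays off with concurrent readers, which a quick sketch can confirm (the skip just keeps the readers apart):

        # two parallel sequential readers should together approach the
        # combined throughput of both members on a healthy RAID 1
        dd if=/dev/md0 of=/dev/null bs=1M count=2048 &
        dd if=/dev/md0 of=/dev/null bs=1M count=2048 skip=8192 &
        wait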

    Read the article

  • Corrupt tar (Resulting folder smaller than packed file)

    - by Om3g4
    Hi everybody, I have an issue with a tarball created on a SuSE 10.3 server. The .tar file has a size of 6.5 GB, but if I untar it under Ubuntu 9.10 the resulting folder only has a size of 1.5 GB. Commands used: tar cvf for packing, tar xvf for unpacking. Perhaps somebody knows how this can be fixed; that would be great. Cheers
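
    Before concluding the archive is corrupt, it's worth checking whether the gap is sparse files or simply how the sizes were measured; a sketch (file and directory names are placeholders, and tar -d should be run from the directory the archive was unpacked in):

        # compare the archive against the unpacked tree; no output = no mismatch
        tar -df archive.tar
        # du counts allocated blocks by default, so sparse files look tiny;
        # compare with the apparent (logical) size
        du -sh unpacked/
        du -sh --apparent-size unpacked/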

    Read the article

  • Is it safe to run two instances of svnserve on one repository, or only one?

    - by fredden
    We have two nodes running heartbeat/DRBD, and one of the services we're running is Subversion. What I want to know is: is it safe to run svnserve on both nodes all the time, or should it only run on the active node? Does svnserve use file-level locking, or is it all in memory? What are the implications of running svnserve without its repositories accessible? Please let me know if this isn't clear, and I'll try my best to rephrase/clarify. :)

    Read the article

  • Munin "Available entropy" when using adress space layout randomization

    - by clawspoon
    Having just configured Munin for statistics logging on my Gentoo server (hardened profile), I am noticing that my "available entropy" is consistently in the 200-300 range. This seems way too low, so I checked it manually:

        $ cat /proc/sys/kernel/random/entropy_avail
        3544

    Odd: consistently very low values in Munin, yet a practically full pool when checking manually. After thinking about the problem for a while, I came to the conclusion that the culprit is probably address space layout randomization, which consumes entropy when running commands/programs. Since Munin runs a whole slew of programs, all the entropy gets used up, and Munin then measures how much is left, resulting in the low values. Does anyone have any experience with this? How can it be avoided?
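
    That diagnosis is plausible: with ASLR every exec draws a little from the pool, and Munin execs dozens of plugins per sample. One hedged mitigation is to keep the pool topped up with a userspace entropy daemon such as haveged (the Gentoo package name below is my assumption; rng-tools is an alternative):

        # Gentoo: install and start a userspace entropy gatherer
        emerge sys-apps/haveged
        rc-update add haveged default
        /etc/init.d/haveged start

        # then watch the pool while Munin polls
        watch -n1 cat /proc/sys/kernel/random/entropy_avail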

    Read the article
