Search Results

Search found 24965 results on 999 pages for 'linux kvm'.


  • pxelinux hanging when booting client machine

    - by Blasphemophagher
    I'm kind of new to all of this, so please forgive any vagueness/misunderstandings on my part. I'm using pxelinux and VMs to create CentOS 6.0 machines that have the same install every time. I have a new VM set to boot from network, but in the process of booting up it gets stuck at "Loading 10.1.1.20:/pxelinux.0" (10.1.1.20 is the address of the server it's getting info from). pxelinux conf: http://pastebin.com/4XfZZPY1 I'm pretty sure all my config files are correct, could it be VirtualBox related? I have both the building server and the new client set to Host-only adapter and PCNET-FAST.
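
    A hang at "Loading 10.1.1.20:/pxelinux.0" usually means the TFTP transfer itself is stalling, before any pxelinux config is read. A quick hedged check is to fetch the bootloader manually from another host on the same host-only network (a sketch using the tftp-hpa client):

      tftp 10.1.1.20 -c get pxelinux.0
      ls -l pxelinux.0    # a zero-byte or missing file points at the TFTP server or firewall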


  • Problems using Mesa demos

    - by Rodnower
    Hello, I successfully installed Mesa with "yum install Mesa*" and downloaded the MesaDemos-7.8.tar.gz archive. Now I am trying to follow the instructions from "Mesa3d.org - Download / Install - Compiling and Installing - 1.5 Running the demos", but progs/demos contains only *.c files, and when I try to compile them I get many similar errors like:

      gears.c:(.text+0x54): undefined reference to `glShadeModel'

    I guess this is a very noob question, and I understand that there is a very simple solution, but I have no idea... At the beginning of the file all the necessary #includes are present:

      #include <math.h>
      #include <stdlib.h>
      #include <stdio.h>
      #include <string.h>
      #include <GL/glut.h>

    So I have some questions: Is there a Mesa forum on the web? Are there precompiled demos somewhere? Is there a site with well-described examples of using Mesa? What do I need to compile those examples? I have CentOS 5. Thanks in advance.
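
    An "undefined reference" at that point is a linker error, not a missing header: the GL, GLU, and GLUT libraries must be named at link time. A minimal sketch, assuming the development packages are installed (mesa-libGL-devel, mesa-libGLU-devel, and freeglut-devel are the usual CentOS names):

      cd progs/demos
      gcc gears.c -o gears -lglut -lGLU -lGL -lm   # libraries go after the source file
      ./gears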


  • dpkg error code 1

    - by Prithvi Raj
    I am unable to add or remove any packages in Ubuntu Karmic. I keep getting the following:

      Errors were encountered while processing:
       crossplatfromui
      E: Sub-process /usr/bin/dpkg returned an error code (1)

    What do I do to completely remove this package?
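
    When a single package's maintainer scripts keep failing, one commonly suggested approach is to force its removal and then let apt repair the package state (a sketch; the package name is taken from the error above):

      sudo dpkg --remove --force-remove-reinstreq crossplatfromui
      sudo apt-get -f install     # resolve any dependencies left dangling
      sudo apt-get update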


  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to remount a formerly read-only mounted filesystem read-write:

      mount -o remount,rw /mountpoint

    Unfortunately it did not work:

      mount: /mountpoint not mounted already, or bad option

    dmesg reports:

      [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list.  Please umount/remount instead

    A umount does not work either:

      umount /mountpoint
      umount: /mountpoint: device is busy.
              (In some cases useful info about processes that use
               the device is found by lsof(8) or fuser(1))

    Unfortunately neither lsof nor fuser shows any process accessing anything located under the mount point. So - how can I clean up this unprocessed orphan inode list to be able to mount the filesystem read-write again without rebooting the computer?
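
    The orphan inode list is normally replayed by e2fsck, which needs the filesystem unmounted; if nothing genuinely holds the mount, a lazy unmount followed by a check is one commonly suggested way out (a sketch, to be used with care; the dm-0 name comes from the dmesg line above, so substitute the real device-mapper path):

      umount -l /mountpoint            # lazy unmount: detach now, finish when the last user exits
      e2fsck -f /dev/mapper/<volume>   # replays the journal and processes the orphan inode list
      mount /mountpoint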


  • Mounted HDD not having enough permissions from Apache/PHP

    - by Dan
    Piwigo gallery, on apache and php. The root system is a RAID 128GB. /var/www/html is on the root file system. Mounted the 320GB hdd to /var/www/html/320 using defaults, it's an ext4 fs. Put a symlink to it in /var/www/html/galleries which is read by the gallery script so I can upload images to there, then click sync. It gives me the error: [./galleries/] PWG-ERROR-NO-FS (File/directory read error) PWG-ERROR-NO-FS: The file or directory cannot be accessed (either it does not exist or the access is denied) chmod 777 set on /dev/sdb1, /var/www/html, and /var/www/html/320 as well as the symlink galleries too. All recursive. chown apache:apache to everything too. PHP just can't read/write to it. I tried with and without the symlink, I've tried everything I can think of. Nothing. Any ideas how I can give apache/php permission to read/write to this drive? With 777 permissions all around it should already be able to.
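
    With 777 everywhere and apache:apache ownership, the usual remaining suspect on a distribution that ships SELinux (RHEL/CentOS/Fedora) is labeling: a freshly mounted ext4 volume does not carry the httpd content contexts. A hedged sketch for checking and relabeling (the paths come from the question; the exact type name varies across policy versions, with older policies using httpd_sys_content_t):

      getenforce                          # "Enforcing" means SELinux can block apache despite mode 777
      ls -Zd /var/www/html/320            # inspect the current security context
      chcon -R -t httpd_sys_rw_content_t /var/www/html/320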


  • Load balancing with rsync

    - by David
    I have 2 servers with public IPs:

      SERVER A - 10.10.10.11
      SERVER B - 10.10.10.12

    Both of them run CentOS 6, with nginx and php-fpm installed and the exact same website stored at /var/www/html. The domain is myxdomain.com, with DNS hosted at CloudFlare (since CloudFlare supports round robin), pointing A records at both 10.10.10.11 and 10.10.10.12. I know that round-robin DNS does not cover failover, but that does not matter. What I need is: how do I sync the contents of /var/www/html on server A and server B so they stay exactly the same? Let's say:

      1) a user uploads their file to server A - the file should be synced to server B as well.
      2) a user uploads their file to server B - the file should be synced to server A as well.

    Would rsync be a good choice here? Any example of a command line and a cronjob schedule that would suit this? Thanks
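
    For a two-way sync, plain rsync in both directions works as long as --delete is avoided, since a delete pass would remove the other server's new uploads. A minimal sketch, assuming key-based SSH between the two machines (addresses and paths from the question):

      #!/bin/bash
      # /usr/local/bin/sync-www.sh on server A - pull first, then push; never delete
      rsync -az -e ssh root@10.10.10.12:/var/www/html/ /var/www/html/
      rsync -az -e ssh /var/www/html/ root@10.10.10.12:/var/www/html/

      # crontab entry, every 5 minutes:
      # */5 * * * * /usr/local/bin/sync-www.sh

    A purpose-built two-way tool such as unison, or an inotify-driven one such as lsyncd, handles deletions and edit conflicts more gracefully than cron-driven rsync.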


  • How can I pass environment variables to a WSGI script, using uWSGI?

    - by orokusaki
    I've added the following line to /etc/environment: FOO_DEPLOYMENT_ENV="vbox" Upon logging in via SSH, I can echo $FOO_DEPLOYMENT_ENV and, of course, see vbox output to the shell. If I open a Python shell and run os.getenv('FOO_DEPLOYMENT_ENV'), it will return 'vbox', but the same code in my Python application, when run by uWSGI (as the www-data user), it does not see the environment variable. Clearly, this isn't a problem of uWSGI, and is rather a problem with my understanding of environment variables, or how they're properly set, and the contexts in which they can be retrieved. What am I doing or understanding incorrectly?
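
    The gap is that /etc/environment is applied by PAM during login, and uWSGI workers are never started through a login session, so the variable has to be handed to uWSGI itself. A minimal sketch using its --env option (the ini path is a hypothetical example):

      # equivalent ini setting: env = FOO_DEPLOYMENT_ENV=vbox
      uwsgi --env FOO_DEPLOYMENT_ENV=vbox --ini /etc/uwsgi/myapp.ini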


  • How to copy directories using debugfs?

    - by tjbp
    The debugfs manpage gives the impression that the command 'rdump . .' will recursively copy all files found on the specified filesystem from the debugfs cwd to the native filesystem's cwd. Instead I seem to receive a syntax error, and no copy is initiated. These are the commands I run:

      cd /path/to/transfer/destination
      debugfs /dev/sda1 -R rdump . .

    My task is to copy the entire contents of a clean yet unmountable USB storage device to its host machine's HD. The host machine does not support the inode size used by the USB device's filesystem (256) and its software is not upgradeable, so my intention was to use debugfs to transfer the files. If anyone has any other suggestions for this task I'd be grateful.
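
    The likely culprit is argument order and quoting: -R takes a single request string and must come before the device, otherwise debugfs sees 'rdump', '.', '.' as stray arguments. A sketch of the corrected invocation:

      cd /path/to/transfer/destination
      debugfs -R "rdump . ." /dev/sda1    # quote the request so it reaches debugfs as one string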


  • VNC error: "Could not connect to session bus: Failed to connect to socket"

    - by GJ
    I started a vncserver on display :1 on an Ubuntu machine. When I connect to it, I get a grey X window with an error message: Could not connect to session bus: Failed to connect to socket. The vnc log is:

      Xvnc Free Edition 4.1.1 - built Apr 9 2010 15:59:33
      Copyright (C) 2002-2005 RealVNC Ltd.
      See http://www.realvnc.com for information on VNC.
      Underlying X server release 40300000, The XFree86 Project, Inc

      Sun Mar 20 15:33:59 2011
      vncext: VNC extension running!
      vncext: Listening for VNC connections on port 5901
      vncext: created VNC server for screen 0
      error opening security policy file /etc/X11/xserver/SecurityPolicy
      Could not init font path element /usr/X11R6/lib/X11/fonts/Type1/, removing from list!
      Could not init font path element /usr/X11R6/lib/X11/fonts/Speedo/, removing from list!
      Could not init font path element /usr/X11R6/lib/X11/fonts/misc/, removing from list!
      Could not init font path element /usr/X11R6/lib/X11/fonts/75dpi/, removing from list!
      Could not init font path element /usr/X11R6/lib/X11/fonts/100dpi/, removing from list!
      cat: /var/run/gdm/auth-for-link2-eGnVvf/database: No such file or directory
      gnome-session[24880]: WARNING: Could not make bus activated clients aware of DISPLAY=:1.0 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused
      gnome-session[24880]: WARNING: Could not make bus activated clients aware of GNOME_DESKTOP_SESSION_ID=this-is-deprecated environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused
      gnome-session[24880]: WARNING: Could not make bus activated clients aware of SESSION_MANAGER=local/dell:@/tmp/.ICE-unix/24880,unix/dell:/tmp/.ICE-unix/24880 environment variable: Failed to connect to socket /tmp/dbus-FhdHHIq8jt: Connection refused

      Sun Mar 20 15:34:10 2011
      Connections: accepted: 0.0.0.0::51620
      SConnection: Client needs protocol version 3.8
      SConnection: Client requests security type VncAuth(2)
      VNCSConnST: Server default pixel format depth 16 (16bpp) little-endian rgb565
      VNCSConnST: Client pixel format depth 16 (16bpp) little-endian rgb565
      gnome-session[24880]: Gtk-CRITICAL: gtk_main_quit: assertion `main_loops != NULL' failed
      gnome-session[24880]: CRITICAL: dbus_g_proxy_new_for_name: assertion `connection != NULL' failed

    Any ideas how to fix it?
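
    The gnome-session warnings show the session never got a D-Bus session bus, which vncserver's stock xstartup does not launch. One commonly suggested fix is to start the session through dbus-launch in ~/.vnc/xstartup (a sketch; the exact session command varies by Ubuntu release):

      #!/bin/sh
      # ~/.vnc/xstartup - give the VNC session its own D-Bus session bus
      unset DBUS_SESSION_BUS_ADDRESS    # don't inherit a stale address from the launching shell
      exec dbus-launch --exit-with-session gnome-session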


  • Transparent Squid : Logging client ip problem

    - by llazzaro
    Hello, I am using the following iptables rules on my network for a transparent proxy:

      iptables -t nat -A PREROUTING -i eth0 -s ! squid-box -p tcp --dport 80 -j DNAT --to squid-box:3128
      iptables -t nat -A POSTROUTING -o eth0 -s local-network -d squid-box -j SNAT --to iptables-box
      iptables -A FORWARD -s local-network -d squid-box -i eth0 -o eth0 -p tcp --dport 3128 -j ACCEPT

    But my squid log always logs the gateway IP (172.16.0.1). Do you know an alternative that does not lose the client IP? (Of course, avoid saying "manual proxy setup"!)
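
    The client address is lost in the second rule: the POSTROUTING SNAT rewrites every forwarded packet's source to the gateway before squid sees it. That SNAT only exists because squid sits on a separate box on the same segment; if squid can run on the gateway itself, a plain REDIRECT preserves the real source address (a sketch):

      # on the gateway, with squid listening locally on 3128:
      iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

    Keeping squid on a separate machine while preserving client IPs needs policy routing (or TPROXY) instead of SNAT, so that the squid box routes its replies back through the gateway.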


  • Tweaks on nvidia-settings only working when the program is opened

    - by Igoru
    I have two monitors. The master one (17") is 1 year old, and the secondary (15") is really old, like 4 years. This old screen is having problems displaying colors... they are a little bit darker, which is a problem when I'm viewing pics. I have a GeForce 9800, so I changed some settings inside nvidia-settings that fit this second screen better. But those settings are only applied when I open nvidia-settings. The first time I configured this, it worked. Then I turned off the computer, turned it on the next day, and the screen was dark again. As soon as I open nvidia-settings again, the screen gets lighter again! How can I make those settings permanent and loaded at startup?
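
    nvidia-settings writes the tweaks to ~/.nvidia-settings-rc, but nothing reapplies them at login unless the tool runs; it has a load-only mode for exactly this. A sketch (the --load-config-only flag is part of nvidia-settings):

      # add as a session startup command, e.g. System > Preferences > Startup Applications:
      nvidia-settings --load-config-only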


  • All my files uploaded have unusable permissions

    - by cosmicbdog
    I've just moved to a new server and have come across some strange permissions issues. Every file I upload has permissions of 600, owned by the user account and is also in the same group. With this permission, the server is unable to make changes to these files. The folder I'm uploading to (via regular ftp) has permissions of 755. Why are any new files I upload here given this permission of 600? And how do I change it so that files added are given permissions so they can be modified by the webserver?
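
    A 600 mode on every new upload points at the FTP daemon's umask rather than at the directory: 666 masked by a umask of 077 yields exactly 600. If the server is vsftpd (an assumption - other daemons have an equivalent knob), its default local_umask is 077, and relaxing it fixes new uploads:

      # /etc/vsftpd.conf (restart vsftpd afterwards)
      local_umask=022      # new uploads become 644 instead of 600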


  • How to monitor traffic on certain ports with ntop

    - by Claudiu
    How do I configure ntop so I can see the amount of upload traffic sent through a certain port? I've added the port in ntop/protocol.list and restarted ntop, and after some time I checked Summary - Traffic - TCP/UDP Traffic Port Distribution: Last Minute View, but the data in that table is not very relevant. I think there is much more to ntop (configuration, usage) that I don't know.


  • Is the field BusID necessary in XF86Config?

    - by Greg
    Hello, I am using a cluster of machines running Ubuntu 10.04 LTS which are supposed to be homogeneous, but apparently they are not. In particular, I am configuring the X server on these machines, and I pushed an /etc/X11/XF86Config that includes the following section:

      Section "Device"
          Identifier     "Device0"
          Driver         "nvidia"
          VendorName     "NVIDIA Corporation"
          BusID          "PCI:5:0:0"
      EndSection

    The problem is that the BusID of the graphics card is PCI:5:0:0 on some machines and PCI:3:0:0 on others. Is there a way for the X server to automatically detect the appropriate device (based on the name, for instance)? Thanks,
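
    BusID is only mandatory when several cards have to be told apart (multi-GPU or multi-seat setups). With a single NVIDIA card per host, one hedged option is simply to drop the line and let the server probe the PCI bus itself:

      Section "Device"
          Identifier     "Device0"
          Driver         "nvidia"
          VendorName     "NVIDIA Corporation"
          # no BusID: with one card installed, X autodetects PCI:5:0:0 or PCI:3:0:0
      EndSection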


  • Problems with LDAP auth in Apache, works only for one group

    - by tore-
    Hi, I'm currently publishing some subversion repos within Apache:

      <Location /dev/>
          DAV svn
          SVNPath /opt/svn/repos/dev/
          AuthType Basic
          AuthName "Subversion repo authentication"
          AuthBasicProvider ldap
          AuthzLDAPAuthoritative On
          AuthLDAPBindDN "CN=readonlyaccount,OU=Objects,DC=invalid,DC=now"
          AuthLDAPBindPassword readonlyaccountspassword
          AuthLDAPURL "ldap://invalid.domain:389/OU=Objects,DC=invalid,DC=domain?sAMAccountName?sub?(objectClass=*)"
          Require ldap-group cn=dev,ou=SVN,DC=invalid,DC=domain
      </Location>

    This setup works great, but now we want to give an LDAP group read-only access to our repo. My apache config then looks like this:

      <Location /dev/>
          DAV svn
          SVNPath /opt/svn/repos/dev/
          AuthType Basic
          AuthName "Subversion repo authentication"
          AuthBasicProvider ldap
          AuthzLDAPAuthoritative On
          AuthLDAPBindDN "CN=readonlyaccount,OU=Objects,DC=invalid,DC=now"
          AuthLDAPBindPassword readonlyaccountspassword
          AuthLDAPURL "ldap://invalid.domain:389/OU=Objects,DC=invalid,DC=domain?sAMAccountName?sub?(objectClass=*)"
          <Limit OPTIONS PROPFIND GET REPORT>
              Require ldap-group cn=dev-ro,ou=SVN,dc=invalid,dc=domain
          </Limit>
          <LimitExcept OPTIONS PROPFIND GET REPORT>
              Require ldap-group cn=dev-rw,ou=SVN,dc=invalid,dc=domain
          </LimitExcept>
      </Location>

    All of my user accounts are under OU=Objects,DC=invalid,DC=domain, and all groups related to subversion are under ou=SVN,dc=invalid,dc=domain. The problem: after the modification, only users in the dev-ro LDAP group are able to authenticate. I know that authentication with LDAP works, since my apache logs show my usernames:

      10.1.1.126 - tore [...] "GET /dev/ HTTP/1.1" 200 339 "-" "Mozilla/5.0 (...)"
      10.1.1.126 - - [...] "GET /dev/ HTTP/1.1" 401 501 "-" "Mozilla/4.0 (...)"
      10.1.1.126 - readonly [...] "GET /dev/ HTTP/1.1" 401 501 "-" "Mozilla/4.0 (...)"

    1st line: user in group dev-rw; 2nd line: unauthenticated user; 3rd line: unauthenticated user, then authenticated as a user in group dev-ro. So I think I've messed up my apache config. Advice?
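
    Note that every read operation (GET, OPTIONS, PROPFIND, REPORT) now has to pass the <Limit> block, which only admits dev-ro - so dev-rw members are locked out of checkouts entirely. In Apache 2.2, multiple Require directives are OR'ed, so a hedged fix is to list both groups for the read methods (a sketch built on the config above):

      <Limit OPTIONS PROPFIND GET REPORT>
          Require ldap-group cn=dev-ro,ou=SVN,dc=invalid,dc=domain
          Require ldap-group cn=dev-rw,ou=SVN,dc=invalid,dc=domain
      </Limit>
      <LimitExcept OPTIONS PROPFIND GET REPORT>
          Require ldap-group cn=dev-rw,ou=SVN,dc=invalid,dc=domain
      </LimitExcept>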


  • HIGH CPU USAGE + low memory usage

    - by hadi
    As you can see below, there is high CPU usage by httpd requests. Please help me decrease it. Thanks.

      PID   USER   PR  NI  VIRT   RES  SHR  S %CPU %MEM    TIME+ COMMAND
      28577 apache 15   0  99676  53m 3488  S   21  0.2  1:13.67 httpd
      28568 apache 15   0  99676  53m 3496  S   19  0.2  1:14.92 httpd
      28608 apache 15   0  99676  53m 3428  R   19  0.2  0:28.28 httpd
      28615 apache 15   0  99676  53m 3436  R   19  0.2  0:25.33 httpd
      28616 apache 15   0  99676  53m 3440  S   19  0.2  0:25.83 httpd
      28619 apache 15   0  99676  53m 3436  R   19  0.2  0:26.12 httpd
      28635 apache 15   0  97.9m  54m 3416  S   19  0.2  0:24.86 httpd
      28558 apache 15   0  97.9m  54m 3432  R   17  0.2  1:40.75 httpd
      28560 apache 15   0  97.9m  54m 3496  R   17  0.2  1:40.02 httpd
      28621 apache 15   0  97.9m  54m 3420  S   17  0.2  0:25.61 httpd
      28641 apache 16   0  97.9m  54m 3428  R   17  0.2  0:21.52 httpd
      28642 apache 15   0  99756  53m 3424  R   15  0.2  0:21.46 httpd
      28643 apache 15   0  99676  53m 3424  S   15  0.2  0:21.59 httpd
      28594 apache 15   0  99756  53m 3428  R   13  0.2  0:44.41 httpd
      28618 apache 15   0  99676  53m 3420  S   13  0.2  0:26.15 httpd
      28654 apache 15   0  99676  53m 3472  S   13  0.2  0:04.27 httpd
      28575 apache 15   0  99756  53m 3436  R   11  0.2  1:14.02 httpd
      28576 apache 15   0  99676  53m 3496  S   11  0.2  1:16.79 httpd
      28634 apache 15   0  99676  53m 3436  S   11  0.2  0:25.36 httpd
      28653 apache 15   0  99676  53m 3424  S   11  0.2  0:04.35 httpd
      28574 apache 15   0  99676  53m 3440  S   10  0.2  1:13.05 httpd
      28592 apache 15   0  99676  53m 3492  R   10  0.2  0:45.78 httpd
      28595 apache 15   0  99676  53m 3432  R   10  0.2  0:47.02 httpd
      28617 apache 16   0  99676  53m 3436  S   10  0.2  0:25.32 httpd
      28620 apache 15   0  99676  53m 3432  S   10  0.2  0:25.35 httpd
      28597 apache 15   0  99676  53m 3428  S    8  0.2  0:43.56 httpd
      11345 mysql  15   0  2927m 198m 4472  R    4  0.6  1624:43 mysqld
      1     root   15   0  2036  648   552  S    0  0.0  0:16.97 init
      2     root   RT   0     0    0     0  S    0  0.0  0:48.50 migration/0
      3     root   34  19     0    0     0  S    0  0.0  0:26.72 ksoftirqd/0
      4     root   RT   0     0    0     0  S    0  0.0  0:00.00 watchdog/0
      5     root   RT   0     0    0     0  S    0  0.0  0:04.98 migration/1
      6     root   34  19     0    0     0  R    0  0.0  0:27.51 ksoftirqd/1
      7     root   RT   0     0    0     0  S    0  0.0  0:00.00 watchdog/1
      8     root   RT   0     0    0     0  S    0  0.0  0:15.42 migration/2
      9     root   34  19     0    0     0  S    0  0.0  0:26.50 ksoftirqd/2
      10    root   RT   0     0    0     0  S    0  0.0  0:00.00 watchdog/2
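
    To see which requests the busy workers are actually serving, a hedged first step is Apache's mod_status with extended reporting (standard directives for Apache 2.2; keep access restricted):

      # in httpd.conf:
      ExtendedStatus On
      <Location /server-status>
          SetHandler server-status
          Order deny,allow
          Deny from all
          Allow from 127.0.0.1
      </Location>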


  • How to prioritize openvpn traffic?

    - by aditsu
    I have an openvpn server, with one network interface. VPN traffic is extremely slow. I tried to do traffic control with this configuration (currently):

      qdisc del dev eth0 root
      qdisc add dev eth0 root handle 1: htb default 12
      class add dev eth0 parent 1: classid 1:1 htb rate 900mbit
      # vpn
      class add dev eth0 parent 1:1 classid 1:10 htb rate 1500kbit ceil 3000kbit prio 1
      # local net
      class add dev eth0 parent 1:1 classid 1:11 htb rate 10mbit ceil 900mbit prio 2
      # other
      class add dev eth0 parent 1:1 classid 1:12 htb rate 500kbit ceil 1000kbit prio 2
      filter add dev eth0 protocol ip parent 1:0 prio 1 u32 match ip sport 1194 0xffff flowid 1:10
      filter add dev eth0 protocol ip parent 1:0 prio 2 u32 match ip dst 192.168.10.0/24 flowid 1:11
      qdisc add dev eth0 parent 1:10 handle 10: sfq perturb 10
      qdisc add dev eth0 parent 1:11 handle 11: sfq perturb 10
      qdisc add dev eth0 parent 1:12 handle 12: sfq perturb 10

    But it's still extremely slow. I have an imaps connection that keeps transferring data continuously (I successfully limited its rate), but with openvpn I can't seem to get more than about 100kbit/s. The internet connection speed is about 3mbit/s (symmetric). What could be the problem? Does the sport filter work for udp?
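
    On the sport question: the u32 'match ip sport' test reads the two bytes at the start of the transport header, where TCP and UDP both keep their source port, so it does catch OpenVPN's default UDP/1194 - but it also matches anything else with those bytes in that position. A more explicit hedged variant pins the protocol to UDP (17) first:

      tc filter add dev eth0 protocol ip parent 1:0 prio 1 u32 \
          match ip protocol 17 0xff \
          match ip sport 1194 0xffff flowid 1:10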


  • Secondary backup server

    - by verdy
    I've been given the task of implementing a backup solution for the event our website goes down. It is a dedicated server running CentOS 6. From what I've experienced on our server, it may go down because of a PHP application crash or a hardware failure. I have a couple of questions:

      1) In the first case, is it possible to have the server restart PHP automatically, and how can I do that? In my mind, if it is only the application that goes down, I can probably still make use of the server itself.
      2) In the second case, can I redirect requests to a secondary server? How can I do that? What do I need other than another server?

    For now it is going to be a simple server that shows the user a static landing page, while the system notifies us via email that the primary server went down so that we can restart it manually. Is it possible to set up just a VPS or even a shared server as the secondary, since I think there is only going to be a static page? Thanks. Any help would be much appreciated.
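
    For the first case, a minimal self-healing sketch is a cron job that probes the site over HTTP and restarts PHP when the probe fails (the service name and email address are assumptions; adjust php-fpm vs. mod_php and the recipient to the actual stack):

      #!/bin/bash
      # /usr/local/bin/php-watchdog.sh - run from cron every minute
      if ! curl -fsS --max-time 10 http://localhost/ >/dev/null; then
          service php-fpm restart                                    # assumed service name on CentOS 6
          echo "php-fpm restarted $(date)" | mail -s "watchdog" admin@example.com   # placeholder address
      fi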


  • Fedora Installation with software repository in DVD does not work

    - by Raks
    I bought a new assembled PC with a Core i3-2120 processor and an Intel H61 motherboard and was trying to install Fedora 16 from a DVD. This DVD contains all the packages, so the installation does not need to download packages from the internet. I have used this DVD to install Fedora 16 offline many times, on machines with different hardware configurations. But on this new machine, when the installation reaches the stage where it asks for software repository selection and I select CD/DVD, the system fails to read the media and throws up an error that it cannot detect the media. The LED on the DVD writer also indicates that the DVD is not being read. Now, there is neither a problem with the DVD nor the DVD drive, because the installation started from this very DVD. So what could the problem be - anything in the BIOS that is causing it? Is there any way I could use the packages already on the DVD so that I avoid downloading them from the internet?
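
    If the base system comes up but the packages are still wanted offline, one hedged workaround is to mount the DVD afterwards and register it as a local yum repository (the repo file name and mount point are arbitrary choices):

      mkdir -p /mnt/dvd && mount /dev/sr0 /mnt/dvd
      cat > /etc/yum.repos.d/fedora-dvd.repo <<'EOF'
      [fedora-dvd]
      name=Fedora 16 install DVD
      baseurl=file:///mnt/dvd
      enabled=1
      gpgcheck=0
      EOF
      yum --disablerepo='*' --enablerepo=fedora-dvd install <package>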


  • kernel: journal commit I/O error

    - by jasondewitt
    I am having some problems with a Dell 1950 server. I am installing RHEL 4.6 along with Oracle and some other software on here. I am randomly getting an error message saying "kernel: journal commit I/O error" on my ssh session and on the monitor I have hooked up to the server I see an error scrolling by that says "EXT3-fs error (device sda5) in start_transaction: Journal has aborted." It has happened several times but never at the same point during the install. Actually, this last time the system was up and running and I was just trying to import a database into oracle. This has happened on several hard drives, so I'm pretty sure that is not the problem. This makes me think the raid controller is going bad. What do you guys think? ** UPDATE ** Pretty sure it was a bad hard drive. I threw another drive in the server and it's been running for about 48 hours with out problems.
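
    Before swapping more parts, it is worth letting the disk and controller report on themselves; a hedged sketch with smartmontools (a Dell 1950's PERC controller may require the megaraid device type):

      smartctl -H -a /dev/sda                   # health and attribute report for a directly attached disk
      smartctl -H -a -d megaraid,0 /dev/sda     # first physical disk behind a PERC/megaraid controller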


  • Error in Bind9 named.conf file. Bind won't start.

    - by tj111
    I'm trying to set up a DNS server on an Ubuntu Server machine (10.04). I configured an entry in named.conf.local to test it, but when trying to restart bind9 I get the following error:

      * Starting domain name service... bind9          [fail]

    So I checked the output of syslog and this is what I get:

      May 20 18:11:13 empression-server1 named[4700]: starting BIND 9.7.0-P1 -u bind
      May 20 18:11:13 empression-server1 named[4700]: built with '--prefix=/usr' '--mandir=/usr/share/man' '--infodir=/usr/share/info' '--sysconfdir=/etc/bind' '--localstatedir=/var' '--enable-threads' '--enable-largefile' '--with-libtool' '--enable-shared' '--enable-static' '--with-openssl=/usr' '--with-gssapi=/usr' '--with-gnu-ld' '--with-dlz-postgres=no' '--with-dlz-mysql=no' '--with-dlz-bdb=yes' '--with-dlz-filesystem=yes' '--with-dlz-ldap=yes' '--with-dlz-stub=yes' '--with-geoip=/usr' '--enable-ipv6' 'CFLAGS=-fno-strict-aliasing -DDIG_SIGCHASE -O2' 'LDFLAGS=-Wl,-Bsymbolic-functions' 'CPPFLAGS='
      May 20 18:11:13 empression-server1 named[4700]: adjusted limit on open files from 1024 to 1048576
      May 20 18:11:13 empression-server1 named[4700]: found 4 CPUs, using 4 worker threads
      May 20 18:11:13 empression-server1 named[4700]: using up to 4096 sockets
      May 20 18:11:13 empression-server1 named[4700]: loading configuration from '/etc/bind/named.conf'
      May 20 18:11:13 empression-server1 named[4700]: /etc/bind/named.conf:10: missing ';' before 'include'
      May 20 18:11:13 empression-server1 named[4700]: loading configuration: failure
      May 20 18:11:13 empression-server1 named[4700]: exiting (due to fatal error)

    So it thinks I have an error in the default named.conf file, which is pretty ridiculous. I went through it and deleted a blank line just for the hell of it, but I can't see how it figures there's an error in there. Note that before this I did have an error in named.conf.local, but it showed up properly in syslog and I fixed it, so it is reporting the correct file. Here is the contents of named.conf:

      // This is the primary configuration file for the BIND DNS server named.
      //
      // Please read /usr/share/doc/bind9/README.Debian.gz for information on the
      // structure of BIND configuration files in Debian, *BEFORE* you customize
      // this configuration file.
      //
      // If you are just adding zones, please do that in /etc/bind/named.conf.local

      include "/etc/bind/named.conf.options";
      include "/etc/bind/named.conf.local";
      include "/etc/bind/named.conf.default-zones";
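
    BIND's parser reports a missing ';' at the point where it gave up, which can be one statement later than the actual omission - often at the end of a file pulled in by an earlier include. A hedged way to pinpoint it without bouncing the service is named-checkconf, which ships with BIND:

      named-checkconf /etc/bind/named.conf       # parses the full config, following includes
      named-checkconf -p /etc/bind/named.conf    # additionally prints the config as parsed, if valid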


  • Unix Server Partitioning & Filesystem Layout

    - by user1717735
    There's a lot of contradictory information about Unix server partitioning out on the internet, so I need some advice on how to proceed. So far, on the servers in our test environment, I didn't really care about partitioning and I configured a single monolithic / plus a swap partition. This partitioning scheme doesn't seem like a good idea for our production servers. I have found a good starting point here, but it seems very vague on the details. Basically I have a server on which I will be running a basic LAMP stack (Apache, PHP, and MySQL). It will have to handle file uploads (up to 2GB). The system has a 2TB RAID 1 array. I plan to set:

      /       100GB
      /var   1000GB   (apache files and mysql files will be here)
      /tmp    800GB   (handles the php tmp files)
      /home    96GB
      swap      4GB

    Does this sound sane, or am I over-complicating things?


  • Router failover not detecting outside interface link lost

    - by Matt
    Suppose I have two routers configured in a master/slave configuration. They look something like this (addresses are not real ones):

      123.123.123.10 <===> [eth0] Router 1 (10.1.1.2) [eth1] ===> +----------+
                                                                  | 10.1.1.1 | ===> LAN
      172.123.123.10 <===> [eth0] Router 2 (10.1.1.3) [eth1] ===> +----------+

    10.1.1.1 is the default route for the network (10.1.1.0). What's slightly different in this config from others I've seen is that I don't have an external virtual IP. Also, the 10.1.1.1 addresses are, in real life, public IPs (not the private ones shown here). This is more of a router setup than a firewall setup, so I'm not using NAT here. Now, the issue I'm having is that I can't see any way to configure UCARP or VRRP to monitor both eth0 and eth1 and fail over to the backup router should either of them go down. What I'm seeing is that if Router 1 is the master and I unplug eth0 on Router 1, it doesn't fail over to Router 2. However, it will if instead I unplug eth1 of Router 1. In VRRP I see there is a cluster group, but it seems that for this to work you need to have virtual IPs or VRRP instances rather than actual interfaces assigned to it. I hope my explanation is clear. How do I get around this?
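
    If keepalived is an option for the VRRP side (an assumption - the question doesn't name the implementation), its track_interface block makes an instance advertised on eth1 also fail over when eth0 loses link (interface names and the 10.1.1.1 address are from the diagram above):

      vrrp_instance LAN_GW {
          state MASTER
          interface eth1
          virtual_router_id 51
          priority 150
          track_interface {
              eth0         # losing link here also demotes the master
          }
          virtual_ipaddress {
              10.1.1.1/24
          }
      }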

