qemu-kvm virtual machine virtio network freeze under load

Posted by Rick Koshi on Server Fault
Published on 2012-02-21

I'm having a problem with my virtual machines, where the network will freeze under heavy load. I'm using CentOS 6.2 as both host and guest, not using libvirt, just running qemu-kvm directly as follows:

/usr/libexec/qemu-kvm \
   -drive file=/data2/vm/rb-dev2-www1-vm.img,index=0,media=disk,cache=none,if=virtio \
   -boot order=c \
   -m 2G \
   -smp cores=1,threads=2 \
   -vga std \
   -name rb-dev2-www1-vm \
   -vnc :84,password \
   -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=virtio \
   -net tap,vlan=0,ifname=tap84,script=/etc/qemu-ifup \
   -monitor unix:/var/run/vm/rb-dev2-www1-vm.mon,server,nowait \
   -rtc base=utc \
   -device piix3-usb-uhci \
   -device usb-tablet
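
(For reference, the vlan= style of -net flags above is the older syntax. If the qemu-kvm build supports it, the same setup could presumably be expressed with the newer -netdev syntax, with something like the following two flags replacing the two -net flags. This is only a sketch of the equivalent; it's not what I'm actually running.)

   -netdev tap,id=net0,ifname=tap84,script=/etc/qemu-ifup \
   -device virtio-net-pci,netdev=net0,mac=52:54:20:00:00:54 \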

/etc/qemu-ifup (used by the above command) is a very simple script, containing the following:

#!/bin/sh

# Bring the tap interface up with no IP address, in promiscuous mode
sudo /sbin/ifconfig $1 0.0.0.0 promisc up
# Attach the tap interface to the bridge
sudo /usr/sbin/brctl addif br0 $1
sleep 2
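
I don't pass a teardown script (I believe qemu-kvm falls back to a default downscript path when downscript= isn't given). If one were wanted, a minimal counterpart to the above (hypothetical, not part of my setup) might look like:

#!/bin/sh

# Detach the tap interface from the bridge and bring it down
sudo /usr/sbin/brctl delif br0 $1
sudo /sbin/ifconfig $1 down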

And here's the info on br0 and other interfaces:

avl-host3 14# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.180373f5521a       no              bond0
                                                        tap84
virbr0          8000.525400858961       yes             virbr0-nic
avl-host3 15# ip addr show 
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: em1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
3: em2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP qlen 1000
    link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
4: em3: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 18:03:73:f5:52:1e brd ff:ff:ff:ff:ff:ff
5: em4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 1000
    link/ether 18:03:73:f5:52:20 brd ff:ff:ff:ff:ff:ff
6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP 
    link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
    inet6 fe80::1a03:73ff:fef5:521a/64 scope link 
       valid_lft forever preferred_lft forever
7: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 18:03:73:f5:52:1a brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.46/24 brd 172.16.1.255 scope global br0
    inet6 fe80::1a03:73ff:fef5:521a/64 scope link 
       valid_lft forever preferred_lft forever
8: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN 
    link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
9: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 52:54:00:85:89:61 brd ff:ff:ff:ff:ff:ff
12: tap84: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether ba:e8:9b:2a:ff:48 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b8e8:9bff:fe2a:ff48/64 scope link 
       valid_lft forever preferred_lft forever

bond0 is a bond of em1 and em2.

virbr0 and virbr0-nic are vestigial interfaces left over from CentOS's default installation. They are unused (as far as I know).
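(If anyone suspects they matter, I believe they could be removed with libvirt's own tools, along these lines, though I haven't tried it since I'm not using libvirt:)

virsh net-destroy default
virsh net-autostart default --disable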

The guest runs perfectly until I run a large 'rsync', at which point the network freezes after some seemingly random interval (usually under a minute). When it freezes, there is no network activity in or out of the guest. I can still connect to the guest's console via VNC, but the guest cannot communicate over its network interface. Any attempt to 'ping' from the guest returns a "Destination Host Unreachable" error for three out of every four packets, with no reply at all for the fourth.
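
Some host-side checks that seem worth capturing while the interface is frozen (these are standard tools; suggestions for better diagnostics are welcome):

# Watch whether any guest traffic reaches the tap device at all
tcpdump -n -i tap84

# Check whether the bridge still has a forwarding entry for the guest's MAC
brctl showmacs br0

# Look for drops or error counters on the tap interface
ip -s link show tap84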

Sometimes (perhaps two thirds of the time), I can bring the interface back to life with a "service network restart" from the guest's console. If this works (and if I do it before the rsync times out), the rsync resumes. Usually the network freezes again within a minute or two. If I keep repeating this, the rsync eventually finishes, and I presume the guest then stays stable until the next period of heavy load.

Throughout the whole process, there are no console errors and no relevant syslog messages (that I can see) on either the guest or the host.

If the "service network restart" doesn't work the first time, trying again (and again and again) never seems to work. The command completes normally, with normal output, but the interface stays frozen. However, a soft reboot of the guest machine (without restarting qemu-kvm) always seems to bring it back.

I am aware of the "lowest mac address" assignment problem, where the bridge takes on the mac address of the slave interface with the lowest mac address. This causes temporary network freezes, but is definitely not what's happening for me. My freezes are permanent until manual intervention, and you can see from the 'ip addr show' output above that the mac address being used by br0 is that of the physical ethernet.
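
(For completeness: that problem can be avoided by pinning the bridge's MAC address explicitly, with something like the line below, but again, it isn't what's happening here.)

ip link set dev br0 address 18:03:73:f5:52:1a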

There are no other virtual machines running on the host. I've verified that each virtual machine on the subnet has its own unique mac address.

I have rebuilt the guest machine several times, and I have tried this on three different host machines (identical hardware, built identically). Oddly, I do have one virtual machine (the second of this series) which never seemed to have the problem: its network never froze while running the same rsync during its build. That's particularly odd because it was the second build. The first, on a different host, did have the freezing problem, but the second did not. I assumed at the time that I had done something wrong with the first build and that the problem was resolved. Unfortunately, the problem reappeared when I built the third VM. Also unfortunately, I can't run many tests on the working VM, as it's now in production use, and I'm hoping to find the cause of this issue before that machine starts having problems. It's possible that I just got really lucky while running the rsync on the working machine, and that one time it didn't freeze.

Of course it's possible that I somehow changed the build scripts without realizing it and re-broke something, but I can't find any such thing.

In any case, I'm hoping someone has some idea what could cause this.

Addendum: Preliminary tests suggest that the problem does not occur if I substitute e1000 for virtio in the first -net flag to qemu-kvm. I don't consider this a solution, but it is suitable as a stopgap. Has anyone else had (or better yet, solved) this problem with the virtio network driver?
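
Concretely, the stopgap is just changing the model in the first -net flag, leaving everything else alone:

   -net nic,vlan=0,macaddr=52:54:20:00:00:54,model=e1000 \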
