NFS issue brings down entire vSphere ESX estate


I experienced an odd issue this morning where an NFS problem appeared to have taken down the majority of my VMs hosted on a small vSphere 5.0 estate.

The infrastructure itself is 4x IBM HS21 blades running around 20 VMs. The storage is provided by a single HP X1600 array with an attached D2700 chassis running Solaris 11. There are a couple of storage pools on this which are exposed over NFS for the storage of the VM files, and some iSCSI LUNs for things like MSCS shared disks. Normally this is pretty stable, but I appreciate the lack of resiliency in having a single X1600 doing all the storage.
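
For context, the plumbing is the standard arrangement: the filesystems are shared over NFS from Solaris, and each host mounts them as NFS datastores. A sketch of the commands involved (the exact options here are from memory rather than copied off the live boxes):

    # On the Solaris 11 head: share a ZFS filesystem over NFS
    zfs set sharenfs=on sastank/VMStorage

    # On each ESXi 5.0 host: mount the export as a datastore
    esxcli storage nfs add --host 10.13.111.197 \
        --share /sastank/VMStorage --volume-name SAStank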

This morning, in the logs of each ESX host, at around 0521 GMT I saw a lot of entries like this:

2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4cf9a8  3
2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4dc9e8  3
2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4d3fa8  3
2011-11-30T05:21:54.161Z cpu2:2050)NFSLock: 608: Stop accessing fd 0x41000a4de0a8  3
[....]
2011-11-30T06:16:07.042Z cpu0:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
2011-11-30T06:17:01.459Z cpu2:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
2011-11-30T06:25:17.887Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000a4c2b28  3
2011-11-30T06:27:16.063Z cpu3:4011)NFSLock: 568: Start accessing fd 0x41000a4d8928 again
2011-11-30T06:35:30.827Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)")
2011-11-30T06:36:37.953Z cpu6:2054)NFS: 292: Restored connection to the server 10.13.111.197 mount point /tank/ISO, mounted as 5acdbb3e-410e56e3-0000-000000000000 ("ISO (1)")
2011-11-30T06:40:08.242Z cpu6:2054)NFSLock: 608: Stop accessing fd 0x41000a4c3e68  3
2011-11-30T06:40:34.647Z cpu3:2051)NFSLock: 568: Start accessing fd 0x41000a4d8928 again
2011-11-30T06:44:42.663Z cpu1:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
2011-11-30T06:44:53.973Z cpu0:4011)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
2011-11-30T06:51:28.296Z cpu5:2058)NFSLock: 608: Stop accessing fd 0x41000ae3c528  3
2011-11-30T06:51:44.024Z cpu4:2052)NFSLock: 568: Start accessing fd 0x41000ae3b8e8 again
2011-11-30T06:56:30.758Z cpu4:2058)WARNING: NFS: 283: Lost connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
2011-11-30T06:56:53.389Z cpu7:2055)NFS: 292: Restored connection to the server 10.13.111.197 mount point /sastank/VMStorage, mounted as f0342e1c-19be66b5-0000-000000000000 ("SAStank")
2011-11-30T07:01:50.350Z cpu6:2054)ScsiDeviceIO: 2316: Cmd(0x41240072bc80) 0x12, CmdSN 0x9803 to dev "naa.600508e000000000505c16815a36c50d" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.
2011-11-30T07:03:48.449Z cpu3:2051)NFSLock: 608: Stop accessing fd 0x41000ae46b68  3
2011-11-30T07:03:57.318Z cpu4:4009)NFSLock: 568: Start accessing fd 0x41000ae48228 again

(I've put a complete dump from one of the hosts on pastebin: http://pastebin.com/Vn60wgTt)
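
The 'Lost connection' / 'Restored connection' pairs are what ESXi logs when its NFS heartbeat to the server fails and later recovers. For anyone digging into something similar, the heartbeat behaviour is governed by a handful of advanced settings which can be inspected on each host like so (I haven't changed these from stock):

    # Inspect the NFS heartbeat tunables on an ESXi 5.0 host
    esxcli system settings advanced list -o /NFS/HeartbeatFrequency
    esxcli system settings advanced list -o /NFS/HeartbeatTimeout
    esxcli system settings advanced list -o /NFS/HeartbeatMaxFailures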

When I got into the office at 9am, I saw various failures and alarms and troubleshot the issue. It turned out that pretty much all of the VMs were inaccessible, and that the ESX hosts were describing each VM as either 'powered off', 'powered on' or 'unavailable'. The VMs described as 'powered on' were not in any way reachable or responding to pings, so that may well have been a lie.
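
For anyone wanting to sanity-check what a host believes from its console rather than via the vSphere Client, vim-cmd does the job (the VM ID at the end is a placeholder taken from the getallvms listing):

    # List registered VMs and their IDs on an ESXi host
    vim-cmd vmsvc/getallvms

    # Ask the host what it thinks a given VM's power state is
    vim-cmd vmsvc/power.getstate 42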

There was absolutely no indication on the X1600 that anything was awry, and nothing on the switches to indicate any loss of connectivity. I only managed to resolve the issue by rebooting the ESX hosts in turn.

I have a number of questions:

  1. What the hell happened?
  2. If this was a temporary NFS failure, why did it put the ESX hosts into a state from which a reboot was the only recovery?
  3. In the future, when the NFS server goes a little off-piste, what would be the best approach to add some resilience? I've been looking at budgeting for next year and potentially have the budget to purchase another X1600/D2700/disks. Would an identical mirrored disk setup help to mitigate these sorts of failures automatically? (One rough option is sketched below.)
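
On that last point, the crude mitigation I keep coming back to is periodic ZFS replication to a second head. It wouldn't fail over automatically, but it would give a warm copy of the datastore to repoint the hosts at. A minimal sketch, assuming a hypothetical standby box called x1600-b with a matching sastank pool, and snapshot names invented for illustration:

    # Take a named snapshot of the VM datastore
    zfs snapshot sastank/VMStorage@repl-20111130

    # First run: full send of the snapshot to the standby head
    zfs send sastank/VMStorage@repl-20111130 | \
        ssh x1600-b zfs receive -F sastank/VMStorage

    # Later runs: incremental send between the last two snapshots
    zfs send -i sastank/VMStorage@repl-20111130 sastank/VMStorage@repl-20111201 | \
        ssh x1600-b zfs receive -F sastank/VMStorage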

Edit (Added requested details)

To expand with some details as requested:

The X1600 has 12x 1TB disks lumped together in mirrored pairs as tank, and the D2700 (connected with a mini-SAS cable) has 12x 300GB 10k SAS disks lumped together in mirrored pairs as sastank.

zpool status

  pool: rpool
 state: ONLINE
 scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c7t0d0s0  ONLINE       0     0     0

errors: No known data errors

  pool: sastank
 state: ONLINE
 scan: scrub repaired 0 in 74h21m with 0 errors on Wed Nov 30 02:51:58 2011
config:

        NAME         STATE     READ WRITE CKSUM
        sastank      ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c7t14d0  ONLINE       0     0     0
            c7t15d0  ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            c7t16d0  ONLINE       0     0     0
            c7t17d0  ONLINE       0     0     0
          mirror-2   ONLINE       0     0     0
            c7t18d0  ONLINE       0     0     0
            c7t19d0  ONLINE       0     0     0
          mirror-3   ONLINE       0     0     0
            c7t20d0  ONLINE       0     0     0
            c7t21d0  ONLINE       0     0     0
          mirror-4   ONLINE       0     0     0
            c7t22d0  ONLINE       0     0     0
            c7t23d0  ONLINE       0     0     0
          mirror-5   ONLINE       0     0     0
            c7t24d0  ONLINE       0     0     0
            c7t25d0  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: ONLINE
 scan: scrub repaired 0 in 17h28m with 0 errors on Mon Nov 28 17:58:19 2011
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            c7t1d0   ONLINE       0     0     0
            c7t2d0   ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            c7t3d0   ONLINE       0     0     0
            c7t4d0   ONLINE       0     0     0
          mirror-2   ONLINE       0     0     0
            c7t5d0   ONLINE       0     0     0
            c7t6d0   ONLINE       0     0     0
          mirror-3   ONLINE       0     0     0
            c7t8d0   ONLINE       0     0     0
            c7t9d0   ONLINE       0     0     0
          mirror-4   ONLINE       0     0     0
            c7t10d0  ONLINE       0     0     0
            c7t11d0  ONLINE       0     0     0
          mirror-5   ONLINE       0     0     0
            c7t12d0  ONLINE       0     0     0
            c7t13d0  ONLINE       0     0     0

errors: No known data errors

The filesystem exposed over NFS for the primary datastore is sastank/VMStorage.

zfs list

NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        45.1G  13.4G  92.5K  /rpool
rpool/ROOT                   2.28G  13.4G    31K  legacy
rpool/ROOT/solaris           2.28G  13.4G  2.19G  /
rpool/dump                   15.0G  13.4G  15.0G  -
rpool/export                 11.9G  13.4G    32K  /export
rpool/export/home            11.9G  13.4G    32K  /export/home
rpool/export/home/andrew     11.9G  13.4G  11.9G  /export/home/andrew
rpool/swap                   15.9G  29.2G   123M  -
sastank                      1.08T   536G    33K  /sastank
sastank/VMStorage            1.01T   536G  1.01T  /sastank/VMStorage
sastank/comstar              71.7G   536G    31K  /sastank/comstar
sastank/comstar/sql_tempdb   6.31G   536G  6.31G  -
sastank/comstar/sql_tx_data  65.4G   536G  65.4G  -
tank                         4.79T   578G    42K  /tank
tank/FTP                      269G   578G   269G  /tank/FTP
tank/ISO                     28.8G   578G  25.9G  /tank/ISO
tank/backupstage             2.64T   578G  2.49T  /tank/backupstage
tank/cifs                     301G   578G   297G  /tank/cifs
tank/comstar                 1.54T   578G    31K  /tank/comstar
tank/comstar/msdtc           1.07G   579G  32.8M  -
tank/comstar/quorum           577M   578G  47.9M  -
tank/comstar/sqldata         1.54T   886G   304G  -
tank/comstar/vsphere_lun     2.09G   580G  22.2M  -
tank/mcs-asset-repository    7.01M   578G  6.99M  /tank/mcs-asset-repository
tank/mscs-quorum               55K   578G    36K  /tank/mscs-quorum
tank/sccm                    16.1G   578G  12.8G  /tank/sccm
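
For completeness, the export itself can be double-checked from the Solaris side with the standard commands:

    # Confirm the NFS sharing property on the datastore filesystem
    zfs get sharenfs sastank/VMStorage

    # List the active NFS shares as the server sees them
    share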

As for the networking, all connections between the X1600, the blades and the switch are either LACP or EtherChannel bonded 2x 1Gbit links. The switch is a single Cisco 3750.

Storage traffic sits on its own VLAN segregated from VM machine traffic.
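
For anyone retracing the switch-side checks, the usual IOS commands cover it (the member port below is a placeholder):

    show etherchannel summary                  ! port-channel membership and state
    show lacp neighbor                         ! per-port LACP partner details
    show interfaces Gi1/0/1 counters errors    ! errors on an individual member link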
