Best NIC config when virtual servers need iSCSI storage?

Posted by icky2000 on Server Fault
Published on 2010-04-23T23:01:31Z

I have a Windows 2008 server running Hyper-V. There are 6 NICs on the server configured like this:

  • NIC01 & NIC02: teamed administrative interface (RDP, mgmt, etc)
  • NIC03: connected to iSCSI VLAN #1
  • NIC04: connected to iSCSI VLAN #2
  • NIC05: dedicated to one virtual switch for VMs
  • NIC06: dedicated to another virtual switch for VMs

The iSCSI NICs are, of course, used for the storage the VMs live on. I put half of the host's VMs on the virtual switch assigned to NIC05 and the other half on the switch assigned to NIC06. We have multiple production networks that the VMs could appear on, so the switch ports behind NIC05 and NIC06 are trunked, and we tag each VM's NIC for the appropriate VLAN. There is no clustering on this host.
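
For reference, here is roughly what the VM-facing side looks like as a PowerShell sketch. The Hyper-V cmdlets below only shipped with later Windows releases (on this 2008 host I do all of this through Hyper-V Manager), and the switch, VM, and VLAN names/IDs are placeholders rather than my real values:

    # Sketch only: these cmdlets are from the Hyper-V PowerShell module
    # (Server 2012 and later); on 2008 the equivalent is done in Hyper-V Manager.

    # External switch bound to a physical NIC dedicated to VM traffic;
    # no management-OS vNIC, since NIC01/NIC02 handle management.
    New-VMSwitch -Name "VMswitch-A" -NetAdapterName "NIC05" -AllowManagementOS $false

    # The physical port behind NIC05 is trunked, so each VM adapter gets
    # tagged for whichever production VLAN that VM belongs on (110 is a placeholder).
    Set-VMNetworkAdapterVlan -VMName "VM01" -VMNetworkAdapterName "Network Adapter" -Access -VlanId 110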

Now I wish to assign some iSCSI storage directly to a VM. As I see it, I have two options (rough sketches of both follow the list):

  1. Add the iSCSI VLANs to the trunked ports behind NIC05 and NIC06, add two NICs to the VM that needs iSCSI storage, and tag them for the iSCSI VLANs.

  2. Create two additional virtual switches on the host. Assign one to NIC03 and one to NIC04. Add two NICs to the VM that needs iSCSI storage and let them share that path to the SAN with the host.
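
In the same sketch form (placeholder names and VLAN IDs again, and the same caveat that on 2008 I would be clicking through Virtual Network Manager rather than running cmdlets), the two options would look something like this:

    # Option 1: reuse the existing trunked switches and tag two new
    # VM adapters for the iSCSI VLANs (201/202 are placeholders).
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "VMswitch-A" -Name "iSCSI-A"
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "VMswitch-B" -Name "iSCSI-B"
    Set-VMNetworkAdapterVlan -VMName "VM01" -VMNetworkAdapterName "iSCSI-A" -Access -VlanId 201
    Set-VMNetworkAdapterVlan -VMName "VM01" -VMNetworkAdapterName "iSCSI-B" -Access -VlanId 202

    # Option 2: turn NIC03/NIC04 into virtual switches that the VM and the
    # host share. -AllowManagementOS $true keeps a host vNIC on each switch
    # (the host's existing iSCSI IP settings would move onto those vNICs).
    New-VMSwitch -Name "iSCSI-SW1" -NetAdapterName "NIC03" -AllowManagementOS $true
    New-VMSwitch -Name "iSCSI-SW2" -NetAdapterName "NIC04" -AllowManagementOS $true
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "iSCSI-SW1" -Name "iSCSI-A"
    Add-VMNetworkAdapter -VMName "VM01" -SwitchName "iSCSI-SW2" -Name "iSCSI-B"

Either way the VM ends up with two adapters, one per iSCSI VLAN/path; the -AllowManagementOS setting is what makes option 2 "share that path with the host."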

I'm wondering how much overhead VLAN tagging adds in Hyper-V; I haven't seen any discussion of that. I'm also a bit concerned that something funky on the iSCSI-connected VM could saturate the iSCSI NICs, or cause some other problem that threatens storage access for the entire host, which would be bad.
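
The only knob I know of for that saturation concern is a per-adapter bandwidth cap, and even that is a hedged sketch, since the parameter below (Set-VMNetworkAdapter -MaximumBandwidth, in bits per second) only exists in later Hyper-V versions, not on 2008:

    # Hypothetical mitigation, not available to me on 2008: cap each
    # iSCSI-facing VM adapter at roughly 2 Gbit/s (placeholder value).
    Set-VMNetworkAdapter -VMName "VM01" -Name "iSCSI-A" -MaximumBandwidth 2000000000
    Set-VMNetworkAdapter -VMName "VM01" -Name "iSCSI-B" -MaximumBandwidth 2000000000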

Any thoughts or suggestions? How do you configure your hosts when VMs connect directly to iSCSI?
