I'm back from my summer break (and some pressing business that kept me away from this), ready to continue with Oracle VM Server for SPARC ;-)
In this article, we'll take a closer look at virtual networking. Basic connectivity, as we've seen it in the first, simple example, is easy enough. But there are numerous options for the virtual switches and virtual network ports, which we will discuss in more detail now. In this section, we will concentrate on virtual networking only - the capabilities of virtual switches and virtual network ports. Other options involving hardware assignment or redundancy will be covered in separate sections later on.
There are two basic components involved in virtual networking for LDoms: virtual switches and virtual network devices. The virtual switch should be seen just like a real ethernet switch: it "runs" in the service domain and moves ethernet packets back and forth. A virtual network device is plumbed in the guest domain. It corresponds to a physical network device in the real world, where you'd plug a cable into the network port and plug the other end of that cable into a switch. In the virtual world, you do the same: you create a virtual network device for your guest and connect it to a virtual switch in a service domain. The result works just like in the physical world: the network device sends and receives ethernet packets, and the switch does all those things ethernet switches tend to do.
If you look at the reference manual of Oracle VM Server for SPARC, there are numerous options for virtual switches and network devices. Don't be confused - it's rather straightforward, really. Let's start with the simple case and work our way to some more sophisticated options later on.
In many cases, you'll want several guests that communicate with the outside world on the same ethernet segment. In the real world, you'd connect each of these systems to the same ethernet switch. So, let's do the same thing in the virtual world:
# ldm add-vsw net-dev=nxge2 admin-vsw primary
# ldm add-vnet admin-net admin-vsw mars
# ldm add-vnet admin-net admin-vsw venus
We've just created a virtual switch called "admin-vsw" and connected it to the physical device nxge2. In the physical world, we'd have powered up our ethernet switch and installed a cable between it and our big enterprise datacenter switch. We then created a virtual network interface for each of the two guest systems "mars" and "venus" and connected both to that virtual switch. They can now communicate with each other and with any system reachable via nxge2. Note that if primary were running Solaris 10, communication between primary and the guests would not be possible through this switch; this is different with Solaris 11 - please see the Admin Guide for details. Also note that I've given both the vswitch and the vnet devices sensible names, something I always recommend.
Unless told otherwise, the LDoms Manager software will automatically assign MAC addresses to all network elements that need one. It will also make sure that these MAC addresses are unique, and it will reuse MAC addresses to play nice with all those friendly DHCP servers out there. However, we can also assign MAC addresses manually; one reason might be firewall rules that work on MAC addresses. So let's give mars a manually assigned MAC address:
# ldm set-vnet mac-addr=0:14:4f:f9:c4:13 admin-net mars
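To double-check which MAC addresses ended up assigned - whether automatically or by hand - you can list the network resources of a domain. The exact output columns vary with the LDoms version, so treat this as a sketch:

```shell
# Show all network devices of the guest "mars", including their MAC addresses
ldm list -o network mars

# The same view for the service domain shows the vswitch and its MAC
ldm list -o network primary
```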
Within the guest, these virtual network devices have their own device driver. In Solaris 10, they'd appear as "vnet0"; Solaris 11 would apply its usual vanity naming scheme. We can configure these interfaces just like any normal interface: give them an IP address and configure sophisticated routing rules, just like on bare metal.
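As a minimal sketch of that guest-side configuration - the link name "net0" and the address are examples, adjust them to your setup:

```shell
# Inside the guest (Solaris 11), configure the vnet like any other NIC
ipadm create-ip net0
ipadm create-addr -T static -a 192.168.1.10/24 net0/v4

# On Solaris 10, the equivalent would be the classic ifconfig approach:
# ifconfig vnet0 plumb 192.168.1.10 netmask 255.255.255.0 up
```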
In many cases, using Jumbo Frames helps increase throughput performance. By default, these interfaces will run with the standard ethernet MTU of 1500 bytes. To change this, it is usually sufficient to set the desired MTU for the virtual switch; this will automatically set the same MTU for all vnet devices attached to that switch. Let's change the MTU size of our admin-vsw from the example above:
# ldm set-vsw mtu=9000 admin-vsw primary
Note that you can set the MTU to any value between 1500 and 16000. Of course, whatever you set needs to be supported by the physical network, too.
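After the change, you can verify from inside the guest that the new MTU has propagated to the vnet device. Again, the link name is an example:

```shell
# Solaris 11: show the current MTU of the guest's network link
dladm show-linkprop -p mtu net0

# Solaris 10: ifconfig prints the MTU in its output
# ifconfig vnet0
```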
Another very common area of network configuration is VLAN tagging. This can be a little confusing - my advice here is to be very clear on what you want, and perhaps draw a little diagram the first few times. As always, keeping a configuration simple will help avoid errors of all kinds. Nevertheless, VLAN tagging is very useful for consolidating different networks onto one physical cable, and as such, this concept needs to be carried over into the virtual world. Enough of the introduction - here's a little diagram to help explain how VLANs work in LDoms:
Let's remember that any VLANs not explicitly tagged have the default VLAN ID of 1. In this example, we have a vswitch connected to a physical network that carries untagged traffic (VLAN ID 1) as well as VLANs 11, 22, 33 and 44. There might also be other VLANs on the wire, but the vswitch will ignore all those packets. We also have two vnet devices, one for mars and one for venus. Venus will see traffic from VLANs 33 and 44 only. For VLAN 44, venus will need to configure a tagged interface "vnet44000". For VLAN 33, the vswitch will untag all incoming traffic for venus, so that venus will see this as "normal", untagged ethernet traffic. This is very useful to simplify guest configuration, and it also allows venus to perform Jumpstart or AI installations over this network even if the Jumpstart or AI server is connected via VLAN 33. Mars, on the other hand, has full access to untagged traffic from the outside world, and also to VLANs 11, 22 and 33, but not 44. On the command line, this is what we'd do:
# ldm add-vsw net-dev=nxge2 pvid=1 vid=11,22,33,44 admin-vsw primary
# ldm add-vnet admin-net pvid=1 vid=11,22,33 admin-vsw mars
# ldm add-vnet admin-net pvid=33 vid=44 admin-vsw venus
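To actually use VLAN 44 inside venus, the guest has to bring up a tagged interface on top of its vnet device. A minimal sketch, assuming the vnet appears as vnet0 (Solaris 10) or net0 (Solaris 11), with an example address:

```shell
# Solaris 10: the VLAN PPA is VLAN-ID * 1000 + instance,
# so VLAN 44 on vnet0 becomes vnet44000
ifconfig vnet44000 plumb 192.168.44.10 netmask 255.255.255.0 up

# Solaris 11: create an explicit VLAN link instead
# dladm create-vlan -l net0 -v 44 vlan44
# ipadm create-ip vlan44
# ipadm create-addr -T static -a 192.168.44.10/24 vlan44/v4
```

Traffic on VLAN 33 needs no such configuration in venus, since it is the pvid and the vswitch delivers it untagged.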
Finally, I'd like to point to a neat little option that will make your life easier in all those cases where configurations tend to change over the life of a guest system. It's the "id=<somenumber>" option, available for both vswitches and vnet devices. Normally, Solaris in the guest enumerates network devices sequentially, and it has ways of remembering this initial numbering. This is good in the physical world. In the virtual world, however, whenever you unbind (aka power off and disassemble) a guest system, remove and/or add network devices and bind the system again, chances are this numbering will change - and configuration confusion will follow suit. To avoid this, nail down the initial numbering by assigning each vnet device its device-id explicitly:
# ldm add-vnet admin-net id=1 admin-vsw venus
Please consult the Admin Guide for details on this, and how to decipher these network ids from Solaris running in the guest.
Thanks for reading this far. Links for further reading are essentially only the Admin Guide and Reference Manual and can be found above. I hope this is useful and, as always, I welcome any comments.