In this post I will demonstrate how you can model your logical network in Virtual Machine Manager to support a converged network fabric (NIC teaming, QoS and virtual network adapters).
Before we get started: what is a converged network?
As we discussed in a previous post on how to isolate DPM traffic in Hyper-V, a converged network on the Hyper-V host combines multiple physical NICs with NIC teaming, QoS and vNICs. This lets us isolate each type of network traffic while sustaining network resiliency if one NIC fails, as shown in the diagram below:
To use NIC teaming with QoS in a Hyper-V environment, you normally need PowerShell to separate the traffic; with VMM, however, we can deploy the same configuration through the UI.
More information about QoS Common PowerShell Configurations can be found here.
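For reference, this is roughly what VMM automates for us behind the scenes. A minimal manual sketch using the in-box NIC teaming and Hyper-V cmdlets (the NIC names, switch names, VLAN ID and weight are placeholders for your environment):

```powershell
# Create a switch-independent team with dynamic load balancing from two 10 GbE NICs
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Bind a Hyper-V switch to the team, using weight-based minimum-bandwidth QoS
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add a host vNIC for live migration, tag its VLAN and give it a bandwidth weight
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
```

You would repeat the last three cmdlets for every vNIC (backup, replica, cluster and so on), which is exactly the repetitive work VMM takes off our hands.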
So without further ado, let’s jump right in.
The Hyper-V server in this demo has 2 x 10 GbE fiber NICs, and I want to team them to leverage QoS and converged networking. The host is connected to the same physical fiber switch, is configured with a static IP address on one of the NICs, is joined to the domain and is managed by VMM.
We must create Logical Networks in VMM.
What is a logical network, and how do you create one in Virtual Machine Manager 2012 R2? Click here.
We will create several logical networks in this demo for different purposes:
- Management / VM: Contains the IP subnet used for host management and virtual machines. It also has an associated IP pool so that VMM can manage IP assignment to the hosts and virtual machines connected to this network. In this demo, management and virtual machines share the same network; in a production environment, however, the two networks must be separated.
- Live Migration: Contains the IP subnet and VLAN for Live Migration traffic. This network is non-routable as it only remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
- Backup: Contains the IP subnet and VLAN for Hyper-V Backup traffic. This network is non-routable as it only remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
- Hyper-V Replica: Contains the IP subnet and VLAN for Hyper-V Replica traffic. This network is non-routable as it only remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
- Cluster: Contains the IP subnet and VLAN for Cluster communication. This network is non-routable as it only remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
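If you prefer scripting the logical networks, the VMM cmdlets can do the same as the wizard. A hedged sketch for one of the networks (the host group name, subnet and VLAN ID are examples from this demo; adjust them to your fabric):

```powershell
# Create the logical network and a network site scoped to a host group
$ln = New-SCLogicalNetwork -Name "Live Migration"
$hostGroup = Get-SCVMHostGroup -Name "Production"

# Define the subnet/VLAN pair for the site
$subnetVlan = New-SCSubnetVLan -Subnet "10.0.20.0/24" -VLanID 20

New-SCLogicalNetworkDefinition -Name "Live Migration - Rack1" `
    -LogicalNetwork $ln -VMHostGroup $hostGroup -SubnetVLan $subnetVlan
```

Repeat this per logical network (Management/VM, Backup, Replica, Cluster) with the appropriate subnet and VLAN.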
Creating IP pools for the host Management, Live Migration, Backup, Hyper-V Replica and Cluster networks.
We will create an IP pool for each logical network site so that VMM can assign the right IP configuration to the virtual NICs within that network. This is an awesome feature in VMM: we don't have to perform this manually or rely on a DHCP server. We can also exclude IP addresses that have already been assigned to other resources from the pool.
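The same pool can be created from PowerShell with the VMM cmdlets. A short sketch for the Live Migration site (the site name and address range below are illustrative):

```powershell
# Find the network site the pool belongs to
$def = Get-SCLogicalNetworkDefinition -Name "Live Migration - Rack1"

# Create a static IP pool VMM can hand out addresses from
New-SCStaticIPAddressPool -Name "Live Migration Pool" `
    -LogicalNetworkDefinition $def -Subnet "10.0.20.0/24" `
    -IPAddressRangeStart "10.0.20.10" -IPAddressRangeEnd "10.0.20.200"
```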
The VM Networks step is not necessary if you selected 'Create a VM network with the same name to allow virtual machines to access this logical network directly' while creating the logical networks above. If you did not, please continue and create VM networks with a 1:1 mapping to the logical networks in the Fabric workspace.
Creating Uplink Port Profiles, Virtual Port Profiles and Port Classifications.
Virtual Machine Manager does not ship with a default Uplink port profile, so we must create one on our own.
We will create one uplink port profile in this demo, for our production Hyper-V host.
Production Uplink Port Profile:
Right click on Port Profiles in fabric workspace and create a new Hyper-V port profile.
As you can see in the screenshot below, Windows Server 2012 R2 supports three different load balancing algorithms:
Hashing, Hyper-V switch port and Dynamic.
For Teaming mode, we have Static teaming, Switch independent and LACP.
More information about NIC Teaming can be found here.
Assign a name and description, and make sure that 'Uplink port profile' is selected. Then specify the load balancing algorithm together with the teaming mode; as a best practice for Hyper-V workloads, we will select Dynamic and Switch Independent. Click Next.
Select the network sites supported by this uplink port profile. VMM will tell the Hyper-V hosts that they are connected and mapped to the following logical networks and sites in our fabric: Backup (for Hyper-V backup traffic), Live Migration (for live migration traffic), Management/VM (for management and virtual machine communication), Replica (for Hyper-V replica traffic), Cluster (for Hyper-V cluster communication).
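The equivalent can be scripted with the VMM cmdlets. A hedged sketch (the site name filter is a shortcut for this demo's sites; verify the parameter names against your VMM version):

```powershell
# Collect the network sites (logical network definitions) this uplink supports
$sites = Get-SCLogicalNetworkDefinition | Where-Object {
    $_.Name -match "Backup|Live Migration|Management|Replica|Cluster"
}

# Create the uplink port profile: Dynamic load balancing, switch-independent teaming
New-SCNativeUplinkPortProfile -Name "Production Uplink" `
    -Description "Uplink for production Hyper-V hosts" `
    -LBFOLoadBalancingAlgorithm "Dynamic" -LBFOTeamMode "SwitchIndependent" `
    -LogicalNetworkDefinition $sites
```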
Virtual Port Profile:
If you navigate to Port Profiles under the networking tab in fabric workspace, you will see several port profiles already shipped with VMM. You can take advantage of these and use the existing profiles for Host management, Cluster and Live Migration.
For the purpose of this demo, I will create 2 additional virtual port profiles, for Hyper-V Backup and Hyper-V Replica.
Note: In the Bandwidth settings, make sure the total weight of all virtual port profiles that you intend to apply on the Hyper-V host does not exceed 100.
Here are the different bandwidth settings for each profile that I used:
Host Management: 10
Hyper-V Replica: 10
Hyper-V Cluster: 10
Hyper-V Backup: 10
Live Migration: 20
If you sum it up, we already have a weight of 60, and the rest can be assigned to the virtual machines or other resources.
Right click on Port Profiles in the fabric workspace and create a new Hyper-V port profile, one for backup and another for replica. Assign a name and description; 'Virtual network adapter profile' will be selected by default. Click Next.
Click on Offload settings to see the settings for the virtual network adapter profile.
Click on Bandwidth settings and adjust the QoS as we described above.
Repeat the same steps for each Virtual network adapter profile.
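These two profiles can also be created with the VMM cmdlets. A short sketch using the demo's names and weights (double-check the parameter names against your VMM build):

```powershell
# Virtual network adapter port profile for backup traffic, QoS weight 10
New-SCVirtualNetworkAdapterNativePortProfile -Name "Hyper-V Backup" `
    -Description "QoS weight for Hyper-V backup traffic" -MinimumBandwidthWeight 10

# And the same for replica traffic
New-SCVirtualNetworkAdapterNativePortProfile -Name "Hyper-V Replica" `
    -Description "QoS weight for Hyper-V replica traffic" -MinimumBandwidthWeight 10
```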
Creating a port classification.
We must also create port classifications that we can associate with each virtual network port profile. When you configure virtual network adapters on a team on a Hyper-V host, you can map each adapter to a classification, which ensures the configuration within the associated virtual port profile is applied.
Please note that a classification is a description (label) only and does not contain any configuration. This is very useful in a hosting provider deployment, where the tenant/customer sees a label such as 'High bandwidth' for their virtual machines, while you could in fact limit them with a port profile that has a very low bandwidth weight, so they believe they have very fast VMs. Sneaky, yes (don't do this).
Navigate to fabric, expand networking and right click on Port Classification to create a new port classification.
Assign a name, description and click OK.
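Since a classification carries no configuration of its own, scripting it is a one-liner per label. For example, for the two profiles added in this demo:

```powershell
# Labels only - the actual QoS lives in the virtual port profiles
New-SCPortClassification -Name "Backup"  -Description "Classification for Hyper-V backup traffic"
New-SCPortClassification -Name "Replica" -Description "Classification for Hyper-V replica traffic"
```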
Creating Logical Switches:
A logical switch is the last networking fabric component in VMM before we apply it to our Hyper-V host; it is basically a container for uplink port profiles and virtual port profiles.
Right click on Logical Switches in the Fabric workspace and create a new logical switch.
Assign the logical switch a name and a description. Leave out the option 'Enable single root I/O virtualization (SR-IOV)', as this is beyond the scope of this demo.
We are using the default Microsoft Windows Filtering Platform. Click Next.
Specify the uplink port profiles that are part of this logical switch. We will set the uplink mode to 'Team', and add our Production Uplink. Click Next.
Specify the port classifications for each virtual port that is part of this logical switch. Click 'Add' to configure the virtual ports: browse to the right classification and include a virtual network adapter port profile to associate with it. Repeat this process for all virtual ports, so that you have added classifications and profiles for management, backup, cluster, live migration and replica; low, medium and high bandwidth are used for the virtual machines. Click Next.
Review the settings and click Finish.
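The wizard's steps map to a handful of VMM cmdlets if you want to script the switch. A hedged sketch showing one classification-to-profile mapping (repeat the last block per classification; verify cmdlet parameters against your VMM version):

```powershell
# Create the logical switch with SR-IOV disabled, as in the wizard
$ls = New-SCLogicalSwitch -Name "Production Converged vSwitch" `
    -Description "Converged switch for production Hyper-V hosts" -EnableSriov $false

# Attach the uplink port profile to the switch
$upp = Get-SCNativeUplinkPortProfile -Name "Production Uplink"
New-SCUplinkPortProfileSet -Name "Production Uplink" -LogicalSwitch $ls `
    -NativeUplinkPortProfile $upp

# Map one port classification to one virtual port profile
$pc  = Get-SCPortClassification -Name "Backup"
$vpp = Get-SCVirtualNetworkAdapterNativePortProfile -Name "Hyper-V Backup"
New-SCVirtualNetworkAdapterPortProfileSet -Name "Backup" -LogicalSwitch $ls `
    -PortClassification $pc -VirtualNetworkAdapterNativePortProfile $vpp
```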
Last but not least, we need to apply the networking template on the Hyper-V host.
Up to this point, we have created the different networking fabric components in VMM, and we ended up with a logical switch template that contains all the networking configuration we intend to apply on the host.
Navigate to the host group in fabric workspace that contains your production Hyper-V hosts.
Right click on the host and click ‘Properties’.
Navigate to Virtual switches.
Click 'New Virtual Switch' and then 'New Logical Switch'. Make sure that Production Converged vSwitch is selected, and add the physical adapters that should participate in this configuration. Make sure that 'Production Uplink' is associated with the adapters.
Click on ‘New Virtual Network Adapter’ to add virtual adapters to the configuration.
I will add 5 virtual network adapters in total: one adapter for host management, one for live migration, one for backup, one for replica and one for cluster. Please note that the virtual adapter used for host management will have the setting 'This virtual network adapter inherits settings from the physical management adapter' enabled. This means that the physical NIC on the host configured for management will transfer its configuration to a virtual adapter created on the team. This is very important: if you don't select this, you will lose network connectivity to the host and cannot access it anymore unless you have an iLO or iDRAC management interface on the physical host.
Repeat the process for Live Migration and Cluster virtual adapters, etc… and ensure they are connected to the right VM networks with the right VLAN, IP Pool and port profile classification.
Once you are done, click OK. VMM will now communicate with its agent on the host, and configure NIC teaming with the right configurations and corresponding virtual network adapters.
The last step is to validate the network configuration on the host.
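A quick way to validate is to run the in-box cmdlets directly on the Hyper-V host once the VMM job finishes, and check that everything matches what we modelled:

```powershell
# Run on the Hyper-V host after VMM completes the job
Get-NetLbfoTeam                          # team state, members, teaming mode, LB algorithm
Get-VMSwitch                             # the converged switch and its bandwidth mode
Get-VMNetworkAdapter -ManagementOS       # the five host vNICs created on the team
Get-VMNetworkAdapterVlan -ManagementOS   # VLAN assignment per vNIC
```

You should see the switch-independent team with Dynamic load balancing, the converged switch in Weight bandwidth mode, and the management, live migration, backup, replica and cluster vNICs each tagged with the expected VLAN.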
Until next time… Enjoy your weekend!