Can You Mix MTU on Windows Server 2016 Converged Network With Switch Embedded Teaming? #WS2016 #HyperV


With the release of Windows Server 2016, Microsoft introduced a new teaming approach called Switch Embedded Teaming (known as SET). SET is a virtualization-aware virtual switch, and it differs from the traditional LBFO NIC Teaming as follows:

SET is embedded into the Hyper-V virtual switch, which means a couple of things: you no longer have any team interfaces on the host, you can’t build anything extra on top of the team, and you can’t set properties on the team itself. Because the team is part of the virtual switch, you set all the properties directly on the virtual switch. SET is targeted at supporting Software Defined Networking (SDN) switch capabilities; it’s not the general-purpose, use-everywhere teaming solution that NIC Teaming was intended to be. It’s specifically integrated with converged RDMA vNICs and the SDN stack. Last but not least, SET is the only teaming solution supported on Nano Server, since LBFO teaming is not supported on Nano.
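As a point of reference, creating a SET team is a single PowerShell command against the virtual switch itself. Here is a minimal sketch; the switch name, physical adapter names, and vNIC names are assumptions for illustration:

```powershell
# Create a SET-enabled virtual switch over two physical NICs
# (switch and adapter names are illustrative assumptions)
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false

# Add host (Management OS) vNICs for the different traffic types
Add-VMNetworkAdapter -ManagementOS -Name "MGT_HostOS" -SwitchName "SETswitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "SETswitch"
```

Note that there is no `New-NetLbfoTeam` step here; the teaming is created as a property of the virtual switch.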

The diagram above illustrates the networking architecture for a converged network in Windows Server 2016.

The other day, a colleague of mine asked me the following question:

“Can I mix MTUs on Windows Server 2016 with a SET team converged network? For example: 9014 for Live Migration and SMB storage, and 1514 for the Management OS and virtual machines.”

For more information about Maximum Transmission Unit (MTU), check here.


The short answer is YES! Here are the steps on how to do it:

Step 1: Physical NICs

You should configure the physical NICs (pNICs) that are used by the SET virtual switch and connected to your Top of Rack (ToR) switches. Here is the PowerShell command to set the Jumbo Packet size to 9014.

Get-NetAdapterAdvancedProperty -Name "HPE Ethernet 40Gb*" -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014

This command will set the Jumbo Packet size to 9014 for all physical NICs whose names start with “HPE Ethernet 40Gb”.
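Before moving on, you can confirm that the new value actually took effect on the pNICs. A quick check, using the same adapter name pattern as above:

```powershell
# Display the current Jumbo Packet value on all matching pNICs
Get-NetAdapterAdvancedProperty -Name "HPE Ethernet 40Gb*" -RegistryKeyword "*jumbopacket" |
    Format-Table Name, DisplayName, DisplayValue, RegistryValue -AutoSize
```

The RegistryValue column should now show 9014 for each adapter.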

Step 2: Physical Switches

Configure jumbo frames on your physical switches. Check your manufacturer’s instructions on how to enable jumbo frames. The following example shows how to enable jumbo frames globally (on all ports) on Cisco Nexus switches:

policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo

Step 3: SET Hyper-V Virtual Switch

You don’t need to set anything on the SET virtual switch itself; the Hyper-V virtual switch has no MTU setting of its own and simply passes through whatever frame sizes the underlying physical NICs support. This has been the case since Windows Server 2012 with the traditional LBFO NIC Teaming as well.

Step 4: Virtual Machine OS NIC

By default, jumbo frames are disabled inside the guest operating system. If you want to keep the default standard MTU of 1514, skip this step.

However, if you want to configure jumbo frames for a virtual machine, you need to enable them inside the guest in one of two ways:

Option 1: Advanced properties of the virtual adapter


Option 2: Windows PowerShell

Invoke-Command -VMName (Get-VM).Name -Credential Domain\Administrator -ScriptBlock { Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014 }

This single line of PowerShell will set the Jumbo Packet size for all virtual machines on a particular Hyper-V host without requiring any network connection to the VMs. For more information, check PowerShell Direct.
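In the same way, you can verify the new value inside every guest over PowerShell Direct. A sketch, reusing the credential placeholder from the command above:

```powershell
# Read back the Jumbo Packet value from all VMs on this host
# (Domain\Administrator is a placeholder credential)
Invoke-Command -VMName (Get-VM).Name -Credential Domain\Administrator -ScriptBlock {
    Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" |
        Select-Object Name, DisplayValue, RegistryValue
}
```

Each guest adapter should now report 9014.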

Step 5: Management OS vNIC

Referring to the diagram above, we have several vNICs for different types of traffic on the management partition (the Hyper-V host). They can also be enabled or disabled for jumbo frames via the Advanced properties of the virtual adapter, or through PowerShell, as follows:

Get-NetAdapterAdvancedProperty -Name "vEthernet (MGT_HostOS)" -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 1514

Here is an example for the vNICs on the Hyper-V host: we disabled Jumbo Packet for the MGT_HostOS and MGT_HVR vNICs, and enabled Jumbo Packet for the Live Migration and Backup vNICs.
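For example, to enable jumbo frames on the Live Migration and Backup vNICs in one go (the vNIC names below are assumptions based on the diagram; adjust them to match your own naming):

```powershell
# Enable Jumbo Packet (9014) on the Live Migration and Backup host vNICs
Get-NetAdapterAdvancedProperty -Name "vEthernet (LiveMigration)","vEthernet (Backup)" `
    -RegistryKeyword "*jumbopacket" |
    Set-NetAdapterAdvancedProperty -RegistryValue 9014
```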

Step 6: Verify

You should now be able to send jumbo packets, which you can verify by using the following commands from a host vNIC or from a guest vmNIC.

Verify Jumbo Frames {9014}:

Ping -f -l 8972 <target IP>

Verify Jumbo Frames {1514}:

Ping -f -l 1472 <target IP>

Here is an example showing that ping fails to send a larger jumbo packet via the MGT_HostOS vNIC, but I am able to send a standard MTU packet {1472}.

On the other hand, I am able to send a larger jumbo packet {8972} via the Live Migration vNIC.


Note that I use a ping payload of 8972 bytes because of protocol overhead: the 9014-byte Jumbo Packet value includes the 14-byte Ethernet header, leaving an IP MTU of 9000, and subtracting the 20-byte IP header and the 8-byte ICMP header gives 8972. Pinging with a 9000-byte payload is therefore unlikely to work, but 8972 should work and still confirms that you’re sending jumbo frames. The same applies to the standard MTU: 1500 − 20 − 8 = 1472, so I use a payload of 1472 rather than 1500.

Hope this helps!


About Charbel Nemnom 559 Articles
Charbel Nemnom is a Cloud Architect, ICT Security Expert, Microsoft Most Valuable Professional (MVP), and Microsoft Certified Trainer (MCT) with over 17 years of broad IT infrastructure experience, serving on and guiding technical teams to optimize the performance of mission-critical enterprise systems. He is an excellent communicator, adept at identifying business needs and bridging the gap between functional groups and technology, with extensive practical knowledge of complex systems builds, network design, business continuity, and cloud security.
