
Deploying Switch Embedded Teaming (SET) on Hyper-V using PowerShell DSC


In this article, we will show you how to deploy Switch Embedded Teaming (SET) on Hyper-V and on virtual machines using PowerShell DSC.

Before doing that, we would like to give you a little bit of context around the capabilities of this new teaming mode.

Introduction

In Windows Server 2012 and 2012 R2, Microsoft introduced NIC Teaming natively in the OS. You can team a collection of up to 32 NICs and build the team either through the UI (lbfoadmin) or with PowerShell, defining the load-balancing algorithm and the teaming mode. Once the team is built, if you want virtual machines to communicate out through that set of adapters, you go to Hyper-V Manager or PowerShell and create a virtual switch bound to the team. Either way it takes multiple steps, because the virtual switch and the team are two separate constructs.
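
For example, in Windows Server 2012 R2 the team and the switch are created in two separate steps; a minimal sketch, where the adapter and team names are placeholders:

# Step 1: create the LBFO team from the physical NICs
New-NetLbfoTeam -Name "LBFO-Team" -TeamMembers "Ethernet1","Ethernet2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Step 2: bind a Hyper-V virtual switch to the team interface
New-VMSwitch -Name "VM-Switch" -NetAdapterName "LBFO-Team" -AllowManagementOS $false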

If you want to deep dive into Microsoft NIC Teaming in Windows Server 2012 R2 and later releases, I highly recommend checking my earlier post about Microsoft NIC Teaming.

Switch Embedded Teaming (SET) overview

With the release of Windows Server 2016, Microsoft introduced a new teaming approach called Switch Embedded Teaming (SET), which is virtualization-aware. How is that different from NIC Teaming? First, the team is embedded into the Hyper-V virtual switch. That means a couple of things: there are no team interfaces anymore, so you can't build anything extra on top of the team, and you can't set a property on the team itself because it is part of the virtual switch; you set all the properties directly on the vSwitch.

SET is targeted at supporting Software Defined Networking (SDN) switch capabilities; it is not the general-purpose, use-everywhere teaming solution that NIC Teaming was intended to be. It is specifically integrated with Packet Direct, Converged RDMA vNICs, and SDN-QoS, and it is only supported when using the SDN extension. Packet Direct provides a high-throughput, low-latency packet processing infrastructure.

At the time of writing this article, the following networking features are and are not compatible with SET in Windows Server 2016:

SET is compatible with:

  • Datacenter bridging (DCB).
  • Hyper-V Network Virtualization – NV-GRE and VxLAN are both supported in Windows Server 2016.
  • Receive-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if any of the SET team members support them.
  • Remote Direct Memory Access (RDMA).
  • SDN Quality of Service (QoS).
  • Transmit-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if all of the SET team members support them.
  • Virtual Machine Queues (VMQ).
  • Virtual Receive Side Scaling (RSS).

SET is not compatible with:

  • 802.1X authentication.
  • IPsec Task Offload (IPsecTO).
  • QoS in the host or native OSs.
  • Receive side coalescing (RSC).
  • Receive side scaling (RSS).
  • Single root I/O virtualization (SR-IOV).
  • TCP Chimney Offload.
  • Virtual Machine QoS (VM-QoS).

SET Modes and Settings:

  • Switch independent teaming mode only.
  • Dynamic and Hyper-V port mode load distributions only.
  • Managed by SCVMM or PowerShell, no GUI.
  • Teams only identical ports (same manufacturer, same driver, same capabilities), e.g., a dual- or quad-port NIC.
  • The switch must be created in SET mode (SET can't be added to an existing switch; you cannot change it later).
  • Teams up to eight physical NICs into one or more software-based virtual network adapters.
  • The use of SET is only supported in Hyper-V Virtual Switch in Windows Server 2016 or later release. You cannot deploy SET in Windows Server 2012 R2.

Turning on this new switch is very simple:

New-VMSwitch -Name SETswitch -NetAdapterName "Ethernet1","Ethernet2" -EnableEmbeddedTeaming $true

One tip: you do not necessarily need to specify -EnableEmbeddedTeaming $true. If the -NetAdapterName parameter is given an array of NICs instead of a single one, the cmdlet automatically creates the vSwitch and puts it in embedded teaming mode. However, if -NetAdapterName has a single NIC, you can still enable embedded teaming mode by including the flag and then later add another NIC to the team. It is a one-time decision you make at switch creation time.
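
For example (adapter names are placeholders), you can start with a single NIC and grow the team later:

# Create the vSwitch in SET mode with a single NIC by setting the flag explicitly
New-VMSwitch -Name SETswitch -NetAdapterName "Ethernet1" -EnableEmbeddedTeaming $true

# Later, add a second physical NIC to the existing SET team
Add-VMSwitchTeamMember -VMSwitchName SETswitch -NetAdapterName "Ethernet2"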

In Windows Server 2012 R2, it was not possible to configure RDMA (Remote Direct Memory Access) on network adapters that are bound to a NIC team or to a Hyper-V virtual switch, which increased the number of physical network adapters required in the Hyper-V host. In Windows Server 2016, you can use fewer network adapters and enable RDMA on network adapters that are bound to a Hyper-V virtual switch, with or without Switch Embedded Teaming (SET).

The diagram below illustrates the software architecture changes between Windows Server 2012 R2 and Windows Server 2016:


NIC Teaming versus Switch Embedded Teaming (SET) with converged RDMA in Windows Server 2016 Hyper-V [Image Credit: Microsoft]

The goal of moving to Windows Server 2016 is to cut the cost of networking in half. We now have the Hyper-V switch with embedded teaming, and SMB runs RDMA directly to a NIC that is bound to, and managed by, the Hyper-V switch. SMB can also open another channel to the other physical NIC (the light green line in the diagram above), so the RDMA NICs are effectively teamed and SMB can fail sessions over if we lose a NIC. We can also have multiple RDMA clients bound to the same NIC (Live Migration, Cluster, Management, etc.).

The Converged NIC with RDMA allows:

  • Host vNICs to expose RDMA capabilities to kernel processes (e.g., SMB-Direct).
  • Switch Embedded Teaming (SET) allows multiple RDMA NICs to expose RDMA to multiple vNICs (SMB Multichannel over SMB-Direct).
  • Switch Embedded Teaming (SET) allows RDMA failover for SMB-Direct when two RDMA-capable vNICs are exposed.
  • Operates at full speed with the same performance as native RDMA.

How do you turn on RDMA on the vNICs? It's very simple, with PowerShell of course!

# Add two host vNICs for SMB to the SET switch
Add-VMNetworkAdapter -SwitchName SETswitch -Name SMB_01 -ManagementOS
Add-VMNetworkAdapter -SwitchName SETswitch -Name SMB_02 -ManagementOS

# Enable RDMA on the new host vNICs
Enable-NetAdapterRDMA "vEthernet (SMB_01)", "vEthernet (SMB_02)"

# Verify the RDMA configuration
Get-NetAdapterRdma

As noted earlier, SET supports only switch-independent configuration by using either Dynamic or Hyper-V Port load-balancing algorithms. For best performance, Hyper-V Port is recommended for use on all NICs that operate at or above 10 Gbps.
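
If you need to check or change the load-balancing algorithm on an existing SET switch, here is a quick sketch (using the switch created earlier):

# Check the current SET team settings (members, teaming mode, load-balancing algorithm)
Get-VMSwitchTeam -Name SETswitch

# Switch the load distribution to Hyper-V Port
Set-VMSwitchTeam -Name SETswitch -LoadBalancingAlgorithm HyperVPort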

Deploy Switch Embedded Teaming with DSC

So without further ado, let’s jump into the demo.

We have four Hyper-V nodes deployed and running, but without any configuration yet.

If we query the virtual switches on each node, we don't see any yet:
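
A quick way to run that query remotely from the management machine (using the same node names as in this demo):

# Query the virtual switches on each Hyper-V node
Invoke-Command HVNODE1,HVNODE2,HVNODE3,HVNODE4 -ScriptBlock {
    Get-VMSwitch | Select-Object Name, SwitchType, EmbeddedTeamingEnabled
}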


We will query all network adapters that are available on each host.
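
Again, this can be done remotely in one shot; for example:

# List the network adapters on each node
Invoke-Command HVNODE1,HVNODE2,HVNODE3,HVNODE4 -ScriptBlock {
    Get-NetAdapter | Select-Object Name, InterfaceDescription, Status, LinkSpeed
}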

As you can see, each node has three NICs: one for management and two RDMA NICs.


Next, I will install the custom DSC cHyper-V resource module on each node by running the following:

# Install cHyper-V Custom DSC Module on all Nodes
Invoke-Command HVNODE1,HVNODE2,HVNODE3,HVNODE4 -ScriptBlock {
    Save-Module -Name cHyper-V -Path C:\
    Install-Module -Name cHyper-V
}
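
To make sure the module landed correctly, you can check that the cSwitchEmbeddedTeam resource is now visible on every node; for example:

# Verify the DSC resources from the cHyper-V module are available on all nodes
Invoke-Command HVNODE1,HVNODE2,HVNODE3,HVNODE4 -ScriptBlock {
    Get-DscResource -Module cHyper-V | Select-Object Name, ModuleName, Version
}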

Last but not least, I will push the DSC configuration across all nodes and let the fun begin!

# Applying DSC Configuration
.\SET-NICTeaming.ps1 -targetNode HVNODE1,HVNODE2,HVNODE3,HVNODE4

# Credit to my fellow MVP - Ravikanth
# https://www.powershellgallery.com/packages/cHyper-V/
# PowerShell DSC Resources to Configure SET and NAT Switch in Windows Server 2016

Param
(
    [String[]]$targetNode
)

Configuration SETSwitchTeam
{
    # Importing the resource from the custom DSC module
    Import-DscResource -ModuleName cHyper-V -Name cSwitchEmbeddedTeam

    # List of Hyper-V nodes which need to be configured
    node $targetNode
    {
        # Create a Switch Embedded Team for the given interfaces
        cSwitchEmbeddedTeam DemoSETteam
        {
            Name = "SET-TEAM01"
            NetAdapterName = "RDMA_01","RDMA_02"
            AllowManagementOS = $false
            Ensure = "Present"
        }
    }
}

SETSwitchTeam

# Prompt for the domain credentials used to push the configuration to the nodes
$DomainCred = Get-Credential

Start-DscConfiguration -Path SETSwitchTeam -Verbose -Wait -ComputerName $targetNode -Credential $DomainCred -Force

And here you go:


Let's validate that the new SET-TEAM01 vSwitch was created on all Hyper-V nodes using PowerShell:
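
A query along these lines should show the new switch with embedded teaming enabled on every host:

# Confirm the SET switch and its team members on every node
Invoke-Command HVNODE1,HVNODE2,HVNODE3,HVNODE4 -ScriptBlock {
    Get-VMSwitch -Name SET-TEAM01 | FL Name, EmbeddedTeamingEnabled, NetAdapterInterfaceDescriptions
    Get-VMSwitchTeam -Name SET-TEAM01
}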


I believe this is a great example of how you can deploy, configure, and standardize your configuration across all hosts.

Remember: “Treat your servers like Cattle, not as Pets…”

Deploying Switch Embedded Teaming on a VM

A new question came up recently from one of my blog readers: Would it be possible to enable SET on a virtual machine to run tests?

The short answer is YES! Let’s see how to do this.

First, we need to install the Hyper-V role on the virtual machine because SET is built and managed by the Hyper-V virtual switch.
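
One thing to keep in mind: if the test VM itself runs on a Hyper-V host, you may first need to enable nested virtualization so the Hyper-V role can work inside the guest. A minimal sketch run from the physical host while the VM is powered off (the VM name is a placeholder):

# Expose the virtualization extensions to the guest (nested virtualization)
Set-VMProcessor -VMName "SET-TestVM" -ExposeVirtualizationExtensions $true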

Open an elevated command prompt (cmd) window and type the following; note that this command will also restart the virtual machine.

# Install the Hyper-V role in silent mode
dism.exe /Online /Enable-feature /All /FeatureName:Microsoft-Hyper-V /FeatureName:Microsoft-Hyper-V-Management-PowerShell /quiet

Once the virtual machine is restarted, open PowerShell in administrative mode and run the following command:

# Create Switch Embedded Teaming (SET) on a VM
New-VMSwitch -Name "SET-VMSwitch" -NetAdapterName "Ethernet 2","Ethernet 3" -EnableEmbeddedTeaming $true
Get-VMSwitch | FL Name, EmbeddedTeamingEnabled, NetAdapterInterfaceDescriptions, SwitchType
Create SET Team on a Virtual Machine

Please note that this scenario is NOT supported in production; use it only for test and lab purposes.

If you want to enable the traditional LBFO teaming mode on a virtual machine, please check the official supported documentation.
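
For reference, guest NIC teaming is switched on from the host against the VM's network adapters; a hedged example (the VM name is a placeholder):

# Allow the guest OS to team its virtual network adapters (run on the Hyper-V host)
Set-VMNetworkAdapter -VMName "TestVM" -AllowTeaming On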

Summary

In this article, we showed you how to deploy Switch Embedded Teaming (SET) on Hyper-V hosts and on virtual machines using PowerShell DSC.

Microsoft recommends using Switch Embedded Teaming (SET) as the default teaming mechanism whenever possible, particularly when using Hyper-V.

Thanks to my fellow MVP Ravikanth for creating this custom DSC resource module.

Until then… Enjoy your day!

Cheers,
~Charbel


2 thoughts on “Deploying Switch Embedded Teaming (SET) on Hyper-V using PowerShell DSC”


  1. Charbel,
    As always thanks for your great content. I’d commented on another post in which you had recommended moving forward to go to SET – and ‘lo, in Version 10.0.20348.288 (aka 2022) Microsoft has now laid the law down: LBFO is NOT recommended, and you have to jump through hoops to use it.

    Which is great! Except, documentation on some of this is VERY sparse (publicly available – possibly they have mountains of it in training documents) – among the lack of information is to be a pretty critical one:

    How do virtualized adapters on the hosts work – particularly with failover and management?

    This is the script I’ve written currently, but I’m not confident that it’s right – would love your thoughts on it.

    New-VMSwitch -Name "VM Network" –NetAdapterName "ETH01","ETH02etcetc" -EnableEmbeddedTeaming $True
    ##Note: ports are trunked with primary VLAN the hyper-v primary vlan, management VLAN as tagged/untagged depending on syntax
    Add-VMNetworkAdapter -ManagementOS -Name Management -SwitchName 'Management'
    Get-VMNetworkAdapter -ManagementOS | Set-VMNetworkAdapterVlan -Access -VlanId xxxx

    In some documents I’ve seen – other blogs – people create MULTIPLE management adapters on the SET – but in doing so you have different IP addresses which don’t seem the proper way to handle management (or indeed any ‘purpose built’ adapter such as a dedicated adapter for backup traffic)

    In VMWare, this is MUCH more simple, in no small part because their GUI has supported this for ages – and WAC does NOT support setting up Management NICs with different VLANs, or indeed any virtual adapter off a SET (ironically Server Manager did offer this for LBFOs)

    We’re also working on just deploying SCVMM as supposedly it handles this better – but compared to vCenter, SCVMM is far more complicated (having a VCP I may be slightly biased here).

  2. Hello Ryan, thanks for the comment and feedback!
    To make it short and address your question: How do virtualized adapters on the hosts work – particularly with failover and management?

    I will share with you my production experience:

    1) I always have 2 pairs of RDMA cards with double NIC each. Considering you are using a hyper-converged solution.
    2) I put the Storage and Live Migration traffic on dedicated NICs (one NIC from each physical card for redundancy). Then I create a SET team for those NICs with 2 virtual NICs (vSMB1 and vSMB2). Each vNIC has its own subnet and VLAN to separate storage traffic and leverage SMB Multichannel. Then I set affinity between a virtual NIC and a physical NIC with Set-VMNetworkAdapterTeamMapping. I assume you are using Storage Spaces Direct (a.k.a. Azure Stack HCI) hyper-converged. You can disregard this point if you are using a traditional SAN over fiber. If you have iSCSI, then you need 2 dedicated NICs for storage (iSCSI traffic). Here is the traffic bandwidth allocation for hyper-converged solution host network requirements.
    3) For the remaining 2 NICs from each physical adapter, I also create another SET team to be used for the compute (VM) workload through the Hyper-V switch. On it, I create only one management vNIC for the host (for cluster management, such as Active Directory, Remote Desktop, Windows Admin Center, and Windows PowerShell), and optionally, I also create another vNIC dedicated to backup running on a separate VLAN. Then I set affinity between a virtual NIC and a physical NIC.

    Last, here is a script that I use to create the virtualized adapters on the hosts:

    # Create a new VM Switch with RDMA1 and RDMA3
    New-VMSwitch -Name S2DSwitch -NetAdapterName "RDMA1","RDMA3" -EnableEmbeddedTeaming $true -AllowManagementOS $false
    # Create a virtual NIC called vMGMT. If you change the name, make sure the subsequent lines in this script are changed as well
    Add-VMNetworkAdapter -SwitchName S2DSwitch -Name vMGMT -ManagementOS
    # Rename the NIC to just vMGMT
    Rename-NetAdapter "*vMGMT*" vMGMT
    # Set the VLAN ID for this vNIC. You will need to change the VLAN to match your environment
    Set-VMNetworkAdapterVlan -VMNetworkAdapterName vMGMT -VlanId 9 -Access -ManagementOS
    # Configure standard IP info on the NIC. You will need to change the addresses to match your environment
    Set-NetIPInterface -InterfaceAlias "vMGMT" -DHCP Disabled
    Remove-NetIPAddress -InterfaceAlias "vMGMT" -Confirm:$false
    New-NetIPAddress -InterfaceAlias "vMGMT" -IPAddress 10.10.9.11 -PrefixLength 24 -DefaultGateway 10.10.9.1
    Set-DnsClientServerAddress -InterfaceAlias "vMGMT" -ServerAddresses 10.10.3.79, 10.10.3.77
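
    And for the affinity mentioned in point 2, the mapping looks something like this (the switch, vNIC, and adapter names are only illustrative; adjust them to your environment):

    # Create the two storage vNICs on the storage SET switch, then pin each one to a physical RDMA NIC
    Add-VMNetworkAdapter -SwitchName "StorageSET" -Name vSMB1 -ManagementOS
    Add-VMNetworkAdapter -SwitchName "StorageSET" -Name vSMB2 -ManagementOS
    Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName vSMB1 -ManagementOS -PhysicalNetAdapterName "RDMA2"
    Set-VMNetworkAdapterTeamMapping -VMNetworkAdapterName vSMB2 -ManagementOS -PhysicalNetAdapterName "RDMA4"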

    I am sorry, I can’t provide further support in the comment section, if you need to work on it as a project, please contact me on this page.
    Hope this helps!

