Deploying Switch Embedded Teaming (SET) on Hyper-V using PowerShell DSC #PowerShell #DSC #HyperV

In Windows Server 2012 and 2012 R2, Microsoft introduced NIC Teaming natively in the OS. You can team up to 32 NICs through the UI (lbfoadmin) or with PowerShell, defining the load-balancing algorithm and the teaming mode. You build your team first, and then, if you want virtual machines to communicate out through that set of adapters, you go to Hyper-V Manager or PowerShell to create a virtual switch and bind it to the team. Either way, it is a multi-step process: the virtual switch and the team are two separate constructs.

If you want to deep dive into Microsoft NIC Teaming in Windows Server 2012 and 2012 R2, I highly recommend checking my earlier post here.

In today’s blog post, I will show you how to deploy Switch Embedded Teaming (SET) on Windows Server 2016 Hyper-V using PowerShell DSC, but before doing that, I would like to give you a little bit of context around the capabilities of this new teaming mode.

With the release of Windows Server 2016, Microsoft introduced a new teaming approach called Switch Embedded Teaming (SET), which is virtualization aware. How is that different from NIC Teaming? First, it is embedded into the Hyper-V virtual switch. That means a couple of things: there are no team interfaces anymore, you won't be able to build anything extra on top of the team, and you can't set properties on the team itself because it is part of the virtual switch; you set all the properties directly on the vSwitch. SET is targeted at supporting Software Defined Networking (SDN) switch capabilities; it is not the general-purpose, use-everywhere teaming solution that NIC Teaming was intended to be. It is specifically integrated with Packet Direct, converged RDMA vNICs, and SDN-QoS, and it's only supported when using the SDN extension. Packet Direct provides a high-throughput, low-latency packet processing infrastructure.

At the time of writing this article, the following networking features are and are not compatible with SET in Windows Server 2016 Technical Preview 4:

SET is compatible with:

  • Datacenter bridging (DCB).
  • Hyper-V Network Virtualization – NV-GRE and VxLAN are both supported in Windows Server 2016 Technical Preview.
  • Receive-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if any of the SET team members support them.
  • Remote Direct Memory Access (RDMA).
  • SDN Quality of Service (QoS).
  • Transmit-side Checksum offloads (IPv4, IPv6, TCP) – These are supported if all of the SET team members support them.
  • Virtual Machine Queues (VMQ).
  • Virtual Receive Side Scaling (RSS).

SET is not compatible with:

  • 802.1X authentication.
  • IPsec Task Offload (IPsecTO).
  • QoS in host or native OSs.
  • Receive side coalescing (RSC).
  • Receive side scaling (RSS).
  • Single root I/O virtualization (SR-IOV).
  • TCP Chimney Offload.
  • Virtual Machine QoS (VM-QoS).

SET Modes and Settings:

  • Switch-independent teaming mode only.
  • Dynamic and Hyper-V port mode load distributions only.
  • Managed by SCVMM or PowerShell, no GUI.
  • Teams only identical ports (same manufacturer, same driver, same capabilities), e.g., a dual- or quad-port NIC.
  • Switch must be created in SET-mode. (SET can’t be added to existing switch; you cannot change it later).
  • Up to eight physical NICs teamed into one or more software-based virtual network adapters.
  • The use of SET is only supported in Hyper-V Virtual Switch in Windows Server 2016 Technical Preview. You cannot deploy SET in Windows Server 2012 R2.

Turning on this new switch is very simple:
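The screenshot I used here boils down to a single cmdlet. A minimal sketch (the switch name matches the demo later in this post; the adapter names RDMA01/RDMA02 are from my lab, so adjust them to match yours):

```powershell
# Create a virtual switch in SET mode by passing an array of NICs.
# "RDMA01"/"RDMA02" are example adapter names; adjust to your environment.
New-VMSwitch -Name "SET-TEAM01" -NetAdapterName "RDMA01","RDMA02" `
    -EnableEmbeddedTeaming $true -AllowManagementOS $false
```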

One tip: you do not necessarily need to specify –EnableEmbeddedTeaming $true. If the –NetAdapterName parameter is given an array instead of a single NIC, the cmdlet automatically creates the vSwitch in embedded teaming mode. However, if –NetAdapterName is given a single NIC, you can still enable embedded teaming mode by including the flag, and later add another NIC to the team. It's a one-time decision you make at switch creation time.
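The single-NIC case described above looks like this (again, the adapter names are lab examples):

```powershell
# Create the switch with one NIC, but force embedded teaming mode up front...
New-VMSwitch -Name "SET-TEAM01" -NetAdapterName "RDMA01" -EnableEmbeddedTeaming $true

# ...then grow the team later by adding a second NIC to the same switch.
Add-VMSwitchTeamMember -VMSwitchName "SET-TEAM01" -NetAdapterName "RDMA02"
```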

In Windows Server 2012 R2, it was not possible to configure RDMA (Remote Direct Memory Access) on network adapters that are bound to a NIC Team or to a Hyper-V Virtual Switch. This increases the number of physical network adapters that are required to be installed in the Hyper-V host. In Windows Server 2016, you can use fewer network adapters and enable RDMA on network adapters that are bound to a Hyper-V Virtual Switch with or without Switch Embedded Teaming (SET).

The diagram below illustrates the software architecture changes between Windows Server 2012 R2 and Windows Server 2016:

NIC Teaming versus Switch Embedded Teaming (SET) with converged RDMA in Windows Server 2016 Hyper-V [Image Credit: Microsoft]

The goal of moving to Windows Server 2016 is to cut the cost of networking in half. We now have the Hyper-V switch with embedded teaming, and we are doing SMB with RDMA directly to a NIC that is bound to, and managed by, the Hyper-V switch. You can also have another channel from SMB to the other physical NIC (the light green line in the diagram above). Because the RDMA NICs are teamed, SMB can fail the sessions over if we lose a NIC. We can have multiple RDMA clients bound to the same NIC (Live Migration, Cluster, Management, etc.).

The Converged NIC with RDMA allows:

  • Host vNICs to expose RDMA capabilities to kernel processes (e.g., SMB-Direct).
  • With Switch Embedded Teaming (SET), allows multiple RDMA NICs to expose RDMA to multiple vNICs (SMB Multichannel over SMB-Direct).
  • With Switch Embedded Teaming (SET), allows RDMA fail-over for SMB-Direct when two RDMA-capable vNICs are exposed.
  • Operates at full speed with the same performance as native RDMA.

Turning on RDMA on vNICs is also very simple with PowerShell, of course:
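A minimal sketch, assuming a SET switch named SET-TEAM01 already exists and using two illustrative host vNIC names for SMB:

```powershell
# Add two host (ManagementOS) vNICs to the SET switch for SMB traffic.
Add-VMNetworkAdapter -SwitchName "SET-TEAM01" -Name "SMB01" -ManagementOS
Add-VMNetworkAdapter -SwitchName "SET-TEAM01" -Name "SMB02" -ManagementOS

# Enable RDMA on the new vNICs; they surface as "vEthernet (<name>)".
Enable-NetAdapterRdma -Name "vEthernet (SMB01)","vEthernet (SMB02)"

# Verify RDMA is enabled on the vNICs.
Get-NetAdapterRdma
```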

So without further ado, let’s jump into the demo.

I have four Hyper-V nodes deployed and running, but without any configuration yet:

If I query the Virtual Switch for each node, I don’t see any:
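The query itself looks like this (HV-NODE01 through HV-NODE04 are placeholder node names for my lab; substitute your own):

```powershell
# Query the virtual switches on every node; an empty result means none exist yet.
$nodes = "HV-NODE01","HV-NODE02","HV-NODE03","HV-NODE04"
Invoke-Command -ComputerName $nodes -ScriptBlock { Get-VMSwitch }
```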

I will query all network adapters that are available on each host.
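Something along these lines (node names are lab placeholders):

```powershell
# List the physical network adapters on each host.
$nodes = "HV-NODE01","HV-NODE02","HV-NODE03","HV-NODE04"
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Get-NetAdapter | Select-Object Name, InterfaceDescription, Status, LinkSpeed
}
```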

As you can see, each node has 3 NICs, one for Management and two RDMA NICs.

Next, I will install the custom DSC cHyper-V resource module on each node by running the following:
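One possible way to do that, assuming you have the cHyper-V module folder locally (if the module is published to the PowerShell Gallery, Install-Module works as well; node names are lab placeholders):

```powershell
# Placeholder node names; adjust for your environment.
$nodes = "HV-NODE01","HV-NODE02","HV-NODE03","HV-NODE04"

# Copy the cHyper-V DSC resource module into the module path on each node.
foreach ($node in $nodes) {
    Copy-Item -Path ".\cHyper-V" -Recurse -Force `
        -Destination "\\$node\C$\Program Files\WindowsPowerShell\Modules\"
}
```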

Last but not least, I will push the DSC configuration across all nodes and let the fun begin!
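A trimmed-down sketch of what such a configuration might look like. The resource and property names below are illustrative, assuming the cHyper-V module exposes a resource for creating a SET switch; check the module's documentation for the exact schema, and note the node and adapter names are lab placeholders:

```powershell
Configuration SETSwitch {
    # Resource/property names are illustrative; verify against cHyper-V's schema.
    Import-DscResource -ModuleName cHyper-V

    Node $AllNodes.NodeName {
        cSwitchEmbeddedTeam "SET-TEAM01" {
            Name              = "SET-TEAM01"
            NetAdapterName    = "RDMA01","RDMA02"
            AllowManagementOS = $false
            Ensure            = "Present"
        }
    }
}

# Target nodes (placeholders for my lab).
$configData = @{
    AllNodes = @(
        @{ NodeName = "HV-NODE01" }, @{ NodeName = "HV-NODE02" },
        @{ NodeName = "HV-NODE03" }, @{ NodeName = "HV-NODE04" }
    )
}

# Compile the MOF files and push the configuration to all nodes.
SETSwitch -ConfigurationData $configData -OutputPath .\SETSwitch
Start-DscConfiguration -Path .\SETSwitch -Wait -Verbose -Force
```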

And here you go:

Let's validate that the new SET-TEAM01 vSwitch was created on all Hyper-V nodes using PowerShell:
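The validation can be sketched like this (node names are lab placeholders):

```powershell
# Placeholder node names; adjust for your environment.
$nodes = "HV-NODE01","HV-NODE02","HV-NODE03","HV-NODE04"

# Confirm the SET switch and its team members on every node.
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Get-VMSwitch -Name "SET-TEAM01"
    Get-VMSwitchTeam -Name "SET-TEAM01" | Format-List
}
```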


I believe it is so cool how you can deploy, configure, and standardize your configuration across all hosts.

Remember: “Treat your servers like cattle, not like pets…”

Thanks to my fellow MVP Ravikanth for creating this custom DSC resource module.

Until then… Enjoy your day!

Cheers,
~Charbel

