In this article, we will show how you can model your logical network in System Center Virtual Machine Manager (SCVMM) to support Converged Network fabric and Switch Embedded Teaming (SET).
Introduction
A converged network is a Windows Server 2012 R2/2016/2019/2022 capability that allows the creation of virtual network adapters, meaning that you can partition a physical NIC into many virtual NICs. As a result, different network traffic types (management, storage, live migration, and virtual machine traffic) can share the same physical adapters, which are usually teamed.
NIC teaming, also known as Load Balancing and Failover (LBFO), is a core element of a converged network fabric. Starting with SCVMM 2016 and Windows Server 2016 and above, you have two ways to do teaming:
> Load Balancing and Failover (LBFO) – Check this article on how to create a Converged Network Fabric in SCVMM the LBFO “old way”. As a side note, Remote Direct Memory Access (RDMA) cards, which offer great performance, cannot be teamed or used as part of a virtual switch built on top of an LBFO team.
> Switch Embedded Teaming (SET) – The focus of this article. Embedded Team or SET is the recommended mode for Windows Server 2016 and above.
Starting with Azure Stack HCI and Windows Server 2022, virtual switches built on top of LBFO teams are deprecated, as officially announced by Microsoft (see features removed or no longer developed starting with Windows Server 2022).
SET is a very powerful technology but has some constraints. The interfaces used must have identical characteristics (manufacturer, model, link speed of 10 Gb or more, and configuration). Furthermore, you cannot create a Switch Embedded Teaming (SET) switch in the Hyper-V Manager console; you need to use Windows Admin Center (WAC), System Center Virtual Machine Manager (SCVMM), or the following PowerShell command:
New-VMSwitch -Name "SETTeam" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true
As a side note, the -EnableEmbeddedTeaming parameter is no longer required when creating a SET switch, because embedded teaming is the default switch teaming mechanism moving forward.
In this guide, we will show you how to create a converged network with SET in SCVMM. Please note that the same steps described in this article will also apply to Azure Stack HCI.
// See Also: Check how to install System Center Virtual Machine Manager 2022 on Windows Server 2022.
Deploying Converged Network with SET in SCVMM
Switch Embedded Teaming (SET) is a NIC teaming solution that is integrated into the Hyper-V virtual switch. SET allows you to group up to eight identical adapters into one or more software-based virtual adapters. While native host LBFO can team adapters with different speeds, SET requires Ethernet adapters of the same model, speed, and firmware. Also, there is no support for switch-dependent modes (LACP, Static), Active/Passive configurations, or Address Hash load distribution.
SET can be configured only with the Hyper-V Port or Dynamic load distribution algorithms. The default LoadBalancingAlgorithm is Dynamic on Windows Server 2016 and Hyper-V Port on Windows Server 2019 and later, and the default BandwidthReservationMode is Weight.
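For reference, on a Hyper-V host you can inspect or change the load-distribution algorithm of an existing SET switch with the built-in Hyper-V cmdlets. Below is a minimal sketch, assuming a SET switch named SETTeam (the name used in the example below):
# Show the team members, teaming mode, and load-balancing algorithm of the SET switch
Get-VMSwitchTeam -Name "SETTeam" | FL Name, NetAdapterInterfaceDescription, TeamingMode, LoadBalancingAlgorithm
# Switch the load-distribution algorithm to Hyper-V Port (Dynamic is the other supported value)
Set-VMSwitchTeam -Name "SETTeam" -LoadBalancingAlgorithm HyperVPort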
BandwidthReservationMode specifies how minimum bandwidth is configured on the SET switch. If Absolute is specified, the minimum bandwidth is expressed in bits per second. If Weight is specified, the minimum bandwidth is a relative value ranging from 1 to 100. If None is specified, minimum bandwidth is disabled on the switch. If Default is specified, the mode is set to the system default (Weight if SR-IOV is not enabled, and None if SR-IOV is configured). You can define all these settings during the logical switch configuration in VMM 2016, VMM 2019, and above (more on this later).
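For reference, when you create a SET switch directly with PowerShell, the bandwidth reservation mode is set at creation time and cannot be changed afterwards. A minimal sketch, reusing the switch and NIC names from the earlier example:
# Create the SET switch with Weight-based minimum bandwidth mode (NIC names are examples)
New-VMSwitch -Name "SETTeam" -NetAdapterName "NIC1","NIC2" -MinimumBandwidthMode Weight
# Verify the embedded teaming state and the bandwidth reservation mode
Get-VMSwitch -Name "SETTeam" | FL Name, EmbeddedTeamingEnabled, BandwidthReservationMode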
A converged NIC can be used with SET or without it (one certified RDMA adapter per host) to provide RDMA capabilities through the host-partition virtual NICs to the end services. Consequently, RDMA adapters can be teamed with SET and can even be part of a virtual switch.
In the diagram below, we have two physical RDMA NICs teamed using the SET feature in Windows Server 2016/2019/2022 to make RDMA fully work in a converged network fabric. Data Center Bridging (DCB) is a Windows Server 2016/2019/2022 feature that provides flow control and the ability to define traffic classes for the different traffic types in order to configure network QoS.

DCB consists of the 802.1 standards that describe Priority-based Flow Control (PFC) and Enhanced Transmission Selection (ETS) algorithms. PFC is used to minimize packet loss when a network is congested, while ETS provides the bandwidth allocation for traffic classes with different priorities.
DCB was limited in Windows Server 2012 R2 because it worked independently of the LBFO team. DCB in Windows Server 2016/2019/2022 now operates cooperatively with the new SDNv2 QoS and SET, removing the previous bandwidth management challenges. In most cases, DCB should be configured for RDMA NICs to enhance the Ethernet protocol and improve RDMA performance.
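For reference, the DCB/PFC configuration for SMB Direct (RDMA) traffic is usually done with the NetQos cmdlets on each host. The following is a minimal sketch only; the adapter names (NIC1, NIC2), the priority value (3), and the bandwidth percentage (50) are assumptions that must match your physical switch configuration:
# Install the Data Center Bridging feature
Install-WindowsFeature -Name "Data-Center-Bridging"
# Tag SMB Direct traffic (port 445) with priority 3
New-NetQosPolicy -Name "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable Priority Flow Control (PFC) only for priority 3 and disable it for all other priorities
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
# Reserve 50% of the bandwidth for SMB traffic using ETS
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
# Apply the QoS settings to the RDMA NICs and ignore DCBX settings propagated by the switch
Enable-NetAdapterQos -Name "NIC1","NIC2"
Set-NetQosDcbxSetting -Willing $false -Confirm:$false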
Step 1: Configuring Logical Networks
A logical network is a container for the elements that define the underlying networking infrastructure, such as a group of IP subnets and/or VLANs. In this section, we will go through the logical network configuration in VMM.
A network site linked to a logical network is a user-defined group of IP subnets, VLANs, or IP subnet/VLAN pairs that is used to organize and simplify network assignments. Logical networks can be used to label networks with different purposes, for traffic isolation, and to provision networks for different types of service-level agreements (SLAs).
For our example, we will assume that the physical servers have 2 x 10 GbE RDMA physical network ports that will be teamed and partitioned to create the following:
- Storage 1: Used for storage traffic between the hyper-converged nodes. This logical network also has an associated IP pool.
- Storage 2: Used for storage traffic between the hyper-converged nodes. This logical network also has an associated IP pool.
- Live Migration: Contains the IP subnet and VLAN for Live Migration traffic. This logical network also has an associated IP pool.
- Host Management: Contains the IP subnet used for host management. This logical network also has an associated IP pool so that VMM can manage IP assignment to hosts.
- Guest VM: Contains the IP subnet used for virtual machines. This logical network also has an associated IP pool so that VMM can manage IP assignment to virtual machines connected to this network.
// Check the following article to see how to create a logical network in Virtual Machine Manager.
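For reference, a logical network and its network site can also be created with the VMM PowerShell cmdlets. The sketch below assumes example names, a sample subnet/VLAN pair, and the default All Hosts host group; adjust these values for your environment:
# Hypothetical example: create the "Storage 1" logical network (one connected network, no isolation)
$logicalNet = New-SCLogicalNetwork -Name "Storage 1" -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $false -UseGRE $false -IsPVLAN $false
# Define an example subnet/VLAN pair and the host group for the network site
$subnetVlan = New-SCSubnetVLan -Subnet "10.10.11.0/24" -VLanID 11
$hostGroup  = Get-SCVMHostGroup -Name "All Hosts"
# Create the network site (logical network definition) scoped to the host group
New-SCLogicalNetworkDefinition -Name "Storage 1_0" -LogicalNetwork $logicalNet -VMHostGroup $hostGroup -SubnetVLan $subnetVlan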
You can use PowerShell to view created logical networks, their network sites, and host groups assigned to them. Use the following PowerShell one-liner:
Get-SCLogicalNetworkDefinition | FT Name, LogicalNetwork, HostGroups

Please note that in VMM, the setting Create logical networks automatically is enabled by default. This means that when you add a Hyper-V host to VMM and no logical network is associated with a physical NIC, VMM will automatically create a logical network. We always keep this setting disabled so that we can manage networking manually after adding hosts to the fabric. This option is located under the Settings workspace | General | Network Settings.

Step 2: Creating an IP Address Pool
Now that you have created the logical networks, you need to carry out the following steps to create an IP address pool for the logical network:
1) Open the Fabric workspace, and then click on the Home tab on the ribbon.
2) Click on Fabric Resources, expand Networking in the Fabric pane to the left, and then click on Logical Networks.
3) In the Logical Networks and IP Pools main pane, click on the logical network for which you want to create the IP address pool (Management, in our example).
4) On the Home tab on the ribbon, click on Create IP Pool.
5) When the Create Static IP Address Pool Wizard opens, on the Name page, type the name and an optional description for the IP address pool, for example, Hosts_Mgmt_Pool.

6) On the Network Site page, click on Use an existing network site.
7) Enter the correct IP address range and click on Next.
8) Enter the correct gateway and metric and click on Next.
9) Enter the correct DNS server address and DNS suffix (if any) and click on Next.
10) Enter the correct WINS (if any) and click on Next.
11) On the Summary page, review the settings and click on Finish.
Repeat steps 1-11 to create IP pools for the remaining logical networks (Live Migration, Guest VM, and so on). They will be used later in this guide.
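Alternatively, the same IP pool can be created with PowerShell. A minimal sketch, assuming a network site named Management_0 and example subnet, range, gateway, and DNS values:
# Hypothetical example: create a static IP pool for the Management network site
$logicalNet = Get-SCLogicalNetwork -Name "Management"
$netSite    = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNet -Name "Management_0"
$gateway    = New-SCDefaultGateway -IPAddress "10.10.10.1" -Automatic
New-SCStaticIPAddressPool -Name "Hosts_Mgmt_Pool" -LogicalNetworkDefinition $netSite -Subnet "10.10.10.0/24" -IPAddressRangeStart "10.10.10.50" -IPAddressRangeEnd "10.10.10.99" -DefaultGateway $gateway -DNSServer "10.10.10.10" -DNSSuffix "contoso.local"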
After completing the logical network and IP pool creation, in the Fabric workspace, expand Networking and then click on Logical Networks to confirm that the logical networks you just created are listed. The following picture shows all logical networks and their IP pools created in our example:

To get the list of assigned addresses and their state in an IP Address Pool, you can use the following PowerShell one-liner:
Get-SCIPAddress | FT Address, AssignedToType, AllocatingAddressPool, State
Step 3: Associating Logical Networks
The next step is to associate the VMM logical network with the physical adapter on the hypervisor host. To proceed with this step, you should have added the Hyper-V server to the host group first.
1) In the Fabric workspace, on the Fabric pane to the left, expand Servers | All Hosts, and then expand your host group.
2) In the Hosts main pane, select the host that you want to configure.
3) In the Host tab on the ribbon, click on Properties (or right-click on the host and click on Properties).
4) In the Host Name Properties dialog box, click on Hardware | Network Adapters and select the physical network adapter to be associated.
5) On the Logical network connectivity page, select the Logical network to associate with the physical adapter, for example, Management:

6) If you plan to use this adapter for communication between the VMM and the host, ensure that Used by management is checked as shown in the figure below. Available for placement is checked by default, meaning that this adapter can be used by virtual machines (VMs). Please note that at least one adapter for communication between the host and the VMM is required.

7) Click on OK to complete.
Repeat these steps on every host of the host group that’s using the same logical network.
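The same association can also be scripted. A minimal sketch, assuming a host named HV01, a physical adapter with the connection name NIC1, and the Management logical network (all example values):
# Hypothetical example: associate a logical network with a physical adapter on a host
$vmHost  = Get-SCVMHost -ComputerName "HV01"
$adapter = Get-SCVMHostNetworkAdapter -VMHost $vmHost | ? { $_.ConnectionName -eq "NIC1" }
$logNet  = Get-SCLogicalNetwork -Name "Management"
Set-SCVMHostNetworkAdapter -VMHostNetworkAdapter $adapter -AddOrSetLogicalNetwork $logNet -UsedForManagement $true -AvailableForPlacement $true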
Here is another PowerShell one-liner that can help you quickly get the list of logical networks mapped to the physical adapters on your hosts:
Get-SCVMHostNetworkAdapter | FT VMHost, Name, LogicalNetworks
Step 4: Creating VM networks
The VM Networks step is not necessary if you selected ‘Create a VM network with the same name to allow virtual machines to access this logical network directly’ while creating the logical networks above.
If you did not, please continue and create VM networks with a 1:1 mapping to the logical networks (in the VMs and Services workspace, under VM Networks) as shown in the figure below.

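For reference, a VM network with a 1:1 mapping to a logical network (no isolation) can also be created with PowerShell. A minimal sketch, assuming a logical network named Guest VM:
# Hypothetical example: create a VM network mapped 1:1 to the "Guest VM" logical network
$logicalNet = Get-SCLogicalNetwork -Name "Guest VM"
New-SCVMNetwork -Name "Guest VM" -LogicalNetwork $logicalNet -IsolationType "NoIsolation"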
Step 5a: Creating Uplink Port Profile
VMM allows you to configure port profiles and logical switches. They work as containers for network adapter capabilities and settings, and by using them, you can apply the configuration to selected adapters instead of configuring those settings on each host network adapter.
Let’s start first by creating the port profiles, and then we will create the port classification, followed by the Logical Switch.
1) In the VMM console, in the Fabric workspace, and on the Fabric pane, under Networking, click on Port Profiles.
2) On the Home tab on the ribbon, click on Create, and then click on Hyper-V Port Profile.
3) In the Create Hyper-V Port Profile Wizard, on the General page, type the port profile name and optionally a description, for example, SET.
4) Select Uplink port profile, and then choose the load-balancing algorithm and teaming mode. As a best practice, we will select Host Default and Switch Independent for Hyper-V workloads. As a side note, Hyper-V Port has been the default load-balancing algorithm for Switch Embedded Teaming (SET) since the Windows Server 2019 release. Click Next.

5) On the Network configuration page, select the network site (you can select more than one). The network sites must have at least one host group in common (in our case, all network sites are assigned to the same host group); otherwise, you will receive an out-of-scope error. A sample configuration is shown in the following screenshot:

6) Click on Next, and then on the Summary page, click on Finish.
Repeat steps 1-6 to create additional uplink port profiles if needed.
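For reference, the same uplink port profile can be created with PowerShell. A minimal sketch, assuming an example profile name and the Management_0 network site used earlier:
# Hypothetical example: create an uplink port profile for the SET logical switch
$netSite = Get-SCLogicalNetworkDefinition -Name "Management_0"
New-SCNativeUplinkPortProfile -Name "SET-Uplink" -Description "Uplink port profile for SET" -LogicalNetworkDefinition $netSite -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent" -EnableNetworkVirtualization $false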
After creating the Hyper-V uplink port profile, you need to assign it to a logical switch. The profile is then made available through the logical switch and can be applied to a network adapter on a host, which keeps the network configuration consistent across the Hyper-V hosts.
Before creating the logical switch, you also need to prepare the virtual adapter port profiles, port classifications, and optionally the switch extensions and managers.
Step 5b: Creating Virtual Port Profile
VMM uses virtual port profiles to define the configuration of the virtual NICs: offload settings, security settings, and bandwidth settings. There are several built-in virtual port profiles (for example, Host Management, Cluster, Live Migration, iSCSI, High Bandwidth Adapter, Medium Bandwidth Adapter, and Low Bandwidth Adapter), but you can also create customized ones.
For the purpose of this demo, we will create three additional virtual port profiles for Storage-SMB1, Storage-SMB2, and Guest-VM.
Here are the different bandwidth settings for each profile that we used:
- Host Management: 10
- Live Migration: 20
- Storage SMB1: 20
- Storage SMB2: 20
- Guest VMs: 30
It is best practice to use the weighted configuration. The total weight of all adapters and the virtual switch default should sum to 100.
1) In the VMM console, in the Fabric workspace, and on the Fabric pane, under Networking, click on Port Profiles.
2) On the Home tab on the ribbon, click on Create and then click on Hyper-V Port Profile.
3) In the Create Hyper-V Port Profile Wizard, on the General page, type the port profile name and optionally a description, make sure Virtual network adapter port profile is selected, and then click on Next.
4) On the Offload Settings page, select the settings you want to enable (if any). For this example, we need to enable virtual machine queue (VMQ), IPsec task offloading, virtual receive side scaling (vRSS), and remote direct memory access (RDMA). Please note that RDMA only applies to host virtual network adapters (such as Storage, Backup, and Live Migration) and not to guest VMs. Click on Next.

5) On the Security Settings page, select the settings you want to allow/enable (if any), such as MAC spoofing, DHCP guard, router guard, guest teaming, and IEEE priority tagging, and then click on Next.

6) On the Bandwidth Settings page, if you want to configure the bandwidth settings, specify Minimum bandwidth (Mbps) or Minimum bandwidth weight, and Maximum bandwidth (Mbps). As mentioned earlier, it is best practice to set up the weighted configuration.

7) On the Summary page, click on the Finish button.
Repeat steps 1-7 to create more virtual port profiles describing the settings of, for example, the Guest VMs virtual network adapter.
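For reference, a virtual port profile can also be created with PowerShell. A minimal sketch, assuming the Storage-SMB1 example profile with a bandwidth weight of 20 (parameter availability may vary by VMM version):
# Hypothetical example: create a virtual port profile for SMB storage traffic
New-SCVirtualNetworkAdapterNativePortProfile -Name "Storage-SMB1" -Description "SMB storage traffic" -EnableVmq $true -EnableRdma $true -EnableVrss $true -MinimumBandwidthWeight 20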
You can use the following PowerShell one-liner to check the default bandwidth weight:
Get-SCVirtualNetworkAdapterNativePortProfile | Sort MinimumBandwidthWeight -Descending | FT Name, MinimumBandwidthWeight

Step 6: Creating Port Classifications
Next, we must also create a port classification to associate with each virtual port profile. When you configure virtual network adapters on a team on a Hyper-V host, you map each network adapter to a classification, which ensures that the configuration in the corresponding virtual port profile is applied.
Please note that port classifications are a description (label) only and do not contain any configuration. This is very useful in a hosting provider deployment, where the tenant/customer only sees the label for their virtual machines, for example, High bandwidth, Medium bandwidth, or Low bandwidth.
1) In the VMM console, in the Fabric workspace, on the Fabric pane, click on Networking and then click on Port Classifications.
2) On the Home tab on the ribbon, click on Create and then click on Port Classification.
3) In the Create Port Classification Wizard window, type the port classification name and optional description and click on OK.

Repeat steps 1-3 to create more port classifications, for example, one describing the Guest VMs settings.
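The equivalent PowerShell is a one-liner. A minimal sketch with an example classification name:
# Hypothetical example: create a port classification (a label only, no settings)
New-SCPortClassification -Name "Storage SMB" -Description "Classification for SMB storage traffic"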
Step 7: Creating a Logical Switch
Starting with VMM 2016, the wizard for creating logical switches was updated. The updates include the Switch Embedded Teaming (SET) uplink mode and the ability to create new uplink port profiles and define virtual network adapters directly from the wizard. The Logical Switch wizard has become more logical and even simpler, providing new features and refined steps for setting it up.
A logical switch is the last piece of the networking fabric in VMM before we apply it to our Hyper-V hosts; it is essentially a container for uplink port profiles and virtual port profiles. To create a logical switch, carry out the following steps:
1) In the VMM console, in the Fabric workspace, on the Fabric pane, click on Logical Switches.
2) Right-click on it and then click on Create Logical Switch.
3) On the Getting Started page, click on Next.
4) In the Create Logical Switch Wizard window, on the General page, type the logical switch name and an optional description, and select the type of teaming in Uplink Mode: Embedded Team is SET, Team is the legacy LBFO/stand-alone teaming, and No Uplink Team means no teaming will be used for the physical adapters. Then click on Next.

5) As mentioned earlier, starting with VMM 2016, you can define a minimum bandwidth mode during the logical switch creation. The Weight mode is recommended and selected by default. If the network adapter supports SR-IOV and you want to enable it, click on Enable Single Root I/O Virtualization (SR-IOV), and then click Next.

6) If you are using virtual switch extensions or your switch is not managed by the Network Controller, select the extensions on the Extensions page, make sure they are listed in the order in which they should be processed, and click on Next.

// Please note that starting with Windows Server 2012 R2 and above, the Hyper-V Extensible Switch allows you to add extensions to the Virtual Switch which can capture, filter or forward traffic. Learn how to enable Microsoft NDIS Capture Extension on the Virtual Switch.
7) On the Virtual Port page, click on Add to add port classifications (each one optionally associated with a virtual network adapter port profile), and click on Next.

8) On the Uplinks page, click on Add to create a New Uplink Port Profile or to use an Existing Uplink Port Profile as described in the previous step. Then, if necessary, add virtual network adapters by clicking New virtual network adapter.
In our case, we defined the virtual NICs for Live-Migration, Storage-SMB1, Storage-SMB2, and Hosts-Mgmt as shown in the following screenshot. If you don’t see the required network sites in the list, check which network sites are supported by the port profile (Fabric | Networking | Port Profiles | double-click on the Port Profile | Network Configuration).
Then check This virtual network adapter will be used for host management and Inherit connection settings from the host network adapter if you have an existing DHCP server in the management network, for example. Otherwise, select Static and define the corresponding IP pool. Please note that only one virtual network adapter can be marked as used for host management.

You need to repeat the same process to create new virtual NICs for Live Migration, Storage SMB, and so on, and ensure they are connected to the right VM networks with the right VLAN, IP pool, and port classification. Click on Next.
9) On the Summary page, review the switch configuration and click on Finish.
Repeat steps 1-9 to create one more logical switch if required.
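For reference, the core of the same configuration can also be scripted. The sketch below creates an empty SET-based logical switch and attaches the uplink port profile created earlier; the parameter values shown are assumptions and may vary by VMM version, and the host virtual network adapters would still need to be defined on top of it (the wizard remains the simpler path):
# Hypothetical example: create a SET logical switch and attach the uplink port profile
$uplink = Get-SCNativeUplinkPortProfile -Name "SET-Uplink"
$logicalSwitch = New-SCLogicalSwitch -Name "SET-Switch" -Description "Converged SET switch" -EnableSriov $false -SwitchUplinkMode "EmbeddedTeamUplink" -MinimumBandwidthMode "Weight"
New-SCUplinkPortProfileSet -Name "SET-Uplink" -LogicalSwitch $logicalSwitch -NativeUplinkPortProfile $uplink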
Step 8: Applying a Logical Switch to the Host
In the last step, we apply the logical switch to the Hyper-V host, Hyper-V cluster, or Azure Stack HCI.
A logical switch helps to ensure that logical networks, VLANs, IP subnets, and other network settings such as port profiles, are consistently assigned to host network adapters.
Take the following steps to add Logical Switch to hosts:
1) In the Fabric workspace, on the Fabric pane, expand Servers | All Hosts and select a host group; then select a host in the Hosts main pane.
2) Right-click on the desired host and click on Properties, and then click on the Virtual Switches tab.
3) In the Virtual Switches tab, click on New Virtual Switch and select New Logical Switch to add the existing (SET) logical switch. If you have two or more logical switches in the fabric, select the right one in the Logical switch drop-down menu. Then, under Physical adapters, check the adapters that you plan to use with the logical switch and, if required, add more network adapters by clicking Add. Finally, review the list of virtual adapters that will be created automatically and click on OK.
As mentioned earlier, LBFO team switches are deprecated for Azure Stack HCI and Windows Server 2022 and above; they will not be available for selection in the drop-down menu.

Please note that when applying the logical switch, the host might temporarily lose network connectivity when VMM creates the switch and host virtual network adapters.
4) In the Jobs workspace, check that the job Change properties of the virtual machine host has completed. The New Host Virtual Network Adapter subtasks in the job indicate the creation of the virtual NICs that you defined during the switch creation, as described in the previous step.
Once you are done, VMM will communicate with its agent on the host and configure NIC teaming with the right configurations and corresponding virtual network adapters.

After applying the logical switch, you can check that host network settings are compliant with the logical switch (Fabric | Networking | Logical Switches and switch to Home | Show | Hosts), or you can use the following PowerShell one-liner:
Get-SCVMHostNetworkAdapter | ? {$_.UplinkPortProfileSet -like "SET"} | FT Name, LogicalNetworkCompliance, LogicalNetworkComplianceErrors
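You can also verify the result locally on the Hyper-V host with the built-in Hyper-V cmdlets. A minimal sketch:
# Confirm the virtual switch was created with embedded teaming enabled
Get-VMSwitch | FT Name, EmbeddedTeamingEnabled, BandwidthReservationMode
# Show the SET team members and the load-balancing algorithm
Get-VMSwitchTeam | FL Name, NetAdapterInterfaceDescription, TeamingMode, LoadBalancingAlgorithm
# List the host (management OS) virtual network adapters created by VMM
Get-VMNetworkAdapter -ManagementOS | FT Name, SwitchName, Status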
That’s it, there you have it!
FAQs
Can I convert LBFO Team to SET?
Yes, Microsoft created a PowerShell tool called Convert-LBFO2SET to help migrate LBFO-based Hyper-V deployments to SET. You can install this tool using the following command:
Install-Module Convert-LBFO2SET
And then run the following PowerShell command to convert LBFO Team to SET. Make sure to replace the name with your own values.
Convert-LBFO2SET -LBFOTeam "LBFOTeamName" -SETTeam "SETSwitchName"
Please note that the Convert-LBFO2SET PowerShell module will remain available until Windows Server 2019 reaches the end of mainstream support on 9 January 2024. The module will be retired sometime after that date. Microsoft is no longer putting new development effort into Convert-LBFO2SET.
Can I create LBFO Team in Windows Server 2022?
The answer is yes and no. As mentioned earlier, starting with Windows Server 2022, it is not possible to use the Hyper-V Manager console to create a virtual switch on top of an LBFO team; you will see an error saying that LBFO has been deprecated. Embedded Team, or SET, is the recommended mode for Windows Server 2016 and above. However, it is still possible to use PowerShell to create an LBFO team and the Hyper-V virtual switch.
First, you need to create the team from your network cards using PowerShell or Server Manager. The command below creates a team named “LBFOTeam1” that consists of two team members named NIC1 and NIC2. The teaming mode is set to LACP and the load-balancing algorithm is set to Dynamic.
New-NetLbfoTeam -Name "LBFOTeam1" -TeamMembers "NIC1","NIC2" -TeamingMode LACP -LoadBalancingAlgorithm Dynamic
Then run the PowerShell command below to create a virtual switch based on the LBFO team created in the previous step. Notice that the parameter -AllowNetLbfoTeams is set to true.
New-VMSwitch -Name "vSwitch" -NetAdapterName "LBFOTeam1" -AllowNetLbfoTeams $true -AllowManagementOS $true
This command creates a Hyper-V virtual switch named “vSwitch” bound to the LBFO team named “LBFOTeam1”, and the management OS (the Hyper-V host) remains accessible through the LBFO team.
If you look at the New-VMSwitch cmdlet in the Microsoft documentation, you will notice that the -AllowNetLbfoTeams parameter is not documented; Microsoft is directing customers to use SET, the recommended solution when using NIC teaming in conjunction with Hyper-V, SDN, Storage Spaces Direct (S2D), and so on.
Conclusion
In this article, we showed you step-by-step how you can model your logical network in System Center Virtual Machine Manager (SCVMM) to deploy a Converged Network fabric using Switch Embedded Teaming (SET).
LBFO remains a teaming solution when Hyper-V is not installed. If, however, you are running virtualized or cloud scenarios like Azure Stack HCI, you should give Switch Embedded Teaming serious consideration.
// See Also: How to extend Azure Arc to System Center Virtual Machine Manager (SCVMM).
Deploying a converged network with Switch Embedded Teaming (SET) in SCVMM is a powerful approach to optimize your network infrastructure and achieve improved performance, reliability, and fault tolerance.
By carefully planning the deployment process, leveraging expert tips, and keeping an eye on network performance, you can successfully implement SET to enhance your organization’s networking capabilities and meet the demands of modern business environments.
__
Thank you for reading my blog.
If you have any questions or feedback, please leave a comment.
-Charbel Nemnom-