
Microsoft’s NIC Teaming Black Belt #WS2012R2 #WS2016 #WS2019


A blog reader commented on one of my previous posts about the Converged Network Fabric in VMM.

The comment was:

LACP vs. Switch Independent, I know that Microsoft recommends Switch Independent / Dynamic mode, however, I can’t find what their pros and cons are. Network administrators always prefer using LACP.

I recently had a discussion about the same topic with one of my colleagues at work, so I thought this was the right time to talk about NIC teaming options on the host and switch(es) with LACP and Switch Independent mode. NIC teaming in general is a confusing topic for some people.

In today’s blog post, I will deep dive into Microsoft NIC Teaming options starting from Windows Server 2012 and 2012 R2, through 2016, and what’s coming in Windows Server 2019. I always hear people say that Microsoft recommends Switch Independent / Dynamic mode in all cases, and ask why people are still using LACP. The answer is that there are cases where one is a little better than the other and has options the other doesn’t; I will address that by the end of this post.


So without further ado, let’s start from the basics and then move into the advanced topics.

What is NIC Teaming?

NIC Teaming is also referred to by some people as NIC Bonding, or as Load Balancing and Failover (LBFO).

In short, it is the combining of two or more network adapters so that the software above the team perceives them as a single adapter: one pipe that provides both failure protection and bandwidth aggregation.

NIC teaming solutions have historically also provided per-VLAN interfaces for VLAN traffic segregation, and Microsoft’s teaming solution of course does the same thing; I will get to this shortly.

Why use Microsoft’s NIC Teaming?

  • Vendor agnostic – anyone’s NICs can be added to the team.
  • Fully integrated with Windows Server 2012 R2 / 2016 / 2019 / 2022.
  • You can configure your teams to meet your needs.
  • Server Manager-style UI that manages multiple servers at a time.
  • Fully supported by Microsoft, so no more calls to NIC vendors for teaming support or being told to turn off teaming.
  • Many vendors on the market have dropped out of the teaming business.
  • Team management is easy using PowerShell, System Center Virtual Machine Manager, and Server Manager.

Microsoft’s NIC Teaming Vocabulary Lesson

    • pNICs = physical NICs on the host or Network Adapters.
    • tNICs = Team interfaces (Team NICs) exposed by the team.
    • vNICs = Interfaces exposed by the Hyper-V Virtual Switch to the Management OS (host partition).
    • vmNICs = Interfaces exposed by the Hyper-V Virtual Switch into a Virtual Machine.

Four terms to add to your dictionary: pNICs, tNICs, vNICs, and vmNICs.

Teaming Modes / Load Distribution Methods


Teaming Modes

  • Switch independent mode
    • The host is not dependent on the switch (No configuration is required on the physical switch).
    • Protects against adjacent switch failures (an adjacent switch failure does not cause the team to break).
    • Easy to configure, and the hardest one to mess up when you deploy it.


  • Switch dependent modes
    • Static Teaming (or Generic Teaming), which is switch-dependent teaming without a protocol; there is not a lot of value in using switch-dependent teaming without the protocol.
    • LACP (802.1ax teaming, also known as 802.3ad). The 802.1ax is the IEEE standard that defines the Link Aggregation control protocol.
    • Requires configuration of the adjacent switch (very dependent on the configuration of the switch).
    • If you are looking at switch-dependent mode, you should really look at LACP (I will get to Why LACP shortly).
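As a quick sketch of what this looks like in PowerShell (the team and adapter names “Team1”, “NIC1”, and “NIC2” are placeholders; substitute your own), a team is created in either mode with New-NetLbfoTeam:

```powershell
# Switch Independent: no configuration required on the physical switch
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent

# LACP: requires a matching LAG / port channel on the adjacent switch
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode LACP
```

Note that both commands create a team named “Team1”; in practice you would run only one of them.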


Load Distribution Modes

  • Address Hash – comes in 3 flavors
    • 4-tuple hash: (default distribution mode) uses the RSS hash if available; otherwise hashes the TCP/UDP ports and the IP addresses. If ports are not available, it falls back to the 2-tuple hash.
    • 2-tuple hash: hashes the IP addresses. For non-IP traffic, it falls back to the MAC address hash.
    • MAC address hash: hashes the MAC addresses.
    • With Address Hash, all inbound traffic for the host arrives on one NIC. This is not very useful in a Hyper-V case, but quite useful in a native teaming case, because there you generally have only one MAC address visible to the network from the tNIC anyway.
  • Hyper-V Port
    • Hashes on the port number of the Hyper-V switch that the traffic is coming from (All traffic from a given VM to a given team member-only, and of course when you have too many VMs, then multiple VMs will be mapped to each team member).
    • Recommended when using Hyper-V on Windows Server 2012, 2016, 2019, and 2022.
  • Dynamic
    • Recommended when using Hyper-V on Windows Server 2012 R2.
    • Dynamic distribution is Address Hash on the outbound side and Hyper-V Port on the inbound side (are you confused yet? probably).
    • What this means is that outgoing traffic is spread across the team members on a per-flow basis. On the inbound side, the team watches the ARP responses coming from the VMs and manages them in a way that ensures each VM has its incoming traffic mapped to a different team member. For example, with a team of 8 x 1 Gig NICs, the VMs are distributed across all team members for incoming traffic, although each VM is mapped to exactly one NIC for inbound purposes, which means a given VM’s inbound traffic cannot exceed the bandwidth of a single team member. On the outbound side, however, the distribution is per-flow, so a given VM can generate more than one team member’s worth of traffic; its traffic is broken down into flows and distributed across the team members.
    • And because Dynamic is based on flowlet technology, Windows keeps watching for gaps in the flows; after each gap occurs in a flow, it checks whether the flow should continue on the same NIC or whether there is a less-used NIC the flow can be moved to, in order to balance outbound traffic across all of the NICs.
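These distribution modes correspond to the -LoadBalancingAlgorithm parameter of the teaming cmdlets; the Address Hash flavors map to TransportPorts (4-tuple), IPAddresses (2-tuple), and MacAddresses. A minimal sketch (team and adapter names are placeholders):

```powershell
# Create a team with the default 4-tuple Address Hash distribution
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

# Change an existing team to Hyper-V Port, or to Dynamic (2012 R2 and later)
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm HyperVPort
Set-NetLbfoTeam -Name "Team1" -LoadBalancingAlgorithm Dynamic
```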


Teaming Modes and Load Distribution Methods (Summary)



  • Active/Standby is a frequently used mode with NO real value.
  • It is available only in Switch Independent / Address Hash operation.
  • Only one team member can be set to standby.


I like to give the analogy of building a four-lane highway, that is, a freeway with two lanes in each direction, and then taking one lane in each direction out of service until the other lane breaks.


You have already made the infrastructure investment that your company paid for; all of the connections and everything are in place, and yet you are not using half of it because you want it standing by in case the other half breaks.

It makes a lot more sense to use Active/Active, so that you are always using all of the infrastructure you already bought and paid for.
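That said, if you really do need a standby member, it is set per team member with Set-NetLbfoTeamMember (the adapter name “NIC2” is a placeholder):

```powershell
# Put one team member into standby; it carries no traffic until an active member fails
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Standby

# Return the team to Active/Active
Set-NetLbfoTeamMember -Name "NIC2" -AdministrativeMode Active
```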

Windows Server 2012 Switch / Load Interactions


Windows Server 2012 R2 Switch / Load Interactions


Team Interfaces (tNICs)

  • Team interfaces can be in one of two modes:
    • Default mode: passes all traffic that doesn’t match any other team interface’s VLAN id.
    • VLAN mode: passes all traffic that matches the VLAN.
  • Inbound traffic is always passed to at most one team interface.


The Hyper-V team has said loud and clear that if you are using the Hyper-V Virtual Switch on top of a team, the team must only expose the default interface (an interface in default mode) and no others. The Hyper-V Virtual Switch manages all of the VLAN configuration; it’s perfectly capable of that.

  • Team interfaces created after initial team creation must be VLAN mode team interfaces.
  • Team interfaces created after initial team creation can be deleted at any time (using server manager UI or PowerShell). The primary interface cannot be deleted except by deleting the team.
  • It is a violation of Hyper-V rules to have more than one team interface on a team that is bound to the Hyper-V Switch.
  • A team with only one member (one pNIC) may be created for the purpose of disambiguating VLANs.
  • A team of one has no protection against failure (of course).
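A VLAN-mode team interface is added with Add-NetLbfoTeamNic, for example (team name and VLAN ID are placeholders, and again: don’t do this on a team bound to the Hyper-V Virtual Switch):

```powershell
# Add a team interface that passes only traffic tagged with VLAN 42
Add-NetLbfoTeamNic -Team "Team1" -VlanID 42 -Name "Team1 - VLAN 42"
```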

Frequently Asked Questions

  • Any physical Ethernet adapter can be a team member and will work as long as the NIC meets the Windows Logo requirements.
  • Teaming of RDMA adapters is not supported in Windows Server 2012 and 2012 R2, but supported in Windows Server 2016 (I’ll get to this shortly).
  • Teaming of WiFi, WWAN, and similar adapters is not supported.
  • Teams of teams are not supported either.
  • Teaming in a Virtual Machine is supported, but limited to:
    • Switch Independent, Address Hash mode only.
    • Teams of two team members are supported.
    • Intended/optimized to support teaming of SR-IOV Virtual Functions (VFs) but may be used with any interfaces in the VM.
    • Requires configuration of the Hyper-V Virtual Switch port (AllowTeaming must be enabled on the VM’s network adapter), or failovers may cause loss of connectivity.


  • Maximum number of NICs in a team: 32
  • Maximum number of team interfaces: 32
  • Maximum teams in a server: 32
  • Not all maximums may be available at the same time due to other systems constraints.
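For the in-guest teaming scenario above, the missing piece is that teaming must be allowed on the VM’s network adapters from the host. A minimal sketch, assuming a VM named “VM01” (the VM and adapter names are placeholders):

```powershell
# On the Hyper-V host: allow the guest to team its vmNICs
Set-VMNetworkAdapter -VMName "VM01" -AllowTeaming On

# Inside the guest: only Switch Independent / Address Hash is supported
New-NetLbfoTeam -Name "GuestTeam" -TeamMembers "Ethernet","Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
```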

NIC Teaming Manageability

  • Easy-to-use NIC Teaming UI with Server Manager (lbfoadmin.exe)
    • Intuitive and Powerful.
    • UI operates completely through PowerShell – uses PowerShell cmdlets for all operations.
    • Manages Servers (including Server Core) remotely from Windows 8.1, 10, and Windows 11 clients.
  • Powerful PowerShell cmdlets
    • Object: NetLbfoTeam (New, Get, Set, Rename, Remove).
    • Object: NetLbfoTeamNic (Add, Get, Set, Remove).
    • Object: NetLbfoTeamMember (Add, Get, Set, Remove).
  • System Center Virtual Machine Manager
    • Deployment through predefined templates and profiles.
    • Consistent deployment across all hosts.
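To give a feel for those cmdlets (the team name is a placeholder):

```powershell
Get-NetLbfoTeam                       # teams with their teaming mode and load-balancing algorithm
Get-NetLbfoTeamMember -Team "Team1"   # per-pNIC status (Active, Standby, Failed)
Get-NetLbfoTeamNic -Team "Team1"      # team interfaces (tNICs) and their VLAN IDs
Rename-NetLbfoTeam -Name "Team1" -NewName "MgmtTeam"
```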

Which NIC Teaming Mode Shall I Choose?

  • Switch Independent
    • Doesn’t depend on the switch configuration.
    • Balances outbound traffic especially in Dynamic distribution mode.
    • Maps inbound traffic to different team members on a per-vPort basis.
      • Limits inbound traffic to a given vPort to the bandwidth of a single team member (more of an issue in 1G interfaces than 10G, 40G, or 100G interfaces).
  • Switch Dependent (LACP)
    • All interfaces in Link Aggregation Group (LAG) are treated as single pipe except:
      • Packets in the same flow are placed on the same team member to reduce the opportunity for misordering.
    • The host manages outbound packet placement, Switch determines inbound packet placement.
    • LAGs and MLAGs:
      • Many people are using multi-chassis switches (a.k.a. stacked switches).
      • Link Aggregation Group (LAG) versus Multi-chassis Link Aggregation Group (MLAG).
      • Every switch vendor does it differently, and depending on which vendor you are using, you may have better or worse results.


Advantages and Disadvantages (Switch Independent vs. LACP)

  • Switch Independent
    • The main one is (No switch configuration is required).
      • Less opportunity for misconfiguration
    • Does a good job of load spreading.
    • Adequate for the vast majority of deployments.
    • Works with Windows Server 2016 RDMA teaming since the host determines which interface traffic arrives on.
    • Detects cable faults, NIC faults, adjacent switch power issues, etc., but doesn’t detect dead switch port logic. (This is an extremely rare failure: the case where a switch is still electrically alive, but the logic on the port has hung or failed. Switch Independent mode looks at the electrical interface and won’t detect that the switch has quit passing traffic on a given port.)
  • Why LACP
    • Because it maintains a heartbeat between the switch port logic and the host. This heartbeat allows the detection of switch port logic errors or failures: if the switch port logic goes down, it does not process the heartbeat and does not send back a response, so NIC Teaming on the host detects that the switch port is no longer alive and takes that particular link out of the LAG for as long as the switch port is not responding.
    • Allows the switch to load balance ingress flows across the team members.
      • Integrate well with Equal-cost multi-path (ECMP) through Multi-chassis switches.
    • Does not work with Windows Server 2016 RDMA teaming, because stateful offloads like RDMA require all the traffic for that engine to arrive on a given NIC, so LACP won’t work.

What’s New for NIC Teaming in Windows Server 2016/2019?

  • Switch-embedded Teaming (SET) where the teaming will be integrated into the Hyper-V Virtual Switch.
  • SET is an alternative NIC Teaming solution that you can use in environments that include Hyper-V and the Software-Defined Networking (SDN) stack in Windows Server 2016/2019/2022.
  • The Switch-embedded Teaming (SET) mode will support RDMA and SR-IOV teaming.
  • Supporting SDN-switch capabilities (Packet Direct, Converged vNIC, and SDN Quality of Service).
  • The Switch-embedded Teaming (SET) will be limited to:
    • Switch Independent teaming mode only.
    • Dynamic and Hyper-V Port mode only for load distribution.
    • Managed by SCVMM or PowerShell only, but not with Server Manager or (lbfoadmin.exe).
    • Only teams of identical ports (same manufacturer, same model, same firmware and driver) are supported.
    • The switch must be created in SET mode (you cannot change it later).
    • Groups up to eight physical Ethernet network adapters into one or more software-based virtual network adapters.
    • The use of SET is only supported in Hyper-V Virtual Switch in Windows Server 2016/2019. You cannot deploy SET in Windows Server 2012 R2.
  • In another article, I’ll demonstrate how to configure and use Switch-embedded Teaming (SET) in Windows Server 2016.
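As a preview, the sketch below shows the basic shape of a SET deployment (the switch and adapter names are placeholders):

```powershell
# Create a Hyper-V Virtual Switch with embedded teaming over two identical pNICs
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" `
    -EnableEmbeddedTeaming $true

# Adjust the load-balancing algorithm afterwards if needed
Set-VMSwitchTeam -Name "SETswitch" -LoadBalancingAlgorithm HyperVPort
```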

Congratulations! You have completed the Microsoft® Black Belt NIC Teaming Certificate!


Thanks for reading!


About the Author:
Charbel Nemnom
Charbel Nemnom is a Senior Cloud Architect with 21+ years of IT experience. As a Swiss Certified Information Security Manager (ISM), CCSP, CISM, Microsoft MVP, and MCT, he excels in optimizing mission-critical enterprise systems. His extensive practical knowledge spans complex system design, network architecture, business continuity, and cloud security, establishing him as an authoritative and trustworthy expert in the field. Charbel frequently writes about Cloud, Cybersecurity, and IT Certifications.



6 thoughts on “Microsoft’s NIC Teaming Black Belt #WS2012R2 #WS2016 #WS2019”


  1. Hello Charbel,

    Thank you for the article very informative :)

    What’s your recommendation with Windows Server 2019: Switch Embedded Teaming with Switch Independent + Hyper-V Port?

  2. Thank you Ahmed for the feedback! Yes, I recommend using Switch Embedded Team (SET) for Hyper-V deployment, keep the default (Switch Independent + Hyper-V Port).

  3. Very good article and well constructed. But I have a question about the LACP with Address Hash. Is there a way to force the NIC team to use MAC hash only?

  4. Hello Nawar, thanks for the comment!
    You could create the LACP Team using MAC Addresses only using the following command:

    New-NetLbfoTeam -Name "LACPTeam1" -TeamMembers "NIC1-10Gb","NIC2-10Gb" -TeamingMode LACP -LoadBalancingAlgorithm MacAddresses

    ‘MacAddresses’ uses the source and destination MAC addresses to create a hash and then assigns the packets that have the matching hash value to one of the available interfaces. When you specify this algorithm together with the LACP teaming mode, all of the traffic for the host arrives on one NIC. This is not very useful in a Hyper-V case, but quite useful in a native teaming case, because in a native teaming case you generally have only one MAC address visible to the network from the team NIC anyway.

    Additionally, when you configure a teaming mode of LACP, NIC Teaming always operates in LACP’s Active mode. By default, NIC Teaming uses the short timer (‘Fast’, 3 seconds), but you can configure the long timer (‘Slow’, 90 seconds) with Set-NetLbfoTeam:

    Set-NetLbfoTeam -Name "LACPTeam1" -LacpTimer Slow

    The -LacpTimer specifies how often inter-connected devices exchange LACP protocol data units (PDUs) or control messages.

    Hope this helps!

  5. Hi,

    Thanks for the explanation. In SET mode, what is the configuration on the switch side if we want to have MLAG, like vPC?
    Most modern data centers are using leaf-and-spine technology with MLAG, but since the new Windows (SET) doesn’t support LACP, how can we deal with this issue?
    Please advise.

  6. Hello Moe, thanks for the comment!
    You don’t need to configure anything on the switch side except for the Trunk port so Hyper-V VMs can use different VLANs.
    SET supports only switch-independent configuration by using either Dynamic or Hyper-V Port load-balancing algorithms.
    For best performance, Microsoft recommends using Hyper-V Port for use on all NICs that operate at or above 10 Gbps.
    Hope it helps!
