The Available Processor Sets of the Underlying Physical NICs Belonging to the LBFO Team NIC Are not Configured Correctly #HyperV #VMQ #RSS #LBFO


Updated on February 11, 2021


Hello folks,

I have recently upgraded my Hyper-V host and my network infrastructure to 10Gb.

As soon as I moved the Hyper-V virtual switch to the LBFO team with 2X10Gb, I started receiving the following error: "The available processor sets of the underlying physical NICs belonging to the LBFO team NIC are not configured correctly."


If we look deeper in the event log, we can see the reason is self-explanatory.


So what is Sum-of-Queues mode and what is Min-Queues mode?

A while ago, I posted a detailed article on how to enable and configure VMQ/dVMQ on Windows Server 2012 R2 with below-10Gb network adapters.

Please make sure to check the article before you proceed with the resolution.

As a quick recap: in Sum-of-Queues mode the team can use the total number of VMQs across all the physical NICs participating in the team, whereas in Min-Queues mode the team is limited to the minimum number of VMQs among the physical NICs participating in the team.

The question is: why don't we get the same error with 1Gb network adapters? Because on 1Gb network adapters VMQ is disabled by default. Microsoft sees no performance benefit to VMQ on 1Gb NICs, since a single core can keep up with roughly 3.5Gb of throughput without any problem.

If you need to enable VMQ on 1Gb NICs, please refer to this article.

In my scenario, I am using 2X10Gb adapters configured with Switch independent teaming mode and Dynamic as distribution mode.

| Teaming mode ↓ / Distribution mode → | Address Hash modes | Hyper-V Port   | Dynamic        |
| ------------------------------------ | ------------------ | -------------- | -------------- |
| Switch independent                   | Min-Queues         | Sum-of-Queues  | Sum-of-Queues  |
| Switch dependent                     | Min-Queues         | Min-Queues     | Min-Queues     |

If you look at the table above, you can see that I am using Sum-of-Queues mode.

First, we need to check the total number of VMQs for my LBFO team.
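The screenshot of that check is missing from this copy. Assuming the two team members are named Fiber01 and Fiber02 (the names used later in this post), the check would look roughly like this:

```powershell
# Show the VMQ state of both team members: whether VMQ is enabled, which
# processor set each NIC is using, and how many receive queues it exposes.
Get-NetAdapterVmq -Name "Fiber01", "Fiber02" |
    Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues
```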


As you can see, VMQ is enabled (True), but the base processor for both 10Gb adapters is set to 0 and the max processors to 16, so the processor sets overlap. Because the LBFO team is set up for Sum-of-Queues, the network adapters in the team have to use non-overlapping processor sets.

I have one converged virtual switch here, on a team of 2X10Gb NICs with 63 queues per NIC, used both for the vNICs on the host and for the vmNICs of the VMs, so the total number of VMQs for the LBFO team is 126.

You may wonder why 63 queues and not 64. The hardware total in my scenario is 128 (64 on NIC1 + 64 on NIC2), but one VMQ queue per port is reserved by the system, so you see 63 per port and 126 per LBFO team.

Before we start configuring the VMQ for each adapter, we need to determine if Hyper-threading is enabled in the system by running the following cmdlet:
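The original screenshot is not included here, but the check the article describes can be done with a standard CIM query (a sketch, not necessarily the exact cmdlet the author used):

```powershell
# If NumberOfLogicalProcessors is twice NumberOfCores for each socket,
# Hyper-Threading is enabled.
Get-CimInstance -ClassName Win32_Processor |
    Select-Object Name, NumberOfCores, NumberOfLogicalProcessors
```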


As you can see, NumberOfLogicalProcessors is twice NumberOfCores, so my server has two 8-core CPUs with HT enabled, and we can see 32 logical processors in Task Manager.


Let’s start configuring virtual machine queue for both adapters.

Since my team is in Sum-of-Queues mode, the team members' processor sets should be non-overlapping, or overlap as little as possible. For example, in my scenario I have a 16-core host (32 logical processors) with a team of 2X10Gb NICs. I will set NIC1 to a base processor of 0 with a maximum of 8 cores (so this NIC uses logical processors 0, 2, 4, 6, 8, 10, 12, 14 for VMQ), and NIC2 to a base processor of 16 with 8 cores as well (so this NIC uses logical processors 16, 18, 20, 22, 24, 26, 28, 30 for VMQ).

As a best practice, make sure the base processor is not set to 0, because the first core (logical processor 0) is reserved for default (non-RSS and non-DVMQ) network processing.

Let’s open PowerShell and Set-NetAdapterVmq accordingly for each NIC:
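The commands themselves were shown as a screenshot. Based on the processor layout described above (and the Fiber01/Fiber02 names used later in the post), they would be along these lines:

```powershell
# NIC1: start at logical processor 0, use up to 8 cores
# (with HT this maps to LPs 0, 2, 4, 6, 8, 10, 12, 14)
Set-NetAdapterVmq -Name "Fiber01" -BaseProcessorNumber 0 -MaxProcessors 8

# NIC2: start at logical processor 16, use up to 8 cores
# (with HT this maps to LPs 16, 18, 20, 22, 24, 26, 28, 30)
Set-NetAdapterVmq -Name "Fiber02" -BaseProcessorNumber 16 -MaxProcessors 8
```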


Let’s verify now that VMQ is applied:


As you can see now, the BaseVmqProcessor for NIC1 is 0 and the BaseVmqProcessor for NIC2 is 16.

So what have we done here? The 126 queues are spread across the 16 cores: the first NIC in my example has 63 queues, so they can land anywhere from logical processor 0 to 15, and the second NIC's queues from logical processor 16 to 31. Keep in mind that all 16 cores will be used, since I have more queues than cores. However, if you have, for example, 8 queues per NIC, then no more than 8 cores will be used, since there are only 8 queues.

But after I set the VMQ, the error did not go away.



That is because, as I mentioned at the beginning of this article, I am using one converged team for the vmNICs (VMs) and for the vNICs on the host as well.

If we look at RSS on the host, we can see that the base processor for NIC1 is set to 0 and for NIC2 to 16 as well, so the RSS processor sets overlap with the VMQ ones.
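Assuming the same Fiber01/Fiber02 names, the RSS processor sets can be inspected like this:

```powershell
# BaseProcessor/MaxProcessor show the processor range RSS is using for the
# host vNICs; here it overlaps with the VMQ ranges configured earlier.
Get-NetAdapterRss -Name "Fiber01", "Fiber02" |
    Format-List Name, Enabled, BaseProcessor, MaxProcessor, MaxProcessors
```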

As a side note and best practice, you should split the vNICs on the host and the vmNICs of the VMs onto two separate (teamed) physical adapters.


In this case, we will roughly split between RSS and Dynamic VMQ 50/50.

The 16 logical processors on CPU0 (0–15) will be used by RSS. The remaining 16 logical processors of CPU1 (16–31) will be used by DVMQ.

The settings for the two 10Gb NICs will depend again on whether NIC teaming is in Sum-of-Queues mode or in Min-Queues mode. NIC1 (Fiber01) and NIC2 (Fiber02) are in a Switch-Independent and Dynamic mode team, so they are in Sum-of-Queues mode. This means the NICs in the team need to use non-overlapping processor sets. The settings for the two 10Gb NICs are therefore as follows:

Set-NetAdapterRss "Fiber01" -BaseProcessorNumber 0 -MaxProcessors 4
Set-NetAdapterRss "Fiber02" -BaseProcessorNumber 8 -MaxProcessors 4

Set-NetAdapterVmq "Fiber01" -BaseProcessorNumber 16 -MaxProcessors 4
Set-NetAdapterVmq "Fiber02" -BaseProcessorNumber 24 -MaxProcessors 4


Note: per Microsoft, as soon as you bind the Hyper-V virtual switch to the LBFO team, RSS is disabled on the host and VMQ is enabled. In other words, Set-NetAdapterRss actually has no effect here and Set-NetAdapterVmq takes precedence; if we look again, we can see that RSS aligns with VMQ.


Next, you need to reboot your virtual machines for the new settings to take effect, because each vmNIC is assigned one queue when the VM boots.

Last but not least, you can verify this by running Get-NetAdapterVmqQueue, which will show you all the queues assigned across the vmNICs of all VMs on that particular Hyper-V host.
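For example (the output itself was a screenshot and is omitted here):

```powershell
# One row per allocated queue: the owning physical NIC, the queue ID,
# and the VM network adapter (VmFriendlyName) it is assigned to.
Get-NetAdapterVmqQueue | Sort-Object -Property Name, QueueID
```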


Finally, after setting VMQ and RSS correctly on the system, the error disappeared!

Hope this helps.

Enjoy your day!




6 thoughts on “The Available Processor Sets of the Underlying Physical NICs Belonging to the LBFO Team NIC Are not Configured Correctly #HyperV #VMQ #RSS #LBFO”


  1. Hi Charbel,

    I have 4 hosts, each with 2 physical CPUs of 6 cores (24 logical processors in total) and 10Gb NICs.
    I want to configure RSS and VMQ, and from what I know, when you configure VMQ, RSS is automatically disabled.
    I'm a little confused about what the command should look like for my configuration. For example, running Get-NetAdapterVmq shows me MaxProcessors 16.

    • Hello Michael, how are you?

      How many 10GB NICs per host do you have? and how many VMQ queues per interface?

      I will consider the following scenario:
      Each server has the following hardware:
      2 CPUs with 6 physical cores each and 12 logical cores – 24 logical cores in total
      2 x 10GbE NICs with 28 VMQ queues per interface

      Teaming configuration (with Hyper-V switch bound to it)
      • TEAM_HV
      • LBFO team configured in Switch Independent and Dynamic Port mode

      NICS NIC10_C1 and NICS NIC10_C2 are used in this team

      The commands to achieve this are the following:
      Set-NetAdapterVmq -Name "NIC10_C1" -BaseProcessorNumber 2 -MaxProcessors 5 -MaxProcessorNumber 10
      Set-NetAdapterVmq -Name "NIC10_C2" -BaseProcessorNumber 14 -MaxProcessors 5 -MaxProcessorNumber 22

      Hope this helps!

  2. Hi Charbel,

    Sorry that I haven't given you the full information.
    Each server has 2 10Gb NICs configured as a Switch Independent team with Dynamic port mode. I wasn't sure about MaxProcessors; I was thinking of setting it to 6, but it makes sense for it to be 5, since the first cores have to be left out. Is that correct?
    How many VMQ queues per interface? Isn't this dependent on how many cores there are?

    • You could also do this:

      Set-NetAdapterVmq -Name "NIC10_C1" -BaseProcessorNumber 2 -MaxProcessors 6 -MaxProcessorNumber 12
      Set-NetAdapterVmq -Name "NIC10_C2" -BaseProcessorNumber 14 -MaxProcessors 6 -MaxProcessorNumber 24

