
The Available Processor Sets of the Underlying Physical NICs Belonging to the LBFO Team NIC are not Configured Correctly #HyperV #VMQ #RSS #LBFO


Updated: Please note that configuring the “MaxProcessors” option is no longer required starting with Windows Server 2019, Windows Server 2022, and later versions. Microsoft now recommends letting the system manage it.


Hello folks,

I have recently upgraded my Hyper-V host and my network infrastructure to 10Gb.

As soon as I moved the Hyper-V virtual switch to the LBFO team with 2x10Gb NICs, I started receiving the following error:


If we look deeper into the event log, we can see that the reason is self-explanatory.


So what are Sum-of-Queues mode and Min-Queues mode?

A while ago, I posted a detailed article on How to Enable and Configure VMQ/dVMQ on Windows Server 2012 R2 with below Ten Gig Network Adapters.

Please make sure to check the article before you proceed with the resolution.

As a quick recap: in Sum-of-Queues mode, the team can use the total number of VMQs across all the physical NICs participating in the team, whereas in Min-Queues mode, the team is limited to the minimum number of VMQs of any physical NIC participating in the team.
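To see the numbers those two modes are computed from, you can query the VMQ capability of each team member. A minimal sketch (run in an elevated PowerShell session on the Hyper-V host; adapter names will vary):

```powershell
# Show VMQ support and queue count for every physical adapter.
# Sum-of-Queues adds the NumberOfReceiveQueues values across team members;
# Min-Queues is limited by the smallest value among them.
Get-NetAdapterVmq | Format-Table Name, Enabled, NumberOfReceiveQueues
```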

The question is, why don’t we get the same error with 1Gb network adapters? Because on 1Gb adapters, VMQ is disabled by default: Microsoft doesn’t see any performance benefit to VMQ on 1Gb NICs, and a single core can keep up with roughly 3.5Gb of throughput without any problem.

If you need to enable VMQ on 1Gb NICs, please refer to this article.

In my scenario, I am using 2x10Gb adapters configured with the Switch Independent teaming mode and the Dynamic distribution mode.

Teaming mode ↓ / Distribution mode →   Address Hash   Hyper-V Port    Dynamic
Switch independent                     Min-Queues     Sum-of-Queues   Sum-of-Queues
Switch dependent                       Min-Queues     Min-Queues      Min-Queues

If you look at the table above, you can see that I am using Sum-of-Queues mode.

First, we need to check the total number of VMQs for my LBFO team.


As you can see, VMQ is enabled (True), but both 10Gb adapters have their base processor set to 0 and their max processors set to 16, so the processor sets are overlapping. Because the LBFO team is set up for Sum-of-Queues, the network adapters in the team have to use non-overlapping processor sets.

In this example, I have one converged virtual switch on a team of 2x10Gb NICs, with 63 queues per NIC used for vNICs on the host and vmNICs for VMs, so the total number of VMQs for the LBFO team is 126.

You may wonder why 63 queues and not 64. In my scenario the hardware exposes 128 queues (64 on NIC1 + 64 on NIC2), but one VMQ queue per port is reserved by the system, so you’ll see 63 per port and 126 per LBFO team.

Before we start configuring the VMQ for each NIC adapter, we need to determine if Hyper-threading is enabled in the system by running the following cmdlet:
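The check itself is a one-liner along these lines (a sketch; on newer hosts `Get-CimInstance` is the preferred equivalent):

```powershell
# If NumberOfLogicalProcessors is twice NumberOfCores per socket,
# Hyper-Threading is enabled.
Get-WmiObject -Class Win32_Processor |
    Select-Object Name, NumberOfCores, NumberOfLogicalProcessors
```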


As you can see, NumberOfLogicalProcessors is twice NumberOfCores, so my server has two 8-core CPUs with Hyper-Threading enabled; we can see 32 logical processors in Task Manager.


Let’s start configuring the virtual machine queue for both adapters.

Since my team is in Sum-of-Queues mode, the team members’ processor sets should not overlap, or should overlap as little as possible. For example, in my scenario I have a 16-core host (32 logical processors) with a team of 2x10Gb NICs. I will set NIC1 to use a base processor of 0 with a maximum of 8 processors (so this NIC would use logical processors 0, 2, 4, 6, 8, 10, 12, 14 for VMQ); NIC2 will use a base processor of 16 with a maximum of 8 processors as well (so this NIC would use logical processors 16, 18, 20, 22, 24, 26, 28, 30 for VMQ).

As a best practice, make sure the base processor is not set to 0, because the first core (logical processor 0) is reserved for default (non-RSS and non-DVMQ) network processing. On Windows Server 2019 and later this is no longer strictly necessary, since the dynamic algorithm has changed and will move workloads away from a burdened core 0; however, it is still a good idea in case of a driver bug.

Let’s open PowerShell and Set-NetAdapterVmq accordingly for each NIC:
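For the scenario described above (base processors 0 and 16, 8 processors each), the commands would look like this; the adapter names "Fiber01" and "Fiber02" are placeholders for your own team members:

```powershell
# Non-overlapping processor sets for a Sum-of-Queues team.
# ("Fiber01"/"Fiber02" are example names - check yours with Get-NetAdapter.)
Set-NetAdapterVmq -Name "Fiber01" -BaseProcessorNumber 0  -MaxProcessors 8
Set-NetAdapterVmq -Name "Fiber02" -BaseProcessorNumber 16 -MaxProcessors 8
```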


Let’s verify now that VMQ is applied:
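A quick way to verify, as a sketch:

```powershell
# BaseVmqProcessor should now differ per NIC (0:0 vs 0:16 in this example)
Get-NetAdapterVmq |
    Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues
```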


As you can see now, the BaseVmqProcessor for NIC1 is 0 and the BaseVmqProcessor for NIC2 is 16.

So what we have done in this case: the 126 queues are spread across the 16 cores. The first NIC in my example has 63 queues, which can spread anywhere from processor 0 to 15, and the second NIC from processor 16 to 31. Keep in mind that all 16 CPUs will be used, since I have more queues than CPUs. However, if you have, for example, 8 queues per NIC, then no more than 8 CPUs will be used, since there are only 8 queues.

But after I set the VMQ, the error did not go away.



As I mentioned at the beginning of this article, I am using one converged team for vmNICs (VMs) and for vNICs on the host as well.

If we look at RSS on the host, we can see that the base and max processors for NIC1 are set to 0 and for NIC2 to 16 as well, so the processor sets overlap with VMQ.
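You can inspect the host-side RSS processor sets like this (a sketch; adapter names will vary):

```powershell
# Compare the RSS processor range with the VMQ range to spot overlap
Get-NetAdapterRss |
    Format-List Name, BaseProcessorNumber, MaxProcessorNumber, MaxProcessors
```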

As a side note and best practice, you should split the host vNICs from the VM vmNICs onto two separate (teamed) physical adapters.


In this case, we will roughly split between RSS and Dynamic VMQ 50/50.

The 16 logical processors on CPU0 (0–15) will be used by RSS. The remaining 16 logical processors of CPU1 (16–31) will be used by DVMQ.

The settings for the two 10Gb NICs will again depend on whether the NIC team is in Sum-of-Queues mode or Min-Queues mode. NIC1 (Fiber01) and NIC2 (Fiber02) are in a Switch Independent and Dynamic mode team, so they are in Sum-of-Queues mode. This means the NICs in the team need to use non-overlapping processor sets, so the settings for the two 10Gb NICs are as follows:

Set-NetAdapterRss "Fiber01" -BaseProcessorNumber 0 -MaxProcessors 4
Set-NetAdapterRss "Fiber02" -BaseProcessorNumber 8 -MaxProcessors 4

Set-NetAdapterVmq "Fiber01" -BaseProcessorNumber 16 -MaxProcessors 4
Set-NetAdapterVmq "Fiber02" -BaseProcessorNumber 24 -MaxProcessors 4


Note: According to Microsoft, as soon as you bind the Hyper-V virtual switch to the LBFO team, RSS is disabled on the host and VMQ is enabled. In other words, Set-NetAdapterRss actually has no effect and Set-NetAdapterVmq takes precedence; therefore, if we look again, we can see that RSS aligns with VMQ.


Next, you need to reboot your virtual machines for the new settings to take effect, because each vmNIC is assigned one queue when the VM boots.

Last but not least, you can verify this by running Get-NetAdapterVmqQueue, which shows all the queues assigned across the vmNICs for all VMs on that particular Hyper-V host.
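A sketch of that verification:

```powershell
# One queue per vmNIC of each running VM, plus the processor it landed on
Get-NetAdapterVmqQueue |
    Format-Table Name, QueueID, MacAddress, Processor, VmFriendlyName
```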


Finally, after setting VMQ and RSS correctly on the system, the error disappeared!

Hope this helps.

Enjoy your day!



26 thoughts on “The Available Processor Sets of the Underlying Physical NICs Belonging to the LBFO Team NIC are not Configured Correctly #HyperV #VMQ #RSS #LBFO”


  1. Hi Charbel,

    I have 4 hosts, each with 2 physical CPUs of 6 cores (24 logical processors in total) and 10Gb NICs.
    I want to configure RSS and VMQ, and from what I know, when configuring VMQ, RSS is automatically disabled.
    I’m a little bit confused about how the command should look for my configuration. For example, running Get-NetAdapterVmq shows me MaxProcessors 16.

  2. Hello Michael, how are you?

    How many 10Gb NICs per host do you have, and how many VMQ queues per interface?

    I will consider the following scenario:
    Each server has the following hardware:
    2 CPUs with 6 physical cores each and 12 logical cores (24 logical cores in total)
    2 10GbE NICs with 28 VMQ queues per interface

    Teaming configuration (with Hyper-V switch bound to it)
    • TEAM_HV
    • LBFO team configured in Switch Independent and Dynamic Port mode

    NICS NIC10_C1 and NICS NIC10_C2 are used in this team

    The commands to achieve this are the following:
    Set-NetAdapterVmq -Name “NIC10_C1” -BaseProcessorNumber 2 -MaxProcessors 5 -MaxProcessorNumber 10
    Set-NetAdapterVmq -Name “NIC10_C2” -BaseProcessorNumber 14 -MaxProcessors 5 -MaxProcessorNumber 22

    Hope this helps!

  3. Hei Charbel,

    Sorry that I haven’t given you the full information.
    Each server has 2 10Gb NICs configured as a Switch Independent team with Dynamic port mode. I wasn’t sure about MaxProcessors; I was thinking of putting it at 6, but it makes sense for it to be 5, since the first cores have to be left out. Is that correct?
    How many VMQ queues per interface? Isn’t this dependent on how many cores there are?

  4. You could also do this:

    Set-NetAdapterVmq -Name “NIC10_C1” -BaseProcessorNumber 2 -MaxProcessors 6 -MaxProcessorNumber 12
    Set-NetAdapterVmq -Name “NIC10_C2” -BaseProcessorNumber 14 -MaxProcessors 6 -MaxProcessorNumber 24

  5. Hello Charbel,

    Thanks for the great topic

    Could you please help me? I have MaxProcessors 32 and two CPUs, each with 16 cores and 64 threads.

  6. Hello Ahmed, thanks for the feedback.
    In your case, for two NICs with Switch-Independent and Dynamic mode + Switch Embedded Team (Sum-of-Queues mode):
    Set-NetAdapterRss "Fiber01" -BaseProcessorNumber 1 -MaxProcessors 8
    Set-NetAdapterRss "Fiber02" -BaseProcessorNumber 17 -MaxProcessors 8
    Set-NetAdapterVmq "Fiber01" -BaseProcessorNumber 34 -MaxProcessors 8
    Set-NetAdapterVmq "Fiber02" -BaseProcessorNumber 50 -MaxProcessors 8
    Let me know if this works for you. Thank You!

  7. Hi Charbel,
    In other blog posts it’s stated – though I can’t validate this one way or another – that 2019 Hyper-V you don’t need the -maxprocessors flag.

    As you pointed out that the first physical core is used by the Hypervisor we thus exclude 0 and 16

    However when doing this, we are still getting the ‘overlap’ errors.

    Additionally, per your suggestion of ‘maxprocessors’ – when this is tried on 2019 it doesn’t allow ‘6’ or ‘7’ – it only allows 0,2,4,8,16

    So on a 2-cpu 8c/16t (16c/32t) server, we run two X520’s in an LBFO independent team, and use the following:
    Set-NetAdapterVmq -Name HyperV01 -BaseProcessorNumber 2
    Set-NetAdapterVmq -Name HyperV02 -BaseProcessorNumber 18

    Now, by the logic of physical vs. logical, IF we were to put a ‘max’ in here we can’t use 8 – that would overlap NIC1 into CPU 16, and NIC2 would go beyond the available processors.

    Even if we were to set NIC2 as base 16 we’d not be able to set NIC1 to Base:2 max:7 as it doesn’t accept 7 or even 6 – then we’d be forced to do max:4 which would be wasting cores.

    There’s no documentation from Microsoft I can find anywhere on this, which only makes things worse.

    Would love any insight you might have.

  8. Thank you Ryan for the comment! Yes, you can only set ‘MaxProcessors’ to even processor numbers (e.g. 2, 4, 6, 8, 10, 12, 16).
    Are you using Switch Embedded Team (SET), or LBFO Team on Hyper-V?
    In your case, you have 2-CPU with 8 Cores (so 16 cores in total) without Hyper-threading.
    Could you please try the following and let me know if it works for you:
    Set-NetAdapterVmq "HyperV01" -BaseProcessorNumber 1 -MaxProcessors 6
    Set-NetAdapterVmq "HyperV02" -BaseProcessorNumber 9 -MaxProcessors 6
    Thank You!

  9. Unfortunately, it does not. My assumption here is that this is a 2019 issue – or possibly that this is a Hyper-V 2019 issue vs Server Standard 2019 Core w/hyper-v enabled?

    Set-NetAdapterVmq : No matching keyword value found. The following are valid keyword values: 1, 2, 4, 8, 16
    At line:1 char:1
    + Set-NetAdapterVmq "HyperV01" -BaseProcessorNumber 1 -MaxProcessors 6

    So not just ‘even’ but double the previous value.

    I really wish Microsoft would do better at documenting these sorts of performance-impacting qualifiers, or just automate it in some fashion.

    Also, is the Max required or not? It accepts the input, but another blog says that in 2019 that isn’t a required switch; if that were the case I can’t help but ask myself why they’d have kept the switch if it was not required.

  10. Thank you Ryan for the update. Quick question, are you using Switch Embedded Team (SET), or LBFO Team on this Hyper-V host? Microsoft recommends using Switch Embedded Teaming (SET) as the default teaming mechanism whenever possible, particularly when using Hyper-V.
    Can you try to set the RSS instead and see what you get?
    Set-NetAdapterRss "HyperV01" -BaseProcessorNumber 1 -MaxProcessors 6
    Set-NetAdapterRss "HyperV02" -BaseProcessorNumber 9 -MaxProcessors 6
    The -MaxProcessorNumber parameter is not required anymore with Windows Server 2019 (The system manages this now). On the other hand, configuring the -MaxProcessors is optional and unnecessary due to the enhancements in the default queue implemented in Windows Server 2016. You may still choose to do this if you’re limiting the queues as a rudimentary QoS mechanism.

  11. Sorry for not answering that the first time around!
    This is an LBFO team – we have not done SET as of yet, mostly because this was how it was done before I got there, and change is difficult to implement; I’ve only just gotten buy-in on using vNICs on one team for management/Hyper-V switching/backups instead of a 2nd or 3rd physical LBFO team for each ‘purpose’. I come from a VMware background where I’m used to a bit more of a converged attitude.

    That having been said, I want to ensure that we’re not hampering our Hyper-V by overloading the LBFO queues with a lot of I/O on the CPUs due to network traffic. This specific server, for what it’s worth, still has management on its own 1Gb LBFO team (2Gb with 2x1Gb ports, really), but the Hyper-V switch is attached to the virtual NIC created by this LBFO team of 2 NICs.

    These are all Intel X520 SFP+ ports, one onboard dual port and one PCIe dual port.

    That having been said, interestingly the VSS doesn’t work either but when I look at the RSS info it looks like it ‘might’ be doing things correctly:

    Name : HyperV01
    InterfaceDescription : Intel(R) Ethernet 10G 4P X520/I350 rNDC #2
    Enabled : True
    NumberOfReceiveQueues : 128
    Profile : Closest
    BaseProcessor: [Group:Number] : 0:1
    MaxProcessor: [Group:Number] : 0:30
    MaxProcessors : 8
    RssProcessorArray: [Group:Number/NUMA Distance] : 0:2/0 0:4/0 0:6/0 0:8/0 0:10/0 0:12/0 0:14/0 0:16/32767
    0:18/32767 0:20/32767 0:22/32767 0:24/32767 0:26/32767
    0:28/32767 0:30/32767
    IndirectionTable: [Group:Number] :

    Name : HyperV02
    InterfaceDescription : Intel(R) Ethernet 10G 2P X520 Adapter
    Enabled : True
    NumberOfReceiveQueues : 128
    Profile : Closest
    BaseProcessor: [Group:Number] : 0:18
    MaxProcessor: [Group:Number] : 0:30
    MaxProcessors : 7
    RssProcessorArray: [Group:Number/NUMA Distance] : 0:18/32767 0:20/32767 0:22/32767 0:24/32767 0:26/32767
    0:28/32767 0:30/32767
    IndirectionTable: [Group:Number] :

    So in this it looks like Hyper-V is telling the 2nd NIC to ONLY use 7 at max – but it also shows conflicts with NIC1 on ports 18/22/24/26/28/30 still from what that looks like anyways.

    Name InterfaceDescription Enabled BaseVmqProcessor MaxProcessors NumberOfReceive
    —- ——————– ——- —————- ————- —————
    HyperV Microsoft Network Adapter Mu…#2 True 0:0 62
    HyperV01 Intel(R) Ethernet 10G 4P X52…#2 True 0:1 8 31
    HyperV02 Intel(R) Ethernet 10G 2P X520 … True 0:18 8 31

    I appreciate your insight and expertise on this matter!

  12. Thanks, Ryan for the update! Unfortunately, the LBFO team is not tuned to get the Network acceleration benefits that Microsoft implemented in Windows Server 2019. And all the fine-tuning that we used to do in the earlier Windows Server release is gone with Windows Server 2019 + SET. With Windows Server 2019 and Dynamic VMMQ, we can now automatically move queues on an overburdened processor to other processors that aren’t doing as much work. Now workloads will have a more consistent and performant experience.
    I strongly recommend moving to SET if possible. Please note that you can do this without downtime by taking one NIC at a time out of the LBFO team and adding it to the new SET team, and so on.
    And now back to your current LBFO configuration. Could you please do this:
    Set-NetAdapterRss "HyperV01" -BaseProcessorNumber 2 -MaxProcessorNumber 14 -MaxProcessors 13
    Set-NetAdapterRss "HyperV02" -BaseProcessorNumber 18 -MaxProcessorNumber 30 -MaxProcessors 13
    Get-NetAdapterRss "HyperV01" | Select Name, Enabled, MaxProcessorNumber, MaxProcessors
    Get-NetAdapterRss "HyperV02" | Select Name, Enabled, MaxProcessorNumber, MaxProcessors
    Hope this helps!

  13. Hello Charbel,

    What’s your recommendation, SET or LBFO, for 4x 1Gb network adapters (Windows Server 2019)?

    And for 10Gb adapters you recommend using SET, right?

    Thanks in advance for your help

  14. Hello Ahmed. Yes, I recommend using SET for 1GB and 10GB adapters as long as the workload you are running is Hyper-V on Windows Server 2019.

  15. I am a little confused why it is using the same CPU for all of your VMQs, either 16 or 24. Should it not use various CPUs?
    Also, why would you not use the first 16 CPUs? By only using 16 and higher, I believe you are allocating the VMQs only on the second NUMA node, which will need to copy all packets across for VMs running on the first NUMA node.

    Here is a snippet of my Get-NetAdapterVMQQueue looks like, notice that it is using a different processor for each VM:
    LAN 2 00-15-5D-4A-66-09 0:10 vNOC-3
    LAN 3 00-15-5D-0A-F1-08 0:14 DEVTFS
    LAN 4 00-15-5D-0A-F1-0C 0:12 LUCYTEST
    LAN 5 00-15-5D-0A-F6-03 12 0:2 QA3
    LAN 6 00-15-5D-0A-F1-1B 0:8 SERVER1
    LAN 7 00-15-5D-0A-F1-17 0:2 AD1

  16. Thank you Brian for the comment!
    May ask, which Windows Server version are you using and which teaming mode, SET or LBFO?

    This article was written based on Windows Server 2012 R2 and applies only to LBFO teams. Microsoft has changed and improved the VMQ algorithm a lot in 2016 and 2019.
    In Windows Server 2016, you are no longer required to set the processor arrays with Set-NetAdapterVMQ or Set-NetAdapterRSS. I’ve been asked if you still can configure these settings if you have a desire to, and the answer is yes. However, the scenarios when this is useful are few and far between. For general use, this is no longer a requirement.

    As noted in the article, as soon as you bond the Hyper-V Virtual Switch to the LBFO team, the RSS will be disabled on the host and VMQ will be enabled. In other words, the Set-NetAdapterRss actually does not have any effect and the Set-NetAdapterVmq will take precedence.

    You could use the following VMQ commands if you still need to set the processor arrays (this example is based on my CPU system described in this article):
    Set-NetAdapterVmq "NICName1" -BaseProcessorNumber 2 -MaxProcessors 7
    Set-NetAdapterVmq "NICName2" -BaseProcessorNumber 18 -MaxProcessors 7
    Hope this helps!

  17. Hello Charbel,

    very good article but not an easy topic.
    According to various blogs, the MaxProcessor is no longer necessary for Server 2019.
    We have a cluster with 4 10GB NICs, 2 for iSCSI, and 2 for LAN (cluster, management, and VMLAN, migration).
    The 2 LAN NICs were combined via a Switch Embedded Team, VMQ enabled, the server has Two 8-core CPUs and HT is enabled.

    Name Enabled BaseVmqProcessor MaxProcessors NumberOfReceiveQueues
    LAN-1 True 0:0 16 93
    LAN-2 True 0:0 16 189

    What is the correct VMQ setting for this setup?

    Thank you in advance for your help.

  18. Hello Andre, thanks for the feedback!
    Yes, you are right. I have noted this in the article as well: the MaxProcessors option is no longer required starting with Windows Server 2019, Windows Server 2022, and later versions. Microsoft now recommends letting the system manage it.
    If you are using Switch Embedded Team (SET) on Windows Server 2019/2022, I’ll leave it as default. With Windows Server 2019 and Dynamic VMMQ, Microsoft can now automatically move queues on an overburdened processor to other processors that aren’t doing as much work. Now workloads will have a more consistent and performant experience.
    Hope this helps!

  19. Hello André,
    Did you check your firmware and driver version? Most often this issue is due to firmware or driver for the NIC card.
    If the issue did not resolve, then try to set VMQ manually.
    I would try the following based on your environment:
    Set-NetAdapterVmq "LAN-1" -BaseProcessorNumber 2 -MaxProcessors 6
    Set-NetAdapterVmq "LAN-2" -BaseProcessorNumber 10 -MaxProcessors 6
    Hope this helps!

  20. Hello Charbel,

    MaxProcessors 6 isn’t allowed, only 1,2,4,8,16,32.
    Firmware and driver are both the latest, 20.5.x, Intel X710 on a Dell R7525, but in the event log there are entries for a mismatch:
    (Intel(R) Ethernet Converged Network Adapter X710 #2:
    The driver for the device detected a newer version of the NVM image than expected. Please install the most recent version of the network driver.) Strange… The “Failed to move RSS queue” error is displayed only after VM migration.

  21. Thank you André for the update,
    Could you please try to set with 8 instead?
    Set-NetAdapterVmq "LAN-1" -BaseProcessorNumber 2 -MaxProcessors 8
    Set-NetAdapterVmq "LAN-2" -BaseProcessorNumber 10 -MaxProcessors 8
    I believe that you have installed the latest update for Windows Server 2019 as well.
    If the issue still persists, I would suggest logging a support case with Microsoft and/or with the OEM hardware provider. Assuming that the hardware (NIC) is certified and supports Windows Server 2019.
    Hope this helps!

  22. Hello Charbel,

    I was able to solve the problem. The error only occurred with VMs that were migrated from a single Hyper-V (2016 without SET).
    VmmqEnabled was set to False on the NIC of these VMs. Enabling the feature fixed the problem, which always occurred after a live migration of a corresponding VM.

