The Available Processor Sets of the Underlying Physical NICs Belonging to the LBFO Team NIC Are not Configured Correctly #HyperV #VMQ #RSS #LBFO

Updated on April 7, 2021

Hello folks,

I have recently upgraded my Hyper-V host and my network infrastructure to 10Gb.

As soon as I moved the Hyper-V virtual switch to the LBFO team with 2x10Gb, I started receiving the following error:

[Screenshot LBFO-ID106-01: Event ID 106 warning logged on the Hyper-V host]

If we look deeper into the event log, we can see that the reason is self-explanatory.

[Screenshot LBFO-ID106-02: event details showing why the processor sets are misconfigured]

So what is Sum-Queues mode, and what is Min-Queues mode?

A while ago, I posted a detailed article on How To Enable and Configure VMQ/dVMQ on Windows Server 2012 R2 with below Ten Gig Network Adapters.

Please make sure to check the article before you proceed with the resolution.

As a quick recap, Sum-of-Queues mode uses the total number of VMQs across all the physical NICs participating in the team, whereas Min-Queues mode uses the minimum number of VMQs among the physical NICs participating in the team. For example, with two NICs exposing 64 queues each, Sum-of-Queues gives you 128 queues, while Min-Queues gives you 64.

The question is: why don't we get the same error with 1Gb network adapters? Because on 1Gb network adapters, VMQ is disabled by default. Microsoft doesn't see any performance benefit to VMQ on 1Gb NICs, since a single core can keep up with roughly 3.5Gb of throughput without any problem.

If you need to enable VMQ on 1Gb NICs, please refer to this article.
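
For quick reference, this is typically done through a registry value. A minimal sketch follows; the exact key is my assumption here, so verify it against the linked article before relying on it:

# Assumed key: enables VMQ on below-10Gb adapters (a reboot is required afterwards).
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\VMSMP\Parameters" -Name "BelowTenGigVmqEnabled" -Value 1 -Type DWord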

In my scenario, I am using 2x10Gb adapters configured with the Switch Independent teaming mode and the Dynamic distribution mode.
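
You can confirm your team's teaming and distribution modes with a quick query; a minimal sketch:

# Should report SwitchIndependent / Dynamic for the scenario described here.
Get-NetLbfoTeam | Format-List Name, TeamingMode, LoadBalancingAlgorithm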

| Teaming mode ↓ \ Distribution mode → | Address Hash modes | Hyper-V Port | Dynamic |
| ------------------------------------ | ------------------ | ------------- | ------------- |
| Switch independent | Min-Queues | Sum-of-Queues | Sum-of-Queues |
| Switch dependent | Min-Queues | Min-Queues | Min-Queues |

If you look at the table above, you can see that I am in Sum-of-Queues mode.

First, we need to check the total number of VMQs for my LBFO team.

[Screenshot LBFO-ID106-03: VMQ settings for both team members]
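
The screenshot shows output along these lines; a minimal sketch of the query (the exact command isn't reproduced in the post):

Get-NetAdapterVmq | Format-Table Name, Enabled, BaseVmqProcessor, MaxProcessors, NumberOfReceiveQueues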

As you can see, VMQ is enabled (True), but the base processor for both 10Gb adapters is set to 0 and the max processors to 16, so the processor sets overlap. Because the LBFO team is in Sum-of-Queues mode, the network adapters in the team have to use non-overlapping processor sets.

Here I have one converged virtual switch on top of the 2x10Gb LBFO team, with 63 queues per NIC, used both for the vNICs on the host and for the vmNICs of the VMs; the total number of VMQs for the LBFO team is therefore 126.

You may wonder why 63 queues and not 64. Each NIC actually exposes 64 queues (64 on NIC1 + 64 on NIC2 = 128 in total), but one VMQ queue is reserved by the system on each port, so you'll see 63 per port and 126 per LBFO team.

Before we start configuring the VMQ for each adapter, we need to determine whether Hyper-Threading is enabled on the system by running the following cmdlet:

[Screenshot LBFO-ID106-04: CPU core and logical processor counts]
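
A common way to run this check; a minimal sketch (not necessarily the exact cmdlet from the screenshot):

# Hyper-Threading is enabled when NumberOfLogicalProcessors is double NumberOfCores.
Get-WmiObject -Class Win32_Processor | Select-Object Name, NumberOfCores, NumberOfLogicalProcessors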

As you can see, NumberOfLogicalProcessors is twice NumberOfCores, so Hyper-Threading is enabled. My server has two 8-core CPUs, and we can see 32 logical processors in Task Manager.

[Screenshot LBFO-ID106-05: Task Manager showing 32 logical processors]

Let’s start configuring the virtual machine queue for both adapters.

Since my team is in Sum-of-Queues mode, the team members' processor sets should be non-overlapping, or overlap as little as possible. For example, in my scenario I have a 16-core host (32 logical processors) with a team of 2x10Gbps NICs. I will set NIC1 to use base processor 0 with 8 max processors (so this NIC would use processors 0, 2, 4, 6, 8, 10, 12, 14 for VMQ); NIC2 would be set to use base processor 16, also with 8 max processors (so this NIC would use processors 16, 18, 20, 22, 24, 26, 28, 30 for VMQ). Only even-numbered logical processors are used because Hyper-Threading is enabled.

As a best practice, please make sure the base processor is not set to 0, because the first core (logical processor 0) is reserved for default (non-RSS and non-DVMQ) network processing.

Let's open PowerShell and run Set-NetAdapterVmq accordingly for each NIC:

[Screenshot LBFO-ID106-06: Set-NetAdapterVmq commands for both NICs]
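
Based on the plan above, the commands in the screenshot should look roughly like this (a sketch; the adapter names "Fiber01" and "Fiber02" are the ones used later in the post):

# NIC1: VMQ on logical processors 0, 2, ..., 14
Set-NetAdapterVmq -Name "Fiber01" -BaseProcessorNumber 0 -MaxProcessors 8
# NIC2: VMQ on logical processors 16, 18, ..., 30
Set-NetAdapterVmq -Name "Fiber02" -BaseProcessorNumber 16 -MaxProcessors 8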

Let’s verify now that VMQ is applied:

[Screenshot LBFO-ID106-07: VMQ output showing the new base processors]

As you can see now, the BaseVmqProcessor for NIC1 is 0 and the BaseVmqProcessor for NIC2 is 16.

So what we have done here is spread the 126 queues across the 16 cores. The first NIC in my example has 63 queues, so it can spread anywhere from processor 0 to 15, and the second NIC from processor 16 to 31. Keep in mind that all 16 cores will be used, since I have more queues than CPUs. However, if you have, for example, 8 queues per NIC, then no more than 8 CPUs will be used, since there are only 8 queues.

But after I set the VMQ, the error did not go away.

[Screenshot LBFO-ID106-09: Event ID 106 warning still being logged]

Why?

As I mentioned at the beginning of this article, I am using one Converged Team for vmNIC (VMs) and for vNICs in the host as well.

If we look at RSS on the host, we can see the base processor for NIC1 is set to 0 and for NIC2 to 16 as well, the same values used for VMQ, so the RSS processor sets overlap with VMQ.

As a side note and best practice, you should split the vNICs on the host and the vmNICs of the VMs across two separate (teamed) physical adapters.

[Screenshot LBFO-ID106-08: RSS settings on the host showing the overlap with VMQ]
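
A minimal sketch of the RSS query behind the screenshot (assuming the same adapter names):

Get-NetAdapterRss -Name "Fiber01","Fiber02" | Format-Table Name, BaseProcessorNumber, MaxProcessorNumber, MaxProcessors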

In this case, we will split the logical processors roughly 50/50 between RSS and Dynamic VMQ.

The 16 logical processors on CPU0 (0–15) will be used by RSS. The remaining 16 logical processors of CPU1 (16–31) will be used by DVMQ.

The settings for the two 10Gb NICs will again depend on whether the NIC team is in Sum-of-Queues mode or in Min-Queues mode. NIC1 (Fiber01) and NIC2 (Fiber02) are in a Switch Independent team with Dynamic distribution mode, so they are in Sum-of-Queues mode. This means the NICs in the team need to use non-overlapping processor sets. The settings for the two 10Gb NICs are therefore the following:

Set-NetAdapterRss "Fiber01" -BaseProcessorNumber 0 -MaxProcessors 4
Set-NetAdapterRss "Fiber02" -BaseProcessorNumber 8 -MaxProcessors 4

Set-NetAdapterVmq "Fiber01" -BaseProcessorNumber 16 -MaxProcessors 4
Set-NetAdapterVmq "Fiber02" -BaseProcessorNumber 24 -MaxProcessors 4

[Screenshot LBFO-ID106-10: the RSS and VMQ commands being applied]

Note: Per Microsoft, as soon as you bind the Hyper-V virtual switch to the LBFO team, RSS is disabled on the host and VMQ is enabled. In other words, Set-NetAdapterRss actually has no effect here, and Set-NetAdapterVmq takes precedence. Therefore, if we look again, we can see that RSS aligns with VMQ.

[Screenshot LBFO-ID106-12: RSS settings aligned with VMQ]
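
To compare the two side by side, a minimal sketch:

# The RSS base processors should now line up with the VMQ base processors.
Get-NetAdapterRss -Name "Fiber01","Fiber02" | Format-Table Name, BaseProcessorNumber, MaxProcessors
Get-NetAdapterVmq -Name "Fiber01","Fiber02" | Format-Table Name, BaseVmqProcessor, MaxProcessors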

Next, you need to reboot your virtual machines for the new settings to take effect, because each vmNIC is assigned a queue when the VM boots.
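
If a maintenance window allows, a minimal sketch to restart all running VMs on the host:

# Restart every running VM so each vmNIC picks up a queue at boot.
Get-VM | Where-Object State -eq 'Running' | Restart-VM -Force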

Last but not least, you can verify this by running Get-NetAdapterVmqQueue, which will show you all the queues assigned across the vmNICs of all VMs on that particular Hyper-V host.

[Screenshot LBFO-ID106-11: Get-NetAdapterVmqQueue output across the vmNICs]
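
A minimal sketch of that verification:

# Lists each queue with the processor it is affinitized to and the owning VM.
Get-NetAdapterVmqQueue | Format-Table Name, QueueID, MacAddress, Processor, VmFriendlyName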

Finally, after setting VMQ and RSS correctly on the system, the error disappeared!

Hope this helps.

Enjoy your day!
/Charbel


14 thoughts on “The Available Processor Sets of the Underlying Physical NICs Belonging to the LBFO Team NIC Are not Configured Correctly #HyperV #VMQ #RSS #LBFO”


  1. Hi Charbel,

    I have 4 hosts, each with 2 physical CPUs of 6 cores each (24 logical processors in total) and 10-gigabit NICs.
    I want to configure RSS and VMQ, and from what I know, when configuring VMQ, RSS is automatically disabled.
    I'm a little bit confused about how the command should look for my configuration. For example, running Get-NetAdapterVmq shows me MaxProcessors 16.

  2. Hello Michael, how are you?

    How many 10Gb NICs per host do you have, and how many VMQ queues per interface?

    I will consider the following scenario:
    Each server has the following hardware:
    2 CPUs with 6 physical cores and 12 logical processors each – 24 logical processors in total
    2 x 10GbE NICs with 28 VMQ queues per interface

    Teaming configuration (with the Hyper-V switch bound to it):
    • TEAM_HV
    • LBFO team configured in Switch Independent and Dynamic port mode

    NICs NIC10_C1 and NIC10_C2 are used in this team.

    The commands to achieve this are the following:
    Set-NetAdapterVmq -Name "NIC10_C1" -BaseProcessorNumber 2 -MaxProcessors 5 -MaxProcessorNumber 10
    Set-NetAdapterVmq -Name "NIC10_C2" -BaseProcessorNumber 14 -MaxProcessors 5 -MaxProcessorNumber 22

    Hope this helps!
    -Charbel

  3. Hi Charbel,

    Sorry that I haven't given you the full information.
    Each server has 2 x 10-gigabit NICs configured as a Switch Independent team with Dynamic port mode. I wasn't sure about MaxProcessors; I was thinking of setting it to 6, but it makes sense for it to be 5, since the first core has to be left out. Is that correct?
    How many VMQ queues per interface? Isn't this dependent on how many cores there are?

  4. Any idea why I still get MaxProcessors 8 after configuring it?
    Nic2 True 0:14 8 63
    Nic1 True 0:2 8 63

  5. You could also do this:

    Set-NetAdapterVmq -Name “NIC10_C1” -BaseProcessorNumber 2 -MaxProcessors 6 -MaxProcessorNumber 12
    Set-NetAdapterVmq -Name “NIC10_C2” -BaseProcessorNumber 14 -MaxProcessors 6 -MaxProcessorNumber 24

  6. Hello Charbel,

    Thanks for the great topic.

    Could you please help me? I have MaxProcessors 32 and two CPUs, each with 16 cores, 64 threads in total.

  7. Hello Ahmed, thanks for the feedback.
    In your case, for two NICs with Switch-Independent and Dynamic mode + Switch Embedded Team (Sum-of-Queues mode):
    Set-NetAdapterRss "Fiber01" -BaseProcessorNumber 1 -MaxProcessors 8
    Set-NetAdapterRss "Fiber02" -BaseProcessorNumber 17 -MaxProcessors 8
    Set-NetAdapterVmq "Fiber01" -BaseProcessorNumber 34 -MaxProcessors 8
    Set-NetAdapterVmq "Fiber02" -BaseProcessorNumber 50 -MaxProcessors 8

    Let me know if this works for you. Thank You!

  8. Hi Charbel,
    In other blog posts it's stated – though I can't validate this one way or another – that on 2019 Hyper-V you don't need the -MaxProcessors flag.

    As you pointed out that the first physical core is used by the hypervisor, we thus exclude 0 and 16.

    However, when doing this, we are still getting the 'overlap' errors.

    Additionally, per your suggestion of 'MaxProcessors': when this is tried on 2019, it doesn't allow '6' or '7' – it only allows 0, 2, 4, 8, 16.

    So on a 2-CPU 8c/16t (16c/32t) server, we run two X520s in an LBFO independent team and use the following:
    Set-NetAdapterVmq -Name HyperV01 -BaseProcessorNumber 2
    Set-NetAdapterVmq -Name HyperV02 -BaseProcessorNumber 18

    Now, by the logic of physical vs. logical, IF we were to put a 'max' in here, we can't use 8 – that would overlap NIC1 into CPU 16, and NIC2 would go beyond the available processors.

    Even if we were to set NIC2 to base 16, we'd not be able to set NIC1 to base 2 / max 7, as it doesn't accept 7 or even 6 – then we'd be forced to do max 4, which would be wasting cores.

    There's no documentation from Microsoft that I can find anywhere on this, which only makes things worse.

    Would love any insight you might have.

  9. Thank you, Ryan, for the comment! Yes, you can only set MaxProcessors to even processor numbers (e.g., 2, 4, 6, 8, 10, 12, 16).
    Are you using Switch Embedded Teaming (SET) or an LBFO team on Hyper-V?
    In your case, you have 2 CPUs with 8 cores each (so 16 cores in total) without Hyper-Threading.
    Could you please try the following and let me know if it works for you:
    Set-NetAdapterVmq "HyperV01" -BaseProcessorNumber 1 -MaxProcessors 6
    Set-NetAdapterVmq "HyperV02" -BaseProcessorNumber 9 -MaxProcessors 6

    Thank You!

  10. Unfortunately, it does not. My assumption here is that this is a 2019 issue – or possibly a Hyper-V Server 2019 issue vs. Server Standard 2019 Core w/ Hyper-V enabled?

    Set-NetAdapterVmq : No matching keyword value found. The following are valid keyword values: 1, 2, 4, 8, 16
    At line:1 char:1
    + Set-NetAdapterVmq "HyperV01" -BaseProcessorNumber 1 -MaxProcessors 6

    So not just 'even', but double the previous value.

    I really wish Microsoft would do better at documenting these sorts of performance-impacting qualifiers, or just automate it in some fashion.

    Also, is the Max required or not? It accepts the input, but another blog says that in 2019 it isn't a required switch; if that were the case, I can't help but ask myself why they'd have kept the switch if it was not required.

  11. Thank you, Ryan, for the update. Quick question: are you using Switch Embedded Teaming (SET) or an LBFO team on this Hyper-V host? Microsoft recommends using Switch Embedded Teaming (SET) as the default teaming mechanism whenever possible, particularly when using Hyper-V.
    Can you try setting RSS instead and see what you get?
    Set-NetAdapterRss "HyperV01" -BaseProcessorNumber 1 -MaxProcessors 6
    Set-NetAdapterRss "HyperV02" -BaseProcessorNumber 9 -MaxProcessors 6

    The -MaxProcessorNumber parameter is not required anymore with Windows Server 2019 (the system manages this now). On the other hand, configuring -MaxProcessors is optional and unnecessary due to the enhancements in the default queue implemented in Windows Server 2016. You may still choose to do this if you're limiting the queues as a rudimentary QoS mechanism.

  12. Sorry for not answering that the first time around!
    This is an LBFO team – we have not moved to SET yet, mostly because this was how it was done before I got here, and change is difficult to implement; I've only just gotten buy-in on using vNICs on one team for management/Hyper-V switching/backups instead of a 2nd or 3rd physical LBFO team for each 'purpose'. I come from a VMware background, where I'm used to a bit more of a converged approach.

    That having been said, I want to ensure that we're not hampering our Hyper-V host by overloading the LBFO queues with a lot of I/O on the CPUs due to network traffic. For what it's worth, this specific server still has management on its own 1Gb LBFO team (2Gb really, with 2x1Gb ports), but the Hyper-V switch is attached to the virtual NIC created by this LBFO team of 2 NICs.

    These are all Intel X520 SFP+ ports: one onboard dual port and one PCIe dual port.

    That having been said, interestingly the VSS doesn't work either, but when I look at the RSS info it looks like it 'might' be doing things correctly:

    Name : HyperV01
    InterfaceDescription : Intel(R) Ethernet 10G 4P X520/I350 rNDC #2
    Enabled : True
    NumberOfReceiveQueues : 128
    Profile : Closest
    BaseProcessor: [Group:Number] : 0:1
    MaxProcessor: [Group:Number] : 0:30
    MaxProcessors : 8
    RssProcessorArray: [Group:Number/NUMA Distance] : 0:2/0 0:4/0 0:6/0 0:8/0 0:10/0 0:12/0 0:14/0 0:16/32767
    0:18/32767 0:20/32767 0:22/32767 0:24/32767 0:26/32767
    0:28/32767 0:30/32767
    IndirectionTable: [Group:Number] :

    Name : HyperV02
    InterfaceDescription : Intel(R) Ethernet 10G 2P X520 Adapter
    Enabled : True
    NumberOfReceiveQueues : 128
    Profile : Closest
    BaseProcessor: [Group:Number] : 0:18
    MaxProcessor: [Group:Number] : 0:30
    MaxProcessors : 7
    RssProcessorArray: [Group:Number/NUMA Distance] : 0:18/32767 0:20/32767 0:22/32767 0:24/32767 0:26/32767
    0:28/32767 0:30/32767
    IndirectionTable: [Group:Number] :

    So from this it looks like Hyper-V is telling the 2nd NIC to only use 7 at max – but it also seems to show conflicts with NIC1 on processors 18/22/24/26/28/30 still, from what that looks like anyway.

    Name     InterfaceDescription                 Enabled BaseVmqProcessor MaxProcessors NumberOfReceiveQueues
    ----     --------------------                 ------- ---------------- ------------- ---------------------
    HyperV   Microsoft Network Adapter Mu...#2    True    0:0                            62
    HyperV01 Intel(R) Ethernet 10G 4P X52...#2    True    0:1              8             31
    HyperV02 Intel(R) Ethernet 10G 2P X520 ...    True    0:18             8             31

    I appreciate your insight and expertise on this matter!

  13. Thanks, Ryan, for the update! Unfortunately, the LBFO team is not tuned to get the network acceleration benefits that Microsoft implemented in Windows Server 2019, and all the fine-tuning that we used to do in earlier Windows Server releases is gone with Windows Server 2019 + SET. With Windows Server 2019 and Dynamic VMMQ, queues on an overburdened processor can now be moved automatically to other processors that aren't doing as much work, so workloads get a more consistent and performant experience.
    I strongly recommend moving to SET if possible. Please note that you can do this without downtime by taking one NIC at a time out of the LBFO team and adding it to the new SET team, and so on.
    And now back to your current LBFO configuration. Could you please try this:
    Set-NetAdapterRss "HyperV01" -BaseProcessorNumber 2 -MaxProcessorNumber 14 -MaxProcessors 13
    Set-NetAdapterRss "HyperV02" -BaseProcessorNumber 18 -MaxProcessorNumber 30 -MaxProcessors 13
    Get-NetAdapterRss "HyperV01" | Select Name, Enabled, MaxProcessorNumber, MaxProcessors
    Get-NetAdapterRss "HyperV02" | Select Name, Enabled, MaxProcessorNumber, MaxProcessors

    Hope this helps!
