Step By Step: Create a Converged Network Fabric in VMM 2012 R2 #VMM #SCVMM #SysCtr


Hello Folks,

In this post, I will demonstrate how you can model your logical network in Virtual Machine Manager to support a converged network fabric (NIC teaming, QoS and virtual network adapters).

Before we get started, what is a converged network?

As we discussed in a previous post on how to isolate DPM traffic in Hyper-V, by leveraging a converged network on the Hyper-V host (combining multiple physical NICs with NIC teaming, QoS and vNICs) we can isolate each type of network traffic while sustaining network resiliency if one NIC fails, as shown in the diagram below:

To use NIC teaming with QoS in a Hyper-V environment, you need to use PowerShell to separate the traffic; in VMM, however, we can deploy the same configuration through the UI.
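For reference, the script-based approach the VMM UI replaces looks roughly like this on a standalone Hyper-V host. This is a minimal sketch; the team, switch, adapter names, weight and VLAN ID are illustrative for this demo:

```powershell
# Team two physical NICs (names are illustrative for this demo).
New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Create the vSwitch on top of the team with weight-based QoS.
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add a host vNIC per traffic type, then assign its bandwidth weight and VLAN.
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" `
    -Access -VlanId 20
```

VMM performs equivalent steps through its agent when you apply a logical switch to a host, which is exactly what we build up to in the steps below.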

More information about QoS Common PowerShell Configurations can be found here.

So without further ado, let’s jump right in.

The Hyper-V server in this demo has two 10 GbE fiber NICs; I want to team those NICs to leverage QoS and converged networking. The host is connected to the same physical fiber switch, has a static IP address configured on one of the NICs, is joined to the domain, and is managed by VMM.

Step 1

We must create logical networks in VMM.
What is a logical network, and how do you create one in Virtual Machine Manager 2012 R2? Click here.

We will create several logical networks in this demo for different purposes:

Logical Networks:

  • Management / VM: Contains the IP subnet used for host management and virtual machines. This network also has an associated IP pool so that VMM can manage IP assignment to hosts and virtual machines connected to this network. In this demo, management and virtual machines share the same network; in a production environment, however, these networks should be separated.
  • Live Migration: Contains the IP subnet and VLAN for live migration traffic. This network is non-routable, as it remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
  • Backup: Contains the IP subnet and VLAN for Hyper-V backup traffic. This network is non-routable, as it remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
  • Hyper-V Replica: Contains the IP subnet and VLAN for Hyper-V Replica traffic. This network is non-routable, as it remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
  • Cluster: Contains the IP subnet and VLAN for cluster communication. This network is non-routable, as it remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
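For those who prefer scripting, the same logical networks can also be sketched from the VMM PowerShell console. Here is the Live Migration example; the names, host group, subnet and VLAN ID are assumptions for this demo:

```powershell
# Sketch: create the Live Migration logical network and its network site in VMM
# (names, subnet and VLAN ID are illustrative).
$ln = New-SCLogicalNetwork -Name "Live Migration"
$allHosts = Get-SCVMHostGroup -Name "All Hosts"
New-SCLogicalNetworkDefinition -Name "LiveMigration_Site" -LogicalNetwork $ln `
    -VMHostGroup $allHosts `
    -SubnetVLan (New-SCSubnetVLan -Subnet "10.0.20.0/24" -VLanID 20)
```

Repeat the same pattern for the Backup, Replica, Cluster and Management/VM logical networks with their own subnets and VLAN IDs.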

Step 2

Creating IP pools for the host management, live migration, backup, Hyper-V replica and cluster networks.
We will create an IP pool for each logical network site so that VMM can assign the right IP configuration to the virtual NICs on that network. This is an awesome feature of VMM: we don't have to perform this step manually or rely on a DHCP server. We can also exclude IP addresses from the pool that have already been assigned to other resources.
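A pool for one of the sites can be sketched from the VMM console as follows; the site name and the address range are illustrative:

```powershell
# Sketch: create a static IP pool for the Live Migration site
# (site name and addresses are illustrative).
$site = Get-SCLogicalNetworkDefinition -Name "LiveMigration_Site"
New-SCStaticIPAddressPool -Name "LiveMigration_Pool" `
    -LogicalNetworkDefinition $site -Subnet "10.0.20.0/24" `
    -IPAddressRangeStart "10.0.20.10" -IPAddressRangeEnd "10.0.20.100"
```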


Step 3

The VM Networks step is not necessary if you selected ‘Create a VM network with the same name to allow virtual machines to access this logical network directly’ while creating the logical networks above. If you did not, continue by creating VM networks with a 1:1 mapping to the logical networks in the Fabric workspace.


Step 4

Creating Uplink Port Profiles, Virtual Port Profiles and Port Classifications.

Virtual Machine Manager does not ship with a default uplink port profile, so we must create one on our own.
In this demo, we will create one uplink port profile for our production Hyper-V host.

Production Uplink Port Profile:

Right-click Port Profiles in the Fabric workspace and create a new Hyper-V port profile.

As you can see in the screenshot below, Windows Server 2012 R2 supports three different load balancing algorithms:

Hashing, Hyper-V switch port and Dynamic.

For Teaming mode, we have Static teaming, Switch independent and LACP.

More information about NIC Teaming can be found here.

Assign a name and description, make sure that ‘Uplink port profile’ is selected, then specify the load balancing algorithm together with the teaming mode. As a best practice for Hyper-V workloads, we will select Dynamic and Switch Independent. Click Next.


Select the network sites supported by this uplink port profile. VMM will tell the Hyper-V hosts that they are connected and mapped to the following logical networks and sites in our fabric: Backup (for Hyper-V backup traffic), Live Migration (for live migration traffic), Management/VM (for management and virtual machine communication), Replica (for Hyper-V replica traffic), Cluster (for Hyper-V cluster communication).
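The equivalent VMM PowerShell for this profile might look like the following sketch; the profile and site names are assumptions carried over from this demo:

```powershell
# Sketch: create the uplink port profile with the team settings chosen above
# (profile and site names are illustrative).
$sites = Get-SCLogicalNetworkDefinition | Where-Object {
    $_.Name -in "Management_Site","LiveMigration_Site","Backup_Site",
                "Replica_Site","Cluster_Site" }
New-SCNativeUplinkPortProfile -Name "Production Uplink" `
    -LBFOLoadBalancingAlgorithm Dynamic -LBFOTeamMode SwitchIndependent `
    -LogicalNetworkDefinition $sites
```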



Virtual Port Profile:

If you navigate to Port Profiles under the Networking node in the Fabric workspace, you will see several port profiles that ship with VMM. You can take advantage of these existing profiles for host management, cluster and live migration traffic.

For the purpose of this demo, I will create two additional virtual port profiles, one for Hyper-V Backup and one for Hyper-V Replica.

Note: Make sure that the total weight in the Bandwidth settings of all the virtual port profiles you intend to apply on a Hyper-V host does not exceed 100.

Here are the different bandwidth settings for each profile that I used:

Host Management: 10
Hyper-V Replica: 10
Hyper-V Cluster: 10
Hyper-V Backup: 10
Live Migration: 20

If you sum it up, we have already assigned a weight of 60, and the rest can be assigned to the virtual machines or other resources.

Right-click Port Profiles in the Fabric workspace and create two new Hyper-V port profiles, one for backup and another for replica. Assign a name and description; ‘Virtual network adapter profile’ is selected by default. Click Next.



Click on Offload settings to see the settings for the virtual network adapter profile.


Click on Bandwidth settings and adjust the QoS weight as described above.


Repeat the same steps for each Virtual network adapter profile.
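As a sketch, either of the two demo profiles could also be created from the VMM PowerShell console; assuming the cmdlets in the VirtualMachineManager module, the backup profile with its weight of 10 might look like:

```powershell
# Sketch: create the Hyper-V Backup virtual port profile with a weight of 10
# (the name is illustrative).
New-SCVirtualNetworkAdapterNativePortProfile -Name "Hyper-V Backup" `
    -MinimumBandwidthWeight 10
```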


Port Classifications:

Creating a port classification.

We must also create port classifications that we can associate with each virtual network port profile. When you configure virtual network adapters on a team on a Hyper-V host, you map each adapter to a classification, which ensures that the configuration within the corresponding virtual port profile is applied.

Please note that a classification is a description (label) only and does not contain any configuration. This is very useful in a hosting provider deployment, where the tenant/customer sees a label such as ‘High bandwidth’ for their virtual machines; in practice, though, you could limit the customer with a port profile that has a very low bandwidth weight while they believe they have very fast VMs. Sneaky, yes (don't do this).

Navigate to the Fabric workspace, expand Networking, and right-click Port Classifications to create a new port classification.
Assign a name and description, and click OK.
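The same classification can be sketched from the VMM PowerShell console; the name and description are illustrative:

```powershell
# Sketch: create a port classification (a label only, no configuration).
New-SCPortClassification -Name "Hyper-V Backup" -Description "Hyper-V backup traffic"
```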



Step 5

Creating Logical Switches:

A logical switch is the last networking fabric component we build in VMM before applying the configuration to our Hyper-V host; it is basically a container for uplink port profiles and virtual port profiles.

Right click on Logical Switches in the Fabric workspace and create a new logical switch.

Assign the logical switch a name and a description. Leave out the option ‘Enable single root I/O virtualization (SR-IOV)’; it is beyond the scope of this demo.


We are using the default Microsoft Windows Filtering Platform. Click Next.


Specify the uplink port profiles that are part of this logical switch. Sure enough, we will set the uplink mode to ‘Team’ and add our Production Uplink. Click Next.


Specify the port classifications for each virtual port that is part of this logical switch. Click ‘Add’ to configure the virtual ports. Browse to the right classification and include a virtual network adapter port profile to associate with that classification. Repeat this process for all virtual ports until you have added classifications and profiles for management, backup, cluster, live migration and replica; the low, medium and high bandwidth classifications are used for the virtual machines. Click Next.


Review the settings and click Finish.
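For reference, a rough PowerShell sketch of the same logical switch build follows; the names are illustrative, and the exact parameter sets of these VirtualMachineManager cmdlets may vary slightly between VMM versions and update rollups:

```powershell
# Sketch: build the logical switch and attach the uplink and virtual port
# profiles created earlier (all names are illustrative).
$uplink = Get-SCNativeUplinkPortProfile -Name "Production Uplink"
$ls = New-SCLogicalSwitch -Name "Production Converged vSwitch" -EnableSriov $false

# Attach the uplink port profile to the switch.
New-SCUplinkPortProfileSet -Name "Production_Uplink_Set" -LogicalSwitch $ls `
    -NativeUplinkPortProfile $uplink

# Bind one classification to its virtual port profile; repeat per traffic type.
$pc = Get-SCPortClassification -Name "Hyper-V Backup"
$vp = Get-SCVirtualNetworkAdapterNativePortProfile -Name "Hyper-V Backup"
New-SCVirtualNetworkAdapterPortProfileSet -Name "Backup" -LogicalSwitch $ls `
    -PortClassification $pc -VirtualNetworkAdapterNativePortProfile $vp
```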


Step 6

Last but not least, we need to apply the networking template on the Hyper-V host.

Up to this point, we have created the different networking fabric components in VMM and ended up with a logical switch template that contains all the networking configuration we intend to apply to the host.

Navigate to the host group in fabric workspace that contains your production Hyper-V hosts.
Right click on the host and click ‘Properties’.
Navigate to Virtual switches.
Click ‘New Virtual Switch’ and then ‘New Logical Switch’. Make sure that ‘Production Converged vSwitch’ is selected, and add the physical adapters that should participate in this configuration. Make sure that ‘Production Uplink’ is associated with the adapters.


Click on ‘New Virtual Network Adapter’ to add virtual adapters to the configuration.
I will add five virtual network adapters in total: one for host management, one for live migration, one for backup, one for replica and one for cluster. Please note that the virtual adapter used for host management must have the setting ‘This virtual network adapter inherits settings from the physical management adapter’ enabled. This means that the physical NIC on the host configured for management will transfer its configuration to a virtual adapter created on the team. This is very important: if you don't select it, you will lose network connectivity to the host and cannot access it anymore unless you have an iLO or iDRAC management interface on the physical host.


Repeat the process for the Live Migration, Cluster and remaining virtual adapters, and ensure they are connected to the right VM networks with the right VLAN, IP pool and port classification.


Once you are done, click OK. VMM will now communicate with its agent on the host and configure NIC teaming with the right settings and the corresponding virtual network adapters.


The last step is to validate the network configuration on the host.
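A quick way to do that is from an elevated PowerShell prompt on the host itself, for example:

```powershell
# Sketch: validate the resulting configuration directly on the Hyper-V host.
Get-NetLbfoTeam                                    # the team VMM created
Get-VMSwitch | Select-Object Name, BandwidthReservationMode
Get-VMNetworkAdapter -ManagementOS |               # the five host vNICs
    Select-Object Name, SwitchName
Get-VMNetworkAdapterVlan -ManagementOS             # VLAN ID per vNIC
```

Confirm that the team lists both physical NICs, that the switch is in Weight mode, and that each vNIC carries the expected VLAN ID.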


Until next time… Enjoy your weekend!





53 thoughts on “Step By Step: Create a Converged Network Fabric in VMM 2012 R2 #VMM #SCVMM #SysCtr”


  1. Charbel, great article. Reading through this has really given me some direction on how to do this. I have a few questions.

    I typically use teaming at the driver level. Can a logical switch created from SCVMM interface with a driver-based team? Maybe I need to let go of driver teams; however, I question whether the built-in teaming is as robust and performs as well as driver-based teams.

    Also, I am running into issues on the last step where you apply the logical switch and all the virtual network adapters, due to SCVMM losing connection to the host. This is occurring because the network card it is communicating with is one of the cards that will be teamed. How do you get around this? Are you using a dedicated card that is not part of this configuration for communication with SCVMM?

    Thanks for your efforts

  2. Hello,

    I am glad that you like the article.

    For the first question:
    No, the logical switch cannot interface with a driver-based team.
    The built-in teaming in Windows Server 2012 R2 is very robust; I use it in all my deployments and it performs very well.
    If you want to create a logical switch with NIC teaming, you should do it from VMM end to end, as described in the article above.

    As for the second question, no, I am not using a dedicated NIC. You need to make sure that, before you apply the logical switch to the host, the management setting is enabled as I mentioned above (‘This virtual network adapter inherits settings from the physical management adapter’). This means that the physical NIC on the host used for management will transfer its configuration to the virtual adapter created on the team. This is very important; otherwise you lose remote access, as you mentioned.

    Hope this helps.


  3. All is working well now; the issue I was having when applying the logical switches to the host was an error on my part, not realizing the management port needs to be part of the configuration. At this point I am using a configuration similar to the one in this article: 4 hosts with dual-port 10 Gb cards teamed and configured using the converged network. The converged network is split between live migration, virtual machine, management, cluster, and eventually DPM traffic. These 4 servers also have a quad-port 1 Gb Intel card that I will use for iSCSI multi-path. At first my plan was to configure this the traditional route, not using SCVMM, and just configure the cards on each server, etc. I am now testing and considering using SCVMM for the iSCSI configuration. We are using Dell EqualLogic PS4000 and PS4100 for the SAN, which use 2-4 1 Gb ports for connectivity.

    Here is what I have tested.
    Logical network – Created ISCSI logical network with its own network site including subnet, vlan, etc.
    Port Profile – Created ISCSI uplink switch using switch independent, dynamic, no teaming here.
    Logical Switches – Created 4 ISCSI logical switches all bound to the ISCSI uplink port profile, ISCSI network site, and iscsi workload virtual port.
    Each host I have created added 4 new logical switches each bound to one of the four different port on the quad port network card and using the iscsi uplink port created earlier. Underneath the virtual switch I have added a new virtual network adapter using the respective ISCSI network, vlan, ISCSI workload port profile, ISCSI IP pool, and static IP is configured within the ISCSI subnet.
    Assuming I have configured this correctly, I have to ask whether you would even recommend using SCVMM for the iSCSI configuration in this scenario? I certainly like the concept of using SCVMM for all network configurations. Since these quad-port cards are already dedicated to iSCSI traffic, would using SCVMM with the logical and virtual networks simply add unnecessary overhead and reduce performance? I have been reading and researching this scenario and have not seen much guidance for this type of configuration. Any guidance you can provide is much appreciated.
    Also, I had to create a new account as my previous one says it does not exist, strange.

  4. Hi there

    Excellent article :-)

    Few questions, and appreciate your input:

    1. The article covers the management networks; for virtual machine traffic such as the production LAN, Exchange replication, etc., would you recommend using a separate physical adapter and following a similar process, or can I use the existing management network?

    2. When creating VM networks, if I have two sites (live and DR, isolated within the logical networks using VLANs/subnets), do I have to create separate VM networks for each?

    3. Do you have any document for full automated bare metal build process.

    Any help is much appreciated.

  5. @Zgravity,

    1- It depends on your environment; how big is it? As a best practice, keep the storage traffic out of the converged network. Example: team 2x10G NICs and create a converged network with multiple vNICs, but keep the storage on two or more dedicated physical NICs without teaming.

    2- If you have an isolated logical network for each site (VLAN-based), yes, you need to have a VM network for each. A VM network allows you to assign a network segment (VLAN) to a virtual adapter. One VM network will typically be associated with one network segment (VLAN). This gives the network segment a friendly name, so that VMM admins do not need to know subnets or VLAN IDs. It can also have permissions assigned so that only certain users can select the network segment in their virtual machines.

    3- Honestly, I don’t do Bare metal deployment in VMM.

    Hope this helps.


  6. Excellent article. I have a question.
    We have multiple routable VLANs in my organization; which should I choose while creating the logical network, ‘one connected’ or a VLAN-based network?
    And do we need to configure a trunk network to support all of this configuration?

  7. Hello Sandeep,

    I am glad that you liked the article.

    The first question:
    I recommend having one logical network as “one connected” for your infrastructure networks (Storage, MGMT, Cluster, Backup, etc.) and another logical network as VLAN-based for your tenants/VMs.

    As for the second question: yes, you must configure trunk mode for the uplink ports on the physical switch in order to pass the different VLAN IDs.

    Hope that helps.


  8. Thank you very much for the quick reply.
    We are building SCVMM for Internal servers(Like Vmware).

    Action Plan

    Logical switch -> VLAN-based network (second option) for all the routable networks (added all the VLAN IDs). Also created the respective VM networks.

    I have used a traditional failover Hyper-V cluster configuration for management traffic, since we are using multiple 1 GB NICs (nothing is configured inside VMM for management traffic).

    Next year we are upgrading the network switch to support 10GB traffic.

    Network details

    4 x 1GB for – Live Migration

    2 X 1GB Host Management

    4 x 1GB for VM traffic(Configured Inside VMM(Vlan based)

    2 X 1GB for Cluster and CSV communication

    Storage FC

    Please let me know your best recommendation

  9. Hello Sandeep,

    I recommend for you the following based on your network:

    1- One Logical Network as “One Connected” for (Live Migration, Host Management, Cluster, Backup)-Team all the 8 NICs Switch Independent/Dynamic Mode
    2- One Logical Network as VLAN based for Virtual Machine Traffic (4 NICs Teamed) Switch Independent/Dynamic Mode

    Last but not least make sure you set the Port Profile appropriately for each traffic, and make sure you are using Minimum bandwidth mode weight “Live Migration@40, Host Management@10, Cluster@20, Backup@30” The total should not exceed 100.


  10. Hello Charbel,
    I am glad you enjoyed your first MVP summit; I wish you had passed by us in San Francisco.

    I was looking for best practices for the various networks in Hyper-V and their settings (register DNS, NetBIOS, gateway, etc.) and found an article that explains them in detail; however, it is for the 2008 version. I was wondering whether all of this remains valid for 2012 R2, or whether you have a more suitable set of recommendations for the latest Hyper-V version.

    Note: I couldn’t paste the article link in here, I will try to send it by email.

    Thank you

  11. Thank you Joseph,

    Are you looking for Best Practices Network Design for Hyper-V Cluster?

  12. Yes for the 2012R2, also I was trying to verify if all the recommended settings for 2008 Hyper-V in the article are still valid for 2012.
    Thank you

  13. Thanks Charbel, maybe I should have found this on my own.
    Appreciate your time.

  14. Hi Charbel

    Will the configuration below satisfy Hyper-V best practices in terms of networks? Please advise.

    • 1 x 1Gb 4-port On-board adapter for: DMZ 1 (Port 1), DMZ 2 (Port 2), in-guest VM iSCSI (Port 3 and Port 4)

    • 1 x 10 Gb 2-port CNA Adapter for: iSCSI Storage connection to hosts using One Command Manager software. Using SCVMM 2012 R2 create One Connected logical networks for LiveMigration, CSV, Management and Heartbeat with weights Heartbeat: 10, Hyper-V Management: 10, Live migration: 40, Storage: 40

    • 1 x 10 Gb 2-port Adapter – Using SCVMM 2012 R2 create VLAN based logical networks for Production LAN, Exchange Replication, SQL heartbeat. Do I have to assign weights for virtual machines networks?

  15. Hi Charbel, thanks for the post. I am just wondering why do you have so many Logical Networks in your design? For the same functionality I see that others have created just one logical network, and then created one site with multiple VLANs which would represent all the Mgmt, Cluster, LM etc networks. Because essentially that’s what we are doing in converged networking; have one switch with multiple virtual ports go out through one uplink and isolated by VLANs. So technically one network which is converged. I am just looking for your thought process in doing the way you have designed. Thank you for the great posts.

  16. Hello Shawn,

    I am glad that you like the post.
    Good question!
    The answer to why I have multiple logical networks for the same functionality (infrastructure = MGMT, Cluster, LM, etc.) is that I need to use the same logical network/network subnet in multiple different sites; in other words, the Hyper-V hosts are decentralized, and each site has its own network subnet.
    So when I create the logical network “Cluster”, for example, I can select all my network sites that have a cluster network; later, when I create the “Uplink” port profiles for each site, I can select the same “Cluster” logical network for each site.
    However, you can create one infrastructure logical network as you mentioned above if all the hosts are centralized; this depends on your network design.

    Hope this helps.

  17. Thanks for the reply Charbel. That helps me understand this well. I am almost done designing my fabric. I have one last question that came across my mind while looking at your configuration. Where exactly did you integrate the Virtual Port profiles from Step 4? I mean I can see where the uplink port went resulting in the team. However, where did those other Virtual Port Profiles go where you assigned them weights? I do not see them integrated anywhere in your example. Unless I am missing something obvious here.


  18. Hello Shawn,
    I integrated the virtual port profiles in Step 5, where I created the logical vSwitch.
    So when you create a vmNIC for a VM or a vNIC on the host, you can choose one of the predefined virtual port profiles.
    As you noticed, I created all my virtual port profiles in Step 4 with a proper weight for each type.

    Hope this helps.


  19. Hi,

    Thanks for all the help. I have completed the implementation of SCVMM 2012 R2 (Windows 2012 R2 hosts). I have created multiple clusters for Dev, Prod, etc. Is there any way we can move a VM from the first cluster to the second cluster? I would also like to hide the following networks from the VM guest network selection options: host management, cluster, LM.


