Step By Step: Create a Converged Network Fabric in VMM 2012 R2 #VMM #SCVMM #SysCtr

Hello Folks,

In this post I will demonstrate how you can model your logical network in Virtual Machine Manager to support a converged network fabric (NIC teaming, QoS, and virtual network adapters).

Before we get started, what is a converged network?

As we discussed in a previous post on how to Isolate DPM Traffic in Hyper-V: by leveraging a converged network on the Hyper-V host, where we combine multiple physical NICs with NIC teaming, QoS, and vNICs, we can isolate each type of network traffic while sustaining network resiliency if one NIC fails, as shown in the diagram below:

[Diagram: ConvergedNetwork]

To use NIC teaming with QoS in a Hyper-V environment, you normally need PowerShell to separate the traffic; in VMM, however, we can deploy the same configuration using the UI.

More information about QoS Common PowerShell Configurations can be found here.
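For reference, here is a minimal sketch of the host-side PowerShell that VMM automates for us in the rest of this post. The team name, switch name, VLAN IDs, and weights below are illustrative placeholders, not values from this demo:

    # Create a switch-independent/dynamic team from the two physical NICs
    New-NetLbfoTeam -Name "ConvergedTeam" -TeamMembers "NIC1","NIC2" `
        -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

    # Create the converged vSwitch on top of the team with weight-based QoS
    New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "ConvergedTeam" `
        -MinimumBandwidthMode Weight -AllowManagementOS $false

    # Add one host vNIC per traffic type, then assign its VLAN and QoS weight
    Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 10
    Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10

    Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" -Access -VlanId 20
    Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20

Repeat the last three lines for each traffic type (backup, replica, cluster); the rest of this post builds the same result through the VMM console.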

So without further ado, let’s jump right in.

The Hyper-V server in this demo has two 10 GbE fiber NICs, and I want to team them to leverage QoS and converged networking. The host is connected to the same physical fiber switch, has a static IP address configured on one of the NICs, is joined to the domain, and is managed by VMM.

Step 1

First, we must create logical networks in VMM.
What is a logical network, and how do you create one in Virtual Machine Manager 2012 R2? Click here.

We will create several logical networks in this demo for different purposes:

Logical Networks:

  • Management / VM: Contains the IP subnet used for host management and virtual machines. This network also has an associated IP pool so that VMM can manage IP assignment to hosts and to virtual machines connected to this network. In this demo, management and virtual machines share the same network; in a production environment, however, the two networks should be separated.
  • Live Migration: Contains the IP subnet and VLAN for live migration traffic. This network is non-routable, as it remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
  • Backup: Contains the IP subnet and VLAN for Hyper-V backup traffic. This network is non-routable, as it remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
  • Hyper-V Replica: Contains the IP subnet and VLAN for Hyper-V Replica traffic. This network is non-routable, as it remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.
  • Cluster: Contains the IP subnet and VLAN for cluster communication. This network is non-routable, as it remains within the physical rack. It also has an associated IP pool so that VMM can manage IP assignment to the hosts.

Step 2

Next, we create IP pools for the host management, live migration, backup, Hyper-V replica, and cluster networks.
We will create an IP pool for each logical network site so that VMM can assign the right IP configuration to the virtual NICs on that network. This is a great feature in VMM: we don't have to perform the assignment manually or rely on a DHCP server, and we can exclude IP addresses from the pool that have already been assigned to other resources.

[Screenshot: L-Network01]
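If you prefer to script this step, VMM exposes it through its own cmdlets. Here is a minimal sketch for one of the pools, assuming the logical network and its network site already exist; the pool name, subnet, and address range are hypothetical:

    # Look up the logical network and its network site (logical network definition)
    $ln  = Get-SCLogicalNetwork -Name "Live Migration"
    $def = Get-SCLogicalNetworkDefinition -LogicalNetwork $ln

    # Create a static IP pool that VMM can use to assign addresses to host vNICs
    New-SCStaticIPAddressPool -Name "LiveMigration-Pool" `
        -LogicalNetworkDefinition $def -Subnet "172.16.20.0/24" `
        -IPAddressRangeStart "172.16.20.10" -IPAddressRangeEnd "172.16.20.200"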

Step 3

The VM Networks step is not necessary if you selected 'Create a VM network with the same name to allow virtual machines to access this logical network directly' while creating the logical networks above. If you did not, continue by creating VM networks with a 1:1 mapping to the logical networks in the Fabric workspace.

[Screenshot: V-Network01]
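If you need to create the VM networks yourself, the equivalent VMM cmdlet call looks roughly like this (the names are illustrative):

    # Create a VM network mapped 1:1 to its logical network, without isolation
    $ln = Get-SCLogicalNetwork -Name "Backup"
    New-SCVMNetwork -Name "Backup" -LogicalNetwork $ln -IsolationType "NoIsolation"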

Step 4

Creating Uplink Port Profiles, Virtual Port Profiles and Port Classifications.

Virtual Machine Manager does not ship with a default uplink port profile, so we must create our own.
We will create a single uplink port profile in this demo, one profile for our production Hyper-V hosts.

Production Uplink Port Profile:

Right-click Port Profiles in the Fabric workspace and create a new Hyper-V port profile.

As you can see in the screenshot below, Windows Server 2012 R2 supports three different load balancing algorithms:

Hashing, Hyper-V switch port, and Dynamic.

For the teaming mode, we have Static teaming, Switch independent, and LACP.

More information about NIC Teaming can be found here.

Assign a name and description, make sure that 'Uplink port profile' is selected, then specify the load balancing algorithm together with the teaming mode; as a best practice for Hyper-V workloads, we will select Dynamic and Switch Independent. Click Next.

[Screenshot: U-PP01]

Select the network sites supported by this uplink port profile. VMM will tell the Hyper-V hosts that they are connected and mapped to the following logical networks and sites in our fabric: Backup (Hyper-V backup traffic), Live Migration (live migration traffic), Management/VM (management and virtual machine communication), Replica (Hyper-V replica traffic), and Cluster (cluster communication).

[Screenshot: U-PP02]

[Screenshot: U-PP03]
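The same uplink port profile can also be created from the VMM command shell. A rough sketch, assuming the logical networks above already have their network sites defined (the profile name and description are illustrative):

    # Gather the network sites (logical network definitions) this uplink should support
    $sites = @()
    foreach ($name in "Management / VM","Live Migration","Backup","Hyper-V Replica","Cluster") {
        $sites += Get-SCLogicalNetworkDefinition -LogicalNetwork (Get-SCLogicalNetwork -Name $name)
    }

    # Create the uplink port profile with Dynamic / Switch Independent teaming
    New-SCNativeUplinkPortProfile -Name "Production Uplink" `
        -Description "Uplink for production Hyper-V hosts" `
        -LBFOLoadBalancingAlgorithm Dynamic -LBFOTeamMode SwitchIndependent `
        -LogicalNetworkDefinition $sites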

Virtual Port Profile:

If you navigate to Port Profiles under the Networking node in the Fabric workspace, you will see several port profiles that already ship with VMM. You can take advantage of these existing profiles for host management, cluster, and live migration traffic.

For the purpose of this demo, I will create two additional virtual port profiles, one for Hyper-V backup and one for Hyper-V replica.

Note: Make sure you don't exceed a total weight of 100 in the bandwidth settings across all virtual port profiles that you intend to apply on the Hyper-V host.

Here are the different bandwidth settings for each profile that I used:

Host Management: 10
Hyper-V Replica: 10
Hyper-V Cluster: 10
Hyper-V Backup: 10
Live Migration: 20

If you sum it up, we already have a weight of 60; the rest can be assigned to the virtual machines or other resources.

Right-click Port Profiles in the Fabric workspace and create new Hyper-V port profiles, one for backup and another for replica. Assign a name and description; 'Virtual network adapter profile' is selected by default. Click Next.

[Screenshot: V-PP01]

[Screenshot: V-PP02]

Click on Offload settings to review the offload options for the virtual network adapter profile.

[Screenshot: V-PP03]

Click on Bandwidth settings and adjust the QoS as we described above.

[Screenshot: V-PP04]

Repeat the same steps for each Virtual network adapter profile.

[Screenshot: V-PP05]
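These virtual port profiles can be scripted as well. A sketch for the two new profiles, assuming the QoS weight is passed via the -MinimumBandwidthWeightPercentage parameter (names and descriptions are illustrative):

    # Virtual network adapter port profile for Hyper-V backup traffic, weight 10
    New-SCVirtualNetworkAdapterNativePortProfile -Name "Hyper-V Backup" `
        -Description "Host vNIC profile for backup traffic" `
        -MinimumBandwidthWeightPercentage 10

    # And the same for Hyper-V replica traffic, weight 10
    New-SCVirtualNetworkAdapterNativePortProfile -Name "Hyper-V Replica" `
        -Description "Host vNIC profile for replica traffic" `
        -MinimumBandwidthWeightPercentage 10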

Port Classifications:

Creating a port classification.

We must also create a port classification that we can associate with each virtual port profile. When you configure virtual network adapters on a team on a Hyper-V host, you can map each adapter to a classification, which ensures that the configuration in the associated virtual port profile is applied.

Please note that a classification is a description (label) only and does not contain any configuration. This is very useful in a hosting provider deployment: the tenant/customer sees a label such as 'High bandwidth' for their virtual machines, while you could actually limit them with a port profile that carries a very low bandwidth weight, and they would think they have very high-speed VMs. Sneaky, yes (don't do this).

Navigate to the Fabric workspace, expand Networking, and right-click Port Classification to create a new port classification.
Assign a name and description and click OK.

[Screenshot: P-CL01]

[Screenshot: P-CL02]
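Scripted, a classification is just a named label; a minimal sketch for the two classifications used in this demo:

    # Classifications carry no configuration; the QoS lives in the port profiles
    New-SCPortClassification -Name "Hyper-V Backup" -Description "Backup traffic"
    New-SCPortClassification -Name "Hyper-V Replica" -Description "Replica traffic"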

Step 5

Creating Logical Switches:

A logical switch is the last piece of the networking fabric we build in VMM before applying it to our Hyper-V hosts; it is essentially a container for uplink port profiles and virtual port profiles.

Right-click Logical Switches in the Fabric workspace and create a new logical switch.

[Screenshot: L-SW01]

Assign the logical switch a name and a description. Leave the option 'Enable single root I/O virtualization (SR-IOV)' unchecked; SR-IOV is beyond the scope of this demo.

[Screenshot: L-SW02]

We are using the default Microsoft Windows Filtering Platform. Click Next.

[Screenshot: L-SW03]

Specify the uplink port profiles that are part of this logical switch: set the uplink mode to 'Team' and add our Production Uplink. Click Next.

[Screenshot: L-SW04]

Specify the port classifications for each virtual port that is part of this logical switch. Click 'Add' to configure the virtual ports: browse to the right classification and include a virtual network adapter port profile to associate with it. Repeat this process for all virtual ports so that you have added classifications and profiles for management, backup, cluster, live migration, and replica; the low, medium, and high bandwidth classifications are used for the virtual machines. Click Next.

[Screenshot: L-SW05]

Review the settings and click Finish.

[Screenshot: L-SW06]
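The wizard steps above map to a handful of VMM cmdlets if you ever need to script them. A sketch with the names used in this demo; treat the parameter values, in particular -SwitchUplinkMode, as assumptions to verify against your VMM version:

    # Create the logical switch (SR-IOV disabled, teamed uplink mode)
    $sw = New-SCLogicalSwitch -Name "Production Converged vSwitch" `
        -Description "Converged switch for production hosts" `
        -EnableSriov $false -SwitchUplinkMode "Team"

    # Attach the Production Uplink port profile to the switch
    $uplink = Get-SCNativeUplinkPortProfile -Name "Production Uplink"
    New-SCUplinkPortProfileSet -Name "Production Uplink" -LogicalSwitch $sw `
        -NativeUplinkPortProfile $uplink

    # Bind each classification to its virtual port profile on the switch
    $class = Get-SCPortClassification -Name "Hyper-V Backup"
    $vpp   = Get-SCVirtualNetworkAdapterNativePortProfile -Name "Hyper-V Backup"
    New-SCVirtualNetworkAdapterPortProfileSet -Name "Hyper-V Backup" -LogicalSwitch $sw `
        -PortClassification $class -VirtualNetworkAdapterNativePortProfile $vpp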

Step 6

Last but not least, we need to apply the networking template to the Hyper-V host.

Up to this point, we have created the different networking fabric components in VMM and ended up with a logical switch template that contains all the networking configuration we intend to apply on the host.

Navigate to the host group in the Fabric workspace that contains your production Hyper-V hosts.
Right-click the host and click 'Properties'.
Navigate to Virtual Switches.
Click 'New Virtual Switch' and then 'New Logical Switch'. Make sure that 'Production Converged vSwitch' is selected and add the physical adapters that should participate in this configuration. Make sure that 'Production Uplink' is associated with the adapters.

[Screenshot: HV-Host-vSwitch01]

Click 'New Virtual Network Adapter' to add virtual adapters to the configuration.
I will add five virtual network adapters in total: one for host management, one for live migration, one for backup, one for replica, and one for cluster. Please note that the virtual adapter used for host management must have the setting 'This virtual network adapter inherits settings from the physical management adapter' enabled. This means that the physical NIC on the host configured for management will transfer its configuration to a virtual adapter created on the team. This is very important: if you don't select it, you will lose network connectivity to the host and will not be able to access it anymore, unless you have an iLO or iDRAC management interface on the physical host.

[Screenshot: HV-Host-vSwitch02]

Repeat the process for the Live Migration, Cluster, and remaining virtual adapters, and ensure they are connected to the right VM networks with the right VLAN, IP pool, and port classification.

[Screenshot: HV-Host-vSwitch03]

Once you are done, click OK. VMM will now communicate with its agent on the host and configure NIC teaming with the right settings and the corresponding virtual network adapters.

[Screenshot: HV-Host-vSwitch04]

The last step is to validate the network configuration on the host.

[Screenshot: HV-Host-vSwitch05]
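If you want to double-check the result from the host itself, the inbox cmdlets will show the team, the switch, and the host vNICs that VMM created; a quick sanity check, not a VMM requirement:

    Get-NetLbfoTeam                          # the NIC team VMM created
    Get-VMSwitch                             # the converged vSwitch
    Get-VMNetworkAdapter -ManagementOS       # host vNICs (Management, LM, Backup, ...)
    Get-VMNetworkAdapterVlan -ManagementOS   # VLAN assignment per host vNIC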

Until next time… Enjoy your weekend!

Cheers,
Charbel


51 Comments

  1. Charbel, great article. Reading through this has really given me some direction on how to do this. I have a few questions.

    I typically use teaming at the driver level. Can a logical switch created from SCVMM interface with a driver-based team? Maybe I need to let go of driver teams; however, I question whether the built-in teaming is as robust and performs as well as driver-based teams.

    Also, I am running into issues on the last step where you apply the logical switch and all virtual network adapters, due to SCVMM losing its connection to the host. This is occurring because the network card it is communicating over is one of the cards that will be teamed. How do you get around this? Are you using a dedicated card that is not part of this configuration for communication with SCVMM?

    Thanks for your efforts

  2. Hello,

    I am glad that you like the article.

    For the first question.
    No, the logical switch cannot interface with a driver-based team.
    The built-in teaming in Windows Server 2012 R2 is very robust; I am using it in all my deployments and it performs very well.
    If you want to create a logical switch with NIC teaming, you should do it from VMM end to end, as described in the article above.

    As for the second question, no, I am not using a dedicated NIC. You need to make sure that before you apply the logical switch to the host, the management vNIC option is enabled as I mentioned above ('This virtual network adapter inherits settings from the physical management adapter'). This means that the physical NIC on the host used for management will transfer its configuration to a virtual adapter created on the team. This is very important; otherwise you lose remote access, as you mentioned.

    Hope this helps.

    Cheers,
    /Charbel

  3. All is working well now; the issue I was having when applying the logical switch to the hosts was an error on my part, not realizing the management port needs to be part of the configuration. At this point I am using a configuration similar to the one in this article: 4 hosts with dual-port 10 Gig cards teamed and configured using the converged network. The converged network is split between live migration, virtual machine, management, cluster, and eventually DPM traffic. These 4 servers also have a quad-port 1 Gig Intel card that I will use for iSCSI multi-path. At first my plan was to configure this the traditional route, not using SCVMM, and just configure the cards on each server, etc. I am now testing and considering using SCVMM for the iSCSI configuration. We are using EqualLogic PS4000 and PS4100 for the SAN, which use 2-4 1 Gig ports for connectivity.

    Here is what I have tested.
    Logical network – created an iSCSI logical network with its own network site, including subnet, VLAN, etc.
    Port profile – created an iSCSI uplink port profile using switch independent, dynamic; no teaming here.
    Logical switches – created 4 iSCSI logical switches, all bound to the iSCSI uplink port profile, the iSCSI network site, and the iSCSI workload virtual port.
    On each host I have added 4 new logical switches, each bound to one of the four ports on the quad-port network card and using the iSCSI uplink port profile created earlier. Underneath each virtual switch I have added a new virtual network adapter using the respective iSCSI network, VLAN, iSCSI workload port profile, and iSCSI IP pool, with a static IP configured within the iSCSI subnet.
    Assuming I have configured this correctly, I have to ask if you would even recommend using SCVMM for iSCSI configuration in this scenario? I certainly like the concept of using SCVMM for all network configurations. Since these quad-port cards are already dedicated to iSCSI traffic, would using SCVMM with the logical and virtual networks simply add unnecessary overhead and reduce performance? I have been reading and researching this scenario and I have not seen much guidance for this type of configuration. Any guidance you can provide is much appreciated.
    Also, I had to create a new account, as it says my previous one does not exist; strange.

  4. Hi there

    Excellent article 🙂

    Few questions, and appreciate your input:

    1. The article covers management networks. For virtual machine traffic like the production LAN, Exchange replication, etc., would you recommend using a separate physical adapter and following a similar process, or can we use the existing management network?

    2. When creating VM networks, if I have two sites (live and DR, isolated within the logical networks using VLANs/subnets), do I have to create separate VM networks for each?

    3. Do you have any documentation for a fully automated bare-metal build process?

    Any help is much appreciated.

  5. @Zgravity,

    1- It depends on your environment and how big it is. As a best practice, keep the storage traffic out of the converged network. Example: 2x 10G as a team to create a converged network with multiple vNICs, and keep the storage on two or more dedicated physical NICs without teaming.

    2- If you have an isolated logical network for each site (VLAN-based), yes, you need to have a VM network for each. A VM network allows you to assign a network segment (VLAN) to a virtual adapter. One VM network will typically be associated with one network segment (VLAN). This gives the network segment a friendly name, so that VMM admins do not need to know subnets or VLAN IDs. It can also have permissions assigned so that only certain users can select the network segment for their virtual machines.

    3- Honestly, I don’t do Bare metal deployment in VMM.

    Hope this helps.

    Cheers,
    /Charbel

  6. Excellent article. I have a question.
    We have multiple routable VLANs in our organization; what should I choose while creating the logical network, 'one connected network' or a VLAN-based network?
    And do we need to configure a trunk on the switch for all of this configuration?

  7. Hello Sandeep,

    I am glad that you liked the article.

    The first question:
    I recommend having one logical network as 'one connected network' for your infrastructure networks (storage, management, cluster, backup, etc.) and another logical network as VLAN-based for your tenants/VMs.

    As for the second question, yes, you must configure trunk mode on the uplink ports of the physical switch in order to pass the different VLAN IDs.

    Hope that helps.

    Cheers,
    /Charbel

    • Thank you very much for the quick reply.
      We are building SCVMM for internal servers (like VMware).

      Action Plan

      Logical switch -> VLAN-based network (second option) for all the routable networks (added all the VLAN IDs). Also created the respective VM networks.

      I have used a traditional failover Hyper-V cluster configuration for management traffic, since we are using multiple 1 GB NICs (nothing is configured inside VMM for management traffic).

      Next year we are upgrading the network switch to support 10GB traffic.

      Network details

      4 x 1 GB – Live Migration

      2 x 1 GB – Host Management

      4 x 1 GB – VM traffic (configured inside VMM, VLAN-based)

      2 x 1 GB – Cluster and CSV communication

      Storage – FC

      Please let me know your best recommendation

      • Hello Sandeep,

        I recommend for you the following based on your network:

        1- One logical network as 'one connected network' for Live Migration, Host Management, Cluster, and Backup; team all 8 NICs in Switch Independent/Dynamic mode.
        2- One logical network as VLAN-based for virtual machine traffic (4 NICs teamed), Switch Independent/Dynamic mode.

        Last but not least, make sure you set the port profile appropriately for each traffic type, and use minimum bandwidth mode Weight: Live Migration @40, Host Management @10, Cluster @20, Backup @30. The total should not exceed 100.

        Cheers,
        /Charbel

  8. Hello Charbel,
    I am glad you enjoyed your first MVP Summit; I wish you had passed by us in San Francisco.

    I was looking for best practices for the various networks in Hyper-V and their settings (register DNS, NetBIOS, gateway, etc.) and found an article that explains it in detail; however, it is for the 2008 version. I was wondering if all of this remains valid for 2012 R2, or if you have a more suitable set of recommendations for the latest Hyper-V version.

    Note: I couldn't paste the article link in here; I will try to send it by email.

    Thank you

  9. Hi Charbel

    Will the configuration below satisfy Hyper-V best practices in terms of networks? Please advise.

    • 1 x 1 Gb 4-port on-board adapter for: DMZ 1 (port 1), DMZ 2 (port 2), in-guest VM iSCSI (ports 3 and 4)

    • 1 x 10 Gb 2-port CNA adapter for iSCSI storage connections to the hosts, using the OneCommand Manager software. Using SCVMM 2012 R2, create 'one connected' logical networks for Live Migration, CSV, Management, and Heartbeat with weights Heartbeat: 10, Hyper-V Management: 10, Live Migration: 40, Storage: 40

    • 1 x 10 Gb 2-port adapter – using SCVMM 2012 R2, create VLAN-based logical networks for the production LAN, Exchange replication, and SQL heartbeat. Do I have to assign weights for the virtual machine networks?

  10. Hi Charbel, thanks for the post. I am just wondering why you have so many logical networks in your design. For the same functionality, I have seen others create just one logical network, and then create one site with multiple VLANs representing the management, cluster, LM, etc. networks. Because essentially that's what we are doing in converged networking: one switch with multiple virtual ports going out through one uplink, isolated by VLANs. So technically one network, which is converged. I am just looking for the thought process behind the way you have designed it. Thank you for the great posts.

    • Hello Shawn,

      I am glad that you like the post.
      Good question!
      The answer to why I have multiple logical networks for the same functionality (infrastructure = management, cluster, LM, etc.) is that I need to use the same logical network/subnet in multiple different sites; in other words, the Hyper-V hosts are decentralized, and each site has its own network subnet.
      So when I create the 'Cluster' logical network, for example, I can select all my network sites that have a cluster network, and later, when I create the uplink port profiles for each site, I can select the same 'Cluster' logical network for each site.
      However, you can create one infrastructure logical network as you mentioned above if all the hosts are centralized; it depends on your network design.

      Hope this helps.
      Cheers,

      • Thanks for the reply, Charbel. That helps me understand this well. I am almost done designing my fabric. I have one last question that came across my mind while looking at your configuration: where exactly did you integrate the virtual port profiles from Step 4? I can see where the uplink port profile went, resulting in the team. However, where did the other virtual port profiles go, the ones you assigned weights to? I do not see them integrated anywhere in your example, unless I am missing something obvious here.

        Thanks
        Shawn

        • Hello Shawn,
          I integrated the virtual port profiles in Step 5, where I created the logical vSwitch.
          So when you create a vmNIC for a VM or a vNIC on the host, you can choose one of the predefined virtual port profiles.
          As you noticed, I created all my virtual port profiles in Step 4 with a proper weight for each traffic type.

          Hope this helps.

          Cheers,
          Charbel

  11. Hi,

    Thanks for all the help. I have completed the implementation of SCVMM 2012 R2 (Windows 2012 R2 hosts). I have created multiple clusters for dev, prod, etc. Is there any way we can move a VM from the first cluster to the second cluster? I would also like to hide the following networks from the VM guest network selection option (host management, cluster, LM).

    • Hello Sandeep,

      I am glad that this article helped you out.

      If I understood your question correctly, you need to move the VM from Dev Cluster to Prod Cluster, right?
      Yes, you can move the VM between two clusters, but by using shared-nothing live migration (migrating the VM including its storage) and not through regular live migration; because each cluster has its own dedicated shared storage, you cannot live migrate between two separate clusters.

      Your second question is not clear. Do you need to hide the management, cluster, etc. networks inside the guest VM?
      The host management, cluster, and LM networks are vNICs on the Hyper-V host (parent partition) using the converged vSwitch; they do not exist inside the guest OS.

      Cheers,
      /Charbel

  12. Thanks for the help.

    Second question – when we open a VM's (guest) properties to select a particular VLAN network, I can view the management network, cluster network, and live migration network plus all the VLAN networks that I selected in the port profiles. Is there any way we can hide the VM, LM, and cluster networks?

    I have one more question

    I have a 2x 10 GB adapter team and created one logical switch, which has 3 attached virtual network adapters for LM, cluster, and management. Is it necessary to add a virtual network adapter for VM traffic? (I did not add an additional virtual adapter for VM traffic.)

    All are VLAN-based networks. I have created the appropriate VM networks and added them into the port profile.

    VM connectivity is fine

    Network Details

    Production Host group

    Logical networks (all VLAN-based):
    Production (all the VLANs)
    Cluster – cluster VLAN network IP
    LM – LM VLAN network IP
    Management – management VLAN IP

    Port profile

    Added all the above networks

  13. Correction:

    I have created the appropriate VM networks for the VMs. Please ignore the line 'added into port profile' (my bad). Also, ignore 'Production host group'; it is only 'Host group'. I do not see an edit option to correct this.

  14. Great article, very informative. I have a few questions, though; I hope you don't mind answering them for me.

    I understand that I can use SCVMM to team my two 10G NICs together and then create various vNICs for HOST-specific traffic like management, CSV, LM, replica, etc. What I'm confused about is how to create a vSwitch that the VMs can use. With a non-converged config, I would just create a vSwitch on top of a NIC or team and then use Hyper-V settings to tell each VM which VLAN to communicate on. With the new logical switch option, that doesn't seem to be how it works. Am I supposed to create separate VM networks, virtual network adapters, and (therefore) vNICs on the HOST for each VLAN I need? Do I no longer need to add VLAN tag info to each VM's settings? Is this why I have a VM network for each VLAN now?

    Here is my setup for reference:

    I have created TWO logical networks:
    1) The "Host Networks" logical network is configured as "ONE CONNECTED NETWORK". All of my HOST-specific VLAN/subnet pairs are added to a single site under this logical network. There is also a SINGLE VM network for this logical network.

    2) The "Production Networks" logical network is configured as "VLAN-BASED INDEPENDENT NETWORKS". All of my VM-specific VLAN/subnet pairs are added to a single site under this network. I have also manually created a VM network for each of the VLAN/subnet pairs I need VMs to communicate on.

    I only have two NICs (2x 10G) in each host, so I configured a single uplink port profile with both logical networks and a single logical switch.

    When I go into a host's properties, create a new logical switch, and then add VM network adapters to that logical switch, it creates vNICs on the HOST for every network, including the VM production networks. I don't need those networks visible to the host at all; I just need the VMs to talk to those networks.

    I seem to be missing something here. Hopefully you can help click the lightbulb on in my brain.

    • Thanks Matt,

      I am glad that you found the article very informative!

      Hopefully the following screenshot will give you the full answer to your question:

      [Screenshot: VMM Converged Non-Converged Network]

      Since you have only two 10 GB NICs, you cannot create two vSwitches with both NICs; you can assign one NIC to each vSwitch, but in that case you lose redundancy.

      As a side note, you can create a team with only one physical NIC, as you can see in the screenshot above. This is handy if you decide to add more NICs later, without having to delete the vSwitch and recreate the team 🙂

      Hope that helps!

      Cheers,
      /Charbel

  15. Can you please tell me what the difference is between making an uplink port profile for every network site and adding all the network sites to one uplink port profile?

    • Hello Ionut,

      Actually, there is no difference between having one uplink port profile and having one per site; it all depends on your environment.
      You might want to use a different load balancing algorithm in each site, for example if you have Windows Server 2012 R2 in one site and 2012 in the other.
      Or you might want to use network virtualization in one site and not in the other.
      If you have many sites, I prefer to create one uplink port profile for each site, so you have more flexibility later on.

      Hope that helps!

      Cheers,
      /Charbel

  16. I have a test environment with 2 hosts, and every host has 3 NICs. I made 3 logical networks, each with one site, and 2 of them have network virtualization enabled, and now I don't understand how to make the connection. I'm thinking of making an uplink port profile for every site and after that one logical switch for every network, but I think I am misunderstanding something. Can you please tell me if something is wrong with what I want to do?

  17. Hi Charbel, thank you for this great post. I have a question; I hope you can answer it.
    I have 2 hosts with 4 NICs each (single-site deployment, no VLANs); all the NICs get their IPs from DHCP. I want to use 2 NICs for cluster and live migration (this traffic will be non-routable, as you explained), and of the remaining NICs, 1 for management and 1 for VM traffic.

    Now, how can I segregate management and VM traffic, given that both NICs get the same IP range from DHCP?

    Please advise.

    • Hello,

      I recommend the following for your scenario.
      Since you have 4 NICs in each host, you need two different logical networks and two logical switches: one logical network for the management converged network, with 2 NICs teamed under an uplink port profile (live migration, cluster, backup, and management OS), while the other 2 NICs will be used for VM traffic.
      This way you have redundancy across all your network traffic.

      You should segregate the management and VM traffic by creating a VLAN-based logical network (Hyper-V VLAN) and putting each traffic type on a separate VLAN.

      Please refer to the screenshot above your reply.

      Hope this helps!

      Cheers,
      /Charbel

  18. Thank you, Charbel, for your great advice. Now I have another problem: when I am adding a 'New Virtual Network Adapter' in the Hyper-V host's virtual switch properties, it only allows me to add one adapter; after that, the 'New Virtual Network Adapter' option is greyed out.

    I need to add multiple. Any suggestions?

    • Hello Kamal,

      In order to add another vNIC, select the logical switch again under Virtual Switches on the Hyper-V host and click 'New Virtual Network Adapter'.
      You need to repeat the same steps in order to add several vNICs.

      Hope this helps.

      Cheers,
      ~Charbel

  19. Charbel – last week we upgraded our switches to 10G. I have reconfigured the VMM network settings to support 10G, and now everything is working as expected. Thanks for your guidance.

  20. Hello

    This is a wonderful article for SCVMM 2012 R2 Converged Architecture.

    But as I have noticed in all the blogs and documents, there is one grey area about this solution.

    As you mentioned, there are only 2 x 10 GB NICs used, which are part of a Dynamic/Switch Independent team.

    As mentioned in Step 6 (applying the networking template to the Hyper-V hosts), you are selecting both 10 GB NICs as physical adapters. But there is no information about how you added these hosts to SCVMM and what the network settings of the hosts were prior to applying the vSwitch.

    If you had already created a team before adding the hosts to SCVMM, you could not apply the settings in Step 6.

    So in this scenario, I assume you did it in the following order:

    1. Assigned Host IP on one 10GB NIC initially (NO Team) and Added the hosts to SCVMM.
    2. After all the networking configuration, in Step 6, you selected both 10GB NICs including one with the Host IP assigned.

    Kindly clarify my query.

    Normally, if we have two NICs, NIC1 will be connected to switch 1 and NIC2 to switch 2. If VSS (Cisco) technology is configured on the external switches, we can configure an LACP trunk with both ports connected to different switches as part of the same trunk. In that case, we cannot assign an IP to a single NIC that is part of the trunk. So is LACP not required at the switch level when configuring Dynamic/Switch Independent mode?

    Regards,
    Deepak

    • Hello Deepak,

      Thanks for the feedback.

      Prior to applying the team and creating the converged vSwitch using SCVMM, I had the management IP set on only one of the 10 Gb NICs.
      I joined the host to the domain and then pushed the VMM agent.
      When I was ready to push the uplink trunk team, I noted an important point in Step 6 in order to avoid losing network connectivity: VMM will remove the management IP from the physical NIC, then create a vEthernet vNIC and assign it the same management IP.

      This is correct:
      1. Assigned Host IP on one 10GB NIC initially (NO Team) and Added the hosts to SCVMM.
      2. After all the networking configuration, in Step 6, you selected both 10GB NICs including one with the Host IP assigned.

      I strongly recommend using Switch Independent with the Dynamic load balancing algorithm for the team, especially when you have two top-of-rack switches.
      LACP is not required when configuring Dynamic/Switch Independent mode.
      LACP is required on the switch side only when the team mode is set to LACP.

      Hope this helps.

      Cheers,
      ~Charbel

  21. Hi Charbel

    Thanks for the excellent article!

    We're building a 3-node, single-site Hyper-V cluster and are wondering what you would recommend for our setup.

    1. Each node has 4 10Gig NICs and dedicated FC HBA.

    2. Are we better off configuring LACP on the switches and using a switch-dependent LACP/Dynamic combo?

    3. Should we configure the converged fabric/vSwitch/vNICs prior to creating the logical switches in SCVMM? How would you recommend we set up the logical switches?

    • Hello Dennis,

      Please find my answers inline:

      1. Each node has 4 10 Gig NICs and a dedicated FC HBA. (10 Gig, great; are they RDMA-capable or standard 10 Gig?)

      2. Is it better off to configure LACP on the switches and use a switch-dependent LACP/Dynamic combo? (For Hyper-V hosts, I always recommend using Switch Independent/Dynamic mode, as per Microsoft best practices.)

      3. Should we configure the converged fabric/vSwitch/vNICs prior to creating the logical switches on SCVMM? How would you recommend we set up the logical switches? (If you follow the steps described in this article, you should be able to deploy end to end Converged Networks with SCVMM).

      Hope this helps!

      Cheers,
      ~Charbel

  22. I got a bit confused between creating NIC teaming using PowerShell and creating logical switches in SCVMM. If I have created the NIC teaming and vNICs using PowerShell, do I still have to create the logical switches in SCVMM? What are the differences?

    • Hello Dennis,

      You need to decide if you want to use PowerShell or VMM; you cannot mix and use both. If you want to use logical switches in VMM, which I highly recommend, then you want to configure the teaming and create the vNICs on the host using SCVMM, as described in this article.
      End to end through VMM. There is no functional difference between the two, but if you want to leverage VMM, then you should create and manage the complete network fabric using VMM.

      Hope this helps!

      Cheers,
      ~Charbel

    • Thanks for the kind response, Charbel.

      These are Intel DP X520 10 GigE NICs; I don't think they support RDMA. As for LACP vs. switch independent, I know MS recommends the latter; however, I can't find what their pros and cons are. The network admins here always prefer using LACP.

        • Thanks Charbel, very helpful indeed! 🙂

          We've got the 3 Hyper-V nodes up in a cluster; now it's time to create the logical networks. Since we've got 4 NICs (on 1 LACP port channel) on each host, we are going to run VMs on separate VLANs on the Hyper-V cluster. We only have a single site at the moment. What would be the recommended logical network design for this? Is it possible to team the 4 NICs and create 1 logical network with separate VLANs on top?

          Thanks again for your help.

          • Hello Dennis,

            Since you have 4 NICs, I recommend having two logical networks, each with 2 NICs teamed.
            One logical network for the management network (management OS, backup, CSV, live migration) and the second logical network dedicated to VMs.
            As of Windows Server 2012 R2, I recommend splitting the VM traffic out of the converged network.

            Cheers,
            -Charbel

  23. Hi! Great post. Hopefully a quick question. I have 6 vNICs: 1 for MGMT, 1 for CSV, 2 for iSCSI, 1 for live migration, and 2 for the production network, which are teamed. All are on separate VLANs but routable. What is the best practice here? Am I to create logical networks with IP pools, followed by uplink port profiles and then logical switches? Any guidance would be nice.

  24. To add to the previous comment: it is a 5-node clustered environment. 6 vNICs, 2 teamed for the VM traffic (VMs can communicate with one another but not with the host; each node hosts a "production site", and each production site is isolated from the others via VLANs). What do you suggest in terms of the SCVMM network design?

  25. Hi Charbel,

    Thanks for this very good article.
    I'm a bit confused about how to separate the management traffic from the VM traffic in our environment. We currently have our virtual machines on the same subnet and VLAN as our Hyper-V hosts. Should I create a new VLAN for the management traffic?

    • Yes, it's absolutely best practice to have the management OS (Hyper-V host) on a separate VLAN!
      The virtual machines go on different VLANs, of course, and if you are a service provider, you will want to look at network virtualization as well.

      Hope this helps!

      • I plan to have this configuration for our VSA environment after reading the recommendation on your blog.
        We have 12 NICs on each host (not 8 as I wrote before).

        One vSwitch for VSA (2 NICs)
        One vSwitch for production-management (8 NICs)

        2 NICs for iSCSI
        One management logical network (Host-Mngt-Traffic (VLAN 29), Host-Backup-Traffic (VLAN 45), Cluster-Traffic (VLAN 47), Live-Migration-Traffic (VLAN 46), Switch_Mngt (VLAN 1))

        - One production logical network (VM_Traffic (VLAN 30), DMZ (VLAN 40), Untrusted-C (VLAN 34), ...)

        Should I have the VSA and iSCSI on different VLANs?
        Should I integrate those VLANs into one of the logical networks above, or should I create a new one for VSA and another for iSCSI?
        Is it a good configuration to only have 2 vSwitches?

        And again, thanks for your support!!

        You wrote:

        Hello Christophe,

        I recommend having the VSA traffic and iSCSI on different VLANs.
        The VSA traffic is mirroring/replicating the data across all nodes at the same time.
        The cluster requires access to the shared storage through iSCSI, so it's better to have them separated.

        I prefer to have a separate VMM logical network for each type as well: one for VSA and another for iSCSI.
        I have the same in my environment.

        Yes, two vSwitches: one vSwitch is used for VSA and the other one for VM traffic (production).
        The remaining 2 physical NICs are for iSCSI with the HP DSM MPIO.

        P.S. Post a comment (confirm) on the blog post when you are done.

        Hope this helps!

        Hello Charbel,

        I created 2 uplink port profiles, one for production and another for VSA, both configured as Dynamic/Switch Independent (as you recommended).
        We have only one top-of-rack switch in our environment, so I wonder if I should use Dynamic/LACP instead of Switch Independent.

        Thanks

        • Thanks Christophe,

          I am glad to hear your project is completed.
          I recommend keeping Dynamic/Switch Independent instead of Dynamic/LACP, even with one TOR switch.
          For more information, please refer to this post.

          Cheers,
          -Charbel

1 Trackback / Pingback

  1. PowerShell Magazine » Announcing DSC resources to deploy Hyper-V converged virtual network
