How To Enable & Configure VMQ/dVMQ on Windows Server 2012 R2 with Below Ten Gig Network Adapters #HyperV #Vmq #vRSS

Hello Folks,

Back to basics: What is Virtual Machine Queue (VMQ), why do you need it, and why should you enable it?

Virtual Machine Queue (VMQ), or dynamic VMQ, is a mechanism for mapping physical queues in a physical NIC to a virtual NIC (vNIC) in the parent partition or a virtual machine NIC (vmNIC) in the guest OS. This mapping makes the handling of network traffic more efficient: it reduces CPU time in the parent partition and lowers the latency of network traffic.

VMQ spreads the traffic per vmNIC/vNIC, and each VMQ can use at most one logical CPU in the host. In other words, VMQ distributes traffic amongst the multiple guests (VMs) behind a vSwitch on a single host, at one core per vmNIC/vNIC.

Note: A vNIC is a host partition virtual NIC attached to the virtual switch in the management OS, and a vmNIC is the synthetic NIC inside a virtual machine.

VMQ is enabled automatically on Windows Server machines when a vSwitch is created on a 10Gig or faster network adapter, and it's most useful when hosting many VMs on the same physical host.

The figure below shows the ingress traffic flow with VMQ enabled for virtual machines.


[VMQ incoming traffic flow for virtual machines – source Microsoft]

When using 1Gig network adapters, VMQ is disabled by default because Microsoft doesn't see any performance benefit from VMQ on 1Gig NICs: a single CPU/core can keep up with 1Gig network traffic without any problem.

As I mentioned above, with VMQ disabled all network traffic for a vmNIC has to be handled by a single core/CPU; with VMQ enabled and configured, the network traffic is distributed across multiple CPUs automatically.

Now what happens if you have a large number of web server VMs on a host with two (or more) eight-core processors and a large amount of memory, but you are limited to 1Gig physical NICs?

The answer is…

VMQ and vRSS: better together!

As I demonstrated in previous blog posts, Post I and Post II, Windows Server 2012 R2 introduced a new feature called Virtual Receive Side Scaling (vRSS). This feature works with VMQ to distribute the CPU workload of receive network traffic across multiple vCPUs inside the VM, which effectively eliminates the CPU core bottleneck we experienced with a single vmNIC. To take full advantage of this feature, both the host and the guest need to run Windows Server 2012 R2: VMQ must be enabled on the physical host and RSS enabled inside the virtual machine. At this point in time, however, Microsoft doesn't enable vRSS for the host vNICs, only for VMs, so in a converged network environment each host vNIC is stuck with one processor in the management partition. The good news is that although each host-side vNIC is locked to one processor, the vNICs will still get VMQs, assuming you have enough queues, and those queues get distributed across different processors.
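As a quick illustration, RSS inside the guest can be checked and enabled with the built-in NetAdapter cmdlets (a sketch; the adapter name "Ethernet" is an assumption for your VM):

```powershell
# Run inside the Windows Server 2012 R2 guest OS.
# "Ethernet" is the default adapter name; adjust for your VM.
Get-NetAdapterRss -Name "Ethernet"       # check the current RSS state
Enable-NetAdapterRss -Name "Ethernet"    # enable RSS so vRSS can spread receive traffic across vCPUs
```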

The requirements to enable VMQ are the following:

1. Windows Server 2012 R2 (dVMQ+vRSS).
2. The Physical network adapters must support VMQ.
3. Install the latest NIC driver/firmware (very important).
4. Enable VMQ for 1Gig NICs in the registry (this step can be skipped if you have 10Gig or faster adapters):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\VMSMP\Parameters\BelowTenGigVmqEnabled = 1

5. Reboot the host if you enable the registry key in step 4.
6. Determine the values for Base and Max CPU based on your hardware configuration.
7. Assign values for Base and Max CPU.
8. Enable RSS inside the Virtual Machines.
9. Verify VMQ is turned on under the Hyper-V settings for each VM (it is ON by default).
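Steps 4 and 5 above can be scripted rather than done with regedit; a sketch, run from an elevated PowerShell prompt on the host:

```powershell
# Enable VMQ for below-10Gig adapters (step 4); a reboot is required (step 5).
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\VMSMP\Parameters" `
    -Name "BelowTenGigVmqEnabled" -Value 1 -Type DWord
Restart-Computer
```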

What is the Base CPU? It is the first CPU used to process the incoming traffic for a particular vmNIC.

What is the Max CPU? It is the maximum number of CPUs that the NIC is allowed to spread traffic processing across.

Ok, so having this explained let’s configure VMQ step by step:

Our Lab Scenario:

We have 8 physical 1Gig NICs and two 8-core processors (16 cores, 32 logical processors with Hyper-Threading).

First we need to determine whether Hyper-Threading is enabled by running the following cmdlet:

PS C:\> Get-WmiObject -Class Win32_Processor | ft -Property NumberOfCores, NumberOfLogicalProcessors -AutoSize


As you can see, NumberOfLogicalProcessors is twice NumberOfCores, so we know that HT is enabled on the system.

Next, we need to look at our NIC Teaming and load distribution mode:

PS C:\> Get-NetLbfoTeam | ft -Property TeamNics, TeamingMode, LoadBalancingAlgorithm -AutoSize


Having determined that Hyper-Threading is enabled and the teaming mode is Switch Independent with Dynamic load distribution, we can move on to assigning the Base and Max CPUs.

Attention: before moving on to the assignment, there is one important point to consider. If the NIC team is in Switch-Independent teaming mode and the load distribution is set to Hyper-V Port mode or Dynamic mode, the number of queues reported is the sum of all the queues available from the team members (SUM-Queues mode); otherwise the number of queues reported is the smallest number of queues supported by any member of the team (MIN-Queues mode).

What is (SUM-Queues mode) and What is (MIN-Queues mode)?

SUM-Queues mode reports the total number of VMQs across all the physical NICs participating in the team, while MIN-Queues mode reports the minimum number of VMQs of any physical NIC participating in the team.

As an example, suppose we have two physical NICs with 4 VMQs each. If the teaming mode is Switch Independent with Hyper-V Port, we are in SUM-Queues mode with 8 VMQs; if the teaming mode is Switch Dependent with Hyper-V Port, we are in MIN-Queues mode with 4 VMQs.

[You can refer to the table below in order to determine the Teaming and Load distribution mode, source - Microsoft]:

Teaming mode ↓ \ Distribution mode →    Address Hash modes    Hyper-V Port    Dynamic
Switch independent                      Min-Queues            Sum-Queues      Sum-Queues
Switch dependent                        Min-Queues            Min-Queues      Min-Queues

In our scenario, the NIC Team is in Switch Independent with Dynamic Mode so we are in SUM-Queues mode.
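To see the queue counts behind this in practice, you can list the VMQs each team member exposes:

```powershell
# NumberOfReceiveQueues shows the VMQs per physical NIC; in SUM-Queues mode
# the team total is the sum of these values, in MIN-Queues mode the smallest.
Get-NetAdapterVmq | ft Name, Enabled, NumberOfReceiveQueues -AutoSize
```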

If the team is in Sum-Queues mode, the team members' processor sets should be non-overlapping, or have as little overlap as possible. For example, in a 4-core host (8 logical processors) with a team of 2 x 10Gbps NICs, you could set the first NIC to use base processor 2 with 4 cores, and the second to use base processor 6 with 2 cores.

If the team is in Min-Queues mode, the processor sets used by the team members must be identical: configure each NIC team member to use the same cores, so the assignment for each physical NIC is the same.
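For example, a MIN-Queues assignment might look like this (a sketch; the NIC names are placeholders):

```powershell
# MIN-Queues mode: every team member gets the identical Base/Max assignment.
"NIC-01", "NIC-02" | ForEach-Object {
    Set-NetAdapterVmq -Name $_ -BaseProcessorNumber 2 -MaxProcessors 8
}
```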

Now let’s check first if VMQ is enabled:

PS C:\> Get-NetAdapterVmq


As you can see VMQ is enabled (=True) but not yet configured.

And here we have two converged network teams, each with 4 physical NICs of 16 queues apiece, so the total number of VMQs per team is 64.

I am using one converged team for the vmNICs (VMs), and the second one for the vNICs on the host.

We will set the Base and Max CPUs by running the following cmdlets for the teamed adapters under ConvergedNetTeam01:

PS C:\> Set-NetAdapterVmq -Name NIC-b0-f0 -BaseProcessorNumber 2 -MaxProcessors 8
PS C:\> Set-NetAdapterVmq -Name NIC-b0-f1 -BaseProcessorNumber 10 -MaxProcessors 8
PS C:\> Set-NetAdapterVmq -Name NIC-b0-f2 -BaseProcessorNumber 18 -MaxProcessors 8
PS C:\> Set-NetAdapterVmq -Name NIC-b0-f3 -BaseProcessorNumber 24 -MaxProcessors 8

As I mentioned above, in SUM-Queues mode you should configure the Base and Max CPUs for each physical NIC with as little overlap as possible. In our lab environment, however, we didn't have as many cores as we had queues, so we had to accept some overlap; otherwise we would be wasting queues.

Let’s run Get-NetAdapterVmq again and see the changes:


As you can see, the Base and Max processors are now set. Next we can run Get-NetAdapterVmqQueue, which shows us how the queues are assigned across the vmNICs of all VMs on that particular host.


Now let’s see the result before and after VMQ + vRSS are enabled:

VMQ and vRSS disabled

In the Guest OS:


In the Host:


VMQ and vRSS enabled

In the Guest OS:


In the Host:


Last but not least, here are best practices for configuring VMQ:

1. When using NIC Teaming, always use Switch Independent with Dynamic Mode when possible.
2. Make sure your base processor is never set to zero, because CPU0 handles special functions that cannot be handled by any other CPU in the system.
3. Keep in mind when assigning the Base/Max CPUs that if Hyper-Threading is enabled, only even-numbered logical processors (2, 4, 6, 8, etc.) map to real cores; if HT is not enabled, you can use both even and odd numbers (1, 2, 3, 4, 5, etc.).
4. In SUM-Queues mode, configure the Base and Max CPUs for each physical NIC with as little overlap as possible; how much is possible depends on how many cores the host hardware has.
5. Only assign Max Processors values of 1, 2, 4, or 8. It is OK for the processor range to extend past the last core, or to exceed the number of VMQs on the physical NIC.
6. Don't set the Base and Max processors on the Multiplexor (teamed) adapter; leave it at the default.

In conclusion, I prefer to enable VMQ on 1Gig NICs so I can keep my network traffic spread across as many CPUs/cores as possible.

For a VMQ and RSS deep dive, see the three-part TechNet series VMQ Deep Dive.

Hope this helps.

Until then, enjoy your weekend!


Posted in Hyper-V, Network

Deploying HP StoreVirtual VSA On Hyper-V 2012 R2 Cluster – Part 3 #HyperV #HP #Storage #StoreVirtual #SysCtr

Hello Folks,

In Part I and Part II of this series we covered the deployment of HP StoreVirtual VSA, including the Centralized Management Console (CMC), on Hyper-V, and the configuration of the StoreVirtual VSA cluster and Failover Manager. In Part III, this post, we will create our Hyper-V cluster using Virtual Machine Manager.

So without further ado, let’s get started.

As I mentioned in Part I, HP StoreVirtual integrates very well with Microsoft Hyper-V through HP Insight Control for Microsoft System Center, the HP management pack for SCOM, VSS provider and requestor functions, the LeftHand DSM for MPIO, Application Aware Snapshot Manager, Recovery Manager for Windows, Windows Active Directory, and SMI-S support for System Center Virtual Machine Manager.

For streamlined workflows in Hyper-V environments, the storage presented by StoreVirtual VSA can be managed from within Microsoft System Center Virtual Machine Manager (SCVMM) 2012 SP1. Using VMM, you can provision new storage for a virtual machine or rapidly deploy a virtual machine template with SAN Copy. HP StoreVirtual does not require a proxy agent; instead, SCVMM communicates directly with the StoreVirtual cluster.

Once the StoreVirtual cluster has been set up as described in Part II, the cluster virtual IP address of the VSA can be added as a storage provider in Virtual Machine Manager. When adding StoreVirtual, make sure you use SSL encryption for the communication, the default port (TCP 5989), and the SMI-S CIM-XML protocol, as shown in the screenshots below:


The storage resources need to be associated with a storage classification (administrator defined class of storage, such as Bronze, Silver, Gold).

After the provider has been added successfully, all clusters in your StoreVirtual management group will be listed in the Classification and Pools list. New volumes can be provisioned from the available storage pools, used by new virtual machines and presented to Hyper-V hosts that are managed by Virtual Machine Manager via iSCSI.

Note: After you successfully register the VSA in VMM via SMI-S, no information is shown other than the device name. No vendor, no model, and no storage information!

Unfortunately, Microsoft System Center Virtual Machine Manager 2012 R2 is not yet supported by HP VSA SAN iQ 11; only VMM 2012 SP1 is qualified. Hopefully this will be fixed in the near future.

So let’s go back to the traditional way and present the LUN manually on each Hyper-V node via the iSCSI initiator.
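This step can also be scripted with the built-in iSCSI cmdlets instead of the iSCSI Initiator GUI (a sketch; the VIP 10.0.0.50 is a placeholder for your StoreVirtual cluster virtual IP):

```powershell
# Placeholder VIP; replace with your StoreVirtual cluster virtual IP address.
New-IscsiTargetPortal -TargetPortalAddress "10.0.0.50"
# Connect every discovered target persistently, with multipath (MPIO) enabled.
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true -IsMultipathEnabled $true
```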

After the LUN is mapped on each node, we will create Windows Server 2012 R2 Failover Cluster using the Hyper-V hosts in VMM. Any VMs that are deployed on the cluster will be made highly available, so after a host failure the VMs would automatically restart on another node in that cluster, providing resiliency to both planned and unplanned downtime.

Here are the steps to create the cluster in VMM; it's fairly simple:

1. Select the Fabric workspace in VMM.

2. Select the Home tab above the ribbon.

3. Click Create, and select Hyper-V Cluster.


4. In the Create Cluster Wizard, on the General page, type your desired “Cluster Name” in the Name field.

5. Next to Use an existing Run As account, click Browse, select an admin account in order to create the cluster, and then click OK. Click Next.


6. On the Nodes page, multi-select HV01 and HV02, click Add, and then click Next.


7. On the IP Address page, check the box next to the network and select LAN IP Pool from the Static IP Pool dropdown. VMM will automatically allocate an IP address to the cluster from that pool. Click Next.

8. On the Storage page, select the disks you want to cluster. In my case, the first disk will be selected as the witness disk and the second one as a Cluster Shared Volume (CSV).

9. On the Networks page, click Next. The Virtual Switches and Logical Switches for these hosts were already created.

10. On the Summary page, ensure the settings match below, and then click Finish.

11. Observe the running job in the Jobs window. Maximize the Jobs window, select the Validate Cluster job in the results pane at the bottom, and click the Details tab. Observe the job through to completion.

12. This will take a few minutes to complete.
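As an aside, the same failover cluster could be built outside VMM with the native FailoverClusters cmdlets (a sketch; the cluster name, static IP, and disk resource name are placeholders for this lab):

```powershell
# Run on one node with the Failover Clustering tools installed.
# "HVCluster" and the static address are placeholders for your environment.
New-Cluster -Name "HVCluster" -Node HV01, HV02 -StaticAddress 192.168.1.50
# List the disk resources, then add the data disk to Cluster Shared Volumes.
Get-ClusterResource | Where-Object ResourceType -eq "Physical Disk"
Add-ClusterSharedVolume -Name "Cluster Disk 2"   # placeholder resource name
```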

And here you go, the Hyper-V cluster is created!


I hope this blog series demonstrated the full picture of how to deploy HP StoreVirtual VSA on Hyper-V and its integration with VMM.

In case you missed them: Part I and Part II.

Thanks for reading.

Until then, enjoy your day!


Posted in Cluster, Hyper-V

Deploying HP StoreVirtual VSA On Hyper-V 2012 R2 Cluster – Part 2 #HyperV #HP #Storage #StoreVirtual #SysCtr

Hello Folks,

In Part I of this series we covered the deployment of HP StoreVirtual VSA and the Centralized Management Console (CMC) on Hyper-V. Part II, this post, is dedicated to the configuration of the StoreVirtual VSA cluster and Failover Manager, and in Part III we will create our Hyper-V cluster using Virtual Machine Manager. I know it's a long blog series, but I want to make sure we cover all aspects of the Hyper-V software-defined storage deployment.

So without further ado, we are in Step 3 in our deployment plan:

III- Configuration

Let’s start the Centralized Management Console (CMC). If you do not see any systems under “Available Systems”, click “Find” on the menu and choose “Find Systems…”. A dialog box will appear. Click “Add…” and enter the IP address of one of the previously deployed VSA nodes (HP-VSA01). Repeat this until all deployed VSA nodes are added, then click “Close”. Now you should have all available VSA nodes listed under “Available Systems”.


A management group is a collection of one or more VSA storage systems. It is the container within which you cluster storage systems and create volumes for storage. Creating a management group is the first step in creating HP StoreVirtual VSA storage. Right-click any node and choose “Add to New Management Group…” from the context menu. We will add two nodes (HP-VSA01 and HP-VSA02) to this new management group.


As you can see, we get a warning message! To ensure that we have the highest level of availability, we need to install the HP StoreVirtual Failover Manager (FOM).

HP StoreVirtual FOM is a specialized version of the LeftHand OS designed to run as a virtual appliance in a virtual environment. The HP StoreVirtual FOM participates in the management group as a manager in the system performing quorum operations only, not data movement operations. It is recommended for 2-node or Multi-Site configurations to maintain quorum without requiring any additional physical hardware.

IMPORTANT: You must install the Failover Manager on network hardware other than the storage systems in the SAN, to ensure that it is available for failover and quorum operations if a storage system in the SAN becomes unavailable.

Now, if you don’t use the Failover Manager and you select “I understand that not using FOM…”, a Virtual Manager is added to the management group by default, but it is not started on a storage system until a failure in the system causes a loss of quorum. Unlike the Failover Manager, which is always running, the Virtual Manager must be started manually on a storage system after quorum is lost. It is designed for two-system or two-site configurations that are at risk of a loss of quorum.

Note: A virtual manager requires manual intervention to recover quorum and can have undesirable effects when left running after quorum is recovered. Therefore, it’s highly recommended that you use the Failover Manager rather than the Virtual Manager.

OK, with that explained, and because I have only two VSA storage nodes, I will install the HP StoreVirtual FOM on a third Hyper-V host and then repeat the cluster group wizard.

The installation of the Failover Manager for Hyper-V Server is straightforward:

Note: Before you start the installation, please make sure .NET Framework 3.5 (includes .NET 2.0 and 3.0) is installed on the Hyper-V host.
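If the feature is missing, it can be added from PowerShell (a sketch; the -Source path is a placeholder and is only needed when the feature payload is not available locally):

```powershell
# Install .NET Framework 3.5 (includes .NET 2.0 and 3.0) on the Hyper-V host.
Install-WindowsFeature -Name NET-Framework-Core -Source "D:\sources\sxs"
```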

1. Mount the .iso image of the HP StoreVirtual VSA and Failover Manager DVD.

2. Locate the applicable executable and double-click it to begin.

3. Click Agree to accept the terms of the License Agreement.

4. Choose a location for the Failover Manager virtual machine and a location for the virtual hard disks, and click Next.

5. Enter the network information, including the host name and network addressing information, and click Next.

6. Enter a name for the Failover Manager, select whether you want it powered on after it is installed, and click Next.

7. Review the configuration summary and click Next to finish the installation. When the installer is finished, the Failover Manager is ready to be used in HP StoreVirtual Storage.

After the installation, click Find → Find Systems… in the CMC, enter the Failover Manager IP address to discover it, and add it to the management group. But before doing that, let’s open Hyper-V Manager and check the settings of the StoreVirtual Failover Manager VM.

Now let’s create the cluster management group again using the Centralized Management Console:

Click “Next”. On the next page of the wizard we have to enter a username and password for the administrative user that will be added to all VSA nodes.

On the next page we need to provide an NTP server. We could set the time manually, but a preferred NTP server is more reliable, such as a server on the local network, which would have a reliable and fast connection to the storage systems. It has to be reachable by the VSA nodes!

On the next page of the wizard, you have to provide the DNS information: DNS domain name, additional DNS suffixes, and one or more DNS servers. The same applies to the DNS servers as to the NTP server: they have to be reachable by the VSA nodes!

You can set up email notification of events for each management group. You must set up an email server to send the events; you need the email (SMTP) server IP or host name, the server port, and a valid email address to use as the sender address.

Now a very important question: Standard or Multi-Site cluster? A Multi-Site cluster incorporates additional features to ensure site fault tolerance, and you must have the same number of VSA storage systems in each of the data sites spanned by the Multi-Site cluster. I chose to create a standard cluster since we don’t have a Multi-Site deployment.

After choosing the cluster type, we have to provide a cluster name and select the nodes that should be members of this new cluster.

The next step is to configure the cluster Virtual IP address (Cluster VIP). This IP address has to be in the same subnet (VLAN) as the VSA nodes. This IP address is used to access the cluster. After the initial connection to the cluster VIP, the initiator will contact a VSA node for the data transfer.

A VIP is required for a fault-tolerant iSCSI cluster configuration, using VIP load balancing or the HP StoreVirtual DSM for Microsoft MPIO. When using a VIP, one storage system in the cluster hosts the VIP. All I/O goes through the VIP host. You can determine which storage system hosts the VIP by selecting the cluster, then clicking the iSCSI tab when the cluster is created.

Note: All iSCSI initiators (Hyper-V hosts) must be configured to connect to the VIP address for the iSCSI failover to work properly.

The wizard also allows us to create volumes. This step can be skipped and the volumes created later; however, I created two fully provisioned volumes, a 5GB volume for the Windows failover cluster quorum and a 195GB volume for the Cluster Shared Volume.


After clicking “Finish”, the management group and the cluster will be created. These steps can take some time.

At the end you will get a summary screen. You can create further volumes, or you can repeat the whole wizard to create additional management groups or clusters.

Congratulations! You now have a fully functional HP StoreVirtual VSA cluster.


But wait, we’re not finished yet… we still need to add the Hyper-V hosts.

To present the volumes to the hosts, you have to add the hosts first. A host consists of a name, an IP address, an iSCSI IQN and, if needed, CHAP credentials. Multiple hosts can be grouped into server clusters; you need at least two hosts to build a server cluster. But first of all, we will add the two Hyper-V hosts:


What is the Controlling Server IP Address to use?

When working with Hyper-V, it’s the physical IP address required for each server hosting a VSS provider in the Windows failover cluster. Since there is likely more than one LeftHand OS server in a Windows failover cluster, enter one physical IP address in each of the related LeftHand OS server dialogs.

With at least two hosts, you can create a server group. A server group simplifies volume management, because you can assign and unassign volumes for a group of Hyper-V hosts with a single click. This ensures consistency of volume presentation across the group of hosts.


Last but not least, we need to present the volumes. During the initial cluster management group configuration we created two fully provisioned Network RAID-10 volumes, 5GB and 195GB. To assign the volumes to a server group, right-click the new server group in the CMC and click “Assign and Unassign Volumes…”. A window will pop up where you can check or uncheck each volume and set the permission to Read-Only or Read-Write.


And here we go, we are nearly at the end. We only have to add the cluster VIP to the iSCSI initiator on each Hyper-V host, then initialize and format the two presented disks.
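The disk preparation on a Hyper-V node can be sketched in PowerShell as follows (an assumption-laden sketch: it treats every RAW disk on the node as one of the newly presented LUNs):

```powershell
# On one Hyper-V node: initialize, partition, and format the newly
# presented iSCSI disks (assumes the RAW disks are the new LUNs).
Get-Disk | Where-Object PartitionStyle -eq "RAW" | ForEach-Object {
    Initialize-Disk -Number $_.Number -PartitionStyle GPT
    New-Partition -DiskNumber $_.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -Confirm:$false
}
```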


Part 3 will cover the configuration of the Hyper-V cluster in VMM. If you have any questions or feedback, please feel free to leave a comment below.

Thanks for reading.

Stay tuned… Until then, enjoy your day!


Posted in Cluster, Hyper-V

Deploying HP StoreVirtual VSA On Hyper-V 2012 R2 Cluster – Part 1 #HyperV #HP #Storage #StoreVirtual #SysCtr

Hello Folks,

In this blog post, we will cover the deployment of the current StoreVirtual VSA release (LeftHand OS11) including the Centralized Management Console (CMC). A second blog post will cover the management group cluster configuration, and a third blog post will cover Hyper-V Cluster configuration using Virtual Machine Manager 2012 R2. The three posts are focused on LeftHand OS11 and Microsoft Hyper-V 2012 R2.

HP Software-Defined Storage Overview:

HP now bundles software-defined storage, in the form of StoreVirtual VSA (Virtual Storage Appliance), with every purchase of a new ProLiant Gen8 server. The offer, announced in November 2013, is available with 10 ProLiant models and includes a free 1TB entitlement license. The license can be combined across up to 3 nodes for a total capacity of 3TB of storage in a free software SAN.


[Example of software-defined storage on HP ProLiant servers with the StoreVirtual VSA – Source HP]

The new offer is awesome for small and medium office deployments where typically only a few Hyper-V servers are deployed. HP VSA completely eliminates the need for an expensive SAN, NAS, or other physical shared storage: CapEx is reduced because there’s no need to buy expensive hardware, and OpEx is lowered because there’s less hardware to maintain.

If you want more than 1TB of capacity, which is the case in production, you can upgrade to a retail copy of StoreVirtual VSA; licenses are available at 4, 10, 20, 30, 40, and 50TB per 3 nodes, each including 3 years of support.

HP StoreVirtual integrates very well with Microsoft Hyper-V through HP Insight Control for Microsoft System Center, the HP management pack for SCOM, VSS provider and requestor functions, the LeftHand DSM for MPIO, Application Aware Snapshot Manager, Recovery Manager for Windows, Windows Active Directory, and SMI-S support for System Center Virtual Machine Manager.

OK, with that explained, let’s move on to the deployment process. Before we start the setup wizard, we have to think about the goals of our setup; there are some things we need to consider. The deployment process can be divided into 3 steps:

  1. Planning
  2. Deployment
  3. Configuration

I- Planning the installation

Preparation for StoreVirtual VSA:

Make sure to update all firmware and driver components, and consider installing all required Windows Server roles and features (Windows Failover Clustering, MPIO, etc.) before starting the deployment of StoreVirtual VSA.
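The Windows roles and features mentioned above can be installed in one line (a sketch using the standard Server Manager cmdlet):

```powershell
# Install Failover Clustering and MPIO before deploying StoreVirtual VSA.
Install-WindowsFeature -Name Failover-Clustering, Multipath-IO -IncludeManagementTools
```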

The underlying hardware on each Hyper-V host should be similar when used in the same StoreVirtual VSA cluster for ideal performance. This is especially true for the storage and networking configuration.

Network Considerations:

The network is a key component in every HP StoreVirtual deployment. It is critical to the communication among storage nodes as well as for Hyper-V hosts accessing volumes via iSCSI. Therefore the network segment for StoreVirtual VSA should be isolated from other networks using a separate VLAN.

Hyper-V Host networking: We need a minimum of four network interfaces. On Hyper-V, the StoreVirtual VSA uses a virtual network switch (vSwitch), which is typically connected to two or more physical network interfaces on the server.

Assuming we have four NICs on the Hyper-V host, the recommended configuration is to dedicate two NICs to a vSwitch for StoreVirtual VSA; this vSwitch can also be shared by the host to access volumes via iSCSI. Leveraging Hyper-V converged networking, the remaining two NICs on a second vSwitch are used for virtual machine traffic as well as for management OS, backup, cluster, and live migration traffic.


[Expanded networking configuration for StoreVirtual VSA on Hyper-V – Source HP]

As a best practice, it is recommended to use individual network interfaces with Multipath I/O to connect to iSCSI targets, instead of teaming the network interfaces.

HP StoreVirtual DSM for MPIO can also be used to improve sequential throughput and lower latency when connecting to volumes on a HP StoreVirtual cluster. By establishing an iSCSI session to each StoreVirtual VSA in the cluster, blocks of data can be efficiently retrieved from the StoreVirtual VSAs where the block actually resides. The software can be installed on the Hyper-V host.

Storage Considerations:

StoreVirtual VSA relies on hardware RAID to protect its data disks. It is strongly recommended to validate the storage configuration to be the same on all Hyper-V servers designated to run HP StoreVirtual VSA for optimal performance.

Adaptive Optimization: LeftHand OS 11.0 introduces simple and smart sub-volume auto-tiering in the StoreVirtual VSA with a technology called Adaptive Optimization. The concept is the same as storage tiering in Windows Server 2012 R2 Storage Spaces: it allows two storage tiers to be used in a single StoreVirtual VSA instance, moving more frequently accessed blocks to the faster tier (for instance, SSDs) and keeping less accessed blocks on a tier with lower performance and potentially lower cost (for example, SAS drives). Note, however, that Adaptive Optimization is only available with the 10TB licenses and above.

The RAID sets used for the StoreVirtual VSA’s data disks should not be shared with other workloads. On a Hyper-V host, the base image of the StoreVirtual VSA is typically stored on the local boot device of the server along with the installation of Windows Server; alternatively it can be stored on the lower performing tier when virtual hard disk files are used.

Storage capacity can be presented to the StoreVirtual VSA via virtual hard disks (VHDX; up to 64 TB) or directly as pass-through disks. For the majority of deployments, virtual hard disk (VHDX) files are used, as they are easier to manage.


[Mapping storage resources to StoreVirtual VSA – Source HP]

CPU & Memory Considerations:

CPU and memory resources have to be reserved: you should have at least two 2GHz cores reserved for each VSA node. The memory requirement depends on the virtualized storage capacity; for example, for 1-4TB of capacity, each VSA node needs 5GB of RAM (please refer to the table below for all VSA capacities, with or without Adaptive Optimization):

[Virtual memory requirements for StoreVirtual VSA – Source HP]


Meaningful hostnames facilitate management later using the Centralized Management Console (CMC). I named the HP VSA nodes in my lab HP-VSA01 and HP-VSA02; feel free to name your VSAs as you like.

II- Deployment

Now, assuming that storage and networking are configured on all Hyper-V hosts that will run the StoreVirtual VSA, the next step is to deploy the software on each host. The HP StoreVirtual VSA installer and the HP StoreVirtual DSM for MPIO installer configure the StoreVirtual VSA on the local server, which means you need to run the installer on each Hyper-V host individually.

Deploying HP StoreVirtual DSM for MPIO:

Note: Installing the DSM for MPIO requires a server reboot to complete the installation.

The setup file (HP_StoreVirtual_DSM_for_Microsoft_MPIO_Installer_AT004-10511.exe) is self-extracting.







Please repeat above steps on the second Hyper-V node as well.

Deploying HP StoreVirtual VSA on Hyper-V:

Note: Installing the StoreVirtual VSA on Hyper-V requires .NET Framework 3.5 (includes .NET 2.0 and 3.0) to be installed before you start the deployment.

The setup file (HP_StoreVirtual_VSA_2014_Installer_for_Microsoft_Hyper-V_TA688-10517.exe) is self-extracting.

On the Welcome screen, Click “Next”…


Accept the Agreement…


Select the Virtual Storage Appliance (VSA) and click “Next”…


Choose the location for the HP VSA virtual machine configuration file and the virtual hard disks. As I mentioned above, the base image of the StoreVirtual VSA is typically stored on the local boot device of the Hyper-V server along with the installation of Windows Server; alternatively, it can be stored on the lower-performing tier when virtual hard disk files are used.


Now it’s time to specify a host name (in my case node 1), assign a static IP address, and then select a dedicated Hyper-V virtual switch for the HP StoreVirtual VSA.


Now it’s time to give the VM a name and create the virtual hard drives (VHDX) where the data will be stored.


You can create up to 7 VHDX files, and based on your licensing scheme you can scale from 1TB to 50TB.


At the end of the wizard and before the actual installation is started, the installer presents a summary page of the settings. Please review this page carefully and compare with your planning guides.


After a couple of minutes the deployment is finished. Hit “Close”. Now it’s time to start the Centralized Management Console (CMC) deployment.


Let’s open Hyper-V Manager console and check the settings of the StoreVirtual VSA VM.


Please repeat the above steps on the second Hyper-V node as well.

Deploying HP StoreVirtual Centralized Management Console (CMC):

The setup file (HP_StoreVirtual_Centralized_Management_Console_for_Windows_BM480-10562.exe) is self-extracting.

The installation of the Centralized Management Console is straightforward…











After a successful deployment using the installation wizard, the StoreVirtual VSA instance will be available on the designated network. Make sure that the StoreVirtual VSA is listed as an available system in the HP StoreVirtual Centralized Management Console (CMC).

CMC will open and you can start discovering your VSA nodes.


For the latest information about the StoreVirtual VSA offer and how to take advantage, visit HP website at

Part II will cover the cluster configuration of the management group, and Part III will cover the Hyper-V cluster and SCVMM. If you have any questions or feedback, please feel free to leave a comment below:

Stay tuned… Until then, enjoy your day!


Posted in Cluster, Hyper-V

5nine Manager 5.0 NEW for Hyper-V Just Released



5nine Manager 5.0 NEW for Hyper-V

5nine just announced today an updated version of 5nine Manager 5.0 for Hyper-V; a new version of 5nine Cloud Security will be announced later in July.

What’s new in 5nine Manager 5.0 for Hyper-V

  • Automated virtual machine (VM) provisioning
  • Enhanced cluster management
  • Guest connection views for VMs through FreeRDP or Microsoft controls
  • Replication to enable failover of production workloads to a secondary site for disaster recovery
  • Group operations for multiple VMs
  • Applications Logs
Key features
  • Manage multiple Microsoft Hyper-V hosts of different versions (2012 R2 / 2012 / 2008 R2 SP1) from one management console, remotely or locally
  • A local Graphical User Interface for Windows Server and Microsoft Hyper-V Server
  • Agentless antivirus plugin for full and incremental VM and host scans
  • Hyper-V logs view 
  • Applications Logs NEW
  • Integrated Best Practices Analyzer
  • Copying Hyper-V settings between hosts
  • File Manager

#1 Agentless Security and Compliance Solution for Hyper-V

5nine Cloud Security for Hyper-V is the first and only agentless complete security and compliance solution built specifically for Microsoft Cloud OS and Hyper-V, utilizing the extensibility of Hyper-V switch. It allows users to:

  • Secure multi-tenant Hyper-V environments and provide VM isolation
  • Protect Hyper-V with fast, agentless antivirus
  • Enforce PCI-DSS, HIPAA and Sarbanes-Oxley compliance
  • And more.

Multi-layered protection is provided, with an integrated firewall, antivirus and Intrusion Detection System (IDS). The agentless firewall ensures complete traffic control and isolation between VMs. The antivirus performs incremental scans up to 50x faster and IDS proactively detects malicious attacks.


1- Secure multi-tenant Hyper-V environment

Whether you are a hosting provider or a business, rest assured that multiple tenants in your virtual network have access to all required resources, all while being absolutely isolated and protected from each other.

2- Provide Hyper-V VM isolation

With the Hyper-V environment, the system faces new types of security threats. 5nine Cloud Security allows you to protect your virtual machines from any internal and/or external network security breach.

3- Protect Hyper-V with agentless antivirus

5nine Cloud Security provides unique agentless antivirus technology for Hyper-V that allows saving CPU resources and increasing VM density by up to 30%. As a result, it leads to a reduction of capital expenditure on physical infrastructure.

4- Enforce Hyper-V compliance

5nine Cloud Security will provide the required level of protection for all Hyper-V networks in order to be compliant with PCI-DSS, HIPAA or Sarbanes-Oxley security standards.

5- And more

Would you like to know more about powerful security and compliance instruments of 5nine Cloud Security for Hyper-V? Then feel free to proceed to the Trial Edition.


Posted in Hyper-V, Security

Auto Update The Installation of Hyper-V Integration Services with PowerShell

Hello Folks,

Recently I came across a project where I needed to upgrade 6 Hyper-V hosts from 2012 to 2012 R2.

As I mentioned in the previous post, we must update the Hyper-V Integration Component Services (ICS) for all virtual machines after the Hyper-V host is upgraded, and always remember to keep them up to date in Windows guest operating systems.

The Integration Services for Hyper-V are actually part of the Windows Server and Windows Client OSs (since the time of Windows Server 2008 R2/Windows 7).

The challenge is that I want to update the Integration Services for 40 Virtual Machines. As you know you can find the Hyper-V Integration Services in the settings of each virtual machine.

What happens when you select the Insert Integration Services Setup Disk action for a VM? Behind the scenes, the C:\Windows\System32\vmguest.iso file is attached to the virtual machine, and either the setup.exe in the support\amd64 or the support\x86 folder is run, depending on the architecture of the guest OS in the VM.

Now, the logic behind deploying the Integration Services outside of the Hyper-V Manager console is to extract the contents of the vmguest.iso file. This can be done natively with Windows Server 2012, Windows 8 and above, or with PowerShell.

Next, we will share the extracted ISO image and then call setup.exe from within the virtual machine. The /quiet switch can be used with setup.exe to run the installation in unattended mode, but please note that the virtual machine will reboot after the installation is done. If you don’t want to restart the VM when the installation completes, add the /norestart switch; obviously, you will need to restart the virtual machine later in order for the Integration Services update to take effect.

Ok, so having this explained, let’s jump into the automation now.


# Author: Charbel Nemnom
# Email: charbel.nemnom[at]
# Date created: 13-June-2014
# Last modified: 16-June-2014
# Version: 1.0

# Get credentials
$cred = Get-Credential "Domain\AdminUser"

# List all of your Hyper-V hosts
$ServerNames = "HV01","HV02","HV03","HV04","HV05","HV06"

# Mount the Integration Services ISO image
$iso = "C:\Windows\System32\vmguest.iso"
$mount_params = @{ImagePath = $iso; PassThru = $true; ErrorAction = "Ignore"}
$mount = Mount-DiskImage @mount_params

# Extract the Integration Services ISO image
$volume = Get-DiskImage -ImagePath $mount.ImagePath | Get-Volume
$source = $volume.DriveLetter + ":\*"
$folder = New-Item -ItemType Directory -Path C:\ISO
$params = @{Path = $source; Destination = $folder; Recurse = $true}
Copy-Item @params

# Create the shared folder
New-SmbShare -Name ICS -Path "C:\ISO\support\amd64"

# Check all running VMs on each Hyper-V host with ICS lower than V6.3.9600.16384
foreach ($ServerName in $ServerNames)
{
    $VMNames = Get-VM -ComputerName $ServerName | ?{$_.IntegrationServicesVersion -lt "6.3.9600.16384" -and $_.State -eq "Running"} | Select VMName

    $VMNames > C:\VMNames.txt

    If ((Get-Content "C:\VMNames.txt") -eq $Null)

    { Write-Host "The Integration Services upgrade is NOT required for Virtual Machines" "on Host" $ServerName -ForegroundColor "Yellow" }

    Else

    {foreach ($VirtualMachine in $VMNames)

    {$VMName = $VirtualMachine.VMName

    Enable-WSManCredSSP -DelegateComputer $VMName -Role Client -Force

    Connect-WSMan $VMName -Credential $cred

    Set-Item WSMan:\$VMName*\Service\Auth\CredSSP -Value $true

    $scriptblock = {cmd.exe /C "\\ServerNameRunningTheScript\ICS\Setup.exe /quiet"}

    Invoke-Command -ScriptBlock $scriptblock -ComputerName $VMName -Authentication Credssp -Credential $cred

    Disconnect-WSMan $VMName

    Write-Host "The Integration Services have been updated for" $VMName "VM on" $ServerName "host, the VM is rebooting..." -ForegroundColor "Green"}

    }

    Remove-Item -Path C:\VMNames.txt -Force
}

# Remove the shared folder
Remove-SmbShare -Name ICS -Confirm:$false -Force

# Delete the extracted ISO folder
Remove-Item -Path C:\ISO -Recurse -Confirm:$false -Force

# Dismount the Integration Services ISO image
Dismount-DiskImage -ImagePath $iso

And here you go:

If none of the virtual machines require an update:

Two notes to mention:

1- You need to have Administrative privilege in order to run the script.

2- All Virtual Machines must be running x64 OS.

Please bear in mind I am not a PowerShell guru (just kidding), but nevertheless it has worked for me, and I feel it’s a much nicer approach than updating the Integration Services manually for each VM from the UI, so that’s that.

A couple of areas could definitely be improved, though: the script should check the guest OS architecture (x86 or x64) in each VM, and then update the Hyper-V Integration Services from the matching folder accordingly.

If you have more ideas and would like to add more options, please share them in the comments below:

Hope this was helpful for you.

Enjoy your day!


Posted in Hyper-V

The Integration Services Setup Disk image could not be updated: The process cannot access the file because it is being used by another process. (0x80070020).

Hello Folks,

Today I found my Hyper-V server event log filled with below Error:

The Integration Services Setup Disk image could not be updated: The process cannot access the file because it is being used by another process.


The error is clearly related to the Hyper-V Integration Component Services (ICS).

What is Hyper-V Integration Services?

As a quick overview, Hyper-V Integration Services is a suite of 6 components designed to enhance the performance of a virtual machine’s (child partition) guest operating system.

The Integration Services are installed as user mode components in the guest OS, and are implemented in the following services:

  • Hyper-V Heartbeat Service (vmicheartbeat)
  • Hyper-V Guest Shutdown Service (vmicshutdown)
  • Hyper-V Data Exchange Service (vmickvpexchange)
  • Hyper-V Time Synchronization Service (vmictimesync)
  • Hyper-V Remote Desktop Virtualization Service (vmicrdv)
  • Hyper-V Volume Shadow-Copy Requestor Service (vmicvss)

In Windows Server 2012 R2, a new integration service has been added, Guest services. Guest services enables the copying of files to a Virtual Machine using WMI APIs or using the new Copy-VMFile PowerShell cmdlet.
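As an illustration, the new cmdlet can be used from the host like this (a minimal sketch; the VM name and file paths are hypothetical, and the Guest services integration service must be enabled on the VM first):

```powershell
# Enable the new Guest services integration service (hypothetical VM name "VM01")
Enable-VMIntegrationService -VMName "VM01" -Name "Guest Service Interface"

# Copy a file from the Hyper-V host into the guest OS (hypothetical paths)
Copy-VMFile -VMName "VM01" -SourcePath "C:\Temp\Config.xml" `
    -DestinationPath "C:\Temp\Config.xml" -FileSource Host -CreateFullPath
```

Note that the copy is one-way only, from host to guest.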

For more information please refer to the following article Hyper-V Overview.

Ok, so having this explained, why did this error occur in the first place?

I remember that I upgraded my Hosts from Hyper-V 2012 to 2012 R2, and then I live migrated different VMs using Cross-Version Live Migration.

Then I upgraded the Integration Services, but I forgot to dismount the ISO image, which is located by default on the parent partition under the following path: C:\Windows\System32\vmguest.iso

As a rule of thumb, after the migration you should upgrade the Integration Services for all virtual machines. The latest IC version shipped with Windows Server 2012 R2 at the moment is V6.3.9600.16384, but I had many VMs running older operating systems as well.
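A quick way to see which VMs still run an older version is to query the IntegrationServicesVersion property that Get-VM exposes (a small sketch, run on the Hyper-V host):

```powershell
# List every VM with its Integration Services version; anything below
# 6.3.9600.16384 still needs the Windows Server 2012 R2 update
Get-VM | Select-Object Name, State, IntegrationServicesVersion |
    Sort-Object IntegrationServicesVersion
```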

Let’s check which VM still has the ISO image attached:

You can use Hyper-V Manager UI and check each VM separately, but it’s a long process…

Our friend PowerShell makes our life easier, so let’s check which VMs still have the ISO mounted:


As you can see, I still have many Virtual Machines with the ISO mounted.

Next, let’s sort only the VMs with their DvdMediaType equal to ISO:


Now it’s time to dismount the ISO…


The command will connect to the VM which has the ISO attached and eject the DVD/ISO from the VM.
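For reference, the whole check-and-eject sequence can be expressed in a couple of lines (a sketch run on the Hyper-V host; setting the DVD drive path to $null ejects the media):

```powershell
# Find every VM DVD drive that still has an ISO attached...
$drives = Get-VM | Get-VMDvdDrive | Where-Object { $_.DvdMediaType -eq "ISO" }

# ...and eject the media by clearing the drive path
$drives | Set-VMDvdDrive -Path $null
```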

And finally view the result again by typing the command Get-VMDvdDrive and confirm the result:


Let’s now check our Event Viewer… and here you go: the Integration Services Setup Disk image was successfully updated.


Note: According to Microsoft you should remove unused devices such as the CD/DVD-ROM and COM port, or disconnect their media for improving the performance of the guest OS.

Hope this helps if you encounter the same issue.

Enjoy your Weekend!


Posted in Hyper-V

A New Virtual Desktop could not be created. Verify that all Hyper-V Servers have the correct network configuration…

Hello Folks,

The new BYOD era is expanding every day in the number of devices, operating systems, and applications, along with the constant expectation that we should all be able to access vital information from anywhere, anytime.

But… how about accessing Remote Applications from other platforms? You can get the Microsoft Remote Desktop app for free on each platform below:

With Windows Server 2012 R2 Remote Desktop Sessions and Virtual Desktop Infrastructure, users can connect to their personal or pooled collections of virtual machines, RD sessions, and RemoteApp programs through a single sign-on web-based client or the Remote Desktop app client:


And with Server Manager you have a nice overview of all the RDS components in a single pane of glass…


More information about Remote Desktop Services Overview can be found here.

Well during the creation of my Pooled VDI collection, the Virtual Desktops failed with the following provisioning error:


As you can see the virtual desktop could not be created from the virtual desktop template.

First things first: you might think that the VM template the pooled VDI is created from could be misconfigured.

Note: You cannot use the new Hyper-V Generation 2 VMs as a VDI template even in Windows Server 2012 R2.

Let’s first check whether the VM template is created as Generation 1, and sure enough it is:

Next, if we continue reading the error message, you can see: [Verify that all the Hyper-V Servers in the deployment have the correct network configuration…]. As you know Remote Desktop Virtualization Host (RD Virtualization Host) integrates with Hyper-V role to deploy Pooled or Personal virtual desktop collections.

What must a good admin do in this kind of situation? Check the event log, of course.

So let’s have a look then:


As you can see, there is a Warning in the event log.
This event is saying that you have a MAC conflict: a port on the virtual switch has the same MAC address as one of the underlying team members on the team NIC (Microsoft Network Adapter Multiplexor Driver).

Let’s sort all the Virtual NICs, Physical NICs and MAC addresses using PowerShell.

PS C:\> Get-NetAdapter | Sort MacAddress


As you can see we have MAC address conflict!
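Instead of scanning the sorted list by eye, grouping the adapters by MAC address makes the conflict easy to spot (a small sketch):

```powershell
# Any group with more than one member is a MAC address conflict
Get-NetAdapter | Group-Object MacAddress |
    Where-Object { $_.Count -gt 1 } |
    ForEach-Object { $_.Group | Select-Object Name, MacAddress }
```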

One note to mention: this host is deployed and managed using Virtual Machine Manager 2012 R2, and we are using a converged network fabric.

Now, as per Microsoft, this conflict should not cause any issue as long as the team member that has the same MAC as the virtual NIC remains in the same team. However, if that team member is removed from the team and attempts to operate in standalone mode with the same MAC, then we will have a duplicate MAC address on the network, assuming the virtual NIC is also operational.

More information about How to Set the Static MAC Address Range for Virtual Network Devices in VMM.

I don’t like to see warnings or errors in the event log, so to avoid the MAC address conflict we can change the MAC of the team (Microsoft Network Adapter Multiplexor). The operation is straightforward: jump into the properties of the team interface, click the Configure button, select the Advanced tab, then MAC Address\Value:

Let’s now try to re-create the VDI Pool again and see how it goes…


Unfortunately, we still have the same error.

What could be the problem then?

The issue is around the network configuration for this VM Template, I remember that I assigned a static IP Address while preparing and updating the template.

I removed the static IP address, Sysprepped the image, and then tried to re-create the pool, but still no luck!

I jumped into my VMM server and reviewed the hardware configuration for this VM template, since it was deployed through VMM.

As you can see, the Virtual Machine template is connected to my Logical Network.

Hmmm… do you think the RD Virtualization Host is smart enough to understand that the VM template is connected to a Logical Switch instead of a Standard Switch?


Well, let’s change the virtual switch from Logical to Standard…


And here you go: the VDI pool collection is created successfully now!


Hope this helps.

Enjoy your day!


Posted in Remote Desktop Services, VDI

Create a Converged Network Fabric in VMM 2012 R2

Hello Folks,

In this post I will show and demonstrate how you can model your logical network in Virtual Machine Manager to support Converged Network fabric (NIC teaming, QoS and virtual network adapters).

Before we get started, what is converged network?

As we discussed in a previous post on how to isolate DPM traffic in Hyper-V, by leveraging a converged network on the Hyper-V host (combining multiple physical NICs with NIC teaming, QoS, and vNICs), we can isolate each type of network traffic while sustaining network resiliency if one NIC fails, as shown in the diagram below:

To use NIC teaming in a Hyper-V environment with QoS, you normally need PowerShell to separate the traffic; in VMM, however, we can deploy the same configuration through the UI.

More information about QoS Common PowerShell Configurations can be found here.
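For comparison, the raw PowerShell equivalent of what VMM will build for us later looks roughly like this (a sketch with hypothetical adapter, team, switch, and VLAN names; the weights only mirror the idea, not a recommendation):

```powershell
# Team two physical NICs (hypothetical names) with Dynamic / Switch Independent
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic

# Create a vSwitch on top of the team with weight-based minimum bandwidth
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team1" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add host vNICs for each traffic type and assign bandwidth weights
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 20

# Tag the Live Migration vNIC with its VLAN (hypothetical VLAN ID)
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "LiveMigration" `
    -Access -VlanId 20
```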

So without further ado, let’s jump right in.

The Hyper-V server in this demo has two 10Gb fiber NICs, which I want to team to leverage QoS and converged networking. The host is connected to the same physical fiber switch, configured with a static IP address on one of the NICs, joined to the domain, and managed by VMM.

Step 1

We must create logical networks in VMM.
What is a logical network, and how do you create one in Virtual Machine Manager 2012 R2? Click here.

We will create several logical networks in this demo for different purposes:

Logical Networks:

  • Management / VM: Contains the IP subnet used for host management and virtual machines. This network also has an associated IP pool so that VMM can manage IP assignment to hosts and to virtual machines connected to this network. In this demo the management and virtual machine traffic share the same network; in a production environment, however, both networks must be separated.
  • Live Migration: Contains the IP subnet and VLAN for Live Migration traffic. This network is non-routable as it only remains within the physical rack. This network does also have an associated IP Pool so that VMM can manage IP assignment to the hosts.
  • Backup: Contains the IP subnet and VLAN for Hyper-V Backup traffic. This network is non-routable as it only remains within the physical rack. This network does also have an associated IP Pool so that VMM can manage IP assignment to the hosts.
  • Hyper-V Replica: Contains the IP subnet and VLAN for Hyper-V Replica traffic. This network is non-routable as it only remains within the physical rack. This network does also have an associated IP Pool so that VMM can manage IP assignment to the hosts.
  • Cluster: Contains the IP subnet and VLAN for Cluster communication. This network is non-routable as it only remains within the physical rack. This network does also have an associated IP Pool so that VMM can manage IP assignment to the hosts.

Step 2

Creating IP pools for host Management, Live Migration, Backup, Hyper-V Replica and Cluster network.
We will create IP pools for each logical network site so that VMM can assign the right IP configuration to the virtual NIC within this network. This is an awesome feature in VMM, as we don’t have to perform this manually or rely on a DHCP server. We can also exclude IP addresses from the pool that have already been assigned to other resources.
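If you prefer to script this step, the VMM PowerShell module can create the pool as well. A rough sketch, with hypothetical logical network names, subnet, and ranges (New-SCStaticIPAddressPool takes the logical network definition the pool belongs to):

```powershell
# Hypothetical logical network name "Management"
$logicalNet = Get-SCLogicalNetwork -Name "Management"
$netDef = Get-SCLogicalNetworkDefinition -LogicalNetwork $logicalNet

# Create the static IP pool on that network site (hypothetical addressing)
New-SCStaticIPAddressPool -Name "Management-Pool" `
    -LogicalNetworkDefinition $netDef -Subnet "10.0.10.0/24" `
    -IPAddressRangeStart "10.0.10.50" -IPAddressRangeEnd "10.0.10.200" `
    -DNSServer "10.0.10.10"
```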


Step 3

The VM Networks step is not necessary if you selected ‘Create a VM network with the same name to allow virtual machines to access this logical network directly’ while creating the logical networks above. If you did not, please continue to create VM networks with 1:1 mapping with the Logical Networks in the Fabric workspace.


Step 4

Creating Uplink Port Profiles, Virtual Port Profiles and Port Classifications.

Virtual Machine Manager does not ship with a default uplink port profile, so we must create our own.
In this demo we will create a single uplink port profile for our production Hyper-V host.

Production Uplink Port Profile:

Right click on Port Profiles in fabric workspace and create a new Hyper-V port profile.

As you can see in the screenshot below, Windows Server 2012 R2 supports three different load balancing algorithms:

Address Hash, Hyper-V Port, and Dynamic.

For Teaming mode, we have Static teaming, Switch independent and LACP.

More information about NIC Teaming can be found here.

Assign a name and description, make sure that ‘Uplink port profile’ is selected, then specify the load balancing algorithm together with the teaming mode. As a best practice for Hyper-V workloads, we will select Dynamic and Switch Independent. Click Next.
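The same uplink port profile can also be created from the VMM PowerShell module, roughly like this (a sketch; the profile and site names are hypothetical, and the exact parameter values depend on your VMM version):

```powershell
# Hypothetical: gather the logical network sites this uplink should carry
$sites = Get-SCLogicalNetworkDefinition | Where-Object { $_.Name -match "Production" }

# Create the uplink port profile with Dynamic / Switch Independent teaming
New-SCNativeUplinkPortProfile -Name "Production Uplink" `
    -LBFOLoadBalancingAlgorithm "Dynamic" -LBFOTeamMode "SwitchIndependent" `
    -LogicalNetworkDefinition $sites
```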


Select the network sites supported by this uplink port profile. VMM will tell the Hyper-V hosts that they are connected and mapped to the following logical networks and sites in our fabric: Backup (for Hyper-V backup traffic), Live Migration (for live migration traffic), Management/VM (for management and virtual machine communication), Replica (for Hyper-V replica traffic), Cluster (for Hyper-V cluster communication).



Virtual Port Profile:

If you navigate to Port Profiles under the networking tab in fabric workspace, you will see several port profiles already shipped with VMM. You can take advantage of these and use the existing profiles for Host management, Cluster and Live Migration.

For the purpose of this demo, I will create 2 additional virtual port profiles for Hyper-V Backup and Hyper-V Replica as well.

Note: Make sure you don’t exceed a total weight of 100 in the bandwidth settings across all virtual port profiles that you intend to apply on the Hyper-V host.

Here are the different bandwidth settings for each profile that I used:

Host Management: 10
Hyper-V Replica: 10
Hyper-V Cluster: 10
Hyper-V Backup: 10
Live Migration: 20

If you sum it up, we already have a total weight of 60, and the rest can be assigned to the virtual machines or other resources.

Right click on Port Profiles in the fabric workspace and create two new Hyper-V port profiles, one for backup and another for replica. Assign a name and description; ‘Virtual network adapter profile’ will be selected by default. Click Next.



Click on Offload settings to see the settings for the virtual network adapter profile.


Click on Bandwidth settings and adjust the QoS as we described above.


Repeat the same steps for each Virtual network adapter profile.


Port Classifications:

Creating a port classification.

We must also create port classifications that we can associate with each virtual port profile. When you configure virtual network adapters on a team on a Hyper-V host, you can map each network adapter to a classification, which ensures that the configuration within the corresponding virtual port profile is applied.

Please note that this is a description (label) only and does not contain any configuration. This is very useful in a hosting-provider deployment where the tenant/customer sees a label such as ‘High bandwidth’ for their virtual machines, while you could effectively limit the customer with a port profile that has a very low bandwidth weight, so they think they have very high-speed VMs. Sneaky, yes (don’t do this).

Navigate to fabric, expand networking and right click on Port Classification to create a new port classification.
Assign a name, description and click OK.



Step 5

Creating Logical Switches:

A logical switch is the last piece of networking fabric in VMM before we apply the configuration to our Hyper-V host; it is basically a container for uplink profiles and virtual port profiles.

Right click on Logical Switches in the Fabric workspace and create a new logical switch.

Assign the logical switch a name and a description. Leave out the option for ‘Enable single root I/O virtualization (SR-IOV)’, this is beyond our scope in this demo.


We are using the default Microsoft Windows Filtering Platform. Click Next.


Specify the uplink port profiles that are part of this logical switch, sure enough we will enable uplink mode to be ‘Team’, and add our Production Uplink. Click Next.


Specify the port classifications for each virtual port that is part of this logical switch. Click ‘Add’ to configure the virtual ports. Browse to the right classification and include a virtual network adapter port profile to associate with the classification. Repeat this process for all virtual ports so that you have added classifications and profiles for management, backup, cluster, live migration, and replica; low, medium, and high bandwidth are used for the virtual machines. Click Next.


Review the settings and click Finish.


Step 6

Last but not least, we need to apply the networking template on the Hyper-V host.

Until this point, we have created the different networking fabric objects in VMM and ended up with a logical switch template that contains all the networking configuration we intend to apply on the host.

Navigate to the host group in fabric workspace that contains your production Hyper-V hosts.
Right click on the host and click ‘Properties’.
Navigate to Virtual switches.
Click ‘New Virtual Switch’ and ‘New Logical Switch’. Make sure that Production Converged vSwitch is selected and add the physical adapters that should participate in this configuration. Make sure that ‘Production Uplink’ is associated with the adapters.


Click on ‘New Virtual Network Adapter’ to add virtual adapters to the configuration.
I will add five virtual network adapters in total: one for host management, one for live migration, one for backup, one for replica, and one for cluster. Please note that the virtual adapter used for host management will have the setting ‘This virtual network adapter inherits settings from the physical management adapter’ enabled. This means that the physical NIC on the host configured for management will transfer its configuration to a virtual adapter created on the team. This is very important: if you don’t select it, you will lose network connectivity to the host and cannot access it anymore unless you have an iLO or iDRAC management interface on the physical host.


Repeat the process for Live Migration and Cluster virtual adapters, etc… and ensure they are connected to the right VM networks with the right VLAN, IP Pool and port profile classification.


Once you are done, click OK. VMM will now communicate with its agent on the host, and configure NIC teaming with the right configurations and corresponding virtual network adapters.


The last step is to validate the network configuration on the host.


Until next time… Enjoy your weekend!


Posted in Networking, System Center, Virtual Machine Manager

Migrate VMware VMs to Hyper-V using MVMC 2.0

MVMC 2.0 has now been released. MVMC originally shipped a few years back as a tool that helps folks convert VMware workloads to Hyper-V; the new capability in version 2.0 is the ability to convert a VMware virtual machine to Hyper-V and also to Microsoft Azure, if you see fit.

The differences between version 1.0 and 2.0, aside from the conversion to Azure, include a native PowerShell API, which previously wasn’t available to help automate key tasks. The other key changes with this release are support for the latest VMware version 5.5 (either managed by vCenter or a standalone ESXi host) and the latest guest operating systems.
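As a taste of that PowerShell API, a single VMware disk can be converted from the command line (a sketch; the paths are hypothetical, and the cmdlet module ships with MVMC 2.0 under its default installation folder):

```powershell
# Load the MVMC cmdlets (default install path)
Import-Module "C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1"

# Convert a VMware .vmdk (hypothetical paths) to a dynamic VHDX
ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath "D:\VMware\Web01.vmdk" `
    -DestinationLiteralPath "D:\HyperV\Web01.vhdx" `
    -VhdType DynamicHardDisk -VhdFormat Vhdx
```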

Version 1.0 of MVMC didn’t support any Linux guests; version 2.0, however, does bring support for Linux distributions such as Red Hat, Ubuntu, SUSE Linux, CentOS, Debian, and more.

At this point in time, MVMC can only convert to Generation 1 virtual machines. If you need to convert to Generation 2, you have to go through two steps: first convert to Hyper-V, and then run the Hyper-V Generation 2 VM conversion utility from here.

Now, during the conversion process you can convert a VM either running or offline. Either way, the key part is that while the conversion takes place, the VMware Tools are removed before the VM is brought across to Hyper-V.

Notes: Take into consideration the original size of the virtual machine running on VMware: between the original .vmdk disk file and the converted .vhd or .vhdx, you effectively need double the space on your conversion machine. Also, if your VMs in VMware have static IP addresses assigned, MVMC does not bring the static IP addresses across to Hyper-V.

So now, let’s see VMware into Hyper-V conversion:


Select the migration destination: Azure or Hyper-V…


Select your Hyper-V Host.


Select the path where you want to store the converted disk .vhd or .vhdx


Select your vCenter, or ESXi host.


Choose the desired Virtual Machine to convert…


Provide a user name with enough privileges for MVMC to uninstall VMware Tools, then choose the state of the source and destination virtual machines (On/Off).


Select the temporary conversion workspace on the local MVMC machine.


Review the details…


And here you go… grab a cup of coffee and you are done.


You can download MVMC 2.0 free from here.



Posted in Hyper-V, MVMC
