What’s New In Windows Server 2016 Hyper-V? #HyperV #WS2016

Picture left to right: Fellow MVPs, Clint Wyckoff, Dave Kawula, Mr. Hyper-V (Ben Armstrong, Principal Program Manager Lead), Didier Van Hoye, Charbel Nemnom and Andy Syrewicze.

/- Updated September 27, 2016 -/

Live from Microsoft Ignite 2016, Atlanta – Georgia.

In October 2014, Microsoft released the first Windows Server Technical Preview. Technical Preview 2 followed on May 5th, 2015 (by which point the final official name, Windows Server 2016, had been confirmed), Technical Preview 3 on August 19th, Technical Preview 4 on November 19th, and Technical Preview 5 on April 27th, 2016. At Microsoft Ignite 2016, the Windows Server team announced the release of Windows Server 2016 and System Center 2016 to the public. This is the evaluation version of Windows Server 2016. In mid-October 2016, Windows Server 2016 will be generally available (GA). GA is the RTM version of Windows Server 2016, including additional updates that are expected to be available at GA.

This post is a detailed overview of all Windows Server 2016 Hyper-V features that will be available at GA!

Fasten your seat belt and let’s dive into the new Hyper-V features in Windows Server 2016.

Windows 10

Windows 10 includes three big new features for Hyper-V: support for nested virtualization, the ability to create backwards compatible virtual machines directly on Windows 10, and support for a virtual Trusted Platform Module (vTPM).

On Windows 10, and similarly on Windows Server 2016, you can create backwards compatible virtual machines that can be imported into Windows Server 2012 R2 or Windows 8.1. So if you still have an earlier version of Hyper-V running in your environment, you can move VMs between the two environments easily. To do that, you need to use the following PowerShell command:
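As a sketch (the VM name is a placeholder), creating a virtual machine pinned to configuration version 5.0 so that Windows Server 2012 R2 can import it looks like this:

```powershell
# List the configuration versions this host can create
Get-VMHostSupportedVersion

# Create a VM at configuration version 5.0 (Windows Server 2012 R2 / Windows 8.1 compatible)
New-VM -Name "CompatVM" -MemoryStartupBytes 1GB -Version 5.0
```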


If you do not specify a version, you will get the highest configuration version that your Hyper-V host supports. As of this writing, the current default version is 8.0.

To use a virtual TPM in a VM, you’ll first need to enable Isolated User Mode on your computer. To do that, go to Control Panel, Programs and Features, Turn Windows features on or off. Check Isolated User Mode, click OK, and then restart when prompted.

The virtual TPM is included as part of Generation 2 virtual machines. Because the virtual TPM isn’t emulated in software, you need a physical TPM installed in the computer; if your computer doesn’t have a TPM, or if the TPM is disabled in the BIOS, the TPM feature will be missing from your VM settings under Security. However, in Windows Server 2016 Hyper-V, you can use TPM-based or Active Directory-based attestation, which is covered in the Shielded Virtual Machines section.


Microsoft introduced Credential Guard in Windows 10 Enterprise Edition to make it the most secure desktop operating system you can use. Credential Guard uses virtualization-based security to isolate secrets so that only privileged system software can access them. Unauthorized access to these secrets can lead to credential theft attacks, such as Pass-the-Hash or Pass-the-Ticket.

To do that, you need to meet all the requirements discussed in the previous section, and additionally enable Virtualization Based Security in Group Policy. Open the local group policy editor by running gpedit.msc. In the Group Policy Editor, go to Computer Configuration, Administrative Templates, System, Device Guard. Double-click Turn On Virtualization Based Security, set the policy to Enabled, click OK, and then reboot again.


You can read all about this technology here

Many of the features in Windows Server 2016 Hyper-V listed below will be available in Windows 10 as well.

Hyper-V on Nano Server

Nano Server is a new Windows Server installation option: a cloud-first refactoring of the operating system, essential for infrastructure and application OS requirements. The server roles and features currently targeted for Nano Server are Hyper-V, clustering, networking, and storage, which are the three key Infrastructure as a Service (IaaS) scenarios.

Windows Server and Hyper-V Containers

Windows Server Containers are a new approach to build, ship, deploy, and instantiate applications. Hyper-V Containers will be coming out very soon.

Virtual Machine Protection

Let’s be honest: trust is the biggest blocker to cloud computing adoption. All the awesome technology that Microsoft is building in Azure is enabled through Hyper-V and the cloud platform, yet the number one thing holding customers back is the issue of trust. You want to know that your data is safe, that no one else is accessing it, and that it hasn’t been fiddled with or compromised. Customers want to know that their data is secure, whatever happens, and that they are the only ones who have access to it!

So in Windows Server 2016, Microsoft is doing a lot of work in the Hyper-V core platform to start providing these guarantees. Whether or not you trust your IT administrator, no one else can access your data!

What is the technology behind Virtual Machine Protection? A virtual TPM (Trusted Platform Module) can be injected into a VM. You can then enable BitLocker in the VM and protect your data from anyone outside of the VM. So you can now have a virtual machine running on someone else’s Hyper-V server or on someone else’s infrastructure and know that you are the only one who has access to that data. On the hardware side, you need TPM version 2.0 installed on the physical host. This is a really exciting technology!

Linux Secure Boot

Secure Boot support for Linux virtual machines now works with Ubuntu 14.04 or later and SUSE Linux Enterprise Server 12. Secure Boot is an amazing feature that is underrated in the world today. If you look at security issues at large, one of the biggest challenges out there is malware and rootkits. If you get compromised kernel-mode code on a system, there is pretty much nothing you can do to bring that system back; at that stage your only option is to format and reinstall the system.

What Secure Boot does is allow the hardware to verify that the kernel-mode code is uncompromised. In Windows Server 2012 R2, Microsoft introduced Generation 2 virtual machines that support Secure Boot for Windows guest operating systems, so you could have this functionality and know that your workloads were secure. Now we have Secure Boot for Linux virtual machines as well.

In order to enable Secure Boot for Linux VMs, you need to change the Secure Boot template for the virtual machine. You can do this with the following PowerShell command:
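For example, for an Ubuntu or SLES guest, the command looks like this (the VM name is a placeholder):

```powershell
# Switch the Gen2 VM's firmware to the UEFI CA template used for Linux Secure Boot
Set-VMFirmware -VMName "LinuxVM" -SecureBootTemplate MicrosoftUEFICertificateAuthority
```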

Shielded Virtual Machines

Shielded Virtual Machines can only run in fabrics that are designated as owners of that virtual machine. Shielded Virtual Machines need to be encrypted (by BitLocker or other means) to ensure that only the designated owners can run them. You can convert a running virtual machine into a Shielded Virtual Machine.

There are two attestation models for Hyper-V 2016: Hardware Trusted Attestation and Admin-Trusted Attestation. The former is the most secure model and relies on new hardware, such as Trusted Platform Module (TPM v2.0) chips in the hosts, whereas the latter simply requires that hosts are placed in a particular group in Active Directory. Admin-Trusted Attestation is appropriate when you trust your AD and fabric administrators.

If a local user tries to access the VM’s console, the user will receive the following error message:


Key Storage Drive for Gen 1 VMs

In Windows Server 2016 Hyper-V, Microsoft introduced a new feature called Key Storage Drive (KSD) for Generation 1 virtual machines only. KSD requires a special IDE device to be attached to each Gen1 VM that you want to protect. The default drive size is 42 MB, and you can format it with different file systems (e.g. exFAT, FAT32, or NTFS). The content of the key storage drive is first compressed, then encrypted, and then stored as part of the virtual machine’s runtime state.

Key Storage Drive enables BitLocker encryption for Generation 1 VMs (Windows and Linux) by leveraging the same core guarded fabric infrastructure. Virtual machines with a KSD are encrypted similarly to Shielded VMs; however, Gen1 VMs with a key storage drive don’t have the same security assurances as Gen2 shielded VMs (no Secure Boot, no measurement of the VM). Microsoft is simply allowing you to use BitLocker to encrypt the disk within the guest.
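As a sketch (the VM name is a placeholder, and the VM must be powered off), attaching a key storage drive to a Gen1 VM is a single cmdlet:

```powershell
# Attach a key storage drive to a Generation 1 VM
Add-VMKeyStorageDrive -VMName "Gen1VM"
```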

For more information on how to enable Key Storage Drive, please refer to the following step by step guide.

Guest Virtual Secure Mode

As discussed at the beginning of this post, Credentials Guard and Device Guard delivers unparalleled levels of OS security for the host. Now you can benefit from these technologies which are enabled by Hyper-V virtualization and are available inside virtual machines as well.

Distributed Storage QoS

Leveraging Scale-Out File Server to allow you to:

Define IOPS reserves for important virtual hard disks

Define an IOPS reserve and limit that is shared by a group of virtual machines / virtual hard disks
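A minimal sketch of how these reserves and limits are defined with the Storage QoS cmdlets (the policy name and IOPS values are hypothetical):

```powershell
# On the Scale-Out File Server cluster: create a policy with a reserve (minimum) and limit (maximum)
$policy = New-StorageQosPolicy -Name "Gold" -MinimumIops 500 -MaximumIops 5000

# On the Hyper-V host: apply the policy to a VM's virtual hard disks
Get-VM -Name "MyVM" | Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```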

You can find the complete details here

Host Resource Protection

Dynamically identify virtual machines that are not “playing well” and reduce their resource allocation. What Microsoft has actually done is build in heuristics that identify patterns of access that should never happen if someone is just running standard Windows or Linux applications. Host Resource Protection can dynamically detect a malicious VM and throttle back the resources of that evil virtual machine to reduce its impact on the system until its behavior returns to normal.

This is something that you can enable or disable for each VM on the Hyper-V server; it is disabled by default on Windows Server 2016. You can enable it by running the following PowerShell command:
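A sketch, assuming a VM named "MyVM":

```powershell
# Turn on Host Resource Protection for this VM (off by default)
Set-VMProcessor -VMName "MyVM" -EnableHostResourceProtection $true
```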

Storage and Cluster Resiliency

There are two features in Windows Server 2016 to handle transitory failures and be more resilient:

Virtual Machine Storage Resiliency
Storage fabric outage no longer means that virtual machines crash.
Virtual machines pause and resume automatically in response to storage fabric problems.

Virtual Machine Cluster Resiliency
VMs continue to run even when a node falls out of cluster membership
Resiliency to transient failures
Repeat offenders are “quarantined”

In Windows Server 2012 R2, we had a feature in the Scale-Out File Server called resilient file handles. Resilient file handles give protection against really short storage outages: if you have a glitch in the system that causes storage to go out for 30 or 40 seconds, resilient file handles have you covered. The activity is buffered up while waiting for the storage to come back, and then reconnects. The problem is if the storage outage goes over 60 seconds. In Windows Server 2012 R2, if a storage outage exceeds 60 seconds, resilient file handles wait up to 60 seconds and then just return an I/O failure back to the guest. The result is that the VM simply crashes!

So with the Storage Resiliency feature, when the resilient file handle expires and we get that storage failure, rather than returning the I/O failure to the guest, the virtual machine is automatically paused and put in a special state to indicate that there is a disk failure; the VM is suspended to make sure it doesn’t crash. The VM will report “Disk(s) encountered critical IO errors”.

On the other hand, what does clustering do when you have a transitory failure in your network? In Windows Server 2012 R2, when the Hyper-V cluster sees these transitory failures, the moment clustering can’t talk to a node it assumes all the virtual machines on it are dead and starts failing them over to other nodes in the cluster. In Windows Server 2016, Microsoft has changed all the behavior around that. Now, if you have a transient network failure, the cluster node is put into an isolated state. The virtual machines report as unmonitored, and the cluster waits 4 minutes by default to see if the node comes back naturally (this value is configurable, so you can tune it to match your environment). If the node comes back in under four minutes, it joins back to the cluster and heals automatically. But what if a cluster node is having intermittent problems (the network is flapping, and the node is going in and out of the cluster continuously)? If that happens, the cluster will detect that the node went into the isolated state too many times and mark it as quarantined; when it comes back, all virtual machines are live migrated off of it, and no workload is placed on that cluster node until it looks healthy again.

Shared VHDX

Shared VHDX is a great feature that was introduced in Windows Server 2012 R2; it makes it very easy to set up guest clustering. However, when you create a guest cluster with shared VHDX, you pretty much get all the limitations of any other disk cluster, with the exception that it’s just using VHDX. With guest clusters you cannot do host-level backup as you do with physical clusters, you cannot do online resize of the shared VHDX, you cannot do storage live migration for the shared VHDX, and you cannot protect and replicate the VMs with Hyper-V Replica. All these great features weren’t there, but in Windows Server 2016 Microsoft has worked hard to start bringing those capabilities back even when you are using shared VHDX.

There are three big things introduced in WS2016. The first is host-based backup without any agent in the guest, so you can back up guest clusters with shared VHDX; this is fantastic! The second is online resize of shared VHDX while the guest cluster is running. Note: you can only expand a shared VHDX drive, you cannot shrink it. The third is integration with Hyper-V Replica, so guest clusters can now have shared VHDX protected by Hyper-V Replica for disaster recovery. For this to work, it requires Windows Server 2016 as the Hyper-V host and Windows Server 2016 as the guest OS. As of this writing, the two limitations are that you cannot do storage live migration for a VM with a shared VHD Set, and you cannot create a checkpoint.

Hot add of shared VHDX was supported in Windows Server 2012 R2; however, in Windows Server 2016 a Shared Drive has its own category under the SCSI Controller when you go to add a shared virtual hard disk.


There is a new type of VHD file that Microsoft introduced called VHD Set (VHDS), and it is necessary for some of the new shared VHDX functionality. The shared VHDX file format still exists, so if you have existing guest clusters using VHDX files, you can continue to use those VHDX files for guest clusters; however, you will not be able to do the online resize or the host-based backup. The good news is that Microsoft will provide tools to do a very quick and easy upgrade from a VHDX to a VHDS file so you can take advantage of that.
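Creating a VHD Set is largely a matter of using the .vhds extension. A sketch (the paths, sizes, and VM name are placeholders):

```powershell
# Create a new VHD Set file on shared storage
New-VHD -Path "C:\ClusterStorage\Volume1\Shared.vhds" -SizeBytes 40GB -Dynamic

# Attach it to a guest cluster node as a shared drive
Add-VMHardDiskDrive -VMName "GuestNode1" -Path "C:\ClusterStorage\Volume1\Shared.vhds" -SupportPersistentReservations
```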


Hyper-V Replica Hot Add/Remove

We love Hyper-V Replica, but the downside was that if you wanted to hot add or remove disks on a replicated virtual machine, things got strange: you had to redo the initial sync and send all the data over again, which was a real pain.

So the medicine is built into Windows Server 2016 to take away that pain.

When you add a new virtual hard disk to a virtual machine that is being replicated, it is automatically added to the not-replicated set. This set can be updated online with the following PowerShell command:
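A sketch of updating the replicated disk set online (the VM name is a placeholder; this passes the full current set of disks to replication):

```powershell
# Include the newly added disk in replication for an already-replicating VM
Set-VMReplication -VMName "MyVM" -ReplicatedDisks (Get-VMHardDiskDrive -VMName "MyVM")
```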

You can find the complete details here

Runtime Resize of Memory

Dynamic Memory is great, but more can be done, right? In Windows Server 2016 and Windows 10 guests, you can now increase and decrease the memory assigned to virtual machines while they are running. Please note that the memory cannot be decreased lower than the current demand, or increased higher than the physical system memory.
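A sketch, assuming a running VM named "MyVM" configured with static (non-dynamic) memory:

```powershell
# Grow the running VM to 2 GB, then shrink it back to 1 GB
Set-VM -Name "MyVM" -MemoryStartupBytes 2GB
Set-VM -Name "MyVM" -MemoryStartupBytes 1GB
```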

You can find the complete details here

Hot Add/Remove Network Adapters

This is only supported for Generation 2 VMs.
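A sketch of hot adding and then removing a network adapter on a running Generation 2 VM (the VM, switch, and adapter names are placeholders):

```powershell
Add-VMNetworkAdapter -VMName "MyVM" -SwitchName "External" -Name "HotAddedNic"
Remove-VMNetworkAdapter -VMName "MyVM" -Name "HotAddedNic"
```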

You can find the complete details here

Rolling Cluster Upgrade

Seamless, zero-downtime rolling cluster upgrades: you can take a cluster of any size running 2012 R2 and upgrade it to Windows Server 2016 one node at a time, live migrating VMs around, with no new hardware and no downtime. At any stage you can roll back, because live migration is supported in both directions; you can even upgrade a cluster entirely to Windows Server 2016 and then roll it back to 2012 R2, with your virtual machines running the whole time, and most importantly, without upgrading the Cluster Functional Level or the VM configuration version. The process for a cluster of any size: pick a node, evict that node from the cluster, format it and reinstall Windows Server 2016, join it back to the cluster, then rinse and repeat.


New VM Upgrade Process

In previous versions of Hyper-V, whenever you upgraded your host to a new release, the moment Hyper-V saw your virtual machines, they were upgraded automatically behind the scenes.

However, this has changed in Windows Server 2016: Hyper-V will not automatically upgrade your virtual machines. The upgrade of a virtual machine is a manual operation that is separate from upgrading the host. This gives you the flexibility to move individual virtual machines back to earlier versions until they have been manually upgraded; the upgrade itself is the point of no return.

Version 5.0 is the configuration version of Windows Server 2012 R2, and version 2.1a was the version for Windows Server 2008 R2 SP1. The configuration version was always there for internal usage, based on functionality rather than release, and it was not displayed to users. In Windows Server 2016 the version is 8.0.

The process to upgrade a virtual machine’s configuration version requires shutting down the VM and doing a manual upgrade. This is a one-way process. You can do it either through PowerShell or through the UI; in the UI you will see the Upgrade Configuration Version option.


To upgrade the VM Configuration File through PowerShell, you need to run the following cmdlet from an elevated Windows PowerShell:
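A sketch (the VM name is a placeholder; the VM must be shut down first):

```powershell
Update-VMVersion -Name "MyVM"

# Verify the new configuration version
Get-VM -Name "MyVM" | Select-Object Name, Version
```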

So why would you upgrade, and why would you keep the same version? Obviously, you need to upgrade to get all the new features and functionality; you are only going to get those new features if you are running an 8.0 virtual machine. On the flip side, if you keep a virtual machine at 5.0, everything stays compatible with Windows Server 2012 R2: the configuration stays compatible, the saved state stays compatible, and the checkpoints stay compatible. So even if you are not doing a rolling cluster upgrade, if you just have an environment with some 2012 R2 servers and some Windows Server 2016 servers and you want to move virtual machines between them from time to time, just keep them at version 5.0 and you can always freely move those virtual machines back and forth.

Production Checkpoints

Production checkpoints deliver the same checkpoint experience that you had in Windows Server 2012 R2, but are now fully supported for production environments. VSS is used instead of saved state to create the checkpoint. Restoring a checkpoint is just like restoring a system backup.


You can find the complete details here

PowerShell Direct

You can now script PowerShell in the guest OS directly from the host OS. This is just awesome!

There is no need to configure PowerShell Remoting or even have network connectivity, but you still need guest credentials. You can completely configure a guest OS directly from the host with no network access.

To connect and inject PowerShell scripts into the guest OS without network access, you need to run the following cmdlet from an elevated Windows PowerShell on the host:
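A sketch of both the interactive and scripted forms (the VM name is a placeholder; you will be prompted for guest credentials):

```powershell
# Interactive session into the guest, no network required
Enter-PSSession -VMName "MyVM"

# Or run a script block against the guest non-interactively
Invoke-Command -VMName "MyVM" -Credential (Get-Credential) -ScriptBlock { Get-Service }
```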

Starting with Windows 10 and Windows Server 2016 Build#14280 or later… The Hyper-V team added a new functionality into PowerShell Direct. You can now move data between the virtual machine and the Hyper-V host with persistent PowerShell Direct sessions! 
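A sketch of moving a file into a guest over a persistent PowerShell Direct session (the VM name and paths are placeholders):

```powershell
# Open a persistent session to the guest
$session = New-PSSession -VMName "MyVM" -Credential (Get-Credential)

# Copy a file from the host into the guest, then clean up
Copy-Item -ToSession $session -Path "C:\Host\app.zip" -Destination "C:\Guest\"
Remove-PSSession $session
```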

The Hyper-V team also brought Just Enough Administration (JEA) support to PowerShell Direct. The reason behind this additional capability is to bring PowerShell Direct into cloud environments. Traditionally, to use PowerShell Direct you need valid credentials on both the host and the virtual machine, so a hoster or service provider cannot use PowerShell Direct to connect to a tenant virtual machine they don’t have access to. In Windows Server 2016, Microsoft enabled a new capability where a tenant can choose to expose PowerShell Direct to their hoster using the Just Enough Administration (JEA) platform.

ReFS Accelerated VHDX Operations

Taking advantage of an intelligent Resilient File System (ReFS) for instant fixed VHDX disk creation and instant disk merge operations.

So in Windows Server 2016, if you create a ReFS volume and use it to run your virtual machines on, you get instant fixed disk creation and instant disk merge on your virtual hard drives. This is amazingly powerful stuff!

Hyper-V Manager and PowerShell Improvements

Multiple improvements to make it easier to remotely manage and troubleshoot Hyper-V Servers:

Support for alternate credentials
Connecting via IP address
Connecting via WinRM


Cross-Version Management

You can manage Windows Server 2012 and Windows Server 2012 R2 hosts with the Hyper-V Manager UI in Windows Server 2016 (a single console). The Hyper-V PowerShell module in Windows 10 / Server 2016 ships in-box with both version 1.1 (for managing 2012 R2 hosts) and version 2.0.

NAT Network

In Windows Server 2016 Technical Preview 5 and the latest Windows 10 build #14295 or later, Microsoft introduced a new feature called “NAT” network which can be integrated into the Hyper-V virtual switch in order to provide external/internet network access for virtual machines. For more information on how to enable this feature, please refer to the following step by step guide.

VM Groups

There are two different types of VM Groups:

  • VM Collection Groups
  • Management Collection Groups

A VM collection group is a logical collection of virtual machines. This type of group makes it possible to carry out tasks on specific VM groups, rather than having to carry them out on each individual VM separately.
A management collection group is a logical collection of VM collection groups. With this type of group you can also nest collections: a management collection group can live inside another management collection group. The main difference between the two types is that management groups can contain both VM groups and other management groups.

The two main scenarios for which Microsoft developed VM groups are backups and VM replication. In some situations, because of distributed applications, virtual machines should be treated as a single unit instead of being managed individually. This is true in both backup and VM replica situations where you want to back up or replicate a multi-tier application.

The following new PowerShell cmdlets have been added into Hyper-V module to facilitate VM Groups scripting:
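A sketch of the core group cmdlets (the group and VM names below are hypothetical):

```powershell
# Create a VM collection group and manage its membership
New-VMGroup -Name "SQLTier" -GroupType VMCollectionType
Add-VMGroupMember -Name "SQLTier" -VM (Get-VM -Name "SQL01")
Get-VMGroup -Name "SQLTier"
Remove-VMGroupMember -Name "SQLTier" -VM (Get-VM -Name "SQL01")
Remove-VMGroup -Name "SQLTier"
```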

For more information on how to use this feature, please refer to the following step by step guide.

VM Start Ordering

VM Start Ordering is a new feature introduced in Windows Server 2016 Failover Clusters which gives you more control over which clustered virtual machines are started or restarted first. This makes it much easier to start virtual machines that provide services before the virtual machines that use those services. An example of a complex start order is a multi-tier application with all its interdependencies on a Domain Controller, SQL Server, IIS server, application server, load balancer, etc. You need to define sets, place virtual machines in sets, and then specify dependencies. To do so, you use the following new PowerShell cmdlets to manage the VM sets:
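A sketch of the set and dependency cmdlets (the set and cluster group names are hypothetical):

```powershell
# Create a set for the domain controller tier and one for the SQL tier
New-ClusterGroupSet -Name "DCSet"
New-ClusterGroupSet -Name "SQLSet"
Add-ClusterGroupToSet -Name "DCSet" -Group "DC-VM"
Add-ClusterGroupToSet -Name "SQLSet" -Group "SQL-VM"

# SQL starts only after the DC set is up
Add-ClusterGroupSetDependency -Name "SQLSet" -Provider "DCSet"
```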

For more information on how to use this feature, please refer to the following step by step guide.

Integration Services

In Windows 8.1 and Windows Server 2012 R2, the VM drivers (integration services) were updated with each host release. This required the VM driver version to match the host, with the drivers shipped as part of the host operating system.

In Windows Server 2016, however, the Integration Services setup disk is gone entirely.

As deployments get bigger, it becomes increasingly difficult to update the integration components by logging into each virtual machine, mounting a DVD, opening the DVD, installing the integration services, and then checking whether the integration services version on the host matches the guest. The entire process had become a hassle.



The days of matching the integration services version to the host version are over. In Windows Server 2016, Windows 10, and going forward, the integration services are delivered through Windows Update. They are classified as a critical update and apply to virtual machines running on the correct version of Hyper-V, so as long as you are getting updates inside your virtual machine, they are handled for you. You can install them using WSUS or through normal Windows Update to keep your virtual machines up to date.

Evolving Hyper-V Backup

New architecture to improve reliability, scale and performance.

Decoupling backing up virtual machines from backing up the underlying storage. No longer dependent on hardware snapshots for core backup functionality, but still able to take advantage of hardware capabilities when they are present.

Microsoft is building change tracking into the platform, called Resilient Change Tracking (RCT). This means it is no longer necessary for backup partners to develop kernel-mode filters that run in the parent partition, so you have less software installed in the parent partition, which increases the reliability of the system overall.

VM Configuration Changes

In Windows Server 2016, Microsoft introduced a new configuration file format for virtual machines, designed to increase the efficiency of reading and writing virtual machine configuration data. It is also designed to reduce the potential for data corruption in the event of a storage failure. The new configuration files use the .VMCX extension (replacing the old .XML files) for virtual machine configuration data, and the .VMRS extension (replacing the old .VSV/.BIN files) for VM runtime state data.


So we are moving away from the XML format to a binary format, and you cannot open and edit the XML file anymore. However, you should be able to do everything through PowerShell.

Hyper-V Cluster Management

Providing a single view of an entire Hyper-V cluster through WMI. You can manage an entire Hyper-V cluster as if it were just one big Hyper-V server.

For example, if you point Get-VM at a single Hyper-V host, you get all the virtual machines on that particular host; if you point it at your Hyper-V cluster, it returns all the virtual machines on that cluster. You can then take the output and pipe it into various other PowerShell commands, which makes it a lot easier to operate on a Hyper-V cluster using PowerShell.
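A sketch, assuming a cluster named "HVCluster":

```powershell
# Returns every VM on every node of the cluster, not just one host
Get-VM -ComputerName "HVCluster" | Sort-Object State, Name
```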

Hypervisor Power Management Improvements

Updated hypervisor power management model to support new modes of power management.

So you can now turn on Hyper-V on your Microsoft Surface Pro tablet without worrying about your battery life; Connected Standby just works in Windows 10 and later on all devices that support connected standby mode.

Hyper-V Nested Virtualization

In May 2015 at the build conference, Microsoft announced that Windows Server 2016 will support Nested Virtualization so you could run Hyper-V Containers in Hyper-V virtual machines.

But the good news is that Windows 10 will support Nested Virtualization as well!

Microsoft announced the release of build 10565 to Windows Insiders on the Fast ring; this build contained an early preview of nested virtualization.

In short, Nested virtualization exposes hardware virtualization support to guest virtual machines. This allows you to install Hyper-V in a guest virtual machine, and create more virtual machines “within” that underlying virtual machine (VMs in VMs!).

The primary motivation for nested virtualization is support for Hyper-V Containers in any cloud environment; the second benefit is for lab and training scenarios.
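A sketch of enabling nested virtualization for a VM (the VM must be powered off; its name is a placeholder):

```powershell
# Expose the host's hardware virtualization extensions to the guest
Set-VMProcessor -VMName "NestedHost" -ExposeVirtualizationExtensions $true
```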

Hyper-V Direct Device Assignment (DDA)

In Windows Server 2016 Hyper-V, NVMe and GPU devices can be assigned directly to guest VMs! Microsoft recommends that these VMs only be ones under the control of the same administration team that manages the host and the guest. Other types of devices may work when passed through to a guest VM; many will, such as USB 3.0 and RAID/SAS controllers, but none are candidates for official support from Microsoft at the time of writing this post. Things might change in the future, so consider these devices to be in the “experimental” category.


Headless Virtual Machine

This feature gives you the ability to boot a VM without display devices, which reduces the memory footprint of the virtual machine and simulates a true headless server. This is something that you can enable or disable for each VM on the Hyper-V server; it is enabled by default, and you can disable it by running the following PowerShell command:

This will remove and disable the keyboard, video, and mouse for the virtual machine.

Improved support for High DPI Devices

The Hyper-V team has made some strategic changes to address Hyper-V’s usability on high-DPI systems. Virtual Machine Connection (vmconnect.exe) is now completely DPI-aware; new icons for all of the Hyper-V UI buttons, as shown in the following screenshot, are available at all DPI points; and the display when you connect to a virtual machine using enhanced or basic mode has been changed so that the virtual machine picks up and matches the DPI information from the host.


Windows Server 2016 Hyper-V scale limits

Based on feedback throughout the Windows Server 2016 Technical Preview, Microsoft had numerous requests to push Hyper-V scalability to new heights to embrace interesting new scenarios around huge databases and machine learning. With Windows Server 2016, Microsoft is delivering new industry-leading scalability to virtualize any and every workload without exception: up to 24 TB of memory per physical server, up to 512 logical processors per physical server, up to 12 TB of memory per VM, and up to 240 virtual processors per VM.

RemoteFX Improvements

Support for Generation 2 VMs, including the OpenGL 4.4 and OpenCL 1.1 APIs. Up to 1 GB of dedicated vRAM for the RemoteFX display device attached to a VM. Improved graphics compression performance using the new AVC444 mode as part of the Remote Desktop Protocol (RDP), and support for Windows Server 2016 personal VMs.

Hyper-V Sockets

This new feature will deliver a platform for developers to build something great on. It extends the Windows Socket API and makes fast, efficient communication between the host and the guest easy and accessible.

I hope this blog post gave you enough details about the new features that will be available to you in Windows Server 2016 Hyper-V.

Until then… enjoy your day!


About Charbel Nemnom
Charbel Nemnom is a Microsoft Cloud Consultant and Technical Evangelist, a fan of the latest IT platform solutions, and an accomplished hands-on technical professional with over 15 years of broad IT infrastructure experience, serving on and guiding technical teams to optimize the performance of mission-critical enterprise systems. He is an excellent communicator, adept at identifying business needs and bridging the gap between functional groups and technology to foster targeted and innovative IT project development, and is well respected by peers for his passion for technology and performance improvement. He has extensive practical knowledge of complex system builds, network design, and virtualization.


  1. Hi Charbel,

Thanks for the post, very insightful commentary on what Windows Server 2016 has to offer. I do have one question about unmonitored virtual machines in a cluster. If I’m running a cluster and a node goes down, I don’t want my VMs sitting there unmonitored for 4 minutes; I’d rather it be a shorter amount of time so they can transfer to a different node and resume. You mentioned that this value is configurable, could you point me to where in the registry I could modify this?



    • Hello Kameron,

      I am glad that you found the article very useful.

The Unmonitored scenario in VM Cluster Resiliency in Windows Server 2016 is designed for a transient network failure in your cluster; the cluster will not fail over the VM immediately, because the issue could be a transient network failure.
As you know, if you fail the VM over to another node immediately, you incur some downtime even though the network issue may have been only transient.

As a side note, a virtual machine in Unmonitored mode is still up and running; the workload inside the guest keeps running and serving users.

Why 4 minutes by default? Because a switch reboot takes around 3 to 4 minutes to complete.
If the network continues to flap more than 3 times in one hour (the default), the VM will fail over to another node and the affected node will be put in Quarantine mode for 2 hours (the default) and will not accept any workload (this value is configurable too).

If you wish to tune this timer for your environment, you can use PowerShell as follows (the value is in seconds):

      PS C:\> (Get-Cluster).ResiliencyDefaultPeriod
      PS C:\> (Get-Cluster).ResiliencyDefaultPeriod = 60
      PS C:\> (Get-Cluster).ResiliencyDefaultPeriod

      Hope that helps!

