In this article, we will take a detailed look at the new features introduced in System Center Virtual Machine Manager 1711.
Introduction
You might have heard that Microsoft is switching to a Semi-Annual Channel (SAC) model for Windows Server and System Center. You can read all about it here.
In short, this means that every six months, Microsoft will release a new version of System Center and Windows Server that includes fixes as well as new features. You will be able to update your system if you have eligible Software Assurance. Before Microsoft releases the final SAC version, they will first publish a preview version of that specific update.
The first SAC preview version for System Center is 1711 (17=Year 2017 and 11=November), and the first production SAC version for System Center will be called 1801 (18=Year 2018 and 01=January) and so on and so forth. You can find and download all System Center preview 1711 releases here.
If we check the list of new features and improvements, we will discover some really promising topics.
What’s new in System Center Virtual Machine Manager 1711
First, the installation experience is exactly the same as it is for VMM 2016. There are no changes in VMM 1711.
The System Center Virtual Machine Manager (SCVMM) 1711 public preview supports managing Windows Server RS3 Server Core (version 1709) and includes the following set of features:
Manage ARM-based and region-specific Azure subscriptions
The Virtual Machine Manager (VMM) 2016 Azure plugin supports only classic virtual machines (VMs) and public Azure regions. SCVMM 1711 supports the management of ARM-based VMs and region-specific Azure subscriptions.
This new capability in VMM enables you to manage ARM-based VMs (the VMs created using the new Azure Portal) as well as enable the management of region-specific Azure subscriptions such as Germany, China, and US Government Azure regions.
You can manage your ARM-based VMs by using Azure AD authentication or using a management certificate. For detailed information, check the following article here.
Note: Make sure you assign the Virtual Machine Contributor role to the Azure AD application; otherwise, you won't be able to manage Azure VMs using VMM.
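For reference, here is a minimal sketch of how that role assignment could be done with the AzureRM PowerShell module; the application ID and subscription ID below are placeholders for your own environment:

```powershell
# Assign the Virtual Machine Contributor role to the Azure AD application
# that VMM uses to connect to the ARM-based subscription.
# Requires the AzureRM module; the IDs below are placeholders.
Login-AzureRmAccount

$appId          = "00000000-0000-0000-0000-000000000000"   # Azure AD application (client) ID used by VMM
$subscriptionId = "11111111-1111-1111-1111-111111111111"   # target Azure subscription

New-AzureRmRoleAssignment -ServicePrincipalName $appId `
                          -RoleDefinitionName "Virtual Machine Contributor" `
                          -Scope "/subscriptions/$subscriptionId"
```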
Nested Virtualization
Nested virtualization is a feature introduced in Windows Server 2016 Hyper-V and above that allows you to run one or more virtual machines inside another virtual machine. Nested virtualization can be enabled out of band by using PowerShell and Hyper-V host configuration.
VMM 1711 allows you to enable and disable nested virtualization directly from VMM. You can leverage this functionality to reduce infrastructure expenses for development and test scenarios. You can configure the nested VM as a host in VMM and perform host operations on it from VMM. For example, VMM dynamic optimization will consider a nested VM host for placement.
Before you start enabling nested virtualization in VMM, you need to make sure the following prerequisites are met:
- A Hyper-V host running Windows Server 2016 or Windows Server, version 1709 and above.
- A Hyper-V VM running Windows Server 2016 or Windows Server, version 1709 and above.
- A Hyper-V VM with configuration version 8.0 or greater.
- A Hyper-V VM with Generation 2.
- An Intel processor with VT-x and EPT technology.
Ensure the VM is in a stopped state before you Enable Nested Virtualization. You can disable nested virtualization using the same wizard page.
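As mentioned above, the same setting can also be applied out of band with Hyper-V PowerShell on the physical host. Here is a minimal sketch, assuming a VM named NestedHost01:

```powershell
# Out-of-band alternative: enable nested virtualization with Hyper-V PowerShell
# on the physical host. The VM name "NestedHost01" is just an example.
Stop-VM -Name "NestedHost01"

# Expose the virtualization extensions (VT-x/EPT) to the VM
Set-VMProcessor -VMName "NestedHost01" -ExposeVirtualizationExtensions $true

# Nested hosts cannot use dynamic memory; MAC address spoofing is needed
# so that nested VMs can reach the network through the virtual switch.
Set-VMMemory -VMName "NestedHost01" -DynamicMemoryEnabled $false
Get-VMNetworkAdapter -VMName "NestedHost01" | Set-VMNetworkAdapter -MacAddressSpoofing On

Start-VM -Name "NestedHost01"
```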
Once the VM is in a running state, you can configure the nested VM as a host. Run through the wizard, select the options as appropriate, and complete the wizard.
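If you prefer scripting, adding the nested VM as a managed host can also be done with VMM PowerShell. A minimal sketch, assuming a Run As account called HostAdmin and the default All Hosts host group:

```powershell
# Add the nested VM as a managed Hyper-V host in VMM.
# "NestedHost01.contoso.com" and the "HostAdmin" Run As account are examples.
$runAs     = Get-SCRunAsAccount -Name "HostAdmin"
$hostGroup = Get-SCVMHostGroup -Name "All Hosts"

Add-SCVMHost -ComputerName "NestedHost01.contoso.com" `
             -VMHostGroup $hostGroup `
             -Credential $runAs
```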
Migration of VMware VMs (EFI-based VMs) to Hyper-V Generation 2 VMs
Virtual Machine Manager (VMM) 2016 only supports the migration of BIOS-based VMs. With the VMM 1711 release, however, you can migrate EFI-based VMware VMs to Hyper-V Generation 2 VMs. VMware VMs that you migrate to the Microsoft Hyper-V platform can take advantage of all Hyper-V Generation 2 features.
As part of the VMM 1711 release, the Convert Virtual Machine wizard enables this migration by detecting the firmware type (BIOS or EFI) of the source VM and defaulting to the appropriate Hyper-V VM generation:
- BIOS-based VMs are migrated to Hyper-V VM Generation 1
- EFI-based VMs are migrated to Hyper-V VM Generation 2
Please note that VMM PowerShell can also be used to convert VMware VMs.
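As a reference, a hedged sketch of such a conversion with the New-SCV2V cmdlet is shown below; the host name, source VM name, and placement path are examples, and the exact parameters you need may vary depending on whether the source comes from an ESXi host or the VMM library:

```powershell
# Convert a VMware VM to a Hyper-V VM with VMM PowerShell (V2V).
# Server names, VM name, and placement path below are examples only.
$vmHost   = Get-SCVMHost -ComputerName "hyperv01.contoso.com"
$sourceVM = Get-SCVirtualMachine -Name "vmware-web01"

New-SCV2V -VM $sourceVM `
          -VMHost $vmHost `
          -Name "web01" `
          -Path "D:\VMs"
```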
Performance improvement in host refresher
In VMM 1711, the host refresher has been updated to improve performance. In scenarios where you are managing a large number of hosts and virtual machines with checkpoints, you will notice a significant improvement in the performance of the refresh job.
The improvement has been validated with VMM instances managing 20 hosts, each running 45-100 VMs, where Microsoft measured up to a 10x performance improvement.
Enhanced console session in VMM
Console Connect in VMM provides an alternative to Remote Desktop for connecting to a VM. It is most useful when the VM does not have network connectivity, or when you want to change a network configuration that could break connectivity. In VMM 2016 and earlier releases, Console Connect supports only basic sessions, where clipboard text can only be pasted through the Type Clipboard Text menu option.
With SCVMM 1711, Console Connect supports enhanced sessions, which enable Cut (Ctrl + X), Copy (Ctrl + C), and Paste (Ctrl + V) operations on ANSI text and files available on the clipboard, so copying and pasting text and files to and from the VM is possible.
To leverage the enhanced console session in VMM 1711, the host operating system of the VM must be Windows Server 2012 R2 or above. If you recall, this feature was introduced by the Hyper-V team in Windows Server 2012 R2. You also need to make sure that the guest operating system has Guest Services enabled.
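Both prerequisites can be checked and enabled with standard Hyper-V PowerShell on the host; the VM name VM01 below is just an example:

```powershell
# Enable enhanced session mode on the Hyper-V host and the
# Guest Service Interface integration service on the VM.
# "VM01" is an example VM name.
Set-VMHost -EnableEnhancedSessionMode $true

Enable-VMIntegrationService -VMName "VM01" -Name "Guest Service Interface"

# Verify the state of the integration services
Get-VMIntegrationService -VMName "VM01"
```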
VMM Storage QoS improvements
Storage Quality of Service (QoS) provides a way to centrally monitor and manage the storage performance for virtual machines using Hyper-V and the Scale-Out File Server (SOFS) roles. This feature automatically improves storage resource fairness between multiple VMs using the same cluster and allows policy-based performance goals. VMM 1711 now has three improvements in Storage Quality of Service.
Extension of QoS support beyond Storage Spaces Direct (S2D) clusters: In VMM 2016, the management of QoS is limited to VHDXs residing on S2D hyper-converged clusters and Scale-Out File Servers (SOFS) only. With VMM 1711, the support is extended to all managed clusters and Scale-Out File Servers running on Windows Server 2016 and beyond. Now, the Scope page in the Create Storage QoS Policy wizard displays the list of friendly names (easy for reference) for VMM-managed SOFS and failover clusters running Windows Server 2016 or beyond.
Support for VMM private clouds: The Storage QoS Policies section in the Create Cloud wizard helps the fabric admin select the list of policies that should be made available to cloud consumers. Note: This list contains only those policies that are available in the scope of all the clusters selected in the Resources section of the cloud wizard. This lets the self-service user choose between the available plans, even after the VM is placed.
Availability of Storage QoS policies as templates: Using templates is a common way of deploying VMs and services on a cloud, so with this update, VMM 1711 allows you to select storage QoS policies in a template as well.
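For context, the policies themselves live at the cluster level and can also be created with the native Windows Server Storage QoS cmdlets; a minimal sketch, with the policy name and IOPS limits as example values:

```powershell
# Create a Storage QoS policy at the cluster level (Windows Server 2016+).
# Run on a cluster node; the policy name and IOPS values are examples.
New-StorageQosPolicy -Name "Gold" `
                     -PolicyType Dedicated `
                     -MinimumIops 100 `
                     -MaximumIops 5000

# List existing policies and their IDs
Get-StorageQosPolicy | Select-Object Name, PolicyId, MinimumIops, MaximumIops
```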
Linux Shielded VM Support
In Windows Server 2016 Hyper-V, Microsoft introduced the concept of a shielded VM for Windows OS-based virtual machines. Shielded VMs provide protection against malicious administrator actions both when the VM's data is at rest and when untrusted software is running on the Hyper-V hosts.
With Windows Server RS3 (1709), Hyper-V introduces support for provisioning Linux-shielded VMs as well, and the same has been extended to VMM 1711.
Fallback HGS configuration in VMM
Being at the heart of providing attestation and key protection services for running shielded VMs on Hyper-V hosts, the Host Guardian Service (HGS) should keep operating even in situations of disaster. To support this, in VMM 1711 the guarded host can be configured with a primary and a secondary pair of HGS URLs (an attestation and key protection URI). This capability enables scenarios such as guarded fabric deployments spanning two data centers for disaster recovery purposes, HGS running as shielded VMs, etc.
The primary HGS URLs will always be used in favor of the secondary. If after the appropriate timeout and retry count the primary HGS fails to respond, the operation will be re-attempted against the secondary. Subsequent operations will always favor the primary; the secondary will only be used when the primary fails.
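On the guarded host itself, these settings correspond to the HGS client configuration. The following is a hedged sketch using Set-HgsClientConfiguration; the URLs are placeholders, and the fallback parameter names are an assumption based on the Windows Server, version 1709 HGS client and may differ in your build:

```powershell
# Configure primary and fallback HGS URLs on a guarded host
# (Windows Server, version 1709 or later). URLs are placeholders.
Set-HgsClientConfiguration `
    -AttestationServerUrl           "http://hgs-primary.contoso.com/Attestation" `
    -KeyProtectionServerUrl         "http://hgs-primary.contoso.com/KeyProtection" `
    -FallbackAttestationServerUrl   "http://hgs-secondary.contoso.com/Attestation" `
    -FallbackKeyProtectionServerUrl "http://hgs-secondary.contoso.com/KeyProtection"

# Verify the resulting configuration
Get-HgsClientConfiguration
```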
Configure encrypted networks through VMM
With the new Encrypted Networks feature in VMM 1711, end-to-end encryption can be easily configured on VM networks by using the Network Controller (NC). This encryption prevents traffic between two VMs on the same subnet from being read and manipulated.
Encryption is controlled at the subnet level and can be enabled or disabled for each subnet of the VM network. VMM 1711 considers a VM network encrypted if any one of its subnets has encryption enabled.
This feature is managed through the SDN Network Controller (NC). If you do not already have a Software Defined Network (SDN) infrastructure with an NC, check this article for more information.
This feature currently provides protection from third-party and network admins and doesn’t offer any protection against fabric admins. Protection against fabric admins is in the pipeline and will be available soon.
Configure SLB VIPs through VMM service template
Software Defined Networking (SDN) in Windows Server 2016 can use Software Load Balancing (SLB) to evenly distribute network traffic among workloads managed by service providers and tenants.
VMM 2016 currently supports deploying SLB Virtual IPs (VIPs) only by using PowerShell. With VMM 1711, you can configure SLB VIPs while deploying multi-tier applications using service templates. In addition, VMM supports load balancing of both public and internal network traffic. Check this article for more information.
Configure guest clusters in SDN through VMM
VMM 2016 currently supports guest clustering. However, with the advent of the Network Controller, Windows Server 2016, and System Center 2016, the configuration of guest clusters has undergone some change.
With the introduction of the Network Controller, VMs that are connected to a virtual network are only permitted to use the IP addresses that the Network Controller assigns for communication. The Network Controller does not support floating IP addresses, which are essential for technologies such as Microsoft Failover Clustering to work. The VMM 1711 release enables this by emulating the floating IP functionality through the Software Load Balancer (SLB) in the SDN.
VMM 1711 supports guest clustering through an Internal Load Balancer (ILB) Virtual IP (VIP). The ILB uses probe ports, which are created on the guest cluster VMs, to identify the active node. At any given time, the probe port of only the active node responds to the ILB, and all the traffic directed to the VIP is routed to the active node. Check this article for more information.
This is pretty cool stuff and maybe there is more to come in VMM version 1801. This article gives you a quick overview of what’s new in VMM Preview 1711.
Thanks for reading!
Cheers,
-Ch@rbel-