In This Article
Microsoft announced the release of System Center 2019 under the Long-Term Servicing Channel (LTSC). LTSC provides 5 years of standard support and 5 years of extended support. Following the release of System Center 2019, the suite will continue to accrue value through Update Rollup releases every six months over the 5-year mainstream support window. Microsoft has dropped Semi-Annual Channel (SAC) releases, but new features arriving before the next LTSC release will be delivered through Update Rollups. You can read about the announcement on the Windows Server Blog. You can download the media from the Volume Licensing Service Center (VLSC), or you can download the evaluation bits from the following link.
A number of improvements and new features were introduced in this release.
In System Center 2019 Virtual Machine Manager, Microsoft added several new features. In the previous blog post, we showed you how to install System Center 2019 Virtual Machine Manager on top of Windows Server 2019 and SQL Server 2017. In this post, we will dive into the new features and improvements.
Click a title to jump to that section of the article:
- Nested Virtualization – Run Hyper-V inside a Hyper-V virtual machine
- Performance Improvements to Host refresher
- Manage VMware ESXi 6.5 hosts
- Migrate VMware VM (EFI firmware-based VM) to Hyper-V
- Enhanced console session for VMs
- Prevent CSVs from getting full by setting a threshold for warning or auto migrating VHDs across CSVs in a cluster
- Monitor storage health and operation status
- Ability to choose CSV during the creation of new VHD
- Improvements to VMM Storage QoS
- Upgrade and patching of S2D clusters
- Encrypted VM Networks
- Layer 2 Network Information for Hosts (LLDP)
- Convert SET switch to Logical Switch
- Configure SLB VIPs through VMM Service Templates
- Configuration of guest clusters in SDN environments
- Support for Static MAC address on VMs deployed on a VMM cloud
- Configure Layer 3 Forwarding Gateway
- New RBAC Role – VM Administrator
- Support for ‘Group Managed Service Account (gMSA)’ as VMM service account
- Linux Shielded VM support
Nested Virtualization is new functionality introduced in Windows Server 2016 that allows users to create one or more virtual machines inside another virtual machine. Nested virtualization exposes hardware virtualization support to guest virtual machines.
In VMM 2019, you can enable or disable nested virtualization. You can configure the nested VM as a host in VMM and then perform host operations on it from VMM. As a result, VMM dynamic optimization will consider a nested VM host for VM placement.
To enable nested virtualization, you need to ensure the following prerequisites are met:
- A Hyper-V host running Windows Server 2016 or Windows Server 2019 (Datacenter / Standard).
- A Hyper-V VM running Windows Server 2016 or Windows Server 2019 (Datacenter / Standard).
- A Hyper-V VM with configuration version 8.0 or later.
- The Hyper-V host must run on an Intel processor with VT-x and EPT technology.
Once you identify the VM that meets the prerequisites above, make sure the VM is in a stopped state. Select the VM’s properties, and on the General tab, select Enable Nested Virtualization.
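Under the hood, this option corresponds to the standard Hyper-V nested virtualization settings. As a hedged sketch, the same configuration can be applied directly on the host with the built-in Hyper-V cmdlets ("NestedHost01" is a placeholder VM name):

```powershell
# The VM must be powered off before exposing virtualization extensions.
Stop-VM -Name "NestedHost01"

# Expose hardware virtualization extensions to the guest
Set-VMProcessor -VMName "NestedHost01" -ExposeVirtualizationExtensions $true

# Nested guests typically need MAC address spoofing (or NAT) for networking
Get-VMNetworkAdapter -VMName "NestedHost01" | Set-VMNetworkAdapter -MacAddressSpoofing On

Start-VM -Name "NestedHost01"
```

Once the VM is started again, you can install the Hyper-V role inside it and add it to VMM as a host.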
In VMM 2019, Microsoft has made significant changes that result in performance improvements for VMM Host and VM refresher.
If you are managing a large number of Hyper-V hosts and virtual machines with checkpoints, you will notice significant improvements in refresh job performance. The VM refresh is triggered as soon as you click Refresh Virtual Machines under Hosts.
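If you prefer to trigger refreshes from PowerShell, the VMM module exposes Read-SCVMHost and Read-SCVirtualMachine for exactly this. A minimal sketch, assuming an active VMM PowerShell session and placeholder host and VM names:

```powershell
# Refresh a host's properties on demand
Get-SCVMHost -ComputerName "HyperVHost01" | Read-SCVMHost

# Refresh a single VM (including checkpoint information)
Get-SCVirtualMachine -Name "VM01" | Read-SCVirtualMachine
```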
VMM 2019 supports VMware ESXi v6.5 servers in the VMM fabric. This support gives you additional flexibility in managing multiple hypervisors. For details on supported VMware servers, please check the following documentation from Microsoft.
In the previous release of System Center Virtual Machine Manager, Microsoft enabled the migration of virtual machines from VMware to Hyper-V. As part of this migration, VMM converts the VMware VMs to Generation 1 Hyper-V VMs. These VMs support legacy drivers, use the Hyper-V BIOS-based architecture, and boot the operating system from an IDE controller.
In VMM 2019, Microsoft enabled migration of UEFI-based VMware VMs to Hyper-V. VMware VMs that you migrate to the Hyper-V platform can now take advantage of the Generation 2 features.
As part of VMM 2019, the Convert Virtual Machine wizard selects the appropriate Hyper-V VM generation by default:
- BIOS-based VMs are migrated to Hyper-V VM Generation 1
- UEFI-based VMs are migrated to Hyper-V VM Generation 2
To learn more on how to convert UEFI-based VMware VM to Hyper-V Generation 2 VM, please check the following step-by-step guide from Microsoft.
The Connect Console in VMM provides an alternative to Remote Desktop for connecting to a VM. This is useful when the VM has no network connectivity, or when you want to change a network configuration in a way that could break connectivity.
Earlier versions of VMM support only basic sessions, where clipboard text can be pasted only through the Type Clipboard Text menu option. With VMM 2019, you can now connect through Enhanced Session mode, which supports Cut (Ctrl + X), Copy (Ctrl + C), and Paste (Ctrl + V) operations on ANSI text and files available on the clipboard, so copy/paste of text and files works both to and from the VM.
To use Enhanced Session mode, you need to ensure the following prerequisites are met:
- The guest operating system on the VM should have Virtual Guest Services installed and enabled. This is enabled by default for Windows Server 2012 R2 and later.
- The host operating system of the VM should be Windows Server 2012 R2 or later.
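These prerequisites map to standard Hyper-V settings, so they can also be checked and enabled with the built-in Hyper-V cmdlets. A hedged sketch ("VM01" is a placeholder; the integration service name shown applies to English-language systems):

```powershell
# On the Hyper-V host: allow enhanced session mode connections
Set-VMHost -EnableEnhancedSessionMode $true

# Check which integration services are enabled for the VM
Get-VMIntegrationService -VMName "VM01"

# Enable guest services if needed
Enable-VMIntegrationService -VMName "VM01" -Name "Guest Service Interface"
```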
In VMM 2019, Microsoft introduced a new feature that helps prevent Cluster Shared Volumes (CSVs) from becoming full due to expanding VHDs placed on them. You can now set a threshold so that you receive a warning during VHD placement if the placement would cause the CSV's free storage space to fall below the threshold. You can also choose to automatically migrate VHDs when a CSV's free storage space falls below the threshold. To use Dynamic Optimization (DO) for storage, the cluster should have more than one CSV.
You can set the Dynamic Optimization (DO) at the host group level. If you want to use the settings from the parent host group, then check the box Use dynamic optimization settings from the parent host group.
You can also select Automatically migrate virtual hard disks to balance load and set a frequency value in minutes. This frequency sets the interval at which storage Dynamic Optimization runs. The frequency for compute and storage must be the same value in minutes (for example, 30 minutes).
To set the CSV free space threshold and receive a warning during placement, or to auto-migrate VHDs between CSVs in a cluster, enter a value in the Disk Space threshold setting. This value can be expressed either in GB or as a % of the CSV's allocated storage space; you choose the unit from the drop-down on the Host group Properties > Host Reserves page. Once it is changed to either % or GB, the same unit is reflected on the Dynamic Optimization page.
You can also run storage Dynamic Optimization manually by right-clicking the cluster name and choosing Optimize Disk Space. The operation returns a list of all VHDs that will be migrated to other CSVs in the cluster if any CSV's free storage is below the threshold.
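If you prefer scripting, the VMM module includes a Start-SCDynamicOptimization cmdlet that triggers optimization for a cluster. Treat the following as an illustrative sketch with a placeholder cluster name rather than a definitive command line:

```powershell
# Trigger dynamic optimization for a cluster from PowerShell
$cluster = Get-SCVMHostCluster -Name "HVCluster01"
Start-SCDynamicOptimization -VMHostCluster $cluster
```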
In VMM 2019, Microsoft added new functionality to help you monitor the health and operational status of Storage pools, LUNs, and physical disks in the VMM fabric.
To check the storage health status, open the VMM console and go to Fabric > Storage > Classification and Pools. The Health Status column provides the status of pools, LUNs, and physical disks.
In earlier versions of VMM, a new disk on a VM would be placed by default on the same CSV as the VM's existing VHDs; you could not choose a different CSV or folder, which was a problem when that CSV was getting full. With VMM 2019, you can now choose any location for a new disk, giving you more flexibility to place new disks based on the storage availability of CSVs.
To add a new disk to the VM, from the VMM console, go to Properties > Hardware Configuration > New Disk. You will now see a Browse button for the virtual hard disk path.
VMM 2019 supports the following improvements in storage QoS running on Windows Server 2016 and beyond:
- Extension of Storage QoS support beyond Storage Spaces Direct – You can now assign storage QoS policies to Storage Area Networks (SAN). To create a Storage QoS Policy, go to Fabric > Storage > QoS Policies > Create Storage QoS Policy.
- Support for VMM private cloud – storage QoS policies can now be consumed by the VMM cloud tenants. You can assign Storage QoS Policies while creating a cloud.
- Availability of storage QoS policies as templates – You can set a storage QoS policy while creating VM template.
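For the SAN scenario above, a policy can also be created from PowerShell. This is a hedged sketch: New-SCStorageQoSPolicy is the VMM cmdlet for this, but the policy name and IOPS values are placeholders, and the exact floor/ceiling parameter names shown are assumptions:

```powershell
# Create a storage QoS policy with assumed IOPS floor/ceiling parameters
New-SCStorageQoSPolicy -Name "GoldTier" -IOPSMinimum 500 -IOPSMaximum 5000
```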
In VMM 2019, you can update individual Storage Spaces Direct hosts or clusters against the baselines configured in Windows Server Update Services (WSUS). For more information on how to patch and update hosts/clusters in VMM, please check the following article.
In VMM 2019, you can use the new Encrypted Networks feature: end-to-end encryption can be easily configured on VM networks by using the Network Controller (NC). This encryption prevents traffic between two VMs on the same subnet from being read or manipulated.
The control of encryption is at the subnet level and encryption can be enabled/disabled for each subnet of the VM network. VMM 2019 recognizes the VM network to be encrypted if any one of the subnets has encryption enabled on it.
This feature is managed through the SDN Network Controller (NC). If you do not already have a Software Defined Network (SDN) infrastructure with an NC, see this article for more information.
This feature currently provides protection from third parties and network admins, and doesn't offer any protection against fabric admins. Protection against fabric admins is in the pipeline and will be available soon.
Software-Defined Networking (SDN) in Windows Server 2016 and Windows Server 2019 can use Software Load Balancing (SLB) to evenly distribute network traffic among workloads managed by service providers and tenants. VMM 2016 supports deploying SLB Virtual IPs (VIPs) only through PowerShell. With VMM 2019, you can now configure SLB VIPs while deploying multi-tier applications by using Service Templates. In addition, VMM 2019 supports load balancing of both public and internal network traffic. For more information on how to configure SLB VIPs through VMM service templates, please check the following guide.
VMM 2016 supports guest clustering. However, with the advent of the Network Controller, starting with Windows Server 2016 and System Center 2016, the configuration of guest clusters has changed. With the network controller in place, VMs connected to a virtual network are permitted to use only the IP addresses that the network controller assigns for communication. The SDN design, inspired by Azure networking, supports floating IP functionality through the Software Load Balancer (SLB), just as Azure networking does. VMM 2019 also supports floating IP functionality through the SLB in SDN scenarios.
VMM 2019 supports guest clustering through an Internal Load Balancer(ILB) Virtual IP(VIP). The ILB uses probe ports that are created on the guest cluster VMs to identify the active node. At any given time, the probe port of only the active node responds to the ILB, and all the traffic directed to the VIP is routed to the active node. For more information on how to configure guest clusters in SDN through VMM, please check the following article.
VMM 2019 allows you to convert a switch embedded teaming (SET) switch to a logical switch by using the VMM console. In earlier VMM versions, this was supported only through a PowerShell script.
For more information on how to convert a Hyper-V virtual switch to a logical switch in VMM, please check the following step-by-step guide.
VMM 2019 supports Link Layer Discovery Protocol (LLDP). You can now view network device properties and capabilities information of the hosts from VMM.
To fetch the LLDP properties and view this information in VMM, you must have the following prerequisites:
- The host operating system must be Windows Server 2016 or later.
- The DataCenterBridging and DataCenterBridging-LLDP-Tools features must be enabled on the hosts.
- Reading LLDP information is not supported for host adapters having virtual switches deployed. You must remove the virtual switch before retrieving LLDP information for the adapter. This is useful for RDMA network adapters that are NOT part of a converged virtual switch.
By default, the LLDP packet wait time is set to 30 seconds. You can modify this value through the registry key at Software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings\LLdpPacketWaitIntervalSeconds. The minimum value you can set is 5 seconds; the maximum is 300 seconds.
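Both the feature prerequisites and the registry tweak can be applied from PowerShell. A minimal sketch (the key is assumed to live under HKLM, and 60 is just an example value):

```powershell
# On each host: enable the features listed in the prerequisites
Enable-WindowsOptionalFeature -Online -FeatureName "DataCenterBridging" -All
Enable-WindowsOptionalFeature -Online -FeatureName "DataCenterBridging-LLDP-Tools" -All

# On the VMM server: raise the LLDP packet wait time to 60 seconds
Set-ItemProperty `
    -Path "HKLM:\Software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings" `
    -Name "LLdpPacketWaitIntervalSeconds" -Value 60
```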
To get the LLDP details of network devices from the VMM console, go to View > Host > Properties > Hardware Configuration > Network adapter. The details displayed contain a time stamp. To get the current details, click the Refresh button.
There are scenarios where certain products have their licensing tied to the MAC address of the server to which they are deployed. In these scenarios, it is desirable to provide a static MAC address for the VMs. In earlier versions of VMM, users were allowed to set a static MAC address on the VMs deployed on the hosts only. However, users did not have an option to set static MAC addresses for the VMs deployed on the cloud. With VMM 2019, you can set a static MAC address on the VMs deployed on VMM Cloud.
Please note that the MAC address assigned to the VM should be part of an accessible MAC pool, which you can find under Fabric > Networking > MAC Address Pools. Because self-service users do not have visibility into the fabric MAC pools, they need to coordinate with admins to make sure the MAC address is part of an accessible pool.
You can set the static MAC address on the VM either while:
- Deploying a new VM onto the cloud from VHD / VM Template.
- Changing the MAC address on an existing VM deployed to the cloud.
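For the second scenario, a hedged PowerShell sketch using the standard VMM network adapter cmdlets (the VM name and MAC address are placeholders; the MAC must come from an accessible MAC address pool):

```powershell
# Look up the cloud VM and its network adapter
$vm = Get-SCVirtualMachine -Name "CloudVM01"
$adapter = Get-SCVirtualNetworkAdapter -VM $vm

# Assign a static MAC address to the adapter
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $adapter `
    -MACAddressType Static -MACAddress "00:1D:D8:B7:1C:00"
```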
L3 forwarding enables connectivity between the physical infrastructure in the datacenter and the virtualized infrastructure in the Hyper-V network virtualization cloud.
Using L3 forwarding, tenant network virtual machines can connect to a physical network through the Windows Server 2016/2019 SDN Gateway, which is already configured in an SDN environment. In this case, the SDN gateway acts as a router between the virtualized network and the physical network.
You must configure a unique next-hop logical network, with a unique VLAN ID, for each Tenant VM network for which L3 forwarding needs to be set up. There must be 1:1 mapping between a tenant network and the corresponding physical network (with a unique VLAN ID).
Use the following steps to create the next-hop logical network in VMM:
1) On the VMM console, select Logical Networks, right-click, and select Create Logical Network.
2) In the Settings page, choose One connected network and select the checkbox for Create a VM network with the same name to allow virtual machines to access this logical network directly and Managed by Microsoft Network Controller.
3) Create an IP Pool for this new logical network. The IP address from this pool is required for setting up L3 forwarding.
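The three console steps above can also be sketched in PowerShell. This is illustrative only: the names, subnet, and VLAN ID are placeholders, and an NC-managed network involves wizard settings not reproduced here:

```powershell
# Create the next-hop logical network, a VLAN-backed site, and an IP pool
$ln   = New-SCLogicalNetwork -Name "L3-NextHop"
$vlan = New-SCSubnetVLan -Subnet "10.10.10.0/24" -VLanID 100
$lnd  = New-SCLogicalNetworkDefinition -Name "L3-NextHop-Site" -LogicalNetwork $ln `
          -VMHostGroup (Get-SCVMHostGroup -Name "All Hosts") -SubnetVLan $vlan
New-SCStaticIPAddressPool -Name "L3-NextHop-Pool" -LogicalNetworkDefinition $lnd `
    -Subnet "10.10.10.0/24" -IPAddressRangeStart "10.10.10.10" -IPAddressRangeEnd "10.10.10.50"
```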
In a large environment, it is useful to have a user role dedicated to troubleshooting: a user who can access all the VMs and make changes on them to resolve issues, and who can also investigate root causes at the fabric layer. However, for security reasons, this VM administrator should not be given privileges to make changes on the fabric layer (such as adding storage or hosts).
The existing RBAC roles in VMM do not cover this persona; the Delegated Admin and Fabric Admin roles have either too few or more permissions than necessary for troubleshooting alone.
To address this, Microsoft introduced a new role in VMM 2019 called Virtual Machine Administrator. A user in this role has read and write access to all VMs but read-only access to the fabric.
To create an RBAC role for VM administrator, go to Settings > Users Roles > Create User Role. In the Create User Role Wizard, under the Profile page, select the Virtual Machine Administrator role as shown in the following screenshot.
On the Scope page, the VM Administrator role can be scoped to both clouds and host groups. Even if the user is scoped to a cloud, the user has read-only access to all the fabric elements attached to that cloud.
On the Permissions page, you can see a list of granular permissions that can be assigned to the role. New permissions like Migrate Virtual Machine, Migrate VM Storage and Update VM Functional Level have been added to this role.
VMM 2019 supports the use of a gMSA as the management server service account. A gMSA helps improve the security posture and provides convenience through automatic password management, simplified service principal name (SPN) management, and the ability to delegate management to other administrators. For more information on how to create a gMSA account, please check the following step-by-step guide.
Please note that you do not need to specify a Service Principal Name (SPN) when creating the gMSA. The VMM service sets the appropriate SPN on the gMSA itself.
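Creating the gMSA itself is done with the Active Directory PowerShell module. A hedged sketch with placeholder names (note that no SPN is specified, per the point above):

```powershell
# Run once per forest if no KDS root key exists yet
# (the key becomes usable after AD replication completes)
Add-KdsRootKey -EffectiveImmediately

# Create the gMSA; account name, DNS name, and the principals group are placeholders
New-ADServiceAccount -Name "vmmSvc" -DNSHostName "vmmSvc.contoso.com" `
    -PrincipalsAllowedToRetrieveManagedPassword "VMMServersGroup"

# On the VMM server: install and verify the account
Install-ADServiceAccount -Identity "vmmSvc"
Test-ADServiceAccount -Identity "vmmSvc"
```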
During the VMM installation setup, on the Configure service account and distributed key management page, select Group Managed Service Account as shown in the following screenshot, and then enter the gMSA account details in “Domain\gMSA account” format.
In Windows Server 2016 Hyper-V, Microsoft introduced the concept of a shielded VM for Windows OS-based virtual machines. Shielded VMs provide protection against malicious administrator actions both when the VM's data is at rest and when untrusted software is running on Hyper-V hosts.
With Windows Server, version 1709 and beyond, Microsoft introduced support for provisioning Linux shielded VMs as well, and the same has been extended to VMM 2019. For more information on how to create a Linux shielded VM template disk in VMM, please check the following step-by-step guide.
Azure Update Management helps you patch and update VMs that are managed by VMM by integrating VMM with an Azure Automation subscription. Currently, the scope is limited to new VMs deployed using a VM template that is mapped to an Azure subscription. Microsoft is working to extend this functionality to all existing VMs in upcoming releases. For more information on how to patch and update on-premises VMs managed by VMM 2019, please check the following article.
Besides the Azure Update Management integration, VMM 2019 supports the management of ARM-based VMs and of region-specific Azure subscriptions, such as the Germany, China, and US Government Azure regions.
You can manage your ARM-based VMs by using Azure AD authentication or a management certificate. For detailed information, check the following article.
Note: Make sure you assign the Virtual Machine Contributor role to the application; otherwise, you won't be able to manage Azure VMs using VMM.
Thank you for reading my blog.
If you have any questions or feedback, please leave a comment.