
What’s New in System Center 2019 Virtual Machine Manager #VMM #SCVMM


Microsoft announced the release of System Center 2019 under the Long-Term Servicing Channel (LTSC). LTSC provides 5 years of standard and 5 years of extended support. Subsequent to the release of System Center 2019, the suite will continue to accrue value through the Update Rollup releases every six months over the mainstream support window of 5 years.

What’s New in System Center 2019 VMM

Microsoft has dropped Semi-Annual Channel (SAC) releases, but new features before the next Long-Term Servicing Channel (LTSC) release will be delivered through Update Rollups. You can read about the announcement on the Windows Server Blog. You can download the media from the Volume Licensing Service Center (VLSC), or you can download the evaluation bits from the following link.

Many improvements and new features were introduced in this release.

In System Center 2019 Virtual Machine Manager, Microsoft added several new features. In the previous blog post, we showed you how to install System Center 2019 Virtual Machine Manager on top of Windows Server 2019 and SQL Server 2017. In this post, we will dive into the new features and improvements.


Enable Nested Virtualization

Nested Virtualization is a new functionality introduced in Windows Server 2016 that allows users to create one or more virtual machines inside another virtual machine. Nested virtualization exposes hardware virtualization support to guest virtual machines.

In VMM 2019, you can enable or disable nested virtualization. You can configure such a VM as a host in VMM and then perform host operations on it from VMM; hence, VMM dynamic optimization will also consider a nested VM host for VM placement.

To enable nested virtualization, you need to ensure the following prerequisites are met:

  • The Hyper-V host runs Windows Server 2016 or later.
  • The VM configuration version is 8.0 or greater.
  • The host uses an Intel processor with VT-x and EPT support.

Once you identify a VM that meets these prerequisites, make sure the VM is in a stopped state. Open the VM's properties, and on the General tab, select Enable Nested Virtualization.
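
If you script your fabric, the same setting can be toggled from the VMM command shell. The following is a minimal sketch, assuming a VM named VM01 (a placeholder name) and the -EnableNestedVirtualization parameter on Set-SCVirtualMachine:

```powershell
# Sketch: enable nested virtualization on a stopped VM via the VMM command shell.
# "VM01" is a placeholder name; adjust it to your environment.
$vm = Get-SCVirtualMachine -Name "VM01"

# The VM must be in a stopped state before the setting can be changed
if ($vm.Status -eq "PowerOff") {
    Set-SCVirtualMachine -VM $vm -EnableNestedVirtualization $true
}
```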


Performance Improvements

In VMM 2019, Microsoft has made significant changes that result in performance improvements for VMM Host and VM refresher.

If you are managing a large number of Hyper-V hosts and virtual machines with checkpoints, you will notice significant improvements in the performance of the refresh job. The VM refresh is triggered as soon as you click Refresh Virtual Machines under Hosts.
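
A refresh can also be triggered on demand from the VMM command shell. A minimal sketch, assuming a host named HV01 (a placeholder):

```powershell
# Sketch: refresh a host and all of its VMs from the VMM command shell.
# "HV01" is a placeholder host name.
$vmHost = Get-SCVMHost -ComputerName "HV01"
Read-SCVMHost -VMHost $vmHost        # refresh host properties

# Refresh every VM on that host
Get-SCVirtualMachine -VMHost $vmHost | ForEach-Object {
    Read-SCVirtualMachine -VM $_
}
```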

Manage VMware ESXi 6.5 Hosts

VMM 2019 supports VMware ESXi v6.5 servers in the VMM fabric. This support gives you additional flexibility in managing multiple hypervisors in use. For details on supported VMware servers, please check the following documentation from Microsoft.

Migrate VMware VM (EFI firmware-based VM) to Hyper-V

In the previous release of System Center Virtual Machine Manager, Microsoft enabled the migration of virtual machines from VMware to Hyper-V. As part of this migration, VMM converts the VMware VMs to Generation 1 Hyper-V VMs. These VMs support legacy drivers, use the Hyper-V BIOS-based architecture, and boot the operating system from an IDE controller.

In VMM 2019, Microsoft enabled the migration of UEFI-based VMware VMs to Hyper-V. VMware VMs that you migrate to the Hyper-V platform can now take advantage of the Generation 2 features.

As part of VMM 2019, the Convert Virtual Machine wizard selects and defaults the Hyper-V VM generation appropriately:

  • BIOS-based VMs are migrated to Hyper-V VM Generation 1
  • UEFI-based VMs are migrated to Hyper-V VM Generation 2

To learn more on how to convert UEFI-based VMware VM to Hyper-V Generation 2 VM, please check the following step-by-step guide from Microsoft.

Enhanced Console Session for VMs

The Connect Console in VMM provides an alternative to Remote Desktop for connecting to a VM. This is useful when the VM does not have any network connectivity, or when you want to change a network configuration that could break connectivity.

In earlier versions of VMM, only a basic session was supported, where clipboard text could only be pasted through the Type Clipboard Text menu option. With VMM 2019, you can now connect through Enhanced Session mode, which supports Cut (Ctrl + X), Copy (Ctrl + C), and Paste (Ctrl + V) operations on text and files available on the clipboard, so copy/paste of text and files is possible to and from the VM.


To use Enhanced Session mode, you need to ensure the following prerequisites are met:

  • The guest operating system on the VM should have Virtual Guest Services installed and enabled. This is enabled by default for Windows Server 2012 R2 and later.
  • The host operating system of the VM should be Windows Server 2012 R2 or later.

Dynamic Optimization for Storage on Cluster Shared Volumes

In VMM 2019, Microsoft introduced a new feature that helps prevent Cluster Shared Volumes (CSV) from becoming full due to expanding VHDs placed on them. You can now set a threshold so that you receive a warning during VHD placement if the placement would cause the free storage space on the CSV to fall below the threshold. You can also choose to automatically migrate VHDs when the free storage space on a CSV falls below the threshold. To use Dynamic Optimization (DO) for storage, the cluster must have more than one CSV.

You can set the Dynamic Optimization (DO) at the host group level. If you want to use the settings from the parent host group, then check the box Use dynamic optimization settings from the parent host group.

You can also select Automatically migrate virtual hard disks to balance load and set a value in minutes. This frequency is the time interval at which storage Dynamic Optimization runs. Note that the frequency for Compute and Storage must be set to the same value in minutes (e.g., 30 minutes).


To set the CSV free space threshold and receive a warning during placement, or trigger automatic VHD migration between CSVs in a cluster, you must enter a value in the Disk Space threshold setting. This value can be expressed either in GB or as a % of the CSV's allocated storage space. You can set the unit on the Host group Properties > Host Reserves page by selecting it in the drop-down. Once it is changed to either % or GB, the same unit is reflected on the Dynamic Optimization page.


You can also run storage Dynamic Optimization manually by right-clicking the cluster name and choosing Optimize Disk Space. The operation returns a list of all VHDs that will be migrated to other CSVs in the cluster if any CSV has free storage below the threshold.


Monitor Storage Health and Operation Status

In VMM 2019, Microsoft added new functionality to help you monitor the health and operational status of Storage pools, LUNs, and physical disks in the VMM fabric.

To check the storage health status, open the VMM console and go to Fabric > Storage > Classification and Pools. The Health Status column provides the status of pools, LUNs, and physical disks.


Ability to choose CSV during the creation of new VHD

In earlier versions of VMM, a new disk on a VM was placed by default on the same CSV as the earlier VHDs associated with the VM, so you could not choose a different CSV or folder in case the CSV storage was getting full. With VMM 2019, you can now choose any location for the placement of a new disk, giving you more flexibility to deploy new disks based on the storage availability of the CSVs.

To add a new disk to the VM, from the VMM console, go to Properties > Hardware Configuration > New Disk. You will now see a Browse button for the Virtual hard disk path.


Improvements to VMM Storage QoS

VMM 2019 supports the following improvements in storage QoS running on Windows Server 2016 and beyond:

  • Extension of Storage QoS support beyond Storage Spaces Direct – You can now assign storage QoS policies to Storage Area Networks (SAN). To create a Storage QoS Policy, go to Fabric > Storage > QoS Policies > Create Storage QoS Policy.


  • Support for VMM private cloud – storage QoS policies can now be consumed by the VMM cloud tenants. You can assign Storage QoS Policies while creating a cloud.


  • Availability of storage QoS policies as templates – You can set a storage QoS policy while creating VM template.

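
The policies above can also be created from the VMM command shell. A sketch, assuming the New-SCStorageQoSPolicy cmdlet and its IOPS parameters behave as described here (verify the exact names with Get-Help in your environment):

```powershell
# Sketch: create a storage QoS policy with minimum and maximum IOPS.
# "GoldTier" and the IOPS values are placeholder choices.
New-SCStorageQoSPolicy -Name "GoldTier" -IOPSMinimum 100 -IOPSMaximum 500
```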

Upgrade and patching of S2D clusters

In VMM 2019, you can update individual Storage Spaces Direct hosts or clusters against the baselines configured in Windows Server Update Services (WSUS). For more information on how to patch and update hosts/clusters in VMM, please check the following article.

Encrypted VM Networks

In VMM 2019, you can use the new Encrypted Networks feature: end-to-end encryption can be easily configured on VM networks by using the Network Controller (NC). This encryption prevents traffic between two VMs on the same subnet from being read and manipulated.
Encryption is controlled at the subnet level and can be enabled or disabled for each subnet of the VM network. VMM 2019 considers the VM network encrypted if any one of its subnets has encryption enabled.

This feature is managed through the SDN Network Controller (NC). If you do not already have a Software Defined Network (SDN) infrastructure with an NC, see this article for more information.


This feature currently provides protection from third-party and network admins and doesn’t offer any protection against fabric admins. Protection against fabric admins is in the pipeline and will be available soon.

Configure SLB VIPs through the VMM service template

Software-defined networking (SDN) in Windows Server 2016 and Windows Server 2019 can use Software Load Balancing (SLB) to evenly distribute network traffic among workloads managed by service providers and tenants. VMM 2016 supports deploying SLB Virtual IPs (VIPs) only through PowerShell. With VMM 2019, however, you can configure SLB VIPs while deploying multi-tier applications by using Service Templates. In addition, VMM 2019 supports load balancing of both public and internal network traffic. For more information on how to configure SLB VIPs through VMM service templates, please check the following guide.

Configuration of guest clusters in SDN environments

VMM 2016 supports guest clustering. However, with the advent of the Network Controller in Windows Server 2016 and System Center 2016, the configuration of guest clusters has changed: VMs connected to a virtual network are only permitted to use the IP addresses that the Network Controller assigns for communication. The SDN design is inspired by Azure networking and, like Azure, supports floating IP functionality through the Software Load Balancer (SLB). VMM 2019 also supports this floating IP functionality in SDN scenarios.

VMM 2019 supports guest clustering through an Internal Load Balancer (ILB) Virtual IP (VIP). The ILB uses probe ports created on the guest cluster VMs to identify the active node. At any given time, only the probe port of the active node responds to the ILB, and all traffic directed to the VIP is routed to the active node. For more information on how to configure guest clusters in SDN through VMM, please check the following article.

Convert SET switch to Logical Switch

VMM 2019 allows you to convert a Switch Embedded Teaming (SET) switch to a logical switch by using the VMM console; in earlier VMM versions, this was supported only through PowerShell.

For more information on how to convert Hyper-V Virtual Switch to Logical Switch in VMM, please check the following step-by-step guide.

Layer 2 Network Information for Hosts (LLDP)

VMM 2019 supports Link Layer Discovery Protocol (LLDP). You can now view network device properties and capabilities information of the hosts from VMM.

To fetch the LLDP properties and view this information in VMM, you must have the following prerequisites:

  • The host operating system must be Windows Server 2016 or later.
  • The DataCenterBridging and DataCenterBridging-LLDP-Tools features must be enabled on the hosts.
  • Reading LLDP information is not supported for host adapters having virtual switches deployed. You must remove the virtual switch before retrieving LLDP information for the adapter. This is useful for RDMA network adapters that are NOT part of a converged virtual switch.

By default, the LLDP packet wait time is set to 30 seconds. You can change this value through the registry key at Software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings\LLdpPacketWaitIntervalSeconds. The minimum value you can set is 5 seconds; the maximum is 300 seconds.
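
Assuming the setting lives under HKLM on the VMM management server (as other VMM server settings do), the wait interval can be adjusted like this:

```powershell
# Sketch: raise the LLDP packet wait time from the default 30 seconds to 60.
# Run on the VMM management server; the valid range is 5-300 seconds.
$path = "HKLM:\Software\Microsoft\Microsoft System Center Virtual Machine Manager Server\Settings"
Set-ItemProperty -Path $path -Name "LLdpPacketWaitIntervalSeconds" -Value 60 -Type DWord
```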

To get the LLDP details of network devices from the VMM console, go to View > Host > Properties > Hardware Configuration > Network adapter. The details displayed contain a time stamp. To get the current details, click the Refresh button.


Support for Static MAC address on VMs deployed on a VMM cloud

There are scenarios where certain products have their licensing tied to the MAC address of the server to which they are deployed. In these scenarios, it is desirable to provide a static MAC address for the VMs. In earlier versions of VMM, users were allowed to set a static MAC address on the VMs deployed on the hosts only. However, users did not have an option to set static MAC addresses for the VMs deployed on the cloud. With VMM 2019, you can set a static MAC address on the VMs deployed on VMM Cloud.

Please note that the MAC address assigned to the VM should be part of an accessible MAC pool, which you can find in Fabric > Networking > MAC Address Pools. As self-service users do not have visibility into the fabric MAC pools, they need to coordinate with admins to make sure that the MAC address is part of an accessible MAC pool.

You can set the static MAC address on the VM either while:

  • Deploying a new VM onto the cloud from the VHD / VM Template.
  • Changing the MAC address on an existing VM deployed to the cloud.
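
For an existing cloud VM, the static MAC address can also be set from the VMM command shell. A minimal sketch, assuming a VM named VM01 with a single network adapter (both the VM name and the MAC value are placeholders):

```powershell
# Sketch: assign a static MAC address to a VM's network adapter.
# The address must belong to an accessible fabric MAC pool.
$vm  = Get-SCVirtualMachine -Name "VM01"
$nic = Get-SCVirtualNetworkAdapter -VM $vm

Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $nic `
    -MACAddressType Static -MACAddress "00:1D:D8:B7:1C:00"
```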


Configure Layer 3 Forwarding Gateway

L3 forwarding enables connectivity between the physical infrastructure in the datacenter and the virtualized infrastructure in the Hyper-V network virtualization cloud.

Using L3 forwarding, tenant network virtual machines can connect to a physical network through the Windows Server 2016/2019 SDN Gateway, which is already configured in an SDN environment. In this case, the SDN gateway acts as a router between the virtualized network and the physical network.

You must configure a unique next-hop logical network, with a unique VLAN ID, for each Tenant VM network for which L3 forwarding needs to be set up. There must be 1:1 mapping between a tenant network and the corresponding physical network (with a unique VLAN ID).

Use the following steps to create the next-hop logical network in VMM:

1) On the VMM console, select Logical Networks, right-click, and select Create Logical Network.

2) In the Settings page, choose One connected network and select the checkbox for Create a VM network with the same name to allow virtual machines to access this logical network directly and Managed by Microsoft Network Controller.

3) Create an IP Pool for this new logical network. The IP address from this pool is required for setting up L3 forwarding.

New RBAC Role – VM Administrator

In a large environment, it's necessary to have a user role for troubleshooting, so this user can access all the VMs and make changes on them to resolve issues. There is also a need for such a user to have access to the fabric layer to identify the root cause of issues. However, for security reasons, the VM administrator user should not be given the privileges to make any changes on the fabric layer (such as adding storage or adding hosts).

The current RBAC in VMM does not have a role defined for this persona, and the existing Delegated Admin and Fabric Admin roles have either too few or more permissions than necessary to perform just troubleshooting.

To address this issue, Microsoft in VMM 2019 created a new role called Virtual Machine Administrator. The user of this role has Read and Write access to all VMs but read-only access to the fabric.

To create an RBAC role for a VM administrator, go to Settings > Users Roles > Create User Role. In the Create User Role Wizard, under the Profile page, select the Virtual Machine Administrator role as shown in the following screenshot.


On the Scope page, the VM Administrator role can be scoped to both clouds and host groups. Even if the user is scoped to a cloud, the user only has read access to the fabric elements attached to that cloud.


On the Permissions page, you can see a list of granular permissions that can be assigned to the role. New permissions like Migrate Virtual Machine, Migrate VM Storage, and Update VM Functional Level have been added to this role.

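
The role can also be created from the VMM command shell. A sketch, assuming "VMAdmin" is the profile value for the new role and using placeholder names (verify the exact profile value with Get-Help New-SCUserRole):

```powershell
# Sketch: create a VM Administrator role, add a member, and scope it to a cloud.
# "CONTOSO\vm-admins" and "ProdCloud" are placeholder names, and the
# "VMAdmin" profile value is our assumption for the new role.
$role  = New-SCUserRole -Name "VM Administrators" -UserRoleProfile "VMAdmin"
$cloud = Get-SCCloud -Name "ProdCloud"
Set-SCUserRole -UserRole $role -AddMember "CONTOSO\vm-admins" -AddScope $cloud
```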

Support for ‘Group Managed Service Account (gMSA)’

VMM 2019 supports using a gMSA as the management server service account. A gMSA helps improve the security posture, provides convenience through automatic password management, simplifies Service Principal Name (SPN) management, and allows delegating management to other administrators. For more information on how to create the gMSA account, please check the following step-by-step guide.

Please note that you do not need to specify the Service Principal Name (SPN) when creating the gMSA; the VMM service sets the appropriate SPN on the gMSA. You can use the following PowerShell commands to create a new gMSA account in Active Directory Domain Services (AD DS), depending on your VMM deployment.

# Use the following for a SCVMM stand-alone deployment.
# Note: -DNSHostName requires a value; contoso.com below is a placeholder domain,
# and computer accounts are referenced by their sAMAccountName (trailing $).
New-ADServiceAccount VMMgMSA -DNSHostName VMMgMSA.contoso.com `
 -PrincipalsAllowedToRetrieveManagedPassword SCVMM-2019$ `
 -KerberosEncryptionType RC4, AES128, AES256

# Use the following for a SCVMM highly available deployment.
# List all the nodes in the cluster and the cluster computer account name.
New-ADServiceAccount VMMgMSA -DNSHostName VMMgMSA.contoso.com `
 -PrincipalsAllowedToRetrieveManagedPassword SCVMMNode1$, SCVMMNode2$, SCVMMCLU$ `
 -KerberosEncryptionType RC4, AES128, AES256

During the VMM installation setup, on the Configure service account and distributed key management page, select Group Managed Service Account as shown in the following screenshot, and then enter the gMSA account details in the “DomainName\gMSA-account$” format, and be sure to include the dollar sign, $, at the end of the account name, as it is considered a computer account.

With gMSA support, the VMM server requests the password from AD on a regular basis and updates the SCVMM service with the new service account password, all in the background, giving you and your security team peace of mind that the service account password is reset regularly and is unknown to any human.

Linux Shielded VM Support

In Windows Server 2016 Hyper-V, Microsoft introduced the concept of a shielded VM for Windows OS-based virtual machines. Shielded VMs provide protection against malicious administrator actions both when the VM's data is at rest and when untrusted software is running on Hyper-V hosts.

With Windows Server, version 1709 and beyond, Microsoft introduced support for provisioning Linux-shielded VMs as well, and the same has been extended to VMM 2019. For more information on how to create a Linux-shielded VM template disk in VMM, please check the following step-by-step guide.

Integration with Azure Update Management

Azure Update Management helps you patch and update VMs that are managed by VMM by integrating VMM with your Azure Automation subscription. Currently, the scope is limited to new VMs deployed using a VM template that is mapped to an Azure subscription. Microsoft is also working to extend this functionality to all existing VMs in upcoming releases. For more information on how to patch and update on-premises VMs managed by VMM 2019, please check the following article.

Besides the Azure Update Management integration, VMM 2019 supports the management of ARM-based VMs and of region-specific Azure subscriptions, such as the Germany, China, and US Government Azure regions.


You can manage your ARM-based VMs by using Azure AD authentication or a management certificate. For detailed information, check the following article.

Note: Make sure you assign the Virtual Machine Contributor role to the application; otherwise, you won't be able to manage Azure VMs using VMM.


Thank you for reading my blog.

If you have any questions or feedback, please leave a comment.

-Charbel Nemnom-

About the Author
Charbel Nemnom
Charbel Nemnom is a Senior Cloud Architect with 21+ years of IT experience. As a Swiss Certified Information Security Manager (ISM), CCSP, CISM, Microsoft MVP, and MCT, he excels in optimizing mission-critical enterprise systems. His extensive practical knowledge spans complex system design, network architecture, business continuity, and cloud security, establishing him as an authoritative and trustworthy expert in the field. Charbel frequently writes about Cloud, Cybersecurity, and IT Certifications.


