Getting Started With Software-Defined Networking in Windows Server 2019


Introduction

In Windows Server 2016, Microsoft released Software-Defined Networking version 2 (SDNv2) as part of its Software-Defined Datacenter offering. If you have tried to deploy SDN in Windows Server 2016, you know that it is not easy. You can deploy SDN either with System Center Virtual Machine Manager (SCVMM) or with PowerShell alone, and both options were hard to use and deploy. If you are using SCVMM for management, you must also use VMM to perform the deployment, and from then on you manage the SDN environment through VMM. When deploying through SCVMM, you can use its UI or the automation provided by the VMM SDN Express PowerShell scripts. For more information on how to deploy SDN with VMM SDN Express, please check my previous article here.

In Windows Server 2019, Microsoft simplified SDN deployment and management with a new deployment UI and Windows Admin Center.

In this article, I will show you how to deploy Software-Defined Networking in Windows Server 2019 using the new SDN Express PowerShell module. In a follow-up blog post, I will dive into the SDN management side using Windows Admin Center.

Infrastructure Overview

I have the following servers already deployed for this environment:

  • Domain controller, DNS server, and DHCP server.
  • A two-node Hyper-Converged Infrastructure (HCI) cluster running Windows Server 2019 Datacenter Edition.
  • A management machine running Windows Server 2019 Datacenter Edition with Windows Admin Center version 1809.
  • Routing and Remote Access Service (RRAS) running in a separate virtual machine.
  • A management network for infrastructure communication.
  • A provider network for the virtualized workloads.

Please note that the management machine must NOT be running on the same host where you want to deploy SDN, because SDN enables the Azure VFP switch forwarding extension on each host; once the extension is enabled, only SDN-managed traffic passes through the Hyper-V virtual switch and other traffic is blocked. You can run the SDN deployment script either on an SDN host directly or from a machine outside the SDN stack.
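If you want to confirm what SDN did to the virtual switch on a host after the deployment, you can list the switch extensions and their state. This is a minimal sketch; "SDNSwitch" is a placeholder for your own virtual switch name, and the exact display name of the VFP extension can vary between builds.

# Run on a Hyper-V host after SDN deployment; "SDNSwitch" is a placeholder switch name.
Get-VMSwitch | Select-Object Name, SwitchType

# List the switch extensions and their state; look for the Azure VFP switch extension showing Enabled/Running.
Get-VMSwitchExtension -VMSwitchName "SDNSwitch" | Select-Object Name, Enabled, Running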

If you happen to be using USB NIC adapters in your environment and you plan to validate SDN as described in this article, the deployment will fail, because external USB NICs cannot pass tagged VLAN traffic.

To plan your Software-Defined Network Infrastructure correctly, please check the following guide from Microsoft.

I have also prepared a virtual hard disk containing Windows Server 2019 Datacenter Edition (Server Core or Full Server) that the SDN Express UI will use as a prerequisite for deploying the SDN stack (more on that later).

Deploying SDN with SDN Express UI

SDN Express is a UI, a PowerShell script, and a set of modules and functions to get you up and running quickly. The new guided UI performs parameter validation to catch errors at input time, which gives you an immediate opportunity to correct mistakes before the deployment starts. With the new SDN Express UI, Microsoft has greatly streamlined the deployment compared to previous versions of SDN Express, with minimal prerequisites.

The SDN Express UI module can be downloaded from GitHub here. After you download the files, save them on your management machine and take the following steps:

Import the PowerShell module (SDNExpressModule.psm1) by running the following command:

Import-Module .\SDNExpressModule.psm1
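If the import fails because Windows marked the downloaded files as blocked, you can unblock them first and import again. The folder name below is just an example of where you saved the files:

# Unblock the downloaded SDN Express files (example folder name), then re-import the module.
Get-ChildItem -Path .\SDNExpress -Recurse | Unblock-File
Import-Module .\SDNExpressModule.psm1 -Force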

After importing the module, type .\SDNExpress.ps1 and the SDN Express deployment wizard will launch. This wizard walks you through the configuration of SDN. Click Next to continue.

In the VM Creation step, enter the following prerequisite information for customizing the creation of the SDN VMs, and then click Next.

  • VHD Location for the virtual hard disk image.
  • VM Path. The path for the virtual machines can be a local path for standalone hosts, an SMB share, or a Cluster Shared Volume in the case of an S2D cluster (you can sanity-check the paths with the snippet after this list).
  • VM Name Prefix.
  • VM Domain Name.
  • Domain Join Username.
  • Domain Join Password.
  • Local Admin Password.
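Before continuing with the wizard, it is worth a quick sanity check that the VHD location and the VM path are reachable from the machine running SDN Express. The paths below are placeholders; substitute your own values:

# Placeholder paths; replace with your own VHD location and VM path.
$vhdPath = "\\fileserver\Images\WS2019-Datacenter.vhdx"
$vmPath  = "C:\ClusterStorage\Volume01\VMs"

Test-Path -Path $vhdPath    # should return True
Test-Path -Path $vmPath     # should return True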

In the Management Network step, provide the information about the management network that the SDN infrastructure will use for this deployment. This information is used to give each VM a network adapter configured for this network. The management network isn't strictly needed by the Network Controller unless you want to apply policy to it; however, it is needed to assign addresses to the VMs that the SDN stack creates and to configure basic networking. Click Next.
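As a basic check that the hosts and the future SDN VMs will be able to reach the core infrastructure over this management network, a quick connectivity test helps. The names and addresses below are placeholders for your own domain controller and DNS server:

# Placeholder names/addresses for the DC and DNS server on the management network.
Test-NetConnection -ComputerName dc01.contoso.local -Port 389   # LDAP to the domain controller
Test-NetConnection -ComputerName 10.10.10.5 -Port 53            # DNS server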

In the Provider Network step, provide the information about the provider network, which is used for all workload VM communication. The provider network is required by the Network Controller and will be created as a logical network in the Network Controller. You can also set a new MAC address range (First and Last); this is important if you have performed multiple deployments on the same VLAN. Click Next.
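After the deployment completes, you can verify that each host actually received provider addresses (PA) from this logical network. The PA addresses are placed in a separate network compartment on the host, so a plain ipconfig will not show them; run the following on a Hyper-V host instead:

# Run on each Hyper-V host after deployment; PA addresses live in their own network compartment.
ipconfig /allcompartments /all
# Look for a compartment with addresses from the provider subnet you specified in this step.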

In the Network Controller step, provide the information used to create the Network Controller and the Hyper-V hosts to be added to it. For the Multi-node option, you need at least three Hyper-V hosts, because the SDN deployment will create three Network Controller VMs. If you have a two-node cluster, as in this example, select the Single-node option instead and set the Network Controller up as a highly available VM so you still get failover if one node goes down. Please note that three Network Controllers can also be deployed on a two-node host cluster, but as a best practice it is better to have three nodes.

For the REST Name (FQDN) field, enter a fully qualified domain name to be assigned to the REST interface of the Network Controller. Add the Hyper-V Hosts and then enter the Host Credentials. Click Next.
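Once the deployment has finished, the REST name should resolve in DNS to the Network Controller. You can confirm it with a quick lookup; nc.contoso.local below is only a placeholder for the FQDN you enter here:

# Placeholder REST FQDN; replace with the value you entered in the wizard.
Resolve-DnsName -Name nc.contoso.local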

In the Software Load Balancer step, also known as MUX, specify how many load balancers you need. The default is 2; you can increase or decrease the number by moving the slider. The Software Load Balancer is an SDN-integrated L3/L4 load balancer that is also used for network address translation (NAT). The MUXes are the routers for the Virtual IP (VIP) endpoints. Then specify the Private VIP subnet as well as the Public VIP subnet. These subnets must not be configured on a VLAN in the physical switch, as they will be advertised by the load balancers through BGP. Click Next.
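After deployment, you can check that the MUX instances registered with the Network Controller by querying its northbound REST API. This is a sketch that assumes the v1 API path and the placeholder REST name nc.contoso.local; adjust both to your environment:

# Placeholder REST FQDN; run from a domain-joined machine with rights on the Network Controller.
$uri = "https://nc.contoso.local/networking/v1/loadBalancerMuxes"
(Invoke-RestMethod -Uri $uri -UseDefaultCredentials).value |
    Select-Object resourceId, @{n='state';e={$_.properties.provisioningState}}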

In the Gateways step, specify how many gateways you want. The minimum is 2; you can increase the number by moving the slider. Gateways are used for routing between a virtual network and another network (local or remote). SDN Express creates a default gateway pool that supports all connection types. Then specify the subnet prefix for the GRE endpoints. This subnet must not be configured on a VLAN in the physical switch, as the endpoints will be advertised to the physical network through BGP. The primary purpose of the GRE tunnels is to provide connectivity from the SDN virtual networks to a GRE-capable switch/router in the local datacenter. This can then be used to connect to physical workloads in the datacenter or to MPLS circuits to enable communication across the WAN. For more information about SDN GRE scenarios, please check the following documentation from Microsoft.

Click on Next to continue.

In the Border Gateway Protocol (BGP) step, provide the ASNs and the router IP address. BGP is used by the Software Load Balancer to advertise the VIPs to the physical network. It is also used by the gateways to advertise GRE endpoints. Click Next to continue.
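For reference, the BGP side on the RRAS router can be configured with the RemoteAccess BGP cmdlets. All ASNs and IP addresses below are purely illustrative; use the values you entered in this step:

# Run on the RRAS router VM. ASNs and IP addresses are example values only.
Add-BgpRouter -BgpIdentifier 10.10.56.1 -LocalASN 64623

# Peer with one SLB MUX instance (repeat for each MUX and gateway as needed).
Add-BgpPeer -Name "SLBMUX01" -LocalIPAddress 10.10.56.1 -PeerIPAddress 10.10.56.21 -LocalASN 64623 -PeerASN 64628

# Verify the peering comes up and which routes (VIPs, GRE endpoints) are learned.
Get-BgpPeer
Get-BgpRouteInformation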

In the final step, review the information that you entered for SDN Express to configure SDN in your environment. You can also export and save this configuration as a .psd1 file so you can re-run SDN Express later with this file using the -ConfigurationDataFile parameter. When ready, click Deploy.
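Keeping the exported .psd1 file around is handy: if you need to redeploy or extend the environment later, you can re-run SDN Express non-interactively against it. The file name below is just an example:

# Re-run the deployment with a previously exported configuration file (example file name).
.\SDNExpress.ps1 -ConfigurationDataFile .\MySDNConfig.psd1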

The actual deployment will take about 45 minutes, depending on how fast your system and network are. The Network Controller itself takes about 10 of those 45 minutes. When SDN Express finishes, your SDN environment is ready to be managed with the SDN extension for Windows Admin Center.
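Before moving on to Windows Admin Center, a quick health check is to query the Network Controller's northbound API for the hosts (servers) it manages. This sketch assumes the v1 API and the same placeholder REST name used earlier:

# Placeholder REST FQDN; run from a domain-joined machine with rights on the Network Controller.
$rest = "nc.contoso.local"
$servers = (Invoke-RestMethod -Uri "https://$rest/networking/v1/servers" -UseDefaultCredentials).value

# Each Hyper-V host should be listed with a Succeeded provisioning state.
$servers | Select-Object resourceId, @{n='state';e={$_.properties.provisioningState}}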

That’s it, there you have it!

Summary

This post highlights the effort Microsoft put into Windows Server 2019 to make SDN easier to deploy through SDN Express. Now that the entire SDN stack is deployed, you can go ahead and add the Network Controller URI in Windows Admin Center, deploy tenant workloads, and verify that everything looks good from a networking perspective. Stay tuned for the next post, where I will show you how to manage SDN using Windows Admin Center.

Last but not least, I want to thank the SDN team at Microsoft for supporting me during the deployment.

I encourage you to deploy and evaluate the SDN stack in Windows Server 2019 using the new SDN Express UI and share your feedback in the comments section below.

Thank you for reading my blog.

If you have any questions or feedback, please leave a comment.

-Charbel Nemnom-

