Running Storage Spaces Direct on HPE ProLiant MicroServers #S2D #HyperV #WS2016


Warning! This article is written for lab and demo purposes only. This configuration is not supported in production by Microsoft, HPE, or myself.


In Windows Server 2016, Microsoft added a new type of storage called Storage Spaces Direct (aka S2D). S2D enables building highly available storage systems with locally attached disks, without the need for any external SAS fabric such as shared JBODs or enclosures. This is the first true Software-Defined Storage (SDS) offering from Microsoft. Software-Defined Storage is a concept in which data is stored without dedicated, proprietary storage hardware.

SDS is the keystone of any modern data center. Storage Spaces Direct is a major evolution of the datacenter vision. Before the introduction of SDS solutions such as Storage Spaces Direct, StarWind Virtual SAN, or VMware Virtual SAN, a SAN or NAS was typically implemented to store virtual machines. S2D helps you avoid investing in expensive SAN and NAS solutions by combining your existing NVMe, SSD, or HDD (SATA/SAS) drives into a high-performing and simple storage solution for your workloads. This is a significant step forward for Microsoft's Software-Defined Storage story in Windows Server 2016, reducing both hardware and operational costs.

With S2D, we have two deployment models: the hyper-converged model, where Storage Spaces Direct and the hypervisor (Hyper-V) run on the same set of hardware, and private cloud storage, where the Storage Spaces Direct cluster is separate from the hypervisor (known as the converged, or disaggregated, model). The hyper-converged deployment groups compute and storage together. This simplifies the deployment model, but compute and storage scale at the same time. A disaggregated deployment separates the compute and storage resources. This model enables scaling compute and storage independently for larger scale-out deployments and avoids over-provisioning.

For more information about Storage Spaces Direct (S2D) in Windows Server 2016, please check the following article:

In this article, we will walk you through how to deploy Microsoft Storage Spaces Direct in the hyper-converged model on top of HPE ProLiant MicroServers, and finally test the deployment.

Network Overview

The diagram below gives you a high-level overview of the network topology that we will be using in this example.

In this example, we have two HPE Top of Rack switches (model: PS1810-8G) with Jumbo Frames and Spanning Tree enabled.

Hardware Configuration

2 X HPE ProLiant MicroServer Gen8, each system with the following specs:

  • 1 HPE micro SD 32GB Enterprise Mainstream Flash Media
  • 1 SATA SSD 2.5” 512 GB – model: SanDisk SD7SB2Q-512G-1006
  • 2 SATA SSD 2.5” 1 TB – model: Samsung SSD 840 EVO 1TB
  • 2 SATA HDD 3.5” 4 TB 7.2K rpm – model: MB4000GCVBU
  • 2 NIC 1 Gb Ethernet – model: Broadcom NetXtreme Gigabit Ethernet (built-in)
  • 1 NIC 1 Gb Ethernet – model: Intel(R) 82574L Gigabit Network Connection
  • 2 DDR3 8GB RAM 1600 MHz – 16 GB Total Memory
  • 1 Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz 4/4 cores; 8 threads

Firmware Version Information

  • Broadcom NetXtreme Gigabit Ethernet: 1.39.0
  • iLO: 2.50 Sep 23, 2016
  • Intelligent Provisioning: 1.64.121
  • Redundant System ROM: J06 07/16/2015
  • Server Platform Services (SPS) Firmware:
  • System ROM: J06 11/02/2015
  • System ROM Bootblock: 02/04/2012

In the following steps, we will illustrate the hardware installation and configuration that we will be using in this example:

  • We installed 1 micro SD 32 GB to be used for the operating system, and we added a 1 Gb Ethernet card for redundancy. Each system has 2 X 1 Gb built-in Ethernet adapters; we will team those NICs to get 3 Gb of aggregate network bandwidth (more on that later).

  • The Optical Disk Drive (ODD) is disconnected and replaced with 1 X SSD drive to be used for the S2D cache. Please note that for production, it’s recommended to have a minimum of two cache drives per server to preserve performance if one cache drive fails.

  • We installed 2 X SSD 2.5” drives to be used for the S2D performance tier and 2 X HDD 3.5” drives for the capacity tier (more on that later). In Windows Server 2016, S2D can be deployed with 2 up to 16 nodes. These minimum and maximum limits are based on your fault tolerance requirements and what Microsoft supports. Since we are using only two nodes, we are limited to tolerating one failure, either a drive or a server but not both, and volumes are limited to two-way mirror resiliency. Mirroring is like distributed, software-defined RAID-1.
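To make the two-way mirror trade-off concrete, here is a quick back-of-the-envelope sketch of the capacity tier; real usable capacity is lower due to metadata, reserve capacity, and binary vs. decimal drive sizing:

```powershell
# Two-way mirror keeps 2 copies of all data, so usable capacity is ~50% of raw
$rawCapacityTier = 2 * 2 * 4TB     # 2 nodes x 2 HDDs x 4 TB = 16 TB raw
$usable = $rawCapacityTier / 2     # two-way mirror -> ~8 TB usable before overhead
"{0:N1} TB usable (before overhead)" -f ($usable / 1TB)
```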

  • Storage Spaces is only compatible with HBAs where you can completely disable all RAID functionality. Thus, we need to disable the built-in RAID support of the HP Dynamic Smart Array B120i controller and then enable SATA Advanced Host Controller Interface (AHCI) support.

  • Once you enable SATA AHCI support and reboot your system, all disks will be exposed for SATA’s advanced capabilities (such as hot swapping and native command queuing) so the host system can use them.
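One way to sanity-check this after the reboot is to list the local disks with Get-PhysicalDisk and confirm the bus type and pooling eligibility (run locally on each node):

```powershell
# Disks should report BusType SATA and CanPool True (except the OS/boot disk)
Get-PhysicalDisk | Sort-Object Size |
    Format-Table FriendlyName, BusType, MediaType, CanPool, Size -AutoSize
```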

Software Configuration

We have the following set of software deployed for this demo:

  • Domain controller and DNS server
  • System Center Virtual Machine Manager 2016 with Rollup Update 3
  • Host: Windows Server 2016 Datacenter Core Edition with June 2017 update
    • Single Storage Pool
    • 2 X 2 TB 2-copy mirror volume
    • CSVFS_REFS file system
    • 10 virtual machines (5 VMs per host)
    • 2 virtual processors and 2 GB RAM per VM
    • VM: Windows Server 2016 Datacenter Core Edition with June 2017 update
    • Jumbo Frame Enabled on all NICs

Network and Pre-Configuration Tasks

In the following steps, we will illustrate the network and pre-configuration steps:

  • Install the Hyper-V, Failover Clustering, and File Server roles. Set the necessary Windows Firewall rules, set the power options to the High Performance plan, and remove SMB version 1. You can use the following set of PowerShell commands to automate this step.
# S2D hyper-converged cluster pre-configuration
$Nodes = "S2D-HV01", "S2D-HV02"

Invoke-Command -ComputerName $Nodes -ScriptBlock {

#Install Hyper-V, Failover Clustering and File Server
Install-WindowsFeature Hyper-V, Failover-Clustering, FS-FileServer -IncludeAllSubFeature -IncludeManagementTools -Verbose

#Set Windows Firewall
Set-NetFirewallRule -Group "@firewallapi.dll,-36751" -Profile Any -Enabled True
Set-NetFirewallRule -DisplayName 'Windows Remote Management (HTTP-In)' -Profile Any -Enabled True -Direction Inbound -Action Allow
Set-NetFirewallRule -DisplayName 'Windows Management Instrumentation (WMI-In)' -Profile Any -Enabled True -Direction Inbound -Action Allow
Set-NetFirewallRule -DisplayName "Remote Volume Management - Virtual Disk Service (RPC)" -Profile Any -Enabled True -Direction Inbound -Action Allow
Set-NetFirewallRule -DisplayName "Remote Volume Management - Virtual Disk Service Loader (RPC)" -Profile Any -Enabled True -Direction Inbound -Action Allow

#Set Windows Power plan to High Performance
powercfg.exe /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

#Remove SMBv1
Remove-WindowsFeature -Name FS-SMB1 -Verbose -Restart
}
  • In this step, we will set the cluster memory dump type to “Active memory dump”. In Windows Server 2016, Microsoft added this new option for creating memory dumps when a system failure occurs, and it is the recommended setting for Failover Clustering. It can be set in the “Startup and Recovery” dialog (System control panel, Advanced system settings), or by using the Set-ItemProperty cmdlet to specify the value of CrashDumpEnabled.
# Set Active memory dump on Server Core
Invoke-Command -ComputerName $Nodes -ScriptBlock {
# Configure Active memory dump (CrashDumpEnabled = 1 plus FilterPages = 1)
Set-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name CrashDumpEnabled -Value 1
New-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl -Name FilterPages -Value 1
Get-ItemProperty -Path HKLM:\System\CurrentControlSet\Control\CrashControl
}

We will create the following vNICs in the host partition as part of a converged SET (Switch Embedded Teaming) virtual switch.

  • Open the host properties in VMM for each host and then deploy the logical switch as shown in the following screenshot.
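If you are not using VMM, a roughly equivalent converged SET switch can be built directly with PowerShell; the switch, adapter, and vNIC names below are assumptions, so adjust them to your environment:

```powershell
Invoke-Command -ComputerName $Nodes -ScriptBlock {
    # Create a Switch Embedded Teaming (SET) switch over the three 1 Gb NICs
    # (adapter names are hypothetical - check yours with Get-NetAdapter)
    New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1", "NIC2", "NIC3" `
        -EnableEmbeddedTeaming $true -AllowManagementOS $false

    # Add the host vNICs described above
    foreach ($vNIC in "Management", "Storage", "Cluster", "Backup", "Replica", "LiveMigration") {
        Add-VMNetworkAdapter -ManagementOS -SwitchName "SETswitch" -Name $vNIC
    }
}
```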

  • Next, we will disable DNS registration for the Storage, Cluster, Backup, Replica, and Live Migration vNICs by running the following PowerShell commands. Note: DNS registration should be enabled only on the Management host vNIC.
Invoke-Command -ComputerName $Nodes -ScriptBlock {
# Disable DNS registration on every interface except the Management vNIC
Get-DnsClient | Where-Object {$_.InterfaceAlias -notlike "*Management*"} | Set-DnsClient -RegisterThisConnectionsAddress $False
}
  • In this step, we will enable Jumbo Frames on all vNICs by running the following PowerShell commands:
# Configure Jumbo Frames on each network adapter
Invoke-Command -ComputerName $Nodes -ScriptBlock {
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | Set-NetAdapterAdvancedProperty -RegistryValue 9014
Get-NetAdapterAdvancedProperty -Name * -RegistryKeyword "*jumbopacket" | FT -AutoSize
}

  • You should now be able to send jumbo packets. You can verify this by pinging from one host to another with a large, unfragmented payload.
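For example, assuming a 9014-byte jumbo frame setting (a 9000-byte IP MTU), you could ping a neighboring host vNIC with an 8972-byte payload and the “do not fragment” flag set; the destination address below is a placeholder:

```powershell
# 8972 = 9000-byte IP MTU minus 28 bytes of IP and ICMP headers
# -f sets "do not fragment", -l sets the payload size
ping <Destination-IP> -f -l 8972
```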

Create S2D Cluster

In the following steps, we will create an S2D cluster, but before doing so we will run cluster validation:

  • Open Windows PowerShell and run the following command:
Test-Cluster -Node $Nodes -Include Inventory, Network, "System Configuration", "Storage Spaces Direct" -Verbose
  • As you can see in the next screenshot, the Failover Cluster validation succeeded.

  • Once cluster validation has succeeded, we will move on to creating the cluster by running the following command. Please note that the “-NoStorage” parameter is important; without it, the disks would be automatically added to the cluster and would not be available for the Storage Spaces Direct storage pool (more on that in the next section).
# New S2D cluster
$Cluster = "NINJA-S2DCLU"
New-Cluster -Name $Cluster -Node $Nodes -NoStorage -StaticAddress <StaticIPAddress> -Verbose
  • In this step, we will configure the cloud witness. The requirement for a cloud witness is an active Azure subscription in which you have created a storage account. We’ll configure the cluster with a cloud witness by running the following PowerShell command. Please update the -AccountName and -AccessKey parameters accordingly.
# Configure Cloud Witness
Set-ClusterQuorum -Cluster $Cluster -CloudWitness -AccountName <AccountName> -AccessKey <AccessKey>
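To confirm the witness is in place, you can query the cluster quorum configuration; the quorum resource should now show as a Cloud Witness:

```powershell
# Verify the quorum configuration now reports a Cloud Witness
Get-ClusterQuorum -Cluster $Cluster
```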
  • In the final step, we will enable the CSV cache and set it to 1 GB by running the following command (note that BlockCacheSize is specified in MB):
# Set CSV block cache size to 1 GB
(Get-Cluster -Name $Cluster).BlockCacheSize = 1024

Enable Storage Spaces Direct

In the following steps, we will enable Storage Spaces Direct:

  • Open Windows PowerShell and enable Storage Spaces Direct by running the following command. Please note the “-CacheDeviceModel” parameter.
Enable-ClusterS2D -CimSession $Cluster -PoolFriendlyName "NINJA-S2D-HVPOOL" -CacheDeviceModel "SanDisk SD7SB2Q-512G-1006" -Confirm:$false -Verbose

  • As discussed earlier, we want to use only one cache device in this deployment, with the remaining four drives used for the performance and capacity tiers. If you enable Storage Spaces Direct without specifying the cache device model, S2D will by default automatically select all of the fastest drives (NVMe or SSD) and use them as cache devices. Recall that in this deployment we have a total of 3 SSDs and 2 HDDs. The cache device is given the “Journal” usage allocation.
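You can verify which drive was claimed for cache by inspecting the usage allocation of the physical disks in the pool; the cache SSD should report “Journal” while the remaining drives report “Auto-Select”:

```powershell
# List the pool's physical disks with their usage allocation
Get-StoragePool -FriendlyName "NINJA-S2D-HVPOOL" -CimSession $Cluster |
    Get-PhysicalDisk -CimSession $Cluster |
    Sort-Object Usage | Format-Table FriendlyName, MediaType, Usage -AutoSize
```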

  • To find the cache device model name, you can use the following command:
Invoke-Command -ComputerName $Nodes -ScriptBlock {
Get-PhysicalDisk | Group-Object Model -NoElement
}

  • When you enable Storage Spaces Direct, S2D automatically creates two storage tiers, known as the performance tier and the capacity tier. The Enable-ClusterS2D cmdlet analyzes the devices and configures each tier with the appropriate mix of device types and resiliency (mirror or parity). In other words, the storage tier details and resiliency depend on the storage devices in the system and thus vary from system to system. In this example, since we have only two nodes, we are limited to the “Mirror” resiliency setting. The two remaining SSDs will be used for the performance tier, and the two HDDs for the capacity tier. To determine the supported storage size for each tier, you can use the following commands:
# Get supported storage size for the Performance (Mirror) tier
Get-StorageTierSupportedSize -FriendlyName Performance -CimSession $Cluster | `
Select @{l="TierSizeMin(GB)";e={$_.TierSizeMin/1GB}}, @{l="TierSizeMax(TB)";e={$_.TierSizeMax/1TB}}, @{l="TierSizeDivisor(GB)";e={$_.TierSizeDivisor/1GB}}

# Get supported storage size for the Capacity (Mirror) tier
Get-StorageTierSupportedSize -FriendlyName Capacity -CimSession $Cluster | `
Select @{l="TierSizeMin(GB)";e={$_.TierSizeMin/1GB}}, @{l="TierSizeMax(TB)";e={$_.TierSizeMax/1TB}}, @{l="TierSizeDivisor(GB)";e={$_.TierSizeDivisor/1GB}}

As you can see in the next screenshot, in this example we have around 1.8 TB on the performance tier and 7.2 TB on the capacity tier.

  • In the last step, we will create the CSV volumes. In this example, since we have two nodes, we will create two two-way mirror volumes (performance/capacity), one volume per node. Open Windows PowerShell and run the following command:
Foreach ($Node in $Nodes) {
New-Volume -CimSession $Node -StoragePoolFriendlyName *S2D* -FriendlyName $Node -FileSystem CSVFS_REFS -StorageTierFriendlyNames Performance, Capacity -StorageTierSizes 400GB, 1648GB -AllocationUnitSize 64KB -ProvisioningType Fixed -Verbose
}
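Once the volumes are created, a quick check confirms the Cluster Shared Volumes and their sizes:

```powershell
# Confirm the new Cluster Shared Volumes, their state, and size
Get-ClusterSharedVolume -Cluster $Cluster |
    Select-Object Name, State, @{l="Size(TB)";e={$_.SharedVolumeInfo.Partition.Size/1TB}}
```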

Workload Test Configuration

  • DISKSPD version 2.0.17 workload generator
  • VM Fleet workload orchestrator
  • Total 154K IOPS – read latency @ 0.5 ms and write latency @ 3 ms

Each VM was configured with:

  • 4K IO size
  • 10 GB working set
  • 100% read and 0% write
  • No Storage QoS


In this article, we showed you how to deploy Storage Spaces Direct on two nodes using HPE ProLiant MicroServer Gen8 systems. This setup is intended for lab and test environments only and is not supported for production.

We also tested the deployment and generated 154K IOPS by pushing the MicroServers to their limits.

Make sure to check my recently published book Getting Started with Nano Server for a complete step by step guide on how to deploy Storage Spaces Direct in production.

Thanks for reading!
[email protected]

About Charbel Nemnom
Charbel Nemnom is a Cloud Architect, ICT Security Expert, Microsoft Most Valuable Professional (MVP), and Microsoft Certified Trainer (MCT) with over 17 years of broad IT infrastructure experience, serving on and guiding technical teams to optimize the performance of mission-critical enterprise systems. He is an excellent communicator, adept at identifying business needs and bridging the gap between functional groups and technology to foster targeted and innovative IT project development, with extensive practical knowledge of complex system builds, network design, business continuity, and cloud security.

