How to Create a Multi-Resilient Volume with ReFS on a Standalone Server in Windows Server 2016? #WS2016 #HyperV #StorageSpaces

Introduction

A while ago, I wrote a blog post on how to create a Two-way Mirror Storage Space in Windows Server 2012 R2.

In Windows Server 2016, Microsoft introduced a new storage solution called Storage Spaces Direct (a.k.a. S2D). For more information about Storage Spaces Direct in Windows Server 2016, see Storage Spaces Direct overview.

In Storage Spaces Direct, the same volume can be part mirror and part parity, and the Resilient File System (ReFS) will automatically move data back and forth between the two tiers in real time depending on what's hot and what's cold. This is very powerful compared to what we had in Windows Server 2012 R2, where tiering is triggered by a scheduled task that runs by default at 1:00 AM.

ReFS is a new file system that was introduced in Windows Server 2012 and was designed to overcome issues that had become significant over the years since NTFS was conceived. In Windows Server 2016, Microsoft enhanced ReFS, and it is now the preferred file system for Storage Spaces Direct and Storage Spaces deployments. This updated version offers many new capabilities for private cloud workloads, such as data integrity, resiliency to corruption and availability, and speed and efficiency.

I can hear you loud and clear out there if you are wondering whether ReFS in Windows Server 2016 is supported on block storage (Cluster Shared Volumes with Fibre Channel or Serial-Attached SCSI). The answer is NO! Hopefully this will change in the future. Besides the real-time tiering in ReFS, there are other benefits such as much faster fixed VHDX creation and extension, and faster checkpoint merges.

Please note that the Storage Spaces deployment for Standalone and Cluster mode with external JBODs remains the same between Windows Server 2012 R2 and Windows Server 2016.

In this blog, I am going to focus on how to deploy Storage Spaces with storage tiering using ReFS on a standalone server in Windows Server 2016.

The following figure illustrates the Storage Spaces workflow for Standalone deployment:

[Figure: Storage Spaces workflow for standalone deployment]

Prerequisites

To use Storage Spaces on a stand-alone Windows Server 2016-based server, make sure that the physical disks that you want to use meet the following prerequisites:

Disk bus types: Serial Attached SCSI (SAS) or Serial Advanced Technology Attachment (SATA). In this example, we have SAS disks.

Disk configuration: Physical disks must be at least 4 GB. In this example, we have 8 HDDs of 1.64 TB each and 4 SSDs of 894 GB each.

HBA considerations: Storage Spaces is compatible only with HBAs where you can completely disable all RAID functionality. Furthermore, you need to make sure that the HBA you are using is fully supported for Windows Server Storage Spaces. In this example, I am using an HPE Smart HBA H240. This adapter is configured in RAID mode by default; you need to enable HBA mode as shown in the following screenshot, and then reboot your system.

[Screenshot: HPE Smart HBA H240 controller settings with HBA mode enabled]

JBOD enclosures: A JBOD enclosure is optional. Verify with your storage vendor that the JBOD enclosure supports Windows Server Storage Spaces. In this example, we are using Direct Attached Storage (DAS).

Resiliency type: For resiliency, you can use a "Simple" space, which does not provide any resiliency and does not protect you from disk failure. You can use a "Mirror" space to store two or three copies of the data across the set of physical disks. A two-way mirror requires at least two physical disks to protect from a single disk failure, whereas a three-way mirror requires at least five physical disks to protect from two simultaneous disk failures. If you are using tiering, you need at least three disks per tier, with a minimum of five disks total in the pool to maintain pool metadata. In this example, we have 4 SSDs and 8 HDDs, so we can create a three-way mirror tiered resilient space. You can also use a "Parity" space, which stripes data and parity information across the physical disks. Please note that "Mirror" volumes are faster than any other resiliency type and are recommended for storing Hyper-V VHD(X) files, whereas "Parity" spaces are recommended for archive or backup workloads.

Create a Storage Pool

The first step is to check the available disks that can be pooled into one or more storage pools.

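The original command is shown as a screenshot; as a minimal sketch, assuming the in-box Storage module cmdlets, the check looks like this (the Format-Table columns are just a suggestion):

# List the physical disks that are eligible to be pooled
Get-PhysicalDisk -CanPool $true |
    Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, CanPool, Size -AutoSize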

In this example, I have 4 SSDs and 8 HDDs, a 1:2 ratio.

Next, we need to create a new storage pool named "StoragePool01" that uses all available disks.

Open Windows PowerShell and run the following commands:

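The commands appear as a screenshot in the original post; the following is a sketch of the equivalent PowerShell, assuming the default "Windows Storage" subsystem name and the $poolDisks variable used purely for illustration:

# Grab every disk that can be pooled and create the pool from them
$poolDisks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "StoragePool01" `
    -StorageSubSystemFriendlyName "Windows Storage*" `
    -PhysicalDisks $poolDisks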

You can check the newly created pool by running the following command:

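Something along these lines, assuming the default Get-StoragePool output is enough for the check:

# Display the pool with its health status and total size
Get-StoragePool -FriendlyName "StoragePool01" |
    Format-Table FriendlyName, OperationalStatus, HealthStatus, Size -AutoSize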

The total raw disk size for the storage pool is 16.6 TB, without any type of resiliency applied.

Create a Tiered Virtual Disk

In the next step, I will create two storage tiers, one for SSD and one for HDD.

Please note that if you are using Storage Spaces Direct (S2D), this step is taken care of for you automatically behind the scenes, because when you enable S2D, it will automatically create two storage tiers (defined by their resiliency): a mirror tier and a parity tier. The parity tier is called "Capacity" and the mirror tier is called "Performance". The Enable-ClusterS2D cmdlet analyzes the devices and configures each tier with the appropriate mix of device types and resiliency; in other words, the storage tier details depend on the storage devices in the system and thus vary from system to system.

Since we have a standalone system and not S2D, we will create a three-way mirror with two tiers manually; the SSD tier will be named "Performance" and the HDD tier will be named "Capacity". Open Windows PowerShell and run the following two commands:
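The two commands are not reproduced in this copy of the post; a sketch of them, assuming both tiers use three-way mirror resiliency as described above, would be:

# SSD tier, three-way mirror
New-StorageTier -StoragePoolFriendlyName "StoragePool01" -FriendlyName "Performance" `
    -MediaType SSD -ResiliencySettingName Mirror -NumberOfDataCopies 3

# HDD tier, three-way mirror
New-StorageTier -StoragePoolFriendlyName "StoragePool01" -FriendlyName "Capacity" `
    -MediaType HDD -ResiliencySettingName Mirror -NumberOfDataCopies 3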

Next, we will check the size of each tier we created by running the following commands:
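Again as a sketch under the same assumptions, Get-StorageTierSupportedSize reports the minimum and maximum size each tier can offer with the chosen resiliency:

# Maximum usable size of each tier with three-way mirror resiliency
Get-StorageTierSupportedSize -FriendlyName "Performance" -ResiliencySettingName Mirror |
    Format-Table TierSizeMin, TierSizeMax -AutoSize
Get-StorageTierSupportedSize -FriendlyName "Capacity" -ResiliencySettingName Mirror |
    Format-Table TierSizeMin, TierSizeMax -AutoSize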

Note: The three-way Mirror offers 1187.5 GB on the SSD Tier (894 GB * 4 disks / 3).

Note: The three-way Mirror offers 4464 GB on the HDD Tier (1.64 TB * 8 disks /3).

The three-way mirror offers 3 data copies. The two simultaneous disk failures could be any of the following combinations based on this example:

  • 1 SSD + 1 HDD 
  • 2 HDDs
  • 1 SSD + 2 HDDs: The SSD tier and the HDD tier will both remain active even if we lose 2 HDDs at the same time. Recall that we have 8 disks in the HDD tier, and the minimum is 5 disks per tier to survive two simultaneous disk failures. However, this does not apply to the SSD tier in this example since we have only 4 disks; in other words, if we lost two disks from the SSD tier simultaneously, the Multi-Resilient Volume would go offline.

Create a Multi-Resilient Volume with ReFS

In the last step, we will create a Multi-Resilient Volume (a.k.a. MRV, often called a hybrid volume). In this example, we will create a volume of 30 GB in size, with 10 GB from the SSD tier and 20 GB from the HDD tier. Run the following command:
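The command itself appears as a screenshot in the original post; a sketch of it, assuming the tier names created above and a hypothetical "HybridVolume" friendly name and drive letter, would be:

# 30 GB multi-resilient volume: 10 GB on the SSD tier, 20 GB on the HDD tier
New-Volume -StoragePoolFriendlyName "StoragePool01" -FriendlyName "HybridVolume" `
    -FileSystem ReFS -DriveLetter V `
    -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes 10GB, 20GB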

To demonstrate the real-time tiering with ReFS between the "Performance" tier and the "Capacity" tier, I will copy a file that is less than 10 GB in size.

As you can see, the read/write activity is served only from the SSD Performance tier in this case.

Let's now move a file larger than 10 GB to the hybrid volume.

As you can see, the SSD Performance tier is now full, so ReFS real-time tiering kicked in and the data is being de-staged and moved to the HDD Capacity tier.

If you want to create a Multi-Resilient Volume using the full capacity of both tiers, you can run the following command:
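As a sketch under the same assumptions, you can query the maximum supported size of each tier and pass those values to New-Volume; the variable names and the "HybridVolumeFull" friendly name below are just for illustration:

# Query the maximum size each tier can provide with mirror resiliency
$perfMax = (Get-StorageTierSupportedSize -FriendlyName "Performance" -ResiliencySettingName Mirror).TierSizeMax
$capMax  = (Get-StorageTierSupportedSize -FriendlyName "Capacity" -ResiliencySettingName Mirror).TierSizeMax

# Create the multi-resilient volume consuming both tiers entirely
New-Volume -StoragePoolFriendlyName "StoragePool01" -FriendlyName "HybridVolumeFull" `
    -FileSystem ReFS -StorageTierFriendlyNames Performance, Capacity `
    -StorageTierSizes $perfMax, $capMax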

Have a great weekend!

Cheers,
-Ch@rbel-
