Windows Server 2016 #HyperV DEMO04: Distributed Storage QoS

[Updated: May 22nd, 2016]

Hello folks,

In today’s demo, I will show you a very interesting new feature introduced in the Windows Server Technical Preview: Distributed Storage QoS.

If you recall, in Windows Server 2012 R2, Microsoft added Storage QoS support for virtual machines, where you can go in and specify a minimum or maximum cap on a virtual disk and stop it from chewing up all the IOPS on the system.

[Figure: HV-vNext-DistStorageQoS-02]

• The Minimum IOPS per VHD/X: not enforced and not guaranteed; it is informational only (an alert is raised when the minimum cannot be met).
• The Maximum IOPS per VHD/X: caps the maximum storage performance of the disk (see the PowerShell sketch just below).
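For reference, here is a minimal PowerShell sketch of those per-disk settings in Windows Server 2012 R2; the VM name is just an example and the IOPS values are arbitrary:

```powershell
# Windows Server 2012 R2: per-disk Storage QoS, applied on the Hyper-V host.
# "SQL-VM01" is a hypothetical VM name. The minimum is informational only;
# the maximum is a hard cap on the disk's IOPS.
Get-VM -Name "SQL-VM01" |
    Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -MinimumIOPS 100 -MaximumIOPS 500
```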

That was great, but it was not enough; Microsoft is investing more and more in this area to bring more functionality.

There are a couple of big challenges with Storage QoS today in Windows Server 2012 R2.

The first one is: OK, great, we have a technology that lets us go in and say, for this virtual machine, cap it at this amount of IOPS, and for that other VM, cap it at another amount, and it all works great on a single Hyper-V server!

But to be honest, no one deploys a standalone Hyper-V host in production; of course, we leverage a Hyper-V cluster for high availability.

Storage QoS today doesn’t really work well if you have a dozen Hyper-V servers talking to shared storage at the back-end, because in Windows Server 2012 R2 those Hyper-V servers are not aware that they are competing with each other for storage bandwidth.

Thankfully, in the next release of Windows Server, Microsoft introduced a Distributed Storage QoS policy manager attached directly to the Scale-Out File Server as a cluster resource.

[Figure: HV-vNext-DistStorageQoS-01]
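If you want to verify this on your own cluster, a quick check along these lines should surface the policy manager resource (assuming the default resource name, "Storage QoS Resource"):

```powershell
# Confirm the Storage QoS policy manager is online on the Scale-Out
# File Server cluster; it appears as a regular cluster resource.
Get-ClusterResource -Name "Storage QoS Resource" |
    Select-Object Name, State, OwnerNode
```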

Of course, we still have what we had in Windows Server 2012 R2, where you can go to the virtual machine settings in Hyper-V and configure Storage QoS properties on each virtual hard disk. But in the Technical Preview, we can now also go to the Scale-Out File Server cluster and configure Storage QoS policies there, and this enables a couple of really interesting scenarios:

1- The first scenario: if you have multiple Hyper-V servers talking to the same file server at the back-end, all your Storage QoS policies get respected.

2- The second scenario actually allows us to do some really cool things: we can now start pooling Storage QoS policies and have a single policy that applies to multiple virtual hard disks/virtual machines instead of just one VM or one virtual hard disk (see the sketch after the note below).

Note: One important point to mention is that Distributed Storage QoS in Windows Server 2016 works for Scale-Out File Server (SOFS) deployments and on Cluster Shared Volumes (CSVs) as well.
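Here is a rough PowerShell sketch of that second scenario; the policy name, VM names, and IOPS values are just examples, and the policy type follows the TP5 naming covered in the update at the end of this post:

```powershell
# On the file server cluster: create one Aggregated policy whose
# min/max IOPS allocation is shared by every disk it is applied to.
$policy = New-StorageQosPolicy -Name "Gold" -PolicyType Aggregated `
    -MinimumIops 300 -MaximumIops 1500

# From any Hyper-V node: stamp the same policy onto the disks of
# several VMs; together they stay within the single "Gold" allocation.
Get-VM -Name "Web-VM01", "Web-VM02" |
    Get-VMHardDiskDrive |
    Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId
```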

So without further ado, let’s switch across to my demo system and see how Distributed Storage QoS works in action!

Update: In Windows Server 2016 Technical Preview 5, the Storage QoS policy type names have been changed: the Multi-instance policy is renamed Dedicated, and Single-instance is renamed Aggregated. The management behavior of Dedicated policies is also modified: VHD(X) files within the same virtual machine that have the same Dedicated policy applied to them will not share I/O allocations.
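To see the two renamed types side by side, something like the following sketch works; the policy names and IOPS values are just examples, and Get-StorageQosFlow is used to watch the resulting per-disk flows:

```powershell
# Dedicated (formerly Multi-instance): each disk under the policy
# gets its own min/max IOPS allocation.
New-StorageQosPolicy -Name "Silver" -PolicyType Dedicated `
    -MinimumIops 100 -MaximumIops 500

# Aggregated (formerly Single-instance): all disks under the policy
# share one combined allocation.
New-StorageQosPolicy -Name "Bronze" -PolicyType Aggregated `
    -MinimumIops 100 -MaximumIops 500

# Watch per-disk flows and their normalized IOPS from the cluster.
Get-StorageQosFlow |
    Sort-Object InitiatorName |
    Format-Table InitiatorName, FilePath, StorageNodeIOPs -AutoSize
```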

I hope you enjoyed the demo, and I would like to thank you for viewing.

Cheers,
/Charbel
