Running Microsoft Storage Spaces Direct On DELL EMC PowerEdge R740xd Ready Nodes

I recently completed a two-node Storage Spaces Direct hyper-converged deployment on top of Dell EMC PowerEdge R740xd Ready Nodes.

In this quick blog post, I would like to share my experience and performance test results with you.

2-Node Cluster – Hardware Platform

I used the following hardware configuration:

  • DELL PowerEdge R740XD S2D Ready Nodes
  • 2U Rack Server with up to 12 x 3.5″ drive bays
  • 2 x Intel Xeon Silver 4110 2.1GHz (11MB Cache, 8C/16T)
  • 192GB (12x16GB) 2666MT/s RDIMM Memory
  • Dell Boot Optimized Storage Solution (BOSS)
  • PCIe Card with 2 x 240GB SATA M.2 SSD (RAID1)
  • 4 x 1.92TB SAS-12G 3.5″ Mixed-Use Hot-Plug SSD
  • 8 x 2TB NLSAS-12G 3.5″ 7.2K Hot-Plug HDD
  • DELL PERC H330+ 12G-SAS RAID Controller
  • Trusted Platform Module 2.0
  • Intel Dual Port I350 1GbE NIC
  • Intel Dual Port X710 SFP+ 10GbE NIC
  • QLogic Dual Port FastLinQ 41262 SFP28 10/25GbE Ethernet Adapter (supports both RoCE v2 and iWARP)

The Dell EMC Microsoft Storage Spaces Direct Ready Nodes come pre-configured with certified components, tested and validated by Dell EMC and Microsoft, to help you build Storage Spaces Direct clusters with ease. It's almost a plug-and-play system; great hardware indeed!

In this configuration, each SSD serves as a cache device for two of the slower capacity HDDs (a 1:2 cache-to-capacity ratio).
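When you enable Storage Spaces Direct, it automatically claims the faster media (the SSDs here) as cache. A quick way to confirm the binding, if you run this from a cluster node, is to look for disks whose Usage is reported as Journal:

  # Enable S2D; the four SSDs are claimed as cache for the eight HDDs automatically
  Enable-ClusterStorageSpacesDirect

  # Cache devices are reported with Usage = Journal
  Get-StorageSubSystem Cluster* |
      Get-PhysicalDisk |
      Where-Object Usage -Eq 'Journal' |
      Select-Object FriendlyName, MediaType, Usage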

Software Configuration

  • Host: Windows Server 2016 Datacenter Edition with October 2018 update
  • Single Storage Pool
  • 2 x 5TB two-way mirror volumes
  • ReFS/CSVFS file system
  • 40 virtual machines (20 VMs per node)
  • 2 virtual processors and 4 GB RAM per VM
  • VM: Windows Server 2016 Datacenter Core Edition with October 2018 update
  • Jumbo Frame enabled
  • CSV Block Cache enabled @ 8GB (see the PowerShell sketch below)
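
For reference, here is roughly how the two mirror volumes and the CSV block cache can be set up in PowerShell. The volume names and the pool name are my placeholders (the default S2D pool is named "S2D on <cluster name>"), not values from the original deployment:

  # Create two 5TB two-way mirror CSV volumes on the single storage pool
  New-Volume -StoragePoolFriendlyName 'S2D on Cluster01' -FriendlyName 'Volume01' -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 5TB
  New-Volume -StoragePoolFriendlyName 'S2D on Cluster01' -FriendlyName 'Volume02' -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 5TB

  # Set the CSV in-memory read cache to 8GB (the value is in MB)
  (Get-Cluster).BlockCacheSize = 8192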

Workload Configuration

  • DISKSPD version 2.0.21a workload generator
  • VM Fleet workload orchestrator

First Test – Total 761K IOPS – Read/Write Latency @ 0.2/1.3 ms

Each VM was configured with the following profile (a DISKSPD sketch follows the list):

  • 4K IO size
  • 10GB working set
  • 100% read and 0% write
  • No Storage QoS
  • RDMA Enabled
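
For context, a DISKSPD invocation along these lines reproduces this profile inside a VM. The duration, queue depth, and file path are my assumptions; only the block size, working set, and read/write mix come from the test above:

  # 4K random, 100% read (-w0), 10GB test file (-c10G), latency stats (-L),
  # software and hardware caching disabled (-Sh), 2 threads for the 2 vCPUs
  .\diskspd.exe -b4K -d300 -o20 -t2 -r -w0 -Sh -L -c10G C:\run\testfile.dat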

Second Test – Total 401K IOPS – Read/Write Latency @ 0.2/1.0 ms

Each VM was configured with:

  • 4K IO size
  • 10GB working set
  • 70% read and 30% write
  • No Storage QoS
  • RDMA Enabled
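
Under the same assumptions as the first sketch, this profile only flips the write ratio:

  # 70% read / 30% write: only the -w flag changes from the first test
  .\diskspd.exe -b4K -d300 -o20 -t2 -r -w30 -Sh -L -c10G C:\run\testfile.dat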

In this setup, I used a direct 25GbE connection between the nodes and configured the RDMA NICs for iWARP instead of RoCE. You may wonder why iWARP and not RoCE: iWARP is easier to set up and does not require DCB configuration, which many people struggle to get right on the switch side. It's true that there is no switch between the nodes in this setup, but the deployment will grow beyond two nodes in the future, and I want the expansion to go smoothly. Planning is the key to a successful deployment.
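
If you want to verify that the adapters are actually running RDMA over iWARP, the following is a reasonable starting point. The adapter name is a placeholder, and the advanced-property name for switching between iWARP and RoCE varies between FastLinQ driver versions, so check Get-NetAdapterAdvancedProperty on your own nodes first:

  # Confirm RDMA is enabled on the 25GbE ports
  Get-NetAdapterRdma | Where-Object Enabled

  # Confirm SMB sees the interfaces as RDMA-capable
  Get-SmbClientNetworkInterface | Where-Object RdmaCapable

  # Select iWARP as the RDMA mode (property name depends on the driver)
  Set-NetAdapterAdvancedProperty -Name 'SLOT 1 Port 1' -DisplayName 'NetworkDirect Technology' -DisplayValue 'iWarp'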

Last but not least, Microsoft has also added support for managing Storage Spaces Direct hyper-converged infrastructure on Windows Server 2016 through Windows Admin Center, which gives you complete visibility into your S2D environment.

Let me know what you think in the comment section below!

Thank you for reading my blog.

If you have any questions or feedback, please leave a comment.

-Charbel Nemnom-
