Step By Step: Deploying Virtualized Shared Storage for #HyperV With #StarWind and SoFS

Hello folks,

In today’s blog post, I will share with you how to create virtualized shared storage for Hyper-V using StarWind Virtual SAN and Scale-out File Server (SoFS).

Scale-out File Server (SoFS) is a clustered file server that lets you keep server application data (such as SQL Server databases and Hyper-V virtual machine files) on file shares and makes it accessible to clients. A SoFS provides SAN-like reliability and high availability for shared data. StarWind Virtual SAN can be deployed in a two-node configuration to provide the highly available shared storage behind the Scale-out File Server (SoFS).

I will start by covering the StarWind Management Console, device creation, and what each setting means, and then move into the step-by-step configuration of StarWind Virtual SAN with SoFS.

First things first, let’s open StarWind Management Console.

At any time you can add or disconnect any host from the console. In this demo, I have two hosts added to the console with two virtualized disks.

StepByStep-HyperV-StarWind01

Right-click on any host to add a device. As you may have noticed, we have two choices: Add Device and Add Device (advanced).

What is the difference between the two? The former is a wizard that adds a StarWind device with default settings. I personally don’t like to use it, because I prefer to customize my device settings. The latter is more advanced and lets you customize the device as needed, and I suggest you do the same.

StepByStep-HyperV-StarWind02

Click on Add Device (advanced). As you can see, we have three types of devices (Hard Disk Device, Virtual Tape Library, and Optical Disc Drive). The Tape Device and Optical Disc Drive are not covered in this post. I will choose Hard Disk Device.

StepByStep-HyperV-StarWind03

Here we have the option to choose the Disk Device Type for the Hard Disk Device. The Virtual Disk creates an image file on your physical storage (DAS, RAID, or Storage Spaces). The Physical Disk is a disk bridge (pass-through disk). Finally, the RAM Disk is a small disk created from the physical RAM of the server; this feature is actually still in development. The RAM Disk is very fast and is useful, for example, for temp databases in large SQL Server deployments: they are temporary and do not contain extremely important data, because, as you know, RAM is volatile, so the data is lost if the host gets rebooted.

I will select Virtual Disk and click Next.   

StepByStep-HyperV-StarWind04

In the Virtual Disk Location wizard, you can specify the path and the size of the Virtual Disk.

StepByStep-HyperV-StarWind05

I will rename my Virtual Disk to DemoDevice instead of Storage1, and then create a new 13GB file for this demo (you can change this size according to your environment).

What is the difference between the 512-byte and 4K sector sizes?

The sector size is really important: in the case of Hyper-V it gives a great performance boost, because the 4096-byte sector size is native to Hyper-V. If you are creating devices only for Hyper-V, then make sure to set the sector size to 4K. But if you are using XenServer, VMware, or Microsoft Storage Spaces, then the 4K sector size won’t work for you.

I will leave it at the 512-byte sector size, because I am leveraging Microsoft Storage Spaces in my lab.
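
If you want to double-check what sector size your underlying disks actually report before making this choice, a quick PowerShell check on the StarWind host looks like this (a minimal sketch):

  # List the logical and physical sector sizes each disk on this host reports
  Get-Disk | Select-Object Number, FriendlyName, LogicalSectorSize, PhysicalSectorSize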

StepByStep-HyperV-StarWind06

Next come the Virtual Disk Options (Thick-provisioned versus Thin-provisioned). The Thick-provisioned option is self-explanatory: it blocks out exactly the storage capacity you specified in the previous step; it is a standard LUN with no storage boost. I personally prefer Thick-provisioned devices over Thin-provisioned ones.

The Thin-provisioned option, however, is misleadingly named and has an upside and a downside. The upside is that it is not really a thin-provisioned device at all; it is an LSFS device (Log-Structured File System). The LSFS device was initially designed for slow spindle drives, so for those still using 7K rpm drives or even slower, this is the way to go. The point of the LSFS device is that it takes many small random blocks of data, combines them into one large sequential block, and then performs a single write. As we all know, sequential writes get processed much faster than random ones. That is how StarWind gets a performance boost with the LSFS device, especially for write operations. The LSFS device was designed for RAID 5 and RAID 6, which already have extremely good read performance, so the engineering team focused on enhancing the write speed.

As for the downside, there is only one: the device actually takes about three times more space, and here is where the confusion comes in. You might think that, since it is called Thin-provisioned, you could create a 100TB drive, place a 1GB file on it, and consume only 1GB of physical disk, because that is how thin-provisioned devices normally work. Unfortunately not: if you place a 1GB file on this LSFS device, it grows into roughly a 3GB file, because of the way a Log-Structured File System is designed; every write operation gets logged, and that log takes up the extra space. The point of this log is that, at some point in the future, StarWind will be able to restore data from those system snapshots, as they call them, and I think that’s great. This option will be renamed to LSFS device instead of Thin-provisioned in a future release; the LSFS device will still grow to about three times the data size, but over time it will shrink back to a smaller size, especially if there is duplicated data on it.

For further reading about the features of LSFS and their description, please read here.          

StepByStep-HyperV-StarWind08

I will select Thick-provisioned and click Next.

StepByStep-HyperV-StarWind07

Here we have the caching parameters. There are two modes (Write-Back and Write-Through). Write-Back is designed to increase write and read performance simultaneously, while Write-Through is designed to increase only read performance. So, if you have an intensive read workload, use Write-Through mode; if you have an intensive read-and-write workload, or even a write-only one, use Write-Back mode.

Then you can set the L1 cache size in MB; the L1 cache uses free Random Access Memory (RAM) in your system. StarWind recommends sizing it in proportion to the device: if you have a 1TB device, it is recommended to set the L1 cache to 1GB.

StepByStep-HyperV-StarWind09

Since I have a very small device (13GB), I will set the L1 cache to N/A (reads and writes are not cached), because I don’t think it will make any difference here. Click Next.

StepByStep-HyperV-StarWind10

We now move on to the L2 cache parameters. The L2 cache works exactly the same way and uses the same algorithms as the L1 cache, but it uses a different resource: Solid-State Drives. That SSD cache can likewise be configured in Write-Back or Write-Through mode.

The recommended size for the L2 cache is 10% of the total storage. So if you have a 1TB device, the L2 cache should be set to 100GB on your SSD drives.

For this demo, I will not use an SSD and will leave the L2 cache at N/A (reads and writes are not cached).

StepByStep-HyperV-StarWind11

Last but not least, we need to specify the Target Attachment Method.

StepByStep-HyperV-StarWind12

I will create a new target and give it an alias (DemoDevice). I personally leave the Target Name as it is, but you can change it if you choose to. Then select Allow multiple concurrent iSCSI Connections.

StepByStep-HyperV-StarWind13

Click Next, and then click Create.

StepByStep-HyperV-StarWind14

StepByStep-HyperV-StarWind15

As you can see, the DemoDevice has been created.

StepByStep-HyperV-StarWind16

Now we need to replicate this device to the second Hyper-V host, so right-click on the newly created device and click Replication Manager.

StepByStep-HyperV-StarWind17

Click Add Replica.

StepByStep-HyperV-StarWind18

Select Synchronous “Two-Way” Replication. Click Next.

StepByStep-HyperV-StarWind19

Specify the host name or IP address of the partner node. Click Next.

StepByStep-HyperV-StarWind20

Select Create new Partner Device and Click Next.

StepByStep-HyperV-StarWind21

Here you can choose the path to the partner device and change the default random name.

StepByStep-HyperV-StarWind22

I will rename the device with the same name that I used for Host 1 (DemoDevice). Click Next.

StepByStep-HyperV-StarWind23

Next we move into the Network Options for Replication. Click Change Network Settings.

StepByStep-HyperV-StarWind24

I will choose the iSCSI network as a heartbeat channel and a dedicated network for Synchronization and Heartbeat. I strongly recommend using 10 GbE for the synchronization channel, since it is what synchronizes your data across both nodes. Click OK.

StepByStep-HyperV-StarWind25

Click Next.

StepByStep-HyperV-StarWind26

Click Create Replica.

StepByStep-HyperV-StarWind28

Click Close.

StepByStep-HyperV-StarWind29

After the replication is created, you need to wait until the synchronization is completed (10 GbE is your friend here).

StepByStep-HyperV-StarWind31

StepByStep-HyperV-StarWind32

Now that the synchronization is complete, we will move on to connecting to the device using the iSCSI Initiator.

Now log into one of the SoFS nodes and launch the Microsoft iSCSI Initiator:

  • Start > Administrative Tools > iSCSI Initiator or iscsicpl from the command line interface. The iSCSI Initiator Properties window appears.
  • Navigate to the Discovery tab.
  • Click the Discover Portal button. The Discover Target Portal dialog appears. Type in 127.0.0.1.
  • Click the Advanced button. Select Microsoft iSCSI Initiator as your Local adapter and select your Initiator IP (leave the default for 127.0.0.1).
  • Click OK. Then click OK again to complete the Target Portal discovery.
  • Click the Discover Portal… button again.
  • The Discover Target Portal dialog appears. Type in the first IP address of the partner node you will use to connect the secondary mirrors of the HA devices.
  • Click Advanced.
  • Select Microsoft iSCSI Initiator as your Local adapter, and select the Initiator IP in the same subnet as the IP address of the partner server from the previous step.
  • Click OK. Then click OK again to complete the Target Portal discovery.
  • If you are using MPIO, which I highly recommend, then click the Discover Portal button once again.
  • The Discover Target Portal dialog appears. Type in the second IP address of the partner node you will use to connect the secondary mirrors of the HA devices, and then follow the same steps described above.
  • All target portals are added on the first node.
  • Complete the same steps for the second node as well (a PowerShell sketch of the portal discovery follows below).
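
For reference, the same target portal discovery can also be scripted with the iSCSI PowerShell cmdlets. This is a minimal sketch; the partner addresses shown are placeholders for your own data-channel IPs:

  # Register the local (loopback) portal and the partner node's data-channel portals.
  # 172.16.10.2 and 172.16.20.2 are placeholder addresses; use your own iSCSI/data IPs.
  New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
  New-IscsiTargetPortal -TargetPortalAddress 172.16.10.2
  New-IscsiTargetPortal -TargetPortalAddress 172.16.20.2

  # Confirm which targets are now visible
  Get-IscsiTarget | Select-Object NodeAddress, IsConnected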

StepByStep-HyperV-StarWind33

  • Click the Targets tab. The previously created targets are listed in the Discovered Targets section.
  • To connect demodevice, select the demodevice target located on the local server and click Connect.

StepByStep-HyperV-StarWind34

  • Enable both checkboxes. Click Advanced.

StepByStep-HyperV-StarWind35

  • Select Microsoft iSCSI Initiator in the Local adapter text field.
  • Select 127.0.0.1 in the Target portal IP. Click OK.
  • Select the partner target for the other StarWind node and click Connect.
  • Select Microsoft iSCSI Initiator in the Local adapter text field. Select the IP address in the Initiator IP field. Select the corresponding portal IP from the same subnet in the Target portal IP.
  • Click OK.

StepByStep-HyperV-StarWind36

  • Repeat the actions described in the steps above for the other HA devices.
  • Repeat the same steps on the second StarWind / SoFS node, specifying the corresponding local and data channel IP addresses. The result should look like the screenshot below, and a PowerShell sketch of these connection steps follows it:

StepByStep-HyperV-StarWind37
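
If you prefer scripting, the connection steps above can also be done with Connect-IscsiTarget. This is a minimal sketch, assuming MPIO is already installed; the IQN filters and IP addresses are placeholders, so adjust them to your environment:

  # Optional but recommended: let MPIO claim iSCSI devices automatically (MPIO feature required)
  Enable-MSDSMAutomaticClaim -BusType iSCSI

  # Pick the local and partner demodevice targets; the IQN filters below are placeholders
  $local   = Get-IscsiTarget | Where-Object { $_.NodeAddress -like '*sw-node1*demodevice*' }
  $partner = Get-IscsiTarget | Where-Object { $_.NodeAddress -like '*sw-node2*demodevice*' }

  # Local mirror over the loopback portal, persistent and multipath-enabled
  Connect-IscsiTarget -NodeAddress $local.NodeAddress -TargetPortalAddress 127.0.0.1 `
      -IsPersistent $true -IsMultipathEnabled $true

  # Partner mirror over each data-channel pair (placeholder addresses)
  Connect-IscsiTarget -NodeAddress $partner.NodeAddress -InitiatorPortalAddress 172.16.10.1 `
      -TargetPortalAddress 172.16.10.2 -IsPersistent $true -IsMultipathEnabled $true
  Connect-IscsiTarget -NodeAddress $partner.NodeAddress -InitiatorPortalAddress 172.16.20.1 `
      -TargetPortalAddress 172.16.20.2 -IsPersistent $true -IsMultipathEnabled $true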

After that, launch Disk Management or diskmgmt.msc from the command line and bring the disk online.

StepByStep-HyperV-StarWind38

Then we initialize the disk with a GPT partition table, as per StarWind best practices.

StepByStep-HyperV-StarWind39

Then we create a new simple volume.

StepByStep-HyperV-StarWind40

Log in to the second SoFS node and bring the disk online only; you don’t need to initialize it or create a new partition, because it is an HA device. A PowerShell sketch of the disk preparation on both nodes follows the screenshot below.

StepByStep-HyperV-StarWind41
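
For those who prefer PowerShell over Disk Management, here is a minimal sketch of the same disk preparation; the FriendlyName filter and volume label are assumptions, so adjust them to match your disks:

  # On the first SoFS node: bring the StarWind disk online, initialize it as GPT, and format it.
  # The FriendlyName filter is a placeholder; filtering by disk number works just as well.
  $disk = Get-Disk | Where-Object { $_.FriendlyName -like 'STARWIND*' -and $_.PartitionStyle -eq 'RAW' }
  $disk | Set-Disk -IsOffline $false
  $disk | Set-Disk -IsReadOnly $false
  $disk | Initialize-Disk -PartitionStyle GPT -PassThru |
      New-Partition -UseMaximumSize -AssignDriveLetter |
      Format-Volume -FileSystem NTFS -NewFileSystemLabel 'CSV02' -Confirm:$false

  # On the second SoFS node: only bring the same disk online; no initialization or formatting
  Get-Disk | Where-Object { $_.FriendlyName -like 'STARWIND*' } | Set-Disk -IsOffline $false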

To add the Cluster Shared Volume (CSV) that is necessary to work with Hyper-V virtual machines:

  • Open Failover Cluster Manager.
  • Go to Cluster -> Storage -> Disks.
  • Click Add Disk.

StepByStep-HyperV-StarWind42

  • Right-click the required disk and rename it (I will rename it to CSV02 in my demo).

StepByStep-HyperV-StarWind43

  • Right-click the renamed disk (CSV02) and select Add to Cluster Shared Volumes (a PowerShell equivalent of these steps is sketched below).

StepByStep-HyperV-StarWind44

  • The disk will be displayed as a CSV at the Failover Cluster Manager window as shown in the screenshot below:

StepByStep-HyperV-StarWind45
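
The same three steps can be done in PowerShell. A minimal sketch, assuming the StarWind disk is the only disk currently available to the cluster:

  # Add the newly available disk to the cluster, rename the resource, and promote it to a CSV
  $clusterDisk = Get-ClusterAvailableDisk | Add-ClusterDisk
  $clusterDisk.Name = 'CSV02'
  Add-ClusterSharedVolume -Name 'CSV02'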

  • Go to Cluster -> Roles.
  • Right-click the Roles item and click Configure Role. High Availability Wizard appears.
  • Select the File Server Role. Click Next.

StepByStep-HyperV-StarWind46

  • Select Scale-out File Server for application data and click Next.

StepByStep-HyperV-StarWind47

  • On the Client Access Point page, in the Name text field type a NETBIOS name that will be used to access Scale-Out File Server.
  • I will choose SOFS-StarWind as a Name.
  • Click Next to continue.

StepByStep-HyperV-StarWind48

  • On the Confirmation page, check the selected settings.
  • Click Next to continue, or Previous to make any changes.

StepByStep-HyperV-StarWind49

  • Review the information on the Summary page and click Finish.

StepByStep-HyperV-StarWind50

  • When you are finished, the Failover Cluster Manager window should look as shown in the screenshot below, and a PowerShell equivalent of the role creation follows it:

StepByStep-HyperV-StarWind51
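
For reference, the whole role creation above boils down to a single cmdlet. A minimal sketch, using the same SOFS-StarWind name:

  # Create the Scale-Out File Server role with the same client access point name used in the wizard
  Add-ClusterScaleOutFileServerRole -Name 'SOFS-StarWind'

  # Confirm the new role is online
  Get-ClusterGroup -Name 'SOFS-StarWind'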

To create a continuously available file share on the cluster shared volume:

  • Launch the Failover Cluster Manager.
  • Select the Roles item.
  • Right-click a previously created file server role and select Add Shared Folder.
  • From the list of profiles select SMB Share — Applications.
  • Click Next.

StepByStep-HyperV-StarWind52

  • I will select the CSV02 volume that we just created to host the share.
  • Click Next.

StepByStep-HyperV-StarWind53

  • Enter a share name and verify the path to the share.
  • Click Next.

StepByStep-HyperV-StarWind54

  • Ensure the Enable continuous availability checkbox is selected.
  • Click Next.

StepByStep-HyperV-StarWind55

  • On the Specify permissions to control access page, click Customize permissions and grant the following permissions:
  • Since we are using this Scale-Out File Server file share for Hyper-V, all Hyper-V computer accounts, the SYSTEM account, and all Hyper-V administrators must be granted full control on the share and the file system.
  • Click Next to continue.

StepByStep-HyperV-StarWind56

  • On the Confirm selections page review the settings and click Create. To make any changes, click Previous.

StepByStep-HyperV-StarWind57

  • Review results on the last page and click Close.

StepByStep-HyperV-StarWind58

  • The Failover Cluster Manager should look as shown in the screenshot below, and a PowerShell sketch of the share creation follows it:

StepByStep-HyperV-StarWind59
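
If you prefer to script the share creation, here is a minimal sketch using the SmbShare cmdlets; the share name, CSV path, and account names are placeholders for your own environment:

  # Create the share folder on the CSV and publish it as a continuously available SMB share.
  # The path, share name, and account names below are placeholders; adjust them to your environment.
  $path = 'C:\ClusterStorage\Volume1\Shares\VMs01'
  New-Item -Path $path -ItemType Directory -Force | Out-Null

  New-SmbShare -Name 'VMs01' -Path $path -ScopeName 'SOFS-StarWind' `
      -FullAccess 'NT AUTHORITY\SYSTEM', 'DOMAIN\HV01$', 'DOMAIN\HV02$', 'DOMAIN\Hyper-V Admins' `
      -ContinuouslyAvailable $true

  # Mirror the share permissions onto the underlying NTFS folder, as the wizard does
  Set-SmbPathAcl -ShareName 'VMs01'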

Next, we need to configure SMB constrained delegation for Hyper-V (this requires the installation of the Active Directory module for Windows PowerShell).
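
A minimal sketch of that delegation step, assuming two Hyper-V hosts with the placeholder names HV01 and HV02:

  # Allow each Hyper-V host to delegate credentials to the SOFS name when VMs are managed remotely.
  # Requires the Active Directory module; the host names are placeholders.
  Enable-SmbDelegation -SmbServer 'SOFS-StarWind' -SmbClient 'HV01'
  Enable-SmbDelegation -SmbServer 'SOFS-StarWind' -SmbClient 'HV02'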

Last but not least, we will create a new virtual machine hosted on the Scale-out File Server (a PowerShell sketch follows the screenshot below).

StepByStep-HyperV-StarWind60
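
If you want to script the VM creation against the new share as well, here is a minimal New-VM sketch; the VM name, sizes, and share path are placeholders:

  # Create a VM whose configuration and virtual disk live on the Scale-Out File Server share.
  # The VM name, share path, and sizes are placeholders.
  New-VM -Name 'DemoVM01' -Generation 2 -MemoryStartupBytes 2GB `
      -Path '\\SOFS-StarWind\VMs01' `
      -NewVHDPath '\\SOFS-StarWind\VMs01\DemoVM01\DemoVM01.vhdx' -NewVHDSizeBytes 40GB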

I hope this step-by-step guide gives you the knowledge you need to start your Hyper-V cluster deployment with StarWind Virtual SAN and Scale-out File Server.

Thanks for reading!

Cheers,
-Charbel

