Scale-Out File Server (SoFS) is a clustered file server that lets you keep server application data (such as SQL Server databases and Hyper-V virtual machine files) on file shares, making it accessible to clients. SoFS provides SAN-like reliability and high availability for shared data. StarWind iSCSI SAN allows two-node configurations to provide highly available shared storage for a Scale-Out File Server.
I will start by covering the StarWind Management Console, device creation, and the settings involved, and then move into a step-by-step configuration of StarWind Virtual SAN with SoFS.
First things first, let’s open StarWind Management Console.
You can add or disconnect any host from the console at any time. In this demo, I have two hosts added to the console, each with two virtual disks.
Right-click any host to add a device. Notice that there are two choices: Add Device and Add Device (advanced).
What is the difference between the two? The former is a wizard that creates a StarWind device with default settings. I personally don't use it, because I prefer to customize my device settings. The latter lets you customize the device as needed, and I suggest you do the same.
Click on Add Device (advanced). As you can see we have three types of devices. (Virtual Hard Disk Device, Virtual Tape Library and Optical Disc Drive). The Tape Device and Optical Disk Drive are not covered in this post. I will choose Hard Disk Device.
Here we have an option to choose the Disk Device Type for the Hard Disk Device. Virtual Disk creates an image file on our physical storage (DAS, RAID, or Storage Spaces). Physical Disk is a disk bridge (pass-through disk). Finally, RAM Disk is a small disk created from the server's physical RAM; this feature is still in active development. A RAM disk is very fast and useful for things like temp databases in large SQL Server deployments: the data is temporary and not critical, because RAM is volatile and its contents are lost if the host reboots.
I will select Virtual Disk and click Next.
In the Virtual Disk Location wizard, you can specify the path and the size of the Virtual Disk.
I will rename my Virtual Disk to DemoDevice instead of Storage1, and then I will create a new file with 13GB for this demo (you can change this number according to your environment).
What is the difference between a 512-byte and a 4K sector size?
The sector size really matters: in the case of Hyper-V, 4K sectors give a great performance boost, and a 4096-byte sector size is a Hyper-V native feature. If you are creating devices only for Hyper-V, make sure to set the sector size to 4K. However, if you are using XenServer, VMware, or Microsoft Storage Spaces, 4K sectors won't work for you.
I will leave it at the 512-byte sector size, because I am leveraging Microsoft Storage Spaces in my lab.
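If you want to double-check what sector sizes the disks on your host actually report before deciding, a quick sketch using the built-in Storage module (Windows Server 2012 or later) looks like this:

```powershell
# List each disk with its logical and physical sector sizes
# (disk numbers and friendly names are from my lab; yours will differ)
Get-Disk |
    Select-Object Number, FriendlyName, LogicalSectorSize, PhysicalSectorSize |
    Format-Table -AutoSize
```

As a rule of thumb: 512/512 is a native 512-byte disk, 512/4096 is a 512e (emulated) disk, and 4096/4096 is a native 4K disk.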
Next come the Virtual Disk Options (Thick-provisioned versus Thin-provisioned). The Thick-provisioned option is self-explanatory: it reserves the exact storage capacity you specified in the previous step. It is a standard LUN with no storage-side performance boost. I personally prefer thick-provisioned drives over thin-provisioned ones.

The Thin-provisioned option, however, is misleadingly named and has both an upside and a downside. The upside: it is not really a thin-provisioned device, it is an LSFS device (Log-Structured File System). LSFS was originally designed for slow spindle drives; for those still using 7,200 rpm drives or even slower, this is the way to go. The idea behind LSFS is that it takes many small random blocks of data, combines them into one large sequential block, and then performs a single write. As we all know, sequential writes get processed much faster than random ones, and this is how StarWind gets a performance boost with an LSFS device, especially for write operations. LSFS was designed with RAID 5 and RAID 6 in mind; since those RAID levels already have extremely good read performance, the engineering team focused on enhancing write speed.

As for the downside, there is only one: the device actually takes up to three times more space, and here the confusion begins. You might say that, since it is called thin-provisioned, you should be able to create a 100TB drive, place a 1GB file on it, and consume only 1GB of physical disk, because that is how thin-provisioned devices work. Unfortunately not: if you place a 1GB file on an LSFS device, it can grow into a 3GB file, because of how it is designed. It is a log-structured file system, every write operation on it gets logged, and that log takes space. The point of the log is that sometime in the future StarWind will be able to restore data from those system snapshots, as they call them, and I think that's great.
This option will be renamed to LSFS device instead of Thin-provisioned in a future release. The LSFS device will still grow to up to three times the data size, but over time it will shrink back to a smaller size, especially if it contains duplicated data.
For further reading about the features of LSFS and their description, please read here.
I will select Thick-provisioned and click Next.
Here we have the caching parameters. There are two modes: Write-Back and Write-Through. Write-Back is designed to increase both write and read performance simultaneously, while Write-Through is designed to increase only read performance. So if you have a read-intensive workload, use Write-Through mode; if your workload is both read- and write-intensive, or write-only, use Write-Back mode.
Then you can set the L1 cache size in MB. The L1 cache uses free Random Access Memory (RAM) in your system. StarWind recommends sizing it in proportion to the device: for a 1TB device, set the L1 cache to 1GB.
Since I have a very small 13GB device, I will set the L1 cache to N/A (reads and writes are not cached), because I don't think it will make any difference here. Click Next.
We now move to the L2 Cache Parameters. The L2 cache works exactly the same way and uses the same algorithms as the L1 cache, but it uses a different resource: Solid-State Drives. The SSD cache can likewise be configured in Write-Back or Write-Through mode.
The recommended L2 cache size is 10% of the device size. So for a 1TB device, the L2 cache should be set to 100GB on your SSD drives.
For this demo, I will not use SSD and leave the L2 Cache to N/A (Reads and Writes are not cached).
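To keep the two sizing rules above straight (roughly 1GB of L1 RAM cache per 1TB of device, and L2 SSD cache at 10% of the device size), here is a quick illustrative calculation; the device size is a made-up example:

```powershell
# Hypothetical device size; adjust for your environment
$deviceSizeGB = 1024                          # a 1TB device

$l1CacheMB = ($deviceSizeGB / 1024) * 1024    # ~1GB of RAM per 1TB of device
$l2CacheGB = $deviceSizeGB * 0.10             # 10% of the device size, on SSD

"L1 cache: $l1CacheMB MB of RAM"
"L2 cache: $l2CacheGB GB of SSD"
```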
Last but not least, we need to specify the Target Attachment Method.
I will create a new target and give it an alias (DemoDevice). I personally leave the Target Name as it is; you can change it if you choose to. Then select Allow multiple concurrent iSCSI connections.
Click Next, and then click Create.
As you can see, the DemoDevice has been created.
Now we need to replicate this device to the second Hyper-V host, so right click on the newly created device and click Replication Manager.
Click Add Replica.
Select Synchronous “Two-Way” Replication. Click Next.
Specify the Host Name or IP address of the partner node. Click Next.
Select Create new Partner Device and Click Next.
Here you can choose the path to the partner device and change the default random name.
I will rename the device with the same name that I used for Host 1 (DemoDevice). Click Next.
Next we move into the Network Options for Replication. Click Change Network Settings.
I will choose the iSCSI network as a heartbeat channel, and a dedicated network for Synchronization and Heartbeat. I strongly recommend using 10 GbE for the Synchronization channel, which synchronizes your data across both nodes. Click OK.
Click Create Replica.
After the replica is created, you need to wait until the synchronization is completed (10 GbE is your friend here).
Once synchronization has completed, we can move on to connecting to the device using the iSCSI Initiator.
Now log in to one of the SoFS nodes and launch the Microsoft iSCSI Initiator:
- Start > Administrative Tools > iSCSI Initiator or iscsicpl from the command line interface. The iSCSI Initiator Properties window appears.
- Navigate to the Discovery tab.
- Click the Discover Portal button. The Discover Target Portal dialog appears. Type in 127.0.0.1.
- Click the Advanced button. Select Microsoft iSCSI Initiator as your Local adapter and select your Initiator IP (leave the default for 127.0.0.1).
- Click OK. Then click OK again to complete the Target Portal discovery.
- Click the Discover Portal… button again.
- The Discover Target Portal dialog appears. Type in the first IP address of the partner node you will use to connect the secondary mirrors of the HA devices.
- Click Advanced.
- Select Microsoft iSCSI Initiator as your Local adapter, and select the Initiator IP in the same subnet as the IP address of the partner server from the previous step.
- Click OK. Then click OK again to complete the Target Portal discovery.
- If you are using MPIO, which I highly recommend, click the Discover Portal button once again.
- The Discover Target Portal dialog appears. Type in the second IP address of the partner node you will use to connect the secondary mirrors of the HA devices, then follow the same steps described above.
- All target portals are added on the first node.
- Complete the same steps for the second node as well.
- Click the Targets tab. The previously created targets are listed in the Discovered Targets section.
- Now connect DemoDevice. Select the DemoDevice target located on the local server and click Connect.
- Enable both checkboxes. Click Advanced…
- Select Microsoft iSCSI Initiator in the Local adapter text field.
- Select 127.0.0.1 in the Target portal IP. Click OK
- Select the partner target for the other StarWind node and click Connect.
- Select Microsoft iSCSI Initiator in the Local adapter text field. Select the IP address in the Initiator IP field. Select the corresponding portal IP from the same subnet in the Target portal IP.
- Click OK.
- Repeat the steps described above for any other HA devices.
- Repeat the same steps on the second StarWind / SoFS node, specifying corresponding local and data channel IP addresses. The result should look like the screenshot below:
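If you prefer PowerShell over clicking through the iSCSI Initiator UI, the same portal discovery and connections can be sketched with the built-in iSCSI and MPIO modules. The partner IP addresses below are placeholders from my lab; substitute your own data-channel addresses, and note that Enable-MSDSMAutomaticClaim requires the Multipath-IO feature to be installed:

```powershell
# Claim iSCSI devices for MPIO (requires the Multipath-IO feature)
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Discover the local portal and both partner data-channel portals
New-IscsiTargetPortal -TargetPortalAddress 127.0.0.1
New-IscsiTargetPortal -TargetPortalAddress 10.10.10.2   # partner, first data channel
New-IscsiTargetPortal -TargetPortalAddress 10.10.20.2   # partner, second data channel

# Connect every discovered, not-yet-connected target persistently, with multipath enabled
Get-IscsiTarget | Where-Object { -not $_.IsConnected } |
    Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```

Run the same script on the second node with its corresponding local and partner addresses.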
After that, launch Disk Management (or run diskmgmt.msc from the command line) and bring the disk online.
Then initialize the disk as GPT, as per StarWind best practices.
Then create a new simple volume.
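The same online/initialize/format sequence can be done from PowerShell with the Storage module. This is a sketch that assumes the new HA disk shows up as disk number 1; check with Get-Disk first and adjust the number:

```powershell
# Bring the new HA disk online and clear the read-only flag (disk 1 is an assumption)
Set-Disk -Number 1 -IsOffline $false
Set-Disk -Number 1 -IsReadOnly $false

# Initialize as GPT (StarWind best practice), then create and format a simple volume
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "DemoDevice" -Confirm:$false
```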
Log in to the second SoFS node and bring the disk online only, you don’t need to initialize it or create a new partition, because it is an HA device.
To add a Cluster Shared Volume (CSV), which is necessary for working with Hyper-V virtual machines:
- Open Failover Cluster Manager.
- Go to Cluster -> Storage -> Disks.
- Click Add Disk.
- Right-click the required disk and rename it (I will rename it to CSV02 in my demo).
- Right-click the renamed disk (CSV02) and select Add to Cluster Shared Volumes.
- The disk will be displayed as a CSV at the Failover Cluster Manager window as shown in the screenshot below:
- Go to Cluster -> Roles.
- Right-click the Roles item and click Configure Role. High Availability Wizard appears.
- Select the File Server Role. Click Next.
- Select Scale-out File Server for application data and click Next.
- On the Client Access Point page, in the Name text field type a NETBIOS name that will be used to access Scale-Out File Server.
- I will choose SOFS-StarWind as a Name.
- Click Next to continue.
- On the Confirmation page, check the selected settings.
- Click Next to continue, or Previous to make any changes.
- Review the information on the Summary page and click Finish.
- When you are finished, the Failover Cluster Manager window should look as shown in the screenshot below:
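For those who script their clusters, the disk, CSV, and role steps above can be sketched with the FailoverClusters module. The names CSV02 and SOFS-StarWind match the ones used in this demo:

```powershell
Import-Module FailoverClusters

# Add the available disk to the cluster and rename it to CSV02
$disk = Get-ClusterAvailableDisk | Add-ClusterDisk
$disk.Name = "CSV02"

# Promote the disk to a Cluster Shared Volume
Add-ClusterSharedVolume -Name "CSV02"

# Create the Scale-Out File Server role with our client access point name
Add-ClusterScaleOutFileServerRole -Name "SOFS-StarWind"
```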
To create a continuously available file share on the cluster shared volume:
- Launch the Failover Cluster Manager.
- Select the Roles item.
- Right-click a previously created file server role and select Add Shared Folder.
- From the list of profiles select SMB Share — Applications.
- Click Next.
- I will select the CSV02 volume that we just created to host the share.
- Click Next.
- Enter a share name and verify the path to the share.
- Click Next.
- Ensure the Enable continuous availability checkbox is selected.
- Click Next.
- On the Specify permissions to control access page, click Customize permissions and grant the following permissions:
- Since we are using this Scale-Out File Server share for Hyper-V, all Hyper-V computer accounts, the SYSTEM account, and all Hyper-V administrators must be granted Full Control on the share and the file system.
- Click Next to continue.
- On the Confirm selections page review the settings and click Create. To make any changes, click Previous.
- Review results on the last page and click Close.
- The Failover Cluster Manager should look as shown in the screenshot below:
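The continuously available share can also be created with New-SmbShare. This is a sketch: the CSV mount path, domain name, and account names below are assumptions from my lab, so replace them with your own Hyper-V computer accounts and admin group:

```powershell
# Create a continuously available SMB share on the CSV, scoped to the SoFS name
New-SmbShare -Name "SHARE01" -Path "C:\ClusterStorage\Volume2" `
    -ScopeName "SOFS-StarWind" -ContinuouslyAvailable $true `
    -FullAccess "DOMAIN\NINJA-HV01$", "DOMAIN\NINJA-HV02$", "DOMAIN\Hyper-V Admins"

# Mirror the share permissions down to the NTFS file system
# (Set-SmbPathAcl is available on Windows Server 2016 and later;
#  on older versions, set the NTFS permissions manually)
Set-SmbPathAcl -ShareName "SHARE01"
```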
Configuring SMB constrained delegation for Hyper-V (requires the Active Directory module for Windows PowerShell):
$Storage = "SOFS-STARWIND"
Enable-SmbDelegation -SmbServer $Storage -SmbClient NINJA-HV01 -Verbose
Enable-SmbDelegation -SmbServer $Storage -SmbClient NINJA-HV02 -Verbose
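You can verify the result afterward with Get-SmbDelegation (same prerequisite: the Active Directory module must be installed):

```powershell
# List the constrained-delegation entries created for the SoFS name
Get-SmbDelegation -SmbServer "SOFS-STARWIND"
```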
Finally, we will create a new virtual machine hosted on the Scale-Out File Server.
# Create VM on Hyper-V Cluster
$VM = "SOFS-StarWind-VM01"
$HVHost = "NINJA-HV02"
New-VM -ComputerName $HVHost -Name $VM -MemoryStartupBytes 1GB -Generation 2 -VHDPath "\\sofs-starwind\SHARE01\SOFS-StarWind-VM01\Virtual Hard Disks\VMServerBase.vhdx"
# Start the VM
Start-VM -ComputerName $HVHost -Name $VM -Verbose
Thanks for reading!