How To Manage #StarWind Virtual SAN With Virtual Machine Manager? #HyperV #SCVMM @Starwindsan

Hello folks,

In today’s blog post, I will show you how you can manage StarWind Virtual SAN with System Center Virtual Machine Manager 2012 R2.

Before we get started, let’s prepare the action plan first.

1- Installing and configuring StarWind SMI-S Agent
2- Connecting SMI-S provider to SCVMM 2012 R2
3- Creating a Logical Unit in SCVMM
4- Allocating storage pools and logical units
5- Connecting the storage to Hyper-V Cluster

As you can see, we have a number of steps to complete before we present the storage to the Hyper-V cluster.

For the purpose of this demo, I am using the following products:

– StarWind Virtual SAN V8.0 build 7929
– System Center Virtual Machine Manager 2012 R2 UR6
– 2 x Windows Server 2012 R2 Hyper-V hosts

Step 1. Installing and configuring StarWind SMI-S Agent

The Storage Management Initiative Specification (SMI-S) is a standard for managing disk storage.

Storage manufacturers have to ensure SMI-S support in their products. They typically provide a so-called SMI-S provider, or “agent”, that mediates communication between an SMI-S client and the storage array. The SMI-S client (SCVMM in our case) connects to the SMI-S provider via the CIM-XML protocol, while the SMI-S provider itself can use proprietary interfaces to manage the disk array.

StarWind Software, Inc. offers an SMI-S provider for storage management, called StarWind SMI-S Agent. In the current StarWind build, the SMI-S provider is included in the executable program.

You can install StarWind SMI-S Agent on any server.

After the installation of StarWind SMI-S Agent is completed, you need to configure it with the StarWindSMISConfigurator utility.

To configure StarWind SMI-S Agent:

1. Open the Run window.
2. Type StarWindSMISConfigurator.
3. Click OK.

4. In the StorageSystemName field specify the name for a disk array, e.g. Storage.

5. Specify the StorageSystemIPAddress of the Hyper-V host running the StarWind Service. This step matters if you installed StarWind SMI-S Agent on a different server; in my case, both the SMI-S Agent and the StarWind Service are installed on the Hyper-V host.

6. In the StorageSystemPort field leave the number unchanged.

7. In the path fields, specify the correct paths to the folders where disk device images will be stored. The paths must point to the server where the StarWind iSCSI target runs, in the same format used when creating virtual devices in StarWind Management Console. In my demo the path is My Computer\E\StarWind-VMData\

8. Since I am using StarWind Virtual SAN for Hyper-V in a hyper-converged scenario, where the storage and compute nodes run on the same hardware, it is recommended to configure the StarWind SMI-S Agent HA storage pool using the following parameters for the First Node and Second Node:

  • Path, partnerPath and partnerHostName
  • HBInterfaces and partnerHBInterfaces
  • SyncInterfaces and partnerSyncInterfaces

Leave the other settings as default.

9. After you apply the new settings, the StarWind SMI-S Agent service restarts automatically, so make sure the StarWind Service is up and running before you apply them.

Step 2. Connecting SMI-S provider to SCVMM 2012 R2

To enable disk array management from VMM, you need to connect VMM to the appropriate SMI-S provider, in this case the StarWind SMI-S Agent.

To connect SMI-S provider:

1. Click the Add Resources button on the toolbar of the SCVMM console.
2. Select Storage Devices.

3. Specify the type of storage provider: select the Add a storage device that is managed by an SMI-S provider option.

4. Select SMI-S CIMXML protocol.
5. Enter the address of the host where StarWind SMI-S Agent is installed and running, and the user account that VMM will use to authenticate to the StarWind SMI-S Agent. Since I have installed the SMI-S Agent on the Hyper-V host, I will specify the management IP address.
6. Click Next.

7. If VMM connects to the SMI-S Agent successfully, it will display all available disk arrays, and the StarWind Storage Array pool will be detected.

8. Click Next and select the ConcretePools to be assigned to VMM. I will choose ConcretePool_Storage_HA since I am using the HA storage pool. Every selected pool has to be given a classification such as Gold, Silver, or Bronze. If no classification is predefined yet, click Create classification to create one, and then specify the Host Group.

9. Click Next.

10. Confirm the settings specified before. The Summary page displays the information about the disk array, provider, and pools to be managed by VMM.
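For repeatable deployments, the same provider registration can be scripted with the VMM PowerShell module. Here is a minimal sketch run from a machine with the VMM console installed; the Run As account name and the IP address are examples, not values from my lab:

```powershell
# Run As account holding the credentials for the StarWind SMI-S Agent
# (the account name is an example; create it in VMM first)
$runAs = Get-SCRunAsAccount -Name "SMI-S Admin"

# Register the StarWind SMI-S Agent as an SMI-S CIM-XML storage provider
# (replace the address with the management IP of the SMI-S Agent host)
Add-SCStorageProvider -AddSmisCimXml -Name "StarWind SMI-S Agent" `
    -NetworkDeviceName "http://10.10.10.11" -TCPPort 5988 `
    -RunAsAccount $runAs

# List the storage pools discovered through the provider
Get-SCStoragePool | Select-Object Name, Classification
```

The TCP port must match the port the StarWind SMI-S Agent is listening on, as configured in Step 1.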

Step 3. Creating a Logical Unit in SCVMM

To add a new logical unit:

1. Click Create Logical Unit on the toolbar.

2. In the Create Logical Unit dialog select a storage pool, specify name and size of the Logical Unit, and click OK.

A newly created storage device appears on the list.

Let’s confirm this operation in the StarWind Management Console; as you can see, we have a new device called HAImage3.
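The same logical unit can be created from the VMM PowerShell module. A short sketch, using the HA pool from the provider registration; the logical unit name and size are examples:

```powershell
# Select the HA storage pool discovered from the StarWind provider
$pool = Get-SCStoragePool -Name "ConcretePool_Storage_HA"

# Create a 100 GB logical unit in that pool
# (the name "CSV-01" and the size are examples)
New-SCStorageLogicalUnit -StoragePool $pool -Name "CSV-01" -DiskSizeMB 102400
```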

Step 4. Allocating storage pools and logical units

To allocate storage pools and logical units:

1. Click Allocate Capacity on the toolbar.

2. In the Allocate Storage Capacity dialog, make sure ConcretePool_Storage_HA is listed under Storage Pools:

In previous SCVMM versions, we used to run Allocate Storage Pools and Allocate Logical Units for the host group. However, this behavior has changed in the recent version of VMM: the step is no longer required, since we already allocated the storage to All Hosts during SMI-S provider registration.

Step 5. Connecting the storage to the Hyper-V Cluster

To connect a logical unit to a Hyper-V host:

1. Choose a Hyper-V host and click Properties on the shortcut menu. Since I have a two-node Hyper-V cluster, I will select the cluster name NINJA-CLUSTER instead.

2. In the Properties window, click Available Storage on the left pane.

3. Click Add to add a disk array.

4. Click OK.

5. Click OK to confirm and add the available storage to both nodes.

6. Click Properties again on the cluster name.

7. In the Properties window, click Available Storage on the left pane.

8. Select the Volume and click Convert to CSV.

9. Click OK to apply and add the disk to Cluster Shared Volumes.

10. In the Properties window, click Shared Volumes to confirm the cluster disk is converted to Cluster Shared Volume.
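The steps above can also be scripted. A sketch, assuming a logical unit named CSV-01 was created earlier and that the cluster is managed by VMM; the logical unit name and the cluster disk name in the last command are examples:

```powershell
# Look up the Hyper-V cluster and the logical unit created earlier
$cluster = Get-SCVMHostCluster -Name "NINJA-CLUSTER"
$lun     = Get-SCStorageLogicalUnit -Name "CSV-01"

# Unmask the logical unit to every node of the cluster as available storage
Register-SCStorageLogicalUnit -StorageLogicalUnit $lun -VMHostCluster $cluster

# Convert the resulting cluster disk to a Cluster Shared Volume
# (FailoverClusters module; the disk name is an example)
Add-ClusterSharedVolume -Name "Cluster Disk 1" -Cluster "NINJA-CLUSTER"
```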

Let’s confirm this operation in Failover Cluster Manager!

As you can see, we have a new disk converted to a Cluster Shared Volume and ready to host our virtual machines.

All of these operations can be performed from the VMM console, with no need to fall back to the StarWind Management Console. Using SMI-S substantially simplifies storage administration, as it eliminates the need to manage disk arrays from various storage vendors with separate tools.

System Center Virtual Machine Manager is an important step toward automating private, public, and hybrid cloud infrastructure management.

I hope this post has been informative for you, and I would like to thank you for reading.

Cheers,
/Charbel
