Monday, December 21, 2015

How to set up a highly available cluster with Hyper-V and StarWind Virtual SAN - part 3

In the previous post we configured StarWind Virtual SAN; we are now moving on to setting up our Hyper-V servers as iSCSI initiators that mount the highly available backend storage for our cluster.

Basically there are 5 big steps:
  1. install and configure MPIO for iSCSI
  2. add the Hyper-V role and the failover clustering feature
  3. set up a virtual switch for Converged Networking
  4. configure iSCSI initiators to connect to iSCSI targets
  5. set up the Hyper-V cluster

This part will require two restarts of your Hyper-V servers.

The installation of the Multipath-IO Feature is done on both hypervisors through PowerShell:
Get-WindowsFeature Multipath-IO | Install-WindowsFeature
Reboot both servers.

Now on both nodes open the MPIO control panel (this interface is available on Core versions too), which you can access by typing:

mpiocpl
In the MPIO dialog choose the Discover Multi-Paths tab and then check the 'Add support for iSCSI devices' option.

The servers will now reboot for the second time.

The same result can be obtained much more simply with a line of PowerShell code. Isn't that fantastic?

Enable-MSDSMAutomaticClaim -BusType iSCSI

When they restart, the MPIO Devices tab lists the additional hardware ID "MSFT2005iSCSIBusType_0x9", which means that all iSCSI bus-attached devices will be claimed by the Microsoft DSM.
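If you prefer to double-check this from PowerShell rather than from the MPIO dialog, the MPIO module exposes a cmdlet that lists the hardware IDs claimed by the Microsoft DSM; after the reboot the ID above should appear in its output:

# List the hardware IDs currently claimed by the Microsoft DSM
Get-MSDSMSupportedHW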


In Windows Server 2016 TP4, the installation of roles and features can be achieved either in PowerShell or with the GUI. This is entirely up to you, as the outcome will be the same. My only suggestion is to skip the setup of virtual switches here, even though the interface asks you to do it. We will configure these switches later in PowerShell for fine-grained control.
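For reference, this is the kind of one-liner that does the job on both nodes in PowerShell (the -Restart switch reboots the server only if the installation requires it):

# Add the Hyper-V role and the Failover Clustering feature with their management tools
Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart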


Start by configuring NIC teaming on your physical network cards. In my lab I only have one NIC, so this step is not necessary, but I will do it all the same, so that if in the future I add a secondary NIC, I can increase my bandwidth and availability without impacting the rest of the configuration:
New-NetLbfoTeam -TeamMembers 'Ethernet' -LoadBalancingAlgorithm HyperVPort -TeamingMode SwitchIndependent -Name Team1
Since your two compute nodes now have Hyper-V, Failover Clustering and NIC Teaming, you can leverage the Hyper-V cmdlets to build your converged network adapters. Basically a converged network adapter is a flexible pool of physical NICs that are joined together (their bandwidth is combined) in order to provide a robust and fast data channel, which is usually split into logical VLANs with QoS policies attached for traffic shaping.

In my case the team contains just a single physical network adapter, but I am still allowed to build a logical switch on top of it.

In my lab each Hyper-V host is configured with a static IP address:
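If you want to double-check the addressing from PowerShell, a quick look at the IP configuration is enough (in my case the static management address sits on the team interface created above):

# Show the IP configuration of every interface on the node
Get-NetIPConfiguration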

I am going to use the New-VMSwitch cmdlet to set up my logical switch. Notice that when I execute New-VMSwitch, I set the AllowManagementOS parameter to $true, since I only have one NIC team, which I also have to use for management. If I set that parameter to $false I would lose connectivity to the host.

In the following screenshot you can see the configuration before building the logical switch: NIC Teaming is enabled and the physical NIC is now only bound to the 'Microsoft Network Adapter Multiplexor Protocol':

Here's the syntax to build the virtual switch that all the node traffic will flow through:

New-VMSwitch -Name vSwitch -NetAdapterName Team1 -AllowManagementOS:$true -MinimumBandwidthMode Weight
When you run this cmdlet, the NIC team 'gives' its static IP addressing to the virtual adapter that gets created and then binds itself to the logical switch. That's why the only checkbox that is ticked for the NIC team after building the logical switch is the one for 'Hyper-V Extensible Virtual Switch':
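You can verify the same thing from PowerShell; a quick check, assuming the team is named Team1 as above:

# Show only the bindings that are still enabled on the team interface
Get-NetAdapterBinding -Name Team1 | Where-Object Enabled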

Then follows the configuration of the additional virtual adapters as well as a bit of traffic shaping as suggested by some well-known best practices:

Set-VMSwitch vSwitch -DefaultFlowMinimumBandwidthWeight 10
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName vSwitch
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName vSwitch
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI" -SwitchName vSwitch
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 40
Here's a view of the new virtual adapters:
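The same list can be pulled out with PowerShell if you are working on a Core installation:

# List the virtual network adapters exposed to the management OS
Get-VMNetworkAdapter -ManagementOS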

These new adapters get their IP configuration from DHCP by default. In my case I want to explicitly declare their addressing so that they are on different subnets (otherwise they won't be seen by the cluster and you'll get errors in the cluster validation process):
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress -PrefixLength "24"
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress -PrefixLength "24"
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI)" -IPAddress -PrefixLength "24"
As you can see, I put the iSCSI adapters on the same subnet as my StarWind iSCSI targets:

I repeat the same operation on the second Hyper-V node and I am good to go for setting up my iSCSI initiators.


Now bring up the iSCSI initiator configuration panel and set up the links between all of your Hyper-V servers (initiators) and all of your StarWind servers (targets) on the network dedicated to iSCSI traffic (192.168.77.X in my case):


On my first node I have:

And on my second node:

The next step is to connect the targets with multi-pathing enabled.
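If you would rather script this part, the iSCSI initiator cmdlets can do the same job. Here is a minimal sketch, where the two portal addresses are just examples of StarWind node addresses on my 192.168.77.X iSCSI network (replace them with your own):

# Make sure the iSCSI initiator service is running and starts automatically
Set-Service MSiSCSI -StartupType Automatic
Start-Service MSiSCSI
# Register the StarWind portals, then connect every discovered target with MPIO enabled
New-IscsiTargetPortal -TargetPortalAddress 192.168.77.201
New-IscsiTargetPortal -TargetPortalAddress 192.168.77.202
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true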

We are almost done. With Get-Disk I can check that both of my compute nodes can see the backend storage:
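In case you want to filter out the local disks, something along these lines does the trick:

# Show only the disks that are presented over iSCSI
Get-Disk | Where-Object BusType -eq 'iSCSI'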

Now move to Disk Management and initialize these iSCSI disks:
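This step can also be scripted; here is a sketch (to be run from one node only) that brings each raw iSCSI disk online, initializes it as GPT and formats it with NTFS:

# Initialize and format every iSCSI disk that is still raw
Get-Disk | Where-Object { $_.BusType -eq 'iSCSI' -and $_.PartitionStyle -eq 'RAW' } | ForEach-Object {
    Set-Disk -Number $_.Number -IsOffline $false
    Initialize-Disk -Number $_.Number -PartitionStyle GPT -PassThru |
        New-Partition -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -Confirm:$false
}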

Time to build the Hyper-V cluster.


Nothing is easier now than to set up the cluster and bring the disks online:
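In PowerShell terms, this boils down to a validation pass followed by New-Cluster; the node names and the cluster IP below are just examples, so adapt them to your environment:

# Validate the configuration, then build the cluster with a static administrative IP
Test-Cluster -Node HV1, HV2
New-Cluster -Name HVCLUSTER -Node HV1, HV2 -StaticAddress 192.168.1.200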

The three disks are automatically added to the Hyper-V cluster. The smallest one is automatically selected to act as the quorum disk. The other two have to be manually added to the CSV, so that the CSV creates a unified namespace (under C:\ClusterStorage) that enables highly available workloads to transparently fail over to the other cluster node if a server fails or is taken offline:
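Adding the two data disks to the CSV is one more line; the resource names below are the defaults Windows assigns, so check yours with Get-ClusterResource first:

# Promote the two data disks to Cluster Shared Volumes
Add-ClusterSharedVolume -Name 'Cluster Disk 2', 'Cluster Disk 3'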

Check now that all your virtual network adapters appear in the Networks view:
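The same information is available from PowerShell:

# List the cluster networks with their role and subnet
Get-ClusterNetwork | Format-Table Name, Role, Address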

That's about it for the moment. We now have two backend nodes serving highly available storage from their local disks to a couple of Hyper-V nodes: four nodes in total, with full resiliency on the front-end servers as well as on the back-end servers.

We can definitely say that we have achieved a fault-tolerant design thanks to StarWind Virtual SAN and Microsoft Hyper-V in a 'Compute and Storage' scenario.

In the next post I will run some VM workloads against this storage and see how fast it is. I will also test the StarWind high availability mechanism and see how it responds to a hardware failure.

Stay tuned for more.
