Thursday, December 17, 2015

How to set up a highly available cluster with Hyper-V and StarWind Virtual SAN - part 2

In the previous post I showed you how to install StarWind Virtual SAN on the backend storage of a Compute and Storage (4-node) environment based on Windows Server 2016 Technical Preview 4. It is now time to proceed with configuring StarWind as a provider of a resilient iSCSI target.

There are two main steps: setting up volumes as devices on the first node, then replicating the data to the second node. StarWind makes the process effortless, since the iSCSI target setup is performed by the engine for you.


Open the StarWind console and connect to both hosts. This is easily accomplished, and you can even use automatic discovery, which tries to detect which of your servers are running the StarWind service on port 3261:
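If the automatic discovery does not find your hosts, a quick sanity check is to test the management port from PowerShell. The host names sw-node1 and sw-node2 below are just placeholders for my two storage nodes:

# Check that the StarWind management port (3261) answers on both storage nodes
'sw-node1', 'sw-node2' | ForEach-Object {
    Test-NetConnection -ComputerName $_ -Port 3261 |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}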

When you click Connect, you can choose the path where the StarWind service will keep the disk image files it exports:

Now, on the first host, click 'Add Device' and add a first disk for hosting your Hyper-V virtual machines. This will become the first CSV in the Hyper-V cluster. Since it is a best practice to have at least one CSV per cluster node, repeat the step once more for the second CSV, then add a third, smaller disk (1 GB will be enough) which will serve as the cluster quorum disk. Notice that when you set up a device, StarWind Virtual SAN will automatically set up the iSCSI target for it.

That was pretty quick. Now let's set up synchronous replication between the two backend nodes.


Simply open each device (CSV1, CSV2 and Quorum) and click the Replication Manager link. The replication wizard will walk you through setting up a two-way replica, which is your best option in case one storage node goes down:

In the following screenshot I had to set up four dedicated network links, one for each type of traffic. Since in this lab I only have a single NIC per server, I assigned multiple IP addresses to each card with the New-NetIPAddress PowerShell cmdlet. This is certainly not an ideal architecture: a single NIC is not robust (it is a single point of failure) and will probably perform poorly, since all kinds of traffic go down the same wire regardless of the expected bandwidth. Basically what I am doing here is a kind of hyper-converged, low-cost networking, which is completely fine for a lab.

I am dedicating the 192.168.2.x network to heartbeat and the 192.168.3.x network to synchronization.

The last interface, on the 192.168.77.x network, is the one I will use for iSCSI traffic between the compute nodes (Hyper-V) and the storage nodes (StarWind). Make sure that network is not ticked here.
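For reference, here is a minimal sketch of the commands I used to add the extra addresses to the single NIC. The interface alias and the host part of each address are examples from my lab, so adapt them to your own scheme:

# Add one extra IP address per traffic type to the single NIC
# (run on each storage node, adjusting the last octet)
$nic = 'Ethernet'
New-NetIPAddress -InterfaceAlias $nic -IPAddress 192.168.2.41 -PrefixLength 24   # heartbeat
New-NetIPAddress -InterfaceAlias $nic -IPAddress 192.168.3.41 -PrefixLength 24   # sync
New-NetIPAddress -InterfaceAlias $nic -IPAddress 192.168.77.41 -PrefixLength 24  # iSCSI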

In a real-world scenario you should have at least four 10GbE physical adapters per Virtual SAN server: one each for management, sync, heartbeat and iSCSI traffic. You would also add NIC teaming for your critical paths, or set up true converged networking with QoS, jumbo frames and all that stuff.
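Just to give an idea, here is a rough sketch of what the teaming and jumbo frames part could look like in PowerShell. The adapter names are placeholders, and the exact jumbo frame value depends on your NIC driver and switches:

# Team two physical adapters for a critical path (adapter names are placeholders)
New-NetLbfoTeam -Name 'Team-Sync' -TeamMembers 'NIC3', 'NIC4' -TeamingMode SwitchIndependent

# Enable jumbo frames on the iSCSI adapter (value depends on the NIC driver)
Set-NetAdapterAdvancedProperty -Name 'NIC-iSCSI' -RegistryKeyword '*JumboPacket' -RegistryValue 9014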

StarWind Virtual SAN will now take care of synchronizing data between the two hosts:

Here's a screenshot of the files used by StarWind Virtual SAN to host my volumes on the first node:

An exact copy of these files is hosted on the second node:
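If you prefer to double-check both copies from PowerShell instead of the GUI, something along these lines will do; the path and computer names are just the ones from my lab:

# List the StarWind image files on both storage nodes to verify the copies
Invoke-Command -ComputerName 'sw-node1', 'sw-node2' -ScriptBlock {
    Get-ChildItem -Path 'C:\StarWind' -File
} | Select-Object PSComputerName, Name, Length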

Once the three volumes are properly configured, replicated and exported as iSCSI targets (the Storage part of the architecture), you can move on to configuring your Hyper-V servers (the Compute part of our lab, as the hypervisors are commonly referred to these days) as iSCSI initiators.

I'll take you through this in the next post. Stay tuned.
