Monday, December 21, 2015

How to setup a highly available cluster with Hyper-V and Starwind Virtual SAN - part 3

In the previous post we configured StarWind Virtual SAN, and we are now moving on to setting up our Hyper-V servers as iSCSI initiators that mount the highly available backend storage for our cluster.



Basically there are 5 big steps:
  1. install and configure MPIO for iSCSI
  2. add the Hyper-V role and the failover clustering feature
  3. set up a virtual switch for Converged Networking
  4. configure iSCSI initiators to connect to iSCSI targets
  5. set up the Hyper-V cluster

1 - MPIO SETUP

This part will require two restarts of your Hyper-V servers.

The installation of the Multipath-IO Feature is done on both hypervisors through PowerShell:
Get-WindowsFeature Multipath-IO | Install-WindowsFeature
Reboot both servers.

Now on both nodes open the MPIO control panel (this interface is available on Core versions too), which you can access by typing:

mpiocpl

In the MPIO dialog choose the Discover Multi-Paths tab and then check the 'Add support for iSCSI devices' option.


The servers will now reboot for the second time.

The same result can be obtained much more simply with a line of PowerShell code. Isn't that fantastic?

Enable-MSDSMAutomaticClaim -BusType iSCSI

When they restart, the MPIO Devices tab lists the additional hardware ID "MSFT2005iSCSIBusType_0x9", which means that all iSCSI bus-attached devices will be claimed by the Microsoft DSM.
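
If you want to double-check this from PowerShell, the MPIO module can list the hardware IDs claimed by the Microsoft DSM (a quick verification, to be run on both nodes):

# Should list the MSFT2005 / iSCSIBusType_0x9 entry after the second reboot
Get-MSDSMSupportedHW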

2 - ADDING HYPER-V AND CLUSTERING

In Windows 2016 TP4, the installation of roles and features can be achieved either in PowerShell or with the GUI. This is entirely up to you, as the outcome will be the same. My only suggestion is to skip the setup of virtual switches here, even though the interface asks you to do it. We will configure these switches later in PowerShell for fine-grained control.
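
For reference, here is a minimal PowerShell sketch that installs the role and the feature with their management tools and reboots the node when required:

# Run on both Hyper-V nodes; -Restart reboots automatically if needed
Install-WindowsFeature -Name Hyper-V, Failover-Clustering -IncludeManagementTools -Restart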


3 - SETUP OF CONVERGED NETWORK ADAPTERS

Start by configuring NIC teaming on your physical network cards. In my lab I only have one NIC, so this step is not necessary, but I will do it all the same, so that if in the future I add a secondary NIC, I can increase my bandwidth and availability without impacting the rest of the configuration:
New-NetLbfoTeam -TeamMembers 'Ethernet' -LoadBalancingAlgorithm HyperVPort -TeamingMode SwitchIndependent -Name Team1
Since your two Compute nodes have Hyper-V, Clustering and NIC Teaming, you can leverage the Hyper-V cmdlets to build your Converged Network Adapters. Basically a converged network adapter is a flexible pool of physical NICs that are joined together (their bandwidth is combined) in order to provide a robust and fast data channel, which is usually split into logical networks with QoS weights attached for traffic shaping.

In my case the team contains a single physical network adapter, but I am still allowed to build a logical switch on top of it.

In my lab each Hyper-V host is configured with a static IP address:


I am going to use the New-VMSwitch cmdlet to set up my logical switch. Notice that when I execute New-VMSwitch, I set the AllowManagementOS parameter to $true, since I have only one NIC team and it also has to carry management traffic. If I set that parameter to $false I would lose connectivity on the host.

In the following screenshot you can see the configuration before building the logical switch. You can see that NIC Teaming is activated and the physical NIC is now only bound to the 'Microsoft Network Adapter Multiplexor Protocol':



Here's the syntax to build the virtual switch through which all the node traffic will flow:

New-VMSwitch -Name vSwitch -NetAdapterName Team1 -AllowManagementOS $true -MinimumBandwidthMode Weight
When you run this cmdlet, the NIC team 'gives' its static IP addressing to the virtual adapter that gets created and then binds itself to the logical switch. That's why the only checkbox that remains ticked for the NIC team after building the logical switch is the one for 'Hyper-V Extensible Virtual Switch':



Then follows the configuration of the additional adapters, as well as a bit of traffic shaping as suggested by some well-known best practices:

Set-VMSwitch vSwitch -DefaultFlowMinimumBandwidthWeight 10
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName vSwitch
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName vSwitch
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI" -SwitchName vSwitch
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 40
Here's a view of the new virtual adapters:


These new adapters get their IP configuration from DHCP by default. In my case I want to explicitly declare their addressing so that they sit on different subnets (otherwise they won't be seen by the cluster and you'll get errors in the cluster validation process):
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress 10.0.1.201 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress 172.0.1.201 -PrefixLength 24
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI)" -IPAddress 192.168.77.201 -PrefixLength 24
As you can see, I put the iSCSI adapters on the same subnet as my StarWind iSCSI targets: 192.168.77.0.

I repeat the same operation on the second Hyper-V node and I am good to go for setting up my iSCSI initiators.

4 - CONFIGURING ISCSI INITIATORS

Now bring up the iSCSI initiator configuration panel and set up the links from all of your Hyper-V servers (initiators) to all of your StarWind servers (targets) on the network dedicated to iSCSI traffic (192.168.77.x in my case):

iscsicpl



On my first node I have:

And on my second node:

The next step is to connect the targets with multi-pathing enabled.
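
The same connection can also be scripted with the iSCSI cmdlets. Here is a hedged sketch: the portal addresses are placeholders for your two StarWind nodes, and every discovered target is connected persistently with multipath enabled:

# Register both StarWind portals (addresses are placeholders, replace with your storage nodes)
New-IscsiTargetPortal -TargetPortalAddress 192.168.77.101
New-IscsiTargetPortal -TargetPortalAddress 192.168.77.102

# Connect every discovered target persistently, with MPIO enabled
Get-IscsiTarget | Where-Object { -not $_.IsConnected } | ForEach-Object {
    Connect-IscsiTarget -NodeAddress $_.NodeAddress -IsPersistent $true -IsMultipathEnabled $true
}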

We are almost done. With Get-Disk I can check that both of my compute nodes can see the backend storage:



Now move to Disk Management and initialize these iSCSI disks:
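
This can also be done in PowerShell from one of the nodes. A minimal sketch, which acts on every disk that is still RAW (so make sure only the new iSCSI disks are uninitialized at this point):

# Bring the new iSCSI disks online, then initialize, partition and format every RAW disk
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' } | Set-Disk -IsOffline $false
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem NTFS -Confirm:$false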




Time to build the Hyper-V cluster.

5 - THE HYPER-V CLUSTER

Nothing is easier now than to set up the cluster and bring the disks online:
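
Here is a hedged sketch of the PowerShell way; the node names and the cluster name and IP below are examples, so adjust them to your environment:

# Validate the configuration first, then create the cluster (names and IP are examples)
Test-Cluster -Node HV-NODE1, HV-NODE2
New-Cluster -Name HVCLUSTER -Node HV-NODE1, HV-NODE2 -StaticAddress 10.0.0.200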

The three disks are automatically added to the Hyper-V cluster. The smaller one is automatically selected to act as the quorum disk. The other two have to be manually added as Cluster Shared Volumes, so that they form a unified namespace (under C:\ClusterStorage) that enables highly available workloads to transparently fail over to the other cluster node if a server fails or is taken offline:
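
Promoting the two data disks to CSVs can be done from Failover Cluster Manager or with a single line of PowerShell; the disk resource names below are the defaults and may differ in your cluster:

# Promote the two data disks to Cluster Shared Volumes (resource names may differ)
Add-ClusterSharedVolume -Name 'Cluster Disk 1', 'Cluster Disk 2'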

Check now that all your virtual network adapters appear in the Networks view:
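
The same information is available from PowerShell, which also shows the role assigned to each cluster network:

Get-ClusterNetwork | Format-Table Name, Role, Address, AddressMask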


That’s about it for the moment. We now have two backend nodes serving highly available storage from their local disks to a couple of Hyper-V nodes. Four nodes in total, with full resiliency on the front-end servers as well as on the back-end servers.

We can definitely say that we have achieved a fault-tolerant design thanks to StarWind Virtual SAN and Microsoft Hyper-V in a 'Compute and Storage' scenario.


In a future post I will run some VM workloads on this storage and see how fast it is. I will also test the StarWind high availability mechanism and see how it responds to hardware failure.


Stay tuned for more.

Thursday, December 17, 2015

New PowerShell cmdlets in Windows 2016 TP4

I have been playing with Windows 2016 Technical Preview 4 for some weeks now. I wanted to share with you a quick post listing all the new cmdlets that have been added since Windows 2016 Technical Preview 3.
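
For the curious, this kind of comparison can be produced by exporting the cmdlet inventory of each build and diffing the two exports. Here is a minimal sketch (the file paths are just examples, and this is not necessarily how the table below was generated):

# On each build, export the full cmdlet inventory (path is an example)
Get-Command -CommandType Cmdlet, Function |
    Select-Object Name, ModuleName |
    Export-Csv C:\Temp\cmdlets-TP4.csv -NoTypeInformation

# Then, on one machine, diff the two exports
$tp3 = Import-Csv C:\Temp\cmdlets-TP3.csv
$tp4 = Import-Csv C:\Temp\cmdlets-TP4.csv
Compare-Object $tp3 $tp4 -Property Name, ModuleName |
    Sort-Object ModuleName, Name |
    Format-Table ModuleName, Name, SideIndicator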

| Module | New cmdlets | WS 2016 TP4 | WS 2016 TP3 | Cmdlets added (+) / removed (-) |
|---|---|---|---|---|
| | 0 | 38 | 38 | |
| AppBackgroundTask | 6 | 6 | 0 | + Disable-AppBackgroundTaskDiagnosticLog, + Enable-AppBackgroundTaskDiagnosticLog, + Get-AppBackgroundTask, + Set-AppBackgroundTaskResourcePolicy, + Start-AppBackgroundTask, + Unregister-AppBackgroundTask |
| AppLocker | 0 | 5 | 5 | |
| Appx | 0 | 14 | 14 | |
| AssignedAccess | 3 | 3 | 0 | + Clear-AssignedAccess, + Get-AssignedAccess, + Set-AssignedAccess |
| BestPractices | 0 | 4 | 4 | |
| BitsTransfer | 0 | 8 | 8 | |
| BranchCache | 0 | 32 | 32 | |
| CimCmdlets | 0 | 14 | 14 | |
| CIPolicy | 0 | 1 | 1 | |
| ConfigCI | 1 | 12 | 11 | + Set-CIPolicyVersion |
| Defender | 1 | 12 | 11 | + Start-MpWDOScan |
| DirectAccessClientComponents | 0 | 11 | 11 | |
| Dism | 0 | 43 | 43 | |
| DnsClient | 0 | 17 | 17 | |
| EventTracingManagement | 0 | 14 | 14 | |
| International | 0 | 18 | 18 | |
| iSCSI | 0 | 13 | 13 | |
| IscsiTarget | 0 | 28 | 28 | |
| ISE | 0 | 3 | 3 | |
| Kds | 0 | 6 | 6 | |
| Microsoft.PowerShell.Archive | 0 | 2 | 2 | |
| Microsoft.PowerShell.Core | 2 | 62 | 60 | + Get-PSSessionCapability, + New-PSRoleCapabilityFile |
| Microsoft.PowerShell.Diagnostics | 0 | 5 | 5 | |
| Microsoft.PowerShell.Host | 0 | 2 | 2 | |
| Microsoft.PowerShell.Management | 0 | 86 | 86 | |
| Microsoft.PowerShell.ODataUtils | 0 | 1 | 1 | |
| Microsoft.PowerShell.Security | 0 | 13 | 13 | |
| Microsoft.PowerShell.Utility | 2 | 107 | 105 | + ConvertFrom-SddlString, + Import-PowerShellDataFile |
| Microsoft.WSMan.Management | 0 | 13 | 13 | |
| MMAgent | 0 | 5 | 5 | |
| MsDtc | 0 | 41 | 41 | |
| NetAdapter | 0 | 68 | 68 | |
| NetConnection | 0 | 2 | 2 | |
| NetEventPacketCapture | 0 | 27 | 27 | |
| NetLbfo | 0 | 13 | 13 | |
| NetNat | 0 | 13 | 13 | |
| NetQos | 0 | 4 | 4 | |
| NetSecurity | 0 | 85 | 85 | |
| NetSwitchTeam | 0 | 7 | 7 | |
| NetTCPIP | 0 | 34 | 34 | |
| NetworkConnectivityStatus | 0 | 4 | 4 | |
| NetworkSwitchManager | 0 | 19 | 19 | |
| NetworkTransition | 0 | 34 | 34 | |
| NFS | 0 | 42 | 42 | |
| PackageManagement | 3 | 13 | 10 | + Find-PackageProvider, + Import-PackageProvider, + Install-PackageProvider |
| PcsvDevice | 0 | 9 | 9 | |
| Pester | 0 | 20 | 20 | |
| PKI | 0 | 17 | 17 | |
| PlatformIdentifier | 1 | 1 | 0 | + Get-PlatformIdentifier |
| PnpDevice | 0 | 4 | 4 | |
| PowerShellGet | 12 | 23 | 11 | + Find-DscResource, + Find-Script, + Get-InstalledScript, + Install-Script, + New-ScriptFileInfo, + Publish-Script, + Save-Script, + Test-ScriptFileInfo, + Uninstall-Script, + Update-ModuleManifest, + Update-Script, + Update-ScriptFileInfo |
| PrintManagement | 0 | 22 | 22 | |
| PSDesiredStateConfiguration | -1 | 17 | 18 | - Find-DscResource |
| PSDiagnostics | 0 | 10 | 10 | |
| PSReadline | 0 | 6 | 6 | |
| PSScheduledJob | 0 | 16 | 16 | |
| PSWorkflow | 0 | 2 | 2 | |
| PSWorkflowUtility | 0 | 1 | 1 | |
| RemoteDesktop | 0 | 78 | 78 | |
| ScheduledTasks | 0 | 19 | 19 | |
| SecureBoot | 0 | 5 | 5 | |
| ServerCore | 0 | 2 | 2 | |
| ServerManager | 0 | 7 | 7 | |
| ServerManagerTasks | 0 | 11 | 11 | |
| SmbShare | 0 | 35 | 35 | |
| SmbWitness | 0 | 3 | 3 | |
| SoftwareInventoryLogging | 0 | 11 | 11 | |
| StartLayout | 0 | 3 | 3 | |
| Storage | 3 | 150 | 147 | + Get-StorageFirmwareInformation, - Get-StorageHealth, + Get-StorageHealthAction, + Get-StorageHealthReport, + Update-StorageFirmware |
| TLS | 0 | 7 | 7 | |
| TroubleshootingPack | 0 | 2 | 2 | |
| TrustedPlatformModule | 0 | 11 | 11 | |
| UserAccessLogging | 0 | 14 | 14 | |
| VpnClient | 0 | 19 | 19 | |
| Wdac | 0 | 12 | 12 | |
| Whea | 0 | 2 | 2 | |
| WindowsDeveloperLicense | 0 | 3 | 3 | |
| WindowsErrorReporting | 0 | 3 | 3 | |
| WindowsSearch | 0 | 2 | 2 | |
| WindowsUpdate | 0 | 1 | 1 | |

Hope this was interesting. Stay tuned for more PowerShell!

How to setup a highly available cluster with Hyper-V and Starwind Virtual SAN - part 2

In the previous post I showed you how to install StarWind Virtual SAN on the backend storage nodes of a Compute and Storage (4 nodes) environment based on Windows 2016 Technical Preview 4. It is now time to proceed with configuring StarWind as a provider of a resilient iSCSI target.



There are two main steps: setting up volumes as devices on the first node, then replicating data to the second node. StarWind makes the process effortless, since the iSCSI target setup is performed by the engine for you.

CONFIGURING VIRTUAL SAN SERVERS

Open the StarWind console and connect to both hosts. This is easily accomplished, and there is even an automatic discovery feature which tries to detect which of your servers are running the StarWind service on port 3261:


When you click Connect, you can choose the path where the StarWind service will keep the disk image files that it exports:


Now on the first host click on 'Add Device' and add a first disk for hosting your Hyper-V virtual machines. This will become the first CSV in the Hyper-V cluster. Since it is a best practice to have at least one CSV per cluster node, repeat the step once more for the second CSV, then add a third, smaller disk (1 GB will be enough) which will be used for the cluster quorum. Notice that when you set up a device, StarWind Virtual SAN will automatically set up the iSCSI target for it.









That was pretty quick. Now let's set up synchronous replication between the two backend nodes.

REPLICA SETUP

Simply open each device (CSV1, CSV2 and Quorum) and click on the Replication Manager link. The replication wizard will easily guide you in setting up a two-way replica, which is your best option in case one storage node goes down:






In the following screenshot I set up dedicated network links for each type of traffic. Since in this lab I just have a single NIC per server, I forced multiple IP addresses onto each card with the help of the New-NetIPAddress PowerShell cmdlet. This is for sure not an ideal architecture, since having only one NIC is not robust (it's a single point of failure) and will probably perform poorly: all kinds of traffic will go down the same wire no matter the expected bandwidth. Basically what I am doing here is some kind of hyper-converged low-cost networking, which is completely fine for me.

I am dedicating the 192.168.2 channel to heartbeat and the 192.168.3 channel to sync.

The last interface on 192.168.77 is the one that I will use for iSCSI traffic between the Compute nodes (Hyper-V) and the Storage Node (StarWind). Make sure this is not ticked here.
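
For reference, here is a minimal sketch of how those extra addresses can be stacked on the single NIC of each storage node; the interface alias and the host parts of the addresses are examples, not necessarily the ones from my lab:

# One extra address per traffic type on the same physical NIC (example values)
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.2.10 -PrefixLength 24    # heartbeat
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.3.10 -PrefixLength 24    # sync
New-NetIPAddress -InterfaceAlias 'Ethernet' -IPAddress 192.168.77.10 -PrefixLength 24   # iSCSI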



In a real-world scenario you should have at least four 10GbE physical adapters per Virtual SAN server: for management, sync, heartbeat and iSCSI traffic. And you would also add teaming for your critical paths, or set up true Converged Networking with QoS, jumbo frames and all that stuff.

StarWind Virtual SAN will now take care of synching data between the two hosts:






Here's a screenshot of the files used by StarWind Virtual SAN to host my volumes on the first node:



An exact copy of these files is hosted on the second node:



Once the three volumes are properly configured, replicated and exported as iSCSI targets (the Storage part of the architecture), you can move on to configuring your Hyper-V servers (the Compute part of our lab, as the hypervisors are commonly referred to these days) as iSCSI initiators.

I'll take you through this in the next post. Stay tuned.

Tuesday, December 15, 2015

How to setup a highly available cluster with Hyper-V and Starwind Virtual SAN - part 1

For a long time I have wanted to build a new lab for doing my tests around all the new stuff that regularly comes out in IT. One month ago, when I got back from the Microsoft MVP Summit, which by the way was an incredible human experience, I bought a good bunch of new hardware to extend the capabilities of my pre-existing infrastructure.

Once I set everything up, using of course the latest preview of my favorite operating system, Windows 2016 Technical Preview 4 on the server side, and Windows 10 on the client side, I started thinking about how I could build a virtual environment that is highly available both on the hypervisor tier and on the storage tier.

What we all know is that building a virtual infrastructure on top of a failover cluster is the way to prevent a host server from becoming a single point of failure in case of a hardware fault.

The problem is that, when building HA clusters, your hosted application availability is only as good as its weakest link.

Without some sort of HA mechanism, storage becomes the weakest link in the cluster setup because if the shared storage fails the whole cluster becomes unavailable, no matter how many iSCSI target servers you have.

THE QUEST FOR CHEAP AND HIGHLY AVAILABLE STORAGE

So, while I kept telling myself that I wanted a SAN, I also wanted to be able to configure highly available iSCSI storage without investing in any kind of expensive hardware, like JBOD trays or dual-controller SAS arrays.

Then, in the era of Software Defined Everything, I started to look at a software-defined SAN, and found out that Virtual SAN by StarWind Software is the best option here, since it integrates perfectly with Windows Server, providing redundancy for the storage backend of my iSCSI target servers without the need to buy new hardware.

The basic concept of StarWind Virtual SAN is really interesting: you re-use your spare local Windows storage in a synchronously mirrored way and expose it through iSCSI to the Hyper-V servers.

STARWIND VIRTUAL SAN SCENARIOS

There are two possible scenarios: Hyper-converged or Compute and Storage.

The Hyper-Converged scenario is the only rational option when you want to build a fully redundant Hyper-V cluster and only have two physical nodes: the local storage on both servers is managed by StarWind Virtual SAN, which mirrors it between the two nodes over the network and provides it to the Hyper-V cluster as a Cluster Shared Volume (CSV). The software reads and writes data locally, so there is no storage IO going through the network links, apart from the synchronous replication between the two nodes.

In order to provide HA and resiliency you need both a heartbeat network (for cluster monitoring) and a synchronization network (for data replication).



In my case I had four physical nodes at hand, and I intended to use them all. So I opted for the Compute and Storage scenario, with:
  • two backend servers mirroring their cheap internal storage through StarWind Virtual SAN and acting as iSCSI targets (Storage layer)
  • two front-end hypervisors in a cluster acting as iSCSI initiators and mounting the backend storage as a CSV (Compute layer)
This allows me to keep CPU cycles separated from IO operations, spreading the different workloads across two scalable layers:


Let’s start playing with it.

SETTING UP STARWIND VIRTUAL SAN FOR COMPUTE AND STORAGE SCENARIO

Just download the installer (a small exe file), which contains both the Management Console and the actual Virtual SAN service. The installation is pretty straightforward. Let's fire it up on each backend node:


Accept the license agreement:



Once you get to the information screen, you find a good summary of the features of the latest release (V8):
  • New Log-structured File System (LSFS) container, with thin provisioning, snapshots, optional deduplication, synchronous and asynchronous replication
  • Flash cache to accelerate LSFS
  • New management console
  • Snapshots for high-availability devices
  • VAAI command support
  • NUMA-aware resource management
  • SMI-S support for single-node and multi-node devices

Choose installation folder:




Choose the components to install. A quick but important note here: the list of components is different between Core and Desktop Experience versions of Windows, because the Management Console only appears on Windows versions with a GUI. Since you need the console to set up the mirroring, you need at least one machine running Windows with Desktop Experience (typically one of your two Virtual SAN nodes).

In my case I have installed the console on a separate workstation where all my RSAT tools are. That's a fifth node for me, but you can very well stick with four nodes, configured like this:
  1. Virtual SAN and StarWind Console
  2. Virtual SAN
  3. Hyper-V
  4. Hyper-V
For a Virtual SAN node acting only as storage provider, select StarWind Virtual SAN Service:


If you choose to install the console, which is required to build the mirror, select the appropriate component:


Go through the rest of the installer:






Now you should have installed the StarWind service on both the backend storage nodes:
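
A quick way to confirm that the service is up on both nodes from a remote machine (the node names are examples, and I am assuming the Windows service name matches the process name used below):

Invoke-Command -ComputerName SW-NODE1, SW-NODE2 -ScriptBlock { Get-Service -Name StarWindService } |
    Format-Table PSComputerName, Name, Status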


You can leverage Get-NetTCPConnection (which basically replaces netstat) to check the different channels between the two mirrored storage nodes:

Get-NetTCPConnection -OwningProcess (Get-Process StarWindService).ID



Open the management console. The interface is nice and simple and the process is self-explanatory:
  1. connect to both your StarWind Virtual SAN servers
  2. add a device on the first node
  3. use Replication Manager to enable synchronous mirroring toward the second node
  4. wait for the first replication to complete
I will go through the whole procedure in the next post. Stay tuned.