Wednesday, May 4, 2016

How to parameterize a basic DSC configuration

After reading my two previous posts (here and here) you should have a good understanding of how DSC Configurations work.

At this point, it may be useful to know that within a Configuration block you can use parameters just like you would do in Functions:

Configuration TestConfig {
    Param (
        [String[]]$ComputerName
    )
    Node $ComputerName {
        WindowsFeature FileAndStorage {
            Name   = 'FileAndStorage-Services'
            Ensure = 'Present'
        }
    }
}

The aim of this Configuration is to enable the File and Storage Services feature on a given node. If we query the help for it we get:

Man TestConfig -Parameter * | Format-Table Name,IsDynamic,Required,Position,Aliases

name              isDynamic required position aliases
----              --------- -------- -------- -------
ComputerName      false     false    4        None   
ConfigurationData false     false    3        None   
DependsOn         false     false    1        None   
InstanceName      false     false    0        None   
OutputPath        false     false    2        None
There are four parameters here that automatically come with the Configuration keyword, plus one parameter that I explicitly defined in the Param () section of the script: ComputerName.

Man TestConfig -Parameter ComputerName

    Required?                    false
    Position?                    4
    Accept pipeline input?       false
    Parameter set name           (All)
    Aliases                      None
    Dynamic?                     false
Running the Configuration will generate a MOF file for the server specified in $ComputerName:

TestConfig -ComputerName 'srv1'

WARNING: The configuration 'TestConfig' is loading one or more built-in resources without explicitly importing associated modules. Add Import-DscResource –ModuleName 'PSDesiredStateConfiguration' to your configuration to avoid this message.

    Directory: E:\dsc\TestConfig

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----         5/4/2016  12:58 PM           1974 srv1.mof

We can pass more than one server to the ComputerName parameter and each will get its customized MOF file:

TestConfig -ComputerName 'srv1','srv2'
WARNING: The configuration 'TestConfig' is loading one or more built-in resources without explicitly importing associated modules. Add Import-DscResource –ModuleName 'PSDesiredStateConfiguration' to your configuration to avoid this message.

    Directory: E:\dsc\TestConfig

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----         5/4/2016   1:00 PM           1974 srv1.mof
-a----         5/4/2016   1:00 PM           1962 srv2.mof
Inside each MOF you'll find a @TargetNode:

Get-Content .\TestConfig\* | Select-String Target

Let's now try to add a second parameter, in the form of a feature that we'd like to be installed on the target servers. Before we proceed, note that the Name property of the WindowsFeature resource takes the name of the feature as returned by Get-WindowsFeature, not its display name; otherwise you'll get an ugly red error when running Start-DscConfiguration:

PowerShell DSC resource MSFT_RoleResource  failed to execute Test-TargetResource functionality with error message: The requested feature File and Storage Services is not found on the target machine. 
    + CategoryInfo          : InvalidOperation: (:) [], CimException
    + FullyQualifiedErrorId : ProviderOperationExecutionFailure
    + PSComputerName        : srv1
So, here's my Configuration block with two parameters:

Configuration TestConfig {
    Param (
        [String[]]$ComputerName,
        [String]$Feature
    )
    Node $ComputerName {
        WindowsFeature $Feature {
            Name   = $Feature
            Ensure = 'Present'
        }
    }
}

TestConfig -ComputerName 'srv1' -Feature 'FileAndStorage-Services'

    Directory: E:\dsc\TestConfig

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----         5/4/2016   1:09 PM           1998 srv1.mof

That works pretty nicely!

Now if I want to have more than one service running, the question is: is it possible to do a foreach loop in the WindowsFeature block?

The answer is yes: just change the Feature parameter type to String[] so that it can accept multiple values, and add the foreach loop around the WindowsFeature block.

Configuration TestConfig {
    Param (
        [String[]]$ComputerName,
        [String[]]$Feature
    )
    Node $ComputerName {
        foreach ($FeatureName in $Feature) {
            WindowsFeature $FeatureName {
                Name   = $FeatureName
                Ensure = 'Present'
            }
        }
    }
}

TestConfig -ComputerName 'srv1','srv2' -Feature 'FileAndStorage-Services','FS-SMB1'

man TestConfig -Parameter * | Format-Table Name,parameterValue -auto

name              parameterValue
----              --------------
ComputerName      string[]      
ConfigurationData hashtable     
DependsOn         string[]      
Feature           string[]      
InstanceName      string        
OutputPath        string
That's a good achievement for today. Let me finish this post by beautifying the output of this script and removing the orange warning:

WARNING: The configuration 'TestConfig' is loading one or more built-in resources without explicitly importing associated modules. Add Import-DscResource –ModuleName 'PSDesiredStateConfiguration' to your configuration to avoid this message.
To do so, just add the Import-DscResource magic word between the Param () section and the Node keyword:
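Putting it all together, the finished configuration might look like this sketch (same parameters as above, with the Import-DscResource line added to silence the warning):

```powershell
Configuration TestConfig {
    Param (
        [String[]]$ComputerName,
        [String[]]$Feature
    )

    # Explicitly import the built-in module to suppress the warning
    Import-DscResource -ModuleName 'PSDesiredStateConfiguration'

    Node $ComputerName {
        foreach ($FeatureName in $Feature) {
            WindowsFeature $FeatureName {
                Name   = $FeatureName
                Ensure = 'Present'
            }
        }
    }
}
```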

Feel free to share!

Monday, May 2, 2016

From monolithic to split DSC configurations

In the previous post I briefly introduced the PowerShell Configuration keyword which is used to describe desired states for your nodes.

Today, let's have a deeper look at its basic logic and see how Microsoft offers us a way to improve its usability in real-world environments.


To ease our learning process, we can take advantage of a great PowerShell ISE functionality which is called Snippets. These are reusable sections of code that you can insert into your scripts, just like templates.

The default set of snippets that comes with your ISE includes five DSC-related snippets:


In the simplest case, we will use the first snippet: DSC Configuration (simple)

configuration Name {
    # One can evaluate expressions to get the node list
    # E.g: $AllNodes.Where("Role -eq Web").NodeName
    node ("Node1","Node2","Node3") {
        # Call Resource Provider
        # E.g: WindowsFeature, File
        WindowsFeature FriendlyName {
            Ensure = "Present"
            Name   = "Feature Name"
        }

        File FriendlyName {
            Ensure          = "Present"
            SourcePath      = $SourcePath
            DestinationPath = $DestinationPath
            Type            = "Directory"
            DependsOn       = "[WindowsFeature]FriendlyName"
        }
    }
}
As you can see, the structure above matches the scripting rules I explained in the previous post, and contains two sections:
  • the list of nodes I am applying a configuration to (node1, node2 and node3)
  • the configuration to apply (a feature and a file in this example)
In my case I want to adapt this example to configure UAC and IEESC, so, after having installed the xSystemSecurity module (Install-Module or manually download from GitHub), I can easily modify the snippet to suit my needs:

configuration DisableUacIEEsc {
    # Importing the required modules
    Import-DSCResource -Module xSystemSecurity -Name xUac
    Import-DSCResource -Module xSystemSecurity -Name xIEEsc

    node ("Node1","Node2","Node3") {
        xUAC NeverNotifyAndDisableAll {
            Setting = "NeverNotifyAndDisableAll"
        }
        xIEEsc DisableIEEsc {
            IsEnabled = $false
            UserRole  = "Administrators"
        }
    }
}

DisableUacIEEsc
The last line is the actual call to the Configuration, which generates one MOF file for each target computer in a subfolder named as the configuration itself under the current path:

WARNING: The configuration 'DisableUacIEEsc' is loading one or more built-in resources without explicitly importing associated modules. Add Import-DscResource –ModuleName 'PSDesiredStateConfiguration' to your configuration to avoid this message.

    Directory: E:\DSC\DisableUacIEEsc

Mode       LastWriteTime         Length Name
----       -------------         ------ ----
-a----     4/28/2016  1:31 PM    6102 Node1.mof
-a----     4/28/2016  1:31 PM    6102 Node2.mof
-a----     4/28/2016  1:31 PM    6102 Node3.mof
Those MOF files are text files whose format has been standardized by the DMTF and that are passed to the Start-DscConfiguration cmdlet for execution:
Start-DscConfiguration -path .\DisableUacIEEsc -Wait -ComputerName Node1

A couple of notes here:
  • we are not passing the name of a single MOF but the name of folder that contains configuration settings files
  • the –Wait parameter is particularly useful here to follow the configuration application in real-time. In this case the appropriate registry keys are added to the target server. In the example below here’s how the registry keys for IEEsc appear after running Start-DscConfiguration.
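As a sketch of that verification step: the registry value below is the well-known Active Setup component for the administrators' IE ESC setting; treat the GUID as an assumption to verify in your own environment.

```powershell
# Check the IE ESC state for administrators after the configuration has run.
# GUID assumed: the documented Active Setup component id for IE ESC (admins).
$key = 'HKLM:\SOFTWARE\Microsoft\Active Setup\Installed Components\{A509B1A7-37EF-4b3f-8CFC-4F3A74704073}'
(Get-ItemProperty -Path $key).IsInstalled    # 0 means IE ESC is disabled
```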

Now that was the simplest case, which is perfect for the novice to understand how the Configuration keyword works.


I now want to show you how you can adopt a less monolithic approach and separate the configurations that you want to apply from the group of target computers you want to apply them to. There are a few evident reasons for doing so, starting from the fact that we don't want to hardcode the names of the servers in the same file used for keeping the desired configurations.
Let’s exercise a bit and convert the previous example to something more modular and agile.
A DSC Configuration like the one we built above has a ConfigurationData parameter that accepts an optional hashtable containing the information on the target computers:
Get-Command DisableUacIEEsc -Syntax

DisableUacIEEsc [[-InstanceName] <string>] [[-DependsOn] <string[]>] [[-OutputPath] <string>] [[-ConfigurationData] <hashtable>] [<CommonParameters>]
The supplied hashtable must contain at least the key AllNodes:

$MyEnvironment = @{
    AllNodes    = @();
    NonNodeData = ""
}


Name                           Value
----                           -----
AllNodes                       {}
Inside the AllNodes you have to put a list of hashtables (as simple as @{ propertyname = value }) like in the following example:
$MyEnvironment = @{
    AllNodes = @(
        @{ NodeName = "Node1" },
        @{ NodeName = "Node2" },
        @{ NodeName = "Node3" }
    )
}
Now I want to explicitly declare some of my nodes as DEV nodes, and others as PRODUCTION nodes:
$MyEnvironment = @{
    AllNodes = @(
        @{ NodeName = "Node1"; Role = 'PROD' },
        @{ NodeName = "Node2"; Role = 'DEV' },
        @{ NodeName = "Node3"; Role = 'DEV' }
    )
}

Name                           Value
----                           -----
NodeName                       Node1
Role                           PROD
NodeName                       Node2
Role                           DEV
NodeName                       Node3
Role                           DEV
When you save this file, remember to use the .psd1 extension (the same extension used for module manifests) and to remove the variable assignment at the beginning; otherwise you'll get the following error:
DisableUacIEEsc : Cannot process argument transformation on parameter 'ConfigurationData'. Failed to load the PowerShell data file 'myenv.psd1' with the following error:
At E:\DSC\myenv.psd1:1 char:1
+ $MyEnvironment = @{
+ ~~~~~~~~~~~~~~~~~~~~
Assignment statements are not allowed in restricted language mode or a Data section.
At E:\DSC\myenv.psd1:1 char:1
+ $MyEnvironment = @{
+ ~~~~~~~~~~~~~~~
A variable that cannot be referenced in restricted language mode or a Data section is being referenced. Variables that can be referenced include the following: 
$PSCulture, $PSUICulture, $true, $false, and  $null.
At E:\DSC\uacieesc2.ps1:19 char:36
+ DisableUacIEEsc -ConfigurationData myenv.psd1
+                                    ~~~~~~~~~~
    + CategoryInfo          : InvalidData: (:) [DisableUacIEEsc], ParameterBindingArgumentTransformationException
    + FullyQualifiedErrorId : ParameterArgumentTransformationError,DisableUacIEEsc
Once you have defined your environment, it is time to rewrite the configuration definition file as follows:
configuration DisableUacIEEsc {
    Import-DSCResource -Module xSystemSecurity -Name xUac
    Import-DSCResource -Module xSystemSecurity -Name xIEEsc

    node $AllNodes.Where{$_.Role -eq "DEV"}.NodeName {
        xUAC NeverNotifyAndDisableAll {
            Setting = "NeverNotifyAndDisableAll"
        }
        xIEEsc DisableIEEsc {
            IsEnabled = $false
            UserRole  = "Administrators"
        }
    }
}

DisableUacIEEsc -ConfigurationData myenv.psd1
In my case I want to disable Uac and IEEsc only on DEV servers, so I replaced the hardcoded node names with the $AllNodes automatic variable and its Where() method.
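If you want to preview which nodes such a filter will select, the same logic can be applied directly to the hashtable. A quick sketch using the example data from above:

```powershell
# Example environment data (same shape as the .psd1 file built earlier)
$MyEnvironment = @{
    AllNodes = @(
        @{ NodeName = 'Node1'; Role = 'PROD' },
        @{ NodeName = 'Node2'; Role = 'DEV' },
        @{ NodeName = 'Node3'; Role = 'DEV' }
    )
}

# The Where() method filters the node hashtables; member enumeration
# then extracts the NodeName values of the matching nodes.
$MyEnvironment.AllNodes.Where({ $_.Role -eq 'DEV' }).NodeName   # Node2, Node3
```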


In the end you have two files for a split DSC configuration against one file in case of a monolithic approach:


To finish, if you like the idea (and you should) of separating your environmental configuration from the configuration itself, but you are scared of so much typing, remember that two ISE snippets are available for split DSC configurations:

DSC Configuration Data:
@{
    AllNodes = @(
        @{
            NodeName = "Node1"
            Role     = "WebServer"
        },
        @{
            NodeName = "Node2"
            Role     = "SQLServer"
        },
        @{
            NodeName = "Node3"
            Role     = "WebServer"
        }
    )
}
# Save ConfigurationData in a file with .psd1 file extension

...and DSC Configuration (using ConfigurationData):
configuration ConfigurationName {
    # One can evaluate expressions to get the node list
    # E.g: $AllNodes.Where("Role -eq Web").NodeName
    node $AllNodes.Where{$_.Role -eq "WebServer"}.NodeName {
        # Call Resource Provider
        # E.g: WindowsFeature, File
        WindowsFeature FriendlyName {
            Ensure = "Present"
            Name   = "Feature Name"
        }

        File FriendlyName {
            Ensure          = "Present"
            SourcePath      = $SourcePath
            DestinationPath = $DestinationPath
            Type            = "Directory"
            DependsOn       = "[WindowsFeature]FriendlyName"
        }
    }
}

# ConfigurationName -ConfigurationData <path to .psd1 file>

In the next posts we will move the automation slider a bit further. Stay tuned.

Saturday, April 30, 2016

New PowerShell cmdlets in Windows 2016 TP5

I have just installed the latest technical preview of Windows 2016 and couldn't restrain myself from having a look at the new PowerShell cmdlet list.

Here's how I do it (I am logged on to Win2016tp5):
Get-Command | Export-Clixml c:\temp\2016tp5.xml
icm -ComputerName win2016tp4 {Get-Command | Export-Clixml C:\temp\2016tp4.xml}
$newcmdlet = diff (Import-Clixml .\2016tp5.xml) (Import-Clixml \\win2016tp4\c$\temp\2016tp4.xml) -Property Name
Here's what I get:
$newcmdlet | Sort Name
Name                               SideIndicator
----                               -------------
Add-LocalGroupMember               <=
Add-NetEventVFPProvider            <=
Add-NetEventVmSwitchProvider       <=
Backup-AuditPolicy                 <=
Backup-SecurityPolicy              <=
Debug-VirtualMachineQueueOperation <=
Disable-LocalUser                  <=
Disable-StorageMaintenanceMode     <=
Disable-TlsEccCurve                <=
Enable-LocalUser                   <=
Enable-StorageMaintenanceMode      <=
Enable-TlsEccCurve                 <=
Find-Command                       <=
Find-RoleCapability                <=
Get-CustomerRoute                  <=
Get-LocalGroup                     <=
Get-LocalGroupMember               <=
Get-LocalUser                      <=
Get-NetEventVFPProvider            <=
Get-NetEventVmSwitchProvider       <=
Get-PACAMapping                    <=
Get-ProviderAddress                <=
Get-TlsEccCurve                    <=
Invoke-AppxPackageCommand          <=
New-LocalGroup                     <=
New-LocalUser                      <=
Remove-LocalGroup                  <=
Remove-LocalGroupMember            <=
Remove-LocalUser                   <=
Remove-NetEventVFPProvider         <=
Remove-NetEventVmSwitchProvider    <=
Remove-RDDatabaseConnectionString  <=
Rename-LocalGroup                  <=
Rename-LocalUser                   <=
Restore-AuditPolicy                <=
Restore-SecurityPolicy             <=
Set-LocalGroup                     <=
Set-LocalUser                      <=
Set-NetEventVFPProvider            <=
Set-NetEventVmSwitchProvider       <=
Test-EncapOverheadValue            <=
Test-LogicalNetworkConnection      <=
Test-VirtualNetworkConnection      <=
As you can see, there are a bunch of new cmdlets for local accounts management:
Get-Command | ? Source -eq 'Microsoft.PowerShell.LocalAccounts'

CommandType Name                    Version Source
----------- ----                    ------- ------
Cmdlet      Add-LocalGroupMember Microsoft.PowerShell.LocalAccounts
Cmdlet      Disable-LocalUser Microsoft.PowerShell.LocalAccounts
Cmdlet      Enable-LocalUser Microsoft.PowerShell.LocalAccounts
Cmdlet      Get-LocalGroup Microsoft.PowerShell.LocalAccounts
Cmdlet      Get-LocalGroupMember Microsoft.PowerShell.LocalAccounts
Cmdlet      Get-LocalUser  Microsoft.PowerShell.LocalAccounts
Cmdlet      New-LocalGroup Microsoft.PowerShell.LocalAccounts
Cmdlet      New-LocalUser  Microsoft.PowerShell.LocalAccounts
Cmdlet      Remove-LocalGroup Microsoft.PowerShell.LocalAccounts
Cmdlet      Remove-LocalGroupMember Microsoft.PowerShell.LocalAccounts
Cmdlet      Remove-LocalUser Microsoft.PowerShell.LocalAccounts
Cmdlet      Rename-LocalGroup Microsoft.PowerShell.LocalAccounts
Cmdlet      Rename-LocalUser Microsoft.PowerShell.LocalAccounts
Cmdlet      Set-LocalGroup Microsoft.PowerShell.LocalAccounts
Cmdlet      Set-LocalUser  Microsoft.PowerShell.LocalAccounts
Though these cmdlets are pretty self-explanatory (remember the concept of discoverability?), they add nicely to the current set of cmdlets and make your servers even more manageable.

PowerShell, an always-evolving language!

Wednesday, April 27, 2016

First steps with Microsoft Desired State Configuration

It's been a long time since Microsoft paved the way for Desired State Configuration, and this technology is spreading pretty fast among system admins. Rare are the people who have not heard of it: no matter if you are a PowerShell expert or just someone making your first steps with it, DSC is one of the best features in the language since Windows 2012 R2 and PowerShell 4.0, and a lot of us are already implementing it.

But this is only partially true. It isn't hard to see a great difference in the speed of adoption between the US, which is always moving pretty fast toward everything new (fellow MVP Mike F. Robbins has almost half of his audience at the PowerShell & DevOps Summit using DSC!), and good old Europe (where I live and work): here most system admins around me are still a bit lost when it comes to using a shell language to administer their systems (luckily with exceptions, such as fellow MVP Fabien Dibot, who is doing a great job as an evangelist for everything Cloud in France).

For sure Windows PowerShell's rise has been incredibly fast, with five major versions in nine years, and most of us weren't ready for the change. But, hey, the change came, so why keep hesitating and risk being left behind for good?

So now the question is how you get started with DSC. Well, the answer is not so easy. There are for sure a lot of resources out there, but it's complicated to find a good starting point. When you start looking at it, a lot of terms gravitate around DSC and make understanding harder: you have Pester, GitHub, modules, resources, you have the PowerShell Gallery and a lot of stuff starting with x's, and you have Pull and Push configurations and DSC resources to configure DSC itself. And finally you have DSC for Azure. Feeling left behind can definitely happen here.

So, thinking of all DSC newbies, I decided to write a simple blog post to introduce DSC in a simple way. I won't be talking about Push and Pull models, nor about Partial Configurations or cross-computer synchronization. And I won't try to explain to you the difference between GPOs, SCCM and DSC (fellow MVP Stephen Owen does a good job of explaining it all in his blog post 'DSC vs. GPO vs. SCCM, the case for each.').

Everything starts with a keyword: Configuration.
Get-Command Configuration

CommandType     Name                           Version    Source
-----------     ----                           -------    ------
Function        Configuration                  1.1        PSDesiredStateConfiguration
Configurations are special types of functions which, at their simplest, are composed of a main block:
Configuration NameOfTheConfiguration {
}
Inside this block comes one Node block for each target computer to configure:
Configuration NameOfTheConfiguration {
   Node 'SRV1' {
   }
   Node 'SRV2' {
   }
}
Each Node block contains one or more Resource blocks:
Configuration NameOfTheConfiguration {
   Node 'SRV1' {
      WindowsFeature FeatureName {
         Ensure = 'Present'
         Name   = 'Name'
      }
   }
}
The list of resources you can declare is easily obtained using the Get-DscResource cmdlet:
Get-DscResource -Module PSDesiredStateConfiguration | select name
This cmdlet is pretty powerful and it's not limited to showing you the list of resources: it can also be used to get the syntax of a specific resource:
Get-DscResource -Name Service -Syntax
Service [String] #ResourceName
    Name = [string]
    [BuiltInAccount = [string]{ LocalService | LocalSystem | NetworkService }]
    [Credential = [PSCredential]]
    [Dependencies = [string[]]]
    [DependsOn = [string[]]]
    [Description = [string]]
    [DisplayName = [string]]
    [Ensure = [string]{ Absent | Present }]
    [Path = [string]]
    [PsDscRunAsCredential = [PSCredential]]
    [StartupType = [string]{ Automatic | Disabled | Manual }]
    [State = [string]{ Running | Stopped }]
Here you go, you have the basics: you know that you can write a script which contains a Configuration function which declaratively configures nodes with desired well-known resources. That's all there is to know about it to start with DSC and see if you can get any benefit from it.

Now you are NOT supposed to grab your keyboard and start writing your resources: most of what is used today on servers is already available for you out there, on GitHub and on the PowerShell Gallery: it's been developed by Microsoft, it's been improved by the community, and even if it is experimental (remember the x's?), you can already take advantage of it. But how?

That's the second step in learning DSC and there is a cmdlet for it: Install-Module.

Install-Module (alias inmo) is a cmdlet available in PowerShell 5.0 which does all the work for you: once you know you are interested in a module from the online Gallery, just ask this cmdlet to fetch it for you and store it under the %SystemDrive%\Program Files\WindowsPowerShell\Modules folder, hence making it available to all local users.

Quick tip: to get a list of all your module paths, just query the right variable:
$env:PSModulePath -split ';'
C:\Program Files\WindowsPowerShell\Modules\
It's interesting to note that this cmdlet accepts pipeline input, so if you do not know the exact name of a module, just use Find-Module:
Find-Module -Name "xSys*Sec*" | Format-List

Name                       : xSystemSecurity
Version                    :
Type                       : Module
Description                : Handles Windows related security settings like UAC and IE ESC.
Author                     : Arun Chandrasekhar
CompanyName                : PowerShellTeam
Copyright                  : (c) 2014 Microsoft Corporation. All rights reserved.
PublishedDate              : 11/09/2015 23:28:31
LicenseUri                 :
ProjectUri                 :
IconUri                    :
Tags                       : {DesiredStateConfiguration, DSC, DSCResourceKit, PSModule}
Includes                   : {Function, DscResource, Cmdlet, Workflow...}
PowerShellGetFormatVersion :
ReleaseNotes               :
Dependencies               : {}
RepositorySourceLocation   :
Repository                 : PSGallery
PackageManagementProvider  : NuGet
AdditionalMetadata         : {versionDownloadCount, summary, ItemType, copyright...}
Then pass the output down to Install-Module:
Find-Module -Name "xSys*Sec*" | Install-Module

VERBOSE: The installation scope is specified to be 'AllUsers'.
VERBOSE: The specified module will be installed in 'C:\Program Files\WindowsPowerShell\Modules'.
VERBOSE: The specified Location is 'NuGet' and PackageManagementProvider is 'NuGet'.
VERBOSE: Downloading module 'xSystemSecurity' with version '' from the repository
VERBOSE: Searching repository ''xSystemSecurity'' for
VERBOSE: Downloading ''.
VERBOSE: Completed downloading ''.
VERBOSE: Completed downloading 'xSystemSecurity'.
VERBOSE: InstallPackageLocal' - name='xSystemSecurity',
VERBOSE: Module 'xSystemSecurity' was installed successfully.
Concerning these steps, I have seen system administrators trying to manually download DSC resources from GitHub, unzipping them to C:\Program Files\WindowsPowerShell\Modules\ but being unable to list their content with Get-DscResource:
Get-DscResource xSystemSecurity-dev

CheckResourceFound : The term 'xSystemSecurity-dev' is not recognized as the name of a Resource.
At C:\windows\system32\windowspowershell\v1.0\Modules\PSDesiredStateConfiguration\PSDesiredStateConfiguration.psm1:3983 char:13
+             CheckResourceFound $Name $Resources
+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Write-Error], WriteErrorException
    + FullyQualifiedErrorId : Microsoft.PowerShell.Commands.WriteErrorException,CheckResourceFound
That's not well documented, but when you download the zip file for a module, the zipped folder can have a '-dev' suffix: in this case you have to remove it, otherwise Get-DscResource won't be able to discover it.
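A minimal sketch of that fix, assuming the xSystemSecurity module from the earlier example was unzipped with the '-dev' suffix still in place:

```powershell
# Strip the '-dev' suffix so Get-DscResource can discover the module
$modules = 'C:\Program Files\WindowsPowerShell\Modules'
Rename-Item -Path (Join-Path $modules 'xSystemSecurity-dev') -NewName 'xSystemSecurity'

# The resources should now be listed
Get-DscResource -Module xSystemSecurity
```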

That's all for today. Stay tuned for more DSC. Do not hesitate to share!

Monday, April 4, 2016

Working with Unicode scripts, blocks and categories in Powershell

This March 2016 I was honored to be the author of the monthly scripting competitions at For the contest, I came up with a scenario where the system administrator was tasked to use PowerShell to check a given path and identify all the files whose names had letters (not symbols nor numbers) in the Latin-1 Supplement character block.

This scenario came in two versions: one for beginners, where competitors were allowed to write a one-liner, and one for experts, where I expected people to write a tool (in the form of an advanced function) to do the job.

In both cases I expected people to focus on understanding how regular expression engines use the Unicode character set, and to use the best possible syntax to solve the puzzle. That's why I explicitly asked competitors to work with the Latin-1 Supplement character block. That was the key clue that should have pushed people to learn that Unicode is such a large character set that it has been split up into categories: using these categories in your regex expressions makes them more robust.


Let's start by looking at some sample answers we got, which are not exactly what I expected:

where {$_.Name -match '[\u00C0-\u00D6]' -or $_.Name -match '[\u00D8-\u00F6]' -or $_.Name -match '[\u00F8-\u00FF]'}

Where {$_.Name -match "[\u00C0-\u00FF]"}

Where {$_.Name -match '[\u00C0-\u00FF]' -and $_.Name -notmatch '[\u00D7]|[\u00F7]'}

Where-Object{[int[]][char[]]$ -gt 192}

.Where({$_.Name -match "[\u00C0-\u00FF]"

if (($LetterNumber -in 192..214) -or ($LetterNumber -in 216..246) -or ($LetterNumber -in 248..255))

where name -Match '[\u0080-\u00ff]'

$_.Name -match '[\u00C0-\u00FF -[\u00D7\u00F7]]'

$_.Name -match '[\u0083\u008A\u008C\u008E\u009A\u009C\u009E\u009F\u00C0-\u00D6\u00D8-\u00F6\u00F8-\u00FF]'

if ($char -notmatch '[a-z]' -and [Globalization.CharUnicodeInfo]::GetUnicodeCategory($char) -match 'caseLetter$')

Where-Object -FilterScript { @('©','¼','½','÷') -notcontains $_.Name -and [regex]::IsMatch($_.Name,"\p{IsLatin-1Supplement}")

($_.Name -match '[\p{IsLatin-1Supplement}-[\x80-\xbf\xd7\xf7]]+')

It's easy to see that all these answers are, to varying degrees, impractical to maintain and error-prone for a simple reason: the code points used in the code are not human-readable, and so a simple typo can break the code without raising alerts.

There is also a problem of subjectivity, where each competitor decided to use different code points: 00D6 or 00FF or 00F7, to give a few examples.

So the question is how you decide which code points to use, and how you could have taken advantage of Unicode categories in your regular expressions to write a solid answer for this puzzle.
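As a preview of where we are heading, a more robust filter can combine the named Unicode block with a category subtraction, so that no raw code points appear in the pattern at all. A sketch (where $Path is a hypothetical folder to scan):

```powershell
# Keep only files whose names contain letters (\p{L}) from the
# Latin-1 Supplement block; the .NET class subtraction [base-[excluded]]
# removes everything in the block that is NOT a letter.
Get-ChildItem -Path $Path -File |
    Where-Object { $_.Name -match '[\p{IsLatin-1Supplement}-[^\p{L}]]' }
```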

To answer this question I will first walk you through the Unicode model and see how it is structured.
You can think of Unicode as a database maintained by an international consortium which stores all the characters in all the existing languages.

New versions are released to reflect major changes, since new writing systems are discovered periodically and new glyphs (which are graphical representations of characters) have to be added: look for instance at those found on the 4,000-year-old Phaistos Disc:


The first version of Unicode dates back to 1990 and since then a bunch of versions have followed:
  • 1.0.0 1991 October
  • 2.0.0 1996 July
  • 3.0.0 1999 September
  • 4.0.0 2003 April
  • 5.0.0 2006 July
  • 6.0.0 2010 October
  • 7.0.0 2014 June
  • 8.0.0 2015 June
The latest version is 8.0, which defines a code space of 1,114,112 code points in the range 0 hex to 10FFFF hex:
(10FFFF)base16 = (1114111)base10
Concerning the Windows world, the .NET Framework 4.5 conforms to the Unicode 6.0 standard, which dates from 2010, while on previous versions, it conforms to the Unicode 5.0 standard, as you can read here.
Each code point is referred to by writing "U+" followed by its hexadecimal number, where U stands for Unicode. So U+10FFFF refers to the last code point in the database.

All these code points are divided into seventeen planes, each with 65,536 elements. The first three planes are named respectively:

  • Basic Multilingual Plane, or BMP
  • Supplementary Multilingual Plane, or SMP
  • Supplementary Ideographic Plane, or SIP

BMP, whose extent corresponds exactly to an unsigned 16-bit integer ([uint16]::MaxValue = 65535), covers Latin, African and Asian languages as well as a good number of symbols. So languages like English, Spanish, Italian, Russian, Greek, Ethiopic, Arabic and CJK (which stands for Chinese, Japanese and Korean) have code points assigned in this plane.

These code points are expressed as four hexadecimal digits, from 0000 to FFFF. So, for instance:

  • U+0058 is the code point for the Latin capital X
  • U+0389 is the code point for the Greek capital letter Eta with tonos
  • U+221A is the code point for the square root symbol
  • U+0040 is the code point for the Commercial At symbol
  • U+9999 is the code point for the Han character meaning 'fragrant, sweet smelling, incense'
  • U+0033 is the code point for the digit three
So, letters, digits and symbols we widely use have all their code point in the Unicode database.

The .NET Framework uses the System.Char structure to represent a Unicode character.


There is a simple way in Powershell to find the code point of a given glyph.

First you have to take the given character and find its numeric value, using typecasting on the fly:

$char = 'X'
[int][char]$char

This is the equivalent of the ORD function you have in many other languages (Delphi, PHP, Perl, etc).
Then, using the Format operator with the X format string, convert it to hexadecimal:

'{0:X4}' -f [int][char]$char
Since each Unicode code point is referred to with a U+, we just have to add it to our string through concatenation:

'U+{0:X4}' -f [int][char]$char

Now, if you want to get the glyph of a given code point, you have to reverse your code:

First you have to ask PowerShell to call ToInt32 to convert the hex value (base-16) to a decimal:

[int][Convert]::ToInt32('0058', 16)
Then a step is required to cast the decimal to a char:

[Convert]::ToChar([int][Convert]::ToInt32('0058', 16))
So, if we go back to the examples we saw before, we can use a loop to convert all the four-digit hex values of the Basic Multilingual Plane to their corresponding glyphs.

'0058','0389','221A','0040','9999','0033' | % { [Convert]::ToChar([int][Convert]::ToInt32($_, 16)) }
Actually, you have a simpler way to get the same result, which relies on the implicit conversion performed by the parser when numbers are prefixed with '0x':

0x0058, 0x389, 0x221a, 0x0040, 0x9999, 0x0033 | % { [char]$_ }

At this point it is interesting to know that Unicode adopts UTF-16 as the standard encoding for everything inside the Basic Multilingual Plane since, as we have seen, most living languages have all (or at least most) of their glyphs within the range 0 - 65535.
For characters beyond the first Unicode plane, that is, those whose code point is greater than 65535 and hence can't fit in a 16-bit integer (a Word), we can either use UTF-32 or encode them in UTF-16 as surrogate pairs.
The latter is a method where a glyph is represented by a first (high) surrogate 16-bit code value in the range U+D800 to U+DBFF and a second (low) surrogate 16-bit code value in the range U+DC00 to U+DFFF. Using this mechanism, UTF-16 can address all 1,114,112 potential Unicode code points (2^16 × 17 planes).
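As a quick sketch of how a surrogate pair looks from PowerShell (using the standard [char]::ConvertFromUtf32 method and U+1D11E as an arbitrary non-BMP code point):

```powershell
# U+1D11E sits outside the BMP, so its UTF-16 form takes two 16-bit code units
$s = [char]::ConvertFromUtf32(0x1D11E)
$s.Length                        # 2: a high and a low surrogate
'{0:X4}' -f [int]$s[0]           # D834, inside the high surrogate range
'{0:X4}' -f [int]$s[1]           # DD1E, inside the low surrogate range
[char]::IsHighSurrogate($s[0])   # True
```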

In any case, the Windows console is not capable of showing non-BMP glyphs, even if a font like Code2001 is installed. Let's see this in practice.
In the example below I am outputting the glyph for the commercial at (which is in the BMP) starting from its UTF-32 serialization using the ConvertFromUtf32 method:
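The call boils down to a single line (a minimal sketch; 0x40 is the code point of the commercial at seen earlier):

```powershell
# U+0040 is inside the BMP, so ConvertFromUtf32 returns a one-char string
[char]::ConvertFromUtf32(0x0040)   # @
```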
In this other example below I am trying hard to print the glyph for the MUSICAL SYMBOL G CLEF, which was added in Unicode 3.1 and belongs to the Supplementary Multilingual Plane, but I am only able to get a square box (which is used for all characters for which the font has no glyph):
Now that you are confident with code points, it is time to step up your game and get an understanding of some Unicode properties which are useful to solve our puzzle: General Category, Script and Block.


Each code point is kind of an object that has a property named General Category. The seven major categories are: Letter, Mark, Separator, Symbol, Number, Punctuation, and Other.

Within these seven categories, there are the following subdivisions:

  • {L} or {Letter}
  • {Ll} or {Lowercase_Letter}
  • {Lu} or {Uppercase_Letter}
  • {Lt} or {Titlecase_Letter}
  • {L&} or {Cased_Letter}
  • {Lm} or {Modifier_Letter}
  • {Lo} or {Other_Letter}
  • {M} or {Mark}
  • {Mn} or {Non_Spacing_Mark}
  • {Mc} or {Spacing_Combining_Mark}
  • {Me} or {Enclosing_Mark}
  • {Z} or {Separator}
  • {Zs} or {Space_Separator}
  • {Zl} or {Line_Separator}
  • {Zp} or {Paragraph_Separator}
  • {S} or {Symbol}
  • {Sm} or {Math_Symbol}
  • {Sc} or {Currency_Symbol}
  • {Sk} or {Modifier_Symbol}
  • {So} or {Other_Symbol}
  • {N} or {Number}
  • {Nd} or {Decimal_Digit_Number}
  • {Nl} or {Letter_Number}
  • {No} or {Other_Number}
  • {P} or {Punctuation}
  • {Pd} or {Dash_Punctuation}
  • {Ps} or {Open_Punctuation}
  • {Pe} or {Close_Punctuation}
  • {Pi} or {Initial_Punctuation}
  • {Pf} or {Final_Punctuation}
  • {Pc} or {Connector_Punctuation}
  • {Po} or {Other_Punctuation}
  • {C} or {Other}
  • {Cc} or {Control}
  • {Cf} or {Format}
  • {Co} or {Private_Use}
  • {Cs} or {Surrogate}
  • {Cn} or {Unassigned}

The Char.GetUnicodeCategory and the CharUnicodeInfo.GetUnicodeCategory methods are used to return the General Category property of a char.

'X','Ή','√','@','香','3' | % { [System.Globalization.CharUnicodeInfo]::GetUnicodeCategory($_) }
As you can see, Unicode also brings interesting possibilities. Once you know that each Unicode character belongs to a certain category, you can try to match a single character to a category with \p (in lowercase) in your regular expression:

#A is a letter
'A' -match "(\p{L})"

#3 is a digit
3 -match "(\p{N})"
You can also match a single character not belonging to a category with \P (uppercase):

#X is not a digit
'X' -match "(\P{N})"

#3 is not a letter
3 -match "(\P{L})"
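These category escapes combine nicely with ordinary filtering. For instance, here is a small sketch (the sample string is just made up) that keeps only the letters of a mixed string by matching each char against {L}:

```powershell
# Keep only the characters whose General Category is Letter
$mixed = 'Price: 25€ (approx.)'
-join ($mixed.ToCharArray() | Where-Object { $_ -match '\p{L}' })
# Priceapprox
```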

Other useful properties of a character are Script and Block: each character belongs to a Script and to a Block.

A Script is a group of code points defining a given human writing system, so we can generally think of a script as a language. Though many scripts (like Cherokee, Lao or Thai) correspond to a single natural language, others (like Latin) are common to multiple languages (Italian, French, English...). Code points in a Script may be scattered and don't necessarily form a contiguous range.

The list of the existing Scripts is kept by the Unicode Consortium in the Unicode Character Database (UCD), which consists of a number of textual data files listing Unicode character properties and related data.

The UCD file for Scripts is here.
A block on the other side is a contiguous range of code points.

The UCD file for Blocks is here.


At this point it can be interesting to see how you can use PowerShell to determine which Script a given char belongs to.

This is a tough task since \p in .NET is not aware of Script names, so there's no straightforward way to match a char to a Script, meaning the following code won't work:
'X' -match "(\p{Anatolian_Hieroglyphs})"

parsing "(\p{Anatolian_Hieroglyphs})" - Unknown property 'Anatolian_Hieroglyphs'.
At line:1 char:1
+ 'X' -match "(\p{Anatolian_Hieroglyphs})"
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationStopped: (:) [], ArgumentException
    + FullyQualifiedErrorId : System.ArgumentException

But, since we know now that the UCD contains a list of all the Scripts in a file on the website, we just have to retrieve it via Invoke-WebRequest:

$sourcescripts = ""

$scriptsweb = Invoke-WebRequest $sourcescripts
Then a bit of manipulation is required to translate this text file into an object:

$scriptsinfo = ($scriptsweb.content.split("`n").trim() -ne "") |
                sls "^#" -n |
                convertfrom-csv -Delimiter ';' -header "range","scriptname"
Basically, I am:

  • splitting the content of the web page so that I have one Script per line: split("`n")
  • removing empty lines: -ne ""
  • suppressing comment lines (indicated by hash marks): sls "^#" -n
  • converting the data to a CSV with two columns named Range and ScriptName: -header "range","scriptname"

That makes for a pretty nice oneliner: I had a text file on a web server and in three lines of code I have an object containing all the possible Scripts and their code point ranges:

AB60..AB64    Latin # L&   [5] LATIN SMALL LETTER SAKHA YAT......
FB00..FB06    Latin # L&   [7] LATIN SMALL LIGATURE FF..LATIN....
0370..0373    Greek # L&   [4] GREEK CAPITAL LETTER HETA..GRE....
0375          Greek # Sk       GREEK LOWER NUMERAL SIGN          ....
0376..0377    Greek # L&   [2] GREEK CAPITAL LETTER PAMPHYLIA....
037A          Greek # Lm       GREEK YPOGEGRAMMENI               ....
037B..037D    Greek # L&   [3] GREEK SMALL REVERSED LUNATE SI....
037F          Greek # L&       GREEK CAPITAL LETTER YOT      ....
0384          Greek # Sk       GREEK TONOS                       ....
0386          Greek # L&       GREEK CAPITAL LETTER ALPHA WIT....
0388..038A    Greek # L&   [3] GREEK CAPITAL LETTER EPSILON W....
038C          Greek # L&       GREEK CAPITAL LETTER OMICRON W....
Now, to see what Script a char belongs to, I simply have to find its numeric value, then check whether it falls within one of the code point ranges (converted from hex to decimal) and return the corresponding Script name:

$char = 'Ή'

$decimal = [int][char]$char

foreach($line in $scriptsinfo){

    #Splitting each range on the double dots ..
    $hexrange = $line.range -split '\.\.'

    #Getting the start value of the range
    $hexstartvalue = $hexrange[0].Trim()

    #Getting the end value of the range (if it exists)
    if($hexrange.Count -gt 1){
        $hexendvalue = $hexrange[1].Trim()
    }
    else{
        $hexendvalue = $null
    }

    #Converting the start value from hex to decimal for easier comparison
    $startvaluedec = [Convert]::ToInt32($hexstartvalue, 16)

    if($hexendvalue){
        $endvaluedec = [Convert]::ToInt32($hexendvalue, 16)

        #Checking existence in range
        if($decimal -in ($startvaluedec..$endvaluedec)){
            "$char (dec: $decimal) is in script $($line.scriptname -replace '\#.*$') between $startvaluedec and $endvaluedec"
        }
    }
    #Checking equality with single value (in case it is not a range)
    elseif($decimal -eq $startvaluedec){
        "$char (dec: $decimal) is in script $($line.scriptname -replace '\#.*$') because it's equal to $startvaluedec"
    }
}

Ή (dec: 905) is in script Greek  between 904 and 906
Nice isn't it?

Another nicety is to use the same code we saw above to get the full list of all the existing Scripts:

((($scriptsweb.content.split("`n").trim() -ne "") | sls "^#" -n | convertfrom-csv -Delimiter ';' -header "range","scriptname").scriptname -replace '\#.*$').trim() | select -unique
Common, Latin, Greek, Cyrillic, Armenian, Hebrew, Arabic, Syriac, Thaana, Devanagari, Bengali, Gurmukhi, Gujarati, Oriya, Tamil, Telugu, Kannada, Malayalam, Sinhala, Thai, Lao, Tibetan, Myanmar, Georgian, Hangul, Ethiopic, Cherokee, Canadian_Aboriginal, Ogham, Runic, Khmer, Mongolian, Hiragana, Katakana, Bopomofo, Han, Yi, Old_Italic, Gothic, Deseret, Inherited, Tagalog, Hanunoo, Buhid, Tagbanwa, Limbu, Tai_Le, Linear_B, Ugaritic, Shavian, Osmanya, Cypriot, Braille, Buginese, Coptic, New_Tai_Lue, Glagolitic, Tifinagh, Syloti_Nagri, Old_Persian, Kharoshthi, Balinese, Cuneiform, Phoenician, Phags_Pa, Nko, Sundanese, Lepcha, Ol_Chiki, Vai, Saurashtra, Kayah_Li, Rejang, Lycian, Carian, Lydian, Cham, Tai_Tham, Tai_Viet, Avestan, Egyptian_Hieroglyphs, Samaritan, Lisu, Bamum, Javanese, Meetei_Mayek, Imperial_Aramaic, Old_South_Arabian, Inscriptional_Parthian, Inscriptional_Pahlavi, Old_Turkic, Kaithi, Batak, Brahmi, Mandaic, Chakma, Meroitic_Cursive, Meroitic_Hieroglyphs, Miao, Sharada, Sora_Sompeng, Takri, Caucasian_Albanian, Bassa_Vah, Duployan, Elbasan, Grantha, Pahawh_Hmong, Khojki, Linear_A, Mahajani, Manichaean, Mende_Kikakui, Modi, Mro, Old_North_Arabian, Nabataean, Palmyrene, Pau_Cin_Hau, Old_Permic, Psalter_Pahlavi, Siddham, Khudawadi, Tirhuta, Warang_Citi, Ahom, Anatolian_Hieroglyphs, Hatran, Multani, Old_Hungarian, SignWriting
As you can see, in a few lines of code, we added to our code the ability to compare a character against a Unicode Script name, which is something that is not supported by .Net regex engine out of the box.


The next step is to see which Block a given character belongs to. This is easier than getting the Script because, while .NET doesn't support regexes against Script names, it natively supports running matches against Block names.

Just remember to prepend 'Is' to the Block name: not all Unicode regex engines use the same syntax to match Unicode blocks and, while Perl uses the «\p{InBlock}» syntax, .NET uses «\p{IsBlock}» instead:

'Ω' -match "(\p{Greek})"
parsing "(\p{Greek})" - Unknown property 'Greek'.
At line:1 char:1
+ 'Ω' -match "(\p{Greek})"
+ ~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : OperationStopped: (:) [], ArgumentException
    + FullyQualifiedErrorId : System.ArgumentException

'Ω' -match "(\p{IsGreek})"
If I want to check a character against all the existing Blocks, I just have to rely on the UCD and dynamically build all the possible regexes:

$sourceblocks = ""

$blocksweb = Invoke-WebRequest $sourceblocks

$blocklist = (($blocksweb.content.split("`n").trim() -ne "") | 
                sls "^#" -n |
                convertfrom-csv -Delimiter ';' -header "range","blockname").blockname

$char = 'Ω'
foreach($block in $blocklist){

    #Block names in the regex have no spaces
    $block = $block -replace ' ',''

    $regex = "(?=\p{Is$block})"

    if($char -match $regex){
        "$char is in $block"
    }
}

Ω is in GreekandCoptic
Another funny exercise is to try to get all the characters in the Cherokee Block, just to see how that can be done:

0..65535 | % { if([char]$_ -match "(?=\p{IsCherokee})"){[char]$_} }
Ꭰ Ꭱ Ꭲ Ꭳ Ꭴ Ꭵ Ꭶ Ꭷ Ꭸ Ꭹ Ꭺ Ꭻ Ꭼ Ꭽ Ꭾ Ꭿ Ꮀ Ꮁ Ꮂ Ꮃ Ꮄ Ꮅ Ꮆ Ꮇ Ꮈ Ꮉ Ꮊ Ꮋ Ꮌ Ꮍ Ꮎ Ꮏ Ꮐ Ꮑ Ꮒ Ꮓ Ꮔ Ꮕ Ꮖ Ꮗ Ꮘ Ꮙ Ꮚ Ꮛ Ꮜ Ꮝ Ꮞ Ꮟ Ꮠ Ꮡ Ꮢ Ꮣ Ꮤ Ꮥ Ꮦ Ꮧ Ꮨ Ꮩ Ꮪ Ꮫ Ꮬ Ꮭ Ꮮ Ꮯ Ꮰ Ꮱ Ꮲ Ꮳ Ꮴ Ꮵ Ꮶ Ꮷ Ꮸ Ꮹ Ꮺ Ꮻ Ꮼ Ꮽ Ꮾ Ꮿ Ᏸ Ᏹ Ᏺ Ᏻ Ᏼ Ᏽ ᏶ ᏷ ᏸ ᏹ ᏺ ᏻ ᏼ ᏽ ᏾ ᏿


Now that we are proficient with Unicode in our regexes, let's see how we could have easily solved the puzzle.

I asked to detect all filenames that had letters (not symbols nor numbers) in the Latin-1 Supplement character block.

The Latin-1 Supplement is the second Unicode block in the Basic Multilingual Plane. It ranges from U+0080 (decimal 128) to U+00FF (decimal 255) and contains 64 code points in the Latin Script and 64 code points in the Common Script. Basically it contains some currency symbols (Yen, Pound), a few math signs (multiplication, division) and all lowercase and uppercase letters that have diacritics.
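Based on those boundaries, a short sketch can enumerate the letters of the block by combining the decimal range with the {L} category we saw earlier:

```powershell
# All code points in Latin-1 Supplement (U+0080..U+00FF) that are letters
-join (0x80..0xFF | Where-Object { [char]$_ -match '\p{L}' } | ForEach-Object { [char]$_ })
```

Note that ª, º and µ show up too, since their General Category is Letter even though they carry no diacritic.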

What's a diacritic you ask? The answer comes from Wikipedia:

Diacritic /daɪ.əˈkrɪtɪk/ – also diacritical mark, diacritical point, or diacritical sign – is a glyph added to a letter, or basic glyph. The term derives from the Greek διακριτικός (diakritikós, "distinguishing"), which is composed of the ancient Greek διά (diá, "through") and κρίνω (krínein or kríno, "to separate"). Diacritic is primarily an adjective, though sometimes used as a noun, whereas diacritical is only ever an adjective. Some diacritical marks, such as the acute ( ´ ) and grave ( ` ), are often called accents. Diacritical marks may appear above or below a letter, or in some other position such as within the letter or between two letters. The main use of diacritical marks in the Latin script is to change the sound-values of the letters to which they are added.

Since a Unicode Block exists listing all the combining diacritical marks, they can be shown with a one-liner:
0..65535 | % { if([char]$_ -match "(?=\p{IsCombiningDiacriticalMarks})"){[char]$_} }
Since we have seen the syntax to check if a character has a specific Unicode Block property, and since Latin-1 Supplement IS a Block property, here's what to do:

'A' -match "(\p{IsLatin-1Supplement})"

'é' -match "(\p{IsLatin-1Supplement})"
Good! No hardcoded values here, meaning no stuff like:

where name -Match '[\u0080-\u00ff]'

if ($LetterNumber -in 192..214)

Subjectivity is gone!
At the same time I did ask to include only the filenames containing Letters from that Unicode Block, not Symbols, nor Digits. Here's where the General Category property we saw above comes to the rescue. I can force the regex engine to include all letters ( \p{L} ), and exclude digits ( \P{N} ), punctuation ( \P{P} ), symbols ( \P{S} ) and separators ( \P{Z} ).

'A' -match "(?=\p{IsLatin-1Supplement})(?=\p{L})(?=\P{N})(?=\P{P})(?=\P{S})(?=\P{Z})"

'é' -match "(?=\p{IsLatin-1Supplement})(?=\p{L})(?=\P{N})(?=\P{P})(?=\P{S})(?=\P{Z})"
Concerning the expression, I am using here a positive lookahead assertion (?=), which is a non-consuming regular expression: I can repeat it as many times as I want, and it will act as a logical "and" between the different categories I pass to \p or \P.


For sure this can be shortened to
'é' -match "(?=\p{IsLatin-1Supplement})(?=\p{L})"
since there are no code points which are at the same time letters and numbers, or letters and symbols, etc.

To sum it up, getting the list of all the files whose names contain Latin letters with diacritics is as simple as typing the following lines:

Get-ChildItem 'C:\FileShare' -Recurse -Force |
    Where-Object { $_.Name -match "(?=\p{IsLatin-1Supplement})(?=\p{L})" } |
    ForEach-Object { [PSCustomObject]@{
        'Name' = $_.Name
        'Creation Date' = $_.CreationTime
        'Last Modification Date' = $_.LastWriteTime
        'File Size' = $_.Length } } |
    Format-Table -AutoSize

I hope you enjoyed this explanation. If you are a Unicode guru and you find something incorrect, do not hesitate to drop a comment and I'll update. Thanks again for giving me the occasion of being part of a larger community.

Stay tuned for more PowerShell fun!

Monday, March 7, 2016

Powershell puzzle

I am honored to say that one of my puzzles has been published. There are two scenarios, one for beginners and one for advanced PowerShell scripters, and both of them will require a good knowledge of the language as well as a bit of regex:

I hope you will enjoy the task and do not hesitate to contact me for questions.

Monday, December 21, 2015

How to setup a highly available cluster with Hyper-V and Starwind Virtual SAN - part 3

In the previous post we configured StarWind Virtual SAN and we are now moving to setup our Hyper-V Servers as iSCSI Initiators that mount the highly available backend storage for our cluster.

Basically there are 5 big steps:
  1. install and configure MPIO for iSCSI
  2. add the Hyper-V role and the failover clustering feature
  3. set up a virtual switch for Converged Networking
  4. configure iSCSI initiators to connect to iSCSI targets
  5. set up the Hyper-V cluster

This part will require two restarts of your Hyper-V servers.

The installation of the Multipath-IO Feature is done on both hypervisors through PowerShell:
Get-WindowsFeature Multipath-IO | Install-WindowsFeature
Reboot both servers.

Now, on both nodes, open the MPIO control panel (this interface is available on Core versions too), which you can access by typing:

mpiocpl
In the MPIO dialog choose the Discover Multi-Paths tab and then check the 'Add support for iSCSI devices' option.

The servers will now reboot for the second time.

The same result can be obtained much more simply with a line of PowerShell code. Isn't that fantastic?

Enable-MSDSMAutomaticClaim -BusType iSCSI

When they restart, the MPIO Devices tab lists the additional hardware ID “MSFT2005iSCSIBusType_0x9.”, which means that all iSCSI bus attached devices will be claimed by the Microsoft DSM.


In Windows 2016 TP4, the installation of roles and features can be achieved either in PowerShell or with the GUI. This is entirely up to you, as the outcome will be the same. My only suggestion is to skip the setup of virtual switches here, even though the interface asks you to do it. We will configure these switches later in PowerShell for fine-grained control.


Start by configuring NIC teaming on your physical network cards. In my lab I only have one NIC, so this step is not necessary, but I will do it all the same, so that if in the future I add a secondary NIC, I can increase my bandwidth and availability without impacting the rest of the configuration:
New-NetLbfoTeam -TeamMembers 'Ethernet' -LoadBalancingAlgorithm HyperVPort -TeamingMode SwitchIndependent -Name Team1
Since your two Compute nodes have Hyper-V, Clustering and NIC Teaming, you can leverage the Hyper-V cmdlets to build your Converged Network Adapters. Basically a converged network adapter is a flexible pool of your physical NICs that are joined together (their bandwidth is combined) in order to provide a robust and fast data channel which is usually split in logical VLANs with a QoS attached for traffic shaping.

In my case I only have a teamed single physical network adapter, but I am still allowed to build a logical switch on top of it.

In my lab each Hyper-V host is configured with a static IP address:

I am going to use the New-VMSwitch cmdlet to set up my logical switch. Notice that when I execute New-VMSwitch, I set the AllowManagementOS parameter to $true, since I have only one NIC team that I have to use for management. If I set that parameter to $false I would lose connectivity on the host.

In the following screenshot you can see the configuration before building the logical switch. You can see that NIC teaming is activated and the physical NIC is now only bound to the 'Microsoft Network Adapter Multiplexor Protocol':

Here's the syntax to build the virtual switch where all the node traffic will flow through:

New-VMSwitch -Name vSwitch -NetAdapterName Team1 -AllowManagementOS:$true -MinimumBandwidthMode Weight
When you run this cmdlet, the NIC Team 'gives' its static IP addressing to the virtual adapter that gets created and then bind itself to the logical switch. That's why the only checkbox that is ticked for the NIC Team after building the logical switch is the one for 'Hyper-V Extensible Virtual Switch':

Then follows the configuration of the additional adapters as well as a bit of traffic shaping, as suggested by some well-known best practices:

Set-VMSwitch vSwitch -DefaultFlowMinimumBandwidthWeight 10
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName vSwitch
Add-VMNetworkAdapter -ManagementOS -Name "Cluster" -SwitchName vSwitch
Add-VMNetworkAdapter -ManagementOS -Name "iSCSI" -SwitchName vSwitch
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 40
Set-VMNetworkAdapter -ManagementOS -Name "Cluster" -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "iSCSI" -MinimumBandwidthWeight 40
Here's a view of the new virtual adapters:

These new adapters get their IP configuration from DHCP by default. In my case I want to explicitly declare their addressing so that they are on different subnets (otherwise they won't be seen by the cluster and you'll get errors in the cluster validation process):
New-NetIPAddress -InterfaceAlias "vEthernet (LiveMigration)" -IPAddress -PrefixLength "24"
New-NetIPAddress -InterfaceAlias "vEthernet (Cluster)" -IPAddress -PrefixLength "24"
New-NetIPAddress -InterfaceAlias "vEthernet (iSCSI)" -IPAddress -PrefixLength "24"
As you can see, I put the iSCSI adapters on the same subnet as my StarWind iSCSI Targets:

I repeat the same operation on the second Hyper-V node and I am good to go for setting up my iSCSI initiators.


Now bring up the iSCSI Initiator configuration panel and set up the link between all of your Hyper-V servers (initiators) and all of your StarWind servers (targets) on the network dedicated to iSCSI traffic (192.168.77.X in my case):


On my first node I have:

And on my second node:

The next step is to connect the targets with multi-pathing enabled.

We are almost done. With Get-Disk I can check that both of my compute nodes can see the backend storage:

Now move to Disk Management and initialize these iSCSI disks:

Time to build the Hyper-V cluster.


Nothing is easier now than setting up the cluster and bringing the disks online:

The three disks are automatically added to the Hyper-V cluster. The smallest one is automatically selected to act as Quorum. The other two have to be manually added to the CSV, so that the CSV creates a unified namespace (under C:\ClusterStorage) that enables highly available workloads to transparently fail over to the other cluster node if a server fails or is taken offline:

Check now that all your virtual network adapters appear in the Networks view:

That's about it for the moment. Now we have two backend nodes serving highly available storage from their local disks to a couple of Hyper-V nodes. Four nodes in total, with full resiliency on the front-end servers as well as on the back-end servers.

We can definitively say that we have achieved a fault tolerant design thanks to StarWind Virtual SAN and Microsoft Hyper-V in a 'Compute and Storage' scenario.

In a next post I will put some VM workload on this storage and see how fast it is. I will also test the StarWind high availability mechanism and see how it responds to hardware failure.

Stay tuned for more.