Wednesday, November 30, 2016

A PowerShell function to monitor physical disk activity per storage bay during sync activities

These days I have been migrating data on an old Windows 2003 server from an old HP XP128 storage array to a newer one, an HP 3PAR. Both fibre channel SANs were mounted and managed on the server through Veritas Enterprise Administrator (VEA) version 5.1. At first I started with Robocopy to migrate data, ACLs, and all the rest from the old volume to the new one, but I soon discovered that there could be better ways to move huge amounts of data (I am talking here of several million sensitive files).

One of the main advantages of using Robocopy is that you have fine-grained control over your sync. The downside is that, after the sync, you have to stop the old volume and move all your pointers to the new volume, which has a big impact on the automation systems relying on those files for their 24/7 activity.

I decided then to change plans and build a mirror on VEA between the old storage array and the new one.

The only problem with such an old version of VEA is that you don't have access to such basic information as whether the mirror sync is complete. The interface just shows you that you have successfully built your mirror but hides the information about the actual data sync taking place behind the curtains.

That's the moment the manager came in and asked for a way to keep an eye on the sync. And that's the moment I replied: I can do that for you with PowerShell, sir.

I knew fairly well that, though there's no PowerShell on a Windows 2003 server unless you have taken the time to install it, I could access its performance counters from a distant, more recent workstation through Get-Counter:
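As a minimal sketch of the idea (the server name 'oldserver' is a placeholder, and remote performance counter access must of course be allowed by the firewall):

```powershell
# Poll the physical disk read counters of a remote server from a workstation
Get-Counter '\PhysicalDisk(*)\Disk Read Bytes/sec' -ComputerName oldserver |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table Path, CookedValue -AutoSize
```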

What I wanted was to give the manager a script he could run himself that showed the activity for the disks involved in the sync. So I knew that I had to rely on cmdlets I am not used to putting in my functions, such as Clear-Host or Write-Host.
But, as for anything else, there are times you have to make exceptions. And Write-Host can have its use sometimes.
In the end I came up with a function that, given a set of physical disks on a source server and on a target server, monitors the disk activity in terms of read and written bytes per second and, when those values are not null, sets the font color to green so that they're highlighted.

The names of the disks can be found in the Perfmon GUI itself as well as in VEA:

Their names can be given as input to the function as a pattern for a regular expression. In my case this gave:

-SourceDiskPattern '\(2\)|\(14\)'
-DestinationDiskPattern '\(8\)|\(9\)|\(10\)|\(20\)|\(21\)|\(22\)' 
because I am trying to match those hard disk numbers.
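Since these patterns are regular expressions, the parentheses around the disk numbers have to be escaped with backslashes; this also prevents a pattern for disk 2 from accidentally matching disk 20. A quick way to check this against a sample counter path (the instance names below are illustrative, not my actual disks):

```powershell
# Escaped parentheses anchor the whole disk number
'\physicaldisk(2)\disk read bytes/sec'  -match '\(2\)|\(14\)'   # True
'\physicaldisk(20)\disk read bytes/sec' -match '\(2\)|\(14\)'   # False: disk 20 is not matched
```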

I have also added to the function a couple of parameters to show the processor activity and the disk queue, since these counters can always be of use when tracing a workload:

In the end, here's the output expected by the manager during the sync, with the green lines highlighting the disks where the data are read or written:

I rely on Clear-Host to refresh the screen so that the manager can only see the current workload. This can be a bad practice as Invoke-ScriptAnalyzer will tell you, but in my case this is exactly the cmdlet I needed.

Here's the code for the Get-DiskStat function, which by the way you can find on GitHub:

<#
.SYNOPSIS
   Monitors physical disk activity per bay during sync activities
.DESCRIPTION
   Monitors physical disk activity during a sync and highlights the disks that are active reading or writing bytes and the bay they belong to
.EXAMPLE
   Get-DiskStat -SourceComputer srv1 -SourceStorageBayName 'HPE 3PAR' -SourceDiskPattern '\(2\)|\(14\)' -DestinationComputer srv2 -DestinationStorageBayName 'HP XP128' -DestinationDiskPattern '\(7\)|\(10\)' -Refresh -Frequency 2 -Repeat 10
.EXAMPLE
   Get-DiskStat -SourceComputer srv1 -SourceStorageBayName 'HPE 3PAR' -SourceDiskPattern '\(2\)|\(14\)' -DestinationComputer srv2 -DestinationStorageBayName 'HP XP128' -DestinationDiskPattern '\(7\)|\(10\)' -Refresh -ShowCpu
.EXAMPLE
   Get-DiskStat -SourceComputer srv1 -SourceStorageBayName 'HPE 3PAR' -SourceDiskPattern '\(2\)|\(14\)' -DestinationComputer srv2 -DestinationStorageBayName 'HP XP128' -DestinationDiskPattern '\(7\)|\(10\)' -Refresh -ShowCpu -ShowQueue
.EXAMPLE
   Get-DiskStat -SourceComputer srv1 -SourceStorageBayName 'HPE 3PAR' -SourceDiskPattern '\(2\)|\(14\)' -DestinationComputer srv2 -DestinationStorageBayName 'HP XP128' -DestinationDiskPattern '\(7\)|\(10\)' -Refresh -ShowCpu -ShowQueue -Credential (Get-Credential)
.EXAMPLE
   Get-DiskStat -sc srv1 -sbn 'HPE 3PAR' -sdp '\(2\)|\(14\)' -dc srv2 -dbn 'HP XP128' -ddp '\(7\)|\(10\)' -R -C -Q -Cred (Get-Credential) -F 1 -rep 1000
.NOTES
   Carlo MANCINI
#>
function Get-DiskStat {

    [CmdletBinding()]

    param(
        # Source computer for the sync
        [Parameter(Mandatory=$True)][Alias('sc')][string]$SourceComputer,

        # Source bay name for the sync
        [Parameter(Mandatory=$True)][Alias('sbn')][string]$SourceStorageBayName,

        # Source disk pattern for the sync
        [Parameter(Mandatory=$True)][Alias('sdp')][string]$SourceDiskPattern,

        # Destination computer for the sync
        [Parameter(Mandatory=$True)][Alias('dc')][string]$DestinationComputer,

        # Destination bay name for the sync
        [Parameter(Mandatory=$True)][Alias('dbn')][string]$DestinationStorageBayName,

        # Destination disk pattern for the sync
        [Parameter(Mandatory=$True)][Alias('ddp')][string]$DestinationDiskPattern,

        # Clear the screen between each execution
        [Alias('R')][switch]$Refresh,

        # Show Active and Idle CPU counters
        [Alias('C')][switch]$ShowCpu,

        # Show disk queue for selected disks
        [Alias('Q')][switch]$ShowQueue,

        # Specifies a user account that has permission to perform this action
        [pscredential]$Credential = [System.Management.Automation.PSCredential]::Empty,

        # Frequency of the polling in seconds
        [Alias('F')][int]$Frequency = 10,

        # Total number of polling to perform
        [Alias('rep')][int]$Repeat = 10
    )

    Try {
        Test-Connection $SourceComputer,$DestinationComputer -Count 1 -ErrorAction Stop | Out-Null
    }
    Catch {
        Throw "At least one of the target servers is not reachable. Exiting."
    }

    $CounterList = '\PhysicalDisk(*)\Disk Read Bytes/sec','\PhysicalDisk(*)\Disk Write Bytes/sec','\PhysicalDisk(*)\Current Disk Queue Length','\Processor(_Total)\% Idle Time','\Processor(_Total)\% Processor Time'

    1..$Repeat | % {

        $SourceCounterValue = (Get-Counter $CounterList -ComputerName $SourceComputer).CounterSamples

        if($DestinationComputer -eq $SourceComputer) {

            $DestinationCounterValue = $SourceCounterValue

            $SameHost = $True

        }

        else {

            $DestinationCounterValue = (Get-Counter $CounterList -ComputerName $DestinationComputer).CounterSamples

        }

        if($Refresh) {Clear-Host}

        if($ShowCpu) {

            "$SourceComputer CPU Activity & Idle"

            $SourceCounterValue | ? {$_.path -match 'processor'} | % {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10)
            }

            if(!$SameHost) {

                "$DestinationComputer CPU Activity & Idle"

                $DestinationCounterValue | ? {$_.path -match 'processor'} | % {
                    Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10)
                }

            }

        }

        if($ShowQueue) {

            "$SourceStorageBayName Storage Bay Disk Queue on $SourceComputer"

            $SourceCounterValue | ? {($_.path -match $SourceDiskPattern) -and ($_.path -match 'queue')} | % {
                if($_.cookedvalue -gt 0) {
                    Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $_.cookedvalue.tostring().padright(10) -ForegroundColor Green
                }
                else {
                    Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $_.cookedvalue.tostring().padright(10) -ForegroundColor White
                }
            }

            "$DestinationStorageBayName Storage Bay Disk Queue on $DestinationComputer"

            $DestinationCounterValue | ? {($_.path -match $DestinationDiskPattern) -and ($_.path -match 'queue')} | % {
                if($_.cookedvalue -gt 0) {
                    Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $_.cookedvalue.tostring().padright(10) -ForegroundColor Green
                }
                else {
                    Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $_.cookedvalue.tostring().padright(10) -ForegroundColor White
                }
            }

        }

        "$SourceStorageBayName Read stats on $SourceComputer"

        $SourceCounterValue | ? {($_.path -match $SourceDiskPattern) -and ($_.path -match 'read')} | % {
            if($_.cookedvalue -gt 0) {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor Green
            }
            else {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor White
            }
        }

        "$SourceStorageBayName Write stats on $SourceComputer"

        $SourceCounterValue | ? {($_.path -match $SourceDiskPattern) -and ($_.path -match 'write')} | % {
            if($_.cookedvalue -gt 0) {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor Green
            }
            else {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor White
            }
        }

        "$DestinationStorageBayName Read stats on $DestinationComputer"

        $DestinationCounterValue | ? {($_.path -match $DestinationDiskPattern) -and ($_.path -match 'read')} | % {
            if($_.cookedvalue -gt 0) {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor Green
            }
            else {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor White
            }
        }

        "$DestinationStorageBayName Write stats on $DestinationComputer"

        $DestinationCounterValue | ? {($_.path -match $DestinationDiskPattern) -and ($_.path -match 'write')} | % {
            if($_.cookedvalue -gt 0) {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor Green
            }
            else {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor White
            }
        }

        Start-Sleep -Seconds $Frequency

    }

}
PowerShell, once again the tool for the job.

Friday, November 25, 2016

On the road to Overlay networking on Docker for Windows

The container networking stack has gone through many rapid improvements on Windows Server 2016, and it's nice to see that some new features are coming out on a regular basis: Docker's release pace is fast, and though they have had a few missteps, most of the discovered bugs are promptly addressed.

In this post I want to talk to you about the implementation of multi-host networking on Docker for Windows.

On Linux this has been supported since kernel version 3.16, but on Windows containers are a recent feature and overlay networking is likely going to be released pretty soon.

So, let's have a look at what this is and how it works.

As you have learned from my previous posts, the Docker engine communicates with the underlying Host Network Service (HNS) through a Libnetwork plugin. This plugin implements the Docker Container Network Model (CNM) which is composed of three main components:
  • A Sandbox, where the network configuration (IP address, mac address, routes and DNS entries) of the container is stored
  • An Endpoint linking the container Sandbox to a Network: this is a vNIC in the case of a Windows Container or a vmNIC in case of a Hyper-V container
  • A Network, which is a group of Endpoints belonging to different containers that can communicate directly
Behind each Network a built-in Driver performs the actual work of providing the required connectivity and isolation.

There are four possible driver packages inside Libnetwork:
  • null
  • bridge
  • overlay
  • remote
No network interface is attached to a container which is started with the Null driver:
docker run -it --network none microsoft/nanoserver powershell
Get-NetAdapter in this case returns nothing. And upon inspection this container will show no network:

In the second case, when you use the Bridge driver, the container won’t have a public IP but will be assigned a private address from the 20-bit private block defined by RFC 1918: 172.16.0.0 - 172.31.255.255 (172.16/12 prefix)

Get-Netadapter will show the virtual Ethernet adapter:

Name                      InterfaceDescription                    ifIndex
----                      --------------------                    -------
vEthernet (Container N... Hyper-V Virtual Ethernet Adapter #2          19
and Get-NetIpAddress will show the private IP address:
Get-NetIPAddress | Format-Table

ifIndex IPAddress                                       PrefixLength PrefixOrigin
------- ---------                                       ------------ ------------
19      fe80::29aa:cc8a:43f2:ae0f%19                              64 WellKnown   
18      ::1                                                      128 WellKnown   
19                                                20 Manual      
18                                                  8 WellKnown   
If I inspect this container, I can see the JSON describing the network specifications:
docker container inspect 4a44649f2b8d

Now, just a couple of weeks ago (in version v1.13.0-rc1), Docker implemented the third driver (read: Swarm-mode overlay networking support for Windows), which basically means that your Windows containers will be able to communicate even if they reside on different hosts.

Actually this is a bit more complicated than that, because Overlay networking has been implemented in the Docker engine but not yet in the HNS service of Windows. So if you try to build a multi-host network you will get the following error message:
docker network create -d overlay --subnet multihost
Error response from daemon: HNS failed with error : Catastrophic failure
Same output if you try the PowerShell version:
New-ContainerNet -Driver overlay -Name MultiHost
New-ContainerNet : Docker API responded with status code=InternalServerError, response={"message":"HNS failed witherror : Catastrophic failure "}
At line:1 char:1
+ New-ContainerNet -Driver overlay -Name MultiHost
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  + CategoryInfo          : NotSpecified: (:) [New-ContainerNet], DockerApiException
  + FullyQualifiedErrorId : Docker Client Exception,Docker.PowerShell.Cmdlets.NewContainerNet
Once the required Windows binaries to build an overlay network are released, it will be interesting to see whether Microsoft will embed in Nano Server the required key-value store, which has to be accessible to all the containers belonging to the same overlay network for them to be discoverable.

For the moment the most used key-value store is the one provided by Consul, but it is Linux-based only, so you won’t be able to run it on Windows:
docker run -d -p 8500:8500 --name consul consul
Unable to find image 'consul:latest' locally
latest: Pulling from library/consul
C:\Program Files\Docker\docker.exe: image operating system "linux" cannot be used on this platform.
See 'C:\Program Files\Docker\docker.exe run --help'.
All the same, overlay networking will soon be available for Docker containers on Windows. The first step has been done. Now it is up to Microsoft to do the next move. Stay tuned for more on the subject.

Wednesday, November 16, 2016

Building a Docker container for the Image2Docker tool

I have been playing a bit with Image2Docker with the intention to see how far I could go into containerizing existing workloads. To date, this PowerShell-based module by fellow MVP and Docker Captain Trevor Sullivan mounts a vhdx or wim Windows Image and tries to discover running artifacts, such as IIS, SQL or Apache, and generates a Dockerfile for a container hosting these services.

Now this is still experimental, and the list of accepted artifacts is still short, but I couldn't restrain myself from trying to build a Docker container for the job.

Here's how I tackled this, knowing that, like most of us, I am taking my first steps with this new feature of Windows 2016.

First of all I built the following Dockerfile in Visual Studio Code:

Basically I am issuing five statements:
  1. pull the microsoft/nanoserver image. Actually I could have used the microsoft/windowsservercore image as well but that would have taken longer
  2. state that I am the maintainer of the repository
  3. install the package manager called NuGet
  4. install the actual Image2Docker module (version 1.5 at the time of writing)
  5. set the ConvertTo-Dockerfile cmdlet as entry point for this container, so that I can pass the .vhdx or .wim image path straight into this dedicated container on execution
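As a rough sketch, the five statements could translate into something like the following (this is my reconstruction, not the author's verbatim Dockerfile; the exact installation commands and module version are assumptions):

```dockerfile
# 1. base image
FROM microsoft/nanoserver
# 2. maintainer of the repository
MAINTAINER happysysadm
# 3. install the NuGet package provider
RUN powershell -Command "Install-PackageProvider -Name NuGet -Force"
# 4. install the Image2Docker module (hypothetically, from the PowerShell Gallery)
RUN powershell -Command "Install-Module -Name Image2Docker -RequiredVersion 1.5 -Force"
# 5. set ConvertTo-Dockerfile as the entry point, so the image path can be passed at run time
ENTRYPOINT ["powershell", "-Command", "ConvertTo-Dockerfile"]
```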
Then the next steps to publish to the Docker Hub are:

docker build .  -t happysysadm/image2docker:latest -t happysysadm/image2docker:v0.1

In the step above I am issuing the build command from the folder containing the Dockerfile file, and I am setting two tags for the same image: latest and v0.1.

Then I logged in to the Hub:

docker login -u user -p password

And pushed the container into my public registry:

docker push happysysadm/image2docker

At this moment this repo becomes visible on the web:

Once I got my container up in the Hub, I cleaned up my local image:

docker rmi -f happysysadm/image2docker:v0.1

and pulled it again:

docker pull happysysadm/image2docker

Every time I updated my Dockerfile, I had to rebuild and increment the version tag:

docker build .  -t happysysadm/image2docker:latest -t happysysadm/image2docker:v0.2

In the step above the latest tag moves to the v0.2 image and the previous image retains only the tag v0.1.

Now this container is public and you can just do:

docker run happysysadm/image2docker sample.vhdx

and get the Dockerfile for your Windows image created for you. Let me know how it goes and remember that this project is open source so everybody's contribution is accepted.

Monday, November 14, 2016

Step up container management with PowerShell for Docker

I remember that one of the first reasons I started using Windows PowerShell is that it uses objects to represent data, which is great when you are interacting with an object-oriented Windows ecosystem. Now that some historical borders have been crossed between Linux and Windows, and that preexisting tools have been translated to Microsoft's OS, we, as PowerShell guys, could face a bit of a throwback in the way we use the shell.

Just have a look at Docker.

Invented in 2013 by a French guy named Solomon Hykes, this open source project aimed at automating the deployment of Linux containers has been quickly adopted by Microsoft for their latest operating system and can today be run on both Windows 10 and Windows 2016.

The main drawback of adopting such a tool is that it comes with a command line which looks obsolete in PowerShell terms: it only produces strings, which are hardly reusable unless you feed them to ConvertFrom-String:

docker images | ConvertFrom-String -Delimiter "\s{2,}" | Format-Table

P1                          P2     P3           P4          P5
--                          --     --           --          --
REPOSITORY                  TAG    IMAGE ID     CREATED     SIZE
microsoft/iis               latest 211fecef1e6b 5 days ago  9.48 GB
microsoft/sample-dotnet     latest c14528829a37 2 weeks ago 911 MB
microsoft/windowsservercore latest 93a9c37b36d0 7 weeks ago 8.68 GB
microsoft/nanoserver        latest e14bc0ecea12 7 weeks ago 810 MB

Now, though ConvertFrom-String is an extremely powerful cmdlet released with PowerShell 5.0 (check my blog post on the subject), it takes some time to get comfortable with its syntax. In the previous example, for instance, I am outputting the list of the images I have pulled from the Docker Hub onto my system. The text that comes through the pipeline once I run 'docker images' has to be split wherever there are at least two empty spaces. To achieve that I have to use the Delimiter parameter and match a whitespace \s at least two times {2,}.

Needless to say, knowing regular expressions becomes a must.
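To see the splitting logic in isolation, you can feed ConvertFrom-String some sample text of the same shape (the rows below are just an illustration, not real docker output):

```powershell
# Columns separated by runs of two or more spaces, like docker's tabular output
'microsoft/nanoserver        latest       810 MB',
'microsoft/iis               latest       9.48 GB' |
    ConvertFrom-String -Delimiter '\s{2,}'
```

Each input line becomes an object with automatically named properties P1, P2, P3, ready for the pipeline.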

Happily enough, we have an alternative to this. Since Docker comes with a nice API, there is an open source project for a module exposing PowerShell cmdlets to manage Docker images, containers and networks. Though still in development, I heartily suggest you start using it to maintain consistency with your existing environment.

You can find it here:

The installation is straightforward.

Register-PSRepository -Name DockerPS-Dev -SourceLocation

Install-Module Docker -Repository DockerPS-Dev -Scope CurrentUser

Here's the list of cmdlets that come with it:

Get-Command -Module Docker -CommandType Cmdlet

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Add-ContainerImageTag                      Docker
Cmdlet          ConvertTo-ContainerImage                   Docker
Cmdlet          Copy-ContainerFile                         Docker
Cmdlet          Enter-ContainerSession                     Docker
Cmdlet          Export-ContainerImage                      Docker
Cmdlet          Get-Container                              Docker
Cmdlet          Get-ContainerDetail                        Docker
Cmdlet          Get-ContainerImage                         Docker
Cmdlet          Get-ContainerNet                           Docker
Cmdlet          Get-ContainerNetDetail                     Docker
Cmdlet          Import-ContainerImage                      Docker
Cmdlet          Invoke-ContainerImage                      Docker
Cmdlet          New-Container                              Docker
Cmdlet          New-ContainerImage                         Docker
Cmdlet          New-ContainerNet                           Docker
Cmdlet          Remove-Container                           Docker
Cmdlet          Remove-ContainerImage                      Docker
Cmdlet          Remove-ContainerNet                        Docker
Cmdlet          Request-ContainerImage                     Docker
Cmdlet          Start-Container                            Docker
Cmdlet          Start-ContainerProcess                     Docker
Cmdlet          Stop-Container                             Docker
Cmdlet          Submit-ContainerImage                      Docker
Cmdlet          Wait-Container                             Docker
This module also exposes a bunch of aliases, though I don't recommend their use since they seem confusing to me and don't add anything in terms of command line agility:

Get-Command -Module Docker -CommandType Alias | Format-Table Name,ResolvedCommandName

Name                 ResolvedCommandName
----                 -------------------
Attach-Container     Enter-ContainerSession
Build-ContainerImage New-ContainerImage
Commit-Container     ConvertTo-ContainerImage
Exec-Container       Start-ContainerProcess
Load-ContainerImage  Import-ContainerImage
Pull-ContainerImage  Request-ContainerImage
Push-ContainerImage  Submit-ContainerImage
Run-ContainerImage   Invoke-ContainerImage
Save-ContainerImage  Export-ContainerImage
Tag-ContainerImage   Add-ContainerImageTag

So, docker images becomes:

Get-ContainerImage

RepoTags                              ID                   Created                Size(MB)
--------                              --                   -------                --------
microsoft/sample-dotnet:latest        sha256:c14528829a... 25/10/2016 13:55:28    869,05
microsoft/windowsservercore:latest    sha256:93a9c37b36... 22/09/2016 10:51:07    8 273,19
microsoft/nanoserver:latest           sha256:e14bc0ecea... 22/09/2016 09:39:30    772,81

and the returned object is a heavily reusable ImagesListResponse object:

Get-ContainerImage | Get-Member

   TypeName: Docker.DotNet.Models.ImagesListResponse

Name        MemberType Definition
----        ---------- ----------
Equals      Method     bool Equals(System.Object obj)
GetHashCode Method     int GetHashCode()
GetType     Method     type GetType()
ToString    Method     string ToString()
Created     Property   datetime Created {get;set;}
ID          Property   string ID {get;set;}
Labels      Property   System.Collections.Generic.IDictionary[string,string] Labels...
ParentID    Property   string ParentID {get;set;}
RepoDigests Property   System.Collections.Generic.IList[string] RepoDigests {get;set;}
RepoTags    Property   System.Collections.Generic.IList[string] RepoTags {get;set;}
Size        Property   long Size {get;set;}
VirtualSize Property   long VirtualSize {get;set;}
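Having real objects means the output can be reused directly, something the string output of docker images cannot offer. For instance (a sketch, assuming the Docker module is loaded and the engine is running):

```powershell
# Total disk footprint of all pulled images, computed from the Size property
'{0:N0} MB' -f ((Get-ContainerImage | Measure-Object -Property Size -Sum).Sum / 1MB)

# Images sorted by creation date, newest first
Get-ContainerImage | Sort-Object Created -Descending | Select-Object RepoTags, Created
```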

Same model for the list of existing containers:

Get-Container

ID                   Image           Command              Created                Status
--                   -----           -------              -------                ------
43a05b618697033eb... microsoft/na... c:\windows\system... 14/11/2016 09:44:19    Exited...
005b51dbe002324f8... microsoft/na... --name nanoserver1   14/11/2016 09:44:04    Created
e8b31c61d5f42b271... microsoft/na... --name nanoserver1   14/11/2016 09:42:12    Created
547b7dbd3b1473127... microsoft/sa... dotnet dotnetbot.dll 06/11/2016 16:11:07    Exited...
Get-Container | Get-Member

   TypeName: Docker.DotNet.Models.ContainerListResponse

Name            MemberType Definition
----            ---------- ----------
Equals          Method     bool Equals(System.Object obj)
GetHashCode     Method     int GetHashCode()
GetType         Method     type GetType()
ToString        Method     string ToString()
Command         Property   string Command {get;set;}
Created         Property   datetime Created {get;set;}
ID              Property   string ID {get;set;}
Image           Property   string Image {get;set;}
ImageID         Property   string ImageID {get;set;}
Labels          Property   System.Collections.Generic.IDictionary[string,string] Labels...
Mounts          Property   System.Collections.Generic.IList[Docker.DotNet.Models.MountP...
Names           Property   System.Collections.Generic.IList[string] Names {get;set;}
NetworkSettings Property   Docker.DotNet.Models.SummaryNetworkSettings NetworkSettings...
Ports           Property   System.Collections.Generic.IList[Docker.DotNet.Models.Port]...
SizeRootFs      Property   long SizeRootFs {get;set;}
SizeRw          Property   long SizeRw {get;set;}
State           Property   string State {get;set;}
Status          Property   string Status {get;set;}
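Again, because these are objects, filtering on a property is straightforward, with no regular expressions involved (a sketch, assuming the module is loaded):

```powershell
# All containers that exited, selected on the State property
Get-Container | Where-Object { $_.State -match 'exited' } | Select-Object ID, Image, Status
```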

Now that you have this module, you have two ways to run a container. Either by using:

docker run -it microsoft/nanoserver powershell

or by using Invoke-ContainerImage (aliased as Run-ContainerImage):

Invoke-ContainerImage -ImageIdOrName microsoft/nanoserver:latest -Command powershell -Input -Terminal

which, at its best, can be shortened to:

Run-ContainerImage microsoft/nanoserver:latest powershell -In -T
None of the PowerShell syntaxes are as short as the 'legacy' one, but again, the produced object is what makes them worth using.

I hope you have enjoyed this first post on the PowerShell module for the Docker Engine, which brings close integration between those that not so long ago were distant worlds. Stay tuned for more.

Friday, November 4, 2016

Announcing the winner of the PowerShell Oneliner Contest 2016

I am excited to announce the winner of the second PowerShell Oneliner Contest. But before I do it, let me tell you one thing. This year, I received over ninety submissions from wannabe PowerShell Monks from all over the world. Some solutions stood out as the most striking and imaginative entries. Some others were not successful in achieving what I asked, but showed a lot of effort in learning and initiative. Everybody seemed to understand that the aim of such a contest is not just to push PowerShell to its limit and beyond, by bending the command line to your will. It's a matter of generating knowledge and sharing it for others to learn from. Building code that can benefit the whole community is of paramount importance here.


So thanks to all of the entrants and, without further ado, let's have a look at the winning solution, by Sam Seitz, with 65 chars:

([char[]](71..89)|?{!(gdr $_)2>0}|sort{[guid]::newguid()})[0]+':'
#Posted by Sam Seitz to Happy SysAdm at October 25, 2016 at 8:02 AM

I got in touch with Sam so he could share a bit about himself and his thought process for the script.

Two years into his IT career, Sam is a 25-year old systems engineer for Network Technologies, Inc., an MSP in Olathe, KS. He spends his days perpetually amazed that his employer pays him to "play with computers" (as his father would say). Outside of work, his incredible wife and their pair of regal beagles keep him happier than a man has any right to be.

Seeing this challenge made me realize two things: 1) off the top of my head I know, maybe, five default aliases and 2) I should really start using more aliases. I'm normally extremely verbose in my scripting, so this proved to be a unique challenge. To keep it as short as possible, I used a few interesting techniques, which I'll break down step-by-step:
To generate the array of letters from G-Y, I took advantage of the fact that 71 through 89 is G through Y in the ASCII table. When cast as a [char], 71 is G, 72 is H, etc.
?{!(gdr $_)2>0}
I then filtered out occupied drive letters by using Where-Object (aliased as ?), the -not operator (!), and Get-PSDrive (gdr). 2>0 redirects the inevitable error output to null. (If you don't mind seeing each error Get-PSDrive throws when it's used with a non-existent drive, the 2>0 could be removed as it isn't necessary for the success of the one-liner. But who likes all those ugly red errors on their screen? Terrorists, that's who.)
In order to ensure the result was random, I used Sort-Object (sort) on the array of letters and told it to sort by the new GUID created using the .Net method [guid]::newguid().
Finally, I selected the first result [0] from the array of randomly sorted available drive letters in the output and threw a colon on the end (+':').
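For readability, here is the same logic spelled out with full cmdlet names (my verbose rendering of Sam's one-liner, not his code):

```powershell
# Candidate drive letters G..Y (ASCII 71..89)
$letters = [char[]](71..89)

# Keep only the letters with no existing drive, silencing the errors Get-PSDrive throws
$free = $letters | Where-Object { -not (Get-PSDrive $_ -ErrorAction SilentlyContinue) }

# Shuffle by sorting on a fresh GUID per element, take the first, append a colon
$randomFreeDrive = [string]($free | Sort-Object { [guid]::NewGuid() })[0] + ':'
$randomFreeDrive
```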

Thanks to Sam for sharing his deep knowledge of PowerShell with us. For those interested, I created a Gist with a list of working solutions I got, sorted by line length.


Now in the following section I will explain why I could not accept some entries.

The most common error by far was the use of Random as an alias of Get-Random. Though I understand the extreme difficulty of generating a random number without using Get-Random, I couldn't accept oneliners using Random as an alias for the simple reason that it is not one. It just works as an alias because the PowerShell interpreter prepends by default the verb 'Get-' to nouns if it can't find another match.

Get-Alias | ? {$_.Definition -match "Get-Random"}

Trace-Command can confirm that interpreter behavior:

Trace-Command -Name CommandDiscovery -PSHost -Expression { random }

DEBUG: CommandDiscovery Information: 0 : The command [random] was not found, trying again with get- prepended
DEBUG: CommandDiscovery Information: 0 : Looking up command: get-random
DEBUG: CommandDiscovery Information: 0 : Cmdlet found: Get-Random  Microsoft.PowerShell.Commands.GetRandomCommand


Now a word about my solutions to the contest. I wrote four of them. Since they are pretty short, I am pleased to share them with you.

In the first solution I was actually able to get a random GUID to sort on by fetching it from the internet.

There are for sure many websites exposing an engine for GUID generation ( or for instance) but in our case we want the shortest URL possible, and I was lucky enough to find a website named There's a funky cmdlet for getting stuff from the web: Invoke-RestMethod, which has the alias irm. Now the cool thing about the Internet is that nowadays many websites have adopted a JSON API to let consumers retrieve and manipulate their content using HTTP requests. And is one of them. Luck, again. So, doing the equivalent of:

Invoke-WebRequest -UseBasicParsing | ConvertFrom-Json

can be achieved in a simpler manner with:


which can be shortened to:


or even shorter:


And there I have my random GUID. The Internet for the IT pro, I daresay.
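To see the difference concretely, here's a sketch with a placeholder URL (any endpoint returning a JSON document will do; the actual short URL used in the contest is omitted here):

```powershell
# Placeholder endpoint - substitute any API that returns JSON
$url = 'https://example.com/api/guid'

# Long form: download the raw body, then parse the JSON manually
$parsed = (Invoke-WebRequest -UseBasicParsing $url).Content | ConvertFrom-Json

# Short form: Invoke-RestMethod (alias irm) detects the JSON content type
# and hands back an already-parsed object
$parsed = Invoke-RestMethod $url
```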

For the rest, my first solution matches Sam's:
([char[]](71..89)|?{!(gdr $_)2>0}|sort{irm})[0]+':'

In my second solution I leveraged the .NET Framework's System.Random class, but instead of using [random]::New().Next() I went for the shorter ([random]@{}).Next():
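As a quick sketch of why the cast works: converting a hashtable to a .NET type invokes the type's default constructor and then sets any properties named by the keys, so an empty hashtable is just a shorter constructor call:

```powershell
# Explicit constructor call (PowerShell 5+ syntax)
$viaNew = [random]::new().Next()

# Casting an empty hashtable invokes the same default constructor,
# with no properties to set afterwards - same effect, fewer characters
$viaCast = ([random]@{}).Next()

# Both yield a non-negative pseudo-random integer
```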

My third solution relies on one of my favorite cmdlets, Select-String (aliased as sls), in conjunction with the [guid] type accelerator, whose NewGuid static method provides the random sort key:
''+(ls function:[g-y]:|sls(gdr)-n|sort{[guid]::NewGuid()})[0]
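The trick here is that Sort-Object evaluates the script block once per input object, so generating a fresh GUID per element yields an effectively random ordering. A minimal illustration:

```powershell
# Each element gets a brand new GUID as its sort key,
# so the output order is effectively random on every run
$shuffled = 1..10 | Sort-Object { [guid]::NewGuid() }
$shuffled -join ','
```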

In my fourth and last solution I mixed my third and first solutions, which brought me down to 59 chars. It's a bit slower than the others because it fetches GUIDs from the Internet, but for the purpose of the contest this is the shortest solution I was able to come up with:
''+(ls function:[g-y]:|sls(gdr)-n|sort{irm})[0]

I have created a Gist with my solutions, which you can find here.


Now a word about how the testing of the posted oneliners went.

Since I was rapidly flooded with plenty of tricky oneliners, I was a bit scared of having to check all of them manually for compliance with the contest rules. Fortunately I had already worked a bit with Pester to define test cases on other projects, so I just had to adapt what I knew to the contest I had just started.

Then, and it was sheer luck, I got contacted by Jakub who proposed a complete solution to test those oneliners.

I am glad to say that what Jakub came up with is just brilliant. So, who better than him to explain his approach? Take it away, Jakub.

I have always loved one-liners. "Make the code as short as possible" is such a simple, yet so challenging restriction. It does not exist in our day-to-day work, where we care about readability, understandability and performance, but rarely about the length of our code. Putting this restriction in place, and removing any other, turns our usual focus on its head. For once we get to write code that is so unreadable, and uses so many quirks of the language, that we will need to explain it at least twice (if not three times). Finally we can put all the side notes we read in books to work, use all the features that we thought were bugs when we saw them for the first time, force the language syntax to its limits, and then watch in awe as others produce solutions twice as short as ours.

For this reason I had to take part in the Oneliner contest of 2016 hosted by Carlo on his blog. Once I read the requirements I thought to myself: well, that's more than one requirement, what a nice opportunity to take this to another level and write some tests as well. And so I approached the whole problem in a kata-like way, which means not only taking my time to think about the problem itself, but also taking time to reason about the tests and the process of writing them. Since I know I have no way of winning the contest, especially after seeing how creative people were last year, I will at least walk you through my thought process.

First I read the requirements just to make sure they are quantifiable, by which I mean that I can measure whether a requirement was met. A quantifiable requirement is, for example, "contains no semicolon"; a non-quantifiable one (at least not easily) would be "the code looks nice".

Once I was sure I would be able to write tests for all of the requirements, I proceeded to categorize them and realized they can be split into two categories: stylistic and functional, where stylistic is how the code should look, and functional is how the code should behave.

I started with the functional part of the tests as they seemed much simpler to implement.

### Test 1 - Outputs single string
The first decision I made was to store my one liner as a script block. This enabled me to reuse the same script block in all the test cases, and it also enabled me to change my one liner very easily.

The first test checks that the output of the oneliner is a single string. Pester has a built-in assertion `BeOfType` which was my first choice, but then I realized that piping the output through the pipeline would unroll any array I might get, so I wouldn't be able to tell whether I got a single item or a whole array of items. So I went oldschool and used the `-is` operator.

It "outputs single string" {
    (&$oneliner) -is [string] | Should Be $True
}
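The pipeline-unrolling issue is easy to demonstrate: piped into an assertion, an array of strings is tested element by element, while `-is` inspects the value as a whole:

```powershell
$single = 'G:'
$many   = 'G:', 'H:'

$single -is [string]   # True: a single string
$many   -is [string]   # False: an [object[]] wrapping two strings

# Piping unrolls the array, so each element is seen individually
# and a per-item type check passes for both elements
$many | ForEach-Object { $_ -is [string] }   # True, True
```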

### Test 2 - Outputs one letter followed by colon
The next requirement forces me to match the text and specifies that it should be a letter followed by a colon. Text matching is easy with the `Match` assertion, which uses regular expressions. The only thing I had to watch out for was anchoring the start and end of the string, to make sure no surrounding characters are matched.

It "Outputs single letter followed by colon" {
    &$oneliner | Should Match "^[a-z]\:$"
}

I decided to match the whole alphabet in this test to limit mixing the requirements. I find it good practice to specify each requirement in one place without unnecessarily restricting other, unrelated tests.

### Test 3 - Should exclude drives A-F and Z
Yet another requirement forces me to exclude some of the drive letters. I decided to use test cases, so there is a single test for each excluded letter, and specified a list of them. This Pester feature generates one test per test case and also modifies the test name to reflect the actual value of `$DriveLetter`, for extra readability. The script block then declares a parameter named $DriveLetter, which I use in the assertion.

It "Should not output drive letter <DriveLetter>" -TestCases `
    @{DriveLetter = "a:"},
    @{DriveLetter = "b:"},
    @{DriveLetter = "c:"},
    @{DriveLetter = "d:"},
    @{DriveLetter = "e:"},
    @{DriveLetter = "f:"},
    @{DriveLetter = "z:"} {
    param ($DriveLetter)
    &$oneliner | Should Not Be $DriveLetter
}

### Test 4 - Drive should not be used
This test could not be easier. I used the `Exist` assertion, which I know relies on `Test-Path` internally. Nothing else was needed here.

It "Resulting drive should not exist" {
    &$oneLiner | Should Not Exist
}

### Test 5 - Drive should be random
This test I found interesting, because randomness is something to avoid in tests as much as possible: it can make a test fail from time to time, and unexpected failures lower the trust we have in tests. But in this case I'll be using the tests locally, so I took the simplest route: run the code twice and compare the results. If the results differ, the output is probably "random". This is far from perfect, but in this simple case I can validate it by running the test multiple times. In a real production environment I'd run the code more than twice before comparing the results.

It "Should be random" {
    &$oneLiner | Should Not Be (&$oneLiner)
}

Another interesting thing about this test is that I did not notice the randomness requirement at first and posted my solution without it, which automatically makes my solution incorrect :)

### Test 6 - Code should be error free
This test seemed straightforward, because any terminating error (exception) in a Pester test makes the test fail. The difficult part was capturing non-terminating errors as well: I had to set the error action preference to `Stop` and also pipe to `Not Throw` to make the test behave correctly. That's something to be improved in the next version of Pester.

It "Should be error-free" {
    $errorActionPreference = 'Stop'
    $oneLiner | Should Not Throw
}

That was it for the functional tests. All of them were pretty easy to write, and there was not much to figure out. Next up were the stylistic tests, which were a bit more challenging, as I first needed to write some helper functions to avoid any ifs and for loops in the body of my tests.

### Test 7 - All cmdlets must have an alias
This was the most challenging test to write. There were two things I needed to figure out. First, I needed a way to parse the code and find all the commands; for that I knew I could use the AST, but I still had to write and test the code to find all the commands. The other thing was checking whether all the found commands have aliases. I started with the tests for the parsing and then implemented the function:

Describe "Get-ScriptBlockCommand" {
    It "Finds basic cmdlet" {
        Get-ScriptBlockCommand { Get-Date } | Should Be "Get-Date"
    }
    It "Finds basic alias" {
        Get-ScriptBlockCommand { gci } | Should Be "gci"
    }
    It "Finds multiple commands" {
        $actual = Get-ScriptBlockCommand { ps; get-process }
        $actual[0] | Should Be 'ps'
        $actual[1] | Should Be 'get-process'
    }
    It "Ignores keywords" {
        Get-ScriptBlockCommand { if ($true) {} } | Should BeNullOrEmpty
    }
    It "Ignores other tokens" {
        Get-ScriptBlockCommand { $a = 10 ; $false } | Should BeNullOrEmpty
    }
}

function Get-ScriptBlockCommand ($ScriptBlock) {
    $tokens = [System.Management.Automation.PSParser]::Tokenize($ScriptBlock, [ref]$null)
    $tokens | where { $_.Type -eq 'Command' } | select -expand Content
}
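A quick way to see what the tokenizer hands back, independently of the helper function above:

```powershell
# Tokenize a sample pipeline; 'gci' and 'sort' come back with
# Type 'Command', while the pipe symbol is an 'Operator' token
$tokens = [System.Management.Automation.PSParser]::Tokenize('gci | sort', [ref]$null)
$tokens | ForEach-Object { '{0}: {1}' -f $_.Type, $_.Content }
```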

Then I followed up by looking up aliases and testing that every command has at least one:

Describe "Test-Alias" {
    It "Finds alias for basic cmdlet" {
        Test-Alias Get-ChildItem | Should Be $True
        Test-Alias Test-Path | Should Be $False
    }
    It "Finds alias when given an alias" {
        Test-Alias gci | Should Be $True
        Test-Alias ps | Should Be $True
    }
    It "Returns true when all commands have aliases" {
        Test-Alias ("gci", "ps", "get-childItem") | Should Be $True
    }
    It "Returns false when any of the commands does not have an alias" {
        Test-Alias ("Test-path", "ps", "get-childItem") | Should Be $false
    }
}

function Test-Alias ([string[]] $Name) {
    end {
        $aliases = Get-Alias
        foreach ($n in $Name) {
            if ($null -eq ($aliases | Where {$_.Name -eq $n -or $_.Definition -eq $n})) {
                return $false
            }
        }
        $true
    }
}

Then I could finally proceed to writing the main test:

It "All used cmdlets have an alias" {
    $commands = Get-ScriptBlockCommand $oneliner
    Test-Alias $commands | Should Be $True
}

### Test 8 - Code must not contain semicolon
And finally I finished with another primitive test, checking that no semicolon is to be found in my oneliner. This time the one liner is not executed; rather, it is implicitly converted to a string and passed to the `Match` assertion.

It "contains no semicolons" {
    $oneliner | Should Not Match "\;"
}

And that was it for my testing. I hope you enjoyed the competition, and congratulations to the winners!

Thanks again to all the competitors, to Mike F Robbins for the original function, to Sam Seitz for his brilliant solution and to Jakub Jares for showing us the way to functional testing. And remember, it was all about learning.