Tuesday, December 27, 2016

A function for robust command execution in PowerShell

I have always been a great fan of a tool named RoboCopy, which I bet many of you have used countless times. These days I have been in need of that very same kind of robustness for one of my functions, since I am running it in an unreliable environment.

What I wanted to achieve with PowerShell in particular was to fetch a great deal of data from public web servers and reuse the information in my scripts. Unfortunately, when you use a cmdlet, be it Invoke-RestMethod or Test-Connection, you can get failures which are due not to your cmdlets but to the underlying infrastructure (such as a flaky Wi-Fi network, a distorted topology caused by a flapping router, or a way too busy web server).

Sure, Invoke-RestMethod has a TimeoutSec parameter, but what if it fails and I really need the information coming from that website? Well, this reasoning brought me to write an advanced function that takes a command and its parameters and tries to run it a given number of times (three by default), with intervals of three seconds.

This function, which I called Start-RoboCommand (Start is an approved verb, so PSScriptAnalyzer is happy, and I borrowed the idea of the RoboCommand noun from RoboCopy itself), also supports a mode where the command is run indefinitely, through the addition of a -Wait parameter, much like the one you can find in recent versions of Get-Content.

Finally, I added a LogFile parameter to log errors (which is particularly important here, since we are dealing precisely with commands not being successful) and Verbose support, which tells you exactly what's going wrong.

Now without further ado, here's my function:

function Start-RoboCommand {

    <#
    .SYNOPSIS
       Function that tries to run a command until it succeeds or forever
    .DESCRIPTION
       Function that tries to run a command until it succeeds or forever. By default this function tries to run a command three times with three seconds intervals.
    .PARAMETER Command
        Command to execute
    .PARAMETER Args
        Arguments to pass to the command
    .PARAMETER Count
        Number of tries before throwing an error
    .PARAMETER Wait
        Run the command forever even if it succeeds
    .PARAMETER DelaySec
        Time in seconds between two tries
    .PARAMETER LogFile
        The path to the error log
    .EXAMPLE
       Start-RoboCommand -Command 'Invoke-RestMethod' -Args @{ URI = "http://guid.it/json"; TimeoutSec = 1 } -Count 2 -Verbose
    .EXAMPLE
       Start-RoboCommand -Command 'Invoke-RestMethod' -Args @{ URI = "http://notexisting.it/json"; TimeoutSec = 1 } -Count 2 -Verbose
    .EXAMPLE
       Start-RoboCommand -Command 'Invoke-RestMethod' -Args @{ URI = "http://guid.it/json"; TimeoutSec = 1 } -Wait -Verbose
    .EXAMPLE
       Start-RoboCommand -Command 'Invoke-RestMethod' -Args @{ URI = "http://notexisting.it/json"; TimeoutSec = 1 } -Wait -Verbose
    .EXAMPLE
       Start-RoboCommand -Command 'Test-Connection' -Args @{ ComputerName = "bing.it" } -Wait -Verbose
    .EXAMPLE
       Start-RoboCommand -Command 'Test-Connection' -Args @{ ComputerName = "nocomputer" } -Wait -LogFile $Env:temp\error.log -Verbose
    .EXAMPLE
       Start-RoboCommand -Command Get-Content -Args @{path='d:\inputfile.txt'} -Wait -DelaySec 2 -LogFile $Env:temp\error.log -Verbose
    #>

    [CmdletBinding(DefaultParameterSetName = 'Limited')]

    Param (

    [Parameter(Mandatory=$true)]
    [string]$Command,

    [Parameter(Mandatory=$false)]
    [hashtable]$Args = @{},

    [Parameter(Mandatory=$false,ParameterSetName = 'Limited')]
    [int32]$Count = 3,

    [Parameter(Mandatory=$false,ParameterSetName = 'Forever')]
    [switch]$Wait,

    [Parameter(Mandatory=$false)]
    [int32]$DelaySec = 3,

    [Parameter(Mandatory=$false)]
    [string]$LogFile

    )

    $Args.ErrorAction = "Stop"
    $RetryCount = 0

    $Success = $false
    do {

        try {

            & $Command @Args

            Write-Verbose "$(Get-Date) - Command $Command with arguments `"$($Args.values[0])`" succeeded."

            if(!$Wait) {
                $Success = $true
            }
        }

        catch {

            if($LogFile) {

                "$(Get-Date) - Error: $($_.Exception.Message) - Command: $Command - Arguments: $($Args.values[0])" | Out-File $LogFile -Append
            }

            if ($RetryCount -ge $Count) {

                Write-Verbose "$(Get-Date) - Command $Command with arguments `"$($Args.values[0])`" failed $RetryCount times. Exiting."

                $Success = $true
            }

            else {

                Write-Verbose "$(Get-Date) - Command $Command with arguments `"$($Args.values[0])`" failed. Retrying in $DelaySec seconds."

                Start-Sleep -Seconds $DelaySec

                if(!$Wait) {

                    $RetryCount++
                }
            }
        }
    }

    while (!$Success)
}


Let me know how it works for you, and if you have any suggestions on the logic I'll be more than happy to improve it over time. For sure you can also find it on my GitHub.

Thursday, December 8, 2016

Spotlight on the PSReadline PowerShell module

The trend is clear: Microsoft has shifted some major projects, like .NET and PowerShell itself, into the open-source ecosystem, and has made them cross-platform. Today you can run your PowerShell scripts on a GUI-less Windows Server Core, or on a headless Nano Server, but also on Linux, and on a Mac.

There is a project in particular which reveals this kind of cross-pollination between OSes, and it is the PSReadline module, which is aimed at bringing the GNU Readline experience to your PowerShell console.
This module is installed by default on Windows 10 and brings some slick functionalities which are well worth a quick look.
The first functionality is the fact that with PSReadline, the console preserves command history across sessions. OK, you were used to running Get-History to get the list of typed commands, and to using Invoke-History (aliased as 'r') to run commands found in the history. But these two cmdlets are limited to the current session:

Now with the arrival of PSReadline, which is loaded by default when you start a PowerShell console, you get the ability to retrieve commands typed in previous sessions, even across reboots. This is achieved through log files stored inside the Application Data folder:
  • $env:APPDATA\Microsoft\Windows\PowerShell\PSReadline\ConsoleHost_history.txt for the PowerShell console host (conhost.exe)
  • $env:APPDATA\Microsoft\Windows\PowerShell\PSReadline\Windows PowerShell ISE Host_history.txt for the Integrated Scripting Environment (ISE)
  • $env:APPDATA\Microsoft\Windows\PowerShell\PSReadline\Visual Studio Code Host_history.txt for Visual Studio Code, the new toy for those into DevOps
How did I discover that? Simple. The PSReadline module comes with five cmdlets:
  • Get-PSReadlineKeyHandler: gets the key bindings for the PSReadline module
  • Get-PSReadlineOption: gets values for the options that can be configured
  • Remove-PSReadlineKeyHandler: removes a key binding
  • Set-PSReadlineKeyHandler: binds keys to user-defined or PSReadline-provided key handlers
  • Set-PSReadlineOption: customizes the behavior of command line editing in PSReadline.
If you issue “(Get-PSReadlineOption).HistorySavePath” you will get the location where the system keeps the command history for your current interpreter.
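For instance, on the classic console host you can peek at the saved history directly (a quick sketch; the file only exists once you have typed at least one command in that host):

```powershell
# Show the last ten commands PSReadline saved for the current host
Get-Content -Path (Get-PSReadlineOption).HistorySavePath -Tail 10
```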

Now, for some reason, the only working log among those listed above is the one for PowerShell on the command line, probably because PowerShell ISE and VSCode don't have a true console (conhost.exe) behind them:

Since the Application Data folder is user-specific, you only have access to the command history for your own account: there is one ConsoleHost_history.txt file for each user on a given computer. The permissions are set in a way that an admin can access the command history of other users, which is good for checking your systems.

Here's a script I wrote to retrieve a list of all the consolehost_history.txt files on my systems, so that I know who used PowerShell and when:
(Get-ChildItem -Path c:\users).name | % {

     Get-Item ((Get-PSReadlineOption).HistorySavePath -replace ($env:USERNAME,$_)) -ErrorAction SilentlyContinue

     } | Select-Object FullName,
                       @{Name="Kbytes";Expression={ "{0:N0}" -f ($_.Length / 1Kb) }},
                       @{Name="Lines";Expression={(Get-Content $_.fullname | Measure-Object -Line).Lines}}
To prevent PowerShell from logging any command just type:
Set-PSReadlineOption –HistorySaveStyle SaveNothing
Other interesting settings that you could adopt or make custom are:
Set-PSReadLineOption -HistoryNoDuplicates
Set-PSReadLineOption -MaximumHistoryCount 40960
I wouldn't bother changing the HistorySaveStyle because the default value seems well suited to me: SaveIncrementally means that every command you run is stored in the log before actually being executed.
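If you want to check what is currently in place before changing anything, you can simply query the options object (a quick sketch; these four properties all exist on the object returned by Get-PSReadlineOption):

```powershell
# Inspect the history-related PSReadline settings currently in effect
Get-PSReadlineOption | Select-Object HistorySaveStyle, HistorySavePath, MaximumHistoryCount, HistoryNoDuplicates
```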

If you want to erase your command history, you can just press ALT+F7, as you can discover by issuing:
Get-PSReadlineKeyHandler | ? Function -eq 'clearhistory'

Key    Function     Description
---    --------     -----------
Alt+F7 ClearHistory Remove all items from the command line history (not PowerShell history)
The second functionality is the possibility to access and search the history log in an interactive way. What I mean is that you can use your keyboard to search the history by pressing combinations of keys. The discovery of the existing keys is performed with:
Get-PSReadlineKeyHandler | ? function -like '*history*'

Key       Function                Description
---       --------                -----------
UpArrow   PreviousHistory         Replace the input with the previous item in the history
DownArrow NextHistory             Replace the input with the next item in the history
Ctrl+r    ReverseSearchHistory    Search history backwards interactively
Ctrl+s    ForwardSearchHistory    Search history forward interactively
Alt+F7    ClearHistory            Remove all items from the command line history (not PowerShell history)
F8        HistorySearchBackward   Search for the previous item in the history that starts with the current input - like PreviousHistory if the input is empty
Shift+F8  HistorySearchForward    Search for the next item in the history that starts with the current input - like NextHistory if the input is empty
Unbound   ViSearchHistoryBackward Starts a new search backward in the history.
Unbound   BeginningOfHistory      Move to the first item in the history
Unbound   EndOfHistory            Move to the last item (the current input) in the history
As you can see, pressing Ctrl+r will bring up a bottom-up search (identified by bck-i-search): just start typing, and PSReadline will complete the line with commands from the history logfile:

The third functionality is that PSReadLine allows you to mark, copy, and paste text in the common Windows way. It is actually just as if you were in Word: CTRL+C copies text, CTRL+X cuts text, and CTRL+V pastes text. CTRL+C can still be used to abort a command line, but when you select some text, with the CTRL+SHIFT+ArrowKeys key combination for instance, PSReadline will switch CTRL+C to its Windows behavior. Awesome.

The fourth functionality is syntax checking as you type. When PSReadline detects a syntax error it turns the greater-than sign on the left red, like in the following example where I forgot to close the double quotes after the $Computer variable:

If to all these functionalities you add the syntax coloring provided by PSReadline, or the possibility to use key combinations like CTRL+Z to undo code changes, you end up with a PowerShell console that is a delight to use. And you can even get it on your old Windows 7 by installing WMF 5 and then running the following line of code to fetch the module from the PowerShell Gallery:
Install-Module -Name PSReadline
Now just choose your way. Here's a comparative screenshot of the four main development environments I use:

Happy coding.

Wednesday, November 30, 2016

A PowerShell function to monitor physical disk activity per storage bay during sync activities

These days I have been migrating data on an old Windows 2003 server from an old HP XP128 storage array to a newer one, an HP 3PAR. Both of these fiber channel SANs were mounted and managed on the server through Veritas Enterprise Administrator (VEA) version 5.1. At first I started with Robocopy to migrate data, ACLs, and all the rest from the old volume to the new one, but I soon discovered that there could be better ways to move huge amounts of data (I am talking here of several million sensitive files).

One of the main advantages of using Robocopy is that you have fine-grained control over your sync. The downside is that, after the sync, you have to stop the old volume and move all your pointers to the new volume, which has a big impact on the automation systems relying on those files to keep up their 24/7 activity.

I decided then to change plans and build a mirror on VEA between the old storage array and the new one.

The only problem with such an old version of VEA is that you don't have access to a piece of information as basic as whether the mirror sync is complete. The interface just shows you that you have successfully built your mirror but hides the information about the actual data sync taking place behind the curtains.

That's the moment the manager came in and asked for a way to keep an eye on the sync. And that's the moment I replied: I can do that for you with PowerShell, sir.

I knew fairly well that, though there's no PowerShell on a Windows 2003 server unless you have taken the time to install it, I could access its performance counters from a distant, more recent workstation through Get-Counter:
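Something along these lines does the trick (a minimal sketch; the server name is just a placeholder, and remote counter access requires the Remote Registry service to be reachable):

```powershell
# Sample the physical disk counters of a remote server once, from the local workstation
Get-Counter -ComputerName srv2003 -Counter '\PhysicalDisk(*)\Disk Read Bytes/sec' -MaxSamples 1
```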

What I wanted was to give the manager a script he could run himself that showed the activity for the disks involved in the sync. So I knew that I had to rely on cmdlets I am not used to putting in my functions, such as Clear-Host or Write-Host.
But, as for anything else, there are times you have to make exceptions. And Write-Host can have its use sometimes.
In the end I came up with a function that, given a set of physical disks on a source server and a target server, monitors the disk activity in terms of bytes read and written per second and, when those are not null, sets the font color to green so that the active disks are highlighted.

The names of the disks can be found in the Perfmon GUI itself as well as in VEA:

Their names can be given as input to the function as a pattern for a regular expression. In my case this gave:

-SourceDiskPattern '\(2\)|\(14\)'
-DestinationDiskPattern '\(8\)|\(9\)|\(10\)|\(20\)|\(21\)|\(22\)' 
because I am trying to match those hard disk numbers.
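To see why the parentheses need to be escaped, you can test such a pattern against a counter path with the -match operator (the paths below are only illustrative; actual instance names depend on your system):

```powershell
$pattern = '\(2\)|\(14\)'
'\physicaldisk(2)\disk read bytes/sec'  -match $pattern   # True: "(2)" is matched literally
'\physicaldisk(20)\disk read bytes/sec' -match $pattern   # False: "(20)" is neither "(2)" nor "(14)"
```

Without the backslashes, the parentheses would be treated as regex grouping and '2|14' would happily match disk 20 as well.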

I have also added to the function a couple of parameters to show the processor activity and the disk queue, since these counters can always be of use when tracing a workload:

In the end, here's the output expected by the manager during the sync, with the green lines highlighting the disks where the data are read or written:

I rely on Clear-Host to refresh the screen so that the manager only sees the current workload. This can be a bad practice, as Invoke-ScriptAnalyzer will tell you, but in my case this is exactly the cmdlet I needed.

Here's the code for the Get-DiskStat function, which by the way you can find on GitHub:

function Get-DiskStat {

    <#
    .SYNOPSIS
       Monitors physical disk activity per bay during sync activities
    .DESCRIPTION
       Monitors physical disk activity during a sync and highlights the disks that are active reading or writing bytes and the bay they belong to
    .EXAMPLE
       Get-DiskStat -SourceComputer srv1 -SourceStorageBayName 'HPE 3PAR' -SourceDiskPattern '\(2\)|\(14\)' -DestinationComputer srv2 -DestinationStorageBayName 'HP XP128' -DestinationDiskPattern '\(7\)|\(10\)' -Refresh -Frequency 2 -Repeat 10
    .EXAMPLE
       Get-DiskStat -SourceComputer srv1 -SourceStorageBayName 'HPE 3PAR' -SourceDiskPattern '\(2\)|\(14\)' -DestinationComputer srv2 -DestinationStorageBayName 'HP XP128' -DestinationDiskPattern '\(7\)|\(10\)' -Refresh -ShowCpu
    .EXAMPLE
       Get-DiskStat -SourceComputer srv1 -SourceStorageBayName 'HPE 3PAR' -SourceDiskPattern '\(2\)|\(14\)' -DestinationComputer srv2 -DestinationStorageBayName 'HP XP128' -DestinationDiskPattern '\(7\)|\(10\)' -Refresh -ShowCpu -ShowQueue
    .EXAMPLE
       Get-DiskStat -SourceComputer srv1 -SourceStorageBayName 'HPE 3PAR' -SourceDiskPattern '\(2\)|\(14\)' -DestinationComputer srv2 -DestinationStorageBayName 'HP XP128' -DestinationDiskPattern '\(7\)|\(10\)' -Refresh -ShowCpu -ShowQueue -Credential (Get-Credential)
    .EXAMPLE
       Get-DiskStat -sc srv1 -sbn 'HPE 3PAR' -sdp '\(2\)|\(14\)' -dc srv2 -dbn 'HP XP128' -ddp '\(7\)|\(10\)' -R -C -Q -Cred (Get-Credential) -F 1 -rep 1000
    .NOTES
       Carlo MANCINI
    #>

    [CmdletBinding()]

    Param (

        # Source computer for the sync
        [Parameter(Mandatory=$true)][Alias('sc')]
        [string]$SourceComputer,

        # Source bay name for the sync
        [Parameter(Mandatory=$true)][Alias('sbn')]
        [string]$SourceStorageBayName,

        # Source disk pattern for the sync
        [Parameter(Mandatory=$true)][Alias('sdp')]
        [string]$SourceDiskPattern,

        # Destination computer for the sync
        [Parameter(Mandatory=$true)][Alias('dc')]
        [string]$DestinationComputer,

        # Destination bay name for the sync
        [Parameter(Mandatory=$true)][Alias('dbn')]
        [string]$DestinationStorageBayName,

        # Destination disk pattern for the sync
        [Parameter(Mandatory=$true)][Alias('ddp')]
        [string]$DestinationDiskPattern,

        # Clear the screen between each execution
        [Alias('R')]
        [switch]$Refresh,

        # Show Active and Idle CPU counters
        [Alias('C')]
        [switch]$ShowCpu,

        # Show disk queue for selected disks
        [Alias('Q')]
        [switch]$ShowQueue,

        # Specifies a user account that has permission to perform this action
        [Alias('Cred')]
        [System.Management.Automation.PSCredential]
        $Credential = [System.Management.Automation.PSCredential]::Empty,

        # Frequency of the polling in seconds
        [Alias('F')]
        [int]$Frequency = 10,

        # Total number of polling to perform
        [Alias('rep')]
        [int]$Repeat = 10

    )

    Try {
        Test-Connection $SourceComputer,$DestinationComputer -Count 1 -ErrorAction Stop | Out-Null
    }
    Catch {
        Throw "At least one of the target servers is not reachable. Exiting."
    }

    $CounterList = '\PhysicalDisk(*)\Disk Read Bytes/sec','\PhysicalDisk(*)\Disk Write Bytes/sec','\PhysicalDisk(*)\Current Disk Queue Length','\Processor(_Total)\% Idle Time','\Processor(_Total)\% Processor Time'

    1..$Repeat | % {

        $SourceCounterValue = (Get-Counter $CounterList -ComputerName $SourceComputer).countersamples

        if($DestinationComputer -eq $SourceComputer) {

            $DestinationCounterValue = $SourceCounterValue

            $SameHost = $True

        }

        else {
            $DestinationCounterValue = (Get-Counter $CounterList -ComputerName $DestinationComputer).countersamples
        }

        if($Refresh) {Clear-Host}

        if($ShowCpu) {

            "$SourceComputer CPU Activity & Idle"
            $SourceCounterValue | ? {$_.path -match 'processor'} | % {
                    Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10)
            }

            if(!$SameHost) {
                "$DestinationComputer CPU Activity & Idle"
                $DestinationCounterValue | ? {$_.path -match 'processor'} | % {
                    Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10)
                }
            }
        }

        if($ShowQueue) {

            "$SourceStorageBayName Storage Bay Disk Queue on $SourceComputer"

            $SourceCounterValue | ? {($_.path -match $SourceDiskPattern) -and ($_.path -match 'queue')} | % {
                    if($_.cookedvalue -gt 0) {
                        Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $_.cookedvalue.tostring().padright(10) -ForegroundColor Green
                    }
                    else {
                        Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $_.cookedvalue.tostring().padright(10) -ForegroundColor White
                    }
            }

            "$DestinationStorageBayName Storage Bay Disk Queue on $DestinationComputer"

            $DestinationCounterValue | ? {($_.path -match $DestinationDiskPattern) -and ($_.path -match 'queue')} | % {
                    if($_.cookedvalue -gt 0) {
                        Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $_.cookedvalue.tostring().padright(10) -ForegroundColor Green
                    }
                    else {
                        Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $_.cookedvalue.tostring().padright(10) -ForegroundColor White
                    }
            }
        }

        "$SourceStorageBayName Read stats on $SourceComputer"

        $SourceCounterValue | ? {($_.path -match $SourceDiskPattern) -and ($_.path -match 'read')} | % {
            if($_.cookedvalue -gt 0) {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor Green
            }
            else {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor White
            }
        }

        "$SourceStorageBayName Write stats on $SourceComputer"

        $SourceCounterValue | ? {($_.path -match $SourceDiskPattern) -and ($_.path -match 'write')} | % {
            if($_.cookedvalue -gt 0) {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor Green
            }
            else {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor White
            }
        }

        "$DestinationStorageBayName Read stats on $DestinationComputer"

        $DestinationCounterValue | ? {($_.path -match $DestinationDiskPattern) -and ($_.path -match 'read')} | % {
            if($_.cookedvalue -gt 0) {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor Green
            }
            else {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor White
            }
        }

        "$DestinationStorageBayName Write stats on $DestinationComputer"

        $DestinationCounterValue | ? {($_.path -match $DestinationDiskPattern) -and ($_.path -match 'write')} | % {
            if($_.cookedvalue -gt 0) {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor Green
            }
            else {
                Write-Host $_.path.padright(65)`t $_.InstanceName.padright(5)`t $([math]::round($_.cookedvalue)).tostring().padright(10) -ForegroundColor White
            }
        }

        Start-Sleep -Seconds $Frequency
    }
}
PowerShell, once again the tool for the job.

Friday, November 25, 2016

On the road to Overlay networking on Docker for Windows

The container networking stack has gone through many rapid improvements on Windows Server 2016, and it's nice to see new features coming out on a regular basis: Docker's release pace is fast, and though they have had a few missteps, most of the discovered bugs are promptly addressed.

In this post I want to talk to you about the implementation of multi-host networking on Docker for Windows.

On Linux this has been supported since kernel version 3.16, but on Windows, containers are a recent feature, and Overlay networking is likely going to be released pretty soon.

So, let's have a look at what this is and how it works.

As you have learned from my previous posts, the Docker engine communicates with the underlying Host Network Service (HNS) through a Libnetwork plugin. This plugin implements the Docker Container Network Model (CNM) which is composed of three main components:
  • A Sandbox, where the network configuration (IP address, mac address, routes and DNS entries) of the container is stored
  • An Endpoint linking the container Sandbox to a Network: this is a vNIC in the case of a Windows Container or a vmNIC in case of a Hyper-V container
  • A Network, which is a group of Endpoints belonging to different containers that can communicate directly
Behind each Network a built-in Driver performs the actual work of providing the required connectivity and isolation.

There are four possible driver packages inside Libnetwork:
  • null
  • bridge
  • overlay
  • remote
No network interface is attached to a container which is started with the Null driver:
docker run -it --network none microsoft/nanoserver powershell
Get-NetAdapter in this case returns nothing. And upon inspection this container will show no network:

In the second case, when you use the Bridge driver, the container won't have a public IP but will be assigned a private address from the 20-bit private range defined by RFC 1918 (172.16.0.0 - 172.31.255.255, i.e. the 172.16/12 prefix).

Get-Netadapter will show the virtual Ethernet adapter:

Name                      InterfaceDescription                    ifIndex
----                      --------------------                    -------
vEthernet (Container N... Hyper-V Virtual Ethernet Adapter #2          19
and Get-NetIpAddress will show the private IP address:
Get-NetIPAddress | Format-Table

ifIndex IPAddress                                       PrefixLength PrefixOrigin
------- ---------                                       ------------ ------------
19      fe80::29aa:cc8a:43f2:ae0f%19                              64 WellKnown   
18      ::1                                                      128 WellKnown   
19                                                20 Manual      
18                                                  8 WellKnown   
If I inspect this container, I can see the JSON describing the network specifications:
docker container inspect 4a44649f2b8d

Now, just a couple of weeks ago (in version v1.13.0-rc1), Docker implemented the third driver (read Swarm-mode overlay networking support for Windows), which basically means that your Windows containers will be able to communicate even if they are residing on different hosts.

Actually it is a bit more complicated than that, because Overlay networking has been implemented in the Docker engine but not yet in the HNS service of Windows. So if you try to build a multi-host network you will get the following error message:
docker network create -d overlay --subnet multihost
Error response from daemon: HNS failed with error : Catastrophic failure
Same output if you try the PowerShell version:
New-ContainerNet -Driver overlay -Name MultiHost
New-ContainerNet : Docker API responded with status code=InternalServerError, response={"message":"HNS failed witherror : Catastrophic failure "}
At line:1 char:1
+ New-ContainerNet -Driver overlay -Name MultiHost
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  + CategoryInfo          : NotSpecified: (:) [New-ContainerNet], DockerApiException
  + FullyQualifiedErrorId : Docker Client Exception,Docker.PowerShell.Cmdlets.NewContainerNet
Once the required Windows binaries to build an Overlay network are released, it will be interesting to see whether Microsoft is going to embed in Nano Server the required key-value store, which has to be accessible to all the containers belonging to the same Overlay network for them to be discoverable.

For the moment the most used key-value store is the one provided by Consul, but it is Linux-based only, so you won't be able to run it on Windows:
docker run -p 8500:8500 -d consul --name consul
Unable to find image 'consul:latest' locally
latest: Pulling from library/consul
C:\Program Files\Docker\docker.exe: image operating system "linux" cannot be used on this platform.
See 'C:\Program Files\Docker\docker.exe run --help'.
All the same, Overlay networking is soon going to be available for Docker containers on Windows. The first step has been taken. Now it is up to Microsoft to make the next move. Stay tuned for more on the subject.

Wednesday, November 16, 2016

Building a Docker container for the Image2Docker tool

I have been playing a bit with Image2Docker with the intention to see how far I could go into containerizing existing workloads. To date, this PowerShell-based module by fellow MVP and Docker Captain Trevor Sullivan mounts a vhdx or wim Windows Image and tries to discover running artifacts, such as IIS, SQL or Apache, and generates a Dockerfile for a container hosting these services.

Now this is still experimental, and the list of recognized artifacts is still short, but I couldn't keep myself from trying to build a Docker container for the job.

Here's how I tackled this, knowing that, like most of us, I am taking my first steps with this new feature of Windows 2016.

First of all I built the following Dockerfile in Visual Studio Code:

Basically I am issuing five statements:
  1. pull the microsoft/nanoserver image. Actually I could have used the microsoft/windowsservercore image as well but that would have taken longer
  2. state that I am the maintainer of the repository
  3. install the package manager called Nuget
  4. install the actual Image2Docker module (version 1.5 at the time of writing)
  5. set the ConvertTo-Dockerfile cmdlet as entry point for this container, so that I can pass the .vhdx or .wim image path straight into this dedicated container on execution
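Since the Dockerfile itself was shown as a screenshot, here is a sketch of what those five statements could look like; the exact install commands, the ENTRYPOINT arguments and the maintainer name are assumptions on my part:

```dockerfile
# Hypothetical reconstruction of the five statements described above
FROM microsoft/nanoserver
MAINTAINER happysysadm
SHELL ["powershell", "-Command"]
RUN Install-PackageProvider -Name NuGet -Force
RUN Install-Module -Name Image2Docker -Force
ENTRYPOINT ["powershell", "-Command", "ConvertTo-Dockerfile", "-ImagePath"]
```

With an entry point like this, the image path you pass to docker run is appended to the command, which is what makes the one-liner at the end of this post possible.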
Then the next steps to publish to the Docker Hub are:

docker build .  -t happysysadm/image2docker:latest -t happysysadm/image2docker:v0.1

In the step above I am issuing the build command from the folder containing the Dockerfile file, and I am setting two tags for the same image: latest and v0.1.

Then I logged in to the Hub:

docker login -u user -p password

And pushed the container into my public registry:

docker push happysysadm/image2docker

At this moment this repo becomes visible on the web:

Once I got my container up in the Hub, I cleaned up my local image:

docker rmi -f happysysadm/image2docker:v0.1

and pulled it again:

docker pull happysysadm/image2docker

Every time I went through an update of my Dockerfile, I had to rebuild and increment the version tag:

docker build .  -t happysysadm/image2docker:latest -t happysysadm/image2docker:v0.2

In the step above, the latest tag moves to v0.2 and the previous image retains only the v0.1 tag.

Now this container is public and you can just do:

docker run happysysadm/image2docker sample.vhdx

and get the Dockerfile for your Windows image created for you. Let me know how it goes, and remember that this project is open source, so everybody's contributions are welcome.

Monday, November 14, 2016

Step up container management with PowerShell for Docker

I remember that one of the first reasons I started using Windows PowerShell is that it uses objects to represent data, which is great when you are interacting with an object-oriented Windows ecosystem. Now that some historical borders have been crossed between Linux and Windows, and that preexisting tools have been ported to Microsoft's OS, we, as PowerShell guys, could face a bit of a throwback in the way we use the shell.

Just have a look at Docker.

Invented in 2013 by a French guy named Solomon Hykes, this open source project aimed at automating the deployment of Linux containers has been quickly adopted by Microsoft and can today be run on both Windows 10 and Windows 2016.

The main drawback of adopting such a tool is that it comes with a command line which looks obsolete in PowerShell terms: it only produces strings, which are hardly reusable unless you feed them to ConvertFrom-String:

docker images | ConvertFrom-String -Delimiter "\s{2,}" | Format-Table

P1                          P2     P3           P4          P5
--                          --     --           --          --
REPOSITORY                  TAG    IMAGE ID     CREATED     SIZE
microsoft/iis               latest 211fecef1e6b 5 days ago  9.48 GB
microsoft/sample-dotnet     latest c14528829a37 2 weeks ago 911 MB
microsoft/windowsservercore latest 93a9c37b36d0 7 weeks ago 8.68 GB
microsoft/nanoserver        latest e14bc0ecea12 7 weeks ago 810 MB

Now, though ConvertFrom-String is an extremely powerful cmdlet released with PowerShell 5.0 (check my blog post on the subject), it takes some time to feel at ease with its syntax. In the previous example, for instance, I am outputting the list of the images I have pulled from the Docker Hub onto my system. The text that comes through the pipeline once I run 'docker images' has to be split wherever there are at least two consecutive spaces. To achieve that I have to use the Delimiter parameter and match a whitespace \s at least two times {2,}.

Needless to say, knowing regular expressions becomes a must.
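The payoff is that the result is made of real objects whose properties you can reuse downstream (a quick sketch; the auto-generated property names P1..P5 depend on the number of columns in the output):

```powershell
# Convert the string output of 'docker images' to objects, drop the header row,
# then reuse the repository (P1) and image ID (P3) properties
$images = docker images | ConvertFrom-String -Delimiter "\s{2,}" | Select-Object -Skip 1
$images | ForEach-Object { "$($_.P1) -> $($_.P3)" }
```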

Happily enough, we have an alternative to this. Since Docker comes with a nice API, there is an open source project for a module exposing PowerShell cmdlets to manage Docker images, containers and networks. Though still in development, I heartily suggest you start using it to maintain consistency with your existing environment.

You can find it here:

The installation is straightforward.

Register-PSRepository -Name DockerPS-Dev -SourceLocation https://ci.appveyor.com/nuget/docker-powershell-dev

Install-Module Docker -Repository DockerPS-Dev -Scope CurrentUser

Here's the list of cmdlets that come with it:

Get-Command -Module Docker -CommandType Cmdlet

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Add-ContainerImageTag                      Docker
Cmdlet          ConvertTo-ContainerImage                   Docker
Cmdlet          Copy-ContainerFile                         Docker
Cmdlet          Enter-ContainerSession                     Docker
Cmdlet          Export-ContainerImage                      Docker
Cmdlet          Get-Container                              Docker
Cmdlet          Get-ContainerDetail                        Docker
Cmdlet          Get-ContainerImage                         Docker
Cmdlet          Get-ContainerNet                           Docker
Cmdlet          Get-ContainerNetDetail                     Docker
Cmdlet          Import-ContainerImage                      Docker
Cmdlet          Invoke-ContainerImage                      Docker
Cmdlet          New-Container                              Docker
Cmdlet          New-ContainerImage                         Docker
Cmdlet          New-ContainerNet                           Docker
Cmdlet          Remove-Container                           Docker
Cmdlet          Remove-ContainerImage                      Docker
Cmdlet          Remove-ContainerNet                        Docker
Cmdlet          Request-ContainerImage                     Docker
Cmdlet          Start-Container                            Docker
Cmdlet          Start-ContainerProcess                     Docker
Cmdlet          Stop-Container                             Docker
Cmdlet          Submit-ContainerImage                      Docker
Cmdlet          Wait-Container                             Docker

This module also exposes a bunch of aliases, though I don't recommend using them since they seem confusing to me and don't add anything in terms of command-line agility:

Get-Command -Module Docker -CommandType Alias | Format-Table Name,ResolvedCommandName

Name                 ResolvedCommandName
----                 -------------------
Attach-Container     Enter-ContainerSession
Build-ContainerImage New-ContainerImage
Commit-Container     ConvertTo-ContainerImage
Exec-Container       Start-ContainerProcess
Load-ContainerImage  Import-ContainerImage
Pull-ContainerImage  Request-ContainerImage
Push-ContainerImage  Submit-ContainerImage
Run-ContainerImage   Invoke-ContainerImage
Save-ContainerImage  Export-ContainerImage
Tag-ContainerImage   Add-ContainerImageTag

So, docker images becomes:

Get-ContainerImage

RepoTags                              ID                   Created                Size(MB)
--------                              --                   -------                --------
microsoft/sample-dotnet:latest        sha256:c14528829a... 25/10/2016 13:55:28    869,05
microsoft/windowsservercore:latest    sha256:93a9c37b36... 22/09/2016 10:51:07    8 273,19
microsoft/nanoserver:latest           sha256:e14bc0ecea... 22/09/2016 09:39:30    772,81

and the returned object is a heavily reusable ImagesListResponse object:

Get-ContainerImage | Get-Member

   TypeName: Docker.DotNet.Models.ImagesListResponse

Name        MemberType Definition
----        ---------- ----------
Equals      Method     bool Equals(System.Object obj)
GetHashCode Method     int GetHashCode()
GetType     Method     type GetType()
ToString    Method     string ToString()
Created     Property   datetime Created {get;set;}
ID          Property   string ID {get;set;}
Labels      Property   System.Collections.Generic.IDictionary[string,string] Labels...
ParentID    Property   string ParentID {get;set;}
RepoDigests Property   System.Collections.Generic.IList[string] RepoDigests {get;set;}
RepoTags    Property   System.Collections.Generic.IList[string] RepoTags {get;set;}
Size        Property   long Size {get;set;}
VirtualSize Property   long VirtualSize {get;set;}
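Since Get-ContainerImage returns real objects (note the Size and Created properties above), you can reuse them with the standard cmdlets. Here is a minimal sketch of the kind of filtering plain strings could never give you:

```powershell
# Sketch: list pulled images bigger than 1 GB, newest first
# (Size is a long expressed in bytes, per the Get-Member output above)
Get-ContainerImage |
    Where-Object { $_.Size -gt 1GB } |
    Sort-Object Created -Descending |
    Format-Table RepoTags, Created, @{n='Size(GB)';e={[math]::Round($_.Size/1GB, 2)}}
```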

Same model for the list of existing containers:

Get-Container

ID                   Image           Command              Created                Status
--                   -----           -------              -------                ------
43a05b618697033eb... microsoft/na... c:\windows\system... 14/11/2016 09:44:19    Exited...
005b51dbe002324f8... microsoft/na... --name nanoserver1   14/11/2016 09:44:04    Created
e8b31c61d5f42b271... microsoft/na... --name nanoserver1   14/11/2016 09:42:12    Created
547b7dbd3b1473127... microsoft/sa... dotnet dotnetbot.dll 06/11/2016 16:11:07    Exited...

Get-Container | Get-Member

   TypeName: Docker.DotNet.Models.ContainerListResponse

Name            MemberType Definition
----            ---------- ----------
Equals          Method     bool Equals(System.Object obj)
GetHashCode     Method     int GetHashCode()
GetType         Method     type GetType()
ToString        Method     string ToString()
Command         Property   string Command {get;set;}
Created         Property   datetime Created {get;set;}
ID              Property   string ID {get;set;}
Image           Property   string Image {get;set;}
ImageID         Property   string ImageID {get;set;}
Labels          Property   System.Collections.Generic.IDictionary[string,string] Labels...
Mounts          Property   System.Collections.Generic.IList[Docker.DotNet.Models.MountP...
Names           Property   System.Collections.Generic.IList[string] Names {get;set;}
NetworkSettings Property   Docker.DotNet.Models.SummaryNetworkSettings NetworkSettings...
Ports           Property   System.Collections.Generic.IList[Docker.DotNet.Models.Port]...
SizeRootFs      Property   long SizeRootFs {get;set;}
SizeRw          Property   long SizeRw {get;set;}
State           Property   string State {get;set;}
Status          Property   string Status {get;set;}

Now that you have this module, you have two ways to run a container. Either by using:

docker run -it microsoft/nanoserver powershell

or by using Invoke-ContainerImage (aliased as Run-ContainerImage):

Invoke-ContainerImage -ImageIdOrName microsoft/nanoserver:latest -Command powershell -Input -Terminal

which, at its best, can be shortened to:

Run-ContainerImage microsoft/nanoserver:latest powershell -In -T

None of the PowerShell syntaxes is as short as the 'legacy' one, but again, the produced object is what makes them worth using.

I hope you have enjoyed this first post on the PowerShell module for the Docker Engine, which brings close integration between two worlds that, not so long ago, were quite distant. Stay tuned for more.

Friday, November 4, 2016

Announcing the winner of the PowerShell Oneliner Contest 2016

I am excited to announce the winner of the second PowerShell Oneliner Contest. But before I do it, let me tell you one thing. This year, I received over ninety submissions from wannabe PowerShell Monks from all over the world. Some solutions stood out as the most striking and imaginative entries. Some others were not successful in achieving what I asked, but showed a lot of effort in learning and initiative. Everybody seemed to understand that the aim of such a contest is not just to push PowerShell to its limit and beyond, by bending the command line to your will. It's a matter of generating knowledge and sharing it for others to learn from. Building code that can benefit the whole community is of paramount importance here.


So thanks to all of the entrants and, without further ado, let's have a look at the winning solution, by Sam Seitz, with 65 chars:

([char[]](71..89)|?{!(gdr $_)2>0}|sort{[guid]::newguid()})[0]+':'
#Posted by Sam Seitz to Happy SysAdm at October 25, 2016 at 8:02 AM

I got in touch with Sam so he could share a bit about himself and his thought process for the script.

Two years into his IT career, Sam is a 25-year old systems engineer for Network Technologies, Inc., an MSP in Olathe, KS. He spends his days perpetually amazed that his employer pays him to "play with computers" (as his father would say). Outside of work, his incredible wife and their pair of regal beagles keep him happier than a man has any right to be.

Seeing this challenge made me realize two things: 1) off the top of my head I know, maybe, five default aliases and 2) I should really start using more aliases. I'm normally extremely verbose in my scripting, so this proved to be a unique challenge. To keep it as short as possible, I used a few interesting techniques, which I'll break down step-by-step:
[char[]](71..89)
To generate the array of letters from G to Y, I took advantage of the fact that 71 through 89 is G through Y in the ASCII table. When cast as a [char], 71 is G, 72 is H, etc.
?{!(gdr $_)2>0}
I then filtered out occupied drive letters by using Where-Object (?), the -not operator (!), and Get-PSDrive (gdr). 2>0 redirects the inevitable error output away from the console. (If you don't mind seeing each error Get-PSDrive throws when it's used with a non-existent drive, the 2>0 could be removed, as it isn't necessary for the success of the one-liner. But who likes all those ugly red errors on their screen? Terrorists, that's who.)
In order to ensure the result was random, I used Sort-Object (sort) on the array of letters and told it to sort by the new GUID created using the .Net method [guid]::newguid().
Finally, I selected the first result [0] from the array of randomly sorted available drive letters in the output and threw a colon on the end (+':').
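For readers who prefer it spelled out, Sam's golfed steps can be restated in readable (and much longer) form; the variable names below are mine, not his:

```powershell
# Non-golfed restatement of the winning one-liner
$letters = [char[]](71..89)                                    # ASCII 71..89 -> 'G'..'Y'
$free = $letters | Where-Object {
    -not (Get-PSDrive $_ -ErrorAction SilentlyContinue)        # keep only unused letters
}
$shuffled = $free | Sort-Object { [guid]::NewGuid() }          # random order via fresh GUIDs
"$($shuffled | Select-Object -First 1):"                       # e.g. 'K:'
```

The only difference from the golfed version is that errors are suppressed with -ErrorAction instead of the 2>0 redirection trick.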

Thanks to Sam for sharing his deep knowledge of PowerShell with us. For those interested, I created a Gist with a list of working solutions I got, sorted by line length.


Now in the following section I will explain why I could not accept some entries.

The most common error by far was the use of Random as an alias for Get-Random. Though I understand the extreme difficulty of generating a random number without using Get-Random, I couldn't accept oneliners using Random as an alias, for the simple reason that it is not one. It only works like an alias because the PowerShell interpreter, when it can't find a match for a command name, retries with the verb 'Get-' prepended.

Get-Alias | ? {$_.Definition -match "Get-Random"}

Trace-Command can confirm that interpreter behavior:

Trace-Command -Name CommandDiscovery -PSHost -Expression { random }

DEBUG: CommandDiscovery Information: 0 : The command [random] was not found, trying again with get- prepended
DEBUG: CommandDiscovery Information: 0 : Looking up command: get-random
DEBUG: CommandDiscovery Information: 0 : Cmdlet found: Get-Random  Microsoft.PowerShell.Commands.GetRandomCommand


Now a word about my solutions to the contest. I wrote four of them. Since they are pretty short, I am pleased to share them with you.

In the first solution I was actually able to get a random GUID to sort on by fetching it from the internet.

There are for sure many websites exposing a GUID generation engine (https://www.uuidgenerator.net/ or https://www.guidgen.com/ for instance), but in our case we want the shortest URL possible, and I was lucky enough to find a website named guid.it. There's a funky cmdlet for getting stuff from the web: Invoke-RestMethod, whose alias is irm. Now, the cool thing about the Internet is that nowadays many websites have adopted a JSON API to let consumers retrieve and manipulate their content using HTTP requests. And guid.it is one of them. Luck, again. So, the equivalent of:

Invoke-WebRequest http://www.guid.it/json -UseBasicParsing | ConvertFrom-Json

can be achieved in a simpler manner with:

Invoke-RestMethod http://www.guid.it/json

which can be shortened to:

irm guid.it/json

or even shorter:

irm guid.it/api

And there I have my random guid. Internet for the IT pro, I daresay.

For the rest, my first solution matches Sam's:
([char[]](71..89)|?{!(gdr $_)2>0}|sort{irm guid.it/api})[0]+':'

In my second solution, I leveraged the .NET Framework's System.Random class, but instead of using [random]::new().Next() I went for the shorter ([random]@{}).Next().
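In case you wonder why that cast works at all: converting a hashtable to a .NET type invokes the type's default constructor and then sets the hashtable keys as properties, so an empty hashtable simply yields a default-constructed instance. A quick check:

```powershell
# Casting an empty hashtable to [random] runs the default constructor,
# which is shorter to type than [random]::new()
([random]@{}).GetType().FullName   # System.Random
([random]@{}).Next(1, 100)         # a random integer between 1 and 99
```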

My third solution relies on one of my favorite cmdlets, Select-String (aliased as sls), in conjunction with the [guid] type accelerator, whose NewGuid static method I call to sort randomly:
''+(ls function:[g-y]:|sls(gdr)-n|sort{[guid]::NewGuid()})[0]

In my fourth and last solution I mixed my third and first solutions, and was able to go down to 59 chars. It's a bit slower than the others because it fetches GUIDs from the Internet, but for the purpose of the contest this is the shortest solution I was able to come up with:
''+(ls function:[g-y]:|sls(gdr)-n|sort{irm guid.it/api})[0]

I have created a Gist with my solutions, which you can find here.


Now a word about how the testing of the posted oneliners went.

Since I was rapidly flooded with plenty of tricky oneliners, I was a bit scared of having to check all of them manually for compliance with the contest rules. Fortunately I had already worked a bit with Pester to define test cases on other projects, so I just had to adapt what I knew to the contest I had just started.

Then, and it was sheer luck, I was contacted by Jakub, who proposed a complete solution for testing those oneliners.

I am glad to say that what Jakub came up with is just brilliant. So, who better than him to explain his approach. Take it away Jakub.

I always loved one liners. "Make the code as short as possible" is such a simple, yet so challenging restriction. Such a restriction does not exist in our day-to-day work: we care about readability, understandability and performance, but rarely about the length of our code. Putting this restriction in place, and removing any other, turns our usual focus on its head. For once we get to write code that is so unreadable, and uses so many quirks of the language, that we will need to explain it at least twice (if not three times). Finally we can put all the side notes we read in books to work, use all the features that we thought were bugs when we first saw them to force the language syntax to its limits, and then watch in awe as others produce solutions twice as short as ours.

For this reason I had to take part in the Oneliner contest of 2016 hosted by Carlo on his blog. Once I read the requirements I thought to myself: well, that's more than one requirement, what a nice opportunity to take this to another level and write some tests as well. And so I approached the whole problem in a kata-like way, which means not only taking my time to think about the problem itself, but also taking time to reason about the tests and the process of writing them. Now, since I knew I had no way of winning the contest, especially after seeing how creative people were last year, I will at least walk you through my thought process.

First I read the requirements just to make sure they were quantifiable; what I mean by that is that I can measure whether the requirement was met. A quantifiable requirement is, for example, "contains no semi-colon"; a non-quantifiable requirement (at least not easily) would be "the code looks nice".

Once I made sure I would be able to write tests for all of the requirements, I proceeded to categorize them and realized that they can be split into two categories: stylistic and functional, where stylistic is how the code should look and functional is how the code should behave.

I started with the functional part of the tests as they seemed much simpler to implement.

### Test 1 - Outputs single string
The first decision I made, was to store my one liner as a script block. This enabled me to reuse the same script block in all the test cases, and it also enabled me to change my one liner very easily.

The first test checks that the output of the oneliner is a single string. Pester has a built-in assertion `BeOfType`, which was my first choice, but then I realized that piping the output through the pipeline would expand any array I might get, and I wouldn't be able to check whether I got a single item or a whole array of items. So I went oldschool and used the `-is` operator.

It "outputs single string" {
    (&$oneliner) -is [string] | Should Be $True
}

### Test 2 - Outputs one letter followed by colon
The next requirement forces me to match the text and specifies that it should be a letter followed by a colon. Text matching is easy with the `Match` assertion, which uses regular expressions. The only thing I had to watch out for was matching the start and end of the string, to make sure that no surrounding characters are matched.

It "Outputs single letter followed by colon" {
    &$oneliner | Should Match "^[a-z]\:$"
}

I decided to match the whole alphabet in this test to limit mixing the requirements. I find it a good practice to specify requirements in one place without unnecessarily restricting other unrelated tests.

### Test 3 - Should exclude drives A-F and Z
Yet another requirement forces me to exclude some of the drive letters. I decided to use test cases, to have a single test for each excluded letter, and specified a list of test cases. This feature of Pester generates a single test per test case and also modifies the name of the test to reflect the actual value of `$DriveLetter`, for extra readability. The scriptblock then takes a parameter I named `$DriveLetter`, which I use to write the assertion.

It "Should not output drive letter <DriveLetter>" -TestCases `
    @{DriveLetter = "a:"},
    @{DriveLetter = "b:"},
    @{DriveLetter = "c:"},
    @{DriveLetter = "d:"},
    @{DriveLetter = "e:"},
    @{DriveLetter = "f:"},
    @{DriveLetter = "z:"} {
    param ($DriveLetter)
    &$oneliner | Should Not Be $DriveLetter
}

### Test 4 - Drive should not be used
This test could not be easier. I used the `Exist` assertion, which I know uses `Test-Path` internally. Nothing else was needed here.

It "Resulting drive should not exist" {
    &$oneLiner | Should Not Exist
}

### Test 5 - Drive should be random
This test I found interesting, because randomness is something to avoid in tests as much as possible. Randomness can make a test fail from time to time, and such unexpected failures lower the trust we have in tests. But in this case I'll be using the tests locally, so I decided to take the simplest route: run the code twice and compare the results. If the results are not the same, the output is probably "random". This is far from perfect, but in this simple case I can validate it by running the test multiple times. In a real production environment I'd run the code more than twice and compare the results.

It "Should be random" {
    &$oneLiner | Should Not Be (&$oneLiner)
}

Another interesting thing about this test is that I did not notice the randomness requirement at first and posted my solution without it, which automatically makes my solution incorrect :)

### Test 6 - Code should be error free
This test seemed straightforward, because any terminating error (exception) in a Pester test makes the test fail. The difficult part was capturing non-terminating errors as well. I had to set the error action preference to `Stop` and also pipe to `Not Throw` to make the test behave correctly. That's something to be improved in the next version of Pester.

It "Should be error-free" {
    $errorActionPreference = 'Stop'
    $oneLiner | Should Not Throw
}

That was it for the functional tests. All of them were pretty easy to write, and there was not much to figure out. Next up were the stylistic tests, which were a bit more challenging, as I first needed to write some helper functions to avoid any ifs and for loops in the body of my tests.

### Test 7 - All cmdlets must have an alias
This was the most challenging test to write. There were two things I needed to figure out. First, I needed a way to parse the code and find all the commands; for that I knew I could use the PowerShell tokenizer, but I had to write and test the code to find all the commands. The other thing was checking that all the found commands have aliases. I started with the tests for the parsing and then implemented the function:

Describe "Get-ScriptBlockCommand" {
    It "Finds basic cmdlet" {
        Get-ScriptBlockCommand { Get-Date } | Should Be "Get-Date"
    }
    It "Finds basic alias" {
        Get-ScriptBlockCommand { gci } | Should Be "gci"
    }
    It "Finds multiple commands" {
        $actual = Get-ScriptBlockCommand { ps; get-process }
        $actual[0] | Should Be 'ps'
        $actual[1] | Should Be 'get-process'
    }
    It "Ignores keywords" {
        Get-ScriptBlockCommand { if ($true) {} } | Should BeNullOrEmpty
    }
    It "Ignores other tokens" {
        Get-ScriptBlockCommand { $a = 10 ; $false } | Should BeNullOrEmpty
    }
}

function Get-ScriptBlockCommand ($ScriptBlock) {
    $tokens = [System.Management.Automation.PSParser]::Tokenize($ScriptBlock, [ref]$null)
    $tokens | where { $_.Type -eq 'Command' } | select -expand Content
}

Then I followed with looking up aliases and testing that every command has at least one:

Describe "Test-Alias" {
    It "Finds alias for basic cmdlet" {
        Test-Alias Get-ChildItem | Should Be $True
        Test-Alias Test-Path | Should Be $False
    }

    It "Finds alias when given an alias" {
        Test-Alias gci | Should Be $True
        Test-Alias ps | Should Be $True
    }

    It "Returns true when all commands have aliases" {
        Test-Alias ("gci", "ps", "get-childItem") | Should Be $True
    }

    It "Returns false when any of the commands does not have an alias" {
        Test-Alias ("Test-path", "ps", "get-childItem") | Should Be $false
    }
}

function Test-Alias ([string[]] $Name) {
    end {
        $aliases = Get-Alias
        foreach ($n in $Name) {
            if ($null -eq ($aliases | Where {$_.Name -eq $n -or $_.Definition -eq $n})) {
                return $false
            }
        }
        return $true
    }
}
Then I could finally proceed to writing the main test:

It "All used cmdlets have an alias" {
    $commands = Get-ScriptBlockCommand $oneliner
    Test-Alias $commands | Should Be $True
}

### Test 8 - Code must not contain semicolon
And finally I finished with another primitive test, checking that a semicolon is nowhere to be found in my oneliner. The one liner is not executed this time; rather, we implicitly convert it to a string and pass it to the `Match` assertion.

It "contains no semicolons" {
    $oneliner | Should Not Match "\;"
}

And that was it for my testing. I hope you enjoyed the competition, and congratulations to the winners!

Thanks again to all the competitors, to Mike F Robbins for the original function, to Sam Seitz for his brilliant solution and to Jakub Jares for showing us the way to functional testing. And remember, it was all about learning.

Monday, October 24, 2016

PowerShell Oneliner Contest 2016

A lot of time has passed since I last organized a PowerShell oneliner contest. So when I saw the post by fellow MVP and scripting champion Mike F Robbins on a PowerShell Function to Determine Available Drive Letters, I thought that it could be fun to organize a contest to see who can write the shortest possible oneliner that achieves the same result as Mike's function.

As you can see reading his blog post, the function accepts parameters such as -Random, to return one or more available drive letters at random, or -All, to return all the available drive letters. It also allows you to exclude some letters from the match (A, B, C, D, E, F and Z) by means of an -ExcludeDriveLetter parameter.

Now, for this specific contest, what I want to get in a comment to this post is:
  • a oneliner (meaning in particular no semi-colon) that
  • returns one and only one random available drive letter on the system where it runs
  • with the exception of A-F and Z
  • whose object type is a System.String (I'll check this with Get-Member)
  • and whose formatting is, say, G: or h: (case doesn't matter, we are on Windows)
For sure
  • aliases are mandatory, meaning that you can't use a cmdlet unless it has an alias
  • backticks are accepted for readability
  • you can use every PowerShell version, including 5.1, just state in the comment what version you tested it with
  • should you find a shorter oneliner to solve a task you are allowed to post additional comments (just remember to sign your comments so that I know who's who and so that I can get in touch with the winner)
A few more rules:
  • Entries (comments) will not be made public until after the submission deadline.
  • The first person to produce the shortest working solution to the task wins.
  • The winner will be announced on Friday, November 4th on this blog.
  • I'll be the only judge.
If you want to spread the word about this PowerShell contest, feel free to tweet about it. You can use the hashtags #poshcontest2016 and #powershell so that other competitors can share their thoughts (not the solutions of course!).

UPDATE Nov 4 2016
We have a winner! Check it here.

Thursday, October 20, 2016

How to query the Docker repo and find out the latest Master Build with PowerShell

PowerShell can be used to interact with the web, and I have therefore decided to use it to stay tuned to the Docker project, which I am currently very interested in (check my previous series on Docker on Windows 2016). Docker is an open source project, and when you say open source nowadays you have to think Github, which is basically a hosting service for open source software projects, with features for version control, issue tracking, commit history and all the rest.

Now Github has a REST API that can be consumed in PowerShell through the Invoke-RestMethod cmdlet (aliased as irm in your oneliners). The JSON-formatted answer is converted by Invoke-RestMethod to a custom object.

Under Github, repository URLs are in the form api.github.com/repos/user/repo, so for the Docker project we have to query api.github.com/repos/Docker/Docker.

A simple request against this URL will return all the basic information on the project:

Selecting the properties I could be interested in is easily achieved:
irm api.github.com/repos/Docker/Docker | ft forks,open_issues,watchers

forks open_issues watchers
----- ----------- --------
10626        1929    36095

Once we know that this API can be consumed with PowerShell, we could very well think of retrieving all the published releases. The URL syntax is found in the result of the previous query:

Here's how I can get the first release returned by the API and see how the information is structured:
(irm api.github.com/repos/Docker/Docker/releases)[0]

A lot of information here, including a body containing the full description of the changes that come with the currently listed version.

As I said, we are only interested in the published releases of Docker, so let's use a sieve and keep just five key properties: the name of the package, its ID, its author, its creation date and its publication date.

A short oneliner will do:

irm api.github.com/repos/Docker/Docker/releases |
    sort id |
    ft name,id,*at,@{n='author';e={$_.author.login}} -auto

Let's apply a couple of best practices: put the resulting object in a variable, for reusability, and add a bit of formatting for the dates:

$u = irm api.github.com/repos/Docker/Docker/releases

$u |
    sort id |
    ft -auto name,id,
        @{n='author';e={$_.author.login}},
        @{n='creationdate';e={get-date $_.created_at}},
        @{n='publicationdate';e={get-date $_.published_at}}

Basically, with
Get-Date '2016-10-11T23:35:27Z'
I am using Get-Date to convert the timestamps, which are expressed in UTC, to my local time zone. It's the Z at the end of the date (a special UTC designator) that tells me the timestamp is expressed in Coordinated Universal Time.

These two lines of code above return:
name             id author     creationdate        publicationdate    
----             -- ------     ------------        ---------------    
v1.10.1-rc1 2590708 tiborvass  10/02/2016 23:12:24 11/02/2016 00:09:31
v1.10.1     2598018 tiborvass  11/02/2016 22:14:44 11/02/2016 22:16:24
v1.10.2-rc1 2652399 tiborvass  20/02/2016 08:00:24 20/02/2016 08:23:08
v1.10.2     2666504 tiborvass  22/02/2016 23:57:57 23/02/2016 00:05:08
v1.10.3-rc1 2777835 tiborvass  09/03/2016 18:11:06 09/03/2016 18:14:08
v1.10.3-rc2 2780060 tiborvass  09/03/2016 22:58:41 09/03/2016 23:02:32
v1.10.3     2788494 tiborvass  10/03/2016 23:01:03 10/03/2016 23:07:27
v1.11.0-rc1 2875983 tiborvass  23/03/2016 21:15:31 23/03/2016 21:20:29
v1.11.0-rc2 2890861 tiborvass  25/03/2016 22:28:59 25/03/2016 22:31:10
v1.11.0-rc3 2937939 tiborvass  02/04/2016 01:59:32 02/04/2016 02:01:26
v1.11.0-rc4 2968912 tiborvass  07/04/2016 04:28:01 07/04/2016 04:56:10
v1.11.0-rc5 2998258 tiborvass  12/04/2016 01:33:25 12/04/2016 01:34:20
v1.11.0     3014278 tiborvass  13/04/2016 21:56:07 14/04/2016 00:10:23
v1.11.1-rc1 3097597 mlaventure 26/04/2016 10:01:02 26/04/2016 10:05:57
v1.11.1     3105125 mlaventure 27/04/2016 03:51:45 27/04/2016 03:59:03
v1.11.2-rc1 3327300 mlaventure 28/05/2016 21:44:50 28/05/2016 23:47:25
v1.11.2     3354503 tiborvass  02/06/2016 02:59:52 02/06/2016 03:08:13
v1.12.0-rc1 3447699 tiborvass  15/06/2016 10:39:54 15/06/2016 10:55:14
v1.12.0-rc2 3471944 tiborvass  17/06/2016 23:39:11 18/06/2016 00:49:34
v1.12.0-rc3 3573896 tiborvass  02/07/2016 05:26:36 02/07/2016 05:30:18
v1.12.0-rc4 3644623 tiborvass  13/07/2016 07:27:26 13/07/2016 07:25:48
v1.12.0-rc5 3744904 tiborvass  26/07/2016 22:48:09 26/07/2016 22:48:18
v1.12.0     3766135 tiborvass  29/07/2016 02:06:45 29/07/2016 02:07:31
v1.12.1-rc1 3879305 tiborvass  13/08/2016 01:25:24 13/08/2016 01:28:06
v1.12.1-rc2 3909470 tiborvass  17/08/2016 19:50:45 17/08/2016 19:53:00
v1.12.1     3919520 tiborvass  18/08/2016 20:14:05 18/08/2016 20:19:55
v1.12.2-rc1 4246481 vieux      27/09/2016 22:37:47 28/09/2016 02:05:20
v1.12.2-rc2 4304701 vieux      04/10/2016 07:37:23 05/10/2016 01:41:11
v1.12.2-rc3 4336430 vieux      06/10/2016 23:27:01 07/10/2016 21:15:16
v1.12.2     4364345 vieux      11/10/2016 07:23:52 12/10/2016 01:35:27

I am immediately surprised to see that the latest (most recent) release of Docker is 1.12.2. I have been playing with Docker under Windows 2016 enough to know that there is a 1.13 version under development. So why can't I see it here?

Well, the answer is simple. Github doesn't show you the Master Build of Docker. For those who are encountering problems with Docker on Windows 2016, and for those who love to always have the latest version no matter what, master.dockerproject.org is the place to look:

Unfortunately there's no REST API for this site, and since it returns a table in old-style HTML code, Invoke-RestMethod is of no use here.

Happily enough, there's a nice fallback solution: using Invoke-WebRequest in conjunction with a nice script developed by Lee Holmes that does the job of extracting tables from web pages.

Save the code in a file named get-webrequesttable.ps1 so that you can reuse it, and feed it with the output of Invoke-WebRequest:
$uri = 'https://master.dockerproject.org/'

$r = iwr $uri

$o = .\get-webrequesttable.ps1 $r -TableNumber 0

$o | Get-Member

   TypeName: System.Management.Automation.PSCustomObject

Name          MemberType   Definition                                   
----          ----------   ----------                                   
Equals        Method       bool Equals(System.Object obj)               
GetHashCode   Method       int GetHashCode()                            
GetType       Method       type GetType()                               
ToString      Method       string ToString()                            
Name          NoteProperty string Name=commit                           
Size          NoteProperty string Size=40 B                             
Uploaded Date NoteProperty string Uploaded Date=2016-10-20T07:12:53.000Z

Lee's script has found three columns and built an object with three properties: the name, the size and the date of the upload.

With Format-Table and a calculated property we can produce readable output:
$o | ft name,size,@{n='uploadeddate';e={get-date $_.'uploaded date'}} -auto

Name                                             Size     uploadeddate       
----                                             ----     ------------       
commit                                           40 B     20/10/2016 09:12:53
darwin/amd64/docker                              10.35 MB 20/10/2016 09:12:53
darwin/amd64/docker-1.11.0-dev                   10.44 MB 14/04/2016 22:14:23
darwin/amd64/docker-1.11.0-dev.md5               52 B     14/04/2016 22:14:24
darwin/amd64/docker-1.11.0-dev.sha256            84 B     14/04/2016 22:14:24
darwin/amd64/docker-1.11.0-dev.tgz               3.176 MB 14/04/2016 22:14:45
darwin/amd64/docker-1.11.0-dev.tgz.md5           56 B     14/04/2016 22:14:46
darwin/amd64/docker-1.11.0-dev.tgz.sha256        88 B     14/04/2016 22:14:46
darwin/amd64/docker-1.12.0-dev                   13.77 MB 29/07/2016 19:01:15
darwin/amd64/docker-1.12.0-dev.md5               52 B     29/07/2016 19:01:15
Since I am interested just in the versions of Docker for Windows, I can add a bit of filtering:
$o |? name -Match windows | ft name,size,@{n='uploadeddate';e={get-date $_.'uploaded date'}} -auto

Name                                             Size     uploadeddate       
----                                             ----     ------------       
windows/386/docker-1.11.0-dev.exe                9.456 MB 14/04/2016 22:14:38
windows/386/docker-1.11.0-dev.exe.md5            56 B     14/04/2016 22:14:39
windows/386/docker-1.11.0-dev.exe.sha256         88 B     14/04/2016 22:14:39
windows/386/docker-1.11.0-dev.tgz                3.089 MB 01/04/2016 01:28:35
windows/386/docker-1.11.0-dev.tgz.md5            56 B     01/04/2016 01:28:35
windows/386/docker-1.11.0-dev.tgz.sha256         88 B     01/04/2016 01:28:35
windows/386/docker-1.11.0-dev.zip                3.092 MB 14/04/2016 22:14:50
windows/386/docker-1.11.0-dev.zip.md5            56 B     14/04/2016 22:14:51
windows/386/docker-1.11.0-dev.zip.sha256         88 B     14/04/2016 22:14:51
windows/386/docker-1.12.0-dev.exe                12.3 MB  29/07/2016 19:01:36
windows/386/docker-1.12.0-dev.exe.md5            56 B     29/07/2016 19:01:37
windows/386/docker-1.12.0-dev.exe.sha256         88 B     29/07/2016 19:01:37
windows/386/docker-1.12.0-dev.zip                4.041 MB 29/07/2016 19:01:50
windows/386/docker-1.12.0-dev.zip.md5            56 B     29/07/2016 19:01:50
windows/386/docker-1.12.0-dev.zip.sha256         88 B     29/07/2016 19:01:50
windows/386/docker-1.13.0-dev.exe                10.55 MB 20/10/2016 09:13:18
windows/386/docker-1.13.0-dev.exe.md5            56 B     20/10/2016 09:13:19
windows/386/docker-1.13.0-dev.exe.sha256         88 B     20/10/2016 09:13:20
windows/386/docker-1.13.0-dev.zip                3.698 MB 20/10/2016 09:14:14
windows/386/docker-1.13.0-dev.zip.md5            56 B     20/10/2016 09:14:14
windows/386/docker-1.13.0-dev.zip.sha256         88 B     20/10/2016 09:14:14
windows/386/docker.exe                           10.55 MB 20/10/2016 09:13:20
windows/amd64/docker-1.11.0-dev.exe              30.46 MB 14/04/2016 22:14:40
windows/amd64/docker-1.11.0-dev.exe.md5          56 B     14/04/2016 22:14:43
windows/amd64/docker-1.11.0-dev.exe.sha256       88 B     14/04/2016 22:14:43
windows/amd64/docker-1.11.0-dev.tgz              8.567 MB 01/04/2016 01:28:35
windows/amd64/docker-1.11.0-dev.tgz.md5          56 B     01/04/2016 01:28:35
windows/amd64/docker-1.11.0-dev.tgz.sha256       88 B     01/04/2016 01:28:36
windows/amd64/docker-1.11.0-dev.zip              8.591 MB 14/04/2016 22:14:51
windows/amd64/docker-1.11.0-dev.zip.md5          56 B     14/04/2016 22:14:52
windows/amd64/docker-1.11.0-dev.zip.sha256       88 B     14/04/2016 22:14:52
windows/amd64/docker-1.12.0-dev.exe              15.19 MB 29/07/2016 19:01:38
windows/amd64/docker-1.12.0-dev.exe.md5          56 B     29/07/2016 19:01:39
windows/amd64/docker-1.12.0-dev.exe.sha256       88 B     29/07/2016 19:01:39
windows/amd64/docker-1.12.0-dev.zip              16.37 MB 29/07/2016 19:01:51
windows/amd64/docker-1.12.0-dev.zip.md5          56 B     29/07/2016 19:01:51
windows/amd64/docker-1.12.0-dev.zip.sha256       88 B     29/07/2016 19:01:52
windows/amd64/docker-1.13.0-dev.exe              11.67 MB 20/10/2016 09:13:20
windows/amd64/docker-1.13.0-dev.exe.md5          56 B     20/10/2016 09:13:21
windows/amd64/docker-1.13.0-dev.exe.sha256       88 B     20/10/2016 09:13:21
windows/amd64/docker-1.13.0-dev.zip              14.65 MB 20/10/2016 09:14:15
windows/amd64/docker-1.13.0-dev.zip.md5          56 B     20/10/2016 09:14:17
windows/amd64/docker-1.13.0-dev.zip.sha256       88 B     20/10/2016 09:14:17
windows/amd64/docker-proxy-1.12.0-dev.exe        2.936 MB 29/07/2016 19:01:39
windows/amd64/docker-proxy-1.12.0-dev.exe.md5    62 B     29/07/2016 19:01:39
windows/amd64/docker-proxy-1.12.0-dev.exe.sha256 94 B     29/07/2016 19:01:39
windows/amd64/docker-proxy-1.13.0-dev.exe        1.875 MB 20/10/2016 09:13:21
windows/amd64/docker-proxy-1.13.0-dev.exe.md5    62 B     20/10/2016 09:13:22
windows/amd64/docker-proxy-1.13.0-dev.exe.sha256 94 B     20/10/2016 09:13:22
windows/amd64/docker-proxy.exe                   1.875 MB 20/10/2016 09:13:22
windows/amd64/docker.exe                         11.67 MB 20/10/2016 09:13:22
windows/amd64/dockerd-1.12.0-dev.exe             40.28 MB 29/07/2016 19:01:40
windows/amd64/dockerd-1.12.0-dev.exe.md5         57 B     29/07/2016 19:01:42
windows/amd64/dockerd-1.12.0-dev.exe.sha256      89 B     29/07/2016 19:01:42
windows/amd64/dockerd-1.13.0-dev.exe             32.42 MB 20/10/2016 09:13:23
windows/amd64/dockerd-1.13.0-dev.exe.md5         57 B     20/10/2016 09:13:25
windows/amd64/dockerd-1.13.0-dev.exe.sha256      89 B     20/10/2016 09:13:25
windows/amd64/dockerd.exe                        32.42 MB 20/10/2016 09:13:25
Excluding everything that's not a zip archive is achieved with a regex (quoted, so that PowerShell doesn't try to interpret the $ anchor):
$o |? name -Match '^windows.*?zip$' | ft name,size,@{n='uploadeddate';e={get-date $_.'uploaded date'}} -auto

Name                                Size     uploadeddate       
----                                ----     ------------       
windows/386/docker-1.11.0-dev.zip   3.092 MB 14/04/2016 22:14:50
windows/386/docker-1.12.0-dev.zip   4.041 MB 29/07/2016 19:01:50
windows/386/docker-1.13.0-dev.zip   3.698 MB 20/10/2016 09:14:14
windows/amd64/docker-1.11.0-dev.zip 8.591 MB 14/04/2016 22:14:51
windows/amd64/docker-1.12.0-dev.zip 16.37 MB 29/07/2016 19:01:51
windows/amd64/docker-1.13.0-dev.zip 14.65 MB 20/10/2016 09:14:15
So cool. We have the list of the latest master builds for each release. Now, getting only the current master build is just one step away:
($o |? name -Match '^windows.*?zip$')[-1] | ft name,size,@{n='uploadeddate';e={get-date $_.'uploaded date'}} -auto

Name                                Size     uploadeddate       
----                                ----     ------------       
windows/amd64/docker-1.13.0-dev.zip 14.65 MB 20/10/2016 09:14:15
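Note that indexing with [-1] assumes the rows come back sorted by upload date. Sorting explicitly on the parsed date is a safer bet; a small self-contained sketch, with sample rows shaped like the objects Lee's script returns:

```powershell
# Sample rows shaped like the objects returned by get-webrequesttable.ps1
$o = @(
    [pscustomobject]@{ Name='windows/amd64/docker-1.12.0-dev.zip'; 'Uploaded Date'='2016-07-29T17:01:51.000Z' }
    [pscustomobject]@{ Name='windows/amd64/docker-1.13.0-dev.zip'; 'Uploaded Date'='2016-10-20T07:14:15.000Z' }
    [pscustomobject]@{ Name='windows/386/docker-1.13.0-dev.zip';   'Uploaded Date'='2016-10-20T07:14:14.000Z' }
)

# Sort on the actual upload date instead of relying on row order
$latest = $o |
    Where-Object Name -Match '^windows.*?zip$' |
    Sort-Object { Get-Date $_.'Uploaded Date' } |
    Select-Object -Last 1

$latest.Name
```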
Hey, this is newer than the one I currently have, so let me download it:
$l = ($o |? name -Match '^windows.*?zip$')[-1].Name

iwr "$uri$l" -OutFile "$env:TEMP\docker.zip" -UseBasicParsing
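Since the site publishes a .sha256 file next to every archive (you can see them in the listings above), it is worth verifying the download before using it. A sketch using Get-FileHash (PowerShell 4 and later), assuming the .sha256 file starts with the bare hash:

```powershell
# Fetch the published SHA256 checksum that sits next to the archive
$expected = (Invoke-WebRequest "$uri$l.sha256" -UseBasicParsing).Content
# Keep only the hash part in case the file also contains the file name
$expected = ($expected -split '\s+')[0]

# Hash the downloaded archive and compare (-ne is case-insensitive for strings)
$actual = (Get-FileHash "$env:TEMP\docker.zip" -Algorithm SHA256).Hash
if ($actual -ne $expected) { throw 'Checksum mismatch - corrupted download?' }

# All good: extract it (Expand-Archive requires PowerShell 5)
Expand-Archive "$env:TEMP\docker.zip" -DestinationPath "$env:TEMP\docker" -Force
```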
I am off testing it now. If you have any questions on the code or on the aliases I have used, do not hesitate to ask.