Monday, October 24, 2016

PowerShell Oneliner Contest 2016

A lot of time has passed since I last organized a PowerShell oneliner contest. So when I saw the post by fellow MVP and scripting champion Mike F Robbins on a PowerShell Function to Determine Available Drive Letters, I thought it could be fun to organize a contest to see who can write the shortest possible oneliner that achieves the same result as Mike's function.

As you can see by reading his blog post, the function accepts parameters such as -Random, to return one or more available drive letters at random, or -All, to return all the available drive letters. It also allows you to exclude some letters from the match (A, B, C, D, E, F and Z) by means of an -ExcludeDriveLetter parameter.

Now, for this specific contest, what I want to get in a comment to this post is:
  • a oneliner (meaning in particular no semi-colon) that
  • returns one and only one random available drive letter on the system where it runs
  • with the exception of A-F and Z
  • whose object type is a System.String (I'll check this with Get-Member)
  • and whose formatting is, say, G: or h: (case doesn't matter, we are on Windows)
For sure
  • aliases are mandatory, meaning that you can't use a cmdlet unless it has an alias
  • backticks are accepted for readability
  • you can use any PowerShell version, including 5.1; just state in your comment which version you tested it with
  • should you find a shorter oneliner after your first submission, you are allowed to post additional comments (just remember to sign your comments so that I know who's who and so that I can get in touch with the winner)
A few more rules:
  • Entries (comments) will not be made public until after the submission deadline.
  • The first person to produce the shortest working solution to the task wins.
  • The winner will be announced on Friday, November 4th on this blog.
  • I'll be the only judge.
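Just to make the expected output concrete, here is one possible way of producing such a string. It is deliberately readable, certainly not the shortest, and it doesn't even respect the alias rule, so take it as a baseline rather than as an entry (tested on PowerShell 5.0):

# letters G..Y (A-F and Z excluded), minus the ones already assigned, one picked at random
"$([char[]](71..89) | Where-Object { $_ -notin (Get-PSDrive -PSProvider FileSystem).Name } | Get-Random):"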
If you want to spread the word about this PowerShell contest, feel free to tweet about it. You can use the hashtags #poshcontest2016 and #powershell so that other competitors can share their thoughts (not the solutions of course!).

Thursday, October 20, 2016

How to query the Docker repo and find out the latest Master Build with PowerShell

PowerShell can be used to interact with the web, and I have therefore decided to use it to stay up to date with the Docker project, in which I am currently very interested (check my previous series on Docker on Windows 2016). Docker is an open source project, and when you say open source nowadays you have to think GitHub, which is basically a hosting service for open source software projects with features for version control, issue tracking, commit history and all the rest.

Now GitHub has a REST API that can be consumed in PowerShell through the use of the Invoke-RestMethod cmdlet (aliased as irm in your oneliners). The JSON-formatted answer is converted by Invoke-RestMethod to a custom object.

Under GitHub, API URLs are in the form https://api.github.com/repos/:owner/:repo, so for the Docker project we query https://api.github.com/repos/docker/docker.

A simple request against this URL will return all the basic information on the project:
Selecting the properties I am interested in is easily achieved:
irm https://api.github.com/repos/docker/docker | ft forks,open_issues,watchers

forks open_issues watchers
----- ----------- --------
10626        1929    36095
Once we know that this API can be consumed with PowerShell, we could very well think of retrieving all the published releases. The URL to use (https://api.github.com/repos/docker/docker/releases) is found in the releases_url property returned by the previous query.

Here's how I can get the first release returned by the API and see how the information is structured:
A lot of information here, including a body containing the full description of the changes that come with the currently listed version.

As I said, we are only interested in getting the published releases of Docker, so let's use a sieve and keep just a handful of key properties: the name of the package, its ID, its author, its creation date and its publication date.

A short oneliner will do:
irm https://api.github.com/repos/docker/docker/releases |
    sort id |
    ft name,id,*at,@{n='author';e={$_.author.login}} -auto
Let's apply a couple of best practices and put the resulting object in a variable, for reusability, as well as add a bit of formatting for the dates:
$u = irm https://api.github.com/repos/docker/docker/releases

$u |
    sort id |
    ft -auto name,id,
        @{n='author';e={$_.author.login}},
        @{n='creationdate';e={get-date $_.created_at}},
        @{n='publicationdate';e={get-date $_.published_at}}
Basically, with
Get-Date '2016-10-11T23:35:27Z'
I am using Get-Date to convert the timestamps, which are expressed in UTC, to my local time zone. It's the Z at the end of the date (a special UTC designator) that tells me the timestamp is expressed in Coordinated Universal Time.
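If you want to double-check the conversion on your machine, you can inspect the Kind of the resulting date and convert it back to UTC:

$d = Get-Date '2016-10-11T23:35:27Z'
$d.Kind                # Local: the value has been converted to your time zone
$d.ToUniversalTime()   # back to the original UTC timestamp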

Going back to our releases, the code above returns:
name             id author     creationdate        publicationdate    
----             -- ------     ------------        ---------------    
v1.10.1-rc1 2590708 tiborvass  10/02/2016 23:12:24 11/02/2016 00:09:31
v1.10.1     2598018 tiborvass  11/02/2016 22:14:44 11/02/2016 22:16:24
v1.10.2-rc1 2652399 tiborvass  20/02/2016 08:00:24 20/02/2016 08:23:08
v1.10.2     2666504 tiborvass  22/02/2016 23:57:57 23/02/2016 00:05:08
v1.10.3-rc1 2777835 tiborvass  09/03/2016 18:11:06 09/03/2016 18:14:08
v1.10.3-rc2 2780060 tiborvass  09/03/2016 22:58:41 09/03/2016 23:02:32
v1.10.3     2788494 tiborvass  10/03/2016 23:01:03 10/03/2016 23:07:27
v1.11.0-rc1 2875983 tiborvass  23/03/2016 21:15:31 23/03/2016 21:20:29
v1.11.0-rc2 2890861 tiborvass  25/03/2016 22:28:59 25/03/2016 22:31:10
v1.11.0-rc3 2937939 tiborvass  02/04/2016 01:59:32 02/04/2016 02:01:26
v1.11.0-rc4 2968912 tiborvass  07/04/2016 04:28:01 07/04/2016 04:56:10
v1.11.0-rc5 2998258 tiborvass  12/04/2016 01:33:25 12/04/2016 01:34:20
v1.11.0     3014278 tiborvass  13/04/2016 21:56:07 14/04/2016 00:10:23
v1.11.1-rc1 3097597 mlaventure 26/04/2016 10:01:02 26/04/2016 10:05:57
v1.11.1     3105125 mlaventure 27/04/2016 03:51:45 27/04/2016 03:59:03
v1.11.2-rc1 3327300 mlaventure 28/05/2016 21:44:50 28/05/2016 23:47:25
v1.11.2     3354503 tiborvass  02/06/2016 02:59:52 02/06/2016 03:08:13
v1.12.0-rc1 3447699 tiborvass  15/06/2016 10:39:54 15/06/2016 10:55:14
v1.12.0-rc2 3471944 tiborvass  17/06/2016 23:39:11 18/06/2016 00:49:34
v1.12.0-rc3 3573896 tiborvass  02/07/2016 05:26:36 02/07/2016 05:30:18
v1.12.0-rc4 3644623 tiborvass  13/07/2016 07:27:26 13/07/2016 07:25:48
v1.12.0-rc5 3744904 tiborvass  26/07/2016 22:48:09 26/07/2016 22:48:18
v1.12.0     3766135 tiborvass  29/07/2016 02:06:45 29/07/2016 02:07:31
v1.12.1-rc1 3879305 tiborvass  13/08/2016 01:25:24 13/08/2016 01:28:06
v1.12.1-rc2 3909470 tiborvass  17/08/2016 19:50:45 17/08/2016 19:53:00
v1.12.1     3919520 tiborvass  18/08/2016 20:14:05 18/08/2016 20:19:55
v1.12.2-rc1 4246481 vieux      27/09/2016 22:37:47 28/09/2016 02:05:20
v1.12.2-rc2 4304701 vieux      04/10/2016 07:37:23 05/10/2016 01:41:11
v1.12.2-rc3 4336430 vieux      06/10/2016 23:27:01 07/10/2016 21:15:16
v1.12.2     4364345 vieux      11/10/2016 07:23:52 12/10/2016 01:35:27
I am immediately surprised to see that the most recent release of Docker listed here is 1.12.2. I have been playing with Docker under Windows 2016 enough to know that there is a 1.13 version under development. So why can't I see it here?

Well, the answer is simple: GitHub doesn't show you the master build of Docker. For those who are encountering problems with Docker on Windows 2016, and for those who always want the latest version no matter what, the master builds site (master.dockerproject.org at the time of writing) is the place to look.

Unfortunately there's no REST API for this site, and since it returns a table in old-style HTML, Invoke-RestMethod is of no use here.

Fortunately there's a nice fallback solution: using Invoke-WebRequest in conjunction with a script developed by Lee Holmes that does the job of extracting tables from web pages.
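Just to give you an idea of what such a script does under the hood, here's a minimal sketch of the concept; this is not Lee's code, and it relies on the ParsedHtml property that Invoke-WebRequest exposes in Windows PowerShell when you don't use -UseBasicParsing ($uri stands for the address of the page):

# illustration only: grab the first HTML table of the page and emit one object per row
$html = (iwr $uri).ParsedHtml
$rows = @(@($html.getElementsByTagName('table'))[0].rows)
$rows | Select-Object -Skip 1 | ForEach-Object {
    $cells = @($_.cells) | ForEach-Object { $_.innerText }
    [pscustomobject]@{ Name = $cells[0]; Size = $cells[1]; 'Uploaded Date' = $cells[2] }
}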

Save Lee's code in a file named get-webrequesttable.ps1 so that you can reuse it, and feed it the output of Invoke-WebRequest:
$uri = 'https://master.dockerproject.org/'

$r = iwr $uri

$o = .\get-webrequesttable.ps1 $r -TableNumber 0

$o | Get-Member

   TypeName: System.Management.Automation.PSCustomObject

Name          MemberType   Definition                                   
----          ----------   ----------                                   
Equals        Method       bool Equals(System.Object obj)               
GetHashCode   Method       int GetHashCode()                            
GetType       Method       type GetType()                               
ToString      Method       string ToString()                            
Name          NoteProperty string Name=commit                           
Size          NoteProperty string Size=40 B                             
Uploaded Date NoteProperty string Uploaded Date=2016-10-20T07:12:53.000Z
Lee's script has found three columns and built an object with three properties: name, size and date of the upload.

With Format-Table we can produce a readable object:
$o | ft name,size,@{n='uploadeddate';e={get-date $_.'uploaded date'}} -auto

Name                                             Size     uploadeddate       
----                                             ----     ------------       
commit                                           40 B     20/10/2016 09:12:53
darwin/amd64/docker                              10.35 MB 20/10/2016 09:12:53
darwin/amd64/docker-1.11.0-dev                   10.44 MB 14/04/2016 22:14:23
darwin/amd64/docker-1.11.0-dev.md5               52 B     14/04/2016 22:14:24
darwin/amd64/docker-1.11.0-dev.sha256            84 B     14/04/2016 22:14:24
darwin/amd64/docker-1.11.0-dev.tgz               3.176 MB 14/04/2016 22:14:45
darwin/amd64/docker-1.11.0-dev.tgz.md5           56 B     14/04/2016 22:14:46
darwin/amd64/docker-1.11.0-dev.tgz.sha256        88 B     14/04/2016 22:14:46
darwin/amd64/docker-1.12.0-dev                   13.77 MB 29/07/2016 19:01:15
darwin/amd64/docker-1.12.0-dev.md5               52 B     29/07/2016 19:01:15
Since I am interested just in the versions of Docker for Windows, I can add a bit of filtering:
$o |? name -Match windows | ft name,size,@{n='uploadeddate';e={get-date $_.'uploaded date'}} -auto

Name                                             Size     uploadeddate       
----                                             ----     ------------       
windows/386/docker-1.11.0-dev.exe                9.456 MB 14/04/2016 22:14:38
windows/386/docker-1.11.0-dev.exe.md5            56 B     14/04/2016 22:14:39
windows/386/docker-1.11.0-dev.exe.sha256         88 B     14/04/2016 22:14:39
windows/386/docker-1.11.0-dev.tgz                3.089 MB 01/04/2016 01:28:35
windows/386/docker-1.11.0-dev.tgz.md5            56 B     01/04/2016 01:28:35
windows/386/docker-1.11.0-dev.tgz.sha256         88 B     01/04/2016 01:28:35
windows/386/                3.092 MB 14/04/2016 22:14:50
windows/386/            56 B     14/04/2016 22:14:51
windows/386/         88 B     14/04/2016 22:14:51
windows/386/docker-1.12.0-dev.exe                12.3 MB  29/07/2016 19:01:36
windows/386/docker-1.12.0-dev.exe.md5            56 B     29/07/2016 19:01:37
windows/386/docker-1.12.0-dev.exe.sha256         88 B     29/07/2016 19:01:37
windows/386/                4.041 MB 29/07/2016 19:01:50
windows/386/            56 B     29/07/2016 19:01:50
windows/386/         88 B     29/07/2016 19:01:50
windows/386/docker-1.13.0-dev.exe                10.55 MB 20/10/2016 09:13:18
windows/386/docker-1.13.0-dev.exe.md5            56 B     20/10/2016 09:13:19
windows/386/docker-1.13.0-dev.exe.sha256         88 B     20/10/2016 09:13:20
windows/386/                3.698 MB 20/10/2016 09:14:14
windows/386/            56 B     20/10/2016 09:14:14
windows/386/         88 B     20/10/2016 09:14:14
windows/386/docker.exe                           10.55 MB 20/10/2016 09:13:20
windows/amd64/docker-1.11.0-dev.exe              30.46 MB 14/04/2016 22:14:40
windows/amd64/docker-1.11.0-dev.exe.md5          56 B     14/04/2016 22:14:43
windows/amd64/docker-1.11.0-dev.exe.sha256       88 B     14/04/2016 22:14:43
windows/amd64/docker-1.11.0-dev.tgz              8.567 MB 01/04/2016 01:28:35
windows/amd64/docker-1.11.0-dev.tgz.md5          56 B     01/04/2016 01:28:35
windows/amd64/docker-1.11.0-dev.tgz.sha256       88 B     01/04/2016 01:28:36
windows/amd64/              8.591 MB 14/04/2016 22:14:51
windows/amd64/          56 B     14/04/2016 22:14:52
windows/amd64/       88 B     14/04/2016 22:14:52
windows/amd64/docker-1.12.0-dev.exe              15.19 MB 29/07/2016 19:01:38
windows/amd64/docker-1.12.0-dev.exe.md5          56 B     29/07/2016 19:01:39
windows/amd64/docker-1.12.0-dev.exe.sha256       88 B     29/07/2016 19:01:39
windows/amd64/              16.37 MB 29/07/2016 19:01:51
windows/amd64/          56 B     29/07/2016 19:01:51
windows/amd64/       88 B     29/07/2016 19:01:52
windows/amd64/docker-1.13.0-dev.exe              11.67 MB 20/10/2016 09:13:20
windows/amd64/docker-1.13.0-dev.exe.md5          56 B     20/10/2016 09:13:21
windows/amd64/docker-1.13.0-dev.exe.sha256       88 B     20/10/2016 09:13:21
windows/amd64/              14.65 MB 20/10/2016 09:14:15
windows/amd64/          56 B     20/10/2016 09:14:17
windows/amd64/       88 B     20/10/2016 09:14:17
windows/amd64/docker-proxy-1.12.0-dev.exe        2.936 MB 29/07/2016 19:01:39
windows/amd64/docker-proxy-1.12.0-dev.exe.md5    62 B     29/07/2016 19:01:39
windows/amd64/docker-proxy-1.12.0-dev.exe.sha256 94 B     29/07/2016 19:01:39
windows/amd64/docker-proxy-1.13.0-dev.exe        1.875 MB 20/10/2016 09:13:21
windows/amd64/docker-proxy-1.13.0-dev.exe.md5    62 B     20/10/2016 09:13:22
windows/amd64/docker-proxy-1.13.0-dev.exe.sha256 94 B     20/10/2016 09:13:22
windows/amd64/docker-proxy.exe                   1.875 MB 20/10/2016 09:13:22
windows/amd64/docker.exe                         11.67 MB 20/10/2016 09:13:22
windows/amd64/dockerd-1.12.0-dev.exe             40.28 MB 29/07/2016 19:01:40
windows/amd64/dockerd-1.12.0-dev.exe.md5         57 B     29/07/2016 19:01:42
windows/amd64/dockerd-1.12.0-dev.exe.sha256      89 B     29/07/2016 19:01:42
windows/amd64/dockerd-1.13.0-dev.exe             32.42 MB 20/10/2016 09:13:23
windows/amd64/dockerd-1.13.0-dev.exe.md5         57 B     20/10/2016 09:13:25
windows/amd64/dockerd-1.13.0-dev.exe.sha256      89 B     20/10/2016 09:13:25
windows/amd64/dockerd.exe                        32.42 MB 20/10/2016 09:13:25
Excluding all that's not a zip archive is achieved with Regex:
$o |? name -Match '^windows.*?zip$' | ft name,size,@{n='uploadeddate';e={get-date $_.'uploaded date'}} -auto

Name                                Size     uploadeddate       
----                                ----     ------------       
windows/386/   3.092 MB 14/04/2016 22:14:50
windows/386/   4.041 MB 29/07/2016 19:01:50
windows/386/   3.698 MB 20/10/2016 09:14:14
windows/amd64/ 8.591 MB 14/04/2016 22:14:51
windows/amd64/ 16.37 MB 29/07/2016 19:01:51
windows/amd64/ 14.65 MB 20/10/2016 09:14:15
So cool. We have the list of the latest master builds for each release. Now, getting only the current master build is just one step away:
($o |? name -Match '^windows.*?zip$')[-1] | ft name,size,@{n='uploadeddate';e={get-date $_.'uploaded date'}} -auto

Name                                Size     uploadeddate       
----                                ----     ------------       
windows/amd64/ 14.65 MB 20/10/2016 09:14:15
Hey, this is newer than the one I currently have, so let me download it:
$l = ($o |? name -Match '^windows.*?zip$')[-1].Name

iwr "$uri$l" -OutFile "$env:TEMP\" -UseBasicParsing
I'm off to test it. If you have any questions about the code or the aliases I have used, do not hesitate to ask.

Wednesday, October 19, 2016

First steps with Microsoft Containers - part 3

A lot of things might not work out of the box when you take your first steps with Docker (which we learned to install in the previous post) under Windows 2016. This is a new project, the partnership between Microsoft and Docker is definitely recent, and while the community is contributing greatly to it, it's always good to know where the logs are when you can't get things to work, like in the following screenshot:

A post by a technical writer at Microsoft showed how to use Get-EventLog to retrieve events generated by the Docker engine:
Get-EventLog -LogName Application -Source Docker -After (Get-Date).AddMinutes(-1000) |
        Sort-Object TimeGenerated
Now, Get-EventLog is a sort of legacy cmdlet, kept around mainly for compatibility with older versions of Windows PowerShell.

Get-WinEvent is the cmdlet you want to be using, since it is definitely faster.

Get-WinEvent -FilterHashtable @{ProviderName='docker'}

TimeCreated               Id LevelDisplayName Message                                                            
-----------               -- ---------------- -------                                                            
18/10/2016 14:14:44       1  Error            Handler for GET /v1.24/images...
18/10/2016 14:14:26       1  Information      API listen on //./pipe/docker...
18/10/2016 14:14:26       11 Information      Docker daemon [version=1.12.2...
18/10/2016 14:14:26       1  Information      Daemon has completed initiali...
18/10/2016 14:14:26       1  Information      Loading containers: done.
18/10/2016 14:14:26       1  Information      Loading containers: start.
18/10/2016 14:14:26       1  Information      Graph migration to content-ad...
18/10/2016 14:14:25       1  Information      [graphdriver] using prior sto...
18/10/2016 14:14:25       1  Information      Windows default isolation mod...
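Note that FilterHashtable accepts several other keys, so you can narrow the query further; for example (the one-hour time window here is arbitrary):

Get-WinEvent -FilterHashtable @{
    LogName      = 'Application'
    ProviderName = 'docker'
    StartTime    = (Get-Date).AddHours(-1)
} | Select-Object TimeCreated, LevelDisplayName, Message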
The difference in performance is easily demonstrated:
Measure-Command {Get-EventLog -LogName Application -Source Docker}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 527
Ticks             : 5274839
TotalDays         : 6,10513773148148E-06
TotalHours        : 0,000146523305555556
TotalMinutes      : 0,00879139833333333
TotalSeconds      : 0,5274839
TotalMilliseconds : 527,4839

Measure-Command {Get-WinEvent -FilterHashtable @{ProviderName='docker'}}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 80
Ticks             : 801087
TotalDays         : 9,27184027777778E-07
TotalHours        : 2,22524166666667E-05
TotalMinutes      : 0,001335145
TotalSeconds      : 0,0801087
TotalMilliseconds : 80,1087

Stay tuned for more Docker tips.

Tuesday, October 18, 2016

First steps with Microsoft Containers - part 2

As I said in the previous post, Docker is required in order to work with Windows containers. The source can be downloaded and installed in a couple of simple steps:
Invoke-WebRequest "" -OutFile "$env:TEMP\" -UseBasicParsing

Expand-Archive $env:TEMP\ -DestinationPath $env:ProgramFiles

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)
Close and reopen PowerShell, then:
& $env:ProgramFiles\docker\dockerd.exe --register-service

Start-Service docker
There you are.
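A quick sanity check at this point doesn't hurt; assuming the service registered correctly, both of these should answer without errors:

Get-Service docker

docker version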


Once you have installed Docker on Windows, you can explore the possible parameters:

Usage: docker [OPTIONS] COMMAND [arg...]
       docker [ --help | -v | --version ]

A self-sufficient runtime for containers.


  --config=%USERPROFILE%\.docker              Location of client config files
  -D, --debug                                 Enable debug mode
  -H, --host=[]                               Daemon socket(s) to connect to
  -h, --help                                  Print usage
  -l, --log-level=info                        Set the logging level
  --tls                                       Use TLS; implied by --tlsverify
  --tlscacert=%USERPROFILE%\.docker\ca.pem    Trust certs signed only by this CA
  --tlscert=%USERPROFILE%\.docker\cert.pem    Path to TLS certificate file
  --tlskey=%USERPROFILE%\.docker\key.pem      Path to TLS key file
  --tlsverify                                 Use TLS and verify the remote
  -v, --version                               Print version information and quit

    attach    Attach to a running container
    build     Build an image from a Dockerfile
    commit    Create a new image from a container's changes
    cp        Copy files/folders between a container and the local filesystem
    create    Create a new container
    diff      Inspect changes on a container's filesystem
    events    Get real time events from the server
    exec      Run a command in a running container
    export    Export a container's filesystem as a tar archive
    history   Show the history of an image
    images    List images
    import    Import the contents from a tarball to create a filesystem image
    info      Display system-wide information
    inspect   Return low-level information on a container, image or task
    kill      Kill one or more running containers
    load      Load an image from a tar archive or STDIN
    login     Log in to a Docker registry.
    logout    Log out from a Docker registry.
    logs      Fetch the logs of a container
    network   Manage Docker networks
    node      Manage Docker Swarm nodes
    pause     Pause all processes within one or more containers
    port      List port mappings or a specific mapping for the container
    ps        List containers
    pull      Pull an image or a repository from a registry
    push      Push an image or a repository to a registry
    rename    Rename a container
    restart   Restart a container
    rm        Remove one or more containers
    rmi       Remove one or more images
    run       Run a command in a new container
    save      Save one or more images to a tar archive (streamed to STDOUT by default)
    search    Search the Docker Hub for images
    service   Manage Docker services
    start     Start one or more stopped containers
    stats     Display a live stream of container(s) resource usage statistics
    stop      Stop one or more running containers
    swarm     Manage Docker Swarm
    tag       Tag an image into a repository
    top       Display the running processes of a container
    unpause   Unpause all processes within one or more containers
    update    Update configuration of one or more containers
    version   Show the Docker version information
    volume    Manage Docker volumes
    wait      Block until a container stops, then print its exit code

Run 'docker COMMAND --help' for more information on a command.

Have a look at each parameter; they are easily understood. The key commands to start with are 'pull' and 'run'. The first one is used to pull an image from the Docker public registry, which is basically a repo of all the images found on the Docker Hub.

The second one is used to bring up the Container, and is often used in conjunction with the -it switches, where 'i' makes it interactive and 't' opens a pseudo terminal. You can also directly start a Container with run without pulling it from the registry first: the Docker client will do it for you.
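For instance, with the Windows Server Core image that Microsoft publishes on the Hub, the two steps would look like this:

docker pull microsoft/windowsservercore

docker run -it microsoft/windowsservercore cmd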


If, like me, you are behind a corporate proxy and aren't able to connect directly to the internet, you have two alternatives: either download the two classic Microsoft container images from their direct download links, or use the Start-BitsTransfer cmdlet in place of Install-ContainerOSImage (which has been removed, as I said in the previous post).

Start-BitsTransfer -Destination nanoserver_10.0.14300.1030_4.tar.gz

docker load -i .\nanoserver_10.0.14300.1030_4.tar.gz
Once you get the message 'Loaded image: microsoft/nanoserver:10.0.14300.1030' you are good to use the NanoServer image:
docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
microsoft/nanoserver   10.0.14300.1030     3a703c6e97a2        4 months ago        969.8 MB
The tag here is important because, in the case of multiple images, you can refer to a specific one by its tag.


Running the container couldn't be simpler:

docker run -it microsoft/nanoserver:10.0.14300.1030 powershell


Now, here's a common beginner mistake: if you try to run the NanoServer container on a full Windows 2016 Server, you get:

docker run -it microsoft/nanoserver:10.0.14300.1030 powershell
C:\Program Files\Docker\docker.exe: Error response from daemon: container 379f187cf37ff65df4ffbc4cc2dc98441a7e932443f155596f17f4e066c1585c encountered an error during CreateContainer failed in Win32: The operating system of the container does not match the operating system of the host.
(0xc0370101) extra info:{"SystemType":"Container","Name":"379f187cf37ff65df4ffbc4cc2dc98441a7e932443f155596f17f4e066c1585c","Owner":"docker","IsDummy":false,"VolumePath":"\\\\?\\Volume{4d073ac2-946711e6ae0a005056836799}","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\379f187cf37ff65df4ffbc4cc2dc98441a7e932443f155596f17f4e066c1585c","Layers"[{"ID":"115df3f3089b5f51ba901cf50b70e57a","Path":"C:\\ProgramData\\docker\\windowsfilter\\15d854d2cf0ddc892b511b920988221ff68e683df8c71c858d72fe151117b027"}],"HostName":"379f187cf37f","MappedDirectories":[],"SandboxPath":"","HvPartition":false,"EndpointList":["5f95dbcb-2bcc-44e8-a13d-2a91b97ec0ac"],"HvRuntime":null,"Servicing":false}.
As you remember from the previous post, containers are operating-system-level virtualization, so a Nano Server container can only run as a Hyper-V container if the container host is a full Windows 2016, or as a Windows Container if the container host is a NanoServer. Simple matching of kernels. On a full server, then, the fix is to ask for Hyper-V isolation:

docker run -it --isolation hyperv microsoft/nanoserver:10.0.14300.1030 powershell
The second common mistake: if you don't have the Hyper-V feature installed and try to run a Hyper-V Container, you'll get the following error:

docker run -it --isolation hyperv microsoft/nanoserver:10.0.14300.1030 powershell
C:\Program Files\Docker\docker.exe: Error response from daemon: container cd93f27118a2e80964e4162b8d107a39fd78b7c0de3e2f
6e5377c4f998118c36 encountered an error during CreateContainer failed in Win32: No hypervisor is present on this system.
(0xc0351000) extra info: {"SystemType":"Container","Name":"cd93f27118a2e80964e4162b8d107a39fd78b7c0de3e2f6e5377c4f998118c36","Owner":"docker","IsDummy":false,"VolumePath":"","IgnoreFlushesDuringBoot":true,"LayerFolderPath":"C:\\ProgramData\\docker\\windowsfilter\\cd93f27118a2e80964e4162b8d107a39fd78b7c0de3e2f6e5377c4f998118c36","Layers":[{"ID":"115df3f3-089b-5f51ba901cf50b70e57a","Path":"C:\\ProgramData\\docker\\windowsfilter\\15d854d2cf0ddc892b511b920988221ff68e683df8c71c858d72fe151117b027"}],"HostName":"cd93f27118a2","MappedDirectories"[],"SandboxPath":"C:\\ProgramData\\docker\\windowsfilter","HvPartition":true,"EndpointList"["977f140941694fb786ff4abb20316975"],"HvRuntime"{"ImagePath":"C:\\ProgramData\\docker\\windowsfilter\\15d854d2cf0ddc892b511b920988221ff68e683df8c71c858d72fe151117b027\\UtilityVM"},"Servicing":false}.
Installing Hyper-V will solve this in a breeze.


Docker relies on the Microsoft hcsshim package to call the Host Compute Service (vmcompute.dll) to run Windows Containers. When the service (dockerd.exe) starts, a network setup occurs and a vmswitch is configured. This vmswitch is managed by the Host Network Service (HNS) subsystem, which is in charge of the IPAM role.

In my labs I have often encountered a bug that prevents the Docker service from starting because the HNS subsystem is unable to complete the network setup:

time="2016-10-18T02:21:35.312426300+02:00" level=info msg="Windows default isolation mode: process"
time="2016-10-18T02:21:35.339425700+02:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
time="2016-10-18T02:21:35.339425700+02:00" level=info msg="Loading containers: start."
time="2016-10-18T02:21:35.393424100+02:00" level=error msg="Resolver Setup/Start failed for container none, \"json: cann ot unmarshal array into Go value of type hcsshim.HNSNetwork\""
Error starting daemon: Error initializing network controller: Error creating default network: HNS failed with error : Unspecified error
It looks like this issue should be solved by installing the cumulative update for Windows 10 Version 1607 and Windows Server 2016 released on October 11, 2016 (aka 9D).

In my case installing this patch, as well as all the others in the standard channel, did not solve the issue:


Source         Description      HotFixID      InstalledBy          InstalledOn
------         -----------      --------      -----------          -----------
SRV1           Update           KB3176936     NT AUTHORITY\SYSTEM  18/10/2016 00:00:00
SRV1           Update           KB3192137     NT AUTHORITY\SYSTEM  12/09/2016 00:00:00
SRV1           Update           KB3199209     NT AUTHORITY\SYSTEM  18/10/2016 00:00:00
SRV1           Security Update  KB3194798     NT AUTHORITY\SYSTEM  18/10/2016 00:00:00
I will post an update as soon as I find out more, but in the meantime you can check the open issue on GitHub.
More on Docker in the next post.

Monday, October 17, 2016

First steps with Microsoft Containers - part 1

Windows Server 2016 has Container support. That's what most of us have heard in the last months. I have spent many years in the virtualization field, but getting a grasp of Containers on Windows in 2016 took a great deal of effort on my part, as I moved from a traditional administrator role to becoming an automator in a stretched and heterogeneous IT environment.


The initial learning curve was steep, with the birth of PowerShell and the need to change my habit of relying on GUIs in order to get things really automated. I was pretty prone to dive into scripting since in the long term it meant less work for me, more for my servers.
Then it settled for a while. Learning Hyper-V was not difficult since I already had previous experience with VMware. And the cmdlets for managing those hypervisors are more or less the same (the consistency of Windows PowerShell is one of its primary assets).
During this period behind the curtains things were happening: Microsoft was trying to catch up with VMware and position their hypervisor in the ‘Leaders’ box of the Gartner quadrant. And at the same time Microsoft was building NanoServer, the thinnest OS they could provide on the road to containerization.
To make things even more complex to follow, Microsoft has also gone open source on some projects, like .NET Core or PowerShell, and is contributing to the Docker open source project. Whilst this has increased the quality of the product, thanks to the effort of hundreds of expert contributors, the risk of bugs making it into the code has also increased. This is a common risk in the open source ecosystem. The most significant example? Just last week version 4.8 of the Linux kernel (which is marked as stable) was released with the dangerous addition of a BUG_ON line which kills the kernel.
So, let's now have a look at Microsoft Containers and how to use them, and I will walk you through some of the technical problems you could encounter with this product. Since there are so many different instructions around, and releases are succeeding one another pretty fast, it is difficult to know which ones to follow, so I'll try to keep things as simple as possible.
The first step is to go and get a Windows 2016 image. Once you have it, the upgrade process will be pretty easy. Personally I upgraded most of the systems in my labs, which were running Windows 2012 R2 or some Windows 2016 Technical Preview.

The process was painless, apart from a Hyper-V cluster that suffered a loss of its LBFO teaming and virtual switches, caused by the Unaware Update burden I put on it…


Containers are all about operating system virtualization, so they are a different concept from virtual machines, which are all about hardware virtualization. Since containers work at a different level, they are significantly faster to set up and deploy. And you can pack a lot more containers than virtual machines on a single host.
Now Windows 2016 offers two Container models: Windows Containers and Hyper-V Containers. They differ in their isolation level: while the former shares the kernel with the host, the latter runs in a lightweight virtual machine (called a partition) with a separate kernel.
Concerning Windows Containers, the process isolation mechanism is handled by the Docker daemon, or service if you like, which allows them to reuse the host kernel through a sandbox. For this to happen, containerization primitives have been added to the Windows 2016 kernel (and to Windows 10 Anniversary Update), which is the reason why you won't be able to run containers on Windows 2012 R2. There is no tiny Linux virtual machine involved in this process, despite what you might read elsewhere.
Concerning Hyper-V Containers, though the process isolation is achieved through the use of a minimalist hypervisor which is started with the Container and torn down when you stop it, they can also be managed with the Docker client:
docker run --isolation=hyperv microsoft/nanoserver
There is for sure a greater overhead when you set up a Hyper-V Container, and startup times are a bit longer, but each of these two types of Containers has its use.
Ok, let's take a step back and see all this from the beginning.
The first step when setting up Containers on a vanilla Windows 2016 is to:
Install-WindowsFeature Containers
Success Restart Needed Exit Code      Feature Result
------- -------------- ---------      --------------
True    Yes            SuccessRest... {Containers}
WARNING: You must restart this server to finish the installation process.
This installs 10 new cmdlets:
(Get-Command).where{$_.Source -match 'Containers'}

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Add-ContainerNetworkAdapter                  Containers
Cmdlet          Add-ContainerNetworkAdapterStaticMapping     Containers
Cmdlet          Get-ContainerNetwork                         Containers
Cmdlet          Get-ContainerNetworkAdapter                  Containers
Cmdlet          Get-ContainerNetworkAdapterStaticMapping     Containers
Cmdlet          New-ContainerNetwork                         Containers
Cmdlet          Remove-ContainerNetwork                      Containers
Cmdlet          Remove-ContainerNetworkAdapter               Containers
Cmdlet          Remove-ContainerNetworkAdapterStaticMapping    Containers
Cmdlet          Set-ContainerNetworkAdapter                  Containers

At this point the problems started for me. I spent a lot of time trying to understand the right procedure to follow (most of the tests I did were on Windows 2016 Evaluation build 14393.0).
Being in a corporate environment, I had a hard time making the following work:
Install-PackageProvider ContainerImage
… and eventually failed:
WARNING: MSG:UnableToDownload «» «»
WARNING: Unable to download the list of available providers. Check your internet connection.
WARNING: Unable to download from URI '' to ''.
Install-PackageProvider : No match was found for the specified search criteria for the provider 'ContainerImage'. The
package provider requires 'PackageManagement' and 'Provider' tags. Please check if the specified package has the tags.
At line:1 char:1
+ Install-PackageProvider ContainerImage
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (Microsoft.Power...PackageProvider:InstallPackageProvider) [Install-PackageProvider], Exception
    + FullyQualifiedErrorId : NoMatchFoundForProvider,Microsoft.PowerShell.PackageManagement.Cmdlets.InstallPackageProvider
I tried a whole bunch of workarounds to make this cmdlet work through a corporate proxy and download the NuGet package provider. I went from the simple solution, which consists of using netsh to import the WinHTTP proxy configuration after having defined a proxy in IE:
netsh winhttp import proxy source=ie
to using the Configure-Proxy function by fellow MVP Jeff Wouters:
function Configure-Proxy ($Proxy, $Port)
{
    # Function that actually does the configuring of the proxy settings.
    Set-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings" -Name ProxyEnable -Value 1
    Set-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings" -Name ProxyServer -Value ($Proxy + ':' + $Port)
    Set-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings" -Name ProxyOverride -Value ""
}

Configure-Proxy "10.x.x.x" 8080
…to using Install-PackageProvider with the -Proxy and -ProxyCredential parameters defined:
Install-PackageProvider ContainerImage -Force -proxy http://10.x.x.x:8080 -ProxyCredential $cred –Verbose
Nothing worked:
$Username = "proxyuser"
$Password = ConvertTo-SecureString "proxyuserpw" -AsPlainText -Force
$Cred = New-Object System.Management.Automation.PSCredential $Username, $Password
Install-PackageProvider ContainerImage -Force -proxy http://10.x.x.x:8080 -ProxyCredential $cred -Verbose
VERBOSE: Using the provider 'Bootstrap' for searching packages.
VERBOSE: Finding the package 'Bootstrap::FindPackage' 'ContainerImage','','','''.
WARNING: Unable to download from URI '' to ''.
VERBOSE: Cannot download link '', retrying for '2' more
VERBOSE: Cannot download link '', retrying for '1' more
VERBOSE: Cannot download link '', retrying for '0' more
WARNING: Unable to download the list of available providers. Check your internet connection.
VERBOSE: Using the provider 'PowerShellGet' for searching packages.
VERBOSE: The -Repository parameter was not specified.  PowerShellGet will use all of the registered repositories.
Install-PackageProvider : No match was found for the specified search criteria for the provider 'ContainerImage'. The
package provider requires 'PackageManagement' and 'Provider' tags. Please check if the specified package has the tags.
At line:1 char:1
+ Install-PackageProvider ContainerImage -Force -proxy http://10.X.x. ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (Microsoft.Power...PackageProvider:InstallPackageProvider) [Install-PackageProvider], Exception
    + FullyQualifiedErrorId : NoMatchFoundForProvider,Microsoft.PowerShell.PackageManagement.Cmdlets.InstallPackageProvider
I even decided to deliberately ignore the SSL errors, using a piece of code found at the Mono project:
add-type @"
     using System.Net;
     using System.Security.Cryptography.X509Certificates;
     public class TrustAllCertsPolicy : ICertificatePolicy {
         public bool CheckValidationResult(
             ServicePoint srvPoint, X509Certificate certificate,
             WebRequest request, int certificateProblem) {
             return true;
         }
     }
"@

[System.Net.ServicePointManager]::CertificatePolicy = New-Object TrustAllCertsPolicy
but the outcome stayed the same: failure.
To make a long story short, I ended up manually downloading the module on a computer with direct Internet access and importing it into the right folder on my corporate server.
To me, it really felt like this cmdlet was never intended to be used in an enterprise environment…
Once I went through these tasks, I discovered that this ContainerImage module only had three cmdlets:
Get-Command -Module ContainerImage

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Function        Find-ContainerImage                          ContainerImage
Function        Install-ContainerImage                       ContainerImage
Function        Save-ContainerImage                          ContainerImage
Since these cmdlets didn't work either because of the corporate firewall, I ended up retrieving the WIM files straight from their web source and copying them onto my Windows 2016 test bed.
The download links can be found here.
The content of this file, which basically contains the direct download links, is:
{
    "Name":  "NanoServer",
    "Version":  "10.0.14300.1016",
    "Description":  "Container OS Image of Windows Server 2016 Technical Preview 5 : Nano Server Installation",
    "SasToken":  ""
},
{
    "Name":  "WindowsServerCore",
    "Version":  "10.0.14300.1000",
    "Description":  "Container OS Image of Windows Server 2016 Technical Preview 5 : Windows Server Core Installation",
    "SasToken":  ""
}
I populated a ContainerImages folder where I put the image files:

    Directory: C:\ContainerImages

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----       12/10/2016     13:45      178574823 NanoServer-10-0-14300-1016.wim
-a----       12/10/2016     14:37     2874089226 WindowsServerCore-10-0-14300-1000.wim
Then I tried to install them offline:
Install-ContainerImage -.\NanoServer-10-0-14300-1016.wim
WARNING: MSG:UnableToDownload «» «»
WARNING: Unable to download the list of available providers. Check your internet connection.
PackageManagement\Save-Package : No match was found for the specified search criteria and package name
'-.\NanoServer-10-0-14300-1016.wim'. Try Get-PackageSource to see all available registered package sources.
At C:\Program Files\WindowsPowerShell\Modules\ContainerImage\\ContainerImage.psm1:492 char:23
+ ...   $downloadOutput = PackageManagement\Save-Package @PSBoundParameters
+                         ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (Microsoft.Power...ets.SavePackage:SavePackage) [Save-Package], Exception
    + FullyQualifiedErrorId : NoMatchFoundForCriteria,Microsoft.PowerShell.PackageManagement.Cmdlets.SavePackage

The property 'Name' cannot be found on this object. Verify that the property exists.
At C:\Program Files\WindowsPowerShell\Modules\ContainerImage\\ContainerImage.psm1:494 char:5
+     $Destination = GenerateFullPath -Location $Location `
+     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [], PropertyNotFoundException
    + FullyQualifiedErrorId : PropertyNotFoundStrict

Install-ContainerOSImage : The term 'Install-ContainerOSImage' is not recognized as the name of a cmdlet, function,
script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is
correct and try again.
At C:\Program Files\WindowsPowerShell\Modules\ContainerImage\\ContainerImage.psm1:502 char:5
+     Install-ContainerOSImage -WimPath $Destination `
+     ~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (Install-ContainerOSImage:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

Remove-Item : Cannot bind argument to parameter 'Path' because it is null.
At C:\Program Files\WindowsPowerShell\Modules\ContainerImage\\ContainerImage.psm1:512 char:8
+     rm $Destination
+        ~~~~~~~~~~~~
    + CategoryInfo          : InvalidData: (:) [Remove-Item], ParameterBindingValidationException
    + FullyQualifiedErrorId : ParameterArgumentValidationErrorNullNotAllowed,Microsoft.PowerShell.Commands.RemoveItemCommand
The key error here is:

The term 'Install-ContainerOSImage' is not recognized as the name of a cmdlet.

Something had changed compared to the different procedures I had tested in the past, and everything led me to think that the container image cmdlets had been deprecated.


So, after reviewing the docs, I saw that Docker is the key required element in the process, as you can read in this article by Neil Peterson:

In the next post I will show how to configure Docker on Windows 2016.

Monday, October 3, 2016

Renewed as Microsoft MVP for 2016 during the great IT convergence

Just a few days after the announcement of the release of Windows 2016, best known as the 'cloud-ready OS', I am proud to say that I have been renewed as Microsoft Most Valuable Professional for Cloud and DataCenter Management.

So, what's happening here? The last time you contacted me I was a PowerShell MVP and you wanted advice on some complicated PowerShell function to monitor your Hyper-V clusters or to measure the latency of your disk subsystem. And now?

Well, now, things have converged. For good.


Cloud and DataCenter Management is a new, broad category for all the IT professionals who are making the effort to adopt the new technologies Microsoft has been announcing lately and who are willing to share a new way of being Windows admins in a Cloud world.

In a few words, this means quitting your cocoon of left and right clicking and getting involved in open source projects like PowerShell for Linux or learning new stuff, like how to install and take advantage of the Docker Engine on your Windows Servers.

But let's go back a bit and have a look at the main game changers in the IT world in 2016.


As I said, on September 26, 2016, Microsoft announced the general availability of Windows Server 2016. I have written a few articles on the different Technical Previews (there were 5 of them) that preceded this server OS going GA, but let me remind you of the major improvements that come with this version, which by the way you can download for evaluation, and which, unlike its predecessor, is licensed by the number of CPU cores rather than the number of CPU sockets - I started a long discussion on this on Reddit no more than a month ago.

Windows 2016 Server keeps doing its classical duty and can for sure run your traditional applications and host your datacenter infrastructure, but, hey, at the same time, it delivers a great amount of innovation to help companies transition their workloads to a new Cloud model. A model based on agility and cost-efficiency (read PowerShell, DSC and Chef, among others), 'devopsability' and efficiency (read PowerShell, Containers and Docker) and security (read Shielded VMs and the Host Guardian Service, aka HGS).

That is the first game changer. But there are more and their cross-OS nature is such that you shouldn't underestimate them.


I am thinking for instance of the Docker engine. Docker is an open-source project to easily create lightweight, self-sufficient, portable containers from any application. I remember reading somewhere on the Internet this definition of Docker which I find particularly imaginative:

"It's kind of like a tv dinner. Comes in a box, and has every thing you need to have a meal right there in the box. When you're done, you toss it. Whether you microwave it, put it in the oven, or heat it over a fire, it'll taste the same whenever, wherever.
Docker tries to make applications like TV Dinners. VMs are kind of like it, but not quite the same. VMs typically include an entire operating system, but a Docker ecosystem has a central engine that can run multiple Docker containers. It cuts out the bloat of multiple operating systems, and simply packages the runtime elements the program needs."

The very first version of Docker was released in 2013, and was just for Linux, at a time when Linux was still a bad word for Windows admins. Then Docker transformed itself. Remember when I spoke of convergence? A big project started and, as time passed, the Docker Engine became available natively on Windows, so that DevOps people can begin the same transformation (which is called Dockerization) for their Windows-based apps and move them from on-premises to the Azure Cloud.

Technically that was a tough task, since it asked for the implementation of a single set of tools, APIs and image formats for managing both Linux and Windows apps, without losing reliability.

Yes, reliability. This is the keyword to build upon before anything is adopted in the IT industry.

Even though for now there isn't a huge amount of evidence of Windows Containers in production, as far as I can see this is changing as well, and once Dockerized apps prove as stable as VMs, their adoption will be fast.

So, if you are in the mood for Docker, hesitate no more and:

Invoke-WebRequest "" -OutFile "$env:TEMP\" -UseBasicParsing

Expand-Archive -Path "$env:TEMP\" -DestinationPath $env:ProgramFiles

[Environment]::SetEnvironmentVariable("Path", $env:Path + ";C:\Program Files\Docker", [EnvironmentVariableTarget]::Machine)

dockerd --register-service

Start-Service Docker


There is another big game changer announced this summer by, trumpet blasts, Jeffrey Snover himself. It's PowerShell for Linux.

I admit that Microsoft was a bit commercial on this, since they are trying to expand their business and keep pace with Amazon AWS in their Seattlish war (both firms have their headquarters in Seattle, had you thought of that?). And to do so they needed to better support Linux. So Azure can be seen as one of the key drivers for Microsoft's increasing support (or love, as they call it today) for Linux.

PowerShell has nothing more to prove today. Version 5 is stable. VMware has adopted it for PowerCLI, which boasts more than 400 powerful cmdlets (just ask my friend Luc Dekens, who is both a vExpert and an MVP and who has a fierce knowledge of PowerCLI).

AWS itself has a very active PowerShell community, as explained by Snover. And Google has just started a PowerShell project, making this powerful language a de-facto standard of the IT industry.

Snover's Monad Manifesto has come full circle.

Certainly porting PowerShell to Linux (and to macOS) was not an easy task, since it asked for a rework of the .NET platform.


Originally the .NET Framework was engineered around the assumption that it is always deployed as a single block depending on mscorlib and running on Microsoft Windows. But then Microsoft had to release a .NET Compact Framework for Windows Mobile, and then one for Windows Phone, and then one for the Windows Store. They were all independent, and that caused a problem when Microsoft wanted to target all these platforms and include Linux and macOS in the bunch.

The solution to this was the introduction of .NET Core, which can be used with a wide variety of 'devices', which is an open-source, community-maintained project (check it on GitHub) and which runs on Windows Nano Server, Linux and macOS.

.NET Core also has the undeniable advantage of being modular: instead of assemblies, developers deal with NuGet packages, and, unlike the old bulky .NET Framework, which is serviced using Windows Update, it relies on its package manager to receive updates.

This is way too cool. Thanks to .NET Core, in the near future we will even be able to use DSC to manage Mac servers. Convergence again, so get ready for it.


From a technical point of view, the last big game changer I want to mention are Containers. Windows Containers. And Hyper-V Containers. You'll hear more and more of them, so I better tell you the difference between them once and for all.

Windows containers interact with the kernel the same way that Docker containers do. This type of container has been available since Windows Server 2016 TP3.

Hyper-V containers, which appeared in TP4, have their own copy of the Windows kernel to interact with, isolating them from all the other containers that might be residing on the same host.

It's clear that Hyper-V containers provide a higher level of isolation and security and will therefore have a different use from the standard Windows Containers.

So, what about my future now?

A lot of interesting community-oriented events are being set up, especially here in France, where I live for now.


My friends and fellow MVPs François-Xavier Cat and Fabien Dibot, with the help of some other very active members of the community (Stephane Van Gulick, Micky Balladelli, Emin Atac, just to mention a few), have started the French PowerShell User Group, which today has 112 members actively chatting on the group's French channel.

The aim of this group is to share knowledge through free podcasts. The very first was the one on Data Syntax Analysis with PowerShell, by François-Xavier himself.

This will be followed by others on PowerShell Classes, on PowerShell and Unity, and, in April 2017, by my PowerShell on Linux demo.

If you speak French and want to join us, this is the MeetUp link:

Then there is the Aos community, which is actively organising meetings in France around everything SharePoint, OneDrive, Yammer, Skype for Business, Cloud and Office 365.

To finish, on October 5th in Paris there will be Microsoft experiences'16, where you will be able to meet, trumpet blasts again, Scott Hanselman, principal program manager at Microsoft for ASP.NET/.NET, who will lead a session named 'Microsoft's journey towards a cross-platform open-source .NET', in which he will retrace 15 years of .NET history, from the very beginning in 2002 to the latest open source platform which runs on Linux, macOS and Windows.

Oh, I was about to forget, I will attend the MVP Summit in November, so, if you are an MVP and you want to get in touch with me during the event, feel free to drop me a line. For sure I will be seen very early in the morning going to run in Redmond to Billy's house or sipping a beer with my European and American friends in DownTown Seattle.


Thursday, August 4, 2016

Measuring IOPS part 5: enter PowerShell

We continue our quest to understand how we can use DISKSPD to measure storage performance. As suggested by Jose Barreto, and as I have discussed in the previous post of this series, we need to run the tool with a large number of workloads to get a grip on this complex matter.

To do so, I use the magic of a PowerShell script that iterates through combinations of
  • 4KB to 2MB IOs
  • 1 to 64 threads per file
  • 1 to 64 queue depth
to perform 10-second-long read operations (I reckon these impact me more than write operations) on a 1 GB file.

I also capture latency information (by means of the -L parameter) and disable caching (adding the -h parameter).
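For reference, a single iteration of the kind of command line the script builds looks like this (here with 8 KB IOs, 4 threads and a queue depth of 8, against the same 1 GB test file):

diskspd -b8k -t4 -o8 -w0 -c1G -r -h -d10 -W5 -L C:\test.dat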

Here’s the script. I have reused some parts of Jose Barreto’s blog code to build mine (so kudos to him for that):

$container = @()
$blocksize = 4,8,16,32,64,128,512,1024,2048
$thread = 1,2,4,8,16,32,64
$outstandingIO = 1,2,4,8,16,32,64
$duration = 10
$warmup = 5
$rest = 2
$combination = $blocksize.count * $thread.count * $outstandingIO.count
"TEST SPEC".PadRight(50,'*')
"Test started at $(Get-Date)"
"$Combination combinations".PadRight(50,'*')
"Duration of each test: $duration seconds".PadRight(50,'*')
"Warm-up time of each test: $warmup seconds".PadRight(50,'*')
"Rest between tests: $rest seconds"
"Predicted finish time: $((Get-Date) + (New-TimeSpan -seconds $($combination * ($duration + $warmup + $rest))))".PadRight(50,'*')
$a = 0
$blocksize | % {
    $b = "-b$_" + "k"
    [int]$k = $_
    $thread | % {
        $c = "-t$_"
        [int]$t = $_
        $outstandingIO | % {
            $d = "-o$_"
            $o = $_
            Write-Progress -Id 1 -Activity ("Checking IOPS") -PercentComplete ($a / $combination * 100) -Status ("Checked {0} combinations of {1}" -f $a, $combination)
            try {
                $ErrorActionPreference = 'Stop'
                $cmd = "diskspd $b $c $d -w0 -c1G -r -h -d$duration -W$warmup -L C:\test.dat"
                $result = Invoke-Expression $cmd
                # keep only the two summary lines we need from the diskspd output
                foreach ($line in $result) {if ($line -like "total:*") { $total = $line; break } }
                foreach ($line in $result) {if ($line -like "avg.*") { $avg = $line; break } }
                [double]$mbps = $total.Split("|")[2].Trim()
                [double]$iops = $total.Split("|")[3].Trim()
                [double]$latency = $total.Split("|")[4].Trim()
                [double]$cpu = $avg.Split("|")[1].Trim() -replace '%'
                "Param $b, $c, $d, $iops iops, $mbps MB/sec, $latency ms, $cpu %CPU"
                $current = [PSCustomObject]@{
                    Bytes = $k
                    Threads = $t
                    Queuedepth = $o
                    iops = $iops
                    MBs = $mbps
                    Latencyms = $latency
                    PercentCPU = $cpu
                }
            }
            catch {
                "Param $b, $c, $d failed"
                $current = [PSCustomObject]@{
                    Bytes = $k
                    Threads = $t
                    Queuedepth = $o
                    iops = 'NA'
                    MBs = 'NA'
                    Latencyms = 'NA'
                    PercentCPU = 'NA'
                }
            }
            $container += $current
            $a++    # update the counter used by the progress bar
            Start-Sleep -Seconds $rest
        }
    }
}

That's a very simple script that uses iteration to build a list of results combined with the test specs. Here's a screenshot of the script running, with its progress bar:

This is particularly useful for filtering the data afterwards: having the results from more than 400 iterations inside a collection of custom objects gives us the possibility to look for empirical evidence generated by the test.
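For example (assuming the script above has just filled $container), slicing the results afterwards takes only a couple of lines:

# top 5 combinations by IOPS
$container | Sort-Object iops -Descending | Select-Object -First 5

# latency profile of the 8 KB IO runs only
$container | Where-Object Bytes -eq 8 | Sort-Object Latencyms | Format-Table -AutoSize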

In the next post of this series we will see what results it will give. Stay tuned!

Wednesday, July 6, 2016

New PowerShell function to set up an iSCSI target on a NETAPP and mount it on Windows

I don't often use my NETAPP controllers as iSCSI targets. Most of the time I just export NFS (especially to host VMware datastores) or CIFS and that suffices for my needs. Sometimes, however, I am asked to mount disks from the NETAPP on Windows Servers directly at startup, and not just as simple shares mounted with logon scripts or whatever.
So I decided to write a pretty do-it-all PowerShell function that does all the work of configuring the NETAPP and mounting the disk via iSCSI on my server.
If I had to do this via the OnCommand GUI, I would first have to set up an aggregate, then a volume, then a LUN, then configure the iGroup, and only then move to my Windows server and manually bind the initiator to the target.
Fortunately, back in 2010, NETAPP released a module (inside the NetApp PowerShell Toolkit) to do all this stuff and more. Today you need to grab at least version 1.5 to get all the cmdlets I used in my function. Personally I have the latest version (which is 4.2), which can be downloaded here.

Concerning the requirements, the script must run on the Windows Server that acts as initiator. On that server you have to be running at least version 3.0 of Windows PowerShell and to have the DataONTAP module installed.
The Windows Server that acts as initiator will run my function, which
  • configures the storage controller
  • sets up the target
  • sets up the initiator
  • does the mapping of the LUN in an iGroup.
Once the function has finished configuring the iSCSI disk, you just have to initialize it in Disk Management, format it and assign a letter. I already showed you how to do this in PowerShell, just search my blog.
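If you prefer to do those last steps in PowerShell as well, here's a minimal sketch (the RAW-disk selection and the volume label are only examples, adapt them to your environment):

Get-Disk | ? PartitionStyle -eq 'RAW' |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel 'iSCSI01' -Confirm:$false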

Enough said: here's the function Mount-NAiSCSIDisk.

Just one last quick note: I am going to work on this function to improve it, so please, post suggestions or ideas in the comments below! And, for sure, share!

#Requires -Version 3.0 -Modules DataONTAP

function Mount-NAiSCSIDisk
{
    <#
        .SYNOPSIS
            Sets up and mounts a NetApp iSCSI LUN to a Windows Server

        .DESCRIPTION
            Sets up an aggregate, a volume, a LUN and an iGroup, then mounts the LUN as a NetApp iSCSI target on a Windows Server

        .PARAMETER NaController
            Name of the NetApp controller

        .PARAMETER Aggregate
            Name of the aggregate

        .PARAMETER DiskCount
            Number of disks in the aggregate

        .PARAMETER Volume
            Name of the volume

        .PARAMETER VolumeSize
            Size of the volume

        .PARAMETER SnapshotReserveSize
            Size of the snapshot reserve

        .PARAMETER LUNPath
            Path of the LUN

        .PARAMETER LUNType
            Type of the LUN

        .PARAMETER LUNSize
            Size of the LUN

        .PARAMETER Igroup
            Name of the Igroup

        .PARAMETER IgroupProtocol
            Protocol for the Igroup

        .PARAMETER IgroupType
            Type of the Igroup

        .EXAMPLE
            Mount-NAiSCSIDisk -NaController netapp1 -Igroup igroup1 -IgroupProtocol iscsi -IgroupType windows

        .EXAMPLE
            Mount-NAiSCSIDisk -NaController netapp1 -Aggregate aggr1 -DiskCount 32 -Volume vol1 -VolumeSize 10g -SnapshotReserveSize 0 -LUNPath /vol/NA1/iscsivol1 -LUNSize 1g -LUNType windows -Igroup igroup1 -IgroupProtocol iscsi -IgroupType windows

        .NOTES
            - Carlo MANCINI
    #>
    param(
        $NaController,
        $Aggregate,
        $DiskCount,
        $Volume,
        $VolumeSize,
        $SnapshotReserveSize,
        $LUNPath,
        $LUNType,
        $LUNSize,
        $Igroup,
        $IgroupProtocol,
        $IgroupType
    )

    write-verbose 'Started'
    $Error = $False

    $info = @()

    $infolastonline = @()

    Write-Verbose 'Connect to NaController'

    try {
        Connect-NaController $NaController -ErrorAction Stop
    }
    catch {
        Write-Warning 'Unable to connect to the controller'
    }

    Write-Verbose 'Starting the configuration on the netapp controller'

    if(!(Get-NaAggr $Aggregate)) {

        Write-Verbose 'Creating the aggregate'
        New-NaAggr $Aggregate -Use64Bit -DiskCount $DiskCount

    }
    else {

        Write-Verbose 'Aggregate already existing'

    }

    if(!(Get-NaVol $Volume)) {

        Write-Verbose 'Creating the volume'

        New-NaVol $Volume -Aggregate $Aggregate $VolumeSize

        Write-Verbose 'Setting the snapshot reserve'

        Set-NaSnapshotReserve $Volume $SnapshotReserveSize

    }
    else {

        Write-Verbose 'Volume already existing'

    }

    if(!(Get-NaLun $LUNPath)) {

        Write-Verbose 'Creating the LUN'
        New-NaLun -Path $LUNPath -Size $LUNSize -Type $LunType

    }
    else {

        Write-Verbose 'LUN already existing'

    }
    Write-Verbose 'Starting the iSCSI configuration'
    Write-Verbose 'Adding the storage controller to the iSCSI Initiator target portals'


    Write-Verbose 'Establishing a connection to the target discovered by the iSCSI Initiator'

    Get-NaHostIscsiTarget (Get-NaIscsiNodeName) | Connect-NaHostIscsiTarget

    Write-Verbose 'Creating a new initiator group'

    New-NaIgroup $Igroup $IgroupProtocol $IgroupType

    Write-Verbose 'Adding the initiator to the initiator group'
    Get-NaHostIscsiAdapter | Add-NaIgroupInitiator $Igroup

    Write-Verbose 'Mapping the LUN to the initiators in the initiator group'

    Add-NaLunMap $LUNPath $Igroup
}


Monday, July 4, 2016

Measuring IOPS part 4: binding IOMETER results to DISKSPD results

In the previous post we have seen that Diskspd can quickly provide us with information on MB/s and I/Os per second (IOPS), as well as the average latency.
For the moment we have no idea whether the 39k IOPS measured in the first sequential read test or the 710 total IOPS measured in the last random read and write test are good or bad results for the workload I have generated. The only thing that we have found for sure is that latency decreases when we use small IOs instead of large IOs. But we still don't know whether a latency of 5 ms, such as the one measured in the last run, is a good value or not.
We need to investigate the other Diskspd options further. But before we do that, we have to build a dictionary of what is measured with IOMETER and map it to the corresponding terms used by DISKSPD.

If I look in the Diskspd help, and I try to reproduce the variables of IOmeter, I can build the following table of matches:

  • IOMETER Transfer request size = DISKSPD -b parameter, described as 'Size of the IO in KB'
  • IOMETER Percent Read/Write Distribution = DISKSPD -w, described as 'Percentage of writes'
  • IOMETER Percent Random/Sequential Distribution = DISKSPD -r, which forces random operations
  • IOMETER # of Outstanding I/Os = DISKSPD -o, described as 'Outstanding IOs or queue depth (per thread)'
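As an example of how to read this mapping, the classic IOmeter profile of 4 KB transfers, 100% read, 100% random, with 16 outstanding I/Os would translate to a DISKSPD invocation roughly like this (duration and target file are arbitrary):

diskspd -b4k -w0 -r -o16 -d60 -L C:\test.dat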
At this moment we have no idea what kind of workload we have to generate to get meaningful results that can help us assess storage performance, and Jose Barreto himself states that we have to experiment with the -t and -o parameters until we find the combination that gives the best results.

Now guess which tool I am going to use to automate this task in the next post of this series: PowerShell! Stay tuned!
