Thursday, December 21, 2017

Configuring Visual Studio Code for PowerShell Core 6 after executable name change

Here's a quick one before the end of the year. Last October the PowerShell Team, after a long debate with community members, decided to rename the executable of PowerShell Core 6 from powershell.exe to pwsh.exe.

This makes it easy to distinguish PowerShell Core from the PowerShell 5.1 built into Windows 10. You can now execute pwsh on Windows or Linux and get PowerShell Core 6 in a consistent manner.

Assuming you have moved from the PowerShell ISE to Visual Studio Code and have the PowerShell extension installed, there are a couple of settings you have to put in place so that you can work with pwsh.exe as the default terminal:

   "powershell.powerShellExePath": "c:/Program Files/PowerShell/6.0.0-rc.2/pwsh.exe",
   "terminal.integrated.shell.windows": "c:/Program Files/PowerShell/6.0.0-rc.2/pwsh.exe"

The year 2017 is almost over. If you haven't yet played with the beta of PowerShell Core, I heartily suggest you do so, so that you can start getting used to a cross-platform shell available for Linux, Mac or Windows. You can find it open sourced here.

If you still stick with the PowerShell ISE, you should give Visual Studio Code a try; you can download it here. For those of you familiar with the ISE, there's a great video tutorial by fellow MVP Mike F Robbins which shows you how to make your Visual Studio Code ISE-like. Just add the two settings above and you'll get a familiar environment ready for PowerShell Core.

Monday, December 18, 2017

PowerShell oneliner to list all the installed .NET versions on a local computer

In my drawer of PowerShell tools, I keep a pretty extensive list of oneliners written for specific tasks. Today I want to share the line of code I wrote to check the .NET versions installed on a local system. Since there is no direct method to find that information, a lot of people have come up with advanced functions, which seems a bit overkill to me. So, given that I like playing with the pipeline and that I needed a quick and dirty script, I decided to see whether I could get this done in just one line of code.

Starting from .NET 1.1, the list of installed frameworks is stored in the registry under:
HKLM\SOFTWARE\Microsoft\NET Framework Setup\NDP
Under this key, you'll see separate keys for each .NET Framework version installed in your system.

Possible paths are:
1.1 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v1.1.4322
2.0 HKEY_LOCAL_MACHINE\Software\Microsoft\NET Framework Setup\NDP\v2.0.50727
3.0 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.0
3.5 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v3.5
4.0 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
4.5 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
4.5.1 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
4.5.2 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
4.6 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
4.6.1 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
4.6.2 HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full
As you can see, there are five possible unique paths.

These can be matched with a regular expression such as 'v1.1.4322$|v2.0.50727$|v3.0$|v3.5$|v4\\Full$', which excludes v4.0 because it is deprecated.

Since PowerShell has native access to the registry, we can run the following line on our local computer:
ls 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP'
just as if we were browsing the file system.

Now, what you have to know is that each .NET Framework release has a version name and a version number:
  • the .NET version name is composed of digits and comes with the product name, e.g. .NET 2.0, .NET 3.5, .NET 4.7.1
  • the .NET version number follows this convention: (Major version).(Minor version).(Revision number).(Build number)

Here are some version numbers I have in my environment:

  • 4.7.02053
  • 4.5.51209
  • 3.5.30729.01
  • 2.1.21022

As you can see, version names and version numbers share the first two fields. So, for instance, .NET 4.7.1 has version number 4.7.02556. We can then say that we don't care about the revision or build numbers (third and fourth fields): the first two fields are all that I need.
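Since only the first two fields matter, the [version] type accelerator can do the splitting for us. Here's a quick sketch, using the sample version numbers listed above:

```powershell
# Reduce full version numbers to the Major.Minor fields we care about
'4.7.02053','3.5.30729.01','2.1.21022' | ForEach-Object {
    $v = [version]$_
    '{0}.{1}' -f $v.Major, $v.Minor
}
```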

To make a long story short, here's the PowerShell oneliner that I need to run to locally get all the installed .NET releases:
((ls 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP' -rec|? name -match 'v1.1.4322$|v2.0.50727$|v3.0$|v3.5$|v4\\Full$').name -replace 'HKEY_LOCAL_MACHINE','HKLM:'|%{ Get-ItemProperty $_}).Version
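For readability, here's the same logic expanded over multiple lines, a sketch that should be functionally equivalent, just not a oneliner anymore:

```powershell
# Same logic as the oneliner, expanded for readability
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP' -Recurse |
    Where-Object Name -Match 'v1.1.4322$|v2.0.50727$|v3.0$|v3.5$|v4\\Full$' |
    ForEach-Object { Get-ItemProperty ($_.Name -replace 'HKEY_LOCAL_MACHINE','HKLM:') } |
    Select-Object -ExpandProperty Version
```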
Hope this helps. Let me know in the comments if you are able to make my oneliner shorter.

UPDATE: on older PowerShell versions use:
((ls 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP' -rec|? {$_.Name -match 'v1.1.4322$|v2.0.50727$|v3.0$|v3.5$|v4\\Full$'} | select -expand Name) -replace 'HKEY_LOCAL_MACHINE','HKLM:')|%{ Get-ItemProperty $_} | select -expand Version

UPDATE: you can also address this registry path through the registry provider's name, followed by "::". The registry provider's full name is Microsoft.PowerShell.Core\Registry, but it can be shortened to just Registry. So the following syntax is also possible:
((ls 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP' -rec|? name -match 'v1.1.4322$|v2.0.50727$|v3.0$|v3.5$|v4\\Full$').name |%{ Get-ItemProperty "Registry::$($_)"}).Version

Tuesday, December 5, 2017

PowerShell oneliners by guest blogger Brian

During the latest PowerShell Oneliner Contest, Brian came up with a solution to Task 3 which is completely fantastic: he makes very smart use of Group-Object -AsHashTable -AsString, as well as of Invoke-Expression, to produce an impressive 187-character solution.

Brian has kindly accepted to be my guest blogger today.

First of all, let's have a look at his answers to my PowerShell contest:

gwmi Win32_Share|% N*|%{"\\$(hostname)\$_"}
"The B in $(($b=gv q* -v|% s*g 27 20)) stands for $b."
TASK 3: TEXT MINING - 187 chars
$t1,$t2|%{$_-split'\W+'|% *wer|group -ash -ass}-ov h|% K*|sort|gu|%{($d+=($1=$h[0].$_.Count)*($2=$h[1].$_.Count)),($a+=$1*$1),($b+=$2*$2)}|%{$m="[math]::Sqrt("}{}{"$d/($m$a)*$m$b))"|iex}

1. Brian, tell us a bit about yourself and about the way you got to work with PowerShell

I'm a sysadmin/SRE/Systems Engineer working mostly on Windows but going cross-platform whenever I can. I got started with PowerShell relatively late, maybe about 5 years ago now. Before using PowerShell my scripting tasks were mostly done in Ruby; I really enjoyed it as a language. At one of my previous jobs PowerShell was in heavy use (particularly by the Exchange admin), so I started picking it up and then ran with it.

2. Is there any PowerShell project of yours you want to speak about?

I have a module called Idempotion that lets you easily use DSC resources directly in scripts. It's a templated wrapper around Invoke-DscResource that gives you more natural PowerShell function syntax and some additional features (-WhatIf support, etc.).

This is also on the GitHub page but for example, where you would use a line like this:
Invoke-DscResource -Name File -ModuleName PSDesiredStateConfiguration -Method Set -Property @{ DestinationPath = 'C:\Folder\File.txt' ; Contents = 'Hello' }
Idempotion lets you do this:
Set-File -DestinationPath 'C:\Folder\File.txt' -Contents 'Hello'

3. Can you show us the way you tackled Cosine Similarity Task?

This was a pretty challenging task. I had never heard of cosine similarity before, so I had to first learn what that was, learn how to apply it to a string (since it's really about numbers), then come up with an implementation that could be sufficiently golfed.

My first attempt, at 254 characters, defined functions (as ScriptBlocks in variables) for dot product and vector magnitude, and then called them later once the full vectors were realized. To make a long story short, it's much better to calculate the dot product and magnitudes as you go along; it just took me a while to figure out that I could do that with this algorithm.

So actually I want to talk about some of the other challenges.

Unlike traditional code golfing, this is specifically a one-liner contest: no newlines and no semicolons. This really forces you to think hard about how to do discrete tasks (even variable assignment) without stopping for a new statement. The fact that we're starting with two discrete variables for the source strings puts that problem right up front.

So I start by making an array of $t1 and $t2 with the comma operator and then pipe that into ForEach-Object.

Splitting the string with \W+ splits on contiguous non-word characters, so that we get an array of words. After that I really want a lowercase version of the words, and then I want to group them into a hashtable.

To lowercase them, you can call .ToLower(), but calling methods directly is painful in code golf: you need the entire method name, you need parentheses, and if the source is not a variable or a literal you also have to wrap the source in parentheses.

Luckily there's a little-known parameter set for ForEach-Object. Instead of passing a script block, you pass a member name, like a property or method, and then it gets retrieved/invoked for each input object. It even takes arguments for methods. Best of all, it accepts wildcards (the match must be unambiguous). With properties, this is like using Select-Object -ExpandProperty, just much shorter than even select -exp.

("I can't read words."-split'\W+').ToLower()
Can become:
"I can't read words."-split'\W+'|% *wer
I use this extensively in code golf, and I wrote about it in the Tips for Golfing PowerShell thread on Stack Exchange's Code Golf site.

So you'll see me use this A LOT in this task.

Back to the pipeline: after ToLower I'm using Group-Object with -AsHashTable and -AsString. You'll see soon why I want a hashtable. -AsString is needed to get real strings for the hashtable keys (this is annoying). The purpose of grouping is to get the counts of each unique word. Group-Object isn't case sensitive, so we don't actually need ToLower for this; but we need it later.

So the result of this ForEach-Object is two hashtables, one for each of the input strings. The keys of each hashtable are the unique words; the values are an array of each instance of the word. So if the string contained the word "really" twice, the hashtable would contain a key of "really" with a value of @('really', 'really').
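To see this in action, here's a minimal sketch with a made-up sample string:

```powershell
# Group lowercased words into a hashtable: keys are the unique words,
# values are arrays of each occurrence
$h = 'I really really like PowerShell' -split '\W+' | % *wer | group -AsHashTable -AsString
$h['really'].Count   # 2 occurrences
$h['really']         # @('really', 'really')
```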

I'm using the -OutVariable parameter to store the resulting array of hashtables in a variable named h, while also sending it down the pipeline.

The next part, |% K* uses the aforementioned method of using ForEach-Object to expand a property. K* resolves to "Keys". This gets passed to sort and gu (Get-Unique) to get a list of unique keys. Since Get-Unique is case sensitive, this is why I lowercased the words previously. At this point in the pipeline though, all we have are keys. The pipeline objects are just strings, and the original hashtables they came from are not in the pipeline. So that's why I put them in $h.
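A tiny sketch of the K* expansion, with a made-up hashtable:

```powershell
# % K* resolves the wildcard member name to the Keys property
$h = @{ apple = 1; banana = 2 }
$h | % K* | sort   # apple, banana
```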

The next ForEach-Object does a lot of the "work" here. For each key I'm sending in, I need to retrieve the count of that key from the first hashtable ($h[0], which is the words in $t1) and the second hashtable ($h[1], the words in $t2). So $h[0].$_.Count does that for $t1, where $_ is the current key. These are the "pairs" of each vector. Doing it this way, with hashtables, ensures I'll get 0 for words that are in one string but not in another. Originally I was just using groups and missing words because of that. I'm going to need each of these values 3 times so it makes sense to store them in variables. I chose $1 and $2.

Small aside: PowerShell has a neat little quirk whereby you can do an assignment inside of a substatement (parentheses), and it does the assignment while also returning the value that was assigned. This also works with += and -=. This is really critical here.
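A quick illustration of that quirk:

```powershell
# An assignment wrapped in parentheses also emits the assigned value;
# this works with += and -= too
$d = 0
($d += 3)   # outputs 3
($d += 4)   # outputs 7
$d          # 7
```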

To calculate the dot product, I need to multiply $1 and $2 and keep adding the results up as I go along. $d holds my dot product. So:
($d+=($1=$h[0].$_.Count)*($2=$h[1].$_.Count))
keeps accumulating $d with my dot product, while assigning $1 and $2 for what comes next in the current iteration.

Within this iteration I also have to accumulate the values for the magnitude of each vector (or at least the squares of the current vector values). $a and $b hold the accumulating pre-square-root magnitude values for $1 and $2 respectively.

So I need to do 3 assignments here all within this 1 iteration, without semi-colons and newlines, so what to do? Let's just make an array of all three while assigning them, with prodigious use of parentheses and 2 commas!

Now I'm accumulating all the right values at the right time. Problem is, I don't need this array! I could nullify it, but that's a problem too; if I don't return anything to the pipeline, the next element won't run its process block.

Instead, the next ForEach-Object uses all 3 blocks, and the Process block is empty, because at this point I don't care about what's in the pipeline, I just want to finish the work.
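As a minimal sketch of that three-block form:

```powershell
# Three positional script blocks bind to Begin, Process and End;
# the empty Process block means the pipeline input is simply discarded
1..3 | ForEach-Object { 'begin' } { } { 'end' }
# outputs: begin, end
```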

The Begin block was necessary to get to the End block, so I kind of got it "for free". What a perfect place to do another assignment! I'm setting up $m to contain a string that looks an awful lot like a piece of PowerShell code that calls [math]::Sqrt(.

In the End block, we bring it all together. What I need to do here is divide the dot product ($d) by the product of the Sqrt of $a and the Sqrt of $b. I do this by generating a string which ends up containing something like "123/([math]::Sqrt(4)*[math]::Sqrt(5))", and then I pipe that into Invoke-Expression (iex).
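A small sketch of that stringification trick, with made-up numbers:

```powershell
# Build a string that happens to be valid PowerShell, then evaluate it with iex
$m = '[math]::Sqrt('
$d = 12; $a = 9; $b = 16
"$d/($m$a)*$m$b))" | iex   # expands to 12/([math]::Sqrt(9)*[math]::Sqrt(16)) = 1
```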

An intermediate solution I had assigned [math]::Sqrt (the actual method itself) to the variable $m, so that I could call $m.Invoke($a) through the shorter $m|% I* $a, but the stringification with iex is actually way shorter. This kind of thing comes in handy a lot in golfing.

So that's it! You can find more of my golfing on Stack Exchange's Code Golf site (I pretty much only use PowerShell there, even though they are open to any language).

Thursday, November 30, 2017

PowerShell oneliners by guest blogger Simon

During the last PowerShell Oneliner Contest, Simon came up with the shortest working solution to Task 2, and I am happy to say that he promptly accepted to be my guest blogger today.

Let's see his three remarkable answers to my PowerShell contest:

Gwmi Win32_Share|%{"\\$($_|% P*e)\$($_.Name)"}
gv q* -v|% Su* 26 21|%{"The B in$_ stands for$_."}
TASK 3: TEXT MINING - 196 chars
"$t1 $t2"-split'\W'|group|%{$o=$i=$p=0}{if($q=$_.Name){$1,$2=$t1,$t2|%{($_-split'\W'|group|? name -eq $q).Count}}if($o+=$1*$1){$i+=$2*$2}$p+=$1*$2}{if($3,$4=$o,$i|%{[math]::Sqrt($_)}){$p/($3*$4)}}

1. Simon, tell us a bit about yourself and about the way you got to work with PowerShell

I'm working as a consultant in Sweden helping customers plan and build all kinds of automation, mainly using PowerShell. With a background in IT support and IT operations and a strong passion for coding, I'm trying to preach developer practices to the ITPro community, such as source control, continuous integration and testing. Being able to collaborate on PowerShell scripts with your team and have the scripts automatically tested, signed and delivered to where they can be run answers a lot of questions, for example: "how do I know this script hasn't been changed since I ran it last time?".
I started my PowerShell journey quite a few years back when I was in client management. We had tens of thousands of clients reporting to Altiris and McAfee ePO and it was my task to sort and delegate actions based on those reports. When I realized I could parse the reports with PowerShell and do some basic sorting and filtering, that job got a lot easier and I was sold.

2. Is there any PowerShell project of yours you want to speak about?

I think Phosphor is an amazing project that hasn't got the attention it deserves. Phosphor can basically do a Get-Command and generate a web form for each cmdlet. This could be used as a cross-platform implementation of Show-Command or for building simple self-service portals. Imagine having a webserver that uses Kerberos Constrained Delegation to log the user in to a bunch of JEA endpoints and then generates a form for each command available to that user. This way any PowerShell-savvy person in the Operations team could deliver self-service business value to the organization without depending on web developers.
Each time I have an hour of spare time I try to learn some TypeScript so I can fully understand how Phosphor works and hopefully I can get to contributing to the project in the future.

3. I was impressed by your solution to Mandelbrot's riddle. Can you explain your approach to it?

Thank you! I started by trying to find text that was similar in both the question and the answer and saw that the string " Benoit B. Mandelbrot" was the longest text I could find in the question that was repeated in the answer.
I started by just figuring out the shortest way to insert that into a string and got this:
$x = " Benoit B. Mandelbrot"
"The B in$x stands for$x."
Then I tried to find the shortest way to break out my string from the question and got to this:
$x = $Question.Substring(26,21)
"The B in$x stands for$x."
This looked quite good to me, but I wanted to shorten the substring part. This took me to one of my favorite code-golf tricks: using Foreach-Object with the parameter MemberName. Foreach-Object has a not very well known parameter called MemberName that, instead of running a piece of code for each object coming through the pipeline, invokes the named member of each object that has such a member. In this case I want to invoke the member Substring and give it the arguments 26 and 21 using the parameter ArgumentList. Like this:
$x = $Question | Foreach-Object -MemberName Substring -ArgumentList 26, 21
"The B in$x stands for$x."
Now this doesn't look shorter, but we can shorten it! Let's look at the parameters of Foreach-Object using Get-Help:
Get-Help -Name Foreach-Object -Parameter *
This tells us that the MemberName parameter belongs to a set called PropertyAndMethodSet, let's filter on that set:
Get-Help -Name Foreach-Object -Parameter * | Where-Object -Property parameterSetName -like *PropertyAndMethodSet*
Ok, so we have three parameters:
InputObject accepts pipeline input ByValue and will be bound since we are piping $Question.
MemberName is a positional parameter with position 0 so the first positional value/argument will be bound to MemberName. I also happen to know that MemberName accepts wildcards!
ArgumentList is not positional, but it takes value "FromRemainingArguments", meaning that all values/arguments that remain after MemberName is bound will be bound to ArgumentList.
Using Get-Alias we can also find that Foreach-Object has the alias '%':
Get-Alias -Definition Foreach-Object
With this knowledge we can shorten our code significantly:
$x = $Question | % Su* 26 21
"The B in$x stands for$x."
Now that looks good, but $Question is also quite long; what if we could use a wildcard to get the variable? Let's try with Get-Variable, which has an alias of gv. To get the value of the variable and not the variable itself we also use the parameter -ValueOnly, but we don't need to write ValueOnly in full; PowerShell is happy as long as it can figure out what we want. Since ValueOnly is the only parameter starting with a v, -v is enough.
$x = gv q* -v | % Su* 26 21
"The B in$x stands for$x."
Let's make this a one-liner using Foreach-Object, replacing $x with $_ and removing unnecessary spaces, and we get this:
gv q* -v|% Su* 26 21|%{"The B in$_ stands for$_."}

4. Do you see any possible scenario where using Cosine Similarity in PowerShell could help?

Yes absolutely! I'm doing Active Directory migrations and identity projects where we need to match a user or person in one system with a user in another system. This is easy as long as we have a common and unique attribute like email to match on, but when we don't we often need to match on for example givenname and surname. Using cosine similarity I can find persons with similar names.

Monday, November 27, 2017

PowerShell oneliners by guest blogger Ka-Splam

As you know, the PowerShell Oneliner Contest 2017 has its winner: Ka-Splam, from UK. Today I am proud to announce that Ka-Splam has accepted to answer a few questions on this blog.

Take the time to read it all and, if the PowerShell monk in you is able to solve 'The 25 chars contest' hidden in his answers, leave a comment for greater glory.

1. Ka-Splam, it's an honor to have you here today to talk about the impressive oneliners you provided. I know that you are very active on the PowerShell subreddit and regularly compete and win in Shortest Script challenges. Let's get straight to the point: how does one get good at those challenges?

Oneliners take a long time to write, they're often frustrating, they run slowly, they're hard to read, you shouldn't use them in production scripts, and they don't handle edge cases or errors. There is no reason you should want to get involved in them.
Except that they're fun.
If you like that kind of thing - if you enjoy nitpicking over details... hold up a moment! Here's a detail to nitpick: shortest code challenges ('codegolf') are what produce hard-to-read code. Oneliners are clean, clear, readable – what you get when you trim away the fluff and use the tools skillfully.
Happysysadm hinted that I should try and write something the community could learn from, and that's a bit scary; what I'm going to do is solve a simple problem and then shorten it. In detail. I don't know whether the work of writing short code has any real-world use, but I'm convinced that if you are engaged in something (anything), if you are thinking about it, experimenting with it, trying to shape it to your will - you will learn something.
Short-code challenges keep me engaged, keep me digging into edge cases in my understanding of the language in a way that classic problems like "write three pages of code to simulate an object oriented deck of cards" don't. Nobody talked me into this kind of thing by saying it would be good for me; I have always liked using built-in tools with no need for 3rd party dependencies, utilities that consist of a no-install .exe, and reduced "bloat".
Except writing. If you skim-read this article and think it's too long, look how much effort it takes me to shorten this puzzle. Doing that to shorten this blog post would take weeks!
Problem: "things in the root of my C:\ drive, where the name is more than ten characters, just their names", and the long form code is:
$items = Get-ChildItem -Path 'C:\'
$longNames = @()
$items | ForEach-Object -Process {
    If ($_.Name.Length -gt 10)
    {
        $longNames += $_.Name
    }
}
Foreach ($name in $longNames)
{
    Write-Output $name
}
It is 12 lines and 241 characters, which I will keep track of as we reduce it, and the output is four strings:
Program Files
Program Files (x86)
I have made this code a bit laborious, but people familiar with PowerShell should easily follow what it's doing – there is clear separation of the major steps: gathering data into a variable. Stop. Checking the name length. Stop. Collecting the results. Stop. Displaying the output. Stop.
(NB. Beginner programmers will still struggle - there's nothing intuitive from everyday life about `Get-ChildItem` or syntax like `+=` or `$_.Name.Length` or piping into `ForEach-Object`, but they are common patterns in PowerShell. As we shorten the code, we move from common patterns towards rare, novelty patterns).
But I don't want to type so much code to satisfy a momentary curiosity about folder name length and this is not an engaging or interesting puzzle on its own - once you've got past the basics of PowerShell there is no challenge. Instead of looking for a harder puzzle, we can make this more challenging by writing it in less code. And it's an open ended challenge: there's no fixed place to get to, no pass or fail, and you're mostly competing against yourself. That's something I like about it.
How do we write it in less code, what is that process?
You know how putting five numbers in order is easy and people learn how to sort numbers into order some time during childhood, learning by example? Telling someone steps to sort five numbers in order is much harder - that's computer science academic work. Well, I know how to write shorter code but I don't know how to tell other people how to write shorter code. My examples here are step by step progress, and hopefully you can learn by example.
Many of you will look at the last four lines of the long solution and think they "do nothing". Great, just writing `$longNames` is enough for PowerShell itself to pull the items out of the array and write them to the output, and they will end up on screen. It was going to do that anyway, there's no need for us to explicitly write that:
$items = Get-ChildItem -Path 'C:\'
$longNames = @()
$items | ForEach-Object -Process {
    If ($_.Name.Length -gt 10)
    {
        $longNames += $_.Name
    }
}
$longNames
9 lines, 188 characters.
But it needs a bit more understanding of the language to know why writing a variable name on its own does anything, and what it does. That said, it is a common pattern for PowerShell users familiar with functions.
As well as offloading work onto the computer, we can offload work to the programmer's brain – this code stores the directory listing in a variable named `$items` and pulls it straight out again, a needless double step going A to B to C. We can connect the listing output to the loop input with a pipe and skip right from A to C, but then understanding the code requires that the programmer is comfortable enough with the way the language works to follow A to C with nothing in between to hold on to:
$longNames = @()
Get-ChildItem -Path 'C:\' | ForEach-Object -Process {
    If ($_.Name.Length -gt 10)
    {
        $longNames += $_.Name
    }
}
$longNames
8 lines, 171 characters.
That doesn't need more understanding - we already used the pipeline – but it's my observation from code and questions on the internet that this change is hard for people. This changes the nature of the program from "taking clear, distinct steps, one at a time" to "a flow from start to finish, however far it goes, all at once". Knowing that pipelining things is possible doesn't seem to be enough - it needs quite a lot of practice for this "let the data slide through" code-shape to become comfortable and familiar.
The same thing happens from here on down – more understanding means greater leaps. A to C becomes A to E, then A to J. The heart of all "code readability" arguments might be whether the reader has enough familiarity of the patterns used in the code, rather than whether the code itself is "readable", too long, or too short.
From here, part of me wants to squash the `if` into a single line, part of me wants to get rid of the `@() / +=` combination and have loop output to array in one go – that's an example of what I was just writing, sending loop output into a variable without any intermediate steps is a pattern that looks weird from other languages, but gets more familiar with use - and part of me wants to get rid of the separation of "storing the names, then displaying them" by merging that into one "find and display them" step:
$longNames = Get-ChildItem -Path 'C:\' | ForEach-Object -Process {
    If ($_.Name.Length -gt 10) { $_.Name }
}
$longNames
4 lines, 129 characters.
This uses the same understanding from earlier ("why $longNames does something when written on its own") to understand how writing `$_.Name` does something on its own, even though the context is different. From directory listing to variable, no stops along the way. A to D.
Have you noticed these changes work on different layers of the code? Some of them are purely visual – removing line breaks from the `if` made no difference to the way the code works. Others change what's happening behind the scenes - connecting `Get-ChildItem` to `ForEach-Object` removed an array and a variable name, uses less memory, without affecting the output. That's something else I like – shortening the code involves understanding up and down the layers, from PS reading the code, to what it does behind the scenes, to how those parts interact with each other, to what exactly needs to happen to the data to solve the problem.
If we can remove newlines around the `if`, let's do that a bit more and put `$longName` up on the previous line, with the rest of the code:
$longNames = Get-ChildItem -Path 'C:\' | ForEach-Object -Process {
    If ($_.Name.Length -gt 10) { $_.Name }
} $longNames
Oh no!
ForEach-Object : Cannot bind parameter 'RemainingScripts'
Nothing happened when I took the newlines away from the `if` statement, but take this one away and it won't run. This specific problem plagued me for a while with short code challenges, and following up on that error and why it happens was interesting and useful. Something else I'm not showing in this article is how many times I try things which don't work. We'll undo that change, and let's get rid of the names array instead, and output directly:
Get-ChildItem -Path 'C:\' | ForEach-Object -Process {
    If ($_.Name.Length -gt 10) { $_.Name }
}
3 lines, 98 characters.
Some of you have been hitting your heads on why I'm using a loop and a test, instead of merging them together and using `Where-Object` - why two steps instead of one?? OK let's merge those:
Get-ChildItem -Path 'C:\' | Where-Object -FilterScript {
    $_.Name.Length -gt 10
} | ForEach-Object -Process {
    $_.Name
}
5 lines, 136 characters. Longer.
I joke, you probably expected this shape:
Get-ChildItem -Path 'C:\' |Where-Object {$_.Name.Length -gt 10} |Select-Object -ExpandProperty Name
1 line, 99 characters (yay! Dropping below a 100 char cutoff is pleasing)
This is another dimension of it – my early changes made it shorter, no question. But we're now at the point where some changes to make things shorter don't quite do the same thing, and so they need another change elsewhere to compensate, and the whole thing ends up longer. Squashing things into a small space is easy at first, but after a while every push in one place makes things pop up somewhere else, progress slows down, and that's a hint that you're getting past the easy gains. Maybe it's a good place to stop?
I had to do something: the `if` test wasn't just checking the length, it was also extracting the `.Name` property. `Where-Object` can only do the length test, so the Name property needs to be handled in new code. That's another dimension: going from "script" to "oneliner" towards "codegolf" means testing the rules of what's allowed as output. If the full directory listing output is OK, then we can cut the entire last chunk from this code; if it's mandatory to output just the names then we have to get the Name out. (Tip: argue with whoever set the puzzle that your rule-bending output should be valid ;-))
And the shape of the code changed by merging the loop with the `if` test – it now uses `Select-Object`. Writing short code requires knowing several different ways to do things, so practicing writing code in different ways means you can "choose the shortest way you can think of" compared to people who only know one way.
Back to my original nitpick right up at the top, I'm going to call that last example a oneliner: there's no variables, no loops, just three pipeline blocks neatly connected, one each for the three stages of the problem – get data, process it, output it. No stops. It's 8% of the lines and 40% of the typing, and does the same thing.
Let's not end here, let's start here – assuming I still didn't want to type all that at the console, let's go through the same tricks again:
1) Offload work to the computer
2) Offload work to the programmer
3) Use greater understanding of the language to do (1) and (2)
That means:
• `-ExpandProperty` - parameter can be shortened; if PowerShell can match what you type to a single parameter, it will. Tiny bit of language knowledge, quite common.
• Aliases: `where` and `select`, very common knowledge.
• Default parameters: with `Get-ChildItem` `-Path` is assumed for the first parameter, if you don't type it. So commonly used, I used it all the time before I even knew that's what was happening. Don't tell the computer to go that way, if it was going that way anyway.
Get-ChildItem 'C:\' |Where {$_.Name.Length -gt 10} |Select -Expand Name
71 characters.
We just knocked a quarter of it off, and it's still almost the same. I'm happy with this; it's the kind of code I'd write at the command line, off the top of my head, and share with people, but not use in production scripts.
Repeat: same again – make the computer do work, use greater knowledge of the language, etc. etc.
• Aliases: `?` and `gci`, quite common.
• In parameter parsing mode strings without spaces don't need quotes, again common.
• `-Exp` is still unique, instead of `-Expand`, also common.
These changes make it:
gci c:\ |? {$_.Name.Length -gt 10} |select -exp Name
52 characters.
Another 30% reduction. Looking like a traditional "oneliner" now, getting unreadable, approaching "codegolf" territory. There isn't a distinct cutoff that I know of, but you can see it's nothing like the earlier code, and yet you followed the blog post down this far step by step and you can also see it's exactly like the earlier code.
Code so far has enough whitespace to be clear and readable, but "shortest code" means deleting every space you can. The space in `c:\ |` can go and everything will work. Remove the space in `gci c:\` and it won't work. Removing the space from `} |select` is fine; removing the space from `-exp Name` isn't. Trying these will give you errors, and exploring why those errors happen teaches you something about PowerShell.
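Those rules laid out explicitly — each line below is the same pipeline with one space changed, and the "error" lines show what happens when you go too far:

```powershell
gci c:\|? {$_.Name.Length -gt 10} |select -exp Name   # OK: spaces around a pipe are optional
gcic:\ |? {$_.Name.Length -gt 10} |select -exp Name   # error: 'gcic:\' is read as one command name
gci c:\ |? {$_.Name.Length -gt 10}|select -exp Name   # OK: '}|' needs no space between them
gci c:\ |? {$_.Name.Length -gt 10} |select -expName   # error: '-expName' matches no parameter
```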
Now we're at the point where there's a couple of spaces I could trim to drop just below 50 chars, but that must be almost as far as it goes, right? We list the directory contents, check the name length, expand the property. What else is there to get rid of?
How far can it go? This is where it gets fun and challenging.
# remove the easy spaces, and gci has an alias 'ls' for -4 chars
ls c:\|?{$_.Name.Length -gt 10}|select -Exp Name    #48 chars

# select can take a pattern for the name as long as it's unique -2
ls c:\|?{$_.Name.Length -gt 10}|select -Exp N*    #46 chars

# comparisons and other operators don't always need spaces -2
ls c:\|?{$_.Name.Length-gt10}|select -Exp N*    #44 chars

# If you know how PowerShell handles properties on Arrays, rewrite for -8
(ls c:\|?{$_.Name.Length-gt10}).Name    # 36 chars

# Or, ForEach-Object can expand a property, a little known feature, -2
ls c:\|?{$_.Name.Length-gt10}|% N*     #34 chars
35% gone. And we dropped below three blocks, momentarily, with a big rewrite – after going towards wildcard patterns, changing to a parentheses wrapper and back to the full `.Name` still saved a lot. Come on brain, what else can we dig up? TYPES! I haven't mentioned casting yet, and that is a huge part of it. Look at this:
gci c:\ | ForEach-Object { $_ }     # directory listing output
gci c:\ | ForEach-Object { "$_" }    # names only !
If you force the output items to be a string, they become just the name, not the whole directory listing, or the full path. For this problem, that's convenient. For others, it isn't. In other code, casting between types is incredibly common and tricks to cast between arrays, strings, numbers, are very useful. Let's abuse the string cast and get rid of calling `.Name` entirely:
# Now $_ is a string name
ls c:\|%{"$_"}|?{$_.Length-gt10}    # 32 chars

# but wait, there's a trick with `Where-Object` to avoid the scriptblock
# we couldn't use it for a double lookup Name.Length but now 
# we have unlocked it, because we're working with one property, -2
ls c:\|%{"$_"}|? Length -gt 10    # 30 chars

# and that trick can take patterns for the property name, -3
ls c:\|%{"$_"}|? Le* -gt 10    # 27 chars
What else do I happen to know about strings, casting, types, pattern matching? Regular Expressions! I skip over another pile of background knowledge and edge case behavior, and show:
# Completely different array filtering approach, based on a regular expression
# matching names longer than 10 characters
(ls c:\)-match'.{11,}'|%{"$_"}    # 30 chars (boo)

# use a previous short version of `%` again, -3
# "going back to something I was using" happens a lot
(ls c:\|% n*)-match'.{11,}'    # 27 chars (Q: why are the parens needed?)
Wait. This is getting silly. Almost halved the 52. And if you just saw one of these answers pasted into a web page, you wouldn't see the pages of "getting it a bit shorter each time" happening earlier. It would look like "What, who can just write that, one-liners are awful".
Anyway, I try a lot of things. I spend a lot of time on it. I drag in as much knowledge as I can find, all that matters is getting the right output. It's a form of minimalism, if you can scrape by with a skeleton crew of code and pieces falling off everywhere, as long as it gets past the finish line once, everything else can go. At the same time, it's not minimalism – if you can spend 1GB of memory and 5 seconds of timewasting to save 1 character, do it and be grateful for it.
Two completely different approaches, both hitting 27 characters.
This fascinates me; the early code and the tiny code are so different to humans, but do the same thing.
$longNames = @()
Get-ChildItem -Path 'C:\' |
ForEach-Object -Process {
    If ($_.Name.Length -gt 10) {
        $longNames += $_.Name
    }
}
$longNames
Is the same as this
ls c:\|%{"$_"}|? Le* -gt 10
and this
(ls c:\|% n*)-match'.{11,}'
People find one more readable, more writable, but the computer is never confused. This makes me feel there's something really interesting underneath this about "expressing computation". How can they be so different and "do the same"? How much of the code is important, how much is fluff? What features could this language be missing that could make that computation shorter or clearer?
Is that it? Can it go below 27 characters? (hint: yes, I have a 25 .. so far)
One last tip for those willing to compete in codegolf competitions: spend a lot of time on them, and then present them as a finished script that looks effortless. And practice rewriting and rewriting in different ways. And shamelessly steal every code-shortening idea from other people that you can possibly find, put them ALL in.

2. Tell us a bit about yourself and about the way you got to PowerShell.

I have long disliked writing code as an amateur and sharing it with people, only for them to say "I haven't got Python, or a Java runtime" or "how do I install that?" or "what's the Visual Basic runtime?". I envied the Linux distributions with their built-in C compilers and Perl and Python, almost as much as I didn't enjoy VBScript.
When PowerShell came to Windows I jumped on it. Windows Vista, PowerShell 1 or 2. At last a powerful scripting language everyone would have by default!
I didn't understand it, and I dropped it.
A few years later, 2012-ish, working in IT with Exchange requiring it, and with newer versions getting better and better, it grew on me. Then it took over from Python as my everyday playing-around language; now I'm a J. Snover's Witness.

3. Can you tell us your approach to Task 1?

Task 1 mandated the use of `Win32_Share`, and the shortest way I know is with `gwmi`. After some trial and error, I thought of the forced-cast-to-string approach mentioned above and checked what happens - one output object becomes:
That's so close to the required output format, string cutting to get rid of the middle bit has got to be one of the shortest possible routes to an answer. And trim the annoying trailing quote. Luckily I like Regular Expressions, so a bit of `-replace` experiments later and I have something that seems roughly as good as I could ever get it.

4. How did you cope with Task 2?

I stared at it, and I saw it needed `$question` to be used and print this output:
The B in Benoit B. Mandelbrot stands for Benoit B. Mandelbrot.
The words "in, stands, for" are not in $question so they must be in my code. "Benoit B. Mandelbrot" is in the answer twice; remove that duplication, fin.
In my head was the shape "output string, get the first `Benoit B. Mandelbrot` from `$question` with string manipulation (== probably regex), use it and store and re-use for the second place it appears, probably in a sub-expression in the string".
And then trial and error until I had it quite short. As it happened, SubString came out shorter than regex. This rarely happens. The most pleasing adjustment in my answer was taking the space in front of `Benoit` from the input as well.
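Expanded from the compressed entry, the shape described above looks roughly like this — the index 26 and length 21 pick out ' Benoit B. Mandelbrot', leading space included:

```powershell
$question = 'What is the middle name of Benoit B. Mandelbrot?'
# Assign inside the subexpression, use the value once, then reuse $b for the second occurrence
"The B in$(($b = $question.Substring(26, 21))) stands for$b."
```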

5. Your oneliner for Cosine Similarity is impressive. Can you explain how you got to such a short solution?

Nope ;-) The whole of this article is trying to answer this question. All that – trial and error, codegolf experience, weird language behavior edge cases, looking at the language specification, spending lots of time, knowing a pile of short-code tricks – that's how.
This one was scary. I hadn't heard of it before, I'm not a skilled mathematician, and the Wikipedia page math-terminology-explanation was no help. It took me quite a while of Googling before I decided to look at the Reddit discussion, expecting other people to have finished it already. Other people were puzzled and that was a bit of a relief.
After finding C# examples, explanations, discussions, I started to get a clue. It's word counting, adding, dividing, then I could start to make sense of the Wikipedia equation – A1*B1 + A2*B2 … An*Bn on the top. A1 squared + A2 squared … An squared on the bottom left. Same for B on the bottom right. I can make those work.
Something which isn't covered in the previous few pages, is the way I split it into smaller parts – lots of testing how to split the input strings into words, lots of testing ways of counting words in sentence 1 vs sentence 2, before I started combining the code together and trying for a full answer. Then a lot of struggling to get past the "no semi-colons" restriction.
Particular techniques in this answer:
$a=1   # assign a variable
($a=1)  # assign variable, and use value 1 right here in the code as well
foreach () { } $t    # this works: a statement can be followed by more code, as noted earlier
... | ForEach-Object { } $t    # but this pipeline structure doesn't work
Instead of writing these:
$lowerLeft += $a * $a
$lowerRight += $b * $b
with a semicolon:
$lowerLeft += $a * $a; $lowerRight += $b * $b
It was a moment of insight to use:
$throwawayVar = ($lowerLeft += $a * $a), ($lowerRight += $b * $b)
Programming by side effect, making an array of the results and ignoring it. It still looks like a very redundant answer to me, three big repeating patterns, I was expecting someone else to get rid of them and be 1/3rd shorter.
It was only by coding it that I came to understand what it did, and spent a while pacing up and down explaining to myself what an N-dimensional cosine means, why it makes any sense at all relating to documents and words, how it measures document similarity, and imagining dogs pulling in the 'bacon' direction vs the 'dog park' direction. So that was fun.


One dimension:

0 --- 1 --- 2 --- 3 --- 4 --- > Bacon.

A dog which pulls strength 3 in the bacon direction is similar to another dog which pulls strength 3 in the bacon direction. They are different from a dog which pulls strength 1 in the bacon direction.
A document which says "bacon bacon bacon" is similar to another "bacon bacon bacon", but they are different from "bacon".

Two dimensions:

/\ Dog Park (up)
0 --- -> Bacon (right)

Dogs which pull strongly towards bacon are similar. Dogs which pull in both directions equally, can't make up their minds, and go off at an angle – are similar. Dogs pulling in the Dog park dimension are similar.
Dogs pulling towards bacon are a bit different from dogs pulling in both directions and going at an angle.
Dogs pulling towards bacon only are very different from dogs which pull towards dog park only.

Exactly how strongly they pull in each direction, determines which angle they go off at, mostly bacon, mostly dog park, or split the difference.
A document which says "bacon bacon park park bacon park" is similar to "park park park bacon bacon bacon". It pulls in both dimensions and goes off at an angle. It's a bit different from "bacon park park park" which pulls more towards the park. It's very different from "park park park park park" which pulls only towards the park.
Each word is a direction, a dimension. The word count is how strongly the document "pulls" in that direction. After all the pulls combine, the document goes off at an angle. Different word counts make different angles. This equation works out "how different". I can't visualize more than 3 dimensions, but I can up to 3 and it now makes sense that it works. More words, more dimensions, same idea.
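Pulled apart into readable form, the computation matches the Wikipedia equation described earlier — word counts as vector components, dot product on top, product of the magnitudes on the bottom. Variable names here are mine, not the contest entry's:

```powershell
$t1 = "I really like scripting with PowerShell"
$t2 = "PowerShell is a really really nice scripting language"

$words = ($t1, $t2) -split '\W+' | Sort-Object -Unique
$dot = $sumA = $sumB = 0
foreach ($w in $words) {
    $a = ($t1 -split '\W+' -eq $w).Count     # how strongly $t1 "pulls" towards word $w
    $b = ($t2 -split '\W+' -eq $w).Count     # how strongly $t2 "pulls" towards word $w
    $dot  += $a * $b                         # A1*B1 + A2*B2 ... on the top
    $sumA += $a * $a                         # A1^2 + A2^2 ... bottom left
    $sumB += $b * $b                         # B1^2 + B2^2 ... bottom right
}
$dot / ([math]::Sqrt($sumA) * [math]::Sqrt($sumB))
```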

6. What's your take on Powershell for one-liners if compared to other languages you might know?

For shell-use oneliners there isn't any other language quite in the same category, so it's great. For general-purpose programming one-liners I like it: plenty of convenience syntaxes, some things that annoy me. I tend to forget it's an admin scripting language, and instead wish it pulled in every convenience feature from every language I'm dimly aware of. The codegolf site I play on is multi-language, and in my experience the answers (other people's answers - I don't know most of the languages!) separate themselves into tiers:
• Really short: single-purpose languages designed for golfing – GolfScript, CJam, many others.
• Short: the major scripting languages - Perl, Python, JavaScript, PowerShell, Ruby, etc.
• Medium: mainstream languages, older languages with fewer built-in conveniences (C#, Java, Lisps)
• Long: novelty answers - SQL, etc.
PowerShell has a reputation for being long and wordy, but the language designers did a fantastic job with the "elastic syntax" to make those things optional; with automatic module imports, type accelerators, being a shell, and having direct access to the filesystem, it is strong. Other languages save by not needing `$` on variable names, but then can't put variables in strings easily. Or they have better string slicing but worse regex subroutines. PowerShell tends to be slightly behind the other popular languages, often because you need quotes and parens so much, or don't have quite as easy int->char->string conversion.
What really seems to make the difference is whether language X happens to have a convenience feature that fits the question. Often the languages trade places for different problems, so JavaScript might have a short way of doing one thing, but C# has a lambda expression and Linq combo which surpasses it for other things, and Mathematica has an inbuilt list of country names so it takes a winning place in some specific question, etc. PowerShell is fun, competitive enough.

Friday, November 24, 2017

Announcing the winner of the Powershell Oneliner Contest 2017

The PowerShell Oneliner Contest 2017 is over and we have a winner. The three tasks ranged from pretty easy to extremely difficult, and many quickly discovered the plethora of barriers that make it difficult to successfully manipulate the pipeline, especially when dealing with math problems. Some, though, came up with working solutions that are short and imaginative.

Let me announce then the top 5 PowerShell monks in the contest:

1st - Ka-Splam from UK

2nd - Simon Wåhlin (@SimonWahlin) from Sweden
3rd - Brian Scholer (@BrianScholer) from USA
4th - Nathan Ziehnert (@theznerd) from USA
5th - Bartek Bielawski (@bielawb) from Poland

Congratulations to Ka-Splam for becoming the 2017 oneliner champion with the following solutions:

gwmi win32_share|%{$_-replace'root.*?"|"'}
"The B in$(($b=$question|% s*g 26 21)) stands for$b."
TASK 3: TEXT MINING - 179 chars
foreach($w in $t1,$t2-split($x='\W+')|sort -u){$n=,($t+=($a=($t1-split$x-eq$w).count)*($b=($t2-split$x-eq$w).count)),($l+=$a*$a),($m+=$b*$b)}$t/([math]::Sqrt($l)*[math]::Sqrt($m))
Congratulations again to Ka-Splam: the PowerShell ebook from Mike F Robbins is yours!

I would like also congratulate Simon Wåhlin for writing the shortest answer to Task 2 - 50 chars:
gv q* -v|% Su* 26 21|%{"The B in$_ stands for$_."}
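For readers decoding that entry, here is my rough expansion of it: `gv` is `Get-Variable`, `-v` is `-ValueOnly`, and `% Su* 26 21` is a wildcard-matched method call to `Substring(26, 21)`:

```powershell
Get-Variable -Name q* -ValueOnly |               # fetch $question without typing its full name
    ForEach-Object Substring 26 21 |             # .Substring(26, 21) -> ' Benoit B. Mandelbrot'
    ForEach-Object { "The B in$_ stands for$_." }
```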
I am also pleased to announce that the top three competitors have accepted to be my guests on this blog, so stay tuned for their posts: believe me, there is a lot to learn ahead, from their approach to the tasks to a profusion of tips and tricks to make your PowerShell high caliber!

I can't close this post without a special thanks to Jakub Jares from the Czech Republic (@nohwnd) for his great involvement in the setup of the Pester-based test environment.

Monday, November 13, 2017

Powershell Oneliner Contest 2017

Year after year I see many people who were old-fashioned mouse clickers adopting PowerShell and thus the average skill level is rising. At the same time, pushed by the arrival of DevOps, a lot of people who are already pretty confident with code are coming to join the ever-growing community of IT professionals that use PowerShell.

So what better way to test your progress than a tricky PowerShell contest, with the chance of winning a prize?

Let me announce then the third edition of the PowerShell Oneliner Contest:

Before I announce the three tasks you will have to cope with, let me remind you of the spirit of this game:
  • this must be a unique learning experience, where solutions posted by experienced scripters will benefit the whole community once they are made public
  • novice scripters will have to show a lot of persistence in order to get solutions that work, respect the rules and be creative
  • having fun is of paramount importance while bending the command line in a creative way

  • The contest is split into three tasks of increasing difficulty
  • Each task consists of a simple scenario for which you have to produce the shortest possible oneliner solution
  • You can use every PowerShell version, just state in the comment which version you tested it with
  • No semi-colons
  • Backticks are accepted for readability
  • To submit your entry, create a secret Gist for each task solution and post the URL for the Gist as a comment to this blog article
  • Submitting an entry that is a public Gist will automatically disqualify the entry and participant
  • Sign your comments so that I know who's who and so that I can get in touch with the winner
  • Entries (comments) will not be made public until after the submission deadline
  • The first person to produce the shortest working solutions to a task will get 1 point, the second 2 points, the third 3 points and so on
  • The person with the lowest total score over the three mandatory tasks will be the winner
  • The contest will run for nine days beginning today until November 21st 12:00 noon (GMT)
  • The winner will be announced on Friday, November 24th on this blog
  • I'll be the only judge


Windows Management Instrumentation is an incredibly useful technology for exposing system information. Being able to interact with it from PowerShell is one of the first things we all learn. Your first task is to write the shortest possible oneliner that extracts the UNC path of all the local shares from the Win32_Share class.

Expected output:
The use of Win32_Share class is mandatory.


I still remember the first time I saw a computer-generated fractal (it was a Barnsley fern), and have always been impressed by those patterns that repeat and repeat again. What I did learn recently is that the 'fractal' term was invented by a legendary Polish mathematician, Benoit Mandelbrot, who added a B. in the middle of his name: supposedly he intended his middle B. to recursively mean Benoit B. Mandelbrot, thereby including a fractal (his mathematical discovery) in his own name.

Your mission is to write the shortest possible oneliner that answers this question
$question = 'What is the middle name of Benoit B. Mandelbrot?'
by returning
The B in Benoit B. Mandelbrot stands for Benoit B. Mandelbrot.
Reuse of $question variable is mandatory.


There's a lot of hype today around data mining techniques and therefore I want you to see how good you are at using PowerShell for a special kind of task. Given the following two text variables
$t1 = "I really like scripting with PowerShell"
$t2 = "PowerShell is a really really nice scripting language"
write a oneliner that is capable of determining text likeness using Cosine Similarity and returns
The returned value must be 1 if $t1 and $t2 are identical vectors (same words) and 0 if $t1 and $t2 have no words in common.
The comparison must be case insensitive, meaning that PowerShell and powershell are the same word. The string must be split at any non-word character and only unique elements of the resulting collection are compared.
The oneliner should work against any other pair of text variables, for instance
$t1 = "Unless you work hard, you won’t win."
$t2 = "You must work hard. Otherwise, you won’t win."
must return


Being able to test your solution and see that it respects the rules is of paramount importance before submitting it. If you are taking part in this contest I suppose you must not be new to Pester. Fellow MVP Jakub Jares (@nohwnd) was kind enough to provide a solution validation tool based on his Assert module:

Use the three provided test files (one per task) as stated in the instruction file.

A word of notice: using those tests is not mandatory, but my opinion is that we should all be continuously learning, so if you are new to GitHub and to Pester, I suggest you seize the occasion to learn something useful. You'll just be doing yourself a favor if you use Pester to unit test your oneliners, because human error can always happen and it's a pity if you spend a lot of time providing an answer that does not actually work.

If you are interested in Jakub's Assert module, you can find it here:


Yes, there's a prize! Fellow MVP Mike F Robbins will donate one copy of The No-Nonsense Beginner’s Guide to PowerShell ebook for the winner of the contest. Thanks in advance to Mike for being always so keen to give his contribution to this kind of initiative. His book is one of the best around, as I explained in a previous post.

As a bonus, and if the winner agrees, he/she will intervene as a guest blogger on this blog and will explain how to solve this kind of PowerShell riddle.

If you want to spread the word about this PowerShell contest, feel free to tweet about it. You can use the hashtags #poshcontest2017 and #powershell so that other competitors can share their thoughts (not the solutions of course!).

Have fun!

UPDATE 24th Nov: We have a winner! Check it out here.

Friday, June 30, 2017

A PowerShell oneliner to retrieve extension deployment status in Azure VMs

As I explained in the previous post, Azure allows the provisioning of extensions to cloud-hosted VMs through a VM Agent.

If you have ever gone through a large deployment of extensions to existing Azure VMs, via the Set-AzureVMDscExtension cmdlet, you have probably sought a way to check for the deployment status in a simpler manner than browsing into each and every VM in the Azure Portal:

Here's how PowerShell answers this need, and it does it in just one line of code. The core cmdlet here is Get-AzureRmVM, which is generally used to report VM status:
Get-AzureRmVM -WarningAction SilentlyContinue

ResourceGroupName Name   Location   VmSize         OsType  NIC       ProvisioningState
----------------- ----   --------   ------         ------  ---       -----------------
RG-AD             adVM   westeurope Standard_D2_v2 Windows adNic     Succeeded
RG-AppServices    VM0    westeurope Standard_A1    Windows nic0      Succeeded
RG-AppServices    VM1    westeurope Standard_A1    Windows nic1      Succeeded
RG-DSC            VM2012 westeurope Standard_A1    Windows vm2012373 Succeeded
RG-DSC            VM2016 westeurope Standard_A1    Windows vm2016964 Succeeded
As a side note, I am setting the WarningAction parameter to SilentlyContinue simply because I don't want the following warning message to make my output less readable:
WARNING: Breaking change notice: In upcoming release, top level properties, DataDiskNames and NetworkInterfaceIDs, will be removed from VM object because they are also in StorageProfile and NetworkProfile, respectively.
I know very well that releases of the Azure cmdlets come at a fast pace, so I can safely suppress this message.

Here's the oneliner I wrote:
Get-AzureRmVM -WarningAction SilentlyContinue | % {

(Get-AzureRmVM -ResourceGroupName $_.ResourceGroupName -name $_.Name -status -OutVariable x -WarningAction SilentlyContinue).extensions | % {

$_ | select @{Name="VmName";Expression={$x.Name}},@{Name="Extension";Expression={$_.Name}},@{Name="Level";Expression={$_.statuses.level}},@{Name="Status";Expression={$_.statuses.displaystatus}},@{Name="Time";Expression={$_.statuses.time}}

}

}
The output can be piped to Format-Table if I want a table view, or to Out-Gridview, if you prefer:
Get-AzureRmVM -WarningAction SilentlyContinue | % {

(Get-AzureRmVM -ResourceGroupName $_.ResourceGroupName -name $_.Name -status -OutVariable x -WarningAction SilentlyContinue).extensions | % {

$_ | select @{Name="VmName";Expression={$x.Name}},@{Name="Extension";Expression={$_.Name}},@{Name="Level";Expression={$_.statuses.level}},@{Name="Status";Expression={$_.statuses.displaystatus}},@{Name="Time";Expression={$_.statuses.time}}

}

} | Format-Table * -AutoSize

VmName Extension                  Level Status                 Time                
------ ---------                  ----- ------                 ----                
adVM   CreateADForest              Info Provisioning succeeded 6/30/2017 9:13:55 AM
myVM0  PuppetAgent                 Info Provisioning succeeded                     
myVM0  Site24x7WindowsServerAgent  Info Provisioning succeeded                     
myVM1  IaaSAntimalware             Info Provisioning succeeded                     
VM2012 DSC                        Error Provisioning failed    6/29/2017 2:35:39 PM
VM2012 IaaSAntimalware             Info Provisioning succeeded                     
VM2016 DSC                         Info Provisioning succeeded 6/29/2017 9:53:20 AM
I could also think of showing just the failures:
Get-AzureRmVM -WarningAction SilentlyContinue | % {

(Get-AzureRmVM -ResourceGroupName $_.ResourceGroupName -name $_.Name -status -OutVariable x -WarningAction SilentlyContinue).extensions | % {

$_ | select @{Name="VmName";Expression={$x.Name}},@{Name="Extension";Expression={$_.Name}},@{Name="Level";Expression={$_.statuses.level}},@{Name="Status";Expression={$_.statuses.displaystatus}},@{Name="Time";Expression={$_.statuses.time}}

}

} | ? Status -match 'Failed'

VmName    : VM2012
Extension : DSC
Level     : Error
Status    : Provisioning failed
Time      : 6/29/2017 2:35:39 PM
As I can see in the last output, the DSC extension failed to deploy on one of my Azure VMs, and I need to take corrective action. I will discuss in a future post how to automatically solve this kind of issue while staying on the same line of code.

If you have any technical question on the way I implemented this line of code, feel free to get in touch with me. Feel free to share if you like the content of this post.

Thursday, June 29, 2017

How to configure an Azure VM using PowerShell DSC

As far as I can see, today many companies have started moving part of their workload to Azure VMs and are looking for a way to easily manage them just like they were still sitting in their datacenters. If you have been practicing PowerShell for a while, you should know well that a while back Microsoft introduced a technology named PowerShell Desired State Configuration (DSC).

Still today, many people that have been toying around with DSC aren't aware that it is built into Azure and that they can use it to configure Azure VMs with a workflow similar to - if not simpler than - the one they used while running VMs on-premises.

Let's see how this works and how easily a desired configuration can be pushed to your VMs.

First of all you need to know that the key component in the process is a VM Agent. This VM Agent is a set of lightweight software components running within the OS (be it Windows or Linux) of an Azure VM, presented as an extension in your VM configuration.

There are three background processes composing the VM Agent in a Windows VM:

- WindowsAzureGuestAgent.exe
- WaAppAgent.exe
- WindowsAzureTelemetryService.exe

These processes by default log their activity to the folder C:\WindowsAzure\Logs\, so if you have any trouble just have a look there:

It's through this VM agent that you can push your configuration to the cloud-hosted VM.


Everything starts with a DSC configuration. For the sake of this post I will just re-use the classic resource in charge of setting up the IIS feature:
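A minimal sketch of such an IIS-installing configuration — the names here are my own choices, not necessarily the ones used in the original IISInstall.ps1:

```powershell
# Hypothetical IISInstall.ps1 - ensures the Web-Server (IIS) Windows feature is installed
Configuration IISInstall
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}
```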

Then you have to publish this DSC configuration to an Azure blob storage account by running the Publish-AzureRmVMDscConfiguration cmdlet. This cmdlet takes as input three mandatory parameters which are a Resource Group, a Storage Account and the path to a Configuration file to use:

Here's how to setup things for Publish-AzureRmVMDscConfiguration to succeed:
$ResourceGroupName = 'RG-DSC'

$StorageAccountName = 'dscconfigstorage'

$ConfigurationPath = "D:\DSCresources\IISInstall.ps1"

$ResourceGroup = Get-AzureRmResourceGroup -Name $ResourceGroupName
The configuration file IISInstall.ps1 is actually a file declaring the expected resources you want your Azure VMs to be hosting.
$ZipUrl = Publish-AzureRmVMDscConfiguration -ConfigurationPath $configurationPath -ResourceGroupName $resourceGroupName -StorageAccountName $StorageAccountName -Force
As you can see I am assigning the output of Publish-AzureRmVMDscConfiguration - which is a URL - to a $ZipUrl variable so that I can re-use it in the next stage. Using the -Force switch is needed when you are uploading a newer version of your configuration file than the one already stored in Azure.
After the execution of this cmdlet, we will be able to see the Container hosting the file in the Portal:
Now, before we attach the configuration to the VM, there is a bunch of information that we need to retrieve in order for Set-AzureRmVMExtension - which is the key cmdlet here - to work.


Let's start with checking the DSC extension to use for our task:
Get-AzureVMAvailableExtension -ExtensionName DSC

Publisher                   : Microsoft.Powershell
ExtensionName               : DSC
Version                     : 2.26
Label                       : DSC
Description                 : PowerShell DSC (Desired State Configuration) Extension
PublicConfigurationSchema   : 
PrivateConfigurationSchema  : 
IsInternalExtension         : False
SampleConfig                : 
                              "properties": {
                                  "publisher": "Microsoft.Powershell",
                                  "type": "DSC",
ReplicationCompleted        : True
Eula                        :
PrivacyUri                  :
HomepageUri                 :
IsJsonExtension             : True
DisallowMajorVersionUpgrade : False
SupportedOS                 : 
PublishedDate               : 6/6/2017 7:20:22 PM
CompanyName                 : Microsoft Corporation
Regions                     : All regions
We got here most of the important information needed later for the setup, such as the extension version: by default this cmdlet will return the most recent version of an extension, but you could get the whole list with:
Get-AzureRmVMExtensionImage -Location "West Europe" -PublisherName "Microsoft.PowerShell" -Type "DSC"
Get-AzureVMAvailableExtension also returns other interesting properties, such as the list of the Azure regions where the extension is present: the DSC extension is luckily present in all regions (see the last line of output), so we are good to go.

As I said, Set-AzureRmVMExtension is the cmdlet you need to push it to your VM. The parameters for this cmdlet cover three logical areas:
  • the -ResourceGroupName, -VMName, and -Location parameters identify the target Azure virtual machine
  • the -Name, -Publisher, -ExtensionType, and -TypeHandlerVersion parameters designate the VM Agent extension I am pushing
  • the -Settings parameter contains the settings to apply, which in my case will be a hashtable containing the link to the Azure-stored configuration file and a token for read access
We already have all the information about our VM and about the extension.
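The later call to Set-AzureRmVMExtension expects the extension identity in a handful of variables. Here's a minimal sketch of how they could be populated, using the values from the Get-AzureVMAvailableExtension output above (the variable names are my own, not part of any Azure convention):

```powershell
# Hypothetical setup of the extension identity variables, taken from the
# Get-AzureVMAvailableExtension output shown above
$ExtensionName      = 'DSC'
$Publisher          = 'Microsoft.Powershell'
$ExtensionType      = 'DSC'
$TypeHandlerVersion = '2.26'
```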

To set up the token, here's what to do:
$ContainerName = 'windows-powershell-dsc'

$StorageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $resourceGroupName -Name $StorageAccountName)[0].Value

$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey

$SasToken = New-AzureStorageContainerSASToken -Name $ContainerName -Permission r -Context $StorageContext
I am basically setting up an Azure storage context which is a PowerShell object encapsulating the storage credentials:
$StorageContext | select -ExpandProperty storageaccount

BlobEndpoint    :
QueueEndpoint   :
TableEndpoint   :
FileEndpoint    :
BlobStorageUri  : Primary = ''; Secondary = ''
QueueStorageUri : Primary = ''; Secondary = ''
TableStorageUri : Primary = ''; Secondary = ''
FileStorageUri  : Primary = ''; Secondary = ''
Credentials     : Microsoft.WindowsAzure.Storage.Auth.StorageCredentials
Let's build a hashtable containing the link to the configuration file. As you can see, one of the elements is the SASToken we have just set up in the previous step:
$ConfigurationName = 'IISInstall'
$SettingsHT = @{
    "ModulesUrl" = "$ZipUrl";
    "ConfigurationFunction" = "$ConfigurationName.ps1\$ConfigurationName";
    "SasToken" = "$SasToken"
}

We are now ready to launch Set-AzureRmVMExtension:
Set-AzureRmVMExtension -ResourceGroupName $ResourceGroupName -VMName $VmName -Location (Get-AzureRmStorageAccount $ResourceGroupName).Location `
    -Name $ExtensionName -Publisher $Publisher -ExtensionType $ExtensionType -TypeHandlerVersion $TypeHandlerVersion `
    -Settings $SettingsHT
The output clearly tells you the outcome of this operation:
RequestId IsSuccessStatusCode StatusCode ReasonPhrase
--------- ------------------- ---------- ------------
                         True         OK OK      
The portal now shows the DSC extension as successfully provisioned:

and clicking on 'Detailed status' you will have access to the provisioning logs:

The IIS feature appears properly installed if you RDP into the VM and use Get-WindowsFeature to list the enabled features:

As you can see, the process is pretty simple and, even if it demands a bit of understanding of how configurations are stored in Azure and how you can set up an access policy, the desired configuration is easily pushed to Azure VMs thanks once again to PowerShell.

Wednesday, March 8, 2017

All the 7 principles of the LEAN methodology in a single line of PowerShell code

As you know by now, PowerShell is ten years old. This language has been adopted as the management standard for a lot of platforms (Azure, NetApp, VMware, AWS, just to mention a few). As time goes on, the Windows system administrator has rediscovered his developer self and the joy of doing things from the command line.


The result is that, today, everybody is writing more and more PowerShell code in the form of scripts, functions and modules. I myself am writing code to manage my pellet stove, my security camera, as well as anything that has an IP address.

Keeping track of all that code tends to get more difficult and time-consuming. Over time we increase the complexity of our scripts: we add new functions, we modify parts of code, we copy/paste other parts from existing scripts. We introduce new cmdlets, but we also keep old parts of code that are difficult to rewrite without breaking something.

This kind of complexity calcifies badly or hastily developed lines of code inside originally well-thought-out advanced functions, and can turn your scripts into a mess.

But, smile, LEAN development is here, and in this short post I am going to show you how you have to think about your lines of code so that your scripts stay easy to maintain, to reuse and to share with others. Be it your colleagues, or the Community.


Basically, we need to introduce today a concept named with the Japanese word 'Kaizen', which means 'change for better'. LEAN development can be thought of as a modelling of Kaizen, and has been summarized in seven easy-to-remember principles which, once adopted, will improve your way of writing PowerShell across the board.

To start with, here's an overview of those seven principles, with a quick explanation of how they should be implemented in the process of developing a single line of code that does a specific job. This could for sure be extended to the writing of advanced functions, but a lot has already been written and published on the best practices to adopt during complex scripting, and much less on how to keep a state-of-the-art single line of code.


First principle. Eliminate waste. LEAN philosophy regards everything not adding value as waste. You should think the same:
  • keep your line of code as short and simple as possible
  • use the right module for the job
  • rely on modules auto-loading
  • rely on default parameters
  • rely on positional parameters
  • do not (over)use variables
  • send down the pipeline only the needed objects or object properties
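To make these points concrete, here is a hypothetical before/after on a plain Get-Process query (my own example, not tied to the scenario below):

```powershell
# Wasteful: explicit import, throwaway variables, full objects kept around
Import-Module Microsoft.PowerShell.Management
$allProcesses = Get-Process
$bigOnes = $allProcesses | Where-Object { $_.WorkingSet64 -gt 100MB }
$bigOnes | Select-Object -Property Name, Id

# Lean: module auto-loading, no intermediate variables, positional -Property,
# only the needed properties sent down the pipeline
Get-Process | Where-Object WorkingSet64 -GT 100MB | Select-Object Name, Id
```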

Second principle. Amplify learning. LEAN philosophy states that it is necessary to have a reasonable failure rate in order to generate a reasonable amount of new information. This new information is called 'feedback'. You, as a PowerShell developer, need that feedback, so:
  • work on your code one pipe at a time and accurately review the output, its methods and its properties
  • iterate through your code in the quest for errors
  • amplify any warning you get with a strong -ErrorAction
  • test with Try/Catch if relevant
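A quick sketch of the last two points: promoting a non-terminating error into a terminating one with -ErrorAction Stop, then catching it:

```powershell
# -ErrorAction Stop turns the non-terminating 'path not found' error into a
# terminating one, so the catch block actually runs
try {
    Get-Item -Path 'C:\this\path\does\not\exist' -ErrorAction Stop
}
catch {
    Write-Warning "Feedback received: $($_.Exception.Message)"
}
```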

Third principle. Decide as late as possible. LEAN philosophy says that you should manage uncertainty by delaying decisions so to be left in the end with more options. This translates to one of the most easily forgotten PowerShell rules:
  • sort and then format the output in a textual way only at the rightmost end of your line of code
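In practice this means that the Format-* cmdlets, if used at all, come last, because their output is text rather than objects you can keep working with:

```powershell
# Objects flow through selection and sorting; text formatting happens only
# at the very end of the line
Get-Process | Select-Object Name, CPU | Sort-Object CPU -Descending | Format-Table -AutoSize
```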

Fourth principle. Deliver as fast as possible. In LEAN philosophy, the sooner you deliver, the sooner you get feedback. This means writing your line of code in a way that it doesn't get stuck in a queue that makes your cycle time way too long:
  • reduce the scope of your line of code to the functional minimum
  • filter left
  • use jobs when appropriate
  • use runspaces when appropriate
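As a sketch of the jobs point, a slow query can be pushed to a background job so the shell stays responsive (illustrative example of mine):

```powershell
# Run a potentially slow recursive scan in a background job...
$job = Start-Job -ScriptBlock {
    Get-ChildItem -Path $env:windir -Recurse -ErrorAction SilentlyContinue |
        Measure-Object
}
# ...keep working in the meantime, then collect the result when you need it
Receive-Job -Job $job -Wait -AutoRemoveJob
```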

Fifth principle. Empower the team. In LEAN philosophy, the Team is central. So:
  • keep your line of PowerShell code concise
  • don't get lost in details so that anyone can re-use your code without impediments
  • keep the logic of your line of code so clear that anyone feels encouraged to reuse it instead of spending energies reinventing the wheel
  • don't use aliases
  • beware of ambiguous parameters
  • do write concise documentation in form of per-line comments

Sixth principle. Build integrity in. LEAN philosophy regards the conceptual integrity of code as crucial. In PowerShell you need to learn how to balance the code between your pipes in a way that results in increased:
  • maintainability
  • efficiency
  • flexibility
  • responsiveness

Seventh principle. See the whole. LEAN organization seeks to optimize the whole value stream. So keep in mind that:
  • a task-effective line of code consists of interdependent and interacting parts joined by a purpose
  • the ability of a line of code to achieve its purpose depends on how well the objects flow down the pipeline

As you can see, LEAN development emphasizes minimizing waste while optimizing efficiency and maintainability. Adopt it and you'll soon see a positive trend in the quality of your scripts.

Let's now try to achieve this in a real world scenario.

I have been recently contacted by someone who needed help with a script he was writing to retrieve some information from his Active Directory. Though this person had been using PowerShell for quite a few months, he was tackling the task in a confusing way, with a lot of copy/pasted lines of code, VBS-style variables, and without a clear logic in mind, so he was not getting the needed result.


Check if you have any user whose given name is John and whose user account is enabled and if so return their surname, given name, SID and phone number. Present the list in a dynamic table that allows the user to select the user account to export and save them to a semi-colon separated CSV that has the following fields: Surname, GivenName, UserPrincipalName, DistinguishedName, OfficePhone, SID. That list must be sorted by Surname.


$MystrFilter = "(&(objectCategory=User)(GivenName=John))"
$MyDomain = New-Object System.DirectoryServices.DirectoryEntry
$Searcher = New-Object System.DirectoryServices.DirectorySearcher
$Searcher.SearchRoot = $MyDomain
$Searcher.PageSize = 1000
$Searcher.Filter = $MystrFilter
$Searcher.SearchScope = "Subtree"
[void] $Searcher.PropertiesToLoad.Add("GivenName")
$Searcher.FindAll()| %{
         New-Object PSObject -Property @{Name = $_.Properties.GivenName}
} | Out-String | Format-Table * | ConvertTo-CSV | Out-File c:\users.csv
As you can see there is way too much happening here, just to get the first bit of information. Also, this code uses an old syntax which has been superseded by the introduction of the ActiveDirectory module. In other words, we are far from the LEAN principles:


The first action is to load the module, which you would do with

Import-Module ActiveDirectory
but actually you don't need that because Windows PowerShell has a feature named module auto-loading, which makes this explicit call useless. That's what LEAN calls waste, because it doesn't bring any added value. Just call the right cmdlet for the job and PowerShell will load the corresponding module in the background:

Get-ADUser -Filter * | Where-Object { $_.GivenName -eq 'John' }
Now we are using the right cmdlet for the task, but we are actually breaking the fourth LEAN rule 'Deliver as fast as possible': by filtering right we are retrieving the whole Active Directory before actually doing the filtering. So this should be rewritten as:

Get-ADUser -Filter {(GivenName -eq "John")}
Oh, that's fast.

Now let's try to retrieve the properties we have been asked for:

Get-ADUser -Filter {(GivenName -eq "John")} | Select-Object -Property Surname, GivenName, SID, OfficePhone
Not bad, but we don't need to explicitly name the -Property parameter because it's positional:

    Required?                    false
    Position?                    0
    Default value                None
    Accept pipeline input?       False
    Accept wildcard characters?  false
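That parameter listing comes straight from the help system; if you want to check positionality yourself, something like this will do:

```powershell
# Show the metadata for a single parameter, including its position
Get-Help Select-Object -Parameter Property
```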
So this line of code can be improved by removing waste:

Get-ADUser -Filter {(GivenName -eq "John")} | Select-Object Surname, GivenName, SID, OfficePhone
If we run this line of code, we can see that we are going against the second principle: the 'feedback' from this line of code is that the OfficePhone property is empty, so it must not be returned as part of the standard set of properties for a user.

Somehow, we have to force Get-AdUser to return this property:

Get-ADUser -Filter {(GivenName -eq "John")} -Properties *
Right, we got the OfficePhone property now, but a lot of other unneeded properties as well. Waste again. And we are also going against the fourth principle because our script becomes slower.

To respect the first and the fourth principle we have to write:

Get-ADUser -Filter {(GivenName -eq "John")} -Properties OfficePhone
Ok, now we have all the users whose given name is John, but we were also asked to filter out those user accounts that are not enabled. This could be achieved with one of these three syntaxes:

Get-ADUser -Filter {(GivenName -eq "John")} | ? enabled
Get-ADUser -Filter {(GivenName -eq "John")} -Properties OfficePhone | Where-Object { $_.enabled -eq $true }
Get-ADUser -Filter {(GivenName -eq "John")} | Where-Object enabled
but none is good because
  • in the first case we are using the question mark alias (fifth principle)
  • in the second case, we are using an old redundant syntax, and that's a waste (first principle)
  • in the third case we are filtering twice, on identity on the left of the pipe and on the Enabled property on the right of the pipe, so the integrity of our line of code is gone (sixth principle)

Instead we could come up with:

Get-ADUser -Properties OfficePhone -Filter {(GivenName -eq "John") -and (enabled -eq "true")}
but the logic used for parameter positioning is confusing (fifth principle). We'd better go with:

Get-ADUser -Filter {(GivenName -eq "John") -and (enabled -eq "true")} -Properties OfficePhone
That's all till the first pipe. Now we need to explicitly declare the properties we want to show, add the sorting and let the user choose the users he wants to export.

Select-Object -Property Surname,GivenName,UserPrincipalName,DistinguishedName,OfficePhone,SID | Sort-Object -Property Surname
Here above we have a special type of waste: even though the fifth principle states that you should not use aliases, there is a de facto rule among PowerShell scripters that allows Select-Object and Sort-Object to be shortened to their verbs only: Select and Sort. So this time we can transgress the fifth principle and remove the -Object noun:

Select Surname,GivenName,UserPrincipalName,DistinguishedName,OfficePhone,SID | Sort Surname
We can now pipe this into Out-Gridview with the -Passthru parameter, so that the end user can click on the users he wants to export and then press the Enter key to send them down to Export-CSV:

Out-GridView -PassThru | Export-Csv -Path C:\users.csv -Delimiter ';'
Since the first principle says that we can reduce waste by relying on positional parameters, we can shorten the code of Export-CSV. Luckily in fact, both -Path and -Delimiter are positional parameters:

    Required?                    false
    Position?                    0
    Default value                None
    Accept pipeline input?       False
    Accept wildcard characters?  false
    Required?                    false
    Position?                    1
    Default value                None
    Accept pipeline input?       False
    Accept wildcard characters?  false
Here's what we got:

Out-GridView -PassThru | Export-Csv C:\users.csv ';'
Now the whole line of code is working, but it is way too long. We can improve its reusability by splitting it at the pipes, which also gives us the occasion to add some concise comments:

Get-ADUser -Filter {(GivenName -eq "John") -and (enabled -eq "true")} -Properties OfficePhone |

    select Surname,GivenName,UserPrincipalName,DistinguishedName,OfficePhone,SID |
    sort Surname |
    Out-GridView -PassThru | # this allows the user to select some items and hand them over to the next cmdlet
    Export-Csv C:\users.csv ';'
As you can see, we have been able to apply all the LEAN principles to a script that was just a broken and confusing piece of code. And during this process, we have engineered our solution according to the second principle: we have iterated through our code trying to make it work error-free and we have amplified our knowledge of the whole process.

The result is a piece of code that can be easily reused without impediment: Lean Development applied to PowerShell.

Now I am really looking forward to feedback on this article: take it as a draft that I am willing to improve with the help of the Community.

Be Agile. Be PowerShell.