Monday, November 13, 2017

Powershell Oneliner Contest 2017

Year after year I see many people who were old-fashioned mouse clickers adopting PowerShell and thus the average skill level is rising. At the same time, pushed by the arrival of DevOps, a lot of people who are already pretty confident with code are coming to join the ever-growing community of IT professionals that use PowerShell.

So what better way to test your progress than a tricky PowerShell contest, with the chance of winning a prize?

Let me then announce the third edition of the PowerShell Oneliner Contest:

Before I announce the three tasks you will have to cope with, let me remind you of the spirit of this game:
  • this must be a unique learning experience, where solutions posted by experienced scripters will benefit the whole community once they are made public
  • novice scripters will have to show a lot of persistence in order to produce solutions that work, respect the rules and show creativity
  • having fun is of paramount importance while bending the command line in a creative way

  • The contest is split into three tasks of increasing difficulty
  • Each task consists of a simple scenario for which you have to produce the shortest possible oneliner solution
  • You can use every PowerShell version, just state in the comment which version you tested it with
  • No semi-colons
  • Backticks are accepted for readability
  • To submit your entry, create a secret Gist for each task solution and post the URL for the Gist as a comment to this blog article
  • Submitting an entry that is a public Gist will automatically disqualify the entry and participant
  • Sign your comments so that I know who's who and so that I can get in touch with the winner
  • Entries (comments) will not be made public until after the submission deadline
  • The first person to produce the shortest working solutions to a task will get 1 point, the second 2 points, the third 3 points and so on
  • The person with the lowest total score over the three mandatory tasks will be the winner
  • The contest will run for nine days beginning today until November 21st 12:00 noon (GMT)
  • The winner will be announced on Friday, November 24th on this blog
  • I'll be the only judge


Windows Management Instrumentation is an incredibly useful technology for exposing system information. Being able to interact with it from PowerShell is one of the first things we all learn. Your first task is to write the shortest possible oneliner that extracts the UNC path of all the local shares from the Win32_Share class.

Expected output:
The use of Win32_Share class is mandatory.
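For reference, one possible non-golfed starting point (deliberately not a winning entry - the golfing is up to you) builds the UNC prefix from the local computer name:

```powershell
# Build the UNC path (\\computer\sharename) for every local share
Get-WmiObject -Class Win32_Share | ForEach-Object { "\\$env:COMPUTERNAME\$($_.Name)" }
```

On PowerShell 3.0 and later, Get-CimInstance can be used in place of Get-WmiObject.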


I still remember the first time I saw a computer-generated fractal (it was a Barnsley fern), and I have always been impressed by those patterns that repeat over and over. What I learned recently is that the term 'fractal' was coined by the legendary Polish-born mathematician Benoit Mandelbrot, who added a B. in the middle of his name: supposedly he intended his middle B. to recursively stand for Benoit B. Mandelbrot, thereby embedding a fractal (his mathematical discovery) in his own name.

Your mission is to write the shortest possible oneliner that answers this question
$question = 'What is the middle name of Benoit B. Mandelbrot?'
by returning
The B in Benoit B. Mandelbrot stands for Benoit B. Mandelbrot.
Reuse of $question variable is mandatory.
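As a hedged, non-golfed starting point (again, not a winning entry), you could capture the name from $question with a regex and interpolate the match twice:

```powershell
$question = 'What is the middle name of Benoit B. Mandelbrot?'
# Capture the recursive name from the question and reuse it on both sides
if ($question -match 'Benoit B\. Mandelbrot') {
    "The B in $($Matches[0]) stands for $($Matches[0])."
}
# → The B in Benoit B. Mandelbrot stands for Benoit B. Mandelbrot.
```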


There's a lot of hype today around data mining techniques and therefore I want you to see how good you are at using PowerShell for a special kind of task. Given the following two text variables
$t1 = "I really like scripting with PowerShell"
$t2 = "PowerShell is a really really nice scripting language"
write a oneliner that is capable of determining text likeness using Cosine Similarity and returns
The returned value must be 1 if $t1 and $t2 are identical vectors (same words) and 0 if $t1 and $t2 have no words in common.
The comparison must be case-insensitive, meaning that PowerShell and powershell are the same word. The strings must be split at any non-word character, and only the unique elements of the resulting collections are compared.
The oneliner should work against any other pair of text variables, for instance
$t1 = "Unless you work hard, you won’t win."
$t2 = "You must work hard. Otherwise, you won’t win."
must return
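For reference, with word-presence vectors the cosine similarity reduces to the size of the word intersection divided by the geometric mean of the two set sizes. A non-golfed sketch of that computation (intermediate variables like these would of course not fit the oneliner rules of an actual entry):

```powershell
$t1 = "I really like scripting with PowerShell"
$t2 = "PowerShell is a really really nice scripting language"

# Unique lowercase words of each text, split at any non-word character
$w1 = ($t1.ToLower() -split '\W+') | Where-Object { $_ } | Select-Object -Unique
$w2 = ($t2.ToLower() -split '\W+') | Where-Object { $_ } | Select-Object -Unique

# With word-presence vectors, the dot product is just the intersection size
$common = @($w1 | Where-Object { $w2 -contains $_ })
$common.Count / [math]::Sqrt($w1.Count * $w2.Count)
```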


Being able to test your solution and see that it respects the rules is of paramount importance before submitting it. If you are taking part in this contest I suppose you must not be new to Pester. Fellow MVP Jakub Jares (@nohwnd) was kind enough to provide a solution validation tool based on his Assert module:

Use the three provided test files (one per task) as stated in the instruction file.

A word of notice: using those tests is not mandatory, but my opinion is that we should all be continuously learning, so if you are new to GitHub and to Pester, I suggest you seize the occasion to learn something useful. You'll just be doing yourself a favor if you use Pester to unit test your oneliners, because human error can always happen and it's a pity if you spend a lot of time providing an answer that does not actually work.

If you are interested in Jakub's Assert module, you can find it here:


Yes, there's a prize! Fellow MVP Mike F Robbins will donate one copy of The No-Nonsense Beginner’s Guide to PowerShell ebook to the winner of the contest. Thanks in advance to Mike for always being so keen to contribute to this kind of initiative. His book is one of the best around, as I explained in a previous post.

As a bonus, and if the winner agrees, he/she will appear as a guest blogger on this blog and explain how to solve this kind of PowerShell riddle.

If you want to spread the word about this PowerShell contest, feel free to tweet about it. You can use the hashtags #poshcontest2017 and #powershell so that other competitors can share their thoughts (not the solutions, of course!).

Have fun!

Friday, June 30, 2017

A PowerShell oneliner to retrieve extension deployment status in Azure VMs

As I explained in the previous post, Azure allows the provisioning of extensions to cloud-hosted VMs through a VM Agent.

If you have ever gone through a large deployment of extensions to existing Azure VMs, via the Set-AzureVMDscExtension cmdlet, you have probably sought a way to check the deployment status in a simpler manner than browsing into each and every VM in the Azure Portal:

Here's how PowerShell answers this need, and it does it in just one line of code. The core cmdlet here is Get-AzureRmVM, which is generally used to report VM status:
Get-AzureRmVM -WarningAction SilentlyContinue

ResourceGroupName Name   Location   VmSize         OsType  NIC       ProvisioningState
----------------- ----   --------   ------         ------  ---       -----------------
RG-AD             adVM   westeurope Standard_D2_v2 Windows adNic     Succeeded
RG-AppServices    VM0    westeurope Standard_A1    Windows nic0      Succeeded
RG-AppServices    VM1    westeurope Standard_A1    Windows nic1      Succeeded
RG-DSC            VM2012 westeurope Standard_A1    Windows vm2012373 Succeeded
RG-DSC            VM2016 westeurope Standard_A1    Windows vm2016964 Succeeded
As a side note, I am setting the WarningAction parameter to SilentlyContinue simply because I don't want the following warning message to make my output less readable:
WARNING: Breaking change notice: In an upcoming release, the top level properties DataDiskNames and NetworkInterfaceIDs will be removed from the VM object because they are also in StorageProfile and NetworkProfile, respectively.
I know very well that releases of the Azure cmdlets come at a fast pace, so I can safely suppress this message.

Here's the oneliner I wrote:
Get-AzureRmVM -WarningAction SilentlyContinue | % {

    (Get-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Status -OutVariable x -WarningAction SilentlyContinue).Extensions | % {

        $_ | select @{Name="VmName";Expression={$x.Name}},@{Name="Extension";Expression={$_.Name}},@{Name="Level";Expression={$_.Statuses.Level}},@{Name="Status";Expression={$_.Statuses.DisplayStatus}},@{Name="Time";Expression={$_.Statuses.Time}}

    }
}

The output can be piped to Format-Table if I want a table view, or to Out-Gridview, if you prefer:
Get-AzureRmVM -WarningAction SilentlyContinue | % {

    (Get-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Status -OutVariable x -WarningAction SilentlyContinue).Extensions | % {

        $_ | select @{Name="VmName";Expression={$x.Name}},@{Name="Extension";Expression={$_.Name}},@{Name="Level";Expression={$_.Statuses.Level}},@{Name="Status";Expression={$_.Statuses.DisplayStatus}},@{Name="Time";Expression={$_.Statuses.Time}}

    }
} | Format-Table * -AutoSize

VmName Extension                  Level Status                 Time                
------ ---------                  ----- ------                 ----                
adVM   CreateADForest              Info Provisioning succeeded 6/30/2017 9:13:55 AM
myVM0  PuppetAgent                 Info Provisioning succeeded                     
myVM0  Site24x7WindowsServerAgent  Info Provisioning succeeded                     
myVM1  IaaSAntimalware             Info Provisioning succeeded                     
VM2012 DSC                        Error Provisioning failed    6/29/2017 2:35:39 PM
VM2012 IaaSAntimalware             Info Provisioning succeeded                     
VM2016 DSC                         Info Provisioning succeeded 6/29/2017 9:53:20 AM
I could also think of showing just the failures:
Get-AzureRmVM -WarningAction SilentlyContinue | % {

    (Get-AzureRmVM -ResourceGroupName $_.ResourceGroupName -Name $_.Name -Status -OutVariable x -WarningAction SilentlyContinue).Extensions | % {

        $_ | select @{Name="VmName";Expression={$x.Name}},@{Name="Extension";Expression={$_.Name}},@{Name="Level";Expression={$_.Statuses.Level}},@{Name="Status";Expression={$_.Statuses.DisplayStatus}},@{Name="Time";Expression={$_.Statuses.Time}}

    }
} | ? Status -match 'Failed'

VmName    : VM2012
Extension : DSC
Level     : Error
Status    : Provisioning failed
Time      : 6/29/2017 2:35:39 PM
As you can see in the last output, the DSC extension failed to deploy on one of my Azure VMs, and I need to take corrective action. I will discuss in a future post how to automatically solve this kind of issue while staying within a single line of code.

If you have any technical question on the way I implemented this line of code, feel free to get in touch with me. Feel free to share if you like the content of this post.

Thursday, June 29, 2017

How to configure an Azure VM using PowerShell DSC

As far as I can see, many companies today have started moving part of their workload to Azure VMs and are looking for a way to easily manage them as if they were still sitting in their datacenters. If you have been practicing PowerShell for a while, you should know that a while back Microsoft introduced a technology named PowerShell Desired State Configuration (DSC).

Even today, many people who have been toying around with DSC aren't aware that it is built into Azure and that they can use it to configure Azure VMs with a workflow similar to - if not simpler than - the one they used when running VMs on-premises.

Let's see how this works and how easily a desired configuration can be pushed to your VMs.

First of all, you need to know that the key component in the process is the VM Agent. The VM Agent is a set of lightweight software components running within the OS (be it Windows or Linux) of an Azure VM, presented as an extension in your VM configuration.

There are three background processes composing the VM Agent in a Windows VM:

- WindowsAzureGuestAgent.exe
- WaAppAgent.exe
- WindowsAzureTelemetryService.exe

These processes log their activity by default into the folder C:\WindowsAzure\Logs\ so if you have any trouble, have a look there.

It's through this VM agent that you can push your configuration to the cloud-hosted VM.


Everything starts with a DSC resource. For the sake of this post I will just reuse the classic resource in charge of setting up the IIS feature:
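The original configuration was shown as a screenshot; a minimal sketch of such a configuration could look like this (the IISInstall name is the one reused in the variables later in this post):

```powershell
Configuration IISInstall {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost' {
        # Ensure the IIS web server role is installed
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}
```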

Then you have to publish this DSC configuration to an Azure blob storage account by running the Publish-AzureRmVMDscConfiguration cmdlet. This cmdlet takes as input three mandatory parameters which are a Resource Group, a Storage Account and the path to a Configuration file to use:

Here's how to setup things for Publish-AzureRmVMDscConfiguration to succeed:
$ResourceGroupName = 'RG-DSC'

$StorageAccountName = 'dscconfigstorage'

$ConfigurationPath = "D:\DSCresources\IISInstall.ps1"

$ResourceGroup = Get-AzureRmResourceGroup -Name $ResourceGroupName
The configuration file IISInstall.ps1 is actually a file declaring the expected resources you want your Azure VMs to be hosting.
$ZipUrl = Publish-AzureRmVMDscConfiguration -ConfigurationPath $configurationPath -ResourceGroupName $resourceGroupName -StorageAccountName $StorageAccountName -Force
As you can see, I am assigning the output of Publish-AzureRmVMDscConfiguration - a URL to the zipped configuration - to a $ZipUrl variable so that I can reuse it in the next stage. The -Force switch is needed when you are uploading a newer version of your configuration file than the one already stored in Azure.
After the execution of this cmdlet, we will be able to see the Container hosting the file in the Portal:
Now, before we attach the configuration to the VM, there is a bunch of information that we need to retrieve in order to Set-AzureRmVMExtension - which is the key cmdlet here - to work.


Let's start with checking the DSC extension to use for our task:
Get-AzureVMAvailableExtension -ExtensionName DSC

Publisher                   : Microsoft.Powershell
ExtensionName               : DSC
Version                     : 2.26
Label                       : DSC
Description                 : PowerShell DSC (Desired State Configuration) Extension
PublicConfigurationSchema   : 
PrivateConfigurationSchema  : 
IsInternalExtension         : False
SampleConfig                : 
                              "properties": {
                                  "publisher": "Microsoft.Powershell",
                                  "type": "DSC",
ReplicationCompleted        : True
Eula                        :
PrivacyUri                  :
HomepageUri                 :
IsJsonExtension             : True
DisallowMajorVersionUpgrade : False
SupportedOS                 : 
PublishedDate               : 6/6/2017 7:20:22 PM
CompanyName                 : Microsoft Corporation
Regions                     : All regions
Here we got most of the important information needed later for the setup, such as the extension version: by default this cmdlet returns the most recent version of an extension, but you can get the whole list with:
Get-AzureRmVMExtensionImage -Location "West Europe" -PublisherName "Microsoft.PowerShell" -Type "DSC"
Get-AzureVMAvailableExtension also returns other interesting properties, such as the list of Azure regions where the extension is available: the DSC extension is luckily present in all regions (see the last line of the output), so we are good to go.

As I said, Set-AzureRmVMExtension is the cmdlet you need to push the configuration to your VM. The parameters for this cmdlet cover three logical areas:
  • the -ResourceGroupName, -VMName, and -Location parameters identify the target Azure virtual machine
  • the -Name, -Publisher, -ExtensionType, and -TypeHandlerVersion parameters designate the VM Agent extension I am pushing
  • the -Settings parameter contains the settings to apply, which in my case will be a hashtable containing the link to the Azure-stored configuration file and a token for read access
We already have all the information about our VM and about the extension.

To setup the token, here's what to do:
$ContainerName = 'windows-powershell-dsc'

$StorageAccountKey = (Get-AzureRmStorageAccountKey -ResourceGroupName $resourceGroupName -Name $StorageAccountName)[0].Value

$StorageContext = New-AzureStorageContext -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey

$SasToken = New-AzureStorageContainerSASToken -Name $ContainerName -Permission r -Context $StorageContext
I am basically setting up an Azure storage context which is a PowerShell object encapsulating the storage credentials:
$StorageContext | select -ExpandProperty storageaccount

BlobEndpoint    :
QueueEndpoint   :
TableEndpoint   :
FileEndpoint    :
BlobStorageUri  : Primary = ''; Secondary = ''
QueueStorageUri : Primary = ''; Secondary = ''
TableStorageUri : Primary = ''; Secondary = ''
FileStorageUri  : Primary = ''; Secondary = ''
Credentials     : Microsoft.WindowsAzure.Storage.Auth.StorageCredentials
Let's build a hashtable containing the link to the configuration file. As you can see, one of the elements is the SAS token we have just set up in the previous step:
$ConfigurationName = 'IISInstall'
$SettingsHT = @{
"ModulesUrl" = "$ZipUrl";
"ConfigurationFunction" = "$ConfigurationName.ps1\$ConfigurationName";
"SasToken" = "$SasToken"

We are now ready to launch Set-AzureRmVMExtension:
Set-AzureRmVMExtension -ResourceGroupName $ResourceGroupName -VMName $VmName -Location (Get-AzureRmStorageAccount $ResourceGroupName).Location `
    -Name $ExtensionName -Publisher $Publisher -ExtensionType $ExtensionType -TypeHandlerVersion $TypeHandlerVersion `
    -Settings $SettingsHT
The output clearly tells you the outcome of this operation:
RequestId IsSuccessStatusCode StatusCode ReasonPhrase
--------- ------------------- ---------- ------------
                         True         OK OK
The portal now shows the DSC extension as successfully provisioned:

and clicking on 'Detailed status' you will have access to the provisioning logs:

The IIS feature appears properly installed if you RDP into the VM and use Get-WindowsFeature to list the enabled features:

As you can see, the process is pretty simple and, even if it demands a bit of understanding of how configurations are stored in Azure and how to set up an access policy, the desired configuration is easily pushed to Azure VMs, thanks once again to PowerShell.

Wednesday, March 8, 2017

All the 7 principles of the LEAN methodology in a single line of PowerShell code

As you now know, PowerShell is ten years old. This language has been adopted as the management standard for a lot of platforms (Azure, NetApp, VMware, AWS, just to mention a few). As time goes on, the Windows system administrator has rediscovered his developer-self and the joy of doing things from the command line.


The result is that, today, everybody is writing more and more PowerShell code in the form of scripts, functions and modules. I myself am writing code to manage my pellet stove, my security camera, as well as whatever else has an IP address.

Keeping track of all that code tends to get more difficult and time-consuming. Over time we increase the complexity of our scripts: we add new functions, modify parts of code, copy/paste other parts from existing scripts. We introduce new cmdlets, but we also keep old parts of code that are difficult to rewrite without breaking something.

This kind of complexity calcifies badly or hastily developed lines of code inside originally well-thought-out advanced functions, and can turn your scripts into a mess.

But, smile, LEAN development is here, and in this short post I am going to show you how you have to think about your lines of code so that your scripts stay easy to maintain, to reuse and to share with others. Be it your colleagues, or the Community.


Basically, we need to introduce a concept named with the Japanese word 'Kaizen', which means 'change for the better'. LEAN development was conceived as a modelling of Kaizen and has been summarized in seven easy-to-remember principles which, once adopted, will improve your way of writing PowerShell across the board.

To start with, here's an overview of those seven principles, with a quick explanation of how they can be applied to the process of developing a single line of code that does a specific job. This could certainly be extended to the writing of advanced functions, but a lot has already been written and published on best practices for complex scripting, and much less on how to keep a single line of code state-of-the-art.


First principle. Eliminate waste. LEAN philosophy regards everything not adding value as waste. You should think the same:
  • keep your line of code as short and simple as possible
  • use the right module for the job
  • rely on modules auto-loading
  • rely on default parameters
  • rely on positional parameters
  • do not (over)use variables
  • send down the pipeline only the needed objects or object properties

Second principle. Amplify learning. LEAN philosophy states that it is necessary to have a reasonable failure rate in order to generate a reasonable amount of new information. This new information is tagged 'feedback'. You, as a PowerShell developer, need that feedback, so:
  • work on your code one pipe at a time and accurately review the output, its methods and its properties
  • iterate through your code in the quest for errors
  • amplify any warning you get with a strict -ErrorAction setting
  • test with Try/Catch if relevant
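As an illustration of the last two points, promoting errors to terminating ones makes the feedback impossible to miss (the path here is just an example):

```powershell
try {
    # -ErrorAction Stop promotes non-terminating errors so catch can see them
    Get-Item -Path 'C:\DoesNotExist' -ErrorAction Stop
}
catch {
    Write-Warning "Feedback received: $($_.Exception.Message)"
}
```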

Third principle. Decide as late as possible. LEAN philosophy says that you should manage uncertainty by delaying decisions so to be left in the end with more options. This translates to one of the most easily forgotten PowerShell rules:
  • sort and then format the output in a textual way only at the rightmost end of your line of code
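In other words, the Format-* cmdlets emit formatting objects rather than the original data, so nothing useful can be piped after them; sorting and formatting belong at the very end of the pipeline:

```powershell
# Right: objects stay objects until the final formatting step
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5 | Format-Table Name, CPU

# Wrong: Format-Table output cannot be meaningfully sorted or exported
# Get-Process | Format-Table Name, CPU | Sort-Object CPU
```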

Fourth principle. Deliver as fast as possible. In LEAN philosophy, the sooner you deliver, the sooner you get feedback. This means writing your line of code in a way that it doesn't get stuck in a queue that makes your cycle time way too long:
  • reduce the scope of your line of code to the functional minimum
  • filter left
  • use jobs when appropriate
  • use runspaces when appropriate
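For instance, a long-running check against several machines (the server names below are hypothetical) can run in parallel background jobs instead of sequentially:

```powershell
# Ping three servers in parallel jobs, then collect the results
$jobs = 'srv1', 'srv2', 'srv3' | ForEach-Object {
    Start-Job -ScriptBlock {
        param($Server)
        [pscustomobject]@{
            Server = $Server
            Online = Test-Connection -ComputerName $Server -Count 1 -Quiet
        }
    } -ArgumentList $_
}
$jobs | Wait-Job | Receive-Job | Select-Object Server, Online
```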

Fifth principle. Empower the team. In LEAN philosophy, the Team is central. So:
  • keep your line of PowerShell code concise
  • don't get lost in details so that anyone can re-use your code without impediments
  • keep the logic of your line of code so clear that anyone feels encouraged to reuse it instead of spending energies reinventing the wheel
  • don't use aliases
  • beware of ambiguous parameters
  • do write concise documentation in the form of per-line comments

Sixth principle. Build integrity in. LEAN philosophy regards the conceptual integrity of code as crucial. In PowerShell you need to learn how to balance the code between your pipes in a way that it results in increased:
  • maintainability
  • efficiency
  • flexibility
  • responsiveness

Seventh principle. See the whole. LEAN organization seeks to optimize the whole value stream. So keep in mind that:
  • a task-effective line of code consists of interdependent and interacting parts joined by a purpose
  • the ability of a line of code to achieve its purpose depends on how well the objects flow down the pipeline

As you can see, LEAN development emphasizes minimizing waste while optimizing efficiency and maintainability. Adopt it and you'll soon see a positive trend in the quality of your scripts.

Let's now try to achieve this in a real world scenario.

I was recently contacted by someone who needed help with a script he was writing to retrieve some information from his Active Directory. Though this person had been using PowerShell for quite a few months, he was tackling the task in a confusing way, with a lot of copy/pasted lines of code, VBS-style variables, and without a clear logic in mind, so he was not getting the needed result.


Check if you have any user whose given name is John and whose user account is enabled and if so return their surname, given name, SID and phone number. Present the list in a dynamic table that allows the user to select the user account to export and save them to a semi-colon separated CSV that has the following fields: Surname, GivenName, UserPrincipalName, DistinguishedName, OfficePhone, SID. That list must be sorted by Surname.


$MystrFilter = "(&(objectCategory=User)(GivenName=John))"
$MyDomain = New-Object System.DirectoryServices.DirectoryEntry
$Searcher = New-Object System.DirectoryServices.DirectorySearcher
$Searcher.SearchRoot = $MyDomain
$Searcher.PageSize = 1000
$Searcher.Filter = $MystrFilter
$Searcher.SearchScope = "Subtree"
[void] $Searcher.PropertiesToLoad.Add("GivenName")
$Searcher.FindAll()| %{
         New-Object PSObject -Property @{Name = $_.Properties.GivenName}
} | Out-String | Format-Table * | ConvertTo-CSV | Out-File c:\users.csv
As you can see there is way too much happening here, just to get the first bit of information. Also, this code uses an old syntax which has been superseded by the introduction of the ActiveDirectory module. In other words, we are far from the LEAN principles:


The first action is to load the module, which you would do with

Import-Module ActiveDirectory
but actually you don't need that because Windows PowerShell has a feature named module auto-loading, which makes this explicit call useless. That's what LEAN calls waste, because it doesn't bring any added value. Just call the right cmdlet for the job and PowerShell will load the corresponding module in the background:

Get-ADUser -Filter * | Where-Object { $_.GivenName -eq 'John' }
Now we are using the right cmdlet for the task, but we are actually breaking the fourth LEAN rule 'Deliver as fast as possible': by filtering right we are retrieving the whole Active Directory before actually doing the filtering. So this should be rewritten as:

Get-ADUser -Filter {(GivenName -eq "John")}
Oh, that's fast.

Now let's try to retrieve the properties we have been asked for:

Get-ADUser -Filter {(GivenName -eq "John")} | Select-Object -Property Surname, GivenName, SID, OfficePhone
Not bad, but we don't need to explicitly name the -Property parameter because it's positional:

    Required?                    false
    Position?                    0
    Default value                None
    Accept pipeline input?       False
    Accept wildcard characters?  false
So this line of code can be improved by removing waste:

Get-ADUser -Filter {(GivenName -eq "John")} | Select-Object Surname, GivenName, SID, OfficePhone
If we run this line of code we can see that we are going against the second principle: the 'feedback' from this line of code is that the OfficePhone property is empty, so it must not be returned as part of the standard set of properties for a user.

Somehow, we have to force Get-AdUser to return this property:

Get-ADUser -Filter {(GivenName -eq "John")} -Properties *
Right, we got the OfficePhone property now, but a lot of other unneeded properties as well. Waste again. And we are also going against the fourth principle, because our script becomes slower.

To respect the first and the fourth principle we have to write:

Get-ADUser -Filter {(GivenName -eq "John")} -Properties OfficePhone
Ok, now we have all the users whose given name is John, but we were also asked to filter out those user accounts that are not enabled. This could be achieved with one of these three syntaxes:

Get-ADUser -Filter {(GivenName -eq "John")} | ? enabled
Get-ADUser -Filter {(GivenName -eq "John")} -Properties OfficePhone | Where-Object { $_.Enabled -eq $true }
Get-ADUser -Filter {(GivenName -eq "John")} | Where-Object enabled
but none is good because
  • in the first case we are using the question mark alias (fifth principle)
  • in the second case, we are using an old redundant syntax, and that's a waste (first principle)
  • in the third case we are filtering twice, on GivenName on the left of the pipe and on the Enabled property on the right of the pipe, so the integrity of our line of code is gone (sixth principle)

Instead we could come up with:

Get-ADUser -Properties OfficePhone -Filter {(GivenName -eq "John") -and (enabled -eq "true")}
but the logic used for parameter positioning is confusing (fifth principle). We better go with:

Get-ADUser -Filter {(GivenName -eq "John") -and (enabled -eq "true")} -Properties OfficePhone
That's all till the first pipe. Now we need to explicitly declare the properties we want to show, add the sorting and let the user choose the users he wants to export.

Select-Object -Property Surname,GivenName,UserPrincipalName,DistinguishedName,OfficePhone,SID | Sort-Object -Property Surname
Here above we have a special type of waste: even though the fifth principle states that you should not use aliases, there is a de facto rule among PowerShell scripters that allows Select-Object and Sort-Object to be shortened to their verbs only: Select and Sort. So this time we can transgress the fifth principle and remove the -Object noun:

Select Surname,GivenName,UserPrincipalName,DistinguishedName,OfficePhone,SID | Sort Surname
We can now pipe this into Out-Gridview with the -Passthru parameter, so that the end user can click on the users he wants to export and then press the Enter key to send them down to Export-CSV:

Out-GridView -PassThru | Export-Csv -Path C:\users.csv -Delimiter ';'
Since the first principle says that we can reduce waste by relying on positional parameters, we can shorten the Export-CSV call. Luckily, both -Path and -Delimiter are positional parameters:

-Path
    Required?                    false
    Position?                    0
    Default value                None
    Accept pipeline input?       False
    Accept wildcard characters?  false

-Delimiter
    Required?                    false
    Position?                    1
    Default value                None
    Accept pipeline input?       False
    Accept wildcard characters?  false
Here's what we got:

Out-GridView -PassThru | Export-Csv C:\users.csv ';'
Now the whole line of code works, but it is way too long. We can improve its reusability by splitting it at the pipes, which also gives us the occasion to add some concise comments:

Get-ADUser -Filter {(GivenName -eq "John") -and (enabled -eq "true")} -Properties OfficePhone |

    select Surname,GivenName,UserPrincipalName,DistinguishedName,OfficePhone,SID |
    sort Surname |
    Out-GridView -PassThru | # this allows the user to select some items and hand them over to the next cmdlet
    Export-Csv C:\users.csv ';'
As you can see, we have been able to apply all the LEAN principles to a script that was just a broken and confusing piece of code. And along the way we have engineered our solution according to the second principle: we have iterated through our code to make it work error-free and we have amplified our knowledge of the whole process.

The result is a piece of code that can be easily reused without impediment: Lean Development applied to PowerShell.

Now I am really looking forward to feedback on this article: take it as a draft that I am willing to improve with the help of the Community.

Be Agile. Be PowerShell.

Thursday, March 2, 2017

A PowerShell function to rapidly gather system events for sysadmin eyes only with some tips

I suppose that we sysadmins have all been through that moment when an application developer bursts in behind your back just to tell you that his perfectly-coded application is getting stuck, stating that for sure something has happened at the system level and that you should already be checking.

This is the moment when being good at PowerShell comes to the rescue. Because with PowerShell you can quickly write a tool that checks your event logs and extracts just the right kind of information to show that the system has no issues (as is often the case) and that he had better review his software configuration.

PowerShell basically is your Splunk, but without the price tag.

That's the topic I am going to talk about in this post: I am going to show you how you can use PowerShell to gather event logs quickly from one or more computers, no matter the Windows version, and build a report of recent system issues, excluding each and every event coming from the upper application layers.

The first step here is to understand that there are two worlds: servers running Windows versions up to Windows Server 2003 R2, and servers running Windows Server 2008 and above. These two types of servers have different engines for event logging, and therefore Microsoft has provided two different cmdlets.


The first cmdlet is Get-EventLog and is used with servers running Windows 2003 R2. I am sure you still have a few of those running. I do, so I have to take them into consideration when developing my function.

The second cmdlet is Get-WinEvent, which is used on newer systems and, despite running much faster than Get-EventLog, can't be used to check older systems.

So you have to write this function so that it first tries to query the remote server with Get-WinEvent and, if that fails, falls back to Get-EventLog.


Before you write the Get-WinEvent part, there are a few things to understand. As I said the aim of the function is to allow the system administrator to extract only the events that are related to the operating system. To do so, you have first to build a query to retrieve all of your logs:

After researching a bit, I have come to understand that I have to rely mainly on two properties, which I have highlighted in the screenshot above: LogType and LogIsolation.

LogType tells you the type of events that are logged in each log and in our case we want to just stick to Administrative events. This includes events from classic event logs, like System, Security or Application, and other interesting logs such as 'Microsoft-Windows-Hyper-V-Worker-Admin' or 'Microsoft-Windows-Kernel-EventTracing/Admin'.

Now unfortunately there are dozens of event logs that record administrative events, and we need to refine that list more if we want our function to include only events that actually may indicate a system issue.

This is where the LogIsolation property enters the game. This property indicates which ACL and which ETW session each event log uses: in our case we want to filter out all the logs that share their access control list with the Application event log, as well as all the event logs that share their ETW session with other logs that have Application isolation.

Setting Get-WinEvent to filter on LogIsolation -eq 'system' will guarantee that we are not checking events that have been written by the applications.

Once we filter on those two properties, with the following line of code

Get-WinEvent -ListLog * | ? {($_.LogType -eq 'Administrative') -and ($_.LogIsolation -eq 'System')}
we get a much shorter list of logs, and all of them are clearly under the responsibility of the system administrator:

Now we have to find a way to perform this very same operation with Get-EventLog. That's easily accomplished, since we just have to choose one of the three classic event logs: System, Application or Security.

For our purpose, System is the log we need.

Now there is a well-known issue with Get-EventLog: it is dramatically slow since it's been designed in a way that it retrieves the whole event log starting from the oldest record each time it's queried. That can lead to a painfully slow checkup of your environment if you are running your function against a large number of servers.


The hint I can give you here is to make it work the other way around by adding the -Newest parameter followed by the number of records you want: this forces Get-EventLog to start from the most recent event and walk back through the records by their unique index until it has fetched the requested number of items:

As you can verify, with the -Newest parameter the query executes very fast. In my function I set its value to 1000, so I am pretty sure no recent critical events are left out.

FILTER RIGHT (Yes, you read well)

But there is a problem: as you have understood, we are trying to build a tool that allows the system administrator to check if there is any kind of system issue, so we have to be able to limit the search window to the most recent hours. In Get-EventLog this is normally achieved with the -After parameter but the drawback of adding it is that it overrides the functioning of -Newest by bringing back the older-to-newer query mechanism. The performance impact is impressive:

"Newest With -after"
(Measure-Command { Get-EventLog -LogName System -Newest 1000 -After (Get-Date).AddHours(-24) }).TotalSeconds

"Newest with a Where-Object filtering"
(Measure-Command { Get-EventLog -LogName System -Newest 1000 | ? TimeGenerated -gt (Get-Date).AddHours(-24) }).TotalSeconds

That's one of the rare cases where right-filtering is faster than left-filtering.

Now we have all the required knowledge around Get-EventLog and Get-WinEvent to retrieve administrative events pretty quickly.

Before we continue, we have to understand one more thing: these two cmdlets bring back different object types which have different properties. Since our tool must be able to consolidate events from systems running possibly different versions of Windows, we need a way to match the property names brought back from those cmdlets.


A mechanism known as 'Calculated properties' is our ally here. We can use Select-Object to translate property names for objects returned from Get-EventLog to property names returned by Get-WinEvent:

Here we are basically translating TimeGenerated to TimeCreated, Source to ProviderName, EventId to Id and EntryType to LevelDisplayName. This way all the objects coming through our function will have the same property set and filtering will be done in a breeze.
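A minimal sketch of that translation, assuming we keep MachineName and Message as-is (the property names follow the mapping just described):

```powershell
# Hedged sketch: rename Get-EventLog properties to match Get-WinEvent output
Get-EventLog -LogName System -Newest 10 |
    Select-Object MachineName,
                  @{Name='TimeCreated';Expression={$_.TimeGenerated}},
                  @{Name='ProviderName';Expression={$_.Source}},
                  @{Name='Id';Expression={$_.EventId}},
                  @{Name='LevelDisplayName';Expression={$_.EntryType}},
                  Message
```

With this in place, any downstream filtering works identically no matter which cmdlet produced the objects.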


Any tool must be able to handle exceptions, which in PowerShell is achieved with Try/Catch. In my experience, the use of Get-WinEvent can raise three main exceptions. Here are the first two:

A target server that can't be reached:

Get-WinEvent -LogName System -ComputerName nobody
Get-WinEvent : The RPC server is unavailable
At line:1 char:1
+ Get-WinEvent -LogName System -ComputerName nobody
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Get-WinEvent], EventLogException
A target server that runs Windows 2003:

Get-WinEvent -LogName System -ComputerName IamWindows2003
Get-WinEvent : There are no more endpoints available from the endpoint mapper
At line:1 char:1
+ Get-WinEvent -LogName System -ComputerName IamWindows2003
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (:) [Get-WinEvent], EventLogException
Both these exceptions are of type [System.Diagnostics.Eventing.Reader.EventLogException].

A third exception is raised by Get-WinEvent when no events are found for the given criteria:

Get-WinEvent -FilterHashtable @{LogName='System';StartTime=(Get-Date).AddHours(-1)}
Get-WinEvent : No events were found that match the specified selection criteria.
At line:1 char:1
+ Get-WinEvent -FilterHashtable @{LoGName='System';StartTime=(Get-Date) ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : ObjectNotFound: (:) [Get-WinEvent], Exception
This looks to me like something that should evolve in PowerShell, since I don't see why returning no events deserves to be treated as an error. Here you can see the source code of Get-WinEvent, which is where the exception is thrown:


As I said at the beginning of this post, we must run Get-EventLog only if Get-WinEvent fails with the exception whose message is 'There are no more endpoints available from the endpoint mapper'. This is how I accomplished it: with Catch, then pattern matching performed with Switch on the exception message:
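A condensed sketch of that Catch/Switch pattern (the Get-EventLog fallback shown here is simplified, and the variable names are illustrative):

```powershell
try {
    # Get-WinEvent first: fast, but fails against pre-2008 systems
    $Result = Get-WinEvent -ComputerName $Computer -LogName System -ErrorAction Stop
}
catch [System.Diagnostics.Eventing.Reader.EventLogException] {
    # switch -regex matches case-insensitively, so 'endpoint' catches the
    # 'There are no more endpoints available from the endpoint mapper' message
    switch -regex ($_.Exception.Message) {
        'RPC'      { Write-Warning "RPC error while communicating with $Computer" }
        'endpoint' { $Result = Get-EventLog -ComputerName $Computer -LogName System -Newest 1000 }
        Default    { Write-Warning "Error retrieving events from $Computer" }
    }
}
```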


There is one thing we really don't want our tool to do: return the same recurring event three thousand times. That would make our report completely useless, because no lazy sysadmin is going to scroll through an endless list of repeated events. Ever.

So the last part of our function is in charge of consolidating all identical events for a server into one, returning just the date of the most recent occurrence.

The logic is to group by event id, then sort by generation time in descending order, then select only the first item.

Here's a screenshot of that part of code. It's pretty easy so I won't go deeper in explaining it:
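The logic above can be sketched as follows, assuming $Result holds the retrieved events:

```powershell
# One row per event id, keeping only the most recent occurrence
$LastUniqueEvents = foreach ($Id in ($Result | Select-Object -Unique -ExpandProperty Id)) {
    $Result |
        Where-Object Id -eq $Id |
        Sort-Object TimeCreated -Descending |
        Select-Object -First 1
}
```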


I have already shown the power of runspaces and of the PoshRSJob module by Microsoft MVP Boe Prox in a recent blog post. Even in the case of a tool for event retrieval, this is the module to have on your administration box. Nesting my function, which in the end I named Get-AdministrativeEvent, into a RunspacePool makes the generation of a report of all my systems run so fast I can't even finish drinking my cup of coffee:

$Report = Start-RSJob -Throttle 20 -Verbose -InputObject ((Get-ADComputer -server dc01 -filter {(name -notlike 'win7*') -AND (OperatingSystem -Like "*Server*")} -searchbase "OU=SRV,DC=Domain,DC=Com").name) -FunctionsToLoad Get-AdministrativeEvent -ScriptBlock {Get-AdministrativeEvent $_ -HoursBack 3 -Credential $using:cred -Verbose} | Wait-RSJob -Verbose -ShowProgress | Receive-RSJob -Verbose

$Report | sort timecreated -descending | Out-GridView


Here's the whole code for the function (which you can also find on my Github).

function Get-AdministrativeEvent {

<#
.SYNOPSIS
The Get-AdministrativeEvent function retrieves the last critical administrative events on a local or remote computer

.EXAMPLE
Get-AdministrativeEvent -cred (Get-Credential domain\admin) -ComputerName srv01 -HoursBack 1

.EXAMPLE
$cred = Get-Credential
Get-AdministrativeEvent -cred $cred -ComputerName srv01 -HoursBack 24 | Sort-Object timecreated -Descending | Out-GridView

.EXAMPLE
'srv01','srv02' | % { Get-AdministrativeEvent -HoursBack 1 -cred $cred -ComputerName $_ } | Sort-Object timecreated -Descending | ft * -AutoSize

.EXAMPLE
Get-AdministrativeEvent -HoursBack 36 -ComputerName (Get-ADComputer -filter *).name | sort timecreated -Descending | Out-GridView

.EXAMPLE
Get-AdministrativeEvent -cred $cred -ComputerName 'srv01','srv02' -HoursBack 12 | Out-GridView

.EXAMPLE
$Report = Start-RSJob -Throttle 20 -Verbose -InputObject ((Get-ADComputer -server dc01 -filter {(name -notlike 'win7*') -AND (OperatingSystem -Like "*Server*")} -searchbase "OU=SRV,DC=Domain,DC=Com").name) -FunctionsToLoad Get-AdministrativeEvent -ScriptBlock {Get-AdministrativeEvent $_ -HoursBack 3 -Credential $using:cred -Verbose} | Wait-RSJob -Verbose -ShowProgress | Receive-RSJob -Verbose
$Report | sort timecreated -descending | Out-GridView

.EXAMPLE
$Servers = ((New-Object -typename ADSISearcher -ArgumentList @([ADSI]"LDAP://,dc=com","(&(&(sAMAccountType=805306369)(objectCategory=computer)(operatingSystem=*Server*)))")).FindAll())
$Report = Start-RSJob -Throttle 20 -Verbose -InputObject $Servers -FunctionsToLoad Get-AdministrativeEvent -ScriptBlock {Get-AdministrativeEvent $_ -Credential $using:cred -HoursBack 48 -Verbose} | Wait-RSJob -Verbose -ShowProgress | Receive-RSJob -Verbose
$Report | Format-Table * -AutoSize
#>

    [CmdletBinding()]
    Param(
        # List of computers
        [Parameter(ValueFromPipeline=$True,Position=0)]
        [string[]]$ComputerName = $env:COMPUTERNAME,

        # Specifies a user account that has permission to perform this action
        [System.Management.Automation.PSCredential]$Credential = [System.Management.Automation.PSCredential]::Empty,

        # Number of hours to go back to when retrieving events
        [int]$HoursBack = 1
    )

    Begin {
        Write-Verbose "$(Get-Date) - Started."
        $AllResults = @()
    }

    Process {

        foreach($Computer in $ComputerName) {

            $Result = $Null
            Write-Verbose "$(Get-Date) - Working on $Computer - Eventlog"
            $StartTime = (Get-Date).AddHours(-$HoursBack)

            try {
                Write-Verbose "$(Get-Date) - Trying with Get-WinEvent"
                # Query only the administrative, system-isolated logs that actually contain records
                $Result = Get-WinEvent -ErrorAction Stop -Credential $Credential -ComputerName $Computer -FilterHashtable @{
                              LogName   = (Get-WinEvent -ComputerName $Computer -ListLog * |
                                               ? {($_.LogType -eq 'Administrative') -and ($_.LogIsolation -eq 'System')} |
                                               ? RecordCount).LogName
                              StartTime = $StartTime
                              Level     = 1,2
                          } |
                          select MachineName,TimeCreated,ProviderName,LogName,Id,LevelDisplayName,Message
            }

            catch [System.Diagnostics.Eventing.Reader.EventLogException] {

                switch -regex ($_.Exception.Message) {

                    "RPC" {
                        Write-Warning "$(Get-Date) - RPC error while communicating with $Computer"
                        $Result = 'RPC error'
                    }

                    "Endpoint" {
                        Write-Verbose "$(Get-Date) - Trying with Get-EventLog for systems older than Windows 2008"

                        try {
                            # Calculated properties translate Get-EventLog property names to the ones Get-WinEvent uses
                            $SysEvents = Get-EventLog -ComputerName $Computer -LogName System -Newest 1000 -EntryType Error -ErrorAction Stop |
                                             ? TimeGenerated -gt $StartTime |
                                             select MachineName,
                                                    @{Name='TimeCreated';Expression={$_.TimeGenerated}},
                                                    @{Name='ProviderName';Expression={$_.Source}},
                                                    @{Name='Id';Expression={$_.EventId}},
                                                    @{Name='LevelDisplayName';Expression={$_.EntryType}},
                                                    Message

                            if($SysEvents) {
                                $Result = $SysEvents
                            }
                            else {
                                Write-Warning "$(Get-Date) - No events found on $Computer"
                                $Result = 'none'
                            }
                        }
                        catch { $Result = 'error' }
                    }

                    Default { Write-Warning "$(Get-Date) - Error retrieving events from $Computer" }
                }
            }

            catch [Exception] {
                Write-Warning "$(Get-Date) - No events found on $Computer"
                $Result = 'none'
            }

            if(($Result -ne 'error') -and ($Result -ne 'RPC error') -and ($Result -ne 'none')) {

                # Consolidate repeated events: keep only the latest occurrence of each event id
                Write-Verbose "$(Get-Date) - Consolidating events for $Computer"
                $LastUniqueEvents = @()
                $Ids = ($Result | select Id -Unique).Id

                foreach($Id in $Ids){
                    $MachineEvents = $Result | ? Id -eq $Id
                    $LastUniqueEvents += $MachineEvents | sort TimeCreated -Descending | select -First 1
                }

                $AllResults += $LastUniqueEvents
            }
        }
    }

    End {
        Write-Verbose "$(Get-Date) - Finished."
        $AllResults
    }
}
Let me know if you have any idea on how to improve it, and if you have any question, do not hesitate to ask.

Monday, February 20, 2017

Ramp up your PowerShell knowledge in 2017 with these books

For once I am going to write a blog post which is not focused on a technical subject, and for a reason. As far as I have been able to observe, there is still a great number of IT administrators who aren't using Windows PowerShell as their main tool for server administration, even though Microsoft has been pushing it with all of its strength. Sometimes the reason is that IT guys simply lack time. Sometimes they can't find motivation in the surrounding environment, because procedures and tools are already in place. Sometimes they are scared by the extent of the change that comes with switching away from GUI-based server administration.


What I think is that often the best starting point for taking on a challenge like learning PowerShell is a good book: a good book helps you move your first steps with the language and then get a solid understanding of its capabilities. It will teach you how it works, how it binds to the operating system, and how you can best benefit from its usage. And it will keep your motivation high by giving real-world examples that you can start using straight away.

Now, if you look, say, on Amazon, you'll find tons of PowerShell books: 662 items across 56 pages, and if you're a novice you might very well get stuck here, because you don't know the authors and can't decide which book is best for you.

That's why I have decided to write a post to mention the books that in 2017 will be a must for the modern system administrator. Most of them are still work-in-progress, but they will be completed in 2017 and will definitely be worth their price, not only because they focus on the most recent PowerShell version (5/5.1), but also because their authors are well-renowned book writers and conference speakers who know not just how to teach a technical subject, but also how to stimulate your interest in the language. What's more, some of them, through their initiatives, have been the real engine behind the widespread adoption of PowerShell as the vital skill to have in 2017: I am thinking for instance of Bruce Payette (principal author of the language) and Don Jones (founder of PowerShell.org), just to name two.

Let's now have a look at this list of books.


With 14 out of 19 chapters already available and publication planned for April 2017, the must-have book in 2017 is the one by Bruce Payette and fellow MVP Richard Siddaway titled Windows PowerShell in Action, Third Edition (ISBN 9781633430297 - $59 for the printed version), which Manning makes available through the MEAP program. This is a program that allows buyers to access book chapters as soon as they are ready, so that the content is not locked up by a long writing and publishing process.

This book, which in its second edition already boasted 983 pages, has the advantage of being the most complete book around on this subject. This third edition covers in depth some pretty hot topics such as PowerShell classes, workflows and Desired State Configuration (aka DSC), and should therefore be used by the system administrator who already uses PowerShell and wants to build a rock-solid knowledge of the language. Furthermore, Bruce Payette being a founder of the language, you will get an insight into many design decisions that the PowerShell team has had to make.

Here's a sample screenshot from the free sample of this book, just to show you the level of detail in its first pages:

It's also worth noticing that Manning made the smart move of providing a forum where you can give feedback on the content of the book as it's being written. You can access it here.


The second must-have book is for sure the one by Don Jones and Jeffery Hicks titled The PowerShell Scripting and Toolmaking Book. We already know Don Jones for being the co-founder of PowerShell.org and the author of the blockbuster Learn Windows PowerShell in a Month of Lunches. Jeffery Hicks is also a well-known author of PowerShell books (whose listing you can find on his blog).

As you can see when you follow the link above, they have chosen LeanPub to publish their book. LeanPub is a platform that allows technical authors to ship chapters in an Agile manner, like you would on a blog, similar to the MEAP program we presented above.

For the moment their book is 80% complete, and they have set a target selling price of $60, but you can freely decide to pay anything between $40 and $120. The price could seem a bit high, especially compared to the book by Bruce Payette and Richard Siddaway, but you can be assured of two things: the first is that Don Jones and Jeffery Hicks are excellent authors who know how to teach a subject. The second is that their book comes as a 'forever edition', meaning that all the updates the authors make to the content in the future are included in that price.

Concerning the content, the book is going to provide you with in-depth information on how to build advanced functions that include professional-grade parameter management, error handling, and built-in help, like real cmdlets. Other hot topics in this book are unit testing with Pester, source control, PowerShell classes, PowerShell Script Analyzer and Just Enough Administration (aka JEA), just to mention a few. It will also teach you other interesting things, such as how to publish to the PowerShell Gallery, as you can see from the screenshot taken from the free sample:

Same as with MEAP, LeanPub gives you the possibility to give feedback to Don Jones and Jeffery Hicks on the content of the book through a specific web page which you can find here.


The third book I want to talk about, which will also be published in 2017, is the one by Mike F Robbins. Mike is a former PowerShell MVP and now a Cloud and DataCenter Management MVP. He has already co-authored books such as PowerShell Deep Dives and Windows PowerShell TFM 4th Edition. Not to mention that he is the winner of the advanced category in the 2013 PowerShell Scripting Games. So he knows his subject, as you can see if you follow his blog.

The title of the book is PowerShell 101 – The No-Nonsense Beginner’s Guide to PowerShell and is published through LeanPub, just like the book by Don Jones and Jeffery Hicks. For the moment the first two chapters have been published and a third one is almost ready, so I am fully confident Mike will be able to complete it by the end of 2017 with highly valuable content.

Concerning the content, my understanding is that this book is aimed at Windows administrators who want to enter the PowerShell arena, with a focus on real-world scenarios: this is a key point that will make the learning process smoother for those moving their first steps away from the GUI. Here's a screenshot from the free sample of the book:

The fact that this book starts with really simple examples, like the one above, doesn't mean you won't find a lot of very good hints on how to improve your scripting skills: Mike has a reputation for writing complex functions that are extremely easy to reuse and, believe me, you won't be disappointed by this book, which has the added value of being sold at a very low price, $11.99.

Like for the other LeanPub books, you can find the feedback page for Mike's book at this link.

If you want to embrace PowerShell and develop your automation skills, these are the books to step up your game and add value to your career in 2017. Just choose one and skyrocket your performance.

Tuesday, February 14, 2017

Using the PoshRSJob module with a PowerShell function to perform a WinRM check up

As I explained in the previous post, I have written a function to test whether the WinRM service on a remote computer is able to accept connections and effectively execute commands. It all started with the finding that in very large environments with mixed OS versions, you can only be assured of the proper functioning of WinRM by trying to execute an actual command on a remote host. Only if the execution of that command succeeds can you safely state that the whole PowerShell remoting subsystem is correctly configured end-to-end.

The WinRM host process in action on the target server

Now you probably know that there are a couple of cmdlets that can be used to test this (I am thinking of Test-WSMan and Connect-WSMan), and that you can use Invoke-Command to run a block of code on a remote computer.

What I wanted to achieve here goes a bit further. I wanted a PowerShell function capable of testing all the possible configurations in a large environment with a high execution speed.

This involves testing things such as the different authentication mechanisms, like Kerberos and NTLM, as well as testing against closed TCP ports, if any.

And this involves including some kind of parallelization.

Now to make a long story short, I have split the tests I perform in a function that I called Test-PSRemoting (whose full code you can find on my GitHub) in five blocks, where each block is accessed in the function through a Switch parameter.

Just to be clear, a Switch parameter is a parameter whose value is False by default and gets set to True when it is specified. Here's a neat example I wrote to show how this kind of parameter works:

function Get-SwitchValue ([switch]$switch1, [switch]$switch2) {
    "Switch1 is $switch1."
    "Switch2 is $switch2."
}

Get-SwitchValue -switch2

Switch1 is False.
Switch2 is True.
So as I said, there are five regions, which are only accessed if the corresponding Switch parameter is set to True.

The first Switch, named $Ping, is for the region of code where a ping is sent to the target server using the System.Net.NetworkInformation.Ping class:
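A minimal sketch of that ping check; the 1000 ms timeout and the 'ok'/'ko' result values are illustrative choices, not necessarily the ones used in the actual function:

```powershell
# Send a single echo request with a short timeout instead of relying on Test-Connection
$Ping  = New-Object System.Net.NetworkInformation.Ping
$Reply = $Ping.Send($ComputerName, 1000)   # timeout in milliseconds
if ($Reply.Status -eq 'Success') { $PingResult = 'ok' } else { $PingResult = 'ko' }
```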

The second Switch, named $Resolve, is used to query the DNS and return the IP address of the target server. This is accomplished with Resolve-DnsName with a query type set to A, so that only the IPv4 address is returned:
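That DNS lookup could be sketched like this (variable names are illustrative):

```powershell
# Query only for A records, so that just the IPv4 address comes back
$IPv4 = (Resolve-DnsName -Name $ComputerName -Type A -ErrorAction Stop).IPAddress
```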

The third Switch, which I named $TCPCheck, is called when you want to check that the TCP ports used by the WinRM service are open on the destination server. As you might know, there are two ports for WinRM:
  • TCP port 5985 is for HTTP traffic and is used when you don't need to authenticate the target server because you can rely on Kerberos and on the Active Directory to authenticate it for you
  • TCP port 5986 is for HTTPS traffic and is used when you can't rely on an Active Directory  domain to authenticate the target server (like when it is in a Workgroup) and therefore you require that that target server identity is confirmed by a certificate issued by a trusted CA
Now, recent PowerShell versions have a native cmdlet for checking open TCP ports which is called Test-NetConnection. This cmdlet is designed in a way that it can be used to check for the standard WinRM port 5985 in a quick manner:
Test-NetConnection srv01 -CommonTCPPort WinRM
The issue is that this cmdlet always seems to ping the remote server before issuing the TCP connection. Since I haven't been able to determine whether using it with the InformationLevel parameter set to Quiet suppresses the ping, I have decided to fall back to Windows sockets through the System.Net.Sockets classes provided by the .NET Framework. This has the important advantage of letting me use AsyncWaitHandle to enforce timeouts shorter than the one Test-NetConnection uses against an unresponsive server:
Measure-Command {([System.Net.Sockets.TcpClient]::new().BeginConnect('nosrv',5985,$null,$null)).AsyncWaitHandle.WaitOne(100,$False)}

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 0
Milliseconds      : 125
Ticks             : 1256004
TotalDays         : 1.45370833333333E-06
TotalHours        : 3.4889E-05
TotalMinutes      : 0.00209334
TotalSeconds      : 0.1256004
TotalMilliseconds : 125.6004

Measure-Command {Test-NetConnection nosrv -port 5985}
WARNING: Name resolution of nosrv failed -- Status: No such host is known

Days              : 0
Hours             : 0
Minutes           : 0
Seconds           : 2
Milliseconds      : 306
Ticks             : 23065325
TotalDays         : 2.66959780092593E-05
TotalHours        : 0.000640703472222222
TotalMinutes      : 0.0384422083333333
TotalSeconds      : 2.3065325
TotalMilliseconds : 2306.5325
As you can see, the Test-NetConnection method is roughly twenty times slower, and, as I have said, speed is one of my main requirements for the function I am writing.

The next region, $Dcom, is where I check whether DCOM can be used to retrieve the operating system of the target server, as well as the name of the Active Directory domain it belongs to. Actually, this is kind of an optional part of my function: there's no link between WinRM and DCOM, but it can always be interesting to know whether you can switch back to DCOM/RPC to query the WMI provider on the remote host. Here's how I use New-CimSessionOption to force my request to go over the DCOM protocol:
$SessionOp = New-CimSessionOption –Protocol DCOM
Also, still with the idea of making my function fast, and robust in case the remote WMI provider is for some reason broken, I use the New-CimSession cmdlet with its OperationTimeoutSec parameter set to 1 second. Here's the block of code for the DCOM check:
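A sketch of that check, assuming Win32_OperatingSystem and Win32_ComputerSystem are the classes queried for the OS caption and the domain name:

```powershell
# Force the CIM session over DCOM and give up quickly if the provider is broken
$SessionOp  = New-CimSessionOption -Protocol DCOM
$CimSession = New-CimSession -ComputerName $Computer -Credential $Credential -SessionOption $SessionOp -OperationTimeoutSec 1 -ErrorAction Stop

$OS     = (Get-CimInstance -CimSession $CimSession -ClassName Win32_OperatingSystem).Caption
$Domain = (Get-CimInstance -CimSession $CimSession -ClassName Win32_ComputerSystem).Domain

Remove-CimSession $CimSession
```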

Now the final block of code. This is the part where I perform the following tests:
  • Test-WSMan with a challenge-response scheme named Negotiate, that allows me to authenticate the account I am using with Kerberos and to switch back to NTLM in case it fails
  • Test-WSMan with Negotiate on port 80, which is the old TCP port used for WinRM on Windows 2003 servers (and I have still a few of them in the place I am using this function)
  • Invoke-Command with Negotiate: in this case, since the cmdlet doesn't have a Timeout parameter, I run it in a background job which I discard after two seconds
  • Test-WSMan with Kerberos authentication
  • Test-WSMan with Kerberos authentication on port 80 for servers running Windows 2003 as the base operating system
  • Invoke-Command with Kerberos authentication
As you can easily understand, the test with Invoke-Command is the most important part of the function since it effectively tries to retrieve the list of running services on the target server over WSMan:
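A hedged sketch of that final test, wrapping Invoke-Command in a background job so it can be discarded after two seconds (as described for the Negotiate case above):

```powershell
# Run the remote command in a job because Invoke-Command has no timeout parameter
$Job = Start-Job -ScriptBlock {
    param($Computer, $Credential)
    Invoke-Command -ComputerName $Computer -Credential $Credential -Authentication Kerberos -ScriptBlock {
        Get-Service | Where-Object Status -eq 'Running'
    }
} -ArgumentList $Computer, $Credential

if (Wait-Job $Job -Timeout 2) { $Services = Receive-Job $Job }
Remove-Job $Job -Force
```

If the job hasn't completed within the timeout, it is simply removed and the test is considered failed.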

Now that was for the Test-PSRemoting function.

Concerning the parallelization of the execution, I have gone down a few roads: first of all I tried to build a quick and dirty RunspacePool, but soon discovered that its implementation is so developerish that it goes well beyond what a system administrator should be expected to know and understand. In the end I decided to take the easy path and reuse a module written and maintained by fellow MVP Boe Prox, which adds a layer of abstraction over the underlying runspaces and makes them easy to use for the classic system administrator. The name of the module is PoshRSJob and you can find it here.

To install this module just run:

Install-Module -Name PoshRSJob
The module has the following cmdlets:

Get-Command -Module PoshRSJob

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Function        Get-RSJob                                    PoshRSJob
Function        Receive-RSJob                                PoshRSJob
Function        Remove-RSJob                                 PoshRSJob
Function        Start-RSJob                                  PoshRSJob
Function        Stop-RSJob                                   PoshRSJob
Function        Wait-RSJob                                   PoshRSJob
These cmdlets can be used in a oneliner fashion, just by piping Start-RSJob into Wait-RSJob and then into Receive-RSJob:

Start-RSJob -InputObject ('1/1/2017','2/2/2017') -ScriptBlock { Get-Date $_ } | Wait-RSJob | Receive-RSJob

Sunday, January 1, 2017 12:00:00 AM
Thursday, February 2, 2017 12:00:00 AM
Before you use my function, it is important to understand how this module accesses variables that should be available in the runspace. This is done through $Using, as in the example below, where I add one day to a given date:

$One = 1
Start-RSJob -InputObject ('1/1/2017','2/2/2017') -ScriptBlock { (Get-Date $_).AddDays($Using:One) } | Wait-RSJob | Receive-RSJob

Monday, January 2, 2017 12:00:00 AM
Friday, February 3, 2017 12:00:00 AM
It is also important to force Start-RSJob to evaluate any function you want to use in your parallel execution. This is done through the FunctionsToLoad parameter, which in my case I use to load the Test-PSRemoting function.

The last hint about this module is that you should make heavy use of the Verbose parameter to follow whatever is happening in your runspaces, and also add a nice and useful progress bar via the ShowProgress switch of the Wait-RSJob cmdlet.

So let's now see a few examples of how I use the PoshRSJob module to make the execution of my Test-PSRemoting function lightning fast. In the first example I retrieve all the Windows server names from Active Directory and check whether they respond to ping and can be reached via Invoke-Command, returning only those that actually passed this last test and saving their data to a CSV file:

$Report = Start-RSJob -Throttle 20 -Verbose -InputObject ((Get-ADComputer -server dc01 -filter {(name -notlike 'win7*') -AND (OperatingSystem -Like "*Server*")} -searchbase "OU=SRV,DC=Domain,DC=Com").name) -FunctionsToLoad Test-PSRemoting -ScriptBlock {Test-PSRemoting $_ -Ping -Kerberos -Credential $using:cred -Verbose} | Wait-RSJob -Verbose -ShowProgress | Receive-RSJob -Verbose

$Report | ? Remoting_Kerberos -eq 'ok' | convertto-csv -Delimiter ',' -NoTypeInformation | Out-File C:\WinRM-after-gpo.csv
In the second example I use an ADSISearcher to query an old Windows 2003 domain to retrieve all the Windows servers, and then I try all the different blocks of code (ping, name resolution, TCP check, NTLM, Kerberos) to return, in a table, only the servers that actually responded to ping:
$Servers = ((New-Object -typename ADSISearcher -ArgumentList @([ADSI]"LDAP://,dc=com","(&(&(sAMAccountType=805306369)(objectCategory=computer)(operatingSystem=*Server*)))")).FindAll())

$Report = Start-RSJob -Throttle 20 -Verbose -InputObject $Servers -FunctionsToLoad Test-PSRemoting -ScriptBlock {Test-PSRemoting -Ping -Resolve -TCPCheck -DCOM -Negotiate -Kerberos $_ -Credential $using:cred -Verbose} | Wait-RSJob -Verbose -ShowProgress | Receive-RSJob -Verbose

$Report | ? ping -eq 'ok' | format-table * -AutoSize
You can of course imagine any kind of grouping of your results with Group-Object, or you could print the results in a dynamic table with Out-GridView. What you do will depend on your needs.

Kudos to Boe for the PoshRSJob module. If you have any question on the function I wrote, or if you want to improve it, feel free to get in touch with me.