Thursday, January 23, 2014

Passed exam 74-409 on Server Virtualization with Hyper-V and System Center 2012 R2

Today I took my first exam of the year, on Microsoft server virtualization, and passed with quite a good score (942/1000). This earned me the certification "Microsoft Certified Specialist: Server Virtualization with Hyper-V and System Center".

My thoughts:
  • The exam itself was not too difficult: if you have at least some hands-on experience with the Microsoft System Center suite and if you are already (like me) a VMware Certified Professional (VCP), you'll find that a strong grounding in virtualization concepts helps a lot. As Brad Anderson explained on his blog last October, there is an increasing demand for Hyper-V experts, and being comfortable with both the Microsoft and VMware platforms can boost your career.
  • I learned a lot from the labs I built to prepare for this exam, since the exam presents a lot of real-life situations. Labs, labs, labs!
  • I was very happy to see so many PowerShell questions on this exam, which confirms (if you didn't already know) that my favorite administration language has become a central pillar of the whole Microsoft strategy.
  • There are a few tricks in the wording, and you may be misdirected toward points that are not essential to the question being asked, so be careful.
  • The two-day show by Symon (@SymonPerriman) and Cory (@holsystems) on MVA is your best bet for success, since they cover most of the exam topics.
  • There is a virtual lab for the exam on TechNet.
  • Last but not least, Bjorn Houben has collected lots (all of them?) of resources on his blog. Check it out!

Good luck to everybody sitting this exam in the future!

Friday, January 17, 2014

First steps in Windows Azure with Powershell

A few days are left before the start of the first official event of the 2014 PowerShell Winter Scripting Games. In the meantime I have decided to write a blog post on taking your first steps in a Windows Azure environment with PowerShell. This can be particularly useful if you want to take part in the Games and still don't have an environment with PowerShell 4.0 installed, since you can create a few Windows Server 2012 R2 VMs with the following quick steps.

By way of introduction, know that there is growing interest in cloud technologies these days, and Microsoft has answered the need for outsourced infrastructures with the possibility to run your infrastructure as a service (IaaS) in its cloud datacenters. Windows Azure is the name Microsoft gave back in 2008 to its cloud application platform, which became generally available in February 2010.

Microsoft is offering a one-month free trial, so activate your subscription, then download and install the latest version of Windows Azure PowerShell.

The package will install a new module for Azure:
Get-Module Azure -ListAvailable

    Directory: C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell

ModuleType Name                  ExportedCommands
---------- ----                  ----------------
Binary     Azure                 {Disable-AzureServiceProjectRemoteDesktop, E
At this point you don't even need to import the module: starting with PowerShell 3.0 there is a module auto-load feature, so just type the cmdlet you need and PowerShell will load the Azure module for you.
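For example, the following works in a fresh session without any explicit Import-Module (a small sketch of the auto-load behavior):
Get-AzureSubscription   #referencing any Azure cmdlet triggers the auto-load
Get-Module Azure        #the Azure module now shows up among the imported modules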

As general information, the latest version of the Azure module comes with 242 cmdlets. The most common nouns in this module are the following:
gcm -Module azure | group noun | sort count -desc | select name, count -first 10
Name                               Count
----                               -----
AzureVM                                9
WAPackVM                               9
AzureWebsite                           8
AzureService                           6
AzureDeployment                        5
AzureVMImage                           5
AzureSqlDatabaseServerFirewallRule     4
AzureAclConfig                         4
AzureStorageAccount                    4
AzureVNetGateway                       4
The next step is to configure Windows Azure Active Directory authentication in PowerShell with Add-AzureAccount (this is much easier than using the combination of Get-AzurePublishSettingsFile and Import-AzurePublishSettingsFile). For basic usage, this cmdlet takes no parameters: it just opens a browser dialog box asking for the Microsoft account that you registered to manage your subscriptions.
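In its simplest form the call looks like this (a browser-based logon window will pop up):
Add-AzureAccount   #binds the Microsoft account you log on with to the current session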

You can check that you have properly bound to your account with Get-AzureAccount:

Name                              ActiveDirectories
----                              -----------------
                                  {{ ActiveDirectoryTenantId = 5e649293-9...
and with Get-AzureSubscription:

SubscriptionName           : Visual Studio Ultimate con MSDN
SubscriptionId             : xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
ServiceEndpoint            :
ActiveDirectoryEndpoint    :
ActiveDirectoryTenantId    : 5e649293-9842-111a-b2ab-d234d8cc5f54
IsDefault                  : True
Certificate                :
CurrentStorageAccountName  : 
CurrentCloudStorageAccount :
ActiveDirectoryUserId      :
As you can see in the output of the last cmdlet, the CurrentStorageAccountName is empty. If you tried to build a new virtual machine at this moment you would get the following error:
New-AzureQuickVM : CurrentStorageAccountName is not accessible. Ensure the current storage account is accessible and in the same location or affinity group as your cloud service.
The solution is to associate your Azure storage account with your subscription prior to deploying any new VM:
Set-AzureSubscription -SubscriptionName (Get-AzureSubscription).SubscriptionName -CurrentStorageAccount (Get-AzureStorageAccount).label
Once your subscription is filled in with all the proper information, nothing is easier than using New-AzureQuickVM to create and provision your new Windows Azure virtual machines.

Here's the syntax for this cmdlet as extracted from the help:
man New-AzureQuickVM



    New-AzureQuickVM -ImageName <String> -Linux -LinuxUser <String> -Password <String> -ServiceName <String>
    [-AffinityGroup <String>] [-AvailabilitySetName <String>] [-DnsSettings <DnsServer[]>] [-HostCaching <String>]
    [-InstanceSize <String>] [-Location <String>] [-MediaLocation <String>] [-Name <String>] [-SSHKeyPairs
    <SSHKeyPairList>] [-SSHPublicKeys <SSHPublicKeyList>] [-SubnetNames <String[]>] [-VNetName <String>]

    New-AzureQuickVM -AdminUsername <String> -ImageName <String> -Password <String> -ServiceName <String> -Windows
    [-AffinityGroup <String>] [-AvailabilitySetName <String>] [-Certificates <CertificateSettingList>]
    [-DisableWinRMHttps] [-DnsSettings <DnsServer[]>] [-EnableWinRMHttp] [-HostCaching <String>] [-InstanceSize
    <String>] [-Location <String>] [-MediaLocation <String>] [-Name <String>] [-NoExportPrivateKey] [-NoWinRMEndpoint]
    [-SubnetNames <String[]>] [-VNetName <String>] [-WaitForBoot] [-WinRMCertificate <X509Certificate2>]
    [-X509Certificates <X509Certificate2[]>] [<CommonParameters>]

    The New-AzureQuickVM cmdlet sets the configuration for a new virtual machine and creates the virtual machine. It
    can create a new Windows Azure service, or deploy the new virtual machine into an existing service if neither
    -Location nor -AffinityGroup is specified.


    To see the examples, type: "get-help New-AzureQuickVM -examples".
    For more information, type: "get-help New-AzureQuickVM -detailed".
    For technical information, type: "get-help New-AzureQuickVM -full".
    For online help, type: "get-help New-AzureQuickVM -online"
Before we deploy a new test virtual machine, I want to make a short digression on the VM sizes you can choose from. There are eight possible sizes, ranging from Extra Small to A7:

As you can see in the image above, Small is the minimum recommended size for a Production VM (and with the free trial you can run two of them for a whole month), while Large is the minimum for a SQL node.

Keep in mind that the monthly cost for each of these virtual machine sizes is different, so plan according to your budget:
  • Extra Small: 11€ per VM per month
  • Small: 49€
  • Medium: 99€
  • Large: 199€
  • A5: 221€
  • Extra Large: 398€
  • A6: 443€
  • A7: 886€

Let's move on to provisioning your first VM with just a one-liner:
New-AzureQuickVM -Windows -ServiceName 'cloudoftheday' -Name 'cloudvm01' -ImageName (Get-AzureVMImage | Where Label -Like "Windows Server 2012 R2 Datacenter")[-1].ImageName -AdminUsername 'happysysadm' -Password 'VerySecurePassw0rd' -InstanceSize Small -Location 'west europe'
Once the cmdlet ends, your VM is already available for use, with the OS installed and RDP and WinRM enabled on their standard ports. Public ports for these services are also opened through a Port Address Translation (PAT) mechanism.
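If you want to check which public ports were mapped, a quick sketch (using the service and VM names from the one-liner above) is:
Get-AzureVM -ServiceName 'cloudoftheday' -Name 'cloudvm01' | Get-AzureEndpoint
#Shows the RDP and WinRM endpoints with their local and public (PAT) ports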

Now, I haven't been successful in using the New-AzureQuickVM cmdlet against an existing service. The cmdlet fails with the following error message:
"New-AzureQuickVM : ResourceNotFound: The deployment name 'existingcloud' does not exist."
It must be a bug, since in the cmdlet help it says that I can specify an existing service name:
Specifies the new or existing service name.
So for the moment you have to stick to creating a new service name each time you run this cmdlet...

As a side note, I am deploying my VMs using 'West Europe' as the location (the 'West Europe' datacenter is in Amsterdam). The Microsoft public cloud offers various possible locations and you should choose the one nearest to you, as listed by Get-AzureLocation:
Get-AzureLocation

AvailableServices    : {Compute, Storage, PersistentVMRole, HighMemory}
DisplayName          : East Asia
Name                 : East Asia
OperationDescription : Get-AzureLocation
OperationId          : f89f3076-6587-59ba-b9ff-4e47e4658c82
OperationStatus      : Succeeded

AvailableServices    : {Compute, Storage, PersistentVMRole, HighMemory}
DisplayName          : Southeast Asia
Name                 : Southeast Asia
OperationDescription : Get-AzureLocation
OperationId          : f89f3076-6587-59ba-b9ff-4e47e4658c82
OperationStatus      : Succeeded

AvailableServices    : {Compute, Storage, PersistentVMRole, HighMemory}
DisplayName          : North Europe
Name                 : North Europe
OperationDescription : Get-AzureLocation
OperationId          : f89f3076-6587-59ba-b9ff-4e47e4658c82
OperationStatus      : Succeeded

AvailableServices    : {Compute, Storage, PersistentVMRole, HighMemory}
DisplayName          : West Europe
Name                 : West Europe
OperationDescription : Get-AzureLocation
OperationId          : f89f3076-6587-59ba-b9ff-4e47e4658c82
OperationStatus      : Succeeded

AvailableServices    : {Compute, Storage, PersistentVMRole, HighMemory}
DisplayName          : East US
Name                 : East US
OperationDescription : Get-AzureLocation
OperationId          : f89f3076-6587-59ba-b9ff-4e47e4658c82
OperationStatus      : Succeeded

AvailableServices    : {Compute, Storage, PersistentVMRole, HighMemory}
DisplayName          : West US
Name                 : West US
OperationDescription : Get-AzureLocation
OperationId          : f89f3076-6587-59ba-b9ff-4e47e4658c82
OperationStatus      : Succeeded
Now the question that comes to my mind is whether I can deploy several VMs at the same time using a workflow in PowerShell 4.0. Let's find out. I have prepared the following workflow as an example:
WorkFlow Deploy-AzureVm {
    param(
        [Int]$Quantity = 5,
        [String]$ImageName = "Windows Server 2012 R2 Datacenter",
        [String]$ServiceName,
        [String]$Prefix,
        [String]$AdminUsername,
        [String]$Password,
        [String]$InstanceSize
    )

    #Retrieving image name in Windows Azure repository
    $Image = (Get-AzureVMImage | Where Label -Like $ImageName)[-1].ImageName

    "Deploying first VM with Location parameter"
    New-AzureQuickVM -Windows -ServiceName $ServiceName `
        -Name "$Prefix" -ImageName $Image `
        -AdminUsername $AdminUsername -Password $Password -InstanceSize $InstanceSize -Location "west europe"

    foreach -parallel ($VM in 2..$Quantity){
        "Deploying VM $Prefix$VM"
        New-AzureQuickVM -Windows -ServiceName $ServiceName `
            -Name "$Prefix$VM" -ImageName $Image `
            -AdminUsername $AdminUsername -Password $Password -InstanceSize $InstanceSize
    }
}

Once I run it I get the following problem:
Deploy-AzureVm -Quantity 3 -ImageName "Windows Server 2012 R2 Datacenter" -ServiceName "WorkflowCloud007" `
>> -Prefix "VMcloud16Jan" -AdminUsername 'happysysadm' -Password 'VerySecurePassw0rd' -InstanceSize Small
Deploying first VM with Location parameter

PSComputerName        : localhost
PSSourceJobInstanceId : de553192-bd99-4c46-be19-505f0007e819
OperationDescription  : New-AzureQuickVM
OperationId           : 2438374c-egb1-529d-af64-ce5b879ee3e1
OperationStatus       : Succeeded

PSComputerName        : localhost
PSSourceJobInstanceId : de553192-bd99-4c46-be19-505f0007e819
OperationDescription  : New-AzureQuickVM
OperationId           : 2438374c-egb1-529d-af64-ce5b879ee3e1
OperationStatus       : Succeeded

Deploying VM VMcloud16Jan3
Deploying VM VMcloud16Jan2
New-AzureQuickVM : ConflictError: Windows Azure is currently performing an operation with x-ms-requestid
123a90e4325857d986b25524987b7d9b on this deployment that requires exclusive access.
At Deploy-AzureVm:20 char:20
    + CategoryInfo          : CloseError: (:) [New-AzureQuickVM], CloudException
    + FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ServiceManagement.IaaS.PersistentVMs.NewQuickVM
    + PSComputerName        : [localhost]

PSComputerName        : localhost
PSSourceJobInstanceId : de553192-bd99-4c46-be19-505f0007e819
OperationDescription  : New-AzureQuickVM
OperationId           : 2438374c-egb1-529d-af64-ce5b879ee3e1
OperationStatus       : Succeeded
It looks like operations such as virtual machine provisioning or deletion keep exclusive access to the deployment engine and don't allow parallel virtual machine setup in the same Windows Azure environment. I haven't found a solution to this, which is sad, because I liked the idea of deploying a whole IaaS in just one PowerShell workflow. Still, I am sure the technology under the hood is making huge steps forward, and I would expect a feature like PowerShell workflow to be fully leveraged in the first major release of the Azure module. Meanwhile we have to stick to serial execution of our VM provisioning, which is nonetheless straightforward.
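Until then, a plain serial loop gets the job done. Here's a minimal sketch (service names, VM names and credentials are placeholders mirroring the examples above; it creates a separate service per VM, which also sidesteps the existing-service error seen earlier):
#Fallback: provision the VMs serially, one cloud service per VM
$Image = (Get-AzureVMImage | Where Label -Like "Windows Server 2012 R2 Datacenter")[-1].ImageName
foreach ($i in 1..5) {
    New-AzureQuickVM -Windows -ServiceName "serialcloud0$i" -Name "serialvm$i" -ImageName $Image `
        -AdminUsername 'happysysadm' -Password 'VerySecurePassw0rd' -InstanceSize Small -Location 'west europe'
}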

Performance-wise, the speed of deploying a new VM in the cloud is pretty stunning, with the VHD disks deployed in a matter of seconds and the VM started in a couple of minutes:
Measure-Command -Expression {
New-AzureQuickVM -Windows -ServiceName 'cloudofthedayxx001' -Name 'abc01' -ImageName (Get-AzureVMImage | Where Label -Like "Windows Server 2012 R2 Datacenter")[-1].ImageName -AdminUsername 'carlo' -Password 'VerySecurePassw0rd' -InstanceSize Small -Location 'west europe'
}

Days              : 0
Hours             : 0
Minutes           : 1
Seconds           : 12
Milliseconds      : 422
Ticks             : 724220786
TotalDays         : 0,000838218502314815
TotalHours        : 0,0201172440555556
TotalMinutes      : 1,20703464333333
TotalSeconds      : 72,4220786
TotalMilliseconds : 72422,0786
That's all for this first post on Windows Azure and the Windows Azure PowerShell module. I hope you have found the subject fascinating; if so, please share it and leave feedback. Stay tuned for more, and good luck with the Games if you're in (I hope you are).

Thursday, January 9, 2014

Filtering left and error trapping in Powershell

The Scripting Games have started and I am pleased to see that a few teams have already published interesting approaches to the first test event. Nonetheless, I see that some people still fall into some of the common mistakes beginners make.

In this blog post, I want to shed some light on two of these common mistakes.

The first one is not really a mistake but a bad habit. Let's see why. In the first test event you are asked to perform some fairly complex computer inventory tasks. As you know, inventorying Windows-based computers generally relies on WMI queries through the Get-WmiObject cmdlet:
Get-WmiObject -Class Win32_Service -ComputerName (Get-Content serverlist.txt)
Some people naïvely tend to remove unwanted objects from the pipeline using the Where-Object cmdlet.
But, and this is very important, when you are retrieving a lot of information from many remote servers and piping the resulting objects to Where-Object, you can run into performance problems due to the huge amount of data your workstation has to analyze and keep or discard.
The tip here (which Don Jones named 'Filter Left' in his training) is to make use of the -Filter parameter, which is available on many PowerShell cmdlets:
Get-Command -ParameterName filter

CommandType     Name
-----------     ----
Cmdlet          Add-Content
Cmdlet          Clear-Content
Cmdlet          Clear-Item
Cmdlet          Clear-ItemProperty
Cmdlet          Copy-Item
Cmdlet          Copy-ItemProperty
Cmdlet          Get-Acl
Cmdlet          Get-ChildItem
Cmdlet          Get-Content
Cmdlet          Get-Item
Cmdlet          Get-ItemProperty
Cmdlet          Get-Job
Cmdlet          Get-WmiObject
Cmdlet          Get-WSManInstance
Cmdlet          Invoke-Item
Cmdlet          Move-Item
Cmdlet          Move-ItemProperty
Cmdlet          New-ItemProperty
Cmdlet          Remove-Item
Cmdlet          Remove-ItemProperty
Cmdlet          Remove-Job
Cmdlet          Rename-ItemProperty
Cmdlet          Resume-Job
Cmdlet          Set-Acl
Cmdlet          Set-Content
Cmdlet          Set-Item
Cmdlet          Set-ItemProperty
Cmdlet          Stop-Job
Cmdlet          Suspend-Job
Cmdlet          Test-Path
Cmdlet          Wait-Job
Using this -Filter parameter boosts your WMI query performance by letting the WMI service on the remote server itself do the work of filtering the information.
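For WMI, the filter is expressed as a WQL clause. For instance, to keep only the running services on the remote side instead of piping everything to Where-Object (a sketch, with serverlist.txt being the same hypothetical input file as above):
Get-WmiObject -Class Win32_Service -Filter "State = 'Running'" -ComputerName (Get-Content serverlist.txt)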

A quick script allows us to verify this statement, even on local queries:
$FilterDuration = [int](Measure-Command -Expression {Get-ChildItem -Path $env:windir\system32 -Filter *.dll}).TotalMilliseconds
$WhereDuration = [int](Measure-Command -Expression {Get-ChildItem -Path $env:windir\system32 | Where-Object Extension -eq ".dll"}).TotalMilliseconds
"Filtering with -Filter: $FilterDuration ms"
"Filtering with Where-Object: $WhereDuration ms" 

Filtering with -Filter: 239 ms
Filtering with Where-Object: 803 ms 
There you go: unsurprisingly we get about 240 milliseconds for -Filter versus more than 800 milliseconds for Where-Object.

Let me move on to the second mistake, which is not knowing that you have to explicitly set the -ErrorAction parameter to Stop in your WMI queries if you want to trap errors using the Try/Catch construct.

This is because some exceptions returned by WMI queries aren't terminating errors.

For example, the following script returns a big red error message, as if the Catch {} block were skipped:
Try {
    Get-WmiObject -Class "Win32_PhysicalMemory" -ComputerName Ghostserver
}
Catch {
    "WMI query failed..."
}
Get-WmiObject : The RPC server is unavailable. (Exception from HRESULT: 0x800706BA) 
While the following one properly catches the error and shows the predefined error message:
Try {
    Get-WmiObject -Class "Win32_PhysicalMemory" -ComputerName Ghostserver -ErrorAction Stop
}
Catch {
    "WMI query failed..."
}
The major advantage of turning your WMI errors into terminating exceptions and writing nicely composed error messages to the host is that you show you are in control of your code, and that your scripts are designed well enough to cope with unforeseen issues.
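One way to compose such a message while still surfacing the underlying exception text is a sketch like this:
Try {
    Get-WmiObject -Class "Win32_PhysicalMemory" -ComputerName Ghostserver -ErrorAction Stop
}
Catch {
    #$_ holds the current error record inside the Catch block
    "WMI query against Ghostserver failed: $($_.Exception.Message)"
}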

Hope this helps.

Wednesday, January 1, 2014

I am a Coach for the 2014 Winter Scripting Games

The 2014 Powershell Winter Scripting Games are just about to start. I am proud to say that I have been selected by 2013 Scripting Games winner and Head Coach Mike F Robbins to act as an expert Coach for this edition and I will do my best to offer constructive feedback to all teams that post their script files to the Scripting Games website.
Here's a quick list of things you have to know and/or keep in mind during these Games:
  • The site of the Games is
  • The rules of the games are in this PDF file.
  • There will be a total of 4 official events for these Winter Scripting Games: January 19th, January 26th, February 2nd, & February 9th. The official schedule is here.
  • For official news, regularly check out the Scripting Games announcement Category
  • MVP Richard Siddaway will be the Head Judge. Richard also authored the practice event and event 4, titled 'Monitoring Support'.
  • MVP Mike F Robbins will be the Head Coach.
  • Event 1, titled 'Pairs' has been authored by Ed Wilson.
  • Event 2, titled 'Security footprint' has been authored by Lee Holmes. 
  • Event 3, titled 'Acl, Cacl, Toil and Trouble' has been authored by Jeff Wouters.
  • You cannot play alone: the events are complex and challenging, so try to think more like colleagues and less like individuals.
  • You have to build a team of 2-6 people, or join an existing one.
  • The system can help you find a team based on your time zone.
  • Make sure your team has submitted only one entry when the deadline comes (all times UTC).
  • Registration and team formation for the 2014 Winter Scripting Games will begin on January 2nd, which is in a few hours, and a practice event will take place.
  • Regularly upload your team's latest version of the script to the Scripting Games website, so that Coaches like me can offer commentary in the private in-Game discussion thread.
  • Coaches won't be able to help if you don't post your entries during the week before the event closes.
  • Coaches' comments are flagged so teams can easily spot them.
  • Get a test environment and make sure you are running the latest version of PowerShell (which is 4.0), as suggested in the FAQ.
  • Have a look at this interesting read on the appropriate use of comments.
  • Read my latest post on the use of hashtables, dictionaries and objects.
  • Boe Prox does an excellent job of explaining the best way to make your code neat and clear, through the use of variable that make sense, proper error handling, and comment-based help. Check it out here.
  • Modular development is encouraged.
  • Check out this interactive Git intro, if you want to use GitHub as your collaboration tool.
 People to follow on Twitter during the Games:
... as well as the list provided by Mike on his blog (Introducing the Coaches of the 2014 Winter Scripting Games) and the Twitter hashtag #pshgames.

Remember: it’s a terrific occasion to learn PowerShell techniques that will help you master the language. Have fun. Good luck!