Friday, December 16, 2011

Setting up an account for the antivirus agent on a NetApp

When you install an antivirus server such as Trend Micro ServerProtect or McAfee VirusScan Enterprise for Storage, or a backup agent (such as HP Data Protector), and you want to connect it to your NetApp, one action is necessary to allow the Trend or ePO agent to communicate with the filer.

This action is to set up a user account which can bypass file security in order to scan or back up the shared files wherever they are stored on the NetApp qtrees.

So, the first step is to create an Active Directory user account named yourdomain\youravuser (if you don't have one already). Then you have to add yourdomain\youravuser to the local Backup Operators group on the NetApp.

The commands to use are shown below.
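On a Data ONTAP 7-mode filer this boils down to something like the following (a hedged sketch; yourdomain\youravuser is the placeholder account from above):

useradmin domainuser add yourdomain\youravuser -g "Backup Operators"

You can then check the result with:

useradmin domainuser list -g "Backup Operators"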

Tuesday, December 13, 2011

Celebrating 20 years of Linux

I'll be celebrating 20 years of Linux with
The Linux Foundation!

Understanding Windows Services Recovery features

As you probably know, Windows has the ability to automatically perform a predefined action in response to the failure of a Windows service. The Recovery tab in the service property page lets you define the actions that the system has to perform on first failure, second failure, and subsequent failures.

Valid options are "Take No Action", "Restart the Service", "Run a Program", and "Restart the Computer".

In my case I have configured my test Trend ServerProtect service to restart after the first and the second failure; a system reboot is then executed the next time this service fails.

To test this I have written a basic batch script which repeatedly kills the service. Doing so I have discovered that, with the default settings, Windows always performs the action defined for the first failure (in my case my Trend ServerProtect test service is restarted) and never moves on to the subsequent actions.
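For reference, a minimal sketch of such a kill loop (assuming the service short name is spntsvc, as shown later; taskkill's SERVICES filter targets the process hosting the service, and timeout requires Vista/2008 or later):

@echo off
:loop
rem Kill the process that hosts the spntsvc service
taskkill /F /FI "SERVICES eq spntsvc"
rem Give the Service Control Manager time to apply the 60-second recovery action
timeout /T 70
goto loop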

Furthermore I see that the event log always reports the same diagnostic message, even for recurring service failures:

Log Name:      System
Source:        Service Control Manager
Date:          07/12/2011 10:54:25
Event ID:      7031
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      servername
Description:
The Trend ServerProtect service terminated unexpectedly.  It has done this 1 time(s).  The following corrective action will be taken in 60000 milliseconds: Restart the service.


The "It has done this 1 time(s)" sentence looks problematic to me because I am recursively killing this service and the failure counter should increase.

If I double-check the recovery parameters with sc.exe, I am happy with the output:

sc qfailure spntsvc
[SC] QueryServiceConfig2 SUCCESS

SERVICE_NAME: spntsvc
   RESET_PERIOD (in seconds)    : 0
   REBOOT_MESSAGE               :
   COMMAND_LINE                 :
   FAILURE_ACTIONS              : 
     RESTART -- Delay = 60000 milliseconds.
     RESTART -- Delay = 60000 milliseconds.
     REBOOT -- Delay = 60000 milliseconds.

So, why does the failure counter not increase? Clearly it looks like there is a bug in the way the Service Control Manager reads or interprets the parameters I have set.

After deep investigation, and after many searches throughout technet.microsoft.com, I found that setting the "Reset fail count after:" option to 0 means that the failure counter will not be stored at all. So I had completely misunderstood its meaning. At first I was lost for words when I discovered that this parameter did not do what I expected of it.

Anyway, once you know that keeping this option set to 0 disables both the "second failure" and "subsequent failures" actions, the solution is pretty simple: set its value to 1 (or whatever you like) and you'll get the desired behavior upon service failure (in my case the server will restart upon the third failure).
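The same change can be scripted with sc.exe. A sketch for my test service (the space after each equals sign is required by sc.exe; reset= takes seconds, so 86400 corresponds to setting "Reset fail count after:" to 1 day in the GUI):

sc failure spntsvc reset= 86400 actions= restart/60000/restart/60000/reboot/60000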

I hope this post will help you and, if so, do not hesitate to comment!

Friday, December 2, 2011

VMware vMotion and the CPU incompatibility problem

By default, VirtualCenter only allows migrations with vMotion between compatible source and destination CPUs. So if you have been trying to move a VM from one host to another and got stuck with an error message telling you that the CPU of your destination host is incompatible with the CPU configuration of your virtual machine, this usually means one of the following:

a) you did not mask the NX/XD bit in the settings of the VM or...
b) you did not enable the "No-Execute Memory Protection" on both your source host and destination host or...
c) you did not have your cluster of ESX hosts configured for Enhanced VMotion Compatibility (EVC)

The complete error message you get is:

CPU of the host is incompatible with the CPU feature requirements of virtual machine;
Problem detected at CPUID level 0x80000001 register edx 



There are different methods to get past this blocking point.
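For case a), the NX/XD bit can be masked per VM. A hedged sketch, not taken from the original post: power off the VM, edit its settings, and under Options > CPUID Mask choose to hide the NX/XD flag from the guest (or, in the Advanced dialog, zero bit 20 of register edx at level 0x80000001). In the .vmx file the resulting mask looks something like this (illustrative only):

cpuid.80000001.edx = "----:----:---0:----:----:----:----:----"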

Friday, November 18, 2011

Disabling Auto Restart After Windows 7 Update

I have recently chosen Windows 7 as the operating system for my home NAS. After testing FreeNAS, Ubuntu, Linux Mint, Windows XP and Windows 2003 Server, I must say that there are many reasons that pushed me to this difficult choice, and there are a lot of advantages to using Windows 7 as a file server.

However I am not going to talk about this right now.

What I wanted to talk about is NAS availability: if there is one thing I expect from my NAS, it is to be on and ready to serve all the time. Not a minute less.

Unfortunately this Microsoft OS too often tries, and sometimes succeeds, in rebooting my home server in the middle of the night, when I least expect it. And this is something I don't like at all, not even to apply the critical security patches that the clever people at Big Brother Microsoft have crafted for me.

That's why I want to share a not-so-secret hint on how to stop Windows Update from restarting your system once and for all. It is an easy, painless method which, as usual under Windows, consists of adding a registry key.
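A hedged sketch of the standard way to do this (presumably the key the full post describes): create the documented NoAutoRebootWithLoggedOnUsers value under the Windows Update policy key, e.g. from an elevated prompt, and Windows Update will no longer force a restart while a user is logged on:

reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f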

Tuesday, November 15, 2011

Windows 2008 R2 folder security issue and UAC

It is incredible how many Windows system administrators have been impacted by the introduction of UAC in Windows 2008 R2. These days I have been asked how to solve general issues with folder security in 2008 R2. These issues weren't present in previous Windows versions such as Windows 2000/2003, which is why many of us were surprised by new, unknown behaviors.

In particular, people were facing a situation in which, on some folders or drives, when opening the Properties window as a member of the local Administrators group and selecting the Security tab, they had to click 'Continue' before they could see the folder's NTFS permissions.

The particular message shown was: 

"To continue, you must be an administrative user with permission to view this object's security properties. Do you want to continue?" 

and they were supposed to click the 'Continue' button.

If they explicitly granted the very same user account Full Control access to the folder, the NTFS permissions showed up without any further hassle.

In the same context, they got an 'Access Denied' error on the same folders even though they were members of the local Administrators group. Enabling auditing on these folders showed many 4656 events indicating that their access was not granted.

If you have this problem also, the solution is simple: lower UAC to 0, following the procedure I have posted here:

How to disable UAC

How to disable UAC for System Administrators only
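For reference, a hedged sketch of what "lowering UAC to 0" boils down to under the hood: setting the EnableLUA value to 0 and rebooting (use with care, since this disables UAC system-wide):

reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v EnableLUA /t REG_DWORD /d 0 /f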

UAC is a major change (or 'improvement' if you wish..) in Windows 2008 R2, but it can be a real obstacle to everyday sysadmin tasks. So getting rid of it can sometimes be the only possible solution.

Do not hesitate to comment if you find this post useful or if you want to share your point of view on UAC.

Tuesday, November 8, 2011

Disabling automatic KMS to DNS publishing

If for some reason you want to stop your Windows 2008 R2 KMS server from publishing its Resource Record (RR) to the DNS every day, you have to use the built-in Software Licensing Management Tool (slmgr.vbs).

To do so, open an elevated command prompt on the KMS server and run:

slmgr /cdns

A pop-up will appear telling you to restart the KMS service:



From the same elevated command prompt, run the following command to restart the KMS Service:

Net Stop sppsvc && Net Start sppsvc

If you are running your KMS service on an older Windows version (not R2), run the following command instead (the service executable has been renamed in Windows 2008 R2... don't know why...):

Net stop slsvc && Net start slsvc

Now there are two ways to check that your KMS server has stopped trying to register its Resource Record in the DNS.

The first one is to open the Registry and check that the DisableDnsPublishing DWORD value has been added under:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform

The value of this key has also been set to 1.

The second way to check that KMS publishing to DNS is off is with the command:

slmgr.vbs /dlv
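Should you ever want to re-enable automatic publishing, slmgr has the symmetric switch (followed, again, by a restart of the KMS service):

slmgr /sdns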
 
 

I hope this solution helped you. If you have any questions or comments, do not hesitate to post.

Friday, November 4, 2011

Cluster Validation Error due to duplicate NIC GUIDs

If you are running a Windows 2008 R2 Failover Cluster, you may see the following error when running the Failover Cluster Validation tests:

The report reads:

Validate Windows Firewall Configuration
Validate that the Windows Firewall is properly configured to allow failover cluster network communication.
Validating that Windows Firewall is properly configured to allow failover cluster network communication.
An error occurred while executing the test.
There was an error verifying the firewall configuration.
An item with the same key has already been added.

The last line is telling us that two elements have the same value. These elements are the network adapters, and the offending values are the adapter GUIDs. These GUIDs should be unique, but if you have cloned your servers, or if your cluster servers are cloned VMware virtual machines, this error might occur.

To solve this issue, start a PowerShell session, then run the following command on every cluster node and compare the GUIDs of your network adapters:
 
Get-WmiObject Win32_NetworkAdapter | format-list Name,GUID

You should see that the Network adapters have the same GUID on different servers.

If this is your case, uninstall all of your network adapters in Device Manager on all the cluster members except one (but first note your IP address configuration!). Reboot them, re-run the PowerShell command, and you should find that your network adapters are back with brand-new GUIDs (thanks, Plug and Play!).

Re-run the Cluster Validation Report and everything should be OK.

Please leave a comment if this post helped you!

Wednesday, November 2, 2011

Setting DisableStrictNameChecking in Windows 2008 R2

I recently faced a problem whereby I had to install a Windows 2008 R2 failover cluster server and make a CNAME alias point to it, but I was unable to get to the CNAME network share from remote clients.

Fortunately this wasn't a difficult problem to solve, as I was aware of the existence of the DisableStrictNameChecking registry key from previous Windows versions. This key tells the server to allow inbound connections which are not explicitly directed to its main hostname; the strict checking it disables is a protective feature, not a bug.

So, to loosen security a bit and allow proper network access to a Windows server using a DNS alias, fire up an elevated command prompt, type regedit and navigate to the following registry key:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters


Right-click Parameters, click New, and then click DWORD (32-bit) Value.

Type DisableStrictNameChecking and press ENTER.

Double-click the DisableStrictNameChecking registry value and type 1 in the Value data box, click OK and close the Registry Editor.
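If you prefer the command line, this hedged one-liner is equivalent (note that the Server service may need a restart, or the box a reboot, before the change takes effect):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v DisableStrictNameChecking /t REG_DWORD /d 1 /f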

This should solve your issue with accessing a Windows 2008 R2 server with a CNAME.

Thursday, October 27, 2011

Windows 2003: extending the Schema to R2 for DFS-R

Recently I have been trying to install a DFS Replication Group on two brand new Windows 2008 R2 Enterprise boxes belonging to a pretty old Windows 2003 Active Directory Domain.

Nothing especially tricky in this activity, apart from the fact that the AD Schema must be extended before we can define a new replication group. This is due to the fact that DFS-R stores its configuration info in the domain partition. The aim of this blog post is to share my quick procedure for doing it, in case somebody should face the same situation, as I am sure there are still many Windows 2003 domains around.

First thing: don't panic. The Schema extension is pretty straightforward and it doesn't require you to reboot any of your precious DCs. You can do it without actually upgrading the operating system on your DCs.

Just keep in mind that some attributes will be added to your Active Directory in order for DFS-R to work. These are, for instance:
  1. msDFSR-DfsPath
  2. msDFSR-ReplicationGroupGuid

If you don't update the Schema, you won't be able to set up any replication group, and you will receive the following error when trying to create one:

"domain.com: The Active Directory schema on domain controller DC1.domain.com cannot be read. This error might be caused by a schema that has not been extended, or was extended improperly. See Help and Support Center for information about extending the Active Directory schema. A class schema object cannot be found."
DFS-R R2 error when the Schema has not been updated
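For completeness, a hedged sketch of the extension itself (from memory, so double-check against your media): log on to the Schema Master as a member of the Schema Admins group, insert Windows Server 2003 R2 Disc 2 and run adprep from it, for example:

D:\CMPNENTS\R2\ADPREP\adprep.exe /forestprep

This raises the schema version to 31 and adds the msDFSR-* classes and attributes mentioned above.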

Monday, July 11, 2011

Solid State Drives, some theory and a selection of videos

Today I have been looking for information about SSD disks because I am probably interested in using them for my future home-made NAS solution. Having found this SSD topic pretty interesting, I have decided to write a post about it.

Let's start from the acronym: SSD means Solid State Drive, a new technology which is spreading very fast and finally reaching the end user. In general terms, an SSD can be defined as a device which stores your data (as well as your operating system, of course) in a semiconductor device known as flash memory, with no mechanical parts (no moving heads or spinning disks).


Thanks to their construction, SSDs have rock-solid advantages over standard mechanical hard disk drives. These advantages can be summarized as follows:
  • No spin-up time
  • Extremely low random access time (about 0.1ms)
  • Consistent read time throughout the SSD (while on a HDD if the data is written in a fragmented way, read ops will have varying response times)
  • Zero defragmentation
  • No noise (great for a home NAS)
  • Very light (SSDs typically come in a 2.5" form factor with SATA connectors)
  • Lower power consumption (Excellent for the environment and for your monthly bill)
  • Unaffected by magnetic fields
  • Very robust
As you can see, some of these advantages are exactly what can be found on everybody's wishlist for a consumer NAS, that is no noise, low power consumption (for instance only 2.5 watts for the Corsair Force GT 120GB - 0.6 watts when in standby) and really high performance. As an example, let's have a look at some scores:
  • The Intel 510 Series 250 GB is by far the fastest SSD around, with 476MB/s read throughput and 325MB/s for write operations. Such speeds require a SATA III interface of course, since SATA II is limited to 300MB/s.
  • The Crucial RealSSD M4 256 GB is also a fast model, with 310MB/s for read ops and 273 for write ops.
  • The Plextor PX-128M2S can read at 287MB/s and write at 195MB/s.
  • Other models are a little slower, with an average read throughput of 220MB/s and an average write throughput of 155MB/s. These scores are in any case much higher than those of mechanical hard drives, which have an average read throughput of 105MB/s and an average write throughput of 103MB/s, plus the extra spin-up time.
There are of course some disadvantages to this new technology, which SSD designers and manufacturers are trying to work around.

Thursday, June 30, 2011

Installing Linux Mint 11

If you are a beginner computer user wishing to learn something different from Windows 7, and don't want to wait for Windows 8 next year to improve your computer knowledge, then you could probably be interested in Linux. Yes, Linux, you heard right. Linux is a powerful operating system which Windows users sometimes hesitate to install because of its mystical aura of an OS for nerds and geeks. But this is not true today. No more. New Linux distributions are quite easily installed and run without ever touching their obscure features (the kernel, the terminal and so on).

Today many Linux distributions exist. Some are harder to use, some are definitely easier (maybe even easier than Windows, I daresay). Some are for the IT expert wishing to have full control over the installation (like Slackware, Fedora or Debian, the grandpa of Ubuntu), some are oriented to please the common person using a personal computer for Internet browsing and listening to music.

Easy desktop distros are, for instance, Ubuntu (mainly for its wide hardware compatibility and its ease of installation) or Linux Mint (mainly due to its familiar GNOME interface).

If I were to define in a few lines the Linux distributions as I see them today, I would say:

  • Ubuntu 11.04, Mandriva and Linux Mint are for real beginners
  • Fedora 15 and Slackware 13.37 are for skilled geeks
  • Puppy Linux 5.2.5 or Xubuntu 11.04 (based on Xfce) are better for installation on older hardware
  • Linux Mint 11 or Ubuntu 11.04 are good for your home computer
  • Jolicloud 1.2 or MeeGo 1.2 are good for your brand new netbook
  • Debian 6.0.1 is for sysadmins
  • OpenSUSE 11.4 is the right one for office automation
  • CentOS 5.6 is good for enterprise servers and web servers
  • Ubuntu Studio 11.04 or PureDyne 9.11 are for your multimedia station and for creativity

None of these distros is perfect, but they will fulfill various purposes, as you will learn by using them. Picking a first Linux distribution isn't always easy, so I have chosen for you: install Linux Mint 11. The reason for this choice is that Linux Mint is the Linux distribution of the moment, having just pushed Ubuntu (and its Unity interface) out of DistroWatch's No. 1 spot.

Monday, June 27, 2011

Unix joke

Having just published a joke about the Windows drag & drop feature, I feel obliged, in fairness, to publish a funny joke about Unix/Linux OSes too:

Unix admin asking for a sandwich


I hope you like it! :-)

Drag and drop...

I really couldn't resist publishing this funny joke about Windows' best-known feature: drag and... drop!

A Windows admin troubleshooting Windows 2008

Windows sysadmins will surely understand! :o)))

Saturday, June 18, 2011

Clean up Winsxs on Windows 2008 R2 after SP1 install

Last year I wrote a post where I explained what the Winsxs folder is and what the possible solutions are to contain its bad habit of eating free space on your hard drive. Some days ago I discovered that, starting from Service Pack 1, Windows 2008 R2 (...and Windows 7) finally has a built-in tool to reduce the size of the Windows Side-by-Side DLL repository and free up some GBs on your server storage. This tool is DISM.exe.

Cool news isn't it? Personally I am happy to know that someone at Microsoft has finally decided to make it possible to reclaim a few GBs on the system partition and to partially solve this major bug.

The procedure is the following:
  • Install Service Pack 1, then...
  • Start an elevated command prompt (run 'CMD' as administrator) and...
  • Run the DISM command, which replaces the old VSP1CLN and COMPCLN we used on previous Windows versions: DISM.exe /online /Cleanup-Image /spsuperseded
  • Wait about ten minutes for the task to complete (it ends with "Service Pack Cleanup operation completed. The operation completed successfully")
Normally you should be able to reduce the Winsxs folder size by 1 or maybe 2 GB, sometimes more. The saved space may vary a lot.

Just know that after using DISM you will not be able to uninstall the Service Pack 1 anymore.

Let's have a look at the used switches for DISM.exe:
  • The /online switch tells DISM to work on the running OS installation
  • The /spsuperseded option removes the backup files created during installation. 
Optionally you could use the /hidesp option which will remove SP1 (KB976932) from the “Installed Updates” section of Programs and Features, to ensure that users do not try to uninstall the Service Pack.

I hope this helps. Please let me know how much disk space you were able to free up using the given command.

Friday, June 10, 2011

Using Ethtool to configure your nic on CentOS

We are introducing a number of CentOS machines and I am often asked by regional IT people how they can check and change the configuration of their network cards to match the configuration of the attached network switches. That made me think of this post (my first one on CentOS), in which I will explain how you can determine the current Ethernet link speed of your CentOS system.

To do that you can take advantage of ethtool, an easy utility that can be used to display and/or change the settings of your Ethernet network cards.

In the following example I will assume that you are willing to change the parameters for your first network card (usually eth0).

The syntax to show the NIC parameters is very simple, just enter:

ethtool eth0

and look for the Speed parameter:

Supported ports: [ TP ]
Supported link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes: 10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Half 1000baseT/Full
Advertised auto-negotiation: Yes
Speed: 100Mb/s
Duplex: Half
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: on
Supports Wake-on: g
Wake-on: d
Current message level: 0x000000ff (255)
Link detected: yes
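As for actually changing the settings to match the switch port, ethtool -s does the job. A sketch, assuming you want to force eth0 to 100Mb/s full duplex (the change is lost at reboot):

ethtool -s eth0 speed 100 duplex full autoneg off

To make it persistent on CentOS, add an ETHTOOL_OPTS line to the interface configuration file, for example in /etc/sysconfig/network-scripts/ifcfg-eth0:

ETHTOOL_OPTS="speed 100 duplex full autoneg off"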

Thursday, June 9, 2011

DFS Target refers to a location that is unavailable

Today I have encountered a strange problem with some of my stand-alone DFS targets. Many users running old Windows versions, such as Windows XP pre-SP2, were no longer able to browse DFS file shares after I had updated some referrals to reflect an infrastructure change (new folder targets) happening at my company.

The funny thing is that most of the end-users running Windows XP SP2 or Windows 7 had no problem at all in browsing the DFS links from their Windows Explorer.

This problem pushed me to dig inside DFS behavior, design and architecture more than I had ever done before.

After checking that no alerts were reported server-side, I went to one old XP box and tried to browse the DFS path \\dfsserver\root\link. The error I got was a generic "\\dfsserver\root\link refers to a location that is unavailable"...



I then tried to map the DFS link using the good old "net use", hoping for an error code a little more specific... but all I got was "System error 2 has occurred. The system cannot find the file specified".

Monday, June 6, 2011

Closing network files on a remote fileserver with PSFile

These days I am migrating some data from our old network file server to new network storage. The plan is to migrate one folder at a time, and I have found that in such a situation it can be useful to know how to close all the open files in a specific directory before migrating it, in order to avoid open-file conflicts.

As I do not want to migrate all the data at once (this would be pretty impractical with so many gigabytes of data), I cannot simply adopt the solution of shutting down the fileshare or restricting access to it for everyone.

PSFile.Exe from SysInternals is our best friend in this case. Using this small utility, it is possible to retrieve all the open files in a given remote directory and close them altogether.

This is the way it should be used:

psfile \\fileserver.yourcompany.com "t:\folder\subfolder" -c

Monday, May 30, 2011

Taking file ownership

There are many reasons you could need to set file and folder ownership on a Windows file server. In my case I had to take care of file ownership because I have been migrating our users' home folders to a NetApp volume with user quotas set. A quota is intended to limit the amount of disk space and the number of files that a particular user or group can consume. As far as I have understood, the NetApp quota application mechanism is not based on home folder size but, and this is new to me, on real file ownership.

In fact, as stated on the NetApp website, quota calculation of NTFS qtrees is always allocated to the user’s Windows SID. This means that the NetApp is aware of all the files that belong to a user no matter where they are located on the volume. So, even if these files are scattered about your file system and not located in a single place, the NetApp will be able to tell you exactly how much space is allocated to a Windows user (through his SID) via the "quota report" command.

Unfortunately, in my case I had robocopied all the contents and ACLs from our old Windows file server to a brand new NetApp filer and discovered that the NetApp wasn't reporting any user quotas. This was because I had not copied the owner flag when I used Robocopy, so the filer reported that every file was owned by builtin\administrators... and no user quotas were enforced...

File ownership tab under Windows Security
After a short investigation I found out that I had to re-apply the correct file ownership for the filer to be aware of real user quota usage.
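As a hedged sketch of the kind of bulk fix involved (V:\homes and the account names are hypothetical), SubInACL from the Windows Resource Kit can re-assign ownership recursively:

subinacl /subdirectories "V:\homes\jsmith\*" /setowner=YOURDOMAIN\jsmith

Alternatively, re-running Robocopy with /COPY:DATSO (O = owner) or with /SECFIX should carry the original owners over instead of stamping builtin\administrators everywhere.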

Monday, May 23, 2011

Critical Patch for ESX and ESXi 3.5 Hosts

If you are the happy owner of ESX or ESXi 3.5 boxes, the time has come to apply a specific patch to your hosts in order to be able to continue updating them after June 1st, 2011.

The patch is not exactly the same for ESX or ESXi hosts.

For ESX apply patch ESX350-201012410-BG as indicated here. This patch for ESX hosts is a small one with a size of 11KB and it does not require you to reboot anything.

For ESXi, the situation is different. You have to install a bigger (238 MB) patch (found here) named ESXe350-201012401-I-BG, and to do so all the VMs on the ESXi host must be shut down or migrated to another host. A reboot of the ESXi host is also mandatory.

Again, this patch MUST be installed to continue patching hosts after June 1st, 2011.

Welcome to a virtual world with physical reboots.

Friday, May 20, 2011

How to enable DNS for DFSN Referrals

I have recently discovered that when you set up a DFSN path on a Windows 2008 R2 server, clients get the referrals to the linked shares with the short NETBIOS name instead of the FQDN of a fileserver.

You are probably thinking "What's the matter with that?"....

Well, this is not an issue if you have a small organization, but if you are the sysadmin for an international company with a big Active Directory forest and sites scattered around the world, it could happen that your DNS infrastructure is made up of many different suffixes (referring for instance to geographical locations such as italy.yourcompany.com or argentina.yourcompany.com).

In these cases it is unlikely that every workstation in your company has all of the tens or hundreds of DNS suffixes in its search list. So, if you try to mount a stand-alone DFS namespace from a different site, you could get this confusing error, event id 1002:

Configuration information could not be read from the domain controller, either because the machine is unavailable, or access has been denied.
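The usual fix for this (a hedged sketch of the documented DfsDnsConfig setting, presumably where this post was heading) is to tell each namespace server to hand out FQDN-based referrals instead of NetBIOS ones, then restart the DFS service:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\Dfs\Parameters" /v DfsDnsConfig /t REG_DWORD /d 1 /f
net stop dfs && net start dfs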

Wednesday, May 4, 2011

Ebook : WMI Query Language via PowerShell

I have just finished reading this excellent ebook about using and running WMI queries inside Powershell:


As mentioned in the author's blog post, this ebook has the following 9 chapters:
  1. Introduction
  2. Tools for the job
  3. WMI Data queries
  4. WMI Event Queries: Introduction
  5. Intrinsic Event Queries
  6. Extrinsic Event Queries
  7. Timer Events
  8. WMI Schema Queries
Happy reading, Windows sysadmins!

How to backup files to a FTP server using cURL

To easily back up files to your local FTP server directly from your bash shell, you have one simple option which requires very little work: just use cURL.

cURL, which you can find here, is a command line tool for transferring data using FTP, FTPS, HTTP, HTTPS, TELNET, TFTP and many more protocols.

# Configuration section
HOST="my_ip_address"
USER="username"
PASS="password"
PATH_REMOTE="/folder/distant"
FILENAME="/folder/local/filename"

# Data transfer: -T uploads the file, -u passes the credentials.
# Note the explicit ftp:// scheme (without it curl would assume HTTP)
# and the trailing slash, so the local filename is appended remotely.
curl -T "$FILENAME" -u "$USER:$PASS" "ftp://$HOST$PATH_REMOTE/"

This script can be useful to back up data to another server if used together with a tar/gzip command that takes care of preparing the archive to send. Once you have created the archive, just cURL the newly created gzipped tar file up to a different server running FTP.
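For instance, a minimal sketch of the combination (all paths and credentials are illustrative):

# Pack the folder, then upload the resulting archive in one go
tar czf /tmp/backup.tar.gz /folder/local && \
curl -T /tmp/backup.tar.gz -u "username:password" "ftp://my_ip_address/folder/distant/"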

Nothing better than a simple solution.

Thursday, April 21, 2011

Analyse Robocopy logs with Powershell

I have often had to analyse many robocopy logs and have wasted a lot of time running through huge text files. That's why I wanted to share here a script I coded to run through logfiles generated by robocopy.exe and print out just a summary of the useful information. The logfiles to analyse are generated using the following robocopy switch:

/LOG+:C:\log_robocopy\log_robocopy_01.txt

Here's the script. It takes in all the robocopy logfiles and prints out statistics about failed file copies for each of them. The output report can be easily read.
$files = Get-ChildItem \\servername\c$\log_robocopy\*.txt
foreach ($file in $files)
{
    Write-Host "Working on" $file
    # Echo the job header lines as they appear in the log
    Select-String $file -Pattern " Started : "
    Select-String $file -Pattern "Source : "
    Select-String $file -Pattern "Dest : "
    # Parse the "Files :" summary line: collapse runs of whitespace, then split into columns
    $pattern1 = Select-String $file -Pattern "Files :  "
    $pattern1 = $pattern1.ToString() -replace '\s+', ' '
    $pattern2 = $pattern1.Split(" ")
    Write-Host "Total files on source:`t" $pattern2[3]
    Write-Host "Total files copied:`t`t" $pattern2[4]
    Write-Host "Total files skipped:`t" $pattern2[5]
    Write-Host "Total files failed:`t`t" $pattern2[7]
    # Parse the "Bytes :" summary line: pad unitless zero fields first so the column indexes stay aligned
    $pattern3 = Select-String $file -Pattern "Bytes : "
    $pattern3 = $pattern3.ToString() -replace " 0 ", " 0 m "
    $pattern3 = $pattern3 -replace '\s+', ' '
    $pattern4 = $pattern3.Split(" ")
    Write-Host "Total bytes on source:`t" $pattern4[3] $pattern4[4]
    Write-Host "Total bytes copied:`t`t" $pattern4[5] $pattern4[6]
    Write-Host "Total bytes skipped:`t" $pattern4[7] $pattern4[8]
    Write-Host "Total bytes failed:`t`t" $pattern4[11] $pattern4[12]
    # Count the occurrences of the most common robocopy error codes
    $error1 = @(Select-String $file -Pattern "0x00000002")
    Write-Host "File not found errors :" $error1.Count
    $error2 = @(Select-String $file -Pattern "0x00000003")
    Write-Host "Path not found errors :" $error2.Count
    $error3 = @(Select-String $file -Pattern "0x00000005")
    Write-Host "Access denied errors :" $error3.Count
    $error4 = @(Select-String $file -Pattern "0x00000006")
    Write-Host "Invalid handle errors :" $error4.Count
    $error5 = @(Select-String $file -Pattern "0x00000020")
    Write-Host "File locked errors :" $error5.Count
    $error6 = @(Select-String $file -Pattern "0x00000035")
    Write-Host "Network path not found errors :" $error6.Count
    $error7 = @(Select-String $file -Pattern "0x00000040")
    Write-Host "Network name unavailable errors :" $error7.Count
    $error8 = @(Select-String $file -Pattern "0x00000070")
    Write-Host "Disk full errors :" $error8.Count
    $error9 = @(Select-String $file -Pattern "0x00000079")
    Write-Host "Semaphore timeout errors :" $error9.Count
    $error10 = @(Select-String $file -Pattern "0x00000033")
    Write-Host "Network path errors :" $error10.Count
    $error11 = @(Select-String $file -Pattern "0x0000003a")
    Write-Host "NTFS security errors :" $error11.Count
    $error12 = @(Select-String $file -Pattern "0x0000054f")
    Write-Host "Internal errors :" $error12.Count
    Select-String $file -Pattern "Ended : "
    Write-Host "============================="
    Start-Sleep 2
}

I hope this helps. For more information about robocopy error codes have a look here.

Please leave a comment if this script was useful to you or if you would like to suggest an improvement.

Monday, March 28, 2011

Transparent Memory Sharing on Intel Xeon 5500

Last week I was checking the performance of some recently installed Nehalem-based ESX servers in our server farm and found out that poor memory sharing was happening at VM level, and also that memory consumption at ESX level was much higher than before the hardware change.


This surprised me because I know that one of the main advantages of virtualization is memory page sharing. To give a little technical background: memory page sharing is a memory management technique where the hypervisor analyses and hashes all memory in the system and stores the hashes in a table. Every hour (but this is configurable), the hashes are compared and, if identical hashes are detected, a further bit-by-bit comparison is performed. Then, if the pages have the same content, just a single copy is kept and the memory of multiple VMs is mapped to this shared page. The standard size for these pages is 4KB.

This is what VMware calls Transparent Page Sharing (TPS) and, as you can see from the following screenshot, it is not taking place anymore for any of my migrated VMs.


After some investigation I discovered that this uncommon behaviour is due to the fact that my new ESX servers run on the Intel Xeon 5570 CPU, a member of the relatively new Nehalem-based Xeon 5500 processor family. This family of processors implements Extended Page Tables (EPT), Intel's second-generation x86 virtualization technology for the Memory Management Unit (MMU).

For EPT to be really effective, large memory pages are used (2048KB instead of the usual 4KB), but such big memory pages make it really unlikely that two identical pages of memory are found and therefore shared between different VMs.

This is why I am not seeing any memory page sharing taking place. So, what now that TPS is gone? Well, TPS is not completely dead. It is still there, sitting behind the curtains and waiting for the right memory usage conditions to kick in. In fact, happily enough, VMware does not let the performance benefits of the 2048KB page size come at the price of the old TPS mechanism: under heavy memory pressure (i.e. memory commit = 100%), ESX will break the large 2MB memory pages down into smaller 4KB memory pages so that TPS can effectively kick in and begin sharing memory.

So it looks like I just have to get used to this new behaviour and accept it. If this new mechanism can bring better performance without forcing the good old TPS to resign, well, it is warmly welcome!

For more information about EPT and the Intel Virtualization Architecture, check the Intel website.

If you found this article useful or if you want to share your opinion about this new behaviour, do not hesitate to leave a note or a comment!

Tuesday, March 22, 2011

Robocopy error codes

If you want a tool for copying or moving big folders containing many files between different filers, Robocopy is the best solution (in terms of security and reliability) that I have seen. The only problem is the number of different possible error codes which can show up in your logfiles. That's why I have decided to sum them up here in a post for easier troubleshooting:
  • ERROR 2 (0x00000002) The system cannot find the file specified.
  • ERROR 3 (0x00000003) The system cannot find the path specified.
  • ERROR 5 (0x00000005) Access is denied.
  • ERROR 6 (0x00000006) The handle is invalid.
  • ERROR 32 (0x00000020) The process cannot access the file because it is being used by another process.
  • ERROR 51 (0x00000033) Scanning Destination Directory: Windows cannot find the network path. Verify that the network path is correct and the destination computer is not busy or turned off. If Windows still cannot find the network path, contact your network administrator.
  • ERROR 53 (0x00000035) The network path was not found.
  • ERROR 58 (0x0000003A) Copying NTFS Security to Destination File:  The specified server cannot perform the requested operation
  • ERROR 64 (0x00000040) The specified network name is no longer available.
  • ERROR 112 (0x00000070) There is not enough space on the disk.
  • ERROR 121 (0x00000079) The semaphore timeout period has expired.
  • ERROR 1359 (0x0000054F) Scanning Source Directory:  An internal error occurred.
You could then use 
type *.log | find /i "0x00000005"
to retrieve all the access denied errors.


Or 
type *.log | find /i "0x00000070"
to retrieve all the errors due to insufficient disk space on the destination.

I hope you will find this resource useful!

Monday, March 21, 2011

Event id 1219

These days I have been struggling with a Windows 2003 box giving me error event 1219 each time I tried to log on via RDP. It was some kind of weird problem which wasn't present when I logged on at the local console, nor on other Windows XP workstations.

I really did not know what to do, so I even decided to reinstall it. Unfortunately this didn't solve my problem.

Today I have finally found a working solution to this annoying event:

Event Type: Error
Event Source: Winlogon
Event Category: None
Event ID: 1219
Date: 21/03/2011
Time: 15:19:20
User: N/A
Computer: servername
Description:
Logon rejected for domain\username. Unable to obtain Terminal Server User Configuration. Error: The specified domain either does not exist or could not be contacted.

The solution is pretty hard to find, so I reckon it is a good idea to share it:
  • Open the registry (Start/Run/Regedit)
  • Navigate to HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server
  • Add a new DWORD value named IgnoreRegUserConfigErrors
  • Set its value to 1 (decimal).
  • Close the registry (no need to restart the server)
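Equivalently, from an elevated command prompt (same key and value as in the steps above):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server" /v IgnoreRegUserConfigErrors /t REG_DWORD /d 1 /f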
Now try to RDP into the server: everything should work perfectly.

I hope this helps. Please do not hesitate to comment if this post was useful!

vCloud Connector

I have just found and gone through an excellent VMware slideshow on vCloud Connector. For those of you who don't know about it, vCloud Connector is a free VMware virtual appliance and vSphere plug-in that lets users view virtual machines in their private and public cloud infrastructures and move them between the two.

Here's the link to the slideshow @ cinetica.it:


Happy reading!

IE9 offline installers for Vista and 7

If you are running Windows 7 or Windows Vista on your personal computer, you may want to download Internet Explorer 9 (IE9), which was released a week ago by Microsoft.

IE9's main new features are its support for the brand new HTML5 video and audio tags and for the Web Open Font Format, as well as CSS3, SVG 1.1 and ECMAScript 5, the latest version of the standard implemented by JavaScript and JScript.

All these improvements are the reason why I have decided to share the links to download it:
Before downloading, please note that there is one big difference between the 32-bit and the 64-bit version of IE9: IE9 includes a new script interpreter (codename Chakra) which is much faster than the script interpreter in IE8. Furthermore, the 32-bit version of IE9 also includes a Just-In-Time (JIT) script compiler which converts script into machine code before running it. There is no such JIT compiler for the 64-bit version of IE, which brings poorer performance for this version.

Friday, March 4, 2011

Managing Windows services with Service Control (SC)

How many times have you wanted to restart a Windows service and couldn't remember the command line to do it? Here's a little memento for Service Control (SC) to use in these circumstances.

Let's start with the command that will output the shortname from the display name:
SC GETKEYNAME "print spooler"
[SC] GetServiceKeyName SUCCESS  Name = Spooler
The other way around, to get the display name from the short service name type:
SC GETDISPLAYNAME spooler
[SC] GetServiceDisplayName SUCCESS  Name = Print Spooler
If you don't know the display name nor the short name for a service, just run the following command to get a list of all the existing services with their state:
SC QUERY state=all | FINDSTR "DISPLAY_NAME STATE"
...then check out the name of the service you are interested in. Once you know the exact service name, you can perform reconfiguration as follows.

To change the startup mode for a service (automatic, manual or disabled) use:
SC CONFIG ServiceName start= [auto|demand|disabled]
This command will directly modify the DWORD value named 'Start' in the registry under HKLM\SYSTEM\CurrentControlSet\Services\ServiceName (note that sc.exe requires the space after 'start=').

For Win32 services the possible values in the registry are:
  • 2 for automatic
  • 3 for manual
  • 4 for disabled
For example, the Print Spooler service configuration is registered under HKLM\SYSTEM\CurrentControlSet\services\Spooler. If the startup mode is automatic, the value of the Start key is 2; if it is manual, the value is 3.

Now, to start a service:
SC START ServiceName
And to stop it:
SC STOP ServiceName
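Note that SC STOP merely sends the stop request and returns immediately, without waiting for the service to actually stop. For a quick synchronous stop-and-start I usually fall back on the net commands, e.g. for the spooler:

net stop spooler && net start spooler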
Other useful switches you might want to know for Service Control are:
  • query: Queries the status for a service, or enumerates the status for types of services.
  • queryex: Queries the extended status for a service, or enumerates the status for types of services.
  • start: Starts a service.
  • pause: Sends a PAUSE control request to a service.
  • interrogate: Sends an INTERROGATE control request to a service.
  • continue: Sends a CONTINUE control request to a service.
  • stop: Sends a STOP request to a service.
  • config: Changes the configuration of a service (persistent).
  • description: Changes the description of a service.
  • failure: Changes the actions taken by a service upon failure.
  • sidtype: Changes the service SID type of a service.
  • qc: Queries the configuration information for a service.
  • qdescription: Queries the description for a service.
  • qfailure: Queries the actions taken by a service upon failure.
  • qsidtype: Queries the service SID type of a service.
  • delete: Deletes a service (from the registry).
  • create: Creates a service (adds it to the registry).
  • control: Sends a control to a service.
  • sdshow: Displays a service's security descriptor.
  • sdset: Sets a service's security descriptor.
  • showsid: Displays the service SID string corresponding to an arbitrary name.
  • GetDisplayName: Gets the DisplayName for a service.
  • GetKeyName: Gets the ServiceKeyName for a service.
  • EnumDepend: Enumerates Service Dependencies.
I hope this article was helpful!

Monday, February 28, 2011

How to reset SEPM 11 administrator password

Here's the procedure to reset the password for Symantec Endpoint Protection Manager on a Windows 2008 server. First, what you must know is that the SEPM security policy is set to lock the 'admin' account after 5 wrong passwords are entered. The 'admin' account is then kept locked for 15 minutes, after which it unlocks itself automatically. If you have entered 5 wrong passwords, two solutions are available: the first one is to quietly sit in your chair, sip your coffee and wait for the 15 minutes to pass. The second one consists of one simple procedure:
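For reference, the method I know of for SEPM 11 (hedged; double-check the path on your installation) is to run the resetpass.bat script shipped with the product, which resets the 'admin' password back to the default 'admin':

"C:\Program Files\Symantec\Symantec Endpoint Protection Manager\Tools\resetpass.bat"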

Thursday, February 24, 2011

Scheduled task error 0x8007000d

Each time I deploy Windows 2003 from an image and run sysprep, the existing scheduled tasks fail and I get the following error when trying to access each task's properties:

---------------------------
Task Scheduler
---------------------------
General page initialization failed.
The specific error is:
“0x8007000d: The data is invalid. An error has occurred
attempting to retrieve task account information.
You may continue editing the task object, but will be
unable to change task account information.

When I press OK, I notice that the Run As line is left blank and that the Run line is dimmed. As this situation still recurs today, I have decided to share the solution I have found, for future use.
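The blank Run As line suggests the tasks' stored account information does not survive sysprep, so the credentials have to be re-entered on every task. One hedged way to script that with schtasks.exe (MYTASK, DOMAIN\user and the password are placeholders):

schtasks /change /tn "MYTASK" /ru DOMAIN\user /rp P@ssw0rd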

Tuesday, February 22, 2011

How to extract RAR archives in Ubuntu 10.10

RAR is an archive file format that supports multi-file spanning, data compression and error recovery. As a matter of fact, even in Ubuntu 10.10 unrar is not pre-installed, for licensing reasons (read: RAR support is not freely available).

This problem arises when, trying to open a rar archive, you get an error message like this:
Cannot open «rar_archive.rar»

Archive type not supported.
So, before uncompressing a rar archive in Ubuntu you will need to install an application called unrar (http://packages.ubuntu.com/karmic/unrar).
To install unrar go to Terminal and fire the following command:
sudo apt-get install unrar
Now, in your terminal session, navigate to where your RAR file is stored and run this command to extract the archive:
unrar x rar_archive.rar
.. or just run this other command to list files inside your rar archive:
unrar l rar_archive.rar 
I hope this helped you !
