Monday, December 31, 2012

How to normalize special characters in filenames for Owncloud

This year is almost over. During the holidays I have spent a few hours working on Owncloud, the new on-premise cloud solution (which I strongly suggest you try).

The idea behind it is to let people access their personal data from any internet access point (be it your PDA, an internet café, your Android mobile phone or your work PC) and to share files, folders, pictures, movies and documents with other people (friends or colleagues) through a nice and swift web interface.
Owncloud web interface
The problem many people are facing is that file and folder names containing special characters (such as é, à, ç, ù, ü) are handled very badly by Owncloud when it is installed on a Windows platform (be it Windows 7, Windows 2008 or even Windows 2012).
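
Before going into the details, here is the kind of normalization I am talking about: a quick Powershell sketch that strips the accents from file and folder names so that Windows and Owncloud agree on them. This is not necessarily the final fix described in the post, just an illustration ('D:\OwncloudData' is a made-up path, and you should test on a copy of your data first):

# Sketch: strip diacritics from file and folder names (test on a copy first!)
function Remove-Diacritics([string]$name) {
    $formD = $name.Normalize([Text.NormalizationForm]::FormD)
    $sb = New-Object System.Text.StringBuilder
    foreach ($c in $formD.ToCharArray()) {
        # Drop the combining accent marks, keep the base characters
        if ([Globalization.CharUnicodeInfo]::GetUnicodeCategory($c) -ne [Globalization.UnicodeCategory]::NonSpacingMark) { [void]$sb.Append($c) }
    }
    $sb.ToString().Normalize([Text.NormalizationForm]::FormC)
}

# 'D:\OwncloudData' is a placeholder for your Owncloud data directory
# (renaming folders while recursing may need a second pass; name collisions are not handled)
Get-ChildItem 'D:\OwncloudData' -Recurse |
    Where-Object { $_.Name -cne (Remove-Diacritics $_.Name) } |
    ForEach-Object { Rename-Item -LiteralPath $_.FullName -NewName (Remove-Diacritics $_.Name) }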

Wednesday, December 12, 2012

Intel G530 NAS performance - part 3

I am keeping up the momentum of testing my newly built NAS, and today I am focusing on improving the performance of my SSD drive. Remember that this system runs Windows 2012 Server, so most of what has been said or written about SSD tuning under Windows 2008 R2 still applies, but sometimes needs to be reconsidered because a few things have changed in the latest Microsoft OS.

Let's start by going through the steps we used to take under Windows 2008 R2 and see whether they still apply under Windows 2012.
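
As a teaser, one of the classic checks carried over from the 2008 R2 days is making sure TRIM is enabled, and this check works the same way in Windows 2012 (run it from an elevated prompt):

# DisableDeleteNotify = 0 means TRIM is enabled
fsutil behavior query DisableDeleteNotify
# If it ever comes back as 1, TRIM can be re-enabled with:
fsutil behavior set DisableDeleteNotify 0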

Monday, December 10, 2012

How to configure external storage for Owncloud in Windows

Most of the people I know who have tried Owncloud as a personal cloud solution (an alternative to Dropbox) have been discouraged by the difficulties they encountered setting up pre-existing folders as online content for a new Owncloud instance running on a Windows server.

In fact, when you install Owncloud a new empty database instance is created, and in theory you either have to move all your files into it or accept the idea of setting up a new tree from scratch.

Fortunately there is another way: mounting one or more existing folders into Owncloud so that they show up in the web interface. The procedure is a tricky one, mainly because no Windows-specific walkthrough has been published so far.

The information that can be found out there is scarce, and none of it is Windows-specific.
So that's why I am writing this post today, for those who have an Owncloud instance running in a Windows environment and want to mount their existing folders into it.

Intel G530 NAS performance - part 2

While in the previous post I focused on generic network access to my NAS over a gigabit network, in this post I will detail the results I got using the hard-disk utility HDTune to measure the performance of my four local disks. The results are pretty stunning, and they highlight the difference in transfer rate and access time between conventional HDD and new solid state drives (SSD).

Model                           Min transfer  Max transfer  Avg transfer  Access time  Burst rate  CPU usage
                                rate (MB/s)   rate (MB/s)   rate (MB/s)   (ms)         (MB/s)      (%)
Crucial M4 64GB                 289.7         349.4         336.3         0.1          112.6       12.8
WD Caviar Green 2TB 5400 RPM    54.2          117.8         89.2          13.1         150.7       6.2
WD Caviar 500GB 7200 RPM        38.9          83.6          67.9          20.3         118.8       5.4
Samsung HM320JI 320GB 5400 RPM  32.5          70.5          54.2          18.4         80.7        4.3

The Crucial M4 SSD (plugged into a SATA III port) performs very well and confirms my expectations for this kind of device. Of course I did not need such performance in a NAS, but I wanted it in my build for three reasons: its low power consumption, its coolness and its quietness (due to the absence of moving parts).

Friday, December 7, 2012

Intel G530 NAS performance - part 1

As you have read here, I have recently installed my new home NAS. I think it's a good idea to share the performance I get in order to allow other NAS-owners to compare their results. My build is based on the following parts:
  • Intel Celeron G530 processor underclocked to 1.6 GHz
  • 8 GB G.Skill Ripjaws DDR3 underclocked to 1066 MHz 
  • MSI Z68 motherboard with integrated Gigabit NIC
  • Crucial M4 64GB SSD
  • 2TB Western Digital Caviar Green WD20EARS 5400 RPM
  • 500GB Western Digital WD5000AAKS 7200 RPM
  • 320 GB Samsung HM320JI 5400 RPM
  • OS: Windows 2012 Server
In this first test I have used CrystalDiskMark 3.0.2 to test raw sequential read and write performance over a gigabit network.

Tuesday, December 4, 2012

Low power NAS/File server build

Hello folks! I am here today to share a new computer adventure! Two months ago, pushed by the desire to get rid of one of the old, energy-inefficient computers that I used as a NAS, I started to explore the possibility of upgrading to a newer, faster and more thrifty (in terms of watts) configuration.

At the beginning I thought I could just replace one or two parts, but then I quickly realized my best option was to take no prisoners and trash all the parts I had, except the uATX microtower case and the external USB drives.

Monday, December 3, 2012

Event ID 4006 on Windows 2008 R2

A customer of mine phoned me today to tell me that all of their Windows 2008 R2 servers were coming up with blank desktops when they logged in with their domain administrator account.

After a few questions, they told me that the affected servers were new Windows 2008 R2 machines recently joined to a Windows 2003 domain.

Fortunately for me this is an old issue that I have met before, so I am here to share my solution which differs from the one proposed by Microsoft on Technet.

As explained on Technet, if you have a security group policy applied, it can happen that the Interactive account and the Authenticated Users group are removed from the local Users group.

If this happens, an event 4006 is logged in the event log upon login:

Log Name: Application
Source: Microsoft-Windows-Winlogon
Event ID: 4006
Level: Warning
User: N/A
Computer: W2K8SERVER
Description:
The Windows logon process has failed to spawn a user application. Application name: . Command line parameters: C:\Windows\system32\userinit.exe.

The solution proposed by Microsoft is to add the Authenticated Users group and Interactive account to the local Users group.

For me, the best thing to do on Windows 2008 servers is to disable UAC. The problem will be solved and you won't have to bother anymore with those painful security prompts, which are most of the time unneeded on server platforms.

To disable UAC in a centralized manner, let's set up a new Group Policy.

Remember that starting from Windows Vista, you must use RSAT to create new GPOs.

So, in RSAT, move to the Organizational Unit which contains your Windows 2008 servers and click on 'Create a GPO in this domain, and Link it here'.

When the Group Policy Management Editor pops up, move to 'Computer Configuration', 'Policies', 'Windows Settings', 'Security Settings', 'Local Policies', 'Security Options' and set the three following policies as shown here:
  • User Account Control: Behavior of the elevation prompt for administrators in Admin Approval Mode: Elevate without prompting
  • User Account Control: Detect application installations and prompt for elevation: Disabled
  • User Account Control: Run all administrators in Admin Approval Mode: Disabled

GPO to disable UAC
Close the Editor to save, then reboot your Windows 2008 servers twice: the settings are applied after the first reboot, but a second reboot is required for them to become completely active.
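
If you want to test the effect on a single server before rolling out the GPO, the same three settings map, as far as I know, to three registry values under the Policies\System key. A quick Powershell sketch (a reboot is still required):

# Local equivalents of the three UAC policies above (sketch; reboot afterwards)
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System'
Set-ItemProperty $key -Name ConsentPromptBehaviorAdmin -Value 0   # Elevate without prompting
Set-ItemProperty $key -Name EnableInstallerDetection   -Value 0   # Do not detect application installations
Set-ItemProperty $key -Name EnableLUA                  -Value 0   # Turn off Admin Approval Mode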

I hope this post helps you. Do not hesitate to share your experience with this issue and to confirm that this solution has worked for you.

For more information about UAC, check these other posts: Disabling UAC, Disabling UAC part 2 and Windows 2008 R2 folder security issue and UAC.

Tuesday, November 27, 2012

vCenter 5.1.0a - How to install the Inventory Service

Last week I wrote up the procedure to install vCenter Single Sign-On and its database. This week I will focus on deploying the Inventory Service, as well as vCenter Server and a vCenter Web Client.

I have decided to keep the Inventory Service on the same VM that hosts the Single Sign-On application. My plan for the moment is to have three virtual machines:
  • one for vCenter Single Sign-On and its database, plus the Inventory Service,
  • one which will host the vCenter Server DB,
  • one for the main vCenter Server service.
I will then install the Web Client on my workstation and from there I will manage the whole infrastructure.

Thursday, November 22, 2012

vCenter 5.1.0a - How to install a Single Sign-On server

The vCenter Server 5.1 release includes significant architectural changes. One of those major changes is the introduction of Single Sign-On (aka SSO) as a solution to manage all user authentications to the growing number of third-party products VMware is putting into its bundle. With SSO, authorized vCenter Server users can access multiple vCenter Server systems with a single login. See this link for more on vCenter Single Sign-On.

As you might have heard through the forums, vCenter 5.1 had many major bugs which discouraged many system administrators from upgrading their infrastructure. I was one of those sysadmins.

Starting from October 25, 2012 a more stable release (named VMware vCenter Server 5.1.0a) has been available, and I therefore decided to upgrade my vCenter to it.

I didn't get far with this plan. In fact, when I read the support matrix, I discovered that ESX 3.5 is no longer supported under vCenter 5.1, and I still have many of those hosts out there which I cannot upgrade yet for several reasons.

What I decided instead was to install a completely new vCenter infrastructure to host all the newer ESX 4.1 and ESXi 5 hosts. And the first step in moving to vCenter 5.1 is to install an SSO instance.

Tuesday, October 16, 2012

VDR backup job to Windows 2012 DeDupe volume

Following the example of Chris Henley on Veeam's blog, as well as Charles Clark's video, I tested a configuration where VMWare Data Recovery uses a Windows 2012 deduplicated volume as a backup repository.

For my experiment I took two random VMs with a total provisioned storage of 87GB and set up a backup job on a VDR appliance configured to send backup data to an NFS share on a Windows 2012 server.

I won't detail the steps to configure this, basically because the way to export an NFS share in Windows 2012 hasn't changed from Windows 2008 (add the RSAT-NFS-Admin feature), and also because the aforementioned video shows most of the steps to configure the deduplicated volume.

In my test, once the VDR backup job has completed, the initial 87GB have shrunk to a mere 14GB.


Now, before we proceed, there are two things worth mentioning.

The first is that you have to shut down your VDR appliance in order to close the vmdk disk before you launch the Windows deduplication task, otherwise the optimization task will silently fail with:
  • Event ID 8221: "Data Deduplication failed to dedup file "testvdr02-flat.vmdk" with file ID 844424930132042 due to oplock break"
  • Event ID 8196: "Data Deduplication failed to dedup file "testvdr02-flat.vmdk" with file ID 844424930132042 due to non-fatal error 0x80565350, An error occurred while opening the file because the file was in use."
The second thing to keep in mind is that you have to make sure there's plenty of room on your destination volume for deduplication to succeed, otherwise the optimization task will fail with:
  • Event ID 8243: "Failed to enqueue job of type Optimization on volume N: 0x8056530a, There is insufficient disk space to perform the requested operation"
  • Event ID 8252: "Data Deduplication has failed to set NTFS allocation size for container file \\?\Volume{339d7092-0...9a69ca5460}\SVI\Dedup\ChunkStore\{C811D0A8-A4DD-59A4-8518-98158C627379}.ddp\Data\00000078.00000001.ccc due to error 0x80070070, There is not enough space on the disk."
  • Event ID 8204: "0x8056530a, There is insufficient disk space to perform the requested operation."
I wasn't able to establish the minimal disk space requirements for Windows deduplication not to fail, so if anybody has information about this parameter, please share!
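
In the meantime, a rough pre-flight check I now run before launching the optimization is to compare the free space on the backup volume with the size of the biggest vmdk sitting on it; the 'free space at least as large as the biggest file' rule is only my own guess, not an official requirement:

# Rough pre-flight check on the backup volume (N: in my case); the threshold is a guess
$vol     = Get-Volume -DriveLetter N
$largest = Get-ChildItem N:\ -Recurse | Where-Object { -not $_.PSIsContainer } |
           Sort-Object Length -Descending | Select-Object -First 1
"{0:N1} GB free, largest file is {1:N1} GB ({2})" -f ($vol.SizeRemaining/1GB), ($largest.Length/1GB), $largest.Name
if ($vol.SizeRemaining -lt $largest.Length) { "Probably not enough room for the optimization task" }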

Let's go on. At this point the optimization task starts, and the process Microsoft File Server Data Management Host (fsdmhost.exe) scans the disk high and low for data chunks to deduplicate.


Once it's finished, and differently from what is stated on Veeam's blog, I don't see any further size improvement for the backups: the amount of disk space used stays roughly the same, 14 gigs... This tells me that VDR deduplication is quite efficient and the Windows deduplication engine can't add much gain on top of it.


The reported deduplication saving of 37.1GB is just the amount of space deduplication can reclaim from the vmdk disk because it is stored as thick.

If anybody at Veeam has a better interpretation of these results, I am open to suggestions, remarks and of course corrections!

For an introduction to Windows 2012 Data Deduplication check this previous post.

Thursday, October 11, 2012

Using FLR feature of VMware Data Recovery 2.0

This is the first time I have had to use the File Level Restore (FLR) feature since I started using VMware Data Recovery 2.0 on site, so I thought it could be a good idea to share the steps needed to make it work. First of all, know that there are two File Level Restore clients: one for Windows (VMwareRestoreClient.exe) and one for Linux (VMwareRestoreClient.tgz); both can be found on the Data Recovery CD, under the WinFLR and LinuxFLR folders respectively.

In my case I needed the Windows version because I was restoring files from the backup of a Windows VM.

  • Let's start by copying VMwareRestoreClient.exe to the VM for which you want to restore some files. 
  • Now let's establish an RDP connection to that VM and run the copied executable (you must wait here for it to decompress, then a window will appear).
  • When prompted, enter the IP/hostname of the VDR appliance and also tick the Advanced Mode checkbox. There you need to enter the credentials to log in to your Virtual Center instance and wait (it can take a very long time!) for the list of available Restore Points to load in memory.
  • Browse through the Restore Points library, select the same VM and, once you have located the VMDK file containing the files you want to restore, click on Mount. 
  • The content of the selected VMDK will be mounted as a mount point on a directory named with the timestamp of the backup under the root folder (for instance: c:\10-10-2012 8.45.36). 
  • Copy the files to restore to their original locations on your VM drives and then close the FLR client. This way the VMDK will be automatically dismounted and there you are with your restore done!
As a final note, know that you need TCP port 22024 open between the VM and the VDR appliance, so remember to configure the firewall accordingly.
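
A quick way to verify this from the VM, before launching the FLR client, is a plain TCP connection test (replace 'vdr-appliance' with the name or IP of your own appliance):

# Simple TCP check of port 22024 towards the VDR appliance
$tcp = New-Object System.Net.Sockets.TcpClient
try     { $tcp.Connect('vdr-appliance', 22024); "Port 22024 is reachable" }
catch   { "Port 22024 is NOT reachable: $($_.Exception.Message)" }
finally { $tcp.Close() }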

For a how-to on using the Linux client, check here, here and here.

I am sure many of you still use VDR and haven't migrated to VDP yet, so, if this is the case and this post helped you, please do not hesitate to comment or share! Also, if you have questions, do not hesitate to ask.

Monday, October 8, 2012

How to install MS-DOS 6.22 under VMWare ESX part 4

After a long pause, the saga of MS-DOS under VMWare continues. Having explained in this old post how to connect a CD-ROM drive and configure a mouse, let's now move on to configuring TCP/IP networking.

Now that you can use the CD-ROM drive:
  • mount the iso downloaded at the beginning of the mentioned post and move to the MSCLIENT folder
  • run setup.exe
  • choose the folder you want to install your drivers to
  • now from the adapter list choose "*Network adapter not shown on list below"
  • enter D:\AMDPCNET\DOS (use the ASCII code ALT+092 to type the backslash if you have problems entering it) as the driver directory to install the drivers for the "Advanced Micro Devices PCNET Family" adapter
  • the system will tell you it has found the appropriate driver
  • press 'Enter' to optimize the system for better performance
  • choose the name of your PC
  • add the TCP/IP protocol (you can move between the two lists using TAB)
  • remove IPX
  • configure TCP/IP by clicking on 'Change Settings'
  • set the IP address, the subnet mask and the default gateway (remember to use spaces instead of periods). Also, if you do not use a DHCP server, set 'Disable Automatic Configuration' to 1
OK, at this point the Network Client is installed on your virtual machine. Just restart it to apply the modifications made to config.sys and autoexec.bat. However, on restart you should get an error message saying 'Error 8: There is not enough memory available' when loading the TCP/IP stack.

This is a pretty common error, due to the fact that MS-DOS is trying to load all the drivers into the first 640 KB of conventional memory (ahhh, the 640 KB limit, it reminds me of the old times...). Before MS-DOS can load a device driver into upper memory, there must be an upper memory block (UMB) provider available (EMM386 is the standard one) and there must be enough space in the UMB area. If the UMB area lacks the memory to store the device drivers, they will be loaded into conventional memory instead.

You can check to see which device drivers have been loaded into high memory by using the MEM /C command.

So, to solve the problem, edit config.sys and add the following lines:
device=c:\dos\himem.sys
device=c:\dos\emm386.exe noems
dos=high,umb
Also, force the CD-ROM device driver and the Installable File System Helper (ifshlp.sys) into upper memory by updating their lines like this:
devicehigh=c:\hxcd-rom\cdrom.sys /D:MSCD000
devicehigh=c:\net\ifshlp.sys
Upon restart the drivers should be loaded into the upper memory blocks, as the MEM /C command shows.

At this point you should be able to ping and get pinged! That's all for this series of posts about running MS-DOS virtual machines under VMWare ESX. I hope they were helpful. If so, do not hesitate to comment, google+ or retweet!!!

Error 0x80070780 reading from a deduplicated drive

I read somewhere that Windows 2012 deduplication is not part of a new NTFS version, but is instead a sort of engine layered on top of the file system. To test this I decided to mount an external USB drive, formatted with NTFS and deduplicated under Windows 2012, on a Windows 2008 R2 server (which, like every version since Windows XP, uses the same NTFS version, 3.1).
NTFS File System Driver version and size compared between Win2008 and Win2012
If the initial statement is right and the NTFS version is the same, I should still be able to read this external deduplicated drive with Windows 2008 R2. My test revealed this to be true: I can see the USB drive and browse its folders but, when I try to open any file bigger than 32 kbytes, I get the following error message:
Error 0x80070780 The file cannot be accessed from the system
Now, if I check with Powershell the attributes of any file stored on the deduplicated drive whose size is larger than 32KB, I get:
Get-Item 40kb.txt | select Attributes | fl
Attributes : Archive, SparseFile, ReparsePoint
On smaller files, only the 'archive' attribute is present:
Get-Item 20kb.txt | select Attributes | fl
Attributes : Archive
So the attributes that mark a deduplicated file are SparseFile and ReparsePoint.
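
A quick way to spot, from the Windows 2008 R2 side, which files on the drive have been optimized is therefore to look for that ReparsePoint attribute (X: is a placeholder for the drive letter of the USB disk):

# List the files that dedup has replaced with reparse points (works with Powershell 2.0 as well)
Get-ChildItem X:\ -Recurse | Where-Object {
    -not $_.PSIsContainer -and ($_.Attributes -band [IO.FileAttributes]::ReparsePoint)
} | Select-Object FullName, Length, Attributes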

I thought it would be useful to share my experience on this in case somebody needs it.

Thursday, October 4, 2012

Data Deduplication in Windows Server 2012

After a few days testing Data Deduplication under Windows Server 2012, here are a few facts, as well as my considerations on its performance.
  • No Data Deduplication of ReFS partitions. Source: Personal experience (check this previous post)
  • Data Deduplication is not enabled by default. Source: Personal experience
  • Data Deduplication is anything but fast. It is by design a background service meant to improve disk space usage, so you can expect the best return only in the long run. Source: Personal experience
  • From the point above follows the next fact: when new files are added to the volume, they are not optimized right away. Only files that have not been changed for a minimum amount of time are optimized (this minimum amount of time is set by a user-configurable policy). Source: MSDN 
  • Data Deduplication jobs can be manually started from Task Scheduler under 'Task Scheduler Library', 'Microsoft','Windows','Deduplication'. Source: Deploymentresearch.com 
  • Deduplication has a setting called MinimumFileAgeDays that controls how old a file should be before processing the file. The default setting is 5 days. This setting is configurable by the user and can be set to “0” to process files regardless of how old they are. Source: Technet 
  • The chunks have an average size of 64KB and they are compressed and placed into a chunk store located in a hidden folder at the root of the volume called the System Volume Information, or “SVI folder”. The normal file is replaced by a small reparse point, which has a pointer to a map of all the data streams and chunks required to “rehydrate” the file and serve it up when it is requested. Source: Technet 
  • Redundancy: Extra copies of critical metadata are created automatically. Very popular data chunks receive an entire duplicate copy once they are referenced 100 times. This area is called "the hotspot", a collection of the most popular chunks. Source: Technet 
  • Files smaller than 32KB are not deduplicated (because their size is already smaller than the minimum chunk size). Source: Storagegaga.com 
  • The first service behind Deduplication is the Data Deduplication service, which enables the deduplication and compression of data on selected volumes in order to optimize disk space used. If this service is stopped, optimization will no longer occur but access to already optimized data will continue to function. Its command line is C:\Windows\system32\svchost -k ddpsvc Source: Personal experience
  • The second service is Data Deduplication Volume Shadow Copy Service, which is used to back up volumes with deduplication. Its command line is: C:\Windows\system32\svchost -k ddpvssvc Source: Personal experience
  • Deduplication Data Evaluation Tool (ddpeval.exe) doesn't work on Windows 7 Ultimate. Source: Personal experience
  • The deduplication VSS writer reports two components for each volume that contains a deduplication chunk store: the "Chunk Store" under \System Volume Information\Dedup\ChunkStore\* and "Dedup Configuration" under \System Volume Information\Dedup\Settings\*. Source: MSDN
Let's check this last fact and fire a few Powershell commands to check what's inside the Chunk Store:
PS G:\> gci ".\System Volume Information" -Recurse -hidden

    Directory: G:\System Volume Information

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
-a-hs  03/10/2012 14:16  20480  tracking.log

    Directory: G:\System Volume Information\Dedup\ChunkStore

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
d--hs  02/10/2012 13:32         {512528DE-2E46-4C15-A013-8AEA62DEF7A8}.ddp

    Directory: G:\System Volume Information\Dedup\ChunkStore\{512528DE-2E46-4C15-A013-8AEA62DEF7A8}.ddp

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
d--hs  02/10/2012 13:32         Data
d--hs  02/10/2012 13:32         Hotspot
d--hs  02/10/2012 13:32         Stream
-a-hs  02/10/2012 13:32  28     stamp.dat

    Directory: G:\System Volume Information\Dedup\Settings

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
-a-hs  02/10/2012 13:29  2280   dedupConfig.01.xml
-a-hs  02/10/2012 13:29  2280   dedupConfig.02.xml

    Directory: G:\System Volume Information\Dedup\State

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
-a-hs  03/10/2012 09:30  852    analysisState.xml
-a-hs  03/10/2012 13:36  2894   chunkStoreStatistics.xml
-a-hs  03/10/2012 13:36  2442   dedupStatistics.xml
-a-hs  03/10/2012 13:34  864    gcState.xml
-a-hs  03/10/2012 13:36  2066   optimizationState.xml
-a-hs  03/10/2012 13:34  852    scrubbingState.xml
It looks like the configuration of the deduplication service is stored in two XML files, whose content I show here:

dedupConfig.01.xml
<?xml version="1.0"?>
<root version="1.0">
  <properties>
    <property value="0" type="VT_UI8" name="changeTime"/>
    <property value="0" type="VT_UI4" name="options"/>
    <property value="5" type="VT_UI4" name="fileMinimumAge"/>
    <property value="32768" type="VT_UI4" name="fileMinimumSize"/>
    <property value="" type="VT_BSTR" name="excludeFolders"/>
    <property value="" type="VT_BSTR" name="excludeFileExtensions"/>
    <property value="aac|aif|aiff|asf|asx|au|avi|flac|m3u|mid|midi|mov|mp1|mp2|mp3|mp4|mpa|mpe|mpeg|mpeg2|mpeg3|mpg|ogg|qt|qtw|ram|rm|rmi|rmvb|snd|swf|vob|wav|wax|wma|wmv|wvx|accdb|accde|accdr|accdt|docm|docx|dotm|dotx|pptm|potm|potx|ppam|ppsx|pptx|sldx|sldm|thmx|xlsx|xlsm|xltx|xltm|xlsb|xlam|xll|ace|arc|arj|bhx|bz2|cab|gz|gzip|hpk|hqx|jar|lha|lzh|lzx|pak|pit|rar|sea|sit|sqz|tgz|uu|uue|z|zip|zoo" type="VT_BSTR" name="noCompressionFileExtensions"/>
    <property value="100" type="VT_UI4" name="hotspotThreshold"/>
    <property value="2" type="VT_UI4" name="compressionLevel"/>
  </properties>
</root>
dedupConfig.02.xml contains exactly the same settings.
Among the settings contained in these configuration files there is the list of excluded file extensions, that is, the types of files that won't be analyzed by the dedupe service at all (no extensions are excluded by default), and there is also the list of file extensions that the Deduplication Service won't try to compress. This second list includes by default media files (mpeg, mp3...), archives (zip, rar...) and the newer MS Office formats.
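
If I am not mistaken, the same lists can also be managed with Set-DedupVolume instead of editing the XML by hand; something along these lines (the -ExcludeFolder/-ExcludeFileType parameter names come from my own notes and the values are just examples, so double-check them):

# Exclude a folder and a couple of extensions from deduplication on G: (example values)
Set-DedupVolume G: -ExcludeFolder 'G:\Temp' -ExcludeFileType iso,vhd
# Check that the new settings have been taken into account
Get-DedupVolume G: | Select-Object ExcludeFolder, ExcludeFileType, NoCompressionFileType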

As I said, Deduplication is designed to work on files over the long term. So if I try to get the deduplication state of a newly added volume, I find that no files are optimized yet:
PS C:\> Get-DedupVolume g:

Enabled SavedSpace  SavingsRate Volume
------- ----------  ----------- ------
True    0 B         0 %         G:
Now, if you want Data Deduplication to process all the files on the volume immediately, regardless of their age, first take a look at all the properties of the volume:
PS G:\> Get-DedupVolume g: | fl *

ObjectId                 : \\?\Volume{795fedec-0bc3-11e2-93ea-005056984e73}\
Capacity                 : 10734268416
ChunkRedundancyThreshold : 100
DataAccessEnabled        : True
Enabled                  : True
ExcludeFileType          :
ExcludeFolder            :
FreeSpace                : 9539395584
MinimumFileAgeDays       : 5
MinimumFileSize          : 32768
NoCompress               : False
NoCompressionFileType    : {aac, aif, aiff, asf...}
SavedSpace               : 0
SavingsRate              : 0
UnoptimizedSize          : 1194872832
UsedSpace                : 1194872832
Verify                   : False
Volume                   : G:
VolumeId                 : \\?\Volume{795fedec-0bc3-11e2-93ea-005056984e73}\
PSComputerName           :
CimClass                 : ROOT/Microsoft/Windows/Deduplication:MSFT_DedupVolume
CimInstanceProperties    : {Capacity, ChunkRedundancyThreshold, DataAccessEnabled, Enabled...}
CimSystemProperties      : Microsoft.Management.Infrastructure.CimSystemProperties
There you can recognise the parameter we talked about a few lines above: MinimumFileAgeDays. Let's change its value to 0:
PS G:\> Set-DedupVolume g: -MinimumFileAgeDays 0
When I issue this command, dedupConfig.01.xml and dedupConfig.02.xml are both updated with the new value.
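
If you do not want to wait for the scheduled task mentioned earlier, you should also be able to kick off an optimization job straight away from Powershell (I simply let the background schedule do the work overnight, so take this as a pointer):

# Start an optimization job on G: right away and follow its progress
Start-DedupJob -Volume G: -Type Optimization
Get-DedupJob                 # shows the running job and its progress
Get-DedupStatus -Volume G:   # summary of optimized files and saved space once it has run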

After a night, the SavingsRate value goes from 0% to 75%. Amazing.
PS HKLM:\SOFTWARE> Get-DedupVolume g:

Enabled SavedSpace SavingsRate Volume
------- ---------- ----------- ------
True    856.98 MB  75 %        G:

PS HKLM:\SOFTWARE> Get-DedupMetadata

Volume                         : G:
VolumeId                       : \\?\Volume{795fedec-0bc3-11e2-93ea-005056984e73}\
StoreId                        : {512528DE-2E46-4C15-A013-8AEA62DEF7A8}
DataChunkCount                 : 3511
DataContainerCount             : 1
DataChunkAverageSize           : 24.12 KB
DataChunkMedianSize            : 0 B
DataStoreUncompactedFreespace  : 0 B
StreamMapChunkCount            : 34
StreamMapContainerCount        : 1
StreamMapAverageDataChunkCount :
StreamMapMedianDataChunkCount  :
StreamMapMaxDataChunkCount     :
HotspotChunkCount              : 1
HotspotContainerCount          : 1
HotspotMedianReferenceCount    :
CorruptionLogEntryCount        : 0
TotalChunkStoreSize            : 83.84 MB
The disk space saving can be seen directly in Windows Explorer, as shown in the image, as well as in File and Storage Services, Volume view.

Used space after deduplication

Deduplication efficiency rate

The facts listed here are just a starting point for understanding this new service from Microsoft. Feel free to add your own comments and to share the results you get with deduplication!

Tuesday, October 2, 2012

ReFS and (no) Data Deduplication

Well, this post about ReFS and Data Deduplication won't be long, and for a good reason: Data Deduplication does NOT apply to partitions formatted with ReFS. Full stop. FAT32 volumes are not good either (ok, I expected this).

The only supported volume type is NTFS and the size of the partition to deduplicate must be greater than 2GB.

So, when I set about activating Data Deduplication on my new Windows Server 2012 with the Powershell commands described here:
Import-Module ServerManager
Add-WindowsFeature -name FS-Data-Deduplication
Import-Module Deduplication
and then tried to activate it on a ReFS partition with:
Enable-DedupVolume F:
...I got this error:
Enable-DedupVolume : MSFT_DedupVolume.Volume='F:' - HRESULT 0x8056530b, The specified volume type is not supported.
Deduplication is supported on fixed, write-enabled NTFS data volumes.
At line:1 char:1
+ Enable-DedupVolume F:
+ ~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (MSFT_DedupVolume:ROOT/Microsoft/...SFT_DedupVolume) [Enable-DedupVolume],
    CimException
    + FullyQualifiedErrorId : HRESULT 0x8056530b,Enable-DedupVolume
If you concentrate very hard, you should be able to see the disappointment on my face! After all the hype around ReFS, are you telling me that it doesn't offer Data Deduplication?...

On the contrary, when run against an NTFS drive, the result is much more... convincing:
Enable-DedupVolume G:

Enabled SavedSpace SavingsRate Volume
------- ---------- ----------- ------
True    0 B        0 %         G:
In the end, I found out (reading here) that:

"ReFS does not itself offer deduplication. One side effect of its familiar, pluggable, file system architecture is that other deduplication products will be able to plug into ReFS the same way they do with NTFS."

I don't understand the technical choice behind this... after all, who does? I am pretty disappointed now. I'll keep testing Data Deduplication on the other NTFS partition, but I don't see the point of introducing a brand new technology and leaving out such a good feature!

My first ReFS partition on a Windows 2012 server

Ok, maybe this is not of great importance, but I am a little bit excited today because I just had the occasion to format my first ReFS partition on a Windows 2012 server.

Here's the Powershell command I use to list my partitions:

PS C:\> get-wmiobject -class "Win32_LogicalDisk" -namespace "root\CIMV2"  |  select caption, filesystem, size, freespace


caption  filesystem  size         freespace
-------  ----------  ----         ---------
C:       NTFS        42580570112  30284148736
D:       UDF         3695179776   0
E:       ReFS        10670309376  10518462464
F:       ReFS        11744051200  11590959104

Actually I formatted two ReFS partitions, one on an MBR disk and the other on a GPT disk. Both worked.
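
For the record, here is how I would script the whole thing on a blank data disk; disk number 1 and the label are just examples, so adapt them to your system:

# Initialize a blank disk as GPT, create one big partition and format it as ReFS
Get-Disk 1 | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize -AssignDriveLetter |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel 'ReFS-Data' -Confirm:$false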

Also, I just want to make clear that, for the moment, you cannot install Windows 2012 on a ReFS partition: only secondary data partitions can be formatted with this new file system.

Windows Server 2012 folders size chart

I am planning to deploy some new Windows Server 2012 machines, and here comes the question of HDD size. Microsoft says 32 gigs should be the bare minimum for the system partition but, as usual, I wanted a closer look at reality, so I ran TreeSize against the C: drive (which sits on a 40 GB disk) of a freshly installed Datacenter Edition and got the following results:
  • Drive Size: 39.66 GB
  • Bytes per Cluster: 4096 Bytes
  • Filesystem: NTFS

Full Path                     Size         Files
c:\Windows                    11,414.3 MB  70,465
c:\*.*                        3,584.4 MB   3
c:\Users                      58.3 MB      269
c:\Program Files              28.3 MB      244
c:\Program Files (x86)        23.5 MB      125
c:\ProgramData                12.4 MB      96
c:\System Volume Information  0.0 MB       2
c:\$Recycle.Bin               0.0 MB       1
c:\Documents and Settings     0.0 MB       0
c:\PerfLogs                   0.0 MB       0

What we see is that a clean installation of Windows Server 2012 Datacenter edition takes almost 15 GB. The Windows folder accounts for 11 GB, plus the pagefile (4GB). In total there are a little more than 71k files on the partition after the initial installation.

The biggest folder under c:\Windows is, as you could expect, WinSxS, which takes more than 6 GB, followed by System32 (2 GB), Assembly (2GB) and SysWOW64 (1GB).

WinSxs breakdown under Windows Server 2012
As additional data, these are the extensions that eat up most of the disk space:
  • .dll is the winner, with 13k files and more than 6 GB
  • .sys, with 1000 files and a little less than 4 GB
  • .exe with 2000 files and 500 MB
Also, the biggest file on disk is 'imageres.dll', with a size of 64 MB (not that big, I must say). This file contains all the Windows 2012 system icons, the login screen background image and the startup sound in wav format (5080.wav).
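
For those who prefer not to install TreeSize, a rough Powershell equivalent of the folder report above could look like this (run it from an elevated prompt; the figures will differ slightly because some system files are not readable):

# Sum the size of every top-level folder on C: (skips files we are not allowed to read)
Get-ChildItem C:\ -Force | Where-Object { $_.PSIsContainer } | ForEach-Object {
    $bytes = (Get-ChildItem $_.FullName -Recurse -Force -ErrorAction SilentlyContinue |
              Where-Object { -not $_.PSIsContainer } |
              Measure-Object -Property Length -Sum).Sum
    New-Object PSObject -Property @{ Folder = $_.FullName; SizeMB = [math]::Round($bytes / 1MB, 1) }
} | Sort-Object SizeMB -Descending | Format-Table Folder, SizeMB -AutoSize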

That's all for my first encounter with Windows Server 2012. I will delve into ReFS and Data Deduplication (which is what I am most interested in) as soon as possible, and will probably post something once I have tested them.

Meanwhile, do not hesitate to share your disk space usage on Windows 2012. It would be interesting to compare how usage differs with different roles installed.

Addendum: for personal reference I post below a hex dump of the Master Boot Record (MBR) of my Windows 2012 system disk.
LBN 0   [C 0, H 0, S 1]

0x0000   33 c0 8e d0 bc 00 7c 8e-c0 8e d8 be 00 7c bf 00   3└Äð╝.|Ä└ÄÏ¥.|┐.
0x0010   06 b9 00 02 fc f3 a4 50-68 1c 06 cb fb b9 04 00   .╣..³¾ñPh∟.╦¹╣..
0x0020   bd be 07 80 7e 00 00 7c-0b 0f 85 0e 01 83 c5 10   ¢¥.Ç~..|..à..â┼►
0x0030   e2 f1 cd 18 88 56 00 55-c6 46 11 05 c6 46 10 00   Ô±═↑êV.UãF◄.ãF►.
0x0040   b4 41 bb aa 55 cd 13 5d-72 0f 81 fb 55 aa 75 09   ┤A╗¬U═‼]r.ü¹U¬u.
0x0050   f7 c1 01 00 74 03 fe 46-10 66 60 80 7e 10 00 74   ¸┴..t.■F►f`Ç~►.t
0x0060   26 66 68 00 00 00 00 66-ff 76 08 68 00 00 68 00   &fh....f v.h..h.
0x0070   7c 68 01 00 68 10 00 b4-42 8a 56 00 8b f4 cd 13   |h..h►.┤BèV.ï¶═‼
0x0080   9f 83 c4 10 9e eb 14 b8-01 02 bb 00 7c 8a 56 00   ƒâ─►×Ù¶©..╗.|èV.
0x0090   8a 76 01 8a 4e 02 8a 6e-03 cd 13 66 61 73 1c fe   èv.èN.èn.═‼fas∟■
0x00a0   4e 11 75 0c 80 7e 00 80-0f 84 8a 00 b2 80 eb 84   N◄u.Ç~.Ç.äè.▓ÇÙä
0x00b0   55 32 e4 8a 56 00 cd 13-5d eb 9e 81 3e fe 7d 55   U2õèV.═‼]Ù×ü>■}U
0x00c0   aa 75 6e ff 76 00 e8 8d-00 75 17 fa b0 d1 e6 64   ¬un v.Þì.u↨·░еd
0x00d0   e8 83 00 b0 df e6 60 e8-7c 00 b0 ff e6 64 e8 75   Þâ.░▀µ`Þ|.░ µdÞu
0x00e0   00 fb b8 00 bb cd 1a 66-23 c0 75 3b 66 81 fb 54   .¹©.╗═→f#└u;fü¹T
0x00f0   43 50 41 75 32 81 f9 02-01 72 2c 66 68 07 bb 00   CPAu2ü¨..r,fh.╗.
0x0100   00 66 68 00 02 00 00 66-68 08 00 00 00 66 53 66   .fh....fh....fSf
0x0110   53 66 55 66 68 00 00 00-00 66 68 00 7c 00 00 66   SfUfh....fh.|..f
0x0120   61 68 00 00 07 cd 1a 5a-32 f6 ea 00 7c 00 00 cd   ah...═→Z2÷Û.|..═
0x0130   18 a0 b7 07 eb 08 a0 b6-07 eb 03 a0 b5 07 32 e4   ↑áÀ.Ù.áÂ.Ù.áÁ.2õ
0x0140   05 00 07 8b f0 ac 3c 00-74 09 bb 07 00 b4 0e cd   ...ï­¼<.t.╗..┤.═
0x0150   10 eb f2 f4 eb fd 2b c9-e4 64 eb 00 24 02 e0 f8   ►Ù‗¶Ù²+╔õdÙ.$.Ó°
0x0160   24 02 c3 49 6e 76 61 6c-69 64 20 70 61 72 74 69   $.├Invalid parti
0x0170   74 69 6f 6e 20 74 61 62-6c 65 00 45 72 72 6f 72   tion table.Error
0x0180   20 6c 6f 61 64 69 6e 67-20 6f 70 65 72 61 74 69    loading operati
0x0190   6e 67 20 73 79 73 74 65-6d 00 4d 69 73 73 69 6e   ng system.Missin
0x01a0   67 20 6f 70 65 72 61 74-69 6e 67 20 73 79 73 74   g operating syst
0x01b0   65 6d 00 00 00 63 7b 9a-9b 01 24 32 00 00 80 20   em...c{Üø.$2..Ç
0x01c0   21 00 07 be 12 2c 00 08-00 00 00 f0 0a 00 00 be   !..¥↕,.....­...¥
0x01d0   13 2c 07 fe ff ff 00 f8-0a 00 00 00 f5 04 00 00   ‼,.■  .°....§...
0x01e0   00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00   ................
0x01f0   00 00 00 00 00 00 00 00-00 00 00 00 00 00 55 aa   ..............U¬


Friday, September 28, 2012

Managing Alternate Data Streams with Powershell 3.0

Something I've recently found out, and that I think is brilliant, is the ability in Powershell 3.0 to handle file forks natively, something I had long expected to see implemented. Some of you may already know that the NTFS Master File Table (MFT) supports storing forks of files in its tree. These forks are better known under Windows (starting from NT4) as Alternate Data Streams (ADS).

Until now the only way to know whether a file had one or more ADS was to resort to our friend the DIR command which, since Windows Vista, has offered a /R switch to display the existing ADS and their somewhat secret size.

Here's a sample output of DIR /R:

dir /r | find "DATA"
26 Introducing Windows Server 2008 R2.pdf:Zone.Identifier:$DATA
26 Parkdale.exe:Zone.Identifier:$DATA
26 setup.exe:Zone.Identifier:$DATA
26 Understanding Microsoft Virtualization Solutions.pdf:Zone.Identifier:$DATA

Piping the output of 'DIR /R' to the 'find' command immediately lists only the files that have multiple data streams and excludes all the other 'normal' files.

As you can see in the example, many files have an ADS named 'Zone.Identifier' (whose size is 26 bytes). This particular fork, stored within the same file, tells you whether or not the file has been downloaded from the Net. A typical content is:

[ZoneTransfer]
ZoneId=3

with ZoneId values ranging (to my knowledge) from 0 to 4:

ZoneId=0: Local machine
ZoneId=1: Local intranet
ZoneId=2: Trusted sites
ZoneId=3: Internet
ZoneId=4: Restricted sites

If the ZoneId is equal to 3 or 4, when you try to execute the file you will get a warning telling you "The publisher could not be verified. Are you sure you want to run this software?"...

OK, now that we know how to find the hidden ADS on our system, let's go back to our Powershell 3.0 command line to see how it can help us with ADS management. Starting with this new version of Powershell, there is a way to find commands (using get-command) based on the parameters they accept (by adding the -ParameterName parameter). So let's run, for instance:

gcm -ParameterName stream | select name

The returned list of cmdlets which support Alternate Data Streams is:

Add-Content
Clear-Content
Get-Content
Get-Item
Out-String
Remove-Item
Set-Content

For instance we can now retrieve the content of a fork the same way we did with

more < "Introducing Windows Server 2008 R2.pdf:Zone.Identifier"
or
more < "Introducing Windows Server 2008 R2.pdf:Zone.Identifier:$DATA"

by issuing

Get-Content "Introducing Windows Server 2008 R2.pdf" -stream Zone.Identifier
or
Get-Content "Introducing Windows Server 2008 R2.pdf" -stream Zone.Identifier$DATA

What's more, we can retrieve the properties of this ADS with Get-Item:

$ads = get-item '.\Introducing Windows Server 2008 R2.pdf' -Stream zone.identifier

The command

$ads | select filename,stream,length | fl 

will return

FileName : C:\folder\Introducing Windows Server 2008 R2.pdf
Stream   : Zone.Identifier
Length   : 26 

We can also quickly remove the fork (in this case no warning pop-up will come to disturb our inner peace!) with

Remove-Item .\file.txt -Stream  Zone.Identifier
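
The -Stream parameter also works the other way around: with Set-Content we can create our own fork on any file ('comment' is just a made-up stream name for this quick test):

# Write, read back and list a custom Alternate Data Stream
Set-Content .\file.txt -Stream comment -Value "reviewed and approved"
Get-Content .\file.txt -Stream comment
Get-Item    .\file.txt -Stream *       # lists every stream of the file, :$DATA included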

This is a great step forward for Windows Powershell. Alternate Data Streams management has been left in the shade for almost a decade and today we can finally play with them as if they were standard user files.

Great, isn't it?

Wednesday, September 26, 2012

How to remotely modify Windows ACL using Powershell

I have spent a few hours working on a permission configuration issue on remote Windows systems (NT4, 2000 and 2003). The aim of my script was to modify the existing permissions on a file on those remote systems, as well as to set the ownership of that same file. Obviously I decided to use the cmdlets that Powershell kindly offers (gcm -noun acl), but there are only two:
CommandType     Name
-----------     ----
Cmdlet          Get-Acl
Cmdlet          Set-Acl
At first sight I was sure I was going to have problems with setting file ownership on a remote system, because neither of these two cmdlets covers it. And I was right: Powershell doesn't allow you to change file ownership on a remote file system (as explained here). I even tried to perform this operation with cacls and icacls, but I had no better luck, just lots of errors (such as "no SID for trustee" and "Error Bad trustee name/SID no lookup").

As I do most of the time when I want to run something remotely and it doesn't work with native tools/cmdlets, I pulled from my magic hat the good old 'psexec.exe' in conjunction with 'fileacl.exe' (whose project has recently been abandoned by Microsoft, as you can see here, so hurry up and store a copy of this program in a safe place!).

My final script is a combination of pure Powershell cmdlets and freshly-mixed psexec and fileacl statements:
$remotehost = "test_host"
$username1 = "my_username"
$paswd = "my password"
$path = "\\$remotehost\c$\folder\file.txt"
$acl = Get-Acl $path
#Going to add my username with Full Control
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("$username1","FullControl","Allow")
$acl.AddAccessRule($rule)
#Going to assign ReadAndExecute to the Everyone group
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("everyone","readandexecute","Allow")
$acl.AddAccessRule($rule)
#Going to remove permissions for the local Administrator group
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule("administrators","FullControl","Allow")
$acl.RemoveAccessRule($rule)
Set-Acl $path $acl
#Going to set the remote owner for this file
psexec -c -u $username1 -p $paswd \\$remotehost fileacl.exe "$path" /O "$username1"
#Let's check everything's been properly set
Get-Acl $path | Format-List
I know it doesn't look good but that's the best I could do to solve my issue. It works this way:
  • first of all I define a few variables, such as the name of the remote host to work on
  • I retrieve the current ACL of the remote file
  • I add two access rules: one for my username and one for the Everyone group
  • I also define another access rule, used to remove the permissions of the local admin group
  • I apply these access rules with Set-Acl
  • I start a psexec session on the remote host and fire fileacl to set the ownership 
  • I dump the applied permissions with Get-Acl to check everything went well
Possible values for the rights that you can assign with System.Security.AccessControl.FileSystemAccessRule are listed below; just choose the one (or the combination) that applies to your situation. A short example of combining rights follows the list:
  • AppendData
  • ChangePermissions
  • CreateDirectories
  • CreateFiles
  • Delete
  • DeleteSubdirectoriesAndFiles
  • ExecuteFile
  • FullControl
  • ListDirectory
  • Modify
  • Read
  • ReadAndExecute
  • ReadAttributes
  • ReadData
  • ReadExtendedAttributes
  • ReadPermissions
  • Synchronize
  • TakeOwnership
  • Traverse
  • Write
  • WriteAttributes
  • WriteData
  • WriteExtendedAttributes
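
As promised, here is how several of these rights can be combined in a single rule (the rights string accepts a comma-separated list). The owner change shown at the end only works on a local path and, as far as I know, only towards an account you are actually allowed to assign:

# Combine two rights in one access rule and apply it to a local file ("my_username" is an example)
$rights = [System.Security.AccessControl.FileSystemRights]"ReadAndExecute, Write"
$rule   = New-Object System.Security.AccessControl.FileSystemAccessRule("my_username", $rights, "Allow")
$acl = Get-Acl "C:\folder\file.txt"
$acl.AddAccessRule($rule)
# On a local file the owner can also be set directly on the ACL object
$acl.SetOwner([System.Security.AccessControl.NTAccount]"my_username")
Set-Acl "C:\folder\file.txt" $acl
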
I hope this simple script will be useful for those who, like me, didn't find a way to set file ownership on remote Master File Tables... be it under Windows NT, Windows 2000 or Windows 2003. Maybe the Scripting Guy can step in and post a better solution!