Tuesday, October 16, 2012

VDR backup job to Windows 2012 DeDupe volume

Following the example of Chris Henley on Veeam's blog, as well as Charles Clark's video, I tested a configuration where VMware Data Recovery uses a Windows 2012 deduplicated volume as a backup repository.

For my experiment I took two random VMs with a total provisioned storage of 87GB and set up a backup job on a VDR appliance configured to send backup data to an NFS share on a Windows 2012 server.

I won't detail the steps to configure this, basically because the way to export an NFS share in Windows 2012 hasn't changed from Windows 2008 (add the RSAT-NFS-Admin feature), and also because the aforementioned video shows most of the steps to configure the deduplicated volume.

In my test, once the VDR backup job had completed, the initial 87GB had shrunk to a mere 14GB.
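For quick reference, that works out to roughly a 6:1 reduction. A trivial bit of Python arithmetic on the figures above:

```python
provisioned_gb = 87   # total provisioned storage of the two test VMs
stored_gb = 14        # size on the repository after the VDR backup job

savings_pct = (1 - stored_gb / provisioned_gb) * 100
ratio = provisioned_gb / stored_gb
print(f"{savings_pct:.1f}% saved, {ratio:.1f}:1 reduction")  # 83.9% saved, 6.2:1 reduction
```

Keep in mind, though, that the 87GB figure is provisioned, not used, space, so part of that saving is simply empty blocks that were never written.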


Now, before we proceed, there are two things worth mentioning.

The first is that you have to shut down your VDR appliance, in order to close the vmdk disk, before you launch the Windows deduplication task; otherwise the optimization task will silently fail with:
  • Event ID 8221: "Data Deduplication failed to dedup file "testvdr02-flat.vmdk" with file ID 844424930132042 due to oplock break"
  • Event ID 8196: "Data Deduplication failed to dedup file "testvdr02-flat.vmdk" with file ID 844424930132042 due to non-fatal error 0x80565350, An error occurred while opening the file because the file was in use."
The second thing to keep in mind is to make sure there's plenty of room on your destination volume for deduplication to succeed; otherwise the optimization task will fail with:
  • Event ID 8243: "Failed to enqueue job of type Optimization on volume N: 0x8056530a, There is insufficient disk space to perform the requested operation"
  • Event ID 8252: "Data Deduplication has failed to set NTFS allocation size for container file \\?\Volume{339d7092-0...9a69ca5460}\SVI\Dedup\ChunkStore\{C811D0A8-A4DD-59A4-8518-98158C627379}.ddp\Data\00000078.00000001.ccc due to error 0x80070070, There is not enough space on the disk."
  • Event ID 8204: "0x8056530a, There is insufficient disk space to perform the requested operation."
I wasn't able to establish the minimum disk space required for Windows deduplication not to fail, so if anybody has information about this parameter, please share!

Let's go on. At this point the optimization task starts, and the Microsoft File Server Data Management Host process (fsdmhost.exe) scans the disk high and low for data chunks to deduplicate.


Once it's finished, and differently from what is stated on Veeam's blog, I don't see any size improvement for the backups: the amount of disk space used stays roughly the same, 14 gigs... To me this means that VDR deduplication is quite efficient and the Windows deduplication engine can't add much gain on top of it.


The reported deduplication saving of 37.1GB is just the amount of space deduplication can reclaim from the vmdk disk because it is thick-provisioned.

If anybody at Veeam has a better interpretation of these results, I am open to suggestions, remarks and of course corrections!

For an introduction to Windows 2012 Data Deduplication check this previous post.

Thursday, October 11, 2012

Using FLR feature of VMware Data Recovery 2.0

This is the first time I have had to use the File Level Restore (FLR) feature since I started using VMware Data Recovery 2.0 on site, so I thought it could be a good idea to share the steps needed to make it work. First of all, know that there are two File Level Restore clients: one for Windows (VMwareRestoreClient.exe) and one for Linux (VMwareRestoreClient.tgz); both can be found on the Data Recovery CD, under the WinFLR and LinuxFLR folders respectively.

In my case I needed the Windows version because I was restoring files from the backup of a Windows VM.

  • Let's start by copying VMwareRestoreClient.exe to the VM for which you want to restore some files. 
  • Now let's establish an RDP connection to that VM and run the copied executable (wait for it to decompress; a window will then appear).
  • When prompted, enter the IP/hostname of the VDR appliance and also select the 'Advanced Mode' checkbox. There you need to enter the credentials to log in to your vCenter instance and wait (it can take a very long time!) for the list of available Restore Points to load.
  • Browse through the Restore Points library, select the VM and, once you have picked the VMDK file you know contains the files to restore, click on Mount. 
  • The content of the selected VMDK will be mounted as a mount point on a directory named with the timestamp of the backup under the root folder (for instance: c:\10-10-2012 8.45.36). 
  • Copy the files to restore back to their original locations on your VM drives and then close the FLR client. The VMDK will be automatically dismounted, and there you are with your restore done!
As a final note, know that you need TCP port 22024 open between the VM and the VDR appliance, so remember to configure the firewall accordingly.
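If the client hangs or fails while connecting, a quick connectivity check against that port can save time. A minimal sketch in Python (the appliance hostname below is hypothetical; adapt it to your environment):

```python
import socket

def vdr_port_open(host: str, port: int = 22024, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the given host/port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical appliance name -- replace with your VDR appliance IP/hostname:
# print(vdr_port_open("vdr-appliance.local"))
```

If this returns False from the VM you are restoring to, look at the firewalls between the two before blaming the FLR client.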

For a how-to on using the Linux client, check here, here and here.

I am sure many of you still use VDR and haven't migrated to VDP yet, so, if this is the case and this post helped you, please do not hesitate to tell or share! Also, if you have questions do not hesitate to ask.

Monday, October 8, 2012

How to install MS-DOS 6.22 under VMware ESX part 4

After a long pause, the saga of MS-DOS under VMware continues. After explaining in this old post how to connect a CD-ROM drive and configure a mouse, let's now proceed to configuring TCP/IP networking.

Now that you can use the CD-ROM:
  • mount the iso downloaded at the beginning of the mentioned post and move to the MSCLIENT folder
  • run setup.exe
  • choose the folder you want to install your drivers to
  • now from the adapter list choose "*Network adapter not shown on list below"
  • enter D:\AMDPCNET\DOS (use the ASCII code ALT+092 to type the backslash if you have problems entering it) as the driver directory to install the drivers for the "Advanced Micro Devices PCNET Family" adapter
  • the system will tell you it has found the appropriate driver
  • press 'Enter' to optimize the system for better performance
  • choose the name of your PC
  • add the TCP/IP protocol (you can move between the two lists using TAB)
  • remove IPX
  • configure TCP/IP by clicking on 'Change Settings'
  • set the IP address, the subnet mask and the default gateway (remember to use spaces instead of periods). Also, if you do not use a DHCP server, set 'Disable Automatic Configuration' to 1
Ok, at this point the Network Client is installed on your virtual machine. Just restart it to apply the modifications made to config.sys and autoexec.bat. However, on restart you will probably get an error message saying 'Error 8: There is not enough memory available' when the TCP/IP stack loads.

This is a pretty common error, due to the fact that MS-DOS is trying to load all the drivers into the first 640 KB of conventional memory (ahhh, the 640 KB limit, this takes me back to the old times...). Before MS-DOS can load a device driver into upper memory, there must be an upper memory block (UMB) provider available (EMM386 is the standard one) and there must be enough space in that UMB. If the UMB lacks room to store the device drivers, they are loaded into conventional memory instead.

You can check to see which device drivers have been loaded into high memory by using the MEM /C command.

So, to solve the problem, edit config.sys and add the following lines:
device=c:\dos\himem.sys
device=c:\dos\emm386.exe noems
dos=high,umb
Also, forcibly move the CD-ROM device driver and the Installable File System Helper (ifshlp.sys) to upper memory by updating their lines like this:
devicehigh=c:\hxcd-rom\cdrom.sys /D:MSCD000
devicehigh=c:\net\ifshlp.sys
Upon restart the drivers should be loaded into the upper memory blocks, as the MEM /C command shows.

At this point you should be able to ping and get pinged! That's all for this series of posts about installing MS-DOS virtual machines in a virtual environment. I hope they were helpful. If so, do not hesitate to comment, google+ or retweet!!!

Error 0x80070780 reading from a deduplicated drive

I read somewhere that Windows 2012 deduplication is not part of a new NTFS version but is instead a sort of engine layered on top of the file system. To test this I decided to mount an external USB drive, formatted NTFS and deduplicated under Windows 2012, on a Windows 2008 R2 server (which has had the same NTFS version, 3.1, since Windows XP).
NTFS File System Driver version and size compared between Win2008 and Win2012
If the initial statement is right and the NTFS version is the same, I should still be able to read this deduplicated external drive with Windows 2008 R2. My test revealed this to be only partly true: I can see the USB drive and browse its folders but, when I try to open any file bigger than 32 KB, I get the following error message:
Error 0x80070780 The file cannot be accessed from the system
Now, if I check with PowerShell the attributes of any of these files stored on the deduplicated drive whose size is larger than 32KB, I get:
Get-Item 40kb.txt | select Attributes | fl
Attributes : Archive, SparseFile, ReparsePoint
On smaller files, only the 'archive' attribute is present:
Get-Item 20kb.txt | select Attributes | fl
Attributes : Archive
So, a deduplicated file carries the SparseFile and ReparsePoint attributes; Windows 2008 R2 lacks the deduplication filter driver needed to follow that reparse point, hence the 0x80070780 error.

I thought it would be useful to share my experience on this in case somebody needs it.

Thursday, October 4, 2012

Data Deduplication in Windows Server 2012

After a few days testing Data Deduplication under Windows Server 2012, here are a few facts as well as my considerations on its performance.
  • No Data Deduplication of ReFS partitions. Source: Personal experience (check this previous post)
  • Data Deduplication is not enabled by default. Source: Personal experience
  • Data Deduplication is anything but fast. It is designed as a background service to improve disk space usage over time, so you can expect the best ROI only in the long term. Source: Personal experience
  • From the point above follows the next fact: when new files are added to the volume, they are not optimized right away. Only files that have not been changed for a minimum amount of time are optimized. (This minimum amount of time is set by user-configurable policy.) Source: MSDN 
  • Data Deduplication jobs can be manually started from Task Scheduler under 'Task Scheduler Library', 'Microsoft','Windows','Deduplication'. Source: Deploymentresearch.com 
  • Deduplication has a setting called MinimumFileAgeDays that controls how old a file should be before processing the file. The default setting is 5 days. This setting is configurable by the user and can be set to “0” to process files regardless of how old they are. Source: Technet 
  • The chunks have an average size of 64KB and they are compressed and placed into a chunk store located in a hidden folder at the root of the volume called the System Volume Information, or “SVI folder”. The normal file is replaced by a small reparse point, which has a pointer to a map of all the data streams and chunks required to “rehydrate” the file and serve it up when it is requested. Source: Technet 
  • Redundancy: Extra copies of critical metadata are created automatically. A very popular data chunk receives an entire duplicate copy for every 100 references. This area, a collection of the most popular chunks, is called "the hotspot". Source: Technet 
  • Files smaller than 32KB are not deduplicated (because their size is already smaller than the minimum chunk size). Source: Storagegaga.com 
  • The first service behind Deduplication is the Data Deduplication service, which enables the deduplication and compression of data on selected volumes in order to optimize disk space used. If this service is stopped, optimization will no longer occur but access to already optimized data will continue to function. Its command line is C:\Windows\system32\svchost -k ddpsvc Source: Personal experience
  • The second service is Data Deduplication Volume Shadow Copy Service, which is used to back up volumes with deduplication. Its command line is: C:\Windows\system32\svchost -k ddpvssvc Source: Personal experience
  • Deduplication Data Evaluation Tool (ddpeval.exe) doesn't work on Windows 7 Ultimate. Source: Personal experience
  • The deduplication VSS writer reports two components for each volume that contains a deduplication chunk store: the "Chunk Store" under \System Volume Information\Dedup\ChunkStore\* and "Dedup Configuration" under \System Volume Information\Dedup\Settings\*. Source: MSDN
Let's check this last fact and fire a few PowerShell commands to see what's inside the Chunk Store:
PS G:\> gci ".\System Volume Information" -Recurse -hidden

    Directory: G:\System Volume Information

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
-a-hs  03/10/2012 14:16  20480  tracking.log

    Directory: G:\System Volume Information\Dedup\ChunkStore

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
d--hs  02/10/2012 13:32         {512528DE-2E46-4C15-A013-8AEA62DEF7A8}.ddp

    Directory: G:\System Volume Information\Dedup\ChunkStore\{512528DE-2E46-4C15-A013-8AEA62DEF7A8}.ddp

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
d--hs  02/10/2012 13:32         Data
d--hs  02/10/2012 13:32         Hotspot
d--hs  02/10/2012 13:32         Stream
-a-hs  02/10/2012 13:32  28     stamp.dat

    Directory: G:\System Volume Information\Dedup\Settings

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
-a-hs  02/10/2012 13:29  2280   dedupConfig.01.xml
-a-hs  02/10/2012 13:29  2280   dedupConfig.02.xml

    Directory: G:\System Volume Information\Dedup\State

Mode   LastWriteTime     Length Name
----   -------------     ------ ----
-a-hs  03/10/2012 09:30  852    analysisState.xml
-a-hs  03/10/2012 13:36  2894   chunkStoreStatistics.xml
-a-hs  03/10/2012 13:36  2442   dedupStatistics.xml
-a-hs  03/10/2012 13:34  864    gcState.xml
-a-hs  03/10/2012 13:36  2066   optimizationState.xml
-a-hs  03/10/2012 13:34  852    scrubbingState.xml
It looks like the configuration of the deduplication service is stored in two XML files, whose content I show here:

dedupConfig.01.xml
<?xml version="1.0"?>
<root version="1.0"><properties><property value="0" type="VT_UI8" name="changeTime"/><property value="0" type="VT_UI4" name="options"/><property value="5" type="VT_UI4" name="fileMinimumAge"/><property value="32768" type="VT_UI4" name="fileMinimumSize"/><property value="" type="VT_BSTR" name="excludeFolders"/><property value="" type="VT_BSTR" name="excludeFileExtensions"/><property value="aac|aif|aiff|asf|asx|au|avi|flac|m3u|mid|midi|mov|mp1|mp2|mp3|mp4|mpa|mpe|mpeg|mpeg2|mpeg3|mpg|ogg|qt|qtw|ram|rm|rmi|rmvb|snd|swf|vob|wav|wax|wma|wmv|wvxaccdb|accde|accdr|accdt|docm|docx|dotm|dotx|pptm|potm|potx|ppam|ppsx|pptx|sldx|sldm|thmx|xlsx|xlsm|xltx|xltm|xlsb|xlam|xllace|arc|arj|bhx|bz2|cab|gz|gzip|hpk|hqx|jar|lha|lzh|lzx|pak|pit|rar|sea|sit|sqz|tgz|uu|uue|z|zip|zoo" type="VT_BSTR" name="noCompressionFileExtensions"/><property value="100" type="VT_UI4" name="hotspotThreshold"/><property value="2" type="VT_UI4" name="compressionLevel"/></properties></root>
dedupConfig.02.xml
<?xml version="1.0"?>
<root version="1.0"><properties><property value="0" type="VT_UI8" name="changeTime"/><property value="0" type="VT_UI4" name="options"/><property value="5" type="VT_UI4" name="fileMinimumAge"/><property value="32768" type="VT_UI4" name="fileMinimumSize"/><property value="" type="VT_BSTR" name="excludeFolders"/><property value="" type="VT_BSTR" name="excludeFileExtensions"/><property value="aac|aif|aiff|asf|asx|au|avi|flac|m3u|mid|midi|mov|mp1|mp2|mp3|mp4|mpa|mpe|mpeg|mpeg2|mpeg3|mpg|ogg|qt|qtw|ram|rm|rmi|rmvb|snd|swf|vob|wav|wax|wma|wmv|wvxaccdb|accde|accdr|accdt|docm|docx|dotm|dotx|pptm|potm|potx|ppam|ppsx|pptx|sldx|sldm|thmx|xlsx|xlsm|xltx|xltm|xlsb|xlam|xllace|arc|arj|bhx|bz2|cab|gz|gzip|hpk|hqx|jar|lha|lzh|lzx|pak|pit|rar|sea|sit|sqz|tgz|uu|uue|z|zip|zoo" type="VT_BSTR" name="noCompressionFileExtensions"/><property value="100" type="VT_UI4" name="hotspotThreshold"/><property value="2" type="VT_UI4" name="compressionLevel"/></properties></root>
Among the settings contained in these configuration files there is the list of excluded file extensions, that is, the types of files that won't be analyzed by the dedupe service; no file extensions are excluded by default. There is also the list of file extensions that the deduplication service won't try to compress: this second list includes by default media files (mpeg, mp3...), archives (zip, rar...) and MS Office files.
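To make the chunk store / stream map mechanics described in the facts above more concrete, here is a toy dedup sketch in Python. It is purely illustrative: Windows dedup uses variable-size chunking with a ~64 KB average, while this sketch uses fixed-size chunks to stay short.

```python
import hashlib

CHUNK = 64 * 1024   # Windows dedup chunks average ~64 KB; fixed-size chunking
                    # is used here only to keep the illustration simple

def add_file(store: dict, data: bytes) -> list:
    """Chunk a file, keep unique chunks in the shared store, and return the
    ordered list of chunk hashes -- the moral equivalent of the stream map
    that the reparse point left behind points to."""
    stream_map = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        h = hashlib.sha256(chunk).hexdigest()
        store.setdefault(h, chunk)      # identical chunks are stored only once
        stream_map.append(h)
    return stream_map

store = {}
map_a = add_file(store, b"A" * CHUNK + b"B" * CHUNK)
map_b = add_file(store, b"A" * CHUNK + b"C" * CHUNK)
# Four chunks are referenced but only three are stored: the "A" chunk is shared.
# "Rehydrating" a file is just concatenating its chunks back in order:
original_a = b"".join(store[h] for h in map_a)
```

The real engine adds compression, container files and hotspot duplication on top, but the store-once/reference-many idea is the same.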

As I said, deduplication is designed to work on files over the long term. So if I query the deduplication state of a newly added volume, it shows that no files are optimized yet:
PS C:\> Get-DedupVolume g:

Enabled SavedSpace  SavingsRate Volume
------- ----------  ----------- ------
True    0 B         0 %         G:
Now, if you want Data Deduplication to immediately process all files on the volume regardless of their age, first take a look at the full volume settings:
PS G:\> Get-DedupVolume g: | fl *

ObjectId                 : \\?\Volume{795fedec-0bc3-11e2-93ea-005056984e73}\
Capacity                 : 10734268416
ChunkRedundancyThreshold : 100
DataAccessEnabled        : True
Enabled                  : True
ExcludeFileType          :
ExcludeFolder            :
FreeSpace                : 9539395584
MinimumFileAgeDays       : 5
MinimumFileSize          : 32768
NoCompress               : False
NoCompressionFileType    : {aac, aif, aiff, asf...}
SavedSpace               : 0
SavingsRate              : 0
UnoptimizedSize          : 1194872832
UsedSpace                : 1194872832
Verify                   : False
Volume                   : G:
VolumeId                 : \\?\Volume{795fedec-0bc3-11e2-93ea-005056984e73}\
PSComputerName           :
CimClass                 : ROOT/Microsoft/Windows/Deduplication:MSFT_DedupVolume
CimInstanceProperties    : {Capacity, ChunkRedundancyThreshold, DataAccessEnabled, Enabled...}
CimSystemProperties      : Microsoft.Management.Infrastructure.CimSystemProperties
There you can recognise the parameter we talked about a few lines above: MinimumFileAgeDays. Let's change its value to 0:
PS G:\> Set-DedupVolume g: -MinimumFileAgeDays 0
When I issue this command, dedupConfig.01.xml and dedupConfig.02.xml are both updated with the new value.
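Since the change lands in those XML files, one way to double-check it is to parse them. A sketch using Python's standard library, fed with a trimmed copy of the dedupConfig structure shown above (the SVI folder is ACL-protected, so in practice you would work on a copy of the file):

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the dedupConfig structure shown above; values reflect
# the state after Set-DedupVolume g: -MinimumFileAgeDays 0
DEDUP_CONFIG = """<?xml version="1.0"?>
<root version="1.0"><properties>
<property value="0" type="VT_UI4" name="fileMinimumAge"/>
<property value="32768" type="VT_UI4" name="fileMinimumSize"/>
<property value="100" type="VT_UI4" name="hotspotThreshold"/>
</properties></root>"""

props = {p.get("name"): p.get("value")
         for p in ET.fromstring(DEDUP_CONFIG).iter("property")}

print(props["fileMinimumAge"])    # 0 -> files are eligible immediately
print(props["fileMinimumSize"])   # 32768 -> the 32 KB threshold
```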

After one night the SavingsRate value has gone from 0% to 75%. Amazing.
PS HKLM:\SOFTWARE> Get-DedupVolume g:

Enabled SavedSpace SavingsRate Volume
------- ---------- ----------- ------
True    856.98 MB  75 %        G:

PS HKLM:\SOFTWARE> Get-DedupMetadata

Volume                         : G:
VolumeId                       : \\?\Volume{795fedec-0bc3-11e2-93ea-005056984e73}\
StoreId                        : {512528DE-2E46-4C15-A013-8AEA62DEF7A8}
DataChunkCount                 : 3511
DataContainerCount             : 1
DataChunkAverageSize           : 24.12 KB
DataChunkMedianSize            : 0 B
DataStoreUncompactedFreespace  : 0 B
StreamMapChunkCount            : 34
StreamMapContainerCount        : 1
StreamMapAverageDataChunkCount :
StreamMapMedianDataChunkCount  :
StreamMapMaxDataChunkCount     :
HotspotChunkCount              : 1
HotspotContainerCount          : 1
HotspotMedianReferenceCount    :
CorruptionLogEntryCount        : 0
TotalChunkStoreSize            : 83.84 MB
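As a sanity check on that output, chunk count times average chunk size should land close to the reported chunk store size; quick arithmetic on the figures above (the small gap is presumably container and metadata overhead):

```python
data_chunk_count = 3511       # DataChunkCount from Get-DedupMetadata above
avg_chunk_kb = 24.12          # DataChunkAverageSize, in KB
reported_store_mb = 83.84     # TotalChunkStoreSize, in MB

estimated_mb = data_chunk_count * avg_chunk_kb / 1024
print(f"estimated: {estimated_mb:.1f} MB vs reported: {reported_store_mb} MB")
# estimated: 82.7 MB vs reported: 83.84 MB
```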
The disk space saving can be seen directly in Windows Explorer, as shown in the image, as well as in File and Storage Services, Volume view.

Used space after deduplication

Deduplication efficiency rate

The facts listed here are just a starting point for understanding this new service from Microsoft. Feel free to add your own comments and to share your opinion on the results you get with deduplication!

Tuesday, October 2, 2012

ReFS and (no) Data Deduplication

Well, this post about ReFS and Data Deduplication won't be long, and for a good reason: Data Deduplication does NOT apply to partitions formatted with ReFS. Full stop. FAT32 volumes are no good either (OK, I expected this).

The only supported volume type is NTFS, and the partition to deduplicate must be larger than 2GB.

So, when I launched into activating Data Deduplication on my new Windows Server 2012, with the Powershell commands described here:
Import-Module ServerManager
Add-WindowsFeature -name FS-Data-Deduplication
Import-Module Deduplication
and then tried to activate it on a ReFS partition with:
Enable-DedupVolume F:
...I got this error:
Enable-DedupVolume : MSFT_DedupVolume.Volume='F:' - HRESULT 0x8056530b, The specified volume type is not supported.
Deduplication is supported on fixed, write-enabled NTFS data volumes.
At line:1 char:1
+ Enable-DedupVolume F:
+ ~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : NotSpecified: (MSFT_DedupVolume:ROOT/Microsoft/...SFT_DedupVolume) [Enable-DedupVolume],
    CimException
    + FullyQualifiedErrorId : HRESULT 0x8056530b,Enable-DedupVolume
If you concentrate very hard, you should be able to see the disappointment on my face! After all the hype around ReFS, you are telling me that it doesn't offer Data Deduplication?...

On the contrary, when run against an NTFS drive, the result is much more... convincing:
Enable-DedupVolume G:

Enabled SavedSpace SavingsRate Volume
------- ---------- ----------- ------
True    0 B        0 %         G:
In the end, I found out (reading here) that:

"ReFS does not itself offer deduplication. One side effect of its familiar, pluggable, file system architecture is that other deduplication products will be able to plug into ReFS the same way they do with NTFS."

I don't understand the technical choice behind this... after all, who does? I am pretty disappointed now. I'll keep testing Data Deduplication on the other NTFS partition, but I don't see the point of introducing a brand new technology and excluding such a good feature!

My first ReFS partition on a Windows 2012 server

Ok, maybe this is not of great importance, but I am a little bit excited today because I just had the occasion to format my first ReFS partition on a Windows 2012 server.

Here's the Powershell command I use to list my partitions:

PS C:\> get-wmiobject -class "Win32_LogicalDisk" -namespace "root\CIMV2"  |  select caption, filesystem, size, freespace


caption  filesystem  size         freespace
-------  ----------  ----         ---------
C:       NTFS        42580570112  30284148736
D:       UDF         3695179776   0
E:       ReFS        10670309376  10518462464
F:       ReFS        11744051200  11590959104

Actually, I formatted two ReFS partitions: one on an MBR volume, and the other on a GPT volume. Both worked.

Also, I just want to make clear that, for the moment, you cannot install Windows 2012 on a ReFS partition: only secondary partitions can be formatted with this file system.

Windows Server 2012 folders size chart

I am planning to deploy some new Windows Server 2012 machines, and here comes the question of HDD size. According to Microsoft, 32 gigs is the bare minimum for the system partition but, as usual, I wanted a closer look at reality, so I ran TreeSize against the C: drive (on a 40 GB disk) of a freshly installed Datacenter Edition and got the following results:
  • Drive Size: 39.66 GB
  • Bytes per Cluster: 4096 Bytes
  • Filesystem: (NTFS)

Full Path                      Size         Files
c:\Windows                     11,414.3 MB  70,465
c:\*.*                          3,584.4 MB       3
c:\Users                           58.3 MB     269
c:\Program Files                   28.3 MB     244
c:\Program Files (x86)             23.5 MB     125
c:\ProgramData                     12.4 MB      96
c:\System Volume Information        0.0 MB       2
c:\$Recycle.Bin                     0.0 MB       1
c:\Documents and Settings           0.0 MB       0
c:\PerfLogs                         0.0 MB       0

What we see is that a clean installation of Windows Server 2012 Datacenter Edition takes almost 15 GB: the Windows folder accounts for 11 GB, plus the pagefile (4GB). In total there are a little more than 71k files on the partition after the initial installation.

The biggest folder under c:\Windows is, as you would expect, WinSxS, which takes more than 6 GB, followed by System32 (2 GB), Assembly (2 GB) and SysWOW64 (1 GB).

WinSxs breakdown under Windows Server 2012
As additional data, these are the extensions that eat most of the disk space:
  • .dll is the winner, with 13k files and more than 6 GB
  • .sys, with 1,000 files and a little less than 4 GB
  • .exe, with 2,000 files and 500 MB
Also, the biggest file on disk is 'imageres.dll', with a size of 64 MB (not that big, I must say). This file contains all the Windows 2012 system icons, the login screen background image and the startup sound in wav format (5080.wav).

That's all for my first encounter with Windows Server 2012. I will delve as soon as possible into ReFS and Data Deduplication, which are what I am most interested in, and will probably post something as soon as I have tested them.

Meanwhile, do not hesitate to share your disk space usage on Windows 2012. It would be interesting to compare how usage differs with different roles installed.

Addendum: for personal reference I post below a hex dump of my Windows 2012 Master Boot Record (MBR).
LBN 0   [C 0, H 0, S 1]

0x0000   33 c0 8e d0 bc 00 7c 8e-c0 8e d8 be 00 7c bf 00   3└Äð╝.|Ä└ÄÏ¥.|┐.
0x0010   06 b9 00 02 fc f3 a4 50-68 1c 06 cb fb b9 04 00   .╣..³¾ñPh∟.╦¹╣..
0x0020   bd be 07 80 7e 00 00 7c-0b 0f 85 0e 01 83 c5 10   ¢¥.Ç~..|..à..â┼►
0x0030   e2 f1 cd 18 88 56 00 55-c6 46 11 05 c6 46 10 00   Ô±═↑êV.UãF◄.ãF►.
0x0040   b4 41 bb aa 55 cd 13 5d-72 0f 81 fb 55 aa 75 09   ┤A╗¬U═‼]r.ü¹U¬u.
0x0050   f7 c1 01 00 74 03 fe 46-10 66 60 80 7e 10 00 74   ¸┴..t.■F►f`Ç~►.t
0x0060   26 66 68 00 00 00 00 66-ff 76 08 68 00 00 68 00   &fh....f v.h..h.
0x0070   7c 68 01 00 68 10 00 b4-42 8a 56 00 8b f4 cd 13   |h..h►.┤BèV.ï¶═‼
0x0080   9f 83 c4 10 9e eb 14 b8-01 02 bb 00 7c 8a 56 00   ƒâ─►×Ù¶©..╗.|èV.
0x0090   8a 76 01 8a 4e 02 8a 6e-03 cd 13 66 61 73 1c fe   èv.èN.èn.═‼fas∟■
0x00a0   4e 11 75 0c 80 7e 00 80-0f 84 8a 00 b2 80 eb 84   N◄u.Ç~.Ç.äè.▓ÇÙä
0x00b0   55 32 e4 8a 56 00 cd 13-5d eb 9e 81 3e fe 7d 55   U2õèV.═‼]Ù×ü>■}U
0x00c0   aa 75 6e ff 76 00 e8 8d-00 75 17 fa b0 d1 e6 64   ¬un v.Þì.u↨·░еd
0x00d0   e8 83 00 b0 df e6 60 e8-7c 00 b0 ff e6 64 e8 75   Þâ.░▀µ`Þ|.░ µdÞu
0x00e0   00 fb b8 00 bb cd 1a 66-23 c0 75 3b 66 81 fb 54   .¹©.╗═→f#└u;fü¹T
0x00f0   43 50 41 75 32 81 f9 02-01 72 2c 66 68 07 bb 00   CPAu2ü¨..r,fh.╗.
0x0100   00 66 68 00 02 00 00 66-68 08 00 00 00 66 53 66   .fh....fh....fSf
0x0110   53 66 55 66 68 00 00 00-00 66 68 00 7c 00 00 66   SfUfh....fh.|..f
0x0120   61 68 00 00 07 cd 1a 5a-32 f6 ea 00 7c 00 00 cd   ah...═→Z2÷Û.|..═
0x0130   18 a0 b7 07 eb 08 a0 b6-07 eb 03 a0 b5 07 32 e4   ↑áÀ.Ù.áÂ.Ù.áÁ.2õ
0x0140   05 00 07 8b f0 ac 3c 00-74 09 bb 07 00 b4 0e cd   ...ï­¼<.t.╗..┤.═
0x0150   10 eb f2 f4 eb fd 2b c9-e4 64 eb 00 24 02 e0 f8   ►Ù‗¶Ù²+╔õdÙ.$.Ó°
0x0160   24 02 c3 49 6e 76 61 6c-69 64 20 70 61 72 74 69   $.├Invalid parti
0x0170   74 69 6f 6e 20 74 61 62-6c 65 00 45 72 72 6f 72   tion table.Error
0x0180   20 6c 6f 61 64 69 6e 67-20 6f 70 65 72 61 74 69    loading operati
0x0190   6e 67 20 73 79 73 74 65-6d 00 4d 69 73 73 69 6e   ng system.Missin
0x01a0   67 20 6f 70 65 72 61 74-69 6e 67 20 73 79 73 74   g operating syst
0x01b0   65 6d 00 00 00 63 7b 9a-9b 01 24 32 00 00 80 20   em...c{Üø.$2..Ç
0x01c0   21 00 07 be 12 2c 00 08-00 00 00 f0 0a 00 00 be   !..¥↕,.....­...¥
0x01d0   13 2c 07 fe ff ff 00 f8-0a 00 00 00 f5 04 00 00   ‼,.■  .°....§...
0x01e0   00 00 00 00 00 00 00 00-00 00 00 00 00 00 00 00   ................
0x01f0   00 00 00 00 00 00 00 00-00 00 00 00 00 00 55 aa   ..............U¬

