[nSLUG] tar of fat32 windows xp partition?

Donald Teed dteed at artistic.ca
Wed Jul 30 10:16:05 ADT 2003


On Wed, 30 Jul 2003, Dop Ganger wrote:

> I'm not sure if it's quite so simple as it looks. From the mkdosfs
> manpage:
> 
> BUGS
>        mkdosfs can not create bootable filesystems. This isn't as easy as
>        you might think at first glance for various reasons and has been
>        discussed a lot already.  mkdosfs simply will not support it ;)

I'm now thinking about getting the Windows XP recovery programs
fixboot and fixmbr to run within Linux under DOS emulation.  I suspect
Microsoft has made sure these executables will not function outside
of the recovery console.  That would matter to MS for protecting
the fancier NTFS file system, which changes between XP, 2000
and Active Directory server, so I won't be surprised if these
tools only function in that console mode.

> Ouch. Is this for imaging laptops? I would expect dd would be able to cope
> with simple imaging without an issue. Aaron may be able to help you there,
> I think he may have been involved with a similar project at MTT that used
> dd to image Win98 machines.

Yes, this is for imaging laptops.  The Windows XP image is prepared
properly by someone else, with sysprep and all that stuff.

I have a working solution for the Windows + Linux dual-boot image,
using dd, gzip and ftp (a la g4u) from KNOPPIX.  In the restore
direction I'm using wget so we get a progress meter (as opposed to
the screen zipping by with dots from the progress=1 option to dd
that g4u (Ghost for Unix) uses).
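For anyone who wants to try the same thing, here is a minimal sketch of
the capture and restore pipeline.  It uses a scratch file as a stand-in
for the real disk device (/dev/hda on the laptops) and skips the
ftp/wget transfer, so the filenames here are just illustrative:

```shell
# Make a 4 MB stand-in "disk": mostly zeros, with a little data at the front.
dd if=/dev/zero of=disk.img bs=1M count=4 2>/dev/null
printf 'some file system data' | dd of=disk.img conv=notrunc 2>/dev/null

# Capture: read the raw device and compress on the fly.
# (On the real box: dd if=/dev/hda bs=1M | gzip -c, then upload via ftp.)
dd if=disk.img bs=1M 2>/dev/null | gzip -c > disk.img.gz

# Restore: decompress and write the raw image back.
# (On the real box: wget -O - http://server/disk.img.gz | gunzip | dd of=/dev/hda)
gunzip -c disk.img.gz | dd of=restored.img bs=1M 2>/dev/null

cmp disk.img restored.img && echo "restored image matches original"
```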

Because the original disk is zeroed in the blank areas, we get
excellent compression with gzip.  A 20 GB drive becomes a 2.7 GB image.
As the restore operation takes place, the wget progress meter
moves very fast because it is showing a percentage of the download,
not the writing of the image to disk.
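The effect of zeroed blocks on gzip is easy to demonstrate on a small
scratch file (sizes here are made up for the demo):

```shell
# 20 MB of zeros compresses to a tiny fraction of its original size,
# which is why a mostly-blank 20 GB drive shrinks to a 2.7 GB image.
dd if=/dev/zero of=zeros.bin bs=1M count=20 2>/dev/null
gzip -c zeros.bin > zeros.bin.gz
ls -l zeros.bin zeros.bin.gz
```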

The tail end of that download is highly compressed zeros, so
the bottleneck becomes just waiting for the hard drive to completely
write the full disk.  It looks a little funny because wget
reaches the 99% mark in just 5 minutes, and then stays there for
the next 25 minutes.  top shows gunzip and dd are only using 40% of
the CPU, so it is the hardware we are waiting for.  KNOPPIX doesn't
enable DMA by default, and I noticed a huge boost in performance
once that was switched on.
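For the record, switching DMA on from a KNOPPIX shell is a one-liner,
assuming the usual hdparm tool and the first IDE disk at /dev/hda:

```shell
# Enable DMA on the first IDE drive; -d1 turns it on, -d alone queries it.
hdparm -d1 /dev/hda
# Quick sanity check of buffered read throughput before and after:
hdparm -t /dev/hda
```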

The performance of writing the whole 20 GB disk from Linux matches that
of the DOS tool writing only the 3 GB of files in the file system
(about 30 minutes).  This is why I'm interested in doing it the
way the DOS tool does: I'm pretty sure Linux could beat the
pants off this if it only had to write the files.  I did a test with
a resized FAT32 partition, and the Linux-based ghost solution wrote 8 GB
to /dev/hda1 in only 12 minutes.  But I don't think they will
go for a solution that leaves the drive with a C: and a D:.

> Hmmm... One way to do it is to ghost off to cd drives, then it's just a
> matter of swapping CDs. If you get a row of laptops lined up, it's pretty
> straightforward to get a series of relays going, moving a CD down the
> stack of laptops and pushing the next CD in the series on the top of the
> stack. Remember, 9 women won't get you a baby in one month, but you *can*
> get 9 babies in 9 months... :-)

I think they can do 40 machines at once right now using the DOS client
and a Windows-based multicast server.  I've considered a DVD image,
but it is probably faster to read from the network.

The working solution I have now is cheaper, but only just as fast.
If I can get it to run in 1/4 of the time, we'll have a killer Linux app.

More information about the nSLUG mailing list