I thought the above would yield an image backup of my Windows partition.
It does seem to, at first, but for some odd reason it slows down badly
toward the end of the partition (about 2-3 days vs. ~2-3 hours under
Linux). That would be a first RFE -- though, as an ex-manager at SGI
used to say, it's only performance; as long as it works, that's the
primary concern. But it doesn't seem to know when to stop.
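For anyone without the quoted context, the pipeline was of this general
shape (the device name, block size, and output path below are
placeholders, not the exact invocation):

    # Raw-read the partition and compress it onto the server.
    dd if=/dev/sda1 bs=64k | bzip2 -c > //server/backup/c-drive.bz2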
"df" claims:
    Filesystem  1K-blocks      Used  Available  Use%  Mounted on
    C:           30813648  15304720   15508928   50%  /
That would be 30,813,648 1k-blocks or 31,553,175,552 bytes.
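(The byte count is just the 1K-block count times 1024:)

    $ echo $((30813648 * 1024))
    31553175552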
Examining the output of 'dd' shows it is accessing the correct
partition.
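(One quick way to check that, by the way: dump the first sector and
look for the NTFS OEM id. The device name here is a placeholder:)

    # The boot sector of an NTFS partition carries the string "NTFS"
    # at byte offset 3; /dev/sda1 stands in for the real device.
    dd if=/dev/sda1 bs=512 count=1 2>/dev/null | od -c | head -4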
However, at some point bzip2 stopped pegging the CPU and dd became the
most guilty party, at about 60-70% CPU, with csrss.exe taking up the
slack at 20-30% (it was at 0 while bzip2 was crunching along at high
speed). It stays like that for a few minutes, at least, and then bzip2
rises from the dead.
Now, looking at I/O read/write bytes, I see:
             input            output
    dd       29,921,032,311   29,920,942,080
    bzip2    29,920,931,594    9,655,162,880
Peak working sets show bzip2 peaking at ~8.8 MB and dd at ~2 MB.
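Incidentally, those two bzip2 counters put the compression ratio so far
at about 3.1:1:

    $ echo "scale=2; 29920931594 / 9655162880" | bc
    3.09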
They started 69 hours ago, with CPU times for dd and bzip2 showing 10.5
and 20.5 hours, respectively. When they started, bzip2's output file on
the server was growing at roughly 800-900 KB/second. Now it is growing
at about 600 bytes/second.
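(That rate comes from just sampling the output file's size, with
something like the loop below -- the path is made up, and it assumes
GNU stat:)

    # Print the output file's growth in bytes/second, sampled
    # every 10 seconds.  backup.bz2 is a placeholder path.
    prev=$(stat -c %s backup.bz2)
    while sleep 10; do
        cur=$(stat -c %s backup.bz2)
        echo "$(( (cur - prev) / 10 )) bytes/sec"
        prev=$cur
    done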
So what the bleep? I know bzip2 can be slow at times, but output of 600
bytes/sec?
Server shows zero load, roughly 6G free -- not likely to be a source of
delay.
The reads seem to show a logarithmic slowdown -- I'm not sure it will
ever finish at this rate. Any idea why it would slow down so much? I
know 'dd' chokes somewhat on large files -- or it did as of a few
months ago. It was written a bit inefficiently for reading from
random-access devices: telling it to "skip" blocks at the beginning
1) didn't like >4-byte integers, and 2) used the "read" call to skip
the bytes. That means you can't copy the last 100 bytes of a 30G file
by seeking to it as one might expect; instead, 'dd' reads all the data
on disk up to that point. That might be necessary when blocks are
variable length and delimited by line feeds, but it's certainly a drag
for the more common case (for me) of fixed-size blocks.
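To make the skip complaint concrete: with an lseek()-based skip, the
command below should return almost instantly; with a read()-based skip,
'dd' plows through the whole file first. (The file name is made up.)

    # 30 GiB = 32,212,254,720 bytes, so the last 100 bytes start at
    # offset 32,212,254,620 -- a value that doesn't even fit in a
    # 32-bit integer, which is the other half of the problem.
    dd if=bigfile.img of=tail100.bin bs=1 skip=32212254620 count=100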
But bzip2 seems to be hogging the CPU again now... cranking out...
well, down to 500 or so bytes/second now... (sigh).
Would be cool if this worked "right"; it would be open software's
answer to the $60-70 "disk image" programs from "Overpriced SW Inc.".
-linda
--
In the marketplace of "Real goods", capitalism is limited by safety
regulations, consumer protection laws, and product liability. In
the computer industry, what protects the consumer?