Mail Archives: djgpp/2000/07/12/14:00:17
On Wed, 12 Jul 2000, Eli Zaretskii wrote:
>
> On Wed, 12 Jul 2000, Hans-Bernhard Broeker wrote:
>
> > > It's a simple change, and is a useful feature to have. `du' expects the
> > > library to return the block size, and if it does not, falls back to some
> > > simple-minded default.
> >
> > What library? What 'block size' does it read from it?
>
> See the definition of the macro ST_NBLOCKS on src/system.h in the
> Fileutils distribution.
OK, I finally bit the bullet and d/l'ed the sources. Turns out I must have
misunderstood how 'du' works all along: it really does use the
'st_blocks' field of 'struct stat' to calculate file lengths, if that
field is available. But the README warns that there is no reliable way of
getting at the number of bytes per st_blocks unit...
OTOH, I just tried 'du -a --bytes' in a directory full of small files, on
a Linux box: the result clearly shows that it actually measured in terms
of single bytes, not clusters or blocks, even when I applied it to a mounted
MSDOS filesystem. Here's an excerpt of "du -a --bytes | sort -n" output:
128 /mnt/C/mouse/mousecc.ini
147 /mnt/C/config.old
166 /mnt/C/autoexec.old
177 /mnt/C/autoexec.lgi
[FYI: that's du --version: GNU fileutils 3.12]
I.e. no matter how it actually does it, it obviously has no blockiness in
its input figures.
OTOH, compiling fileutils 4.0 on a Digital Unix Alpha box, I get
quantization into units of 1024 bytes.
So: yes, support for this feature could arguably be added to 'du', leaving
only two questions to answer: do we need a command-line option to switch
this on/off? And of course the usual: who'll do it?
Hans-Bernhard Broeker (broeker AT physik DOT rwth-aachen DOT de)
Even if all the snow were burnt, ashes would remain.