Mail Archives: djgpp/1997/09/12/09:34:27
On Fri, 12 Sep 1997, Paul Shirley wrote:
> In article <Pine DOT SUN DOT 3 DOT 91 DOT 970911165851 DOT 13452D-100000 AT is>, Eli Zaretskii
> <eliz AT is DOT elta DOT co DOT il> writes
> >
> >On 7 Sep 1997, Paul Derbyshire wrote:
> >
> >> FAT serves a necessary function, tracking which disk blocks are free and
> >> which are not. It uses a bit for every block on a disk. As far as I
> >
> >One alternative is the Unix-style inode filesystem, where in essence the
> >table of used blocks for each file grows as the file size grows. Any
> >book on Unix will describe the details of this.
> >
> >NTFS and HPFS (from NT and OS/2, respectively) are other alternatives.
> >
> >AFAIK, none of these waste more than 511 bytes for any given file.
>
> It gets better: Unix filesystems are moving to 'frags', which allow
> allocations smaller than a disk sector to be merged... I can't think of
> a way to get more efficient use of space ;)
UNIX file systems use allocation block sizes from 1K to 8K. Most of the
larger-block filesystem versions do indeed allow small files and the
trailing piece of a file to be stored as 'fragments': sub-block pieces
packed into a special pseudo-file that holds the fragments of several
files. For efficiency, any one file is allowed only one or a few such
fragments. Fragment sizes vary; typical values are 128, 256, or 512
bytes.
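To make the space savings concrete, here is a small back-of-the-envelope sketch (not taken from any actual filesystem source) comparing block-only allocation, where the tail of a file is rounded up to a whole block as in FAT-style clusters, against fragment allocation, where the tail is rounded up only to fragment-sized units:

```python
def allocated_bytes(file_size, block_size, frag_size=None):
    """Bytes of disk space allocated for a file of file_size bytes.

    With frag_size=None, the tail is rounded up to a full block
    (FAT-cluster style).  With a frag_size, the tail is stored in
    fragment-sized units (BSD FFS style).
    """
    if file_size == 0:
        return 0
    full_blocks = file_size // block_size
    tail = file_size - full_blocks * block_size
    if tail == 0:
        return full_blocks * block_size
    if frag_size is None:
        # Block-only allocation: the partial tail costs a whole block.
        return (full_blocks + 1) * block_size
    # Fragment allocation: the tail costs only ceil(tail/frag_size) fragments.
    frags = -(-tail // frag_size)  # ceiling division
    return full_blocks * block_size + frags * frag_size

# A 10,000-byte file on an 8K-block filesystem:
print(allocated_bytes(10000, 8192))       # 16384 (one block wasted ~6K)
print(allocated_bytes(10000, 8192, 512))  # 10240 (waste under one fragment)
```

With 512-byte fragments the internal waste for any file drops to at most 511 bytes, matching the figure quoted earlier in the thread, while the filesystem still gets the throughput benefit of large blocks for the bulk of each file.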
Art S. Kagel, kagel AT bloomberg DOT com