Mail Archives: djgpp/1997/09/12/09:34:27

Date: Fri, 12 Sep 1997 09:31:30 -0400 (EDT)
From: "Art S. Kagel" <kagel AT ns1 DOT bloomberg DOT com>
To: Paul Shirley <Paul AT chocolat DOT co DOT uk>
Cc: djgpp AT delorie DOT com
Subject: Re: Is the world dropping MS-DOS. What about DJGPP? (Was Re: Quake
In-Reply-To: <wf2W2CAt$JG0Ewur@foobar.co.uk>
Message-Id: <Pine.D-G.3.91.970912092457.1465B-100000@dg1>
Mime-Version: 1.0

On Fri, 12 Sep 1997, Paul Shirley wrote:

> In article <Pine DOT SUN DOT 3 DOT 91 DOT 970911165851 DOT 13452D-100000 AT is>, Eli Zaretskii
> <eliz AT is DOT elta DOT co DOT il> writes
> >
> >On 7 Sep 1997, Paul Derbyshire wrote:
> >
> >> FAT serves a necessary function, tracking which disk blocks are free and
> >> which are not. It uses a bit for every block on a disk. As far as I
> >
> >One alternative is the Unix-style inode filesystem, where in essence the
> >table of used blocks for each file grows as the file size grows.  Any 
> >book on Unix will describe the details of this.
> >
> >NTFS and HPFS (from NT and OS/2, respectively) are other alternatives.
> >
> >AFAIK, none of these waste more than 511 bytes for any given file. 
> 
> It gets better: Unix filesystems are moving to 'frags', which allow
> allocations smaller than a disk sector to be merged... I can't think of
> a way to get more efficient use of space ;)
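
To put rough numbers on the waste being discussed above: with whole-cluster
allocation each file loses up to nearly a full cluster of slack, while a scheme
that allocates down to sector granularity never loses more than 511 bytes per
file.  A small C sketch (the cluster and sector sizes below are example figures
only, not taken from any particular filesystem):

#include <stdio.h>

int main(void)
{
    /* Slack lost to whole-cluster allocation (FAT-style) versus
       sector-granular allocation.  Sizes are example figures only. */
    const unsigned long cluster = 8192;            /* one cluster    */
    const unsigned long sector  = 512;             /* one sector     */
    const unsigned long sizes[] = { 1, 600, 4000, 20000, 65536 };
    size_t i;

    for (i = 0; i < sizeof sizes / sizeof sizes[0]; i++) {
        unsigned long n = sizes[i];
        unsigned long slack_cluster = (cluster - n % cluster) % cluster;
        unsigned long slack_sector  = (sector  - n % sector)  % sector;
        printf("file of %6lu bytes: %5lu bytes wasted with clusters, %3lu with sectors\n",
               n, slack_cluster, slack_sector);
    }
    return 0;
}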

UNIX file systems use allocation unit (block) sizes from 1K to 8K.  Most of
the larger-block filesystem versions do indeed allow small files, and the
trailing piece of a larger file, to be allocated as a 'fragment' within a
special pseudo-file that holds the fragments of several files.  For
efficiency, that pool of fragments is confined to one or a few allocation
units.  Fragment sizes vary; 128, 256, or 512 bytes are typical.
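
As a rough sketch of how that works out for one file (the 4K block and
512-byte fragment below are example values only, not those of any particular
UNIX):

#include <stdio.h>

int main(void)
{
    /* Sketch of fragment-style allocation: full blocks for the body of
       the file, fragment-sized pieces for the trailing partial block.
       Block, fragment, and file sizes are example values only. */
    const unsigned long block = 4096;   /* allocation unit (block)  */
    const unsigned long frag  = 512;    /* fragment, block / 8      */
    const unsigned long size  = 10000;  /* example file size        */

    unsigned long full_blocks = size / block;
    unsigned long tail        = size % block;
    unsigned long tail_frags  = (tail + frag - 1) / frag;

    printf("%lu bytes -> %lu full blocks + %lu fragments of %lu bytes\n",
           size, full_blocks, tail_frags, frag);
    printf("slack: %lu bytes instead of %lu\n",
           tail_frags * frag - tail, (block - tail) % block);
    return 0;
}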

Art S. Kagel, kagel AT bloomberg DOT com
