Date: Fri, 18 Apr 97 21:56 NZST
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
To: alaric AT abwillms DOT demon DOT co DOT uk
From: Lorier
Subject: Re: Usage of directory entries
Cc: Matthias DOT Paul AT post DOT rwth-aachen DOT de, opendos-developer AT delorie DOT com
Precedence: bulk

>> 1k Clusters up to a Terabyte (I think, certainly more than a gig anyway),
>> where MS keep kludging FAT :( With ext2 we can show a far more efficient
>> FileSystem and show how bloated FAT really is :)
>
>An extent-based filer like VSTa uses or, I deduce, NTFS is (it uses
>512-byte allocation units) can be more efficient, I think; disk
>space is managed like malloc allocates blocks of RAM, in runs of
>sectors. Unless it gets really fragmented, this is smaller than having
>free-space bitmaps and indirection blocks and all that. And smaller
>means faster, no? :-)

Er, not always :) It's usually a trade-off between speed and size. Doing it
that way means a lot of extra calculation, and appending to the end of a file
is a problem if the sectors just past it are already taken, which leads to
high fragmentation :) (There's a rough sketch of the two layouts at the end
of this mail.)

>(many UNIX filers are quicker at random access since having indirect
>index blocks helps the seeking-into-files business, but are slower
>for sequential access, since the index blocks have to be loaded.
>FAT is even slower than either of them!)

Anything has to be better than FAT :) ext2fs is widely used as a Linux
filesystem, and Caldera have a large interest in Linux :) I'd just like a
filesystem where your 1 gig drive doesn't lose a hundred meg or so to slack
space; I could do just as well on an 800 meg drive with no slack space :)
(A quick slack-space calculation is appended below as well.) Although drive
space isn't THAT expensive any more... We have to make a trade-off somewhere,
and ext2fs seems to do rather well.
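
[Appended sketches -- these are only illustrations, not anything out of the
real VSTa, NTFS or ext2 sources, so treat the layouts and names as made up.]

First, the extent-versus-chain point. A minimal sketch in C, assuming 32-bit
sector numbers and FAT16-style 16-bit cluster entries: an extent names a
whole run of sectors in one record, so mapping a file offset to a sector is
a short scan over a few records, while a FAT chain needs one table entry per
cluster and has to be walked link by link.

/* Hypothetical, simplified records -- not the real NTFS or VSTa on-disk
 * formats, just enough to show the size/speed trade-off.              */
#include <stdint.h>
#include <stddef.h>

struct extent {                  /* one run of consecutive sectors     */
    uint32_t start_sector;       /* where the run begins on disk       */
    uint32_t length;             /* how many sectors the run covers    */
};

/* Extent scheme: a file needs only a handful of records, and mapping a
 * file-relative sector to a disk sector is a short linear scan.       */
static int64_t extent_lookup(const struct extent *ext, size_t count,
                             uint32_t file_sector)
{
    for (size_t i = 0; i < count; i++) {
        if (file_sector < ext[i].length)
            return (int64_t)ext[i].start_sector + file_sector;
        file_sector -= ext[i].length;
    }
    return -1;                   /* past the end of the file           */
}

/* FAT scheme: one table entry per cluster, so reaching cluster N of a
 * file means walking N links from the start of the chain.             */
static int64_t fat_lookup(const uint16_t *fat, uint16_t first_cluster,
                          uint32_t file_cluster)
{
    uint16_t c = first_cluster;
    while (file_cluster-- > 0) {
        if (c >= 0xFFF8)         /* FAT16 end-of-chain marker          */
            return -1;
        c = fat[c];
    }
    return c;
}

The catch, as I said above, is appending: if the sectors just past a file's
last extent are already taken, the new data has to go into a fresh extent
somewhere else, and the file fragments.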
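
On the indirect-block point, here is roughly how an ext2-ish block map buys
its random-access speed. Again a sketch: the field names are mine, the 12
direct slots and 1k block size are just the usual ext2 defaults, and
read_block() stands in for whatever the driver provides.

/* Minimal ext2-flavoured block map -- hypothetical field names, 1k
 * blocks, 32-bit block numbers, so one indirect block holds 256 slots. */
#include <stdint.h>

#define NDIRECT        12
#define PTRS_PER_BLOCK 256          /* 1024 / sizeof(uint32_t) */

struct inode_map {
    uint32_t direct[NDIRECT];       /* block nos. of the first 12 blocks */
    uint32_t single_indirect;       /* a block full of further block nos. */
};

/* Stand-in for the real driver read. */
extern void read_block(uint32_t blockno, uint32_t *buf);

/* Random access: any block within reach of the single indirect block
 * costs at most one extra read, however far into the file it is --
 * unlike walking a FAT chain from the start.                          */
static uint32_t bmap(const struct inode_map *ino, uint32_t file_block)
{
    uint32_t indirect[PTRS_PER_BLOCK];

    if (file_block < NDIRECT)
        return ino->direct[file_block];

    read_block(ino->single_indirect, indirect);   /* the extra read */
    return indirect[file_block - NDIRECT];
}

The flip side is that a straight sequential read still has to pull those
index blocks in, which is the sequential-access slowdown quoted above.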
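
And the slack-space arithmetic, since "a hundred meg or so" sounds like an
exaggeration until you work it out. On average each file wastes about half a
cluster; the file count below is a guess, so scale it to taste.

#include <stdio.h>

int main(void)
{
    const long files = 20000;    /* assumed number of files on the drive */
    const long cluster_bytes[] = { 1024, 4096, 16384, 32768 };
    const int n = sizeof cluster_bytes / sizeof cluster_bytes[0];

    /* Average waste is roughly half a cluster per file. */
    for (int i = 0; i < n; i++) {
        double slack_mb = files * (cluster_bytes[i] / 2.0)
                          / (1024.0 * 1024.0);
        printf("%5ld-byte clusters: ~%.0f meg of slack\n",
               cluster_bytes[i], slack_mb);
    }
    return 0;
}

A 1 gig FAT16 partition is stuck with 16k clusters (32k above a gig), which
is where the hundred-odd meg goes; ext2's default 1k blocks cut the same
waste to roughly 10 meg.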