Date: Wed, 23 Apr 1997 14:26:51 +0200 (MET DST)
From: Mark Habersack
Reply-To: grendel AT hoth DOT amu DOT edu DOT pl
To: "Alaric B. Williams"
cc: Matthias DOT Paul AT post DOT rwth-aachen DOT de, opendos-developer AT delorie DOT com
Subject: Re: Usage of directory entries
In-Reply-To: <861739004.1127939.0@abwillms.demon.co.uk>
Message-ID:
Organization: PPP (Pesticide Powered Pumpkins)
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Precedence: bulk

On Tue, 22 Apr 1997, Alaric B. Williams wrote:

> > > Fragmentation can be solved by moving fragmented files (found when
> > > the filer notes that accessing a certain file has entailed a lot of
> > > extent seeks) into contiguous areas from time to time, a sort of
> > > background defrag that works on individual files when it feels the
> > > need.
>
> > Slooow! It seems really slow!
>
> Mark thinks all my ideas will be really slow! Peasant! Grr!!! (Only
> joking - best of friends really :-)

;-))) I'm only trying to find all the weak spots (whether real or merely
imagined) beforehand. Being a sceptic is sometimes useful - especially
in computer stuff ;-)

> Seriously, stop and think about it. The defrag thread can run purely
> in idle time, ie it's at a REALLY LOW priority. OTOH, it sits there

OK. So it has a low priority and, as such, runs after every other
process. Now consider what happens when the defrag task is constantly
pushed to the bottom by other tasks that keep writing to files (mkisofs,
say): the defragmenter either never gets to run at all, or its work is
undone every time the file system is flooded with the huge amount of
data mkisofs outputs.

> invisibly sorting the disks out - so disk access is FAST and NICE!
> What's more, it might find disk errors and things while it's at it,
> and raise suitable alerts.

I agree here - the error detection would be nice. But, IMHO, it would be
hard to make the defrag work in concert with swapfile managers and
caches. I don't know... just my opinion...
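
P.S. For what it's worth, here is a tiny sketch (plain C, nothing
OpenDOS-specific; file_t, fs_busy() and defrag_one() are made-up
stand-ins, and the workload table just fakes foreground I/O) of the
scheduling policy Alaric describes and of the starvation case above:
the background pass only touches the disk on idle ticks, so a workload
that never goes idle simply keeps it from ever running.

#include <stdio.h>

#define NFILES 4

typedef struct {
    char name[16];
    int  extents;               /* >1 means the file is fragmented */
} file_t;

static file_t files[NFILES] = {
    { "IO.SYS",      1  },
    { "BIG.ISO",     37 },
    { "COMMAND.COM", 1  },
    { "SWAP.DAT",    9  },
};

/* 1 = foreground I/O happened during this tick.  A real filer would
 * look at its request queue instead of a canned table like this.    */
static int workload[] = { 1, 1, 1, 0, 0, 1, 0, 0, 0, 0 };

static int fs_busy(int tick)
{
    return workload[tick];
}

/* Pick the worst-fragmented file and pretend to rewrite it into one
 * contiguous run of clusters.                                        */
static void defrag_one(void)
{
    int i, worst = -1;

    for (i = 0; i < NFILES; i++)
        if (files[i].extents > 1 &&
            (worst < 0 || files[i].extents > files[worst].extents))
            worst = i;

    if (worst < 0) {
        puts("  nothing left to defragment");
        return;
    }
    printf("  coalescing %s (%d extents -> 1)\n",
           files[worst].name, files[worst].extents);
    files[worst].extents = 1;
}

int main(void)
{
    int tick;

    for (tick = 0; tick < 10; tick++) {
        printf("tick %d: ", tick);
        if (fs_busy(tick)) {
            /* Idle priority: yield at once, do no disk work. */
            puts("foreground I/O active, defrag yields");
        } else {
            puts("system idle, defrag runs");
            defrag_one();
        }
    }
    return 0;
}

A real implementation would presumably have to cap how long it keeps
deferring (or raise its priority after N postponed passes), which is
one possible answer to the mkisofs scenario - otherwise the policy
above never runs at all under a continuous writer.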