Date: Mon, 30 Dec 2002 20:52:37 +0300
From: "Eli Zaretskii"
Sender: halo1 AT zahav DOT net DOT il
To: djgpp-workers AT delorie DOT com
Message-Id: <7263-Mon30Dec2002205236+0200-eliz@is.elta.co.il>
X-Mailer: emacs 21.3.50 (via feedmail 8 I) and Blat ver 1.8.9
In-reply-to: (rich AT phekda DOT freeserve DOT co DOT uk)
Subject: Re: Problem with df reporting the wrong sizes [PATCH]
References:
Reply-To: djgpp-workers AT delorie DOT com
Errors-To: nobody AT delorie DOT com
X-Mailing-List: djgpp-workers AT delorie DOT com
X-Unsubscribes-To: listserv AT delorie DOT com
Precedence: bulk

> Date: Mon, 30 Dec 2002 13:05:32 +0000
> From: "Richard Dawe"
>
> There are a few solutions that I can see:
>
> (a) Set f_frsize to the same size as the cluster size. We pretend
>     that the fundamental block size is the cluster size.
>
>     This is the simplest solution. There's a patch below.
>
> (b) Modify the code to find the real sector size. Then scale
>     the free block counts by (cluster size / sector size),
>     to give the correct figures for free sectors.
>
>     This method could be troublesome, unless we assume that
>     cluster sizes are always equal to the sector size multiplied
>     by a power of 2.
>
>     *statvfs() use statfs() to get their information. Looking at
>     the statfs code, it doesn't look like all the methods for finding
>     disk space (CD-ROM, Windows '9x, Windows '9x other method) return
>     the sector size. If not all the methods return the sector size,
>     then this method can't really be used (we can only really support
>     the "lowest common denominator").
>
> (c) Assume a 512-byte sector size and then scale the free block
>     counts as in (b).

My vote is for (b), unless it's very expensive or very tricky to implement.
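
For illustration, here is a minimal sketch of how (b) could scale the cluster-based counts into sector-based ones, assuming the underlying statfs code can report both a cluster size and a sector size, and that the cluster size is the sector size times a power of 2. The function and parameter names below are invented for the example; they are not DJGPP's actual internals.

    #include <sys/statvfs.h>

    /* Hypothetical helper: fill a struct statvfs in units of sectors,
       given totals that the DOS/Windows calls report in clusters.  */
    static void
    fill_statvfs_in_sectors (struct statvfs *buf,
                             unsigned long cluster_size,
                             unsigned long sector_size,
                             unsigned long clusters_total,
                             unsigned long clusters_free)
    {
      /* Exact division, assuming cluster_size is sector_size times a
         power of 2.  */
      unsigned long sectors_per_cluster = cluster_size / sector_size;

      buf->f_bsize  = cluster_size;   /* preferred I/O block size */
      buf->f_frsize = sector_size;    /* fundamental block size */
      buf->f_blocks = clusters_total * sectors_per_cluster;
      buf->f_bfree  = clusters_free  * sectors_per_cluster;
      buf->f_bavail = buf->f_bfree;   /* DOS has no reserved blocks */
    }

The point of the scaling is that df multiplies f_bfree by f_frsize, so reporting sectors with f_frsize set to the sector size gives the same byte totals as reporting clusters with f_frsize set to the cluster size, while keeping f_frsize at the真 fundamental unit of the filesystem.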