Mail Archives: djgpp-workers/2002/12/30/19:39:56

From: <ams AT ludd DOT luth DOT se>
Message-Id: <200212310039.gBV0dlh27254@speedy.ludd.luth.se>
Subject: Re: Problem with df reporting the wrong sizes [PATCH]
In-Reply-To: <E18Szay-0000eR-00@phekda.freeserve.co.uk> "from Richard Dawe at
Dec 30, 2002 01:05:32 pm"
To: djgpp-workers AT delorie DOT com
Date: Tue, 31 Dec 2002 01:39:46 +0100 (CET)
X-Mailer: ELM [version 2.4ME+ PL78 (25)]
MIME-Version: 1.0
X-MailScanner: Found to be clean
X-MailScanner-SpamScore: s
Reply-To: djgpp-workers AT delorie DOT com
Errors-To: nobody AT delorie DOT com
X-Mailing-List: djgpp-workers AT delorie DOT com
X-Unsubscribes-To: listserv AT delorie DOT com

According to Richard Dawe:
> I've just taken a look at why df (from fileutils 4.1) reports
> the wrong sizes. This problem is with df built against DJGPP CVS.

Thanks!

> The problem arises because we return inconsistent information
> in 'struct statvfs' from statvfs() and fstatvfs(). The f_blocks member
> is supposed to contain the number of free blocks of size f_frsize.
> f_frsize is the fundamental block size in bytes. In our implementation
> f_blocks contains the number of free clusters. f_frsize is a fixed value -
> 512 bytes - which represents most common sector sizes.
> 
> Sources: New POSIX standard, draft 7; Single Unix Specification, version 2
> 
> The amount of free disk space is f_frsize * f_blocks. There's a large
> discrepancy between this and the actual free space on my FAT32 disk,
> which has 16K clusters.
> 
> There are a few solutions that I can see:
> 
> (a) Set f_frsize to the same size as the cluster size. We pretend
>     that the fundamental block size is the cluster size.
> 
>     This is the simplest solution. There's a patch below.
> 
> (b) Modify the code to find the real sector size. Then scale
>     the free block size numbers by (cluster size / sector size),
>     to give the correct figures for free sectors.
> 
>     This method could be troublesome, unless we assume that
>     cluster sizes are always equal to the sector size multiplied
>     by a power of 2.
> 
>     statvfs() and fstatvfs() use statfs() to get their information.
>     Looking at the statfs code, it doesn't look like all the methods
>     for finding disk space (CD-ROM, Windows '9x, Windows '9x other
>     method) return the sector size. If not all the methods return
>     the sector size, then this method can't really be used (we can
>     only really support the lowest common denominator).
> 
> (c) Assume a 512 byte sector size and then scale the free block size
>     numbers as in (b).

1. Which of these ways does statfs() do it? (We have a statvfs()
too!?) Or none of the above?

2. Why not call or use the statfs() code which has been working well a
long time?

(3. Personally I like the way statfs() returns cluster size. Why
wouldn't cluster size be the "fundamental block size"? It surely is in
the FAT file system, isn't it? Or are you saying that f_frsize is
supposed to be fixed across all file systems on the OS? That sounds
silly.)


Right,

						MartinS
