Mail Archives: djgpp-workers/2012/03/11/12:57:45
> From: Juan Manuel Guerrero <juan DOT guerrero AT gmx DOT de>
> Date: Sun, 11 Mar 2012 16:50:53 +0100
>
>   if (!cache_blksize[d])
>   {
>     if (_is_remote_drive(d))  /* A = 0, B = 1, C = 2, etc. */
>     {
>       /* Default remote drives to 4K block size, to improve performance.
>        *
>        * Also the size returned by statfs() may not be correct.  Testing
>        * against files shared by Samba 2.0.10 on Linux kernel 2.2.19
>        * returned a 32K block size, even though the ext2 filesystem
>        * holding the share had a 4K block size.  */
>       cache_blksize[d] = 4096;
>     }
>     else
>     {
>       /* No entry => retrieve cluster size.  */
>       if (statfs(path, &sbuf) != 0)
>       {
>         /* Failed, pass error through.  */
>         return -1;
>       }
>
>       cache_blksize[d] = sbuf.f_bsize;
>     }
>   }
> -- code end --
>
>
> If _is_remote_drive returns a value different from 0, then cache_blksize[d]
> is set to 4096.  The issue is that _is_remote_drive may return 1 if d is a
> remote drive, but it may also return -1 if the function fails for drive
> number d.  In both cases cache_blksize[d] = 4096.  The question is whether
> this is a bug or a feature: drive "d" does not exist, yet a valid block
> size is assigned to cache_blksize[d], and at the same time the errno
> (= ENODEV) set by _is_remote_drive may get lost in subsequent operations.
There's not enough context in what you show to make up my mind whether
this is a bug or a feature. Specifically, it's not clear why it would
be better to do something different when _is_remote_drive fails.
Perhaps you could show a couple of use cases where this does some
harm.
Thanks.