Mail Archives: djgpp-workers/1997/10/22/05:56:08
I'm trying to modify the libc sources so that they compile cleanly
with the sign-compare warning option turned on.
Such a warning is generated when an unsigned variable is compared to a
signed value, because such comparisons are error-prone. The compiler
implementors had three options:
Suppose you have:
unsigned u; int i;
The expression
i < u
can be evaluated as:
1. (unsigned)i < u
2. i < (int)u
3. i < 0 || (unsigned)i < u
The GNU C implementors chose option 1 (in fact, the C standard's usual
arithmetic conversions require it when int and unsigned int have the
same width). The most portable choice would be option 3, I think, but
that's not the cheapest, and most of the time not what the user
actually wants to happen. Because of this choice, a lot of code is
buggy. Why this does not manifest itself more often as bugs is a
mystery to me.
For those who don't think it is necessary or useful to change the
library sources, I have a little quiz. Try to guess the output of the
following trivial program.
-----------------------------
#include <stdio.h>

int main (void)
{
  unsigned p5 = 5;
  int m6 = -6;

  printf ("%d", -6 < 5);   /* both operands signed */
  printf ("%d", m6 < p5);  /* m6 is converted to unsigned */
  return 0;
}
----------------------------
If you guessed that both printf calls output the same thing, you
guessed wrong: the first prints 1, but in the second, m6 is converted
to a huge unsigned value, so it prints 0.
Now, in this artificial example it is fairly clear, but in the
following code, taken from the libc sources, it may be harder to
detect:
FILE *f;
if (f->cnt < go32_info_struc.size_of_transfer_buffer)
Note that this example has another problem: the cnt field is
intentionally set to a negative value by some routines.
Now, my point is that everyone should use unsigned instead of int (or,
equivalently, unsigned short, long, long long) whenever an unsigned
quantity is denoted. Use signed types only when the value can actually
be negative (e.g. the result of subtracting arbitrary pointers).
I'm now changing every occurrence in the sources. I hope you will all
accept my changes.
Cheers.
--
+----------------+
| Vik Heyndrickx |
+----------------+
However hard you push, you just can't squeeze 33 bits into an int...