Mail Archives: djgpp-workers/1998/08/18/07:18:27
Can anybody please explain why GCC generates such strange code, as
described below? It is not an idle question: a couple of functions in the
new libm (from v2.02) fail because of this.
Here's the deal. The program below attempts to generate a float NaN by
using the old trick of a union with float and unsigned int members.
0x7fc00000 is the bit pattern for a float NaN. The problem is that, as
the comment says, the program prints 0xffc00000 instead (which is a
negative NaN).
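(For reference, here is how the two values break down as IEEE 754
single-precision bit patterns; the sign bit is the only difference:

  0x7fc00000 = 0 11111111 10000000000000000000000   quiet NaN
  0xffc00000 = 1 11111111 10000000000000000000000   same NaN, sign set

An all-ones exponent with a non-zero fraction means NaN.)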
I can solve the problem by using memcpy instead of the last assignment in
the SET_FLOAT_WORD macro (sketched below), but I'd like to understand why
GCC generates such code, and why optimizations change that.
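In case it isn't clear, this is roughly what I mean by the memcpy
workaround (an untested sketch; it also needs <string.h>):

  /* Set a float from a 32 bit int, copying the bytes instead of
     assigning through the union.  */
  #define SET_FLOAT_WORD(d,i)                       \
    do {                                            \
      float_shape_type sf_u;                        \
      sf_u.word = (i);                              \
      memcpy (&(d), &sf_u.value, sizeof (float));   \
    } while (0)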
Thanks in advance for any help.
/* This program prints the bit pattern of the NaN (Not-a-Number)
   which is 0x7fc00000.  Compiled without optimizations it indeed
   does so, but even -O1 causes it to print 0xffc00000 instead.  */

#include <stdio.h>
#include <math.h>
#include <float.h>

typedef union
{
  float value;
  unsigned word;
} float_shape_type;

/* Get a 32 bit int from a float.  */
#define GET_FLOAT_WORD(i,d) \
  do {                      \
    float_shape_type gf_u;  \
    gf_u.value = (d);       \
    (i) = gf_u.word;        \
  } while (0)

/* Set a float from a 32 bit int.  */
#define SET_FLOAT_WORD(d,i) \
  do {                      \
    float_shape_type sf_u;  \
    sf_u.word = (i);        \
    (d) = sf_u.value;       \
  } while (0)

int main (void)
{
  unsigned iv;
  float fv;

  _control87 (0x033f, 0xffff); /* mask all numeric exceptions */
  SET_FLOAT_WORD (fv, 0x7fc00000U);
  GET_FLOAT_WORD (iv, fv);
  printf ("SET_FLOAT_WORD: %f, GET_FLOAT_WORD: %x\n", fv, iv);
  return 0;
}
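To reproduce, compile the program twice (I'm assuming it is saved as
nan-test.c) and run the result each time:

  gcc -o nan-test nan-test.c       <- prints ...7fc00000, as expected
  gcc -O1 -o nan-test nan-test.c   <- prints ...ffc00000 instead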