Mail Archives: pgcc/1998/03/13/11:38:46
On Thu, 12 Mar 1998, Wolfgang Formann wrote:
>So when you get out of registers, then you *HAVE* to store one of the
>intermediates to memory. And from that moment you mix 80-bit and 64-bit
>and get neither IEEE-compliant nor extended results!
If you want completely 80-bit floating point, you can have it. The x86
FPU can store/load 10-byte floating-point values to/from memory. It will
be slower, though. Blame C: it has no standard 80-bit type.
However, when only the final values are stored to memory as 64 bits, the
result will be better than if every temporary result had been rounded to
64 bits.
>Well, I think, there is an additional bit used inside the FPU as a helper
>for rounding problems, if that is true the real internal format is 81 bits
To be precise, there are several of them: rounding control and precision
control, which together span 4 bits (AFAIR = as far as I recall).
I don't think the internal format is 81 bits. Why would it be? The result
will usually be rounded to 64 bits later anyway. The extra precision comes
from the extended format's 64-bit significand versus the 53 bits of a
double; the rest of the 80-bit width goes to the wider exponent.
>wide. When there is a lot of task-switching to other processes which
>too use the FPU, then you will lose this 81th bit by saving and restoring
fsave/fxxx... I don't remember these instruction names exactly, but they
save/restore the whole FPU state (well, they should. If you have _proof_
of something else, please tell us.)
>Maybe that is why they added a new opcode in some Pentium chips?
What opcode? I haven't heard. Do you have any reference? www.x86.org..?
>Yeah, this one does exist, but it affects only multiplication, division,
>addition, subtraction and the square root. So it does not affect any of
>the transcendental functions.
AFAIR you're right. Does IEEE say anything different about the precision
of transcendental functions, or not?
>>Generally code generated by the C compiler should use the higher precision.
>>But there really should be a switch to use IEEE-style lesser precision
>>floats (maybe (p)gcc has it, I don't know).
>
>Which means, that all your external libraries have to be recompiled
>with this 80-bit precision and a lot of prebuild application will not
>run.
Bah ;). The precision mode you use doesn't affect the generated code.
It's like LaTeX: you can set things up in the preamble and the rest of
the document doesn't have to change.
>>Even better would be #pragma or something which would allow one to use
>>IEEE-floats in some part of code and extra precision in another part.
>
>Nice idea, but works only with
>*) good will without prototypes (K&R-C)
>*) prototypes in ANSI-C
>*) new name-mangling in C++
>when using double as arguments.
I don't understand you. If the user requests low precision, say `#pragma
low-prec', the compiler generates some opcodes like

    fstenv [tmp]      ; dump the FPU environment (control word first)
    mov eax,[tmp]     ; fetch the control word
    and eax,xxxxx     ; clear the precision-control bits
    or eax,xxxx       ; set the requested precision
    mov [tmp],eax
    fldenv [tmp]      ; load the modified environment back

(Intel asm.) And the same goes for high precision. Nothing else has to
be done.
>Will this give us the same flood of warnings as <typeof char>, <typeof
>unsigned char> and <typeof signed char> which are all treated as different
>types?
No. This is not a matter of variable types but of the precision of the
optimized FPU code.
--
| Tuukka Toivonen <tuukkat AT ee DOT oulu DOT fi> [PGP public key
| Homepage: http://www.ee.oulu.fi/~tuukkat/ available]
| Try also finger -l tuukkat AT ee DOT oulu DOT fi
| Studying information engineering at the University of Oulu
+-----------------------------------------------------------