Mail Archives: djgpp/1998/06/20/12:31:02
In article <Pine DOT SUN DOT 3 DOT 96 DOT 980619100615 DOT 14404A-100000 AT xs2 DOT xs4all DOT nl>, Rob
Kramer <rkramer AT xs4all DOT nl> writes
>Can anyone make a guess whether multiplications/divisions in fixed point
>math are still faster on a machine that has an FPU? I was wondering if it
>would do any good to #define my code to use conventional floats if the
>machine supports it. (I'm using Allegro's fixed math stuff, b.t.w.)
For Pentium and better processors:
In principle, multiply and divide are significantly faster with floats,
while add/sub are a lot slower. Overall, hand-optimised assembler can run
maths algorithms noticeably faster.
Back in the real world, you will probably see no overall difference in
good (but untuned) C code, and only a little improvement in tuned C code.
Once you stray away from simple arithmetic (e.g. into conditionals),
floats become a serious liability.
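To make the cost concrete, here is a rough sketch of what a 16.16
fixed-point multiply and divide look like in C. This is illustrative
only, not Allegro's actual implementation; the fixmul/fixdiv names and
the long long intermediate are my own choices. The widen-and-shift work
is exactly what a single fmul/fdiv on the FPU avoids, while fixed-point
add/sub and compares stay plain integer instructions:

   typedef long fixed;   /* 16.16 fixed point */

   /* multiply: widen to 64 bits, then shift the product back down */
   static fixed fixmul(fixed a, fixed b)
   {
      return (fixed)(((long long)a * b) >> 16);
   }

   /* divide: shift the dividend up by 16 first to keep the precision */
   static fixed fixdiv(fixed a, fixed b)
   {
      return (fixed)(((long long)a << 16) / b);
   }

   /* add, subtract and compare are ordinary integer ops, hence cheap */
   static fixed fixadd(fixed a, fixed b) { return a + b; }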
Most x86 compilers (djgpp included) don't do a good enough job with fpu
code for you to simply switch variable types and get an improvement.
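If you want to try the #define experiment anyway, a thin wrapper along
these lines lets you flip representations with one compile-time switch
and measure it yourself. The REAL/MUL/DIV/FROM_INT/USE_FLOATS names are
made up for illustration; fixmul/fixdiv/itofix are, as I recall,
Allegro's fixed maths routines:

   #ifdef USE_FLOATS
      typedef float REAL;
      #define MUL(a, b)    ((a) * (b))
      #define DIV(a, b)    ((a) / (b))
      #define FROM_INT(x)  ((REAL)(x))
   #else
      typedef fixed REAL;              /* Allegro's 16.16 type */
      #define MUL(a, b)    fixmul(a, b)
      #define DIV(a, b)    fixdiv(a, b)
      #define FROM_INT(x)  itofix(x)
   #endif

Build one version with -DUSE_FLOATS and one without, then time your real
code; that will tell you more than any general rule.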
---
Paul Shirley: my email address is 'obvious'ly anti-spammed