Mail Archives: djgpp/1998/07/16/00:30:48
In article <Pine DOT SUN DOT 3 DOT 91 DOT 980714102032 DOT 6831H-100000 AT is>, Eli Zaretskii
<eliz AT is DOT elta DOT co DOT il> writes
>Also, I think you are mistaken in your perception of ugliness: by far
>the most inefficient part of the above computation is the call to
>`fabs' library function, all the rest is usually very fast on modern
>machines.
Does djgpp actually generate a function call? mingw32 inlines it.
The really expensive part is the float compare, which probably takes
longer than all the other arithmetic. I rely on fabs being a fast FPU op
(1 pipeline clock on Pentium) to avoid compares in my 3D code.
In general, on P5-class machines you lose very little speed by
conditioning the operands of a float compare correctly, and you gain a
much more robust program.
---
Paul Shirley: my email address is 'obvious'ly anti-spammed