Mail Archives: djgpp/1999/03/20/19:23:22
horst DOT kraemer AT snafu DOT de (Horst Kraemer) wrote:
>On Thu, 18 Mar 1999 15:00:59 -0800, Kagenin <kagenin AT devnull DOT com>
>wrote:
>
>> John Carbrey wrote:
>> >
>> > A friend of mine has informed me that floating point math is faster
>> > than fixed point math on Pentiums.
>> >
>> > He told me that I should use floats, not fixed data types.
>
>> No. On almost all chips, integer math is faster than floating point.
>> Plus, you can't reliably use comparison operators on floats and
>> doubles, and you increase the risk of float underflow errors.
>
>
>You may not be aware of the fact that a floating point multiplication
>is faster than a 32-bit integer division already on a 486 (16 vs. 40
>cycles), and usually faster than a 32-bit integer multiplication.
>
>
>Moreover, in fixed point math you have the same rounding errors as in
>floating point math.
>
>fixed point is "out" on Pentiums.
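
To make the rounding point concrete, here's a minimal sketch (my own,
not from the thread), assuming a 16.16 fixed point format; fx_mul and
FX_ONE are names invented for illustration. A fixed point multiply has
to shift its wide intermediate back down, throwing away low bits, so
fixed point rounds too:

/* 16.16 fixed point vs. float: illustrative sketch only;
 * fx_mul and FX_ONE are made-up names. */
#include <stdio.h>

typedef long fixed;                /* 16.16 fixed point in a 32-bit long */
#define FX_ONE (1L << 16)

/* Multiply two 16.16 numbers; the 32.32 intermediate is shifted back
 * down, discarding the low 16 bits (a rounding step). */
static fixed fx_mul(fixed a, fixed b)
{
    return (fixed)(((long long)a * b) >> 16);
}

int main(void)
{
    fixed a = FX_ONE / 3;          /* ~0.333 in 16.16, already truncated */
    fixed b = 3 * FX_ONE;          /* 3.0 */
    float fa = 1.0f / 3.0f, fb = 3.0f;

    printf("fixed: %f\n", (double)fx_mul(a, b) / FX_ONE);
    printf("float: %f\n", (double)(fa * fb));
    return 0;
}

This should print 0.999985 for the fixed point result and 1.000000 for
the float one; both systems round, just not in the same places.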
My two pence: Use the right tool for the right job. Fixed point
numbers have the same absolute precision no matter how large the
number is, while the absolute precision of a floating point number
scales with its magnitude (its *relative* precision is roughly
constant). Because of this, you do *not* get the same rounding
behaviour from the two systems. With floating point numbers, for
instance, adding a small non-zero number to a much larger one doesn't
necessarily change the result, and if you're comparing for equality
you have to be careful about the epsilon. Fixed and floating point
systems are really good for different things; use whichever best
suits the task you're doing.
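
Those two effects are easy to demonstrate. Here's another minimal
sketch (mine, with an arbitrary EPSILON; the volatile is only there to
force genuine 64-bit stores so the demo also behaves this way with the
x87's 80-bit registers):

#include <stdio.h>
#include <math.h>

#define EPSILON 1e-9               /* arbitrary illustrative tolerance */

int main(void)
{
    /* Absorption: at 1e16 the spacing between doubles is 2.0, so
     * adding 1.0 doesn't change the stored value. */
    volatile double big = 1e16;
    volatile double sum = big + 1.0;
    printf("big + 1.0 == big ?  %s\n", sum == big ? "yes" : "no");

    /* Equality needs an epsilon: 0.1 + 0.2 is not exactly 0.3. */
    double a = 0.1 + 0.2;          /* 0.30000000000000004... */
    double b = 0.3;                /* 0.29999999999999999... */
    printf("a == b ?            %s\n", a == b ? "yes" : "no");
    printf("|a - b| < EPSILON ? %s\n", fabs(a - b) < EPSILON ? "yes" : "no");
    return 0;
}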
I don't generally think it's a good idea to use the wrong tool
for a job just because you think it'll be faster than the
right tool. You don't use floating point numbers when all you
need is integers, do you?
--
George