Mail Archives: djgpp/1997/02/05/23:01:46
On Mon, 3 Feb 1997 19:57:14 +0000, Paul Shirley
<Paul AT foobar DOT co DOT uk DOT chocolat> wrote:
>In article <32f3a643 DOT 25054189 AT news DOT ox DOT ac DOT uk>, George Foot
><mert0407 AT sable DOT ox DOT ac DOT uk> writes
>>Doubles are more accurate and apparently faster than floats.
> ^^^^^^
>Can we *please* kill this myth.
>On Pentium there is NO speed difference between using a float or double.
>On 387,486/487 float is slightly *faster* to load, store or read from
>ram as an operand, than a double.
Sorry, I wasn't stating this from experience (hence "apparently") - it
also wasn't an issue which particularly interested me. There was a
thread earlier on this topic, which I only glanced at. If your claim
that they're about as fast as each other on the Pentium is true, I
would still advocate using doubles, due to the precision they give. I
find that memory shortage is rarely an issue using DJGPP.
For the record, I threw together a simple test program, which used
uclock() to time long loops, compiled it with no optimisations, and
gave the following results:
1) 5044716 4715005
2) 4813001 5153259
These are timings for the loop, taken as the difference between
uclock() values before and after the loop, the subtraction being done
at the end of the test (not that that matters much). I believe it is a
fair test. In (1) the first number represents the time taken for the
float calculation, and the second the same calculation in doubles. In
(2) I reversed the declarations.
However, the values fluctuated a lot. In general the double seems to
have been slightly faster, but if you look at the figures, the
difference is really quite irrelevant.
The test code was 100000 times through:

    a = 0.358678735;
    a = sin(a);
    a = cos(a);
    a = a*a;

which (I think) is typical of what most people seem to want floating
point numbers to do.
George Foot