Mail Archives: djgpp/1998/06/19/12:30:34

Newsgroups: comp.os.msdos.djgpp
From: Elliott Oti <oti AT phys DOT uu DOT nl>
Subject: Re: Fixed vs floating point?
Sender: usenet AT fys DOT ruu DOT nl (News system Tijgertje)
Message-ID: <Pine.OSF.3.95.980619153326.5179E-100000@ruunf0.phys.uu.nl>
In-Reply-To: <Pine.SUN.3.96.980619100615.14404A-100000@xs2.xs4all.nl>
Date: Fri, 19 Jun 1998 13:55:11 GMT
References: <Pine DOT SUN DOT 3 DOT 96 DOT 980619100615 DOT 14404A-100000 AT xs2 DOT xs4all DOT nl>
Mime-Version: 1.0
Organization: Physics and Astronomy, University of Utrecht, The Netherlands
Lines: 38
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp

On Fri, 19 Jun 1998, Rob Kramer wrote:

> Hi all!
> 
> Can anyone guess whether multiplications/divisions in fixed point math
> are still faster on a machine that has an FPU? I was wondering if it would
> do any good to #define my code to use conventional floats if the machine
> supports it. (I'm using Allegro's fixed math stuff, BTW.)

Floating point divides and multiplies are somewhat faster (20%-100%) than
fixed point on the
  486 DX
  (AMD) 586 & K6
  Pentium II

and are much faster (100%-300%) on the
  Pentium
  PPro

and are slower on the
  486 SX -- no coprocessor
  386/387 -- but then again, practically no 386 machine in existence
has a 387 coprocessor installed anyway.
  
Unless you are specifically targeting 386s or 486 SXs, it makes sense to
use floating point. It gives you more accuracy (except where you *need* to
do integer math) and more speed. In very small, tight, time-critical
inner loops where you convert from floats to ints a lot you *might* want
to stick to fixed point *within* the loop, but in general it's not a
sin to use the FPU.
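
As for the #define idea: something like the following (an untested
sketch, assuming Allegro's 16.16 `fixed' type and its fixmul/itofix/fixtoi
helpers -- check allegro.h for the exact names in your version) would let
you flip between floats and Allegro's fixed point with one switch:

  /* Sketch of a compile-time switch between floating point and
   * Allegro-style fixed point.  USE_FPU is a hypothetical macro you
   * define yourself when building for machines with an FPU.
   */
  #include <allegro.h>

  #ifdef USE_FPU                      /* FPU present: plain floats */
  typedef float real_t;
  #define MUL(a, b)    ((a) * (b))
  #define FROM_INT(i)  ((float)(i))
  #define TO_INT(a)    ((int)(a))
  #else                               /* no FPU: Allegro fixed point */
  typedef fixed real_t;
  #define MUL(a, b)    fixmul((a), (b))
  #define FROM_INT(i)  itofix(i)
  #define TO_INT(a)    fixtoi(a)
  #endif

  /* Example: scale an array of coordinates; compiles either way. */
  void scale_points(real_t *x, int n, real_t factor)
  {
     int i;
     for (i = 0; i < n; i++)
        x[i] = MUL(x[i], factor);
  }

That way you can benchmark both builds on your target machines and keep
whichever one wins.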

Cheers,

  Elliott Oti
  kamer 104, tel (030-253) 2516 (RvG)    
  http://www.fys.ruu.nl/~oti



