
Date: Mon, 15 May 2000 09:22:54 -0500
From: Eric Rudd <rudd AT cyberoptics DOT com>
Subject: Re: Math functions
To: djgpp-workers AT delorie DOT com
Message-id: <3920083E.AC36E000@cyberoptics.com>
Organization: CyberOptics
MIME-version: 1.0
X-Mailer: Mozilla 4.72 [en] (Win95; U)
X-Accept-Language: en
References: <200005131046 DOT GAA30186 AT delorie DOT com>
Reply-To: djgpp-workers AT delorie DOT com

Dieter Buerssner wrote:

> Also, when anybody calls sinl(1e1000L), he is probably already in trouble. The
> closest representable values to 1e1000L will be many periods of sin apart. I
> really cannot imagine any application where this could be useful.

Nor can I.  The decisive argument for me was my inability to conceive of any test of
"double" trig functions that used only "double" arithmetic, yet managed to reveal
such range-reduction errors.  I feel that it's important to preserve the trig
identities for all arguments (e.g. sin(x)^2 + cos(x)^2 = 1), but that's pretty easy
to do.  For all the trig functions, if the argument is too large, I set it to 0,
which at least preserves the identities.
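
In code, the guard amounts to a couple of lines.  Here is a minimal sketch; the
cutoff and the wrapper name are made up for illustration and are not what the
actual routine uses:

    #include <math.h>

    /* Illustrative only: past the point where range reduction is
       meaningless (near 1e18, neighboring doubles are already many
       periods of sin apart), substitute 0 for the argument, so that
       sin(x)^2 + cos(x)^2 == 1 still holds. */
    #define TRIG_ARG_MAX 1.0e18      /* made-up cutoff */

    double guarded_sin(double x)
    {
        if (fabs(x) > TRIG_ARG_MAX)
            x = 0.0;                 /* give up; keep the identities */
        return sin(x);               /* stand-in for the real kernel */
    }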

One function that should receive especially close scrutiny is acos().  One can
compute it accurately as atan2(sqrt((1.-x)*(1.+x)),x), but the formula
atan2(sqrt(1.-x*x),x) has significant errors for arguments very near +/-1.  If you
use arguments uniformly distributed in [-1,+1], your tests may not catch this error.
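
A quick test makes the difference visible; the value 1.0 - 1e-12 below is just
one convenient argument close to +1:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0 - 1e-12;   /* very close to +1 */

        /* Accurate form: 1.-x is exact for x this near 1, so no
           catastrophic cancellation occurs. */
        double good = atan2(sqrt((1. - x) * (1. + x)), x);

        /* Naive form: x*x rounds first, and 1.-x*x then magnifies
           that rounding error relative to the tiny difference. */
        double bad = atan2(sqrt(1. - x * x), x);

        printf("good = %.17g\nbad  = %.17g\nacos = %.17g\n",
               good, bad, acos(x));
        return 0;
    }

The naive form typically comes out wrong in about the fifth significant digit
here, while the accurate form agrees with acos(x).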

> > powl() is definitely one of the most difficult functions to compute.  If one
> > carries internal results to 80 bits, normal arguments can be handled without
> > undue difficulty,
>
> You mean 80 bits in the mantissa, yes? Do you have code for 80 bit log and exp?

No, I don't.  That's why I consider it difficult.  I didn't need to do anything
heroic in my "double" implementation, because the coprocessor carries intermediate
results to 64 bits, which is enough for the internal computations.  The range
reductions can be simplified by computing powl(x,y) as exp2(y*log2(x)) instead of
exp(y*log(x)), but one still has to carry a lot of bits in the intermediate
results.  Cody and Waite talk about this in their book, but I paid little attention
to their intricate techniques because they were unnecessary for the "double"
routine.  As I recall, one of the tricks is to carry the integer and fractional
parts of the log separately.  The real fun comes when you attempt to produce the
product y*log2(x) ;-)  Consider, for instance, powl(1.L+1E-10L,1.E14L).  The result
is so big that its log needs to be computed to about 80 bits, yet you also need to
compute logl(1.L+1E-10L) to 80 bits of relative precision.
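
To put numbers on that example: log2(1.L+1E-10L) is about 1.4427e-10, so
y*log2(x) is about 14427, and a relative error of 2^-64 in the final result
corresponds to an absolute error of about 2^-64/ln(2) in that product, i.e. the
product needs roughly 64 + log2(14427), about 78, significant bits.  Here is
the naive decomposition in C99, just to show where the bits go missing; this is
a sketch, not a real powl():

    #include <math.h>

    /* Naive sketch, assuming x > 0.  Split t = y*log2(x) into an
       integer part n and a fraction f, so 2^t = 2^f * 2^n and the
       final scaling by 2^n is exact.  The damage is done in the very
       first line: t is rounded to 64 bits, and for t near 14427 about
       14 of those bits are consumed by the integer part, leaving the
       fraction, and hence the result, good to only about 50 bits. */
    long double naive_powl(long double x, long double y)
    {
        long double t = y * log2l(x);     /* rounded to 64 bits */
        long double n = rintl(t);         /* integer part of exponent */
        long double f = t - n;            /* fraction, |f| <= 0.5; exact */
        return ldexpl(exp2l(f), (int)n);  /* exact scaling by 2^n */
    }

Recovering those lost bits is precisely where tricks like Cody and Waite's
split representation of the log come in.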

I am busy these days on other things, but if you get stuck and need some ideas, let
me know.

-Eric Rudd
rudd AT cyberoptics DOT com
