
Message-Id: <200005151608.MAA16522@delorie.com>
From: "Dieter Buerssner" <buers AT gmx DOT de>
To: djgpp-workers AT delorie DOT com
Date: Mon, 15 May 2000 19:17:42 +0200
MIME-Version: 1.0
Subject: Re: Math functions
In-reply-to: <3920083E.AC36E000@cyberoptics.com>
X-mailer: Pegasus Mail for Win32 (v3.12b)
Reply-To: djgpp-workers AT delorie DOT com
Errors-To: nobody AT delorie DOT com
X-Mailing-List: djgpp-workers AT delorie DOT com
X-Unsubscribes-To: listserv AT delorie DOT com

On 15 May 00, at 9:22, Eric Rudd wrote:

> Dieter Buerssner wrote:
> 
> > Also, when anybody calls sinl(1e1000L), he is probably already in trouble.
> > The closest representable values to 1e1000L will be many periods of sin
> > apart. I really cannot imagine any application where this could be useful.
> 
> Nor can I.  The decisive argument for me was my inability to conceive of any test of
> "double" trig functions that used only "double" arithmetic, yet managed to reveal
> such range-reduction errors. 

We obviously agree. Anyway, a test might check sin(2a) = 
2*sin(a)*cos(a), or similar formulas.
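
For example, such a consistency test could look like this (just a
hypothetical little harness, nothing from the actual library):

#include <math.h>
#include <stdio.h>

/* Check the double-angle identity sin(2a) = 2*sin(a)*cos(a) over
   moderate arguments.  This can expose approximation bugs, but, as
   said above, it cannot reveal range-reduction errors for huge
   arguments using only double arithmetic. */
int main(void)
{
  double a, err, maxerr = 0.0;

  for (a = 0.01; a < 100.0; a += 0.1)
    {
      err = fabs(sin(2.0*a) - 2.0*sin(a)*cos(a));
      if (err > maxerr)
        maxerr = err;
    }
  printf("max abs error: %g\n", maxerr);
  return 0;
}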

> I feel that it's important to preserve the trig
> identities for all arguments (e.g. sin(x)^2 + cos(x)^2 = 1), but that's pretty easy
> to do.  For all the trig functions, if the argument is too large, I set it to 0,
> which at least preserves the identities.

Until now, I returned NaN for arguments > 2^64*pi/2. I think I will 
change this to follow your example. Also, returning NaN is not allowed 
by the C Standard for arguments other than +/- Infinity.

(I used all this stuff privately, and I thought I wouldn't deserve 
anything better than a NaN if I ever called the trig functions with 
such arguments ...)
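
The convention would then be something like this (only a sketch with
an invented wrapper name, using the 2^64*pi/2 cutoff from above):

#include <math.h>

/* For |x| too large for accurate range reduction, use 0 instead of
   returning NaN.  This keeps identities like sin(x)^2 + cos(x)^2 = 1
   intact, and avoids the NaN the C Standard does not allow for
   finite arguments. */
double my_sin(double x)
{
  const double limit = ldexp(M_PI_2, 64);  /* 2^64 * pi/2 */

  if (fabs(x) > limit)
    x = 0.0;
  return sin(x);
}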
 
> One function that should receive especially close scrutiny is acos().  One can
> compute it accurately as atan2(sqrt((1.-x)*(1.+x)),x), but the formula
> atan2(sqrt(1.-x*x),x) has significant errors for arguments very near +/-1.

I use the former formula. I think this was already used by very 
early versions of the math library for DJGPP.
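
In code, the accurate variant is simply (a sketch of the formula,
not the library source):

#include <math.h>

/* acos via atan2.  Computing (1-x)*(1+x) instead of 1-x*x avoids
   the cancellation that makes the latter inaccurate for x very
   near +/-1. */
double my_acos(double x)
{
  return atan2(sqrt((1.0 - x) * (1.0 + x)), x);
}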

> > You mean 80 bits in the mantissa, yes? Do you have code for 80 bit log and exp?
> 
> No, I don't.  That's why I consider it difficult.  I didn't need to do anything
> heroic in my "double" implementation, because the coprocessor carries intermediate
> results to 64 bits, which is enough for the internal computations.  The range
> reductions can be simplified by computing powl(x,y) as exp2(y*log2(x)) instead of
> exp(y*log(x)), but one still has to carry a lot of bits in the intermediate
> results. 

I am using the former formula.
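
That is, roughly the following (ignoring all the special cases a
real powl must handle; the accuracy hinges on computing the product
y*log2(x) with extra precision, as discussed below):

#include <math.h>

/* Core of powl for x > 0: working in base 2 saves one range
   reduction compared to exp(y*log(x)). */
long double my_powl(long double x, long double y)
{
  return exp2l(y * log2l(x));
}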

> Cody and Waite talk about this in their book, but I paid little attention
> to their intricate techniques because they were unnecessary for the "double"
> routine.  As I recall, one of the tricks is to carry the integer and fractional
> parts of the log separately.  The real fun comes when you attempt to produce the
> product y*log2(x) ;-) 

I think I am using the methods of Cody and Waite, but I don't have 
the reference anymore. I have rewritten Moshier's code from scratch 
and supplemented it with a table-generating program for exact powers 
2^(i/n), which are given as the sum of two long doubles. With this 
table, one "wins" a few bits of precision for log2 and for exp2. 
So, given enough table space, it can be quite good.
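
To illustrate the representation (a small sketch only; here long
double supplies the corrections for double pairs, while the real
table generator needs more precision than long double to make the
pairs exact):

#include <math.h>
#include <stdio.h>

#define N 32

/* Represent 2^(i/N) as a rounded high part plus a small low
   correction, the "sum of two" form mentioned above. */
int main(void)
{
  int i;

  for (i = 0; i < N; i++)
    {
      long double exact = exp2l((long double)i / N);
      double hi = (double)exact;
      double lo = (double)(exact - hi);
      printf("2^(%2d/%d) ~ %.17g + %.9g\n", i, N, hi, lo);
    }
  return 0;
}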

The product I find rather easy. If you have the log as lx + 
correction, split lx into lxh + lxl, where lxh uses only the 32 high 
bits. Add the correction to lxl. Also split y into yh + yl. yh*lxh 
will yield an exact result. Add the other terms together and you're 
almost done. With this method, you have 64+32 bits of precision 
(minus rounding error). 
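
In code, the splitting could look like this (invented names, a
sketch only; head and tail are kept separate for the following
exp2):

/* Veltkamp/Dekker-style split of a long double (64-bit mantissa)
   into a high part with at most 32 significant bits and a low
   remainder. */
static void split32(long double a, long double *hi, long double *lo)
{
  const long double c = 4294967297.0L;  /* 2^32 + 1 */
  long double t = c * a;

  *hi = t - (t - a);
  *lo = a - *hi;
}

/* y * (lx + corr): yh*lxh is exact, since 32 bits times 32 bits
   fits in the 64-bit mantissa; the other partial products go into
   the tail, for about 64+32 bits minus rounding error. */
void prod_extra(long double y, long double lx, long double corr,
                long double *head, long double *tail)
{
  long double yh, yl, lxh, lxl;

  split32(y, &yh, &yl);
  split32(lx, &lxh, &lxl);
  lxl += corr;              /* fold the correction into the low part */
  *head = yh * lxh;
  *tail = yh*lxl + yl*lxh + yl*lxl;
}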

> I am busy these days on other things, but if you get stuck and need some ideas, let
> me know.

Thanks.
 
Regards,
Dieter Buerssner

