Message-Id: <200005121903.PAA29374@delorie.com>
From: "Dieter Buerssner"
To: djgpp-workers AT delorie DOT com
Date: Fri, 12 May 2000 22:11:49 +0200
MIME-Version: 1.0
Content-type: text/plain; charset=US-ASCII
Content-transfer-encoding: 7BIT
Subject: Re: Math functions
In-reply-to: <391C3788.70372F70@cyberoptics.com>
X-mailer: Pegasus Mail for Win32 (v3.12b)
Reply-To: djgpp-workers AT delorie DOT com
Errors-To: nobody AT delorie DOT com
X-Mailing-List: djgpp-workers AT delorie DOT com
X-Unsubscribes-To: listserv AT delorie DOT com
Precedence: bulk

On 12 May 00, at 11:55, Eric Rudd wrote:

> Dieter Buerssner wrote:
> > Is the setting of errno really wanted for DJGPP?
>
> It's not as simple as that. An implementation is not required to set
> errno under C99, and initially I didn't want to set it, either, but Eli
> convinced me that setting it under error conditions would be a good
> idea. It turned out not to slow down the routines much, and helps in
> cases like this:
>
> for (i=0; i<n; i++) {
>   a[i] = log(b[i]);
> }
> if (errno != 0) ...

I see, but I am not sure whether I agree. Unfortunately, this makes the
task much more tedious.

> > Anyway, is there interest in long double versions of the math
> > functions?
>
> I think they'll be needed for C99. The interesting question is whether
> the coprocessor accuracy should be considered adequate or not. Long
> doubles have a 64-bit significand, which is the same as the internal
> format of the x87, but it's not easy to deliver correctly-rounded results
> at that precision for more than a few of the functions (such as sqrt).
> The range reduction for exp(), sinh(), cosh(), and the trig functions
> will be particularly difficult.

I think the range reduction for expl() is not that difficult, because of
the limited range of the argument. Once expl() and expm1l() are
implemented, sinhl(), coshl() and tanhl() are not that difficult either,
provided a maximum error of, say, 2 ULPs = 2^-62 is acceptable.

sin(), cos() and tan() are more difficult. My adapted range reduction
code (from Stephen Moshier) seems to produce correct results up to 2^64.
(I tested it against 100-decimal-digit code for linearly and
exponentially scaled random arguments.) If a larger range really needs
to be supported, it gets much more difficult. The most difficult is IMHO
powl(), and I don't have a good solution for that.

> If a program uses long doubles, it is presumably very concerned about
> accuracy, so I would lean toward slower, uncompromisingly-accurate
> implementations of the long double functions. I'd want to make sure that
> such implementations did not get linked into programs not using long
> doubles, however.

From my testing, the floating point functions built into the FPU normally
have errors smaller than 2^-63 (when called with an argument correctly
reduced to the supported range). If that is an acceptable error (which I
think it is), the functions won't be slow. If it is not, I can't do it :-(

> One issue for me is that C99 requires certain behaviors that I consider
> mathematically indefensible, such as atan2(0.,0.) returning 0. without
> any indication of error.

I agree. Even more serious is that you can "lose" a NaN with
pow(0.0, NaN).

> There are a number of higher mathematical functions that will need to be
> implemented for C99, such as lgamma, tgamma, erf, erfc, etc.

I do have gamma[l], lgamma[l], erf[l], erfc[l] and the Bessel functions
(again adapted from Stephen Moshier's code).

-- 
Regards, Dieter Buerssner
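
A minimal sketch of the errno pattern quoted above, with the clearing
step made explicit; the function and variable names here are only
illustrative, not code from this thread:

  #include <errno.h>
  #include <math.h>

  /* Returns 0 on success, -1 if any log() call reported an error.
     errno must be cleared by the caller first; the library never
     resets it to zero. */
  int log_array(double *a, const double *b, int n)
  {
      int i;
      errno = 0;
      for (i = 0; i < n; i++)
          a[i] = log(b[i]);  /* b[i] < 0 sets EDOM, b[i] == 0 sets ERANGE */
      return errno != 0 ? -1 : 0;
  }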
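
A minimal sketch of the expm1l()-based approach to sinhl() mentioned
above, assuming a C99-style expm1l() is available; the name my_sinhl()
is hypothetical and overflow handling is omitted:

  #include <math.h>

  /* sinh(x) = (e^x - e^-x)/2.  With u = expm1(x), e^x = u + 1 and
     e^-x = 1/(u + 1), so sinh(x) = (u + u/(u + 1))/2, which avoids
     the cancellation in (e^x - e^-x)/2 for small |x|. */
  long double my_sinhl(long double x)
  {
      long double u = expm1l(x);
      return 0.5L * (u + u / (u + 1.0L));
  }

coshl() and tanhl() can be derived from expm1l() in the same style.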
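
And a sketch of handing an already-reduced argument to the coprocessor,
assuming GCC inline assembly on the x87; fsin is only defined for
|x| < 2^63, so the range reduction has to happen before this call (the
function name is hypothetical):

  /* Computes sin(x) with the x87 fsin instruction.  The caller must
     have reduced x to |x| < 2^63; outside that range fsin sets C2 and
     leaves the operand unchanged. */
  static long double fpu_sinl(long double x)
  {
      long double result;
      __asm__ ("fsin" : "=t" (result) : "0" (x));
      return result;
  }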