Mail Archives: djgpp/1998/05/18/16:45:42
Eli Zaretskii wrote:
>
> > I have a question regarding the state of the coprocessor control
> > register. I noticed that in DJGPP v2.01, overflows are masked.
> I'm not sure I understand the question. By ``overflows are masked'' do
> you mean that generation of FP exceptions due to overflow is masked?
Yes.
> IMHO, math functions should
> return Inf or a NaN as appropriate, and set errno and/or call matherr;
> they should not crash the program.
errno is specified in the ANSI standard, but matherr appears to be a
Borland-ism, which DJGPP doesn't have.
> But regardless, I think that any function that produces +/-Inf should
> set errno and call matherr.
> I'm no expert, but my interpretation is that Inf *is* too large
> and should cause errno to be set to ERANGE. I don't see any sense in
> silently ignoring these cases: how else will the program know that
> something went wrong after a certain run of FP computations?
ANSI allows generation of infinities to be flagged, even on
implementations that support infinities, but on such a system I would
question whether generation of "infinity" is really an error. On
machines that can't represent infinity, it certainly makes sense to flag
an overflow as an error, but on the x87, I think there are more
advantages than disadvantages in silently returning infinity, since the
x87 knows how to use infinity in further arithmetic computations. For
instance, if a user wished to compute sech(x) by the formula
sech(x) = 1./cosh(x)
the result for large arguments would be zero, as expected. Another
example would be the computation of parallel resistance by the formula
rpar = 1./(1./ra + 1./rb)
which would produce the expected result if ra or rb were zero. No doubt
one could concoct examples where ignoring infinity leads to problems,
but in most of the floating-point work I have done, I have wished for
infinity to enter into the arithmetic as naturally as does zero. Any
program that makes assumptions about infinity being trapped has
questionable portability, anyway.
> I'm not sure. If the new version of expm1 is not broken, why fix it?
> Please look at the new source before you decide. Preliminary testing
> by K. B. Williams indicates that this version is much-much better.
I haven't compiled any routines in libmnew.zip, but they appear to be a
careful and reasonable implementation for machines that support
floating-point arithmetic but lack built-in transcendentals. The
author(s) have taken pains to preserve accuracy to one ULP, but this
requires a lot of extra code if one cannot use the coprocessor's
extended-precision registers. Every libm routine I examined uses polynomial and
rational approximations, which will be far slower than functions that
use the native x87 instructions. Thus, I'm still leaning toward
continuing with a new library for DJGPP. I think I can get strict ANSI
conformance AND good efficiency.
It would be nice to have libm around, since the higher transcendentals
(Bessel, gamma functions, etc.) are handy to have, but I feel that
functions that do not appear in the ANSI standard should not be
prototyped in math.h, since there is a namespace pollution issue. For
instance, a strictly conforming program might use the name "j0"; that
program would break if the Bessel function "j0" appeared in math.h.
-Eric Rudd