On Fri, 29 Jun 2001 10:20:12 +0300, "Eli Zaretskii"
<eliz AT is DOT elta DOT co DOT il> sat on a tribble, which squeaked:
>This is usually a sign of a numeric bug in the code.
No kidding, but in this case it went away when I switched to plain
doubles universally. And it wasn't a tiny number turning into a zero
because of precision. For one thing, it would then have *stopped*
working, not *started* working, when going from long to regular
doubles.
>
>> (Actually, I tend to use
>> "%1.16f" when debugging FP code, since the default "%f" gives only
>> "float" precision output even when doubles and long doubles are
>> passed. Clearly this should work fine with long doubles, except that
>> even 16 digits of precision might not be enough for many cases.)
>
>For long doubles, the right format for debugging printf's is "%.19g".
>Note: "g", not "f", because you want 19 significant digits, not 19
>digits after the dot (think about a number like 123456789.987654321).
Ooo--kay... (if I even got long doubles working. As I said, %1.16f
seems to work for plain doubles.)
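
Putting your two hints together (the 19 significant digits from here, plus
the L length modifier from your next point below), something like this is
presumably the way to dump values next time; if I have the digit counts
right, 17 significant digits are enough to round-trip a double, and 19 is
what you suggest for a long double:

    #include <stdio.h>

    int main(void)
    {
        double      d  = 123456789.987654321;
        long double ld = 123456789.987654321L;

        printf("%f\n", d);       /* plain "%f": only 6 digits after the dot  */
        printf("%.17g\n", d);    /* 17 significant digits for a double       */
        printf("%.19Lg\n", ld);  /* 19 significant digits, L for long double */
        return 0;
    }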
>It doesn't; it simply assumes that the format specifier tells the
>truth. That is, if you say "%f", the corresponding argument is a
>double, and if you say "%Lf", it's a long double.
Hey, wait a damn minute, the info file said %f for doubles *and* long
doubles.
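
If I read things right, what's actually going on is that a float argument
gets promoted to double when passed to a variadic function like printf, so
"%f" genuinely covers float and double; a long double, on the other hand,
is passed as-is and needs "%Lf" (or "%Lg"), and handing it to plain "%f" is
simply a mismatch. A tiny check (made-up values, obviously):

    #include <stdio.h>

    int main(void)
    {
        float       f  = 0.1f;
        double      d  = 0.1;
        long double ld = 0.1L;

        printf("%f %f\n", f, d);  /* OK: f is promoted to double, "%f" fits both */
        printf("%Lf\n", ld);      /* OK: "%Lf" matches the long double argument  */
        /* printf("%f\n", ld);       wrong: no promotion to double happens here,
                                     so the format and the argument disagree     */
        return 0;
    }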
In any case, the core question remains unanswered: why was some code
that didn't even use printf broken with long doubles but working with
plain doubles? All it did was add, multiply, compare, and assign them
with literals and variables. Not so much as a pointer or a conversion
involved, and I don't think it even did any division...
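
For what it's worth, here is a made-up example (not my actual code) of how
nothing more than an assignment, a literal, and a comparison can behave
differently once long doubles enter the picture, since an unsuffixed 0.1
and 0.1L are two different roundings of the same value:

    #include <stdio.h>

    int main(void)
    {
        double      d = 0.1;    /* 0.1 rounded to double                    */
        long double x = 0.1;    /* the same double value, widened exactly   */

        if (d == 0.1)           /* true: same rounding on both sides        */
            printf("double compare: equal\n");
        if (x == 0.1L)          /* false: 0.1L is a closer, different value */
            printf("long double compare: equal\n");
        return 0;
    }

Whether that has anything to do with my bug, I don't know, but it shows the
class of surprise I mean.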
--
Bill Gates: "No computer will ever need more than 640K of RAM." -- 1980
"There's nobody getting rich writing software that I know of." -- 1980
"This antitrust thing will blow over." -- 1998
Combine neo, an underscore, and one thousand sixty-one to make my hotmail addy.