Mail Archives: djgpp/1997/08/07/03:03:52

From: mschulter AT DOT value DOT net (M. Schulter)
Newsgroups: comp.lang.c,comp.os.msdos.djgpp
Subject: Re: having trouble with long numbers
Followup-To: comp.lang.c,comp.os.msdos.djgpp
Date: 7 Aug 1997 02:25:31 GMT
Organization: Value Net Internetwork Services Inc.
Lines: 65
Message-ID: <5sbbms$ne3$1@vnetnews.value.net>
References: <01bc9c51$0ceeec80$78ed1fcc AT darkstar> <01bc9c58$5796ffa0$b361e426 AT DCorbit DOT solutionsiq DOT com> <33DFD749 DOT 2AD2 AT ici DOT net> <870396817snz AT genesis DOT demon DOT co DOT uk>
NNTP-Posting-Host: value.net
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp

Lawrence Kirby (fred AT genesis DOT demon DOT co DOT uk) wrote:
: In article <33DFD749 DOT 2AD2 AT ici DOT net>
:            carla AT ici DOT net "Alicia Carla Longstreet" writes:
: 
: >It is a bit closer to 365.246  (Which is why we do NOT have a leap year
: >on years that are divisable by 400.
: 
: Years divisible by 400 are leap years (which is why 2000 is a leap year).
: It is years divisible by 100 (other than those divisible by 400) that are
: not leap years.

Yes, that is indeed the algorithm adopted by the Gregorian calendar in
1582. If we take as a standard the length of the tropical year around
1900 -- about 365.2422 days, if I have it correctly, or 365 days, 5
hours, 48 minutes, and 46 seconds -- the Gregorian algorithm is still
off by something like
one day in 3000 years. Adding a "patch" that years divisible by 4000 or
10000 should _not_ be leap years would make the system accurate to
something like one day in 800,000 years, by an estimate I once made --
_except_ for the fact that variations in the length of the day keep
changing the "standard" of solar system time itself.

Maybe it's sort of like DJGPP or GCC: just when things are getting close
to perfection, things change <grin>, maybe like going from 2.7.2.1 to
2.8.0.

BTW, this thread is an interesting summary of different steps toward
programming wisdom:

Step 1: "This book on real-mode DOS C compilers says an int is 16 bits."

Step 2: "Wait, DJGPP is a 32-bit compiler."

Step 3: "Yes, DJGPP is 32-bit -- but why not write code that will run on
any ANSI-compliant compiler by using a long or whatever?"
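
Just to make Step 3 concrete, here's a quick sketch along those lines
(the function names are mine, purely for illustration). A running day
count blows right past 32767, so an int on a 16-bit real-mode compiler
would overflow, while ANSI guarantees a long of at least 32 bits
anywhere:

#include <stdio.h>

/* The Gregorian rule discussed above: every 4th year is a leap year,
   except centuries, except every 400th year. */
static int is_leap(long year)
{
    return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
}

/* Days from year 1 up to (but not including) the given year. */
static long days_before_year(long year)
{
    long y = year - 1;
    return 365L * y + y / 4 - y / 100 + y / 400;
}

int main(void)
{
    printf("Days from year 1 to 1997: %ld\n", days_before_year(1997L));
    printf("1900 leap? %d   2000 leap? %d\n",
           is_leap(1900L), is_leap(2000L));
    return 0;
}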

To quote from Thomas Plum and Dan Saks, _C++ Programming Guidelines_ (Plum
Hall, 1991), Section 1.17 stdtypes on "standard defined types," p. 68:

"Avoid careless dependence on the int size of the compiler. This is
especially important on machines where int and long are the same size;
careless code will not port correctly down to smaller machines."

From previous posts here, I understand that it works (or _doesn't_ work)
both ways: assuming that an int is 16 bits can leave some surprises
waiting in the wings when the program gets ported to DJGPP.
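
One tiny example of the kind of surprise I mean -- arithmetic written
with a 16-bit unsigned int in mind quietly gives a different answer
once int is 32 bits:

#include <stdio.h>

int main(void)
{
    unsigned int x = 65535u;

    x = x + 1;   /* wraps to 0 where int is 16 bits; 65536 under DJGPP */
    printf("65535u + 1 = %u\n", x);
    return 0;
}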

BTW, as for degrees of precision, I might guess that the _usual_ as
opposed to _minimum_ expectation is about equal to what both DJGPP and
some real-mode DOS compilers support: 15-digit precision for a double,
and 18-digit precision for a long double. It's true that the ANSI
minimums are less demanding than that; and while ANSI does define the
long double type, it guarantees very little about its precision -- a C
programmer once told me that the IEEE standard for extended-precision
math is what sets the widely accepted expectation there.

BTW, an interesting DJGPP detail, at least last I checked: DJGPP gives
the long double a size of 12 bytes, or 96 bits, so that it pads out to
an even 32-bit boundary (rather like struct padding), even though only
80 of those bits -- sign, exponent, and mantissa -- are actually used
by the Intel extended-precision format.
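
Rather than trusting memory on any of these figures, it's easy to ask
the compiler itself. A little sketch like this prints the
implementation-defined values from <float.h> (under DJGPP I'd expect
sizeof(long double) to come out as 12, per the above):

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("DBL_DIG  (digits in a double):      %d\n", DBL_DIG);
    printf("LDBL_DIG (digits in a long double): %d\n", LDBL_DIG);
    printf("sizeof(int)         = %lu\n", (unsigned long) sizeof(int));
    printf("sizeof(long)        = %lu\n", (unsigned long) sizeof(long));
    printf("sizeof(long double) = %lu\n", (unsigned long) sizeof(long double));
    return 0;
}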

Most respectfully,

Margo Schulter
mschulter AT value DOT net
