From: Ned Ulbricht
Newsgroups: comp.os.msdos.djgpp
Subject: Re: printf 'g' conversion
Date: Sun, 01 Mar 1998 20:45:03 -0800
Organization: University of Washington
Lines: 43
Message-ID: <34FA394F.D1E@ee.washington.edu>
References: <34FA0346 DOT 33 AT ee DOT washington DOT edu> <34FA0CD5 DOT 79608BBD AT alcyone DOT com>
NNTP-Posting-Host: cs204-48.student.washington.edu
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp
Precedence: bulk

Erik Max Francis wrote:
>
> Ned Ulbricht wrote:
>
> > The Working Draft, 97-11-21, WG14/N794 J11/97-158, p.290 (&cf p.287),
> > seems to be a little bit ambiguous about this, but it says under 'g,G'
> > "the number is converted in style f or e (...), with the precision
> > specifying the number of significant digits."

[snip]

> Why not look at the _actual_ ANSI document?  From ANSI 7.9.6.1:
>
>     The double argument is converted in style f or e (or in style E in
>     the case of a G conversion specifier), with the precision specifying
>     the number of significant digits.

As I suspected, the working draft (for the next standard, please note)
uses essentially the same ambiguous language that the current standard
does.

> If the precision is zero, it is taken as 1.
>
> And for f:
>
>     ... If the precision is missing, it is taken as 6 ...
>
> And for e, E:
>
>     ... if the precision is missing, it is taken as 6 ...

What does this have to do with the reported behavior?  The precision
in the test case is not missing; it is present.

> Seems pretty clear to me.

Clear in which way?

I just made additional tests using gcc under Digital Unix 4.0a and
under HP-UX 9.05.  All the libraries that I've tested so far, except
for DJGPP 2.01, agree with the behavior that was first reported under
Linux.  And that behavior is not the output you would get by feeding
printf "%.9f".

--
Ned Ulbricht
mailto:nedu AT ee DOT washington DOT edu
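
For anyone who wants to see the distinction at a glance, here is a
minimal sketch; the value 100.0/3.0 is my own choice for illustration,
not the test case from the earlier posts in this thread:

    /* Illustrative sketch only -- 100.0/3.0 is an arbitrary value,
       not the test case reported earlier in the thread.            */
    #include <stdio.h>

    int main(void)
    {
        double x = 100.0 / 3.0;

        /* 'g': precision gives the number of significant digits.   */
        printf("%.9g\n", x);   /* expect 33.3333333                 */

        /* 'f': precision gives digits after the decimal point.     */
        printf("%.9f\n", x);   /* expect 33.333333333               */

        return 0;
    }

A library that reads the 'g' precision as significant digits should
print 33.3333333 for the first line; the second line is simply what
"%.9f" gives, and the question in this thread is whether "%.9g" output
should ever look like that.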