On Sat, 30 Nov 1996, Francois Charton wrote:
> Morten Welinder wrote:
> >
> > afonso AT inesca DOT inesca DOT pt writes:
> >
> > > char string[]="1.13";
> > > int result;
> > > ...
> > > result = (int)(atof(string)*100);
> > > ...
> >
> > > I've got result = 112!!! not 113 as I wished, because
> > > what atof() returns is 1.1299999... not 1.13 (and I only have
> > > an old i386).
> >
> > Getting 112 is well within the C standard. If your program does
> > not work in this situation, then you have a bug.
> >
>
> Sorry to disagree, but this *is* a bug: to be sure, try the following
> program:
>
> #include <stdio.h>
> #include <stdlib.h>
>
> int main(void)
> {
>     char ch[8] = "1.13";
>     int result, otherresult;
>     float f;
>
>     /* truncate the double product directly */
>     result = (int)(atof(ch)*100.0);
>     /* store it in a float first (which rounds!), then truncate */
>     f = atof(ch)*100.0;
>     otherresult = (int) f;
>     printf("result: %d otherresult: %d\n", result, otherresult);
>     return 0;
> }
>
> On my machine I get result: 112 and otherresult: 113...
>
> Francois
>
It is not a bug. It is a fundamental problem: decimal fractions are
held imprecisely, because many of them have an infinite number of
binary digits after the binary point, just as 1/3 = 0.333... never
terminates in decimal. 1.13 is such a number, and the nearest double
sits a hair below it. You cannot beat this.
(This also explains Francois's otherresult: squeezing the double
112.99999999999998... into a float rounds it to exactly 113.0, which
then truncates to 113.)
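You can see what is actually stored by printing 1.13 with more digits
than it was written with. A quick illustration (the exact digits assume
IEEE-754 doubles, which is what the FPU gives you):

#include <stdio.h>

int main(void)
{
    /* the closest double to 1.13 is slightly below it */
    printf("%.20f\n", 1.13);   /* e.g. 1.12999999999999989342 */
    return 0;
}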
Try running this test.
#include <stdio.h>

int main(void)
{
    double f = 1.13;
    printf(" 113 - 100*1.13 = %16.13e\n", 113 - 100*f);
    return 0;
}
Using GCC under Linux I get
113 - 100*1.13 = 1.0658141036402e-14
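For completeness: the usual fix for the original problem is to round to
the nearest integer instead of truncating. A minimal sketch (assuming
the value is non-negative; floor() comes from <math.h>, so link with
-lm on Unix-like systems):

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

int main(void)
{
    char string[] = "1.13";
    /* add 0.5 and floor: rounds to nearest for non-negative values */
    int result = (int) floor(atof(string) * 100.0 + 0.5);
    printf("result: %d\n", result);   /* prints 113 */
    return 0;
}

For negative values, subtract 0.5 instead (or use C99's lround() where
available).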
Bryan