Date: Tue, 14 Jul 1998 10:21:11 +0300 (IDT)
From: Eli Zaretskii
To: djgpp AT delorie DOT com
Subject: Re: Use of FLT_EPSILON
In-Reply-To: <6odqk1$p23$1@nnrp1.dejanews.com>
Message-ID:
MIME-Version: 1.0
Content-Type: TEXT/PLAIN; charset=US-ASCII
Precedence: bulk

On Mon, 13 Jul 1998 plipson AT my-dejanews DOT com wrote:

> > if (fabs (a - b) > min (fabs(a), fabs(b))*DBL_EPSILON)
> >   printf ("Not equal\n");
> > else
> >   printf ("Equal\n");
> >
> > (Use FLT_EPSILON for floats, LDBL_EPSILON for long doubles; all of them
> > are defined on <float.h>.)
> >
> > In other words, don't EVER compare FP numbers for exact equality, since
> > floating-point computations have inherent inaccuracy, unlike integer
> > numbers.
>
> This looks pretty ugly (the multiply and even more the min() ) - isn't
> there a better way?

None that I know of.  Floating-point computations *are* a complex issue
if you need to get them right.

Also, I think you are mistaken in your perception of ugliness: by far
the most inefficient part of the above computation is the call to the
`fabs' library function; all the rest is usually very fast on modern
machines.

> And what about comparing an expression to zero?

If the zero is a constant, not a variable, then you can compare
directly, but I would suggest using FLT_MIN (or DBL_MIN) instead of a
literal zero, since somebody might argue that 1.e-100 is zero, at
least in some applications.

> I tried several; since FLT_EPSILON is defined in terms of 1 -
>   if( fabs(a-b) < FLT_EPSILON....
>   if( (1 + fabs(a-b)) < (1+FLT_EPSILON).....
> and a few others.
> Does anyone have an explanation of what & why to do?

These attempts suggest that you are missing a crucial aspect of this
problem: FLT_EPSILON is a *relative* accuracy, not an *absolute* one.
In other words, two float variables that differ by less than
FLT_EPSILON times their absolute value are indistinguishable.
Since it is a relative accuracy, you MUST multiply it by the absolute
value of the numbers you compare to get an absolute accuracy.  (Be
sure to use DBL_EPSILON for double variables.)

The reason for this complexity is that floating-point numbers have a
fixed number of bits in the mantissa, so the value of the
least-significant bit is proportional to the absolute value of the
number.  (If this doesn't make sense, I suggest reading any of the
available books about floating-point computations; they should explain
these issues on page 2 or thereabouts.)