Sender: vheyndri AT rug DOT ac DOT be
Message-Id: <34D6C7DF.35AD@rug.ac.be>
Date: Tue, 03 Feb 1998 08:31:43 +0100
From: Vik Heyndrickx
Mime-Version: 1.0
To: DJ Delorie
Cc: eliz AT is DOT elta DOT co DOT il, djgpp-workers AT delorie DOT com
Subject: Re: char != unsigned char... sometimes, sigh
References: <199802022324 DOT SAA16676 AT delorie DOT com>
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Precedence: bulk

DJ Delorie wrote:
>
> > Will the following solve the problem?
> > #define isalnum(c) (__dj_ctype_flags[((c)+1)&0xff] & __dj_ISALNUM)
>
> No, because the problem is that EOF ((int)-1) and 0xff ((int)(signed
> char)-1) ARE THE SAME NUMBER.  No amount of logic can tell the
> difference.

This sort of difficulty is caused by "choices made in the past", when
it was more efficient to return an error code in the same place where
the normal return value is returned.  I try to avoid this in my own
programs, but unfortunately in djgpp we are stuck with this standard,
and there is no easy way around this kind of problem.  If only we
could squeeze 257 values into a char... (where have I heard that
before?)  A small test program at the end of this message shows the
collision.

> What started this thread was an idea to make (char) mean (unsigned
> char) so that -1 and (char)(0xff) would be different numbers when
> promoted to (int).  That, unfortunately, has many other bad side
> effects.

I obviously do not have a clear view of these bad side effects.
Could you please fill me in, or at least provide a good example?  I
know that the GNU compiler et al. use "unsigned char" by default on
many platforms, where it is evidently used without problems.  The
only problems I can see are with djgpp programs that specifically
rely on "char" being "signed char", and I don't think many programs
do that.  A second sketch at the end shows what such code could look
like.

--
 \ Vik /-_-_-_-_-_-_/
  \___/ Heyndrickx
  /   \ /-_-_-_-_-_-_/
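
Here is a minimal sketch of the collision, assuming a compiler where
plain char is signed and 8 bits wide (as djgpp's gcc is today); the
variable names are made up for illustration:

#include <stdio.h>

int main(void)
{
    char c = 0xFF;  /* a legitimate 8-bit character code; with a
                       signed char, gcc stores this as -1 */
    int  n = c;     /* promotion to int sign-extends: n becomes -1 */

    printf("(int)c = %d, EOF = %d\n", n, EOF);
    if (n == EOF)
        printf("character 0xFF and EOF are the same int value\n");
    return 0;
}

Once c has been promoted, n and EOF hold the identical bit pattern,
so no table lookup such as ((c)+1)&0xff can ever separate them.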
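And a sketch of the kind of code that does rely on plain char being
signed, and would silently change behaviour if char became unsigned;
this is only an illustration, not code from any real package:

#include <stdio.h>

int main(void)
{
    char delta = -1;  /* holds 255 instead if char is unsigned */

    if (delta < 0)    /* can never be true with an unsigned char */
        printf("plain char is signed here (delta = %d)\n", delta);
    else
        printf("plain char is unsigned here (delta = %d)\n", delta);
    return 0;
}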