Mail Archives: djgpp/1998/01/15/06:18:05
On Wed, 14 Jan 1998, Nate Eldredge wrote:
> >Sorry, I was wrong. The ANSI C Standard explicitly says that signed bit
> >fields of size N can be used to represent values in the range [0, 2^(N-1))
> >so when N is 1, you cannot represent 1. You need to make it unsigned, as
> >Nate suggested.
> Oughtn't that to be:
> [0..(2^(N-1))-1] ?
> Because, for instance, a 16-bit signed value can only go up to 32767.
>
> Also, 2^(N-1) for N=1 equals 2^0 = 1.
Here's what my ANSI C references say (the last column is mine):

    Designation       Minimum range        For N=1
    int or none       [0, 2^(N-1))         [0, 1)  i.e. only 0
    signed or         (-2^(N-1), 2^(N-1))  (-1, 1) i.e. only 0
      signed int
    unsigned or       [0, 2^N)             [0, 2)  i.e. 0 and 1
      unsigned int
So it seems to me that the only useful way to have a single-bit bit
field is to declare it unsigned. Of course, the above only specifies
the *minimum* range, so an implementation could behave otherwise, but
if you want to be sure it works, make it unsigned.