Mail Archives: djgpp/1994/10/05/23:42:07
In response to:
>> Off hand I don't know what the "-mno-486" flag does, but by ANSI C
>> definitions, a signed integer, as you have declared "i" to be, has a
>> guaranteed range of at least -32,767 to 32,767. You want to declare "i"
Eli Zaretskii wrote:
> Wrong! GCC produces 32-bit code, which means int is 32 bits, not 16. So
> it can hold values up to about 2 billion.
The fact that DJ's port of GCC produces "32-bit" code means that *pointers*
are 32 bits, not necessarily integers. It happens to be the case that
integers are also 32 bits in size, but that's by no means a requirement
for "32-bit compilers" in and of itself.
For example, many Macintosh compilers have 32-bit pointers and 16-bit
'int's. Take the Metrowerks C/C++ compilers (again for the Mac): an 'int'
is 16 bits when compiling for the MC680x0 and 32 bits when compiling for
the PowerPC. The reason is efficiency: 16-bit operations are faster than
32-bit operations on the Motorola chips, so the 16-bit 'int' is chosen as
the "natural" integer type there. On the PPC, 32-bit operations are
faster, so *that* is made the default 'int'.
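If you need a type that is guaranteed to hold 32-bit values on all of
those targets, the usual trick is to let <limits.h> pick it for you
(again just a sketch; the typedef name is arbitrary):

    #include <limits.h>

    #if INT_MAX >= 2147483647L
    typedef int  int32;     /* 'int' is at least 32 bits on this target */
    #else
    typedef long int32;     /* ANSI guarantees 'long' is at least 32 bits */
    #endif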
I believe that 16-bit integer operations are faster on the i486 and
earlier chips; I don't know about the Pentium. Anybody know for sure?
(When coding for portability, never assume that an 'int' is more than 16
bits. And *especially*, never assume that "int" and "void *" are the same
size - Unix programmers are notoriously lax about this, and it causes
major headaches when porting Unix-origin software to compilers on
non-Unix platforms.)
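To make those two points concrete, here they are in miniature (a sketch,
not taken from any real program):

    #include <stdio.h>

    int main(void)
    {
        /* Counting past 32,767: use 'long', which ANSI guarantees is at
           least 32 bits.  With a 16-bit 'int' this loop would overflow. */
        long i, total = 0;
        for (i = 0; i < 100000L; i++)
            total++;
        printf("total = %ld\n", total);

        /* Pointers and ints: never round-trip a pointer through an 'int'.
           ANSI C makes no promise that 'int' is wide enough to hold it. */
        {
            int  x = 42;
            int *p = &x;            /* fine: keep pointers as pointers  */
            /* int bad = (int) p;      may silently lose the high bits  */
            printf("*p = %d\n", *p);
        }
        return 0;
    }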
--------------------------------------------------------------------
Christopher Tate | "Apple Guide makes Windows' help engine
fixer AT faxcsl DOT dcrt DOT nih DOT gov | look like a quadruple amputee."
eWorld: cTate | -- Pete Gontier (gurgle AT dnai DOT com)