Mail Archives: djgpp/1997/03/18/09:34:26
In article <Pine DOT D-G DOT 3 DOT 91 DOT 970314092902 DOT 4515B-100000 AT dg1>, "art s. kagel
IFMX x2697" <kagel AT dg1 DOT bloomberg DOT com> writes
>In general, it is good coding practice to explicitly use long and short
>rather than int, since short is guaranteed to be 'at least 16 bits' and
>long is guaranteed to be 'at least 32 bits', while int is only defined
>as 'the efficient word size', which varies from one machine and compiler
>to another. I recommend that you only use int when you do not care about
>precision but only about speed. I reserve int for loop variables and
>array indices when I am confident that the range of values is within the
>precision of any integral datatype that it MIGHT have.
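In code, the policy being quoted comes out roughly like this (a sketch;
the variable names are made up):

  #include <stdio.h>

  int main(void)
  {
      short year  = 1997;    /* always fits in 16 bits: short suffices */
      long  bytes = 100000L; /* can exceed 65535: long guarantees at
                                least 32 bits on any compiler          */
      int   i;               /* small, known range: int for speed only */

      for (i = 0; i < 3; i++)
          bytes += year;
      printf("%ld\n", bytes);
      return 0;
  }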
It's good programming practice to let the compiler decide what size
variables should be *unless* you know that would cause a problem (for
example, a 16 bit 'int' would be too small and your program might be
compiled on a 16 bit compiler). This is particularly important with gcc,
where code is likely to be ported to many targets.
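Roughly what I mean, as a sketch (read_block() here is invented):

  #include <stdio.h>

  /* Hypothetical block reader, only for illustration. */
  static long read_block(int n)
  {
      return 512L * (n + 1);
  }

  int main(void)
  {
      long total = 0;  /* can pass 65535: must be spelled 'long', or a
                          16 bit compiler would silently overflow it   */
      int  i;          /* plain loop index: let the compiler pick its
                          natural word size                            */

      for (i = 0; i < 200; i++)
          total += read_block(i);
      printf("%ld bytes\n", total);
      return 0;
  }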
I've seen 10% code size reductions from converting 'short' to 'int' in
MIPS code, for instance. After the same change, the same source also
compiled smaller and faster for the original Intel target!
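The kind of rewrite meant here looks something like this (hypothetical
functions; on wide-register machines the 'short' version needs extra
truncation/sign-extension instructions):

  /* Before: every operation on 's' and 'i' must be narrowed back to
     16 bits, which costs extra instructions on e.g. MIPS.            */
  short sum16(const short *a, short n)
  {
      short s = 0, i;
      for (i = 0; i < n; i++)
          s += a[i];
      return s;
  }

  /* After: values live in full registers; the narrowing disappears. */
  int sum32(const short *a, int n)
  {
      int s = 0, i;
      for (i = 0; i < n; i++)
          s += a[i];
      return s;
  }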
In contrast, I/O routines should define *everything* about their data
types tightly; that way they keep working on next year's
machine/compiler.
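One way to nail a format down that tightly (a sketch, not from any
particular library): write each field at a fixed width and byte order
instead of fwrite()ing a struct whose layout the compiler controls.

  #include <stdio.h>

  /* Emit 'v' as exactly four little-endian bytes, independent of
     sizeof(int), struct padding, or the host byte order.           */
  static int put_u32le(unsigned long v, FILE *f)
  {
      putc((int)( v         & 0xffUL), f);
      putc((int)((v >>  8)  & 0xffUL), f);
      putc((int)((v >> 16)  & 0xffUL), f);
      return putc((int)((v >> 24) & 0xffUL), f);
  }

  int main(void)
  {
      put_u32le(123456789UL, stdout);
      return 0;
  }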
---
Paul Shirley: shuffle chocolat before foobar for my real email address