Mail Archives: djgpp/2000/01/24/13:25:19
David Cleaver <davidis AT ou DOT edu> wrote:
> So, would it be better for me to store all of my ones and zeros in the
> 'int' type to speed up operations, since all I'm doing is accessing the
> arrays (I'm not changing anything in them), or should I just keep it in
> the 'char' data type? See, the reason for the hex question was...
> If I store all the info in hex form in the 'char' type, like:
> 0xa4, 0x3c, 0xf2, 0x7d, then all I have to do to change it to 'int' is
> combine them all into one hex unit (right?):
> 0xa43cf27d
The real problem you face there is that you want to use constant
initializers (which have a fixed size, as written into the source
code) for a data type like 'int', which can have different sizes on
different C platforms.
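For illustration (combine_bytes is just a made-up name, not from your
code), combining the four bytes with shifts instead of relying on the
in-memory layout of 'int' could look roughly like this, using
unsigned long because it is guaranteed to be at least 32 bits wide:

/* Combine four 8-bit values into one 32-bit quantity, high byte first. */
unsigned long combine_bytes(unsigned char b0, unsigned char b1,
                            unsigned char b2, unsigned char b3)
{
    return ((unsigned long)b0 << 24) | ((unsigned long)b1 << 16)
         | ((unsigned long)b2 <<  8) |  (unsigned long)b3;
}

/* combine_bytes(0xa4, 0x3c, 0xf2, 0x7d) gives 0xa43cf27dUL */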
I don't think that kind of size assumption is a very good plan, to
start with. At the very least, you should
#include <limits.h>
#if CHAR_BIT != 8
# error This code does not work on this platform!
#endif
or so, to make sure that the problem never goes unnoticed. The same
goes for 'int': you could assume it to be 4 bytes and check it like this:
#include <limits.h>
#if UINT_MAX != 4294967295U
# error This code only works on platforms where int is 32 bits!
#endif
Except for constant initializers in the source code, the optimal
answer, of course, would be not to assume any particular size of the
data type, but to parametrize your algorithm in units of
CHAR_BIT * sizeof(int), i.e. the size of an int in bits on the given
platform. That would yield portable code.
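As a rough sketch of that parametrization (INT_BITS and get_bit are
just placeholder names), reading one bit out of such an array might
look like this:

#include <limits.h>

/* Number of bits in one unsigned int on this platform. */
#define INT_BITS (CHAR_BIT * sizeof(unsigned int))

/* Return bit number 'n' of a bit array stored in unsigned ints.
   No assumption about the actual width of unsigned int. */
int get_bit(const unsigned int *bits, unsigned long n)
{
    return (int)((bits[n / INT_BITS] >> (n % INT_BITS)) & 1u);
}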
--
Hans-Bernhard Broeker (broeker AT physik DOT rwth-aachen DOT de)
Even if all the snow were burnt, ashes would remain.