David Cleaver <davidis AT ou DOT edu> writes:
> Nate Eldredge wrote:
> >
> > Most machines have one data type that is fastest to operate on,
> > normally the one that corresponds to the native wordsize. On the 386,
> > it's the 32-bit int. The 16-bit short, on the other hand, requires a
> > special instruction prefix to operate on, and each instruction
> > requires one more cycle than the corresponding 32-bit instruction.
> > Thus the code generated for `i++' will be both smaller and faster if
> > `i' is an int rather than a short.
> >
>
> So, would it be better for me to store all of my ones and zeros in the
> 'int' type to speed up operations, since all I'm doing is accessing the
> arrays (I'm not changing anything in them), or should I just keep them in
> the 'char' data type? See, the reason for the hex question was...
Since there are a lot of them, storage may become an issue. And if
you're simply accessing them, I don't think there's much performance
difference between bytes and ints. Arithmetic is usually where it
becomes an issue. Also, keeping the arrays smaller will allow more of
the data to fit in the cache, which is good.
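Just to illustrate the storage side (the table size here is made up),
an int table is four times the size of a char table on the 386:

#include <stdio.h>

int main(void)
{
    /* Hypothetical tables holding the same 10000 ones and zeros. */
    static char flags_char[10000];
    static int  flags_int[10000];

    printf("as char: %lu bytes\n", (unsigned long)sizeof flags_char);
    printf("as int:  %lu bytes\n", (unsigned long)sizeof flags_int);
    return 0;
}

Whether that matters depends on how big your tables really are, of
course.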
> If I store all the info in hex form in the 'char' type, like:
> 0xa4, 0x3c, 0xf2, 0x7d, then all I have to do to change it to 'int' is
> combine them all into one hex unit (right?):
> 0xa43cf27d
Not quite. I assume you're thinking of something like:
unsigned char foo[] = { 0xa4, 0x3c, 0xf2, 0x7d };
int x = *(int *)foo;
This is generally not allowed by ANSI: the character-type exception in
the aliasing rules runs the other way (you may read any object through
a char lvalue, not a char array through an int lvalue), and foo may
not even be aligned for an int. Recent versions of GCC may generate
code that does not do what you want. And even where it happens to
work, the 386 is little-endian, so those four bytes would come out as
0x7df23ca4, not 0xa43cf27d. You'd have to use unions to be safe, which
is a pain. I'm not quite sure what you're trying to accomplish, but I
suspect that trickery like this won't be worth the trouble.
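If you do end up wanting to combine bytes, here's a rough sketch of
two safer routes, using the four bytes from your message (the names
are just for the example):

#include <stdio.h>

int main(void)
{
    unsigned char foo[] = { 0xa4, 0x3c, 0xf2, 0x7d };

    /* Union punning: reads the bytes in the machine's own order,
       so a little-endian 386 prints 0x7df23ca4, not 0xa43cf27d. */
    union {
        unsigned char c[4];
        unsigned int  i;
    } u = { { 0xa4, 0x3c, 0xf2, 0x7d } };
    printf("union:  %#x\n", u.i);

    /* Explicit shifts: build the value you wrote down, regardless
       of byte order, with no aliasing tricks at all. */
    unsigned int x = ((unsigned int)foo[0] << 24)
                   | ((unsigned int)foo[1] << 16)
                   | ((unsigned int)foo[2] <<  8)
                   |  (unsigned int)foo[3];
    printf("shifts: %#x\n", x);

    return 0;
}

The shifts are cheap on the 386, and they have the advantage that the
result doesn't depend on the machine's byte order.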
--
Nate Eldredge
neldredge AT hmc DOT edu