Message-ID: <8D53104ECD0CD211AF4000A0C9D60AE301397599@probe-2.acclaim-euro.net>
From: Shawn Hargreaves
To: djgpp AT delorie DOT com
Subject: Re: Portability and size_t type related question
Date: Thu, 13 May 1999 11:05:36 +0100
MIME-Version: 1.0
X-Mailer: Internet Mail Service (5.0.1460.8)
Content-Type: text/plain
Reply-To: djgpp AT delorie DOT com

Pasi Franti writes:
>> Ok. thanx. it is like here then:
>>
>> typedef unsigned short U16;
>> typedef unsigned long U32;
>> typedef unsigned char BYTE;
>
> I disagree.
>
> I did not follow your discussion but how did you come up to such
> conclusion? You can never be sure of how many bits are int and
> long types without checking it!

Which is exactly why you need to make these defines. The above code is
right for djgpp, along with most other 32-bit compilers. If you want to
port your code to some different platform, you just change those few
defines, rather than having to alter every reference to the types
throughout your entire source. This is the only possible way to do it,
since you can't test type sizes with the preprocessor, or choose
between different types at runtime!

Personally, though, I've never much liked this method of defining your
own types. As long as you make some minimal assumptions (eg. that you
can fit at least 32 bits in an int, or at least 16 bits if you want to
support 16-bit platforms as well), and don't rely on any specific
wrapping behaviour, I've never found a case where I really needed this
kind of define.

IMHO it is almost always better to let the compiler choose a good size
for you. For example, if you naively ported a 16-bit DOS program to
djgpp by defining all the integers as shorts, you'd end up with very
inefficient code because of all the operand-size prefixes, whereas if
you just said "int" you would get whatever is the optimal integer
datatype for the current machine.


Shawn Hargreaves.