From: Paul Shirley
Newsgroups: comp.os.msdos.djgpp
Subject: Re: Weird problem
Date: Mon, 17 Mar 1997 18:49:12 +0000
Organization: wot? me?
Lines: 25
Distribution: world
Message-ID:
References: <33287185 DOT 892303 AT news DOT flash DOT net>
Reply-To: Paul Shirley
NNTP-Posting-Host: chocolat.foobar.co.uk
Mime-Version: 1.0
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp

In article , "art s. kagel IFMX x2697" writes
>In general, it is good coding practice to explicitly use long and short
>rather than int, since short is guaranteed to be 'at least 16 bits' and
>long is guaranteed to be 'at least 32 bits', while int is only defined as
>'the efficient word size', which varies from one machine and compiler to
>another. I recommend that you only use int when you do not care about
>precision but only about speed. I reserve int for loop variables and
>array indices when I am confident that the range of values is within the
>precision of any integral datatype that it MIGHT have.

It's good programming practice to let the compiler decide what size
variables should be *unless* you know that would cause a problem (for
example, a 16 bit 'int' would be too small and your program might be
compiled on a 16 bit compiler). This is particularly important with gcc,
where code is likely to be ported to many targets. I've seen 10% size
reductions from converting 'short' to 'int' in MIPS code, for instance.
The same source was also smaller and faster on the original Intel code
after this change!

In contrast, I/O routines should define *everything* about data types
tightly; that way they keep working on next year's machine/compiler.

---
Paul Shirley: shuffle chocolat before foobar for my real email address