
Date: Sun, 9 Oct 1994 06:05:19 -0700 (PDT)
From: "Frederick W. Reimer" <fwreimer AT crl DOT com>
Subject: Re: djgpp and the 386SX
To: Chris Tate <FIXER AT FAXCSL DOT DCRT DOT NIH DOT GOV>
Cc: djgpp AT sun DOT soe DOT clarkson DOT edu

This is a long response to a long response, so if you don't have the time 
to read it, kill it now...

On Sat, 8 Oct 1994, Chris Tate wrote: 
> Borland and Microsoft certainly *do* use 32-bit integers; they're just 
> declared "long" instead of "int." Why is this a problem? 
Let's be a little more detailed here.  First, my
reference point is Borland C++ 3.1 and Microsoft Visual C++ 1.0 (yes, I
actually paid for both of these compilers).  One of the reasons I
purchased both compilers is that they were both said to be "32-bit
compilers" in the upgrade offers.  But this is simply not true!  Yes, they
both have a 32-bit TYPE (long), and they both have /3 switches to the
compiler that are supposed to produce "32-bit code" or use "32-bit
instructions" (only runnable on 386+ CPUs).  But you'd be hard pressed to
find an instruction in any compiled code that uses any of the extended
registers (EAX, EBX, etc.).  Instead, the "extended" instructions they
use are ENTER and LEAVE and such, which are nowhere near as beneficial as
using the extended registers.  They both included a DOS extender (I
believe Phar Lap's; I've lost the disk!) that would convert a compiled
huge-model program and make it REALLY 32-bit, but there were certain
restrictions on programming, and it was only a "demo" copy that would use
only 2MB of extended memory.  If you wanted to use ALL the extended
memory available, you'd have to spend another $400 or so for the full
Phar Lap extender.  THIS is why I have such a bad attitude towards
vendors who say their compiler is 32-bit but don't really produce 32-bit
code.  Some even use the 32-bit label simply because the compiler itself
runs under a DOS extender (which says nothing about the code the compiler
produces!). 
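
To make the complaint concrete, here is a minimal sketch of what I mean
(the assembly is illustrative, written from memory, not dumped from
either compiler):

    long a, b;

    a += b;    /* one 32-bit addition at the source level */

    /* What a 16-bit compiler typically emits -- no extended
     * registers, and the carry between the two 16-bit halves
     * has to be propagated by hand:
     *
     *     mov  ax, word ptr [b]       ; low word of b
     *     mov  dx, word ptr [b+2]     ; high word of b
     *     add  word ptr [a], ax
     *     adc  word ptr [a+2], dx     ; add with carry
     *
     * What genuinely 32-bit code can do instead, using an
     * extended register:
     *
     *     mov  eax, dword ptr [b]
     *     add  dword ptr [a], eax
     */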

> > If anyone is interested, I have Intel's programmer's guide to the 386+ 
> > processors, and I would be glad to post the timing specs for both 16-bit 
> > and 32-bit instructions.
> 
> I'm frankly less interested in the 386+ processors than in the 286 and
> whatever bizarre compatibility mode it is that later processors run under
> in DOS.  Isn't it the case that the instruction timings are different for
> the different chip modes?  Remember that go32 provides its own pseudo-
> OS in order to use the flat memory model; I'd argue that comparing it
> with BC/MSVC is an apples-and-oranges job.  BC can compile Windows programs
> and DJGPP can't, and so forth....
Let's get detailed again.  Each CPU has its own set of timings for each 
instruction.  CPUs capable of more than one mode (286+) also have 
different timings FOR SOME INSTRUCTIONS based on what mode they are in.  
This is almost entirely determined by whether or not the instruction is a 
jump- or call-type instruction (anything that modifies the IP register).  
ALL "general" instructions, such as ADD, MOV, INC, DEC, etc., 
DO NOT have different timings based on the mode the processor is in.

Yes, DJGPP provides its own protected-mode pseudo-OS, and BC and MSVC 
compile and run under Windows, but Windows itself runs in protected mode, 
so the point is moot.  (Timings are not different between the flat model 
and any other protected-mode arrangement; there are only two sets of 
values, one for real mode and one for protected mode.)

> >As far as Unix programs and such are concerned, what's wrong with 
> >assuming that the size of an int is the same as the size of a pointer?  
> >This is almost a given in the C programming world, or at least it should 
> >be.
> 
> Absolutely NOT!
> 
> You've never done much cross-platform porting, have you?  There are
> compilers that give you just as much control over whatever it is you're
> doing, but use different 'int' sizes for various reasons.  And you'd
> better not make any assumptions about the size of 'int' in your code,
> or else you'll wind up with subtle bugs when you move to a different
> int-sized compiler.
Someone else already corrected me on this.  I admit that I was wrong, or 
just confused, at the time I wrote this little gem.  Yes, I have done 
cross-platform porting, and have run into major problems with the way 
programmers assume things about what platform they are on.
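
The classic form of the bug, for anyone who hasn't been bitten by it
yet, looks something like this (a hypothetical fragment, not from any
real program):

    #include <stdio.h>

    int main(void)
    {
        char buffer[16];
        char *p = buffer;

        /* BAD: assumes sizeof(int) == sizeof(char *).  Harmless on
         * many Unix boxes; silently truncates the pointer on a DOS
         * compiler with 16-bit ints and far (32-bit) pointers. */
        int bogus = (int) p;

        /* Better: never round-trip a pointer through an int, and
         * check what the compiler actually gives you. */
        printf("sizeof(int) = %u, sizeof(char *) = %u, bogus = %d\n",
               (unsigned) sizeof(int), (unsigned) sizeof(char *), bogus);
        return 0;
    }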

> 
> >> For example, many Macintosh compilers have 32-bit pointers, and 16-bit
> >> 'int's.  Or, for example, the Metrowerks C/C++ compilers (again for the
> >> Mac), in which an 'int' is 16 bits when compiling for the MC680x0, and
> >> 32 bits when compiling for the PowerPC.  The reason is efficiency; 16-bit
> >> operations are faster on the Motorola chips than 32-bit operations.  Thus
> >> the choice of a 16-bit 'int', the "natural" integer type.  For the PPC,
> >> the 32-bit integer is faster, so *it* is made the 'default' int.
> >
> >I would suggest that the compilers for the Mac which use 16-bit ints are 
> >not true 32-bit compilers.  When you say that a particular compiler is 
> >32-bit, what do YOU assume?  That POINTERS are 32 bits?  That's IT?  I 
> >assume that it uses 32-bit ints.  I could care less what type its 
> >pointers are.  If it's an Intel machine, let it use far pointers or 
> >something, I really don't care.  But to say that a compiler is 32-bit 
> >when it uses 16-bit ints is stretching it a bit, if you ask me.
> 
> Actually, Metrowerks lets you pick whether you want 16-bit or 32-bit
> integers under its 68K backend.  16-bit integers are faster on earlier
> chips (which are still in use!), and have the advantage of taking up
> a lot less space.  32-bit integers are more accommodating of large values,
> and just as fast on later chips.  You pick the one most suited to your
> needs.  If you expect your code to be run on all Macs, you use 16-bit
> ints.
Glad to hear this!  I guess the Mac programmers are luckier than most DOS 
programmers -- it sounds like the tools are better (except for DJGPP, of 
course).

> 'far' pointers are one of those incredibly skanky kludges that were
> intended to stretch a poorly-architected machine into the modern era.
> And they're not portable by any stretch of the imagination.
Yes, this is why I hate BC and MSVC so much.  To use an array that is 
over 64K, you have to use one of these non-portable keywords such as 
far or huge.  Ever try porting a DOS program that uses far or huge 
structures to a Unix platform?
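
For anyone who hasn't had the pleasure, a sketch of what I mean
(Borland-style syntax; __TURBOC__ is Borland's predefined macro, and
the exact keyword and header vary from compiler to compiler):

    #ifdef __TURBOC__
    #include <alloc.h>              /* Borland-specific: farmalloc() */
    #else
    #include <stdlib.h>
    #endif

    void example(void)
    {
    #ifdef __TURBOC__
        /* 16-bit DOS: anything over 64K takes a non-standard 'huge'
         * pointer so address arithmetic carries across segments. */
        char huge *table = (char huge *) farmalloc(100000L);
    #else
        /* The same thing under DJGPP or any Unix cc: plain C. */
        char *table = malloc(100000L);
    #endif
        /* ... use table ... */
    }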

> Why do you require that the 'int' type be 32 bits?  Just declare things
> 'long' when you need that many bits; otherwise, use 'int' and let the
> compiler sort things out.  Assumptions such as you describe above can
> and have caused a lot of grief when it comes to porting.
As I think I have said before, I don't require that 32-bit compilers have 
a 32-bit int type, just that the 32-bit type, whatever it is, be compiled 
into 32-bit code, instead of 16-bit code thrown together to manipulate 
32-bit values (exactly the ADD/ADC business sketched above).

> 
> >Yes, they may be faster, but if you have to compute a 32-bit arithmetic 
> >value, what would be faster?  Two (actually more, because of the 
> >conditional carry) instructions or just one?  I would submit that the one 
> >32-bit instruction is faster, and I would be willing to provide the 
> >instruction timings to prove it.
> 
> If you have to do 32-bit math, you declare a 32-bit variable by calling it
> 'long' instead of 'int.'  What *I* am saying is that it makes no sense to
> require the *default* 'int' size to be 32 bits, especially when that may
> not be the most efficient integer size.
> 
> Real history:  About five years ago, most Macs were still running MC68000
> chips, rather than the 68020.  The 68030 was brand-new.  There were also
> two major C compilers on the market, MPW C and THINK C.  Commercial apps
> compiled under THINK C were often sleeker and faster than those compiled
> under MPW, because THINK C used a 16-bit 'int' and MPW used a 32-bit 'int'.
> 
> Both compilers offered full access to the entire memory space of the
> machine, and both offered full 32-bit codegen.  How can you say that
> THINK C was somehow "not a true 32-bit compiler" if it didn't have *any*
> shortcomings relative to its (supposedly "truly 32-bit") competitor?
I think you are the one comparing apples to oranges here.  What we are 
talking about is 32-bit compilers on the DOS platform, not the Mac or 
Windows.  On the Intel platform, there IS no difference in the timings 
for most instructions based on whether you use the 16-bit registers or 
the 32-bit registers.  So what do the features or shortcomings of the 
68K chip set have to do with this?
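
To put rough numbers on it (quoting the 386 timings from memory, so
treat them as illustrative): a register-to-register ADD takes 2 clocks
whether the operands are 16-bit or 32-bit.  So, to add two 32-bit
values already sitting in registers:

    32-bit code:  ADD EAX, EBX                   = 2 clocks
    16-bit code:  ADD AX, BX  then  ADC DX, CX   = 2 + 2 = 4 clocks

The 32-bit version wins by a factor of two before you even count the
extra MOVs needed to shuffle the two halves around.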

> 
> > Yes, Unix code does have a lot to be desired, but at least I can 
> > compile the majority of it on my PC without changes with DJGPP.  Can you 
> > say that for your Mac? (just wondering)...
> 
> You're comparing Apples and oranges again.  :-)
> 
> Actually, I can port a great deal of (TTY-based) Unix code to my Mac without
> any major modifications.  Now:  How much Unix code can you port to *Windows*
> without any major changes?  That's a much more balanced comparison.

As you said, DJGPP is not a Windows compiler, yet...  So you can't even 
compare it to the Mac or Windows compilers.  What you CAN do is compare 
it to Borland and MS compilers that produce code for the DOS platform, 
and what I've tried to say is that DJGPP is more of a 32-bit compiler 
than either BC or MSVC (at least the versions I have).

Sorry about that comment about the Mac, it was wrong of me and I didn't 
mean to insult anyone...


Fred Reimer

+-------------------------------------------------------------+
| The views expressed in the above are solely my own, and are |
| not necessarily the views of my employer.  Have a nice day! |
| PGP2.6 public key available via `finger fwreimer AT crl DOT com`   |
+-------------------------------------------------------------+


