
From: greve AT rs1 DOT thch DOT uni-bonn DOT de (Thomas Greve)
Subject: Re: How fast?? part 2
To: JMILLER AT CHESS DOT EISC DOT UTOLEDO DOT edu
Date: Fri, 29 May 92 12:06:43 NFT
Cc: djgpp AT sun DOT soe DOT clarkson DOT edu
Status: O

> 
> #include <stdio.h>
> #include <stdlib.h>
> 
> #define buf_size     16384       
> #define MAXKERN      21
> #define MAXCOEF      10
> #define NB           512
> #define NL           512
> #define ITER         9
> 
> unsigned char   *a[MAXKERN]; /* Temporary storage */
> short           *coeftab; /* coefficient array */
> short           *lx;  /* Up to MAXCOEF unique coef luts */
> unsigned char   buffer[16384]; /* Dummy buffer */
> 
> void main(int argc, char   *argv[])
> { 
> 	short           lineptr[MAXKERN];
> 	short           i, j, t;
> 	short           ix, nx, ny, nxh, nyh, np, *coefptr;
> 	short           ky, kx, nc, nxny, offset;
	^^^^^ 
If you only use the small memory model (a small amount of data) and
short integers, gcc cannot make use of its main advantage: (almost)
unlimited address space and 32-bit ints.
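
A minimal sketch (hypothetical, not taken from the quoted program) of
where this bites: with gcc's 32-bit ints the loop below simply counts to
100000, while a 16-bit int cannot even hold the loop bound.

    #include <stdio.h>

    int main(void)
    {
        int  i;         /* 32 bits under gcc, 16 bits under MSC/TCC */
        long sum = 0;

        for (i = 0; i < 100000L; i++)   /* 100000 does not fit in 16 bits */
            sum += i;

        printf("sum = %ld\n", sum);
        return 0;
    }

With 16-bit ints, i wraps around long before it reaches 100000, so the
same source would need long counters (or explicit casts) to behave the
same way under MSC or TCC.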

As gcc optimizes at an intermediate level -- not at the CPU-instruction
level as MSC does -- it cannot use the CPU's resources as efficiently as
a hardware-specific compiler can.

So when the very same sources are compiled with gcc and with a native
cc, the native cc usually wins. (It does on this rs6000, for example.)
OK -- but only if the native cc optimizes well, which TCC, for one, does
not.

But as soon as you need 32-bit ints, MSC will look rather poor.
Usually you cannot use the same sources with MSC or TCC that you can
use with gcc -- you will run into problems with 64K segments, 16-bit
ints, etc., or at least with library functions on other Unix cc's.
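
As a minimal sketch (again hypothetical, reusing only the 512x512
dimensions NL and NB from the quoted program): a single flat 256K array
is unproblematic with gcc, but it does not fit into one 64K segment, so
under 16-bit MSC or TCC the same declaration would need huge pointers or
row-by-row allocation instead.

    #include <stdio.h>

    #define NB 512
    #define NL 512

    /* 512 * 512 = 256K bytes -- larger than a single 64K segment */
    static unsigned char image[(long)NL * NB];

    int main(void)
    {
        long i;

        for (i = 0; i < (long)NL * NB; i++)
            image[i] = (unsigned char)(i & 0xff);

        printf("last pixel = %d\n", image[(long)NL * NB - 1]);
        return 0;
    }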

(This was *the* problem of the gnu-ish project!)

				- Thomas

   greve AT rs1 DOT thch DOT uni-bonn DOT de
   unt145 AT dbnrhrz1


