Mail Archives: djgpp/1998/08/09/09:17:41

Date: Sun, 9 Aug 1998 16:17:24 +0300 (IDT)
From: Eli Zaretskii <eliz AT is DOT elta DOT co DOT il>
To: Arthur <arfa AT clara DOT net>
cc: DJGPP Mailing List <djgpp AT delorie DOT com>
Subject: RE: should i bother learning asm?? or just learn dx?
In-Reply-To: <000201bdc38a$48725ec0$ab4d08c3@arthur>
Message-ID: <Pine.SUN.3.91.980809155933.29413A-100000@is>
MIME-Version: 1.0

On Sun, 9 Aug 1998, Arthur wrote:

> GCC is good at optimising, but the way that C is structured, a fully optimised
> program will still not be as optimised as one written in ASM from scratch. For
> instance, writing a simple "hello world" program in DJGPP will produce code ~50k in
> size with full optimisation. In ASM, you can get it down to less than 5k.

IMHO, this example doesn't demonstrate anything.  Moreover, I submit that 
using such an example is a Bad Thing, since it leads newbies to wrong 
conclusions.  Here's why.

	1) Size optimizations and speed optimizations are different and 
usually conflict with each other.  This thread began with talk about speed 
optimizations.

	2) The size savings you are boasting about are an illusion: they 
are just the constant overhead of the DJGPP startup code and of the 
library functions it calls.  In other words, a 2KB-long program and a 
2MB-long one both suffer the same overhead.  In particular, this 
overhead has nothing to do with the code generated by GCC from your C 
source.
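
If you want to see this for yourself, here is a small sketch of the
experiment (the file names and switches are only an example, and
"bigger.c" stands for any larger program of yours):

	/* hello.c -- compile it and any larger program of yours with
	   the same switches, e.g.:

	       gcc -O2 -o hello.exe hello.c
	       gcc -O2 -o bigger.exe bigger.c

	   The two images differ only by the code *you* wrote; the
	   startup code and the parts of the library that printf drags
	   in are the same fixed overhead in both. */
	#include <stdio.h>

	int main(void)
	{
	  printf("hello, world\n");
	  return 0;
	}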

> No it's not necessary, and on the PC it's very hard to learn. But note that
> professional programmers use hand-optimised ASM frequently - especially in graphics
> code and interrupt routines.

Professional programmers use the following 2 Golden Rules of Code 
Optimization:

	Rule no. 1:  Don't optimize.

	Rule no. 2:  Don't optimize yet.

In other words, optimizations are the *last* stage of the development 
process, and they *must* be preceded by profiling the code.  It is a 
well-known fact that trying to guess which part of the code is the best 
candidate for optimization by just looking at the source usually leads 
to wrong guesses.  The trade literature is full of stories from the 
trenches about people who optimized a function only to find out that it 
had no effect on the program's speed, because that function was 
responsible for only 5% of the total run time.
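
For example, with DJGPP a profiling session boils down to something 
like this (the program below is made up just to show the typical 
surprise):

	/* profdemo.c -- build it with profiling enabled, run it, then
	   ask gprof where the time went:

	       gcc -pg -o profdemo.exe profdemo.c
	       profdemo
	       gprof profdemo.exe > profdemo.lst

	   The flat profile in profdemo.lst tells you which function is
	   the real candidate for optimization. */
	#include <stdio.h>

	/* Looks expensive on paper, but runs only once. */
	static double fancy_but_rare(void)
	{
	  double s = 0.0;
	  int i;

	  for (i = 0; i < 1000; i++)
	    s += (double)i / (i + 1);
	  return s;
	}

	/* Looks trivial, but is called ten million times and dominates
	   the run time--exactly what only a profile reveals. */
	static double plain_but_hot(double x)
	{
	  return x * x + 1.0;
	}

	int main(void)
	{
	  double total = fancy_but_rare();
	  long i;

	  for (i = 0; i < 10000000L; i++)
	    total += plain_but_hot((double)i);
	  printf("%g\n", total);
	  return 0;
	}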

Also, conventional wisdom has it that rewriting a function in assembly 
will usually speed it up by a factor of 2, sometimes up to 4.  If you 
need more than that, you will have to change the algorithms, the data 
structures, or the entire program design.
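
Here is a made-up illustration of that last point: once a linear scan 
of a large sorted table is replaced with a binary search, the better 
algorithm wins by orders of magnitude, and no amount of hand assembly 
applied to the old loop can catch up.

	/* search.c -- the same lookup done two ways. */
	#include <stdio.h>
	#include <stdlib.h>

	#define N 100000

	/* O(N): an assembly rewrite of this loop only buys a constant
	   factor of 2 to 4. */
	static int linear_find(const int *a, size_t n, int key)
	{
	  size_t i;

	  for (i = 0; i < n; i++)
	    if (a[i] == key)
	      return 1;
	  return 0;
	}

	static int cmp_int(const void *p, const void *q)
	{
	  int a = *(const int *)p, b = *(const int *)q;

	  return (a > b) - (a < b);
	}

	/* O(log N): a better algorithm, still plain C. */
	static int binary_find(const int *a, size_t n, int key)
	{
	  return bsearch(&key, a, n, sizeof *a, cmp_int) != NULL;
	}

	int main(void)
	{
	  static int table[N];
	  size_t i;

	  for (i = 0; i < N; i++)
	    table[i] = 2 * (int)i;	/* already sorted */
	  printf("%d %d\n",
	         linear_find(table, N, 2 * (N - 1)),
	         binary_find(table, N, 2 * (N - 1)));
	  return 0;
	}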

As to the use of assembly in interrupt routines--this is mostly done 
not because of speed concerns, but because some of the bookkeeping 
required of an interrupt function cannot be done in C.
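
For example, hooking the BIOS timer tick with DJGPP usually looks 
roughly like the sketch below (an outline only: the names are 
illustrative, and a real handler must also lock its code and data in 
memory, which is omitted here).  The library builds a small assembly 
wrapper that saves the registers and returns with IRET; the C function 
does only the easy part.

	/* tick.c -- outline of hooking interrupt 0x1C via <dpmi.h>. */
	#include <dpmi.h>
	#include <go32.h>

	static volatile unsigned long ticks;

	static void tick_handler(void)
	{
	  ticks++;		/* keep the C part short and simple */
	}

	static _go32_dpmi_seginfo old_vec, new_vec;

	void install_tick_handler(void)
	{
	  _go32_dpmi_get_protected_mode_interrupt_vector(0x1c, &old_vec);
	  new_vec.pm_selector = _go32_my_cs();
	  new_vec.pm_offset = (unsigned long)tick_handler;
	  _go32_dpmi_allocate_iret_wrapper(&new_vec);	/* the asm bookkeeping */
	  _go32_dpmi_set_protected_mode_interrupt_vector(0x1c, &new_vec);
	}

	void remove_tick_handler(void)
	{
	  _go32_dpmi_set_protected_mode_interrupt_vector(0x1c, &old_vec);
	  _go32_dpmi_free_iret_wrapper(&new_vec);
	}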
