Mail Archives: djgpp/2002/04/19/06:11:02
> From: "Goh, Yong Kwang" <gohyongkwang AT hotmail DOT com>
> Newsgroups: comp.os.msdos.djgpp
> Date: Fri, 19 Apr 2002 16:38:08 +0800
>
> Any drawbacks to turning on optimization when generating a production copy
> for distribution?
Yes: you are distributing a program different from the one you
debugged.
> Is there a possibility that the compiler may misinterpret the original
> program logic and rearrange the code such that the program doesn't work as
> expected with optimization turned on, even though the program source is
> clearly OK?
That would be a bug in the compiler, so its probability should be
low (unless you use some unreleased development snapshot).
But there's a much more likely cause for such a program to break:
optimized code can expose bugs in your program that unoptimized code
concealed.  Optimizations rearrange code and data in a way that is
supposed to produce a semantically identical program, but a buggy
program is not guaranteed to behave the same way...
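To make this concrete, here's a contrived example (my own sketch, not
taken from your code) of the kind of latent bug that can stay hidden
at -O0 and surface at -O2:

    #include <stdio.h>

    /* BUG: "sum" is never initialized.  At -O0 its stack slot often
       happens to contain zero, so the function appears to work; at
       -O2 the compiler may keep "sum" in a register holding whatever
       garbage was there before, and the result changes.  */
    static int sum_positive(const int *a, int n)
    {
        int i, sum;

        for (i = 0; i < n; i++)
            if (a[i] > 0)
                sum += a[i];
        return sum;
    }

    int main(void)
    {
        int data[3] = { 1, -2, 3 };

        printf("%d\n", sum_positive(data, 3)); /* expect 4, maybe */
        return 0;
    }

The program was always broken; the optimizer merely changed which
leftover value the uninitialized variable picked up.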
For this reason, someone wise compared debugging an unoptimized
program and then turning on optimizations before shipping the final
version to a diver who trains in shallow water with all the safety
gear on, then throws the safety gadgets away when diving for real in
deep water.  It simply doesn't make sense to do that.
Why not use optimizations during development and debugging as well?
Unlike many other compilers, GCC allows you to use -O2 and -g together, so
you don't need to make a painful compromise between the two.
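For example, a typical debug-friendly optimized build (the file names
here are made up) looks like this:

    gcc -Wall -O2 -g -o myprog.exe myprog.c

The executable is fully optimized, yet GDB can still map it back to
source lines and variables, although some variables may show up as
optimized away.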
> How do we determine the level of optimization that is appropriate or
> suitable for a program, apart from experimentation? -O1, -O2 or -O3?
The best default is -O2.  Beyond that, you will need to profile and
experiment, but don't expect speedups of more than 10-15% from
playing with optimization switches.  See section 14.2 of the DJGPP
FAQ list for more about this.
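If you do decide to experiment, the usual cycle (again with made-up
file names) is to measure before tweaking:

    gcc -O2 -pg -o myprog.exe myprog.c
    myprog
    gprof myprog.exe gmon.out > profile.txt

and only then try -O3 or individual -f switches on the few functions
that dominate the flat profile.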