From: "Andrew Crabtree"
Newsgroups: comp.os.msdos.djgpp
Subject: Re: ! Optimization in Practice
Date: Fri, 3 Apr 1998 17:10:44 -0800
Organization: Hewlett-Packard, Roseville
Lines: 40
Message-ID: <6g41an$bcl$1@rosenews.rose.hp.com>
References: <3524804D DOT 75E1 AT infi DOT net>
NNTP-Posting-Host: ros51675cra.rose.hp.com
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp
Precedence: bulk

Joe Wright wrote in message <3524804D DOT 75E1 AT infi DOT net>...
>What are the pros and cons of various optimization levels?

The higher the optimization level, the longer the compile time and the more
memory needed during compilation. Some optimizations generate smaller code;
some (anything with "unroll" or "inline" in its name) generate fatter code.
Most optimizations make your code run faster.

>I understand that Optimization produces code that the debugger has
>problems with for various reasons but

Debugging optimized code can be tricky: the debugger sometimes appears to
skip lines of code only to come back to them later, and local variables can
disappear entirely. The COFF file format also has some difficulty with debug
info in optimized code. And some compilers cannot produce debug and optimized
code simultaneously. The usual default (for most people, I think) is
"-g -O2", optionally stripping the debug info later if it is not needed.

> if -O3 produces such fast results, why not use it all the time?

-O3 sometimes produces slower code than -O2.

> Why would I choose -O1 or -O2
>over -O3?

Generally, the higher optimization levels invoke either

1) very lengthy compile-time optimizations,
2) only marginally useful optimizations (sometimes good, sometimes bad), or
3) possibly buggy optimizations.

For #2, you may well have an 80/20 case: 80% of the time the optimization
speeds up your code, but 20% of the time it slows it down. #3 should only be
a concern if you are using pgcc.

Andy