Mail Archives: djgpp/2007/11/05/01:29:10
H. Peter Anvin wrote:
> Robert Riebisch wrote:
>> Hi!
>>
>> Please read this thread at
>> <http://www.bttr-software.de/forum/forum_entry.php?id=1220>, how UPXed
>> binaries cause compile slowdowns.
>>
>
> "upx --brute" and "upx --ultra-brute" runs a bunch of algorithms, and
> picks the smallest one, which may or may not be LZMA. I don't know how
> they affect decompression speed, and it would be interesting to find out.
>
> In particular, the NASM build robot
> (ftp://ftp.zytor.com/pub/nasm/snapshots/) appear to generate marginally
> smaller binaries with --[ultra-]brute than with --lzma --best, and I'm
> wondering where the difference comes from. It would also be nice to
> know what algorithm it ends up using so I can tell it to use the same
> one every time, instead of trying them all every time taking time (a
> whopping minute every night ;)
After all that I have read in this thread, I'm no longer sure that using --lzma, or UPX at all, is a good idea for compressing NASM or any application that may be executed often and many times. GCC (so I should stop using UPX on it), binutils, NASM, fileutils, etc. fit into this category, for which UPX should not be used by default (of course, users remain free to compress them later).
The other group is applications that are not expected to be executed often or many times from a shell script, Makefile, etc. Some examples: GDB, Emacs, RHIDE (isn't the latter already dead?). In this case a one-time slowdown is more likely to go unnoticed. As a result, UPX could be used (even in LZMA mode) for these executables.
Andris
PS. I did some tests: compiling a C-language "Hello World"-style program 10 times from a shell script. I saw a noticeable slowdown when the executables used (GCC, etc.) were compressed with UPX (especially with LZMA). In the worst case (LZMA), the slowdown for compiling the same program 10 times was several seconds (about 3) of CPU time on a 2.4 GHz Intel Core 2 Quad processor (WinXP SP2). NRV-compressed GCC executables also caused a slowdown.