On Mon, 8 Mar 1999, Mark E. wrote:
> 1) Use of GNU malloc provided by Bash.
That figures. It's not that gmalloc is slower (I don't know if it is,
but I doubt it's significantly slower). The problem is probably just
what it looked like to me: gmalloc is a relocating allocator, meaning
that it sometimes decides to relocate large buffers behind the
scenes when it is about to run out of memory. In other words, before
it calls sbrk, it tries very hard to reuse memory it already owns.
And this relocation takes time, since it's a kind of garbage
collection.
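To make the hazard concrete, here is a minimal sketch of the
handle-based scheme a relocating allocator uses. This is my own
illustration of the idea, not gmalloc's actual code, and all the
names in it are made up:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* A minimal, hypothetical handle-based relocating allocator.
       Callers must always go through the handle, because the
       allocator is free to move the underlying block.  */

    typedef struct { char *ptr; size_t size; } relblock;

    static relblock *rel_alloc (size_t size)
    {
      relblock *b = malloc (sizeof *b);
      b->ptr = malloc (size);
      b->size = size;
      return b;
    }

    /* Stand-in for the allocator compacting its heap before it
       would otherwise have to call sbrk: the block silently moves
       and the handle is updated behind the caller's back.  */
    static void rel_compact (relblock *b)
    {
      char *moved = malloc (b->size);
      memcpy (moved, b->ptr, b->size);
      free (b->ptr);
      b->ptr = moved;
    }

    int main (void)
    {
      relblock *b = rel_alloc (32);
      char *raw;

      strcpy (b->ptr, "hello");
      raw = b->ptr;       /* BUG: caching the raw address */
      rel_compact (b);    /* the allocator decides to relocate */

      /* `raw' now dangles; only the handle is still valid.  */
      printf ("through the handle: %s\n", b->ptr);
      printf ("stale pointer %p vs current %p\n",
              (void *) raw, (void *) b->ptr);
      return 0;
    }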
This relocation behavior can sometimes mean grave trouble. For
example, in Emacs I once had a nasty bug that took weeks to track
down. It turned out that, since DJGPP's `write' function was calling
malloc (to allocate a buffer for the newline-to-CRLF conversion),
gmalloc would sometimes relocate the buffer whose pointer had
already been passed to `write'. Thus, the pointer to the text would
change from under `write's feet! (`write' was changed to not call
malloc as a result of this, btw.)
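For the curious, that bug had the same shape as the classic mistake
of holding a pointer across realloc. The sketch below uses realloc
as a stand-in for gmalloc's relocation; none of the names come from
DJGPP's actual `write':

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The same hazard in miniature.  grow() plays the role of an
       innocent library call -- like `write' calling malloc for the
       newline-to-CRLF buffer -- that moves a block while the caller
       still holds the old address.  All names are made up.  */

    static char *text;

    static void grow (size_t newsize)
    {
      /* realloc may move the block, just as gmalloc's relocation
         moved the buffer already handed to `write'.  */
      text = realloc (text, newsize);
    }

    int main (void)
    {
      const char *passed;

      text = malloc (16);
      strcpy (text, "some text\n");

      passed = text;   /* the pointer handed to the "syscall" */
      grow (65536);    /* an allocation happens mid-call... */

      if (passed != text)
        puts ("buffer was relocated; the saved pointer is stale");
      else
        puts ("buffer happened to stay put this run");
      return 0;
    }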
I don't know whether Bash uses the relocation feature (it can be
turned off, IIRC), but if it does, I'm not sure it is a good idea to
use gmalloc in Bash. It could bite you when you least expect it.
> Sometime before release, I'll reconfigure Bash to use the native
> malloc. Or it may be...
>
> 2) No optimization. Some of the early Bash 2.02.1 binaries were built
> with -O2, but debugging with rhgdb proved annoying because the
> highlight bar kept jumping around, so I stopped using "-O2". It resulted
> in some bloat (20-40K), so perhaps that's part of the slowdown.
Code bloat won't cause a slow-down, but not optimizing surely would.
AFAIK, optimized GCC code is indeed about twice as fast as
non-optimized code.
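If you want to see the gap for yourself, a toy loop is enough. The
sketch below is an arbitrary workload of my own, nothing from Bash;
build it once with -O0 and once with -O2 and compare the timings:

    #include <stdio.h>
    #include <time.h>

    /* A crude check of the optimized-vs-unoptimized gap.  Build the
       same file twice and compare, e.g.:
           gcc -O0 bench.c -o bench-O0
           gcc -O2 bench.c -o bench-O2
       The `volatile' sink keeps -O2 from deleting the loop.  */

    volatile unsigned long sink;

    int main (void)
    {
      unsigned long acc = 0, i;
      clock_t start = clock ();

      for (i = 0; i < 100000000UL; i++)
        acc += i ^ (acc >> 3);
      sink = acc;

      printf ("%.2f seconds\n",
              (double) (clock () - start) / CLOCKS_PER_SEC);
      return 0;
    }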
But what worries me more is that all this time we were testing the
wrong binary: it is not optimized and uses a different malloc. I
would strongly suggest beginning to use the optimized binary, and if
you want to replace gmalloc with ours, please do it now. Otherwise,
we run a high risk of not seeing bugs which don't show up in
non-optimized code, but will rear their ugly heads once you release
Bash.
FWIW, I always debug the optimized version, even if it sometimes
means I need clever tricks inside the debugger. It just isn't worth
it to waste additional time debugging a non-optimized version, then
repeating the same testing with the optimized one. More often than
not, once people have a working non-optimized version, they tend to
think that the optimized one doesn't need as much debugging and
testing. And therein be dragons...
You can always keep around a non-optimized version produced from the
same sources, to be used whenever you actually see a bug. You could
even include this additional binary in the prerelease zip, so others
could check whether a bug happens in both the optimized and
non-optimized versions. But I'd strongly advise doing all testing
with a binary produced exactly as you'd release it (except that it
can be unstripped).