Mail Archives: djgpp/1997/12/02/03:52:01

From: Christopher Croughton <crough45 AT amc DOT de>
Message-Id: <97Dec2.104900gmt+0100.17029@internet01.amc.de>
Subject: Re: Alternate malloc?
To: dj AT delorie DOT com (DJ Delorie)
Date: Tue, 2 Dec 1997 09:47:10 +0100
Cc: crough45 AT amc DOT de, djgpp AT delorie DOT com
In-Reply-To: <199711291657.LAA17674@delorie.com> from "DJ Delorie" at Nov 29, 97 05:57:55 pm
Mime-Version: 1.0

DJ Delorie wrote:
> 
> All the malloc*.c files are my own invention.  No sense replacing
> BSD's malloc with yet another copyrighted malloc.  

That's true.  I thought some of them were public domain, though.

> I don't know how
> they compare with other published malloc packages.  Try them and see.

Well, on the DEC Alpha I got these results:

test0	SIGSEGV
test1	OK
test2	loads of alignment errors (the Alpha is 64 bit - see the sketch below)

Under DJGPP on my 586 PC, I got:

test0	SIGSEGV
ntest	OK
test1	OK
test2	locked the machine solid
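
Incidentally, the alignment errors on the Alpha look like the usual
32-bit assumption: if an allocator rounds its block sizes (header
included) to 4 bytes, the pointers it hands back aren't safe for 8-byte
objects on a 64-bit machine.  This is only a sketch of the kind of
rounding I mean, not a claim about what that test actually does:

#include <stddef.h>

/* Assumed worst-case alignment for the sketch - 8 bytes covers pointers
   and doubles on a 64-bit Alpha; a 32-bit x86 gets away with 4. */
#define MAX_ALIGN 8

/* Round a request (header included) up to a MAX_ALIGN boundary so the
   pointer handed back is aligned for any object type. */
static size_t round_up(size_t n)
{
    return (n + MAX_ALIGN - 1) & ~(size_t)(MAX_ALIGN - 1);
}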

> test1 is similar in speed to BSD, but wastes slightly less in my
> tests.  That may not be the case in "normal" applications, though.
> test6 is my other favorite.  I realize it's about 3x slower than
> test1, but I don't think the extra overhead is really that significant
> in a "normal" application, and it does save a lot of memory.

I'll vote for the massive memory saving.  I think that's more relevant
in most cases, and the BSD one can always be provided as source for
people who want an alternative and don't mind the memory overhead.

> SGI is not an intel platform.  I used that for various reasons,
> including (1) it's my favorite, (2) there's no chance of my code
> crashing the system, and (3) it provides *two* commercial malloc
> implementations to test against.  It also happens to be the one with
> the web server on it, so I can do my development right in the web
> server directory and it's automatically published.

That makes sense.

> Since I realize my benchmark may not reflect "normal" applications, I

There is no 'normal' application.  Several of mine use loads of small
memory blocks (string handling), whereas others use chunks of a meg
or more.
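
To make that concrete, the two extremes I have in mind look roughly
like this (the sizes and counts are made up purely for illustration):

#include <stdlib.h>
#include <string.h>

/* String-handling pattern: lots of tiny, short-lived blocks. */
static void small_blocks(void)
{
    int i;
    for (i = 0; i < 10000; i++) {
        char *s = malloc(16 + (i % 48));    /* 16..63 bytes each */
        if (s) {
            memset(s, 'x', 8);
            free(s);
        }
    }
}

/* Bulk pattern: a few megabyte-sized buffers held at once. */
static void big_chunks(void)
{
    void *buf[8];
    int i;
    for (i = 0; i < 8; i++)
        buf[i] = malloc(1024L * 1024L);     /* about a meg each */
    for (i = 0; i < 8; i++)
        free(buf[i]);
}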

> saw no point in trying to get that accurate with my results.  I just
> wanted a rough comparison.  The memory overhead results are accurate,
> since they rely only on the algorithm itself.  

Ah, OK.  So they aren't skewed by sbrk 'holes'?
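
What I'm getting at: if the waste figure were taken from the program
break, roughly like the snippet below, then anything else that calls
sbrk() in between (or a hole that never comes back) would get charged
to the allocator.  If it comes from the algorithm's own bookkeeping it
wouldn't.  This is just my guess at a measurement, not how your
benchmark actually works:

#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char *before, *after;
    unsigned long requested = 0;
    int i;

    before = sbrk(0);
    for (i = 0; i < 1000; i++) {
        malloc(100);        /* deliberately not freed; only growth matters here */
        requested += 100;
    }
    after = sbrk(0);

    printf("break grew %ld bytes for %lu bytes requested\n",
           (long)(after - before), requested);
    return 0;
}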

> Only the timing results
> would depend on the compiler and/or machine you're running on, and I
> suspect that the relative values would be similar.  Obviously, the
> absolute timings will change as you change CPUs and clock speeds.

I suspect that some might be more optimisable (is that a word?) than
others, but it's probably not that significant at this level of discussion.

Chris C


