Mail Archives: djgpp/1993/05/10/11:57:45
----- Begin Included Message -----
From: gpk AT physics DOT att DOT com
I wonder if a memory allocator that keeps the blocks sorted by size in
a b-tree might be good. You'd expect rapid access to a block of
approximately the correct size (about ln(N) tests), and since you
could use a best-fit (rather than a first-fit) strategy, there would
probably be less memory fragmentation than with the classic
list_sorted_by_address algorithm. Since blocks could be split, and the
remainders put back on the b-tree, you wouldn't have the BSD
lack-of-recycling problem.
--Greg Kochanski
----- End Included Message -----
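As a rough sketch of the size-ordered-tree lookup Greg describes (a
plain binary search tree stands in for a real b-tree, purely for
brevity; the names free_node, tree_insert, and best_fit are made up
for illustration):

#include <stddef.h>
#include <stdio.h>

/* One free block, kept in a tree keyed by size.  A real allocator
   would also store the block's address; that is omitted here. */
struct free_node {
    size_t size;
    struct free_node *left, *right;
};

/* Insert a free block into the size-ordered tree. */
static struct free_node *
tree_insert(struct free_node *root, struct free_node *n)
{
    if (root == NULL)
        return n;
    if (n->size < root->size)
        root->left = tree_insert(root->left, n);
    else
        root->right = tree_insert(root->right, n);
    return root;
}

/* Best fit: find the smallest block whose size is >= want.  Each
   step descends one level, so a balanced tree needs about ln(N)
   comparisons. */
static struct free_node *
best_fit(struct free_node *root, size_t want)
{
    struct free_node *best = NULL;

    while (root != NULL) {
        if (root->size >= want) {
            best = root;        /* big enough; look for a tighter fit */
            root = root->left;
        } else {
            root = root->right; /* too small; look at bigger blocks */
        }
    }
    return best;
}

int main(void)
{
    static size_t sizes[] = { 200, 512, 100, 48 };
    struct free_node nodes[4];
    struct free_node *root = NULL;
    struct free_node *hit;
    int i;

    for (i = 0; i < 4; i++) {
        nodes[i].size = sizes[i];
        nodes[i].left = nodes[i].right = NULL;
        root = tree_insert(root, &nodes[i]);
    }

    hit = best_fit(root, 99);
    if (hit != NULL)
        printf("best fit for 99 bytes: %lu-byte block, %lu left over\n",
               (unsigned long)hit->size, (unsigned long)(hit->size - 99));
    return 0;
}

A real allocator would also have to delete the chosen node and
re-insert the remainder after splitting, which this sketch leaves out.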
I read an article some years back that discussed various memory
allocation schemes... A group of researchers analyzed the behaviour of
best-fit and first-fit memory allocation schemes... The conclusion they
came to was that a first-fit scheme is better... Counter-intuitive??
Yes, until you think about the (bad) side effects of a best fit...
Consider this sequence:
1) allocate 200 bytes - A
2) allocate xxx bytes - B
3) allocate 100 bytes - C
4) free A
5) free C
6) allocate 99 bytes - D
A first-fit scheme will carve D out of A's old block, and you have 101
bytes left over. A best-fit scheme will carve D out of C's old block,
and you have 1 byte left over. The 101-byte hole can still satisfy
future requests; the 1-byte sliver almost certainly never will. As you
can see from this (very simple) example, best fit tends to litter the
heap with tiny unusable fragments, so memory fragmentation will be
much worse with best-fit than with first-fit. A small simulation of
both policies follows below.
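To make the numbers concrete, here is a tiny simulation of both
policies over the two holes left after steps 4 and 5 (the names
first_fit, best_fit, and holes are invented for this sketch):

#include <stddef.h>
#include <stdio.h>

/* Free holes left after steps 4 and 5: A's 200 bytes, C's 100 bytes. */
static size_t holes[] = { 200, 100 };
#define NHOLES (sizeof holes / sizeof holes[0])

/* First fit: take the first hole that is big enough. */
static int first_fit(size_t want)
{
    size_t i;

    for (i = 0; i < NHOLES; i++)
        if (holes[i] >= want)
            return (int)i;
    return -1;
}

/* Best fit: take the smallest hole that is big enough. */
static int best_fit(size_t want)
{
    size_t i;
    int best = -1;

    for (i = 0; i < NHOLES; i++)
        if (holes[i] >= want && (best < 0 || holes[i] < holes[best]))
            best = (int)i;
    return best;
}

int main(void)
{
    size_t want = 99;           /* step 6: allocate 99 bytes for D */
    int f = first_fit(want);
    int b = best_fit(want);

    if (f < 0 || b < 0)
        return 1;
    printf("first fit picks the %lu-byte hole, leaving %lu bytes\n",
           (unsigned long)holes[f], (unsigned long)(holes[f] - want));
    printf("best fit picks the %lu-byte hole, leaving %lu byte(s)\n",
           (unsigned long)holes[b], (unsigned long)(holes[b] - want));
    return 0;
}

Running it prints the 101-byte and 1-byte leftovers described above:
the 101-byte hole is still useful, the 1-byte one is not.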