
From: cziwkga AT ulcc DOT ac DOT uk (Kevin Ashley)
Newsgroups: comp.os.msdos.djgpp
Subject: Re: Virtual memory problems.
Date: 23 Oct 1996 14:36:36 GMT
Organization: University of London Computer Centre
Lines: 59
Distribution: world
Message-ID: <54lahk$k20@calypso.ulcc.ac.uk>
References: <Pine DOT SUN DOT 3 DOT 91 DOT 961020074010 DOT 29425L-100000 AT is> <845921032 DOT 28340 DOT 0 AT abwillms DOT demon DOT co DOT uk>
Reply-To: k DOT ashley AT ulcc DOT ac DOT uk
NNTP-Posting-Host: silver-e.ulcc.ac.uk
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp

In article <845921032 DOT 28340 DOT 0 AT abwillms DOT demon DOT co DOT uk>, alaric AT abwillms DOT demon DOT co DOT uk (Alaric B. Williams) writes:
|>Eli Zaretskii <eliz AT is DOT elta DOT co DOT il> wrote:
   [Mistaken attribution snipped - Eli was responding to a message
      by Jon Slaughter]
|>
|>>Check out the method your program uses to allocate memory.  If it 
|>>allocates all the 18 MB at once, the DJGPP library allocation functions 
|>>round that up to the next power of 2, which means you ask for 32 MB.
|>
|>How serious is that power-of-two waste - in that is it then free for
|>the next allocation to be taken from (ie if I then request 32-18 =
|>14Mb will it not 'allocate' any more), or is it really that bad?
|>

It really wastes it - virtually at least, although not physically.
This may depend on whether you use the zeroing malloc or the plain
malloc, and possibly also on your DPMI provider. If memory is not
truly allocated until it is referenced, you just end up with holes
in your address space - no big deal usually.

|>If so, what's the point? Surely it's no slower to allocate to the
|>nearest word?
|>

The idea of the algorithm (which comes from the BSD implementation of
libc) is not to be faster on initial allocation, but to be fast
for programs which make many calls to malloc/free for differing sized
blocks. Instead of a single linked list of all free chunks of memory,
which can grow very large and take a long time to search, a set of
lists of blocks of size 16, 32, 64, etc. is kept, and when you request
n bytes, a search is made of the list of blocks of size 2^k, where 2^k >= n.
This also helps to reduce memory fragmentation in programs which make
many thousands or millions of calls to malloc/free.
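
To make that concrete, here is a rough sketch in C of a power-of-2
free-list allocator in the same spirit. It is not DJGPP's actual
malloc - the names (p2_malloc, p2_free) and the bucket sizes are made
up, and it leans on the system malloc to carve out fresh blocks - but
it shows why reuse is fast: a freed block goes back on the list for
its size class, and a later request of similar size pops it straight
off again with no searching.

/* Sketch of a power-of-2 free-list allocator (illustrative only). */
#include <stdio.h>
#include <stdlib.h>

#define MIN_SHIFT  4              /* smallest bucket: 2^4 = 16 bytes  */
#define MAX_SHIFT  24             /* largest bucket: 2^24 = 16 Mbytes */
#define NBUCKETS   (MAX_SHIFT - MIN_SHIFT + 1)

struct chunk {                    /* header kept in front of each block  */
    struct chunk *next;           /* link on the free list when free     */
    int bucket;                   /* which power-of-2 list it belongs to */
};

static struct chunk *freelist[NBUCKETS];

/* Round a request up to the bucket whose block size 2^k covers it. */
static int bucket_for(size_t n)
{
    int k = MIN_SHIFT;
    while (((size_t)1 << k) < n + sizeof(struct chunk) && k < MAX_SHIFT)
        k++;
    return k - MIN_SHIFT;
}

void *p2_malloc(size_t n)
{
    int b = bucket_for(n);
    struct chunk *c = freelist[b];

    if (c != NULL)                /* reuse a block of this size class */
        freelist[b] = c->next;
    else {                        /* otherwise carve a brand-new one  */
        c = malloc((size_t)1 << (b + MIN_SHIFT));
        if (c == NULL)
            return NULL;
    }
    c->bucket = b;
    return (void *)(c + 1);       /* user memory starts after header  */
}

void p2_free(void *p)
{
    struct chunk *c = (struct chunk *)p - 1;
    c->next = freelist[c->bucket];   /* push back on its size list */
    freelist[c->bucket] = c;
}

int main(void)
{
    char *a = p2_malloc(100);     /* lands in the 128-byte bucket   */
    p2_free(a);
    char *b = p2_malloc(90);      /* reuses the same 128-byte block */
    printf("reused: %s\n", a == b ? "yes" : "no");
    return 0;
}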

The algorithm works well for cases where the average size of chunks
allocated is orders of magnitude less than the total address space or 
real memory available. It breaks down when single allocations are
comparable in magnitude to the size of all memory available - i.e.
are within a factor of 10 or so of it.

There are alternative mallocs which use naive algorithms (allocate to the
nearest doubleword - you don't want to go smaller than that or your blocks
will be badly aligned for memory and cache fetches). They are slower in some
cases and more prone to memory fragmentation. Another approach, which we
have used on our larger Unix systems, is to use the power-of-2 algorithm
for blocks up to a specific size (we used 1 Mbyte, I think) and
an alternative, exact-fit algorithm for blocks above that. This
helped on our 4 Gbyte system, with a 2-Gbyte-per-process address space,
when programs making requests for 1.1 Gbyte of memory in one go fell over
because malloc tried to give them 2 Gbyte :-(
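
The size arithmetic of that hybrid scheme looks roughly like the sketch
below (the names and the 1 Mbyte threshold are illustrative, not the
actual ULCC code, and the free lists are left out - only the rounding
decision is being shown): small requests get rounded up to a power of
two, big ones get exactly what they asked for, so an 18 Mbyte request
is not inflated to 32 Mbyte.

/* Sketch of the power-of-2 / exact-fit cut-over (illustrative only). */
#include <stdio.h>

#define BIG_THRESHOLD (1UL << 20)          /* 1 Mbyte cut-over */

static size_t round_up_pow2(size_t n)
{
    size_t s = 16;                         /* smallest size class */
    while (s < n)
        s <<= 1;
    return s;
}

static size_t chunk_size(size_t request)
{
    if (request < BIG_THRESHOLD)
        return round_up_pow2(request);     /* power-of-2 bucket        */
    return request;                        /* exact fit for big blocks */
}

int main(void)
{
    printf("100 byte request  -> %lu bytes allocated\n",
           (unsigned long)chunk_size(100));
    printf("18 Mbyte request  -> %lu bytes allocated\n",
           (unsigned long)chunk_size(18UL << 20));
    return 0;
}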
 
malloc is just straight user-type code, and anyone can replace
it without knowing DJGPP or C internals.
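
As a rough illustration (not something you would want to use as-is):
with a statically linked libc like DJGPP's, defining the allocator
entry points in your own program generally makes the linker take your
versions instead of the library's. The sketch below just bumps the
break with sbrk() and never gives memory back; a real replacement
would also want to provide calloc() and realloc().

/* The shape of a drop-in malloc replacement (deliberately naive sketch). */
#include <stdlib.h>
#include <unistd.h>        /* sbrk() */

void *malloc(size_t n)
{
    /* round up to 8 bytes so returned blocks stay reasonably aligned */
    n = (n + 7) & ~(size_t)7;
    void *p = sbrk(n);
    return (p == (void *)-1) ? NULL : p;
}

void free(void *p)
{
    (void)p;               /* a real replacement would recycle the block */
}
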
----------------------------------------------------------------------------
Kevin Ashley                              K DOT Ashley AT Ulcc DOT ac DOT uk
Development Manager               http://www.ulcc.ac.uk/staff/Kevin+Ashley
University of London Computer Centre.      ...ukc!ncdlab!K.Ashley
                      This is not a signature
