
From: sandmann AT clio DOT rice DOT edu (Charles Sandmann)
Message-Id: <9710200547.AA12963@clio.rice.edu>
Subject: sbrk() algorithm change suggestion
To: djgpp-workers AT delorie DOT com
Date: Mon, 20 Oct 1997 00:47:30 -0600 (CDT)

RFD on sbrk() behavior change:

sbrk() is implemented in crt0.s in ugly assembler.  It's also got some
ugly features.  For example, we store a list of the memory handles we
have seen so they can be freed at exit, since some DPMI providers don't
clean up after nested clients.  This list is currently limited to 256
handles, which in the worst case of small extensions (<64K) ends up
supporting about 16Mb of memory before we reach the end of the table (at
which point we discard the handles and don't free the DPMI memory).  If
CC1 (a nested program) uses more than this, you can leak memory under
some DPMI providers.
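
Roughly, the bookkeeping amounts to something like this (shown in C for
clarity; the real thing is assembler in crt0.s, and the names below are
made up):

  /* Sketch of the handle list kept by sbrk().  Once the fixed table
     fills up, later handles are dropped and can never be freed at
     exit. */
  #define MAX_HANDLES 256

  static unsigned long handle_table[MAX_HANDLES];
  static int handle_count = 0;

  static void remember_handle(unsigned long dpmi_handle)
  {
    if (handle_count < MAX_HANDLES)
      handle_table[handle_count++] = dpmi_handle;
    /* else: the handle is lost, so the block leaks under providers that
       don't clean up after nested clients.  With 64K extensions the
       table fills after about 256 * 64K = 16Mb. */
  }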

We are also all painfully aware of the mis-feature of CWSDPMI which stops
returning memory blocks after around 400-odd handles unless you bump the
internal heap.

I propose making the extension size for the multi-block sbrk() variable
instead of fixed at 64K.  For example, here is one possibility:

  handle range   Round size   Worst Case Min   Worst Case Total
    0 -  63         64K           4Mb               4Mb
   64 - 127        128K           8Mb              12Mb
  128 - 191        256K          16Mb              28Mb
  192 - 247        512K          28Mb              56Mb
  248 - 255          1M           8Mb              64Mb
  256 - 405          1M         150Mb             214Mb
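
In C, the tiering would be something like this rough sketch (not a patch,
just to pin the idea down; the function name is made up):

  /* Sketch: pick the sbrk() extension rounding size from how many DPMI
     handles have already been allocated, following the table above. */
  static unsigned long round_size(int handle_no)
  {
    if (handle_no < 64)  return  64UL * 1024;   /*  64K */
    if (handle_no < 128) return 128UL * 1024;   /* 128K */
    if (handle_no < 192) return 256UL * 1024;   /* 256K */
    if (handle_no < 248) return 512UL * 1024;   /* 512K */
    return 1024UL * 1024;                       /*   1M */
  }

Each new DPMI block request would be rounded up to that size, so later
blocks get progressively larger while small programs still see 64K
granularity.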

Our handle limit and memory run out at the same time for one popular OS.
If you continue past that point, CWSDPMI would allow you to allocate 214Mb
before running out of handles (which could again be bumped with the
internal heap in the worst case, but that would be much rarer and more
pathological).

What are the downsides?  Well, you might not be able to get the last 400K
or so of memory around handle 192.  If the DPMI provider scatters the
blocks all over the virtual address space so they aren't contiguous, you
might waste more memory.  But I think these are small disadvantages in
exchange for the "hands off" larger sizing.  For memory-tight machines
running relatively small apps, there would be no disadvantage.

One other algorithm I considered was:
  Round size = 4K * handle#
which is smoother, ramps up a bit faster (total 131Mb in the first 256
handles), and reaches a whopping 320Mb before CWSDPMI would run out of heap.
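
In C that would be roughly the following (the name and the +1 are my own
reading, so that handle 0 still rounds to at least one 4K page):

  /* Sketch: the rounding size grows linearly with the handle count. */
  static unsigned long round_size_linear(int handle_no)
  {
    return 4UL * 1024 * (handle_no + 1);
  }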

Comments?  Suggestions?
