Mail Archives: djgpp/1992/06/05/11:40:59

From: Joe Huffman <joe AT proto DOT COM>
To: djgpp AT sun DOT soe DOT clarkson DOT edu
Subject: Virtual memory and malloc().
Cc: joe AT proto DOT COM
Date: Fri, 5 Jun 92 8:10:14 PDT

>Since gcc is a Unix compiler, it is supposed to give you as much memory as
>you need. It should *never* let you run out of memory. In particular, malloc
>cannot see when your disk is full...
>
>> Instead, I got this error:
>> 
>> >Fatal! disk full writing to swap file
>Oh yes. This is the real limit of virtual memory: the swap file (-partition,
>whatever) will be full some day. If you really need to process such a lot of 
>memory, get a bigger hard disk. ;-)

On my UNIX machine it is not the size of the swap file but the ulimit on
sbrk() that limits my virtual memory.
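(For reference, a minimal sketch of how to see that limit on a BSD-style
Unix; it assumes getrlimit() with RLIMIT_DATA, which not every system has:)

    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* RLIMIT_DATA caps the data segment, i.e. how far sbrk() may
           grow, which in turn bounds what malloc() can hand out. */
        if (getrlimit(RLIMIT_DATA, &rl) == 0)
            printf("data segment limit: %lu bytes\n",
                   (unsigned long) rl.rlim_cur);
        else
            perror("getrlimit");
        return 0;
    }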

I don't know about all DOS extenders, but X-32VM, shipped by FlashTek,
returns an error condition to malloc(), which then returns NULL.  A malloc()
that returns NULL on an out-of-memory condition is compliant with the
following:

	  AT&T SVID Issue 2, Select Code 307-127;
	  The X/Open Portability Guide II of January 1987;
	  ANSI X3.159-198X C Language Draft Standard, May 13, 1988;
	  IEEE POSIX Std 1003.1-1988 with C Standard Language-
	  Dependent System Support;
	  and NIST FIPS 151-1.

If it doesn't return NULL, then I suppose that doesn't necessarily mean it
is out of compliance, but I would like to think it does.  As a programmer
I want to be able to handle the condition gracefully (at least to say,
"Sorry Charlie, try freeing up some disk space.  Good-bye.").

You may have a system with 16M of RAM and 40M free on your disk, but your
customer may have only 2M of RAM and 8M of disk space free.  Or his data
files may be 20 times larger than yours...

My fervent opinion is that malloc() should return NULL, not crash, when
it runs out of memory.

-joe-
