From: heiberg AT daimi DOT aau DOT dk (Morten Heiberg Rasmussen)
Newsgroups: comp.os.msdos.djgpp
Subject: Bug in malloc/free ?
Date: 6 Mar 1997 10:38:31 GMT
Organization: DAIMI, Computer Science Dept. at Aarhus University
Lines: 139
Message-ID: <5fm6r7$6av$1@gjallar.daimi.aau.dk>
NNTP-Posting-Host: potassium.daimi.aau.dk
To: djgpp AT delorie DOT com
DJ-Gateway: from newsgroup comp.os.msdos.djgpp

This may or may not be a bug in malloc or free. I am currently writing a
large program that does a lot of calls to malloc and free (about
20.000.000 during a single run), but it should use a more or less constant
amount of memory at any given time. Nevertheless, it runs out of memory at
a very late stage of the execution. I can practically guarantee that no
memory is leaking through lack of freeing.

I have managed to reproduce this behaviour in a small program, which I
include at the end of this posting. In the program there is a function
avail_mem() that tries to estimate the size of the free heap by allocating
as much as possible (down to blocks of 32 bytes) and then freeing it all
again. In main() I call this function and print the result. Afterwards I
try to do two allocations: one of 12345 bytes and then one of 30000 bytes.
The allocation of 12345 bytes fails, while the one of 30000 bytes
succeeds. These numbers are somewhat random and vary depending on the
computer you use.

I have tried this on two different Pentium90 machines with 16 and 48 MB
RAM, using both the Windows DPMI server and the one distributed with the
DJGPP package. This yielded 4 different sets of numbers that produced the
above behaviour. Furthermore, I have tried to run it on different Unix
platforms with no segmentation faults.

In the program there is a magic line that makes both allocations work if
it is included, but if it is omitted the first allocation fails. Remember
to comment it out if you are trying to reproduce the 'error' :). Oh yes...
I use the latest ftp'able version of both DJGPP and GCC, and the Windows
version is 3.11 (I know... but it also fails outside Windows!).

My guess is that the memory manager thinks that the memory is fragmented
after all these allocations and deallocations even when it is not. But
that does not explain why it is possible to allocate larger chunks. Maybe
it fails the allocation but then defragments the memory or something.

Any comments are welcome. Please reply via email to
heiberg AT daimi DOT aau DOT dk

Thanks...

Morten Heiberg

Program follows:

--------------------------- C U T H E R E -------------------------------

#include <stdio.h>
#include <stdlib.h>

/* macros */
#define FOOSIZE 12345
#define BARSIZE 30000

/* memlst typedef */
typedef struct memlst_ memlst;

struct memlst_ {
  char *ptr;
  memlst *next;
};

/* prototype */
int avail_mem(void); /* estimates free heap */

/* main */
int main(void)
{
  char *foo, *bar;

  free(malloc(FOOSIZE)); /* MAGIC LINE */

  printf("avail_mem()=%d (nothing allocated)\n", avail_mem());

  /* This allocation fails without the magic line */
  if (!(foo = (char *)malloc(FOOSIZE)))
    printf("foo: malloc of %d bytes failed.\n", FOOSIZE);
  else {
    printf("foo: malloc of %d bytes successful.\n", FOOSIZE);
    free(foo);
  }

  /* This allocation always works */
  if (!(bar = (char *)malloc(BARSIZE)))
    printf("bar: malloc of %d bytes failed.\n", BARSIZE);
  else {
    printf("bar: malloc of %d bytes successful.\n", BARSIZE);
    free(bar);
  }

  return 0;
}

/* implementation */
int avail_mem(void)
{
  memlst *list, *dummylst;
  char runbit = 1;
  int avail = 0;
  int size = 2097152; /* 2MB */
  char *dummy;

  list = NULL;
  while (runbit) {
    while ((dummy = (char *)malloc(size))) {
      avail += size;
      if (!(dummylst = (memlst *)malloc(sizeof(memlst)))) {
        /* bookkeeping node failed: give the block back and stop,
           otherwise the inner loop could re-allocate it forever */
        runbit = 0;
        free(dummy);
        break;
      } else {
        dummylst->ptr = dummy;
        dummylst->next = list;
        list = dummylst;
        avail += sizeof(memlst);
      }
    }
    if (size <= 32)
      runbit = 0;
    else
      size = size / 2;
  }

  while (list) {
    dummylst = list;
    list = list->next;
    free(dummylst->ptr);
    free(dummylst);
  }

  return avail;
}