Mail Archives: djgpp/1997/10/12/07:56:49

Date: Sun, 12 Oct 1997 13:47:19 +0200 (IST)
From: Eli Zaretskii <eliz AT is DOT elta DOT co DOT il>
To: Herman Geza <hg211 AT hszk DOT bme DOT hu>
cc: "Salvador Eduardo Tropea (SET)" <salvador AT inti DOT edu DOT ar>, djgpp AT delorie DOT com
Subject: Re: Compiler crashes...
In-Reply-To: <Pine.GSO.3.96.971009150831.28093A-100000@ural2>
Message-ID: <Pine.SUN.3.91.971012134633.8447A-100000@is>
MIME-Version: 1.0

On Thu, 9 Oct 1997, Herman Geza wrote:

> 	So, I'm here again. I said that there is no problem with
> 	arrays with 100.000 elements. I said the problem begins
> 	from about 200.000 - 300.000 elements. If your compiler
> 	doesn't crash with this value, try 500.000. If it still
> 	doesn't crash, please tell me your configuration settings.

Well, I was able to go up to 1,200,000 bytes and still compile the
program, albeit slowly.  (I used the program that I posted in my
message, but enlarged the initialized data accordingly.)

Here's what I've learned:

  1.  You need to enlarge the size of the CWSDPMI heap, so it will be
      able to honor the huge number of small memory chunks that cc1
      allocates.  The default setup of CWSDPMI is good if the program
      allocates less than 20MB.  (This is explained in section 15.4 of
      the FAQ and in the CWSDPMI docs.)  I needed to enlarge the
      CWSDPMI heap to 640 paragraphs (10KB) to compile the program
      with a 300,000-element array.

      (Btw, are you using CWSDPMI or something else as your DPMI?)

  2.  You also need enough free space on the disk where your TMPDIR
      environment variable points.  (At least 5MB was required for
      the simple test program I used.)  If you usually point TMPDIR
      to a RAM disk, change it to point to a real disk for this
      compilation.
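
      For example, under plain DOS the redirection could look like
      the lines below (c:\tmp is an arbitrary example directory, not
      anything DJGPP requires; DJGPP accepts forward slashes in
      paths):

          rem create a scratch directory on a real disk
          mkdir c:\tmp
          rem point DJGPP's temporary files at it
          set TMPDIR=c:/tmp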

  The above enabled me to compile my test program with an array
  dimensioned 300,000; cc1 used 61.7MB (!) of virtual memory to do
  that.  To bump it still higher, I used the next trick:

  3.  The amount of memory that cc1 gobbles seems to depend *only* on
      the number of array initializers in the source; it does NOT
      depend on the SIZE of each initializer.  (This probably means
      that the parser allocates all that space when it creates the
      parse tree of the source, since it would then make sense to
      have a certain amount of memory for each source token that
      gives birth to a tree node.)

      So my way of getting 1,200,000 chars initialized was to declare
      an int array of dimension 300,000 and initialize it with chunks
      of 4 bytes each.  The compiler still paged to disk frantically,
      and it took a whopping 6 minutes to compile my simple test
      program (that was on a 32MB P166 under plain DOS 5.0 with a 5MB
      disk cache), but it worked without crashing and without running
      out of virtual memory.  If you need fewer than 1,200,000 bytes,
      I recommend an int array with a smaller dimension, because the
      dimension directly affects the amount of memory required by
      cc1.

      I didn't test this, but it's possible that with long long
      (which is 8 bytes long) you could slash the cc1 memory usage
      (and thus the compilation time) even further.  However, since
      long long is not a native data type, it might not be that
      simple.
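
      The packing trick in point 3 can be sketched in a few lines of
      C.  The array names and data below are made up for illustration
      (the real case used a 300,000-element int array), and the int
      values assume a little-endian CPU, which is what the DOS/x86
      machines in this thread are:

          #include <stdio.h>
          #include <string.h>

          /* 8 bytes as a char array: 8 initializers for cc1 to parse */
          static const char as_chars[8] =
              { 'A','B','C','D','E','F','G','H' };

          /* the same 8 bytes as an int array: only 2 initializers;
             each int packs 4 bytes, low byte first (little-endian) */
          static const unsigned int as_ints[2] =
              { 0x44434241u, 0x48474645u };

          int main (void)
          {
            /* verify that both arrays hold identical bytes */
            if (memcmp (as_chars, as_ints, sizeof as_chars) == 0)
              printf ("same bytes, 4x fewer initializers\n");
            else
              printf ("byte order mismatch (big-endian host?)\n");
            return 0;
          }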


