Mail Archives: cygwin/2005/07/27/14:06:55
On Wed, 27 Jul 2005, Krzysztof Duleba wrote:
> Gerrit P. Haase wrote:
>
> > > > $ ./inter.pl
> > > > perl> sub foo($){$a=shift;foo($a+1);}
You do realize you have infinite recursion here, right? So you're most
likely running out of stack (which Perl may be re-allocating on the heap).
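For comparison, a depth-limited variant (my own sketch, not part of the
original test case) returns cleanly, which supports the idea that it's the
unbounded call depth that kills it:

$ perl -e 'sub foo { my $a = shift; return $a if $a >= 100_000; foo($a + 1) } print foo(1), "\n"'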
> > > > perl> foo 1
> > > > Out of memory during request for 4040 bytes, total sbrk() is 402624512
> > > > bytes!
> > > > Segmentation fault (core dumped)
> > >
> > > Another version (with "my $a"):
> > >
> > > perl> sub foo($){my $a=shift;foo($a+1);}
> > > perl> foo 1
> > > Out of memory during "large" request for 134221824 bytes, total sbrk() is
> > > 304633856 bytes at (eval 19) line 1.
> > > perl> foo 1
> > > Bad realloc() ignored at (eval 19) line 1.
Hmm, this sounds interesting. If this message is any indication, this is
indeed a bug in Perl. Perl seems to be saying: hmm, I've tried
reallocating the array that represents the call stack, and the realloc
seems to have failed. Oh, well, who cares, let's just continue and bash
the unallocated memory at the end of the stack array, and hopefully all
will be well...
> > > Segmentation fault (core dumped)
And this is Windows saying "I don't think so". :-)
> > > Is this a perl bug, Cygwin bug, or just a feature?
> >
> > I don't know. Maybe it is a Windows feature that applications running
> > out of memory are crashing?
>
> But there's plenty of memory left when perl crashes. I have 1 GB RAM and
> 1 GB swap file.
IIRC, unless you specifically increase heap_chunk_in_mb, Cygwin will only
use 384M of address space (which seems consistent with the sbrk() and the
request size above).
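(Worked out: 384MB is 384 * 1024 * 1024 = 402653184 bytes, and the
reported sbrk() total of 402624512 bytes plus the failed 4040-byte request
comes to 402628552 bytes, just under that limit.)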
> I've simplified the test case. It seems that Cygwin perl can't handle
> too much memory. For instance:
>
> $ perl -e '$a="a"x(200 * 1024 * 1024); sleep 9'
>
> OK, this could have failed because $a might require 200 MB of contiguous
> space.
Actually, $a requires *more* than 200MB of contiguous space. Perl
characters are 2 bytes, so you're allocating at least 400MB of space!
FWIW, the above doesn't fail for me, but then, I have heap_chunk_in_mb set
to 1024. :-)
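If you want to see how much Perl actually allocates for a string like
that, one way (assuming the Devel::Size module from CPAN is installed --
it's not in the core distribution) is:

$ perl -MDevel::Size=total_size -e '$a = "a" x (1024 * 1024); printf "%d bytes\n", total_size(\$a)'

Start with a small string like the above and scale the multiplier up once
you've confirmed it works.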
> But hashes don't, do they? Then why does the following code fail?
>
> $ perl -e '$a="a"x(1024 * 1024);my %b; $b{$_}=$a for(1..400);sleep 9'
Wow. You're copying a 2MB string 400 times. No wonder this fails. It
would fail with larger heap sizes as well. :-)
This works with no problems and very little memory usage, FWIW:
$ perl -e '$a="a"x(1024 * 1024);my %b; $b{$_}=\$a for(1..400);sleep 9'
> Or that one?
>
> $ perl -e '$a="a"x(50 * 1024 * 1024);$b=$a;$c=$a;$d=$a;$e=$a;sleep 10'
Yep, let's see. 100MB * 5 = 500MB. Since Cygwin perl by default can only
use 384MB, the result is pretty predictable. Perl shouldn't segfault,
though -- that's a bug, IMO.
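FWIW, a variant of the same test that stores references instead of copies
(same trick as the hash example above) stays well within the default heap:

$ perl -e '$a="a"x(50 * 1024 * 1024); $b=\$a; $c=\$a; $d=\$a; $e=\$a; sleep 10'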
> On linux there's no such problem - perl can use all available memory.
Yeah. Set heap_chunk_in_mb to include all available memory, and I'm sure
you'll find that Cygwin perl works the same way. You might also want to
read some Perl documentation, to make sure your data structure size
calculations are correct and your expectations are reasonable.
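If memory serves, heap_chunk_in_mb is a DWORD value in the registry under
the Cygwin key, and can be set with regtool along these lines (double-check
the exact key path for your installation, and restart all Cygwin processes
afterwards):

$ regtool -i set /HKLM/Software/"Cygnus Solutions"/Cygwin/heap_chunk_in_mb 1024
$ regtool -v list /HKLM/Software/"Cygnus Solutions"/Cygwin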
HTH,
Igor
--
http://cs.nyu.edu/~pechtcha/
|\ _,,,---,,_ pechtcha AT cs DOT nyu DOT edu
ZZZzz /,`.-'`' -. ;-;;,_ igor AT watson DOT ibm DOT com
|,4- ) )-,_. ,\ ( `'-' Igor Pechtchanski, Ph.D.
'---''(_/--' `-'\_) fL a.k.a JaguaR-R-R-r-r-r-.-.-. Meow!
If there's any real truth it's that the entire multidimensional infinity
of the Universe is almost certainly being run by a bunch of maniacs. /DA
--