Mail Archives: djgpp-workers/2001/08/20/09:42:40
> Perhaps it would be useful to increase the default. For example,
> if gcc runs cpp1.exe, cc1.exe, as.exe, collect2.exe and ld.exe, and is not
> linked with the libc.a where this feature is included, we already have more.
> If we compile more sources in one gcc invocation, things are even worse.
> So I suggest increasing the default to something like 50.
Yes, I was worried about this also. I'll do more testing. While the
first call would immediately bump this to 100 for the next exec, we
would then have 38 stranded selectors that we scan across each time,
which also hurts performance. The doubling is also not as effective
as it sounds, since the extra stranded ones sit in the middle.
> Other suggestion:
>
> move 'char desc_map[how_deep]' inside
> if (workaround_descriptor_leaks)
> block.
Can't do this, since there are two workaround_descriptor_leaks blocks:
one to set the map; then we exit it (to call direct_exec); then a
second one to use the map.
> In this case we can set the initial value depending on the
> workaround type. If we can use LAR, then I think it's safe
> (and fast enough) to scan at least 100 descriptors. I think
> that's better while most DJGPP packages are not yet rebuilt with
> the new libc. Scanning 8192 descriptors added about 30% overhead
> when the spawned program simply quits. I think even 3% would be
> acceptable, as it would not be noticeable in most real-life
> situations. The initial value of the limit could be reduced in
> the future, once most packages are built with the new libc.a.
The performance is much worse on NT, since it has to go through a
DPMI interrupt for each check, so I don't want to go wild...
I think you are convincing me to make the limit larger. One
possible optimization is to use the LAR scanning on NT also:
if a selector was never touched, it passes the LAR test, which
is fast.
Thanks for the comments, I'll take another shot.