Mail Archives: djgpp/2000/10/07/07:42:08
> From: Eric Rudd <rudd AT cyberoptics DOT com>
> Newsgroups: comp.os.msdos.djgpp
> Date: Fri, 06 Oct 2000 13:20:36 -0500
>
> I agree that you still don't know whether the operation will succeed in
> advance, but in a DOS program, you can malloc the maximum physical region,
> and the program will succeed if it is possible for it to succeed (and
> complain when it finds out it can't). In either event, you can call
> realloc() later to free up the unneeded space. The alternative seems to be
> to place a check inside the inner loop, which slows the program down.
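(For illustration, here is a minimal sketch of the approach described above;
the starting size and element type are made up, and on plain DOS the first
malloc that succeeds is essentially all the physical memory there is:)

#include <stdio.h>
#include <stdlib.h>

/* Grab the largest buffer malloc will give us by halving the request
   until it succeeds; the starting size and element type are made up. */
static double *grab_max(size_t *nelems)
{
    size_t n = 64UL * 1024 * 1024 / sizeof(double);
    double *buf = NULL;

    while (n > 0 && (buf = malloc(n * sizeof *buf)) == NULL)
        n /= 2;
    *nelems = n;
    return buf;
}

int main(void)
{
    size_t capacity, used = 0;
    double *data = grab_max(&capacity);

    if (data == NULL) {
        fprintf(stderr, "not enough memory\n");
        return 1;
    }
    /* ... fill data[0..used-1], complaining if used would exceed
       capacity ... */

    /* Hand back whatever was not needed. */
    if (used > 0) {
        double *p = realloc(data, used * sizeof *data);
        if (p != NULL)
            data = p;
    }
    free(data);
    return 0;
}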
Why isn't the usual paradigm of reallocating to double the storage when
the current buffer is exhausted a reasonable solution in this case? I
think the overhead of the test and the realloc is quite small with this
algorithm.
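A minimal sketch of that paradigm, just so we're talking about the same
thing (the input loop and element type here are only illustrative):

#include <stdio.h>
#include <stdlib.h>

/* Grow-by-doubling: the test is one comparison per element, and
   realloc is called only O(log n) times, so the amortized overhead
   is tiny. */
int main(void)
{
    size_t used = 0, capacity = 1024;
    double *data = malloc(capacity * sizeof *data);
    double x;

    if (data == NULL)
        return 1;

    while (scanf("%lf", &x) == 1) {
        if (used == capacity) {        /* exhausted: double it */
            double *p = realloc(data, 2 * capacity * sizeof *data);
            if (p == NULL) {
                fprintf(stderr, "out of memory after %lu elements\n",
                        (unsigned long)used);
                break;
            }
            data = p;
            capacity *= 2;
        }
        data[used++] = x;
    }

    /* ... use data[0..used-1] ... */
    free(data);
    return 0;
}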
> > This is impossible with most modern OSes. Plain DOS and, sometimes,
> > Windows 9X are the only ones that report available physical memory
> > reliably (on Windows 9X, the report is accurate only if no other
> > program is running and consuming memory at the same time).
>
> I count this as a deficiency in the OSes, since it means that an app can't
> predict which algorithm would be most efficient. In my experience, an
> algorithm that expects to work with internal memory, but actually works with
> external (virtual) memory, can be *extremely* slow.
Unfortunately, this is how modern OSes work. The OS is expected to
handle the paging in the best way possible, while the programmer is
expected to understand the limitations of paging and make memory
references as local as possible, e.g. by rearranging the loops,
changing data structures, etc.
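As a generic illustration of rearranging loops (not code from this
thread): both functions below compute the same column sums of an n x n
row-major matrix, but the second walks memory sequentially and
therefore pages far less when the matrix doesn't fit in physical
memory:

/* Strides n doubles per access: poor locality. */
void col_sums_bad(double *a, double *sum, int n)
{
    int i, j;
    for (j = 0; j < n; j++)
        for (i = 0; i < n; i++)
            sum[j] += a[i * n + j];
}

/* Consecutive addresses in the inner loop: good locality. */
void col_sums_good(double *a, double *sum, int n)
{
    int i, j;
    for (i = 0; i < n; i++)
        for (j = 0; j < n; j++)
            sum[j] += a[i * n + j];
}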
The only alternative is to lock allocated memory, which is hardly a
good solution for large allocations.
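For reference, locking looks roughly like this with the POSIX mlock()
call; under DJGPP the same effect would come from the DPMI locking
services, which this sketch doesn't use:

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>   /* mlock/munlock; POSIX, not DJGPP-specific */

int main(void)
{
    size_t len = 16UL * 1024 * 1024;
    char *buf = malloc(len);

    if (buf == NULL)
        return 1;

    /* Pin the region in physical memory; for large allocations this
       tends to fail or to starve everything else on the system, which
       is why it is a poor general solution. */
    if (mlock(buf, len) != 0)
        perror("mlock");
    else {
        /* ... work on buf without page faults ... */
        munlock(buf, len);
    }

    free(buf);
    return 0;
}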