Mail Archives: djgpp/2000/10/06/17:50:38
Eli Zaretskii wrote:
> Let's say that there is a way to know the amount of available memory--how
> would this help you in the scenario you've just described? You still won't
> know whether that memory will be enough to read the file, since you don't
> know how many lines there are in the file, right?
I agree that you still don't know whether the operation will succeed in
advance, but in a DOS program, you can malloc the maximum physical region,
and the program will succeed if it is possible for it to succeed (and
complain when it finds out it can't). In either event, you can call
realloc() later to free up the unneeded space. The alternative seems to be
to place a check inside the inner loop, which slows the program down. (I'm
not sympathetic to arguments that, with the fast computers we have these
days, efficiency isn't so important any more, because I'm still using them
to solve problems at the limits of what they can conveniently handle -- and
those limits are still a function of the efficiency of the routines I
write.)
I admit that it is possible to get around these difficulties with more
intricate programming, but a good programming environment ought to make
simple things like this straightforward.
> > 2. To predict how large a region of memory can be accessed without
> > disk swaps.
>
> This is impossible with most modern OSes. Plain DOS and, sometimes,
> Windows 9X are the only ones that report available physical memory
> reliably (on Windows 9X, the report is only accurate if no other program
> works at the same time consuming memory).
I count this as a deficiency in the OSes, since it means that an app can't
predict which algorithm would be most efficient. In my experience, an
algorithm that expects to work with internal memory, but actually works with
external (virtual) memory, can be *extremely* slow. Take qsort(), for
instance. If that function gets called on a virtual array, it can take
literally *hours*, whereas a routine specially written to make a few
sequential passes through intermediate disk files can take only a few times
longer than the raw file I/O needed to read the input and write the output.
On the other hand, external sorting routines like that are generally less
efficient than a routine that is allowed the privilege of working entirely
in internal memory.
-Eric Rudd
rudd AT cyberoptics DOT com