Mail Archives: djgpp-workers/2002/02/19/06:20:16
Eli Zaretskii wrote:
>
> On Sun, 17 Feb 2002, CBFalconer wrote:
>
> > I have pretty well decided to rebuild the module from scratch.
> > The attached is preliminary, and I would appreciate you guys
> > giving it a quick once over for obvious failings.
>
> As DJ said, this should be sent to djgpp-workers.
>
> I can't say I like the possibility of throwing away the existing code and
> rewriting everything from scratch. malloc is central to the library and
> to any DJGPP program, its current code is well tested and reliable, and
> so starting anew is not a good idea unless there's a very good reason.
> Slow operation of one of its subroutines is not a reason good enough,
> IMHO; there should be a way of rewriting just that single subroutine to
> make it faster. It is also very hard to judge the justification for such
> changes without some data explaining why the current code is slow.
>
> This effort started as an investigation into the reasons of the slow
> operation of `free', which turned out to be slow due to `merge'. I think
> we should continue to explore why `merge' is slow; dropping everything on
> the floor and writing a totally different implementation sounds wrong at
> this point, as I don't think we know enough about the problems with the
> original code.
>
> In any case, if you want to change the code significantly, please start
> with what's in the current CVS (which has quite a few changes compared to
> the version shipped with djlsr203.zip).
>
> More specifically, your code seems to modify the structure used by malloc
> to maintain its memory pool. Unless I missed something, this creates a
> binary incompatibility with existing code, something that we try very
> hard to avoid in DJGPP development.
>
> > The intent is to eventually be able to combine freed chunks with a
> > single O(1) operation.
>
> I'm not convinced that the O(n) algorithm is an issue here. Please give
> us some data that indicates that is the problem.
My earlier messages showed the profile results and pinpointed the
problem. I even quoted the nested for loop that caused the
O(N**2) performance, and (earlier yet) supplied timings that
demonstrated the slowdown. The profiling showed over 80% of a
5-second run spent in `merge' at only about 20,000 items.
I can see why you would be worried about altering *published*
binary structures, but the internals of malloc are hidden in the
original code, and I am adhering strictly to the published
interface. So the cause for worry escapes me, unless some
**very** unclean practices and magic numbers have been used in
other modules. If so, a change will expose them to the light of
day.
If such unclean things are prevalent, I can see the worry from a
system viewpoint. I think it would be better to clean them up,
but such unenvisioned problems are the reason I sent the
preliminary code to you in the first place. Do not put too much
stock in the detailed data format in that code; it changes as I
find needs, but will stabilize soon.
As long as merging requires an exhaustive search of a singly
linked list for merge candidates, it is sooner or later going to
require such an O(N**2) algorithm. This seems obvious to me, but
I have been known to be wrong. Thus the cure requires a fresh
data structure, which is what I am building. At the same time I
am attempting to leave clear and modifiable source, which can be
improved without excessive worry. At present the excess overhead
size is due to the guard areas I am providing to reduce fouling
by off-by-one index use; that is removable, and controlled in
one place.
Aside: so far I am not receiving the workers maillist, because it
is being sent to my spam-trap rather than the reply-to address. I
e-mailed dj asking if he could fix this. This was very recently,
so this is not any sort of criticism.
--
Chuck F (cbfalconer AT yahoo DOT com) (cbfalconer AT XXXXworldnet DOT att DOT net)
Available for consulting/temporary embedded and systems.
(Remove "XXXX" from reply address. yahoo works unmodified)
mailto:uce AT ftc DOT gov (for spambots to harvest)