Mail Archives: djgpp/1999/06/14/07:42:50
On Mon, 14 Jun 1999, Shawn Hargreaves wrote:
> > That would require a totally different development model than the one
> > adopted by DJGPP. I'm not sure how familiar you are with the Linux
> > development procedures; neither will I go into discussing here
> > something which I happen to know only by hearsay. It will suffice to
> > say that the development of core Linux functionality is much more
> > centralized than ours
>
> Do you really think so?
As I said: I'd rather not discuss it. My source was someone very close
to Linux development, but that's all I'm willing to say.
> I would have said that this was exactly the
> opposite way around: Linux development seems to be far less tightly
> organised than djgpp. Some individual GNU packages are very centralized,
> but there is no single ftp site where you can get a "latest version"
> of everything, compiled and tested to work properly together.
I was talking about core features, not about add-ons. Think Linux
kernel, not application packages. The features that are or aren't
included in core Linux functionality are tightly controlled.
> Anyone who was using Linux during the changeover to glibc will be
> very aware that it is not even remotely free from library conflicts.
That's another issue. Someone who *wants* to prevent a mess does not
necessarily *succeed* in doing so. We had our share of library bugs
as well.
> And I don't think that submitting a patch to the maintainers would
> be an instant way to fix a problem
The argument about instant fixes wasn't mine.
My argument was that when I fix a bug in my copy of libc and then relink
some program with the patched library and upload it to SimTel, I only
have to worry about testing that one program. In contrast, patching
and distributing a shared library potentially affects many more programs,
some of which aren't my responsibility and are well beyond my control.
> The majority of Linux programs will in fact run on any Unix variant
I don't have any direct experience with Linux, but I do have experience
with other flavors of Unix. I've seen too many cases where a binary
carried to another box running the same OS crashed or didn't start
because of mismatches in versions of libc.so.
> and if you code with
> that in mind, your end product will be far more resistant to changes
> in the environment.
I don't know how to code for several, potentially incompatible versions
of libraries, unless one purposefully refrains from using advanced
features, or uses run-time tests to choose between several possible code
branches (and bloats the application).
Even if that were practical, how in the world do I test my program
with the umpteen versions of libc it can meet out there?
> - Most Linux programs are distributed as source, and have a whole
> set of tools (autoconf, automake) to adjust this source for whatever
> machine you want to build it on.
I hope we can agree that this paradigm is not for DJGPP. To this day, I
have difficulty convincing people that recompiling most DJGPP packages
is a very simple job, although we both know that it usually boils down to
typing "make [Enter]" and sitting back for a while.
If you agree, the above just reiterates that what's good for Linux is
not necessarily good for DJGPP.
FWIW, I predict that before Linux becomes much more popular than it is
today, it, too, will have to abandon the idea of compiling the kernel
for each configuration change. I think the latest releases already do
so.