"Rugxulo" <rugxulo AT gmail DOT com> wrote in message
news:9ee58e13-9e8f-4ed4-ba0a-720959d48842 AT z19g2000yqe DOT googlegroups DOT com...
> On Apr 10, 6:44 am, "Rod Pemberton" <do_not_h DOT DOT DOT AT nohavenot DOT cmm> wrote:
> > "Rugxulo" <rugx DOT DOT DOT AT gmail DOT com> wrote in message
> > news:f5509665-5037-4f7b-b71b-8868ddace192 AT l1g2000yqk DOT googlegroups DOT com...
> >
> > Sorry, wasn't following too well...
>
Still not...
> 180 MB is (IMHO) too much unpacked
...
> auto-compression
...
> Yeah, I'm not sure where I was going with this idea. Sure, somebody
> with eLisp experience could hack the inflate.cl to maybe make Emacs
> support it out of the box, but I'm not that somebody. :-)
> But I still think, ignoring auto-compression, if you just wanted to
> save unpacked space, you could use one of the untar / untgz / tgunzip
> etc. alternatives (since they're small, free license, w/ srcs). Or
> maybe just bundle the "docs + changelogs + etc" separately anyways.
I'm confused. What are you trying to do?...
The best I can come up with as to what you're discussing is that you're
trying to make DJGPP tools "intelligent" and natively "aware" of gzip
compression and how to compress and uncompress files as needed from gzip
archives. Yes? No?
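If that's the idea, the reading half is not much work, at least in
principle. Just as an illustration (this is a sketch of mine, not anything
DJGPP's tools actually do), zlib's gz* routines let a program read a file
the same way whether it's plain text or gzipped, so a tool could accept
foo.c.gz anywhere it accepts foo.c:

  /* gzcat-ish sketch: read a possibly-gzipped file line by line.
   * Build with something like: gcc gzread.c -lz
   */
  #include <stdio.h>
  #include <zlib.h>

  int main(int argc, char **argv)
  {
      char line[1024];
      gzFile in;

      if (argc < 2) {
          fprintf(stderr, "usage: %s file[.gz]\n", argv[0]);
          return 1;
      }
      in = gzopen(argv[1], "rb");   /* handles plain (uncompressed) files too */
      if (in == NULL) {
          perror("gzopen");
          return 1;
      }
      while (gzgets(in, line, sizeof line) != NULL)
          fputs(line, stdout);      /* a real tool would parse/compile here */
      gzclose(in);
      return 0;
  }

Writing compressed output back out is the same thing in reverse (gzopen
with "wb" plus gzwrite/gzprintf).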
E.g., some versions of Windows recognize .zip compression and will display
a .zip's contents as a normal Windows folder of files. AFAICT, you're
wanting to leave all the compressed sources as compressed sources, and the
tools would uncompress and extract, use, then clean up whatever files were
needed in an incremental pattern. I.e., if you needed to compile 100 MB+ of
uncompressed files but only had 1 MB of disk space and 8 MB of memory, then
each file, incrementally, would be expanded onto the disk or into memory,
used - assembled or compiled - and removed. Unfortunately, source and
object files depend heavily on information from numerous other files.
I.e., you may need many source and object files present at once, which
might exceed your memory or disk space anyway. The solution is to "go back
in time" to more compact tools and languages: interpreters instead of
compilers - FORTH and BASIC - and assembly instead of high-level languages.
Of course, if you do that, you lose whatever the modern package provides...
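Still, to make that expand/compile/delete idea concrete, here is a rough
sketch (mine, nothing that actually exists; the temp-file name and the bare
"gcc -c" command line are just placeholders) of a driver that keeps only
one uncompressed source on disk at a time, using zlib:

  /* incremental-build sketch: expand one .c.gz, compile it, delete it.
   * Error handling is minimal.  Build with something like: gcc incbuild.c -lz
   */
  #include <stdio.h>
  #include <stdlib.h>
  #include <zlib.h>

  static int expand(const char *gzname, const char *outname)
  {
      gzFile in = gzopen(gzname, "rb");
      FILE *out = fopen(outname, "wb");
      char buf[8192];
      int n = 0;

      if (in == NULL || out == NULL)
          return -1;
      while ((n = gzread(in, buf, sizeof buf)) > 0)
          fwrite(buf, 1, n, out);
      gzclose(in);
      fclose(out);
      return (n < 0) ? -1 : 0;
  }

  int main(int argc, char **argv)
  {
      char cmd[256];
      int i;

      for (i = 1; i < argc; i++) {          /* argv[i] is e.g. "foo.c.gz" */
          if (expand(argv[i], "tmp.c") != 0) {
              fprintf(stderr, "could not expand %s\n", argv[i]);
              return 1;
          }
          sprintf(cmd, "gcc -c tmp.c -o obj%d.o", i);
          if (system(cmd) != 0)             /* compile the one expanded file */
              return 1;
          remove("tmp.c");                  /* ...then throw it away */
      }
      return 0;
  }

Header files are what kill it, of course: gcc still wants every #include'd
file present at the same time, which is exactly the dependency problem
above.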
Alternatively, you might try a GNU-compatible C compiler smaller than DJGPP
or GCC itself. Fabrice Bellard claims his TCCBOOT allows compilation of GNU
C at *runtime*. Supposedly, his 138 KB bootloader compiles and executes the
GNU C code for the *Linux kernel*; there are screenshots:
http://bellard.org/tcc/tccboot.html
How much it actually compiles is in question. Under QEMU, tccboot.iso shows
only a modest number of C files being compiled - no assembly. So, it's
likely a customized version of Linux.
But since they compile GNU C, his TinyCC or TCCBOOT might help out.
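For what it's worth, the ordinary (non-bootloader) TinyCC also ships a
library interface, libtcc, which is the "compile C at runtime" part in its
simplest form. A minimal sketch (mine, assuming libtcc and its header are
installed; this isn't taken from tccboot):

  /* compile a C string to memory and run it, via TinyCC's libtcc.
   * Build with something like: gcc runtcc.c -ltcc -ldl
   */
  #include <stdio.h>
  #include <libtcc.h>

  static const char prog[] = "int main(void) { return 42; }\n";

  int main(void)
  {
      TCCState *s = tcc_new();
      int rc;

      if (s == NULL)
          return 1;
      tcc_set_output_type(s, TCC_OUTPUT_MEMORY); /* compile to memory, no files */
      if (tcc_compile_string(s, prog) == -1) {
          tcc_delete(s);
          return 1;
      }
      rc = tcc_run(s, 0, NULL);                  /* runs the compiled main() */
      printf("compiled program returned %d\n", rc);
      tcc_delete(s);
      return 0;
  }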
Also, if you looked at the screenshots, you'd notice a decompressor (for
initrd... gzip?) is built into whatever Linux bootloader (in 16-bit
assembly?...) that they are using...
Rod Pemberton