Mail Archives: cygwin-developers/2001/03/23/01:06:36
----- Original Message -----
From: "Brian Keener" <bkeener AT thesoftwaresource DOT com>
To: <cygwin-apps AT cygwin DOT com>; <cygwin-developers AT cygwin DOT com>
Sent: Friday, March 23, 2001 2:45 PM
Subject: Re: setup wishes -- any volunteers
> I'm not sure I totally understood this and it's probably because I know
> absolutely nothing about dpkg or rpm for that matter as to how they work.
> I wonder how these will tie into the current operation of setup. Are
> each of these essentially a replacement for tar and how do they control
> the dependencies and packaging. Take for example the way I sometimes use
> setup to update my packages. Because of a slow internet connection via
> phone line I might use setup to download 3 or 4 packages one night and
> then download 3 or 4 more the next night as opposed to trying to install
> all of them from the internet in one night. After I get all the packages
> I want then I will run setup and perform the install from local directory
> and install all the packages I previously downloaded. I would not expect
> the download to really control anything based on the categories and
> dependencies although the ability to select the packages based on the
> appropriate criteria would be a nice touch. I sort of thought of the
> dependencies and categories as an aid in allowing setup to select the
> appropriate packages for download and then later for install.
>
> How does the above scenario tie in with using dpkg or rpm.
>
Well, as you point out above, there are two things going on: deciding
what files are needed (dependencies) and installation. In fact two more
things are needed: an installed-file database, and the ability to tell
whether a downloaded file is corrupt.
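That corruption check is usually just a checksum comparison against the
digest published in the master package list. A minimal sketch in Python
(the function name and the use of MD5 are my assumptions, not anything
setup.exe, rpm, or dpkg actually does):

```python
import hashlib

def file_is_corrupt(path, expected_md5):
    """Compare a downloaded file's MD5 digest against the digest
    published in the master package list (hypothetical layout)."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Read in chunks so large package archives don't need
        # to fit in memory.
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest() != expected_md5
```

A partial download from a dropped phone-line connection would fail this
check and could be re-fetched before install time.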
For rpm or dpkg, the following applies:
So in your scenario above, you run setup.exe. It downloads the current
"master list" of available packages. Then you choose what you want to
install. Then setup calculates any needed upgrades or new packages based
on your selection. Finally you tell it to start downloading. If you only
want to download 2 or 3 files, you'd start off with the core packages
that other packages depend upon.
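The "calculates any needed upgrades or new packages" step is essentially
a transitive closure over the dependency graph: expand the user's
selection until every dependency is covered. A sketch (the package names
and the dependency table are invented for illustration):

```python
def download_set(selected, depends):
    """Expand a user's selection to everything it transitively
    depends on, so core packages come along automatically."""
    needed, stack = set(), list(selected)
    while stack:
        pkg = stack.pop()
        if pkg not in needed:
            needed.add(pkg)
            # Queue this package's own dependencies too.
            stack.extend(depends.get(pkg, []))
    return needed

# Hypothetical dependency table:
deps = {"foo": ["perl"], "perl": ["cygwin"], "bash": ["cygwin"]}
```

Selecting just "foo" would then pull in "perl" and "cygwin" as well, so
a two-or-three-file download session naturally starts with the core
packages everything else depends upon.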
Once it's downloaded, like now, it can immediately install, or just save
the files to its cache directory and exit.
When the time comes to install, it compares the currently installed
packages to the packages it has in its cache and checks for dependencies
(say perl 5.6 is needed for package foo, but perl 5.6 failed to
download - it won't install foo). Installation consists of running
pre-install actions (say, uninstalling an old version), extracting the
files, installing the new files, and finally running post-install
actions.
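That install sequence can be sketched as follows. This is only an
illustration of the pre/extract/post flow described above; the dict
layout and action names are placeholders, not actual rpm or dpkg hooks:

```python
def install(pkg, cache, installed_db):
    """Refuse to install when a dependency is neither cached nor
    already installed; otherwise run the pre/extract/post sequence
    and record file ownership in the installed-file database."""
    for dep in pkg["depends"]:
        if dep not in cache and dep not in installed_db:
            return False  # e.g. perl 5.6 failed to download
    pkg["pre_install"]()   # e.g. uninstall the old version
    pkg["extract"]()       # unpack the archive's files
    installed_db[pkg["name"]] = pkg["files"]  # record ownership
    pkg["post_install"]()  # e.g. run any setup scripts
    return True
```

The key point is the early return: a failed download the night before
blocks only the packages that depend on it, not the whole install.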
From a user point of view you have much the same flexibility as you do
now, with the following bonuses (some of which exist to a greater or
lesser degree today):
* Dependencies: no downloading 12 Mb to find out you missed ash.
* File ownership: you can query the database to see what files belong to
what package.
* Auto-upgrading: you can (in theory) ask the packaging system to
download any upgrades to existing packages, core packages to satisfy any
new dependencies, ask you for configuration details, and install them.
* Multiple mirror access: both rpmfind (an extension to rpm) and dpkg
support multiple download sites, so if one is busy or not available, the
packaging system will switch over to another.
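The file-ownership query above is just a reverse lookup in the
installed-file database, in the spirit of rpm's `-qf` or dpkg's `-S`
options (the database layout here is invented):

```python
def owner_of(path, installed_db):
    """Return which package installed a given file, or None if
    no package claims it - akin to 'rpm -qf' / 'dpkg -S'."""
    for pkg, files in installed_db.items():
        if path in files:
            return pkg
    return None

# Hypothetical installed-file database:
db = {"bash": ["/bin/bash", "/bin/sh"], "ash": ["/bin/ash"]}
```

A real implementation would index by path for speed, but the idea is the
same: the database built at install time is what makes the query
possible at all.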
Rob