Mail Archives: cygwin-developers/1998/03/21/06:27:15
Hi all,
In the original pid allocation scheme, lookups could take a while
because a pid could be anywhere in the table. In Chris Faylor's
scheme, which was included in b19, each pid was forced to be its
process table index offset by PBASE. Under that scheme, pids wrapped
every 128 (PSIZE) processes.
In the new scheme, I'm using modular arithmetic to map from a pid to
an index (which I believe should be fast enough). This allows us to
use more standard pids (1000 to INT_MAX) without wrapping all the
time. Because we skip a candidate pid whenever that pid mod size()
maps to a slot already taken by an earlier process, more pids get
skipped as more slots fill up. I don't see this as a real problem.
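For what it's worth, here's a minimal sketch of the allocation side,
just to make the mapping concrete. PSIZE, PID_MIN, proc_table, pinfo,
and alloc_pid are names I've made up for illustration; they aren't the
actual Cygwin definitions:

/*
 * Sketch only: modular pid-to-slot mapping with skip-on-collision
 * allocation.  All names and constants here are assumptions.
 */
#include <stdio.h>

#define PSIZE   128         /* slots in the process table (assumed) */
#define PID_MIN 1000        /* lowest pid handed out (assumed) */

struct pinfo {
    int pid;                /* 0 means the slot is free */
};

static struct pinfo proc_table[PSIZE];
static int last_pid = PID_MIN - 1;

/* Modular arithmetic maps a pid to its table slot. */
static int pid_to_slot (int pid)
{
    return pid % PSIZE;
}

/*
 * Allocate the next pid: advance from the last pid handed out and
 * skip any candidate whose slot is already occupied.  Since pid mod
 * PSIZE cycles through every slot in PSIZE consecutive pids, PSIZE
 * tries are enough to find a free slot if one exists.
 */
static int alloc_pid (void)
{
    int tries;
    for (tries = 0; tries < PSIZE; tries++)
    {
        int pid = ++last_pid;
        int slot = pid_to_slot (pid);
        if (proc_table[slot].pid == 0)
        {
            proc_table[slot].pid = pid;
            return pid;
        }
        /* slot taken by an earlier process, so skip this pid */
    }
    return -1;              /* table completely full */
}

int main (void)
{
    int i;
    for (i = 0; i < 5; i++)
        printf ("allocated pid %d\n", alloc_pid ());
    return 0;
}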
The next step is to extend this to remove the upper limit on the
number of concurrent processes. I'm planning on either:
1) Having a linked list of PSIZEd process tables. Start out with one;
if you fill it, allocate another. So with two tables, when you
allocate, you would first try to allocate a PSIZE range of pids in the
first table, then try to allocate a PSIZE range of pids in the second
table, creating a third table if there aren't any slots in either.
Lookups would involve doing a pid mod size() and then checking that
position in each existing table.
Under this scheme, lookups get slower as the number of process tables
increases. However, this might not be a noticeable performance hit.
(A rough sketch of this layout follows after option 2.)
2) Alternatively, when the first table runs out of room, you could
allocate a larger single table, increasing PSIZE and remapping all
existing processes within the table (in order to keep the pid mod
PSIZE relationship consistent).
This involves a large up-front cost whenever you have to increase the
table size, but lookups would stay fast. (See the second sketch
below.)
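Here's a rough sketch of what option 1 might look like, assuming
PSIZE slots per table; ptab, find_pid, and claim_slot are illustrative
names only, not actual code:

/* Sketch only: a chain of fixed-size process tables (option 1). */
#include <stdlib.h>

#define PSIZE 128            /* slots per table (assumed) */

struct pinfo {
    int pid;                 /* 0 means the slot is free */
};

struct ptab {
    struct pinfo slot[PSIZE];
    struct ptab *next;
};

static struct ptab *tables;  /* head of the chain, grows on demand */

/*
 * Lookup: compute pid mod PSIZE once, then check that position in
 * each existing table.  Cost grows with the number of tables.
 */
static struct pinfo *find_pid (int pid)
{
    int slot = pid % PSIZE;
    struct ptab *t;
    for (t = tables; t != NULL; t = t->next)
        if (t->slot[slot].pid == pid)
            return &t->slot[slot];
    return NULL;             /* no such process */
}

/*
 * Claim a slot for a newly chosen pid: use the first table whose
 * slot for this pid is free, appending a fresh table to the chain
 * if that slot is occupied in every existing table.
 */
static struct pinfo *claim_slot (int pid)
{
    int slot = pid % PSIZE;
    struct ptab **tp;
    for (tp = &tables; *tp != NULL; tp = &(*tp)->next)
        if ((*tp)->slot[slot].pid == 0)
            break;
    if (*tp == NULL)
    {
        *tp = (struct ptab *) calloc (1, sizeof (struct ptab));
        if (*tp == NULL)
            return NULL;     /* out of memory */
    }
    (*tp)->slot[slot].pid = pid;
    return &(*tp)->slot[slot];
}

int main (void)
{
    claim_slot (1000);
    claim_slot (1128);       /* same slot as 1000: forces a 2nd table */
    return find_pid (1128) ? 0 : 1;
}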
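And a rough sketch of option 2, using a simple doubling policy (the
doubling and the names are again just assumptions for the sketch).
Growing to a multiple of the old size can't create collisions among
processes that already occupy distinct slots:

/* Sketch only: grow the single table and remap everything (option 2). */
#include <stdlib.h>

struct pinfo {
    int pid;                 /* 0 means the slot is free */
};

static struct pinfo *proc_table;
static int psize;            /* current table size, 0 until first grow */

/*
 * Replace the table with one twice as big and re-insert every process
 * at pid mod new size.  Because the new size is a multiple of the old
 * one, pids in distinct slots stay in distinct slots, so the rehash
 * cannot collide.
 */
static int grow_table (void)
{
    int new_size = psize ? psize * 2 : 128;
    struct pinfo *new_table =
        (struct pinfo *) calloc (new_size, sizeof (struct pinfo));
    int i;

    if (new_table == NULL)
        return -1;           /* out of memory; keep the old table */
    for (i = 0; i < psize; i++)
        if (proc_table[i].pid != 0)
            new_table[proc_table[i].pid % new_size] = proc_table[i];
    free (proc_table);
    proc_table = new_table;
    psize = new_size;
    return 0;
}

int main (void)
{
    grow_table ();           /* initial 128-slot table */
    proc_table[1000 % psize].pid = 1000;
    return grow_table ();    /* remap into a 256-slot table */
}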
Opinions / other ideas?
--
Geoffrey Noer
noer AT cygnus DOT com