Mail Archives: djgpp/2000/02/07/03:53:32
On Sun, 6 Feb 2000, ayoung wrote:
> Within the listed thread packages, signals via DPMI are used to allow
> a pre-emptive scheduler.
I'm not sure exactly what you mean by ``signals via DPMI''. AFAIK,
threading implementations that support DJGPP run the scheduler from
the SIGALRM handler and use setitimer to trigger SIGALRM. If
that's what you mean, this has nothing to do with DPMI: the DJGPP
signal-handling machinery doesn't exploit DPMI features, it actually
tries to avoid them. That's what makes this machinery so stable and
portable between different DPMI servers/environments.
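Here's a minimal sketch of that pattern, just to make it concrete (the
names are mine, not from any particular thread package; the handler only
counts ticks at the point where a real scheduler would switch contexts):

  #include <stdio.h>
  #include <signal.h>
  #include <sys/time.h>

  static volatile sig_atomic_t ticks;

  static void scheduler_tick(int sig)
  {
    (void)sig;
    ticks++;                          /* a real scheduler would pick the next thread here */
    signal(SIGALRM, scheduler_tick);  /* re-arm, in case the handler gets reset on delivery */
  }

  int main(void)
  {
    struct itimerval it;

    signal(SIGALRM, scheduler_tick);
    it.it_interval.tv_sec = 0;
    it.it_interval.tv_usec = 55000;   /* about one PC clock tick; see below about resolution */
    it.it_value = it.it_interval;
    setitimer(ITIMER_REAL, &it, NULL);

    while (ticks < 20)
      ;                               /* the loop reads memory, so delivery can happen */

    printf("got %d ticks\n", (int)ticks);
    return 0;
  }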
> Does anyone know the context switching overheads of this
> construction and how high could the timer resolution be pushed ?
The current DJGPP implementation of interval timers doesn't directly
support speeding up the timer tick interrupt, although you could, of
course, do that externally (if you do, you'd need to change the timer
tick handler provided by the DJGPP library). So, you are limited to
the normal 18.2-Hz heartbeat of the PC.
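If you do go the external route, the hardware side of it is reprogramming
channel 0 of the 8253/8254 timer chip. Here's a rough sketch of just that
part; adapting the library's tick handler and keeping the BIOS/DOS
time-of-day count sane (e.g. by chaining to the old handler at the
original 18.2-Hz rate) is up to you and is not shown:

  #include <pc.h>   /* outportb() */

  /* Reprogram PIT channel 0.  The input clock is 1193182 Hz; a divisor
     of 0 means 65536, i.e. the standard 18.2-Hz rate.  Call with hz == 0
     to restore the default rate. */
  void set_timer_rate(unsigned hz)
  {
    unsigned divisor = hz ? (unsigned)(1193182UL / hz) : 0;

    outportb(0x43, 0x36);                  /* channel 0, lo/hi byte, mode 3 */
    outportb(0x40, divisor & 0xFF);        /* low byte of the divisor */
    outportb(0x40, (divisor >> 8) & 0xFF); /* high byte of the divisor */
  }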
As for the context-switch overheads, this is mostly irrelevant to the
DJGPP implementation of signals, because the signal handler is not run
from the timer interrupt handler. The DPMI spec imposes grave
limitations on what can be done from a hardware interrupt handler, so
calling user code from there would be a very bad idea.
Instead, the hardware interrupt handler (the timer tick handler, in
this case) invalidates the application's DS selector by setting its
limit to 4KB, the null page; then it simply does an IRET. The very
next time the application tries to access any of its data or stack, it
triggers a GPF. The GPF handler, installed by the library startup
code, realizes that the GPF was produced intentionally, restores the
original DS limit, and simply CALLs the user-defined signal handler.
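To put names on the moving parts: the selector involved is the one
_my_ds() returns, and the limit is the kind of thing DPMI function 0008h
(DJGPP's __dpmi_set_segment_limit) manipulates. The snippet below only
queries the current state; it is not the library's actual code, which
lives in assembly:

  #include <stdio.h>
  #include <dpmi.h>
  #include <sys/segments.h>

  int main(void)
  {
    int ds = _my_ds();

    printf("DS selector: %#x, current limit: %#lx\n",
           ds, (unsigned long)__dpmi_get_segment_limit(ds));

    /* The effect described above is as if the timer-tick handler did
         __dpmi_set_segment_limit(ds, 0xFFF);
       and the GPF handler later restored the saved limit the same way,
       before CALLing the user's signal handler. */
    return 0;
  }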
This removes many limitations from what a signal handler can do,
because it runs in the normal application context, but it does have
one unpleasant side-effect: the actual signal delivery is deferred
until the application touches some of its data. This means that if a
program is parked inside a DOS call (e.g., waits for keyboard input)
or inside a tight register-based loop, the signal will wait until the
DOS call returns or the loop ends.
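You can see the deferral with a toy test like this one (the loop count is
arbitrary; pick something that keeps the register-only loop spinning well
past the one-second alarm on your machine):

  #include <stdio.h>
  #include <signal.h>
  #include <unistd.h>
  #include <time.h>

  static volatile sig_atomic_t got_alarm;

  static void handler(int sig)
  {
    (void)sig;
    got_alarm = 1;
  }

  int main(void)
  {
    time_t start = time(NULL);

    signal(SIGALRM, handler);
    alarm(1);

    /* A register-only delay loop: no data is read or written, so the
       shrunk DS limit never faults and the signal stays pending. */
    __asm__ __volatile__ (
      "movl $500000000, %%ecx\n\t"
      "1: loop 1b"
      : : : "ecx");

    /* Reading `got_alarm' here is the first data access after the loop;
       that is the point where the library can finally deliver SIGALRM,
       even though the alarm expired about one second in. */
    printf("flag=%d after %ld seconds in the loop\n",
           (int)got_alarm, (long)(time(NULL) - start));
    return 0;
  }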
In other words, rescheduling can be put off for quite some time,
depending on what the foreground thread does at any given moment.
I hope this background helps you understand the issues involved and
ask your questions in a way that lets them be meaningfully answered.
(Or maybe I already told you all you needed to know ;-)