Mail Archives: djgpp/2001/07/13/15:24:40
> From: Eric Rudd <rudd AT cyberoptics DOT com>
> Newsgroups: comp.os.msdos.djgpp
> Date: Fri, 13 Jul 2001 13:26:35 -0500
>
> Eli Zaretskii wrote:
>
> > The problem is that `uclock' returns a value of the type uclock_t, which is
> > defined as long long. That's a 64-bit quantity, and since `uclock's
> > resolution is 840 nanoseconds, the 64-bit type overflows in about 48 hours.
>
> I haven't tried using uclock() to time long intervals, but overflow of the
> 64-bit quantity is not the problem, since 840ns * 2^64 = 1.5E+13 seconds,
> or roughly 500,000 years.
Sorry, I should have looked at the sources before talking.
The problem is not overflow, of course, but the fact that `uclock'
doesn't look at the system date, for performance reasons. So if you
time a very long interval by calling `uclock' once before and once
after, you cannot tolerate more than one midnight between these two
events, because there's no way of knowing how many midnights passed.
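For concreteness, a minimal sketch of timing an interval that way, using
the uclock_t and UCLOCKS_PER_SEC declarations from DJGPP's <time.h>; the
comment marks where the midnight limitation bites:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        uclock_t start = uclock();

        /* ... the work being timed ... */

        uclock_t stop = uclock();

        /* uclock() does not consult the system date, so if more than
           one midnight passes between the two calls the difference is
           wrong: a single wrap can be detected, but there is no way to
           know how many occurred. */
        double seconds = (double)(stop - start) / UCLOCKS_PER_SEC;
        printf("elapsed: %f seconds\n", seconds);
        return 0;
    }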
> Back when people were first having problems with uclock(), I switched over to
> using an rdtsc-based timer. It is not as portable as uclock(), so I haven't
> proposed it to the DJGPP developers as a replacement, but it has performed
> well for me.
>
> rdtsc measures in processor clocks instead of seconds, but one can get around
> that problem by timing the rdtsc timer against rawclock() for a couple of
> ticks on the first call to the timer, which enables one to estimate the CPU
> clock rate.
>
> If there is interest in this style of timing, I would consider submitting the
> code.
I'm not sure I understand the issues (does someone really need to
time long periods with sub-microsecond resolution? is RDTSC accurate
enough, given the calibration of the processor speed?), but if you
think it will be useful, please send the code.
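
For reference, here is a rough sketch of the calibration approach Eric
describes: count CPU clocks with RDTSC across a few rawclock() ticks (the
18.2 Hz BIOS tick counter DJGPP exposes through <bios.h>) to estimate the
CPU clock rate. This is not Eric's code; the helper name estimate_cpu_hz()
is invented for illustration, and the inline asm assumes GCC.

    #include <stdio.h>
    #include <bios.h>

    static unsigned long long rdtsc(void)
    {
        unsigned long lo, hi;
        __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long)hi << 32) | lo;
    }

    /* Count CPU clocks across nticks BIOS ticks (each 1/18.2065 second),
       ignoring the midnight wrap of rawclock() for this sketch. */
    static double estimate_cpu_hz(int nticks)
    {
        unsigned long t0 = rawclock();
        while (rawclock() == t0)           /* sync to a tick boundary */
            ;
        unsigned long long c0 = rdtsc();
        unsigned long t1 = rawclock();
        while (rawclock() < t1 + nticks)   /* wait nticks more ticks */
            ;
        unsigned long long c1 = rdtsc();
        return (double)(c1 - c0) * 18.2065 / nticks;
    }

    int main(void)
    {
        double hz = estimate_cpu_hz(4);
        printf("estimated CPU clock: %.0f Hz\n", hz);
        return 0;
    }

Calibrating over just a couple of ticks on the first call, as Eric
suggests, keeps the start-up cost to a fraction of a second.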