Mail Archives: djgpp-workers/2003/03/10/15:46:19
> > It doesn't improve - the scaling is based on the initial calibration loop.
> > If you are unlucky and get a 5% error on the calibration, then the long
> > term times will be off by 5% - which could make it less accurate for large
> > times than the 18.2 tics/sec clock.
>
> Can't we use a hybrid?
We could - it just makes it more complex and a little slower on each
call. It will also cause some potential "jumps" in the time values
if we aren't careful: each time we adjust the scale between the
rdtsc counter and the uclock() return value, we can break things.
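As a minimal sketch (not the actual libc source; all names here are
illustrative), this is roughly the scheme under discussion: rdtsc
scaled by a one-shot startup calibration. A fixed error in the
calibrated frequency grows linearly with elapsed time, and changing
the ratio later without also rebasing makes consecutive return
values jump:

#include <stdint.h>

#define UCLOCKS_PER_SEC 1193180ULL   /* PIT input frequency */

static uint64_t base_tsc;       /* rdtsc value at calibration time */
static uint64_t calibrated_hz;  /* CPU frequency from the calibration loop */

static inline uint64_t rdtsc (void)
{
  uint32_t lo, hi;
  __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
  return ((uint64_t)hi << 32) | lo;
}

uint64_t my_uclock (void)
{
  uint64_t elapsed = rdtsc () - base_tsc;
  /* Naive conversion; a real version must split the multiply to
     avoid 64-bit overflow on long runs. */
  return elapsed * UCLOCKS_PER_SEC / calibrated_hz;
}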
For short timings (less than a few seconds) the current code does
OK. If people need longer timings, would they be using clock() anyway?
I don't know, which is why I started this discussion instead of just
submitting something.
> 1. Use the tics in conjunction with rdtsc for large enough delays.
> 2. For sufficiently large values, use the time to recalibrate.
> 2b. Possibly use a moving average for the new value.
The way uclock() is designed, the internal bits 47:16 should always
be the current BIOS time tic counter. (Ignore midnight rollover for
now.) If we get called and that's not the case, we could change our
scale divider to make it so. Unfortunately, we don't know where we
are within the current tick, so we risk messing up the next call if
we guess wrong. At this precision over long timings (one 18.2 Hz
tick in 24 hours is 1/(18.2 * 86400), about 0.6 parts per million),
we would eventually move the ratio to better than one part per
million accuracy.
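As a rough sketch of points 2 and 2b above (all names are
illustrative; this is not code from the tree): compare the scaled
rdtsc value against the BIOS tick count now and then, and fold the
disagreement into the ratio with a moving average, so the estimate
drifts toward the truth instead of snapping:

#include <stdint.h>

extern uint64_t rdtsc (void);
extern uint32_t bios_tick_count (void);  /* 18.2 Hz dword at 0040:006C */

static uint64_t base_tsc;
static uint32_t base_ticks;
static double   tsc_per_tick;   /* from the initial calibration loop */

void maybe_recalibrate (void)
{
  uint32_t ticks = bios_tick_count () - base_ticks;
  if (ticks < 18 * 60)          /* wait ~1 minute so sub-tick phase */
    return;                     /* error can't dominate the estimate */

  double fresh = (double)(rdtsc () - base_tsc) / ticks;

  /* 2b: exponential moving average - blend rather than snap, so the
     ratio converges without large jumps.  The caller must still
     rebase so the returned time stays monotonic. */
  tsc_per_tick = 0.9 * tsc_per_tick + 0.1 * fresh;
}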
> > Essentially it's trying to figure out what
> > your CPU frequency is.
>
> Hmm... I'm not an expert on CPUs and their frequencies, but what
> happens on those mobile CPUs which change frequency? Or Transmeta
> CPUs? Perhaps 1. above would help some with this?
If you calibrate at one frequency and then run at another, it's not
going to work very well. Sounds like a good documentation item.
And I can test that on my laptop ...