Mail Archives: djgpp-workers/2003/03/10/01:35:47
[snip]
> So, trying to use a single tic to calibrate the clock would potentially
> add lots of inaccuracy. But waiting more tics causes the first call to be
> slower. Even 9 tics (1/2 second) frequently ends up with a 3.3% error;
> 18 tics is frequently a 1.3% error.
> So my question is - how accurate should we try and be?
> On a 60 MHz system (slowest ever to support rdtsc), the divider would be
> 50 - which means a potential 2% error - but this seems within the probable
> calibration loop error.
>
> Comments? How about a 2% target, with roughly a 1/2 second calibration?
> Too long?
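For context, the calibration being discussed boils down to something like the
following sketch. This is mine, not the code from the other email; the helper
names (read_tsc, tsc_counts_per_tic) are made up, it ignores the 2K/XP
behaviour entirely, and it is only meant to illustrate the tics-vs-accuracy
trade-off of counting rdtsc cycles across BIOS timer tics (the counter in the
BIOS data area at linear address 0x46C, which advances about 18.2 times per
second):

/* Hypothetical calibration sketch - not the actual uclock() code.
   Counts rdtsc cycles across `tics' BIOS timer tics; with tics == 9
   that is roughly half a second of waiting. */

#include <go32.h>
#include <sys/farptr.h>

static unsigned long long read_tsc(void)
{
  unsigned int lo, hi;
  __asm__ __volatile__ ("rdtsc" : "=a" (lo), "=d" (hi));
  return ((unsigned long long)hi << 32) | lo;
}

static unsigned long long tsc_counts_per_tic(int tics)
{
  unsigned long t0, t1;
  unsigned long long c0, c1;
  int i;

  /* Wait for a tic boundary so we start right on an edge. */
  t0 = _farpeekl(_dos_ds, 0x46c);
  do
    t1 = _farpeekl(_dos_ds, 0x46c);
  while (t1 == t0);

  c0 = read_tsc();
  for (i = 0; i < tics; i++)
    {
      t0 = t1;
      do
        t1 = _farpeekl(_dos_ds, 0x46c);
      while (t1 == t0);
    }
  c1 = read_tsc();

  return (c1 - c0) / (unsigned long long)tics;
}

The more tics you wait, the smaller the relative error from catching the
first and last edges imprecisely, which is exactly the accuracy-vs-startup
trade-off above.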
My 2 cents worth:-
1) At least it is better than not working at all on 2K/XP.
2) If I time an interval of, say, a few hours with uclock(), is the error
still roughly 2%, or is it less? I expect it to go to zero as the elapsed
time between calls increases.
3) Looking quickly at the code example in the other email, is it possible to
perform the calibration only once, on the first call to uclock(), instead of
on every call? (A rough sketch of that calibrate-once idea follows after this
list.)
4) BTW, remember that the libc docs need to be updated to include notes /
info on the 2K/XP delays.
5) In the libc docs there is reference to 'UCLOCKS_PER_SEC' as per:-
" This function returns the number of uclock ticks since an arbitrary
time, actually, since the first call to `uclock', which itself returns
zero. The number of tics per second is `UCLOCKS_PER_SEC' (declared in
the `time.h' header file)."
My query is: what would this value be, and how is it updated depending on
which OS you run the app on?
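On points 3) and 5), something like the following is what I have in mind.
Again this is only a rough, self-contained sketch: `calibrated' and
`calibrate_once' are made-up names, not the libc internals, while uclock_t,
uclock() and UCLOCKS_PER_SEC are the existing declarations from time.h. It
shows a static flag guarding the expensive ~1/2 second calibration so it runs
only once, and how application code converts a uclock() difference to seconds
with UCLOCKS_PER_SEC, whatever value that macro ends up having:

#include <stdio.h>
#include <time.h>

/* Hypothetical one-time setup: the static flag makes sure the
   expensive (~1/2 second) calibration runs only on first use. */
static int calibrated = 0;

static void calibrate_once(void)
{
  if (!calibrated)
    {
      /* ... run the calibration loop here, exactly once ... */
      calibrated = 1;
    }
}

int main(void)
{
  uclock_t start, stop;
  volatile long sink = 0;
  long i;

  calibrate_once();   /* or fold this into the first uclock() call */

  start = uclock();
  for (i = 0; i < 1000000L; i++)  /* something to time */
    sink += i;
  stop = uclock();

  /* UCLOCKS_PER_SEC converts uclock ticks to seconds, whatever
     its value is on the OS the program runs on. */
  printf("elapsed: %.6f seconds (sink %ld)\n",
         (double)(stop - start) / UCLOCKS_PER_SEC, (long)sink);
  return 0;
}

Doing the calibration lazily inside uclock() itself would keep the API
unchanged; the only visible cost is that the very first call blocks for
roughly half a second.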
I have to go out tonight so I will not be able to try it out. I will give it
a try in the next few days.
Regards,
Andrew