Mail Archives: djgpp-workers/2003/03/10/11:44:13
> 2) If I try uclock() over say a few hours is it still approx rounded up 2%
> or is it less? I expect it to go to zero as the elapsed time between calls
> increases.
It doesn't improve - the scaling is based entirely on the initial calibration
loop. If you are unlucky and get a 5% error in the calibration, then the
long-term times will be off by 5% - which could make it less accurate for
long intervals than the 18.2 tics/sec clock.
We could check for this and re-calibrate over the longer interval, so it
would essentially re-calibrate each time you called uclock(). But I wasn't
sure it was worth it.
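Roughly, as a sketch (not the actual libc code; the names read_tsc,
tsc_base and tsc_scale are made up), the read path is just a TSC delta
times a fixed scale, so whatever relative error the calibration left in
the scale shows up unchanged no matter how long the interval is. The
comment marks where a per-call re-calibration against the 18.2 tics/sec
clock could refine it:

/* Read the CPU time-stamp counter (GCC inline asm, 32-bit x86). */
static unsigned long long read_tsc(void)
{
  unsigned long long t;
  __asm__ __volatile__ ("rdtsc" : "=A" (t));
  return t;
}

static unsigned long long tsc_base;  /* TSC at the first uclock() call */
static double tsc_scale;             /* uclock ticks per TSC count, set
                                        once by the calibration loop    */

static unsigned long long sketch_uclock(void)
{
  /* A 5% error in tsc_scale means a 5% error here, whether the delta
     spans one second or several hours.  Re-measuring tsc_scale against
     the BIOS 18.2 tics/sec clock over this (by now much longer)
     interval is where a per-call re-calibration would go. */
  return (unsigned long long)((read_tsc() - tsc_base) * tsc_scale);
}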
> 3) Looking at the code example quickly in the other email is it possible to
> only perform the calibration once on the first call to uclock() instead of
> on every call?
The calibration is only done on the first call, and it waits a worst case
of 6 tics (0.32 seconds). Essentially it's trying to figure out what
your CPU frequency is.
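Here is a minimal sketch of that one-shot calibration, assuming DJGPP's
_farpeekl()/_dos_ds (from <sys/farptr.h> and <go32.h>) to read the BIOS
tick count at 0040:006Ch; the names and the exact loop are illustrative,
not what the real code does:

#include <time.h>        /* UCLOCKS_PER_SEC */
#include <go32.h>        /* _dos_ds         */
#include <sys/farptr.h>  /* _farpeekl()     */

#define BIOS_TICK_ADDR 0x46c   /* BIOS data area: tics since midnight */

static unsigned long long read_tsc(void)
{
  unsigned long long t;
  __asm__ __volatile__ ("rdtsc" : "=A" (t));
  return t;
}

static double tsc_scale;   /* uclock ticks per TSC count      */
static int    calibrated;  /* ensures the loop runs only once */

static void calibrate_once(void)
{
  unsigned long start, edge;
  unsigned long long c0, c1;

  if (calibrated)
    return;

  /* Sync to a tick edge (up to ~1 tic), then time 5 whole tics:
     worst case about 6 tics, roughly a third of a second.
     (The sketch ignores the midnight wrap of the tick counter.) */
  start = _farpeekl(_dos_ds, BIOS_TICK_ADDR);
  while ((edge = _farpeekl(_dos_ds, BIOS_TICK_ADDR)) == start)
    ;
  c0 = read_tsc();
  while (_farpeekl(_dos_ds, BIOS_TICK_ADDR) < edge + 5)
    ;
  c1 = read_tsc();

  /* 5 tics of the 18.2 tics/sec clock, expressed in uclock ticks,
     divided by the TSC counts that elapsed over the same interval. */
  tsc_scale = (5.0 / 18.2) * (double)UCLOCKS_PER_SEC / (double)(c1 - c0);
  calibrated = 1;
}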
> 4) BTW remember the libc docs need to be updated to include notes / info on
> the 2K/XP delays
I know, but I don't want to document something if we are going to change it.
The current code is more of a proof of concept/prototype for discussion.
> 5) In the libc docs there is reference to 'UCLOCKS_PER_SEC' as per:-
> " This function returns the number of uclock ticks since an arbitrary
> time, actually, since the first call to `uclock', which itself returns
> zero. The number of tics per second is `UCLOCKS_PER_SEC' (declared in
> the `time.h' header file)."
>
> My query is what would this be and how is it updated depending on what OS
> you run the app on?
We create a scaling factor internally in uclock() which converts the
return value from rdtsc() to a value that matches UCLOCKS_PER_SEC, so there
is no need to update it.
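For example, application code only ever divides by UCLOCKS_PER_SEC; the
OS- and CPU-dependent scaling stays hidden inside uclock(). A small usage
sketch, assuming DJGPP's uclock()/uclock_t from time.h:

#include <stdio.h>
#include <time.h>

int main(void)
{
  uclock_t start, stop;
  volatile double x = 0.0;
  long i;

  start = uclock();          /* first call defines time zero */
  for (i = 0; i < 1000000L; i++)
    x += i * 0.5;            /* some work to time */
  stop = uclock();

  /* UCLOCKS_PER_SEC is a compile-time constant; the OS-dependent
     scaling happens inside uclock() itself. */
  printf("elapsed: %.6f seconds\n",
         (double)(stop - start) / UCLOCKS_PER_SEC);
  return 0;
}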