Date: Wed, 23 Jan 2002 00:09:53 -0800
From: Tim Prince
To: Ralf Habacker
CC: Cygwin
Subject: Re: gettimeofday() does not return usec resolution

Ralf Habacker wrote:
> Hi,
>
> for kde2 we are building a profiler lib for profiling complex C++
> applications (currently found in the CVS area of kde-cygwin.sf.net,
> http://cvs.sourceforge.net/cgi-bin/viewcvs.cgi/kde-cygwin/profiler/),
> using the high resolution timer of native Windows (about usec
> resolution).
>
> This lib could be used for easy profiling of any C++ application and
> of libs like cygwin.dll and so on.
>
> While adding Unix (and Cygwin) support to this lib, I noticed that
> the gettimeofday() function returns only a resolution of 10ms (the
> time slice resolution), whereas most other Unix OSes return a
> resolution in the usec region. I have appended a test case for this.
>
> Has anyone addressed this problem already? I have looked in the
> cygwin list and found only this topic:
> http://sources.redhat.com/ml/cygwin/2001-12/msg00201.html
>
> In http://www-106.ibm.com/developerworks/library/l-rt1/ there are
> detailed instructions on how to use the high resolution counter.
>
> $ cat timeofday.c
> #include <stdio.h>
> #include <sys/time.h>
> #include <unistd.h>
>
> int main() {
>     struct timeval tp;
>     long a, b;
>
>     gettimeofday(&tp, 0);
>     a = ((unsigned)tp.tv_sec) * 1000000 + ((unsigned)tp.tv_usec);
>     printf("timestamp (us): %ld\n", a);
>
>     usleep(1000);
>     gettimeofday(&tp, 0);
>     b = ((unsigned)tp.tv_sec) * 1000000 + ((unsigned)tp.tv_usec);
>     printf("timestamp (us): %ld (diff) %ld\n", b, b - a);
>
>     return 0;
> }
>
> Ralf Habacker

This is a continuing source of consternation, which may be considered
OT for cygwin. I suspect that linux for ia32 tends to use one of the
low-level cpu tick registers to obtain the microsecond field; I have
not examined current source. I don't know that it is possible to
guarantee how well the zero of the microsecond field coincides with
the second ticks.

On many ia chips it is possible to use the rdtsc instruction directly
for timing intervals at sub-microsecond resolution. A calibration run
is required to measure the tick frequency against the lower-resolution
time-of-day clock; linux and Windows, of course, do something along
this line when booting up.

Any working Windows will report usable results via the
QueryPerformance APIs, usually with better than 10 microsecond
resolution, and it seems reasonable for cygwin to base its functions
directly on Windows APIs. On many chips the direct use of rdtsc can
produce better than 1 microsecond resolution, but then the application
takes on the burden of dealing with various odd hardware combinations,
rather than expecting the hardware vendor to make Windows work.

--
Tim Prince
tprince AT computer DOT org
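
For concreteness, the rdtsc calibration described above might look
something like the following minimal sketch. It assumes GCC inline
assembly on an ia32 (or x86-64) machine; the helper name rdtsc_read
and the one-second calibration window are illustrative, not taken from
any particular library.

/* rdtsc_demo.c -- sketch: calibrate the cpu time stamp counter against
 * gettimeofday(), then time an interval in microseconds.
 * Assumes GCC inline asm on ia32/x86-64; names are illustrative. */
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

/* Read the 64-bit time stamp counter via the rdtsc instruction. */
static unsigned long long rdtsc_read(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
    struct timeval t0, t1;
    long elapsed_us;
    unsigned long long c0, c1;
    double ticks_per_us;

    /* Calibration run: count ticks across about one second of the
     * lower-resolution time-of-day clock. */
    gettimeofday(&t0, 0);
    c0 = rdtsc_read();
    do {
        gettimeofday(&t1, 0);
        elapsed_us = (t1.tv_sec - t0.tv_sec) * 1000000L
                     + (t1.tv_usec - t0.tv_usec);
    } while (elapsed_us < 1000000L);
    c1 = rdtsc_read();

    ticks_per_us = (double)(c1 - c0) / (double)elapsed_us;
    printf("calibrated: %.1f ticks/us\n", ticks_per_us);

    /* Time a short interval with the calibrated rate. */
    c0 = rdtsc_read();
    usleep(1000);                   /* stand-in for the code under test */
    c1 = rdtsc_read();
    printf("measured interval: %.3f us\n",
           (double)(c1 - c0) / ticks_per_us);
    return 0;
}

Note that ticks/us is numerically the cpu clock in MHz, and that on
laptops with variable clock speed, or on multiprocessor boxes where
the counters differ between cpus, the calibration can drift; that is
exactly the "odd hardware combinations" burden mentioned above.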
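
On the Windows side, the QueryPerformance APIs need no
application-level calibration run, since QueryPerformanceFrequency
reports the counter rate directly. A minimal sketch, using only the
documented QueryPerformanceCounter/QueryPerformanceFrequency calls;
how it is built as a native Windows program is left to the reader:

/* qpc_demo.c -- sketch: microsecond interval timing via the Windows
 * QueryPerformance APIs. */
#include <stdio.h>
#include <windows.h>

int main(void)
{
    LARGE_INTEGER freq, a, b;

    /* Counter rate in ticks per second; returns FALSE only on hardware
     * with no usable high-resolution counter. */
    if (!QueryPerformanceFrequency(&freq)) {
        fprintf(stderr, "no high-resolution counter available\n");
        return 1;
    }

    QueryPerformanceCounter(&a);
    Sleep(1);                       /* stand-in for the code under test */
    QueryPerformanceCounter(&b);

    printf("elapsed: %.3f us\n",
           (double)(b.QuadPart - a.QuadPart) * 1e6
           / (double)freq.QuadPart);
    return 0;
}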