Hi Simon,
On 02.10.20 15:24, Simon Himmelbauer wrote:
> Anyways, regarding your timing problems, how exactly did you measure them? My current setup consists of taking a start and stop timestamp via a timer-connection but I only seem to get milliseconds-precision (I tried using elapsed_us() but it seems to always produce "microsecond"-values ending with three zeroes.). I currently run everything inside QEMU so I don't know whether real hardware will simply fix this.
Apart from the lack of precision, the 'Timer::Session::elapsed_ms()' RPC function induces unwelcome overhead, in particular the back-and-forth context switches between your component and the timer driver, and the timer driver's interaction with the physical timer device.
For capturing timing behavior at a higher precision and with much less overhead, you may find the 'Trace::timestamp()' utility useful. The function returns a platform-dependent timer tick value. On x86, this is the time-stamp counter (TSC); on ARM, it returns a CPU counter value.
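For example, here is a rough sketch of measuring a section of code in raw ticks (the function 'do_work' is merely a placeholder for whatever you want to measure):

  #include <trace/timestamp.h>

  Genode::Trace::Timestamp const start = Genode::Trace::timestamp();

  do_work();   /* placeholder for the code under test */

  Genode::Trace::Timestamp const ticks = Genode::Trace::timestamp() - start;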
As the value returned by 'Trace::timestamp()' depends on the platform (e.g., the CPU frequency), you will need to apply some kind of calibration. To determine the calibration factor, you may use the 'Timer::elapsed_ms' or 'Timer::elapsed_us' functions before running your actual tests, e.g., by measuring the number of timestamp ticks that elapse within one second, as sketched below.
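A minimal calibration sketch, assuming your component has its 'Genode::Env' at hand (the helper name 'ticks_per_ms' and the busy-waiting loop are just illustrative, not a ready-made recipe):

  #include <timer_session/connection.h>
  #include <trace/timestamp.h>

  static Genode::uint64_t ticks_per_ms(Genode::Env &env)
  {
    Timer::Connection timer(env);

    Genode::Trace::Timestamp const ts_start = Genode::Trace::timestamp();
    Genode::uint64_t         const ms_start = timer.elapsed_ms();

    /* let roughly one second of wall-clock time pass */
    while (timer.elapsed_ms() - ms_start < 1000) { }

    Genode::Trace::Timestamp const ts_end = Genode::Trace::timestamp();

    /* timestamp ticks per millisecond */
    return (ts_end - ts_start) / 1000;
  }

With this factor at hand, you can convert the tick deltas taken during your actual measurements into milliseconds or microseconds.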
As a heads-up, please regard 'Trace::timestamp()' values with a healthy dose of skepticism. In our experience, the values are not always perfectly proportional to real-world time. In particular,
- On the ARMv6-based Raspberry Pi, the counter value is increased only if the CPU is not idle.
- On some x86 platforms, the TSC values are not perfectly stable. In particular, TSC values taken on different CPU cores cannot be assumed to correlate.
You can find the implementations of the 'Trace::timestamp' function in the base/include/spec/<arch>/trace/timestamp.h header, where <arch> is the CPU architecture (like arm_64 or x86_64).
Cheers
Norman