Posted Jan 21, 2006 1:00 UTC (Sat) by roelofs
Parent article: The high-resolution timer API
On 64-bit systems, a ktime_t is really just a 64-bit integer value in nanoseconds. On 32-bit machines, however, it is a two-field structure: one 32-bit value holds the number of seconds, and the other holds nanoseconds. The order of the two fields depends on whether the host architecture is big-endian or not; they are always arranged so that the two values can, when needed, be treated as a single, 64-bit value.
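For concreteness, here is roughly what that layout looks like; a minimal userspace sketch modeled on the kernel's ktime_t union (the type name and the endianness test here are illustrative, not the kernel's exact ones):

    #include <stdint.h>

    /* Two-field view for 32-bit machines, single 64-bit view everywhere. */
    typedef union {
        int64_t tv64;                  /* the single, 64-bit value */
        struct {
    #if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
            int32_t sec, nsec;         /* seconds first on big-endian */
    #else
            int32_t nsec, sec;         /* nanoseconds first on little-endian */
    #endif
        } tv;                          /* the two-field, 32-bit view */
    } my_ktime_t;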
Perhaps I'm missing something, but I was always under the impression that the number of nanoseconds in a second (10^9) was not actually a power of two. Even if it were, it's not 2^32...
So how do we reconcile the two-32-bit-values-as-one-64-bit-value problem? Does the nanoseconds half count up monotonically for 4.3 seconds, at which point some weird adjustment is made? (No, that would seem to be incompatible with the 64-bit view of things.) Is the "seconds" half not really seconds but actually a count of 4.3[etc.]-second intervals? Or is the "nanoseconds" half not really nanoseconds but actually a count of 1/4.3[etc.]-nanosecond super-micro-jiffies?
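For the record, here's the arithmetic behind that 4.3-second figure (a standalone snippet, not from the article or the kernel):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t ns_per_sec  = 1000000000ULL;  /* 10^9 */
        uint64_t field_range = 1ULL << 32;     /* 2^32 = 4294967296 */

        /* A 32-bit nanoseconds field can only count up to 2^32 ns: */
        printf("2^32 ns = %.3f s\n", (double)field_range / ns_per_sec);
        /* prints: 2^32 ns = 4.295 s */
        return 0;
    }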
Am I missing something obvious? (I didn't get much sleep last night, so I might be...)