Why not units of 2^-32 seconds?
Posted May 23, 2005 16:25 UTC (Mon) by spitzak (guest, #4593)
In reply to: A new kernel timer API by giraffedata
Parent article: A new kernel timer API
That actually sounds like a useful and natural unit to use, and certainly easier for programmers to remember. The upper 32 bits would match the Unix clock's whole-seconds count, and since the fraction is a binary one, conversion to a floating-point number of seconds involves no rounding errors.
Is there some good reason why a power of ten should be used? Is it because of rounding errors from times specified in decimal numbers of seconds?
Why not units of 2^-32 seconds?
Posted May 24, 2005 2:40 UTC (Tue) by giraffedata (guest, #1954) [Link]
Natural for whom? The computer?
Remember that we're talking about an external interface here -- the question is in what units would a user of the timer facility want to specify a duration? Virtually nobody measures time in binary units; we all think of time in milliseconds, nanoseconds, etc.
The Unix time_t type (which I think is what you're referring to as the Unix clock) doesn't actually figure in anywhere here -- this is a value that specifies a duration, not a point in time; and if it ever gets added to a point in time, that time is in the kernel internal format, which is a count of clock ticks.
