
A new kernel timer API

Posted May 19, 2005 17:50 UTC (Thu) by brouhaha (subscriber, #1698)
In reply to: A new kernel timer API by kleptog
Parent article: A new kernel timer API

I believe we're reaching the point where higher clock speeds are useless and we have to start doing more per clock cycle,
People have been saying that for at least fifteen years, and the clock rates keep going up. As Feynman said, "there's plenty of room at the bottom". Eventually we will hit the physical limits, but we aren't there yet.

My point wasn't so much that we need 1 ps timing precision as that we may well need better than 1 ns. There are three orders of magnitude between those. If we don't want to use 1 ps, we could certainly use 10 ps, 100 ps, or even 37.2 ps as the unit. But 1 ps seems somewhat more convenient.
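
For concreteness, here's the range arithmetic behind that choice of unit; a quick illustrative calculation (not part of the original comment) showing how long a span a 32-bit or 64-bit counter covers at each granularity:

    #include <stdio.h>

    int main(void)
    {
        /* Candidate granularities, in seconds. */
        const char *name[] = { "1 ns", "100 ps", "10 ps", "1 ps" };
        const double unit[] = { 1e-9, 1e-10, 1e-11, 1e-12 };

        for (int i = 0; i < 4; i++) {
            /* 2^32 and 2^64 units of the given granularity. */
            double span32 = 4294967296.0 * unit[i];
            double span64 = 18446744073709551616.0 * unit[i];
            printf("%6s: 32 bits span %.4g s, 64 bits span %.4g s\n",
                   name[i], span32, span64);
        }
        return 0;
    }

At 1 ns a 32-bit counter wraps after about 4.3 seconds; at 1 ps, after about 4.3 milliseconds, so a 64-bit quantity (about 213 days at 1 ps) is really what's on the table.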



A new kernel timer API

Posted May 19, 2005 18:42 UTC (Thu) by hamjudo (guest, #363) (3 responses)

But you got your units confused. The smallest power-of-ten unit that fits is the nanosecond. The smallest power-of-two unit is 2^-32 seconds, which is about 233 picoseconds.

To the parent poster: picosecond timers will be useful, even on chips that are many light-picoseconds across. Note how well the Network Time Protocol can synchronize clocks to better than millisecond precision, even though the systems themselves are many light-milliseconds apart (assuming an appropriate network interconnect).
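
The NTP comparison comes down to the classic four-timestamp exchange, which cancels out the (assumed symmetric) path delay; a minimal sketch in C of that calculation (the function names and nanosecond units are illustrative):

    #include <stdint.h>

    /* Classic NTP clock comparison: t1 = client send, t2 = server
     * receive, t3 = server send, t4 = client receive, all in ns. */
    static int64_t ntp_offset(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
    {
        /* How far the local clock is behind the server's, assuming the
         * outbound and return paths take equally long. */
        return ((t2 - t1) + (t3 - t4)) / 2;
    }

    static int64_t ntp_delay(int64_t t1, int64_t t2, int64_t t3, int64_t t4)
    {
        /* Round-trip time minus the server's processing time; the real
         * path's asymmetry is what bounds the achievable precision. */
        return (t4 - t1) - (t3 - t2);
    }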

A new kernel timer API

Posted May 20, 2005 23:51 UTC (Fri) by giraffedata (guest, #1954) (2 responses)

But you got your units confused. The smallest power-of-ten unit that fits is the nanosecond. The smallest power-of-two unit is 2^-32 seconds, which is about 233 picoseconds.

You're right that brouhaha confused the units (he said 4 seconds' worth of picoseconds fit in 32 bits; it's really 4 seconds' worth of nanoseconds). But you're introducing a "fits" that nobody's talking about here -- apparently you're intending for 32 bits to count one second.

Note that the interface provides multiple granularities/ranges from which to choose. You can specify your interval in milliseconds, microseconds, or nanoseconds. That in no way means you can actually get that kind of precision out of the timer. There's really no reason not to throw in picoseconds, if only to save having to answer the question.
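
The shape of such an interface is simple enough; here's a hypothetical sketch in C (struct ktimer and set_timer_ns() are invented for illustration, not the API from the article):

    #include <stdint.h>

    struct ktimer;                       /* hypothetical timer object */

    /* Hypothetical base primitive: arm the timer for `ns' nanoseconds
     * from now. The hardware decides how much of that precision is real. */
    void set_timer_ns(struct ktimer *t, uint64_t ns);

    /* Coarser granularities are just conversions on top; a picosecond
     * variant would be one more trivial wrapper. */
    static inline void set_timer_us(struct ktimer *t, uint64_t us)
    {
        set_timer_ns(t, us * 1000);
    }

    static inline void set_timer_ms(struct ktimer *t, uint64_t ms)
    {
        set_timer_ns(t, ms * 1000000);
    }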

Why not units of 2^-32 seconds?

Posted May 23, 2005 16:25 UTC (Mon) by spitzak (guest, #4593) (1 responses)

That actually sounds like a useful and natural unit, and certainly an easy one for programmers to remember. It would allow the upper 32 bits to equal the Unix clock, and it would allow conversion to a floating-point number of seconds without rounding errors.

Is there some good reason why a power of ten should be used? Is it because of rounding errors from times specified in decimal numbers of seconds?
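
What spitzak is describing is in effect a 32.32 fixed-point format, the same shape as an NTP timestamp; a small illustrative sketch (names invented):

    #include <stdint.h>

    /* 32.32 fixed point: the high 32 bits are whole seconds
     * (Unix-clock-like if counted from the epoch), the low 32 bits are
     * fractional seconds in units of 2^-32 s. */
    typedef uint64_t fix32_32;

    static inline fix32_32 make_fix(uint32_t sec, uint32_t frac)
    {
        return ((uint64_t)sec << 32) | frac;
    }

    static inline uint32_t whole_seconds(fix32_32 t)
    {
        return (uint32_t)(t >> 32);
    }

    /* The fractional part converts to a double exactly, since 2^-32 is a
     * power of two; the full 64-bit value stays exact only while it fits
     * in a double's 53-bit mantissa. */
    static inline double to_seconds(fix32_32 t)
    {
        return (double)t / 4294967296.0;
    }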

Why not units of 2^-32 seconds?

Posted May 24, 2005 2:40 UTC (Tue) by giraffedata (guest, #1954)

Natural for whom? The computer?

Remember that we're talking about an external interface here -- the question is in what units would a user of the timer facility want to specify a duration? Virtually nobody measures time in binary units; we all think of time in milliseconds, nanoseconds, etc.

The Unix time_t type (which I think is what you mean by the Unix clock) doesn't actually figure in here -- this value specifies a duration, not a point in time; and if it ever gets added to a point in time, that time is in the kernel's internal format, which is a count of clock ticks.
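
To make that last point concrete, here's the sort of conversion involved; a sketch assuming a tick rate of HZ ticks per second (values invented, overflow ignored):

    #include <stdint.h>

    #define HZ 1000                      /* assumed tick rate */
    #define NSEC_PER_SEC 1000000000ULL

    /* Turn a requested duration in nanoseconds into kernel clock ticks,
     * rounding up so the timer never fires early. Whatever unit the
     * caller used -- ms, us, ns, or ps -- it all lands on this
     * granularity in the end. */
    static inline uint64_t ns_to_ticks(uint64_t ns)
    {
        return (ns * HZ + NSEC_PER_SEC - 1) / NSEC_PER_SEC;
    }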

A new kernel timer API

Posted May 19, 2005 22:24 UTC (Thu) by kleptog (subscriber, #1183) (2 responses)

LOL! If we're making devices that are smaller than a third of a millimetre across, I imagine we'd get a much better return from clustering a few thousand of them onto a single chip than from trying to make them all run at a terahertz clock rate.

My personal feeling is that measuring less than a nanosecond is not useful, given that the moment you access something off-chip (like, say, memory) you're going to be delayed by tens of thousands of picoseconds, and memory latency is not falling anywhere near as fast as clock speeds are rising.

But hey, I'm willing to be proved wrong.

A new kernel timer API

Posted May 22, 2005 15:17 UTC (Sun) by haraldt (guest, #961) (1 responses)

Processors are asynchronous even today, aren't they?
Today's standard is to move information from place A to place B within a manageable number of clock ticks. If a clock tick takes a picosecond, then the only requirement is that places A and B are never more than one third of a millimeter apart.

Instructions may have to run through a lot of clock ticks this way, but it's all a matter of resolution.

I won't promise this is going to happen, but hey, it's an idea.

A new kernel timer API

Posted May 22, 2005 15:39 UTC (Sun) by haraldt (guest, #961)

Err... I meant that A and B are some multiple of a third of a millimeter apart.

You'd probably need asynchronous buses, asynchronous memory devices, etc. too.
The distance between processor and main memory, for example, could be a load of clock ticks, often with several signals travelling on the same wire. But as long as the equipment can handle asynchronous signalling (in addition to the speed, of course), it's far from impossible.

