Pondering 2038

Posted Aug 16, 2013 17:34 UTC (Fri) by tmattox (subscriber, #4169)
Parent article: Pondering 2038

What about using 64-bit floating-point seconds as the standard time representation?
I know, the kernel doesn't use floating point so that it doesn't need to save/restore floating-point registers as often... but if that weren't an issue, why not?

It lets you represent vastly distant times both in the past and in the future. "short" time differences can be represented very precisely, although absolute times far from 1970 have lower resolution the further out you go.
And future systems would need to make sure they increment their time by large enough values that the clock keeps increasing... rather than effectively adding zero every tick once the time gets past a certain date.

It's what I do in my programs: I always convert time to a double.
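
To make the precision trade-off concrete, here is a minimal sketch (not from the comment) that prints the spacing between adjacent doubles at a few timestamp magnitudes, i.e. the best resolution a double-seconds clock can offer around those dates:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Resolution of a double-seconds clock = gap to the next representable double. */
    double t2013 = 1.37e9;    /* roughly seconds from 1970 to 2013 */
    double t2262 = 9.2e9;     /* roughly where 64-bit nanoseconds run out */
    double t10k  = 2.5e11;    /* roughly seconds from 1970 to the year 10000 */

    printf("resolution near 1970 (t=1 s): %g s\n", nextafter(1.0, INFINITY) - 1.0);
    printf("resolution near 2013:         %g s\n", nextafter(t2013, INFINITY) - t2013);
    printf("resolution near 2262:         %g s\n", nextafter(t2262, INFINITY) - t2262);
    printf("resolution near year 10000:   %g s\n", nextafter(t10k, INFINITY) - t10k);
    return 0;
}

Near 2013 a double is still good to a couple of hundred nanoseconds; by the year 10000 it is down to tens of microseconds.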


Pondering 2038

Posted Aug 16, 2013 17:55 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Double seconds are for weaklings.

Real men measure time in Planck units - 540*10^(-42) second precision should be enough for everyone!

You can't go smaller than that (well, if you can then you probably don't need something as inferior as silicon-based computers)

Pondering 2038

Posted Aug 16, 2013 18:03 UTC (Fri) by jimparis (subscriber, #38647) [Link]

Floating point timestamps are good for some uses but not others. There are lots of cases where rounding error is bad, and even simple operations like addition cause problems with floats because floating-point addition is not associative ((a+b)+c != a+(b+c)). I tried floats in one of my projects, but ended up needing to use int64 instead.
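
For illustration, a minimal sketch (not from the comment) of the associativity problem, using a realistic 2013-era timestamp and two 100 ns increments:

#include <stdio.h>

int main(void)
{
    double a = 1.37e9;   /* a 2013-era timestamp, in seconds since 1970 */
    double b = 1e-7;     /* two 100 ns increments */
    double c = 1e-7;

    double left  = (a + b) + c;   /* each tiny step is rounded away */
    double right = a + (b + c);   /* the combined step survives rounding */

    printf("(a+b)+c == a+(b+c)?  %s\n", left == right ? "yes" : "no");
    printf("difference: %g s\n", right - left);
    return 0;
}

Each 100 ns step is below half a unit in the last place of the timestamp, so adding them one at a time loses them entirely, while adding their sum does not.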

Pondering 2038

Posted Aug 16, 2013 21:53 UTC (Fri) by giraffedata (subscriber, #1954) [Link]

"short" time differences can be represented very precisely,

If you're talking about differences between datetimes represented in this format, the additional precision for short differences does you no good because you're limited by the precision of the datetimes you're subtracting.

If one were going to use a floating point datetime format in order to avoid catastrophic failure in the distant future, one probably should make the epoch 2013 instead of 1970. At least that would make the high-precision era a more useful one. Or maybe 2100, guessing that finer time precision will be more useful with future technology.

Pondering 2038

Posted Aug 19, 2013 1:48 UTC (Mon) by ras (subscriber, #33059) [Link]

Yes, measuring time in SI units (seconds), as is typically done with doubles, is the right approach IMHO.

The problem is that the 53 bits of precision provided by an IEEE 754 double's mantissa are nowhere near enough. IEEE 754 does define a quadruple precision format with a 113-bit mantissa, which is enough, but no one implements it. In fact not everyone implements IEEE floats, and IIRC the kernel doesn't use or depend on floats at all. That probably rules out floats completely. Still, it would be a very nice format for higher-level languages like, say, Python. Double-double (a 128-bit floating-point format that emulates extra precision using two doubles) is more than fast enough for interpreted languages.
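
For what it's worth, the building block of double-double arithmetic is the classic error-free "two-sum" step (Knuth/Dekker); a minimal sketch, assuming round-to-nearest IEEE doubles, not taken from the comment:

#include <stdio.h>

/* two_sum: s = round(a + b) and e = the exact rounding error, so that
 * a + b == s + e exactly.  A double-double value is just a (hi, lo) pair
 * of doubles maintained with transformations like this one. */
static void two_sum(double a, double b, double *s, double *e)
{
    double sum = a + b;
    double bb  = sum - a;
    *e = (a - (sum - bb)) + (b - bb);
    *s = sum;
}

int main(void)
{
    double s, e;
    two_sum(1.37e9, 1e-7, &s, &e);   /* the 1e-7 lost by s is recovered in e */
    printf("s = %.17g, e = %.17g\n", s, e);
    return 0;
}

A (hi, lo) pair carries roughly 106 bits of significand, not far short of the quad format's 113.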

IMHO the right solution to this is a new timeval like this:

struct timeval64 {
    int64_t  tv_sec;   // seconds since, say, 1/1/0001
    uint32_t tv_frac;  // fraction of a second, in units of 1/2^32 s
};

Storing the fractional part in units of 1/2^32 second has a number of advantages over microseconds. Although it doesn't seem "natural", it turns out not many people use microseconds directly either, so they have to convert them anyway. As soon as you have to do that, the code becomes essentially identical. E.g., to convert to milliseconds:

milliseconds = tv_usec / ((1000 * 1000) / 1000);
milliseconds = tv_frac / ((1LL << 32) / 1000);
milliseconds = ((int64_t)tv_frac * 1000) >> 32; // also works

The multiply-and-shift form is a better choice on architectures where divide is slow.

Storing stuff as 1/2^32 gives the full precision for the bits available, yielding a resolution of around 4 GHz, which happens to fit neatly with where CPU frequencies have plateaued.

Converting it to a 128-bit time is just a shift and an add, mainly because this is really a 96-bit fixed-point time in disguise.
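
On compilers that provide a 128-bit integer type, that widening really is just one multiply (or shift) and one add; a minimal sketch, assuming GCC/Clang's __int128, which the comment itself doesn't mention:

#include <stdint.h>

/* Widen (tv_sec, tv_frac) to a signed 128-bit fixed-point count of
 * 1/2^32-second units: move the seconds up by 32 bits and add the fraction.
 * A multiply by 2^32 is used rather than a shift so negative seconds are
 * well defined. */
static __int128 to_fixed128(int64_t tv_sec, uint32_t tv_frac)
{
    return (__int128)tv_sec * ((__int128)1 << 32) + tv_frac;
}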

Moving the epoch to 1/1/0001 solves a problem I continually get bitten by: times before 1970 blowing up. Given that 64 bits gives us 292e9 years, it could equally well be 13,000,000,000 BC, which is the one true epoch (the big bang); however, I am not sure what we would do for calendars back then.

In comparison, storing times as nanoseconds in an int64_t looks like a half-baked solution. As others have observed here, it only delays the problem for a couple of hundred years, doesn't provide enough precision, and doesn't address the 1/1/1970 wart.

Pondering 2038

Posted Aug 19, 2013 7:00 UTC (Mon) by eru (subscriber, #2753) [Link]

Moving the epoch to 1/1/0001 solves a problem I continually get bitten by: times before 1970 blowing up. Given that 64 bits gives us 292e9 years, it could equally well be 13,000,000,000 BC, which is the one true epoch (the big bang); however, I am not sure what we would do for calendars back then.

If you change the Epoch, there are some choices that are better than others from the point of view of the Gregorian calendar (none of which are 1/1/1970). I recall some discussion about this around Y2K. googling... here: https://groups.google.com/forum/#!topic/comp.protocols.time.ntp/PVE9niFvRqc

Pondering 2038

Posted Aug 19, 2013 7:42 UTC (Mon) by ras (subscriber, #33059) [Link]

> If you change the Epoch, there are some choices that are better than others from the point of view of the Gregorian calendar

Yes. The age-old trick of adjusting your origin so Feb 29 falls at the end of the leap-year cycle has been rediscovered many times.
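
For reference, one standard formulation of that trick (essentially the well-known days_from_civil algorithm popularized by Howard Hinnant; a sketch, not something from this thread):

// Days since 1970-01-01 in the proleptic Gregorian calendar.  Shifting the
// year so that it starts in March puts Feb 29 at the very end of the
// leap-year cycle, which turns day-of-year into a simple linear formula.
static long days_from_civil(int y, int m, int d)   // m = 1..12, d = 1..31
{
    y -= m <= 2;                                   // Jan/Feb belong to the previous March-based year
    long era = (y >= 0 ? y : y - 399) / 400;
    unsigned yoe = (unsigned)(y - era * 400);                                   // [0, 399]
    unsigned doy = (unsigned)((153 * (m + (m > 2 ? -3 : 9)) + 2) / 5 + d - 1);  // [0, 365]
    unsigned doe = yoe * 365 + yoe / 4 - yoe / 100 + doy;                       // [0, 146096]
    return era * 146097 + (long)doe - 719468;      // 719468 = days from 0000-03-01 to 1970-01-01
}

By construction, days_from_civil(1970, 1, 1) is 0, and earlier dates come out negative, which is exactly the case the comment above worries about.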

But what seems to trip up calendar routines more than anything else isn't leap years, or non-leap centuries, or leap millennia. It's negative times. Making the birth of spacetime the epoch eliminates that gotcha once and for all. Moving the origin to 2000-03-01, on the other hand, would make it worse. *shudder*

Pondering 2038

Posted Aug 19, 2013 10:16 UTC (Mon) by eru (subscriber, #2753) [Link]

But what seems to trip up calendar routines more than anything else isn't leap years, or non-leap centuries, or leap millennia. It's negative times. Making the birth of spacetime the epoch eliminates that gotcha once and for all.

Then most of the range would have no use. There is no point in going that far back in general-purpose time representations. Dates make sense only during human history. Before that you talk about geological eras...

If you want an ancient starting point, the one used in the Julian Day system (4714-11-24 BC) would be a reasonable choice: all recorded human history happened after that, and this way of counting time is already in widespread use, so ready-made conversion algorithms exist for it.

Pondering 2038

Posted Aug 19, 2013 23:44 UTC (Mon) by ras (subscriber, #33059) [Link]

> Then most of the range would have no use.

True. But when your date format spans 292e9 years, that's always going to be true at one end of the range.

I'd agree if wasting part of the range like this were a problem - i.e. if the range were small. But it isn't. And when it comes to usefulness, I suspect the first 13 billion years is much more interesting and useful to us than the last 13 billion covered by the range. Since you must have one or the other, surely you would keep the more useful bit? Or to put it another way, if the proposal were to use a single 64-bit quantity in units of 1/2^16 sec, then yes, I too would be pushing for 4714-11-24 BC - now that I know it exists. :D

Pondering 2038

Posted Aug 20, 2013 0:26 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Please, no base-two fractions. We will have tons of WONDERFUL problems trying to convert them to whole seconds and milliseconds.

Pondering 2038

Posted Aug 20, 2013 0:38 UTC (Tue) by ras (subscriber, #33059) [Link]

> We will have tons of WONDERFUL problems trying to convert them to whole seconds and milliseconds.

I'm curious. What wonderful problems? I've been doing it for ages - from back when the 8086's div instruction cost tens of cycles. It was done that way for a couple of reasons. Firstly (and oddly enough), binary computers are rather good at handling binary fractions; for them it's faster and simpler than base 10. And secondly, the hardware timers usually counted in binary fractions, so inevitably someone, somewhere has to do it.

Pondering 2038

Posted Aug 20, 2013 11:22 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

In one of my previous lives I was developing a staffing and accounting package. We had the following problem:

An employee is paid (say) $20 per hour and has worked 1 hour and 7 minutes. How much do we pay them for these 7 minutes? Well, the answer is ($20 * 7/60) = $2.33, right?

Except that the accounting system then tried to get the time back by dividing $2.33 by the $20 hourly rate and got 6.99(9) minutes, which got rounded down to 6 minutes. And we get a discrepancy in the reports. Not nice at all.

Here we'll have the same problem - it's not possible to express time periods like 100 msec in binary-based fractions. It might not be critical, but if you have applications where you do a lot of statistics it might get in the way (http://www.ima.umn.edu/~arnold/disasters/patriot.html - the famous example of this).
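
To make that concrete, a small sketch (mine, not from the thread) using the 1/2^32 representation discussed above: 100 ms has no exact representation in those units, so accumulating the rounded value drifts, the same mechanism as the Patriot's 0.1 s counter, only with a much smaller error:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 0.1 s in 1/2^32-second units is 429496729.6, so it must be rounded. */
    uint64_t tick = (1ULL << 32) / 10;          /* 429496729, rounded down */

    uint64_t acc = 0;
    for (int i = 0; i < 100 * 3600 * 10; i++)   /* 100 hours of 100 ms ticks */
        acc += tick;

    printf("after 100 hours of 100 ms ticks: %.6f s (should be %d)\n",
           (double)acc / 4294967296.0, 100 * 3600);
    return 0;
}

The drift here is only about half a millisecond over 100 hours - tiny, but never exactly zero, which is what bites statistics and reconciliation code.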

Pondering 2038

Posted Aug 20, 2013 13:12 UTC (Tue) by ras (subscriber, #33059) [Link]

> it's not possible to express time periods like 100 msec in binary-based fractions

No it's not. But then again no one I know of stores or manipulates times in a raw struct timeval format either, so I doubt it will be a problem.

I always thought timevals being a prick to work with was just a nit. But since they force everyone to convert times to a more suitable format, perhaps that's a blessing in disguise. And as I mentioned before, binary fractions do have one advantage over other bases: they are faster to convert to a different unit.

BTW, did you notice the Patriot missile's OS reported the time in 100 ms units? That's a nice power of 10 - just what you are asking for. It didn't solve the problem.

Pondering 2038

Posted Aug 20, 2013 13:25 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Timeval is used for calendar time, so it makes total sense to use real calendar units. It's not an especially good format for calendar values, but the intent is clear.

Using binary fractions makes a little more sense for CLOCK_MONOTONIC - it's mostly used for relative periods anyway. But then we'll have those rare cases when we need to sync CLOCK_MONOTONIC with the real clock (i.e. 'how long am I going to wait until the next 100 ms period?') and we're back to square one.

For better or worse, we are used to milliseconds and not binary-based second fractions (mibiseconds?). So it makes sense to keep it that way.

Would have been nice to also have decimal hours...

> BTW, did you notice the patriot missile's OS reported the time in 100ms units. It's a nice power of 10 - just what you are asking for. It didn't solve the problem.
It's still the same problem, just from the other side.

