Pondering 2038
We'll have the opportunity to find out, since one such problem lurks over the horizon. The classic Unix representation for time is a signed 32-bit integer containing the number of seconds since January 1, 1970. This value will overflow on January 19, 2038, less than 25 years from now. One might think that the time remaining is enough to approach a fix in a relaxed manner, and one would be right. But, given the longevity of many installed systems, including hard-to-update embedded systems, there may be less time for a truly relaxed fix than one might think.
It is thus interesting to note that, on August 12, OpenBSD developer Philip Guenther checked in a patch to the OpenBSD system changing the types of most time values to 64-bit quantities. With 64 bits, there is more than enough room to store time values far past the foreseeable future, even if high-resolution (nanosecond-based) time values are used. Once the issues are shaken out, OpenBSD will likely have left the year-2038 problem behind; one could thus argue that they are well ahead of Linux on this score. And perhaps that is true, but there are some good reasons for Linux to proceed relatively slowly with regard to this problem.
The OpenBSD patch changes types like time_t and clock_t to 64-bit quantities. Such changes ripple outward quickly; for example, standard types like struct timeval and struct timespec contain time_t fields, so those structures change as well. The struct stat passed to the stat() system call also contains a set of time_t values. In other words, the changes made by OpenBSD add up to one huge, incompatible ABI change. As a result, OpenBSD kernels with this change will generally not run binaries that predate the change; anybody updating to the new code is advised to do so with a great deal of care.
OpenBSD can do this because it is a self-contained system, with the kernel and user space built together out of a single repository. There is little concern for users with outside binaries; one is expected to update the system as a whole and rebuild programs from source if need be. As a result, OpenBSD developers are much less reluctant to break the kernel ABI than Linux developers are. Indeed, Philip went ahead and expanded ino_t (used to represent inode numbers) as well while he was at it, even though that type is not affected by this problem. As long as users testing this code follow the recommendations and start fresh with a full snapshot, everything will still work. Users attempting to update an installed system will need to be a bit more careful.
In the Linux world, we are unable to simply drag all of user space forward with the kernel, so we cannot make incompatible ABI changes in this way. That is going to complicate the year-2038 transition considerably — all the more reason why it needs to be thought out ahead of time. That said, not all systems are at risk. As a general rule, users of 64-bit systems will not have problems in 2038, since 64-bit values are already the norm on such machines. The 32-bit x32 ABI was also designed with 64-bit time values. So many Linux users are already well taken care of.
But users of the pure 32-bit ABI will run into trouble. Of course, there is a possibility that there will be no 32-bit systems in the wild 25 years from now, but history argues otherwise. Even with its memory addressing limitations (a 32-bit processor with the physical address extension feature will struggle to work with 16GB of memory which, one assumes, will barely be enough to hold a "hello world" program in 2038), a 32-bit system can perform a lot of useful tasks. There may well be large numbers of embedded 32-bit systems running in 2038 that were deployed many years prior. There will almost certainly be 32-bit systems running in 2038 that will need to be made to work properly.
During a brief discussion on the topic last June, Thomas Gleixner described a possible approach to the problem:
Though even if we fix that we still need to twist our brains around the timespec/timeval based user space interfaces. That's going to be the way more interesting challenge.
In other words, if a new ABI needs to be created anyway, it would make sense to get rid of structures like timespec (which split times into two fields, representing seconds and nanoseconds) and use a simple nanosecond count. Software could then migrate over to the new system calls at leisure. Thomas suggested keeping the older system call infrastructure in place for five years, meaning that operations using the older time formats would continue to be directly implemented by the kernel; that would prevent unconverted code from suffering performance regressions. After that period passed, the compatibility code would be replaced by wrappers around the new system calls, possibly slowing the emulated calls down and providing an incentive for developers to update their code. Then, after about ten years, the old system calls could be deprecated.
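To make the shape of such an interface concrete — a hedged sketch of the idea, not anything actually proposed in the discussion — a plain 64-bit nanosecond count and its conversion from a legacy struct timespec could look like:

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical: a single signed 64-bit count of nanoseconds since the
     * epoch, replacing the two-field timespec in new system calls. */
    typedef int64_t nsec64_t;

    static nsec64_t timespec_to_nsec64(const struct timespec *ts)
    {
        return (nsec64_t)ts->tv_sec * 1000000000 + ts->tv_nsec;
    }

Note that a signed 64-bit nanosecond count only reaches about 292 years to either side of the epoch — a point taken up in the comments below.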
Removal of those system calls could be an interesting challenge, though; even Thomas suggested keeping them for 100 years to avoid making Linus grumpy. If the system calls are to be kept up to (and past) 2038, some way will need to be found to make them work in some fashion. John Stultz had an interesting suggestion toward that end: turn time_t into an unsigned value, sacrificing the ability to represent dates before 1970 to gain some breathing room in the future. There are some interesting challenges to deal with, and some software would surely break, but, without a change, all software using 32-bit time_t values will break in 2038. So this change may well be worth considering.
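As a back-of-the-envelope illustration of John's suggestion (my arithmetic, not anything from a patch), switching the same 32 bits from signed to unsigned moves the rollover from 2038 to 2106:

    #include <stdio.h>

    int main(void)
    {
        double year = 365.2425 * 24 * 60 * 60;  /* Gregorian year, in seconds */
        /* 2^31 seconds past 1970 vs. 2^32 seconds past 1970 */
        printf("signed 32-bit time_t overflows:   ~%.0f\n", 1970 + 2147483648.0 / year);
        printf("unsigned 32-bit time_t overflows: ~%.0f\n", 1970 + 4294967296.0 / year);
        return 0;
    }

This prints roughly 2038 and 2106 — about 68 extra years, in exchange for losing the ability to represent pre-1970 dates.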
Even without legacy applications to worry about, making 32-bit Linux year-2038 safe would be a significant challenge. The ABI constraints make the job harder yet. Given that some parts of any migration simply cannot be rushed, and given that some deployed systems run for many years, it would make sense to be thinking about a solution to this problem now. Then, perhaps, we'll all be able to enjoy our retirement without having to respond to a long-predicted time_t mess.
Index entries for this article
Kernel | Timekeeping
Kernel | Year 2038 problem
Posted Aug 15, 2013 2:35 UTC (Thu) by davecb (subscriber, #1574)
--dave
Posted Aug 15, 2013 2:53 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
Posted Aug 15, 2013 5:02 UTC (Thu) by eru (subscriber, #2753)
I'm still using a TV set I bought 19 years ago. There is also a VCR from the same period I start occasionally for old tapes. Nowadays consumer gadgets of this sort are often controlled by an embedded processor running Linux. It is possible that even some devices taken into use today will be affected by the problem. Better hurry with the fix to avoid Linux turning into a swear word in the living rooms of year 2038.
Posted Aug 15, 2013 8:35 UTC (Thu) by khim (subscriber, #9252)
A swear word in the living room is a minor issue. In 2000 I personally encountered a piece of health-diagnosis hardware which refused to communicate with the outside world because of the Y2K problem (thankfully it still functioned standalone, thus people just retyped the numbers from its keyboard, which was not all that efficient, but worked). And these things are often used for a much longer time than a mere TV or VCR.
Posted Aug 15, 2013 5:08 UTC (Thu) by dlang (guest, #313)
There is no dispute that the system's idea of the date will be wrong, but will that actually cause a problem that the user will be able to detect?
If the user can detect the problem, is there a simple workaround? (If the system just displays time-of-day, it's trivial; if it shows day of week or month, it's a bit more complex, but still just a matter of setting the date to the right 'incorrect' value.)
Posted Aug 15, 2013 9:24 UTC (Thu) by keeperofdakeys (guest, #82635)
Posted Aug 15, 2013 9:49 UTC (Thu) by dlang (guest, #313)
time will go from
Tue, 19 Jan 2038 03:14:08 GMT
to
Fri, 13 Dec 1901 20:45:51 GMT
Posted Aug 15, 2013 12:15 UTC (Thu) by jengelh (guest, #33263)
Posted Aug 23, 2013 14:02 UTC (Fri) by BenHutchings (subscriber, #37955)
This is not a problem unless you want to accurately represent timestamps on files from systems older than Unix, or you try to use time_t outside of its original scope.
struct timespec
Posted Aug 15, 2013 5:16 UTC (Thu) by simlo (guest, #10866)
class TimeStamp : public timespec {
    // C++ operators
};
And we run 32-bit x86 embedded systems, and we do promise our customers 20 years of lifetime...
But Linux can _not_ get rid of struct timespec: it is part of POSIX.
Could the kernel and libraries have ABI versions - a bit similar to having amd64, x32, and i686 running on the same system - and then make an x86 version 2 ABI, where a lot of these things are fixed?
Posted Aug 15, 2013 12:22 UTC (Thu) by justincormack (subscriber, #70439)
But POSIX doesn't define the length of time_t, which is the root cause of the problem.
Posted Aug 15, 2013 10:44 UTC (Thu) by mgedmin (subscriber, #34497)
Posted Aug 16, 2013 11:41 UTC (Fri) by guillemj (subscriber, #49706)
<http://lintian.debian.org/tags/binary-file-built-without-...>
Lots might simply need to be built with the LFS API enabled, but many will still lack the actual support in the code even after that.
Pondering 2038 vis-a-vis 64-bit file offsets
Posted Aug 18, 2013 1:05 UTC (Sun) by vomlehn (guest, #45588)
My career has spanned the conversion from 16-bit to 32-bit architectures, the Y2K issue, LFS, and conversions to 64-bit architectures, and the lessons all boil down to one thing: fix it now, and plan to immediately fix the other things that you didn't get right at first, too.
I can't conceive of a need for 128-bit systems, but we should plan for them, too. Some solutions already exist: we should really be using intptr_t (or a type derived from it) in the kernel to hold pointer values, not long longs. For the same practical reasons that drove the creation of the long long type, long longs will wind up being exactly 64 bits wide and we will need to create another type for 128-bit integers.
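For what it's worth, a minimal sketch of the intptr_t point (standard C99 <stdint.h>, nothing kernel-specific): the type is defined to round-trip a pointer regardless of whether long, long long, or something wider happens to match the pointer width:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
        int x = 42;
        intptr_t ip = (intptr_t)(void *)&x;  /* holds any object pointer */
        int *p = (int *)ip;                  /* ...and converts back unchanged */
        assert(p == &x && *p == 42);
        return 0;
    }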
Posted Aug 18, 2013 1:22 UTC (Sun) by Cyberax (✭ supporter ✭, #52523)
Posted Aug 19, 2013 21:00 UTC (Mon) by pr1268 (guest, #24648)
> long longs will wind up being exactly 64 bits wide and we will need to create another type for 128-bit integers.
So, let's create a [signed|unsigned] long long long type. (While we're at it, why not make a long long double?) Also, <inttypes.h> and <stdint.h> could be updated to include int128_t and friends. I'm only being half-facetious here. Sure, it's more keystrokes, but programmers are free to use typedef if desired to make coding easier. I do just that. Also, the existing ABIs and programming language specs don't change. Just a suggestion...
Posted Aug 22, 2013 18:35 UTC (Thu) by pebolle (guest, #35204)
> kernel to hold pointer values, not long longs.
Pointers in the kernel are of equal width to longs, aren't they?
Posted Aug 15, 2013 12:14 UTC (Thu) by Karellen (subscriber, #67644)
That doesn't buy us much. There are 10^9, or roughly 2^30, nsec in a sec. Which means if we're increasing our timestamp by 32 bits, but using 30 of those for sub-second resolution, we're only increasing our range by a factor of 4. This "only" gives us a couple more centuries to play with.
While some might suggest that ~2245CE is far enough away to safely punt the problem down enough generations that it'll get fixed *next* time in plenty of time, that does seem a little short-sighted.
However, a bigger problem in my mind is that it only allows us to reason about dates using this style of timestamp as far back as ~1700CE. While this might not be a problem for the kernel, in that there are very few cases where it might need to care about such dates, there are certainly a number of userspace applications for dates before then. While they could use an alternate representation (struct tm?) for such dates, it would be incredibly handy to be able to use the same time representation, and therefore the same time manipulation libraries, everywhere.
The more conversions you do between different time representations, the more opportunities there are for error.
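Karellen's arithmetic checks out; a quick sanity check (my own sketch, using the same assumptions) of how far a signed 64-bit nanosecond count reaches:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        double year = 365.2425 * 24 * 60 * 60;   /* seconds per year */
        double range = (double)INT64_MAX / 1e9;  /* 2^63-1 ns, as seconds */
        /* ~292 years each way: 1970 +/- 292 is roughly 1678..2262 */
        printf("+/- %.0f years around the epoch\n", range / year);
        return 0;
    }

That lands at roughly 1678 to 2262, matching the ~1700CE and couple-more-centuries figures above.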
Posted Aug 15, 2013 12:22 UTC (Thu) by jengelh (guest, #33263)
Posted Aug 15, 2013 12:46 UTC (Thu) by eru (subscriber, #2753)
I was also wondering about that. What would be the point of nanosecond resolution when real computer clocks have much less? How about counting microseconds instead, getting 1000x the range?
Posted Aug 15, 2013 14:27 UTC (Thu) by rleigh (guest, #14622)
Working with scientific/medical imaging, millisecond time resolution is common, but microsecond and nanosecond resolution are also used. One thing worth pointing out is that it doesn't matter that the CPU can't measure this accurately; external acquisition hardware can, and we want to be able to store and process such timing data. Being able to store the deltaT in between frames at a higher precision removes one source of variability due to decreased error. And this applies to timings from anything external to the CPU. Even on the CPU, the increased precision is useful, even if it isn't as accurate as we might like.
Regards,
Roger
Posted Aug 16, 2013 9:01 UTC (Fri) by eru (subscriber, #2753)
> One thing worth pointing out is that it doesn't matter that the CPU can't measure this accurately; external acquisition hardware can, and we want to be able to store and process such timing data.
Certainly true, but your measurement application is likely to use its own data type for the times in this situation anyway, if only for portability: you cannot rely on the OS you are using to provide a suitable high-resolution time type (especially now that multiple slightly different solutions for the same Y2038 problem are likely to emerge among Unix variants).
Posted Aug 15, 2013 15:23 UTC (Thu) by joern (guest, #22392)
People will always ask for better time resolution until they get about nanosecond resolution. Unless you want to argue for the rest of your career, it is a wise choice to give them nanosecond resolution and move on to some other problem.
Posted Aug 15, 2013 15:46 UTC (Thu) by Cyberax (✭ supporter ✭, #52523)
Nanoseconds, on the other hand, have more than enough precision for that.
Posted Aug 16, 2013 2:32 UTC (Fri) by jimparis (guest, #38647)
> Nanoseconds, on the other hand, have more than enough precision for that.
One CPU cycle is also less than 1 nanosecond. How is that enough precision?
Posted Aug 16, 2013 2:35 UTC (Fri) by felixfix (subscriber, #242)
Posted Aug 16, 2013 10:48 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
2GHz is 500nsec/cycle and so on.
Posted Aug 16, 2013 12:19 UTC (Fri) by mpr22 (subscriber, #60784)
micro = 10^-6. nano = 10^-9. pico = 10^-12. A 2GHz clock has a cycle time of 500ps.
Posted Aug 16, 2013 12:21 UTC (Fri) by intgr (subscriber, #39733)
You're mistaken. 1 thousandth of a second is a millisecond. (1 kHz)^-1
1 millionth is a microsecond. (1 MHz)^-1
1 billionth is a nanosecond. (1 GHz)^-1
Posted Aug 16, 2013 12:32 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
Grrr... I blame too much coffee.
Posted Aug 15, 2013 12:32 UTC (Thu) by paulj (subscriber, #341)
Almost want some kind of syscall version or personality field
Posted Aug 15, 2013 23:02 UTC (Thu) by pjtait (subscriber, #17475)
Posted Aug 16, 2013 2:55 UTC (Fri) by jake (editor, #205)
Yes, indeed. Fixed now, thanks.
jake
QOTW material
Posted Aug 16, 2013 14:57 UTC (Fri) by hummassa (subscriber, #307)
> a 32-bit processor with the physical address extension feature will struggle to work with 16GB of memory which, one assumes, will barely be enough to hold a "hello world" program in 2038
for Quote of the Week... :-D Go Jon!
Posted Aug 17, 2013 21:55 UTC (Sat) by pr1268 (guest, #24648)
I don't think our editors allow their own content (as it appears here on LWN) to be considered for QOTW (I would ask Jon to verify this, but only if he has nothing better to do). ;-) But, if a "Hello World" program barely fits in 16 GB of memory space 25 years from now, then perhaps we should be looking at managing binary bloat in addition to the Y2038 problem. Or, if the past several decades' (or so) trend seems to indicate that Our Editor's assumption may prove to be correct, then am I missing something? Ewwww....
Posted Aug 16, 2013 17:34 UTC (Fri) by tmattox (subscriber, #4169)
Why not use a double-precision floating point value for time?
It lets you represent vastly distant times both in the past and in the future. "short" time differences can be represented very precisely, although absolute times far from 1970 will have lower resolution the further you go.
It's what I do in my programs, I always convert time to a double.
I know, the kernel doesn't use floating point so that it doesn't need to save/restore floating point registers as often... but, if that wasn't an issue, why not?
And future systems would need to make sure they increment their time by large enough values that the timer keeps increasing... rather than effectively adding zero every tick, once the time got past a certain date.
Posted Aug 16, 2013 17:55 UTC (Fri) by Cyberax (✭ supporter ✭, #52523)
Real men measure time in Planck units - 5.4*10^(-44) second precision should be enough for everyone!
You can't go smaller than that (well, if you can then you probably don't need something as inferior as silicon-based computers)
Posted Aug 16, 2013 18:03 UTC (Fri) by jimparis (guest, #38647)
Posted Aug 16, 2013 21:53 UTC (Fri) by giraffedata (guest, #1954)
> "short" time differences can be represented very precisely,
If you're talking about differences between datetimes represented in this format, the additional precision for short differences does you no good because you're limited by the precision of the datetimes you're subtracting.
If one were going to use a floating point datetime format in order to avoid catastrophic failure in the distant future, one probably should make the epoch 2013 instead of 1970. At least that would make the high-precision era a more useful one. Or maybe 2100, guessing that finer time precision will be more useful with future technology.
Posted Aug 19, 2013 1:48 UTC (Mon) by ras (subscriber, #33059)
The problem is that the 53-bit precision provided by an IEEE 754 mantissa is nowhere near enough. IEEE 754 does define a quadruple-precision format with a 113-bit mantissa, which is enough, but no one implements it. In fact not everyone implements IEEE floats, and IIRC the kernel doesn't use or depend on floats at all. That probably rules out floats completely. Still, it would be a very nice format for higher-level languages like, say, Python. Double-double (a 128-bit floating-point format that emulates higher precision using two doubles) is more than fast enough for interpreted languages.
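To put a number on that (my sketch of the precision argument): near 2^31 seconds after the epoch - i.e. around 2038 - adjacent doubles are almost half a microsecond apart, so nanosecond timestamps cannot survive a round trip through a double:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double t = 2147483648.0;  /* ~2038, as seconds since 1970 */
        /* nextafter() gives the closest representable double above t;
         * the gap is the best resolution a double timestamp has here. */
        double gap = nextafter(t, INFINITY) - t;
        printf("resolution near 2038: ~%.0f ns\n", gap * 1e9);  /* ~477 */
        return 0;
    }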
IMHO the right solution to this is a new timeval like this:
struct timeval64 {
    int64_t tv_sec;   // Time in seconds since say 1/1/0001.
    uint32_t tv_frac; // 1 / 2^32 second
}
Storing the fractional part as 1 / 2^32 has a number of advantages over microseconds. Although it doesn't seem "natural", it turns out not many people use microseconds directly either, so they have to convert them. As soon as you have to do that, the code becomes identical. Eg, to convert to milliseconds:
milliseconds = tv_usec / ((1000*1000) / 1000)
milliseconds = tv_frac / ((1LL << 32) / 1000)
milliseconds = (int64_t)tv_frac * 1000 >> 32 // Also works
The shift-based form is a better choice on architectures where divide is slow.
Storing stuff as 1 / 2^32 gives the full precision for the bits available, yielding a resolution of around 4GHz which happens to neatly fit with where CPU frequencies have plateaued.
Converting it to a 128 bit time becomes just a shift and add, mainly because this is just a 96 bit time in disguise.
Moving the epoch to 1/1/0001 solves a problem I continually get bitten by: times less than 1970 blowing up. Given 64 bits gives us 292e9 years, it could equally well be 13,000,000,000 BC, which is the one true epoch (the big bang), however I am not sure what we do for calendars back then.
In comparison, storing times as nanoseconds in an int64_t looks like a half-baked solution. As others have observed here, it only delays the problem for a couple of hundred years, doesn't provide enough precision, and doesn't address the 1/1/1970 wart.
Posted Aug 19, 2013 7:00 UTC (Mon) by eru (subscriber, #2753)
If you change the Epoch, there are some choices that are better than others from the point of view of the Gregorian calendar (none of which are 1/1/1970). I recall some discussion about this around Y2K. googling... here: https://groups.google.com/forum/#!topic/comp.protocols.time.ntp/PVE9niFvRqc
Posted Aug 19, 2013 7:42 UTC (Mon) by ras (subscriber, #33059)
Yes. The age old trick of adjusting your origin so Feb 29 is at the end of the leap year cycle has been rediscovered many times.
But what seems to trip up calendar routines more than anything else isn't leap years, or non-leap centuries, or leap millennia. It's negative times. Making the birth of spacetime the epoch eliminates that gotcha once and for all. Moving the origin to 2000-03-01, on the other hand, would make it worse. *shudder*
Posted Aug 19, 2013 10:16 UTC (Mon) by eru (subscriber, #2753)
> But what seems to trip up calendar routines more than anything else isn't leap years, or non-leap centuries, or leap millennia. It's negative times. Making the birth of spacetime the epoch eliminates that gotcha once and for all.
Then most of the range would have no use. There is no point in going that far back in general-purpose time representations. Dates make sense only during human history. Before that you talk about geological eras...
If you want an ancient starting point, the one used in Julian Day (4714-11-24 BC) would be a reasonable choice: All recorded human history happened after that, and this system of counting time is already in widespread use, so ready-made conversion algorithms exist for it.
Posted Aug 19, 2013 23:44 UTC (Mon) by ras (subscriber, #33059)
True. But when your date format spans 292e9 years, that's always going to be true at one end of the range.
I'd agree if wasting part of the range like this was a problem - ie, if the range was small. But it isn't. And when it comes to usefulness, I suspect the first 13 billion years is much more interesting and useful to us than the last 13 billion covered by the range. Since you must have one or the other, surely you would keep the more useful bit? Or to put it another way, if the proposal was to use a single 64-bit quantity in units of 1 / 2^16 sec, then yes, I too would be pushing for 4714-11-24 BC - now that I know it exists. :D
Posted Aug 20, 2013 0:26 UTC (Tue) by Cyberax (✭ supporter ✭, #52523)
Posted Aug 20, 2013 0:38 UTC (Tue) by ras (subscriber, #33059)
I'm curious. What wonderful problems? I've been doing it for ages - from back when the 8086's div instruction cost tens of cycles. It was done that way for a couple of reasons. Firstly (and oddly enough), binary computers are rather good at handling binary fractions. For them it's faster and simpler than base 10. And secondly, the hardware timers usually counted in binary fractions, so inevitably someone, somewhere has to do it.
Posted Aug 20, 2013 11:22 UTC (Tue) by Cyberax (✭ supporter ✭, #52523)
An employee is paid (say) $20 per hour and has worked 1 hour and 7 minutes. How much do we pay them for these 7 minutes? Well, the answer is ($20 * 7/60) = $2.33, right?
Except that then an accounting system tries to get the time back by dividing $2.33 by the $20 rate and gets 6.99(9) minutes, which is truncated to 6 minutes. And we get a discrepancy in the reports. Not nice at all.
Here we'll have the same problem - it's not possible to express time periods like 100 msec in binary-based fractions. It might not be critical, but if you have applications where you do a lot of statistics it might get in the way (http://www.ima.umn.edu/~arnold/disasters/patriot.html - the famous example of this).
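A tiny demonstration of that round-trip discrepancy (a sketch using the same made-up figures): round the pay to cents, convert back, and the seventh minute evaporates:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double rate = 20.0;                                      /* $20/hour */
        double pay = round(rate * 7.0 / 60.0 * 100.0) / 100.0;   /* $2.33    */
        double minutes = pay / rate * 60.0;                      /* 6.99     */
        /* Truncating to whole minutes reports 6, not the 7 worked. */
        printf("paid $%.2f, recovered %.2f minutes -> %d\n",
               pay, minutes, (int)minutes);
        return 0;
    }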
Posted Aug 20, 2013 13:12 UTC (Tue) by ras (subscriber, #33059)
No it's not. But then again no one I know of stores or manipulates times in a raw struct timeval format either, so I doubt it will be a problem.
I always thought timeval's being a prick to work with was a nit. But since they force everyone to convert times to a more suitable format, perhaps that's a blessing in disguise. And as I mentioned before, binary fractions do have one advantage over other bases: they are faster to convert to a different unit.
BTW, did you notice that the patriot missile's OS reported the time in 100ms units? It's a nice power of 10 - just what you are asking for. It didn't solve the problem.
Posted Aug 20, 2013 13:25 UTC (Tue) by Cyberax (✭ supporter ✭, #52523)
Using binary fractions makes a little bit more sense for CLOCK_MONOTONIC - it's mostly used for relative periods anyway. But then we'll have those rare cases when we need to sync CLOCK_MONOTONIC with the real clock (i.e. 'how much am I going to wait till the next 100ms period?') and we're back to square one.
For better or worse, we are used to milliseconds and not binary-based second fractions (mibiseconds?). So it makes sense to keep it that way.
Would have been nice to also have decimal hours...
> BTW, did you notice that the patriot missile's OS reported the time in 100ms units? It's a nice power of 10 - just what you are asking for. It didn't solve the problem.
It's still the same problem, just from the other side.
Pondering 2038 - a suggestion for an improved time_t
Posted Aug 18, 2013 21:47 UTC (Sun) by sdalley (subscriber, #18550)
A plain nanosecond count since the epoch has a couple of problems:
- It can record nanoseconds, but disregards the spotty and variegated collection of leap seconds (of 10^9 nanoseconds each) since 1 Jan 1970 or whatever other epoch is used. Instead it either jumps a tooth backwards, repeating the same values for the leap second(s) and the second afterwards, or enters a smeary time-distortion field in which nanoseconds elapse less rapidly in the neighborhood of a leap second.
- It will be used for two unrelated purposes: to/from time/date and seconds, or calculating supposedly-exact time intervals in ns between two spot times.
These problems could be sidestepped by allowing the 64-bit word to be a union with two 32-bit integers:
- A more-significant signed part to count whole days, whole hours, or whole minutes since the epoch (plus or minus). Since leap seconds are inserted at day rollovers, this would steer clear of the leap seconds problem and last at least 60 times as long as the time from 1970 to 2038, both before and after.
- A less-significant unsigned part to count 100us, 10us or 100ns ticks after the precise time given in the more-significant part. For a leap second, it would increment by an extra second's worth before resetting to 0 and incrementing the more significant half.
This would preserve:
- the integer nature of time_t
- its monotonic increase as time passes.
It would yield:
- reliable behaviour of difftime() over leap second(s),
- a usefully accurate tick rate for timeofday timestamps.
Since binary time formats are not portable anyway, one is in any case forced to convert to absolute date-times or absolute time differences when interfacing between systems.
The only drawback that I can see is that full nanosecond tick accuracy would not be available from such an interface. But the resolution is in any case limited by the hardware implementation. The above is surely more useful than tglx's plain scalar ns-since-the-epoch, for which I am unable to think of a use case that would not be better handled with the above modified method. Accurate nanosecond (or to the limit of the hardware) timings during the running of a particular program are available with clock_gettime(CLOCK_MONOTONIC, ...)
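A minimal sketch of the union being suggested (the field names, the 100us tick choice, and the little-endian layout are all assumptions of mine, not part of the proposal):

    #include <stdint.h>

    union time64 {
        int64_t raw;            /* still one monotonic integer */
        struct {
            uint32_t ticks;     /* 100us units past the day boundary;
                                   864,000,000 per day fits easily, and a
                                   leap second just runs 10,000 ticks long */
            int32_t  days;      /* signed whole days since the epoch */
        } dt;                   /* low word first: little-endian layout */
    };

Because the day count sits in the more-significant half, the raw value still sorts and increments like an ordinary integer.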
Comments?
- Simon
Posted Aug 18, 2013 22:26 UTC (Sun) by Cyberax (✭ supporter ✭, #52523)
So I vote for 64 bit whole second counter and 32 bit nanosecond counter.
Posted Aug 18, 2013 23:58 UTC (Sun) by Cyberax (✭ supporter ✭, #52523)
Posted Aug 19, 2013 9:10 UTC (Mon) by sdalley (subscriber, #18550)
And atomicity becomes (even) more of an issue once you exceed 64 bits. Tick precision down to 100ns avoids this.
When you care about the last nanosecond, you usually don't care at all about the epoch. Why have an excessively large size in order to accommodate both? For nanoseconds, we can just stick to using clock_gettime(CLOCK_MONOTONIC,).
Posted Aug 19, 2013 11:31 UTC (Mon) by Cyberax (✭ supporter ✭, #52523)
And if you don't want extra precision then you can simply ignore it. Or we can do the same trick one more time: use 64 bit for seconds, 32 bits for nanoseconds and 32 bits for attoseconds.
Posted Aug 20, 2013 15:25 UTC (Tue) by mirabilos (subscriber, #84359)
Used in, for example: https://www.mirbsd.org/cvs.cgi/src/kern/include/mirtime.h...
The idea here is to have a 2-tuple (day, second) or a 3-tuple (day, second, tzoffset), which can be quickly converted into any of these representations (and, in fact, is used by MirBSD kernel and libc to do that).
“day” is conveniently a time_t (64 bit in MirBSD since about a decade, in NetBSD® for a few years, in OpenBSD too nowadays, 32 bit elsewhere) so any possible time_t value can round-trip.
“second” is an [0;86400[ or [0;86400]-bounded value (the latter on days that have leap seconds) from Greenwich (no DST), making the leap second always be second==86400.
The time zone offset is just added to get calendar time, e.g. 3600 or 7200 (DST) for Central Europe, negative figures for you Americans.
https://www.mirbsd.org/cvs.cgi/src/kern/c/mirtime.c?rev=HEAD has routines to calculate with that. Prehistoric dates simply use proleptic calendars; if display of a “real” Julian (or Hebrew, Arabic, Japanese, …) calendar is needed, user space is expected to do that by itself taking any of the time_t, MJD or struct tm representations as base and going from there.
One thing this would not solve is the space issue in the world beyond ILP32 considering things like struct alignment (you get 64+32 bit).
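For readers who don't want to chase the URLs, a rough sketch of the tuple being described - the field names and exact types here are my guesses, not the real mirtime.h definitions:

    #include <stdint.h>

    struct mirtime_like {
        int64_t day;    /* whole days, so any 64-bit time_t can round-trip */
        int32_t sec;    /* 0..86400 from Greenwich; 86400 only on days
                           that carry a leap second                        */
        int32_t tzoff;  /* seconds added for calendar time, e.g. 3600 or
                           7200 (DST) for Central Europe                   */
    };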
Windows FILETIME
Posted Aug 22, 2013 4:52 UTC (Thu) by cpeterso (guest, #305)
Posted Aug 22, 2013 9:31 UTC (Thu) by intgr (subscriber, #39733)
Posted Aug 22, 2013 10:23 UTC (Thu) by ledow (guest, #11753)
The problem isn't the big guns and the huge infrastructure - that will solve itself as a matter of financial incentive. It's the little things that, to be honest, nobody will really care about by that time. If the clock on your in-car PC is wrong in 2038, chances are there's a LOT more wrong with the car anyway. Like the example of the old video recorder - I came across many video recorders with the Y2K problem. Fact is, nobody was using them, certainly nobody even knew how to set the timer anyway, and even when they did need the timer, they just set the time and date to something relatively comparable (who cares if the video thinks it's 1900, so long as it records tomorrow's program?).
As such, I can see the scaremongering over Y2k38 going the same way - the big guns are already dealing with those dates (which suggests their apps don't even rely on the OS to handle dates anyway). OSes are already changing the way they deal with them. In another 25 years, 99% of the equipment we're using today will be in the bin. The rest - well, that's not so much a Y2k38 problem as: why are you using 25-year-old hardware to perform any kind of vital task whatsoever, without some kind of oversight process checking it's still fit for purpose? And how worried would you be if told you had to throw it in the bin and buy yourself a new one?
Even medical devices - they have to be tested and calibrated regularly and eventually those tests will include a Y2k38 test and then they'll be condemned and someone will have to go buy a new one. And not a whole great fuss will be made over doing so ("Well, we've had those for 25 years, it's about time we had a new one anyway").
Anywhere it matters, it's fixed or should have been fixed already. Anywhere it doesn't, well, it doesn't matter.
And nobody has to wait for Linux to catch up with OpenBSD or any other nonsensical suggestion - just program your app to take account of it. It's something you can do NOW and test NOW, and by the time it comes around, it'll be dead code as we'll be using 64-bit or even 128-bit values, processors and OSes anyway.
Fact is, if you were able to program properly back in 1975 (and before), you were never affected and probably never will be. The same holds true for today. The bigger problem will be things like GPS and NTP timestamp rollovers (one of which comes after Y2k38 and one two years before). But, again, if you'd programmed your systems with some ounce of common sense, it still wouldn't be a problem.
Posted Aug 22, 2013 11:01 UTC (Thu) by anselm (subscriber, #2796)
> Anywhere it matters, it's fixed or should have been fixed already. Anywhere it doesn't, well, it doesn't matter.
The Linux time_t is useful for file timestamps and other similar stuff. Nobody ever claimed that it was reasonable to use time_t for general dates and times because its range is much too restrictive.
Serious applications – like anything that deals with people's birth dates or long-term financial stuff like mortgages or life insurance – had to come up with their own, more flexible date/time handling a long time ago. Even in the 1970s, when Unix was new, you would have had to deal with birth dates in the 1890s as a matter of course, and a signed 32-bit time_t only goes back to December 1901.
Posted Aug 22, 2013 15:52 UTC (Thu) by heijo (guest, #88363)
Thus, a 512-bit integer time representation (maybe represented as 256.256 fixed point with 1 second unit) seems the most prudent choice (assuming that Planck time is a quantum of time and that proton decay will cause CPUs to stop working correctly anyway).
Alternatively, a signed 64.64-bit fixed point representation for seconds is still enough for currently measurable times, but may prove inadequate in the future.
Posted Aug 29, 2013 9:22 UTC (Thu) by Darkstar (guest, #28767)
Something like
~# runas32 /usr/bin/my_old_program
could be used to switch the kernel to the "old" compatible mode on the fly for only this process (with a simple mapping of Jan 1 1980 = Jan 1 2030 or similar) so old "legacy" programs at least continue to run (although absolute time calculations would obviously be wrong)
This could even be automated by the ELF loader in conjunction with GCC
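Lacking kernel support, much the same remapping can be approximated in user space today; here is a minimal sketch (my illustration of the idea, not Darkstar's proposal) using an LD_PRELOAD shim that shifts time() by a constant:

    /* shim.c - build: gcc -shared -fPIC -o shim.so shim.c -ldl
     * run:   LD_PRELOAD=./shim.so /usr/bin/my_old_program */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <time.h>

    /* Illustrative 50-year shift, mapping 2030 back to 1980. */
    #define OFFSET ((time_t)50 * 365 * 24 * 60 * 60)

    time_t time(time_t *tloc)
    {
        time_t (*real_time)(time_t *) =
            (time_t (*)(time_t *))dlsym(RTLD_NEXT, "time");
        time_t t = real_time(NULL) - OFFSET;
        if (tloc)
            *tloc = t;
        return t;
    }

A real shim would also need to cover gettimeofday(), clock_gettime(), stat() results, and so on - which is roughly why doing it in the kernel, as suggested above, would be cleaner.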