
The Thorny Problem of Keeping the Internet’s Time (New Yorker)

The New Yorker has a lengthy article on the Network Time Protocol and its creator David Mills.

Coders sometimes joke, morbidly, about the “bus factor.” How many people need to get hit by a bus before a given project is endangered? It’s difficult to determine the bus factor for N.T.P., and time synchronization more broadly, especially now that companies such as Google have developed their own N.T.P.-inspired proprietary code. But it seems reasonable to say that N.T.P.’s bus factor is rather small.



The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 5, 2022 20:22 UTC (Wed) by dankamongmen (subscriber, #35141) [Link] (23 responses)

this was a fascinating (though not very technical) profile, but damn it was *depressing*.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 5, 2022 22:06 UTC (Wed) by gus3 (guest, #61103) [Link] (22 responses)

That isn't at all depressing to me. Mills' sight was iffy at best, 40 years ago, yet he established NTP as the de facto time management/reporting system. There are very few who can do better.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 0:01 UTC (Thu) by apoelstra (subscriber, #75205) [Link] (21 responses)

I imagine that it is not only Mills's blindness that could be depressing, but also his comments about aging and relevancy: that in his 20s he was "one of [the hackers actively working on stuff]", then in his 40s he was "their father", and in his 60s he's an "old geezer who can be ignored".

I think the story of NTP, a protocol largely maintained by the same people who were around when it was new and exciting and needed to be invented, but who are increasingly becoming older and less able to continue their work, can be told about a number of protocols.

It was interesting that the article mentions Bitcoin, and Goldberg's comment about how she wishes that NTP "were more like Bitcoin". Bitcoin (whatever you think of its economics or externalities) is a very, very fortunate protocol, both in that it is able to fund its own development and in that the people who built it are still mostly young and healthy.

I think it's a very sad thing that something like NTP, which is much more foundational and critical than Bitcoin, has no self-funding model and there is not even a clear way for it to have one. I know esr had some sort of Patreon-like thing he started but I don't think it went anywhere, and Linux has managed to find a model in which many kernel developers are actually paid by large corporations to work on it, but most protocols continue to be maintained as a personal sacrifice by individual people who are finite and precious.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 5:55 UTC (Thu) by madhatter (subscriber, #4665) [Link] (20 responses)

You quote Goldberg as saying that she wishes that NTP "were more like Bitcoin". She is not quoted as saying that, at least not in the NY article; my feeling is that you should not have used quotation marks when paraphrasing her.

For reference, what she is actually quoted as saying is that she thinks time synchronization should have a cryptocurrency-like buzz around it (ideally with less controversy)—coders who contribute to it, she said, should feel proud enough to declare, “Everyone uses the software, it’s in everything, and I wrote it!”.

But I completely agree with your last paragraph. We've seen this time and time again (eg, with GPG); someone creates something so useful that it becomes ubiquitous, then universally-relied-on, and we come to forget that the whole edifice is held up by a tiny group of unpaid (or at least underpaid) volunteers.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 11:43 UTC (Thu) by paulj (subscriber, #341) [Link]

"we come to forget that the whole edifice is held up by a tiny group of unpaid (or at least underpaid) volunteers."

At which point some people from some corporation will set up a new foundation to take over, put things on a sound footing, and set things "right". Often this effort leaves those who spent years working on the code for little to no money out in the cold - shoved aside.

(Just thinking of recent and not so recent cases of this).

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 12:43 UTC (Thu) by farnz (subscriber, #17727) [Link] (12 responses)

It's not just the lack of money that's a problem - the failure mode Free Software is currently falling into is one where, if key volunteers stop doing their work, nobody is in a position to replace them.

Not all projects suffer from this - the Linux kernel, for example, has plenty of people who could take over Linus's role if he gave up suddenly, since all of the subsystem maintainers are in a position to do that - but enough of the core infrastructure projects depend on people who've been doing them for decades and who don't have understudies who could take over if something prevents them from continuing.

It's not enough to fund the current developers of the things that infrastructure relies on - we also need to find a way to train and fund succession, so that if a developer does want to stop doing something (whether because they've burnt out on it and want to move on, or because they are no longer able to do it), they can without significant loss.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 19:31 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link] (3 responses)

I think the underlying problem is with our values more than our procedures. We value creators more than maintainers, so people are drawn to write new things rather than maintaining the old ones. Nor is this limited to FOSS, or even just to software. It's a deep societal problem that leads us to build lots of stuff without ever thinking about how we'll pay for maintenance. We need to think long and hard about how to change those values, within FOSS if not in society as a whole, if we ever want FOSS to be maintained.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 2:13 UTC (Fri) by jrw (subscriber, #69959) [Link]

Wow, is this ever true. In the U.S., think of our roads, or dams, or bridges, or the reliable delivery of electric power, for example. Or our infrastructure in general.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 10, 2022 4:34 UTC (Mon) by k8to (guest, #15413) [Link]

Hah, I like maintaining and fixing things, and it's a lot of work to find people willing to pay me for this. Same problem.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 11, 2022 19:36 UTC (Tue) by mrugiero (guest, #153040) [Link]

Funding is also part of the values problem though. People are happy to use and even promote the use of free software, but most won't give a dime even if they have a surplus. Why pay if it's free, right? Putting your money where your mouth is is also a principle.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 11, 2022 10:41 UTC (Tue) by paulj (subscriber, #341) [Link] (7 responses)

Another failure mode is that, because resources for maintaining free software are so scarce, if someone introduces some resources to try to solve the problem you mentioned, it can lead to quite toxic behaviour - because the introduced resources are invariably sufficient to fund no more than a couple of the key people.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 11, 2022 13:20 UTC (Tue) by farnz (subscriber, #17727) [Link] (6 responses)

There's a very deep challenge with "introduce some resources to try and solve the problem", in that you need quite a lot of resources. You need to fund all of the following somehow:

  1. The current key people, so that you don't lose important contributors.
  2. Any current contributors who might become key people if funded, so that they don't walk away in disgust at not being paid.
  3. A slush fund for turning future contributors into key people.

Not only is this a lot of resource to put forward, but the third component (the slush fund to allow you to turn future contributors into key people) carries a high corruption risk, since it's quite hard to distinguish "paying farnz's reasonable travel costs to allow him to join our face-to-face summit and learn what we care about and why" from "paying farnz's reasonable travel costs so that he can have a fun holiday near our face-to-face summit".

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 11, 2022 13:25 UTC (Tue) by paulj (subscriber, #341) [Link] (5 responses)

Yep. Indeed.

Seen it happen.

I've even seen 'foundations' set up, attract funding by telling big tech companies they represented the community (even though the community had no say in them), and pay favoured contributors. And - this is the best one - then get their own maintainer/contributors to deliberately slow down the processing of the patches of contributors from tech companies they wanted money from.

Such fun.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 11, 2022 13:31 UTC (Tue) by paulj (subscriber, #341) [Link] (4 responses)

Oh, and my conclusion from all this is that the "Non-profit foundation to represent the community" model is broken. Perhaps it works in some cases, but it is also very vulnerable to conflicts of interest and other issues.

Which makes me think the most honest model is some kind of commercial company model. Which of course also has other issues - but at least the incentives are more obvious.

Oh, another nice conflict of interest at that NPO foundation was that those running it /also/ had a separate commercial consulting business of some sort. Which they were very opaque about - indeed, they did their best to not mention it. So they were getting commercial consultancy contracts with their commercial company, while also running the NPO foundation and trying to get funding for that from tech companies on the basis of public good. Representing themselves in public as the public-good foundation, and never (in any non-obfuscatory way at least) mentioning their commercial side.

I have no faith at all in the NPO/public-good foundation model (e.g. 501(c)(3) in the USA?) as a result.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 11, 2022 13:35 UTC (Tue) by paulj (subscriber, #341) [Link]

Oh yes, the biggest beneficiary - by far - of the money given to the 501(c)(3) was the guy who nominally (by the paperwork) ran it (though the activities of the 501(c)(3) behind the scenes were mixed with the commercial work) - and he had no contributions of note to the actual project. He did some testing, using test suites which he wouldn't open-source (at least back then - no idea if that has changed). I only discovered that after a few years, when the (I think) 2-year delay on publishing of the required 501(c)(3) filings expired.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 15, 2022 20:42 UTC (Sat) by jospoortvliet (guest, #33164) [Link] (2 responses)

Agreed. The challenge is to find a business model that creates an incentive to pay without screwing over the community/project itself. That's tough. But you really don't want to rely on charity for the reasons you and others mention.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 15, 2022 21:19 UTC (Sat) by paulj (subscriber, #341) [Link]

I think it is actually immoral to have a non-profit foundation that is granted tax benefits for public-good reasons (501(c)(3)s in the USA, charities elsewhere) acting to manage software whose main users are commercial corporates. These should be run as non-profit, commercial trade organisations - not public-good foundations.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Jun 22, 2024 9:19 UTC (Sat) by Wol (subscriber, #4433) [Link]

It's very tricky ...

My inclination is simply to say "You need to be honest". I would have no qualms about setting myself up as maintainer with a consulting business behind it, or whatever, but your fellow contributors NEED TO KNOW. They don't need to know the details of how the maintainership and consultancy interact, but the existence of the link should not come as a surprise.

And while it might be tricky to get off the ground, I'd try to make any foundation a sort of trade association - the officers don't get paid by the foundation, they are consultancies seeking to grow the pie. That's the big problem in the MV/Pick world: we have too few people trying to grow the pie. They're too busy nicking each other's customers ...

Cheers,
Wol

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 13:05 UTC (Thu) by apoelstra (subscriber, #75205) [Link] (4 responses)

>She is not quoted as saying that, at least not in the NY article; my feeling is that you should not have used quotation marks when paraphrasing her.

You're right -- sorry about that, and thanks for being charitable about it. I should've either returned to the article to get her exact words (and covered more context) or dropped the quotation marks.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 10, 2022 15:52 UTC (Mon) by Ranguvar (subscriber, #56734) [Link] (3 responses)

Quote issues aside, thanks for mentioning the connection.

It struck me as intriguing, because one of the best ways of explaining Bitcoin is as a trustable, if imprecise, clock.

"In this paper, we propose a solution to the double-spending problem using a peer-to-peer distrib­uted timestamp server to generate compu­ta­tional proof of the chrono­log­ical order of trans­ac­tions." - Satoshi Nakamoto, 2009

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 11, 2022 12:57 UTC (Tue) by paulj (subscriber, #341) [Link] (2 responses)

If I understand the Bitcoin protocol correctly, the difficulty of the hashing problem is adjusted every 2016 blocks, so that the block rate is maintained at an average of 1 block/10 minutes given the hash-rate of the network over those last 2016 blocks.
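
Roughly, as a Python sketch of that retargeting rule (not Bitcoin Core's exact integer arithmetic - and I believe the real rule also clamps each adjustment to a factor of 4, which matters for the timestamp discussion below):

    TARGET_SPACING = 10 * 60            # seconds aimed for between blocks
    RETARGET_INTERVAL = 2016            # blocks between difficulty adjustments
    EXPECTED_SPAN = TARGET_SPACING * RETARGET_INTERVAL   # roughly two weeks

    def retarget(old_target, first_block_time, last_block_time):
        """Scale the proof-of-work target by the ratio of actual to expected
        time taken for the last 2016 blocks (a higher target = easier blocks)."""
        actual_span = last_block_time - first_block_time
        # Clamp the adjustment to 4x in either direction per period, which
        # limits how far bogus timestamps can move the difficulty at once.
        actual_span = max(EXPECTED_SPAN // 4, min(actual_span, EXPECTED_SPAN * 4))
        return old_target * actual_span // EXPECTED_SPAN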

Without such an adjustment the network would seize up, one way or another - too few or too many blocks. Though, I guess the fees offered by senders would have to change.

So it seems like time is an input to the distributed protocol. There might be some practical issues that arise if you try to remove external time as an input into the protocol. ???

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 11, 2022 13:40 UTC (Tue) by excors (subscriber, #95769) [Link] (1 responses)

According to https://blog.lopp.net/bitcoin-timestamp-security/ :

> In order for the time field of a block header to be considered valid by nodes it must meet two criteria:
> 1. Be less than 2 hours in the future from your computer’s current time
> 2. Be greater than the median timestamp of the past 11 blocks

You need time (from e.g. NTP) as an input for the first of those. Otherwise (if I understand correctly) a miner with a modest percentage of the network's hashing power could keep posting blocks with timestamps a year in the future, and eventually they would get lucky and win 6 of the latest 11 blocks. At that point the median timestamp is a year in the future, every other miner has to start using bogus timestamps or their blocks will be rejected as invalid, and the network will then think it took a year to solve the last 2016 blocks and will adjust the hashing difficulty accordingly.
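
In Python, those two checks amount to something like this (a sketch only; the constants come straight from the rules quoted above, and a real node obviously does much more):

    import statistics
    import time

    MAX_FUTURE = 2 * 60 * 60    # rule 1: at most two hours ahead of the local clock

    def block_time_is_valid(block_time, last_11_block_times, local_now=None):
        """Apply the two timestamp rules: not too far in the future relative to
        the local (typically NTP-disciplined) clock, and greater than the median
        of the previous 11 block timestamps."""
        if local_now is None:
            local_now = int(time.time())
        median_time_past = statistics.median(last_11_block_times)
        return (block_time <= local_now + MAX_FUTURE
                and block_time > median_time_past)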

With the 2 hour limit a miner could still make the median clock jump ahead by 2 hours, but that wouldn't have a huge effect on the hashing difficulty so it's a much less valuable attack. If the miner had >50% of the network's hashing power then in theory they could stop the clock advancing for several weeks (because timestamps in the distant past are still considered valid), and then jump ahead to the present to manipulate the hashing difficulty, but the defence is for Bitcoin to be so stupendously wasteful of electricity that no reasonable attacker can afford >50% of it (and hope that it never gets targeted by an unreasonable and very wealthy attacker).

When Bitcoin is described as "a peer-to-peer distributed timestamp server", I think that's in the sense of Lamport timestamps, i.e. a globally-agreed partial order over events. It's not computing accurate or reliable UTC timestamps. The UTC timestamps come from NTP and Bitcoin just does this median filtering to make it statistically unlikely that an attacker could benefit much from fake timestamps.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 11, 2022 13:53 UTC (Tue) by paulj (subscriber, #341) [Link]

Interesting. Didn't know about that.

I wonder why the block count - which is a monotonically increasing counter - doesn't work. I guess they needed a further ordering field, but one that allows just a partial order for sorting of blocks in the mempool? I wonder why a non-timestamp, partial-order field alongside the block counter wouldn't have worked.

I guess there's lots of devil in the details.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 9:09 UTC (Fri) by Tet (guest, #5433) [Link]

"we come to forget that the whole edifice is held up by a tiny group of unpaid (or at least underpaid) volunteers."

Obligatory XKCD: https://xkcd.com/2347/

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 16:54 UTC (Thu) by HenrikH (subscriber, #31152) [Link]

"Stenn told me that he has received requests for free fixes from companies that charge customers for services that depend on N.T.P" - Sometimes I wonder if companies like these even understand what they are asking here... They would never dream of making such requests for anything else, but then suddenly, oooo it's open source...

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 19:30 UTC (Thu) by jnareb (subscriber, #46500) [Link] (5 responses)

What about the *NTPsec* re-implementation of NTP Classic: https://www.ntpsec.org/ ?

It looks like the project is quite active, and has quite a few active contributors... though the last release, 1.2.1, was more than a year ago (Jun 7, 2021), and the blog has been inactive since then.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 8, 2022 1:36 UTC (Sat) by mtaht (subscriber, #11087) [Link] (4 responses)

Esr is recovering from stomach cancer.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 8, 2022 9:05 UTC (Sat) by zdzichu (subscriber, #17118) [Link] (3 responses)

That would indicate the NTPsec project has a bus factor of 1 - exactly what it was meant to prevent with respect to the original NTP.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 8, 2022 15:59 UTC (Sat) by mtaht (subscriber, #11087) [Link]

The project is lacking a bit of direction and funding just now. Eric has been good about finding people to help on all his other projects. It does illustrate that our world should be evolving to have the equivalent of Nobel prizes and organizations that can, as he put it, "hold up the sky" (http://esr.ibiblio.org/?p=4196).

"Ubiquity, like great power, requires of us great responsibility. It changes our duties, and it changes the kind of people we have to be to meet those duties. It is no longer enough for hackers to think like explorers and artists and revolutionaries; now we have to be civil engineers as well, and identify with the people who keep the sewers unclogged and the electrical grid humming and the roads mended. Creativity was never enough by itself, it always had to be backed up with craftsmanship and care – but now, our standards of craftsmanship and care must rise to new levels because the consequences of failure are so much more grave."

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 12, 2022 23:47 UTC (Wed) by jwarnica (subscriber, #27492) [Link]

Ragecoding a fix might rapidly get to PoC or even "80% done", and is great proof of the possibility of a technical replacement.

But if the fundamental problem is the lack of sustainable community then one guy doing anything, or even everything, doesn't fix anything.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 31, 2022 2:19 UTC (Mon) by ghane (guest, #1805) [Link]

Speaking as a lurker on the ntpsec lists, which I got into because of the gpsd community:

NTPsec had a specific goal, to make the NTP code base "secure". The first year was spent in reducing the attack surface, quite successfully, at which time progress stalled. We had another burst when NTS was developed.

There are few problems left to solve (I go back and fix some minor grammar in docs/, because the codebase itself is pretty stable and doing what it needs to do).

esr has not been a major contributor for a couple of years; there are a few developers who are active and consistent, and who are shepherding new developers - e.g., Hal Murray.

In the last 12 months, there have been 93 commits (down 20) from 15 contributors (up 2). I don't think this project has a low bus-factor.

tl;dr: Nothing exciting to do till NTPv5 comes around, probably the draft RFCs late next year.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 22:16 UTC (Thu) by NYKevin (subscriber, #129325) [Link] (16 responses)

> “They should all be sent to bed without dinner,” Judah Levine, a physicist who has worked at NIST since 1969, told me, referring to the tech companies. Levine has maintained NIST's own N.T.P. service for decades, and he and a select few of his colleagues are responsible for reconciling its master clocks with Universal Coördinated Time, often within billionths of a second. If the companies disliked leap seconds, Levine said, they should have lobbied for change through the intergovernmental bodies that have the power to abolish them. (Levine himself would prefer a world without leap seconds, and the U.S. government, with his help, has proposed eliminating them for the sake of computer systems; astronomers and other governments have objected. The next opportunity for formal discussion will come in 2023, and the matter is already actively being debated.) “There’s a process,” Levine said. “And if they don't like it they don’t get to say, ‘Well, I’m not going to play anymore. I’m going to do what I’m going to do, and tough luck on you.’ ” As one of the world’s preëminent observers of nanoscopic slices of seconds, Levine argues that there could be legal implications if computers disagreed about when a message was sent, or about when a stock trade occurred. [Small capitals converted to ALL CAPS.]

While I can somewhat understand Levine's frustration, it should be emphasized that leap smearing is not new. Everyone who ever ran ntpd -x during a leap second is guilty of it in one form or another. (The -x flag roughly means "slew time instead of stepping it, even if the discrepancy is big"). All the tech companies did was standardize the way in which time gets slewed, which is obviously better than "well, ntpd just sort of eyeballs it on each machine, and who knows how far out of sync they get with each other."

As for the legal implications, that's what unsmear[1] is for. You don't actually lose any information, it's just following a different time standard, and you can always convert back into UTC if necessary. This is an improvement over ntpd -x (which definitely cannot be converted into UTC without a whole lot of guesswork and/or instrumentation and telemetry). I frankly do not understand why he would even bring up that argument in the first place. It makes it sound as if he's merely resentful of the fact that a bunch of non-NIST people came up with a standard without asking NIST's opinion. And yes, it probably would have been good to go through a formal standardization process, but on the other hand, it sounds as if NIST doesn't want to own it anyway.
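
To make the "you can always convert back" point concrete, here's a Python sketch for one positive leap second, assuming the noon-to-noon 24-hour linear smear as I understand the published spec (the real unsmear library handles leap tables and the general case):

    from datetime import datetime, timedelta, timezone

    SMEARED_SPAN = 86400    # smeared seconds in the noon-to-noon window
    REAL_SPAN = 86401       # SI seconds in the same window (one positive leap)

    def unsmear_label(smeared_unix, window_start_unix):
        """Map a smeared POSIX timestamp inside the 24-hour window back to its
        UTC label, including the 23:59:60 label that POSIX time cannot express."""
        elapsed = (smeared_unix - window_start_unix) * REAL_SPAN / SMEARED_SPAN
        start = datetime.fromtimestamp(window_start_unix, tz=timezone.utc)
        if elapsed < 43200:            # still before 23:59:60
            return (start + timedelta(seconds=elapsed)).isoformat()
        if elapsed < 43201:            # inside the leap second itself
            return start.strftime("%Y-%m-%dT23:59:") + f"{60 + elapsed - 43200:.6f}Z"
        return (start + timedelta(seconds=elapsed - 1)).isoformat()

    # Smeared midnight of 2017-01-01 was really halfway through the leap second:
    print(unsmear_label(1483228800.0, 1483185600.0))   # 2016-12-31T23:59:60.500000Z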

(Disclaimer: I work for Google, but not on Time SRE.)

[1]: https://github.com/google/unsmear

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 6, 2022 22:27 UTC (Thu) by NYKevin (subscriber, #129325) [Link] (9 responses)

> As for the legal implications, that's what unsmear[1] is for. You don't actually lose any information, it's just following a different time standard, and you can always convert back into UTC if necessary. This is an improvement over ntpd -x (which definitely cannot be converted into UTC without a whole lot of guesswork and/or instrumentation and telemetry).

I forgot to mention: Even if ntpd is *not* given -x and all components of the system exactly follow all of the relevant time-related standards, you still cannot unambiguously convert Unix time into UTC around a leap second, because Unix time represents (is specified to represent) 23:59:60 and 00:00:00 as the same second. Smeared time is monotonic, so it can distinguish between those two seconds.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 9:51 UTC (Fri) by rschroev (subscriber, #4164) [Link] (8 responses)

> because Unix time represents (is specified to represent) 23:59:60 and 00:00:00 as the same second

Since man 3 localtime (on my Debian system) says

> tm_sec The number of seconds after the minute, normally in the range 0 to 59, but can be up to 60 to allow for leap seconds

my assumption always was that Linux had some way of dealing with leap seconds. Or at least that 11:49:60 was not the same as 11:50:00. That's not correct then, and the text in the manual page is incorrect or at least misleading? Or is it just my understanding that's wrong?

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 14:28 UTC (Fri) by eru (subscriber, #2753) [Link]

The possibility of the value 60 in tm_sec comes from the C standard. I don't have the current one at hand, but a final draft for C99 that I have as text says:

7.23.1 Components of time

...

[#4] The tm structure shall contain at least the following members, in any order. The semantics of the members and their normal ranges are expressed in the comments.251)

               int tm_sec;   // seconds after the minute  --  [0, 60]
               int tm_min;   // minutes after the hour  --  [0, 59]
               int tm_hour;  // hours since midnight  --  [0, 23]
               int tm_mday;  // day of the month  --  [1, 31]
               int tm_mon;   // months since January  --  [0, 11]
               int tm_year;  // years since 1900
               int tm_wday;  // days since Sunday  --  [0, 6]
               int tm_yday;  // days since January 1  --  [0, 365]
               int tm_isdst; // Daylight Saving Time flag

The value of tm_isdst is positive if Daylight Saving Time is in effect, zero if Daylight Saving Time is not in effect, and negative if the information is not available.

footnote 251: The range [0, 60] for tm_sec allows for a positive leap second.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 17:42 UTC (Fri) by NYKevin (subscriber, #129325) [Link] (6 responses)

That is not Unix time. That is localtime. If you pervasively use localtime (or gmtime etc.) everywhere, then it's (probably?) fine (or at least, I should hope it's fine). But as soon as you call a function like gettimeofday(2), there are no separate fields for individual parts, and the specification says a positive leap second is a repeated leap second. Furthermore, other programming languages will usually provide direct interfaces to gettimeofday (e.g. Python has time.time()), so it's surprisingly hard to pervasively audit all software for this misbehavior.

Also, if you read the Python documentation carefully, you'll find that it purportedly implements time.localtime() by converting from time.time(). So even if your system library gets this right, your programming environment might not!

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 17:54 UTC (Fri) by NYKevin (subscriber, #129325) [Link]

> If you pervasively use localtime (or gmtime etc.) everywhere, then it's (probably?) fine (or at least, I should hope it's fine).

No, it's actually not fine. I misread the man page. These functions take Unix time as an argument and then claim they can give you leap second information from that time. But there is no leap second information in a Unix time value in the first place, so that's just a flat lie. See also the POSIX documentation: https://pubs.opengroup.org/onlinepubs/009696799/basedefs/... (the formula they give for Unix time causes positive leap seconds to repeat).
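
To see the repetition, here is that POSIX "Seconds Since the Epoch" formula transcribed into Python (the arguments are the struct tm fields); the leap second 2016-12-31T23:59:60Z and the following 2017-01-01T00:00:00Z come out identical:

    def posix_epoch_seconds(tm_year, tm_yday, tm_hour, tm_min, tm_sec):
        """The formula from the POSIX spec linked above (all integer math)."""
        return (tm_sec + tm_min * 60 + tm_hour * 3600 + tm_yday * 86400
                + (tm_year - 70) * 31536000
                + ((tm_year - 69) // 4) * 86400
                - ((tm_year - 1) // 100) * 86400
                + ((tm_year + 299) // 400) * 86400)

    # 2016-12-31T23:59:60Z (the leap second) ...
    leap = posix_epoch_seconds(tm_year=116, tm_yday=365, tm_hour=23, tm_min=59, tm_sec=60)
    # ... and 2017-01-01T00:00:00Z map to the same Unix time:
    after = posix_epoch_seconds(tm_year=117, tm_yday=0, tm_hour=0, tm_min=0, tm_sec=0)
    assert leap == after == 1483228800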

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 8, 2022 14:30 UTC (Sat) by kleptog (subscriber, #1183) [Link] (4 responses)

Another way of dealing with leap-seconds is to not repeat them but just keep counting. To do this you can use the right/* timezones.

$ TZ=right/Australia/Sydney date; TZ=Australia/Sydney date
Sun 09 Oct 2022 01:24:54 AEDT
Sun 09 Oct 2022 01:25:21 AEDT

Of course, the risk of confusion is rather high.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 8, 2022 23:04 UTC (Sat) by NYKevin (subscriber, #129325) [Link] (2 responses)

Now I am rather curious.

Suppose the kernel is following UTC (as is typical). Can userspace correctly figure out the current value of TAI, even during a leap second? I'm not really clear on how that would be done, since the kernel ultimately "just" gives you a time_t or a struct timespec (or something roughly equivalent like a struct timeval), and as we've established, time_t and struct timespec don't contain leap second information. Do you have to somehow talk to ntpd and manually ask it whether a leap second is happening? Or is there some obscure system call I'm not aware of where the kernel can give you that information directly?

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 9, 2022 6:03 UTC (Sun) by lunaryorn (subscriber, #111088) [Link]

I don't think the kernel uses UTC with leap seconds. It would need a leap second table to do so and a way to update that table dynamically, and I don't think this exists.

I presume it uses UTC as baseline and ticks in SI seconds (roughly) but ignores leap seconds, and thus drifts away from UTC on every leap second unless time's corrected by an NTP client.

I work in an industry where leap seconds matter and leap smearing is not an option, and all software I know of in this area handles leap seconds by itself with its own custom date/time routines, because there's no support for proper leap second handling on Linux, and the standard date/time routines of most programming languages can't even represent a 61st second.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 9, 2022 19:36 UTC (Sun) by kleptog (subscriber, #1183) [Link]

AIUI the kernel internally uses a monotonic clock based on the number of jiffies since boot, from which all other time sources are defined. There are clock_gettime(CLOCK_TAI, ...) and clock_gettime(CLOCK_MONOTONIC, ...) if you really can't handle discontinuities.

The TAI offset can be set via adjtimex(2).

All this information could be used by userspace to produce a clock display where there is a 61st second during a leap-second, but I don't know if anyone does this.
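
For example, you can check whether the kernel's TAI offset has been set at all (Python 3.9+ exposes CLOCK_TAI on Linux; if no NTP daemon has ever set the offset via adjtimex(), the two clocks read the same):

    import time

    utc = time.clock_gettime(time.CLOCK_REALTIME)
    tai = time.clock_gettime(time.CLOCK_TAI)

    # 37 seconds as of the 2016/2017 leap second if the offset has been set,
    # 0 if it never was.
    print(f"TAI - UTC = {tai - utc:.0f} s")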

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 15, 2022 0:18 UTC (Sat) by mirabilos (subscriber, #84359) [Link]

You also need tons of posix2timet() and timet2posix() calls this way.

I implement leap seconds in time_t in my hobby BSD derivative (German law used to explicitly require them, and fsck POSIX for not having them), and I have lots of fun with them… the existence of the right/ timezones is a beginning but not the end of all of it.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 4:55 UTC (Fri) by oldtomas (guest, #72579) [Link] (5 responses)

To be fair, said "tech companies" could choose to dedicate resources to contribute to the process. Instead, they just "do".

To me it looks very much like the incursion phase in Zuboff's dispossession cycle [1]: "'I'm taking this,' it says. 'These are mine now'".

And then we go whine that our civilisation is on its last legs.

[1] Shoshana Zuboff "The Age of Surveillance Capitalism"

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 17:44 UTC (Fri) by NYKevin (subscriber, #129325) [Link] (4 responses)

The smear is a published standard. See https://developers.google.com/time/smear. Anyone who wants to implement it is free to do so. The only difference is that you *don't* have to pay NIST a fee to get an "official" copy of the standard. I imagine NIST is not happy about losing those fees, but IMHO that's their problem.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 8, 2022 7:06 UTC (Sat) by oldtomas (guest, #72579) [Link] (2 responses)

Dispossession cycle, phase 2 [Zuboff, p. 137 ff]. But please (seriously!), don't take that personally.

Of course, lobbying NIST (and ISO, etc.) to make norms and specs publicly available at no cost is still the right thing to do.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 8, 2022 8:33 UTC (Sat) by NYKevin (subscriber, #129325) [Link] (1 responses)

Sorry, but I completely do not understand where you are coming from here. The Leap Smear was invented for use on Google's private servers, to fix a specific problem that Time SRE was observing during leap seconds. Surely you have no objection to that - we should be allowed to set our own private servers to whatever time standard we damn well want (just like anyone else can).

Time SRE then realized that other people might have similar problems, and went public with it. Or maybe it was a defensive publication (I'm not a lawyer, and I *definitely* wasn't in the room for that conversation, if it even happened). Either way, are you seriously claiming that such publications, when performed by tech companies, are inherently problematic? If so, that's going to cause all sorts of patent-related problems.

Or are you saying that the problem was the next step, when they made it available on their public NTP servers? I still don't see how that even hits stage one of the cycle, though, because it didn't "disrupt" anything - nobody is ever going to force you to use Google's NTP services if you don't want to, and I seriously doubt any of the various non-Google people currently operating NTP servers (mostly universities and standards bodies) are going to say "oh, well, now that Google invented smeared time, we all have to switch over tomorrow!" That is just not going to happen. It's obvious that non-smeared time is still necessary and useful for scientific applications and other purposes. Outside of the specific context of Unix time repeating positive leap seconds, the smear is silly, and doesn't make sense as a universal time standard like UTC does.

Please, do answer these questions. I'm not making a rhetorical argument, I'm genuinely trying to understand your position.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 10, 2022 6:12 UTC (Mon) by oldtomas (guest, #72579) [Link]

"...oh, well, now that Google invented smeared time, we all have to switch over tomorrow!"

But this is what eventually happens, due to Google's sheer size. Sometimes it doesn't; the incursion phase is like that: high risk, high reward.

Don't get me wrong: technically, this approach does solve a problem. That is the engineering aspect. Strategically (i.e. in the way they went about it), the focus is on another problem. Things often have several aspects, and different echelons in such a huge corporation care about different ones :-)

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 12, 2022 7:07 UTC (Wed) by gdt (subscriber, #6284) [Link]

"The smear is a published standard"

It is a published specification. That it is not a standard is precisely Levine's objection.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 19:39 UTC (Fri) by jthill (subscriber, #56558) [Link] (5 responses)

I simply do not understand why the list of leap seconds is not kept and transmitted as a separate file, completely ignored by hardware and kernels and applied by explicit request at the userland level as a context-specific offset like timezones.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 7, 2022 22:22 UTC (Fri) by NYKevin (subscriber, #129325) [Link]

You can do that now: https://support.ntp.org/Support/TimeScales

Unfortunately, nobody does that (to a first approximation), and so your "Unix time" will be out of sync with everyone else's.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 12, 2022 4:49 UTC (Wed) by fest3er (guest, #60379) [Link] (3 responses)

I've been wondering this myself. Time itself (on Earth) passes at a given rate. Leap seconds account for the slowing of Earth's rotation so that meridian 0 remains 180° away from the sun at midnight.

Leap seconds should have been handled exactly the same as DST: grab the current 'seconds-since-N' and compute the respective date-time considering time zone, DST, leap years and seconds. Entering DST, human clocks jump ahead one hour. With a leap second, clocks should jump ahead one second though we humans aren't too concerned about accuracy and precision of time in our daily lives; shoot, look at the USPS: at this moment, the time is exactly October, 2022 (very old joke).

Elapsed time (in yoctoseconds) is always continuous and constant. Here on Earth. Date-time in recent centuries has always jumped around. It took the world over 400 years to adopt the Gregorian calendar nation-by-nation; yet it still isn't used universally. Leap seconds should have been just another calculation when preparing human-readable date-time. IMO.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 12, 2022 9:45 UTC (Wed) by excors (subscriber, #95769) [Link] (2 responses)

> Elapsed time (in yoctoseconds) is always continuous and constant.

Unfortunately it's not constant, because of general relativity. Time progresses at a different rate depending on your location in the solar system. If you're interested in yoctoseconds, or even just attoseconds, I think you need to account for e.g. the tidal effects of the Moon and Sun, which means you still need to account for the unpredictable rotation speed of the Earth.

Astronomers have things like Barycentric Coordinate Time (a clock at the center of mass of the solar system, which goes faster than regular Earth clocks by about 0.5 seconds per year) and Geocentric Coordinate Time (a clock at the center of Earth but infinitely far away, or something, which oscillates around +/- 2 msecs from Barycentric Coordinate Time every year or so) and Terrestrial Time (a linear scaling of Geocentric Coordinate Time to approximate SI seconds at mean sea level) and it all gets a bit complicated.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Oct 13, 2022 0:02 UTC (Thu) by jwarnica (subscriber, #27492) [Link]

Sounds like astronomers have considered different "actual time" and have some mapping between them. A bunch of other problems. And then there is human readable time.

I would suggest that NTP is for earth, so should at best (worst?) be considered "earth time", still not quite human readable time. I think we must agree DST is for humans. Are leap seconds?

If leap seconds are for the convenience of the planet Earth, sure, make leap seconds a protocol problem. If leap seconds are for the convenience of humans, it's a display problem, the same as DST and the rest of the timezones.

The Thorny Problem of Keeping the Internet’s Time (New Yorker)

Posted Feb 1, 2023 15:06 UTC (Wed) by nix (subscriber, #2304) [Link]

Yoctoseconds and attoseconds would need to account for altitude (to within millimetres) and almost certainly that also means they'd need to know the geoid at your current location (the local gravitational field, which depends on things like the depth of continent under your feet and that sort of thing, whether you're located above one of the LLSVP blobs, etc etc ad nauseam). In geologically active areas this changes continuously and sometimes quite rapidly...

