A rift in the NTP world
The failure of the Network Time Protocol (NTP) project could be catastrophic. However, what few have noticed is that the attempts to prevent that catastrophe may have created entirely new challenges.
NTP is an Internet Engineering Task Force (IETF) standard, handled by its NTP working group. As Tom Yates described it in an LWN article:
First designed in 1985 by David L. Mills, the protocol has been coordinated in recent years by the Network Time Foundation. Today, it develops a number of related standards, including Ntimed, PTPd, Linux PTPd, RADClock, and the General Timestamp API. For most of this time, the primary manager of the project has been Harlan Stenn, who has volunteered thousands of hours at the cost of his own consulting business while his NTP work is only intermittently funded.
Several years ago, the project's inadequate funding became known in the media, and Stenn received partial funding from the Linux Foundation's Core Infrastructure Initiative, which was started after the discovery of how the minimal resources of the OpenSSL project had left systems exposed to the Heartbleed vulnerability. Searching for additional funding, Stenn contacted the Internet Civil Engineering Institute (ICEI) and began working with two of its representatives, Eric S. Raymond and Susan Sons.
However, the collaboration did not go smoothly. According to Stenn, Raymond contributed one patch and had several others rejected, and Stenn's ideas were out of sync with Raymond's and Sons's. "I spent a lot of time trying to work with Susan Sons," Stenn said in a phone interview. "Then all of a sudden I heard they have this great plan to rescue NTP. I wasn't happy with their attitude and approach, because there's a difference between rescuing and offering assistance. [Their plan was] to rescue something, quote unquote, fix it up, and turn it over to a maintenance team." Besides the fact that this plan would eliminate Stenn's role, he considered it impractical because the issue is not merely maintenance, but also continued development of the protocol. The efforts to collaborate finally collapsed when Raymond and Sons created a fork they called Network Time Protocol Secure (NTPsec).
Today, the Network Time Foundation lists four main contributors, one of whom is on sabbatical, and acknowledges the contributions of 33 people in all. In addition, another seven work on related projects. By contrast, NTPsec lists seven contributors, including Sons. Although NTPsec began by using the NTP code, today neither NTP nor NTPsec shares code or patches with the other.
Both projects would probably more or less agree on the general outline of events given above. Yet it is difficult to be sure, since both Sons and Mark Atwood, NTPsec's Project Manager pro tem, ignored requests for an interview. However, the details of the two projects' claims could hardly be farther apart. The two projects differ on the scale and cause of NTP's current problems and on the approach that should be taken to address those problems.
The NTPsec version
Sons has publicly described the NTPsec interpretation several times, including in a presentation at OSCON and in a podcast interview with Mac Slocum of O'Reilly. In the podcast, Sons depicted NTP as a faltering project run by out-of-touch developers. According to Sons, the build system was on one server whose root password had been lost. Moreover, "the standard of the code was over sixteen years out of date in terms of C coding standards" and could not be fully assessed by modern tools. "We couldn't even guarantee reproducible results across different systems," she added.
Sons also claimed that "security patches weren't being circulated in a timely manner," taking "months to years" for release. Meanwhile, "security patches were being circulated secretly and leaked," although she did not explain how. Instead she offered an anecdote about a group of script-kiddies who knew that NTP was useful for denial of service attacks while remaining unaware of its function.
However, Sons was most concerned about the aging group of developers who maintain low-level Internet software (including NTP) in general. Most of them, she said, "are older than my father.... [and] are not always up to date on the latest techniques and security issues." Many are burning out from trying to maintain critical code while working full-time jobs, and Sons suggested that they "should be retired."
Faced with such chaos, Sons said, she soon realized that "the Internet is going to fall down if I don't fix this." When efforts to gain acceptance for her plans from Stenn and other NTP developers failed, Sons and Raymond started NTPsec, placing the revised code in a Git repository rather than the BitKeeper one used by the NTP Foundation, rewriting NTP's scripts in Python rather than the various other languages they had been written in to make attracting new developers easier, and actively promoting the project in order to attract volunteers.
In her OSCON presentation she listed several accomplishments (Sons refers to the original NTP project as "NTP Classic"):
- Due to a reduction in code of over 2/3 (from 227kLOC to 74kLOC), NTPsec was immune to over 50% of NTP Classic vulns BEFORE discovery in the last year.
- NTPsec patches security vulnerabilities, on average, within less than 12 hours after discovery. Note that publication is sometimes slowed to coordinate with NTP Classic releases.
- NTPsec's vulnerability response has pressured NTP Classic to speed up their response from months-to-years to days-to-weeks upon threats of funders pulling out.
- [...] NTPsec is poised to replace NTP Classic in the coming year in installations around the world.
Sons's perspective on her involvement is summarized by the title of her OSCON presentation: "Saving Time." She has since become president of ICEI; she described herself in the presentation as having "moved on", and she is no longer involved with NTPsec on a daily basis.
Meanwhile, a web search shows that media coverage of events accepts Sons's account while rarely attempting to hear NTP's side of the story. Cory Doctorow repeated the NTPsec version, and so did Brady Dale of the Observer, while Steven J. Vaughan-Nichols recommended NTPsec over NTP. The security site UpGuard was equally unquestioning, while CircleID, a site specializing in Internet infrastructure, only revised its coverage after complaints from representatives of NTP. In public, the NTPsec version of events has become the official one.
The NTP side
NTPsec depicted NTP as being in a state of total disorder. However, in communications with me, Stenn offered a radically different story. In Stenn's version of events, NTPsec, far from being the savior of the Internet, has misplaced priorities and its contributors lack the necessary experience to develop the protocol and keep it secure.
Stenn denied many of Sons's statements outright. For example, asked about Sons's story about losing the root password, he dismissed it as "a complete fabrication." Similarly, in response to her remarks about older tools and reproducible results across different systems, Stenn responded: "We build on many dozens of different versions of different operating systems, on a wide variety of hardware architectures [...] If there was a significant problem, why hasn't somebody reported it to us?"
Asked about how current the code is, Stenn stated that "the code has been and continues to be written to compile and run on currently available and currently used systems." Stenn conceded that some code only builds on older machines, yet pointed out that many old machines are still running. "If hardware is still in use, from our point of view there is an actual benefit to doing what we can to make sure folks can build the latest code on older machines."
As for security patches, Stenn acknowledged that NTP currently lacks the funding for a much-needed replacement of Autokey, the code that authenticates NTP servers. However, he noted that NTP released five major patches in 2016, and claimed that it was up to date as of the end of November 2016. He added, "I have no idea what she's talking about [in regard to] secret circulation of patches or leaked patches."
Moreover, Stenn questioned the accomplishments listed in Sons's presentation. In particular, the reduction of NTPsec's code base, even allowing for the relative compactness of code written in Python, becomes less impressive in light of Stenn's explanation that NTP is "the only reference implementation for NTP, and that means we have to provide complete functionality." Stenn claimed that NTPsec has "removed lots of stuff that has zero reported bugs in them, like sntp, the ntpsnmpd code, and various refclocks." Although a less than complete implementation might have its uses, Stenn claimed that NTPsec has gone too far in removing code, and that its bug repairs have sometimes been at the cost of reduced functionality.
In general, Stenn wondered if, after only a couple of years' work, NTPsec contributors have the experience necessary to work with the code. His own understanding of the protocol has changed several times during his decades of work, and he warned that "if you don't understand how everything works and where it fits into place, when things get busy, horrible things can happen." The NTPsec story frequently spoke of free-software ideals such as openness, transparency, and a welcoming environment for all contributors, "but this isn't a democratic process. It's a scientific process, and this isn't somebody's turn to go ahead and take theirs at the wheel driving the bus."
Still, the NTPsec fork has caused some changes in the NTP project. After NTPsec began, the foundation felt the need to commission regular financial audits, and to continue code audits that were begun in 2006.
"Creative destruction ('let's see what happens if we throw something into the works') is a horrible way to provide core Internet structure," Stenn concluded.
One step forward, two steps back?
For outsiders, it is difficult to assess which version of events is closer to the truth. Probably few are competent to judge. However, assigning blame is beside the point.
What is of concern is that acceptance of the two implementations of the NTP protocol has been based largely on the more appealing story, and not on the quality of the code. NTPsec's constant analogy with the underfunded OpenSSL project evokes an immediate, concerned response from free-software supporters, but, if Stenn is correct in his assertions, the situations of NTP and OpenSSL are not usefully comparable.
In particular, having two separate projects may be no more than a duplication of effort. Although having competing projects can sometimes benefit free software, in this case, having two warring projects risks diluting the already limited resources and support being contributed to put the protocol on a reliable footing.
Despite all the efforts of both projects, the possibility remains that the dangers to the protocol are as great today as they were before anyone attempted to address them. Already, where once only Stenn was looking for support, now Raymond is in a somewhat similar position, as NTPsec has lost its Core Infrastructure Initiative funding as of September 2016. It is all too easy to imagine the struggle for survival growing worse for everyone.
[Update: As noted in the comments, it was the scripts that were rewritten in Python for NTPsec.]
Index entries for this article:
GuestArticles: Byfield, Bruce
Posted Feb 9, 2017 4:40 UTC (Thu)
by mina86 (guest, #68442)
[Link] (13 responses)
Posted Feb 9, 2017 9:46 UTC (Thu)
by jnareb (subscriber, #46500)
[Link]
Actually, the article misrepresents NTPsec's stance. It is only _auxiliary_ (administrative) tools that are getting rewritten in Python. The core code is in C, and would remain in a compiled language (there are some thoughts about moving to Go or Rust, but more likely Go).
Posted Feb 9, 2017 9:49 UTC (Thu)
by mtaht (subscriber, #11087)
[Link] (8 responses)
I eschew politics. That said, there was one thing in the article's comments that really stuck in my craw - "using python for core infrastructure" - vs what was actually said.
What was said in the article was still wrong: "rewriting NTP in Python rather than C to make attracting new developers easier"
The core ntpsec codebase remains in C, just cut in size and cruft by well over half... a *client* or two got re-written in python.
One of the underlying protocols was removed. You can write a client in anything. Bunch of bad ideas - like autokey - ripped out.
Python is an unsuitable candidate for a daemon such as this.
There has been some interesting discussion about moving to rust, or go - on esr's and the ntpsec blog, as well as attempts at outreach to a wider community to get people interested in the obscurities of time service once again.
As an ex-time-nut myself, I have to say this piece was great:
https://blog.ntpsec.org/2017/02/01/heat-it-up.html
https://blog.ntpsec.org/ has discussions of the language evaluations, there are bigger ones on esr's blog. The rust one will melt your browser. :)
...
If people want to contribute to any of the time related codebases (be they ntpd, ntpsec, chrony, openntpd), that would be great! Even better if more folk would step in and independently evaluate these codebases and run them.
Even better if the world realized that somehow, the sometimes contentious, hard to deal with, difficult to understand, downright ornery experts - deeply involved in "holding up the sky", need a roof over their heads, too.
Posted Feb 9, 2017 10:12 UTC (Thu)
by mtaht (subscriber, #11087)
[Link] (7 responses)
Prior to that there was a usenet newsgroup, going back to the beginning of internet time. Much, much, lost lore there.
Time problems remain with us, and we still keep getting it wrong in major daemons and slices of code - there are multiple ways of dealing with it in the Linux kernel, all needed, all in partial conflict. Why do you use periodic timers? Can you trust them? Does anyone know when and where you should use CLOCK_MONOTONIC vs CLOCK_MONOTONIC_RAW? What are the flaws in using one or the other? How can you represent stuff "right" in postgres? Should you smear time across a leapsecond? What happens when you slew time forward from the epoch? A lot of crypto protocols rely on time marching forward. DNSSEC needs it close to within an hour -
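A minimal C illustration of the CLOCK_MONOTONIC question raised above (Linux-specific, and only a sketch): CLOCK_MONOTONIC is slewed by NTP frequency corrections, while CLOCK_MONOTONIC_RAW is the unadjusted hardware-based clock, so reading the two side by side over time shows the adjustment at work.

    #define _GNU_SOURCE /* for CLOCK_MONOTONIC_RAW on some older glibc versions */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct timespec mono, raw;

        /* CLOCK_MONOTONIC is adjusted (slewed) by adjtime()/NTP;
         * CLOCK_MONOTONIC_RAW is the unadjusted hardware-based clock. */
        clock_gettime(CLOCK_MONOTONIC, &mono);
        clock_gettime(CLOCK_MONOTONIC_RAW, &raw);

        printf("CLOCK_MONOTONIC:     %lld.%09ld\n", (long long)mono.tv_sec, mono.tv_nsec);
        printf("CLOCK_MONOTONIC_RAW: %lld.%09ld\n", (long long)raw.tv_sec, raw.tv_nsec);
        return 0;
    }

(On older glibc, linking may additionally need -lrt for clock_gettime().)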
I happen to love fdtimers, which gets us away from do_something; sleep(1); do_itagain()....
if only there was a way to get at fdtimers from the shell! A lot of primitive benchmarks would get better.
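For readers who have not met them, a small timerfd sketch of what the comment calls "fdtimers" (Linux-only, illustrative rather than production code): a periodic timer delivered through a file descriptor, so it can sit in a poll() loop alongside sockets instead of a do_something; sleep(1) loop.

    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/timerfd.h>

    int main(void)
    {
        /* Create a timer backed by a file descriptor. */
        int fd = timerfd_create(CLOCK_MONOTONIC, 0);
        if (fd < 0) { perror("timerfd_create"); return 1; }

        struct itimerspec its = {
            .it_interval = { .tv_sec = 1, .tv_nsec = 0 }, /* fire every second */
            .it_value    = { .tv_sec = 1, .tv_nsec = 0 }, /* first expiry in 1s */
        };
        if (timerfd_settime(fd, 0, &its, NULL) < 0) { perror("timerfd_settime"); return 1; }

        for (int i = 0; i < 3; i++) {
            uint64_t expirations;
            /* Blocks until the timer fires; the read reports how many
             * expirations happened since the last read. */
            if (read(fd, &expirations, sizeof expirations) != sizeof expirations)
                break;
            printf("tick (%llu expiration(s))\n", (unsigned long long)expirations);
        }
        close(fd);
        return 0;
    }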
We dealt OK with Y2K, but I increasingly worry about Y2036.
Dealing with time is hard. We need more people trying to get it right. My own interest in ntp was revived by realizing that ntp's ancient "huff and puff" algorithm was probably affected by bufferbloat, and, well, you have to be obsessive about stuff like this to truly get it right.
Posted Feb 10, 2017 10:04 UTC (Fri)
by xav (guest, #18536)
[Link] (6 responses)
Posted Feb 10, 2017 15:51 UTC (Fri)
by mtaht (subscriber, #11087)
[Link] (5 responses)
"The 64-bit timestamps used by NTP consist of a 32-bit part for seconds and a 32-bit part for fractional second, giving NTP a time scale that rolls over every 232 seconds (136 years) and a theoretical resolution of 2−32 seconds (233 picoseconds). NTP uses an epoch of 1 January 1900. The first rollover occurs in 2036, prior to the UNIX year 2038 problem.
I still expect a huge number of systems starting up to not have a battery by then.
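To make the era arithmetic concrete, here is a hedged C sketch (not code from any of the implementations discussed): NTP's 32-bit seconds field counts from 1900, so recovering Unix time needs the 2,208,988,800-second epoch offset plus knowledge of which 136-year era applies. A real implementation would infer the era from an approximate local time; here it is passed in explicitly.

    #include <stdint.h>
    #include <stdio.h>

    #define NTP_UNIX_OFFSET 2208988800ULL  /* seconds between the 1900 and 1970 epochs */

    /* Era 0 covers 1900-2036, era 1 covers 2036-2172, and so on. */
    static int64_t ntp_to_unix(uint32_t ntp_seconds, int era)
    {
        return (int64_t)era * 4294967296LL      /* 2^32 seconds per era */
             + (int64_t)ntp_seconds
             - (int64_t)NTP_UNIX_OFFSET;
    }

    int main(void)
    {
        /* 0xFFFFFFFF is the last second of era 0, early on 7 February 2036 ... */
        printf("end of era 0:    %lld\n", (long long)ntp_to_unix(0xFFFFFFFFu, 0));
        /* ... and the same raw value read as era 1 lands 136 years later. */
        printf("same raw, era 1: %lld\n", (long long)ntp_to_unix(0xFFFFFFFFu, 1));
        return 0;
    }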
Posted Feb 10, 2017 18:31 UTC (Fri)
by excors (subscriber, #95769)
[Link]
Posted Feb 10, 2017 21:45 UTC (Fri)
by flussence (guest, #85566)
[Link] (1 responses)
I'd say something about picoseconds being overkill for a timestamp sent over any length of network, but I guess someone somewhere really needs that. TAI64 goes all the way down to attoseconds(!)
Posted Feb 11, 2017 20:52 UTC (Sat)
by da4089 (subscriber, #1195)
[Link]
Posted Feb 12, 2017 18:10 UTC (Sun)
by smcv (subscriber, #53363)
[Link] (1 responses)
I like systemd's trick for dealing with this: during early boot, if the kernel clock is before the date/time on which this particular version of systemd was compiled, it brings the clock forward to that date/time.
As long as you don't leave a device for decades without upgrading its OS, and rebuild systemd at least every few years (either for a security or other bug fix, or just artificially to get a newer timestamp), that's good enough.
There'd be nothing to stop the kernel doing the same trick with its own release or compilation date/time.
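A rough C sketch of that clock-floor trick (not systemd's actual code; BUILD_EPOCH is a hypothetical value that a build system would inject at compile time, and stepping the clock needs CAP_SYS_TIME):

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical build-time epoch, e.g. injected with -DBUILD_EPOCH=$(date +%s);
     * the value below is only a placeholder. */
    #ifndef BUILD_EPOCH
    #define BUILD_EPOCH 1486598400 /* 2017-02-09 00:00:00 UTC */
    #endif

    int main(void)
    {
        struct timespec now, floor = { .tv_sec = BUILD_EPOCH, .tv_nsec = 0 };

        if (clock_gettime(CLOCK_REALTIME, &now) != 0)
            return 1;

        /* If the system clock is before the binary's build time, step it forward. */
        if (now.tv_sec < floor.tv_sec) {
            if (clock_settime(CLOCK_REALTIME, &floor) != 0)
                perror("clock_settime"); /* needs CAP_SYS_TIME / root */
            else
                printf("clock stepped forward to build time\n");
        }
        return 0;
    }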
Posted Feb 13, 2017 1:31 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted Feb 10, 2017 4:50 UTC (Fri)
by jhoblitt (subscriber, #77733)
[Link] (2 responses)
Posted Feb 10, 2017 21:03 UTC (Fri)
by wahern (subscriber, #37304)
[Link] (1 responses)
And I'd speculate that among the world-wide pool of stratum 1 servers, especially the reliable ones offered by public institutions, you'll probably find a ton of ancient, esoteric hardware and software (clock sources, operating systems, etc), which effectively binds them to ntpd and binds ntpd to them.
Posted Feb 11, 2017 1:36 UTC (Sat)
by jhoblitt (subscriber, #77733)
[Link]
Posted Feb 9, 2017 6:31 UTC (Thu)
by bferrell (subscriber, #624)
[Link]
You go Stenn. Keep doing good solid work
Posted Feb 9, 2017 6:41 UTC (Thu)
by zdzichu (subscriber, #17118)
[Link] (4 responses)
Posted Feb 9, 2017 6:55 UTC (Thu)
by roskegg (subscriber, #105)
[Link] (2 responses)
Posted Feb 10, 2017 21:24 UTC (Fri)
by wahern (subscriber, #37304)
[Link] (1 responses)
Also there's the issue with support for reference clocks. See my comment elsethread regarding chrony.
Other people know far better than me, but years ago I asked myself the same questions as you and these are the answers I came up with. If the slate could be wiped clean and the entire global infrastructure magically transported into the year 2017, with 2017 hardware and 2017 software, ntpd likely wouldn't figure into the equation at all. AFAIU, the core algorithms in ntpd have been hashed and rehashed in countless scientific papers and other discourse, so there's nothing special per se about the code that doesn't exist elsewhere or can't be easily rewritten from scratch. And the bulk of the remainder is effectively legacy support. But if you want those algorithms (because they're tried-and-true, despite the downsides) and need the legacy support, it's not crazy to want to keep ntpd around.
Posted Feb 12, 2017 12:35 UTC (Sun)
by mchouque (subscriber, #62087)
[Link]
https://chrony.tuxfamily.org/comparison.html
I'm not going to argue about the merits of leap seconds but for something supposed to keep your computer on time, err...
Sure, it's only one second and even Google has leap smear, but unless everyone agrees on one standard, it can't be considered an enterprise-ready daemon, especially for time-sensitive applications.
Posted Feb 9, 2017 8:07 UTC (Thu)
by asn (subscriber, #62311)
[Link]
Posted Feb 9, 2017 8:06 UTC (Thu)
by k8to (guest, #15413)
[Link] (16 responses)
That said, implementing NTP in python sounds crazy to me. You want it on every single host. You want it low latency, low overhead, and using minimal resources. Python is *not* the language for such a thing. Golang might be a reasonable choice if you want to "modernize" the language choice.
Posted Feb 9, 2017 8:10 UTC (Thu)
by bangert (subscriber, #28342)
[Link] (13 responses)
Posted Feb 9, 2017 9:20 UTC (Thu)
by ballombe (subscriber, #9523)
[Link] (12 responses)
Posted Feb 9, 2017 12:17 UTC (Thu)
by moltonel (guest, #45207)
[Link] (11 responses)
https://forge.rust-lang.org/platform-support.html
Python may have wider reach still, but only if you take the myriad of implementations into account. Things get messy when you target niche platforms: you should be prepared for extra work to install the interpreter, and for your pure-Python program to behave slightly differently.
Posted Feb 9, 2017 17:14 UTC (Thu)
by ballombe (subscriber, #9523)
[Link] (9 responses)
Posted Feb 9, 2017 19:13 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (7 responses)
Rust's portability is OK for all practical purposes - you might not be able to run it on OS/360 or DEC Alpha, but do you really need it?
Posted Feb 10, 2017 19:10 UTC (Fri)
by jra (subscriber, #55261)
[Link]
Never thought I'd be agreeing with one of your posts so much, but HURRAH for posting that !
I've been trying to get new engineers scared of writing network-facing code in C for many years now.
Posted Feb 11, 2017 23:27 UTC (Sat)
by mbiebl (subscriber, #41876)
[Link]
Posted Feb 15, 2017 17:13 UTC (Wed)
by dublin (guest, #114125)
[Link] (4 responses)
Operating at real-world timescales, you probably don't even want the OS in the loop for timing critical processing - modern OSes and hardware are now fast enough to really give you a false sense of security about the ability of a system to reliably do hard realtime control - "statistically works almost all the time" is NOT the same as "will work every time, guaranteed" - the latter is required in life-critical applications.
Most of the discussion here is making a fatally flawed assumption - namely, that "regular" computers (PCs, servers, etc.) are the main kinds of things that need time synchronization. This is why the full functionality of the original NTP codebase (and its offshoots such as PTP) is important - as IoT moves increasingly into hard realtime (especially for apps like comms between self-driving cars, robots cooperating in the same workspace, etc.), PTP or something similar is required. (PTP is the Precision Time Protocol offshoot of NTP, and is the leading current standard in high-resolution industrial/commercial measurements.) NTP proper is only good to about 1/100 of a second - but 10 ms is forever: long enough for very, very bad things to happen in the real world.
So in short, this isn't just about syncing PC clocks - that's pretty much trivial - the real issue is secure, reliable, synchronized microsecond clocks suitable for hard realtime IoT. As Mills says, the NTP group really does understand what that takes. I'm not convinced the NTPSec guys do... (at least not yet.)
Posted Feb 15, 2017 17:46 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Hint: lots of hard-realtime trading systems are written in... Java. And these systems are not these laughable cases like stability control in cars where a failure simply leads to lost lives. If a trading system failure occurs then a much worse outcome happens - people lose MONEY.
> Operating at real-world timescales, you probably don't even want the OS in the loop for timing critical processing - modern OSes and hardware are now fast enough to really give you a false sense of security about the ability of a system to reliably do hard realtime control - "statistically works almost all the time" is NOT the same as "will work every time, guaranteed" - the latter is required in life-critical applications.
NTP is designed to do one thing - synchronize clocks of computers across the wide-area network. It needs to be as good as the network jitter, which for regular Internet is usually within multiple milliseconds.
Posted Feb 16, 2017 16:28 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (1 responses)
> No, it's not. And assembly? LOL.
Which is why a lot of time critical stuff was (and is) written in Fortran.
Fortran was the language of choice when simulations had difficulty keeping up with the real world, for example in weather forecasting ...
Cheers,
Posted Feb 16, 2017 19:23 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Weather stuff is written in all kinds of languages. It does NOT need to do anything realtime, just fast enough. Fortran is used simply because the earliest models hark back to the beginning of the '60s.
Posted Feb 15, 2017 18:01 UTC (Wed)
by moltonel (guest, #45207)
[Link]
Concerning "will work every time, guaranteed", it's a no-brainer that Rust's safety guarantees make that goal easier to attain than C. And yes, you can boot to Rust to avoid that pesky interfering OS.
Sure, nothing beat C's portability. But when I see that Rust is arriving on Arduino for example, I stop worrying.
Posted Feb 15, 2017 13:48 UTC (Wed)
by federico3 (guest, #101963)
[Link]
Posted Feb 10, 2017 22:57 UTC (Fri)
by lsl (subscriber, #86508)
[Link]
When comparing with Go, consider that ports for all the different C libraries (like Rust has) are not relevant: Go doesn't depend on libc at all (well, except on Solaris). It also calls all those variants of ARM and 386 the same at the arch level, but actually supports the different versions of them.
Posted Feb 9, 2017 9:49 UTC (Thu)
by jnareb (subscriber, #46500)
[Link] (1 responses)
The article misrepresents NTPsec's stance. NTPsec is not implementing NTP in Python, it is rewriting administrative auxiliary tools in Python. The core code remains in C, though there are some thoughts of porting it to Golang.
Posted Feb 11, 2017 8:58 UTC (Sat)
by k8to (guest, #15413)
[Link]
Posted Feb 9, 2017 10:46 UTC (Thu)
by jnareb (subscriber, #46500)
[Link] (6 responses)
I wonder why the NTP situation is expected to be different from the SSL situation (where we similarly have OpenSSL vs. BoringSSL and others).
Posted Feb 9, 2017 12:27 UTC (Thu)
by moltonel (guest, #45207)
[Link] (1 responses)
Posted Feb 9, 2017 12:58 UTC (Thu)
by jnareb (subscriber, #46500)
[Link]
NTPsec blog describes it quite well.
Posted Feb 10, 2017 6:57 UTC (Fri)
by branden (guest, #7029)
[Link] (3 responses)
See: http://www.xfree86.org/releases/rel480.html
Posted Feb 16, 2017 16:36 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (2 responses)
And the fork to Xorg happened when said committee kicked the primary maintainer (ie the Indian doing the work, not the Chiefs pontificating) out of the project.
And actually, I find the reported comments of NTPsec that "the guys doing NTP are burnt out and should be retired" a perfect example of offensive propaganda. If they're burning out, they need help not flaming. And if Sons was serious, she should at least have *started* by offering to help, not by forking ...
Cheers,
Posted Feb 17, 2017 0:24 UTC (Fri)
by anselm (subscriber, #2796)
[Link] (1 responses)
It seems that's how esr and his friends roll. After all, didn't esr fork many of his other feather-in-the-cap projects, like the Jargon File and fetchmail?
The problem is that offering help requires a certain degree of deference towards the developers who have already been working on the project for a long time, and that doesn't come easy to people who consider themselves the greatest Unix programmers ever (remember that esr basically wrote The Book on the subject, or thinks so, anyway). If you're all “I'm new to this project but you're doing everything completely wrong – wrong language, wrong build system, and your code is insecure, too, let me sort this out for you already”, then forking is basically your only option because nobody likes a know-it-all bully.
Posted Feb 17, 2017 15:08 UTC (Fri)
by branden (guest, #7029)
[Link]
http://invisible-island.net/ncurses/ncurses-license.html
Posted Feb 9, 2017 14:09 UTC (Thu)
by michel (subscriber, #10186)
[Link] (3 responses)
Posted Feb 9, 2017 17:07 UTC (Thu)
by anselm (subscriber, #2796)
[Link] (2 responses)
The person in question is Eric S. Raymond, who isn't exactly a nobody in the community. He's done a number of interesting things over the years – including co-founding the Open Source Initiative – but excessive modesty about his personal accomplishments and influence on the field has never been one of his vices :^)
Posted Feb 9, 2017 17:23 UTC (Thu)
by michel (subscriber, #10186)
[Link] (1 responses)
Posted Feb 12, 2017 14:07 UTC (Sun)
by jubal (subscriber, #67202)
[Link]
Posted Feb 9, 2017 15:14 UTC (Thu)
by kingdon (guest, #4526)
[Link]
Posted Feb 9, 2017 15:23 UTC (Thu)
by jake (editor, #205)
[Link] (3 responses)
jake
Posted Feb 9, 2017 15:46 UTC (Thu)
by paulj (subscriber, #341)
[Link] (1 responses)
Also, this article paints a picture of the NTP people, Stenn particularly, needing help and of the wider tech. community responding by basically paying a number of people to come in, undermine them and try to take their life's work away from them. Nice again.
The recent LWN article on "Consider the maintainer" ( https://lwn.net/Articles/712215/ ) seems relevant to this story too.
Posted Feb 12, 2017 5:16 UTC (Sun)
by jubal (subscriber, #67202)
[Link]
Posted Feb 9, 2017 18:44 UTC (Thu)
by nanday (guest, #51465)
[Link]
Thanks, Jake, for making the correction.
- Bruce Byfield
Posted Feb 9, 2017 15:50 UTC (Thu)
by jhoblitt (subscriber, #77733)
[Link]
Speaking for myself, I often won't even look at the source for a project if the SCM system is exotic or makes it difficult to have my own working copy (e.g., cvs, svn). The other major issue that will deter me is "odd" or "custom" build systems. I genuinely don't want to untangle a build system that relies on tcl, m4, and some obscure feature of tcsh. C projects often don't have a CI system or even a test suite -- this makes me concerned that a small patch won't be accepted because of the manual testing burden on a maintainer. There was a time when this state of affairs was the norm, but there are now an enormous number of projects that are relatively easy to jump into.
Posted Feb 9, 2017 17:19 UTC (Thu)
by fallenpegasus (guest, #58173)
[Link] (15 responses)
I'm the PM for the NTPsec Project.
My apologies to LWN and to Bruce Byfield, I must have somehow missed your attempts to reach me for an interview for this article, but for the record, my email address working on this project is <mark.atwood@ntpsec.org>, and I'm happy to answer anyone's questions about the history, state, and goals of the NTPsec Project.
We were not happy about having to fork from NTP Classic, and did so with regrets.
The main point of contention that caused the fork was BitKeeper vs Git. Harlan insisted on staying on BitKeeper. At that time, BitKeeper was still closed source, proprietary, and was a huge barrier to recruiting contributions, large and small. Even now still, the official Git repos for NTP Classic are out of date with the official BK repos, and are lacking tags. And the official public BK repos are out of date from Harlan's internal working repos.
We work from our public repos at https://gitlab.com/ntpsec and we welcome gitlab pullrequests and git-patch emails from everyone. Our contribution workflow is at https://www.ntpsec.org/contributor.html and our "hacking guide" is at https://gitlab.com/NTPsec/ntpsec/blob/master/devel/hackin...
The only exception to our working from our public trees is when we fix embargoed CVEs that get reported to us, under the principle of Responsible Disclosure. Our security issue and reporting policy is at https://www.ntpsec.org/security.html
As other commenters here at LWN have pointed out, we are very public on our website at https://ntpsec.org/ and on our blog at https://blog.ntpsec.org/
We are very clear about how we have decided to remove code and features. We removed autoconf and replaced it with WAF. We removed clock drivers for time source hardware that are no longer available even on the used market, and for time source hardware that is worse than a cheap GPS PPS receiver. We removed code for uncompleted features in NTP Classic that were never surfaced to the users. We removed compatibility shims and ifdefs for operating systems that are no longer running in the wild or no longer supported by their vendors. It turns out that all the world is now POSIX/C99, and when we encounter cases where that is not the case, it's easier to start with a clean POSIX/C99 state, and then carefully add what little is necessary.
We also gifted to the larger time and NTP community a working step-by-step howto on how to build a Stratum 1 NTP time server on a Raspberry Pi, at https://www.ntpsec.org/white-papers/stratum-1-microserver...
Again, I'm happy to answer any questions that LWN may have about the NTPsec project, and I welcome everyone to clone and build from our repos, and give NTPsec a try.
Thank you!
Posted Feb 9, 2017 18:51 UTC (Thu)
by nanday (guest, #51465)
[Link]
Posted Feb 13, 2017 18:37 UTC (Mon)
by clemensg (guest, #94377)
[Link] (10 responses)
Now you introduced more barriers for users and for developers. To build, you have to install Python. To understand the build process, you need to know Python + waf: https://github.com/ntpsec/ntpsec/blob/master/wscript
Many new build systems come and go. Remember Scons? Google wrote ninja/gyp/gn to "replace" it. waf is also a Scons fork/rewrite.
Why not just keep using Autotools? Developers need to know it anyway if they work on other free software projects and it does its job.
Posted Feb 13, 2017 21:10 UTC (Mon)
by fallenpegasus (guest, #58173)
[Link] (9 responses)
The decision to move off autotools was not done to move to the "new shiny", but as a deliberate decision to help drive the cleanup. The existing autotools setup was ancient, was itself out of date with the latest autotools generators and so was very difficult to update and revise, it was full of errors, it ran very slowly, and it was tightly entangled with all the out-of-date portability shims and ifdefs.
We made the technical decision that having a Python dependency to build would not be a difficult barrier. Every modern distro of every still shipping and still supported UNIX has a sufficiently up to date Python package, and most UNIX distros in fact now either actually require a Python install, or have Python in their core package set.
Regular users or even builders of NTPsec are not required to know or code in Python. Just do a "./waf build" and it should work. If it doesn't work on your system, please email us at contact@ntpsec.org and let us fix it for you.
If the WAF project ever dies, we will port to another build system. Maybe even to a modernized autotools, if that is appropriate at the time.
Posted Feb 14, 2017 1:44 UTC (Tue)
by pizza (subscriber, #46)
[Link] (1 responses)
Yeah, the last time I experienced a "help drive cleanup" migration away from autotools, it was GPSd to scons. It turned out to be easier to just backport fixes rather than fix the build system to work in a cross-compiled environment.
To paraphrase, Autotools is the worst build system, except for all the others.
Posted Feb 14, 2017 3:22 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
I've had my share of issues with build toolchains and debugging even esoteric issues with Unicode capitalization on MacOS X in Scons was easier than finding out why autotools is not rebuilding stuff properly.
Posted Feb 14, 2017 11:52 UTC (Tue)
by chrisV (guest, #43417)
[Link]
Posted Feb 15, 2017 15:47 UTC (Wed)
by rkeene (guest, #88031)
[Link] (5 responses)
    --- building host ---
    The project was not configured: run "waf configure" first!

Further, running "./waf configure" fails predictably when cross-compiling (as generally you need to tell the system what platform is being targeted, though it could figure it out):

    + ./waf configure
    Setting top to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-pcXttoUpzzwX
    Setting out to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-pcXttoUpzzwX/build
    --- Configuring host ---
    Checking for 'gcc' (C compiler) : /home/rkeene/devel/aurae/common/compiler/online/x86_64-coreadaptive-linux/bin/gcc
    Checking for program 'bison' : /home/rkeene/devel/aurae/common/detected-tools/bison
    Checking compiler : no
    The configuration failed
    (complete log in /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-pcXttoUpzzwX/build/config.log)

The "build.log" referenced is full of escape sequences that hinder readability (but I'm sure if I were using "cat" to view it would make it very colorful on my terminal) and a huge, hundred or so line Python stack trace that, from what I can gather from the stack trace, means it tried to run the code it just compiled... which of course won't work, since the code being generated isn't for the platform I'm compiling it on.

Helpfully, an "INSTALL" file is provided which tells us how to cross-compile NTPSec:

    == Cross-compiling ==
    Set up a cross-compile environment for the target architecture. At minimum
    it will need its own binaries for the OpenSSL library.
    Configure NTPSec with:
    waf configure --enable-cross=/path/to/your/cross/cc

But when we actually try to use that:

    + ./waf configure --enable-cross=gcc
    waf [commands] [options]
    <...hundreds of lines of usage omitted...>
    waf: error: no such option: --enable-cross

The usage information, however, indicates a different option of:

    --cross-compiler=CROSS_COMPILER

So we try with that...

    + ./waf configure --cross-compiler=gcc
    Setting top to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-oyqK4b6LB8gE
    Setting out to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-oyqK4b6LB8gE/build
    --- Configuring host ---
    Checking for 'gcc' (C compiler) : /home/rkeene/devel/aurae/common/compiler/online/x86_64-coreadaptive-linux/bin/gcc
    Checking for program 'bison' : /home/rkeene/devel/aurae/common/detected-tools/bison
    Checking compiler : no
    The configuration failed

And the build.log looks like the same stack trace as before.

Let's assume "waf" is really dumb and tries to look at the filename of my compiler to determine the platform and give it a different name -- no change.

Let's assume it's ultra-extra dumb and is running "$CC" expecting it to be a native-compiler instead of using HOST_CC or CC_FOR_BUILD -- that seems to have gotten us further !

    + CC="${CC_FOR_BUILD}" ./waf configure --cross-compiler=gcc

At this point, things are failing because it's asking our build system's "python-config" how to compile a Python extension, which differs from the host system Python... but there doesn't seem to be a way to tell it to use the correct "python-config", so we play some PATH tricks and get going further...

At this point, after reading the wrong comment, wrong documentation, and finally following the usage information and making a few guesses, then some workarounds ... the configuration is complete. Time to build it !

Of course there's no Makefile but the INSTALL documentation says we should NOW do "./waf build". It seems to work -- it's a bummer that it's not using a Makefile since it would otherwise integrate into the parallel build system.

Now we have to figure out how to install it -- with no conventional "make install DESTDIR=..." and the INSTALL file lacking (hey, at least it's not just wrong this time) information on how to use DESTDIR. There seems to be a --destdir option to "waf", we try it out and it works on the first try... but at this point we notice we forgot to specify "--prefix" and "--sbindir" options to the configure step...

But there's no option for "--sbindir" and the "--bindir" option doesn't stop it from installing "ntpd" in /sbin, so we have to move it to the correct directory after the installation completes... and finally, we have a working package.
Posted Feb 20, 2017 19:55 UTC (Mon)
by fallenpegasus (guest, #58173)
[Link] (2 responses)
Posted Feb 20, 2017 23:08 UTC (Mon)
by nix (subscriber, #2304)
[Link] (1 responses)
Was this *really* an improvement? It's not looking anything like one from where I'm sitting. Any autobuilder out there understands configure/make/make install based systems supporting DESTDIR, like upstream ntp, and will at most need telling "use configure, here are the configure script options". Handling your special snowflake build system will be a lot more work, a lot more cursing, and a lot more packagers asking "why does this project even exist?".
(I hate scons too. Building Samba 4 wasn't terribly pleasant, either, particularly not for platforms with no Python installation but with a working compiler. I actually had to have the autobuilder *compile Python first*, install it in a virtualenv, then build Samba with that. Samba was worth it -- just. ntpsec... wouldn't be.)
Posted Feb 20, 2017 23:19 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted May 21, 2020 14:08 UTC (Thu)
by hridhya09 (guest, #139051)
[Link]
Posted May 27, 2020 9:26 UTC (Wed)
by hridhya09 (guest, #139051)
[Link]
Posted Feb 15, 2017 15:22 UTC (Wed)
by HedgeMage (guest, #114072)
[Link] (2 responses)
I'm Susan Sons, the "ISO Emeritus" who presented on NTPSec at O'Reilly Security Conference. While I've stepped down from my official role with NTPSec, I do what I can to raise awareness of NTPSec, both for the good of its community, and as a case study in what much of our infrastructure software is going through. Mark has already done a more than adequate job of addressing points related to the NTPSec/NTP classic split, so I shall limit myself to some personal notes:
I was not, to my knowledge, contacted by Bruce Byfield or anyone else researching this article. I find this surprising as I am incredibly easy to find. Feel free to drop "Susan Sons NTP" or "Susan Sons ICEI" or any similarly logical set into a search engine and see what happens. I've chosen to make my email addresses ( susan@icei.org for open source organization things, hedgemage@binaryredneck.net for personal, and sesons@iu.edu at work) quite public, and finding at least one of my phone numbers as well as my office address and home address should be trivial as well. While I do travel extensively for work, I'm good at following up, and there are people in my office who will bridge the gap if needed.
Being a hacker, I find this is a great defense against getting doxxed: it's already out there, and I don't care.
The full video of my interview with Mac Slocum is here for anyone who would like to see my remarks in context and un-edited:
I speak at length about the education I received at the feet of previous generations of software engineers, and how I built my career on their mentorship. I also talk about how one day I looked around, and there were not enough people like me in my generation waiting to take the hand-off. Succession planning matters, and it has not been happening. I created New Guard to attract and mentor new infrastructure software maintainers, and specifically to help them find opportunities to work under the Old Masters and learn as I did. We, the community of hackers, need to plan ahead, or our software infrastructure will be taken over by centralized powers who are happy to Balkanize it and curb the many freedoms many now take for granted.
My slides from the security conference presentation are here: http://slides.com/hedgemage/savingtime
I can't promise to follow up in this comment thread, as it has not been accessible at all times I've tried to visit. However, anyone who has questions for me is welcome to email me.
Posted Feb 15, 2017 17:35 UTC (Wed)
by jubal (subscriber, #67202)
[Link]
(If neither of the options appeals to you, the actual subscription is dirt cheap and supporting high quality and independent tech journalism is very important.)
Posted Feb 15, 2017 17:40 UTC (Wed)
by Nelson (subscriber, #21712)
[Link]
Can you explain any of that in more detail or is it simply hyperbole or is it worse? I mean, I'm pretty sure I've built it myself before and I was unaware that I was dependent upon that machine.
I get the message, I understand that there are these old projects and they need transition plans and fresh air and such and that it's important. I'm even open to the idea that long-term supporting an opensource project might not be emotionally healthy, maybe it's better to hand things off for many reasons. I also get that things change and most problems have a different looking solution after the fact. I'd suggest that respect should be a key component of hacking. We should assume hackers solved problems the best they could with the information and knowledge and tools that they had at the time when we re-examine their solutions. It feels really disrespectful to hear stories that might not be entirely true being told as part of the process. In your presentation you also state that the tooling choices and build process were made to "maintain control," is that something you could also elaborate on? NTP was fundamentally a forkable opensource project, you can take it and patch it however you want, fork it, do whatever, what was it that constituted a control issue? Versus maybe a comfort issue. I only ask because, again, it sounds a little disrespectful, if he was a real control freak, it wouldn't be opensource at all and he'd have never asked for help, right?
I can email if I don't hear back here.
Posted Feb 10, 2017 15:08 UTC (Fri)
by ScottMinster (subscriber, #67541)
[Link] (2 responses)
It's a little disturbing to read quotes like that. While there may be a point that older developers may want to retire and their knowledge and experience should be captured and passed on to the next generation, the argument that old == bad is very distasteful. Are these quotes accurate? Or were they taken out of context or later clarified? Age discrimination is a real problem in our industry, and this apparent attitude does not reflect well on the NTPsec project.
Posted Feb 10, 2017 15:45 UTC (Fri)
by spaetz (guest, #32870)
[Link]
Posted Feb 11, 2017 1:57 UTC (Sat)
by nanday (guest, #51465)
[Link]
Posted Feb 10, 2017 18:57 UTC (Fri)
by ttelford (guest, #44176)
[Link] (1 responses)
Posted Feb 16, 2017 16:49 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
It's all very well us telling them "here be dragons", but by the time they find out there really are dragons it's too late. They've been eaten.
Cheers,
Posted Feb 12, 2017 23:02 UTC (Sun)
by jmscott (guest, #57432)
[Link]
https://www.meinbergglobal.com/
excellent linux interface.
-j
Posted Feb 15, 2017 1:39 UTC (Wed)
by gdt (subscriber, #6284)
[Link] (10 responses)
I'm a network engineer on a large internet network. NTP causes us problems, and has done so for decades.

Being able to list IP addresses in the configuration — rather than only DNS names — is a major misfeature. These IP addresses get hardcoded by device manufacturers and that IP address is then forever bound to provide NTP service. That's because of a second misfeature, where blocking NTP — even returning an ICMP Administratively Denied — causes more traffic than servicing NTP. It's reasonable to require people providing NTP infrastructure to also configure DNS, and this small hassle is worth the cost when they no longer wish to provide the service at that particular IP address.

The configuration file isn't really designed for the NTP pool. There's no way to apply a "restrict" statement to a pool DNS name. So most NTP deployments from Linux vendors are vulnerable out-of-the-box to allowing a subverted NTP pool machine to use those NTP clients in a DDoS multiplication attack. NTP needs the control channel moved to TCP, again to limit the ability of servers to launch traffic multiplication attacks.

I don't see NTPsec making these sort of changes, so I'd question the "sec" appellation.
Posted Feb 15, 2017 2:09 UTC (Wed)
by pizza (subscriber, #46)
[Link] (5 responses)
I disagree. Forcing use of DNS names makes you reliant on DNS, which:
* requires that there be a DNS server, and that it be accessible.
..As an end-user and administrator of small (<50 user) networks, I've been bitten by all of these situations.
Posted Feb 15, 2017 2:16 UTC (Wed)
by pizza (subscriber, #46)
[Link] (4 responses)
So you're always going to need to handle NTP client configuration in terms of IP addresses.
Posted Feb 15, 2017 19:25 UTC (Wed)
by dwh (guest, #114129)
[Link]
So while the DHCP wire format carries 4-octet network-byte-order IPv4 or 16-octet network-byte-order IPv6 addresses, their contents are the result of DNS resolution and caching by the DHCP/DHCPv6 server; DNS remains the source of truth and clients receive updated results as they renew their leases (or abandon stale configuration when leases expire).
This is even when the DHCPv6 server offers IPv6 address(es), IPv6 multicast address(es), and also FQDN(s) all together for some reason that can never be adequately explained (RFC 5908).
Posted Feb 16, 2017 5:04 UTC (Thu)
by gdt (subscriber, #6284)
[Link] (2 responses)
DHCP using IP addresses is fine. That's a mechanism an administrator can use to modify the NTP server used by the appliance's software. My concern is appliances where the IP addresses used are baked in.
Posted Feb 16, 2017 14:21 UTC (Thu)
by pizza (subscriber, #46)
[Link] (1 responses)
It seems misguided to blame NTPd-the-project when the fault lies entirely with the idiots who shipped consumer gear with hardcoded, bad configurations.
(From NTPd's perspective, there's no way to tell a difference between "bad" and "good" here; that's entirely the integrator/administrator's purview)
Posted Feb 16, 2017 14:37 UTC (Thu)
by pizza (subscriber, #46)
[Link]
Posted Feb 15, 2017 19:02 UTC (Wed)
by biergaizi (subscriber, #92498)
[Link] (3 responses)
It is a separate issue, a human issue, and it is not related to this post. Having a different NTP implementation won't make any difference; there is nothing wrong with the current NTPd in this regard. Having a different protocol may be of no use either. It is much harder to solve all these problems than you imagine.
If a device comes with hardcoded NTP addresses, it usually indicates that its software is poorly written and the manufacturer is irresponsible.
Those devices have the worst homebrew NTP implementation on the planet,
1. They send ancient NTPv1 packets, while the latest version is NTPv4.
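As a point of reference on that version field (a hedged sketch, not code from any implementation discussed here): the first octet of an NTP header packs the leap indicator, version number, and mode, per RFC 5905, and the version bits sit in the same place in older protocol versions, so even ancient v1 traffic can be told apart from v4 by looking at one byte.

    #include <stdint.h>
    #include <stdio.h>

    /* Pack the first octet of an NTP header: leap indicator (2 bits),
     * version (3 bits), mode (3 bits), as laid out in RFC 5905. */
    static uint8_t ntp_first_octet(unsigned li, unsigned vn, unsigned mode)
    {
        return (uint8_t)(((li & 0x3u) << 6) | ((vn & 0x7u) << 3) | (mode & 0x7u));
    }

    /* The version bits occupy the same position in every protocol version,
     * so this also identifies old v1 packets arriving on the wire. */
    static unsigned ntp_version(uint8_t first_octet)
    {
        return (first_octet >> 3) & 0x7u;
    }

    int main(void)
    {
        uint8_t v4_client = ntp_first_octet(0, 4, 3); /* LI=0, NTPv4, mode 3 (client) */
        printf("v4 client request first octet: 0x%02x\n", v4_client); /* prints 0x23 */
        printf("version extracted back out:    %u\n", ntp_version(v4_client));
        return 0;
    }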
ST-1 servers are the most vulnerable: they have the highest accuracy, with a reference clock. Although the acceptable uses of ST-1 servers are passing time to downstream servers or scientific purposes, there are only a handful of them and they are often listed publicly, so they are often spotted by those manufacturers and put in their devices by default.
ST-1 servers are usually provided by universities, or by unpaid volunteers, for the public good of the Internet. If a single server gets hardcoded into those mass-manufactured devices, serious consequences can follow, and the volunteer may literally go bankrupt: your whole institute/school will be kicked off the Internet [0]; when you come to the manufacturer asking it to pay for the damage it is responsible for, you are threatened by a lawyer from California. [1] The whole Internet community should honor the spirit of self-sacrifice of these NTP volunteers.
Most of the NTP pool servers have similar issues: once you're in and become well-known on the net, there's no way out, and you keep receiving bad traffic. Fortunately, given reasonable bandwidth, it is often a negligible issue. But not always. [2]
> That's because of a second misfeature, where blocking NTP — even returning a ICMP Administratively Denied — causes more traffic than servicing NTP.
In contrast, NTPd has a proper rate-limit mechanism built in, such as KoD and a sane polling interval, so blocking NTP does NOT cause more user traffic. What increases is the abuser traffic. The damage caused by a standard NTPd and a silly sysadmin is much less significant, and is negligible compared to the Internet of Scary Things.
By the way, not only hardware devices can contain dangerous NTP code, but also software.
As long as manufacturers still write broken code and are unaware of the proper way to use NTP, nothing can be done to solve this issue. Many involved in these misuses and abuses are totally unaware of what they are doing. The proper use of NTP should be written into all textbooks related to practical networking.
NTPd does have issues: it has legacy code issues, it has vulnerabilities, it is not so actively maintained, and previously it had reflection attack issues. But the issue you mentioned is a human issue, not an NTPd issue. We still see reflection attacks; even though the underlying problem has been fixed, that too is another human issue.
[0]: Flawed Routers Flood University of Wisconsin Internet Time Server
[1]: Open Letter to D-Link about their NTP vandalism
[2]: Recent NTP pool traffic increase
Tom.
Posted Feb 16, 2017 6:03 UTC (Thu)
by gdt (subscriber, #6284)
[Link] (2 responses)
> In contrast, NTPd has a proper rate-limit mechanism built in, such as KoD and a sane polling interval, so blocking NTP does NOT cause more user traffic.

We had issues in 2003 with SMC routers, and blocking that traffic led to a multiplication of incoming traffic. The KoD option didn't arrive in servers until later, and then not in clients for years after that. So we had to specially engineer the network for ntpd until most of the purchasers of those routers retired them. It would be very useful if issuing an ICMP Administratively Denied prevented retries. The ACL is unlikely to suddenly disappear, so it would be safe to disable the association entirely. It's much easier for a network to issue an ICMP than to deploy an anycast edge of ntpd servers to issue KoD replies.

Please note that "we've fixed it now" is desirable but not immediately useful for operating a network. We'll probably have to run special queuing for ntpd UDP control plane traffic for the next decade. So there might be only a few years where the network has run without a special fix for some ntpd issue. As you can see, the timescales in network operations are much longer than the timescales in development: you are at the leading edge of deployment, we have to deal with the trailing edge of deployments. That's why I say that the total security picture is important, human factors and all. It's better to avoid issues than to fix them.

We're not averse to change. It would be useful if the major NTP authors formed a small cabal and had an approximate date where they all make a new release which moves the control channel to TCP (or even to TLS if you think that is wise).
Posted Feb 16, 2017 7:15 UTC (Thu)
by akkornel (subscriber, #75292)
[Link] (1 responses)
So much of security today relies on each party in the exchange having the same, or similar, time.
If changes are being proposed, I would suggest relying on capabilities that already exist within DHCP:
1: ISPs which provide DHCP service should run at least three NTP servers, and must include those NTP server IPs in all DHCP offers. Smaller ISPs may make arrangements with a higher-tier ISP, or other entity, if they're unable to run their own NTP servers. ISPs which do not provide DHCP service are exempt, although they may wish to provide NTP service for their customer ISPs.
2a: Router vendors must each obtain a vendor zone from the NTP Pool project, and configure that as their NTP servers for their router software. For example, 0.linksys.pool.ntp.org, 1.linksys.pool.ntp.org, and 2.linksys.pool.ntp.org would be the default NTP servers for a Linksys router.
2b: When the router receives one or more NTP server IP addresses in a DHCP offer, if the router accepts the DHCP offer, then the router must replace its existing NTP server entries with the NTP server IP addresses received in the DHCP offer: For each IP received in the DHCP offer, replace one of the pre-configured NTP servers.
2c: If the router provides DHCP (for example, to its local network), and the router received NTP server IP addresses from upstream, then the router must provide those NTP server IP addresses in all DHCP offers that the router makes to local machines.
Item 1 would be agreed upon and implemented by the ISPs. Those ISPs would then lean on their router suppliers, to ensure that the routers sold/rented/leased/whatever by the ISPs met all of the requirements of point 2. From there things could slowly flow to the remaining consumer router market. Some OSes are able to support getting NTP servers from DHCP; as more routers begin to provide them, OSes will begin to use them.
This would allow devices to get NTP servers from a "trusted" source (the same source giving them their IP address), and helps to localize NTP traffic to within an ISP (or an ISP's ISP). It also eliminates the reliance on things like DNS and TLS, with one exception: As DNS embraces DNSSEC more and more, point 2a will stop working. Point 2a is only a stopgap, so that router vendors will have something to put in default configs.
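To ground point 2b a little, here is a hedged sketch not tied to any particular DHCP client: NTP server addresses already travel in DHCP option 42 ("NTP Servers", RFC 2132) as a plain list of 4-byte IPv4 addresses, so extracting them from a received options buffer is just a walk over code/length/value triples.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    #define DHCP_OPT_PAD          0   /* single-byte pad, no length field */
    #define DHCP_OPT_NTP_SERVERS 42   /* RFC 2132: list of IPv4 addresses */
    #define DHCP_OPT_END        255

    /* Walk a DHCP options buffer and print any NTP servers in option 42. */
    static void print_ntp_servers(const uint8_t *opts, size_t len)
    {
        size_t i = 0;

        while (i < len && opts[i] != DHCP_OPT_END) {
            if (opts[i] == DHCP_OPT_PAD) { i++; continue; }
            if (i + 1 >= len)
                break;                       /* truncated option header */

            uint8_t code = opts[i];
            uint8_t olen = opts[i + 1];
            if (i + 2 + (size_t)olen > len)
                break;                       /* malformed option, stop */

            if (code == DHCP_OPT_NTP_SERVERS) {
                for (size_t j = 0; j + 4 <= olen; j += 4) {
                    const uint8_t *a = &opts[i + 2 + j];
                    printf("NTP server: %d.%d.%d.%d\n", a[0], a[1], a[2], a[3]);
                }
            }
            i += 2 + olen;
        }
    }

    int main(void)
    {
        /* A made-up options buffer: option 42 carrying 192.0.2.1 and
         * 192.0.2.2 (documentation addresses), then the end option. */
        const uint8_t opts[] = { 42, 8, 192, 0, 2, 1, 192, 0, 2, 2, 255 };
        print_ntp_servers(opts, sizeof opts);
        return 0;
    }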
Posted Feb 16, 2017 8:06 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Feb 18, 2017 23:32 UTC (Sat)
by jnareb (subscriber, #46500)
[Link] (1 responses)
I doubt that NTP Classic would pass it.
Posted Feb 19, 2017 3:56 UTC (Sun)
by pizza (subscriber, #46)
[Link]
That 100% score doesn't mean that a given project is actually suitable for a given task or meets any objectively meaningful code quality targets.
https://talks.golang.org/2017/state-of-go.slide#12 https://golang.org/doc/install/source#introduction
No, it's not. And assembly? LOL.
WTF OS has to do with the language of a fairly simple daemon? And IoT? You've forgotten to mention Machine Learning and Big Data for the complete Bullshit Bingo.
> That said, implementing NTP in python sounds crazy to me. You want it on every single host. You want it low latency, low overhead, and using minimal resources. Python is *not* the language for such a thing. Golang might be a reasonable choice if you want to "modernize" the language choice.
And if Sons was serious, she should at least have *started* by offering to help, not by forking ...
Perhaps it's just me, but when I see statements like "the Internet is going to fall down if I don't fix this" and "One of our team members has more experience doing messy repository conversions than anyone else alive", and then see that one of the principals has "moved on" and is no longer involved with NTPsec on a daily basis, I get a visceral reaction that makes me want to stay as far away from that project as I can.
It certainly appears that NTP was a tough space to get into. But from my perspective the attitude of the NTPsec developers seems worse than the (perhaps) ornery attitude of Stenn.
I'm very well aware of some of the individuals involved :-).
But, as paulj also observes, he's not the only abrasive (IMO, of course) contributor.
esr is a rare case, though: there are not that many personalities in the FOSS world who make whole classes of potential contributors feel unwelcome – by the sheer force of their personality.
Isn't that, basically, a typical esr schtick – starting with the jargon file?
autotools is complicated, but waf is not a big improvement, IMHO; it's just a different kind of complexity.
The project was not configured: run "waf configure" first!
+ ./waf configure
Setting top to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-pcXttoUpzzwX
Setting out to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-pcXttoUpzzwX/build
--- Configuring host ---
Checking for 'gcc' (C compiler) : /home/rkeene/devel/aurae/common/compiler/online/x86_64-coreadaptive-linux/bin/gcc
Checking for program 'bison' : /home/rkeene/devel/aurae/common/detected-tools/bison
Checking compiler : no
The configuration failed
(complete log in /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-pcXttoUpzzwX/build/config.log)
When cross-compiling, it will need its own binaries for the OpenSSL library.
+ ./waf configure --enable-cross=gcc
waf [commands] [options]
<...hundreds of lines of usage omitted...>
waf: error: no such option: --enable-cross
--cross-compiler=CROSS_COMPILER
+ ./waf configure --cross-compiler=gcc
Setting top to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-oyqK4b6LB8gE
Setting out to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-oyqK4b6LB8gE/build
--- Configuring host ---
Checking for 'gcc' (C compiler) : /home/rkeene/devel/aurae/common/compiler/online/x86_64-coreadaptive-linux/bin/gcc
Checking for program 'bison' : /home/rkeene/devel/aurae/common/detected-tools/bison
Checking compiler : no
The configuration failed
(complete log in /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-oyqK4b6LB8gE/build/config.log)
Setting top to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-JoHiSwi4ewtB
Setting out to : /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-JoHiSwi4ewtB/build
--- Configuring host ---
Checking for 'gcc' (C compiler) : /home/rkeene/devel/aurae/common/detected-tools/gcc
Checking for program 'bison' : /home/rkeene/devel/aurae/common/detected-tools/bison
Checking compiler : yes
Compiler found : GCC
...
Asking python-config for pyembed '--cflags --libs --ldflags' flags : yes
Testing pyembed configuration : yes
Asking python-config for pyext '--cflags --libs --ldflags' flags : yes
Testing pyext configuration : Could not build python extensions
The configuration failed
(complete log in /home/rkeene/devel/aurae/node/root/packages/ntpsec/workdir-JoHiSwi4ewtB/build/config.log)
https://www.oreilly.com/ideas/the-internet-is-going-to-fa...
The video of the presentation is available on O'Reilly Safari
So there was a single machine in the world that could build ntp and the root password was lost? (You said that around 2:20 in that video.)
> ..[finds this disturbing]..
I agree; plus, the fact that ESR *is* older than my father and was hired as lead architect makes the argument doubly weird.
The NTPsec story frequently spoke of free-software ideals such as openness, transparency, and a welcoming environment to all contributors, "but this isn't a democratic process. It's a scientific process, and this isn't somebody's turn to go ahead and take theirs at the wheel driving the bus."
The most unfortunate thing, I think, is the point that Susan Sons put rather indelicately: the current NTP maintainers aren't immortal.
I don't know about the rest of the world, but driving has always been something where an older, experienced driver sits down and laboriously mentors each and every new driver. Even then, I've been driving cars for decades, but I'm pretty sure I have no clue how to drive a bus. Passing me the keys and a manual isn't going to end well.
The NTP maintainers don't need to stand aside by any means. However, like every human endeavor, this one will eventually require mentoring a new generation. And, as with every other generational knowledge transfer, if the seasoned practitioners truly care about their life's work, they are going to have to accommodate the younger generation a little. The wise mentor having to adapt in order to connect with and teach the new generation is a cultural trope for a reason; it's a lesson we are all told from the time we're children.
Code cleanups don't solve NTP's security issues
* opens you up to DNS hijacking, should said DNS server (and all intermediate resolvers) not be under your direct control. It may also prevent the use of DNSSEC, as the local resolver's clock needs to be at least in the same ballpark as the server's.
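The DNSSEC point is a chicken-and-egg problem: validation only accepts signatures whose validity window contains the local time, so a box that needs DNS to find its time servers, but needs the time to validate DNS, can deadlock at boot. A toy Python sketch of just the time check (hypothetical, not real validator code):

# An RRSIG carries inception and expiration times; a validator compares
# them against its own clock. With a clock stuck at the 1970 epoch, every
# current signature looks invalid.

def rrsig_temporally_valid(inception, expiration, local_clock):
    # All values as Unix timestamps, for simplicity.
    return inception <= local_clock <= expiration

print(rrsig_temporally_valid(1700000000, 1702000000, 0))   # False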
2. They synchronize their time at the beginning of each hour, effectively mounting a flooding attack; an even larger flood starts at 00:00 UTC.
3. Their retry interval is around three minutes, and if they fail to reach the server they generate even more traffic toward it, rather than backing off exponentially (see the sketch after the links below).
4. They still try to talk to the hardcoded default servers, even if an alternative server list is set.
5. They don't support the Kiss of Death (KoD) packet, so nothing can stop them if they go wild.
http://pages.cs.wisc.edu/~plonka/netgear-sntp/
https://web.archive.org/web/20060423012837/http://people....
https://mailman.nanog.org/pipermail/nanog/2016-December/0...
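What item 3's clients should do instead is roughly exponential backoff with a cap and some jitter. Here is a minimal Python sketch, illustrative only and not taken from any SNTP implementation; the query function is a stand-in.

import random
import time

MIN_INTERVAL = 64          # seconds; a typical minimum NTP poll interval
MAX_INTERVAL = 36 * 3600   # cap the wait at 36 hours between retries

def poll_forever(query_server):
    """query_server() should send one request and return True on success."""
    interval = MIN_INTERVAL
    while True:
        if query_server():
            interval = MIN_INTERVAL                      # reset on success
        else:
            interval = min(interval * 2, MAX_INTERVAL)   # back off on failure
        # Jitter spreads clients out so they don't all wake at once
        # (compare the top-of-the-hour flood in item 2).
        time.sleep(interval + random.uniform(0, interval * 0.1))

# Usage would look like poll_forever(send_sntp_query), where
# send_sntp_query() is whatever actually talks to the server.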
https://bestpractices.coreinfrastructure.org/projects/79