
Shedding old architectures and compilers in the kernel

By Jonathan Corbet
February 26, 2018
The kernel development process tends to be focused on addition: each new release supports more drivers, more features, and often new processor architectures. As a result, almost every kernel release has been larger than its predecessor. But occasionally even the kernel needs to slim down a bit. Upcoming kernel releases are likely to see the removal of support for a number of unloved architectures and, in an unrelated move, the removal of support for some older compilers.

Architectures

The Meta architecture was added to the 3.9 kernel as "metag" in 2013; it is a 32-bit architecture developed by Imagination Technologies. Unfortunately, at about the same time as the code was merged, Imagination Technologies bought MIPS Technologies and shifted its attention to the MIPS architecture. Since then, the kernel's support for Meta has languished, and it can only be built with the GCC 4.2.4 release, which is unsupported. On February 21, James Hogan, the developer who originally added the Meta port to the kernel, proposed that it be removed, calling it "essentially dead with no users".

The very next day, Arnd Bergmann, working entirely independently, also proposed removing Meta. Bergmann, however, as is his way, took a rather wider view of things: he proposed that the removal of five architectures should be seriously considered. The other four were:

  • The Sunplus S+core ("score") architecture entered the kernel for the 2.6.30 release in 2009. Since then, the maintainers for that architecture have moved on and no longer contribute changes to the kernel. GCC support for score was removed in the 5.0 release in 2015. Bergmann said: "I conclude that this is dead in Linux and can be removed". Nobody has spoken up in favor of retaining this architecture.
  • The Unicore 32 architecture was merged for the 2.6.39 kernel in 2011. This architecture was a research project at Peking University. Bergmann noted that there was never a published GCC port and that the maintainer has not sent a pull request since 2014. This architecture, too, seems to lack users and nobody has spoken in favor of keeping it.
  • Qualcomm's Hexagon is a digital signal processor architecture; support for Hexagon entered the kernel in 2011 for the 3.2 release. Hexagon is a bit of a different case, in that the architecture is still in active use; Bergmann said that "it is being actively used in all Snapdragon ARM SoCs, but the kernel code appears to be the result of a failed research project to make a standalone Hexagon SoC without an ARM core". The GCC 4.5 port for this architecture never was merged.

    Richard Kuo responded in defense of the Hexagon architecture, saying: "We still use the port internally for kicking the tools around and other research projects". The GCC port is indeed abandoned, he said, but only because Qualcomm has moved to using LLVM to build both kernel and user-space code. Bergmann responded that, since there is still a maintainer who finds the code useful, it will remain in the kernel. He would like to put together a working LLVM toolchain to build this port, though.

  • The OpenRISC architecture was merged in the 3.1 release, also in 2011. Bergmann observed that OpenRISC "seems to have lost a bit of steam after RISC-V is rapidly taking over that niche, but there are chips out there and the design isn't going away". He added it to his list because there is no upstream GCC support, but said that the OpenRISC GCC port is easy to find and the kernel code is being actively maintained. Philipp Wagner responded that the GCC code has not been upstreamed because of a missing copyright assignment from a significant developer; that code is in the process of being rewritten. The end result is that there is no danger of OpenRISC being removed from the kernel anytime soon.

Bergmann also mentioned in passing that the FR-V and M32R architectures (both added prior to the beginning of the Git era) have been marked as being orphaned and should eventually be considered for removal. It quickly became apparent in the discussion, though, that nobody seems to care about those architectures. Finally, David Howells added support for the mn10300 architecture for 2.6.25 in 2008 and is still its official maintainer but, according to Bergmann, it "doesn't seem any more active than the other two, the last real updates were in 2013". Others in the discussion mentioned the tile (2.6.36, 2010) and blackfin (2.6.21, 2007) architectures as being unused at this point.

The plan that emerged from this discussion is to remove score, unicore, metag, frv, and m32r in the 4.17 development cycle, while hexagon and openrisc will be retained. There will be a brief reprieve for blackfin and tile, which will be removed "later this year" unless a maintainer comes forward. And mn10300 will be marked for "pending removal" unless it gains support for recent chips. All told, there is likely to be quite a bit of code moving out of the kernel in the near future.

Compilers

The changes.rst file in the kernel documentation currently states that the oldest supported version of GCC is 3.2, which was released in 2002. It has been some time, though, since anybody has actually succeeded in building a kernel with a compiler that old. In a discussion in early February, Bergmann noted that the oldest version known to work is 4.1 from 2006 — and only one determined developer is even known to have done that. The earliest practical compiler to build the kernel would appear to be 4.3 (2008), which is still supported in the SLES 11 distribution.

Linus Torvalds, though, said that the real minimum version would need to be 4.5 (2010); that is the version that added the "asm goto" feature allowing inline assembly code to jump to labels in C code. Supporting compilers without this feature requires maintaining a fair amount of fallback code; asm goto is also increasingly needed for proper Meltdown/Spectre mitigation. Some developers would be happy to remove the fallback code, but there is a minor problem with that as pointed out by Kees Cook: LLVM doesn't support asm goto, and all of Android is built with LLVM. Somebody may need to add asm goto support to LLVM in the near future.
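As a rough illustration of what the feature buys, here is a minimal, self-contained sketch of asm goto; the nop and the feature_enabled() helper are purely illustrative, and the kernel's real static-key code, which patches the instruction at run time, is considerably more involved.

    /* Minimal asm goto sketch: the inline assembly is allowed to jump to a C
     * label, so a feature check can compile down to a single (patchable)
     * instruction with no condition flags or compare-and-branch in the hot
     * path.  Requires GCC >= 4.5 (or a recent Clang). */
    #include <stdio.h>

    static int feature_enabled(void)
    {
        asm goto("nop"          /* the real kernel code patches this to a jump */
                 :              /* asm goto allows no output operands */
                 :              /* no inputs */
                 :              /* no clobbers */
                 : feature_on); /* the C label the assembly may branch to */
        return 0;               /* fall-through path: feature is off */

    feature_on:
        return 1;               /* reached only if the asm jumps here */
    }

    int main(void)
    {
        printf("feature is %s\n", feature_enabled() ? "on" : "off");
        return 0;
    }

With compilers older than 4.5, the kernel must instead fall back to ordinary conditional branches here, which is exactly the compatibility code developers would like to delete.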

Peter Zijlstra would like to go another step and require GCC 4.6, which added the -mfentry feature; this replaces the old mcount() profiling hook with one that is called before any other function-entry code. That, too, would allow the removal of some old compatibility code. At that point, though, according to Bergmann, it would make sense to make the minimum version 4.8, since that is the version supported by a long list of long-term support distributions. But things might not even stop there, since the oldest version of GCC that is said to have support for the "retpoline" Spectre mitigation is 4.9, released in 2014.
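To make the difference concrete, here is a sketch of what the two profiling options do to a trivial function on x86-64; the assembly in the comment is approximate and varies by GCC version.

    /* Approximate x86-64 output, showing where the profiling hook lands:
     *
     *   gcc -pg:            push %rbp            <- prologue first
     *                       mov  %rsp,%rbp
     *                       call mcount          <- hook after the prologue
     *
     *   gcc -pg -mfentry:   call __fentry__      <- hook is the very first
     *                       push %rbp               instruction, giving ftrace
     *                       mov  %rsp,%rbp          a clean, uniform call site
     *                                               to patch
     */
    int add(int a, int b)
    {
        return a + b;
    }

Dropping pre-4.6 compilers would let the kernel assume the __fentry__ form unconditionally and delete the mcount-based variants.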

Nobody has yet made a decision on what the true minimum version of GCC needed to build the kernel will be so, for now, the documentation retains the fictional 3.2 number. That will certainly change someday. Meanwhile, anybody who is using older toolchains to build current kernels should probably be thinking about moving to something newer.

(Thanks to Arnd Bergmann for answering a couple of questions for this article. It's also worth noting that he has recently updated the extensive set of cross compilers available on kernel.org; older versions of those compilers can be had from this page.)

Index entries for this article
Kernel: Architectures
Kernel: Build system



Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 0:52 UTC (Tue) by james (subscriber, #1325) [Link] (24 responses)

ARM says that the A53 and some other cores aren't vulnerable to Spectre, presumably because they don't do enough prediction.

There must be other still-supported architectures where all the cores ever produced are too simple to be affected by Spectre—and, of course, many kernels are built for embedded environments where they're only going to run on one sort of core.

So it's by no means obvious that everyone should be using a compiler that supports retpolines.

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 8:59 UTC (Tue) by jmspeex (subscriber, #51639) [Link]

The A53 is indeed (mostly?) immune because it is an in-order core, with no speculation to trigger the kinds of issues behind Spectre and Meltdown. OTOH, I believe cores like the A57 (which is out-of-order) are vulnerable to Spectre. So ARM users do care about those bugs.

It also depends on your security model

Posted Feb 27, 2018 14:41 UTC (Tue) by abatters (✭ supporter ✭, #6932) [Link] (22 responses)

My company deploys Linux in embedded x86-64 appliances that never run any untrusted 3rd-party code, so the whole spectre/meltdown thing is irrelevant to those systems even though the underlying arch is vulnerable. So for our use case we don't care about retpolines even on x86-64.

It also depends on your security model

Posted Feb 27, 2018 17:56 UTC (Tue) by rahvin (guest, #16953) [Link] (8 responses)

If the appliance accepts a network connection, any bugs in your software could make spectre/meltdown a real risk for a determined hacker, particularly if the appliances are exposed to the broader internet. All you need is a buffer overflow or some other bug that lets a network service be tricked into executing injected code. Spectre and Meltdown are the bugs that are going to keep on giving for a very long time.

It also depends on your security model

Posted Feb 27, 2018 18:40 UTC (Tue) by pbonzini (subscriber, #60935) [Link] (7 responses)

There are many cases in which breaking $networkprocess is just as bad as getting root. (There are also many cases in which everything that you worry about is running as root, though these are less likely to run on a core with speculative execution). In both cases, Meltdown and Spectre are not really much to worry about.

It also depends on your security model

Posted Feb 28, 2018 18:54 UTC (Wed) by clump (subscriber, #27801) [Link]

We can only speculate(!) about abatters' use case. You need to think about risks you can't foresee. If you don't use a network, can a network be created? Did you still include a network stack in your embedded OS? Your use case might not include peripherals, but can peripherals be attached? Are you carrying software that's not used in your product?

You can't protect against everything, but you can take precautions. Some performance-impacting mitigations on later kernels can be disabled, but having them opt-out by default is reasonable.

It also depends on your security model

Posted Mar 1, 2018 1:28 UTC (Thu) by rahvin (guest, #16953) [Link] (4 responses)

I disagree; the capability of Meltdown/Spectre to read out the entire memory of a system makes them far and away some of the most dangerous bugs. We need to keep in mind that only the very first exploits have rolled out for these, and because Spectre is fundamental to the design of almost all modern processors, it's going to be the bug that keeps on giving for a very long time.

IMO it's simply a matter of time, with Spectre and the timing attacks it's based on, before we get the remote exploit that makes it the single biggest threat to computing. Hopefully I'm wrong.

It also depends on your security model

Posted Mar 3, 2018 2:47 UTC (Sat) by himi (subscriber, #340) [Link] (3 responses)

I think the point was that if all your code is running as root, /any/ security vulnerability will open up access to any and all memory on the system - meltdown and spectre don't add any additional problems.

It also depends on your security model

Posted Mar 3, 2018 19:12 UTC (Sat) by nix (subscriber, #2304) [Link] (2 responses)

Quite. If everything important is running as one uid (as is true of most single-user setups!) both Spectre and Meltdown are also unimportant, because the attack doesn't need to use either when it could just ptrace() far more cheaply.

It also depends on your security model

Posted Mar 3, 2018 20:56 UTC (Sat) by excors (subscriber, #95769) [Link] (1 responses)

Unless you're running e.g. untrusted scripts in a sandboxed interpreter. Even if everything is running as root, you expect the sandbox to prevent the scripts accessing arbitrary memory. With Spectre you lose that guarantee. ("Script" could mean JS or eBPF or pretty much any kind of data that can cause the program to do a bounds-checked array access, which might include data that doesn't really look anything like a script at all.)
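For readers who have not seen it, the bounds-checked pattern being described is the classic Spectre variant 1 gadget; the sketch below loosely follows the victim_function() example from the Spectre paper (names and sizes are illustrative).

    /* Spectre v1 ("bounds check bypass") in miniature.  Architecturally this
     * is safe, but a mistrained branch predictor can speculatively run the
     * body with an attacker-chosen out-of-bounds x; the dependent load then
     * leaves a cache footprint in array2[] that reveals the out-of-bounds
     * byte. */
    unsigned char array1[16];
    unsigned long array1_size = 16;
    unsigned char array2[256 * 4096];   /* one cache line per possible byte value */
    volatile unsigned char temp;        /* keeps the probe load from being optimized away */

    void victim_function(unsigned long x)
    {
        if (x < array1_size) {                  /* the bounds check the sandbox relies on */
            unsigned char secret = array1[x];   /* may execute speculatively with x out of bounds */
            temp &= array2[secret * 4096];      /* side channel: which cache line got hot? */
        }
    }

Retpolines address the separate variant-2 (indirect branch poisoning) problem; gadgets like this one are why bounds-check hardening, such as the kernel's array_index_nospec() and the index masking used by JIT compilers, is needed as well.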

It also depends on your security model

Posted Mar 4, 2018 14:04 UTC (Sun) by nix (subscriber, #2304) [Link]

Yeah, the old guarantees that language purity could enforce security appear pretty much dead now. The implementation always needs to do extra work to keep people from reaching out beyond the bounds of the implementation via side channels like this -- and we don't know what new side channels may exist in the future, or may exist now yet be unknown to us :(

It also depends on your security model

Posted Mar 1, 2018 12:49 UTC (Thu) by matthias (subscriber, #94967) [Link]

You should keep in mind that at least in theory, Spectre can be exploited remotely: Send crafted network packets to train the speculation engine and afterwards extract information based on timing of future packets.

I do not expect this to be a problem right now, as it will be even harder than a local exploit and the bandwidth of the hidden channel will be much lower than for a local attacker. The SNR will be much worse, due to increased noise. But it is not as easy as saying "If $networkprocess is broken, then we have lost anyway". Spectre works even without breaking any process.

It also depends on your security model

Posted Mar 2, 2018 1:34 UTC (Fri) by gerdesj (subscriber, #5446) [Link] (12 responses)

"My company deploys Linux in embedded x86-64 appliances that never run any untrusted 3rd-party code, so the whole spectre/meltdown thing is irrelevant to those systems even though the underlying arch is vulnerable. So for our use case we don't care about retpolines even on x86-64."

What exactly are you contributing to the discussion here? At best you are correct about all of your assertions and at worst some clever bugger will prove you very, very wrong on one or two of them.

Personally I always attack problems assuming I'm stupid. If you find yourself promoting terms like "best practice" (you didn't but you came close) then you are doomed, I tell you, doomed. You may be a clever person but there is always someone else that is better than you and our colleagues here will include some of them - they may point out the flaw in your reasoning. The baddies won't.

That's the way of the sys 'n' network admin as I see it. No tin foil, just simple scepticism. Perhaps you are the dog's bollocks when it comes to IT security but I note that your firm were not involved in this disclosure (or were you?), so were just as surprised as the rest of us (sort of, see papers from ~1993 onwards.)

Me? I'm stupid, always have been and always will be.

It also depends on your security model

Posted Mar 2, 2018 13:01 UTC (Fri) by intgr (subscriber, #39733) [Link] (7 responses)

I agree with the rest of your comment but...

> If you find yourself promoting terms like "best practice" (you didn't but you came close) then you are doomed, I tell you, doomed.

What's wrong with "best practice"? The way I see it, a best practice documents general conventions that are a good idea to follow "stupidly" as you describe. Applying all security updates periodically is one such best practice, even if you think they don't apply to your use case. As opposed to cherry-picking patches that you think apply -- because you're bound to miss some.

It also depends on your security model

Posted Mar 2, 2018 13:34 UTC (Fri) by farnz (subscriber, #17727) [Link]

The problem with "best practice" is that it comes without a rationale, and thus documents things that you "must" do, but not why you must do them. Thus, when advice in a best practices document becomes wrong (e.g. due to external changes), there's no way to even notice that it is wrong, let alone fix it.

It also depends on your security model

Posted Mar 3, 2018 0:13 UTC (Sat) by gerdesj (subscriber, #5446) [Link] (1 responses)

"What's wrong with "best practice"?"

There be bad practice and there be good practice. As soon as I see anyone claiming that they know what is "best practice" I briefly lose the will to live. You said it yourself in a way: "The way I see it, a best practice documents general conventions that are a good idea to follow". With "good idea to follow", you dilute "best" to "good idea".

I am being very pedantic with this point, I know. I doubt that many people here have ever written documentation that claimed to be *shudder* "a best practice". There are a lot of opinionated folk here (me included) and quite a lot of "I'm right and you are talking twaddle" style bike shedding err ... discussion, but rarely, if ever, do I see an invocation of "best practice". I do see that phrase elsewhere rather a lot, though, and I generally use it as a litmus test: a possible but unlikely guide to solving the problem at hand, which is a bit sad.

I suppose that in the end I'm a bit of a grammar fiend: good, better, best - adjective, comparative, superlative. Soz!

It also depends on your security model

Posted Mar 3, 2018 19:13 UTC (Sat) by nix (subscriber, #2304) [Link]

What about calling it 'best current practice' like the RFCs do, then? That doesn't carry the ridiculous implication that this practice is best for all time, the best ever, rah rah!

It also depends on your security model

Posted Mar 3, 2018 6:21 UTC (Sat) by cladisch (✭ supporter ✭, #50193) [Link]

User Rosinante on the Meta Stack Exchange expressed it best:

While you may be an exemplary, clear-thinking individual, who uses the term best practice in a constructive manner, you have been preceded by a giant procession of zombies who use it as the antithesis of thought. Instead of understanding the important specifics of their situation and looking for an appropriate solution, all they want is to spot a herd in the distance and go trotting off after it.

Thus, the term best practice has been rendered an extremely strong signal of an empty resonant cavity in the place where a brain should be, and questions that mention the phrase get closed.

It also depends on your security model

Posted Mar 8, 2018 14:17 UTC (Thu) by Wol (subscriber, #4433) [Link] (2 responses)

> I agree with the rest of your comment but...

> > If you find yourself promoting terms like "best practice" (you didn't but you came close) then you are doomed, I tell you, doomed.

Not often, but some things are clearly best practice. Like what I documented :-) at a previous employer ...

"Switch compiler warnings to max, and if you can't fix them you need to explain them".

:-) :-)

Cheers,
Wol

It also depends on your security model

Posted Mar 8, 2018 15:00 UTC (Thu) by excors (subscriber, #95769) [Link] (1 responses)

> Not often, but some things are clearly best practice. Like what I documented :-) at a previous employer ...
>
> "Switch compiler warnings to max, and if you can't fix them you need to explain them".

But that isn't clearly best. In some compilers, the maximum 'warning' level (e.g. Clang's -Weverything) includes various diagnostic messages that are merely pointing out something the compiler noticed about your code, without implying there's really any problem with it. Perhaps those messages are useful in niche situations, or they were requested by one customer with a bizarre coding style, or a bored compiler developer added them because they could. It's not worth your time to modify your code to avoid those warnings, and it's probably not even worth your time going through the list of the hundreds of available messages and selectively disabling them. The compiler developers have already produced a list of reasonably useful warnings and called it -Wall or something, so that's usually a better place to start.

Maybe "set everything to max" is good advice when -Wall *is* the max. But if you just state that advice without any rationale, it will become poor advice when people switch to a compiler with many less-useful warnings.

It also depends on your security model

Posted Nov 1, 2018 5:23 UTC (Thu) by brooksmoses (guest, #88422) [Link]

Also, of course, there's the "which compiler?" question. Once you start getting into esoteric "extra" warnings, it's pretty common to end up with situations where avoiding a warning in one compiler will cause a warning in another.

It also depends on your security model

Posted Mar 2, 2018 17:13 UTC (Fri) by abatters (✭ supporter ✭, #6932) [Link] (3 responses)

In a discussion of dropping support for old compilers simply because they lack retpoline support, the point, of course, was that since retpoline support may not be essential to everyone, keeping support for the old compilers might still be useful to some people even for x86-64. Upgrading development environments is a PITA.

I have found the responses to my comment very insightful. I am glad I posted my comment, if only to get other people to share their perspectives. Let me summarize:

1) If a system does not allow 3rd-party code to be installed or run, and is not network-attached, then Meltdown/Spectre might not matter.

2) If a system does not allow 3rd-party code to be installed or run, and is network-attached, then Spectre might be remotely exploitable by network packet timings, although this has yet to be demonstrated and is only theoretical.

3) If a system does not allow 3rd-party code to be installed or run, and is network-attached, and an attacker manages remote code injection into a network process, then Meltdown/Spectre can be used for further compromise. In some cases, compromising an important network process is already "game over" so Meltdown/Spectre would not be of any use, but for other cases, Meltdown/Spectre could be used to gain complete control after compromising a low-privilege network process.

It also depends on your hardware

Posted Mar 3, 2018 13:53 UTC (Sat) by anton (subscriber, #25547) [Link] (2 responses)

> keeping support for the old compilers might still be useful to some people even for x86-64.

In addition to those where Meltdown/Spectre are irrelevant because there is no attack scenario where they worsen the situation, there are also x86-64 CPUs that are not vulnerable to Meltdown/Spectre (in particular, those based on Bonnell), and hopefully we will see many more CPUs without these vulnerabilities in a few years.

It also depends on your hardware

Posted Mar 8, 2018 20:20 UTC (Thu) by landley (guest, #6789) [Link] (1 responses)

Linux on nommu still exists. If your position is that a system without spectre/meltdown mitigation can't possibly be viable, you're expressing a pretty strong opinion about those too. (There are more nommu processors than with-mmu processors on the planet, in about the same way there are more bacterial cells than mammalian cells. This is the dark matter of the hardware world.)

Half the embedded developers already tend to stay several versions behind on everything (kernel, compiler, libc, userspace), as in it took them several years to notice upstream uClibc was dead and move to musl-libc or the uclibc necromancy fork. That's why I had a support horizon (http://landley.net/toybox/faq.html#support_horizon) in busybox (and now toybox): a clear policy with leeway for old systems, but also a defined "ok, that's enough, you can move now" cutoff.

If you remove support for all but the newest toolchains, they'll just stop upgrading the kernel in their sdk, as in they'll develop _new_ devices against old stuff. (It's already enough of a pain to convince embedded developers to stop developing new devices based on a 2.6 kernel they have lying around. Make your policies too tight, you lose traction.)

Rob

It also depends on your hardware

Posted Mar 8, 2018 21:09 UTC (Thu) by excors (subscriber, #95769) [Link]

I believe nommu systems still need Spectre mitigation, as long as they do speculative execution. Something like a JavaScript interpreter that is designed to run untrusted code in a sandbox ought otherwise to be safe on a nommu system, but Spectre makes it vulnerable just as on a system with an MMU.

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 8:08 UTC (Tue) by liam (guest, #84133) [Link] (6 responses)

May I assume that a minimum supported compiler version defined per-arch is off the table?

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 9:30 UTC (Tue) by comio (guest, #115526) [Link] (5 responses)

I hope to have a minimum version that doesn't depend on the architecture or other stuff. We already have too much entropy in this world.
What is the oldest (enterprise) distro that is still supported?

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 11:41 UTC (Tue) by rsidd (subscriber, #2582) [Link] (3 responses)

Don't enterprise distros stick to the same major kernel version no matter how old? If so, why not allow GCC 4.3 for kernel 3.10 (for example), but insist on GCC 4.9 for a current kernel?

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 15:47 UTC (Tue) by mageta (subscriber, #89696) [Link] (1 responses)

AFAIK:

Red Hat does. SUSE does upgrades with service packs. Canonical sticks to the original kernel, but for a shorter time, and provides hardware enablement not via that old version but with new kernels (HWE kernels) instead. So this could hit the latter two.

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 16:22 UTC (Tue) by kiko (subscriber, #69905) [Link]

For Ubuntu, I think we're fine.

While 12.04 and 14.04 LTS are still formally supported, they won't receive modern kernels via HWE; in reality only 4 kernel versions are ever available to be installable on any given Ubuntu release. See the "Kernel release end of life" for details at https://www.ubuntu.com/info/release-end-of-life -- but the summary is that neither of them would be affected by a GCC requirement uplift.

For 16.04 LTS, similarly, the last HWE kernel received will be 4.15, which is the kernel release targeted for the next LTS, 18.04 LTS, due in April. So it too would be unaffected by an uplift in GCC version.

Bionic itself comes with GCC 7.3 (and 6.4) which should be acceptable for any kernel version it ends up receiving.

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 19:12 UTC (Tue) by willy (subscriber, #9762) [Link]

Because we want to allow someone running a currently supported RH release to be able to build a current kernel.

Shedding old architectures and compilers in the kernel

Posted Mar 7, 2018 16:06 UTC (Wed) by BenHutchings (subscriber, #37955) [Link]

Debian 7 "wheezy" (released 2013) has gcc 4.7. But we will probably add gcc 4.9 shortly, specifically so we can build the kernel with retpolines.

However, RHEL 5 (2007) is also still supported and apparently still has gcc 4.1 as the default compiler (with 4.4 available as an alternative).

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 10:09 UTC (Tue) by moltonel (guest, #45207) [Link] (6 responses)

If the "minimum compiler version" is just documentation (as opposed to a build-time check), why not document the range of versions and their respective cons ? Something like "=<4.1 will require manual workarounds, =<4.5 lacks asm goto and uses inferior fallbacks, =<4.6 lacks --fentry which impacts tracing, =<4.9 lacks retpoline which is important for security".

I know documenting it this way doesn't get rid of (for example) asm goto fallback, but for users it gives a clearer picture and for developers wanting to unsupport something it puts the info in the Linux repo rather than a mailing list and people's minds.

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 15:21 UTC (Tue) by doublez13 (guest, #122213) [Link] (5 responses)

Because by completely dropping support for older compilers, we're able to get rid of chunks of code needed only to support them.

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 19:11 UTC (Tue) by moltonel (guest, #45207) [Link] (4 responses)

Yes, I know that's the whole point of dropping support.

But my comment was about documenting the more nuanced support: the LKML thread shows that documenting a single minimum version hides a lot of useful info ("gcc foo is supported but you'll miss out on bar"). Detailed support docs should help both users of old tools and support-dropping efforts by developers.

Shedding old architectures and compilers in the kernel

Posted Mar 1, 2018 4:16 UTC (Thu) by xtifr (guest, #143) [Link]

Sounds like a great idea. How soon can you have a first draft for us to check out?

Shedding old architectures and compilers in the kernel

Posted Mar 3, 2018 19:19 UTC (Sat) by nix (subscriber, #2304) [Link] (2 responses)

That's... not really useful, because nobody ever reads the documentation. The kernel's build dependencies have been documented in the same place for decades (they moved from Documentation/Changes to Documentation/process/changes.rst recently, but left a symlink), yet when I've added build dependencies there I see many questions from people who encounter build failures because of the lack of those dependencies but never consider even looking to see if the kernel's build-deps are documented anywhere, let alone looking to see if they changed.

And that's something as obvious as a build failure when turning on a new feature: i.e. doing something active, intentionally. Nobody will think to look in documentation if the *lack* of an upgraded compiler causes a failure (or, worse, a silent change in semantics). Nobody.

Shedding old architectures and compilers in the kernel

Posted Mar 3, 2018 22:14 UTC (Sat) by moltonel (guest, #45207) [Link] (1 responses)

The logical conclusion to "nobody ever reads the doc" is that the docs should be deleted, to reduce the maintenance burden. Maybe your premise is true for users. I guess it is true for me, a desktop user getting his build tools and kernel sources from a distro.

But it's clearly not true for (some) kernel devs, since they are discussing raising the minimum version (which is only defined in the docs). Since the goal is to get rid of old workarounds and to be sure that certain features are available, wouldn't it make sense for devs to document what those features and workarounds are, so that they know what can be removed from the kernel after raising the requirement to this or that version? Wouldn't a short "version X brings you Y" table speed things up for the next requirements-update discussion, compared to a single "version X required" statement?

Shedding old architectures and compilers in the kernel

Posted Mar 4, 2018 14:06 UTC (Sun) by nix (subscriber, #2304) [Link]

My premise is from, uh, kernel developers working for a major Linux distributor (unnamed to save their blushes). Maybe the problem is that nobody ever *rereads* Documentation/Changes after they read it once, since the assumption is that after your latest pull the deps are unchanged. (I am guilty here too, of course. So is everyone who builds a lot of kernels.)

But if kernel devs don't read the things, I see no reason to assume that anyone else will.

Smallest common denominator

Posted Feb 27, 2018 10:20 UTC (Tue) by moltonel (guest, #45207) [Link]

Looking at the minimum versions of all the other tools (apart from gcc), they're all comparatively recent. I'd be surprised if one of them (binutils, procps, udev, grub, openssl...) didn't yield a higher minimum gcc version independently of kernel code. You can use an older gcc for your kernel than for your userspace, but that's arguably just looking for trouble.

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 11:36 UTC (Tue) by jmm (subscriber, #34596) [Link] (1 responses)

The only official GCC release which supports building a retpoline-enabled kernel is 7.3, but there are various backports made by HJ Lu for older branches, available at https://github.com/hjl-tools/gcc
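For context, what that compiler support provides is the "retpoline" thunk that indirect jumps and calls get routed through; a rough, hand-written x86-64 sketch of such a thunk (illustrative, not the kernel's exact implementation) looks like this:

    /* Hand-rolled sketch of a retpoline thunk for an indirect branch whose
     * target is in %rax.  A retpoline-capable compiler emits branches to
     * thunks of this shape instead of a bare "jmp *%rax" / "call *%rax". */
    __asm__(
        ".globl my_indirect_thunk_rax\n"
        "my_indirect_thunk_rax:\n"
        "    call 1f\n"            /* push the address of label 2 as the return address */
        "2:  pause\n"              /* speculation is trapped here ...             */
        "    lfence\n"             /* ... in a harmless, fenced infinite loop      */
        "    jmp 2b\n"
        "1:  mov %rax, (%rsp)\n"   /* overwrite the return address with the real target */
        "    ret\n"                /* architecturally jumps to the target; the RSB predicts label 2 */
    );

The compiler side of this (GCC's -mindirect-branch= options, and the equivalent backports) is what replaces each indirect jmp/call with a branch to such a thunk.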

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 20:35 UTC (Tue) by arnd (subscriber, #8866) [Link]

Thanks, that's what I had been looking for. I should have a look there if I want to rebuild the cross toolchains for kernel.org. Not sure if we want to keep both versions (with and without retpoline support), or if I should just replace the binaries I uploaded.

If I understand it right, the changes in that git tree are only relevant for x86; other architectures might need similar work, but nothing else has it, right?

Shedding old architectures and compilers in the kernel

Posted Feb 27, 2018 20:54 UTC (Tue) by geert (subscriber, #98403) [Link] (2 responses)

Can we get rid of one more arch/ directory by (finally) merging arm64 and arm? Or is it still too early?

Shedding old architectures and compilers in the kernel

Posted Apr 27, 2018 21:08 UTC (Fri) by willy (subscriber, #9762) [Link] (1 responses)

Do arm and arm64 have enough in common that it's worth unifying them? It's not like we ever tried to unify i386 and ia64.

Shedding old architectures and compilers in the kernel

Posted Apr 28, 2018 19:34 UTC (Sat) by flussence (guest, #85566) [Link]

We have a unified arch/x86/ for i386 and amd64 nowadays, which wasn't always the case (see commit 208652d6b2c).

Old compilers, new kernels

Posted Mar 5, 2018 6:37 UTC (Mon) by fratti (guest, #105722) [Link] (2 responses)

Are there any users who would conceivably build brand-spanking-new kernels with decade-old compilers? Aside from appeasing archaeologists 1000 years from now trying to piece together what led to humanity's downfall and running into issues dating fragments of our past, I can't really think of a user that would use an old compiler without also using an old kernel. Perhaps I'm not familiar enough with the embedded vendor landscape to know the restrictions they have to live by, though.

Old compilers, new kernels

Posted Mar 5, 2018 14:44 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

I see this all the time. Not with kernels, but folks want to compile 10+ million SLOC projects, changed yesterday, using Ubuntu 12.04, so they require that everyone support CMake 2.8.9 or whatever. Never mind that just adding CMake to the pile of code they want to build would save everyone a lot of headaches.

Old compilers, new kernels

Posted Jun 3, 2018 14:11 UTC (Sun) by dps (guest, #5725) [Link]

You are clearly out of touch with far too many production systems. The idea of upgrading from long-unsupported old releases of things like python 2.3, very old sshd, etc. with known unfixed security issues is anathema. I would advocate using current stable versions, but that was not my pigeon, and it was feared that some bugs might be revealed. Actually fixing those bugs would obviously be the wrong thing.

I have extensive experience of not using things which are clearly the right thing because the production environment is sufficiently ancient that it is not supported. Let it suffice to say that this experience was obtained at multiple very large software companies.

I would not worry about the lack of spectre mitigation on those boxes because there are so many easier ways of owning them.


Copyright © 2018, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds