Building the kernel with Clang
Over the years, there has been a persistent effort to build the Linux kernel using the Clang C compiler that is part of the LLVM project. We last looked in on the effort in a report from the LLVM microconference at the 2015 Linux Plumbers Conference (LPC), but we have followed it before that as well. At this year's LPC, two Google kernel engineers, Greg Hackmann and Nick Desaulniers, came to the Android microconference to update the status; at this point, it is possible to build two long-term support kernels (4.4 and 4.9) with Clang.
![Nick Desaulniers [Nick Desaulniers]](https://static.lwn.net/images/2017/lpc-desaulniers-sm.jpg)
Desaulniers began the presentation by answering the most commonly asked question: why build the kernel with Clang? To start with, the Android user space is all built with Clang these days, so Google would like to reduce the number of toolchains it needs to support. He acknowledged that it is really only a benefit to Google and is "not super useful" elsewhere. But there are other reasons that are beneficial to the wider community.
There are some common bugs that often pop up in kernel code, especially out-of-tree code like the third-party drivers that end up in Android devices. The developers are interested in using the static analysis available in Clang to spot those bugs, but the kernel needs to be built using Clang to do so. There are also a number of dynamic-analysis tools that can be used like the various sanitizers (e.g. AddressSanitizer or ASan) and their kernel equivalents (e.g. KernelAddressSanitizer or KASAN).
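As a rough illustration (my example, not one from the talk), the kind of bug these tools catch is a simple out-of-bounds access; building the snippet below with `-fsanitize=address` in user space, or hitting equivalent kernel code under CONFIG_KASAN, produces a runtime report pinpointing the bad write:

```c
/* Hypothetical user-space sketch of a bug the sanitizers catch:
 * an off-by-one write past the end of a heap allocation. */
#include <stdlib.h>
#include <string.h>

int main(void)
{
	char *buf = malloc(16);

	if (!buf)
		return 1;
	memset(buf, 0, 17);	/* writes one byte past the 16-byte buffer */
	free(buf);
	return 0;
}
```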
Clang provides a different set of warnings than GCC does; addressing those will result in higher-quality code, and fewer bugs in the kernel is clearly beneficial to all of its users. There are also some additional tools planned that use Clang. One is a control-flow-analysis tool that could enumerate valid stack frames at compile time; those could be checked at run time to eliminate return-oriented programming (ROP) attacks. There is work going on for link-time optimization (LTO) and profile-guided optimization (PGO) for Clang as well, which could provide better execution speed, especially for hot paths.
Building code with another compiler is a good way to shake out code that relies on undefined behaviors. Since the language specification does not define certain behaviors, compiler developers can choose whatever is convenient. That choice could change, so even a GCC upgrade might cause misbehavior if some kernel code is relying on undefined behavior. The hope, Desaulniers said, is that both the kernel and LLVM/Clang can improve their code bases from this effort. The kernel is a big project with a lot of code that can find bugs in the compiler; in fact, it already has.
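A hypothetical example (not taken from the kernel) of the sort of code that trips over this: signed integer overflow is undefined in C, so a compiler is free to assume the check below is always true and drop it, and GCC and Clang may make different choices depending on version and optimization level.

```c
#include <limits.h>
#include <stdio.h>

/* Undefined behavior when x == INT_MAX: the compiler may assume
 * "x + 1 > x" always holds and remove the comparison entirely. */
static int can_increment(int x)
{
	return x + 1 > x;
}

int main(void)
{
	printf("%d\n", can_increment(INT_MAX));
	return 0;
}
```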
Greg Kroah-Hartman said that "competition is good"; he was strongly in favor of the effort. Desaulniers was glad to hear that, as he and others had worried that the kernel developers wanted to protect the kernel's tight coupling with GCC. Kroah-Hartman said that there have been other compilers building the kernel along the way. Behan Webster also pointed to all of the new features that have come about in GCC over the past five years as a result of the competition with LLVM. Kroah-Hartman said that he wished there was a competitor to the Linux kernel.
![Greg Hackmann [Greg Hackmann]](https://static.lwn.net/images/2017/lpc-hackmann-sm.jpg)
Hackmann related the state of the upstream kernel: "we are very close to having a kernel that can be built with Clang". It does require using a recent Clang that has some fixes, but the x86_64 and ARM64 kernels can be built, though each architecture has one out-of-tree patch that needs to be applied to do so. There is also one Android-specific Kbuild change that is needed, but only if the Android open-source project (AOSP) pre-built toolchain is being used.
As announced on the kernel mailing list, there are patches available for the 4.4 and 4.9 kernels. There are also experimental branches of the Android kernels for 4.4 and 4.9 available from AOSP. More details can be found in the slides [PDF]. Those branches had just been pushed a few days earlier, Hackmann said, and that code could be built and booted on HiKey boards shortly thereafter.
There have been LLVM bugs found in the process, though most of them have been fixed at this point, Desaulniers said. The initial work was done with LLVM 4.0, but they have since updated to 5.0 and are also building with the current LLVM development tree (which will become 6.0). You can probably build the kernel with 4.0, he said, but it will be much slower than building with 5.0 or later.
There are still some outstanding issues. Variable-length arrays as non-terminal fields in structures are not supported by Clang; a GNU C extension for inline functions is also unsupported; and the LLVM assembler cannot be used to build the kernel. Hackmann noted that the GNU assembler is too liberal in what it accepts.
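To make the first of those concrete, here is a hypothetical (non-kernel) example of a variable-length array in the middle of a structure, a GNU C extension that GCC accepts but Clang rejects:

```c
/* VLAIS: a variable-length array as a non-terminal struct member.
 * GCC compiles this as an extension; Clang refuses, saying such
 * fields will never be supported. */
void build_packet(int payload_len)
{
	struct {
		unsigned int len;
		unsigned char payload[payload_len];	/* variable length... */
		unsigned int checksum;			/* ...and not the last field */
	} pkt;

	pkt.len = payload_len;
	pkt.checksum = 0;
}
```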
This work has shown that the FUD surrounding using a new toolchain for the kernel is unfounded, Desaulniers said. It is working now, but there are a few asterisks. Clang, the front end, can compile the kernel, but the assembler and the linker from GNU Binutils are needed to complete the build process.
Next up is figuring out how to do automated testing of LLVM and the kernel. Currently, the team is working with two specific LTS kernel branches and using specific LLVM versions. So Desaulniers can't quite say that Clang will build any kernel, since there are so many different configuration options. A bot to check whether kernel patches will fail to build under Clang is in the works as well. An audience member noted that kernelci.org is looking at adding other compilers to its build-and-boot testing.
Hackmann and Desaulniers encouraged others to try building using Clang. All it takes is a simple "make CC=clang" on a properly equipped system. We are, it seems, quite close to having a two-compiler world for the Linux kernel.
[I would like to thank LWN's travel sponsor, The Linux Foundation, for
assistance in traveling to Los Angeles for LPC.]
| Index entries for this article | |
|---|---|
| Conference | Linux Plumbers Conference/2017 |
Posted Sep 19, 2017 18:13 UTC (Tue)
by ndesaulniers (subscriber, #110768)
[Link] (5 responses)
Posted Sep 19, 2017 20:24 UTC (Tue)
by nathanchance (subscriber, #118533)
[Link] (4 responses)
Posted Sep 19, 2017 21:22 UTC (Tue)
by ndesaulniers (subscriber, #110768)
[Link] (1 responses)
Posted Sep 19, 2017 23:57 UTC (Tue)
by ndesaulniers (subscriber, #110768)
[Link]
Posted Sep 20, 2017 4:55 UTC (Wed)
by voltagex (guest, #86296)
[Link] (1 responses)
Posted Sep 20, 2017 17:54 UTC (Wed)
by nathanchance (subscriber, #118533)
[Link]
The process is basically identical to building a normal desktop kernel (set up a defconfig, customize as needed, build with a cross compiler, then install it). Use Google's stock toolchain (linked below). It's a little more difficult than a normal desktop kernel because the entire boot partition is traditionally compressed into an image, so you need to pull it off your device, unpack it, add your kernel image and any other files (like modules), then reflash it (either with fastboot or a custom recovery like TWRP). I personally don't run any tests on the kernel after adding patches and building as I am an amateur and don't have that kind of time; runtime is my test environment lol. I just add all the Linux stable upstream patches and pull stuff in from CAF and kernel/common from Google.
https://android.googlesource.com/platform/prebuilts/gcc/l...
Posted Sep 19, 2017 19:28 UTC (Tue)
by sfeam (subscriber, #2841)
[Link] (19 responses)
Posted Sep 19, 2017 20:08 UTC (Tue)
by ndesaulniers (subscriber, #110768)
[Link] (9 responses)
Posted Sep 19, 2017 21:17 UTC (Tue)
by ndesaulniers (subscriber, #110768)
[Link] (7 responses)
Posted Sep 19, 2017 21:21 UTC (Tue)
by ndesaulniers (subscriber, #110768)
[Link] (1 responses)
Posted Sep 25, 2017 14:58 UTC (Mon)
by mwsealey (subscriber, #71282)
[Link]
Posted Sep 19, 2017 23:05 UTC (Tue)
by Sesse (subscriber, #53779)
[Link] (1 responses)
Posted Sep 20, 2017 0:00 UTC (Wed)
by ndesaulniers (subscriber, #110768)
[Link]
Posted Sep 20, 2017 7:04 UTC (Wed)
by alison (subscriber, #63752)
[Link] (2 responses)
Posted Sep 20, 2017 21:00 UTC (Wed)
by ndesaulniers (subscriber, #110768)
[Link]
Posted Sep 21, 2017 16:38 UTC (Thu)
by mkaehlcke (guest, #61834)
[Link]
Posted Sep 21, 2017 21:28 UTC (Thu)
by behanw (guest, #90443)
[Link]
Posted Sep 19, 2017 20:50 UTC (Tue)
by WolfWings (subscriber, #56790)
[Link] (7 responses)
Posted Sep 20, 2017 0:51 UTC (Wed)
by ncm (guest, #165)
[Link] (5 responses)
Any sequence that is rejected by the current version of a tool is a candidate for syntax to mean something useful later on.
Being hard-ass about input grammar is good for everybody.
Posted Sep 20, 2017 7:18 UTC (Wed)
by joib (subscriber, #8541)
[Link] (1 responses)
https://tools.ietf.org/html/draft-thomson-postel-was-wron... lays it out in more detail.
Posted Sep 21, 2017 22:04 UTC (Thu)
by jani (subscriber, #74547)
[Link]
Posted Sep 20, 2017 14:33 UTC (Wed)
by aaron (guest, #282)
[Link] (2 responses)
Posted Sep 21, 2017 1:25 UTC (Thu)
by marcH (subscriber, #57642)
[Link] (1 responses)
Or... not :-(
http://www.wall.org/~larry/natural.html
Posted Sep 21, 2017 18:01 UTC (Thu)
by niner (subscriber, #26151)
[Link]
Posted Sep 20, 2017 18:01 UTC (Wed)
by valarauca (guest, #109490)
[Link]
By default Clang/LLVM tracks _closely_ to GAS, but with less magic: https://llvm.org/docs/LangRef.html#inline-assembler-expre...
Small things, like the fact that `add` on x64 requires a suffix to state whether it's an `addw`, `addq`, etc., for example: https://clang.llvm.org/compatibility.html#inline-asm
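A minimal sketch (my example, not from the comment) of what that suffix requirement looks like in practice: with only a memory operand, a bare `add` does not say whether it is an 8-, 16-, 32-, or 64-bit operation, so Clang's integrated assembler insists on an explicit suffix where gas would pick a default.

```c
/* Assumes x86-64 and GCC-style inline assembly. */
static inline void bump(unsigned long *counter)
{
	/* Unambiguous: the "q" suffix states the operand size. */
	asm volatile("addq $1, %0" : "+m" (*counter));

	/* asm volatile("add $1, %0" : "+m" (*counter));
	 * ...would be ambiguous (addb/addw/addl/addq?) and is rejected
	 * by Clang's integrated assembler. */
}
```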
Posted Sep 21, 2017 22:42 UTC (Thu)
by codewiz (subscriber, #63050)
[Link]
> I am curious at what point you decide that if the emitted code can only be assembled by a "too liberal in what it accepts" assembler then it indicates a bug in the code or the compiler. What API or standard applies to the interface between compiler and assembler?

I think they refer to hand-written inline assembly containing borderline invalid syntax. Suppose, for example, that someone wrote x86 assembly containing "test %eax", with the second operand missing, and gas liberally interpreted it as "test %eax, #0" or "test %eax, %eax".
Posted Sep 20, 2017 3:48 UTC (Wed)
by ncm (guest, #165)
[Link] (38 responses)
Posted Sep 20, 2017 4:18 UTC (Wed)
by eru (subscriber, #2753)
[Link] (19 responses)
Posted Sep 20, 2017 5:34 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Truly low-level code is a very small percentage of Linux.
But anyway, Linux is not going to be rewritten in anything any time soon.
Posted Sep 20, 2017 7:30 UTC (Wed)
by smurf (subscriber, #17840)
[Link] (1 responses)
Posted Sep 20, 2017 7:32 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Sep 20, 2017 9:21 UTC (Wed)
by NAR (subscriber, #1313)
[Link]
Posted Sep 20, 2017 16:43 UTC (Wed)
by ncm (guest, #165)
[Link] (14 responses)
> the compiler doing things behind your back. You need to stay in control. The things C++ adds to C are all about hidden magic.

There was a time you could say this, and the greybeards would nod silently and go back to scraping barnacles. But they're dead now, and the greybeards we have today know better.
We know that you aren't obliged to hide what is better not hidden. We know the magic isn't about hiding essential details, it's about whole categories of mistakes made impossible, and the attention spared from watching out for those available for better things. In all programming, no less in kernel programming, by far the scarcest commodity is attention. More productively-applied attention means better code: faster (yes, good C++ code is routinely faster), and doing more of the right things, and fewer of the wrong things. Compilation may be slower (although not compilation of the C subset -- guess what, Gcc uses the same code for both!), but with fewer trivial mistakes that have big consequences, you come out far ahead.
Nobody seriously suggests rewriting Linux in C++, just as nobody suggests rewriting Gcc (although somebody wrote Clang). But an increasing fraction of Gcc is good C++, and is visibly better for it. (Who hasn't noticed Gcc getting better, faster? It's not just competition from Clang.) Linux is coded in C, but C is bad C++, and new code could be good C++.
A totally new kernel in a modern language might be better than a mixed C and C++ Linux, but Linux is what we can have, and Linux can be made better than what we do have, with overwhelmingly less work. Over time, it will be noticed that the overwhelming majority of the bugs, by proportion, are in the old C code, and the quality standard will rise.
Posted Sep 21, 2017 9:19 UTC (Thu)
by NAR (subscriber, #1313)
[Link] (8 responses)
If you're only using the C subset of C++ then why are you compiling with the C++ compiler in the first place? The slowness starts with C++-specific stuff: each usage of e.g. std::map<std::string, std::string> eventually pulls in so much code that it really slows down compilation. Once I read a story about a C++ project that took 2 hours to compile. One developer was bored and copied all code of the project into a single source file: it compiled in 7 minutes. This is the problem that the C++ ecosystem needs to solve.
Posted Sep 21, 2017 11:09 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link]
Posted Oct 1, 2017 0:42 UTC (Sun)
by philomelus (guest, #96366)
[Link] (6 responses)
For an example: have you seen something like this in a header?

    #ifndef __FOO_H__
    #define __FOO_H__
    ...
    #endif // __FOO_H_

That in itself isn't enough. The point of the macro is to prevent reloading a file. If the entire file has to be read in order to exclude it (e.g. the compiler has to find the matched #endif because it could be a #else or something), you've gained very little. What saves compilation time is NOT reading the files to begin with.

My projects have used the following structure for more than 20 years (yes, since cfront days):

In source, or header if you like:

    #ifndef __FOO_H__
    #include "foo.h"
    #endif

Then in header, at line 1:

    #ifndef __FOO_H__
    #define __FOO_H__
    // Header comments, (c) notice, license notice, etc.
    // other stuff as usual
    #endif

Doing the above makes the top of the source files a bit "ugly" in some folks' opinion, but the compile-time savings are well worth it. With modern C++ template metaprogramming, one can even make this happen without much involvement of the end user (the programmer, that is).
Posted Oct 1, 2017 2:43 UTC (Sun)
by viro (subscriber, #7872)
[Link] (1 responses)
Posted Oct 17, 2017 21:39 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Posted Oct 1, 2017 7:17 UTC (Sun)
by NAR (subscriber, #1313)
[Link]
Posted Oct 1, 2017 18:24 UTC (Sun)
by madscientist (subscriber, #16861)
[Link] (1 responses)
Every compiler I've used in the last 5 years supports the "#pragma once" facility. This is much cleaner than ifdefs and just as fast as the "ifdefs around the #include" method you are suggesting. It isn't standard of course, so if you require maximum portability you can't use it.
Posted Oct 5, 2017 9:25 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link]
Posted Oct 1, 2017 19:44 UTC (Sun)
by Jandar (subscriber, #85683)
[Link]
https://gcc.gnu.org/onlinedocs/cppinternals/Guard-Macros....
Posted Sep 21, 2017 15:19 UTC (Thu)
by nix (subscriber, #2304)
[Link]
Posted Sep 22, 2017 20:34 UTC (Fri)
by vomlehn (guest, #45588)
[Link] (3 responses)
Posted Sep 24, 2017 8:18 UTC (Sun)
by ncm (guest, #165)
[Link] (2 responses)
It is about putting the type system to work performing logic at compile time, to generate code that is correct by construction. This is not something exotic; it is daily life for a C++ programmer.
Posted Sep 24, 2017 21:36 UTC (Sun)
by peter-b (guest, #66996)
[Link] (1 responses)
Alas, this paradigm of C++ programming is a relatively modern concept, which relies heavily on language features introduced in C++11 and more recently. Most C++ projects I've worked on in my career use C++ as "C plus classes", and the use of types-as-compile-time-assertions was a controversially innovative suggestion. It must be nice to work in an environment where this sort of "types for compile-time logic" approach is commonplace. :-)
Posted Sep 25, 2017 2:26 UTC (Mon)
by ncm (guest, #165)
[Link]
MongoDB's core server engineering organization is very well-run. (And is hiring.) We are using C++14 for the current release, probably C++17 in the next.
Posted Sep 20, 2017 10:20 UTC (Wed)
by k3ninho (subscriber, #50375)
[Link] (5 responses)
Posted Sep 20, 2017 19:46 UTC (Wed)
by ncm (guest, #165)
[Link] (1 responses)
The paper cited is one in a long line of apologia. There was a time when writing apologia was a respected activity, and apologia were widely persuasive. Although reading old apologia offers a precious glimpse into a lost world, they have ceased to persuade. As well-written as they often were (and remain), too many of the facts they cited have turned out to be falsehoods, and too many of the truisms have turned out, in the fullness of time, to be mere truthiness. (This last term seems quaint now; only a year ago, truth was something even liars pretended to.) Too many of the merits claimed were, in fact, harms, or are merits we may claim without accepting the argument.
In this case, essentially all of its valid arguments apply equally well to C and C++; the author contrasts them with "managed languages". However, he presents as a truism that C is faster than other languages, where we know that well-written C++ is routinely faster than C. In general, though, the arguments are basically irrelevant to the topic of upgrading from C to C++. Arguments over the merits of "safe" languages, in general, are suspect; the real question is where we expect to get correctness. Testing is nice, and checkers, and validators, but the place to get the pure stuff is by construction. When your language is powerful enough to present facilities (i.e., libraries) that admit only valid operations, without compromising performance, worries over invalid operations creeping in vanish.
Posted Sep 21, 2017 5:04 UTC (Thu)
by eru (subscriber, #2753)
[Link]
> When your language is powerful enough to present facilities (i.e., libraries) that admit only valid operations, without compromising performance, worries over invalid operations creeping in vanish.

This I can agree with. But C++ is not that language. My biggest gripe with it is that it cannot protect its abstractions. Where I work, this turns up every time the g++ compiler is upgraded, despite having warnings turned up to the max in the mandatory compiler options. The programmers have used the language or its library in a way that happened to work in the old version, but it either does not compile in the new version, or crashes. (A contributing problem is also the horrid complexity of modern C++.)
Posted Sep 22, 2017 9:47 UTC (Fri)
by tdz (subscriber, #58733)
[Link] (2 responses)
Posted Sep 22, 2017 16:11 UTC (Fri)
by ncm (guest, #165)
[Link] (1 responses)
Always promise, never provide.
In every case, as noted in the cited article, the language offers some sort of escape hatch to do "unsafe" operations. In this detail they are equivalent to the "safe subset" promoted for C++, that a program steps out of at need.
The relevant difference between languages, for systems programming, is how effectively they can package user-defined abstractions to make it unnecessary for users to step outside the (safe) abstraction. Commonly, certain necessary abstractions can't be expressed as libraries, and so have to be built into the core language, and then are promoted as features "missing" from other languages.
In an otherwise powerful language like Haskell, for example, we see its weakness in resource management papered over with built-in garbage collection, causing the familiar integration problems. When you cannot abstract resource management, abstractions that need to manage resources other than memory necessarily leak, and in integration even memory management leaks.
The Rust project has chosen to provide expressive power, and in many cases better defaults than C++, while making it harder to accidentally do many (but not all) unsafe operations. In ten or twenty years, if it matures well, it may be a good choice for implementing a successor to Linux; but C++ isn't standing still, so the bar is rising.
There is really no way forward for the Linux kernel other than C++. At some point the choice to build as C++ or not will amount to choosing whether to keep or abandon relevance. It's not there yet. It would be better for the project to make the switch before that point.
Posted Nov 17, 2017 19:39 UTC (Fri)
by marcH (subscriber, #57642)
[Link]
Indeed the competition for the Linux kernel is unfortunately nowhere near yet. In fact I haven't really seen any at all. I bet Rust will pass C++ long before there's any credible one.
Posted Sep 20, 2017 10:43 UTC (Wed)
by error27 (subscriber, #8346)
[Link] (4 responses)
It's already pretty common to format code to please static analysis tools. For example, Sparse is limited in how it understands locking so people work around it by making their locking very simple. In twenty years, the kernel will still be written in C but it will mostly be a subset of C that static analysis tools can understand. We're still developing the tools and figuring out what that looks like.
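As a rough sketch of what "making locking simple for the tools" can look like (my example, not from the comment): keep acquire and release balanced within one function, and declare any intentional imbalance with Sparse's context annotations so the checker can follow it.

```c
#include <linux/spinlock.h>

struct counter {
	spinlock_t lock;
	unsigned long value;
};

/* Balanced locking on every path: easy for Sparse to verify. */
static void counter_inc(struct counter *c)
{
	spin_lock(&c->lock);
	c->value++;
	spin_unlock(&c->lock);
}

/* A function that intentionally returns with the lock held declares
 * that fact, so Sparse does not flag the imbalance. */
static void counter_lock(struct counter *c)
	__acquires(&c->lock)
{
	spin_lock(&c->lock);
}
```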
Posted Sep 20, 2017 11:53 UTC (Wed)
by cpitrat (subscriber, #116459)
[Link]
Posted Sep 20, 2017 12:31 UTC (Wed)
by eru (subscriber, #2753)
[Link] (2 responses)
Posted Sep 20, 2017 13:10 UTC (Wed)
by karkhaz (subscriber, #99844)
[Link]
Posted Sep 21, 2017 3:54 UTC (Thu)
by jreiser (subscriber, #11027)
[Link]
Posted Sep 20, 2017 14:38 UTC (Wed)
by aaron (guest, #282)
[Link] (1 responses)
Posted Sep 21, 2017 13:41 UTC (Thu)
by adobriyan (subscriber, #30858)
[Link]
I never realized how stupid the decision to ban struct comparisons while allowing struct assignments is until trying to compile kernel with C++ compiler.
Say, it is possible to annotate non-null pointers by switching to references and lose nothing.
But no, we continue to do it the hard way because kernel programming is supposed to be hard, right?
Right?
Posted Sep 21, 2017 13:22 UTC (Thu)
by adobriyan (subscriber, #30858)
[Link] (4 responses)
I actually started doing it and even found 1.5 bugs in the process:
2c13ce8f6b2f6fd9ba2f9261b1939fc0f62d1307 posix_cpu_timer: Exit early when process has been reaped
50755bc1c305340660bbfa65fdae3ed113d8fe0e seqlock: fix raw_read_seqcount_latch() (+ followup fix)
It is doable to compile with clang++ (but not g++, see C99 initializers) while maintaining source compatibility, with certain exceptions like SYSTEM_CALL macro wrappers and some relocation thingy clang doesn't support. But then one needs to win a holy war against allocators returning "void *" (the recent kvmalloc() enthusiasm doesn't help), pointer arithmetic, "new", "private", etc.
It was quite refreshing to type something like "char min(char x, char y) = delete;", to overload "==", or to use "enum class" for type safety.
In fact your C++ posts here encouraged me to turn to the dark side. :-)
Posted Sep 21, 2017 15:22 UTC (Thu)
by khim (subscriber, #9252)
[Link] (1 responses)
Posted Sep 21, 2017 16:42 UTC (Thu)
by adobriyan (subscriber, #30858)
[Link]
Good. I recalled what clang++ didn't support: .label subtractions in alternatives calculations. This is not a problem for compile-time checking, but it is a showstopper for runtime. Hopefully g++ won't have problems with them.
Posted Sep 21, 2017 15:23 UTC (Thu)
by excors (subscriber, #95769)
[Link] (1 responses)
"But then one needs to win holy war against allocators returning "void *"" If you mean the problem is just that you need to add explicit casts in a million places, maybe you could avoid that relatively cleanly with: so it can be implicitly cast to any pointer type (with zero runtime cost). (Hmm, I wonder if you could then extend it to something like: to detect some bugs.)
Posted Sep 21, 2017 16:26 UTC (Thu)
by adobriyan (subscriber, #30858)
[Link]
#define lmalloc(T, gfp) ((T*)_kmalloc(sizeof(T), (gfp)))
and friends currently. There were even minor bugs in the kernel because of these types of type mismatches.
Posted Sep 20, 2017 11:55 UTC (Wed)
by cpitrat (subscriber, #116459)
[Link] (13 responses)
Good because the Hurd is coming to take the world!
Posted Sep 20, 2017 12:11 UTC (Wed)
by laarmen (subscriber, #63948)
[Link] (1 responses)
Posted Sep 20, 2017 14:20 UTC (Wed)
by mageta (subscriber, #89696)
[Link]
Posted Sep 20, 2017 14:37 UTC (Wed)
by lkurusa (guest, #97704)
[Link] (1 responses)
Posted Sep 20, 2017 15:30 UTC (Wed)
by cornelio (guest, #117499)
[Link]
Posted Sep 20, 2017 15:29 UTC (Wed)
by rvfh (guest, #31018)
[Link] (4 responses)
Or did you mean Magenta?
Posted Sep 20, 2017 16:08 UTC (Wed)
by lkurusa (guest, #97704)
[Link] (3 responses)
Posted Sep 20, 2017 16:53 UTC (Wed)
by ncm (guest, #165)
[Link] (2 responses)
It's kind of dumb to name the files ".cpp", instead of ".cc", though. For a long time, portable C++ code had to be in ".cpp" files because MSVC insisted on that, but Zircon doesn't need to be built with MSVC (and anyway, nowadays MSVC can compile ".cc" files).
Posted Sep 21, 2017 14:34 UTC (Thu)
by jond (subscriber, #37669)
[Link] (1 responses)
New law: Over a long enough time frame, every IRC client's name will be re-used for something else.
Posted Sep 24, 2017 7:21 UTC (Sun)
by magfr (subscriber, #16052)
[Link]
Posted Sep 20, 2017 21:02 UTC (Wed)
by ndesaulniers (subscriber, #110768)
[Link] (3 responses)
Posted Sep 21, 2017 1:18 UTC (Thu)
by marcH (subscriber, #57642)
[Link]
NIMBY!
Posted Sep 21, 2017 5:58 UTC (Thu)
by gregkh (subscriber, #8)
[Link] (1 responses)
Posted Sep 25, 2017 18:14 UTC (Mon)
by leoc (guest, #39773)
[Link]
Posted Sep 20, 2017 12:23 UTC (Wed)
by johnjones (guest, #5462)
[Link] (1 responses)
Posted Sep 20, 2017 15:29 UTC (Wed)
by mchouque (subscriber, #62087)
[Link]
Posted Oct 24, 2017 1:47 UTC (Tue)
by zhiqiu (guest, #119236)
[Link]
    make ranchu64_defconfig
    export ARCH=arm64
    export CROSS_COMPILE=aarch64-linux-android-
    export CLANG_TRIPLE=aarch64-linux-gnu-
    make CC=clang HOSTCC=clang

Building has been done successfully, but when I run lunch aosp_arm64-eng and then run:

    emulator -kernel kernel/common/arch/arm64/boot/Image

I cannot boot into the emulator. And when I use the prebuilt qemu-Image, the emulator is OK. I don't know how to use the Image built by Clang; would you please explain in detail?

Any help will be really appreciated. Thank you!
Posted Jan 3, 2018 23:47 UTC (Wed)
by ylluminate (guest, #120848)
[Link]
Also, regarding the lament about another option to Linux itself: as far as I can see, illumos is really pushing to be that alternative. Frankly, with its memory management and such, it would be a welcome change if they can get something figured out for wide driver support.
Nice write-up, thanks Jake! The slides from our talk can be found here.
I am curious at what point you decide that if the emitted code can only be assembled by a "too liberal in what it accepts" assembler then it indicates a bug in the code or the compiler. What API or standard applies to the interface between compiler and assembler?
https://youtu.be/ju1IMxGSuNE?t=165
"test %eax"
, with the second operand missing, and gas liberally interpreted it as "test %eax, #0"
or "test %eax, %eax"
The compiler can trivially recognize that a file is guarded that way (i.e. that having the macro defined guarantees that everything inside will be ifdef'ed out) and skip running a tokenizer over that thing. I've never checked whether gcc and clang do that.
GCC certainly does. See the first few conditionals in libcpp/files.c:should_stack_file(). This is even documented (in the node 'Once-Only Headers' in the cpp info doc).
> Compilation may be slower (although not compilation of the C subset -- guess what, Gcc uses the same code for both!)
Um, no it doesn't. The frontends are separate, the parser is distinct, there is no way you can say it uses the same code for both except insofar as it also uses the same code for Ada, Objective C and Fortran.
Hey, can you let me know what you think of Some Were Meant for C: The Endurance of an Unmanageable Language (PDF), by Stephen Kell? (Via: http://chneukirchen.org/trivium/)
K3n.
Also prevents

    if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
            retval = -EINVAL;

circa 2003: https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-attempt-of-2003/

If there is only one test (no && or ||) and only one group of assignments (a=b=c but no , [sequential comma]) then assignment inside an if can be OK.
> but not g++, see C99 initializers

C++20 got them, so hopefully GCC will implement them soon. I guess when that happens there could, finally, be some discussion about switching to a C++ compiler...
Over a long enough time frame every software project name will be re-used for something else.
Wish granted. :)
binary diff ?
Performance diff?