
The Ninja build tool


Posted Nov 17, 2016 21:58 UTC (Thu) by thoughtpolice (subscriber, #87455)
Parent article: The Ninja build tool

Though I feel it may not be especially popular here on LWN, I figure I should give a shout-out to the Shake build system as another tool worth mentioning:

http://shakebuild.com/

One of the reasons I bring it up is that Shake has support for reading and executing `.ninja` files! Originally, this feature was only used to benchmark Shake against Ninja to see how it fared (spoiler alert: it's pretty much just as fast). Shake also has a lot of other features, even when you only use it to run Ninja files; for example, it can generate profiling reports of your build system, so you can see which objects/rules took the most time, etc. I actually use LLVM's CMake build system to generate .ninja files, then use Shake to run the actual build. It's useful when I occasionally want to see what takes up the most time while compiling[1]. Some people here might like that. I believe Shake's 'lint' mode can also detect classes of errors inside Ninja files, like dependency violations, so that's useful too.

The actual Shake build system itself, however, is almost an entirely different beast, mostly because it's more like a programming-language library you create build systems from, rather than a DSL for a specific tool: more like, e.g., Waf than CMake, so to speak. So on top of Ninja-style features like parallelism pools, extending it even further to incorporate features like distributed object-result caching (a la Bazel/Blaze inside Google) is quite feasible. It also has extremely powerful dependency-tracking features; e.g., I can have a config file of key-value pairs, and Shake tracks changes all the way down to the individual variable assignments themselves, not the mtime (or whatever) of the file. You can express a dependency on the output of `cc --version`, so if the user does `export CC=clang-4.0; ./rebuild`, only rules that needed the C compiler get rerun, etc. I've been using lots of these features in a small Verilog processor I've been working on. I can run the timing-analysis tool on my design, have it generate a report, parse that report inside the build system itself, and have the build fail if the constraints are violated, with a pretty error report, breakdown, etc. in the terminal window. If I extended it, I could even get the build to give me the longest paths, etc., out of the resulting report.
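To make the idea concrete, here is a rough Python sketch (nothing to do with Shake's actual implementation; all names are made up for illustration) of depending on a value's content rather than its container's mtime -- each rule stores a hash of exactly what it read, whether that's one `key=value` pair or the output of `cc --version`:

```python
# Sketch of fine-grained dependency tracking: a rule is dirty only if
# the specific value it depends on changed, not if the file containing
# it was merely touched.
import hashlib
import subprocess

def value_hash(value: str) -> str:
    """Fingerprint a dependency by its content, not its file's mtime."""
    return hashlib.sha256(value.encode()).hexdigest()

def command_output(argv) -> str:
    """A dependency on what a command *prints*, e.g. ['cc', '--version']."""
    return subprocess.run(argv, capture_output=True, text=True).stdout

def needs_rebuild(db: dict, key: str, current_value: str) -> bool:
    """Rerun a rule only if the value it depends on actually changed."""
    h = value_hash(current_value)
    if db.get(key) == h:
        return False
    db[key] = h
    return True

# A config file with two variables; only rules depending on the
# changed variable get rerun.
db = {}
config = {"CC": "gcc", "OPT": "-O2"}
for k, v in config.items():
    needs_rebuild(db, k, v)           # first run: everything is dirty

config["CC"] = "clang-4.0"            # user switches compilers
dirty = [k for k, v in config.items() if needs_rebuild(db, k, v)]
print(dirty)                          # -> ['CC']
```

The same trick extends to `command_output`: hash the stdout of `cc --version`, and only compiler-dependent rules rerun when the compiler changes.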

It's almost life-changing when your build system is this powerful -- things that you'd previously express as bizarre shell scripts or "shell out" to other programs to accomplish, you can just write directly in the build system itself. This, in effect, completely changes the dynamics of what your build system can even do and what its responsibilities are. I find it surprisingly simple and freeing when everything can be done "in one place", so to speak, and I'm not as worried about taking on complex features that will end in a million tears down the road.

That said, Shake is on the extreme end of "I need a really powerful build system". It's only going to pay off with serious investment and a real need for the features. We're going to use it in the next version of the Glasgow Haskell Compiler, but our current build system is an insanely complex non-recursive Make fiasco, with all kinds of impressive tricks inside it that have destroyed its maintainability over time (an ironic twist of fate, since most of these tricks were intended to make the build system more reliable and less brittle, but they came at a large cost. Don't look at how the sausage is made, etc. etc.)

If you can, these days I normally suggest people just use something like Make or CMake+Ninja. They may lack direct analogs of some fundamental concepts from Shake or whatever, but they're pretty good, and most software doesn't *really* need an exceptionally complex build system. Honestly, I would probably like Make a lot more if the terse syntax didn't get utterly ridiculous in some cases, like interpolating inside macros, escaping rules, etc., and I'd like CMake more if it WAS_NOT_SO_VERBOSE.
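As one concrete example of the kind of Make escaping wart I mean: commas separate function arguments, so you can't pass a literal comma to a call directly -- the standard workaround (it's even in the GNU Make manual) is to smuggle the comma and space in through variables:

```make
comma := ,
empty :=
space := $(empty) $(empty)

# Join a space-separated list with commas: "a b c" -> "a,b,c"
LIST := a b c
JOINED := $(subst $(space),$(comma),$(LIST))

all:
	@echo $(JOINED)
```

Perfectly doable, but hardly obvious, and it only gets worse once these tricks nest inside other macros.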

[1] related: LLVM really, really needs a way to leverage Ninja pools for its link rules, because if you have too many cores, you'll eat all your RAM from 10x concurrent `ld` processes. I really hate that, because Ninja loves to automatically use up every core I have by default, even if it's 48+ of them :)
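For reference, Ninja itself already has the mechanism; what's missing is LLVM's CMake emitting it. A hand-written `build.ninja` fragment capping concurrent links might look like this (rule and variable names are made up for illustration):

```ninja
# At most two link jobs run at once, regardless of the global -j level;
# compile rules stay unrestricted and can use every core.
pool link_pool
  depth = 2

rule link
  command = $ld -o $out $in
  pool = link_pool
```

If I remember the property names correctly, CMake can also emit pools via the global `JOB_POOLS` property and `CMAKE_JOB_POOL_LINK`, if the project wires them up.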



The Ninja build tool

Posted Nov 17, 2016 23:06 UTC (Thu) by karkhaz (subscriber, #99844) [Link] (3 responses)

Thanks for pointing this out! Another interesting tool to watch (and one inspired by Shake) is llbuild [1]. Daniel Dunbar announced it during the LLVM Developers' Meeting a few weeks ago, and it looks like it has a lot of the same motivation. In particular, llbuild is a low-level library that handles the actual build, and Daniel has already written a Ninja front-end that uses llbuild as a back-end. There's a possibility that if you have a bunch of sub-projects which use different build tools (ninja, make, etc.), having an llbuild front-end for each of those tools would allow the whole project to be built with a single invocation (because the front-ends would parse all the manifests, merge the dependency trees, and send the whole thing to llbuild).

Regarding your comment about linking: it seems that Daniel wants llbuild to use Clang _as a library_ rather than invoking it as a subprocess. More generally, he thinks that if build systems in the future were able to communicate with the build commands (rather than just spawning them and letting them do their thing), we would be able to get much more highly optimised builds: things like llbuild having its own scheduler, so that it could run I/O- and CPU-intensive tasks together. It may be worth listening to the talk once a video is posted. Exciting times!

[1] https://github.com/apple/swift-llbuild

The Ninja build tool

Posted Nov 20, 2016 0:52 UTC (Sun) by thoughtpolice (subscriber, #87455) [Link] (2 responses)

It's good to see people taking lessons from Shake to heart! That's a really neat project.

At one point, someone had mentioned a similar thing for Shake and the GHC build system rewrite: why not use the compiler APIs directly in the build system to compile everything ourselves? I think it's a valid approach, though API instability makes things a little more complex, perhaps. We initially just wanted to port the existing system, which has already taken long enough! I do think it could improve a lot of things, though, at first glance. The linker case is a good one.

I'll check the LLVM Dev video when I get a chance, thanks for pointing it out!

The Ninja build tool

Posted Nov 21, 2016 10:44 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

Using a compiler as a library is not really a good idea. LLVM+clang is not too prone to ICE-ing these days, but it does happen from time to time. Integrating a compiler into the build tool also just seems... inelegant.

But forking a brand-new compiler process for every file is even less elegant. Perhaps there could be a middle ground -- why not create something like a "compilation server"? The simplest version could be a read-eval loop that reads arguments from stdin, launches a compiler in a thread, and multiplexes its output onto stdout.
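A minimal Python sketch of that read-eval loop, to show how little is actually needed (this is a hypothetical protocol of my own devising, not any real tool's interface): one command per stdin line, each job run in a worker thread, output multiplexed back onto stdout with a job tag.

```python
# Toy "compilation server": accepts one command line per input line,
# runs each in its own thread, and writes tagged results to stdout.
# It degrades gracefully to plain process spawning, so the same loop
# would work for gcc, icc, or msvc.
import shlex
import subprocess
import sys
import threading

_print_lock = threading.Lock()

def run_job(job_id: int, argv: list) -> None:
    """Run one job and emit its tagged output atomically."""
    proc = subprocess.run(argv, capture_output=True, text=True)
    with _print_lock:
        sys.stdout.write(f"[{job_id}] exit={proc.returncode}\n")
        sys.stdout.write(proc.stdout)
        sys.stdout.flush()

def serve(lines) -> None:
    """The read-eval loop: one whitespace-split command per line."""
    threads = []
    for job_id, line in enumerate(lines):
        argv = shlex.split(line)
        if not argv:
            continue
        t = threading.Thread(target=run_job, args=(job_id, argv))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

# Demo with the current interpreter standing in for a compiler; a real
# server would call serve(sys.stdin) instead.
serve([f'{shlex.quote(sys.executable)} -c "print(1+1)"'])
```

A real version would add pool-style concurrency limits and stderr multiplexing, but the core loop really is this small.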

This could easily be adapted, gradually, to other compilers (gcc, icc, msvc), since it can gracefully degrade to simply spawning new processes.

The Ninja build tool

Posted Nov 30, 2016 19:56 UTC (Wed) by nix (subscriber, #2304) [Link]

