LWN: Comments on "The Ninja build tool" https://lwn.net/Articles/706404/ This is a special feed containing comments posted to the individual LWN article titled "The Ninja build tool". en-us Tue, 23 Sep 2025 11:48:01 +0000 Tue, 23 Sep 2025 11:48:01 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net The Ninja build tool https://lwn.net/Articles/707727/ https://lwn.net/Articles/707727/ nix <div class="FormattedComment"> See &lt;<a href="https://gcc.gnu.org/wiki/IncrementalCompiler">https://gcc.gnu.org/wiki/IncrementalCompiler</a>&gt; (alas moribund).<br> </div> Wed, 30 Nov 2016 19:56:56 +0000 The Ninja build tool https://lwn.net/Articles/707722/ https://lwn.net/Articles/707722/ nix <div class="FormattedComment"> Distributors/packagers hate scons, as well, because there is no cross-project consistency for things like CFLAGS overriding, DESTDIR, etc: it's just like raw Makefiles only much uglier because you have a full program in a full-blown programming language to comprehend and possibly hack at in order to build things.<br> <p> </div> Wed, 30 Nov 2016 19:20:23 +0000 The Ninja build tool https://lwn.net/Articles/706951/ https://lwn.net/Articles/706951/ gerv Presumably they called it "ninja" because <a href="https://blog.gerv.net/2011/09/build-tool-name-shortage/">all of the names formed from prefixing the word "make" with a single letter were taken</a>? <br> <br> Gerv Mon, 21 Nov 2016 11:33:56 +0000 The Ninja build tool https://lwn.net/Articles/706950/ https://lwn.net/Articles/706950/ Cyberax <div class="FormattedComment"> Using compiler as a library is not really a good idea. LLVM+clang is not too prone to ICE-ing these days but it does happen from time to time. Integrating a compiler into the build tool also just seems to be.. inelegant.<br> <p> But forking a brand new compiler for every file is even less elegant. Perhaps there could be a middle ground - why not create something like a "compilation server"? 
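One minimal sketch of such a "compilation server", in Python (an editorial illustration, not the commenter's code; the line-oriented stdin protocol and all names here are assumptions):

```python
import shlex
import subprocess
import sys
import threading

def serve(commands, out=sys.stdout):
    """Run each command line in its own thread, multiplexing output.

    `commands` is an iterable of shell-style command lines; in the
    design above these would be read from stdin. A lock serializes
    writes so concurrent jobs don't interleave their output.
    """
    lock = threading.Lock()
    threads = []

    def run(line):
        # A real server would keep a warm compiler instance per worker;
        # this sketch degrades gracefully to spawning one process per
        # request, which is what makes supporting gcc/icc/msvc easy.
        result = subprocess.run(shlex.split(line),
                                capture_output=True, text=True)
        with lock:
            out.write(result.stdout)
            out.write(result.stderr)

    for line in commands:
        t = threading.Thread(target=run, args=(line,))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()

# Real usage would be something like:
#   serve(line.strip() for line in sys.stdin if line.strip())
```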
The simplest version can be a read-eval loop that reads arguments from stdin and launches a compiler in a thread, multiplexing its output into stdout.<br> <p> This can gradually be adapted for other compilers (gcc, icc, msvc), as it can gracefully degrade to simply spawning new processes.<br> </div> Mon, 21 Nov 2016 10:44:30 +0000 CMake and recursive makefiles https://lwn.net/Articles/706913/ https://lwn.net/Articles/706913/ mathstuf <div class="FormattedComment"> Yeah, it is a top-level system rather than an arbitrary depth. There have been thoughts of making a GNU-specific makefiles generator, but lack of time and all that.<br> </div> Sun, 20 Nov 2016 14:14:51 +0000 The Ninja build tool https://lwn.net/Articles/706897/ https://lwn.net/Articles/706897/ thoughtpolice <div class="FormattedComment"> It's good to see people taking lessons from Shake to heart! That's a really neat project.<br> <p> At one point, someone had mentioned a similar thing for Shake and the GHC build system rewrite: why not use the compiler APIs directly in the build system to compile everything ourselves? I think it's a valid approach, though API instability makes things a little more complex, perhaps. We initially just wanted to port the existing system, which has already taken long enough! I do think it could improve a lot of things, though, at first glance. The linker case is a good one.<br> <p> I'll check the LLVM Dev video when I get a chance, thanks for pointing it out!<br> </div> Sun, 20 Nov 2016 00:52:55 +0000 The Ninja build tool https://lwn.net/Articles/706868/ https://lwn.net/Articles/706868/ wahern <div class="FormattedComment"> That link isn't mine. It was posted on HN. There's author attribution at the top of the post and elsewhere on that page.<br> <p> </div> Fri, 18 Nov 2016 22:58:32 +0000 CMake and recursive makefiles https://lwn.net/Articles/706858/ https://lwn.net/Articles/706858/ madscientist It depends on what you mean by "recursive makefiles".
I haven't studied them in detail but it appears that the CMake-generated makefiles have one makefile per high-level target (program, library, etc.), where that makefile contains all the rules needed to build that target (these can be lengthy). The "master" makefile runs one make instance for each of these targets. <p> In the sense that there are many different instances of make invoked, it's absolutely recursive (see my pstree output below) with all the overhead involved with that. It's likely that some of the worst problems with recursive makefiles that are described in Miller's paper on the subject are not present in this build system: I haven't investigated but I assume that the "master" makefile has some sense of which high-level targets depend on others. <p> I would say there's little doubt that the makefiles generated by CMake are not getting every last erg of performance out of make (and especially not GNU make, since they're standard POSIX makefiles). <p> Here's a pstree of a CMake-generated build from a medium-sized project (but one with lots of different targets): <pre> $ pstree -a 6453 bash └─make -j8 └─make -f CMakeFiles/Makefile2 all ├─make -f Dir1/CMakeFiles/Dir1.dir/build.make... │ └─sh -c... │ └─cmake -E cmake_dependsUnix M ├─make -f Dir1/CMakeFiles/Dir1Static.dir/build.make... │ └─sh -c... │ └─ccache... │ └─x86_64-generic- │ └─cc1plus ├─make -f Dir3/CMakeFiles/Dir3IFace.dir/build.make... │ └─sh -c... │ └─ccache... │ └─x86_64-generic- │ └─cc1plus ├─make -f Dir4/CMakeFiles/Dir4.dir/build.make... │ └─sh -c... │ └─ccache... │ └─x86_64-generic- │ └─cc1plus ├─make -f Dir5/CMakeFiles/Dir5.dir/build.make... │ └─sh -c... │ └─ccache... │ └─x86_64-generic- │ └─cc1plus ├─make -f Dir6/CMakeFiles/Dir6.dir/build.make... │ └─sh -c... │ └─ccache... │ └─x86_64-generic- │ └─cc1plus ├─make -f Dir3/CMakeFiles/Dir3Test.dir/build.make... │ └─sh -c... │ └─ccache... │ └─x86_64-generic- │ └─cc1plus └─make -f Dir7/CMakeFiles/crashtest.dir/build.make... └─sh -c... 
└─cmake -E cmake_link_scriptCM └─x86_64-generic- └─collect2 └─ld </pre> Fri, 18 Nov 2016 18:07:35 +0000 Build time optimization https://lwn.net/Articles/706855/ https://lwn.net/Articles/706855/ jhhaller <div class="FormattedComment"> Good comparison of different build options in the comments. One optimization I've never seen is handling include paths with openat. Every system I've seen, both build systems and compilers, works by concatenating strings and handing the result to the open call. Using a different file descriptor for each include option and using openat would avoid the kernel having to search the directory from the root for every include path option, including failed lookups. While not a big optimization for any one open, over a large build this seems to be a win. I once tried to change gcc to use this, but didn't find the right spot before real work intruded.<br> <p> Has anyone seen this optimization done?<br> </div> Fri, 18 Nov 2016 17:24:54 +0000 The Ninja build tool https://lwn.net/Articles/706804/ https://lwn.net/Articles/706804/ mathstuf <div class="FormattedComment"> Oh, and you didn't mention what version of CMake was used.<br> </div> Fri, 18 Nov 2016 11:25:16 +0000 The Ninja build tool https://lwn.net/Articles/706803/ https://lwn.net/Articles/706803/ mathstuf <div class="FormattedComment"> CMake's Ninja files could speed up quite a bit for projects with a large graph (when using targets as nodes) if this issue[1] were fixed; I have a solution sketched out, but have not had time to implement it. As for Makefiles, they have the same restriction in CMake, but apparently there is no way to do something similar. Have you done any further analysis on the performance pain points?
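Returning to the openat(2) idea a few comments up: a rough editorial sketch of include-path resolution with pre-opened directory descriptors (Python exposes openat via the `dir_fd` argument; the function and variable names here are made up for illustration):

```python
import os

def find_header(name, dir_fds):
    """Resolve a header against pre-opened include directories.

    Rather than concatenating "-Idir" prefixes into strings and making
    the kernel walk each path from the root on every open(), hold one
    file descriptor per include directory and resolve the header
    relative to it with openat(2) (the dir_fd argument in Python).
    Failed lookups stay cheap for the same reason.
    """
    for dfd in dir_fds:
        try:
            return os.open(name, os.O_RDONLY, dir_fd=dfd)
        except FileNotFoundError:
            continue  # try the next -I directory
    return None

# Setup, done once per compile job rather than once per open():
#   dir_fds = [os.open(d, os.O_RDONLY) for d in include_dirs]
```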
Do you have repositories one could clone to get your benchmark tests so that we can use them ourselves?<br> <p> [1]<a href="https://gitlab.kitware.com/cmake/cmake/issues/15555">https://gitlab.kitware.com/cmake/cmake/issues/15555</a><br> </div> Fri, 18 Nov 2016 11:20:53 +0000 The Ninja build tool https://lwn.net/Articles/706793/ https://lwn.net/Articles/706793/ wahern <div class="FormattedComment"> CMake-generated ninja builds are apparently quite poor and can be even slower than a large non-recursive Make build. See <a href="http://www.kaizou.org/2016/09/build-benchmark-large-c-project.html">http://www.kaizou.org/2016/09/build-benchmark-large-c-pro...</a><br> <p> <p> </div> Fri, 18 Nov 2016 05:29:46 +0000 The Ninja build tool https://lwn.net/Articles/706791/ https://lwn.net/Articles/706791/ wahern <div class="FormattedComment"> Here are some more performance comparisons.<br> <p> <a href="http://www.kaizou.org/2016/09/build-benchmark-large-c-project.html">http://www.kaizou.org/2016/09/build-benchmark-large-c-pro...</a><br> <p> One of the takeaways is that CMake does a poor job of producing efficient ninja files. If you're using CMake to generate ninja, it's like getting the worst of both worlds.<br> <p> Another takeaway, I think, is that non-recursive Make is really efficient. Theoretically there's little to distinguish a non-recursive Make build from a ninja build. The basic syntax is very similar. The real bottleneck is GNU Make's implementation, which is lumbering after decades of feature accretion and hacks. (OTOH, it's also much more flexible.
ninja's auto-dependency generation only works for C and C++ using GCC syntax, whereas the GNU Make solution is totally generic.)<br> <p> Regarding GNU Make spending so much time including auto-generated header dependencies (mentioned in <a href="http://david.rothlis.net/ninja-benchmark/">http://david.rothlis.net/ninja-benchmark/</a>), I bet that could be addressed by generating a single include file per directory instead of per source file. As you showed in your benchmarks, even ninja spends most of its time parsing, and its parser is leaner and simpler than GNU Make's.<br> <p> </div> Fri, 18 Nov 2016 05:23:29 +0000 The Ninja build tool https://lwn.net/Articles/706780/ https://lwn.net/Articles/706780/ mathstuf <div class="FormattedComment"> CMake does not generate recursive makefiles. There's just a large convoluted inclusion hierarchy going on so you can run "make" in any directory and dependencies in sibling directories get built too.<br> </div> Fri, 18 Nov 2016 01:41:34 +0000 The Ninja build tool https://lwn.net/Articles/706774/ https://lwn.net/Articles/706774/ karkhaz <div class="FormattedComment"> Thanks for pointing this out! Another interesting tool to watch (and which is inspired by Shake) is llbuild [1]. Daniel Dunbar announced it during the LLVM Developers' Meeting a few weeks ago, and it looks like it has a lot of the same motivation. In particular, llbuild is a low-level library that handles actual builds, and Daniel has already written a Ninja front-end which uses llbuild as a back-end. There's a possibility that if you have a bunch of sub-projects which use different build tools (ninja, make, etc), having an llbuild front-end for all those tools would allow the project to be built with a single invocation (because the front-ends would parse all the manifests, merge the dependency tree, and send the whole thing to llbuild).<br> <p> Regarding your comment about linking.
It seems that Daniel wants llbuild to use Clang _as a library_ rather than invoking it as a subprocess. More generally, he thinks that if build systems in the future were able to communicate with the build commands (rather than just spawning them and letting them do their thing) we would be able to get much more highly optimised builds...things like llbuild having its own scheduler so that it could run I/O- and CPU-intensive tasks together. May be worth listening to the talk once a video is posted. Exciting times!<br> <p> <p> [1] <a href="https://github.com/apple/swift-llbuild">https://github.com/apple/swift-llbuild</a><br> </div> Thu, 17 Nov 2016 23:06:26 +0000 The Ninja build tool https://lwn.net/Articles/706766/ https://lwn.net/Articles/706766/ thoughtpolice <div class="FormattedComment"> Though it may not be especially popular here on LWN I feel, I figure I should give a shout-out to the Shake build system as another thing worth mentioning:<br> <p> <a href="http://shakebuild.com/">http://shakebuild.com/</a><br> <p> One of the reasons I bring it up is that Shake has support for reading and executing `.ninja` files! Originally, this feature was only used to benchmark Shake against Ninja to see how it fared (spoiler alert: it's pretty much just as fast). Shake also has a lot of other features, even when you only use it for Ninja; for example it can generate profiling reports of your build system, so you can see what objects/rules took the most time, etc. I actually use LLVM's CMake build system to generate .ninja files, then use Shake to run the actual build. It's useful when I occasionally want to see what takes up the most time while compiling[1]. Some people here might like that.
I believe the 'lint' mode in Shake can also detect classes of errors inside Ninja files like dependency violations, so that's useful too.<br> <p> The actual Shake build system itself, however, is almost an entirely different beast, mostly because it's more like a programming language library you create build systems from, rather than a DSL for a specific tool: more like e.g. Waf than CMake, so to speak. So on top of things like parallelism pools like Ninja, extending that even further beyond, to incorporate features like distributed object result caching (a la Bazel/Blaze inside Google) is quite feasible and doable. It also has extremely powerful dependency tracking features; e.g. I can have a config file of key-value pairs, and Shake tracks changes all the way down to individual variable assignments themselves, not the actual mtime or whatever of the file. You can express a dependency on the output of `cc --version`, so if the user does `export CC=clang-4.0; ./rebuild`, only rules that needed the C compiler get rerun, etc. I've been using lots of these features in a small Verilog processor I've been working on. I can just run the timing analysis tool on my design, it generates a resulting report, run a parser to parse the report inside the build system itself, and the build can fail if the constraints are violated, with a pretty error-report, breakdown, etc in the terminal window. If I extended it, I could even get the build to give me longest paths, etc out of the resulting report.<br> <p> It's almost life-changing when your build system is this powerful -- things that you'd previously express as bizarre shell scripts or "shell out" to other programs to accomplish, you can just write directly in the build system itself. This, in effect, completely changes the dynamics around what your build system can even do and what its responsibilities are. 
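The `cc --version` trick described above can be approximated outside Shake too. A hypothetical sketch (this is not Shake's API; the function names are invented) of treating a command's output, rather than a file's mtime, as a dependency key:

```python
import hashlib
import subprocess

def tool_fingerprint(cmd):
    """Fingerprint a tool by the output of e.g. `cc --version`.

    Hash the command's output and store the digest alongside the build
    state; dependents are rerun only when the digest changes, so
    `export CC=clang-4.0` invalidates exactly the rules that consult
    the C compiler.
    """
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return hashlib.sha256(out.encode()).hexdigest()

def needs_rerun(cmd, stored_fingerprint):
    """True when the tool's current output no longer matches."""
    return stored_fingerprint != tool_fingerprint(cmd)
```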
I find it surprisingly simple and freeing when everything can be done "in one place", so to speak, and I'm not as worried about taking on complex features that will end in a million tears down the road.<br> <p> That said, Shake is on the extreme end of "I need a really powerful build system". It's only going to pay off with serious investment and need for the features. We're going to use it in the next version of the Glasgow Haskell Compiler, but our build system is an insanely complex non-recursive Make fiasco with all kinds of impressive tricks inside of it that have destroyed its maintainability over time (in an ironic twist of fate, most of these tricks were intended to make the build system more reliable and less brittle, but came at a large cost. Don't look at how the sausage is made, etc etc.)<br> <p> If you can, these days I normally suggest people just use something like Make, or CMake+Ninja. There are some fundamental concepts they might lack direct analogs of in comparison to Shake or whatever, but they're pretty good and most software doesn't *really* need an exceptionally complex build system. Honestly, I would probably just like Make a lot more if the terse syntax didn't get utterly ridiculous in some cases like interpolating inside macros, escaping rules, etc, and I'd like CMake more if it WAS_NOT_SO_VERBOSE.<br> <p> [1] related: LLVM really, really needs a way to leverage Ninja pools for its link rules, because if you have too many cores, you'll eat all your RAM from 10x concurrent `ld` processes. I really hate that, because Ninja loves to automatically use up every core I have by default, even if it's 48+ of them :)<br> </div> Thu, 17 Nov 2016 21:58:31 +0000 Kbuild + ninja https://lwn.net/Articles/706757/ https://lwn.net/Articles/706757/ rabinv <div class="FormattedComment"> I didn't try removing the recursiveness so I don't know how much of the slowness is just from that.
IIRC I measured over one million stat(2) calls on a no-op build with make. I found a couple of low hanging fruits in the kernel's makefiles like the fact that gcc is invoked over 40 times even before running any rules: <a href="https://patchwork.kernel.org/patch/9201089/">https://patchwork.kernel.org/patch/9201089/</a> <a href="https://patchwork.kernel.org/patch/9201093/">https://patchwork.kernel.org/patch/9201093/</a> (haven't followed up on those patches yet though).<br> </div> Thu, 17 Nov 2016 20:45:30 +0000 Kbuild + ninja https://lwn.net/Articles/706755/ https://lwn.net/Articles/706755/ pbonzini <div class="FormattedComment"> How much of that is due to Kbuild's usage of recursive Makefiles?<br> </div> Thu, 17 Nov 2016 20:11:02 +0000 The Ninja build tool https://lwn.net/Articles/706754/ https://lwn.net/Articles/706754/ pbonzini <div class="FormattedComment"> Actually, I found ninja to be slower than make on building MySQL on the first build (and by no small amount, like 15%). Subsequent builds were faster with ninja, however the make backend of CMake uses recursive makefiles so the comparison favors ninja because of a "defect" in CMake.<br> <p> I need to write a blog post about it someday...<br> </div> Thu, 17 Nov 2016 20:09:13 +0000 Kbuild + ninja https://lwn.net/Articles/706747/ https://lwn.net/Articles/706747/ rabinv <div class="FormattedComment"> Here's an experimental Ninja build file generator for the Linux kernel I hacked up a while ago: <a href="https://github.com/rabinv/kninja">https://github.com/rabinv/kninja</a>. 0.065 seconds for a no-op build vs make's 2.254 seconds.<br> </div> Thu, 17 Nov 2016 18:22:02 +0000 The Ninja build tool https://lwn.net/Articles/706693/ https://lwn.net/Articles/706693/ mathstuf <div class="FormattedComment"> One could probably set up SCons to remember stuff like this, but I don't think it ships with it by default which then means there's no real consistency between projects. 
Waf seems to be the new scons replacement and it does have a configure step sort of built in (AFAICT, I interact with it mainly due to mpv).<br> </div> Thu, 17 Nov 2016 13:13:47 +0000 The Ninja build tool https://lwn.net/Articles/706691/ https://lwn.net/Articles/706691/ mathstuf <div class="FormattedComment"> How well does SCons handle different compilers? Last I checked, every project had to build their own command lines for each compiler which usually means you're stuck with Clang/GCC support, maybe MSVC and almost never any other compilers (primarily Intel).<br> </div> Thu, 17 Nov 2016 13:11:53 +0000 The Ninja build tool https://lwn.net/Articles/706692/ https://lwn.net/Articles/706692/ mathstuf <div class="FormattedComment"> <font class="QuotedText">&gt; I'm not sure whether the "good Windows support" extends to CMake+Ninja, or if CMake users on Windows still prefer CMake with the Visual Studio back end. Anybody?</font><br> <p> Yes, Ninja is way faster than Visual Studio to build and to generate. Building because Ninja does rule-level parallelization whereas msbuild does target-level parallelization. The generate step is faster because Ninja builds for a single build type (i.e., Debug vs. Release) at a time while Visual Studio supports multiple configurations from a single generated .sln file. 
For projects with many generator expressions (which, if you use the new `target_*` commands, is a lot), that multiplies the generation time.<br> <p> <font class="QuotedText">&gt; Presumably if you're using CMake+Makefiles then CMake+Ninja will just work.</font><br> <p> There are some things supported only in one and not the other, but they're rarely used (and those bits are documented as being generator-specific).<br> </div> Thu, 17 Nov 2016 13:11:07 +0000 The Ninja build tool https://lwn.net/Articles/706689/ https://lwn.net/Articles/706689/ fsateler <div class="FormattedComment"> SCons has (or had the last time I used it) the limitation that there is no separation of configure and build steps. Thus, you need to remember all the configure arguments for every scons invocation. Not only is it painful, it is slow as well, as all configure tests need to be rerun (although there is some caching). <br> </div> Thu, 17 Nov 2016 12:21:44 +0000 The Ninja build tool https://lwn.net/Articles/706682/ https://lwn.net/Articles/706682/ drothlis <div class="FormattedComment"> Last time I needed to use CPack was a while ago but if I recall correctly, CPack's "make dist" just zips up your source directory (including any untracked files that may be lying around) and you can't get it to include some generated files (i.e.
files that are generated from a maintainer build, that you don't expect users to generate because the build-time dependencies are onerous).<br> <p> On the other hand if I needed to create a Windows &amp; OS X installer I wouldn't dream of doing it by hand, I'd use CMake/CPack.<br> <p> The main personal discovery that I wanted to share was that when you have very custom needs[1] generating dumb build files (whether they're Ninja files or explicit Makefiles) from Python is way better than using `make` alone, or Python alone, or a higher-level build system that's designed for more conventional needs.<br> <p> [1]: My own use case is more like an integrator / distro packager, where I'm building an Operating System image from many components (I'm also the upstream of some of those components, other components may be using CMake or some other build system).<br> <p> </div> Thu, 17 Nov 2016 11:33:02 +0000 The Ninja build tool https://lwn.net/Articles/706680/ https://lwn.net/Articles/706680/ lkundrak <div class="FormattedComment"> ReactOS seems to use this as well.<br> </div> Thu, 17 Nov 2016 11:20:13 +0000 The Ninja build tool https://lwn.net/Articles/706678/ https://lwn.net/Articles/706678/ halla <div class="FormattedComment"> CMake has CPack instead for making releases (<a href="https://cmake.org/Wiki/CMake:CPackPackageGenerators#Archive_Generators">https://cmake.org/Wiki/CMake:CPackPackageGenerators#Archi...</a>), which I would say is a built-in equivalent of make dist. I don't use CPack myself, but then, even when krita was still using autotools, make dist wasn't used to make release tarballs. 
<br> <p> I used to use CPack at my day job, some years ago, and it worked fine creating distribution source tarballs, windows setup.exe using nsis and OSX app bundles in a disk image.<br> <p> I used to use ninja with cmake on Windows, and on Linux, but on the whole, while it worked fine, it just didn't add much value to my workflow -- probably the structure of my project, structurally inherited from the autotools days, with 1,100 kloc and about 150 library targets isn't that suited to it. Or it was just my habit.<br> </div> Thu, 17 Nov 2016 10:47:58 +0000 The Ninja build tool https://lwn.net/Articles/706676/ https://lwn.net/Articles/706676/ drothlis <div class="FormattedComment"> Yes that was wrong and poorly worded. Thanks for the correction. CMake lacks built-in support for things I take for granted with autotools (like "make dist") so I assumed its authors came from a non-Unix background.<br> <p> Ninja itself has good Windows support: It uses "response" files to work around command-line length limitations, it understands the dependency format generated by the Microsoft compiler, it has had performance optimisations motivated by operations that are particularly slow on Windows, and it has zero dependencies so it's easy to install.<br> <p> I'm not sure whether the "good Windows support" extends to CMake+Ninja, or if CMake users on Windows still prefer CMake with the Visual Studio back end. Anybody? 
Presumably if you're using CMake+Makefiles then CMake+Ninja will just work.<br> <p> </div> Thu, 17 Nov 2016 10:15:35 +0000 GYP -> GN https://lwn.net/Articles/706664/ https://lwn.net/Articles/706664/ drothlis <div class="FormattedComment"> A minor correction to the article: Since October 2016 Chromium doesn't use GYP to generate the Ninja files; it uses a new tool called GN.<br> <p> <a href="https://chromium.googlesource.com/chromium/src/+/master/tools/gn/README.md">https://chromium.googlesource.com/chromium/src/+/master/t...</a><br> <p> Thanks to Nico Weber for pointing this out.<br> <p> </div> Thu, 17 Nov 2016 08:52:00 +0000 The Ninja build tool https://lwn.net/Articles/706658/ https://lwn.net/Articles/706658/ halla <p>CMake doesn't "seem to come from the Windows world". As a quick look on wikipedia (https://en.wikipedia.org/wiki/CMake#History) shows, CMake started out to be <i>cross-platform</i> instead. And that's why I hope that the current trend, where more and more cross-platform libraries and applications get their own cmake build systems, will accelerate. Cross-platform is what counts for me these days, especially reliable cross-platform finding of dependencies. Thu, 17 Nov 2016 07:55:26 +0000 The Ninja build tool https://lwn.net/Articles/706654/ https://lwn.net/Articles/706654/ brouhaha If you want "a real programming language like Python", you can use SCons which is fully in Python, and doesn't have to invoke a different build tool with a different syntax. Thu, 17 Nov 2016 07:16:24 +0000 The Ninja build tool https://lwn.net/Articles/706653/ https://lwn.net/Articles/706653/ roc <div class="FormattedComment"> rr uses CMake and I was pleasantly surprised to find that generating Ninja build files for rr "just worked" and gave a nice speedup essentially for free. CMake + Ninja is a good combination.<br> </div> Thu, 17 Nov 2016 05:21:29 +0000