DNF 3: better performance and a move to C++

Posted Apr 10, 2018 5:06 UTC (Tue) by karkhaz (subscriber, #99844)
In reply to: DNF 3: better performance and a move to C++ by linuxrocks123
Parent article: DNF 3: better performance and a move to C++

To get parallel compilation, I would suggest a system like the one I use. I have something similar to your baker.py, except that there are usually several of them, one in each subdirectory of a project. Each baker.py emits the build information for its directory (and perhaps for subdirectories that don't have their own baker.py), and calling the script from the top level gathers up all the build information recursively. This keeps the dependency information local and easy to understand in a modular way.

The difference with my version is that I haven't written my own build tool: my "baker.py" always emits ninja syntax, rather than your custom syntax that needs its own build tool. So running baker.py is akin to running `configure`, and I then separately run ninja to build. You get parallel compilation and many other goodies for free with ninja.
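As a rough sketch of what I mean (the emit() function and the file layout are made up for illustration, not my actual script), the top-level gathering step might look something like this:

    # top-level configure step: collect ninja fragments from each
    # subdirectory's baker.py and write out a single build.ninja
    import os
    import runpy

    fragments = []
    for dirpath, dirnames, filenames in os.walk("."):
        if dirpath != "." and "baker.py" in filenames:
            # assume each baker.py defines emit(dirpath), returning
            # ninja syntax describing that directory's targets
            module = runpy.run_path(os.path.join(dirpath, "baker.py"))
            fragments.append(module["emit"](dirpath))

    with open("build.ninja", "w") as out:
        out.write("\n".join(fragments))

After that, a plain `ninja` invocation does the actual (parallel) build.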

If you don't like cmake, there's no need to write an entire build+meta-build system: cmake is just the meta-build half. As for the build half, I don't think it's controversial that ninja is wonderful, so you can just emit ninja from your meta-build system and use that to build.

I haven't published mine because it's ugly and needs a rewrite, but hopefully the idea is clear, given how your system works.


DNF 3: better performance and a move to C++

Posted Apr 10, 2018 10:45 UTC (Tue) by HelloWorld (guest, #56129) [Link] (2 responses)

What's the point of having a separate “meta build system”? Nobody else does that; see for instance Shake, sbt or mill. As far as I can see, you only need a “meta build system” if your underlying build system sucks.

DNF 3: better performance and a move to C++

Posted Apr 10, 2018 11:00 UTC (Tue) by karkhaz (subscriber, #99844) [Link] (1 responses)

1. It's faster. There should be no need to calculate the dependency graph every time you build. So the meta-build system emits the dependency graph, and the job of the build tool proper is merely to execute commands in accordance with the graph. You run the build tool repeatedly, and only re-run the meta-build tool when the dependency graph changes.

A big reason why ninja is so much faster than make is that it doesn't evaluate environment variables or have any kind of fancy logic (there's no support for all of the functions and expansions that make has). All of the decision-making should be done at configure time; there's no need to repeat it every time you build.
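To make that concrete, here's a minimal sketch (again, not my actual script) of a configure step that resolves the environment once and writes only literal commands into build.ninja, so ninja itself never has to look anything up:

    # configure-time decisions: CC and CFLAGS are read from the
    # environment here, once, and baked into build.ninja as literals
    import os

    cc = os.environ.get("CC", "cc")
    cflags = os.environ.get("CFLAGS", "-O2 -Wall")

    with open("build.ninja", "w") as out:
        out.write(f"rule cc\n  command = {cc} {cflags} -c $in -o $out\n")
        out.write("build main.o: cc main.c\n")

At build time ninja just runs the recorded command; nothing is left to expand beyond $in and $out.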

2. Multiple build directories, each with a different build configuration. With out-of-tree builds, I can have one build directory for release builds, one for debug, one for building only a sub-project, etc. You typically configure each of these directories once, e.g. by passing parameters or environment variables to the meta-build system; and then the build command is just `ninja` in each one. If the meta-build and build commands are combined, then you need to pass the parameters/variables _every single time_ you run a build, and need to remember which parameters are associated with each directory.
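For example (the flags here are hypothetical; they depend on how your meta-build script takes its configuration), setting up two build directories might look like:

    # configure each out-of-tree build directory once...
    import subprocess

    subprocess.run(["python3", "baker.py", "--out", "build-release",
                    "--cflags", "-O2"], check=True)
    subprocess.run(["python3", "baker.py", "--out", "build-debug",
                    "--cflags", "-g -O0"], check=True)

    # ...and from then on, rebuilding either configuration is just
    # `ninja -C build-release` or `ninja -C build-debug`, with no
    # parameters to remember.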

DNF 3: better performance and a move to C++

Posted Apr 10, 2018 12:29 UTC (Tue) by HelloWorld (guest, #56129) [Link]

Figuring out whether the dependency graph has changed is one of the things that a good build system will do for you, rather than making you think about whether you need to invoke the meta build system or not. And in fact the two-stage build systems already do that in part: I don't have to re-run the meta build system every time I add an #include to a .c file, even though that does change the dependency graph (there is now an edge from the .c file to the .h file). I don't see why other changes to the build graph should be handled differently.
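That case works because the generated build file delegates header discovery to the build tool: with ninja, the generator emits a rule using ninja's depfile support, so the compiler reports the #include edges at build time and ninja records them itself. A minimal sketch of a generator emitting such a rule (the file names are illustrative):

    # the generated rule asks the compiler to write a .d file listing
    # the headers each .c file includes; ninja's depfile/deps support
    # then tracks those edges without re-running the generator
    with open("build.ninja", "w") as out:
        out.write(
            "rule cc\n"
            "  command = cc -MD -MF $out.d -c $in -o $out\n"
            "  depfile = $out.d\n"
            "  deps = gcc\n"
            "build main.o: cc main.c\n"
        )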

As for your other argument, I don't buy that either. Release and debug builds should simply be different targets, and the build system should give you sufficient means of abstraction so that the common parts can be easily shared between the two.
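As a sketch of what I mean (hypothetical flags, and again just emitting ninja for the sake of the example), both configurations can live in one build file as different targets that share their common parts:

    # one generated build file, two targets; the common flags are
    # written once and shared between the release and debug rules
    common = "-Wall -Iinclude"
    configs = {"release": "-O2", "debug": "-g -O0"}

    with open("build.ninja", "w") as out:
        for name, extra in configs.items():
            out.write(f"rule cc_{name}\n"
                      f"  command = cc {common} {extra} -c $in -o $out\n")
            out.write(f"build out/{name}/main.o: cc_{name} main.c\n")

Then `ninja out/release/main.o` or `ninja out/debug/main.o` picks the configuration, with no separate build directories needed.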

DNF 3: better performance and a move to C++

Posted Oct 24, 2018 9:09 UTC (Wed) by linuxrocks123 (subscriber, #34648) [Link]

Ha! Yeah, it looks like I basically just cloned Ninja with bake. I hadn't heard of it before; thanks for the pointer. I'm still going to parallelize bake, probably tomorrow, now that I've turned my attention to it in response to your comment and found an easy way to do it.

The two systems are somewhat different, probably most notably in that a Bakefile calls its clients to generate the dependency tree and will give a previous client's output as input to a later client for modification. Of course, I haven't actually used that for anything. Also, the -sub feature looks unique, though I haven't actually written a baker.py that uses it yet, so perhaps it's harder to make use of than I suspected. No harm in keeping it around, though.

(Also yeah it's months later; sorry for not seeing this at the time.)

