
The Yocto Project 3.0 release

November 14, 2019

This article was contributed by Richard Purdie

The Yocto Project recently announced its 3.0 release, maintaining the spring/fall cadence it has followed for the past nine years. As well as the expected updates, it contains new thinking on getting the best of two worlds: source builds and prebuilt binaries. This fits well into a landscape where reproducibility and software traceability, all the way through to device updates, are increasingly important to handle complex security issues.

This update contains the usual things people have come to expect from a Yocto Project release, such as upgrades to the latest versions of many of the software components including GCC 9.2, glibc 2.30, and the 5.2 and 4.19 kernels. But there is more to it than that.

One major development in this release was the addition of the ability to run the toolchain test suites. The project is proud of its ability to run builds of complete Linux software stacks for multiple architectures from source, boot them under QEMU, and run extensive software tests on them, all in around five hours. In that time we can now include tests for GCC, glibc, and binutils on each of the principal architectures. As a result, the test report for the release now has around two million test results.

Build change equivalence

What is slightly less usual is a small line in the release notes starting with "build change equivalence". This innocuous-sounding line covers what could become one of the most useful enhancements to the project in recent years and may also be a first for large-scale distribution compilation in general. In basic terms, it allows detection of build-output equivalence and hence reuse of previously built binaries — but in a way never seen before — by building on technology already used by the project.

While the project has been able to reuse binaries resulting from identical input configurations for some time, 3.0 allows the reuse of previously built binaries when the output of the intermediate steps in the build process is the same. This avoids much rebuilding, leading to faster development times, more efficient builds, reduced binary artifact storage, and also a reduction in work like testing, allowing build and test resources to focus on "real" changes. In short, it addresses one of the complaints many Yocto Project users have about the system: its "continual building from source".

In some ways focusing on this feature is unfair to 3.0, as the release contains many other features, many of them small, incremental improvements to things the project has already done well. One other feature of note is that the change-equivalence work led naturally into more efficient "multiconfig" builds, where multiple different configurations can be built in parallel. These are now optimized when the builds share artifacts. The Yocto Project is one of the few systems where you can build components for different architectures or operating systems (e.g. an RTOS) in parallel and combine them all in one build.

The Yocto Project/OpenEmbedded build process

To understand more about what build change equivalence means and how it works, it first makes sense to understand how prebuilt binaries were already being handled. There is a common misconception that the Yocto Project (or OpenEmbedded, the underlying build system) always builds everything from source. This may have been true ten years ago but, in modern times, the project maintains what it terms its shared-state cache (or "sstate"). This cache contains binaries from previous builds that the system can reuse under the right conditions.

When building software with OpenEmbedded, a series of steps is followed. The project's execution engine, "BitBake", takes these steps ("tasks" in BitBake terms), builds a dependency tree, and executes them in the correct order. These tasks are usually written in Python or shell and, ultimately, are effectively represented by key/value pairs of data. These data pairs could be the topic of an article in their own right but, in short, they are how OpenEmbedded manages to be customizable and configurable. It does this through its data store and its ability to "override" values and stack configuration files from different sources, all of which can potentially manipulate the values. An example could be the following configuration fragment:

    PARALLELISM ?= ""

    MAKE = "make"
    MAKE_specialmake = "new-improved-make"

    OVERRIDES = "${@random.choice(['', 'specialmake'])}"

    do_compile () {
        ${MAKE} ${PARALLELISM}
    }
    addtask compile after configure before install

This fragment illustrates some of the capabilities of the syntax. Several keys are defined, including do_compile, which is promoted to a task with some ordering constraints on when it needs to run. The "do_" prefix is simply a convention to make it obvious which keys are tasks. A user could set PARALLELISM elsewhere to pass options like -j to make, speeding up compilation, or to turn it off if some source code doesn't support parallel building.

Also shown is a simple override where MAKE is changed to a different tool when specialmake is added to OVERRIDES. In this case, it is being triggered randomly just to show the ability to inject Python code to handle more complex situations. There is much more to the syntax, but the idea is you build up functions and tasks that are executed to build software, and these functions are highly customizable depending on many different kinds of input, including the target architecture, the specific target device, the policies being used for the software stack, and so on. The BitBake data store is the place where all the different inputs are combined together to build the system.

There is code in BitBake that knows how to parse these shell and Python configuration fragments — in the Python case using Python's own abstract-syntax-tree code — and from this figure out which keys and associated values were used to build the functions that represent the tasks. Once you know the values that are going into a given task, you can represent them as a hash. If the input data changes, the hash changes. Some values, such as the directory where the build is happening, can be filtered out of the hash but, in general, they're sensitive and accurate representations of the configuration being used as the input. In addition, hashes of files being added to the build are included.
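As a simplified sketch of the idea (not BitBake's actual signature code, which also mixes in dependency hashes and per-file checksums), the input hash for a task can be modeled as a digest over the sorted key/value pairs the task uses, with build-location variables filtered out. The variable names here are illustrative:

```python
import hashlib

def task_input_hash(task_data, excluded=("TOPDIR", "WORKDIR")):
    """Hash the key/value pairs feeding a task, skipping keys
    (such as build directories) that should not affect the result."""
    h = hashlib.sha256()
    for key in sorted(task_data):
        if key in excluded:
            continue
        h.update(key.encode())
        h.update(b"=")
        h.update(str(task_data[key]).encode())
    return h.hexdigest()

# Changing any relevant input value changes the hash...
a = task_input_hash({"MAKE": "make", "PARALLELISM": "-j8"})
b = task_input_hash({"MAKE": "make", "PARALLELISM": "-j4"})
assert a != b
# ...while filtered keys, such as the build directory, do not:
c = task_input_hash({"MAKE": "make", "PARALLELISM": "-j8", "TOPDIR": "/tmp/x"})
assert a == c
```

The important property is that the hash is a deterministic function of everything that can influence the task's output, so a matching hash is safe to treat as "same configuration".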

The sstate cache is therefore a store of output from tasks, indexed by a hash that represents the configuration and other inputs used to create that output. If the configuration (hash) matches, the object can be pulled from the cache and used instead of rebuilding it from scratch. This, however, is old technology; the project has been using sstate since 2010. One potential issue with this approach is how sensitive it is to changes. If you add some white space to a patch being applied to some piece of software early in the build process, it will change the input hashes to almost everything in the system and cause a full rebuild. There has, therefore, been a long-held desire to be more intelligent about when to rebuild. Solving a case of white space changes may be possible through optimization, but there are many other cases that could be optimized too, and the question becomes how to do this in a more generic way.

Better optimization

This is where the new work in 3.0 comes into play. So far, we've only talked about configuration inputs, but these inputs result in some kind of output such as, in the case of the example above, some generated object files. The project would usually be most interested in the output from "make install", which would be generated by a do_install() function following do_compile(). For more efficient builds, it became clear that the project should start analyzing the output from the intermediate tasks, so it came up with a way of representing the output of a task as a hash too. The algorithm currently chosen to do this looks at the checksums of the output files, but it ignores their timestamps. There are lots of potential future features that could be added here, such as representing the ABI of a library instead of its checksum but, for now, even this simplistic model is proving effective.

Once you have this output hash, you can compare it to hashes of previous builds of this task. If the output hashes from two builds match, then the input hashes from those two builds are deemed to be "equivalent". This means that the current build matched the previous build up to this point; it follows that anything beyond this point should also match, assuming the same subsequent configuration, even though the input configurations before this point were different. At this point, the build can therefore stop building things from scratch and start reusing prebuilt objects from sstate.

To make this work, the system needs to store information about these "equivalences", so the project added a "hash equivalence server" — a server that stores which input hashes generate the same output data as other input hashes and are thus equivalent. The first input hash given to the server is deemed to be the "base" hash and used to resolve any other matching hashes to that value. This server is written as a network service so that multiple different builds can feed it data and benefit from its equivalence data to speed up their builds.
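In essence, the server maintains a mapping from output hashes to a canonical input hash, and any later input hash that produces the same output resolves to that base value. A dictionary-based sketch of that bookkeeping (the real hash equivalence server is a network service, and the class and method names here are illustrative):

```python
class HashEquivalenceStore:
    """Toy in-memory model of a hash equivalence store."""

    def __init__(self):
        # Maps an output hash to the "base" input hash that first produced it.
        self._by_output = {}

    def report(self, input_hash, output_hash):
        """Record a build result; return the base hash this input resolves to."""
        # The first input hash reported for an output hash becomes the base;
        # later equivalent inputs resolve to that same value.
        return self._by_output.setdefault(output_hash, input_hash)

store = HashEquivalenceStore()
# Two different configurations that happen to produce identical output...
u1 = store.report("input-aaa", "output-123")
u2 = store.report("input-bbb", "output-123")
# ...resolve to the same base hash, so prebuilt objects can be reused.
assert u1 == u2 == "input-aaa"
```

Because the store only ever grows equivalences, multiple builds can safely feed it results concurrently and benefit from each other's findings.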

This is all good, but BitBake itself required major surgery to be able to use this data. Previously, it would look for prebuilt objects from sstate, install them, then proceed with anything it needed to build from source. Installing from sstate is a different operation, since the task ordering is reversed compared to normal task execution. To understand why, consider that, in some cases, if an sstate object is available for some task late in the build, you can skip all the earlier tasks leading up to that object, as those objects aren't needed. That means you need to start with the tasks that would normally be last to run, then work backward up the dependency tree, installing the sstate objects in reverse order, stopping when no dependencies are needed. In simpler terms, this means that mixing sstate tasks and normal tasks is hard.

To do this, BitBake in 3.0 now has two different task queues — sstate tasks and non-sstate tasks — and both queues need to be able to execute in parallel. When sstate equivalence is detected, tasks are "rehashed" with the new hashes and can migrate back to the sstate queue. The build can alternate between these two states many times as different equivalences are found and objects from sstate become valid. It makes for a fascinating scheduling problem.

Reproducibility and more

There is a further consideration here: reproducibility. This is a hot topic for many distributions, so there have been many people quietly working away at making software builds generate consistent binary output. The Yocto Project is no exception and has been trying to do its part, including sending patches to upstream projects where it can (including the Linux kernel). This ties in well with hash equivalence, since the higher the reproducibility of the output, the more equivalent hashes should be found. In 3.0, automated tests for reproducibility were added. The project has this working for building the minimal core image, including its toolchain, and will continue to improve this feature in the next release.

While this is a beta feature in 3.0 that is not enabled by default, the project believes it represents a significant optimization in working with source builds. Perhaps the Yocto Project can finally put the reputation of "always building from source" behind it.

Finally, it's also worth mentioning a quick follow-up to a previous article that discussed how the project found a Linux kernel bug with its automated testing. As the Yocto Project was approaching the 3.0 release and switching to the 5.2 kernel, a similar situation occurred where developers noticed that the "ptest" tests for the strace recipe were hanging. "ptests" are where the project has packaged up the upstream tests that come with software; they are run in the target environment. This was discussed on LKML and ultimately found to be a bug that only appeared when preemption was enabled.

The takeaway from all this is that "from scratch" source builds for the Yocto Project should be something that happens less frequently in the future, particularly as build reproducibility continues to improve. Along with the obvious benefits faster builds bring to developers, they reduce storage and load requirements and reduce testing requirements too. This is particularly important when you consider security updates for end-user devices, where little of the system should change for a given update. Minimizing rebuilding allows more focused testing and, thus, reduced risk of unintended side effects, which in turn may encourage more updates to be made.

The project plans to make this functionality the default in the near future and is looking forward to further improvements in reproducibility as it finalizes its long-term support plans, in which these developments can play a key role.

[Richard Purdie is one of the founders of the Yocto Project and its technical lead.]

Index entries for this article
GuestArticles: Purdie, Richard



The Yocto Project 3.0 release

Posted Nov 16, 2019 5:31 UTC (Sat) by geuder (subscriber, #62854) [Link]

Thanks for this insightful article. By reading the manuals I got a similar understanding after 1+ year of use and a lot of head-scratching...

For the adventurous among us, this commit seems to be the key to trying the experimental hash equivalence feature:

http://git.yoctoproject.org/cgit/cgit.cgi/meta-yocto/comm...

Still on my TODO list, when there is time :(

The Yocto Project 3.0 release

Posted Nov 18, 2019 13:07 UTC (Mon) by weberm (guest, #131630) [Link] (2 responses)

The Yocto Project is a prime example of the two things that are hard in software engineering: naming things, cache invalidation and counting.

The Yocto Project 3.0 release

Posted Nov 18, 2019 13:13 UTC (Mon) by clugstj (subscriber, #4020) [Link]

Thank you, I needed a good laugh on a Monday morning.

The Yocto Project 3.0 release

Posted Nov 19, 2019 1:17 UTC (Tue) by KaiRo (subscriber, #1987) [Link]

Hmm, I know those two hard problems as "naming, cache invalidation and off-by-one errors" - but it comes out to pretty much the same...

The Yocto Project 3.0 release

Posted Nov 18, 2019 17:26 UTC (Mon) by rweikusat2 (subscriber, #117920) [Link]

I've actually read this twice, but - just as with all "Yocto documentation" - it's basically incomprehensible unless the person trying to determine what the text says already knows this. It seems to be something like

bitbake can now detect semantically irrelevant changes to input files by examining the generated output. If that's identical to output generated earlier using a different set of configuration parameters and inputs files, rebuilding anything which depends on the output is not necessary and won't be done.

The Yocto Project 3.0 release

Posted Nov 21, 2019 1:29 UTC (Thu) by ernstp (guest, #13694) [Link] (1 responses)

> OVERRIDES = "${@random.choice(['', 'specialmake'])}"

This specific example would of course not work since it would give a Taskhash mismatch (sometimes), since the definition of the task would change when it's re-parsed. :-)

The Yocto Project 3.0 release

Posted Nov 23, 2019 0:01 UTC (Sat) by rpurdie (guest, #131960) [Link]

It would indeed give task hash mismatches, it was a fun example and I'm glad people are awake! :)

The Yocto Project 3.0 release

Posted Nov 22, 2019 4:30 UTC (Fri) by therealjumbo (guest, #135383) [Link] (3 responses)

Congratulations on the release Richard and the yocto team!

I was under the impression that reproducible builds were not something yocto was interested in fully chasing, since they felt they would need Task Specific Sysroots, instead of just Recipe Specific Sysroots.
https://www.yoctoproject.org/docs/2.4/ref-manual/ref-manu...
https://wiki.yoctoproject.org/wiki/Reproducible_Builds

I looked around and I did find where I read that:
http://lists.openembedded.org/pipermail/openembedded-arch...

So how did you manage to solve that issue?

The Yocto Project 3.0 release

Posted Nov 22, 2019 23:54 UTC (Fri) by rpurdie (guest, #131960) [Link] (2 responses)

We haven't solved that issue; however, it has turned out to be more of a theoretical problem than one with effects in real-world usage. Back in 2016 that wasn't clear, but we now have a few years of use to base that on. There are ways of using the system where you'd avoid the problem and ways where you'd have a large risk of exposure. It could be solved at the cost of much longer build times, but we aren't seeing evidence that it is an issue that justifies that. There are also protection mechanisms in place that help avoid issues, such as the removal of sysroot artifacts whose hashes have changed. Those mechanisms came after the initial introduction of Recipe Specific Sysroots.

The Yocto Project 3.0 release

Posted Nov 25, 2019 2:04 UTC (Mon) by therealjumbo (guest, #135383) [Link] (1 responses)

Ok cool. We'll probably be updating to 3.0 sometime in 2020. Right now we are using 2.4. We do have an internal application which will definitely benefit from the optimization scheme described in the article, so thanks for that.

Just for fun, I tried replacing python3.5 with pypy (in 2.4, not in 3.0, so maybe you already fixed this), just to see if it worked. It didn't, I started to get parsing errors, is this something yocto is interested in supporting or not really?

The Yocto Project 3.0 release

Posted Nov 27, 2019 12:52 UTC (Wed) by rpurdie (guest, #131960) [Link]

It's not something we've looked at. What benefit are you aiming for by replacing it?

The Yocto Project 3.0 release

Posted Nov 24, 2019 17:13 UTC (Sun) by mckoan (guest, #135718) [Link]

Thank you Richard and the Yocto Project team for the great work and for releasing this 3.0 milestone.


Copyright © 2019, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds