
Maintainer confidential: Opportunities and challenges of the ubiquitous but under-resourced Yocto Project (Linux.com)

Over at Linux.com, Yocto Project architect Richard Purdie writes about various kinds of problems that the project is experiencing, some of which stem from its success and growth. It is a story that will likely resonate with other open-source projects.
Our scale also means patch requirements are more demanding now. Once, when the number of people using the project was small, the impact of breaking things was also more limited, allowing a little more freedom in development. Now, if we accept a change commit and something breaks, it becomes an instant emergency, and I’m generally expected to resolve it. When patches come from trusted sources, help will often be available to address the regressions as part of an unwritten bond between developers and maintainers. This can intimidate new contributors; they can also find our testing requirements too difficult.

We did have tooling to help new contributors—and also the maintainers—by spotting simple, easily detected errors in incoming patches. This service would test and then reply to patches on the mailing list with pointers on how to fix the patches, freeing maintainer time and helping newcomers. Sadly, such tools require maintenance, and we lost the people who knew how to look after this component, so it stopped working. We formed plans to bring it back and make the maintenance easier, but we’ve struggled to find anyone with the time to do it. I’ve wondered if I should personally try to do it; however, I just can’t spend the chunk of time needed on one thing like that, as I would neglect too many other things for too long.





Posted Jan 30, 2023 17:22 UTC (Mon) by hDF (subscriber, #121224) [Link] (25 responses)

I've had the "pleasure" of using Yocto a few times, and I don't understand why anyone would touch it without getting paid for it. Even Android seems easier to configure and build. I don't mean to sound ungrateful. I'm glad that Yocto exists and shipped Linux to a ton of hardware, some of which I use. However, I think that many large modern projects have outgrown makefiles and shell scripts. New tools like Bazel and Nix can be much better for putting together system images with provenance out of the box.

Most developers simply don't care about packaging their software, so projects like Yocto will continue having trouble attracting contributors. The one place where I see an exception to this is NixOS. As a latte-sipping millennial who cares about packaging, I now have a choice to make. I can spend my life fighting makefiles, bash, and autotools (the antithesis of reproducibility) and all of their idiosyncrasies. Then I can push patches to a mailing list and wait until someone notices them. Alternatively, I can join a number of other projects on GitHub with active forums, chat, CI, and docs. I already do enough of the former at my day job, so in my free time I will choose the latter every time.


Posted Jan 30, 2023 17:58 UTC (Mon) by pizza (subscriber, #46) [Link] (11 responses)

> I can spend my life fighting makefiles, bash, and autotools (the antithesis of reproducibility) and all of their idiosyncrasies. Then I can push patches to a mailing list and wait until someone notices them. Alternatively, I can join a number of other projects on GitHub with active forums, chat, CI, and docs. I already do enough of the former at my day job, so in my free time I will choose the latter every time.

Be careful, you are conflating two separate things: there is plenty of stuff on GitHub that utilizes some combination of makefiles/bash/autotools, and there are plenty of projects that don't use those tools and don't exist on GitHub.

Yocto is built around cross-compiling, and _that_ is where most of the impedance matching problems come from, as most [1] projects [2] consider that to be an alien concept.

(FWIW I also consider Yocto quite frustrating, but IMO it seems to be the least-worst option for what it does)

[1] autotools handles this better than most
[2] Especially things that are not predominantly C-based, but bundle/require C libraries to work.


Posted Jan 30, 2023 19:09 UTC (Mon) by dullfire (guest, #111432) [Link] (10 responses)

Yeah, cross-compiling is nuanced and important, and I wish more projects had a clue about it.

Also, my personal experience has been that Buildroot is more legible/workable than Yocto. Some upstream vendors use Yocto layers for their BSPs, and figuring out what goes where in them is a giant PITA, especially when you do not want all their cruft.


Posted Jan 31, 2023 10:22 UTC (Tue) by dottedmag (subscriber, #18590) [Link] (9 responses)

This is an unfortunate artefact of history.

I too thought cross-compiling was complicated, until I saw several toolchains (non-C) where cross-compilation was a goal from day 1, and hence seamless.

Even C can be cross-compiled easily as Zig shows.


Posted Jan 31, 2023 10:43 UTC (Tue) by jrtc27 (subscriber, #107748) [Link] (5 responses)

The problems with cross-compiling aren't ones that zig can magically solve (all it really does is provide a set of pre-built sysroots; it doesn't do much more than Clang can). The problems are: (a) all the broken configure checks out there that try to run binaries to test functionality; (b) things that probe the build system rather than the host (using the autotools terminology), whether that's OS, version, libraries, headers or other things build systems like to look at; (c) projects that need to build their own tools to run during the build process and don't have separate (HOST_)CC and BUILD_CC. It doesn't matter how easy it is to use your cross-compiler when it gets explicitly used incorrectly.
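Failure mode (b) can be made concrete with a small sketch (Python purely for illustration; both helper names are made up): the broken version probes the machine running the build, while the fixed version threads an explicit target triple through.

```python
import platform

# Failure mode (b): probing the machine running the build instead of the
# target. Illustrative sketch only; these helpers are not from any real tool.
def broken_target_arch():
    # Reports whatever the BUILD machine is -- wrong when cross-compiling.
    return platform.machine()

def fixed_target_arch(target_triple):
    # Derives the architecture from an explicitly passed target triple.
    return target_triple.split("-")[0]

print(fixed_target_arch("aarch64-linux-gnu"))  # → aarch64
```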


Posted Jan 31, 2023 11:28 UTC (Tue) by dottedmag (subscriber, #18590) [Link] (4 responses)

This is actually my point: if a typical toolchain for a language does not support cross-compilation without a lot of jumping through hoops, then a typical project in that language won't either.

E.g. pure-Go projects are rarely non-cross-compilable.


Posted Jan 31, 2023 12:23 UTC (Tue) by mathstuf (subscriber, #69389) [Link] (3 responses)

> E.g. pure-Go projects are rarely non-cross-compilable.

Go projects also never need to check if some stdlib/POSIX API is hopelessly broken and/or wonky by running an actual program to see what the behavior is.


Posted Jan 31, 2023 22:45 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

Has it actually been needed in the recent past?


Posted Feb 1, 2023 4:07 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (1 responses)

`curl` tests a cross product of whatever size to determine the `const`-ness and types of the `send` call in order to have a warning-less build. If it were C++, you could just use `<type_traits>` to ask these things statically, but instead there's a slew of "is it this one?" calls.

There's also NumPy which builds (and runs?) some code to extract out floating point representations, HDF5 which does some system introspection, and others like it. Of course, some projects still aim to support long-dead platforms (Vim comes to mind) that have had wonky behaviors that need to be tested for.

Why most of this stuff isn't statically encoded and tested only if otherwise unknown is beyond me, but here we are…
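One reading of "statically encoded and tested only if otherwise unknown" is a table of recorded platform facts consulted before any runtime probe. A minimal sketch under that assumption (all names hypothetical; this is not how curl or NumPy are actually structured):

```python
# Known platform facts, recorded once; the expensive run-a-test-program probe
# is only a fallback for platforms missing from the table.
KNOWN_FACTS = {
    ("linux", "glibc"): {"send_buf_is_const": True},
}

def probe_send_constness():
    # Stand-in for compiling and running a configure-style test program,
    # which is exactly what breaks under cross-compilation.
    raise RuntimeError("would need to execute a test binary on the target")

def send_buf_is_const(platform_key):
    facts = KNOWN_FACTS.get(platform_key)
    if facts is not None:
        return facts["send_buf_is_const"]  # static answer, no probe needed
    return probe_send_constness()          # only for genuinely unknown platforms

print(send_buf_is_const(("linux", "glibc")))  # → True
```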


Posted Feb 27, 2023 12:44 UTC (Mon) by nix (subscriber, #2304) [Link]

> Why most of this stuff isn't statically encoded and tested only if otherwise unknown is beyond me, but here we are…

Because of horrible experiences with xmkmf / imake, which did exactly that. It didn't scale, half the results were still wrong (because it turned out they varied on a smaller granularity than "all x86-linux systems" or whatever) and it usually devolved to "hack this ancient analogue of config.h before you start, no, we can't tell you what things are wrong that you'll need to adjust or what to put in there, you have to know".

Bear in mind that Autoconf can put its cache files systemwide to speed up configure checks. Next to nobody does it because it's too unreliable: the configure answers for a given cache variable can vary *between projects*, since they often rerun the same configure checks (with the same cache variables) with different compiler options and what-have-you. Anything that tries to do the same thing on an even larger scale will have even bigger problems.


Posted Jan 31, 2023 12:43 UTC (Tue) by dullfire (guest, #111432) [Link]

Please do not get lost in the nuance of what I said: I said cross compiling is nuanced (as other people in this thread have demonstrated). This does not mean it need be complicated.

Indeed, while I'll probably get lynched for saying it: the truth is, a well-written autoconf build system is very easy to cross-compile, while a badly written one (or, worse, a project that attempts to reinvent Make in a common interpreted language for its build system, or a poorly written CMake one, or probably any other build system) may fall flat on its face and approach impossible to cross-compile.

As people have pointed out, some of the common stumbling blocks are: auto-detection, host info leakage, and self-hosting (that is, the project needs to run some part of itself on the build machine to fully build itself, even for another target).

And while there are tools to make this easier, the fact remains that lots of notable projects don't use them, or, worse, insist on using quasi-home-rolled build systems that ignore everything learned about software build systems in the last 40 years.


Posted Jan 31, 2023 15:23 UTC (Tue) by Sesse (subscriber, #53779) [Link] (1 responses)

Clang has full cross-compiling (just give -target x86_64-pc-windows-msvc, or whatever; every Clang binary can compile for any target). The problem isn't really cross-compiling, though, it's cross-configuring and of course cross-getting-all-dependencies-in-the-first-place.


Posted Feb 1, 2023 4:09 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

> every Clang binary can compile for any target

Official ones, sure. My personal builds tend to have unnecessary targets compiled out because LLVM/Clang take long enough to compile already :) .


Posted Jan 30, 2023 18:39 UTC (Mon) by bferrell (subscriber, #624) [Link] (10 responses)

If ya find Yocto onerous, try openwrt.

The simple fact of the matter is that once more than one build system gets involved, "cat herding" is a real thing.

These are really full distro build systems. The fact that anyone tackles this and even gets close... My 'ats off to the duke!


Posted Feb 1, 2023 7:45 UTC (Wed) by MrWim (subscriber, #47432) [Link] (9 responses)

> The simple fact of the matter is that once more than one build system gets involved, "cat herding" is a real thing.

I think this is the core problem that makes all this so painful and slow. I wrote a bit about this here:

https://blog.williammanley.net/2020/05/25/unlock-software...

For a long time I've been thinking about tools for porting between build-systems, to try and make solving this problem tractable. The idea would be:

1. Get a list of compiler command line invocations from a build process via ptrace.
2. "Disassemble" this into a build description, like Bazel BUILD files or those of some similarly hermetic build system. You'd use knowledge of the compiler command line to understand build options and dependencies.

The goal being that once you've got that hermetic build description you solve the meta-build system problem by side-stepping it entirely. You'd have a complete and composable build graph.

Notes:

For (1) there is already a format for this, and tools to generate it: compile_commands.json ( https://clang.llvm.org/docs/JSONCompilationDatabase.html ). You'd probably have to run the build several times with different configure options for sufficient understanding.
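To make step (2) concrete, each compile_commands.json entry records the directory, file, and full command line, from which options and include paths can be recovered. A sketch with a made-up entry (the fragment below is illustrative, not from any real project):

```python
import json
import shlex

# A made-up compile_commands.json fragment, in the format described at
# https://clang.llvm.org/docs/JSONCompilationDatabase.html
entries = json.loads("""
[{"directory": "/src/app",
  "command": "cc -DNDEBUG -Iinclude -c main.c -o main.o",
  "file": "main.c"}]
""")

for entry in entries:
    args = shlex.split(entry["command"])
    # Recover build options and header search paths from the command line.
    defines = [a for a in args if a.startswith("-D")]
    includes = [a for a in args if a.startswith("-I")]
    print(defines, includes)  # → ['-DNDEBUG'] ['-Iinclude']
```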


Posted Feb 1, 2023 9:37 UTC (Wed) by laarmen (subscriber, #63948) [Link]

FWIW I often use `bear` to get a compile_commands.json out of arbitrary build systems.


Posted Feb 1, 2023 13:51 UTC (Wed) by andresfreund (subscriber, #69562) [Link] (5 responses)

There's no way to get correct dependencies just from the command lines in the general case. So I don't see how you'll end up with something robust this way.


Posted Feb 1, 2023 21:55 UTC (Wed) by madscientist (subscriber, #16861) [Link] (3 responses)

This can only work in the case where (a) you always want to do complete from-scratch builds, never incremental builds (since many build systems want this, it is not a problem in itself, though incremental builds are wanted by developers) and (b) you are willing to do all your builds serially and never take advantage of parallel compilation (a serious limitation).

The problem with any sort of heuristic like, "this is a compile line so it can be run in parallel with this other compile invocation" is that builds just aren't that easy. A "real project" which has a build system that's as simple as "compile a bunch of files, maybe make some libraries, then link it" is vanishingly rare.

Builds want to generate output files (create headers or source files that contain Git SHA ids, etc.), they have all sorts of DSLs that they use to generate source code, they rely on other tools (yacc/lex are just the OG examples). And it's not even enough to say "do all the weird stuff that's not a compilation first, then do all the compilation" because sometimes the output generated requires some of the build to be completed first.

I think if you ever tried to actually create this "meta-build tool" you are imagining, you'll quickly run up against reality. IMO the only possible way to succeed is to create a "meta-build tool" that can use each project's own build environment, with a unified interface over it. This is basically what Arch does, and there are many other similar examples.

You reference compile_commands.json but that is manifestly insufficient as it captures ONLY compile commands and there is so, so much more to even a basic build system that is not represented by compile_commands.json.


Posted Feb 2, 2023 5:43 UTC (Thu) by NYKevin (subscriber, #129325) [Link]

> I think if you ever tried to actually create this "meta-build tool" you are imagining, you'll quickly run up against reality. IMO the only possible way to succeed is to create a "meta-build tool" that can use each project's own build environment, with a unified interface over it. This is basically what Arch does, and there are many other similar examples.

Honestly, that sounds a lot like Bazel[1] to me... but you can program any tool to invoke any other tool, so I suppose any build system could potentially function as a "meta-build tool," if you just tell it what commands to run and which files depend on each other.

[1]: Bazel is (the FOSS equivalent of) what I use at work, so this is probably just personal bias.


Posted Feb 2, 2023 20:24 UTC (Thu) by MrWim (subscriber, #47432) [Link] (1 responses)

> This can only work in the case where (a) you always want to do complete from-scratch builds, never incremental builds (since many build systems want this, it is not a problem in itself, though incremental builds are wanted by developers) and (b) you are willing to do all your builds serially and never take advantage of parallel compilation (a serious limitation).

If your builds are hermetic, incrementality and parallelism come naturally. You can achieve much greater parallelism with finer-grained dependencies, as you can be building .c files from your application and from its dependencies at the same time. Incrementality improves as well, since reproducibility means you can cache artifacts across builds, and you never need to `make clean` to have confidence in the result.
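The caching claim can be sketched as a content-addressed cache: if a step is hermetic, its output is a pure function of its declared inputs, so identical inputs are never rebuilt. A toy illustration (not any real tool):

```python
import hashlib

# Toy content-addressed build cache. With hermetic steps, a step's output
# depends only on its declared inputs, so a cache key derived from those
# inputs makes `make clean` unnecessary.
cache = {}

def run_step(name, inputs, action):
    key = hashlib.sha256(name.encode() + inputs).hexdigest()
    if key not in cache:               # rebuild only on a cache miss
        cache[key] = action(inputs)
    return cache[key]

compile_c = lambda src: b"obj:" + src  # stand-in for a real compiler
run_step("cc main.c", b"int main(){return 0;}", compile_c)
run_step("cc main.c", b"int main(){return 0;}", compile_c)  # cache hit
print(len(cache))  # → 1
```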

So the question is: how to get from where we are to there? Is there some tooling that could assist in such a transition, making boiling the ocean tractable? This is the problem my thought experiment is addressing.

> IMO the only possible way to succeed is to create a "meta-build tool" that can use each project's own build environment

This is exactly the model that I'm trying to move away from. As you say: every distro has their own version of such a system. There is another way, which is the way that Google consumes open-source software. More here: https://blog.williammanley.net/2020/05/25/unlock-software...

So the question is: what tooling would make moving away from the "meta-build system model" to the "single hermetic build system" model tractable?

> You reference compile_commands.json but that is manifestly insufficient

Agreed; I referenced it to make the proposal more concrete. You'd need to capture a lot more than just compiler invocations, including awk, sed, yacc, etc. The point is that there are tools out there (like bear) that can do this using ptrace, without specific knowledge of the build system they're running against.

No doubt there are plenty of build steps that are specific to certain projects, but the bulk of the conversion work would not be project-specific.


Posted Feb 2, 2023 20:50 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

The Nix developers came asking for something adjacent to this from CMake a while ago:

https://gitlab.kitware.com/cmake/cmake/-/issues/17114


Posted Feb 2, 2023 16:48 UTC (Thu) by MrWim (subscriber, #47432) [Link]

> There's no way to get correct dependencies just from the command lines in the general case. So I don't see how you'll end up with something robust this way.

The most important thing is to capture the outputs of any build command. The inputs are relatively easy to derive - you can run the build without the inputs and see where it fails - or use a FUSE filesystem or ptrace to see which files the build attempts to open. Or (like ccache) go to great efforts to understand the build command line. Tools to do this exist.
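As a stand-in for the ptrace/FUSE approach, Python's audit hooks can observe the file opens made by an arbitrary step without understanding its command line. A sketch only; real tools like bear trace at the syscall level:

```python
import os
import sys
import tempfile

# Record which files a "build step" opens, without parsing its command line.
# Python's sys.addaudithook stands in for ptrace/FUSE here.
opened = []
sys.addaudithook(lambda event, args: opened.append(str(args[0]))
                 if event == "open" else None)

path = os.path.join(tempfile.mkdtemp(), "generated.h")
with open(path, "w") as f:        # the opaque build step
    f.write("#define ANSWER 42\n")

print(any(p.endswith("generated.h") for p in opened))  # → True
```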

As NYKevin mentions below: bazel provides the robustness. It enforces "hermetic" builds where the build will fail if you have not correctly described the dependencies. This is why such a tool would be generating bazel BUILD files (or some other similarly "hermetic" build system).

I agree that such a tool couldn't run without human supervision over all codebases out there, but I believe that such a tool could dramatically reduce the manpower requirements for porting a bunch of build systems. As I said in the conclusion of the blog post linked above - the problem is social because there is too much work for one person to do - but with the right tooling you might be able to dramatically reduce the number of people you'd need to be pulling in the same direction.


Posted Feb 2, 2023 1:26 UTC (Thu) by lisandropm (subscriber, #69317) [Link] (1 responses)

Interesting, but from my point of view the key difference is that Google has **a lot** of build power, and also manpower. Trying to run a binary distro in that way will probably be a failure... except if you have the money.


Posted Feb 2, 2023 16:33 UTC (Thu) by MrWim (subscriber, #47432) [Link]

In terms of manpower, I agree that it would take significant effort to move to a model like this.

In terms of build power, I think a model like this could help, because it's much more efficient. You only rebuild the files that have changed, so builds should be faster and use fewer resources. Currently, if the man page of LibreOffice changes, the whole package must be rebuilt. With a build system that understands the dependencies between the files in a more fine-grained manner, this wouldn't be a problem.


Posted Jan 30, 2023 18:39 UTC (Mon) by dskoll (subscriber, #1630) [Link] (1 responses)

Yes, I think part of Yocto's difficulty is that it's insanely complicated, so it's hard for people to learn to use it, let alone develop for it.

I haven't used Yocto directly, but have used it indirectly through Xilinx's Petalinux. It seems to me that huge companies that base critical infrastructure on Yocto should be ponying up assistance.

I'm happily no longer doing embedded work. It's gross and dirty work. ☺


Posted Jan 30, 2023 20:13 UTC (Mon) by shemminger (subscriber, #5739) [Link]

If you want to laugh/cry/scream, just look at how Yocto reinvents the kernel configuration system.


Posted Jan 30, 2023 20:57 UTC (Mon) by cry_regarder (subscriber, #50545) [Link] (1 responses)

This reminds me a bit of a situation I ran into a while back that led me to write:
Support Traps — A cautionary tale for infrastructure engineers
https://www.linkedin.com/pulse/support-traps-cautionary-t...


Posted Jan 30, 2023 20:59 UTC (Mon) by cry_regarder (subscriber, #50545) [Link]

"A support trap is a situation in which the work to support customers starves out other work. It is at risk of occurring when a team builds a key horizontal platform or capability for other customers, internal or external, to build their products on. For products, one can measure business metrics such as revenue and user engagement. However for platforms, like the search federation system introduced above, it isn’t so easy. Rather than capturing business value, success is often measured by the number of user facing products leveraging the platform, the transaction rate (e.g. queries per second), or the cost per transaction. This leads to a perverse situation where there is a huge reward to onboard customers quickly and at scale. Build the minimum viable product and move-on to the next target."


Posted Jan 31, 2023 7:22 UTC (Tue) by koenkooi (subscriber, #71861) [Link] (8 responses)

This is largely self-inflicted; the Yocto side of OpenEmbedded has always been keen on having two classes of patch acceptance: if you pony up enough cash, anything from, for example, an intel.com email address gets accepted without, or even *despite*, review. And then there's the class of contributions from people who lack the magic email addresses.
The reasoning given was the same as in the article: the probability of $corporate_sponsor having both incentive and bandwidth to fix issues that crop up is higher than for "drive by" contributors. In practice, it was Richard or one of the other usual suspects who had to fix those issues, or a release freeze took a few weeks longer.

I know this demotivated me, and others have expressed the same sentiment. So it doesn't surprise me that those 2nd-class contributors aren't chomping at the bit to work on this; it has actively worked against them in the past. Being told "The autobuilder needs to process your patch before it gets considered, but *their* patch doesn't need that", and then seeing your patch delayed even more because, surprise, surprise, the non-tested patch broke everything, isn't the motivation Yocto thinks it is.

My day job doesn't involve working on OpenEmbedded that much anymore, but I don't bother sending fixes upstream when I do work on OE; it is just too much hassle and hoop-jumping for 2nd-class contributors like me.


Posted Jan 31, 2023 8:15 UTC (Tue) by alexbk (subscriber, #37839) [Link] (7 responses)

This may have been the case years ago, but for as long as I can remember, all patches have been held to the same standard: they have to pass on the autobuilder. Admittedly, fixing the resulting failures is harder for individual contributors than it is for companies, due to bitbake being a heavy workload on a typical machine, but patch review itself holds everyone to the same standard.

Yocto would not survive without sponsors and you know it. Someone has to provide infrastructure and give RP an income. Sponsors get to decide what is tested and what isn’t (that’s why there is no official RISC-V support, for example), but they don’t get privileges on patch review.


Posted Jan 31, 2023 16:55 UTC (Tue) by ejr (subscriber, #51652) [Link]

Irony. Yocto was used for the first RISC-V Linux image.


Posted Feb 1, 2023 7:09 UTC (Wed) by koenkooi (subscriber, #71861) [Link] (5 responses)

Passing the autobuilder is a hilariously low bar. There have been patches going into a code freeze (yes...) that passed the autobuilder, but a two-second look at them showed they would break package management because they changed the logic to prefer *lower* version numbers. And the only public reviews of that patch said that exact thing. It still went in.

Having 'passing the autobuilder' as a requirement is only slightly more stringent than saying 'the patch has to actually apply'. It says nothing about the precedence given to actual review, the standards of that review, or the willingness to ignore review.

And fun fact: it has happened multiple times that the patches posted to the mailing list didn't match the actual commits in the branch/tag listed in the cover letter. It wasn't fun debugging that; the mailing-list versions were obviously correct, while the ones that got pulled in were subtly different and broken at runtime. Which version did the autobuilder build, the ones reviewed on the list or the branch?


Posted Feb 1, 2023 8:43 UTC (Wed) by alexbk (subscriber, #37839) [Link] (4 responses)

All of this must be from many years ago. It just doesn't match my experience at all. Especially the comment about the autobuilder being only slightly more stringent than git apply succeeding is very eyebrow-raising.

Ok, having said that, I had to go and check. I see you've contributed a total of 21 patches in the last 8 years. Me, 3431 patches in the same time. Not to brag, but to give context.


Posted Feb 2, 2023 5:48 UTC (Thu) by NYKevin (subscriber, #129325) [Link] (3 responses)

As someone who has zero contributions to Yocto and barely knows what it is, I can say that people have very long memories. If someone has a bad experience, posts about it, and the immediate response is to invalidate that experience... well, let's just say that does not sound like a community I would be eager to participate in, either.


Posted Feb 2, 2023 6:55 UTC (Thu) by alexbk (subscriber, #37839) [Link]

Yes, old experiences do expire and recent experiences (like mine) do invalidate them. If I’m reading restaurant reviews I would sort by date and read from the top, and I wouldn’t want anything older than a year to be factored into the numerical rating. Three years, tops. Same goes for open source communities.

My Yocto experience is a five-star one; I love being a part of the community and would want to stay there. It's literally the project I owe my career in the embedded industry to - and I was a relative latecomer, having joined by random chance in 2015 with zero knowledge about embedded Linux. I hope my story at least gives you pause.


Posted Feb 2, 2023 12:01 UTC (Thu) by rpurdie (guest, #131960) [Link] (1 responses)

I don't think anyone is trying to invalidate an experience, and there are two sides to most stories. Mistakes and misunderstandings have happened in the past, and things which shouldn't have been merged have been. These things do happen; the question is more about how we deal with it afterwards. Where things like that have happened, I can say I've tried to learn from them, and I've tried to help the project learn from these things too. I'd hope to say we have, and I certainly try not to repeat mistakes. I'd hope we're talking about a handful of issues rather than something systemic (and that is the case as far as I know).

The project today is quite different in many ways from the experiences being described, which are from a number of years ago. The autobuilder has been totally rewritten since those times, and we push for test cases for key regressions. All patches (including my own) are now put through testing pre-merge. Whilst that won't catch everything, it does stop key things from regressing, and when something key does regress, we try and ensure we have testing added.


Posted Feb 2, 2023 13:04 UTC (Thu) by Wol (subscriber, #4433) [Link]

Probably the way to deal with it when people bring this sort of stuff up is first to ask when the experience dates from, and second to acknowledge their experience: "yes, it was like that, we've growed up now :-)".

Everyone's reality is different, and the quickest way to alienate people is to refuse to acknowledge their experience was/is different. It does need to be two-way - they may be so fixated on their bad experience that you can't get through to them - but if you both acknowledge there's a conflict, and neither of you insist that "my view is right", then things are fixable.

But it takes two to tango ...

Cheers,
Wol


Posted Jan 31, 2023 9:13 UTC (Tue) by sam.thursfield (subscriber, #94496) [Link] (2 responses)

I feel like the lack of robustness within BitBake is the main issue. Lots of excellent maintainer work goes into making the OpenEmbedded recipes work well and that's why Yocto continues to be hugely popular. But the foundations they are built on are shaky.

Ideally I'd like to use a stricter build tool for embedded work. BuildStream (https://buildstream.build/) is a great example of how this could work. The blocker is that, while BuildStream is an excellent replacement for BitBake, there's no existing replacement for all the OpenEmbedded recipes. And there isn't an easy way to automatically migrate all the OpenEmbedded recipes to a new tool, precisely because of how loose BitBake's recipe format is.


Posted Jan 31, 2023 16:59 UTC (Tue) by ejr (subscriber, #51652) [Link]

Oh, great, another build system. Cue the XKCD comic.


Posted Feb 7, 2023 21:36 UTC (Tue) by BlueLightning (subscriber, #38978) [Link]

> I feel like the lack of robustness within BitBake is the main issue. Lots of excellent maintainer work goes into making the
> OpenEmbedded recipes work well and that's why Yocto continues to be hugely popular. But the foundations they are built on
> are shaky.

Do you have a bit more substantiation for this? What kind of strictness/robustness does Buildstream have that Bitbake (or perhaps more accurately, Bitbake + the core metadata that OE/Yocto provides) lacks?


Posted Feb 2, 2023 1:16 UTC (Thu) by lisandropm (subscriber, #69317) [Link]

I started doing "Yocto" packaging when Yocto wasn't yet a thing (it was just OpenEmbedded), while also learning Debian packaging. There are lots of parallels there. Some that come to mind:

- Even if OE/Yocto tackles full distributions and Debian packaging "normally" means just a set of packages, they both share a huge amount of packaging knowledge.
- Both systems have a lot of tooling to do the packaging/building. And both lost the tooling maintainers from time to time.
- I *guess* many parts of Yocto, for example some layers, have just a few contributors. I have been maintaining Qt on Debian for more than a decade now (wow, I'm suddenly feeling old ;-) ) and most of the time we were ~2 contributors.

Of all the pitfalls of packaging, my personal point of view is that manpower is the greatest problem of all.


Copyright © 2023, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds