Fedora packages versus upstream Flatpaks
Posted Feb 8, 2023 5:29 UTC (Wed) by jhoblitt (subscriber, #77733)
Parent article: Fedora packages versus upstream Flatpaks
Development norms have changed: CI/CD systems that crank out binaries on demand (or even on every commit) are now ubiquitous. Fetching actively moving software directly from upstream is now commonplace, be it as an OCI image, AppImage, yum repo, or Flatpak -- I use all of the above daily.
I am surprised by the attitude that some end users trust an upstream to write software but not to compile or package it. If an upstream can't be trusted to operate the build system they wrote... it's time to find a different upstream.
Posted Feb 8, 2023 6:16 UTC (Wed) by PengZheng (subscriber, #108006)
Posted Feb 8, 2023 7:42 UTC (Wed) by Conan_Kudo (subscriber, #103240)
Posted Feb 8, 2023 10:07 UTC (Wed) by farnz (subscriber, #17727)
The question becomes whether or not the contribution is valuable, or just duplication of effort. A distro packager who expects to only be responsible for the builds in their distro of choice, and nothing more, is just duplicating effort; there's no added value from having something built as a Fedora RPM as well as a Debian dpkg and a CentOS RPM.
A distro packager who does things like testing against newer library versions and reporting back to upstream that it's possible to move from libfoo-1.1.1 to libfoo-1.8.6 safely is valuable. As is a distro packager who handles front line support, from recommending tutorials to people who've installed the program and now want to be told how to use it, all the way through to producing good bug reports from user reports of the form "I installed it and it doesn't work".
Posted Feb 15, 2023 15:03 UTC (Wed) by immibis (subscriber, #105511)
Maybe one day effort will be merged into a distribution-development-kit based on bitbake or portage. Then Fedora can be DDK with one set of custom packages (e.g. branding) and compile options, and Ubuntu can be a different one.
The package manager itself is a relatively small amount of work compared to the work required to actually make the packages.
Posted Feb 15, 2023 17:24 UTC (Wed) by Wol (subscriber, #4433)
> Maybe one day effort will be merged into a distribution-development-kit based on bitbake or portage. Then Fedora can be DDK with one set of custom packages (e.g. branding) and compile options, and Ubuntu can be a different one.
Well, portage is a damn good system for building a distro. Throw in systemd as the one init system to rule them all :-)
But I think where systemd and portage both (like a huge amount of Open Source software) fall down badly is the lack of decent STARTER documentation. Open Source documentation is good, but it's mostly of the sort that only makes sense once you already know what it means - it's a reference, not a tutorial. I've written my own systemd unit file and it was torture, because I didn't know where to start. And I still think it's got a number of nasty glitches that I need to debug. However, I really don't think System V would have been any easier ...
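For what it's worth, a minimal service unit is much shorter than the reference documentation makes it look. Something like this (the service and binary names here are hypothetical) covers the common simple-daemon case:

```ini
# /etc/systemd/system/myapp.service -- hypothetical example unit
[Unit]
Description=My example daemon
After=network.target

[Service]
# Type=simple: systemd considers the service started as soon as ExecStart runs
Type=simple
ExecStart=/usr/local/bin/myapp --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After dropping the file in place, `systemctl daemon-reload` followed by `systemctl enable --now myapp.service` activates it.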
Cheers,
Wol
Posted Feb 20, 2023 12:58 UTC (Mon) by farnz (subscriber, #17727)
But those opinions are of very little value if the packager isn't communicating the results of complying with those opinions upstream. If a distro packager is building for X11 only, and upstream removes all X11 support (making their package Wayland-only), then the distro's opinion stops having any weight, since it becomes "remove package" versus "stop holding the opinion that X11 is required".
To be of value beyond simply providing a nice way to do ./configure && make install, packagers need to be communicating with upstream, and ensuring that the reasons behind distro opinions are respected upstream (which is hard work in its own right - the packager job is not simple or easy). If they're not, then distro packagers are going to duplicate each other's work, and upstream is not even going to realise that distro packagers have opinions that differ from theirs, nor why.
This becomes problematic if upstream's decision making results in them dropping code distributions need to implement their opinions - if you remove X11 support code because you think that everyone's using Wayland now, but my distribution package patches out Wayland support and only builds the X11 version, then we get an unpleasant collision. Had I been telling you why my package patches out Wayland support, you'd almost certainly have kept X11 support, or worked with me to fix the reasons why I patched out Wayland.
Posted Feb 8, 2023 10:19 UTC (Wed) by LtWorf (subscriber, #124958)
The problem is that more often than not they also download binaries.
In the end you end up with a binary and no clear idea of what the license is or where the source code is.
You've basically reinvented Windows.
Posted Feb 8, 2023 11:07 UTC (Wed) by paulj (subscriber, #341)
Then in the Linux world there is the issue that there are many many different kinds of systems, and a plethora of deployment options. Upstreams just can't keep up.
Posted Feb 8, 2023 14:26 UTC (Wed) by farnz (subscriber, #17727)
And there's a "holy grail" being chased here - can we divert the people who build maintainable systems into doing it upstream, as opposed to integrating pieces downstream?
To choose a random example, if you install the Linux kernel from the Debian archives, you get a version with over 100 patches applied atop the upstream release. Some of those are specific to Debian policy, and thus fair enough, but others are about making the kernel a better part of a maintainable system; those patches should be upstream. For example, there are six patches to firmware loading; one is fair enough as a downstream patch (it changes the kernel to point to Debian documentation on firmware), but the other five are meant to be improvements to firmware loading. There are another two that set kernel taint if you use known-buggy features; this is something that probably belongs upstream, too (albeit maybe not in the form that Debian has it).
That's at least 7 of the hundred-odd patches Debian carries to a single piece of software that are relevant to building the software into a maintainable system rather than to Debian-specific changes, and that therefore would be beneficial to have upstream, helping everyone trying to use the Linux kernel as a component in a maintainable system, as opposed to downstream, only benefiting people trying to use Debian as a component in a maintainable system.
Posted Feb 8, 2023 14:53 UTC (Wed) by paulj (subscriber, #341)
Unless you mean retain the 'system' communities of distros, and encourage them to upstream more work. That I would agree with. Having been an upstream, it was actually frustrating how /little/ the distro package maintainers would communicate with upstream and how rarely they tried to upstream their changes and ancillary packaging work.
Posted Feb 8, 2023 15:33 UTC (Wed) by farnz (subscriber, #17727)
I do mean the second paragraph; yes, there are changes that distros make that are distro-specific - no upstream wants a patch that links to Fedora-specific documentation, for example - but having the work needed to make a given piece of software part of a maintainable system living in N different distro patchsets along with distro-specific changes is duplication.
And that duplication becomes waste when you get two people who would happily improve each other's implementations of a change instead working from scratch because neither of them has submitted their version to a shared location, and thus they don't know that there's a collaboration possible.
Posted Feb 10, 2023 6:14 UTC (Fri) by pabs (subscriber, #43278)
Posted Feb 8, 2023 13:54 UTC (Wed) by jzb (editor, #7867)
"I am surprised by the attitude that some end users trust an upstream to write software but not compile or package it."
One of the benefits of using Debian, Fedora, etc. is the oversight they provide for licensing, testing, packaging, and everything else. If I want to run a program where I've no prior relationship with the upstream, or much info about it, using a package from the distro (at least in theory) provides an extra layer of oversight and vetting for the software. I'm not claiming this is a perfect system, but I place a fair amount of trust in the maintainers of various Linux distributions and expect that they will work out a lot of issues before the software gets into my hands.
"All of the time spent by literally dozens of Linux/BSD distributions repackaging the same software has a high opportunity cost for the community; that effort could instead be invested in improving software."
Could is doing a lot of work in that sentence. :-) If all the people working on open source *nix OSes could work together instead of duplicating effort, and if they'd channel that labor into improving software, usability, security, hardware support, etc., then all the proprietary OSes would be sidelined by now.
People being people, though, most of the attempts to join together on these types of efforts have fallen apart or fallen very short. People doing the work as volunteers tend to want to work on the stuff they like, and don't want to be told how to do it. The companies willing to pay people to do this work are usually unwilling to go too far in the direction of providing support to competitors. But, yeah, in a perfect or even moderately less imperfect world, everybody could work together and make things better.
Posted Feb 8, 2023 16:07 UTC (Wed) by southey (guest, #9466)
A second important reason is that bugs are found, quickly reported, and solved by the distro maintainers. I am not confident that upstream responds as quickly as distro maintainers appear to do. That effort is actually improving the software, especially as upstream developers typically lack a diverse range of hardware to test on: how many developers have Intel, AMD, ARM, and MIPS systems (Debian 'bullseye' supports nine major architectures)? It is an eye-opening experience to test developer code such as release candidates, especially when the known tests fail on your system. Testing takes a large amount of effort, but you know that your reports make the code better.
Posted Feb 8, 2023 19:57 UTC (Wed) by SLi (subscriber, #53131)
Alas, software is not bug-free. If you take a modern browser and replace 159 libraries with slightly random versions of the same or compatible libraries built on your platform, you are virtually certainly going to expose bugs that do not exist in the upstream binary. And no, distributions are very unlikely to be experts in testing such a complex piece of software at a level comparable to the upstream.
Now, I believe it's been the case for a while that even distributions have halfway given up and grudgingly accepted that browsers vendor their own specific versions of all their libraries.
I do understand the distro motivation for all this, believe me, given how much easier it is to replace one library with a vulnerability than a thousand copies of it. But at the level of software development the world is currently at, I think it's madness to expect that you can just throw stuff together from hundreds of sources and have it work just as well as the blessed upstream version.
Posted Feb 8, 2023 20:03 UTC (Wed) by mjg59 (subscriber, #23239)
Posted Feb 9, 2023 10:37 UTC (Thu) by farnz (subscriber, #17727)
And, thanks to distro engineers being humans too, some of those backports will introduce new bugs that aren't present in any upstream version of the library. So even if I've tested and confirmed everything works on library versions 1.21 and 1.22 from upstream, a distribution's version of 1.21-2 with a fix backported from upstream 1.22, but otherwise the same as upstream 1.21 may exhibit the bug.
Posted Feb 8, 2023 16:55 UTC (Wed) by sionescu (subscriber, #59410)
Why surprised? It seems pretty obvious to me that most upstream developers know close to nothing about integrating a piece of software into the OS. It's very similar to how I've met plenty of good software developers who knew their business domain very well (e.g. banking, GIS, electronics simulation) but had absolutely no competence in running the code they wrote. That's why the ops team was around.
Posted Feb 8, 2023 20:34 UTC (Wed) by rgmoore (✭ supporter ✭, #75)
The availability of CI/CD systems that can turn out a binary on every commit does not mean it's a good idea to distribute those binaries. Developers need access to source because they have to compile things for themselves, but a binary format like Flatpak is designed for people who aren't going to do that. If you encourage those end users to get the latest, hot-off-the-CI/CD-server version of your software, they'll wind up with a huge number of different versions of wildly varying quality. The variable quality will hurt the project's reputation, and any bug reports they send will be nearly useless because of that plethora of versions. It might be a bit more effort, but it would be more productive in the long run to produce versions that are actually intended for release rather than development snapshots. It's possible those will still come out faster than distributions want to, or even can, keep up with, but that will still be a far sight better than having users grab random stuff from your CI/CD system.
Posted Feb 8, 2023 21:02 UTC (Wed) by mjg59 (subscriber, #23239)
Posted Feb 9, 2023 4:51 UTC (Thu) by dilinger (subscriber, #2867)
For example, I recently updated Debian's copyright file for harfbuzz, and in doing so discovered a font with a license that forbade distribution. I notified upstream, and it has now been removed upstream. I can't count the number of times I've packaged something and discovered no license, or license violations (especially GPL ones, where the source code for something isn't supplied), or license conflicts, or ambiguous licensing. The whole many-eyes thing only works if there are actually many eyes.
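A first pass at that kind of audit can be automated. This is a rough sketch, not Debian's actual tooling; the marker strings and file extensions are illustrative, and a real copyright review is far more thorough:

```python
import os

# Strings whose presence we treat as evidence of a declared license.
# These markers are an illustrative sample, not an exhaustive list.
LICENSE_MARKERS = (
    "SPDX-License-Identifier:",
    "GNU General Public License",
    "MIT License",
    "Apache License",
)

def files_missing_license(root):
    """Return source files under `root` with no recognizable license marker."""
    missing = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith((".c", ".h", ".py")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as f:
                head = f.read(4096)  # license headers live near the top
            if not any(marker in head for marker in LICENSE_MARKERS):
                missing.append(path)
    return sorted(missing)
```

Anything this flags still needs a human to decide whether the file is unlicensed, mislicensed, or just using wording the scanner doesn't know about.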
Similarly, in packaging chromium for Debian, I often find issues that upstream doesn't come across because they're focused on ARM on android, rather than desktop linux. Those fixes also go upstream.
And, of course, there are the patches that disable things we don't like about upstream (like supporting third-party cookies by default), or that don't fit with the distribution (like things that assume a certain filesystem layout or desktop stack), or that enable hardware upstream doesn't support (like ppc64 and x86), or that just change the defaults (like switching the search engine from Google to DuckDuckGo). It's not so much about trust as it is about motivation. Upstreams have their many varied reasons for doing things, and distributors have theirs. They're often aligned, but sometimes they're not. When they're not aligned, that's when distribution packagers bring a lot of value.
Posted Feb 9, 2023 15:12 UTC (Thu) by smoogen (subscriber, #97)
Those end users are doing exactly that: they want a third party to do the work in an auditable way. For many of these users, years of experience of either being an upstream or working with one have taught them some hard truths. It is very rare for an upstream to write their build system from scratch; most rely on other people's code to make the build work. They usually didn't compile their own compilers, system libraries, or other utilities, but are relying on some distribution to do it for them. They are usually not spending much time on the various auxiliary libraries either, but relying on whatever pip, cargo, gems, etc. brought in.
If the upstream does its job well, it will pass all those decisions along to you, the consumer, in some form of bill of materials or manifest. In some cases, you will get a blob of software that they compiled, plus a bundle of all the libraries, helper apps, etc. that the upstream felt were needed to make their blob work. Basically, every snap, container, flatpak, etc. is its own distribution. If you are lucky they may choose to use a layer provided by some other operating system as the basis, but in many cases they don't.
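As an illustration, such a bill of materials can be quite small. Here is a minimal CycloneDX-style fragment; the component names and versions are made up for the example:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    { "type": "library", "name": "libfoo", "version": "1.2.3" },
    { "type": "library", "name": "zlib", "version": "1.2.13" }
  ]
}
```

Even this much lets a consumer answer "does this blob bundle a vulnerable library?" without reverse-engineering the build.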
That is probably not a big problem in itself; Linux has been about exploding numbers of distributions since its beginning. The bigger issue is that most users do not treat problems with such a containerized application as the responsibility of the upstream. Instead, security problems, "XYZ app ate all my files", etc. get shoved onto the distribution, as it is what the user knows and what provided the downloader that got them the flatpaks or snaps or containers in the first place.
Posted Feb 16, 2023 16:32 UTC (Thu) by mrugiero (guest, #153040)
The issue is not trusting the devs as packagers, but the resource consumption of the solutions they would depend on. Windows had DLL hell. Now we have OS-in-a-tar hell, which is actually worse. For the same reason you can't expect a dev to package for every distribution, you can't expect them to use the same runtime base image to ship it as you currently have installed, and my disk and RAM waste will increase a lot thanks to that. Compare that to shipping only the libraries you actually need.
Posted Feb 17, 2023 12:24 UTC (Fri) by mrugiero (guest, #153040)
s/The/My/
Seeing other comments, I see there are much heavier concerns that I just usually don't take into account. But where I come from, not wasting hardware is of paramount importance, to the point that people use "it's lighter than Windows" as a selling point for Linux. The container-as-an-app world will soon make that false.
Not really. What happens instead is that those people don't contribute to anything. The opportunity cost is having them contribute to the ecosystem at all. Most packagers are not programmers at the level of the developers making the software, at least not in the beginning. They sometimes learn along the way, though.
It's interesting how often people make the mistake of thinking that people working in distributions to package software would redirect their effort to Flathub or upstream developers if the option was missing in the distribution...
So, the hypothetical choice is between opportunity cost in the distro vs actual material cost in the users' computers. I'll take the first one every day of the week, but we all have our own opinions.