
Rethinking Fedora multilib support

By Jake Edge
January 11, 2017

The Fedora Modularity effort is bringing changes to the distribution, particularly in order to build modules that will each encompass a "unit of functionality" such as a web server. An ongoing discussion on the fedora-devel mailing list is looking at the pros and cons of changing the distribution's longstanding multilib mechanism, which allows 32-bit and 64-bit libraries to coexist on the same system. An initial proposal to use containers as the new mechanism was shot down quickly, but other possibilities are being discussed. Where it will all lead is anyone's guess, but as noted in our December look at the possibility of annual Fedora releases, the project is clearly considering and discussing fairly massive changes going forward.

The proposals have come from Stephen Gallagher, who posted the first, container-based idea on January 5. In it, he suggested that, instead of separating libraries into /usr/lib for 32-bit libraries and /usr/lib64 for 64-bit ones, there should be a shared 32-bit container runtime used to run 32-bit programs on 64-bit systems. That idea swiftly ran aground.

Gallagher outlined some advantages and disadvantages of the approach, but complaints were immediately heard about programs like Wine, Steam, and Skype that require 32-bit OS support and are not likely to be containerized anytime soon. Beyond that, though, it would fundamentally change the way 32-bit applications are built on top of Fedora. Instead of using the -m32 GCC flag and the relevant 32-bit libraries in /usr/lib, some kind of "special dance to enter a container environment" would have to be done, as Tom Hughes put it. In the end, there are no real user benefits, Ben Rosser said:

Speaking from an end-user perspective, I actually really like the way multilib on Fedora is currently implemented. All I need to do to get a 32-bit application-- be it some Windows application under wine, some proprietary application like Steam, etc.-- to work is to install the 32-bit packages via yum/dnf, and then things Just Work.

I understand that from a building-the-distribution perspective the way this is done currently is kind of a hack, but I can't help but notice that the *only* benefits to this proposal would be that it makes building the distribution easier. There are no proposed benefits for our users beyond breaking the way things currently work with probably no upgrade path. And whether we like it or not, users, myself included, install nonfree software like Steam on systems and generally expect it to continue working from release to release.
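For reference, the build-side workflow that would be lost looks roughly like this on a 64-bit Fedora host today. This is a sketch: it assumes root access, network access, and that the named i686 packages are available in the repositories.

```shell
# Install the 32-bit C runtime and headers; they land under /usr/lib,
# alongside the 64-bit copies in /usr/lib64.
sudo dnf install glibc-devel.i686 libgcc.i686

# Building a 32-bit binary then needs nothing more exotic than -m32.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello, 32-bit world"); return 0; }
EOF
gcc -m32 hello.c -o hello32
file hello32    # reports an ELF 32-bit LSB executable
```

Under the container proposal, that last compile step would instead have to happen inside the shared 32-bit runtime.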

The fast reactions in the thread led Gallagher to put out a second proposal roughly six hours after the first. He summarized the objections raised to the first proposal and listed two alternatives that had been proposed in the thread. The first would adopt the Debian Multiarch mechanism, which uses a /usr/lib/$ARCH-linux-gnu directory scheme. One advantage to that might be the emergence of a de facto standard between distributions. The other suggestion from the thread was to default installations to a single architecture (i.e. 32 or 64 bit) and only install libraries for that, but to allow additional architectures to be enabled in the DNF package manager for those users that need them.

As Gallagher noted, the two are not incompatible and, in fact, "their combination may indeed prove to be a superior solution to the one I initially came up with and suggested". He then went on to point out some problems that the transition would engender, but called them "surmountable". Moving the libraries would likely require leaving some symbolic links behind for binaries that expect to find them in /usr/lib[64]. RPM specification files may need to be adjusted so that the wrong versions of dependencies don't get installed during times when the i686 and x86_64 mirrors are not in sync. Also:

Switching to this layout might give a false (or possibly accurate, in some cases) impression that one could expect Debian/Ubuntu packages to function "out of the box" on Fedora (if using something like Alien). Education is key here.
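The compatibility-symlink idea mentioned above can be demonstrated in a throwaway directory; the paths mirror the proposed layout, but the library name is invented.

```shell
# Simulate leaving /usr/lib64 behind as a symlink into a multiarch tree,
# in a scratch root so nothing on the real system is touched.
root=$(mktemp -d)
mkdir -p "$root/usr/lib/x86_64-linux-gnu"
touch "$root/usr/lib/x86_64-linux-gnu/libfoo.so.1"   # stand-in library
ln -s lib/x86_64-linux-gnu "$root/usr/lib64"         # legacy path stays valid
ls "$root/usr/lib64/"                                # lists libfoo.so.1
```

Binaries with /usr/lib64 baked into their RPATH or dlopen() calls would keep working through the link while packages migrate to the new directory.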

There were complaints that the Debian library directory structure does not follow the Filesystem Hierarchy Standard (FHS). But Gallagher seemed unconcerned about strictly following the FHS: "we try to stay as close as possible to it, but if it doesn't meet our needs, we'll work around it". Hughes thinks the Debian organization is clearly an improvement, but is not so sure about making a switch:

If we were starting now to support multilib then I would certainly suggest that the Debian design is the better one but whether it's enough of an improvement to merit the pain of changing is a rather different question.

My reasons for thinking it's better are much the same as what other people have already said - that it treats all arches as equals and scales readily to whatever is needed rather than just bolting on a single 32/64 bit split as a kind of special case.

But Bill Nottingham is concerned that the change is being motivated only by build problems for Fedora and may not be keeping users firmly in mind:

While I fully understand how our current multilib system is a mess for the build and release process (being in certain respects responsible), I'm leery of using that to make drastic changes.

The whole point of building an OS/module/etc for users is to keep the complexity on the build side and out of the users hands - they don't care whether half the packages switched from autoconf to meson, whether twenty things are now written in Rust, or whether the entire python stack jumped minor (or major!) versions. They just want the system to upgrade and the software they use to keep working.

While it is true that build problems are motivating Gallagher to look at the multilib support, he definitely does not want to leave users behind:

As Bill pointed out, things "just work" for users right now and that's something we'd like to avoid breaking. However, that does *not* mean that it is trivial to do on the build side. We're currently building out an entirely new infrastructure to support modules; we'd like to take a look at what we did the first time and see if (with more experience and hindsight) we can do a better job now, and ideally one we can share between the two approaches.

There is still opposition to the whole Modularity idea, however, especially from Kevin Kofler. Most participating in the thread seem to be on board with the plan, but Kofler, as he often does, sees things differently:

What was never discussed was whether modules are something worth rebuilding "an entirely new infrastructure" to begin with. I disagree that they are even a desirable feature to begin with, they just fragment and thus dilute the Fedora platform, and have the potential to seriously hurt integration across the distribution and increase code duplication and its resulting bloat.

As part of the discussion, Langdon White pointed out that, for example, there is no real need for KDE and httpd to be tightly integrated, but that the current Fedora model forces the two to share libraries. Florian Weimer expanded on that:

Apache httpd and KDE are very interesting examples. Both KDE and Apache httpd integrate with Subversion, on two levels: KDE has Subversion client support, Apache httpd has server support. And Subversion is implemented using apr (the Apache Portable Runtime library).

So unless we start building Subversion twice, once for use with Apache httpd, and once for use within KDE, modules containing KDE and Apache httpd will have to agree on the same version of Subversion and the same version of apr.

As Fedora project leader Matthew Miller said, that is an example of where the distribution has hobbled itself "in our well-meaning attempt to integrate everything". There are other ways to handle those kinds of problems in today's Fedora (using multiple libraries with version numbers as part of the name as Weimer noted), but the Modularity effort will provide an easier way to do that.

The conversation is still ongoing as of this writing and no real conclusions have been drawn. The Fedora project, and its leader in particular, are looking toward a future where distributions do their jobs in a different way than they do today. It is not so much that the role that a distribution project plays is changing, but that the way it goes about it is. As Miller put it: "It is entirely about how we can better deliver the universe of free and open source software." That has always been a distribution's job, but the way to do so seems different these days and Fedora is doing its best to keep up.



Rethinking Fedora multilib support

Posted Jan 12, 2017 7:14 UTC (Thu) by pabs (subscriber, #43278) [Link]

I wonder if Fedora should adopt a GoboLinux/NixOS/Guix style setup; that would let them have as many versions of libraries as they want.

Rethinking Fedora multilib support

Posted Jan 12, 2017 8:21 UTC (Thu) by eru (subscriber, #2753) [Link] (12 responses)

This smells of change for change's sake. The current scheme with /usr/lib and /usr/lib64, while a bit ugly, made it totally painless for end-users to transparently run both 32-bit and 64-bit programs on Red Hat-style systems (OpenSUSE also does the same thing). It is not like there normally (if ever) are more than these architecture variants on a given installation (ignoring code inside VMs and emulators). We can revisit the issue when x86_128 systems appear...

Rethinking Fedora multilib support

Posted Jan 12, 2017 8:26 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (9 responses)

There's also x32

Rethinking Fedora multilib support

Posted Jan 13, 2017 5:48 UTC (Fri) by compenguy (guest, #25359) [Link] (8 responses)

> There's also x32

It gets even more interesting than that.

On Debian, if a user wants to cross-compile for arm, they can install the arm architecture versions of the dependency libraries, and they'll install into /usr/lib/arm-linux-gnueabi. Same for any other platform that you could want to cross-compile for.

I know it seems like kind of a niche use case, but on Fedora and OpenSuse, it's a problem that's not solved well.

Rethinking Fedora multilib support

Posted Jan 13, 2017 14:55 UTC (Fri) by aleXXX (subscriber, #2742) [Link] (7 responses)

So, let's say I cross-compile to ARM.
The shared libraries then end up in their proper library directories.
Where do the binaries go?

Rethinking Fedora multilib support

Posted Jan 13, 2017 22:15 UTC (Fri) by zlynx (guest, #2285) [Link] (6 responses)

To be consistent, the standard binaries should all be in /usr/bin/x86_64-linux-gnu/ also. I haven't seen that level of consistency from Debian or Ubuntu.

Rethinking Fedora multilib support

Posted Jan 14, 2017 5:26 UTC (Sat) by pabs (subscriber, #43278) [Link] (5 responses)

Multi-arch $PATH is explicitly out of scope for Debian currently.

Rethinking Fedora multilib support

Posted Jan 16, 2017 7:16 UTC (Mon) by aleXXX (subscriber, #2742) [Link] (4 responses)

Doesn't that make the argument above (cross-compiling and installing into the same filesystem) void?

Rethinking Fedora multilib support

Posted Jan 16, 2017 7:27 UTC (Mon) by zwenna (guest, #64777) [Link]

No, because (at least on Debian) libraries and programs are not supposed to ship in the same binary package.

https://www.debian.org/doc/debian-policy/ch-sharedlibs.ht...

There may be cases where you need both the build and the host version of a program for cross-compilation, but those are relatively rare.

Rethinking Fedora multilib support

Posted Jan 16, 2017 9:26 UTC (Mon) by pabs (subscriber, #43278) [Link] (2 responses)

Cross-compiling and multi-arch $PATH are unrelated. Cross-compiling uses compiler executables for the build architecture that target the host architecture, not executables for the host architecture that target the host architecture. So you don't generally need any executables for the host architecture when cross-compiling.

Multi-arch $PATH doesn't have any significant use-cases yet that I know of.

Rethinking Fedora multilib support

Posted Jan 16, 2017 13:52 UTC (Mon) by lsl (subscriber, #86508) [Link] (1 responses)

> Multi-arch $PATH doesn't have any significant use-cases yet that I know of.

A network file system that is mounted by machines of different arch. Plan 9 did it that way many years ago. You had /mips/bin, /386/bin, …, as well as a directory for arch-less executables (shell scripts).

On startup, the setup scripts would bind the appropriate directories for the local machine so that their contents would be available at /bin. It used bind/union mounts instead of $PATH but you could do the same thing with $PATH on Unix.
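On Unix, the same effect can be approximated with $PATH instead of bind mounts. The directory layout below is invented for illustration (Plan 9's real tree used /386/bin, /mips/bin, and so on).

```shell
# Build a toy tree with per-architecture and architecture-less bin
# directories, then compose $PATH the way Plan 9 composed /bin.
root=$(mktemp -d)
mkdir -p "$root/x86_64/bin" "$root/rc/bin"   # rc/bin holds arch-less scripts
printf '#!/bin/sh\necho native\n' > "$root/x86_64/bin/whichbin"
chmod +x "$root/x86_64/bin/whichbin"

arch=x86_64                                  # in real life: $(uname -m)
PATH="$root/$arch/bin:$root/rc/bin:$PATH"
whichbin                                     # prints "native"
```

Each machine sharing the tree would prepend its own $arch/bin, so the same network filesystem serves every architecture.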

Rethinking Fedora multilib support

Posted Jan 22, 2017 2:36 UTC (Sun) by pabs (subscriber, #43278) [Link]

Ack, but what was the motivation for Plan 9 doing it like that? What can you achieve with multi-arch /usr/bin or $PATH that you can't do without it?

Rethinking Fedora multilib support

Posted Jan 12, 2017 14:04 UTC (Thu) by foom (subscriber, #14868) [Link] (1 responses)

On Debian on an x86-64 machine, you can enable the arm architecture in dpkg, and then install and run arm packages and binaries as if native (via qemu-user binfmt-misc support). No special casing needed -- it works just like installing i386 packages (except the programs run slower, of course).

Maybe not a super common use case, but, in some circumstances, very very useful.
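A sketch of what foom describes, on a Debian amd64 host; the commands need root and network access, so treat this as illustrative rather than verified (the hello package is just a convenient example of an installable armhf binary).

```shell
# Enable the foreign architecture and pull in the user-mode emulator;
# binfmt-misc then routes ARM binaries through qemu transparently.
sudo dpkg --add-architecture armhf
sudo apt update
sudo apt install qemu-user-static

# From here on, armhf packages install and run "as if native".
sudo apt install hello:armhf
hello    # executed via qemu user emulation, just slower
```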

The proper solution: create another sysroot

Posted Jan 13, 2017 12:14 UTC (Fri) by scottt (guest, #5028) [Link]

The Fedora solution to the "I want to build code for ARM" problem is to create another sysroot via the --installroot option of the packaging tools. (Plus passing it a different config file that specifies another CPU architecture ;) )

Being able to compile code for another CPU architecture running the exact same version of the distro as your development machine is not that useful. Being able to build for the distro version in production use or in the hands of customers is.

The usefulness of the "separate sysroot" approach can be gleaned from the popularity of installing Docker containers of various popular distributions for development.
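The --installroot approach scottt describes might look something like the following. dnf's -c, --installroot, and --releasever options are real, but arm.conf, its contents, and the paths are hypothetical, and pairing the resulting tree with Fedora's cross-gcc is a sketch rather than a verified recipe.

```shell
# arm.conf would point at ARM repositories and override the target
# architecture; --installroot then populates a self-contained sysroot.
sudo dnf -c ./arm.conf \
    --installroot=/srv/sysroots/f25-arm \
    --releasever=25 \
    install glibc-devel zlib-devel

# A cross toolchain can then build against that tree:
arm-linux-gnu-gcc --sysroot=/srv/sysroots/f25-arm hello.c -o hello-arm
```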


Copyright © 2017, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds