
Package managers all the way down

By Jonathan Corbet
January 24, 2017

linux.conf.au 2017
Package managers are at the core of Linux distributions, but they are currently engulfed in a wave of changes and it's not clear how things will end up. Kristoffer Grönlund started his 2017 linux.conf.au talk on the subject by putting up a slide saying that everything is both "terrible" and "awesome". There are a number of frustrations that result from the current state of package management, but that frustration may well lead to better things in the future.

Grönlund started by asking a simple question: what is a package manager? There are, in fact, two types of package managers out there, and the intersection between them is leading to some interesting problems.

When most people think of package managers, they are thinking of a distribution package manager like zypper, DNF, or APT. These tools date back to the early days of Linux, when there were few boundaries between users, developers, and administrators; whoever we were, we had to do everything ourselves. Distribution package managers were construction kits that helped us to put our distributions together. They managed dependencies — both build and runtime dependencies, which are different things. They helped users install their software, administrators keep systems up to date, and distributors manage licenses.

There is another type of package manager out there: the language package manager. These tools are usually tied to a specific programming language; examples include npm for JavaScript and Bundler for Ruby. They help non-Linux developers get the benefits of a package manager; their main role is to find and download dependencies so that users can build their software. Language package managers are useful, but they stress the distribution model in a number of ways.

The new dependency hell

Grönlund has been working on packaging Hawk, a Ruby-on-Rails application. It is unusual to create distribution packages for web applications, he said, but it will happen more often as these applications become more common. Getting Hawk into an RPM package turns out to be challenging. Rails has about 70 dependencies that have to be installed; Hawk itself has 25 direct dependencies. Each of those has to be packaged as its own RPM file. That is not that bad of a job and, as a benefit, all of those dependencies can be shared with other openSUSE packages. But things get worse as soon as an update happens. Usually Rails breaks, so he has to go through the dependencies to figure out which one was updated incompatibly. Or Hawk breaks and has to be fixed to work with a new Rails version. It's a pain, but it's still manageable.

The next version of Hawk, though, is moving to a more active JavaScript user interface. "Ruby dependency hell has nothing on JavaScript dependency hell," he said. A "hello world" application based on one JavaScript framework has 759 JavaScript dependencies; this framework is described as "a lightweight alternative to Angular2". There is no way he is going to package all 759 dependencies for this thing; the current distribution package-management approach just isn't going to work here.

Grönlund then changed the subject to the Rust language which, he said, is useful in many settings where only C or C++ would work before. Rust has its own package manager called "cargo"; its packages are called "crates", and there is a repository at crates.io. Cargo is "at the apex of usability" for language package managers, he said, adding that developing code in Rust is "a painless and beautiful process".

[Diagram: Rust dependencies]

Rust has avoided dependency hell in an interesting way, he said. Imagine a dependency graph like the one shown in the diagram (taken from his slides). The application A depends on libraries B and C. But B depends on D 1.0, while C needs D 2.0. This is an arrangement that would be nearly impossible to achieve in the C world, but it's easy in the Rust environment. The compiler automatically incorporates the version number into symbol names, causing the two dependencies to look different from each other; the application, unaware, uses both versions of D at the same time. This makes it easy for developers to add a new dependency to a project.
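
As a rough illustration (not taken from the talk, and simplified compared with what cargo actually does, which involves hashing crate metadata into symbol names), the effect is similar to having two modules that each provide their own copy of D: the compiler keeps the two apart, and each consumer sees only the version it was written against.

// A minimal sketch: d_v1 and d_v2 stand in for two versions of the
// same crate D, which cargo keeps apart by mangling version
// information into the symbol names.
mod d_v1 {
    pub fn version() -> &'static str { "D 1.0" }
}

mod d_v2 {
    pub fn version() -> &'static str { "D 2.0" }
}

// B was written against D 1.0 and C against D 2.0; both can be linked
// into the same application A without conflicting.
mod b {
    pub fn greeting() -> String { format!("B built on {}", super::d_v1::version()) }
}

mod c {
    pub fn greeting() -> String { format!("C built on {}", super::d_v2::version()) }
}

fn main() {
    println!("{}", b::greeting()); // "B built on D 1.0"
    println!("{}", c::greeting()); // "C built on D 2.0"
}

With real crates, cargo records both versions of D in Cargo.lock and links both into the final binary; neither B nor C needs to know that the other version exists.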

Another useful tool is "rustup", which manages multiple versions of the compiler. The Rust compiler is released on a six-week cycle. Since Rust is a new and developing language, each release tends to include interesting new features. As a result, applications can end up being dependent on specific versions of the Rust compiler. Rustup handles those dependencies, ensuring that the proper compiler versions are available when building an application.

All of this is great for developers, he said, but it is making life difficult for distributors. It's not a problem if you don't care about relying on the Rust infrastructure and Mozilla's servers, and if you don't mind not being able to build your program without an Internet connection. If, instead, you need exact control over the versions of the software you are using, perhaps to track the associated licenses, the tools become harder to work with.

He found the process of packaging these applications frustrating, like swimming up a waterfall, and it led him to wonder: why are we trying to manage packages from one package manager with a different package manager? Should we be trying to build our software using such complicated algorithms? But the fact is that this kind of dependency management is increasingly expected as a part of what a programming language provides. The distribution package manager's role of tracking dependencies is not really needed anymore. Perhaps we don't need distributions anymore.

The way forward

One way to cope would be to complain and say that "kids these days are doing things the wrong way". He does that sometimes, but it won't get us anywhere in the long run. The only way to deal with change is to accept it and roll with it. Packaging libraries as we do today simply isn't going to work anymore. We need to explore what that means and how we should do things instead.

One thing that needs to be done is to realize that packaging and package management are not the same thing. The acts of building and distributing software need to be separated; tying them together is making it hard to progress on either side. Perhaps one step in that direction would be to focus on the creation of a protocol for package management rather than just building tools. We could have a metadata format that doesn't intermix the details of a package with how to build it.
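
To make the idea concrete, here is a purely hypothetical sketch (no such format or protocol exists yet; the structures and field names are invented for illustration) of package metadata kept separate from the recipe that says how to build the package:

// Hypothetical sketch only: the types and fields below are invented to
// illustrate the separation, not an existing format.
struct PackageMetadata {
    name: String,
    version: String,
    license: String,
    dependencies: Vec<String>, // names only; resolution happens elsewhere
}

struct BuildRecipe {
    source_url: String,        // where the sources come from
    build_steps: Vec<String>,  // commands for whatever build tool is used
}

fn main() {
    // A package manager could consume the metadata without ever
    // looking at the build recipe, and vice versa.
    let meta = PackageMetadata {
        name: "hawk".into(),
        version: "1.0".into(),
        license: "GPL-2.0".into(),
        dependencies: vec!["rails".into()],
    };
    let recipe = BuildRecipe {
        source_url: "https://example.org/hawk-1.0.tar.gz".into(),
        build_steps: vec!["bundle install".into(), "rake build".into()],
    };
    println!("{} {} ({}): {:?}", meta.name, meta.version, meta.license, meta.dependencies);
    println!("build from {} via {:?}", recipe.source_url, recipe.build_steps);
}

Whether anything like this could become an actual protocol is exactly the open question the talk raises.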

Another thing that will have to happen is the separation of the management of the base system from the management of applications. They don't necessarily have to be packaged separately or use different tools, but we need to recognize that they are not the same thing. It is, he said, a "quirk of history" that the two got mixed up in Linux; we don't have to be prisoners of our history.

But, then, what is the role of distributions in this new world? None of this change is a threat to distributions, he said, as long as they figure out how to solve these issues. Distributors will still handle the overall look and feel, the selection of applications, support, security patches, and so on. Somebody will still have to provide those services. Distributors' lives may well become easier if they don't have to provide the entire universe for developers. Applications should be separated from the base system; perhaps we will see topic-specific distributions with no base at all. Implementing this separation calls for a protocol for app stores, to allow interaction between small app stores and distributions.

There are a lot of open questions, of course. How does one track software licenses when there are thousands of dependencies to deal with? How are security patches managed when applications carry their own copies of libraries, and perhaps multiple copies at that? It might be tempting to just dismiss the whole thing, saying that the older way was better, but that way is not going to remain viable for much longer. His experience with Rust made that clear; developing in that environment is just too nice.

Thinking about solving these problems is exciting, he said; the future is open. But we are still faced with the problem of inventing the future when we don't know what it will look like. We are going to have to play around with various ideas; that happens to be something that open source is particularly good at. There will be lots of people who disagree and set out to solve the problems in their own way, thus exploring the space. When the best solutions emerge, we are good at adopting them.

One particular issue that has to be addressed is managing state. One of the tenets of functional programming is avoiding managing state. Distributions, instead, have a great deal of state, and it is proving increasingly hard to manage. Time becomes a factor in everything, and that is hard to reason about. When you do the same thing twice, you may get two different results. Distributions have this problem all the time. Our software is full of weird quirks designed to mitigate the problems associated with managing state.

How can we fix that? One possibility is demonstrated by the Nix and Guix systems, which treat system composition as a Git-like tree of hashes. Packages are installed based on the hash of their source. There is no dependency hell; the exact required dependencies are used for everything. Configuration and data files are managed in the same way. If you need support, you get a hash of the system, which shows its exact state. It has some issues, though; updating the base forces updating everything else. There are ways of mitigating some of these problems. Nix is just scratching the surface of the possibilities, Grönlund said.
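
A minimal sketch of the content-addressing idea, assuming a made-up /store prefix and using Rust's standard (non-cryptographic) hasher as a stand-in for the cryptographic hashes that Nix computes over the full build inputs:

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Derive a store path from the package's name, version, and inputs, so
// that any change in those inputs yields a new path that can coexist
// with the old one instead of overwriting it in place.
fn store_path(name: &str, version: &str, inputs: &str) -> String {
    let mut h = DefaultHasher::new();
    (name, version, inputs).hash(&mut h);
    format!("/store/{:016x}-{}-{}", h.finish(), name, version)
}

fn main() {
    // Two versions of the same package land in different paths.
    println!("{}", store_path("openssl", "1.0.2", "sources + deps v1"));
    println!("{}", store_path("openssl", "1.1.0", "sources + deps v2"));
}

This is also why, as noted above, updating the base forces an update of everything built on top of it: all of those hashes change.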

Containers are another interesting area. Projects like AppImage, snap, and Flatpak are just beginning to show progress in abstracting applications from the base system and sandboxing. None of them have reached the point where they need to be. Systemd is controversial, but it has reinvented what an init system is supposed to be and started a new discussion on how the base system should work. Systemd was adopted quickly across a wide range of distributions, showing that we can change the fundamental aspects of how the Linux system works. Qubes OS is focused on security and sandboxing, and is reimagining what an operating system can look like.

He concluded by saying that reproducible builds are an important goal. Figuring out how to make builds reproducible in this environment is going to be an interesting challenge. There are many challenges, but it is time to face them. Alan Kay once said that the future does not need to be incremental. Sometimes we need to think about where we want to be in ten years, then build that future.

Q&A

A member of the audience asked about multi-language dependencies. Language-specific package managers do not handle these dependencies well, while it's "bread and butter" for distributions. Grönlund agreed that this was an important issue, one that is going to start to hit people when distribution package managers go away. The exact-dependency approach used by Nix may point toward the solution there.

Another question had to do with "cloud rot"; how do you manage a situation where you depend on something that might not be there next Tuesday? This can even happen as a sort of deliberate sabotage; the FSF deleted the last GPLv2 versions of GCC and binutils from its site. How can this be tractable in the future when having things work depends on more and more people? Grönlund said he had no real answer for that problem, and that developers need to realize that the Internet is not permanent. Companies can fail, and if you rely on their infrastructure you're going to have issues. Some of these lessons have to be learned the hard way. Distributions might be able to help by caching things, but he doesn't have a real answer.

The video of this talk is available for readers wanting to see the whole thing.

[Your editor would like to thank linux.conf.au and the Linux Foundation for assisting with his travel to the event.]

Index entries for this article
Conference: linux.conf.au/2017



Package managers all the way down

Posted Jan 24, 2017 20:36 UTC (Tue) by Tara_Li (guest, #26706) [Link] (5 responses)

*cloud rot* - wonderful name for it. How could anyone who's been on the Internet for very long *not* realize the problems inherent in having someone else hosting code that your code is dependent on - and worse, you don't even distribute the code yourself, you reference the original code instead of hosting it yourself. So many of your users have no idea why their webpage or web app stops working - it just all of a sudden falls over and goes "boom", and they're stuck with pretty much no indication of why. (Remember, the user in a traditional web browser never sees a 404 message for an included CSS or Javascript file that isn't there...)

Package managers all the way down

Posted Jan 24, 2017 23:41 UTC (Tue) by pabs (subscriber, #43278) [Link] (4 responses)

Debian's wayback machine can probably help with this cloud rot thing:

http://snapshot.debian.org/

Ubuntu has something similar but it expires packages after a longish time IIRC.

Package managers all the way down

Posted Jan 25, 2017 15:37 UTC (Wed) by Tara_Li (guest, #26706) [Link] (3 responses)

Except that the *USER* doesn't know about these archives - hell, the majority of the users on the Internet have never heard of the Wayback Machine!

Package managers all the way down

Posted Jan 26, 2017 3:30 UTC (Thu) by pabs (subscriber, #43278) [Link] (2 responses)

Not sure how to fix that, do you have any suggestions?

Package managers all the way down

Posted Jan 26, 2017 4:09 UTC (Thu) by Tara_Li (guest, #26706) [Link] (1 responses)

Yeah - don't assume your users are internet-connected. Provide what they need to run in the package - shared libraries have a certain amount of value - but they're also a risk.

Package managers all the way down

Posted Jan 28, 2017 1:43 UTC (Sat) by pabs (subscriber, #43278) [Link]

Hmm, I don't think it is feasible to ship all 35TB of Debian snapshot to even one user. Perhaps I misunderstood something?

Package managers all the way down

Posted Jan 24, 2017 20:56 UTC (Tue) by gwolf (subscriber, #14632) [Link] (8 responses)

This is, of course, far from a new question — Back in 2007, I presented a talk at YAPC::Europe (Yet Another Perl Conference) on the work of the Debian pkg-perl group (the article was slightly edited since): "Integrating Perl in a wider distribution: The Debian pkg-perl group"

http://gwolf.org/files/integrating_perl_in_distro.pdf

Debian's experience with Perl (at least while I was active in the pkg-perl group) was smooth and excellent. But around that year, I started migrating from Perl to Ruby for my "real-life" job, and attempted to do the same with Ruby. So, by mid 2008, I presented this at DebConf (the Debian Conference): "Bringing closer Debian and Rails: Bridging apparently incompatible cultures"

http://gwolf.org/files/debian+rails.pdf

I am not currently really active packaging language libraries, but –again, due to my work commitments– took up packaging Drupal for Debian some years ago. I have maintained Drupal 7 for two stable releases, and the experience has been mostly painless. However, after long work was spent to "Debianify" Drupal 8, a month ago I gave up — Precisely due to what Grönlund presents: "Giving up on the Drupal 8 debianization ☹"

http://gwolf.org/node/4087

The current rage, as he describes, seems to be containerization. But it's short-sighted and I am sure it will bite many people in the future. The problem with dependency hells is not just that two libraries depend on two different versions of some other piece of software; referring to Grönlund's figure, language package managers will happily allow me to have simultaneously installed (sometimes even simultaneously linked and in memory) D 1.0 and 2.0 — But, why did D move from 1.0? Maybe because of horrendous bugs, of security issues, of legal threats, of... A whole mess my users don't want to know!

Distributions don't just allow for conveniently packaged software. They also allow for single-place quality and security assurance. Of course, in the diagramed case, any decent distribution will either stop distributing B (making A uninstallable) or port it to work with the new version of D. Anything less than that is a disservice to the users.

And... Throwing old and vulnerable software into a container and forgetting about it... Won't solve the issue.

Package managers all the way down

Posted Jan 25, 2017 15:55 UTC (Wed) by imMute (guest, #96323) [Link] (2 responses)

I think the distribution package model needs to change slightly to handle multiple versions a little differently. For example, Debian (and others) have a simple way for handling major version upgrades of shared libraries. libfoo version 1.y.z become package "libfoo-1" with version "1.y.z", and you can also have package "libfoo-2" at version "2.w.x". Any application can depend on either version of the library and both can be installed side-by-side. I think they need to take this approach and dial it up - remove the major version from the package name and allow *any* version to be installed side-by-side. The next problem to solve would be how to organize the files such that they don't clash. 0pointer has an interesting read at http://0pointer.net/blog/revisiting-how-we-put-together-l... which lays out a way to use a filesystem that can "partition" very easily (in their example using subvolumes in btrfs) and a VFS that can create "views" of the FS depending on what the application needs by bringing together those subvolumes. I highly recommend reading that article and thinking about what could be possible - put your views on Lennart aside for a moment and just read the article.

So we aren't quite done yet, what if an application has an indirect dependency on multiple versions of the same library? I'm not entirely sure how that should be handled. "Well don't do that" is a valid, but unhelpful solution. Another would be to arrange the FS in such a way that you don't need to hide the different versions of a package, the app can see them all. Depending on the "linker" of a given language, such a thing might not even be possible; at the very least, without changing how the symbols are named internally (Rust seems to do this by attaching the version to each symbol name?).

It's all very interesting, and I really hope someone forges a path researching and attempting it, but I have a feeling they're going to get shouted into oblivion by people who disagree with some aspect of the proposed system or people behind it.

Package managers all the way down

Posted Jan 25, 2017 19:00 UTC (Wed) by smcv (subscriber, #53363) [Link]

> libfoo version 1.y.z become package "libfoo-1" with version "1.y.z", and you can also have package "libfoo-2" at version "2.w.x"

No, libfoo.so.1 becomes package libfoo1 and libfoo.so.2 becomes package libfoo2. The "marketing" version numbers don't enter into it: we don't care whether libfoo.so.1 came from Foo 0.3.6 and libfoo.so.2 came from Foo 0.3.7 (as it might in practice if the upstream breaks ABI a lot).

We can safely parallel-install the libraries precisely because they *already* have different names, where by "name" I mean the name their users (binaries and other libraries) use - for the C/C++ ABI that's the SONAME. We rely on our upstreams to get SONAMEs right, or educate them about how to get them right, or as a last resort apply downstream patches to generate something like libfoo.so.1d (d standing for "Debian-specific").

If the upstream is good about parallel installation (lots of GNOME and KDE libraries are, for instance), then we can also parallel-install the development files: a libfoo-3-dev package might contain foo-3.pc, /usr/include/foo-3 and -lfoo-3, while a libfoo-4-dev package might contain foo-4.pc, /usr/include/foo-4 and -lfoo-4. Here the important names are the pkg-config module name, the -I path for the headers, and the library name used by the linker (up to and including .so).

If other language communities sync up incompatible changes with name changes, we can do the same there: if a Python library is intended to be used like "from foo_bar2 import Bar", we would package it as python-foo-bar2 and/or python3-foo-bar2 (even if its "marketing name" is pyFoobar or something). If its upstream makes incompatible changes, *and* they are nice to their downstreams by synchronizing that with a rename so that you use "from foo_bar3 import Bar", then we'd package python-foo-bar3 and python3-foo-bar3. In Python the important/significant name of an API is the one that you "import".

Perl has a similar thing going on with formalized package naming in Debian, as does GNOME's GObject-Introspection. From other comments here it sounds as though Rust has some sort of formalized naming that would be a good fit for a similar technique.

This works the other way round too. If a compiled package has a dependency on a particular package name in its metadata, it can rely on the contents of that package matching the name that is used internally - python3-foo-bar2 will always contain what is needed by a library user that does "import foo_bar2" in Python 3, and libfoo2 will always contain what is needed by a binary executable with DT_NEEDED: libfoo.so.2 in its headers. You'll never upgrade foo-libs from one version to another and find that an executable no longer works because libfoo.so.1 has disappeared and libfoo.so.2 has replaced it.

If an upstream (or a language community) *doesn't* reflect compatible vs. incompatible changes in something that could be called a name, then we can't do that, and we're stuck with package "upgrades" that may or may not contain incompatible changes.

tl;dr: names are APIs and APIs are names. If they don't line up, something is wrong.

Package managers all the way down

Posted Jan 26, 2017 3:30 UTC (Thu) by pabs (subscriber, #43278) [Link]

Sounds like you might be looking for GoboLinux, Nix or Guix.

Package managers all the way down

Posted Jan 27, 2017 1:07 UTC (Fri) by smitty_one_each (subscriber, #28989) [Link] (4 responses)

Another nifty point about containers, at least for services, is that you can keep arbitrary numbers of versions of an app in production, which is more the "ops" than the "dev" end of the question.

Package managers all the way down

Posted Jan 27, 2017 1:15 UTC (Fri) by gwolf (subscriber, #14632) [Link] (3 responses)

*sigh* Yes, and that's what I have at work. I have some ancient applications I wrote way back when the world was simpler, and I have had no time/motivation to rewrite to use current infrastructure. Too many APIs have changed. So, yes, I have some 2007-ish systems installed just to cope with that. Of course, they are locked into their own containers to ensure they are no security liability for my other systems.

But then again, can I honestly say that Rails 1.2.x is safe *for itself*? Or a 1.5.x Joomla system that I'm not allowed to migrate because that's the *exact* way my users want it to stay? Nope. Those sites are a breakin waiting to happen. They store, fortunately, quite uncritical information... But they are important enough for my workplace to need them running for over ten years.

But anyway — I am a seasoned sysadmin. I know about this mess and can (hopefully) cope with that mess. What about regular users who just want to get a problem solved? Should we ship old, unmaintained code because a combination of authors not updating their API usage forces them into that situation?

Package managers all the way down

Posted Jan 27, 2017 1:20 UTC (Fri) by smitty_one_each (subscriber, #28989) [Link]

The conversation is moving off of the technical, toward the organizational.

The organizations that have the wisdom to prioritize retiring technical debt are the exception.

The general case seems to be that things muddle on until they cease.

Package managers all the way down

Posted Jan 27, 2017 9:01 UTC (Fri) by niner (subscriber, #26151) [Link]

What do containers have to do with "no security liability for my other systems"?

Package managers all the way down

Posted Jan 28, 2017 3:23 UTC (Sat) by RCL (guest, #63264) [Link]

Yes, if you want to scale. Developers may be gone, sometimes sources may be gone too (think binary only games), but there is still value in running old software (e.g. wanting to play that old game from 2017 in 2030). Or someone wants to see how the program behaved when picking up development of the old project etc.

Security is not paramount and should not be forced on the user. I think that the average Linux user readily trades it for usability just like Windows users do. Being involved with gaming on Linux, I witnessed a lot of substandard security practices by our end users - for instance, the first attempt to resolve a game crash or engine build problem is to re-run it with sudo. People will bypass all the security mechanisms if they get in their way, and if they cannot, they will stop using the software because it's too hard (see OpenBSD). Security is better solved through compartmentalization IMHO and using well defined, backward compatible APIs (like syscalls) between parts.

Package managers all the way down

Posted Jan 24, 2017 21:57 UTC (Tue) by patrakov (subscriber, #97174) [Link] (1 responses)

"if you don't mind not being able to build your program without an Internet connection" often means "if you don't mind not being able to build your program in China"

Package managers all the way down

Posted Jan 24, 2017 22:31 UTC (Tue) by smcv (subscriber, #53363) [Link]

And if you don't mind trusting not only the current maintainer of every library you use (and their hosting provider, and their certificate authority, and the security of their server, etc.), but every future maintainer of those libraries (and their hosting providers, etc.) as well.

Package managers all the way down

Posted Jan 25, 2017 0:13 UTC (Wed) by smoogen (subscriber, #97) [Link] (12 responses)

---
One way to cope would be to complain and say that "kids these days are doing things the wrong way".
---

I don't think the kids these days are doing things the wrong way but they are doing what every other generation does in the world. Think that XYZ isn't a problem anymore and we can just recode everything. We did it a long time ago and pretty much all the RPM/.deb/etc fixes were reinventing things that others had done with older computers long ago... they also had lots of bandaids and special cases because there is no one language everything is written in, there is no one way to solve a problem and there is definitely no sanity clause.

I expect that just like those of us who 'learned all this' in the 1990's or the 2000's or now the 2010's, the problems will hit a wall and some cobbled together solution will come about again. It will last until the young brash programmers of 2025 or 2032 throw it out because no one uses 'blimpexs' anymore... and the cloud is for ancient people. [Those of us still around will be probably programming Fortran IV and Cobol because someone has to keep the real infrastructure going :)]

Package managers all the way down

Posted Jan 25, 2017 10:07 UTC (Wed) by mgedmin (subscriber, #34497) [Link] (4 responses)

> One way to cope would be to complain and say that "kids these days are doing things the wrong way".

Distro-independent package archives are a solution to a real problem. And it's not a particularly new one -- CTAN is 25 years old.

Package managers all the way down

Posted Jan 25, 2017 14:16 UTC (Wed) by smoogen (subscriber, #97) [Link]

I agree. They have been a solution to multiple problems over time. I also think that the problems that Kristoffer Grönlund ran into have been going on for as long as package managers have existed, whenever the complexity of some project or group of projects combines.

Containers aren't a new solution either. Having to build chroot jails so that your overall application could have 4 or 5 different Javas and 9 different versions of the same perl modules to work is something enterprise people have done for just as long. [It was actually the selling point of AIX for a while.]

Package managers all the way down

Posted Jan 25, 2017 18:45 UTC (Wed) by rgmoore (✭ supporter ✭, #75) [Link]

Distro-independent package archives are a solution to a real problem.

And a potential cause of other problems. It seems to me that there's a deep conflict between making it as easy and fast as possible to build on what's already there and taking care of details like licensing and security updates. The reason distributions have developed their focus on things like license compatibility and avoiding bundled libraries is because they've been bitten by the long-term problems of ignoring those issues. Independent package archives that don't learn from those lessons are going to find themselves repeating them.

Where are you an expert? Where not?

Posted Jan 27, 2017 0:32 UTC (Fri) by gwolf (subscriber, #14632) [Link] (1 responses)

I agree, distro-independent packages are great if you are a 1337 developer who knows his way around the language / framework of choice. However, how many languages are you 1337 in? Do you want to need to have your local store of Gems for the three Ruby applications you run, of PyPI eggs for the twenty-seven Python applications, and of all the more dull-named repositories of other languages? Shouldn't a distribution take care of all the mess that's not most important to you?

Where are you an expert? Where not?

Posted Jan 27, 2017 9:03 UTC (Fri) by niner (subscriber, #26151) [Link]

Reminds me of back when I just wanted to install some tool that happened to be written in Python which just gave me an error message talking about eggs. That was a serious WTF moment.

Package managers all the way down

Posted Jan 25, 2017 14:18 UTC (Wed) by mstone_ (subscriber, #66309) [Link] (6 responses)

> We did it a long time ago and pretty much all the RPM/.deb/etc fixes were reinventing things that others had done with older computers long ago

Like what? My perception is that you seem to be forgetting just how bad package management was on old systems--can you give an example of something that did this well? The best non-linux package manager that I remember was the one Irix used. It had some functionality around multi-arch that .deb only caught up to in the past few years, but *building* those packages was a PITA, not nearly as nice as the deb workflow, and the config file handling wasn't as good. Apart from that my recollection involves getting a CD or tape or whatever and replacing the old version with the new version, then bringing the support circus in to manually fix whatever that process broke (as needed, and depending on the site and the software you might be your own support circus).

Package managers all the way down

Posted Jan 25, 2017 14:44 UTC (Wed) by smoogen (subscriber, #97) [Link] (5 responses)

Thank you for making me clarify that point. I made it sound like the status of package management was a period of halcyon days that we threw down in trying to reinvent packages. That never happened. Packaging in the mid 1990's was usually a uuencode patch to whatever came off the original tar tape. It was a pain in the ass, and we were all much happier when .deb/.rpm were put into place.

I don't have an example of a previous packaging solution before .deb/.rpm other than when people would trot out something they were doing in the packages.. some really old soul would trot out the: Oh so you are finally doing that thing from: TSX-11, Multics, <fill in other Lisp machine>. I also believe that there were some simple packaging solutions for Unix systems, but they were usually incredibly slow, memory intensive and without a depsolver like apt or yum.. horrible dependency hell. In the end it was always easier to just create patch files and hope they worked. These would also get trotted out at various times.

The part I was referring to obliquely and poorly was that when someone familiar with these package managers came into a conversation with "Have you thought about..." <conflicts, multiple architectures, etc> various developers working on the new solutions would usually scoff and say "Oh that won't be a problem anymore because people shouldn't do that.." where that was something that 6-9 months later they would find a lot of people not only did but needed to do.

Now this "reinvention of the wheel" wasn't all bad. In many cases, the new solution to the special case made use of algorithms or other improvements that weren't available in 1970. There were also many cases of "you can't do that.. it will blow up in memory" which was true in a 2M mainframe but not true in 256MB PC (well until all the other complexity added made you need a 2.5 GB machine :)).

Again thank you for making me clarify my words. I hope that removes the "Oh these kids.. they don't know the Eden we had in the 70's" kind of view my original text had.

Package managers all the way down

Posted Jan 29, 2017 20:05 UTC (Sun) by pj (subscriber, #4506) [Link] (4 responses)

As far as I'm concerned, Debian revolutionized system maintenance in the 90s with their policy of in-place upgradability. I once upgraded a 2yo debian system in front of an old SunOS sysadmin, and he just shook his head and said "Sun never managed to make that work."

Package managers all the way down

Posted Jan 29, 2017 22:21 UTC (Sun) by zlynx (guest, #2285) [Link] (3 responses)

Yeah...I suppose it mostly works. But Debian's policy of asking lame, stupid questions during the upgrade has always annoyed me. It's about as useful as asking "Unlink inode 145323?"

I believe it was upgrading some stuff including Exim 3 to Exim 4 and I gave it some wrong answer. There were a few other services too. But it took me four or five hours after the upgrade to get everything working correctly again.

I've had much better experiences with Redhat / Fedora upgrades.

Package managers all the way down

Posted Jan 30, 2017 12:10 UTC (Mon) by jubal (subscriber, #67202) [Link] (2 responses)

Not reading the release notes, are we? Exim 3 and Exim 4 are completely different programs.

Package managers all the way down

Posted Feb 1, 2017 17:37 UTC (Wed) by JanC_ (guest, #34940) [Link] (1 responses)

Maybe upgrade questions should refer to and/or include those release notes? (Maybe they already do in some cases, but as a policy.)

Package managers all the way down

Posted Feb 1, 2017 18:07 UTC (Wed) by zlynx (guest, #2285) [Link]

This was a long time ago. When Exim 4 came out in Debian stable.

And I haven't run Debian since around then, so maybe it already works that way now.

Package managers all the way down

Posted Jan 25, 2017 3:39 UTC (Wed) by lsl (subscriber, #86508) [Link] (5 responses)

> The compiler automatically incorporates the version number into symbol names, causing the two dependencies to look different from each other; the application, unaware, uses both versions of D at the same time.

I don't think this is going to work the way most people would expect, at least not for arbitrary code.

What if packages B and C expose types from D in their APIs? Those won't be compatible between D 1.0 and D 2.0. At least that would be a compile-time problem. What if D has some kind of global state that is expected to be shared?

Package managers all the way down

Posted Jan 25, 2017 4:02 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (1 responses)

Then you get an error that the type you have is not compatible with the one wanted in the API. This usually involves saying that your d::X is not compatible with module::from::other::crate::d::X (yes, the message could be improved). If D has some global state, the related symbols are mangled differently. Global mutable variables are not supported in Rust, so the only thing I can think of is system-level shared state such as signal handlers, environment, etc. which is no different than in C.
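
A tiny sketch of the failure mode being described here, again using two local modules to stand in for two versions of crate D (the real error message would name the full crate paths, as quoted above):

mod d_v1 { pub struct X; }
mod d_v2 { pub struct X; }

fn wants_v2(_x: d_v2::X) {}

fn main() {
    let x = d_v1::X;
    // Uncommenting the next line fails with error[E0308]: mismatched
    // types -- d_v1::X is not d_v2::X, even though both modules call
    // the type plain "X".
    // wants_v2(x);
    let _ = x;
    wants_v2(d_v2::X); // fine: the type from the matching version
}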

Package managers all the way down

Posted Jan 25, 2017 4:16 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

> Global mutable variables are not supported in Rust
That's not quite true - you can shoot yourself in the foot by doing:

static mut N: i32 = 43;

fn main() {
    unsafe {
        N = 42;
    }
}

Even a safe version is possible using Mutexes.

The compiler automatically incorporates the version number

Posted Feb 3, 2017 19:00 UTC (Fri) by davecb (subscriber, #1574) [Link] (2 responses)

This was used heavily in Solaris by my team, under David J. Brown. It allows a single library to contain different versions of the same function, distinguished by their version.

Interestingly, glibc uses this same mechanism, the same way that Multics did with "updaters" and "downdaters".

When I get some Copious Spare Time I'll do a writeup on it.

--dave

The compiler automatically incorporates the version number

Posted Feb 3, 2017 22:45 UTC (Fri) by nybble41 (subscriber, #55106) [Link]

> It allows a single library to contain different versions of the same function, distinguished by their version.

That sounds different from what the article was describing: using two different versions of a library at the same time, not different versions of a symbol within the same library.

You're right that glibc uses symbol versioning extensively, but when it does so it goes to some effort to make the different versions of the interface compatible with each other so that they can be interleaved in the same program. You don't get that behavior simply by linking against two versions of the library while keeping their symbols distinct.

The compiler automatically incorporates the version number

Posted Feb 24, 2017 18:55 UTC (Fri) by nix (subscriber, #2304) [Link]

There is some info on the glibc wiki, too, not about the easy case of compat symbols and versioning, but about the *tricky* case of things like 'struct stat': <https://sourceware.org/glibc/wiki/Development/Versioning_...>.

Package managers all the way down

Posted Jan 25, 2017 4:49 UTC (Wed) by drag (guest, #31333) [Link] (3 responses)

My imaginary perfect packaging future system is one based on IPFS. The Intergalactic File system. The permanent web.

In case anybody is unfamiliar, IPFS is intended as a companion protocol to the WWW: a P2P file system where you download files based on hashes and, in the act of downloading them, you share them back out on the internet. As long as somebody is using a file or has a file somewhere in IPFS, it will never go away. The original server and media sharer could be long gone, but the address will always be the same and the file will always be accessible as long as somebody somewhere has it on an IPFS share. Also, once you download a file it's always accessible using the same address regardless of whether you are connected to the internet or not. When you are on a local LAN, people can pull the files from your IPFS share using the same address, and none of you have to be connected at that point.

No downloading packages. No installing or untarring or building. All versions of all applications are all pre-installed in all Linux distributions.

'Intergalactic Package Management' consists then of just hyperlink files. Just a big list of applications that you launch via some FUSE-like file system that connects to IPFS and has a read/write POSIX-ish file system overlayed on it. All versions of all applications pre-installed practically forever.

Upgrading and patching applications would just be updating that list of hyperlinks and restarting the application. Rolling back versions would consist of finding the old reference and using that instead.

Package managers all the way down

Posted Jan 25, 2017 12:27 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (1 responses)

Note that it is "interplanetary". The time scales involved with interstellar, never mind intergalactic, have not been addressed yet ;) .

Package managers all the way down

Posted Jan 25, 2017 15:25 UTC (Wed) by k3ninho (subscriber, #50375) [Link]

Your civilisation surviving on the timescale of travelling between planetary systems is an exercise left up to the reader, let alone your backups.

K3n.

Package managers all the way down

Posted Jan 25, 2017 23:39 UTC (Wed) by flussence (guest, #85566) [Link]

I really want to like IPFS — it has some good ideas — but right now it's too unstable and plain broken to take seriously. The daemon is a resource hog when idle, the FUSE functionality barely works, performance seems to deteriorate exponentially with increasing file/directory sizes, and it's corrupted its own local DB for me at least once. And apropos for the article, it has its own dependency package manager that requires itself…

These should all be fixable things, but it's hard to get involved because the software is spread over a labyrinth of micro-repositories that seem to be in a constant state of “come back later - we're doing a complete rewrite of this part”.

Somewhat disappointing to see that they don't dogfood their own software either: /ipns/ipfs.io/ resolves, but returns a copy of the website that's months out of date.

Package managers all the way down

Posted Jan 25, 2017 9:43 UTC (Wed) by liam (guest, #84133) [Link]

> We could have a metadata format that doesn't intermix the details of a package with how to build it.
Umm......
......
https://www.freedesktop.org/wiki/Distributions/AppStream/

Package managers all the way down

Posted Jan 25, 2017 11:09 UTC (Wed) by anatolik (guest, #73797) [Link] (6 responses)

> Usually Rails breaks, so he has to go through the dependencies to figure out which one was updated incompatibly. Or Hawk breaks and has to be fixed to work with a new Rails version. It's a pain, but it's still manageable.

The mistake here is that they try to manage Gem->RPM conversion manually. It is time-consuming and extremely error-prone. Instead of converting packages and checking dependencies manually, these folks should really look at automating the process.

Arch Linux has such a gem->package converter (https://github.com/anatol/quarry) that handles 600+ popular gems (http://pkgbuild.com/~anatolik/quarry/x86_64/). That is the way other distros should handle Gem/PyPI/CPAN/... packages.

Package managers all the way down

Posted Jan 25, 2017 12:26 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

There is at least gem2spec for RPM. Having done this kind of stuff, problems usually exist around bundled copies of software, non-free content included in the package, or other metadata-y problems.

Package managers all the way down

Posted Jan 25, 2017 18:08 UTC (Wed) by HIGHGuY (subscriber, #62277) [Link] (1 responses)

Fpm is also a great Swiss Army knife

Package managers all the way down

Posted Aug 21, 2017 7:44 UTC (Mon) by yoe (guest, #25743) [Link]

except it's not, it's a great case of "NIH syndrome" gone wrong. At least that's my opinion, YMMV

Package managers all the way down

Posted Jan 25, 2017 22:30 UTC (Wed) by ebassi (subscriber, #54855) [Link] (2 responses)

The issue is not automating the conversion to another packaging format: it's the combinatorial explosion of packages and interaction between packages.

You cannot, in any way, shape, or form, do any kind of validation or QA of the interaction between ~700 packages of dependencies of a single application already; without that, it's impossible to update even a subset of them without breaking something else that depends on those packages. Worse than that, you won't even be able to know if anything broke, because there can't be a validation or QA process that works or scales with that kind of numbers. The closest thing that any Linux distribution ever attempted to do was a libc major API/ABI switch, and that was a single package in a sea of mostly compiled binaries. Here, we're talking about dynamic, high level languages. The only way to actually verify that updating a package doesn't break things is running the test suite (if there's one) for every other package — and hope that the interactions between tests are also tested.

Now repeat for any medium-to-large sized application that depends on an excess of 500 packages, and no amount of automation will ever get to scale to this kind of process, unless you have a spare data center or two underneath your trenchcoat.

Package managers all the way down

Posted Jan 26, 2017 13:37 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (1 responses)

But stuffing your 700 deps in a single application container won't make the QA problems go away or become easier.

Sure, superficial QA will be easier. No need to think about this stuff. Just one set to test. Very easy to define and go through a single check list.

However your single set will become more and more complex as parts of your container start using slightly different versions of the same code base, with slightly different behaviour, bugs and security problems. Every developer that takes advantage of the new system to procrastinate and stay on an old "proven" (for very particular interpretations of proven) version will plant time bombs that will require complex code path analysis to identify why things work in case A but not in case B (answer: they use different bundled versions of the same lib). Resource use will explode to keep all those slightly different component versions in memory. Analysing and fixing stuff when the checklist does not pass will become the lengthy, horrific and anti-agile process common in proprietary codebases (they didn't bother with dep checking either).

Application containers are nice and dandy when you start from a curated codebase. Deploying this codebase clearly does not need the complexity of distro dependency checks.

However, what this insight misses is that the curated codebase is a product of distro dep check culture. Remove one and you won't keep the other for long.

Package managers all the way down

Posted Jan 28, 2017 3:39 UTC (Sat) by RCL (guest, #63264) [Link]

I am afraid this is wrong. 700 deps stuffed in the container will not need to be tested for interaction between each other the same way as 700 deps in a single system since presumably they are being used only for that single containerized application. The premise of the containers is that they are atomic. One lib in the mix has a security problem, you update the whole thing.

Code/memory bloat may be a problem but hey... people program in Java and interpreted languages and most just accept the cost of low IPC, larger memory footprint and what not because of convenience of development (sweep all the hard stuff under the JVM's rug, it will figure it out - forgetting that JVM is just a tool and not a demigod that can solve NP-hard problems for them, especially given that it has less information about the program intentions than its author).

Given that people are not mass rewriting their services in C/C++ (except maybe Facebook and similar scale vendors), I think they will get over the bloat of compartmentalized applications for the ease of their deployment, as they already do on e.g. Mac.

Package managers all the way down

Posted Jan 25, 2017 17:50 UTC (Wed) by NightMonkey (subscriber, #23051) [Link] (1 responses)

Oh, Gentoo, how I love thee.

Package managers all the way down

Posted Feb 2, 2017 17:46 UTC (Thu) by glaubitz (subscriber, #96452) [Link]

If they were only able to finally migrate to gcc-6 as their default compiler.

Package managers all the way down

Posted Jan 28, 2017 22:01 UTC (Sat) by ssmith32 (subscriber, #72404) [Link] (19 responses)

It seems like configuration management software (salt, puppet, chef, etc) would address a good chunk of the issues he brought up.
It is how many people manage these issues at scale. Maybe just make it easier for such software to be applied in the small, as well as the large.

And, yes, javascript packaging is a bit of a mess, but it is slowly coming along. But I don't know why he felt the need to package each npm package as a separate rpm/deb - that seems like overkill. Just bundle your node modules, bower stuff or, more correctly, inline it all via webpack or something (a la create-react). Ship the non-inlined version sans deps as a source package, and the user would need to run npm install, if they're interested in that. Nowadays, more and more, you have your "source" javascript, that is compiled, somehow, into... javascript. So ship the compiled one, like other languages, and have a source package.

Package managers all the way down

Posted Jan 30, 2017 4:46 UTC (Mon) by spwhitton (subscriber, #71678) [Link] (15 responses)

This makes it very hard to ship security fixes, which is particularly important for a web application that might get installed on many Internet-facing machines.

Package managers all the way down

Posted Jan 31, 2017 23:58 UTC (Tue) by ssmith32 (subscriber, #72404) [Link] (14 responses)

Why? Like any other package, if a dependency is updated with a fix, pull in the new dependency, rebuild your webpack, and release.

Package managers all the way down

Posted Feb 1, 2017 23:28 UTC (Wed) by spwhitton (subscriber, #71678) [Link] (13 responses)

Right, but a distribution's security team do not have time to edit and rebuild large numbers of packages (potentially introducing new problems while doing so).

Package managers all the way down

Posted Feb 2, 2017 1:13 UTC (Thu) by ssmith32 (subscriber, #72404) [Link] (12 responses)

Hmm.. I think I'm missing something about this. I develop web app X. I deploy web app X as a compiled webpack - all dependencies are now inlined in, which is standard practice in web apps ( I've never seen two different, unrelated web apps share a javascript library at serving time, as with C .so's, it would be wrong for lots of reasons). I develop a deb that ships the compiled webpack. I work with the distro to follow their standards and get it shipped, and I'm now the maintainer. A security issue arises in a lib, I'm notified, and I tick a version in my package.json, build, test, and release a new version of my webapp X.

Do people not generally maintain their applications like that?

Are you still expecting every dependency to be packaged separately, even though there is no point? (Since every webapp inlines its dependencies at build time...)

Package managers all the way down

Posted Feb 2, 2017 1:17 UTC (Thu) by ssmith32 (subscriber, #72404) [Link]

Should say *rarely* see *modern* unrelated webapps share libs in a live production deployment at serving time, and share as in using the same file on the same filesystem.

Package managers all the way down

Posted Feb 2, 2017 6:40 UTC (Thu) by seyman (subscriber, #1172) [Link] (2 responses)

> Do people not generally maintain their applications like that?

Hopefully, if anyone is doing what you're describing, they're creating LSB-compliant rpms rather than deb packages...

My experience is that vendors do not release updates when their packaged libs have security issues. Neither do the vendors of the packaged libs when the libs they package have similar issues. Thus, you end up with servers where every system lib is up to date with regards to security fixes but every single application uses inlined versions that are not.

Package managers all the way down

Posted Feb 3, 2017 22:14 UTC (Fri) by ssmith32 (subscriber, #72404) [Link] (1 responses)

Curious - why rpms? Even if you're only interested in distributing on debian based systems?

Package managers all the way down

Posted Feb 3, 2017 22:30 UTC (Fri) by zlynx (guest, #2285) [Link]

Because RPM version 3 is the LSB specified package format.

Package managers all the way down

Posted Feb 2, 2017 23:27 UTC (Thu) by spwhitton (subscriber, #71678) [Link] (7 responses)

Distributions cannot expect the maintainers of every package using the lib to do the update in a timely manner. That's why distributions have security teams.

It's unreliable (as seyman describes) and a waste of time (lots of people doing updates instead of one person).

Package managers all the way down

Posted Feb 3, 2017 22:26 UTC (Fri) by ssmith32 (subscriber, #72404) [Link] (6 responses)

I see. I still feel like people are viewing included JavaScript files the same way as system libraries, which just seems off.. I don't know any web developer that thinks of them in that fashion. I'm not one by trade, just work with them, so I can kinda see both sides. I often argue against inline'ing everything, but in the web world, it just seems to be a lost battle:

https://github.com/facebookincubator/create-react-app/iss...

So I think I get the point of view, but still feel like maintainers and distros are trying to force the world to adapt to their thinking (which may be correct), and losing.

Package managers all the way down

Posted Feb 4, 2017 4:30 UTC (Sat) by spwhitton (subscriber, #71678) [Link] (5 responses)

Could you explain why you think security issues in javascript libs are any different to security issues in e.g. libc?

Package managers all the way down

Posted Feb 5, 2017 16:49 UTC (Sun) by ggiunta (guest, #30983) [Link] (2 responses)

As a professional web dev for the last ten years, I know for a fact that there is a huge friction between 'system/os' libs and 'web app dependency' libs.

This is due to many reasons, but the biggest single burden is the difference in development speed. Web-app dependencies evolve at the speed of light compared to the C libraries and gnu tools which form the substrate for 'written in c' applications.
Another difference is that C libraries are (comparatively) few in number, with each lib used by a great many applications. Web-app libs, in contrast, are infinite in number, and many of them are used by a (comparatively) small number of applications.
Last but not least, not all developers of web-app dependencies adhere to proper semantic versioning practices, which makes upgrades a test-fest.
Finally, the languages used to code web-apps do not have the same facilities as C for resolving symbol versioning at runtime, making it hard to keep multiple versions of the same library available as part of 'system libs'.

I am not assigning any blame here, just describing the current situation which exists.

For example, for PHP, I have never met anyone who uses Debian's shipped version of a php application. They are all obsolete by the time the release iso is mastered.
PHP has had its own package-management system for ages, called Pear. It worked by keeping a centralized version of php-libs by default. It never took off.
Then a new package manager came along, Composer, which by default installs all the dependencies within the application itself. It took off like fire, and it is now in use by literally *all* php applications (except Wordpress).

I am not sure what the best solution would be going forward. I do not think that putting the whole Composer repository into Debian makes sense at all.
Otoh it would be nice to have a tool, as part of the core OS, that would be able to automatically find all installed php apps and for each one do a scan of its installed dependencies and give a warning when one is found which has a known vulnerability.

Package managers all the way down

Posted Feb 5, 2017 20:43 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

That's fine for web apps (and other service deployment infrastructure), but for things like web browsers, editors, and other things which are more on the developer side of things, you get the problem of everybody needing to know what package manager to ask for each app. Something like:

Oh, my editor uses cargo, but my media player is using pip, then my web browser is npm-based, other things use go, etc. And each app has to be updated individually and it may not bundle the latest security fix yet.

Using apt-get or dnf is vastly superior for these kinds of software.

But yes, for deployment-level software, breaking from the distro usually is a net win because you want to track the latest or apply your own patches anyways. But for the essentials? Eh, not pretty for lots of these languages so far.

Package managers all the way down

Posted Feb 6, 2017 1:36 UTC (Mon) by spwhitton (subscriber, #71678) [Link]

Right. As mathstuf said, what you say makes sense for deployment of the latest versions of web apps. But we were talking about packaging stable releases of those web apps for distributions, for people who just want to install it and don't need the very latest version.

Package managers all the way down

Posted Feb 10, 2017 4:34 UTC (Fri) by ssmith32 (subscriber, #72404) [Link] (1 responses)

I don't think the issues themselves are inherently different, but the ways JavaScript include files and compiled C libraries are used are very different.

It might help to imagine a world where the expectations of both the library developers and their users were that no one linked at runtime, and everything was always compiled in (not even statically linked, actually pulled in as source). As you can see from my link, that is the world of JavaScript, like it or not. That link also shows I'm aware of the downsides; I'm just saying that's how it is, and I don't see it changing - if anything, it's moving more and more in that direction...

Package managers all the way down

Posted Mar 15, 2017 14:58 UTC (Wed) by spwhitton (subscriber, #71678) [Link]

Right, thanks, I see what you mean.

Perhaps this will change as core JavaScript libraries mature, so that frequent breaking changes are less appealing.

Package managers all the way down

Posted Jan 30, 2017 16:58 UTC (Mon) by davexunit (guest, #100293) [Link] (2 responses)

Bundling is the antithesis of good package management.

Package managers all the way down

Posted Jan 31, 2017 23:56 UTC (Tue) by ssmith32 (subscriber, #72404) [Link] (1 responses)

Not in all cases. But in most, and for that, do as I mentioned - ship "compiled" JavaScript, just as you would any other language, enumerate your dependencies in your package.json, and put that in your src pkg.

Package managers all the way down

Posted Feb 1, 2017 2:28 UTC (Wed) by rahulsundaram (subscriber, #21946) [Link]

> But in most, and for that, do as I mentioned - ship "compiled" javascript,

You need to build from source anyway to ensure you are legally compliant with the licensing guidelines of a distribution. Here is what Fedora does:

https://fedoraproject.org/wiki/Packaging:JavaScript

Bundling is possible, just not recommended:

https://fedoraproject.org/wiki/Bundled_Libraries?rd=Packa...

"Cloud" bit rot

Posted Jan 30, 2017 18:11 UTC (Mon) by civodul (guest, #58311) [Link] (5 responses)

> the FSF deleted the last GPLv2 versions of GCC and binutils from its site.

At ftp://ftp.gnu.org/gnu/gcc I see GCC 1.42 from 1992; likewise, ftp.gnu.org has Binutils 2.7 from 1996. GPLv3 was released in 2007.

That said, "cloud bit rot" does exist for a lot of other free software projects. Software Heritage's mission is to address that by providing the equivalent of archive.org for software.

"Cloud" bit rot

Posted Jan 31, 2017 15:12 UTC (Tue) by khim (subscriber, #9252) [Link] (4 responses)

> the FSF deleted the last GPLv2 versions of GCC and binutils from its site.

> At ftp://ftp.gnu.org/gnu/gcc I see GCC 1.42 from 1992; likewise, ftp.gnu.org has Binutils 2.7 from 1996. GPLv3 was released in 2007.

Actually it was not gcc but gdb. Binutils from 2.10.1 to 2.21.1 and gdb from 6.0 to 7.3 (licensed under GPLv2) silently disappeared from the FSF's site (and were replaced with GPLv3-licensed versions), which made a lot of people who have a strict "no GPLv3" policy quite upset.

"Cloud" bit rot

Posted Jan 31, 2017 15:41 UTC (Tue) by cortana (subscriber, #24596) [Link]

Wow, that's the kind of subterfuge that I would have thought was beneath the FSF.

FSF removing versions

Posted Feb 9, 2017 20:45 UTC (Thu) by JWatZ (guest, #114023) [Link] (2 responses)

This appears to be untrue, at least as of now, although there do seem to be some oddities, which may be where the story got started.

Specifically, in http://ftp.gnu.org/gnu/binutils/ the earliest version that mentions GPL version 3 is 2.16.1, where it is only mentioned in the files under the opcode directory, which were generated by CGEN. The earliest version in the Wayback Machine is from 2005, and does differ from the version currently being distributed. Comparing them, it looks like the CGEN-generated files were re-created in August 2011, which also had the side-effect of putting those files under GPLv3 or later.

FSF removing versions

Posted Feb 9, 2017 20:54 UTC (Thu) by mohg (guest, #114025) [Link] (1 responses)

I imagine it's related to this:
https://sourceware.org/ml/gdb/2011-09/msg00136.html

They made a mistake and corrected it by creating new tarfiles in 2011. The old ones were removed because they were missing some source files.

FSF removing versions

Posted Feb 9, 2017 21:03 UTC (Thu) by JWatZ (guest, #114023) [Link]

Yes, I suspect that is related. But in the case of the 2.16.1 version, it looks like they (accidentally) updated the original, rather than making a new one with "a" appended.

Package managers all the way down

Posted Jan 31, 2017 4:22 UTC (Tue) by BradReed (subscriber, #5917) [Link] (1 responses)

It doesn't have all the bells and whistles of all these new package formats and package managers, but Slackware's package system has caused me much less stress over the years than RPMs.

Package managers all the way down

Posted Feb 2, 2017 17:49 UTC (Thu) by glaubitz (subscriber, #96452) [Link]

Does it support MultiArch yet or are you tied to manually creating multilib packages?

Isn't this all back to front?

Posted Feb 2, 2017 18:00 UTC (Thu) by Wol (subscriber, #4433) [Link]

As in: why the *** are we handling dependencies; shouldn't we be handling requirements? Having said that, it feels a little like I've just altered words without altering meanings, but ...

I got involved in the LSB in the early days. I lost the argument, unfortunately, but I kept on saying that packages should specify their requirements and let the package manager sort it out, because I wanted to make it easy for proprietary programs to install on J Random Linux Distro.

The idea was that an Independent Software House could write a spec file for their program that you would pass to the package manager, which would make sure that the basic Linux infrastructure was there. I was in particular thinking of WordPerfect 8, which requires things like libc5. I would love to get that running again :-)

And it future-proofs things as well - if those spec files were still around, package managers would have to - to some extent - continue to support all the stuff these programs need.

Cheers,
Wol

Let's not do the NP-complete part!

Posted Feb 20, 2017 22:06 UTC (Mon) by davecb (subscriber, #1574) [Link]

In two previous lives (Multics and Solaris), I've been faced with the problem Kristoffer Grönlund describes here: if you have different versions of dependencies, the resolution problem can be NP-complete.

In both previous lives, and more recently in Linux, the problem has been dealt with the same way: by allowing libraries to contain all the supported versions of a given interface. It's a long weekend, so I had the time to write it up; here's the story: “DLL Hell”, and avoiding an NP-complete problem
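
To make the NP-complete part concrete (a toy illustration only, not the write-up linked above): resolving dependencies means picking exactly one version of each package while respecting compatibility constraints, and in the worst case that is a combinatorial search. A minimal backtracking resolver in Python, with a made-up package universe and conflict list:

    # Toy illustration only: choosing one version per package under pairwise
    # conflict constraints is a backtracking search that can blow up
    # exponentially in the worst case.
    CANDIDATES = {                      # hypothetical package universe
        "app":  ["1.0"],
        "libA": ["1.0", "2.0"],
        "libB": ["1.0", "2.0"],
    }
    CONFLICTS = {                       # (pkg, ver) pairs that cannot coexist
        (("app", "1.0"), ("libA", "1.0")),
        (("libA", "2.0"), ("libB", "1.0")),
    }

    def ok(chosen, name, version):
        return not any(((name, version), (o, v)) in CONFLICTS or
                       ((o, v), (name, version)) in CONFLICTS
                       for o, v in chosen.items())

    def solve(names, chosen=None):
        chosen = {} if chosen is None else chosen
        if not names:
            return chosen
        name, rest = names[0], names[1:]
        for version in CANDIDATES[name]:
            if ok(chosen, name, version):
                result = solve(rest, {**chosen, name: version})
                if result is not None:
                    return result
        return None                     # dead end: backtrack

    print(solve(list(CANDIDATES)))      # {'app': '1.0', 'libA': '2.0', 'libB': '2.0'}

Versioned interfaces inside a single library sidestep the search entirely: old and new callers link against the same file, so there is no version choice left for the resolver to make.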

--dave

Package managers all the way down

Posted Jan 20, 2021 19:26 UTC (Wed) by joseph.h.garvin (guest, #64486) [Link] (1 responses)

> If, instead, you need exact control over the versions of the software you are using, perhaps to track the associated licenses, the tools become harder to work with.

FWIW, cargo actually associates a license with each Rust crate, and it's not super hard to set up something to prevent any licenses you dislike from showing up in your dependencies.
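
For instance (a minimal sketch, not a recommendation of a particular workflow): `cargo metadata` emits JSON describing every crate in the dependency graph, including its declared license, so a small script can fail the build when something outside an allowlist shows up. The field names below are from my recollection of that JSON output, and real SPDX license expressions would need proper parsing rather than exact string matching; ready-made tools exist for this as well (cargo-deny, for example).

    #!/usr/bin/env python3
    # Minimal sketch: flag dependencies whose declared license is not on an
    # allowlist. Exact-string matching of SPDX expressions is a simplification.
    import json, subprocess, sys

    ALLOWED = {"MIT", "Apache-2.0", "MIT OR Apache-2.0", "Apache-2.0 OR MIT"}

    meta = json.loads(subprocess.run(
        ["cargo", "metadata", "--format-version", "1"],
        capture_output=True, text=True, check=True).stdout)

    bad = [(p["name"], p["version"], p.get("license"))
           for p in meta["packages"]
           if p.get("license") not in ALLOWED]

    for name, version, lic in bad:
        print(f"{name} {version}: license {lic!r} is not on the allowlist")
    sys.exit(1 if bad else 0)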

Package managers all the way down

Posted Jan 20, 2021 20:11 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Yes, but the accuracy of each license tag also needs to be evaluated. Luckily, `cargo new` is pretty good at it, and license changes are so rare that I expect just watching for a license-looking path showing up in the diffstat gets you most of the way there.

Package managers all the way down

Posted Feb 1, 2021 7:49 UTC (Mon) by ryjen (guest, #139261) [Link]

Great topic, have been frustrated by it as well and am glad people are still discussing it.

Containerization seems like hitting a thumbtack with a hammer. Unless you really want to do some sort of container-inside-a-container thing for dependencies like an MC Escher painting.

Formal protocols seem like a good answer to coordinating the rabble (CRUD?).

The coupling of language/distribution/application/system packages is a sin, but I would still rather see the various distribution species conform instead of die off. Or bow down as a plugin to a universal manager.

Could consider environment-based language package management and shims as well (rbenv, virtualenv, asdf, etc.). Could such a thing be useful in distribution packages?
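
The shim trick those tools use is simple enough to sketch. The launcher below is hypothetical (the ~/.toolbox layout is invented; the .tool-versions file name follows asdf's convention): a small script is installed on PATH under the tool's name, walks up from the current directory to find a version file, and execs the matching real binary.

    #!/usr/bin/env python3
    # Hypothetical rbenv/asdf-style shim: pick a per-project tool version
    # and exec the real binary. The layout under ~/.toolbox is an assumption.
    import os, sys
    from pathlib import Path

    TOOL = Path(sys.argv[0]).name                  # shim is installed as e.g. "ruby"
    VERSIONS_DIR = Path.home() / ".toolbox" / TOOL

    def wanted_version():
        # Walk up from the current directory looking for a .tool-versions file.
        for d in [Path.cwd(), *Path.cwd().parents]:
            f = d / ".tool-versions"
            if f.is_file():
                for line in f.read_text().splitlines():
                    name, _, version = line.partition(" ")
                    if name == TOOL and version.strip():
                        return version.strip()
        return "default"

    real = VERSIONS_DIR / wanted_version() / "bin" / TOOL
    os.execv(str(real), [str(real), *sys.argv[1:]])

Whether a distribution package manager could own something like this, rather than each language tool reinventing it, is exactly the open question.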

Thanks for the Qubes OS tip, and GoboLinux has also been on my radar for separating applications from the system.


Copyright © 2017, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds