
Soller: Real hardware breakthroughs, and focusing on rustc


Posted Dec 5, 2019 19:56 UTC (Thu) by farnz (subscriber, #17727)
In reply to: Soller: Real hardware breakthroughs, and focusing on rustc by Cyberax
Parent article: Soller: Real hardware breakthroughs, and focusing on rustc

Part of the problem there is that, assuming we distribute machine-ready binaries, you're stuck choosing between a combinatorial explosion in the number of binaries you test (as you need to test all reasonable combinations, even if they are ABI-compatible), or losing many of the benefits the distros provide, by shipping packages that simply don't work together even though they can be installed together.

The other component is that a decent amount of the hard work distributions do (that makes the ecosystem a better place) is a consequence of that restriction - if there can only be one version of (say) zlib on your system, the distro has incentives to make sure that everything works with the current, secure, version of that package. If I can have multiple versions of it installed, why can't I stick to the known bad version that works, and just complain about attempts to move me onto the newer version?



Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 20:56 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (24 responses)

Well, insistence on a single version also leads to a rigid system that moves too slowly for modern development.

> Part of the problem there is that, assuming we distribute machine-ready binaries, you're stuck choosing between combinatorial explosions in the number of binaries you test (as you need to test all reasonable combinations, even if they are ABI-compatible)
Not quite following. If you update a dependency, you start walking its rdepends graph and updating the packages in it. There's no expectation that AppA would work with a random version of LibB.

This model does lead to a proliferation of versions; just one tardy package can force the distro to keep around a huge graph of old dependencies. But this is a balancing act and can be managed.

I worked at a company that did dependency management like this. It worked mostly fine; the major problem was "migration campaigns" when a central dependency had to be upgraded and individual teams had no bandwidth to do it.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 21:21 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (19 responses)

> Well, insistence on a single version also leads to a rigid system that moves too slowly for modern development.

There is no such insistence on a single version on the distribution side. Haven't you noticed the dozens of libs available in multiple versions on all major distributions? The way they've all managed the Python 2 and 3 overlap?

So, technically, multiple versions work in all distributions. Practically:

> the major problem was "migration campaigns" when a central dependency had to be upgraded and individual teams had no bandwidth to do it.

devs severely underestimate the connectivity of the current software world, and the team bandwidth implied by the scope of versions they angrily demand.

It would be a lot easier for distributions to provide recent versions of all components, if dev projects adhered strictly to semver.

> this is a balancing act and can be managed.

And distributions are managing it. More contributors, more packages, in more versions. Fewer contributors, fewer packages, more version consolidation. Nothing more and nothing less.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 21:35 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (18 responses)

> There is no such insistence on a single version on the distribution side.
Yes, there is.

> Haven't you noticed the dozens of libs available in multiple versions on all major distributions? The way they've all managed the python 2 and 3 overlap?
No. You're cherry-picking.

> So, technically, multiple versions work in all distributions.
No. No. No. You are about as wrong as you can get.

To give you an example, right now I have a problem with python3-theano, which has broken one of our applications in version 1.0.4. I can install 1.0.3 from the previous version of the distro by pinning the version, but this breaks _another_ application. There's no way to install 1.0.3 and 1.0.4 in parallel and say "AppA wants 1.0.3, AppB wants 1.0.4".
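
The single-slot constraint described here can be sketched as a toy resolver (the package names are just the ones from the example; this is illustrative, not any real package manager's logic):

```python
def install(requests):
    """Toy single-version model: each package name occupies exactly
    one system-wide slot, so conflicting pins cannot be satisfied.

    `requests` maps an application to the (package, version) it needs.
    """
    slot = {}
    for app, (pkg, ver) in sorted(requests.items()):
        if pkg in slot and slot[pkg] != ver:
            raise RuntimeError(
                f"{app} wants {pkg} {ver}, but {pkg} {slot[pkg]} is already installed")
        slot[pkg] = ver
    return slot

# AppA and AppB cannot both be satisfied in this model:
try:
    install({"AppA": ("python3-theano", "1.0.3"),
             "AppB": ("python3-theano", "1.0.4")})
except RuntimeError as e:
    print(e)
```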

I'm going to fix it by just packaging everything in a venv and abandoning system packages altogether.

> And distributions are managing it. More contributors, more packages, in more versions. Fewer contributors, fewer packages, more version consolidation. Nothing more and nothing less.
No. Distros have dropped the ball here. Completely. They are stuck in the "single version or GTFO" model, which does not scale past the basic system.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 21:57 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (1 responses)

Sorry, but no. You can technically create and install as many sublevels of parallel versions as you want.

Now, that supposes (on the *language* side, not the *distribution* side) that
* first, the language deployment format uses different file paths for all the version sublevels you want to exist
* second, there is a way, at the language level, to point to a specific version if several are found on disk.

If the language provides neither of those, you are stuck installing a single version on the system, or playing with containers, venvs, and all those kinds of things, whose sole purpose is to isolate upstream language tooling from versions it cannot cope with.
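
As a minimal sketch of what those two requirements could look like in Python, using the standard `importlib` machinery (the directory layout here is hypothetical, not anything a distro actually ships):

```python
import os
import tempfile
import importlib.util

# Hypothetical per-version layout: <root>/<name>/<version>/<name>.py
root = tempfile.mkdtemp()
for ver in ("1.0.3", "1.0.4"):
    d = os.path.join(root, "libfoo", ver)
    os.makedirs(d)
    with open(os.path.join(d, "libfoo.py"), "w") as f:
        f.write(f"VERSION = {ver!r}\n")

def load(name, version):
    """Point at a specific version sublevel on disk and load it."""
    path = os.path.join(root, name, version, name + ".py")
    spec = importlib.util.spec_from_file_location(f"{name}-{version}", path)
    mod = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(mod)
    return mod

# Both versions can coexist in the same process:
old, new = load("libfoo", "1.0.3"), load("libfoo", "1.0.4")
print(old.VERSION, new.VERSION)  # 1.0.3 1.0.4
```

Loading each version under a distinct module name is what sidesteps the usual `sys.modules` single-slot cache.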

And maybe distributions should have changed all language stacks to work in parallel-version mode. Maybe they tried, and failed. Maybe they didn't even try.

However, it's a bit rich to blame distributions and ask them to adopt language tooling when the problems pointed out are inherited from language tooling in the first place.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 23:23 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> Sorry, but no. You can technically create and install as many sublevels of parallel versions as you want.
I cannot use distro infrastructure for it. I have to build my own packages and manage all of them.

This means that the distro becomes nearly useless for me, except for very basic system utilities.

> * first, the language deployment format uses different file paths for all the version sublevels you want to exist
> * second, there is a way, at the language level, to point to a specific version if several are found on disk.
If we're stuck on Python then we may as well continue. Python supports all of this, yet no mainstream distro makes use of it. Some niche distros like Nix do.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 22:02 UTC (Thu) by pizza (subscriber, #46) [Link] (11 responses)

> To give you an example, right now I have a problem with python3-theano, which has broken one of our applications in version 1.0.4. I can install 1.0.3 from the previous version of the distro by pinning the version, but this breaks _another_ application. There's no way to install 1.0.3 and 1.0.4 in parallel and say "AppA wants 1.0.3, AppB wants 1.0.4".

...So why isn't fixing the application(s) an option here? (Root cause analysis and all that..)

> There's no way to install 1.0.3 and 1.0.4 in parallel and say "AppA wants 1.0.3, AppB wants 1.0.4". I'm going to fix it by just packaging everything in a venv and abandoning system packages altogether. [...] Distros have dropped the ball here. Completely. They are stuck in the "single version or GTFO" model, which does not scale past the basic system.

But... what you just described isn't actually a "distro" problem at all, as the inability to install multiple versions of a given module system-wide is a limitation of Python's own packaging system. Intentionally so, as managing multiple versions introduces a great deal of complexity. Instead of dealing with that complexity head-on, they decided to take the approach of self-contained private installations/systems (aka venv).

But while that keeps applications from stepping on each other's toes, it doesn't help you if your application's dependencies end up with conflicting sub-dependencies (e.g. submoduleX only works properly with theano <= 1.0.3 but submoduleY only works properly with >= 1.0.4...)

(This latter scenario bit my team about a month ago, BTW)

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 23:17 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

> ...So why isn't fixing the application(s) an option here? (Root cause analysis and all that..)
One application is a commercial simulator that we can't fix. We'll probably just update our application to work with 1.0.3 by adding a workaround or just put it in a container. This workload should have been containerized anyway...

In this case this is easy, but I had much more complicated cases with binary dependencies that Just Didn't Work.

> But.. what you just described isn't actually a "distro" problem at all, as the inability to install multiple versions of a given module system-wide is a limitation of python's own packaging system.
No, it's not. Python supports venvs, custom PYTHONPATH, and custom loaders.

For example, back at $MYPREV_COMPANY we had a packaging system that basically provided a wrapper launcher taking care of that. So instead of "#!/usr/bin/env python3" we used "#!/xxxxx/bin/env python3", which created the correct environment based on the application manifest.
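
A sketch of that launcher idea (the manifest format and store path are invented for illustration; the real system is not public):

```python
import json
import os

def build_search_path(manifest_path, store="/opt/pkgstore"):
    """Turn an application manifest into per-version import paths.

    Hypothetical manifest: {"deps": {"libfoo": "1.0.3", "libbar": "2.1.0"}}
    A wrapper launcher would prepend these directories to PYTHONPATH
    before exec'ing the real python3, so each app gets exactly the
    library versions its manifest names.
    """
    with open(manifest_path) as f:
        manifest = json.load(f)
    return [os.path.join(store, name, ver)
            for name, ver in sorted(manifest["deps"].items())]
```

The point is that the per-app closure lives in data (the manifest), not in a system-wide single slot.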

> But while that keeps applications from stepping on each other's toes, it doesn't help you if your application's dependencies end up with conflicting sub-dependencies (eg submoduleX only works properly with theano <= 1.0.3 but submoduleY only works properly with >= 1.0.4...)
Correct. This is a problem, but it happens during development, not package installation so the developer can work around it (by fixing deps, forking them, pinning previous versions, etc.)

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 8:39 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (9 responses)

>> But... what you just described isn't actually a "distro" problem at all, as the inability to install multiple versions of a given module system-wide is a limitation of Python's own packaging system.

> No, it's not. Python supports venvs, custom PYTHONPATH, and custom loaders.

None of this works at scale. It's all local, dev-specific workarounds:
– venv is a poor man's containerization (you can achieve the same with full distro containers; if venv "works", then distros work too)
– PYTHONPATH is the same technical-debt engine that has made Java software unmaintainable¹
– custom loaders are, well, custom; they're not a generic language solution

A language that actually supports multi-versioning at the language level:

1. defines a *single* system-wide directory tree where all the desired versions can coexist without stomping on the neighbor's path
2. defines a *single* default version-selection mechanism that handles upgrading and selects the best and most recent version by default (i.e. semver)
3. defines a way to declare an application-level exception for a specific set of components
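
Points 2 and 3 amount to something like the following toy selector (real semver ordering also handles pre-release tags, which are ignored here):

```python
def pick_version(available, pin=None):
    """Select the newest installed version by default (point 2);
    an explicit application-level pin is the declared exception (point 3)."""
    if pin is not None:
        if pin not in available:
            raise LookupError(f"pinned version {pin} is not installed")
        return pin
    # Naive semver ordering: compare numeric dotted components.
    return max(available, key=lambda v: tuple(int(x) for x in v.split(".")))

print(pick_version(["1.0.3", "1.0.4"]))               # 1.0.4
print(pick_version(["1.0.3", "1.0.4"], pin="1.0.3"))  # 1.0.3
```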

Because, as you wrote yourself:

> the major problem was "migration campaigns" when a central dependency had to be upgraded and individual teams had no bandwidth to do it.

Therefore, anything manageable at scale must keep semver exceptions as exceptions, not the general case.

Most languages do not support multi-versioning (that's a *language*, not a distribution, limitation). Some languages do:
– the "backwards" C/C++, because of shared-lib versioning
– Go, because Google wanted to create an internet-wide Go module repository, so they had to tackle the version-collision problem
– probably others.

You can trick a mono-version language locally by installing only a specific set of components for a specific app. That does not scale at the system level, because of the version-combination explosion at initial install time, and the version-combination explosion at update decision time.

The other way devs who like to pretend the problem is distribution-side work around mono-version language limitations is to create app-specific and app-version-specific language environments. That's what vendoring, bundling, Java PATHs, and modularity tried to do.

All of those failed the scale test. They crumble under the weight of their own version contradictions, under the weight of the accumulated technical debt. They are snowflakes that melt under maintainability constraints. They only work at the local scale (app- or company-specific local scale, the limited time scale of a dev environment).

If you want multi-versioning to work for language foo, ask foo's maintainers for 1-2-3 above. You won't get distributions to fix language-level failures by blaming them while at the same time praising the workarounds language devs invented to avoid fixing their runtime.

¹ Both distro *and* enterprise side; I have a *very* clear view @work of what it costs enterprises, and it's not because of distros, since everything Java-related is deployed in non-distro mode.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 8:50 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (8 responses)

> None of this works at scale. It’s all local dev-specific workarounds
I worked in a company that is in the top 10 of the world's companies and has the name starting with an "A" and the second letter not being "p".

Pretty much all of its software is built this way, with dependency closures sometimes having many hundreds of packages. The build/packaging system supports Python, Ruby, Go, Rust, Java and other languages with only minor changes to the way binaries are launched.

So I can say for sure that this approach scales.

> A language that actually supports multi-versioning at the language level
Nothing of this is needed. Nothing at all. Please, do look at how Java works with Maven, Ruby with Gems, Go with modules, or Rust with Cargo. They solved the problem of multiversioning in the language tooling without your requirements.

> Therefore, anything manageable at scale must keep semver exceptions as exceptions, not the general case.
I don't follow. Why?

> All of those failed the scale test. They crumble under the weight of their own version contradictions, under the weight of the accumulated technical debt. They are snowflakes that melt under maintainability constraints.
So far this is basically your fantasy. You are thinking that only seasoned distros that can wrangle the dependency graph into one straight line can save the world.

In reality, commercial ecosystems and even Open Source language-specific ecosystems are already solving the problem of multiversioning.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 11:20 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (7 responses)

>> None of this works at scale. It’s all local dev-specific workarounds
> I worked in a company that is in the top 10 of the world's companies and has the name starting with an "A" and the second letter not being "p".

When you attain this size you can afford the combination explosion. Most entities (starting with community distros) can not.

So yes it does not scale. With astronomic resources you can brute-force even an inefficient system.

Go modules respect my requirements. I should know, I spent enough months dissecting the module system. They will work as multi-version. Python does not; it's not a multi-version language.

Java has not solved anything, which is why its adoption outside businesses is dismal. Businesses can afford to pay the not-scaling tax (in app servers, in ops, in lots of things induced by Java software-engineering practices). Community distros cannot.

>> Therefore, anything manageable at scale must keep semver exceptions as exceptions, not the general case.
> I don't follow. Why?

Because each exception is an additional thing that needs specific handling with the associated costs. That’s engineering 101 (in software or elsewhere).

Rules get defined to bring complexity and costs down. Exceptions exist to accommodate an imperfect reality. A working efficient system allows exceptions without making them the rule.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 18:56 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

> When you attain this size you can afford the combination explosion. Most entities (starting with community distros) can not.
I maintained several projects there and spent way less time on that than maintaining a package for Ubuntu and coping with it being broken by Python updates.

> So yes it does not scale. With astronomic resources you can brute-force even an inefficient system.
There's nothing inefficient there. It efficiently places the onus of evolving the dependencies on the correct people: package owners.

I.e. if you own a package AppA that uses LibB then you don't care about LibB's rdepends. You just use whatever version of LibB that you need. If you need a specific older or newer version of LibB then you can just maintain it for your own project, without affecting tons of other projects.

This approach scales wonderfully, compared to legacy distro packages. Heck, even Debian right now has just around 60000 source packages and is showing scaling problems. The company I worked at had many times more than that, adding new ones all the time.

> Because each exception is an additional thing that needs specific handling with the associated costs. That’s engineering 101 (in software or elsewhere).
What is "semver exceptions" then?

> Exceptions exist to accommodate an imperfect reality. A working efficient system allows exceptions without making them the rule.
What are "exceptions" in the system I'm describing?

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:22 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (4 responses)

In your approach everything is a special case with a special dep list.

It "scales" because you do not maintain anything, you just push code blindly.

Distributions do not only push code, they fix things. When you fix things, keeping the amount of things to be fixed manageable matters.

If you don’t feel upstream code needs any fixing, then I believe you don’t need distributions at all. Just run your own Linux from scratch and be done with it.

Please report back on how much time you saved with this approach.

Interested? I thought not. It's easy to claim distributions are inefficient when adding things at the fringe. Just replace the core, not the fringe, if you feel that's so easy. You won't be the first one to try.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:56 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

> In your approach everything is a special case with a special dep list.
Correct.

> It "scales" because you do not maintain anything, you just push code blindly.
Incorrect. At that company, libraries are maintained by their owner teams. The difference is that they typically maintain a handful of versions at the same time, so that all their dependents can build against them.

There was also a mechanism to deprecate versions to nudge other teams away from aging code, a recommendation mechanism, etc.

> Distributions do not only push code, they fix things. When you fix things, keeping the amount of things to be fixed manageable matters.
In my experience, they mostly break things by updating stuff willy-nilly without considering downstream developers.

> If you don’t feel upstream code needs any fixing, then I believe you don’t need distributions at all. Just run your own Linux from scratch and be done with it.
That's pretty much what we'll be doing eventually. The plan is to move everything to containers running on CoreOS (not quite LFS, but close enough).

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 8:58 UTC (Sat) by nim-nim (subscriber, #34454) [Link] (2 responses)

> Incorrect. At that company, libraries are maintained by their owner teams. The difference is that they typically maintain a handful of versions at the same time, so that all their dependents can build against them.

And that’s exactly what major distributions do, when the language tooling makes it possible without inventing custom layouts like Nix does.

Upstreams do not like distribution-custom layouts. The backlash over Debian or Fedora re-layouting Python unilaterally would be way worse than the lack of parallel installability in the upstream Python default layout.

> Incorrect. In that companies libraries are maintained by their owner teams.

It’s awfully nice when you can order devs to use the specific versions maintained by a specific owner team.

Of course, most of your complaint is that you do *not* *want* to use the specific versions maintained by the distro teams.

So, it’s not a technical problem. It’s a social problem.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 9:05 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

> And that’s exactly what major distributions do
Except that it's impossible to use, because it's not possible to parallel-install libraries.

> when the language tooling makes it possible without inventing custom layouts like Nix does.
Then this tooling needs to be implemented for other languages. As I said, I've seen it done at the largest possible scale. It doesn't even require a lot of changes, really.

> Of course, most of your complaint is that you do *not* *want* to use the specific versions maintained by the distro teams.
Incorrect again. I would love to see a distro-maintained repository with vetted package versions, with changes that are code-reviewed by distro maintainers.

It's just that right now these kinds of repos are useless, because they move really slowly for a variety of reasons. The main one is the necessity to upgrade all versions in the distribution in lockstep.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 9:47 UTC (Sat) by nim-nim (subscriber, #34454) [Link]

>> when the language tooling makes it possible without inventing custom layouts like Nix does.

> Then this tooling needs to be implemented for other languages. As I said, I've seen it done at the largest possible scale. It doesn't even require a lot of changes, really.

Then don't complain at distros; write a PEP and make upstream Python adopt a parallel-version layout.

Major community distributions will apply the decisions of major language upstreams, it’s that simple. Major community distributions collaborate with major language upstreams. Collaborating implies respecting upstream layout choices.

In a company, you can sit on upstream decisions and do whatever you want (as long as someone finds enough money to fund your fork). That's ultimately completely inefficient and counterproductive, but humans do not like to bow to the decisions of others, so that's been done countless times and will continue to be done countless times.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 10:00 UTC (Sat) by nim-nim (subscriber, #34454) [Link]

>> When you attain this size you can afford the combination explosion. Most entities (starting with community distros) can not.

> I maintained several projects there and spent way less time on that than maintaining a package for Ubuntu and coping with it being broken by Python updates.

That "works" because you do not care about the result being useful to others. But wasn’t your original complaint, that the python3-theano maintainers didn’t care that their package was not useful to your commercial app?

So you want a system that relies on not caring about others in order to scale, to be adopted by distributions, because the distributions should care about you?

Don’t you see the logical fallacy in the argument?

All the things that you found too much work in the Ubuntu package exist so the result can be used by others.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 9:10 UTC (Fri) by smurf (subscriber, #17840) [Link] (3 responses)

> right now I have a problem with python3-theano that has broken one our application in the 1.0.4 version

Well, we all love modules with 550+ open issues and 100+ open pull requests …

So find the offending commit that broke your app and implement a workaround that satisfies both, or file an issue (and help the Theano people deal with their bug backlog, you're using their code for free, give something back!), or dropkick the writer(s) of the code that depends on 1.0.3 to get their act together. Can't be *that* difficult.

> Distros have dropped the ball here. Completely.

Pray tell us what the distros should be doing instead?

Insisting on one coherent whole, meaning one single version of everything, is the only way to manage the complexity of a distribution without going insane. Technical debt, i.e. the inability to work with 2.1 instead of 1.19 (let alone 1.0.4 instead of 1.0.3), must be paid by the party responsible. Not by everybody else.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 9:16 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> Pray tell us what the distros should be doing instead?
Parallel-installable versions of libraries, with a mechanism for creating a dependency closure for each application, with specific library versions.
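
The per-application closure is straightforward to compute once packages are keyed by (name, version) pairs instead of name alone (toy data, purely illustrative):

```python
def closure(root, deps):
    """Walk the dependency graph from `root`. Keys are (name, version)
    pairs, so two applications can close over different versions of the
    same library without conflicting."""
    seen, stack = set(), [root]
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        stack.extend(deps.get(pkg, ()))
    return seen

deps = {
    ("AppA", "2.0"): [("LibC", "1.0.3")],
    ("AppB", "3.1"): [("LibC", "1.0.4")],
}
# AppA's closure contains LibC 1.0.3; AppB's contains 1.0.4, side by side.
print(sorted(closure(("AppA", "2.0"), deps)))
```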

It's been done multiple times: in proprietary systems (like at the company where I worked) and in open source (NixOS).

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 10:17 UTC (Fri) by smurf (subscriber, #17840) [Link] (1 responses)

This idea dies a messy death as soon as your app or library requires sub-libraries A and B, library A requires version 1 of X, and library B needs version 2 of X. Co-installation is not a problem to be solved if the results can't co-exist in the same application. Most languages out there have no mechanism to support that and some libraries (those talking to real hardware for instance) wouldn't work that way anyway.

The distro's job is to assemble a coherent whole, which occasionally requires poking the people responsible for A to support X.2. There's no incentive whatsoever for the distro to support co-installation of X.1. Yes, it's been done, but that by itself is not a good argument for doing it again.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 10:26 UTC (Fri) by farnz (subscriber, #17727) [Link]

That works just fine in Rust - the library version is part of the symbol mangling, so as long as you don't try to use X v1 APIs on X v2 objects (or vice-versa), you're golden. Naming symbols from X v1 and X v2 in the same code is challenging and thus uncomfortable (as it should be!), but it's perfectly doable.

What doesn't work is using an object obtained from X v1 with X v2 code - the symbols are wrong, and this only works where X v2 is designed to let it work.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 13:44 UTC (Fri) by farnz (subscriber, #17727) [Link] (3 responses)

The issue is when AppA depends on LibB and LibC, and LibB also depends on LibC, but LibB and AppA want different versions of LibC. Someone has to do the work to ensure that LibB works with both its preferred version of LibC and with AppA's preferred version of LibC. Multiply up according to the number of versions of LibC that you end up with in a single app.
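
The diamond described here can at least be detected mechanically at closure-build time; the grunt work is deciding what to do about each hit (a sketch with invented names):

```python
def find_version_conflicts(wants):
    """`wants` maps each requester in one application's graph to the
    {dep_name: version} set it asks for; return every dependency that
    is requested at more than one version (the diamond case)."""
    by_dep = {}
    for requester, deps in wants.items():
        for name, ver in deps.items():
            by_dep.setdefault(name, {})[requester] = ver
    return {name: reqs for name, reqs in by_dep.items()
            if len(set(reqs.values())) > 1}

conflicts = find_version_conflicts({
    "AppA": {"LibB": "2.0", "LibC": "1.1"},
    "LibB": {"LibC": "2.4"},   # LibB disagrees with AppA about LibC
})
print(conflicts)  # {'LibC': {'AppA': '1.1', 'LibB': '2.4'}}
```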

At my employer, we handle this by hunting down those cases, and getting people to fix LibB or AppA to use the same version. Rinse and repeat - someone has to do the long tail of grunt work to stop things getting out of hand.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 18:39 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

Yes, it can happen.

But.

This happens during development when you try to create the dependency closure. So it's the app's developer (or maintainer) who is going to be resolving the issues. And they have choices like not using a conflicting library, forking it, just overriding the LibC version for LibA or LibB, etc. Typically just forcing the version works fine.

The same thing can happen in a full distro. But then you have to actually go and fix all LibA (or LibB) rdepends before you can fix your project.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:39 UTC (Fri) by farnz (subscriber, #17727) [Link] (1 responses)

That assumes that the versioning is such that the dependency closure can't be created, and that the test cases will catch the problem in time. I'm going to use plain C libraries as an example here, as symbol versioning is weakest in C.

If, for example, AppA depends on LibC 1.3.2 or above, and LibB depends on LibC 1.3.4 or above, but LibC 1.3.4 has broken a feature of AppA in a rare corner case, you're stuck - the dependency closure succeeds during development (it chooses LibC 1.3.4), and everything appears to work. Except it doesn't, because AppA is now broken. Parallel installability doesn't help - 1.3.4 and 1.3.2 share a SONAME, and you somehow have to, at run time, link both of them into AppA and use the "right" one.

Now, if AppA has depended on both LibB and LibC for a while, you'll notice this. Where it breaks, and where the distro model helps, is when AppA has been happily chuntering along with LibC 1.3.2; LibB is imported into the distro for something else, and bumps LibC to 1.3.4, breaking AppA. The distro notices this via user reports, and helps debug and fix this. In the parallel install world, when LibB is imported into the distro, AppA continues happily working with LibC 1.3.2; when AppA is updated to use LibB, then you get the user reports about how in some timezones, AppA stops frobbing the widgets for the first half of every hour, and you have more to track down, because you have a bigger set of changes between AppA working, and AppA no longer working (including new feature work in AppA).

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 20:12 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Sure. Simple parallel installability won't solve every possible issue, and you still can get bad behavior caused by a bug hit only in corner cases.

But this applies equally to ANY build infrastructure. I had Debian breaking my code because OpenJDK had a bug in zlib compression that manifested only in rare cases, I had once spent several sleepless days when Ubuntu had broken SSL-related API in Python in a minor upgrade. Bugs happen.

But even in these cases having a dependency closure helps a lot. It's trivial to bisect it, comparing exactly what's different between two states, since the closure includes everything. This is not really possible with legacy package managers.
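
The diffing step is indeed trivial once both closures are explicit; a sketch, with closures represented as plain name-to-version maps:

```python
def closure_diff(old, new):
    """Compare two dependency closures (name -> version maps) and report
    exactly what differs between a working and a broken deployment."""
    changed = {n: (old[n], new[n]) for n in old.keys() & new.keys()
               if old[n] != new[n]}
    added = {n: new[n] for n in new.keys() - old.keys()}
    removed = {n: old[n] for n in old.keys() - new.keys()}
    return {"changed": changed, "added": added, "removed": removed}

working = {"python3-theano": "1.0.3", "numpy": "1.17.0"}
broken = {"python3-theano": "1.0.4", "numpy": "1.17.0"}
print(closure_diff(working, broken)["changed"])  # {'python3-theano': ('1.0.3', '1.0.4')}
```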


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds