Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 7:41 UTC (Wed) by nim-nim (subscriber, #34454)
In reply to: Soller: Real hardware breakthroughs, and focusing on rustc by farnz
Parent article: Soller: Real hardware breakthroughs, and focusing on rustc

> Actually, it scales a heck of a lot better than the distro model; we have automated rebuilds and redeploys,

Congratulations, you have a scaling factor of one (company).

Designing things that can be reused by others is a lot harder than inventing one-of-a-kind company-specific systems.



Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 8:36 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (91 responses)

Why? CI/CD is now a common thing. It's not at all complicated to automate rdepends rebuilds, although direct support from tools like GitHub would help even more.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 8:46 UTC (Wed) by marcH (subscriber, #57642) [Link] (90 responses)

- It builds.
- Ship it!

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 9:18 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link] (89 responses)

Sure. Queue it for automatic tests and then put it on staging. After QA gives a sign-off, flip the version in LaunchDarkly and start deploying to internal customers.

Once the CS dept confirms that there are no new issues coming from them, start the gradual general deployment, incrementing the rollout percentage to 100% over the next week or so.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 9:35 UTC (Wed) by rodgerd (guest, #58896) [Link] (88 responses)

You're trying to educate people whose mental model of development practices stopped somewhere in 1998. Good luck.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 20:19 UTC (Wed) by marcH (subscriber, #57642) [Link] (3 responses)

In this particular case you're the one who couldn't see the difference between "rebuilds" and "automatic tests + gradual deployment".

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 20:22 UTC (Wed) by farnz (subscriber, #17727) [Link] (2 responses)

Who in their right mind does a rebuild and redeploy without automatic tests and gradual deployment?!? Surely, by now, that's implicit in "rebuild and redeploy"?

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 23:43 UTC (Wed) by marcH (subscriber, #57642) [Link] (1 responses)

> Surely, by now, that's implicit in "rebuild and redeploy"?

1. I wasn't answering you.
2. Not very convenient for the main topic to be implicit. Might explain some of the length and confusion.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 0:23 UTC (Thu) by farnz (subscriber, #17727) [Link]

Perhaps you shouldn't have been implicit about who you were answering if you dislike the confusion - I replied because LWN's commenting system sent me an e-mail as your comment was a reply to a reply to one of my comments.

And yes, I agree that the implicitness is a problem. A lot of what I'm seeing in this thread is people making assumptions about the work practices of another group which no longer apply today - and then asserting that that group is doing something bad on the strength of those assumptions about how it works.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 12:17 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (83 responses)

Oh the irony…

https://lwn.net/Articles/805305/

Maybe the education needs to go the other way? Or maybe you're claiming k8s is 1998's technology (in that case, what is your 2019 alternative? please do share)

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 12:20 UTC (Thu) by farnz (subscriber, #17727) [Link] (80 responses)

Just because distros are an imperfect solution, and k8s is also an imperfect solution, does not imply that they have nothing to learn from each other - and it looks like k8s is attempting to learn from distros, while you're saying that distros do not need to learn from Cargo, k8s, NPM, Jenkins, GitHub Actions, Azure Pipelines, and other such modern things that have learnt from distros.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 15:15 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (79 responses)

No, *you*'re assuming that because people on the distro side disagree with the dev POV, they don't understand the wonderful world of Cargo, k8s, NPM, Jenkins, GitHub Actions, Azure Pipelines, and other such modern things. Many people on the distro side have been trying to make those things work for a lot longer than many of the commentators here, without taking the usual handwaving shortcuts. You did know that Ansible was created on the Fedora side, not the dev side, right? There is very little software automation that was not tried distro-side first, because distros have huge scaling needs that other software projects do not have.

For example: I spent two years performing tens of thousands of Go component builds. When Google decided to go full power on modules this year, I filed upstream all the missing bits I needed to make modules work distro-side.

I was laughed out of the conversation by people who were convinced, like you, that they had learnt everything they needed to learn, that they were the future and distros the past - and who cared about them?

End result: the switch to modules as the default in Go was cancelled this summer. Between my report and the release date, most of the key points I had raised were reported independently by other people in other build contexts (often enough that the Go project could not continue dismissing them).

So much for distros not understanding new tech. Distros may not understand the coding side as well as devs. They certainly understand the building and distributing part way better than devs.

I hope we can finally make Go modules work reliably, in distros and elsewhere, this winter. But if they don't work out for distros, you can bet they won't work out for a lot of other people and organizations either. No matter how uncool and stuck in the past devs feel distros are.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 16:31 UTC (Thu) by farnz (subscriber, #17727) [Link] (40 responses)

I find your projection interesting - I'm actually largely on the distro side, just pointing out that there are other successful models than Linux distros. But you've made it quite clear that either I fall in line with your views based on your experience of Go (not Rust, the language under discussion!) or you will see me as the enemy, so maybe I should stop pushing for more dynamic linking…

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 18:16 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (39 responses)

Sorry about that.

I’m just so utterly fed up with all the “distros don’t understand modern tools and should learn some things” vibe.

There are people (not just me, and for every dev language - rust, go, python, take your pick) who spend months if not years distro-side transforming dev deliverables into something that can be used and reused in a generic way; and yet the first dev who hardcodes a product- and organisation-specific workflow in Jenkins, using distro components as a basis, is convinced he knows more about code management and building than distros do.

And that would not matter much (though it *is* annoying) if the same people who do not want to invest the time and energy to make things work were not also continuously lobbying against the changes needed by the people doing the fixing, and asking those people to drop their work and do it with the pet tool and practices of the day. Even when the pet tool and practices are demonstrably unable to handle anything beyond the perimeter the dev is interested in.

For example, for rust, the baseline question (how to build a whole system against a shared baseline) has not been answered. Distros (and others) are expected to adopt rust, and are shamed for not adopting it yet, for being backwards, etc, etc, when rust does not provide a way to define a system-wide integration baseline - and rust devs would not make this baseline possible even if the tooling were available, because they are continuously deprecating things and relying only on the latest features.

Because dev speed, right. The speed that translates into years of integration delay, unless the whole rust project is treated as one giant private Mozilla SCM tree. And that's innovating how, exactly? That's reproducing the BSD unified CVS tree. Except the BSDs managed to produce a whole system, and rust is nowhere near that so far - even with modern tooling the BSDs could not dream of.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 19:21 UTC (Thu) by farnz (subscriber, #17727) [Link] (37 responses)

The deep problem to face down in Rust is the dependency on monomorphization of generics. This is a deep and difficult problem that nobody has a solution to yet, and arguing for dynamic linking as the solution just reveals ignorance of the problem.

Monomorphization is a really good tool for developers - you write the code once, with generic parameters that have to be filled to make the code useful, and then the compiler duplicates the code for each set of filled in parameters; the compiler's optimizer then kicks in and reduces the set of duplicated functions down to the minimum amount of duplicated machine code given the parameters you've supplied. This is great for correctness - I've only got one version of my code to debug - and for performance - no indirect calls via a vtable, just optimal machine code. The trouble is that when you expose a generic as API, which is a common thing to want to do (e.g. Rust's Vec type is a generic), the result of compiling just your library is not machine code that could go in a dynamic library, but some form of intermediate representation that needs compiling further before you have machine code. However, for obvious reasons, the dynamic linker does not and should not include a full LTO phase.
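
To make that concrete, here is a toy sketch (the function and the call sites are invented purely for illustration):

fn largest<T: PartialOrd>(items: &[T]) -> &T {
    // One generic source function (panics on an empty slice; fine for a sketch).
    let mut max = &items[0];
    for item in items {
        if item > max {
            max = item;
        }
    }
    max
}

fn main() {
    // Two monomorphized copies end up in the binary: the compiler emits
    // separate, fully optimized machine code for T = i32 and for T = f64,
    // with no vtable indirection at either call site.
    println!("{}", largest(&[1, 5, 3]));
    println!("{}", largest(&[1.0, 0.5]));
}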

C++ avoids this by saying that if you have generics in your interface, the only way to use your code is as a header-only library. Rust builds to "rlibs", which are a mix of IR where the optimizer needs more information, and machine code where it's possible to generate that; these are effectively static only, because of the need to have an optimizing compiler around to turn the IR into machine code.

There are people on the Rust side trying to find a solution to this, but it's a deeply difficult problem. At the same time, people who work on Cargo (Rust's build system) are taking lessons from distro people on how best to manage packages given that you need static linking - e.g. "cargo-crev" helps you limit yourself to a trusted base of libraries, not just any random code, and makes it easier to audit what's going into your compile. Cargo makes it impossible (by design) to depend on two semver-compatible versions of the same library; you must pick one, or use semver-incompatible versions (at which point you can't interact between the two incompatible libraries - this permits implementation details of a library you use to depend on incompatible but supported versions of the same thing, for example). Cargo workspaces unify dependency resolution for a set of Rust code into one lump, so that I run my top-level "cargo build" and get warned up-front if I'm working with incompatible libraries in different parts of the same workspace, just as apt warns you if you depend on two incompatible dpkgs. Cargo permits you to yank broken versions from the central index, which warns people that they can't use them any more (e.g. because this version is insecure and you must upgrade). And there's work being done to make it possible to run your own partial mirror of the package indexes, limited to the packages that you have vetted already - so you can't even build against something old, because it's not in your own mirror.
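
For instance, a Cargo.toml sketch (the crate names and versions are only examples):

[dependencies]
# One version per semver-compatible range: any serde 1.x satisfies this,
# and Cargo picks a single 1.x for the whole dependency graph.
serde = "1"
# Semver-incompatible majors can coexist under distinct import names:
rand = "0.7"
rand_old = { package = "rand", version = "0.6" }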

This contrasts with Go, where it's common to specify the library you require as a VCS tag from a public VCS hosting service like GitHub. Cargo makes that possible if you really, really want to do it, but it's not easy to do - the goal is very much focused on being able to build up a set of trusted "vendored" packages in your workspace, and then build your code only against the trusted vendored packages, offline, in a reproducible fashion, even if the rest of the world becomes inaccessible.
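
A sketch of that vendored, offline flow (assuming Cargo's built-in vendor support; the directory name is arbitrary):

$ cargo vendor vendor/      # copy every dependency into ./vendor
$ cat >> .cargo/config <<EOF
[source.crates-io]
replace-with = "vendored-sources"
[source.vendored-sources]
directory = "vendor"
EOF
$ cargo build --offline     # builds against ./vendor only, no network access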

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 21:02 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (35 responses)

That’s very nice to hear; things seem to be improving on the rust front. Thanks a lot for the positive summary!

From a distributor point of view, the only things missing to make the build part manageable are an easy way to point builds at a system-wide workspace, and a way to populate the index of this workspace in a granular way. Basically:
1. the build system asks cargo for the list of crates + minimum semver needed for a build,
2. the answer is translated into distribution package dependencies,
3. the build system installs those dependencies,
4. each dependency installs the corresponding crate code + the part of the system workspace index that tells cargo to use this crate and nothing else.

The new Go module system is a lot less elaborate than what you describe. However, it will in practice allow achieving 1-4 (not always by deliberate upstream design, but the end result will be the same).

I agree that the Rust community has generally nicer and saner behaviors than some I’ve seen Go side. However, natural selection will keep only fit projects alive in both communities. As long as the core tooling enables good practices, I’ll expect the kind of convergent evolution we’ve seen between rpm and apt (for example).

On the deploy front, however, without shared libs, a large number of applications with correct system-wide build dependency management probably implies frequent cascading rebuilds, with the associated stress on mirrors, network, disk, and users. Even more so if the dev community lacks discipline and continually churns out code that does not work with older semvers.

We’re not seeing this effect yet because, first, there are not that many first-class applications written in those new languages, and second, there are no effective system-wide build dependency management systems yet (so we’re missing a large number of rebuilds that would probably be desirable from a security or bugfix perspective).

Users are deeply suspicious of mass updates. The software world as a whole has failed them all too often. It’s a lot easier to make them accept a single lib update than the replacement of large sets of apps that happen to have the same code built in (and I’m talking about technophile users here, not the kind of user targeted by RHEL).

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 21:36 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

I should add, the deployment part is manageable (though it will be hard on low-resource networks and hardware).

The critical part is managing a system-wide shared layer, either at run time (shared libs) or at build time.

You can buy new network links and new hardware. Without a system-wide shared layer, you need a lot more human brain-cells, and that’s a lot harder to procure, especially in a community context.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 12:00 UTC (Fri) by farnz (subscriber, #17727) [Link] (33 responses)

All of your asks have been possible with Cargo for some years now.

  1. You can use cargo read-manifest to get a machine-parseable JSON representation of the package spec, including the dependencies for this package and their semver requirements.
  2. With that spec in hand, you can do the translation however your distro wants to do it.
  3. You install these source dependencies from your format into a Cargo workspace, and add a Cargo.toml to tell Cargo that it's using this workspace.
  4. Having done 3, you run cargo build --offline to build inside the workspace, using only the contents of the workspace. You then use distro mechanisms to package up the built artifacts (see the sketch below).
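
A sketch of steps 1 and 4 (the jq filter, the crate name, and the crate() dependency naming are illustrative only):

$ cargo read-manifest | jq -r '.dependencies[] | "\(.name) \(.req)"'    # step 1
serde ^1.0
$ dnf install 'crate(serde) >= 1.0'    # steps 2-3, distro-specific
$ cargo build --offline                # step 4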

Given that distros still aren't happy with the Rust story, I'm guessing that you've missed a requirement or two that matters to distros here - it would be helpful to know what's missing so that it can be added to Cargo.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 16:30 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (32 responses)

Honestly? What you describe looks terribly inconvenient to use and automate, compared with C/C++ (use system libs by default) or Golang (point the GOPROXY shell variable at the system directory containing the system Go modules, append a version to a plain-text list file to register a module version in the index).

Nevertheless, it is probably automatable (with lots of pain). But it is useless if the crates themselves are constantly churning and invalidating the system workspace.

If the system workspace is too hard to set up and use, no one will try to apply a code baseline, and creating a baseline from scratch from projects that didn’t try to converge on a baseline is hard.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 16:39 UTC (Fri) by farnz (subscriber, #17727) [Link] (4 responses)

If all you want is what C/C++ and Golang give you, just set up a workspace and build in there. Most of the steps I've described only exist because you wanted not to depend on what was present on disk, but to identify the missing packages and build them; C/C++ don't even have dependency tracking for you to copy from.

If all you want is what Golang provides, create a file Cargo.toml in your buildroot, containing a workspace section:

[workspace]
members = [
    "/path/to/system/crates/*",
    "*",
]

Then, unpack your crate into a subdirectory of your buildroot, and cargo build --offline will do the needful using the system crates.
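
For example (the crate name is invented):

$ cd buildroot
$ tar xf mycrate-1.0.0.crate    # a .crate file is just a gzipped tarball
$ cargo build --offline         # resolves mycrate's deps against the system crates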

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 16:50 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (3 responses)

Well, that’s still more inconvenient than setting a variable, or than cargo defining default system crate directories, but it looks a lot better (also, the cargo examples do not seem to version the paths of crates, so cyberax will be unhappy, because that prevents parallel installation).

However, I suppose that what makes my distribution friends working on rust most unhappy is the constant dev churn: devs do not know or use the default system crate locations, and just push change after change without trying to reuse stabilized crate versions.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 17:01 UTC (Fri) by farnz (subscriber, #17727) [Link] (2 responses)

In practice, you'd do this setup once, and not repeat it - unlike an env var, it's then persistent forever. There's no need to version the paths of crates; crates include metadata that identifies their version, and you can rename them to include versioning without ill effect in your system crate pile.

And the dev churn really isn't that bad in Rust - not least because Cargo does semver-aware versioning to begin with, so most crates ask for things by semver-aware version.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 20:05 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (1 responses)

> In practice, you'd do this setup once, and not repeat it - unlike an env var, it's then persistent forever.

That’s a dev POV. A dev sets up their environment; it then never changes.

To get distribution synchronization, you need an environment shared by every dev, every packager, and the buildsystem. That means either a hardcoded default location, or a setting in a default system config file, or at least something dirt-easy to set, like an environment variable.

Not relying on everyone manually setting the same default in each and every build env.

> There's no need to version the paths of crates;

Unless there will never be two rust apps that need different versions of the same crate to build, you need to version paths, because otherwise the paths will collide at the shared distribution level.

And you can say

> you can rename them to include versioning without ill effect in your system crate pile.

Yes, we can do a lot of things at the system level. At some point people tire of working with stuff that needs massaging before it can be used.

> most crates ask for things by semver-aware version

If crates always ask for the latest semver available - because devs have to pull things from the internet anyway (there is no shared system crate store), so why not pull the latest one while you're at it - you *will* get a race to the next version and constant churn.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 20:14 UTC (Fri) by farnz (subscriber, #17727) [Link]

Crates do not ask for the latest semver available, because Cargo locks down crate versions on the first build of a binary target. So, once I've specified (say) lazy-static = "^1", the first time I do a build with that dependency, Cargo will record the version of lazy-static it found, and use that for subsequent builds; I have to explicitly demand an update from Cargo before it will move on to a new version.

If I'm working in a workspace, then Cargo will resolve deps workspace-wide, and lock me down workspace-wide; this is the recommended way to develop Rust, as it means that you don't bump versions of crates you depend upon unless you explicitly ask Cargo to do that.
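
Concretely (the version numbers are hypothetical):

$ grep lazy-static Cargo.toml
lazy-static = "^1"
$ cargo build                    # resolves to, say, 1.4.0 and records it in Cargo.lock
$ cargo build                    # stays on 1.4.0 even after 1.5.0 is published
$ cargo update -p lazy-static    # only now does Cargo move to a newer ^1 version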

And I would note that this is no different to my experiences as a C developer working on the kernel and graphics drivers - when doing that, I used the kernel, libdrm, Mesa3D, intel-gpu-tools and the entire X.org stack from git to avoid duplicating work, and ensure that if I *did* have a fix for a problem, it wouldn't conflict with upstream work. The only difference in Rust is that, right now, there's a lot of churn from Rust being a young language.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 16:41 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (26 responses)

To give another example: you want the system to know about a font file. You drop it in
/usr/share/fonts
~/.local/share/fonts/
or $XDG_DATA_HOME/fonts if you like non-standard layouts

and boom, the system knows about it as soon as the fontconfig index is rebuilt (and you can force the rebuild with fc-cache).

That’s how hard it should be to register a crate in the system workspace. Drop it in a standard place. Optionally, run a reindex command.

And using this store should be either the default behaviour or just a shell variable away.

Anything more complex than that will require more motivation to use. Humans are not good at using inconvenient things by default.
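
I.e., something like this hypothetical flow - both the location and the reindex command are invented here, which is exactly the point:

$ cp serde-1.0.104.crate /usr/share/cargo/registry/
$ cargo reindex-system-store    # hypothetical analogue of fc-cache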

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 16:44 UTC (Fri) by farnz (subscriber, #17727) [Link] (25 responses)

Hang on - you asked how to translate the deps chain from Cargo to the distro packaging system, and now you're talking about how you bypass the distro packaging system.

This is moving the goalposts - an equivalent would be: given that I need the font Kings Caslon, how do I find and install the Debian package for it? It's a *lot* harder than just copying the file into place - I have to somehow translate the font name "Kings Caslon" into a Debian package name, download and unpack *that* package, and *then* I can drop it into place.

If all you want is to use a system package source, just put the package sources in a known location, and (as per my other comment) set up a workspace to build your new code in. Job's done - all of the rest of the work is about setting up that system package source to begin with.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 18:42 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (24 responses)

On my distribution it’s just:

# dnf install 'font(kingscaslon)'

Sorry about that :) It is good that we're having this conversation.

What you feel is difficult is not difficult at all distribution-side. Distributions had to invent names for all the things they ship, to make them exist in their index. So defining naming conventions is part of the distribution's bread and butter.

In my example the distribution did not even invent a mapping; the mapping was defined upstream - it's the output of

$ fc-query --format '%{=pkgkit}' <fontfile>

I see we come from very different perspectives, and what is evident to me is not to you (and vice versa). Therefore, I will expand quite a bit.

Distributions want to make things available to the rest of the distribution, and to their users.

For rust, those things necessarily include the crates rust software was built from, because rust does not use dynamic libs. Therefore base-lining at the source-code level is imposed by the language. Base-lining is a requirement for coordinating the work of all the distribution contributors.

Things that distributions can handle best:

1. exist under a standard system-wide root (/usr/share/fonts, /usr/lib, FHS and XDG are distribution keystones)

2. have a standard filename or path under this root that avoids collisions between different artefacts and versions of the same artefact

3. are discovered and used by whatever needs them as soon as they are dropped under the default root with the default filename (after reindexing, if needed)

4. have standard artefact ids and versions, and standard version conventions that map to something similar to semver

And then, for convenience (but convenience matters a lot to over-committed distribution contributors):

1. you have a command that takes things to deploy from anywhere on the filesystem and deploys them to their standard root (with $PREFIX handling, pretty please - many build systems use fakeroots, not advanced filesystem overlays)

2. you have a command that reindexes the whole root, if the language needs it (fc-cache, ldconfig)

3. you have a command that can output the artefact ids and versions corresponding to a filename (fc-query)

4. you have a command that outputs what a new artefact needs in various contexts (for building, for running, for testing, etc). Need = artefact ID + minimal semver, not artefact ID + locked version.

All of this can be recreated and redefined by distributions if not provided upstream. However, the more work it is, the more likely distribution people are to focus on mature languages for which this work was done long ago (i.e. C/C++).

A big difference with IDEs and dev tools is that distributions emphatically do *not* want to frob structured formats to do things. Just give us standard commands. We don’t want to do xml for language A, yaml for language B, json for language C, toml for language D, ini for F, etc. Format wars are dev fun, not integrator fun. Everything as a file under a standard place, with a bunch of standard manipulation commands and possibly a set of standard shell variables, is plenty enough for us.

When all of this exist, packaging a new upstream artefact is just:

1. take the upstream bunch of files

2. execute the command that tells what other artefacts and artefact versions they need for building (and optionally, testing)

3. translate the artefact ids into distro ids
(mostly adding a namespace like font(); upstream naming tends to forget that it will be injected into a dependency graph that includes many different kinds of artefacts)

4. use semver to map the minimal versions asked for by the artefacts onto whatever is available in the distribution baseline

5. install the result (make it exist in the standard root, reindex the result)

6. run the standard build command

7. run the standard install command (with prefix-ing). In the rust case, that would include the command that deploys the crate under the default root, for reuse

8. run the standard test command

9. run the standard "what are you" command on the resulting files, so the corresponding ids and versions can be added to the distribution index

Forcing a version is possible in 4, as a manual exception in the build process, but it’s not the default behaviour, because doing otherwise just piles up technical debt.

And that’s basically all tooling-side. Once this part is done, there is no difficulty in packaging large numbers of artefacts, as long as their code and dependencies are sane and the churn is not incompatible with the target cadence.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:02 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (22 responses)

In other words: most of it is not rocket science, it’s just strict conventions and good defaults.

Conventions and defaults create network effects. Humans thrive on habits and routines.

The difference between “plop the file here and it will be used” and “just edit this file in <my pet> format, for <every> artefact, and set <whatever> shared root you want to use” is technically slight but huge from a human networking point of view.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:14 UTC (Fri) by farnz (subscriber, #17727) [Link] (21 responses)

The thing is that I'm a Fedora packager - none of what you're explaining to me is news to me, and as someone who *also* develops in Rust, I don't see the difficulties in mapping Rust to Fedora packages (although I'm aware that they exist, they are largely a consequence of Fedora fighting hard to dynamically link instead of statically link, which, due to the monomorphization problem, is a hard problem for Rust).

Can you please stop assuming that I'm clueless, and start focusing on what, exactly, it is about Rust and Cargo that makes it hard for distros to package Rust code, beyond the fact that Rust crates are hard to dynamically link using a linker designed around the needs of 1980s C code?

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:48 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (20 responses)

Fedora is not “fighting hard to dynamically link” Rust. As far as I’m aware a lot of Rust code on the Fedora side is statically linked (and anyway, static vs dynamic linking never comes up in Rust packager reports).

Fedora is fighting hard to overcome the churn on the Rust side.

Some of the most awful and broken things in modularity were constructed specifically for Rust - not to dynamically link, but to cope with the terrible amount of churn the SIG had to face.

And why do you get this churn in Rust? Because Rust has no notion of a shared code state layer at the system level. There is no baseline to build upon; every dev just bumps their code’s needs all the time.

The baseline could be constructed with shared system libs (which rust does not have) or with a default shared workspace at the system level (which rust does not have either). Or maybe someone will have another idea.

The fact is, this baseline does not exist now. Without something to synchronize on, people do not synchronize. Without synchronization, it’s a race to the latest version bump. It’s no more complex than that.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 20:03 UTC (Fri) by farnz (subscriber, #17727) [Link] (19 responses)

Rust does have the notion of a shared state - it's part of the workspace model. And the only reason the churn was an issue for the SIG (I followed this, BTW) is that it becomes impossible to maintain shared library packages for crates when they are being developed at high speed, and you don't have tooling to use Cargo to keep up with the churn.

In other words, this isn't a technical issue - it's that Rust is currently a young language, and developers are releasing early and releasing often. The only reason Fedora doesn't have a churn problem with C++ or C code is that the libraries in use in C and C++ have been around for a long time, and are consequently relatively stable compared to Rust libraries; thus Fedora being 12 months behind current doesn't actually make much difference.

If Fedora wanted to, it could treat the churn in Rust the way it does in C and C++ code - stick to relatively stale versions, and hope that the features users want are present. The trouble with that from Fedora's point of view is that with the current pace of Rust application development, Fedora would look bad because of all the missing features that upstream advertises, but Fedora does not support.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 20:21 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (11 responses)

> If Fedora wanted to, it could treat the churn in Rust the way it does in C and C++ code - stick to relatively stale versions, and hope that the features users want are present.

That would not work for Rust.

That works for C/C++ because the distribution’s version choices are materialized in dynamic libraries. It’s easy for upstreams to check which library versions are used by major distros. It’s easy to make sure they have something that works with those versions, if they want to reach users. That also pushes upstreams to agree on the next version they’ll consolidate upon, because they know distros won’t ship every possible variation under the Sun.

Yes, I know, terrible dev oppression with stale versions.

In the meanwhile, the stuff produced by the oppressed devs gets distributed, and the stuff produced by high-speed rust devs does not.

A formula one car can be reinvented for every race, and stop at the pits every lap (high-speed car fixing). A normal car had better run a long time without changes. Changes to normal cars are better batched R&D-side for next year’s model.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 23:36 UTC (Fri) by farnz (subscriber, #17727) [Link] (10 responses)

I disagree fundamentally with you, then; the only reason I ever use my distro's -devel packages for work C and C++ code is that it's usually a right pain in the backside to go to upstream and build what I want for myself. Cargo makes it easy for me to build my upstream Rust dependencies for myself, so why wouldn't I do that?

This leads to different incentives - for C and C++ libraries, I want to make life easy for distros, not because distros are necessarily good, but because they are considerably easier to deal with than either vendoring all my dependencies or getting my potential users to build them in a way that I can use. In contrast, over in Rust land, I can tell my users to run cargo install $cratename, and my crate is automatically built and installed for them.

So, from where I'm sitting as a dev, the distros fill a gap in the C and C++ ecosystem that's filled by Cargo in the Rust ecosystem. All the distros need to provide is C and C++ packages, plus rustup, and I can get users to install my code without further involvement from the distro. Remember the trifecta: users, operations, developers; because C and C++ have (quite frankly) awful tooling for non-developer installation of code, there's room for distros to get in quite a lot of operations stuff while acting as a link between developers and users, because if distros refuse to provide that link, the users can't get the developers' code to work.

In contrast, because Cargo provides most of what a package manager provides to a user (install, upgrade, uninstall, dependency resolution, rdep handling), if distros try too hard to get in the way, I can bypass them relatively easily. You thus have a problem - the operations leverage over developers in the C world is that without the distro compromise that operations will accept, users can't run developer code. This isn't true in the Rust world - Cargo is good enough as a package manager that I can bypass operations.

So, distros no longer have leverage; how do operations get heard in this world, given that if you push back against good development practices like "release early, release often", users will bypass distros to get the tooling they want? Remember that Linux is already a minority OS - if you're saying that you're going to hold it back further, users will just bypass distros if it's easy to do so, as they're the people who, today, are already motivated to find solutions to problems beyond following the pack.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 23:56 UTC (Fri) by pizza (subscriber, #46) [Link] (5 responses)

> Remember that Linux is already a minority OS

Not according to Microsoft -- As of July 2019, Windows Server is now the minority on Azure [1], and that's the cloud provider with the lowest Linux percentage.

[1] https://www.zdnet.com/article/microsoft-developer-reveals...

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 23:58 UTC (Fri) by farnz (subscriber, #17727) [Link] (3 responses)

That's server-side, not total - and server side is exactly the place where you have developers who can run tools like Cargo instead of being helplessly dependent on distros. In other words, that's where distros are least useful to begin with, beyond the base libraries to get your developers going.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 0:43 UTC (Sat) by pizza (subscriber, #46) [Link] (2 responses)

Okay, you've managed to completely lose me. You've said that Linux is both minority and it isn't, that this is relevant and it isn't, and distros don't matter -- except when they do.

I have no idea what point you're trying to make, beyond "distros are useless, because reasons"

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 11:42 UTC (Sat) by farnz (subscriber, #17727) [Link] (1 responses)

I'm saying that Linux distros are not a significant driver for distribution of code; server-side, you do whatever the devs want you to, client side is iOS, Android, Windows etc.

This, in turn, pulls down their influence - why should the authors of Krita, or someone writing code to run on Amazon AWS or Google Cloud Engine, care if their users have to run "cargo install", or "yarn install", instead of "apt install"? Unlike C++ land, where dependency management is otherwise manual, modern languages don't need distribution packaging to be easy to install and use code written in those languages - and that means that distros no longer hold the place of power they do in C++ land.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 8, 2019 21:47 UTC (Sun) by flussence (guest, #85566) [Link]

Distribution of code and walled-garden proprietary OSes are two different universes.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 0:37 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

And likely most of these servers use language-specific package managers, only utilizing the distros for the base system.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 10:19 UTC (Sat) by nim-nim (subscriber, #34454) [Link] (3 responses)

> This leads to different incentives - for C and C++ libraries, I want to make life easy for distros, not because distros are necessarily good, but because they are considerably easier to deal with than either vendoring all my dependencies or getting my potential users to build them in a way that I can use.

And, for Rust, you do not want to make it easy on distributions.

You’re pushing complexity on distributions.

You’re pushing complexity on users (cargo install foo is nice for a limited set of fringe components; do you actually expect users to construct the thousands of components composing a system like that? That requires big-company levels of hand-holding).

And you’re surprised distributions feel C and C++ libraries are better suited for systems work (what distributions do)? And do not want Rust anywhere near their core in its current state?

Really, what did you expect?

A system that does not incentivize making the life of others easier will result in little adoption by those others.

And you can feel distributions are “not necessarily good”, but that’s irrelevant unless you want to pick up the distribution work yourself. That won’t leave you with much time for dev, but such is real life: removing someone who provides a service to you does not make the need for the service go away.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 11:49 UTC (Sat) by farnz (subscriber, #17727) [Link]

You're missing the point - code is written by developers. If developers move en-masse to a new language, distros don't get a choice about wanting C instead of Rust - they have to live with it, because the users of code want the new features of the new versions of code.

In C land, distros have power because there's no dependency management without them. In Rust land, if distros get between users and devs, they're trivial to bypass. We've already seen what happens with C code where the developers and users agree that it's beneficial to move to the new shiny - distros have to move, too, whether they like it or not, because users want the new GNOME etc, and thus distros have to accept GNOME's dependencies.

Stop thinking that distros matter to users - they don't, particularly, and there's a reason that no other OS has a distribution-like model. Users care about applications that make their computers do useful things; if distributions make it harder to get useful things than bypassing distributions, then users will bypass distributions.

If the distributions can't work out how to work with Rust developers, and system software migrates to Rust, then distributions will get reduced to sufficient tooling to run rustup and Cargo, because that's all that users will want from them; if distributions don't want to end up here, then they need to work with Rust now to get to a state where they are still relevant.

And note that Rust does make it as easy as it can on distributions - Cargo even translates its own dependency information to a simple JSON form for you, with or without resolving versions at your discretion. If distributions want Rust to do more than it does for them, then they need to engage with the Rust community to explain what they need and why.

The discussion with nim-nim elsewhere in this thread is typical, FWIW - as they asked for things from Rust, I explained how to get what they want from Cargo, and then they went off into unrelated territory. My favourite part of that was their claim that I just need to run dnf install font(kingscaslon), and dnf would package my font for me, then install it…

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 13:11 UTC (Sat) by pizza (subscriber, #46) [Link] (1 responses)

> And you can feel distributions are “not necessarily good”, but that’s irrelevant, unless you want to pick up the distribution work yourself. That won’t leave you with much time for dev, but such is real life, removing someone that provides a service to you, does not make the need for the service go away.

This is the bit that I keep coming back to. _someone_ has to do this work, and if it's not the distros [1], and not the language/tool vendors, then it's going to have to be you [2]. It's legit to say that the traditional distro model has its flaws and room for improvement, but one would have to be pretty naive to claim that the sorts of problems and concerns distros have traditionally solved (i.e. "testing, deploying and maintaining configuration-managed baseline systems useful for 3rd parties to develop on/for") somehow no longer matter.

[1] Not just distros ala Ubuntu, but whatever k8s and various proprietary cloud vendors come up with to fill the same role.

[2] "you" can mean an individual developer or an tools/platform team that big orgs tend to have. With every one of them reinventing the same wheel. If only there were independent organizations that could take on this work for the benefit of all involved... I know, we'll call them "distributions"...

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 14:12 UTC (Sat) by farnz (subscriber, #17727) [Link]

And therein lies the rub - part of the problem here is that distros are saying that someone else should do that work - and it looks like, at least for Rust, the language/tool vendors are going to do it well enough for users' and developers' needs, and simply bypass the distros altogether, with things like crater for testing the entire package ecosystem in a single build.

Note that this has already happened for JavaScript - if that's your language of choice, you get Node.js from the distro, and then switch to NPM or yarn from there on in; users are happy to run bundles created by developer tooling, and the distro becomes an irrelevance as soon as it's good enough to get Node.js to run. Maybe that's a good end point for distros - enough runtime etc to let the language tooling take over?

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 20:32 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (6 responses)

I do agree it’s not a technical issue, because technical choices reflect human intentions.

So, if the community wanted it, the technical aspects could be changed.

However, it is too easy to attribute it to the language age. Java never outgrew the churn state, and few would call the language young today. Early choices, made when a community is young, and its members few, easily set in hard habits later.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 4:18 UTC (Sat) by mathstuf (subscriber, #69389) [Link] (5 responses)

There is work on picking minimum versions rather than maximizing them when selecting dependency versions. There are a number of packages which don't specify accurate minimum versions in their crates, but I've been trying to sweep my dependencies for them at least. But any recursive fix needs a new release which bumps the minimum for the depending crate… so, not exactly the easiest thing. But if/when that lands in stable, I'd expect CI for such builds to become more popular.

Then distros' job would be to say "hey, there's a security fix in x.y.z, please update your minimum to x.y.z+1 and make a new release", and every stable-tracking distro gets a similar fix available. Of course, if the distros provide an index that cargo can read, just not providing the minimum and instead offering a minimum-with-fixes (treating the intermediate versions as "yanked") will likely have the same effect. Investigation into that as a viable alternative would be necessary.
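
For reference, a sketch of the nightly-only mechanism involved (the flag is unstable and subject to change):

$ cargo +nightly update -Z minimal-versions    # lock deps to the declared minimums
$ cargo +nightly build                         # build against that lockfile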

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 8:46 UTC (Sat) by nim-nim (subscriber, #34454) [Link] (4 responses)

> any recursive fix needs a new release which bumps the minimum for the depending crate…

That’s the thundering-herd effect that makes it impractical to package large amounts of software if upstream devs do not coordinate their version needs at least minimally (they can coordinate via shared libs, a unified master company SCM like Google’s, anything else - but the coordination is necessary). semver alone does not work if everyone keeps requiring semver tip versions.

You can delay the effect with containers or static linking, but as soon as a critical fix is needed, it comes back with a vengeance. The dev dream of “any dev uses whatever versions he wants and the rest of the supply chain will cope” is utopian in the presence of imperfect code, which will always require an eventual fix.

> Of course, if the distros provide an index that cargo can read

To find out what is available on my current distribution:

$ dnf repoquery -q --provides $(dnf repoquery --whatprovides 'crate()') | grep 'crate('

crate(abomonation) = 0.7.3
crate(abomonation/default) = 0.7.3
crate(actix) = 0.8.3
crate(actix-codec) = 0.1.2
crate(actix-codec/default) = 0.1.2
crate(actix-connect) = 0.2.3
crate(actix-connect/default) = 0.2.3
crate(actix-connect/http) = 0.2.3
crate(actix-connect/openssl) = 0.2.3
crate(actix-connect/rust-tls) = 0.2.3

To install one of those

$ sudo dnf install 'crate(abomonation) = 0.7.3'

That could probably be streamlined and plugged into cargo if someone wanted to (it would be much better if cargo provided a standard name and version mapping for rpm and apt at least; that would simplify reading it back cargo-side. The artefact naming is pretty flexible distribution-side, but the version format less so, as it is used to compute upgrade paths. The original semver format, before non-Linux people added rpm- and deb-incompatible things to it, was easy to map).

And then the result of the install is useless for the rust dev if there is no system workspace enabled by default. The crates will be installed but not used.

> Of course, if the distros provide an index that cargo can read, just not providing the minimum and giving a minimum-with-fixes available (treating the intermediate versions as "yanked") will likely have the same effect.

Yes, that’s the most likely scenario distro-side. There is a limited set of checked and available versions; anything else is considered yanked. Doing things any other way means drowning under version combinations.
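
For illustration, in a crates.io-style index a yank is just a flag on that version’s line, so a distro-provided index could mark every unvetted version as yanked (fields abbreviated, entries modeled on the listing above):

{"name":"abomonation","vers":"0.7.2","yanked":true, ...}
{"name":"abomonation","vers":"0.7.3","yanked":false, ...}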

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 11:52 UTC (Sat) by farnz (subscriber, #17727) [Link]

Every one of those crates is using the Cargo name and versioning - there is a trivial mapping from Cargo to dnf notation, and back again. The trouble is that the distro has not written the code to do that mapping for me, but is expecting that Cargo will magically stop using its own dependency resolution and handling (which is at least as good as dnf's) and start using the distro's code, just because that's what the distro wants.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 13:33 UTC (Sat) by mathstuf (subscriber, #69389) [Link] (2 responses)

> That’s the thundering herd effect that makes it unpractical, to package large amounts of software, if upstream devs do not coordinate their version needs a minimum (and they can coordinate via shared libs, unified master company scm like Google, anything else but the coordination is necessary). semver alone does not work if everyone keeps requiring semver tip versions.

If OpenSSL 1.0.2j has some critical fix, why would I want to allow my software to compile against 1.0.2i anymore? Bumping the minimum helps *everyone* get better versions over time, not just those who remember to rebuild their OpenSSL.

The current flurry of changes needed for minvers support is because it is new in Cargo. If it had been there from the beginning, it likely would have been much easier to handle. But this is the same problem as with any bootstrapped initiative, be it reproducible builds, new architectures, etc. The initial push is large, but the constant churn should be quite low.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 14:09 UTC (Sat) by smurf (subscriber, #17840) [Link] (1 responses)

No remembering should be necessary. Just take each package depending on libopenssl-dev, rebuild it, test it (the missing piece of the puzzle in all-too-many packages …), upload the result if it changed, rinse&repeat.

All of this can happen automagically, given modern tooling (which Debian, for one, is slowly moving towards - much too damn slowly for my taste) … except for the "fix the inevitable regressions" part, of course.
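
A sketch of such a loop (grep-dctrl is from dctrl-tools; rebuild, run-tests and upload-if-changed are placeholders for real archive infrastructure):

# rebuild every reverse build-dependency of libssl-dev
for src in $(grep-dctrl -F Build-Depends libssl-dev -s Package -n \
        /var/lib/apt/lists/*_source_Sources | sort -u); do
    rebuild "$src" && run-tests "$src" && upload-if-changed "$src"
done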

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 8, 2019 11:33 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

I think you're missing the minimum-deps thing. With that resolution (which avoids ratcheting to newer-and-newer deps for everyone), you get the *oldest* version that is compatible. Distros can just not provide the old version, and the lowest compatible version would be sufficient. But for the sake of everyone else, sending a patch upstream that says "use a newer version of your dep, which solves $some_problem" would be great.

I'm arguing that Debian shouldn't just put its own house in order and not let upstream know that there's an issue with one of their deps.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:10 UTC (Fri) by farnz (subscriber, #17727) [Link]

Yet again, I am completely lost in what you're suggesting - you keep jumping around between what has to be done to package something (which is the core of the discussion), and what can be done when you *have* packaged something.

I have an unpackaged TTF file for Kings Caslon (it's a proprietary font, so not in any distro). When I run dnf install font(kingscaslon) on Fedora 31, it does not find my TTF, package it, and install it in the right place - how do I make that command work for me, since you claim it solves the problem of packaging a font?

Rust has already solved the naming problem to Rust's satisfaction - in practice, you use the name any given crate has on the primary crate index for the community, and you're done. For Rust's needs, because Cargo does semver-aware version resolution, and tries to minimise the number of crates it pulls in subject to dependency resolution, this also keeps the dependency tree under control.

It sounds, from what you're saying, like distros can't handle C libraries - they don't meet 2 or 4. Rust meets all four of your requirements, and the only bit missing is the command to translate from Rust's crates.io names to distro names, and to use that for dep resolution instead of asking Cargo to download from crates.io.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 9, 2019 1:22 UTC (Mon) by kvaml (guest, #61841) [Link]

Thanks for the education to this non-Rust user. That was very clear and lucid.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 10, 2019 12:41 UTC (Tue) by jezuch (subscriber, #52988) [Link]

I'm a little late to the party and it's already too late for what I'm proposing, but the Rust project has just finished collecting proposals for things to focus on in 2020. This year, for example, the focus was on things like ergonomics (IIRC). Making Rust work well with distributions would be a valid goal for the project to focus on, IMVHO. So the best place to rant about this would be #rust2020 :)

Just my 2¢

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 18:22 UTC (Thu) by marcH (subscriber, #57642) [Link] (37 responses)

> No *you*’re assuming that because people ... disagree ... they don’t understand ...

I think we're getting dangerously close to "he said that she said that they said..." :-)

> So much for distros not understanding new tech. Distros may not understand the coding side as well as devs. They certainly understand the building and distributing part way better than devs.

Well said.

> I was laughed out of the conversation...

By the way: while sometimes tense, I found this particular discussion here probably the most interesting on the topic I've seen yet. I didn't see anyone laughing anyone else out of the conversation yet.

While this topic is typically territorial/tribal (we're only human), I think the real issue is not so much the ideal design and place to implement release management, but more who has the time, skills, resources and motivation to _actually do it_. In other words:

- if "upstream" developers suck at software distribution and integration (often true), then better let distributions do the work. They've done it for ages.
- elsif PyPI/NPM/cargo/k8s/other [finally] gets a software distribution clue (https://lwn.net/Articles/806230/ k8s), then distros should probably stop duplicating the effort and getting in the way. As you wrote, devs are more specialised and know their particular technology better. They just need to realise at last that writing product code is a tiny part of software.

I don't care that much whether the command I need is "dnf update python3-foo*" versus "pip3 update foo*". What I really care about is the release and QA efforts behind it - the results of which should be easy to browse. "Open software", not just "open source".

A third possibility is for developers to work in Linux distributions directly. Unfortunately there are too many of them, so this typically translates into just one Ubuntu repository.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 19:26 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (36 responses)

Honestly, I don't think distributions care a lot about dnf or apt or whatever either. Sure, a lot of distributions have been created around a particular package manager, but most distribution folks are a lot more agnostic about the technical packaging format than people think (otherwise there would not be so many distributions that reuse the package format of others).

Distributions care about integrating.

Therefore, first, distros won't have a hard time migrating to demonstrably better packaging technology, if it ever becomes available (the apt/rpm duopoly exists because apt and rpm are so close capability-wise that there's no good reason to drop existing investment in one for the other). Flatpak, as a technical format, is a poor, limited copy of either of those.

And second, it won't matter one bit, because the basic integration requirements are independent of the technical format (most of them are even independent of the operating system; copying a lot of features of Linux packaging systems into Windows Update served Microsoft well). Therefore, changing the format won't remove the friction with devs who do not want integration constraints to exist in the first place.

Lastly, the more a dev-oriented packaging format matures, the easier it is to map into deb or rpm. Those are vastly more capable than all the dev-packaging managers I've inspected. What's blocking most mapping attempts is not missing capabilities on the deb or rpm side. It's missing capabilities on the dev package side. Those missing capabilities require a lot of distribution work to fill, package by package.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 19:38 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (34 responses)

The fundamental problem with ALL legacy distributions is the insistence on single versions of packages. You can't have two versions of libssl installed, for example. And attempts to fix it (Fedora's Modularity) hit tons of roadblocks.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 19:56 UTC (Thu) by farnz (subscriber, #17727) [Link] (25 responses)

Part of the problem there is that, assuming we distribute machine-ready binaries, you're stuck choosing between combinatorial explosions in the number of binaries you test (as you need to test all reasonable combinations, even if they are ABI-compatible), or you end up losing many of the benefits the distros provide as you provide packages that simply don't work together, even though they can be installed together.
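
To put a rough number on that explosion (figures purely illustrative):

    from math import prod

    # 10 libraries, each shipped in 3 supported versions: the full test
    # matrix is 3^10 combinations, before even counting applications.
    print(prod([3] * 10))   # -> 59049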

The other component is that a decent amount of the hard work distributions do (that makes the ecosystem a better place) is a consequence of that restriction - if there can only be one version of (say) zlib on your system, the distro has incentives to make sure that everything works with the current, secure, version of that package. If I can have multiple versions of it installed, why can't I stick to the known bad version that works, and just complain about attempts to move me onto the newer version?

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 20:56 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (24 responses)

Well, insistence on a single version also leads to a rigid system that moves too slowly for modern development.

> Part of the problem there is that, assuming we distribute machine-ready binaries, you're stuck choosing between combinatorial explosions in the number of binaries you test (as you need to test all reasonable combinations, even if they are ABI-compatible)
Not quite following. If you update a dependency then you start walking its rdepends graph and updating it. There's no expectation that AppA would work with a random version of LibB.
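
Sketching that walk, with an invented reverse-dependency table:

    from collections import deque

    # Invented reverse-dependency table: package -> packages that depend on it.
    RDEPENDS = {
        "libssl":  ["libcurl", "python3"],
        "libcurl": ["git"],
        "python3": ["ansible"],
    }

    def rebuild_after_update(changed):
        # Breadth-first walk; a real system would rebuild in dependency
        # (topological) order and hand each package to CI for tests.
        queue, seen = deque([changed]), {changed}
        while queue:
            pkg = queue.popleft()
            print("rebuild + retest", pkg)
            for rdep in RDEPENDS.get(pkg, []):
                if rdep not in seen:
                    seen.add(rdep)
                    queue.append(rdep)

    rebuild_after_update("libssl")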

This model does lead to a proliferation of versions; just one tardy package can force the distro to keep around a huge graph of old dependencies. But this is a balancing act and can be managed.

I worked in a company that did dependency management like this. It worked mostly fine; the major problem was "migration campaigns" when a central dependency had to be upgraded and individual teams had no bandwidth to do it.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 21:21 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (19 responses)

> Well, insistence on a single version also leads to a rigid system that moves too slowly for modern development.

There is no such insistence on a single version on the distribution side. Haven't you noticed the dozens of libs available in multiple versions on all major distributions? The way they've all managed the python 2 and 3 overlap?

So, technically, multiple version works in all distributions. Practically:

> the major problem was "migration campaigns" when a central dependency had to be upgraded and individual teams had no bandwidth to do it.

devs severely underestimate the connectivity of the current software world, and the team bandwidth that the scope of versions they angrily demand would imply.

It would be a lot easier for distributions to provide recent versions of all components, if dev projects adhered strictly to semver.

> this is a balancing act and can be managed.

And distributions are managing it. More contributors, more packages, in more versions. Fewer contributors, fewer packages, more version consolidation. Nothing more and nothing less.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 21:35 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (18 responses)

> There is no such insistence of a single version distribution side.
Yes, there is.

> Haven't you noticed the dozens of libs available in multiple versions on all major distributions? The way they've all managed the python 2 and 3 overlap?
No. You're cherry-picking.

> So, technically, multiple version works in all distributions.
No. No. No. You are about as wrong as you can get.

To give you an example, right now I have a problem with python3-theano, which broke one of our applications in version 1.0.4. I can install 1.0.3 from the previous version of the distro by pinning the version, but this breaks _another_ application. There's no way to install 1.0.3 and 1.0.4 in parallel and say that "AppA wants 1.0.3, AppB wants 1.0.4".

I'm going to fix it by just packaging everything in a venv and abandoning system packages altogether.
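
That escape hatch is only a few lines (paths invented, versions as above):

    import subprocess, venv

    # One venv per application, each pinning its own theano version
    # ("AppA wants 1.0.3, AppB wants 1.0.4"); the paths are invented.
    for app, pin in (("appA", "theano==1.0.3"), ("appB", "theano==1.0.4")):
        env = "/opt/envs/" + app
        venv.create(env, with_pip=True)
        subprocess.run([env + "/bin/pip", "install", pin], check=True)
    # Each app is then launched with /opt/envs/<app>/bin/python.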

> And distributions are managing it. More contributors, more packages, in more versions. Less contributors, less packages, more version consolidation. Nothing more and nothing less.
No. Distros have dropped the ball here. Completely. They are stuck in the "single version or GTFO" model that is not scaling past the basic system.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 21:57 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (1 responses)

Sorry, but no. You can technically create and install as many sublevels of parallel versions as you want.

Now, that supposes (on the *language* side, not the *distribution* side) that
* first, the language deployment format uses different file paths for all the version sublevels you want to exist
* second, there is a way, at the language level, to point to a specific version if several are found on disk.

If the language provides none of those, you are stuck installing a single version on the system, or playing with containers, venvs, and all those kinds of things, whose sole purpose is to isolate upstream language tooling from versions it cannot cope with.

And maybe distributions should have changed all language stacks to work in parallel-version mode. Maybe they tried and failed. Maybe they didn't even try.

However, it's a bit rich to blame distributions, and to ask them to adopt language tooling, when the problems pointed out are inherited from language tooling in the first place.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 23:23 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> Sorry, but no. You can technically create and install as many sublevels of parallel versions as you want.
I cannot use distro infrastructure for it. I have to build my own packages and manage all of them.

This means that the distro becomes nearly useless for me, except for very basic system utilities.

> * first, the language deployment format uses different file paths of all the version sublevels you want to exist
> * second, there was a way at the language level, to point to a specific version, if several are found on disk.
If we're stuck on Python then we might as well continue. Python supports all of these, yet no popular distro utilizes it. Some unpopular distros like Nix do.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 22:02 UTC (Thu) by pizza (subscriber, #46) [Link] (11 responses)

> To give you an example, right now I have a problem with python3-theano that has broken one our application in the 1.0.4 version. I can install 1.0.3 from the previous version of the distro by pinning the version, but this breaks _another_ application. There's no way to install 1.0.3 and 1.0.4 in parallel and say that "AppA wants 1.0.3, AppB wants 1.0.4".

...So why isn't fixing the application(s) an option here? (Root cause analysis and all that..)

> There's no way to install 1.0.3 and 1.0.4 in parallel and say that "AppA wants 1.0.3, AppB wants 1.0.4". I'm going to fix it by just packaging everything in a venv and abandoning system packages altogether. [...] Distros have dropped the ball here. Completely. They are stuck in the "single version or GTFO" model that is not scaling past the basic system.

But.. what you just described isn't actually a "distro" problem at all, as the inability to install multiple versions of a given module system-wide is a limitation of python's own packaging system. Intentionally so, as managing multiple versions introduces a great deal of complexity. Instead of dealing with that complexity head-on they decided to take the approach of self-contained private installations/systems (aka venv).

But while that keeps applications from stepping on each other's toes, it doesn't help you if your application's dependencies end up with conflicting sub-dependencies (eg submoduleX only works properly with theano <= 1.0.3 but submoduleY only works properly with >= 1.0.4...)

(This latter scenario bit my team about a month ago, BTW)

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 23:17 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

> ...So why isn't fixing the application(s) an option here? (Root cause analysis and all that..)
One application is a commercial simulator that we can't fix. We'll probably just update our application to work with 1.0.3 by adding a workaround or just put it in a container. This workload should have been containerized anyway...

In this case this is easy, but I had much more complicated cases with binary dependencies that Just Didn't Work.

> But.. what you just described isn't actually a "distro" problem at all, as the inability to install multiple versions of a given module system-wide is a limitation of python's own packaging system.
No it's not. Python supports venvs, custom PYTHONPATH and custom loaders.

For example, back at $MYPREV_COMPANY we had a packaging system that basically provided a wrapper launcher taking care of that. So instead of "#!/usr/bin/env python3" we used "#!/xxxxx/bin/env python3", which created the correct environment based on the application manifest.
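
A minimal sketch of what such a wrapper can look like (the manifest format and library layout are my invention, not the actual system described):

    #!/usr/bin/env python3
    # Hypothetical wrapper: read the application's manifest, put the pinned
    # library versions on PYTHONPATH, then exec the real interpreter.
    import json, os, sys

    LIB_ROOT = "/opt/pylibs"   # invented layout: /opt/pylibs/<name>/<version>/

    app = sys.argv[1]
    with open(os.path.join(os.path.dirname(app) or ".", "manifest.json")) as f:
        manifest = json.load(f)          # e.g. {"theano": "1.0.3"}
    paths = [os.path.join(LIB_ROOT, n, v) for n, v in manifest.items()]
    env = dict(os.environ, PYTHONPATH=os.pathsep.join(paths))
    os.execve(sys.executable, [sys.executable, app] + sys.argv[2:], env)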

> But while that keeps applications from stepping on each other's toes, it doesn't help you if your application's dependencies end up with conflicting sub-dependencies (eg submoduleX only works properly with theano <= 1.0.3 but submoduleY only works properly with >= 1.0.4...)
Correct. This is a problem, but it happens during development, not package installation, so the developer can work around it (by fixing deps, forking them, pinning previous versions, etc.)

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 8:39 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (9 responses)

>> But.. what you just described isn't actually a "distro" problem at all, as the inability to install multiple versions of a given module

> system-wide is a limitation of python's own packaging system.
> No it's not. Python supports venvs, custom PYTHONPATH and custom loaders.

None of this works at scale. It’s all local dev-specific workarounds:
– venv is poor man's containerization (you can achieve the same with full distro containers; if venv "works" then distros work too)
– PYTHONPATH is the same technical debt engine which has made Java software unmaintainable¹
– custom loaders are, well, custom. They're not a generic language solution

A language that actually supports multi versioning at the language level:

1. defines a *single* system-wide directory tree where all the desired versions can coexist without stomping on the neighbor’s path
2. defines a *single* default version selection mechanism that handles upgrading and selects the best and most recent version by default (i.e. semver)
3. defines a way to declare an application-level exception for a specific set of components (sketched below)

Because, as you wrote yourself:

> the major problem was "migration campaigns" when a central dependency had to be upgraded and individual teams had no bandwidth to do it.

Therefore, anything manageable at scale must keep semver exceptions as exceptions, not the general case.
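
To sketch what points 1–3 could look like for a Python-style runtime (the tree layout and function are hypothetical):

    import os

    # Point 1: a single system-wide tree, /usr/lib/multiver/<name>/<version>/
    LIB_ROOT = "/usr/lib/multiver"

    def parse(v):
        return tuple(int(x) for x in v.split("."))

    def select(name, pins=None):
        # Point 3: an application-level pin is the declared exception...
        if pins and name in pins:
            return os.path.join(LIB_ROOT, name, pins[name])
        # ...point 2: otherwise the newest installed version wins by default.
        versions = os.listdir(os.path.join(LIB_ROOT, name))
        return os.path.join(LIB_ROOT, name, max(versions, key=parse))

    # select("zlib")                          -> newest zlib in the tree
    # select("zlib", pins={"zlib": "1.2.8"})  -> the pinned exception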

Most languages do not support multi-version (that’s a *language* not a distribution limitation). Some languages do:
– the "backwards" C/C++, because of shared lib versioning
– Go, because Google wanted to create an internet-wide Go module repository, so they had to tackle the version collision problem
Probably others.

You can trick a mono-version language locally by installing only a specific set of components for a specific app. That does not scale at the system level, because of the version combination explosion at initial install time, and the version combination explosion at update decision time.

The other way that devs who like to pretend the problem is distribution-side work around mono-version language limitations is to create app-specific and app-version-specific language environments. That's what vendoring, bundling, Java PATHs, and modularity tried to do.

All of those failed the scale test. They crumble under the weight of their own version contradictions, under the weight of the accumulated technical debt. They are snowflakes that melt under maintainability constraints. They only work at the local scale (app or company-specific local scale, limited time scale of a dev environment).

If you want multi-version to work for the foo language, ask the foo language maintainers for points 1–3 above. You won't get distributions to fix language-level failures by blaming them while praising at the same time the workarounds language devs invented to avoid fixing their runtime.

¹ Both distro *and* enterprise side, I have a *very* clear view @work of what it costs enterprises; and it’s not because of distros since everything Java related is deployed in non-distro mode

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 8:50 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (8 responses)

> None of this works at scale. It’s all local dev-specific workarounds
I worked in a company that is in the top 10 of the world's companies and has the name starting with an "A" and the second letter not being "p".

Pretty much all of its software is built this way, with dependency closures sometimes having many hundreds of packages. The build/packaging system supports Python, Ruby, Go, Rust, Java and other languages with only minor changes to the way binaries are launched.

So I can say for sure that this approach scales.

> A language that actually supports multi versioning at the language level
None of this is needed. Nothing at all. Please, do look at how Java works with Maven, Ruby with Gems, Go with modules, or Rust with Cargo. They solved the problem of multiversioning in the language tooling without your requirements.

> Therefore, anything manageable at scale must keep semver exceptions as exceptions, not the general case.
I don't follow. Why?

> All of those failed the scale test. They crumble under the weight of their own version contradictions, under the weight of the accumulated technical debt. They are snowflakes that melt under maintenability constrains.
So far this is basically your fantasy. You are thinking that only seasoned distros that can wrangle the dependency graph into one straight line can save the world.

In reality, commercial ecosystems and even Open Source language-specific ecosystems are already solving the problem of multiversioning.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 11:20 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (7 responses)

>> None of this works at scale. It’s all local dev-specific workarounds
> I worked in a company that is in the top 10 of the world's companies and has the name starting with an "A" and the second letter not being "p".

When you attain this size you can afford the combination explosion. Most entities (starting with community distros) can not.

So yes it does not scale. With astronomic resources you can brute-force even an inefficient system.

Go modules respect my requirements. I should know, I spent enough months dissecting the module system. They will work as multi-version. Python does not. It's not a multi-version language.

Java has not solved anything, which is why its adoption outside businesses is dismal. Businesses can afford to pay the not-scaling tax (in app servers, in ops, in lots of things induced by Java software engineering practices). Community distros can not.

>> Therefore, anything manageable at scale must keep semver exceptions as exceptions, not the general case.
> I don't follow. Why?

Because each exception is an additional thing that needs specific handling with the associated costs. That’s engineering 101 (in software or elsewhere).

Rules get defined to bring complexity and costs down. Exceptions exist to accommodate an imperfect reality. A working efficient system allows exceptions without making them the rule.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 18:56 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

> When you attain this size you can afford the combination explosion. Most entities (starting with community distros) can not.
I maintained several projects there and spent way less time on that than maintaining a package for Ubuntu and coping with it being broken by Python updates.

> So yes it does not scale. With astronomic resources you can brute-force even an inefficient system.
There's nothing inefficient there. It efficiently places the onus of evolving the dependencies on the correct people - package owners.

I.e. if you own a package AppA that uses LibB then you don't care about LibB's rdepends. You just use whatever version of LibB that you need. If you need a specific older or newer version of LibB then you can just maintain it for your own project, without affecting tons of other projects.

This approach scales wonderfully, compared to legacy distro packages. Heck, even Debian right now has just around 60000 source packages and is showing scaling problems. The company I worked at had many times more than that, adding new ones all the time.

> Because each exception is an additional thing that needs specific handling with the associated costs. That’s engineering 101 (in software or elsewhere).
What are "semver exceptions" then?

> Exceptions exist to accommodate an imperfect reality. A working efficient system allows exceptions without making them the rule.
What are "exceptions" in the system I'm describing?

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:22 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (4 responses)

In your approach everything is a special case with a special dep list.

It "scales" because you do not maintain anything, you just push code blindly.

Distributions do not only push code, they fix things. When you fix things, keeping the amount of things to be fixed manageable matters.

If you don’t feel upstream code needs any fixing, then I believe you don’t need distributions at all. Just run your own Linux from scratch and be done with it.

Please report to us how much time you saved with this approach.

Interested? I thought not. It's easy to claim distributions are inefficient when adding things at the fringe. Just replace the core, not the fringe, if you feel that's so easy. You won't be the first one to try.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:56 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

> In your approach everything is a special case with a special dep list.
Correct.

> It "scales" because you do not maintain anything, you just push code blindly.
Incorrect. In that company, libraries are maintained by their owner teams. The difference is that they typically maintain a handful of versions at the same time, so that all their dependants can build against it.

There was also a mechanism to deprecate versions to nudge other teams away from aging code, a recommendation mechanism, etc.

> Distributions do not only push code, they fix things. When you fix things, keeping the amount of things to be fixed manageable matters.
In my experience, they mostly break things by updating stuff willy-nilly without considering downstream developers.

> If you don’t feel upstream code needs any fixing, then I believe you don’t need distributions at all. Just run your own Linux from scratch and be done with it.
That's pretty much what we'll be doing eventually. The plan is to move everything to containers running on CoreOS (not quite LFS, but close enough).

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 8:58 UTC (Sat) by nim-nim (subscriber, #34454) [Link] (2 responses)

> Incorrect. In that company, libraries are maintained by their owner teams. The difference is that they typically maintain a handful of versions at the same time, so that all their dependants can build against it.

And that’s exactly what major distributions do, when the language tooling makes it possible without inventing custom layouts like Nix does.

Upstreams do not like distribution custom layouts. The backlash over Debian or Fedora re-layouting Python unilaterally would be way worse than the lack of parallel installability in the upstream Python default layout.

> Incorrect. In that company, libraries are maintained by their owner teams.

It’s awfully nice when you can order devs to use the specific versions maintained by a specific owner team.

Of course, most of your complaint is that you *do not want* to use the specific versions maintained by the distro teams.

So, it’s not a technical problem. It’s a social problem.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 9:05 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

> And that’s exactly what major distributions do
Except that it's impossible to use it, because it's not possible to parallel-install libraries.

> when the language tooling makes it possible without inventing custom layouts like Nix does.
Then this tooling needs to be implemented for other languages. As I said, I've seen it done at the largest possible scale. It doesn't even require a lot of changes, really.

> Of course, most of your complaint is that you *do* not* *want* to use the specific versions maintained by the distro teams.
Incorrect again. I would love to see a distro-maintained repository with vetted package versions, with changes that are code-reviewed by distro maintainers.

It's just that right now these kinds of repos are useless, because they move really slowly for a variety of reasons. The main one is the necessity to upgrade all versions in the distribution in lockstep.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 9:47 UTC (Sat) by nim-nim (subscriber, #34454) [Link]

>> when the language tooling makes it possible without inventing custom layouts like Nix does.

> Then this tooling needs to be implemented for other languages. As I said, I've seen it done at the largest possible scale. It doesn't even require a lot of changes, really.

Then don't complain at distros; write a PEP and make upstream Python adopt a parallel-version layout.

Major community distributions will apply the decisions of major language upstreams, it’s that simple. Major community distributions collaborate with major language upstreams. Collaborating implies respecting upstream layout choices.

In a company, you can sit on upstream decisions and do whatever you want (as long as someone finds enough money to fund your fork). That's ultimately completely inefficient and counterproductive, but humans do not like to bow to the decisions of others, so that's been done countless times and will continue to be done countless times.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 7, 2019 10:00 UTC (Sat) by nim-nim (subscriber, #34454) [Link]

>> When you attain this size you can afford the combination explosion. Most entities (starting with community distros) can not.

> I maintained several projects there and spent way less time on that than maintaining a package for Ubuntu and coping with it being broken by Python updates.

That "works" because you do not care about the result being useful to others. But wasn't your original complaint that the python3-theano maintainers didn't care that their package was not useful to your commercial app?

So you want a system that relies on not caring for others in order to scale to be adopted by distributions, because the distributions should care about you?

Don’t you see the logical fallacy in the argument?

All the things that you found too much work in the Ubuntu package exist so the result can be used by others.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 9:10 UTC (Fri) by smurf (subscriber, #17840) [Link] (3 responses)

> right now I have a problem with python3-theano that has broken one our application in the 1.0.4 version

Well, we all love modules with 550+ open issues and 100+ open pull requests …

So find the offending commit that broke your app and implement a workaround that satisfies both, or file an issue (and help the Theano people deal with their bug backlog, you're using their code for free, give something back!), or dropkick the writer(s) of the code that depends on 1.0.3 to get their act together. Can't be *that* difficult.

> Distros have dropped the ball here. Completely.

Pray tell us what the distros should be doing instead?

Insisting on one coherent whole, meaning one single version of everything, is the only way to manage the complexity of a distribution without going insane. Technical debt, i.e. the inability to work with 2.1 instead of 1.19 (let alone 1.0.4 instead of 1.0.3), must be paid by the party responsible. Not by everybody else.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 9:16 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> Pray tell us what the distros should be doing instead?
Parallel-installable versions of libraries, with a mechanism for creating a per-application dependency closure with specific library versions.

It's been done multiple times, in proprietary systems (like at the company I worked for) and in Open Source (NixOS).
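
To be concrete, the dependency closure is just the transitive set of exactly pinned packages an application deploys with; a toy sketch over an invented graph:

    # Starting from an application's fully pinned direct dependencies,
    # collect the transitive set of (name, version) pairs.
    DEPS = {
        ("appA", "2.1.0"):   [("libfoo", "1.4.2"), ("libbar", "0.9.1")],
        ("libfoo", "1.4.2"): [("libbaz", "3.0.0")],
        ("libbar", "0.9.1"): [("libbaz", "3.0.0")],
        ("libbaz", "3.0.0"): [],
    }

    def closure(root):
        out, stack = set(), [root]
        while stack:
            node = stack.pop()
            if node not in out:
                out.add(node)
                stack.extend(DEPS.get(node, []))
        return out

    print(sorted(closure(("appA", "2.1.0"))))
    # Every deployment of appA ships with exactly this set, independent of
    # what other applications on the host have pinned.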

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 10:17 UTC (Fri) by smurf (subscriber, #17840) [Link] (1 responses)

This idea dies a messy death as soon as your app or library requires sub-libraries A and B, library A requires version 1 of X, and library B needs version 2 of X. Co-installation is not a problem to be solved if the results can't co-exist in the same application. Most languages out there have no mechanism to support that and some libraries (those talking to real hardware for instance) wouldn't work that way anyway.

The distro's job is to assemble a coherent whole, which occasionally requires poking the people responsible for A to support X.2. There's no incentive whatsoever for the distro to support co-installation of X.1. Yes, it's been done, but that by itself is not a good argument for doing it again.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 10:26 UTC (Fri) by farnz (subscriber, #17727) [Link]

That works just fine in Rust - the library version is part of the symbol mangling, so as long as you don't try to use X v1 APIs on X v2 objects (or vice-versa), you're golden. Naming symbols from X v1 and X v2 in the same code is challenging and thus uncomfortable (as it should be!), but it's perfectly doable.

What doesn't work is using an object obtained from X v1 with X v2 code - the symbols are wrong, and this only works where X v2 is designed to let it work.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 13:44 UTC (Fri) by farnz (subscriber, #17727) [Link] (3 responses)

The issue is when AppA depends on LibB and LibC, and LibB also depends on LibC, but LibB and AppA want different versions of LibC. Someone has to do the work to ensure that LibB works with both its preferred version of LibC and with AppA's preferred version of LibC. Multiply up according to the number of versions of LibC that you end up with in a single app.

At my employer, we handle this by hunting down those cases, and getting people to fix LibB or AppA to use the same version. Rinse and repeat - someone has to do the long tail of grunt work to stop things getting out of hand.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 18:39 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

Yes, it can happen.

But.

This happens during development when you try to create the dependency closure. So it's the app's developer (or maintainer) who is going to be resolving the issues. And they have choices like not using a conflicting library, forking it, just overriding the LibC version for LibA or LibB, etc. Typically just forcing the version works fine.

The same thing can happen in a full distro. But then you have to actually go and fix all LibA (or LibB) rdepends before you can fix your project.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 19:39 UTC (Fri) by farnz (subscriber, #17727) [Link] (1 responses)

That assumes that the versioning is such that the dependency closure can't be created, and that the test cases will catch the problem in time. I'm going to use plain C libraries as an example here, as symbol versioning is weakest in C.

If, for example, AppA depends on LibC 1.3.2 or above, and LibB depends on LibC 1.3.4 or above, but LibC 1.3.4 has broken a feature of AppA in a rare corner case, you're stuck - the dependency closure succeeds during development (it chooses LibC 1.3.4), and everything appears to work. Except it doesn't, because AppA is now broken. Parallel installability doesn't help - 1.3.4 and 1.3.2 share a SONAME, and you somehow have to, at run time, link both of them into AppA and use the "right" one.

Now, if AppA has depended on both LibB and LibC for a while, you'll notice this. Where it breaks, and where the distro model helps, is when AppA has been happily chuntering along with LibC 1.3.2; LibB is imported into the distro for something else, and bumps LibC to 1.3.4, breaking AppA. The distro notices this via user reports, and helps debug and fix this. In the parallel install world, when LibB is imported into the distro, AppA continues happily working with LibC 1.3.2; when AppA is updated to use LibB, then you get the user reports about how in some timezones, AppA stops frobbing the widgets for the first half of every hour, and you have more to track down, because you have a bigger set of changes between AppA working, and AppA no longer working (including new feature work in AppA).
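
The failure mode fits in a few lines: resolution only ever sees the declared lower bounds, never the behavioral regression (toy sketch):

    # Both constraints on LibC are plain lower bounds, so resolution happily
    # picks 1.3.4 -- valid on paper, broken for AppA at run time.
    def parse(v):
        return tuple(int(x) for x in v.split("."))

    lower_bounds = {"AppA": "1.3.2", "LibB": "1.3.4"}   # "LibC >= x"
    available = ["1.3.2", "1.3.3", "1.3.4"]

    floor = max(lower_bounds.values(), key=parse)
    chosen = min((v for v in available if parse(v) >= parse(floor)), key=parse)
    print(chosen)   # -> 1.3.4; no resolver can see the corner-case breakage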

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 20:12 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Sure. Simple parallel installability won't solve every possible issue, and you still can get bad behavior caused by a bug hit only in corner cases.

But this applies equally to ANY build infrastructure. I had Debian breaking my code because OpenJDK had a bug in zlib compression that manifested only in rare cases, and I once spent several sleepless days when Ubuntu broke an SSL-related API in Python in a minor upgrade. Bugs happen.

But even in these cases having a dependency closure helps a lot. It's trivial to bisect it, comparing exactly what's different between two states, since the closure includes everything. This is not really possible with legacy package managers.
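
That bisection starts from a diff of the two closure manifests; a sketch with invented contents:

    # Diff the last-good and first-bad closures; only packages that differ
    # can be responsible for the regression.
    good = {"libfoo": "1.4.2", "libbaz": "3.0.0", "openjdk": "11.0.4"}
    bad  = {"libfoo": "1.4.2", "libbaz": "3.0.1", "openjdk": "11.0.5"}

    suspects = {n: (good.get(n), bad.get(n))
                for n in good.keys() | bad.keys()
                if good.get(n) != bad.get(n)}
    print(suspects)  # {'libbaz': ('3.0.0', '3.0.1'), 'openjdk': ('11.0.4', '11.0.5')}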

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 20:00 UTC (Thu) by pizza (subscriber, #46) [Link] (7 responses)

> You can't have two versions of libssl installed, for example.

Uh? On my current Fedora 31 laptop, I have both openssl 1.1.1d and 1.0.2o installed, and I know both Fedora/RH and Debian have supported (via policies and tooling) this sort of thing for at least a decade or two.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 20:03 UTC (Thu) by farnz (subscriber, #17727) [Link] (6 responses)

It's not well-supported, though. I can't (for example) easily use distribution packages to get me both perl 5.28 and 5.30 on the same machine, or python 3.7 and 3.8. There are packages that allow it, but they're carefully curated, and not the norm.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 20:48 UTC (Thu) by pizza (subscriber, #46) [Link] (1 responses)

That's fair, but at the same time it's not entirely the fault of the distros. Some (most?) upstream software is not developed with parallel installability in mind.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 21:03 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Most upstream software can actually be parallel-installable with ease. Typically you can customize the destination installation prefix via configure flags and dependency locations using "--with" flags. Nix does basically this, and it usually works well with only minor changes.
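
A sketch of that prefix trick (the library name is invented; --prefix itself is standard autoconf):

    import subprocess

    # Build the same (invented) library into version-specific prefixes so
    # that both versions can coexist on one system.
    for ver in ("1.2.8", "1.3.0"):
        src = "/tmp/libfoo-" + ver
        subprocess.run(["./configure", "--prefix=/opt/libfoo/" + ver],
                       cwd=src, check=True)
        subprocess.run(["make", "install"], cwd=src, check=True)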

But it's fair to say that it's not a common way to install packages in today's distros.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 6, 2019 8:16 UTC (Fri) by smurf (subscriber, #17840) [Link]

You're right about Perl, but Python 3.x and 3.y have been co-installable on Debian (and presumably on other distros) from the start.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 8, 2019 23:29 UTC (Sun) by flussence (guest, #85566) [Link] (2 responses)

That's a non-issue for perl because perl has QA, it doesn't Move Fast And Break Downstream on a brittle six-week cycle. 5.30 will run code that was written for a different century in the same program as code written today.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 9, 2019 8:39 UTC (Mon) by farnz (subscriber, #17727) [Link]

I have a system with a C extension to perl that disagrees strongly with you - perl 5.30 is different enough to perl 5.22 (that the code was written for) that the resulting application does not function correctly with perl 5.30.

Now, it's almost certainly the C extension that's buggy, but perl 5.30 has changed enough about the way it works that it has broken code written for and tested against 5.22. In my case, because perl is only used for this one application that we're rewriting anyway, it's easy enough to stick to 5.22; however, if we still developed against perl, we'd have to debug this to keep up to date.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 9, 2019 8:48 UTC (Mon) by smurf (subscriber, #17840) [Link]

*Some* code written for a different century. Possibly even *most* code.

But Perl, too, has non-negotiable and backwards-incompatible deprecations.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 19:46 UTC (Thu) by marcH (subscriber, #57642) [Link]

Interesting but I fail to see the connection between the packaging format and tool and the rest of this thread...?

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 13:59 UTC (Thu) by smurf (subscriber, #17840) [Link] (1 responses)

You seem to mean https://lwn.net/Articles/806230/ .

Sensible dependency management is a nontrivial task. Done well, it requires that everybody responsible for any piece of code tries hard not to break its dependents *and* quickly responds if one of its dependents introduces a regression.

Distributions [need to] do this on behalf of the libraries and programs they package. Repositories like CPAN or PyPI? not so much. Collections of Go or Rust libraries? even less.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 5, 2019 15:17 UTC (Thu) by nim-nim (subscriber, #34454) [Link]

Yes this one, sorry about the bad cut & paste.

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 8:45 UTC (Wed) by marcH (subscriber, #57642) [Link] (1 responses)

> > Actually, it scales a heck of a lot better than the distro model; we have automated rebuilds and redeploys,

> Congratulations, you have a scaling factor of one (company).

No, because all that automation is of course open sourced and freely shared with the rest of the world.

Oh, wait...

Soller: Real hardware breakthroughs, and focusing on rustc

Posted Dec 4, 2019 9:35 UTC (Wed) by farnz (subscriber, #17727) [Link]

Funnily enough, the majority of it is open source - Jenkins is the big chunk of code, and does the builds and tests for us. I believe you can do the same with GitHub Actions, and I've seen projects use Azure Pipelines for the same job.

Fundamentally, just building code and generating issues if builds and/or tests fail isn't hard - and we do contribute our test cases to the libraries we use.

