Soller: Real hardware breakthroughs, and focusing on rustc
Posted Dec 1, 2019 15:44 UTC (Sun)
by mcatanzaro (subscriber, #93033)
In reply to: Soller: Real hardware breakthroughs, and focusing on rustc by mjg59
Parent article: Soller: Real hardware breakthroughs, and focusing on rustc
If you don't personally feel that bundling libraries on this magnitude is a technical problem, that's fine. But as you are no doubt well aware, the status quo in Linux distribution communities is to minimize use of bundled dependencies as far as possible. If we are to start rewriting existing C and C++ code using Rust, this means most applications will eventually be linking to a couple dozen shared libraries written in Rust, each such library statically linked to hundreds of Rust dependencies, most of which will be duplicated among the other shared libraries used by the application. Perhaps you're fine with this -- perhaps it is the future of Linux distributions -- but for the purposes of my argument it's sufficient to recognize that a lot of important developers don't want to see this happen, and those developers are not going to be using Rust.
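To make the bundling concern concrete, here is a minimal sketch of how this happens (hypothetical crate and function names; a sketch, not any particular project's code). A Rust library exposed to C consumers as a shared object is typically built as a "cdylib", and every Rust crate it depends on is compiled into that .so statically, so two such libraries that both use the same crate each carry their own private copy:

    // lib.rs for a hypothetical crate "libfoo", built with
    //     [lib]
    //     crate-type = ["cdylib"]    # produce a C-compatible .so
    // in its Cargo.toml. Anything listed under [dependencies]
    // ends up inside libfoo.so rather than as a separate shared
    // object on the system.

    #[no_mangle]
    pub extern "C" fn libfoo_answer() -> i32 {
        // A real crate would call into serde, regex, etc. here;
        // those crates would be baked into libfoo.so, and a fix to
        // any one of them means rebuilding every .so that bundles it.
        42
    }

That last point is what drives the distribution-side objection: a security fix in one widely used crate turns into a rebuild of every shared library that statically embedded it.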
I think Rust needs those developers. Rust currently proposes to upend two well-established norms: (a) the use of memory unsafe languages in systems programming, and (b) current aversion to use of bundled dependencies, especially for system libraries. Challenging either norm, on its own, would be tremendously difficult. Challenging both at the same time seems like a strategic mistake. The Rust community loses essential allies, and we wind up with more and more code written in unsafe languages.
Currently, Rust seems like the only memory-safe systems programming language that has generated enough interest among Linux developers to have serious long-term potential to upend C and C++. I see the potential for a future, perhaps 30 years from now, where distributing libraries written in C or C++ is considered malpractice, and where modern code is all written in Rust (or future memory-safe systems languages). But I don't see that happening without more acceptance and buy-in from the existing Linux development community, and Rust doesn't seem to be on track to achieve it. The Rust community and the Linux distribution community are separate silos with separate goals, and it's hard to see outside one's own silo. If the Rust community were to better understand and address the concerns of the Linux distribution community, the potential exists to significantly increase adoption and acceptance of Rust.
Posted Dec 1, 2019 20:38 UTC (Sun)
by roc (subscriber, #30627)
[Link]
If you specifically mean "Rust needs those developers if it's to completely usurp C/C++", then sure. Maybe enough Rust people are inspired by that to make some concessions.
Posted Dec 1, 2019 20:57 UTC (Sun)
by mjg59 (subscriber, #23239)
[Link] (21 responses)
Posted Dec 1, 2019 22:40 UTC (Sun)
by rodgerd (guest, #58896)
[Link]
Posted Dec 1, 2019 23:57 UTC (Sun)
by mcatanzaro (subscriber, #93033)
[Link] (19 responses)
Let's say Rust is currently poised to be successful in the user application space. It's not poised to be successful at replacing the C and C++ used to power our operating systems. And that's a shame, because we have a lot of unsafe C and C++ code that needs to be replaced over the coming decades.
Posted Dec 2, 2019 0:06 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link] (18 responses)
Posted Dec 2, 2019 0:15 UTC (Mon)
by mcatanzaro (subscriber, #93033)
[Link] (17 responses)
Unless your application code never provides any untrusted input to the runtime -- which seems very unlikely -- the runtime is security-sensitive too, and it is all C and C++. 40 years from now, this will hopefully no longer be the case. We'll still have distributions, but they will use a safe systems programming language instead. It could be Rust, but it seems Rust does not want to target this space, so I suppose more likely it will be something else instead.
Posted Dec 2, 2019 0:21 UTC (Mon)
by mcatanzaro (subscriber, #93033)
[Link]
Posted Dec 2, 2019 0:30 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link] (15 responses)
Basically: If nobody cares about the shared libraries a distribution ships, why should a distribution care?
Posted Dec 2, 2019 9:57 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link] (14 responses)
The user side of the equation has been learning what containerized apps mean over the last few years. It's not impressed. They're a lot worse than downloading random exe files from random web sites on the internet (worse because containers make it easy to pile even more crappy third-party code into the end result).
Posted Dec 2, 2019 12:00 UTC (Mon)
by pizza (subscriber, #46)
[Link] (12 responses)
This is a very naive belief.
Look no further than the rise of "wget http://some/random/script.sh | sudo bash" build scripts. Also, proprietary software is far, far worse, and manages to do just fine in the market.
Posted Dec 2, 2019 14:16 UTC (Mon)
by NAR (subscriber, #1313)
[Link] (10 responses)
And this is the problem with the distribution landscape. Most developers won't bother learning the intricacies of creating packages for N distributions (especially if they are working on e.g. Mac). There's also a quite big chance that at least one of their dependencies is not packaged, so either they package those too (even more work) or bundle the dependencies (which is a big no-no in the eyes of distribution people). But I'm starting to see "Installation Instructions" like "docker pull ...", so even the "wget ... | bash" kind of instructions can become obsolete. If distributions want to stay relevant, they might need to invent some automatic mechanism that takes a package from npmjs.com, pypi.org, cpan.org, hex.pm, etc. and creates parallel-installable distribution packages.
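As a rough illustration of what such a mechanism might look like (entirely hypothetical tool and output names, and ignoring the hard parts such as dependency resolution and licensing), a converter could take a name and version from an upstream registry and emit a skeleton, versioned package description so that several versions can be installed in parallel:

    // pkg2spec.rs: hypothetical sketch, standard library only.
    // Reads "<name> <version>" from the command line and prints a
    // skeleton RPM-style spec with a versioned package name, so
    // that e.g. foo 1.2 and foo 1.3 can be installed side by side.
    use std::env;
    use std::process;

    fn main() {
        let args: Vec<String> = env::args().collect();
        if args.len() != 3 {
            eprintln!("usage: pkg2spec <name> <version>");
            process::exit(1);
        }
        let (name, version) = (&args[1], &args[2]);
        // Embed the version in the package name to allow parallel installs.
        println!("Name:    {}-{}", name, version.replace('.', "_"));
        println!("Version: {}", version);
        println!("Release: 1%{{?dist}}");
        println!("Summary: Auto-generated packaging of {} {}", name, version);
        println!("License: CHECK-UPSTREAM");
        println!();
        println!("%description");
        println!("Skeleton generated from an upstream registry entry; review before use.");
    }

Whether distributions would accept such auto-generated packages is, of course, exactly the policy question being argued here.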
Posted Dec 2, 2019 17:03 UTC (Mon)
by mcatanzaro (subscriber, #93033)
[Link] (9 responses)
Expecting third-party software developers to package software for Linux distributions doesn't make a lot of sense either. They can if they want to, targeting the biggest and most important distros, but most will probably prefer to distribute containerized software that works everywhere using docker or flatpak or similar. Nothing wrong with that. It doesn't mean distros are no longer relevant; it just means there are now better ways to distribute applications to users. Your application is still 100% useless if the user doesn't have a distro to run it on.
I see no problem with "the distribution landscape." It works well for building the core OS and for distributing popular open source applications. It was never a good way for distributing third-party software, and that's fine.
Posted Dec 2, 2019 18:06 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link]
If distributions are reduced to only the core bits and everything else is managed by per-language package management systems, distributions have far less relevance than they used to have, and their packaging policies therefore don't carry as much influence as they used to either. This may very well be the better role for distributions, but historically distributions have attempted to package up the whole world of open source software and were pretty much the only way regular users would get software installed on their systems. This isn't the case any longer.
Posted Dec 2, 2019 18:28 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link] (7 responses)
Posted Dec 2, 2019 18:41 UTC (Mon)
by pizza (subscriber, #46)
[Link] (4 responses)
(Or will they just download some customizable container template containing some precompiled libraries from a third party?)
Posted Dec 2, 2019 18:48 UTC (Mon)
by mjg59 (subscriber, #23239)
[Link] (1 responses)
Posted Dec 4, 2019 8:08 UTC (Wed)
by nim-nim (subscriber, #34454)
[Link]
In the meantime, real-world containers in Docker and other public stores have been audited by security researchers, and that research found, first, that those containers did rely on distributions for their content and, second, that the more they tried to replace the distribution layer with custom dev-friendly ways of doing things, the less up to date and secure the result ended up.
What may work technically is large-scale, out-of-band container content inspection by Microsoft (GitHub) or Google (Go), where Microsoft, Google, or another internet giant orders devs to fix their open source code if they want to continue being listed in the audited container store.
I doubt devs will like being ordered around that way much more than being told politely by distributions to fix things because they are becoming un-packageable.
And that's nothing more than a capture of the free software commons by commercial entities, taking advantage of the lack of nurturing of those commons by the people who benefit from them today.
Posted Dec 5, 2019 6:37 UTC (Thu)
by sionescu (subscriber, #59410)
[Link] (1 responses)
Yes, eventually everyone should compile their own stuff. Most task containers on Borg contain exactly one statically-linked binary in addition to a very thin runtime mounted read-only from the host.
Posted Dec 5, 2019 21:31 UTC (Thu)
by nim-nim (subscriber, #34454)
[Link]
So, good model for a cloud giant.
Atrocious model for everyone *not* a cloud giant.
Posted Dec 2, 2019 21:47 UTC (Mon)
by mcatanzaro (subscriber, #93033)
[Link]
Of course, the shared libraries that distributions need to ship are 90% of what I care about. So you lose me, and other developers working in this space. We wind up with "Rust is cool for application development, but don't use it for systems programming." If that's the goal, then OK, but I suspect that's not really the goal of Rust.
Posted Dec 3, 2019 20:10 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
And that imho is exactly the attitude that led to the failure of the LSB :-(
It was focussed too much on allowing the distros to tell applications what they provided - which is sort-of okay for Open Source applications, but I tried to push it (unsuccessfully) towards a way where apps (quite possibly closed-source) could tell the distro what they needed. I really ought to try to get WordPerfect 6 running under Wine again, but it would have been nice for WordPerfect for Linux 8 to have an LSB "requires" file I could pass to emerge, or rpm, or whatever, and have it provide the pre-requisites for me.
Oh well ...
Cheers,
Wol
Posted Dec 2, 2019 15:27 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link]
Proprietary software is not intrinsically worse. The level of atrocity is limited by the commercial requirement to support the result.
Container-ish distribution and vendoring are proprietary software habits at their worst, without any support requirement.
Posted Dec 4, 2019 7:13 UTC (Wed)
by patrakov (subscriber, #97174)
[Link]
Distributions that provide piles of half-baked, half-checked, half-fixed third party code in the form of never-updated shared libraries and other packages will be hard pressed to find a market, among anyone who values not losing his data.
</troll>
Posted Dec 2, 2019 0:05 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (12 responses)
Rust has no real incentive to play by distribution rules. Firefox's use of Rust, and Rust's increasingly widespread adoption since, will not be hampered by distributions.
Distributions don't have the strong influence they used to have in guiding upstreams towards certain behaviour. Upstream projects like Rust will simply bypass distributions, aided by widespread use of containers and tooling like Cargo. Distributions are now on the defensive, figuring out how to stay relevant. That's the new reality.
Posted Dec 2, 2019 3:29 UTC (Mon)
by pizza (subscriber, #46)
[Link] (11 responses)
And what happens when this brave new reality encounters its inevitable "zlib" moment?
The Rust ecosystem has some very serious tunnel vision -- well-maintained open-source projects (a la Firefox) are the exception, not the rule.
It's all fine and dandy to say that distributions are irrelevant or obsolete, but that doesn't make their experience and advice on sustainability/maintainability wrong.
Posted Dec 2, 2019 3:44 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (7 responses)
Maybe, but it is clear that the centre of power has shifted away from distributions (and I say that as someone who has been involved in distribution maintenance for at least a decade). It is up to distributions to convince the language ecosystems of why that experience is valid or useful.
Posted Dec 2, 2019 10:11 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link] (6 responses)
I think devs are so drunk on their new tooling ability to bypass free software distributions, they forget users do require a level of stability.
So either those devs work with free software distributions to provide this stability, or the market for their app segment will be captured by non-free-software third parties, that perform this distribution role.
This is happening right now with Cloudera, OpenShift, etc.
The end result of not working with distributions, and producing things end users can not use with any level of security, is not users adopting the dev model, it’s users giving up on distributions *and* upstream binaries, and switching to proprietarized intermediated product sources.
And, practically, that also means devs having to follow the whims of the proprietary intermediaries if they want any influence on how their code is actually used. Do you really think they will like that any better? Even if the proprietary intermediaries provide them with free-as-in-beer SDKs?
Posted Dec 2, 2019 11:05 UTC (Mon)
by NAR (subscriber, #1313)
[Link] (2 responses)
Actually providing stability is a reason to bypass distributions. It's more than annoying when installing/upgrading an unrelated application upgrades a common dependency too with an incompatible new version...
Posted Dec 3, 2019 10:56 UTC (Tue)
by roblucid (guest, #48964)
[Link]
Library updates should preserve the API; incompatible API changes require a new package, which may provide a legacy API or support co-existence with an installation of the old library by changing file names or directories. Shared libraries actually do permit applications to choose different implementations if required.
Rather than "just in case" silos, fix the bugs and write competent software. Bad security fixes that break stuff are bugs, regressions which ought to be fixed, and the sysadmin is the only one who can decide the right mitigation in the deployment...
The ability to secretly rely on vulnerabilities IS NOT a benefit to the end user
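On the "provide a legacy API" option, a minimal Rust-flavoured sketch (hypothetical function names; only the source-level analogue of what a C library does with a versioned soname): when a signature has to change incompatibly, the old entry point can be kept as a deprecated wrapper so existing callers keep working while they migrate:

    // Hypothetical library module: the old API is kept as a thin,
    // deprecated wrapper around the new one instead of breaking
    // every existing caller at once.

    /// New API: callers must now pass milliseconds explicitly.
    pub fn set_timeout_ms(timeout_ms: u64) {
        // ... store the timeout somewhere ...
        let _ = timeout_ms;
    }

    /// Old API, kept for compatibility; it always meant seconds.
    #[deprecated(note = "use set_timeout_ms() and pass milliseconds")]
    pub fn set_timeout(timeout_secs: u64) {
        set_timeout_ms(timeout_secs * 1000);
    }

For shared C libraries the equivalent co-existence trick is the soname: an incompatible change bumps libfoo.so.1 to libfoo.so.2 and both can be installed at once, which is the "changing filenames" case mentioned above.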
Posted Dec 3, 2019 13:52 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
To expand on that, there are three groups involved here, not two:
- developers, who want to use the best technologies they know about;
- operations, who want to stop things they don't want to have happen and avoid bringing in tech for its own sake;
- users, who just want the computer to do useful things for them.
Distributions that survive end up being a compromise between the three. They update fast enough to keep developers happy; they're good enough at stopping things that you don't want to have happen to keep operations happy; they make the computer do enough useful things that users are happy. But note that distributions are not essential in this set - they just happen to be one form of compromise between the three groups that has historically worked out well enough to survive. Containers are another - especially combined with CI - where you build a complete FS image of the "application" and run it; you regularly update that image, and all is good.
Basically, things go wrong when the desires of the three groups are unbalanced - if a distribution becomes "this is what operations want", then users and developers leave it to ossify; if a distribution becomes "this is what users want", it loses developers and operations; if it becomes "this is what developers want", it loses users and operations. And as every real individual is a combination of users, developers and operations to various degrees, the result of such ossification is a gradual loss of people to work on the distribution.
As soon as distributions see themselves as "in opposition" to developers (or operations, or users), they're at risk - this is about finding a balance between developers' love of using the best technologies they know about, and operations' love of not bringing in tech for the sake of it that results in users getting value from the distribution.
Posted Dec 2, 2019 14:48 UTC (Mon)
by walters (subscriber, #7396)
[Link] (2 responses)
I know the meaning of the individual words in your message, but the way you've combined them isn't making much sense to me... ("proprietarized intermediated product sources"? Really?)
Posted Dec 4, 2019 8:57 UTC (Wed)
by nim-nim (subscriber, #34454)
[Link] (1 responses)
The difference between effective open sourcing and over-the-wall code dumping that others cannot use is the existence of things like Oracle Linux.
I haven't seen the equivalent on the OpenShift side, but I may not have looked hard enough.
Posted Dec 4, 2019 9:47 UTC (Wed)
by zdzichu (subscriber, #17118)
[Link]
In practice, CentOS built images once and did not provide any updates for them.
Posted Dec 2, 2019 22:42 UTC (Mon)
by rodgerd (guest, #58896)
[Link] (2 responses)
Posted Dec 2, 2019 22:52 UTC (Mon)
by pizza (subscriber, #46)
[Link]
Posted Dec 3, 2019 6:36 UTC (Tue)
by tzafrir (subscriber, #11501)
[Link]
So how will such a case work with bundled libraries? Nobody fixes problematic libraries down the stack? Developers do occasionally keep patched versions of such libraries? But then it is each developer on their own, right?
The same problem does not go away with bundled libraries. It may even get worse, because each developer / project is on its own tackling those issues.
Posted Dec 2, 2019 9:40 UTC (Mon)
by joib (subscriber, #8541)
[Link] (4 responses)
The current distro model is *a* solution (certainly not the only possible solution, and not necessarily the "best" one either, depending on how you define "best") to the problem of how to handle distribution of mostly C code, one that was largely set in stone 25 years ago.
If one looks at where the world is going, it seems that more and more we're seeing applications bundling their own dependencies, be it in the form of container images, language-specific build/dependency management systems with static linking like Rust or Go, or per-application virtual environments for dynamic languages like Python/Ruby/Javascript/Julia/etc. And things like app stores where applications can rely on a fairly limited system interface, bundling everything else.
I think the onus is on distros to adapt to the world as it is, not wishing for a bygone age. Otherwise they'll just be relegated to the position of an increasingly idiosyncratic way of providing the basic low-level OS while people get the actual applications they want to run from somewhere else. I also think that distros can have a very useful role to play here, e.g. keeping track of security updates for all these apps with bundled dependencies etc.
Posted Dec 2, 2019 15:33 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link] (3 responses)
Posted Dec 2, 2019 17:18 UTC (Mon)
by joib (subscriber, #8541)
[Link] (2 responses)
Providing a single coherent interface to manage packages, QA, integration testing, vetting of packages, following the distro policy, security updates (as previously mentioned) are certainly valuable things that distros do. I'd like to see distros continuing this valuable work, rather than being relegated to an increasingly irrelevant provider of the low-level OS. And to do this, I think distros need to better accommodate how software is developed and deployed today, rather than wishing for a bygone era where gcc, make and shell were all that was needed.
Posted Dec 5, 2019 14:46 UTC (Thu)
by nim-nim (subscriber, #34454)
[Link] (1 responses)
Distributions, however, have minimal standards to retain the trust of their userbase.
Anything that needs too much work to attain those standards will get kicked out of distros, just because no one wants to work on it.
That's why alternatives to gcc, make and shell get no traction. Dev-only tech that only helps devs and makes everyone except devs miserable won't be adopted by anyone except devs. If those devs want their stuff to get into distributions, they can do the work themselves, or make it easier for others to do this work.
If those devs don't want to do the work, and don't want to help others do it, they can stop moaning that distributions are unfriendly, and prepare for a world where the thing they deploy on is controlled by Apple, Google or Microsoft, and they need to construct all the parts over this proprietary baseline in Linux From Scratch mode, depending on proprietary cloud services.
Devs can certainly kill distros. So far, they haven’t demonstrated any working plan once this is done.
Posted Dec 5, 2019 15:11 UTC (Thu)
by pizza (subscriber, #46)
[Link]
That's a key point -- traditional distributions do a lot of work integrating a large pile of unrelated code into a mostly-cohesive (and usable, and maintainable) whole. Application (and container) developers rely heavily on that work, allowing themselves to focus on the stuff they care about (i.e. their application).
If those distributions go away, that low-level systems development and integration work still needs to get done by _someone_, and to be blunt, application developers have shown themselves, even when willing, to be generally incapable of doing this work.
Oddly enough, the developers that are both willing and capable seem to recognize the value (and time/effort savings) that distributions bring to the table -- because without the distros, the developers would have a lot more work on their hands.