Poettering: Revisiting how we put together Linux systems
Posted Sep 1, 2014 13:03 UTC (Mon)
by ovitters (guest, #27950)
In reply to: Poettering: Revisiting how we put together Linux systems by colo
Parent article: Poettering: Revisiting how we put together Linux systems
btrfs-only seems like a step back. The various filesystems are better in some workloads than others. I guess you could have everything in btrfs except the data somehow. But then how would systemd automatically know that these things belong together? Hrm...
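For what it's worth, the proposal's answer is to encode those relationships in the sub-volume names themselves, so the tooling only has to parse names. A minimal sketch (the pool path and version strings here are invented; the name layout loosely follows the article's examples):

    # a minimal sketch, assuming the colon-separated sub-volume naming
    # from the proposal; the pool path and versions are hypothetical
    import os, subprocess

    def create_subvol(pool, name):
        # plain 'btrfs subvolume create <path>'; the name carries the metadata
        subprocess.check_call(['btrfs', 'subvolume', 'create',
                               os.path.join(pool, name)])

    pool = '/var/lib/vendor'  # hypothetical location for vendor-supplied images
    create_subvol(pool, 'usr:org.fedoraproject.WorkstationOS:x86_64:24.2')
    create_subvol(pool, 'runtime:org.gnome.GNOME3_20:x86_64:3.20.1')
    create_subvol(pool, 'app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133')

An app sub-volume names the runtime it needs, so "belonging together" falls out of string matching rather than a dependency solver.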
Posted Sep 1, 2014 13:12 UTC (Mon)
by cate (subscriber, #1359)
[Link] (102 responses)
So the static libraries are only a workaround until people and distributions behave more stably.
Posted Sep 1, 2014 15:37 UTC (Mon)
by torquay (guest, #92428)
[Link] (100 responses)
Which is going to be never, as almost all distributions have no qualms about breaking APIs and ABIs from one release to the next. Fedora being the prime example, with Ubuntu not far behind in this broken state of affairs. (And hence it's no wonder many people have moved to Mac OS X, which provides a refreshingly stable environment, with the OS updates being free on Mac machines).
The distributions in turn try to shift the blame to "upstream", because they have no manpower to fix the breakage, nor the power or willingness to punish upstream developers. Many upstream developers behave well and try to maintain backwards compatibility, but on the scale of a distribution the number of broken and/or changed libraries (made by undisciplined kids with Attention Deficit Disorder) quickly accumulates. The constant mess created by Gnome and GTK comes to mind.
Hence we end up with the effort by the systemd folks to try to fix this mess, by proposing in effect a massive abstraction layer. While it seems to be an overly elaborate scheme with many moving parts, any effort in fixing the mess is certainly welcome.
Perhaps there's an easier way to skin the cat: have each app run inside its own Docker container, but with access to a common /home partition. All the libraries and runtime required for the app are bundled with the app, including an X or Wayland display server (*). The windows produced by the app are captured and shown by a "master" display server. It's certainly a heavy handed and size-inefficient solution, but this is the price to pay to tame the constant API and ABI brokenness.
(*) perhaps this requirement can be relaxed to omit components that are guaranteed to never break their APIs/ABIs; by default all libraries and components are treated as incompatible from one version to the next, unless explicitly shown otherwise through extensive regression tests.
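As a rough sketch of what that per-app container could look like, assuming Docker with a shared /home and the host's display socket passed through (the image name is hypothetical):

    # run one app from its own bundled image; only /home and the display
    # socket of the "master" server are shared with the host
    import subprocess

    def run_app(image, command, home):
        subprocess.check_call([
            'docker', 'run', '--rm',
            '-v', '%s:%s' % (home, home),           # common /home partition
            '-v', '/tmp/.X11-unix:/tmp/.X11-unix',  # master display server
            '-e', 'DISPLAY=:0',
            image, command,
        ])

    run_app('vendor/gimp-bundle:2.8', 'gimp', '/home/alice')  # hypothetical image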
Posted Sep 1, 2014 16:43 UTC (Mon)
by cyperpunks (subscriber, #39406)
[Link] (9 responses)
Let's use the Heartbleed issue as an example.
To get fully protected after the bug, all the work a distro user was required to do was to install the latest openssl package from the distro.
Now, in this new scheme of things, the user is forced to upgrade every single instance and check each for any possible Heartbleed issue.
The new scheme brings flexibility; from a security viewpoint, however, it seems like a nightmare.
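To make the nightmare concrete, a sketch of the audit that replaces today's single package update (the directory layout is hypothetical):

    # with bundled runtimes there is no one openssl package to fix: every
    # runtime and app image ships its own copy and must be found and checked
    import fnmatch, os

    for root in ('/var/lib/runtimes', '/var/lib/apps'):
        for dirpath, _, files in os.walk(root):
            for name in fnmatch.filter(files, 'libssl.so*'):
                print('needs checking:', os.path.join(dirpath, name))

And finding the copies is the easy part; each one still has to be fixed by whoever ships that particular image.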
Posted Sep 1, 2014 17:00 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (3 responses)
Posted Sep 1, 2014 17:26 UTC (Mon)
by cyperpunks (subscriber, #39406)
[Link] (2 responses)
Posted Sep 1, 2014 21:12 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (1 responses)
Posted Sep 2, 2014 9:52 UTC (Tue)
by NAR (subscriber, #1313)
[Link]
Posted Sep 1, 2014 19:45 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (3 responses)
For a non-distro user (or, like me, a gentoo user), all that was needed was to not switch on the broken functionality in the first place! The reports I've seen all said that - for most machines - the heartbeat feature behind Heartbleed was functionality that wasn't wanted and should not have been enabled to start with.
Yes I know users "don't want" the hassle, but gentoo suits me fine. I switch things on if I need them. That *should* be the norm.
Cheers,
Wol
Posted Sep 2, 2014 17:17 UTC (Tue)
by rich0 (guest, #55509)
[Link] (2 responses)
I think you just got lucky, and running USE=-* has its own issues.
Posted Sep 2, 2014 18:29 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
(a) most people had it switched on
(b) most people weren't using it
That's a recipe for minimal testing and maximal problems.
Your scenario is where most people need the functionality, so I'm in a minority of not wanting or needing. I don't think that is anywhere near as likely (although I could be wrong ...)
Cheers,
Wol
Posted Sep 4, 2014 19:30 UTC (Thu)
by NightMonkey (subscriber, #23051)
[Link]
Posted Sep 2, 2014 2:26 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 1, 2014 16:46 UTC (Mon)
by raven667 (subscriber, #5198)
[Link]
Sure, one of the reasons Docker exists and is so popular is to try and skin this cat; heck, the reason VMs are so popular is to abstract away this software dependency management problem in a simple but heavy-handed way. The problem is that no one really wants to run nested kernels to make this work, since you lose a lot of information about the work to be scheduled when nesting kernels, so this is a possible way to solve the fundamental software compatibility management problem on a shared kernel.
I'm sure that as others digest this proposal and try to build systems using it they will discover corner cases which are not handled, compatibility issues, and ways to simplify it, so the final result may be somewhat different, but this could be a vastly useful system.
Posted Sep 1, 2014 18:26 UTC (Mon)
by HenrikH (subscriber, #31152)
[Link] (17 responses)
Posted Sep 1, 2014 20:22 UTC (Mon)
by Karellen (subscriber, #67644)
[Link] (16 responses)
That's generally the point of major.minor.patch versioning, at least among (shared) libraries. An update which changes the "patch" level should not change the ABI *at all*, it should only change (improve) the functionality of the existing ABI.
A change to the "minor" level should only *add* to the ABI, so that all users of 1.2.x should be able to use 1.3.0+ if it's dropped in as a replacement.
If, as a library author, you need to change the ABI, by for instance modifying a function signature, or deleting a function that shouldn't be used any more, that's when you change the "major" version to 2.0.0, and make SDL-2.a.b co-installable with SDL-1.x.y. That way, legacy apps linked against the old and busted SDL1 can continue to work, while their modern replacements can link with the new hotness SDL2 and run together on the same system.
It's not always easy, and requires care and discipline. But creating shims would be just as much work, and tools already exist to help get it right, like abicheck.
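A toy version of the check such tools automate, using pyelftools: under the rules above, a minor release may add exported symbols but never remove one. (The library file names are hypothetical, and symbol presence is only a lower bound on ABI compatibility; changed struct layouts or signatures won't show up here.)

    from elftools.elf.elffile import ELFFile

    def dynamic_symbols(path):
        # collect the names in the library's dynamic symbol table
        with open(path, 'rb') as f:
            dynsym = ELFFile(f).get_section_by_name('.dynsym')
            if dynsym is None:
                return set()
            return {sym.name for sym in dynsym.iter_symbols() if sym.name}

    old = dynamic_symbols('libfoo.so.1.2.0')
    new = dynamic_symbols('libfoo.so.1.3.0')
    removed = old - new
    print('symbols removed in a minor release:', sorted(removed) or 'none')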
Posted Sep 1, 2014 20:26 UTC (Mon)
by dlang (guest, #313)
[Link] (15 responses)
If ABIs were managed properly, we wouldn't be having these discussions.
Posted Sep 2, 2014 2:48 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (11 responses)
Posted Sep 2, 2014 19:38 UTC (Tue)
by robclark (subscriber, #74945)
[Link] (8 responses)
that's kind of cool.. I hadn't seen it before. Perhaps they should add a 'wall of shame' ;-)
At least better awareness amongst devs about ABI compat seems like a good idea.
Posted Sep 3, 2014 10:13 UTC (Wed)
by accumulator (guest, #95885)
[Link] (7 responses)
Posted Sep 3, 2014 22:35 UTC (Wed)
by nix (subscriber, #2304)
[Link] (6 responses)
Posted Sep 4, 2014 19:06 UTC (Thu)
by zlynx (guest, #2285)
[Link] (5 responses)
Posted Sep 5, 2014 13:58 UTC (Fri)
by jwakely (subscriber, #60262)
[Link] (4 responses)
Posted Sep 5, 2014 14:38 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link] (3 responses)
Posted Sep 8, 2014 16:05 UTC (Mon)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Sep 9, 2014 12:47 UTC (Tue)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
Am I missing something? It seems to me that any binaries that were compiled against the version of the header that didn't have the volatile in it may display incorrect behaviour with respect to accessing it, and on that basis it seems to me that it's reasonable to call it an ABI break (since the only way to fix the break is to recompile).
Posted Sep 9, 2014 13:54 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Posted Sep 2, 2014 22:05 UTC (Tue)
by jondo (guest, #69852)
[Link] (1 responses)
Reality kicks in: This would simply stop all updates ...
Posted Sep 3, 2014 15:05 UTC (Wed)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 2, 2014 6:21 UTC (Tue)
by krake (guest, #55996)
[Link]
Its wiki page on C++ ABI dos-and-don'ts has become one of the most often referenced resources on that matter.
Posted Sep 2, 2014 11:26 UTC (Tue)
by cjcoats (guest, #9833)
[Link] (1 responses)
in the WRF weather-forecast modeling system between WRF-3.0.1 and 3.0.2.
Or at least, there were that many that I found and had to re-program for...
Posted Sep 3, 2014 12:32 UTC (Wed)
by rweir (subscriber, #24833)
[Link]
Posted Sep 1, 2014 19:38 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (7 responses)
So why, even in the last couple of days, have I been hearing moans from Mac users that they can't upgrade their system (this chap was stuck on Snow Leopard)?
Cheers,
Wol
Posted Sep 2, 2014 1:42 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (6 responses)
Posted Sep 2, 2014 5:53 UTC (Tue)
by torquay (guest, #92428)
[Link] (5 responses)
> They are free provided you have given your monetary offering to the shrine of Jobs recently enough.

And that may not be such a bad thing, if it gives you peace of mind and freedom from the broken API/ABI in the Linux world.
Constantly dealing with broken API/ABI is certainly not free: it takes up your time, which could have been used for more productive things.
Posted Sep 2, 2014 10:45 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (4 responses)
Posted Sep 2, 2014 14:34 UTC (Tue)
by madscientist (subscriber, #16861)
[Link] (1 responses)
Posted Sep 2, 2014 17:11 UTC (Tue)
by pizza (subscriber, #46)
[Link]
I'm heavily involved with Gutenprint, and it is not an exaggeration to say that every major OSX release has broken (oh, sorry, "incompatibly changed") something we depended on.
Posted Sep 3, 2014 9:08 UTC (Wed)
by dgm (subscriber, #49227)
[Link] (1 responses)
Posted Sep 3, 2014 11:32 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
Posted Sep 1, 2014 22:02 UTC (Mon)
by sramkrishna (subscriber, #72628)
[Link]
Yet it is from that community that application sandboxing and single-binary distribution are coming.
Posted Sep 1, 2014 23:41 UTC (Mon)
by mezcalero (subscriber, #45103)
[Link] (52 responses)
I love the ultimate simplicity of this scheme. I mean, coming up with a good scheme to name btrfs sub-volumes is a good idea anyway, and then just going one step further and actually packaging the OS that way is actually not that big a leap!
I mean, maybe it isn't obvious when one comes from classic packaging systems with all their dependency graph theory, but looking back, after figuring out that this could work, it's more like "oh, well, this one was obvious..."
Posted Sep 2, 2014 17:18 UTC (Tue)
by paulj (subscriber, #341)
[Link] (50 responses)
If so, I'm wondering:
- Who assigns or registers or otherwise coördinates these distro-abstract dependency names?
- Who specifies the ABI for these abstract dependencies? I guess for GNOME3_20?
- What if multiple dependencies are needed? How is that dealt with?
The ABI specification thing for the labels seems a potentially tricky issue. E.g., should GNOME specify the one in this example? But what if there are optional features distros might want to enable/disable? That means labels are needed for every possible combination of ABI-affecting options that any dependency might have?
Posted Sep 2, 2014 18:55 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (49 responses)
I think some of the vendor ids are getting truncated when making examples.
Posted Sep 4, 2014 9:33 UTC (Thu)
by paulj (subscriber, #341)
[Link] (48 responses)
On which point: Lennart has used "API" in comments here quite a lot, but I think he means ABI. ABI is even more difficult to keep stable than API, and the Linux desktop people haven't even managed to keep APIs stable!
#include "rants/referencer-got-broken-by-glib-gnome-vfs-changes.txt"
Posted Sep 4, 2014 14:46 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (47 responses)
Posted Sep 4, 2014 22:34 UTC (Thu)
by dlang (guest, #313)
[Link] (31 responses)
Posted Sep 5, 2014 4:14 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (30 responses)
Posted Sep 5, 2014 6:08 UTC (Fri)
by dlang (guest, #313)
[Link] (29 responses)
But if every upgrade of every software package on your machine is the same way, it will be a fiasco.
Remember that the "base system" used for this "unchanging binary compatibility" is subject to change at the whim of the software developer; any update they do, you will (potentially) have to do as well, so that you have the identical environment.
Posted Sep 5, 2014 16:08 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (28 responses)
Posted Sep 5, 2014 16:14 UTC (Fri)
by paulj (subscriber, #341)
[Link] (12 responses)
Posted Sep 5, 2014 17:07 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (11 responses)
One thing about this proposal, if it is worked on, is that it puts pressure on distros to define a subset as stable, and it puts pressure on app makers to standardize on a few runtimes. So even if this proposal does not become the standard, it may create the discussion that results in a new LSB standard for distro binary compatibility which is much more comprehensive than the weak-sauce LSB currently is. I think the discussion of what goes into /usr is very useful on its own even if nothing else comes out of this proposal.
Posted Sep 5, 2014 18:02 UTC (Fri)
by paulj (subscriber, #341)
[Link] (10 responses)
However, note that this is the "This is how the world should be, and we're going to change our bit to make it so, and if it ends up hurting users then that will supply the pressure needed to make the rest of the world so" approach. An approach to making progress which I think has been tried at least a few times before in the Linux world, which I'm not sure always helps in the greater scheme of things. The users who get hurt may not be willing to stick around to see the utopia realised, and not everything may get fixed.
Posted Sep 5, 2014 18:53 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (9 responses)
Posted Sep 6, 2014 10:40 UTC (Sat)
by paulj (subscriber, #341)
[Link] (8 responses)
Posted Sep 7, 2014 15:17 UTC (Sun)
by raven667 (subscriber, #5198)
[Link] (7 responses)
Posted Sep 8, 2014 9:30 UTC (Mon)
by paulj (subscriber, #341)
[Link] (6 responses)
If I had to make a criticism, it wouldn't be of him specifically, but of an attitude common amongst Linux user-space devs, that holds that it is acceptable to burn users with breakage, in the quest for some future code or system utopia. So code gets written that assumes other components are perfect, so that if they are not breakage will expose them and those other components will eventually get fixed. Code gets rewritten, because the old code was ugly, and the new code will of course be much better, and worth the regressions.
The root cause of this seems to be because of the fractured way Linux works. There is generally no authority that can represent the users' interests for stability. There is no authority that can coördinate and ensure that if subsystem X requires subsystem Y to implement something that was never really used before, that subsystem Y is given time to do this before X goes out in the wild. Or no authority to coördinate rewrites and release cycles.
Instead the various fractured groups of developers, in a way, interact by pushing code to users who, if agitated sufficiently, will investigate and report bugs or even help fix them.
You could also argue this is because of a lack of QA resources. As a result of which the user has to help fill that role. However, the lack of resources could also be seen as being in part due to the Linux desktop user environment never having grown out of treating the user as QA, and regularly burning users away.
Posted Sep 8, 2014 10:59 UTC (Mon)
by dlang (guest, #313)
[Link]
Posted Sep 8, 2014 11:59 UTC (Mon)
by pizza (subscriber, #46)
[Link] (4 responses)
I've been on both sides of this argument, as both an end-user and as a developer.
On balance, I'd much rather have the situation today, where stuff is written assuming the other components work properly, and where bugs get fixed in their actual locations rather than independently, inconsistently, and incompatibly papered over.
For example, this attitude is why Sound and [Wireless] Networking JustWork(tm) in Linux today.
The workaround-everything approach is only necessary when you are stuck with components you can't fix at the source -- ie proprietary crap. We don't have that problem, so let's do this properly!
The days where completely independent, underspecified, and barely-coupled components are a viable path forward have been over for a long, long time.
Posted Sep 8, 2014 12:33 UTC (Mon)
by nye (subscriber, #51576)
[Link] (1 responses)
Except they don't.
A couple of examples:
My home machine has both the front channels and the rear channels connected, for a total of four audio channels. In Windows, I go in to my sound properties and pick the 'quadrophonic' option; now applications like games or VLC will mix the available channels appropriately so that I get sound correctly coming out of the appropriate speakers - not only all four channels, but all four channels *correctly*, so that, for example, 5.1 audio is appropriately remixed by the application without any configuration needed.
I had a look at how I might get this in OpenSuSE earlier this year, and eventually concluded either that PA simply can't do this *at all*, or that if it can, nobody knows how[0]. I did find some instructions for how to set up something like this using a custom ALSA configuration, though that would have required that applications be configured to know about it (rather than doing the right thing automatically), and I never got around to trying it out before giving up on OS for a multitude of reasons.
Another example:
I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.
A related example:
For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.
It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.
[0] Some more recent googling has turned up more promising discussion of configuration file options, but I no longer have that installation to test out.
Posted Sep 8, 2014 13:14 UTC (Mon)
by pizza (subscriber, #46)
[Link]
The last three motherboards I've had, with multi-channel audio, have JustWorked(tm) once I selected the appropriate speaker configuration under Fedora/GNOME. Upmixed and downmixed PCM output, and even the analog inputs are mixed properly too.
(Of course, some of the responsibility for handling this is in the hands of the application, even if only to query and respect the system speaker settings instead of using a fixed configuration)
> I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.
NM has been effectively flawless for me for several years now (even with switching back and forth), also with Fedora, but that shouldn't matter in this case -- I hope you filed a bug report. Folks can't fix problems they don't know about.
> For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.
I can't speak to Ubuntu's DHCP stuff here (did you file a bug?) but I've seen a similar problem in the past using dnsmasq's DHCP client -- the basic problem I found was that certain DHCP servers were a bit... special in their configuration, and the result is that the DHCP client didn't get a valid DNS entry. dnsmasq eventually implemented a workaround for the buggy server/configuration. This was maybe three years ago?
> It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.
Come on, that's being grossly unfair. Before NM came along, wireless was more unreliable than not, with every driver implementing the WEXT stuff slightly differently, requiring every client (or user) to treat every device type slightly differently. Now the only reason things don't work is if the hardware itself is unsupported, and that's quite rare these days.
Posted Sep 8, 2014 15:38 UTC (Mon)
by paulj (subscriber, #341)
[Link] (1 responses)
While I agree this path has led to sound and wireless working well in Linux today, I would disagree it was the only possible path. I'd suggest there were other paths available that would have ultimately led to the same result, but taken more care to avoid regressions and/or provide basic functionality even when other components hadn't been updated to match some new specification.
What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.
Posted Sep 8, 2014 17:12 UTC (Mon)
by pizza (subscriber, #46)
[Link]
...perhaps you are correct, but those other paths would have taken considerably longer, leading to a different sort of user burning.
> What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.
These days, Linux just isn't a cool counterculture status symbol any more. It's part of the boring infrastructure that's someone else's problem.
Anyway. The technical ones basically boil down to the benefits of Apple controlling the entire hardware/software/cloud stack -- Stuff JustWorks(tm). As long as you color within the lines, anyway.
Posted Sep 6, 2014 4:41 UTC (Sat)
by dlang (guest, #313)
[Link] (14 responses)
I disagree that that is a wonderful thing and everyone should be making use of it.
we are getting conflicting messages about who would maintain these base systems, if it's the distros, how are they any different than the current situation?
if it's joe random developer defining the ABI that his software is built against, it's going to be a disaster.
The assertion is being made that all the random developers (who can't agree on much of anything today, not language, not development distro, not packaging standards even within a distro, etc) are going to somehow standardize on a small set of ABIs.
I see this as being a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer; they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even lose features, as they are doing the infrastructure upgrade).
building up technical debt with the intention to pay it off later almost never works, and even when it does, it doesn't work well.
anything that encourages developers to build up technical debt by being able to ignore compatibility is bad
This encourages developers to ignore compatibility in two ways.
1. The app developers don't have to worry about compatibility because they just stick with the version that they started with.
2. Library developers don't have to worry about compatibility because anyone who complains about the difficulty in upgrading can now be told to just stick with the old ABI.
Posted Sep 6, 2014 11:22 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (4 responses)
1) People who don't want their disk space chewed up with multiple environments.
2) People who don't want (or like me can't understand :-) btrfs.
3) Devs who (like me) like to run an up-to-date rolling distro.
4) Distro packagers who don't want the hassle of current packages that won't run on current systems.
Personally, I think any dev who ignores compatibility is likely to find themselves in the "deprecated" bin fairly quickly, and will just get left behind.
Where it will really score is commercial software which will be sold against something like a SLES or RHEL image, which will then continue to run "for ever" :-)
Cheers,
Wol
Posted Sep 6, 2014 21:00 UTC (Sat)
by dlang (guest, #313)
[Link] (3 responses)
Yep, that's part of my fear.
this 'forever' doesn't include security updates.
People are already doing this with virtualization (see the push from vmware about how it allows people to keep running Windows XP forever), and you are seeing a lot of RHEL5 in cloud deployments, with no plans to ever upgrade to anything newer.
Posted Sep 6, 2014 22:14 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted Sep 6, 2014 23:00 UTC (Sat)
by dlang (guest, #313)
[Link] (1 responses)
I don't think so.
Posted Sep 7, 2014 15:58 UTC (Sun)
by raven667 (subscriber, #5198)
[Link]
On desktops as well, being able to run older apps on newer systems rather than being force-upgraded because the distro updates, and being able to run other newer apps (and bugfixes) on a faster cadence than a distro that releases every 6mo or 1yr gives you, is a benefit that many seem to be looking for; staying on GNOME2, for example, while keeping up with Firefox and LibreOffice updates or whatever. Being able to run multiple userspaces on the same system with low friction allows them to fight it out and compete more directly than dual-booting or VMs, rather than being locked in to what your preferred distro provides.
Posted Sep 6, 2014 23:15 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (8 responses)
I think you answered that in your first paragraph:
> I agree that this proposal makes it easy to have many different ABIs on the same computer.
There is currently more friction in running different ABIs on the same computer; there is no standard automated means for doing so, so people have to build and run their own VMs or containers, with limited to non-existent integration between the VMs or containers.
The other big win is a standard way to do updates that works with any kind of distro, from Desktop to Server to Android to IVI and embedded, without each kind of systems needing to redesign updates in their own special way.
> if it's joe random developer defining the ABI that his software is built against, it's going to be a disaster.
The proposal is that a developer would pick an ABI to build against, such as OpenSuSE 16.4 or, for a non-desktop example, OpenWRT 16.04, not that every developer would be responsible for bundling and building all of their library dependencies.
> The assertion is being made that all the random developers (who can't agree on much of anything today, not language, not development distro, not packaging standards even within a distro, etc) are going to somehow standardize on a small set if ABIs
This whole proposal is a way to try to use technology to change the social and political dynamic by changing the cost of different incentives; it is not guaranteed how it will play out. I think there is pressure though from end users, who don't want to run 18 different ABI distros to run 18 different applications, to pick a few winners, maybe 2 or 3 at most; in the process there might be a de-facto standard created which re-vitalizes the LSB.
> I see this as being a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer, they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even loose features, as they are doing the infrastructure upgrade)
I don't see it playing out that way, developers love having the latest and greatest too much for them to continue to deploy apps built against really old runtimes, all of the pressure is for them to build against the latest ABI release of their preferred distro. The thing is that one of the problems this is trying to solve is that many people don't want to have to upgrade their entire computer with all new software every six months just to keep updated on a few applications they care about, or conversely be forced to update their main applications because their distro has moved on, it might actually make more sense to run the latest distro ABI alongside the users preferred environment, satisfying both developers and end-users.
Posted Sep 7, 2014 8:20 UTC (Sun)
by dlang (guest, #313)
[Link] (7 responses)
I can see why developers would like this, but I still say that this is a bad result.
Posted Sep 7, 2014 8:29 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (6 responses)
And since there's no base ABI, everybody just does whatever suits them. Just last week we found out that Docker images for Ubuntu don't have libcrypto installed, for example.
Maybe this container-lite initiative will motivate distros to create a set of basic runtimes that can actually be downloaded and used directly.
Posted Sep 7, 2014 9:48 UTC (Sun)
by dlang (guest, #313)
[Link] (5 responses)
if you don't define them to include every package, you will run into the one that you are missing.
These baselines are no easier to standardize than the LSB or distros.
In fact, they are worse than distros because there aren't any dependencies available (other than on "RHEL10 baseline")
Posted Sep 7, 2014 9:56 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
> if you don't define them to include every package, you will run into the one that you are missing.

Not every package, but at least _something_. And dedup partially solves the space problem.

> These baselines are no easier to standardize than the LSB or distros.

There are no standards right now. None. And LSB has shown us that when a group of vendors dictate their vision of a standard, distros simply ignore them.

The ecosystem of runtimes might help to evolve at least a de-facto standard. I'm pretty sure that it can be done for the server-side (and let's face it, that's the main area of non-Android Linux use right now) but I'm less sure about the desktop.
Posted Sep 7, 2014 10:43 UTC (Sun)
by dlang (guest, #313)
[Link] (3 responses)
>> These baselines are no easier to standardize than the LSB or distros.
> There are no standards right now. None. And LSB has shown us that when a group of vendors dictate their vision of a standard, distros simply ignore them.
so who is going to define the standards for the new approach? Every distro will just declare each of their releases to be a standard (and stop supporting them as quickly as they do today)
that gains nothing over the current status quo, except giving legitimacy to people who don't want to upgrade
If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today, they will be just another distro group, with all the drawbacks of having to back-port security fixes (or fail to do so) that that implies.
Posted Sep 7, 2014 10:53 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
> If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today, they will be just another distro group, with all the drawbacks of having to back-port security fixes (or fail to do so) that that implies.
I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.
Posted Sep 7, 2014 11:18 UTC (Sun)
by dlang (guest, #313)
[Link] (1 responses)
thanks for the laugh
there isn't going to be "the runtime" any more than there will be "the distro", for the same reason, different people want different things and have different tolerance of risky new features.
> I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.
they already do, it's called their distro releases
> as an app developer I won't have to play against network effects of distributions - my software will run on ALL distributions supporting the /usr-based packaging.
no, your users may just have to download a few tens of GB of base packaging to run it instead.
Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X)
Posted Sep 7, 2014 11:52 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link]
> there isn't going to be "the runtime" any more than there will be "the distro", for the same reason, different people want different things and have different tolerance of risky new features.

No. Most developers want pretty much the same basic feature set with small custom additions.

> they already do, it's called their distro releases

No they don't. Distro model is exclusionary - I can't just ship RHEL along with my software package (well, I can but it's impractical). So either I have to FORCE my clients to use a specific version of RHEL or I have to test my package on lots of different distros.

That's the crux of the problem - distros are wildly incompatible and there's no real hope that they'll merge any time soon.

> no, your users may just have to download a few tens of GB of base packaging to run it instead.

Bullshit. Minimal Docker image for Ubuntu is less than 100Mb and it contains lots of software. There's no reason at all for the basic system to be more than 100Mb in size.

> Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X)

Who cares. All existing software, except for high-profile stuff like browsers, is insecure like hell. Get over it.
Posted Sep 5, 2014 3:01 UTC (Fri)
by paulj (subscriber, #341)
[Link] (14 responses)
The de-duping thing seems tenuous to me, for the "ship your own runtime" case. What are the chances that two different application vendors happen to pick the exact same combination of compiler toolchain, compile flags and libraries necessary to give identical binaries?
Having a system to make it possible to run different applications, built against different "distros" (or runtimes), at the same time, running with the same root/config (/etc, /var) and /home seems good. Though, I am sceptical that:
a) There won't be configuration compatibility issues with different apps using slightly different versions of a dependency that reads some config in /home (ditto for /etc).
This kind of thing used to not be an issue, back when it was more common to share /home across different computers thanks to NFS, and application developers would get more complaints if they broke multi-version access. However $HOME sharing is rare these days, and I got sick a long time ago of, e.g., GNOME not dealing well with different versions accessing the same $HOME (non-concurrently!).
b) Sharing /var across different runtimes similarly is likely fraught with multi-version incompatibility issues.
It's ironic that shared (even non-concurrent) $HOME support got broken / neglected in Linux, and now it seems we need it to help solve the runtime-ABI proliferation problem of Linux. :)
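The de-dup question is at least easy to test empirically; a sketch that counts how many bytes two unpacked runtime trees actually share byte-for-byte:

    # hash every regular file in two trees and total the sizes of files
    # that exist identically in both; reads whole files, so a sketch only
    import hashlib, os, sys

    def file_hashes(root):
        sizes = {}
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if os.path.isfile(path) and not os.path.islink(path):
                    with open(path, 'rb') as f:
                        digest = hashlib.sha256(f.read()).hexdigest()
                    sizes[digest] = os.path.getsize(path)
        return sizes

    a, b = file_hashes(sys.argv[1]), file_hashes(sys.argv[2])
    shared = sum(size for digest, size in a.items() if digest in b)
    print('bytes shareable by file-level de-dup: %d' % shared)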
Posted Sep 5, 2014 3:03 UTC (Fri)
by rahulsundaram (subscriber, #21946)
[Link] (2 responses)
https://mail.gnome.org/archives/gnome-os-list/2014-Septem...
Posted Sep 5, 2014 3:06 UTC (Fri)
by martin.langhoff (guest, #61417)
[Link] (1 responses)
Posted Sep 5, 2014 3:11 UTC (Fri)
by rahulsundaram (subscriber, #21946)
[Link]
Posted Sep 8, 2014 1:20 UTC (Mon)
by Arker (guest, #14205)
[Link] (10 responses)
That used to bother me too. I deleted GNOME. As GNOME is the source of the breakage (in this and so many other situations) that is the only sensible response. The herd instinct to embrace insanity instead, in order to keep GNOME (a process that is actually ongoing, and accelerating) is what I do not understand. Why are people so insistent on throwing away things that work and replacing them with things that do not?
Posted Sep 8, 2014 10:14 UTC (Mon)
by mchapman (subscriber, #66589)
[Link] (9 responses)
What's there to understand? Clearly these people you're talking about are having a different experience with the software than you are. Why would you think your particular experience with it is canonical? Is it so hard to believe other people's experiences are different?
Posted Sep 8, 2014 12:53 UTC (Mon)
by Arker (guest, #14205)
[Link] (8 responses)
Posted Sep 8, 2014 13:18 UTC (Mon)
by JGR (subscriber, #93631)
[Link] (5 responses)
Or to put it another way, not everyone necessarily shares your view of what is "broken" and what is not.
Posted Sep 8, 2014 14:07 UTC (Mon)
by Arker (guest, #14205)
[Link] (4 responses)
This is a problem that affects the entire market for computers, worldwide. Markets work well when buyers and sellers are informed. Buyers of computer systems, on the other hand, are for the most part as far from well informed as imaginable. A market where the buyers do not understand the products well enough to make informed choices between competitors is a market which has problems. And Free Software is part of that larger market.
And that's what we see with GNOME. The example we were discussing above had to do with the $home directory. The GNOME devs simply refuse to even try to do it right. Since none of them used shared $home directories, they did not see the problem, and had no interest in fixing it. And here is where the broken market comes in - because there were enough end users who like the GNOME devs did not understand how $home works and how it is to be used who simply did not understand why they should care.
And that's how the breakage created by a cluster of ADD spreads out to tear down the ecosystem generally. That's the path the herd is on right now. Anything that your 13 year old doesn't want to take the time to understand - it's gone or going. A few more years of this and we will have computing systems setting world records for potential power and actual uselessness simultaneously.
Posted Sep 8, 2014 15:03 UTC (Mon)
by pizza (subscriber, #46)
[Link] (3 responses)
I'm not quite sure what your point here is... you're basically blaming GNOME for the fact that its users are uninformed, and further, it's also GNOME's fault because those same uninformed users don't know enough to care about a philosophical argument under the hood about a situation those same uninformed users will never actually encounter?
> And that's how the breakage created by a cluster of ADD spreads out to tear down the ecosystem generally.
Please, lay off on the ad honimem insults.
Posted Sep 8, 2014 16:17 UTC (Mon)
by Arker (guest, #14205)
[Link] (2 responses)
I was not assessing blame, I was simply making you aware of the progression of events.
"Please, lay off on the ad honimem insults."
Please, learn to distinguish between ad honimem(sic) insults and statements of fact you find inconvenient or embarrasing.
Posted Sep 8, 2014 17:20 UTC (Mon)
by pizza (subscriber, #46)
[Link] (1 responses)
> Please, learn to distinguish between ad honimem(sic) insults and statements of fact you find inconvenient or embarrasing.
Personally, I would be embarrased(sic) if I was the one who considered the above a statement of fact, and petulantly pointed out a spelling error while making one of your own.
But hey, thanks for the chuckle.
Posted Sep 8, 2014 17:34 UTC (Mon)
by Arker (guest, #14205)
[Link]
The pattern of behavior from the GNOME project is indeed a fact, it's not disputable, the tracks are all over the internet and since it has been the same pattern for over a decade it certainly seems fair to expect it to continue. If you think you have an objection to that characterization that is legitimate, please feel free to put it forward concretely. Putting forward baseless personal accusations instead cuts no ice.
Posted Sep 8, 2014 14:12 UTC (Mon)
by mchapman (subscriber, #66589)
[Link] (1 responses)
But that wasn't what you claimed used to bother you. You were talking about broken behaviour, not code. Now maybe GNOME's behaviour *is* broken, that sharing a $HOME between multiple versions doesn't work properly, and it's just because I don't do that that I've never noticed.
So I'm having trouble following your argument. Are you saying people shouldn't be supporting GNOME -- that the only sensible thing to do with it is uninstall it -- because there are *some* use cases that for *some* people don't work properly? That seems really unfair for everybody else.
Posted Sep 8, 2014 14:30 UTC (Mon)
by Arker (guest, #14205)
[Link]
A distinction with no difference. Behaviour is the result of code, and code is experienced as behaviour.
"Now maybe GNOME's behaviour *is* broken, that sharing a $HOME between multiple versions doesn't work properly, and it's just because I don't do that that I've never noticed."
That is correct, but also incomplete. Since GNOME screwed this up, they set a (bad) example that has been followed by others as well, and I am afraid today you will find so many commonly used programs have now emulated the breakage that it's widespread and this essential core OS feature is now practically defunct.
Of course YMMV, but in my universe, the damage done in this single, relatively small domain, done simply by not caring and setting a bad example and being followed by those who know no better, is orders of magnitude greater than their positive contributions. I am not trying to be mean I am simply being honest.
Posted Sep 3, 2014 22:19 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Now *if* the majority of the data we're dealing with is either block-aligned at a similarly large block size or compressed (and thus more or less non-deduplicable anyway unless identical) we might be OK with a block-based deduplicator. I hope this is actually the case, but fear it might not be: certainly many things in ELF files are aligned, but not on anything remotely as large as fs block boundaries!
But maybe we don't care about all our ELF executables being stored repeatedly as long as stuff that, e.g. hasn't been recompiled between runtime / app images gets deduplicated -- and btrfs deduplication should certainly be able to manage that.
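A toy demonstration of the alignment point: block-level de-dup only matches content that is identical *and* block-aligned, so the same bytes shifted by one dedup to nothing:

    import hashlib, os

    BLOCK = 4096
    data = os.urandom(4 * BLOCK)
    shifted = b'\0' + data          # same content, one byte later

    def block_hashes(buf):
        return {hashlib.sha256(buf[i:i + BLOCK]).digest()
                for i in range(0, len(buf), BLOCK)}

    common = block_hashes(data) & block_hashes(shifted)
    print('shared aligned blocks: %d' % len(common))  # 0: nothing lines up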
Posted Sep 2, 2014 7:15 UTC (Tue)
by ringerc (subscriber, #3071)
[Link] (2 responses)
Sorry, but...
*doubles over laughing for several minutes*
I don't think there's yet been a Mac OS X release in which libedit has not been broken in one of several exciting ways. They change bundled libraries constantly, and often in non-BC ways. It's routine for major commercial software (think: Adobe Creative Suite) to break partially or fully a couple of OS X releases after the release of the package, so you just have to buy the new version. Their Cocoa APIs clearly aren't much more stable than their POSIX ones.
Then there's the system level stuff. Launchd was introduced abruptly, and simply broke all prior code that expected to get started on boot. (Sound familiar?). NetInfo was abruptly replaced by OpenDirectory and most things that used to be done with NetInfo stopped working, or had to be done in different (and usually undocumented) ways.
I had the pleasure of being a sysadmin who had to manage macs over the OS X 10.3 to 10.6 period, and I tell you, Fedora has nothing on the breakage Apple threw at us every release.
Posted Sep 2, 2014 7:21 UTC (Tue)
by ringerc (subscriber, #3071)
[Link] (1 responses)
About the only highly backward compatible platforms out there are the moribund ones (Solaris, AIX, etc); FreeBSD, which makes a fair bit of effort but still breaks things occasionally; and Windows, which suffers greatly because it's so backward compatible.
Part of why OS X is so successful is *because* it breaks stuff, so it's free to change.
Posted Sep 2, 2014 11:24 UTC (Tue)
by jwakely (subscriber, #60262)
[Link]
The things that get broken probably needed to break and you should be grateful for it, you worthless non-believer.
Posted Sep 2, 2014 13:47 UTC (Tue)
by javispedro (guest, #83660)
[Link] (5 responses)
It extends much further. Simply put, the Cascade of Attention Deficit Teenagers problem prevents every other OSS project from ever committing to a specification. Gtk+ will change its library API. But then Bluez will change its D-Bus specification and all of your containers become useless (the library API didn't change). Or Gtk+ decides not to break the ABI, but rather starts rendering things in a slightly different way and your window manager breaks (e.g. client side decorations). Etc. Etc.
I just don't see how having a new massive abstraction layer is going to help. In fact, I don't even see how a universal abstraction layer is feasible. Efforts like freedesktop.org have made the most progress (look, icons of Gnome applications now appear in KDE menus! tbh this had been unthinkable for me less than 10 years ago). But now they have been corrupted into furthering the agendas of some people with "visions" instead of trying to be a common ground of disparate APIs.
Posted Sep 2, 2014 15:52 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 2, 2014 22:33 UTC (Tue)
by ovitters (guest, #27950)
[Link] (3 responses)
> people with "visions" instead of trying to be a common ground of disparate APIs.

Can you give me any specifics?
For instance, GNOME makes use of logind due to ConsoleKit being deprecated. We actually still support ConsoleKit, though probably it's pretty buggy. The logind bits resulted in a clear specification and allowing even more things to be shared across different Desktop Environments.
We still have stuff like gnome-session. What it does is pretty similar across various Desktop Environments. It was proposed to make use of the infrastructure provided by systemd user sessions, though that's not fully ready yet. This would then allow various Desktop Environments to handle their session bits in the same way. AFAIK, this is something KDE, GNOME and Enlightenment appreciate.
Regarding Client Side Decorations: GNOME is working with other Desktop Environments as well as Window Managers. I suggest reading the bug reports that the KWin maintainer linked to. It's not so doom and gloom as he makes it out to be. Further, it's nice that he dislikes the idea of CSD, but in his Google+ post he sometimes goes too much into anti-CSD advocacy based more on feelings than anything happening. That's just on Google+; in Bugzilla + mailing lists he's awesome (don't recall the KWin maintainer's name off the top of my head -- I assume everyone knows who I am talking about).
The D-Bus APIs changing is a real problem. I'd suggest not calling people names. Lennart wrote how to properly handle versioning in D-Bus interfaces. But yeah, just be an ass and say shit like "Cascade of Attention Deficit Teenagers problem", because that's how you'll get a friendly response from e.g. me (nope!).
> But now they have been corrupted
Get lost with calling me corrupted.
Posted Sep 3, 2014 14:24 UTC (Wed)
by javispedro (guest, #83660)
[Link] (2 responses)
When was the last XDG specification published? Most of the time I see the freedesktop.org domain referenced these days, it is because systemd is hosted there, for some reason.
In the meantime, the existing standards are being "deprecated" or ignored; e.g. notification icons are at this point not supported by 2 out of the 3 largest X desktops. There's still no replacement, even though these two desktops have their own version of "notification icons".
But I do not really want to argue about FDO's mission. I just used how quickly its standards are becoming useless to show that library APIs are only a small part of the problem. The bigger problem is the lack of commitment to standards (I'm not saying I'm not part of this problem, too). Ideally, a good reason should be provided when dropping support for an existing and approved XDG or LSB standard. Not "it's just that we have a different vision for Linux". Without that, a generic abstraction layer is just infeasible.
> Regarding Client Side Decorations
I do not even dislike CSDs. But it's just yet another way in which _existing_ cross-desktop compatibility is being thrown down the drain for no good reason. I do not know about KDE but there are plenty other DEs out there some of which don't even use decorations at all.
And this compatibility change would not be fixed either by the proposal discussed in the article.
> say shit like "Cascade of Attention Deficit Teenagers problem"
That is not my quote. E.g.
http://blogs.gnome.org/tthurman/2009/07/02/cascade-of-att...
Posted Sep 3, 2014 17:07 UTC (Wed)
by jwarnica (subscriber, #27492)
[Link] (1 responses)
Posted Sep 3, 2014 22:46 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Sep 2, 2014 11:22 UTC (Tue)
by cjcoats (guest, #9833)
[Link]
running the "module" run-time-package-management system, for which there are over two dozen different compiler versions, all of them declared incompatible with each other by the Powers That Be, each with its own shared libraries, so that whether a program you've built will run or not depends upon whether the current state of "module load" is correct --- and the odds are that it is not!
I've got to do my work on that &#()^%!! thing.
Posted Sep 1, 2014 16:34 UTC (Mon)
by raven667 (subscriber, #5198)
[Link] (14 responses)
I would guess that naming LVMs using the same scheme would extend this to any filesystem; supporting ext4 or xfs on LVM, as well as btrfs, under the same management framework would cover most users, maybe with some degraded functionality and warnings as you moved into harder-to-support or less well tested configurations.
Posted Sep 1, 2014 23:45 UTC (Mon)
by mezcalero (subscriber, #45103)
[Link] (13 responses)
The file-level de-dup is actually key here, because if you dump multiple runtimes and the OS into the fs, then you need to make sure you don't pay the price for the duplication. And the file-level de-dup is not only important to ensure that you can share the data on disk, but also later on in RAM.
So no, LVM/DM absolutely doesn't provide us with anything we need here. Sorry. It's technology from the last century, it's not a way to win the future. It conceptually cannot deliver what we need here.
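For contrast, a minimal sketch of the file-level mechanism being argued for: identical files collapse onto one canonical copy via hard links, which shares the disk blocks and, because the files become one inode, the page cache too. (The store path is hypothetical, everything must live on one filesystem, and a real implementation would need locking and full content comparison.)

    import hashlib, os

    def dedup_tree(root, store='/var/lib/dedup-store'):
        os.makedirs(store, exist_ok=True)
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if not os.path.isfile(path) or os.path.islink(path):
                    continue
                with open(path, 'rb') as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
                canon = os.path.join(store, digest)
                if not os.path.exists(canon):
                    os.link(path, canon)        # first copy becomes canonical
                elif not os.path.samefile(path, canon):
                    tmp = path + '.dedup-tmp'
                    os.link(canon, tmp)         # swap in the canonical inode
                    os.replace(tmp, path)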
Posted Sep 2, 2014 0:23 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link] (8 responses)
Posted Sep 2, 2014 1:24 UTC (Tue)
by dlang (guest, #313)
[Link] (6 responses)
Posted Sep 2, 2014 1:33 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link] (1 responses)
Posted Sep 2, 2014 2:33 UTC (Tue)
by rodgerd (guest, #58896)
[Link]
That's hardly the land of wild and crazy any more.
(Anecdata-wise, I found it rubbish under 3.4 - 3.10, and am running data I care about on 3.12 in the RAID1 config. It's been very reliable and coped with a drive failure/rebuild, growing arrays, and so on and so forth.)
Posted Sep 2, 2014 20:44 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link] (3 responses)
I think using it in this logic is a great way to get the stabilization process sped up, because we will get it into the hands of people that way, but we don't actually really care about the data placed in the btrfs volumes (at least initially): it is exclusively vendor-supplied, verified, read-only data. If the file system goes bad, we just download the stuff again, and nothing is lost. It's a nice slow adoption path we can do here...
Actually, we can easily start adopting this by just pushing the runtimes/os images/apps/frameworks into a loopback btrfs pool somewhere in /var. This file system would only be mounted into the executed containers and apps, and not even appear in the mount table of the host...
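A sketch of that bootstrap with standard tools (file name, size and mount point are invented; run as root):

    import subprocess

    POOL = '/var/lib/vendor-pool.raw'
    MNT = '/run/vendor-pool'

    subprocess.check_call(['truncate', '-s', '10G', POOL])  # sparse backing file
    subprocess.check_call(['mkfs.btrfs', POOL])
    subprocess.check_call(['mkdir', '-p', MNT])
    subprocess.check_call(['mount', '-o', 'loop', POOL, MNT])
    # runtime/os/app sub-volumes would then be created under MNT and
    # bind-mounted into the containers' mount namespaces, keeping the
    # pool off the host's mount table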
Posted Sep 3, 2014 10:09 UTC (Wed)
by dgm (subscriber, #49227)
[Link] (2 responses)
Put it in the hands of developers. Or volunteers. But please! Let users alone.
Posted Sep 3, 2014 18:49 UTC (Wed)
by ermo (subscriber, #86690)
[Link]
I also happen to think that Lennart is correct in taking the longer view that btrfs needs to be included gradually in the ecosystem if it is ever to become a mature, trusted, default Linux FS.
There will be bumps in the road, sure, that's a given. But Lennart's point that he wants to ease the pain by starting off with storing non-essential data (in the easily replaceable sense) in btrfs while this process is ongoing, is IMHO both sound and valid.
Others may see it differently, of course.
Posted Sep 5, 2014 20:06 UTC (Fri)
by HenrikH (subscriber, #31152)
[Link]
You can perform all the QA you want internally, and yet some random user with his random setup and random hardware will find tons of bugs on the first day of use.
Posted Sep 2, 2014 20:40 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link]
Posted Sep 2, 2014 2:13 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (2 responses)
I am also interested in how this plays out with NFS based NAS devices, it seems a lot like VDI where you have a set of very hot gold master images, mixed with something like Docker you have a whole data center humming along with a very high level of deduplication and standardization.
If this makes any sort of sense then someone will try to implement it for sure, maybe everyone will end up with btrfs in the end but the path to there might go through stages of using block level copy-on-write, and failing, before they are convinced.
Posted Sep 2, 2014 11:53 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (1 responses)
And if people CAN run this stuff over ext4, or xfs, or reiser (does anyone still use it :-), then maybe people will also be motivated to add these features to those file systems. Succeed or fail, it's all within the Linus philosophy of "show me you can do it, show me it's a good idea". That's the way you get new stuff into the Linux kernel, that should be the way you get stuff into Linux distros.
And succeed or fail, it's good for the developers to have a go :-)
Cheers,
Wol
Posted Sep 2, 2014 22:59 UTC (Tue)
by rodgerd (guest, #58896)
[Link]
Posted Sep 9, 2014 11:23 UTC (Tue)
by alexl (subscriber, #19068)
[Link]
> share the data on disk, but also later on in RAM.

This is unfortunately not true. The files on each btrfs subvolume have a per-subvolume st_dev (different minor nr), and the page-cache is per-device. So, block sharing between btrfs volumes is strictly an on-disk thing; they will be cached separately in RAM.
I know this because I wrote the btrfs docker backend hoping to get this feature (among others), and it didn't work.
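This is easy to see on a btrfs system; the same content reached through two sub-volumes stats as two different devices:

    # print st_dev/st_ino for each path given; across btrfs sub-volumes the
    # st_dev values differ, and the page cache is keyed per device
    import os, sys

    for path in sys.argv[1:]:
        st = os.stat(path)
        print('%s  st_dev=%d  st_ino=%d' % (path, st.st_dev, st.st_ino))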