Poettering: Revisiting how we put together Linux systems
Now, with the name-spacing concepts we introduced above, we can actually relatively freely mix and match apps and OSes, or develop against specific frameworks in specific versions on any operating system. It doesn't matter if you booted your ArchLinux instance, or your Fedora one, you can execute both LibreOffice and Firefox just fine, because at execution time they get matched up with the right runtime, and all of them are available from all the operating systems you installed. You get the precise runtime that the upstream vendor of Firefox/LibreOffice did their testing with. It doesn't matter anymore which distribution you run, and which distribution the vendor prefers.
Posted Sep 1, 2014 12:04 UTC (Mon)
by colo (guest, #45564)
[Link] (146 responses)
Posted Sep 1, 2014 12:40 UTC (Mon)
by cjcoats (guest, #9833)
[Link] (22 responses)
Serious system architects really understand that multiplying
And what about those few of us who write small-niche compile-and-run software?
Posted Sep 1, 2014 12:52 UTC (Mon)
by dvdhrm (subscriber, #85474)
[Link]
I'm not sure where you got that from, but this is definitely not the intention of the proposal. On the contrary, "small-niche compile-and-run software" should benefit from this, as you can provide your application as a bundle that can just run, instead of requiring package descriptions for each distribution.
Also note that users are free to run package managers on top of this system. Sure, /usr will be read-only, but you're free to install traditional packages into /opt/ or /home/<user>/.local/ just like you do right now.
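For a typical autotools package, that is just the classic user-local install, untouched by a read-only /usr; a minimal sketch:

    ./configure --prefix="$HOME/.local"   # install under the user's home
    make && make install
    export PATH="$HOME/.local/bin:$PATH"  # make the new binaries visible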
Posted Sep 1, 2014 12:55 UTC (Mon)
by Robin.Hill (subscriber, #4385)
[Link] (20 responses)
The idea would seem to be that you (as an app developer) only need to build it for a single platform (as you do at the moment, I guess). If the user wants to run your software then they may need to install the runtime for that platform in order to use the software, but can do that without needing to use that platform for everything else as well. They can (for example) use debian unstable for their desktop platform and still run apps built for RHEL/Ubuntu/Slackware without needing to worry about incompatibilities or differing dependencies.
I'm not sure how well it will work in practice, or what sort of overhead there'll be with all these differing versions, but it seems a pretty neat solution. Of course, it does depend on the stability of btrfs and I've had some issues with that in the past.
Posted Sep 1, 2014 13:48 UTC (Mon)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Sep 2, 2014 11:17 UTC (Tue)
by cjcoats (guest, #9833)
[Link] (1 responses)
To be honest, "thinking things through" properly is quite rare ;-(
Posted Sep 3, 2014 22:33 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Sep 1, 2014 13:54 UTC (Mon)
by nim-nim (subscriber, #34454)
[Link] (5 responses)
Which is pretty much why all the attempts to "help poor developers" at the expense of ops never went anywhere. Developers may decide what is cool, but without operational buy-in it stays in the R&D dept (strangely, normal beings don't want to self-maintain their computer devices).
Posted Sep 2, 2014 7:08 UTC (Tue)
by drag (guest, #31333)
[Link] (4 responses)
When I want to use python I can't rely on distro python at all. I can't use any of the packages they use internally.
The apache servers we use are not distro provided. The java tomcat installations are not the ones provided for distributions. I can't rely on distro provided dependencies for any of the software I want to use because distributions do not provide the versions I need, and even if they did they wouldn't maintain it in a way useful for me...
And this is not a small problem. I deal with hundreds of applications, several thousand servers. All of it is a mixture of in-house software and external software. I have to deal with data that gets exchanged between dozens of different applications made by half a dozen different application development groups. Languages that get used range from C++, to Java, to Node.JS, python, ruby, and a massive amount of perl and shell... Some of the stuff has been under development for almost 20 years, some of it is brand new following latest trends, some is stuff that hasn't been touched in 7 years, but is still expected to work as well as the first day it was deployed on a secure OS on modern hardware. The complexity is mind-boggling sometimes.
What do I do about it in Linux? I can deal with it and make it work, but it's not easy. It needs custom ways to set up environments, custom ways to deploy application configs and metadata... lots of perl, lots of puppet, lots of custom packages.
(Oh, and I came from Ops background first.)
Business/user needs dictate how applications behave and are written. How applications are written dictates the environment that needs to be provided by the OS. If you think that it's proper that the OS should dictate how applications look and behave, you are looking at things completely backwards: the job of the OS is to make it as easy to develop and run applications as possible...
Distributions really have only two choices in this sort of situation, long term. Either embrace it and get their inputs into the design by contributing upstream, or see themselves minimized as users realize there are better tools to solve their problems. I have no doubt that the containerized approach will eventually find acceptance, though. It's just a better way.
Posted Sep 4, 2014 19:24 UTC (Thu)
by NightMonkey (subscriber, #23051)
[Link] (1 responses)
Posted Sep 8, 2014 1:11 UTC (Mon)
by gerdesj (subscriber, #5446)
[Link]
Cheers
Posted Sep 7, 2014 23:02 UTC (Sun)
by Arker (guest, #14205)
[Link]
I can certainly understand the desire to have a tool that will let you just keep it running till the end of the day till you clock out; anyone in that position would feel it. But the real problem will simply continue to fester until it reaches the point where no tool can paper it over and the entire enterprise grinds to a halt.
The real problem here is not technical, it's social. You need to impose sanity and you do not have the juice to do it. That's not a problem with a technical solution.
Posted Sep 8, 2014 1:10 UTC (Mon)
by gerdesj (subscriber, #5446)
[Link]
Custom Apache and Tomcat installs, though: have the API changes for those really got you in a twist, or do you have to deal with rather unimaginative devs who refuse to look at later point releases...?
Cheers
Posted Sep 2, 2014 7:52 UTC (Tue)
by ncm (guest, #165)
[Link] (3 responses)
Seriously, Aegis, and then Domain/ix had this in the '80s, and had BSD and SYSV personalities per-shell session. Dragonfly BSD cobbled up a version recently. No, it's not a security hole.
Posted Sep 2, 2014 12:28 UTC (Tue)
by foom (subscriber, #14868)
[Link] (2 responses)
Because it doesn't use normal environment vars, but a brand new kind of var.
Posted Sep 2, 2014 21:45 UTC (Tue)
by ncm (guest, #165)
[Link] (1 responses)
Posted Sep 3, 2014 1:38 UTC (Wed)
by foom (subscriber, #14868)
[Link]
Posted Sep 2, 2014 15:15 UTC (Tue)
by landley (guest, #6789)
[Link] (6 responses)
Dear Redhat: Katamari Damacy was not an engineering roadmap.
Posted Sep 3, 2014 2:53 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (5 responses)
But isn't that what the Unix Philosophy tells me I should do? Take small programs and string them together into larger tools?
Just because I can write a one liner which uses find, sort, and awk piped to awk to generate a file given a directory tree doesn't mean I should.
Posted Sep 3, 2014 4:20 UTC (Wed)
by rodgerd (guest, #58896)
[Link]
Posted Sep 4, 2014 19:27 UTC (Thu)
by NightMonkey (subscriber, #23051)
[Link] (3 responses)
Posted Sep 5, 2014 14:42 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link] (2 responses)
Posted Sep 7, 2014 17:45 UTC (Sun)
by jwakely (subscriber, #60262)
[Link] (1 responses)
You can use awk's pipe operator instead:
cmd = "ls -l";
while ((cmd | getline line) > 0) print line;
close(cmd);
Posted Sep 8, 2014 5:15 UTC (Mon)
by mathstuf (subscriber, #69389)
[Link]
Posted Sep 1, 2014 13:03 UTC (Mon)
by ovitters (guest, #27950)
[Link] (118 responses)
btrfs-only seems like a step back. The various filesystems are better in some workloads than others. I guess you could have everything in btrfs except the data somehow. But then how would systemd automatically know that these things belong together? Hrm...
Posted Sep 1, 2014 13:12 UTC (Mon)
by cate (subscriber, #1359)
[Link] (102 responses)
So the static libraries are only a workaround until people and distributions behave more stably.
Posted Sep 1, 2014 15:37 UTC (Mon)
by torquay (guest, #92428)
[Link] (100 responses)
Which is going to be never, as almost all distributions have no qualms about breaking APIs and ABIs from one release to the next. Fedora being the prime example, with Ubuntu not far behind in this broken state of affairs. (And hence it's no wonder many people have moved to Mac OS X, which provides a refreshingly stable environment, with the OS updates being free on Mac machines).
The distributions in turn try to shift the blame to "upstream", because they have no manpower to fix the breakage, nor the power or willingness to punish upstream developers. Many upstream developers behave well and try to maintain backwards compatibility, but on the scale of a distribution the number of broken and/or changed libraries (made by undisciplined kids with Attention Deficit Disorder) quickly accumulates. The constant mess created by Gnome and GTK comes to mind.
Hence we end up with the effort by the systemd folks to try to fix this mess, by proposing in effect a massive abstraction layer. While it seems to be an overly elaborate scheme with many moving parts, any effort in fixing the mess is certainly welcome.
Perhaps there's an easier way to skin the cat: have each app run inside its own Docker container, but with access to a common /home partition. All the libraries and runtime required for the app are bundled with the app, including an X or Wayland display server (*). The windows produced by the app are captured and shown by a "master" display server. It's certainly a heavy-handed and size-inefficient solution, but this is the price to pay to tame the constant API and ABI brokenness.
(*) perhaps this requirement can be relaxed to omit components that are guaranteed to never break their APIs/ABIs; by default all libraries and components are treated as incompatible from one version to the next, unless explicitly shown otherwise through extensive regression tests.
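A rough, untested sketch of a simpler variant of this idea with stock Docker, sharing the host's display server instead of bundling one per app (image and application names hypothetical):

    # shared /home, plus the host X socket so the bundled app can draw
    docker run --rm \
        -v /home:/home \
        -v /tmp/.X11-unix:/tmp/.X11-unix \
        -e DISPLAY="$DISPLAY" \
        example/libreoffice-bundle soffice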
Posted Sep 1, 2014 16:43 UTC (Mon)
by cyperpunks (subscriber, #39406)
[Link] (9 responses)
Let's use the Heartbleed issue as an example.
To get fully protected after the bug, all the work a distro user was required to do was to install the latest openssl package from the distro.
Now, in this new scheme of things, the user is forced to upgrade every single instance and check each for any possible Heartbleed issue.
The new scheme brings flexibility; however, from a security viewpoint it seems like a nightmare.
Posted Sep 1, 2014 17:00 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (3 responses)
Posted Sep 1, 2014 17:26 UTC (Mon)
by cyperpunks (subscriber, #39406)
[Link] (2 responses)
Posted Sep 1, 2014 21:12 UTC (Mon)
by rahulsundaram (subscriber, #21946)
[Link] (1 responses)
Posted Sep 2, 2014 9:52 UTC (Tue)
by NAR (subscriber, #1313)
[Link]
Posted Sep 1, 2014 19:45 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (3 responses)
For a non-distro user (or, like me, a gentoo user), all that was needed was to not switch on the broken functionality in the first place! The reports I've seen all said that - for most machines - the heartbeat feature was functionality that wasn't wanted and should not have been enabled to start with.
Yes I know users "don't want" the hassle, but gentoo suits me fine. I switch things on if I need them. That *should* be the norm.
Cheers,
Posted Sep 2, 2014 17:17 UTC (Tue)
by rich0 (guest, #55509)
[Link] (2 responses)
I think you just got lucky, and running USE=-* has its own issues.
Posted Sep 2, 2014 18:29 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
(a) most people had it switched on
That's a recipe for minimal testing and maximal problems.
Your scenario is where most people need the functionality, so I'm in a minority of not wanting or needing. I don't think that is anywhere near as likely (although I could be wrong ...)
Cheers,
Posted Sep 4, 2014 19:30 UTC (Thu)
by NightMonkey (subscriber, #23051)
[Link]
Posted Sep 2, 2014 2:26 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 1, 2014 16:46 UTC (Mon)
by raven667 (subscriber, #5198)
[Link]
Sure, one of the reasons Docker exists and is so popular is to try and skin this cat; heck, the reason VMs are so popular is to abstract away this software dependency management problem in a simple but heavy-handed way. The problem is that no one really wants to run nested kernels to make this work; you lose a lot of information about the work to be scheduled when nesting kernels. So this is a possible way to solve the fundamental software compatibility management problem on a shared kernel.
I'm sure that as others digest this proposal and try to build systems using it they will discover corner cases which are not handled, compatibility issues, ways to simplify it so that the final result may be somewhat different but this could be a vastly useful system.
Posted Sep 1, 2014 18:26 UTC (Mon)
by HenrikH (subscriber, #31152)
[Link] (17 responses)
Posted Sep 1, 2014 20:22 UTC (Mon)
by Karellen (subscriber, #67644)
[Link] (16 responses)
That's generally the point of major.minor.patch versioning, at least among (shared) libraries. An update which changes the "patch" level should not change the ABI *at all*, it should only change (improve) the functionality of the existing ABI.
A change to the "minor" level should only *add* to the ABI, so that all users of 1.2.x should be able to use 1.3.0+ if it's dropped in as a replacement.
If, as a library author, you need to change the ABI, by for instance modifying a function signature, or deleting a function that shouldn't be used any more, that's when you change the "major" version to 2.0.0, and make SDL-2.a.b co-installable with SDL-1.x.y. That way, legacy apps linked against the old and busted SDL1 can continue to work, while their modern replacements can link with the new hotness SDL2 and run together on the same system.
It's not always easy, and requires care and discipline. But creating shims would be just as much work, and tools already exist to help get it right, like abicheck.
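To make the coexistence part concrete: each major version carries its own soname, so binaries linked against the old one keep loading it. A minimal sketch (libfoo and its sources are hypothetical):

    gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.2.3 foo1.c
    gcc -shared -fPIC -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0.0 foo2.c
    # the dynamic linker resolves a binary's NEEDED entry via these links:
    ln -s libfoo.so.1.2.3 libfoo.so.1
    ln -s libfoo.so.2.0.0 libfoo.so.2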
Posted Sep 1, 2014 20:26 UTC (Mon)
by dlang (guest, #313)
[Link] (15 responses)
If ABIs were managed properly, we wouldn't be having these discussions.
Posted Sep 2, 2014 2:48 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (11 responses)
Posted Sep 2, 2014 19:38 UTC (Tue)
by robclark (subscriber, #74945)
[Link] (8 responses)
that's kind of cool... I hadn't seen it before. Perhaps they should add a 'wall of shame' ;-)
At least better awareness amongst devs about ABI compat seems like a good idea.
Posted Sep 3, 2014 10:13 UTC (Wed)
by accumulator (guest, #95885)
[Link] (7 responses)
Posted Sep 3, 2014 22:35 UTC (Wed)
by nix (subscriber, #2304)
[Link] (6 responses)
Posted Sep 4, 2014 19:06 UTC (Thu)
by zlynx (guest, #2285)
[Link] (5 responses)
Posted Sep 5, 2014 13:58 UTC (Fri)
by jwakely (subscriber, #60262)
[Link] (4 responses)
Posted Sep 5, 2014 14:38 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link] (3 responses)
Posted Sep 8, 2014 16:05 UTC (Mon)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Sep 9, 2014 12:47 UTC (Tue)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
Posted Sep 9, 2014 13:54 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Posted Sep 2, 2014 22:05 UTC (Tue)
by jondo (guest, #69852)
[Link] (1 responses)
Reality kicks in: This would simply stop all updates ...
Posted Sep 3, 2014 15:05 UTC (Wed)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 2, 2014 6:21 UTC (Tue)
by krake (guest, #55996)
[Link]
Its wiki page on C++ ABI dos-and-don'ts has become one of the most often referenced resources on that matter.
Posted Sep 2, 2014 11:26 UTC (Tue)
by cjcoats (guest, #9833)
[Link] (1 responses)
Posted Sep 3, 2014 12:32 UTC (Wed)
by rweir (subscriber, #24833)
[Link]
Posted Sep 1, 2014 19:38 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (7 responses)
So why, even in the last couple of days, have I been hearing moans from Mac users that they can't upgrade their system (this chap was stuck on Snow Leopard)?
Cheers,
Posted Sep 2, 2014 1:42 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (6 responses)
Posted Sep 2, 2014 5:53 UTC (Tue)
by torquay (guest, #92428)
[Link] (5 responses)
And that may not be such a bad thing, if it gives you peace of mind and freedom from the broken API/ABI in the Linux world.
Constantly dealing with broken API/ABI is certainly not free: it takes up your time, which could have been used for more productive things.
Posted Sep 2, 2014 10:45 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (4 responses)
Posted Sep 2, 2014 14:34 UTC (Tue)
by madscientist (subscriber, #16861)
[Link] (1 responses)
Posted Sep 2, 2014 17:11 UTC (Tue)
by pizza (subscriber, #46)
[Link]
I'm heavily involved with Gutenprint, and it is not an exaggeration to say that every major OSX release has broken (oh, sorry, "incompatibly changed") something we depended on.
Posted Sep 3, 2014 9:08 UTC (Wed)
by dgm (subscriber, #49227)
[Link] (1 responses)
Posted Sep 3, 2014 11:32 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link]
Posted Sep 1, 2014 22:02 UTC (Mon)
by sramkrishna (subscriber, #72628)
[Link]
Yet it is from that community that application sandboxing and single binaries are coming.
Posted Sep 1, 2014 23:41 UTC (Mon)
by mezcalero (subscriber, #45103)
[Link] (52 responses)
I love the ultimate simplicity of this scheme. I mean, coming up with a good scheme to name btrfs sub-volumes is a good idea anyway, and then just going one step further and actually packaging the OS that way is actually not that big a leap!
I mean, maybe it isn't obvious when one comes from classic packaging systems with all their dependency graph theory, but looking back, after figuring out that this could work, it's more like "oh, well, this one was obvious..."
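For those who haven't read the post: the sub-volume names carry vendor, architecture and version, so matching an app to its runtime at execution time is a plain string lookup. Roughly like this, as far as I recall (identifiers illustrative):

    usr:org.fedoraproject.WorkStation:x86_64:24.7          # an OS /usr tree
    runtime:org.gnome.GNOME3_20:x86_64:3.20.1              # a runtime apps link against
    app:org.libreoffice.LibreOffice:GNOME3_20:x86_64:133   # an app pinned to that runtime
    home:lennart:1000:1000                                 # a user's home sub-volume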
Posted Sep 2, 2014 17:18 UTC (Tue)
by paulj (subscriber, #341)
[Link] (50 responses)
If so, I'm wondering:
- Who assigns or registers or otherwise coördinates these distro-abstract dependency names?
- Who specifies the ABI for these abstract dependencies? I guess for GNOME3_20?
- What if multiple dependencies are needed? How is that dealt with?
The ABI specification thing for the labels seems a potentially tricky issue. E.g., should GNOME specify the one in this example? But what if there are optional features distros might want to enable/disable? That means labels are needed for every possible combination of ABI-affecting options that any dependency might have?
Posted Sep 2, 2014 18:55 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (49 responses)
I think some of the vendor ids are getting truncated when making examples.
Posted Sep 4, 2014 9:33 UTC (Thu)
by paulj (subscriber, #341)
[Link] (48 responses)
On which point: Lennart has used "API" in comments here quite a lot, but I think he means ABI. ABI is even more difficult to keep stable than API, and the Linux desktop people haven't even managed to keep APIs stable!
#include "rants/referencer-got-broken-by-glib-gnome-vfs-changes.txt"
Posted Sep 4, 2014 14:46 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (47 responses)
Posted Sep 4, 2014 22:34 UTC (Thu)
by dlang (guest, #313)
[Link] (31 responses)
Posted Sep 5, 2014 4:14 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (30 responses)
Posted Sep 5, 2014 6:08 UTC (Fri)
by dlang (guest, #313)
[Link] (29 responses)
But if every upgrade of every software package on your machine is the same way, it will be a fiasco.
Remember that the "base system" used for this "unchanging binary compatibility" is subject to change at the whim of the software developer; any update they do, you will (potentially) have to do as well, so that you have the identical environment.
Posted Sep 5, 2014 16:08 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (28 responses)
Posted Sep 5, 2014 16:14 UTC (Fri)
by paulj (subscriber, #341)
[Link] (12 responses)
Posted Sep 5, 2014 17:07 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (11 responses)
One thing about this proposal, if it is worked on, is that it puts pressure on distros to define a stable subset, and pressure on app makers to standardize on a few runtimes; so even if this proposal does not become the standard, it may create the discussion that results in a new LSB standard for distro binary compatibility which is much more comprehensive than the weak-sauce LSB currently is. I think the discussion of what goes into /usr is very useful on its own, even if nothing else comes out of this proposal.
Posted Sep 5, 2014 18:02 UTC (Fri)
by paulj (subscriber, #341)
[Link] (10 responses)
However, note that this is the "This is how the world should be, and we're going to change our bit to make it so, and if it ends up hurting users then that will supply the pressure needed to make the rest of the world so" approach. An approach to making progress which I think has been tried at least a few times before in the Linux world, which I'm not sure always helps in the greater scheme of things. The users who get hurt may not be willing to stick around to see the utopia realised, and not everything may get fixed.
Posted Sep 5, 2014 18:53 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (9 responses)
Posted Sep 6, 2014 10:40 UTC (Sat)
by paulj (subscriber, #341)
[Link] (8 responses)
Posted Sep 7, 2014 15:17 UTC (Sun)
by raven667 (subscriber, #5198)
[Link] (7 responses)
Posted Sep 8, 2014 9:30 UTC (Mon)
by paulj (subscriber, #341)
[Link] (6 responses)
If I had to make a criticism, it wouldn't be of him specifically, but of an attitude common amongst Linux user-space devs, that holds that it is acceptable to burn users with breakage, in the quest for some future code or system utopia. So code gets written that assumes other components are perfect, so that if they are not breakage will expose them and those other components will eventually get fixed. Code gets rewritten, because the old code was ugly, and the new code will of course be much better, and worth the regressions.
The root cause of this seems to be because of the fractured way Linux works. There is generally no authority that can represent the users' interests for stability. There is no authority that can coördinate and ensure that if subsystem X requires subsystem Y to implement something that was never really used before, that subsystem Y is given time to do this before X goes out in the wild. Or no authority to coördinate rewrites and release cycles.
Instead the various fractured groups of developers, in a way, interact by pushing code to users who, if agitated sufficiently, will investigate and report bugs or even help fix them.
You could also argue this is because of a lack of QA resources. As a result of which the user has to help fill that role. However, the lack of resources could also be seen as being in part due to the Linux desktop user environment never having grown out of treating the user as QA, and regularly burning users away.
Posted Sep 8, 2014 10:59 UTC (Mon)
by dlang (guest, #313)
[Link]
Posted Sep 8, 2014 11:59 UTC (Mon)
by pizza (subscriber, #46)
[Link] (4 responses)
I've been on both sides of this argument, as both an end-user and as a developer.
In balance, I'd much rather have the situation today; where stuff is written assuming the other components work properly, and where bugs get fixed in their actual locations rather than independently, inconsistently, and incompatibly papered over.
For example, this attitude is why Sound and [Wireless] Networking JustWork(tm) in Linux today.
The workaround-everything approach is only necessary when you are stuck with components you can't fix at the source -- ie proprietary crap. We don't have that problem, so let's do this properly!
The days where completely independent, underspecified, and barely-coupled components are a viable path forward have been over for a long, long time.
Posted Sep 8, 2014 12:33 UTC (Mon)
by nye (subscriber, #51576)
[Link] (1 responses)
Except they don't.
A couple of examples:
My home machine has both the front channels and the rear channels connected, for a total of four audio channels. In Windows, I go in to my sound properties and pick the 'quadrophonic' option; now applications like games or VLC will mix the available channels appropriately so that I get sound correctly coming out of the appropriate speakers - not only all four channels, but all four channels *correctly*, so that, for example, 5.1 audio is appropriately remixed by the application without any configuration needed.
I had a look at how I might get this in OpenSuSE earlier this year, and eventually concluded either that PA simply can't do this *at all*, or that if it can, nobody knows how[0]. I did find some instructions for how to set up something like this using a custom ALSA configuration, though that would have required that applications be configured to know about it (rather than doing the right thing automatically), and I never got around to trying it out before giving up on OS for a multitude of reasons.
Another example:
I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.
A related example:
For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.
It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.
[0] Some more recent googling has turned up more promising discussion of configuration file options, but I no longer have that installation to test out.
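(For anyone hitting the same wall: I believe the configuration file options in question are PulseAudio daemon.conf settings, along these lines; untested, so treat it as a pointer rather than a recipe.)

    cat >> ~/.config/pulse/daemon.conf <<'EOF'
    default-sample-channels = 4
    default-channel-map = front-left,front-right,rear-left,rear-right
    enable-remixing = yes
    enable-lfe-remixing = yes
    EOF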
Posted Sep 8, 2014 13:14 UTC (Mon)
by pizza (subscriber, #46)
[Link]
The last three motherboards I've had, with multi-channel audio, have JustWorked(tm) once I selected the appropriate speaker configuration under Fedora/GNOME. Upmixed and downmixed PCM output, and even the analog inputs are mixed properly too.
(Of course, some of the responsibility for handling this is in the hands of the application, even if only to query and respect the system speaker settings instead of using a fixed configuration)
> I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.
NM has been effectively flawless for me for several years now (even with switching back and forth), also with Fedora, but that shouldn't matter in this case -- I hope you filed a bug report. Folks can't fix problems they don't know about.
> For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.
I can't speak to Ubuntu's DHCP stuff here (did you file a bug?) but I've seen a similar problem in the past using dnsmasq's DHCP client -- the basic problem I found was that certain DHCP servers were a bit... special in their configuration, and the result is that the DHCP client didn't get a valid DNS entry. dnsmasq eventually implemented a workaround for the buggy server/configuration. This was maybe three years ago?
> It's great that some people have more reliable wireless thanks to NM, but it's not great that this comes at the expense of fully-functional ethernet.
Come on, that's being grossly unfair. Before NM came along, wireless was more unreliable than not, with every driver implementing the WEXT stuff slightly differently requiring every client (or user) to treat every device type slightly differently. Now the only reason things don't work is if the hardware itself is unsupported, and that's quite rare these days.
Posted Sep 8, 2014 15:38 UTC (Mon)
by paulj (subscriber, #341)
[Link] (1 responses)
While I agree this path has led to sound and wireless working well in Linux today, I would disagree it was the only possible path. I'd suggest there were other paths available that would have ultimately led to the same result, but taken more care to avoid regressions and/or provide basic functionality even when other components hadn't been updated to match some new specification.
What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.
Posted Sep 8, 2014 17:12 UTC (Mon)
by pizza (subscriber, #46)
[Link]
...perhaps you are correct, but those other paths would have taken considerably longer, leading to a different sort of user burning.
> What's the point of utopia, if you've burned off all your users getting there? Linux laptops were once well-represented at über-techy / dev conferences, now it's OS X almost everywhere.
These days, Linux just isn't a cool counterculture status symbol any more. It's part of the boring infrastructure that's someone else's problem.
Anyway, the technical reasons basically boil down to the benefits of Apple controlling the entire hardware/software/cloud stack -- stuff JustWorks(tm). As long as you color within the lines, anyway.
Posted Sep 6, 2014 4:41 UTC (Sat)
by dlang (guest, #313)
[Link] (14 responses)
I disagree that that is a wonderful thing and everyone should be making use of it.
we are getting conflicting messages about who would maintain these base systems; if it's the distros, how is that any different from the current situation?
if it's joe random developer defining the ABI that his software is built against, it's going to be a disaster.
The assertion is being made that all the random developers (who can't agree on much of anything today, not language, not development distro, not packaging standards even within a distro, etc.) are going to somehow standardize on a small set of ABIs.
I see this as being a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer; they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even lose features, as they are doing the infrastructure upgrade).
building up technical debt with the intention to pay it off later almost never works, and even when it does, it doesn't work well.
anything that encourages developers to build up technical debt by being able to ignore compatibility is bad
This encourages developers to ignore compatibilty in two ways.
1. the app developers don't have to worry about compatibility because they just stick with the version that they started with.
2.library developers don't have to worry about compatibility because anyone who complains about the difficulty in upgrading can now be told to just stick with the old ABI
Posted Sep 6, 2014 11:22 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (4 responses)
1) People who don't want their disk space chewed up with multiple environments.
2) People who don't want (or like me can't understand :-) btrfs.
3) Devs who (like me) like to run an up-to-date rolling distro.
4) Distro packagers who don't want the hassle of current packages that won't run on current systems.
Personally, I think any dev who ignores compatibility is likely to find themselves in the "deprecated" bin fairly quickly, and will just get left behind.
Where it will really score is commercial software which will be sold against something like a SLES or RHEL image, which will then continue to run "for ever" :-)
Cheers,
Posted Sep 6, 2014 21:00 UTC (Sat)
by dlang (guest, #313)
[Link] (3 responses)
Yep, that's part of my fear.
this 'forever' doesn't include security updates.
People are already doing this with virtualization (see the push from vmware about how it allows people to keep running Windows XP forever), and you are seeing a lot of RHEL5 in cloud deployments, with no plans to ever upgrade to anything newer.
Posted Sep 6, 2014 22:14 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted Sep 6, 2014 23:00 UTC (Sat)
by dlang (guest, #313)
[Link] (1 responses)
I don't think so.
Posted Sep 7, 2014 15:58 UTC (Sun)
by raven667 (subscriber, #5198)
[Link]
On desktops as well, being able to run older apps on newer systems rather than being force-upgraded when the distro updates, and being able to run newer apps (and bugfixes) on a faster cadence than a distro that releases every 6mo or 1yr allows, is a benefit that many seem to be looking for: staying on GNOME2, for example, while keeping up with Firefox and LibreOffice updates or whatever. Being able to run multiple userspaces on the same system with low friction allows them to fight it out and compete more directly than dual-booting or VMs, rather than being locked in to what your preferred distro provides.
Posted Sep 6, 2014 23:15 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (8 responses)
I think you answered that in your first paragraph:
> I agree that this proposal makes it easy to have many different ABIs on the same computer.
There is currently more friction in running different ABIs on the same computer; there is no standard automated means for doing so, so people have to build and run their own VMs or containers, with limited to non-existent integration between the VMs or containers.
The other big win is a standard way to do updates that works with any kind of distro, from desktop to server to Android to IVI and embedded, without each kind of system needing to redesign updates in its own special way.
> if it's joe random developer defining the ABI that his software is built against, it's going to be a disaster.
The proposal is that a developer would pick an ABI to build against, such as OpenSuSE 16.4 or, for a non-desktop example, OpenWRT 16.04, not that every developer would be responsible for bundling and building all of their library dependencies.
> The assertion is being made that all the random developers (who can't agree on much of anything today, not language, not development distro, not packaging standards even within a distro, etc) are going to somehow standardize on a small set if ABIs
This whole proposal is a way to try to use technology to change the social and political dynamic by changing the cost of different incentives; it is not guaranteed how it will play out. I think there is pressure, though, from end users who don't want to run 18 different ABI distros to run 18 different applications, to pick a few winners, maybe 2 or 3 at most; in the process there might be a de-facto standard created which revitalizes the LSB.
> I see this as being a way for app developers to care LESS about maintainability, because now they don't have to deal with distro upgrades any longer, they can just state that they need ABI X and stick with it forever. When they do finally decide to upgrade, it will be a major undertaking, of the scale that kills projects (because they stop adding new features for a long time, and even loose features, as they are doing the infrastructure upgrade)
I don't see it playing out that way, developers love having the latest and greatest too much for them to continue to deploy apps built against really old runtimes, all of the pressure is for them to build against the latest ABI release of their preferred distro. The thing is that one of the problems this is trying to solve is that many people don't want to have to upgrade their entire computer with all new software every six months just to keep updated on a few applications they care about, or conversely be forced to update their main applications because their distro has moved on, it might actually make more sense to run the latest distro ABI alongside the users preferred environment, satisfying both developers and end-users.
Posted Sep 7, 2014 8:20 UTC (Sun)
by dlang (guest, #313)
[Link] (7 responses)
I can see why developers would like this, but I still say that this is a bad result.
Posted Sep 7, 2014 8:29 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (6 responses)
And since there's no base ABI, everybody just does whatever suits them. Just last week we found out that Docker images for Ubuntu don't have libcrypto installed, for example.
Maybe this container-lite initiative will motivate distros to create a set of basic runtimes, that can actually be downloaded and used directly.
Posted Sep 7, 2014 9:48 UTC (Sun)
by dlang (guest, #313)
[Link] (5 responses)
if you don't define them to include every package, you will run into the one that you are missing.
These baselines are no easier to standardize than the LSB or distros.
In fact, they are worse than distros because there aren't any dependencies available (other than on "RHEL10 baseline")
Posted Sep 7, 2014 9:56 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
> These baselines are no easier to standardize than the LSB or distros.
The ecosystem of runtimes might help evolve at least a de-facto standard. I'm pretty sure that it can be done for the server side (and let's face it, that's the main area of non-Android Linux use right now) but I'm less sure about the desktop.
Posted Sep 7, 2014 10:43 UTC (Sun)
by dlang (guest, #313)
[Link] (3 responses)
>> These baselines are no easier to standardize than the LSB or distros.
> There are no standards right now. None. And LSB has shown us that when a group of vendors dictate their vision of a standard, distros simply ignore them.
so who is going to define the standards for the new approach? Every distro will just declare each of their releases to be a standard (and stop supporting them as quickly as they do today)
that gains nothing over the status quo, except giving legitimacy to people who don't want to upgrade
If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today, they will be just another distro group, with all the drawbacks of having to back-port security fixes (or fail to do so) that that implies.
Posted Sep 7, 2014 10:53 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
> If the "standards" are going to be defined by anyone else, then they are going to be doing the same work that the distros are doing today, they will be just another distro group, with all the drawbacks of having to back-port security fixes (or fail to do so) that that implies.
I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.
Posted Sep 7, 2014 11:18 UTC (Sun)
by dlang (guest, #313)
[Link] (1 responses)
thanks for the laugh
there isn't going to be "the runtime" any more than there will be "the distro", for the same reason: different people want different things and have different tolerances for risky new features.
> I'm hoping that somebody like Debian or RHEL/CentOS can pick up the job of making runtimes. It shouldn't be hard, after all.
they already do, it's called their distro releases
> as an app developer I won't have to play against network effects of distributions - my software will run on ALL distributions supporting the /usr-based packaging.
no, your users may just have to download a few tens of GB of base packaging to run it instead.
Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X)
Posted Sep 7, 2014 11:52 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link]
> they already do, it's called their distro releases
That's the crux of the problem - distros are wildly incompatible and there's no real hope that they'll merge any time soon.
> no, your users may just have to download a few tens of GB of base packaging to run it instead.
>Plus, if the baseline you pick has security problems, your users will blame you for them (because if you had picked a different base that didn't have those problems, their system wouldn't have been hit by X)
Posted Sep 5, 2014 3:01 UTC (Fri)
by paulj (subscriber, #341)
[Link] (14 responses)
The de-duping thing seems tenuous to me, for the "ship your own runtime" case. What are the chances that two different application vendors happen to pick the exact same combination of compiler toolchain, compile flags and libraries necessary to give identical binaries?
Having a system to make it possible to run different applications, built against different "distros" (or runtimes), at the same time, running with the same root/config (/etc, /var) and /home seems good. Though, I am sceptical that:
a) There won't be configuration compatibility issues with different apps using slightly different versions of a dependency that reads some config in /home (ditto for /etc).
This kind of thing used to not be an issue, back when it was more common to share /home across different computers thanks to NFS, and application developers would get more complaints if they broke multi-version access. However $HOME sharing is rare these days, and I got sick a long time ago of, e.g., GNOME not dealing well with different versions accessing the same $HOME (non-concurrently!).
b) Sharing /var across different runtimes won't similarly be fraught with multi-version incompatibility issues.
It's ironic that shared (even non-concurrent) $HOME support got broken / neglected in Linux, and now it seems we need it to help solve the runtime-ABI proliferation problem of Linux. :)
Posted Sep 5, 2014 3:03 UTC (Fri)
by rahulsundaram (subscriber, #21946)
[Link] (2 responses)
https://mail.gnome.org/archives/gnome-os-list/2014-Septem...
Posted Sep 5, 2014 3:06 UTC (Fri)
by martin.langhoff (subscriber, #61417)
[Link] (1 responses)
Posted Sep 5, 2014 3:11 UTC (Fri)
by rahulsundaram (subscriber, #21946)
[Link]
Posted Sep 8, 2014 1:20 UTC (Mon)
by Arker (guest, #14205)
[Link] (10 responses)
That used to bother me too. I deleted GNOME. As GNOME is the source of the breakage (in this and so many other situations) that is the only sensible response. The herd instinct to embrace insanity instead, in order to keep GNOME (a process that is actually ongoing, and accelerating) is what I do not understand. Why are people so insistent on throwing away things that work and replacing them with things that do not?
Posted Sep 8, 2014 10:14 UTC (Mon)
by mchapman (subscriber, #66589)
[Link] (9 responses)
What's there to understand? Clearly these people you're talking about are having a different experience with the software than you are. Why would you think your particular experience with it is canonical? Is it so hard to believe other people's experiences are different?
Posted Sep 8, 2014 12:53 UTC (Mon)
by Arker (guest, #14205)
[Link] (8 responses)
Posted Sep 8, 2014 13:18 UTC (Mon)
by JGR (subscriber, #93631)
[Link] (5 responses)
Or to put it another way, not everyone necessarily shares your view of what is "broken" and what is not.
Posted Sep 8, 2014 14:07 UTC (Mon)
by Arker (guest, #14205)
[Link] (4 responses)
This is a problem that affects the entire market for computers, worldwide. Markets work well when buyers and sellers are informed. Buyers of computer systems, on the other hand, are for the most part as far from well informed as imaginable. A market where the buyers do not understand the products well enough to make informed choices between competitors is a market which has problems. And Free Software is part of that larger market.
And that's what we see with GNOME. The example we were discussing above had to do with the $home directory. The GNOME devs simply refuse to even try to do it right. Since none of them used shared $home directories, they did not see the problem, and had no interest in fixing it. And here is where the broken market comes in: there were enough end users who, like the GNOME devs, did not understand how $home works and how it is to be used, and so did not understand why they should care.
And that's how the breakage created by a cluster of ADD spreads out to tear down the ecosystem generally. That's the path the herd is on right now. Anything that your 13-year-old doesn't want to take the time to understand - it's gone or going. A few more years of this and we will have computing systems setting world records for potential power and actual uselessness simultaneously.
Posted Sep 8, 2014 15:03 UTC (Mon)
by pizza (subscriber, #46)
[Link] (3 responses)
I'm not quite sure what your point here is... you're basically blaming GNOME for the fact that its users are uninformed, and further, it's also GNOME's fault because those same uninformed users don't know enough to care about a philosophical argument under the hood about a situation those same uninformed users will never actually encounter?
> And that's how the breakage created by a cluster of ADD spreads out to tear down the ecosystem generally.
Please, lay off on the ad honimem insults.
Posted Sep 8, 2014 16:17 UTC (Mon)
by Arker (guest, #14205)
[Link] (2 responses)
I was not assessing blame, I was simply making you aware of the progression of events.
"Please, lay off on the ad honimem insults."
Please, learn to distinguish between ad honimem(sic) insults and statements of fact you find inconvenient or embarrasing.
Posted Sep 8, 2014 17:20 UTC (Mon)
by pizza (subscriber, #46)
[Link] (1 responses)
> Please, learn to distinguish between ad honimem(sic) insults and statements of fact you find inconvenient or embarrasing.
Personally, I would be embarrased(sic) if I was the one who considered the above a statement of fact, and petulantly pointed out a spelling error while making one of your own.
But hey, thanks for the chuckle.
Posted Sep 8, 2014 17:34 UTC (Mon)
by Arker (guest, #14205)
[Link]
The pattern of behavior from the GNOME project is indeed a fact, it's not disputable, the tracks are all over the internet and since it has been the same pattern for over a decade it certainly seems fair to expect it to continue. If you think you have an objection to that characterization that is legitimate, please feel free to put it forward concretely. Putting forward baseless personal accusations instead cuts no ice.
Posted Sep 8, 2014 14:12 UTC (Mon)
by mchapman (subscriber, #66589)
[Link] (1 responses)
But that wasn't what you claimed used to bother you. You were talking about broken behaviour, not code. Now maybe GNOME's behaviour *is* broken, that sharing a $HOME between multiple versions doesn't work properly, and it's just because I don't do that that I've never noticed.
So I'm having trouble following your argument. Are you saying people shouldn't be supporting GNOME -- that the only sensible thing to do with it is uninstall it -- because there are *some* use cases that for *some* people don't work properly? That seems really unfair for everybody else.
Posted Sep 8, 2014 14:30 UTC (Mon)
by Arker (guest, #14205)
[Link]
A distinction with no difference. Behaviour is the result of code, and code is experienced as behaviour.
"Now maybe GNOME's behaviour *is* broken, that sharing a $HOME between multiple versions doesn't work properly, and it's just because I don't do that that I've never noticed."
That is correct, but also incomplete. Since GNOME screwed this up, they set a (bad) example that has been followed by others as well, and I am afraid today you will find so many commonly used programs have now emulated the breakage that it's widespread and this essential core OS feature is now practically defunct.
Of course YMMV, but in my universe, the damage done in this single, relatively small domain - done simply by not caring, setting a bad example, and being followed by those who know no better - is orders of magnitude greater than their positive contributions. I am not trying to be mean; I am simply being honest.
Posted Sep 3, 2014 22:19 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Now *if* the majority of the data we're dealing with is either block-aligned at a similarly large block size or compressed (and thus more or less non-deduplicable anyway unless identical) we might be OK with a block-based deduplicator. I hope this is actually the case, but fear it might not be: certainly many things in ELF files are aligned, but not on anything remotely as large as fs block boundaries!
But maybe we don't care about all our ELF executables being stored repeatedly as long as stuff that, e.g. hasn't been recompiled between runtime / app images gets deduplicated -- and btrfs deduplication should certainly be able to manage that.
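(File-level dedup on btrfs can already be driven from userspace; a sketch using the out-of-tree duperemove tool, which hands identical extents to the kernel's same-extent ioctl. Path hypothetical:)

    duperemove -dr /var/lib/runtimes   # -d actually submits the dedup, -r recurses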
Posted Sep 2, 2014 7:15 UTC (Tue)
by ringerc (subscriber, #3071)
[Link] (2 responses)
Sorry, but...
*doubles over laughing for several minutes*
I don't think there's yet been a Mac OS X release in which libedit has not been broken in one of several exciting ways. They change bundled libraries constantly, and often in non-BC ways. It's routine for major commercial software (think: Adobe Creative Suite) to break partially or fully a couple of OS X releases after the release of the package, so you just have to buy the new version. Their Cocoa APIs clearly aren't much more stable than their POSIX ones.
Then there's the system level stuff. Launchd was introduced abruptly, and simply broke all prior code that expected to get started on boot. (Sound familiar?). NetInfo was abruptly replaced by OpenDirectory and most things that used to be done with NetInfo stopped working, or had to be done in different (and usually undocumented) ways.
I had the pleasure of being a sysadmin who had to manage macs over the OS X 10.3 to 10.6 period, and I tell you, Fedora has nothing on the breakage Apple threw at us every release.
Posted Sep 2, 2014 7:21 UTC (Tue)
by ringerc (subscriber, #3071)
[Link] (1 responses)
About the only highly backward compatible platforms out there are the moribund ones (Solaris, AIX, etc); FreeBSD, which makes a fair bit of effort but still breaks things occasionally; and Windows, which suffers greatly because it's so backward compatible.
Part of why OS X is so successful is *because* it breaks stuff, so it's free to change.
Posted Sep 2, 2014 11:24 UTC (Tue)
by jwakely (subscriber, #60262)
[Link]
The things that get broken probably needed to break and you should be grateful for it, you worthless non-believer.
Posted Sep 2, 2014 13:47 UTC (Tue)
by javispedro (guest, #83660)
[Link] (5 responses)
It extends much further. Simply put, the Cascade of Attention Deficit Teenagers problem prevents every other OSS project from ever committing to a specification. Gtk+ will change its library API. But then Bluez will change its D-Bus specification and all of your containers become useless (the library API didn't change). Or Gtk+ decides not to break the ABI, but rather starts rendering things in a slightly different way, and your window manager breaks (e.g. client side decorations). Etc. Etc.
I just don't see how having a new massive abstraction layer is going to help. In fact, I don't even see how a universal abstraction layer is feasible. Efforts like freedesktop.org have made the most progress (look, icons of Gnome applications now appear in KDE menus! tbh this had been unthinkable for me less than 10 years ago). But now they have been corrupted into furthering the agendas of some people with "visions" instead of trying to be a common ground for disparate APIs.
Posted Sep 2, 2014 15:52 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 2, 2014 22:33 UTC (Tue)
by ovitters (guest, #27950)
[Link] (3 responses)
Can you give me any specifics?
For instance, GNOME makes use of logind due to ConsoleKit being deprecated. We actually still support ConsoleKit, though probably it's pretty buggy. The logind bits resulted in a clear specification and allowing even more things to be shared across different Desktop Environments.
We still have stuff like gnome-session. What it does is pretty similar across various Desktop Environments. It was proposed to make use of the infrastructure provided by systemd user sessions, though that's not fully ready yet. This would then allow various Desktop Environments to handle their session bits in the same way. AFAIK, this is something KDE, GNOME and Enlightenment appreciate.
Regarding Client Side Decorations: GNOME is working with other Desktop Environments as well as Window Managers. I suggest reading the bug reports that the KWin maintainer linked to. It's not so doom-and-gloom as he makes it out to be. Further, it's fine that he dislikes the idea of CSD, but in his Google+ posts he sometimes goes too much into anti-CSD advocacy based more on feelings than on anything actually happening. That's just on Google+; in Bugzilla and mailing lists he's awesome (don't recall the KWin maintainer's name off the top of my head -- I assume everyone knows who I am talking about).
The D-Bus APIs changing is a real problem. I'd suggest not calling people names. Lennart wrote how to properly handle versioning in D-Bus interfaces. But yeah, just be an ass and say shit like "Cascade of Attention Deficit Teenagers problem", because that's how you'll get a friendly response from e.g. me (nope!).
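The convention being referred to is, as far as I can tell, to put the version into the bus and interface names themselves, the way logind does with org.freedesktop.login1: incompatible changes get a new name, and old and new interfaces can be served side by side. A small illustration (the Frobnicator names are made up):

    # v1 clients keep calling the old interface even after a v2 appears:
    gdbus call --session \
        --dest org.example.Frobnicator \
        --object-path /org/example/Frobnicator \
        --method org.example.Frobnicator1.Frobnicate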
> But now they have been corrupted
Get lost with calling me corrupted.
Posted Sep 3, 2014 14:24 UTC (Wed)
by javispedro (guest, #83660)
[Link] (2 responses)
When was the last XDG specification published? Most times I see the freedesktop.org domain referenced these days is because systemd is hosted there, for some reason.
In the meantime, the existing standards are being "deprecated" or ignored; e.g. notification icons are at this point not supported by 2 out of the 3 largest X desktops. There's still no replacement, even though these two desktops have their own version of "notification icons".
But I do not really want to argue about FDO's mission. I just used how quickly its standards are becoming useless to show that library APIs are only a small part of the problem. The bigger problem is the lack of commitment to standards (I'm not saying I'm not part of this problem, too). Ideally, good reason should be provided when dropping support for an existing and approved XDG or LSB standard. Not "it's just that we have a different vision for Linux". Without that, a generic abstraction layer is just infeasible.
> Regarding Client Side Decorations
I do not even dislike CSDs. But it's just yet another way in which _existing_ cross-desktop compatibility is being thrown down the drain for no good reason. I do not know about KDE but there are plenty other DEs out there some of which don't even use decorations at all.
And this compatibility change would not be fixed either by the proposal discussed in the article.
> say shit like "Cascade of Attention Deficit Teenagers problem"
That is not my quote. E.g.
Posted Sep 3, 2014 17:07 UTC (Wed)
by jwarnica (subscriber, #27492)
[Link] (1 responses)
Posted Sep 3, 2014 22:46 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Sep 2, 2014 11:22 UTC (Tue)
by cjcoats (guest, #9833)
[Link]
I've got to do my work on that &#()^%!! thing.
Posted Sep 1, 2014 16:34 UTC (Mon)
by raven667 (subscriber, #5198)
[Link] (14 responses)
I would guess that naming LVMs using the same scheme would extend this to any filesystem, supporting ext4, xfs on lvm and btrfs using the same management framework would cover most users, maybe with some degraded functionality and warnings as you moved into harder to support or less well tested configurations.
Posted Sep 1, 2014 23:45 UTC (Mon)
by mezcalero (subscriber, #45103)
[Link] (13 responses)
The file-level de-dup is actually key here, because if you dump multiple runtimes and the OS into the fs, then you need to make sure you don't pay the price for the duplication. And the file-level de-dup is not only important to ensure that you can share the data on disk, but also later on in RAM.
So no, LVM/DM absolutely doesn't provide us with anything we need here. Sorry. It's technology from the last century, it's not a way to win the future. It conceptually cannot deliver what we need here.
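For the curious, the kind of file-level sharing being talked about can be tried on btrfs today with reflink copies: a reflink shares extents on disk until either side is modified (whether the page cache ends up shared as well is a separate question). Paths hypothetical:

    cp --reflink=always runtime-A/usr/lib/libfoo.so.1 runtime-B/usr/lib/libfoo.so.1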
Posted Sep 2, 2014 0:23 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link] (8 responses)
Posted Sep 2, 2014 1:24 UTC (Tue)
by dlang (guest, #313)
[Link] (6 responses)
Posted Sep 2, 2014 1:33 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link] (1 responses)
Posted Sep 2, 2014 2:33 UTC (Tue)
by rodgerd (guest, #58896)
[Link]
That's hardly the land of wild and crazy any more.
(Anecdata-wise, I found it rubbish under 3.4 - 3.10, and am running data I care about on 3.12 in the RAID1 config. It's been very reliable and coped with a drive failure/rebuild, growing arrays, and so on and so forth.)
Posted Sep 2, 2014 20:44 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link] (3 responses)
I think using it in this logic is a great way to get the stabilization process sped up, because we will get it into the hands of people that way, but we don#t actually really care about the data placed in the btrfs volumes (at least initially): it is exclusively vendor-supplied, verified, read-only data. If the file system goes bad, we just download the stuff again, and nothing is lost. It's a nice slow adopt path we can do here...
Actually, we can easily start adopting this by just pushing the runtimes/os images/apps/frameworks into a loopback btrfs pool somewher in /var. This file system would only be mounted into the executed containers and apps, and not even appear in the mount table of the host...
Posted Sep 3, 2014 10:09 UTC (Wed)
by dgm (subscriber, #49227)
[Link] (2 responses)
Put it in the hands of developers. Or volunteers. But please! Let users alone.
Posted Sep 3, 2014 18:49 UTC (Wed)
by ermo (subscriber, #86690)
[Link]
I also happen to think that Lennart is correct in taking the longer view that btfrs needs to be included gradually in the ecosystem if it is ever to become a mature, trusted, default Linux FS.
There will be bumps in the road, sure, that's a given. But Lennart's point that he wants to ease the pain by starting off with storing non-essential data (in the easily replaceable sense) in btfrs while this process is onging, is IMHO both sound and valid.
Others may see it differently, of course.
Posted Sep 5, 2014 20:06 UTC (Fri)
by HenrikH (subscriber, #31152)
[Link]
You can perform all the QA you want internally and yet some random user with his random setup and random hardware till find tons of bugs on the first day of use.
Posted Sep 2, 2014 20:40 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link]
Posted Sep 2, 2014 2:13 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (2 responses)
I am also interested in how this plays out with NFS based NAS devices, it seems a lot like VDI where you have a set of very hot gold master images, mixed with something like Docker you have a whole data center humming along with a very high level of deduplication and standardization.
If this makes any sort of sense then someone will try to implement it for sure, maybe everyone will end up with btrfs in the end but the path to there might go through stages of using block level copy-on-write, and failing, before they are convinced.
Posted Sep 2, 2014 11:53 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (1 responses)
And if people CAN run this stuff over ext4, or xfs, or reiser (does anyone still use it :-), then maybe people will also be motivated to add these features to those file systems. Succeed or fail, it's all within the Linus philosophy of "show me you can do it, show me it's a good idea". That's the way you get new stuff into the Linux kernel, that should be the way you get stuff into Linux distros.
And succeed or fail, it's good for the developers to have a go :-)
Cheers,
Posted Sep 2, 2014 22:59 UTC (Tue)
by rodgerd (guest, #58896)
[Link]
Posted Sep 9, 2014 11:23 UTC (Tue)
by alexl (subscriber, #19068)
[Link]
This is unfortunately not true. The files on each btrfs subvolume have a per-subvolume st_dev (different minor nr), and the page-cache is per-device. So, block sharing between btrfs volumes is strictly an on-disk thing, they will be cached separately in RAM.
I know this because I wrote the btrfs docker backend hoping to get this feaure (among others), and it didn't work.
Posted Sep 1, 2014 18:06 UTC (Mon)
by kreijack (guest, #43513)
[Link] (3 responses)
In the Lennart post, I don't see any btrfs-only feature. Even if he seems to told the opposite.
The snapshot is a "photo" of a subvolume (or an another snapshot). After a snapshot, the subvolume and the snapshot are two interdependently trees, but BTRFS handles transparently the uncommon data as unshared and the common data as shared until they diverge. So send and receive are efficient because they are able to skip common data computing the diff.
But to work the subvolume and the snapshot have to be in a relationship parent-child (in historical sense) and have to be build in the same filesystem.
Instead the runtime/usr/app subvolume aren't coupled at all. They may be even build in different computer (upstream vs downstream). In this case btrfs send and receive aren't different/more efficient than rsync.
Posted Sep 6, 2014 6:56 UTC (Sat)
by mab (guest, #314)
[Link] (2 responses)
Posted Sep 6, 2014 11:32 UTC (Sat)
by Wol (subscriber, #4433)
[Link] (1 responses)
If I have loads of disk space, and am happy to tolerate the waste, would it work?
(My home system has multiple users, and photos are shared between user accounts. I make extensive use of hard links to do so. Would something like that work? I know deduping is hard work, but I've got a script that does MD5 hashes, checks for duplicates, and replaces duplicates with hard links.)
Cheers,
Posted Sep 8, 2014 16:17 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Sep 1, 2014 12:28 UTC (Mon)
by thexf (guest, #91471)
[Link] (20 responses)
Posted Sep 1, 2014 12:31 UTC (Mon)
by corbet (editor, #1)
[Link] (19 responses)
Posted Sep 1, 2014 14:23 UTC (Mon)
by bersl2 (guest, #34928)
[Link] (18 responses)
I'm not sure we can, unfortunately, not until the whole systemd thing gets resolved. It took a lot of self-control myself to not hastily post something snarky about systemd seemingly capturing udev and other core parts of the system. Yes, it in fact helps nobody, but it does help us cope (badly) with how we see a future we do not want playing out.
Factual or not, it feels like systemd is threatening future compatibility for all distros which want to remain with modern Linux as a Unix-like OS, for gains which do not make any sense for a significant number of users. And it is a paralyzing feeling.
It's really hard to compartmentalize the different things a person or group does when one feels like the other has a sword or gun pointed in one's face. It does not matter whether the sword is rubber, or the gun is a fake, or even if there's nothing at all and it's all in our imaginations: we think it's real, so we act as though its real.
Communication remains poor due to trust issues, I think. I don't think many of us actually trust Poettering, et al. with the parts of the core system being bulldozed and replaced figuratively overnight, compatibility with existing components be damned. Certainly we don't trust their words.
Nor can I personally provide factual backing for this lack of trust. This affects technical issues, but at its core, these are human issues.
Posted Sep 1, 2014 15:37 UTC (Mon)
by ovitters (guest, #27950)
[Link] (4 responses)
Aside from pretending you're speaking on behalf of multiple people, you're post is quite ok. However, the case you argue for is not.
Your post is totally fine. I think you're wrong, but I see nothing wrong in your post. It expresses how you feel things are going.
However, you're arguing that snarky comments are ok. That entirely unreasonable stance to take. Such comments provides no value at all, result in emotional responses and the value for this site is 0. The original poster expresses his/hers emotions and likely feels a bit better, but that is IMO done at the expense of everyone reading this site. It's just not within reason to tolerate such behaviour.
Posted Sep 1, 2014 19:24 UTC (Mon)
by SiB (subscriber, #4048)
[Link] (3 responses)
He is.
> ... Such comments provides no value at all,
This one brought us to this post from bersl2, which helped me, at least.
Posted Sep 2, 2014 7:15 UTC (Tue)
by oldtomas (guest, #72579)
[Link]
> [SiB] He is.
Seconded. And Olav: you should know that.
Whenever I see something like this, I think "oh, noes! another Lennart Poettering thread" and turn away in disgust. That's most probably why those "multiple people" are overheard.
Posted Sep 2, 2014 9:07 UTC (Tue)
by ovitters (guest, #27950)
[Link] (1 responses)
No, it did not. There's a warning by the someone from LWN that such comments are not useful. A hit and run comment is terrible. That maybe perhaps there can be value out of such comment: yeah whatever. Let's get concrete, that comment is utterly useless, negative and leads nowhere.
You value the post from bersl2, that is what should be on LWN. Not the initial comments because maybe after a totally crappy comment followed by a warning by LWN site owner you finally get to a better comment.
You're advocating terrible commenting. Start your own site.
Posted Sep 3, 2014 14:54 UTC (Wed)
by ms-tg (subscriber, #89231)
[Link]
Very well said.
Posted Sep 1, 2014 19:18 UTC (Mon)
by rgmoore (✭ supporter ✭, #75)
[Link] (12 responses)
I think there's a big difference between what you did and what the grandfather post that corbet is complaining about did. Your worries may be more emotional than factual, but in theory somebody could write a reply that would address them. In contrast, the grandfather post is just slinging an insult without further details. There's no hope of any kind of productive discussion coming from it.
Posted Sep 2, 2014 16:34 UTC (Tue)
by JoeBuck (subscriber, #2330)
[Link] (11 responses)
Posted Sep 2, 2014 16:42 UTC (Tue)
by corbet (editor, #1)
[Link] (10 responses)
That said, I'm not sure that the current system is working all that badly? We had one troll; I griped at them and we heard no more. Beyond that, what would you have us censor, were we willing to do so? The comment thread has been way too long and somewhat repetitive at times, but there has also been some good discussion. I don't really feel that we could have improved it by applying a heavy hand.
Posted Sep 2, 2014 18:18 UTC (Tue)
by smoogen (subscriber, #97)
[Link]
Posted Sep 2, 2014 18:31 UTC (Tue)
by andresfreund (subscriber, #69562)
[Link]
Yes, it is. The signal/noise ratio in here has made it increasingly pointless to even read the comments. There's so many repetitions of the same inflammatory bullshit that the significant number of very capable people here that I very much want to read are a) completely drowned out b) commenting far less c) understandably can't always resist the trolls.
It also makes the 'Unread comments' feature far less useful because there's always just lots of repetitive stuff in there. While sad I'd much prefer ignoring certain article's comments so I can read the rest in peace. Sifting through 50 comments to two flamed articles, just to find the two others is annoying.
Posted Sep 2, 2014 19:45 UTC (Tue)
by gmaxwell (guest, #30048)
[Link]
Back on topic here, more then complaining— there is something you can do which provide absolute protection against this stuff: don't run it. E.g. Gentoo runs fine without you using the latest trendy mac/smartphone architectural imports.
If you're like me, there are alternatives that better meet your principles and work styles— and perhaps they're only not as attractive because they don't get the enormous resource investment that things like Fedora do, but there is only one way to fix that...
Posted Sep 2, 2014 19:53 UTC (Tue)
by tjc (guest, #137)
[Link]
The discussion here is significantly better than the "discussion" unfolding at That Other Site around the same topic, so I'm an advocate for leaving things as they are.
Posted Sep 3, 2014 0:49 UTC (Wed)
by rgmoore (✭ supporter ✭, #75)
[Link]
I think the current system is working OK, except for a handful of topics that bring out the worst in people. Unfortunately, "anything proposed by Lennart Poettering" seems to be on that list. I certainly wouldn't mind seeing topics that are likely to spark big arguments be set to subscribers only, since guest posters do seem to be more prone to angry, unproductive arguments.
Posted Sep 3, 2014 2:19 UTC (Wed)
by leoc (guest, #39773)
[Link] (1 responses)
Posted Sep 3, 2014 9:04 UTC (Wed)
by cladisch (✭ supporter ✭, #50193)
[Link]
Voting might be able to remove the few bad comments, but it definitely would change the way how (much) people write their comments for the intended audience, and I cannot see this as an improvement.
Posted Sep 3, 2014 4:46 UTC (Wed)
by speedster1 (guest, #8143)
[Link] (1 responses)
I still find the signal to noise ratio much higher here than almost anywhere else, and am pleased that my subscription is helping support this community with such excellent leadership.
Posted Sep 3, 2014 17:44 UTC (Wed)
by Trelane (subscriber, #56877)
[Link]
Posted Sep 5, 2014 7:39 UTC (Fri)
by karath (subscriber, #19025)
[Link]
Censoring is both a very emotive word and an extremely complex topic. If a site publisher has a clear policy that abusive and/or spam postings will be removed then all users of the site have to accept that policy or go elsewhere. I believe that removing posts as part of the process of maintaining that policy is _not_ censorship. And of course, users, particularly paying users, are free to lobby for a change in policy.
However, I think the best policy of all is that the editorial team publicly call out the postings that they consider abusive. As has happened twice in the comments to this article. It makes it clear to all where the borders of taste are and generally most people are willing to go along with this kind of social pressure. Serial offenders may eventually have to have their posting privileges curtailed.
Like many suggestions, mine have the downside of requiring more effort from the editorial team that would be better put towards identifying high quality news and (continuing) writing higher quality articles.
Posted Sep 1, 2014 12:45 UTC (Mon)
by roc (subscriber, #30627)
[Link] (37 responses)
Posted Sep 1, 2014 12:56 UTC (Mon)
by AlexHudson (guest, #41828)
[Link]
You can readily imagine that a popular Linux distro, dividing things up into reasonable run-times, will create the defacto notion. The run-time is just a label and a promise to maintain stability basically, what technically goes into it doesn't matter so much - Gtk plus OCaml could be an entirely valid runtime, it will stand or fall on use and popularity.
Posted Sep 1, 2014 15:43 UTC (Mon)
by mezcalero (subscriber, #45103)
[Link] (35 responses)
The runtimes include everything you need to develop an app. All the way from glibc, to gtk, glib, gstreamer, the crypto libs and everything else. An app picks one of these runtimes, and gets the full API they provide. This is not unlike you'd do app development for Android where you also pick one android version you want to develop for and then get all its APIs. Of course we are Linux and we are pluralistic, hence the need for multiple parallel runtimes.
Posted Sep 1, 2014 18:15 UTC (Mon)
by clopez (guest, #66009)
[Link] (25 responses)
1) Say, that I have App1 and App2 that depend on the gnome-3.6 runtime. Then a new version of App2 goes out that requires gnome-3.8 runtime. The user installs this new version of App2.
Will App1 continue to use the gnome-3.6 runtime, or its runtime will be forcibly upgraded to gnome-3.8 at the risk of breaking App1?
2) Who will take care of security upgrades on the runtimes?
Say, that the developer of App1 don't cares to upgrade his application to a newer gnome runtime. Who will fix the security bugs on the gnome-3.6 runtime? Do you really expect that the developers will have the patience and determination to upgrade their runtimes each time that a security bulletin is issued (probably weekly)?
Posted Sep 1, 2014 22:41 UTC (Mon)
by droundy (subscriber, #4559)
[Link] (1 responses)
Posted Sep 1, 2014 23:26 UTC (Mon)
by droundy (subscriber, #4559)
[Link]
Posted Sep 1, 2014 23:54 UTC (Mon)
by mezcalero (subscriber, #45103)
[Link] (22 responses)
So, if your App2 requires the gnome-3.8 runtime, and the App1 wants gnome-3.6, totally fine, you will get both runtimes installed.
The guys who put the runtime together are responsible for the security. It's the same as for distros: the guys who put together the distros are responsible for the security updates. You can consider a runtime little more than a fixed set of packages, taken out of a distro, stabilized, with a long-term security fixes, and under the strict focus on being a somewhat consisten set of APIs that is interesting for app developers to hack against. And then you can have multiple runtimes from different vendors, and you can have multiple runtime versions of the same runtime around.
If you are an app developer, and want to write your app for GNOME, then you pick a specific major version of gnome you want to focus on, let's say GNOME_3_38. Then you write your code for that, release it. It's then GNOME's job to do security fixes, and push out minimally modified updates to GNOME_3_38. Then one day, you actually invest some time, want to make use of newer GNOME features, so you rebase your app onto GNOME_3_40. This is a completely new runtime, which means it will be downloaded to the clients, and also be available. Again, this runtime will need fixes over time, for CVE stuff. But they key here is that you can have many of the runtimes installed in parallel. While they stay mostly immutable after their release by the runtime vendor, they do receive minimal updates to cover for CVEs and stuff.
Posted Sep 2, 2014 1:09 UTC (Tue)
by clopez (guest, #66009)
[Link] (5 responses)
That sounds good in theory. But in practice what is going to happen is that most runtimes are not going to have an acceptable level of security support.
Also, with this new setup updating becomes much more complicated: instead of upgrading one runtime (your system), you have to upgrade dozens of runtimes (assuming that the runtime provider cared to release an update)
Just imagine the pain of patching all your runtimes after a bug like heartbleed....
To put some examples:
I install an application that uses the Fedora 18 runtime. For how long I'm going to have security upgrades for the Fedora 18 runtime? What happens after that, if the application wasn't updated for a new Fedora runtime? I'm on my own?
Even worse, say that a developer publishes an application using a custom Gentoo runtime. Do you trust the developer to provide security updates for that custom runtime? really?
Posted Sep 2, 2014 4:54 UTC (Tue)
by NightMonkey (subscriber, #23051)
[Link] (4 responses)
Like any software, you will have to carefully choose where you get it from. The straw man of some wild and crazy and irresponsible Gentoo-using compiling-even-on-Sunday developer being the ONLY ONE who supports the software that ONLY YOU AND A MILLION OTHERS need is preposterous.
Posted Sep 2, 2014 6:14 UTC (Tue)
by dlang (guest, #313)
[Link] (1 responses)
Posted Sep 2, 2014 7:35 UTC (Tue)
by NightMonkey (subscriber, #23051)
[Link]
Posted Sep 2, 2014 10:32 UTC (Tue)
by clopez (guest, #66009)
[Link] (1 responses)
My point here is that a developer that releases an App using an Ubuntu or Debian runtime can rely on other developers (or even the distribution) upgrading the runtime. He don't has to be the one that upgrades the runtime with security upgrades, he can "outsource" that job to the distribution or other developers.
However, for a developer using a Gentoo runtime, outsourcing that job is pretty much impossible. This is because Gentoo is both a rolling release (the package version numbers change constantly) and because each package can have very customized compilation flags or patches.
Everybody using the "Ubuntu X" runtime shares the same runtime, so outsourcing (or delegating) security upgrades to others becomes easier. However, each one of the Gentoo runtimes are different. No one is going to share a Gentoo runtime. So the responsibility of security upgrades on a Gentoo runtime falls only on the developer of the application using that runtime.
Posted Sep 2, 2014 14:34 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
I can take a snapshot and then do an "emerge world" - great for keeping my system up-to-date, and makes a great development platform.
But any developer who develops for just the one distro - the one on his own system - is an idiot if he wants others to use it too. For testing purposes you really need to build it on a couple of distros. In my case, I'd build it on the latest SLES (provided it wasn't too long in the tooth).
Then the version that's released for general use is against some version of LTS. Those who want bleeding edge run the rolling release, those who want stable run it against an LTS.
Cheers,
Posted Sep 2, 2014 1:23 UTC (Tue)
by dlang (guest, #313)
[Link]
Posted Sep 2, 2014 1:43 UTC (Tue)
by torquay (guest, #92428)
[Link] (14 responses)
Except that this won't match up to reality. The current status-quo is that as soon as Gnome version N is released, the Gnome kids don't want anything to do with Gnome version N-1, and certainly much less with version N-2 (ie. Gnome versions < N essentially become AbandonWare). I suspect a similar situation occurs in the KDE camp.
So who maintains Gnome N-1 and KDE N-1 ? A given distro to some extent, but then most distros are on a 6-12 month cycle (not including RHEL and Ubuntu LTS). In other words, a given run-time provided by a distro becomes outdated within a year. This is an awfully short time from both the developers' and users' points of view.
Sure, we can still develop against an "obsolete" run-time, but it will get no security fixes, nor fixes for critical bugs. So what exactly is the value of having multiple run-times, if essentially they're still forcing application developers to deal with broken APIs and ABIs in order to run on a security-supported run-time?
The proposal put forward by the systemd folks is certainly interesting, but I can only see it useful for having 2 run-times: (1) the Ubuntu LTS run-time, (2) and the RHEL/CentOS/Scientific run-time. Essentially it becomes an abstraction layer for the (allegedly) two most practical run-times. Every other run-time is pointless, as it provides no value over a separate distro.
Posted Sep 2, 2014 2:41 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (6 responses)
Shhh... don't tell everyone that most distros are redundant, they might get restless ... 8-)
If this scheme gets any traction I think the next question everyone will have is why they have so many different runtimes installed to get the apps they want and try to minimize and standardize, asking some hard questions about why exactly the distros are different and the API/ABIs are so broken.
The next question is one of branding, people brand themselves as a Debian, Ubuntu, Redhat, Gentoo, etc. person, like vi vs. emacs, but what point is this self-identification if the distros run co-equally on the same kernel and you can run a mix of them.
Posted Sep 2, 2014 4:00 UTC (Tue)
by mgb (guest, #3226)
[Link]
Until the TC drank the systemd kool-aid we were very happy with Debian Stable for its breadth, stability, security, and seamless upgrades between releases.
But allowing RH to leverage systemd to churn a distro into oblivion is not a smart move.
Posted Sep 2, 2014 4:50 UTC (Tue)
by NightMonkey (subscriber, #23051)
[Link] (2 responses)
Gentoo's primary reason for existence is to avoid the pitfalls that apparently have been plaguing binary distros for a decade. The task of proper dependency management is what Gentoo is just fantastic at accomplishing.
Posted Sep 2, 2014 8:28 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (1 responses)
I switched to gentoo, because when I was running the latest stable SuSE, I couldn't (for whatever reason) upgrade to the latest stable lilypond.
Now although I normally don't bother, I have full control if I need it ... (and I gather there are several linux developers who run gentoo, presumably for the same reason ...)
Cheers,
Posted Sep 3, 2014 4:54 UTC (Wed)
by speedster1 (guest, #8143)
[Link]
I know Greg KH is a long-time gentoo dev who runs it on servers and build machines; just curious what other kernel devs have mentioned running gentoo?
Posted Sep 5, 2014 19:40 UTC (Fri)
by picca (subscriber, #90087)
[Link] (1 responses)
and eventually at the end only one runtime will remain.
Who will install a gnome runtime not maintain by gnome ?
so it will reduce diversity at the end.
Posted Sep 21, 2014 14:39 UTC (Sun)
by vonbrand (subscriber, #4458)
[Link]
The Gnome developers I know do use different distributions...
Posted Sep 2, 2014 9:19 UTC (Tue)
by ovitters (guest, #27950)
[Link] (1 responses)
Regarding this proposal: Lennart mentioned somewhere else that he only expects the bare minimum of fixes to go in. Security fixes and that's it. That's so minimal that I think it is something GNOME could take up.
We still run into and rely on all the other points you made. Maybe solution is indeed to rely on LTS distributions. Have two runtimes: LTS based one, and a shorter supported one.
I do see the usefulness of this though: When GNOME is released anyone in any distribution can immediately make use of GNOME. That's a question we get fairly often. How to use latest GNOME in their current distribution. There's a lot of practicalities though; GNOME often relies on newer versions of lower level stuff (e.g. Wayland).
Posted Sep 2, 2014 11:05 UTC (Tue)
by warmcat (guest, #26416)
[Link]
Signed distro packages say "something"... maybe not much if some source packages came from sourceforge or somebody's USB stick or whatever, but something. People have rallied around distro security policy as their starting point for their system being clean, rightly or wrongly.
If Gnome put out a sort of layer of stuff I can install and run as a unit, that does sound useful, however they might sign the image but the process that sourced and created the contents is kind of opaque and unrelated to how a distro functions.
Obviously it differs but at heart this is not a million miles from "some kind of filesystem apk", and Android has to expect they are malicious, control their system access with an enforced manifest you can inspect before installation, etc. Something like that also seems to be needed here.
Posted Sep 3, 2014 10:34 UTC (Wed)
by ebassi (subscriber, #54855)
[Link] (3 responses)
I fully expect that if this scheme takes hold then we'll see upstreams coping with it, and coming up with new security teams. plus, I fully expect efforts like a Long Term Support GNOME release to happen. again, this is conditional on this scheme working: right up until now, there has been no need for upstream to cope with long term support or security rollouts, since the distributions insulated upstreams pretty much completely. as a side note, could please stop calling the GNOME project members "kids"? it comes off as patronizing and insulting.
Posted Sep 8, 2014 13:31 UTC (Mon)
by Arker (guest, #14205)
[Link] (2 responses)
Posted Sep 8, 2014 13:59 UTC (Mon)
by ebassi (subscriber, #54855)
[Link] (1 responses)
if I had a nickel every time somebody linked Jamie's CADT not ironically, I'd be a millionaire. that page is not the gospel from on high, and if you think nobody, ever, declared "bug bankruptcy" and marked stuff as obsolete or "needs reproduction with a newer version", then you, like Jamie, are kidding yourself. plus, as a user and as a maintainer, I prefer upstreams closing bugs with OBSOLETE/NEEDINFO, as opposed to bugs lying around forever. it's not like Jamie couldn't re-open bugs at the time either: he just decided to be a prick about it (jwz acting like an emo teenager instead of an adult? that literally never happened in the history of ever!) anyway, you'll note that for the past 20 years we had distributions, and for the past 20 years distributions did shield many upstreams. if things change, and responsibilities shift, processes will change — or projects will simply die. we are actually discussing this in GNOME, and have been doing that since we started generating continuous integration VM images. plus, the people doing the security updates downstream will just have to push their work upstream, like they already do. it's not like the people that comprise security and QA teams in distributions will magically cease to exist.
Posted Sep 8, 2014 14:24 UTC (Mon)
by Arker (guest, #14205)
[Link]
I do not doubt that one bit. But it sounds like you need to think about why that is true.
"if you think nobody, ever, declared "bug bankruptcy" and marked stuff as obsolete or "needs reproduction with a newer version", then you, like Jamie, are kidding yourself."
And that is just a straw man. Neither I nor Jamie nor anyone else I can think of right off has said otherwise. The issue is not declaring bug bankruptcy, the problem is a long-term, consistent pattern of ignoring bugs, avoiding maint. and refusing to fix, simply kicking every problem down the road till the next version comes out and 'bug bankruptcy' is invoked.
"it's not like Jamie couldn't re-open bugs at the time either"
There is little more pointless than re-opening or re-filing a bug with the same team that studiously ignored your bug for years already.
And this was really an old pattern already by the time jwz wrote that. Let me repeat that - 12 years ago, when jwz wrote that, this was already an old pattern.
Sure it would be different if this were a new project, or one that had a good reputation. But it's just not. GNOME has been on this past for nearly 15 years, expecting that to suddenly change seems quite irrational.
Posted Sep 9, 2014 15:19 UTC (Tue)
by jonnor (guest, #76768)
[Link]
Currently there is not much of a point in updating older upstream releases, as to get the fixes out to users, each of the NN distributions have to be involved. This process is painful, slow and largely outside control of upstream.
Posted Sep 1, 2014 21:24 UTC (Mon)
by roc (subscriber, #30627)
[Link] (5 responses)
FTR I quite like the ideas here from the point of view of an app vendor (Mozilla). It's just a rather large change in the way Linux is organized (at the human level, not just the technical level), and I don't think this blog post makes those changes clear enough.
Posted Sep 1, 2014 22:08 UTC (Mon)
by sramkrishna (subscriber, #72628)
[Link]
Posted Sep 2, 2014 0:08 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link] (3 responses)
But it's not just the desktop projects and the distributions that can put a runtime together. Let's say you are building a phone platform. Great! So you put together your PHONE_PLATFORM_1_0 runtime, and everybody who writes apps for your platform links against that. You do a couple of CVE fixes for that runtime, hence you do minor updates to it. But eventually, you want to introduce new functionality, so you do PHONE_PLATFORM_1_2, and then your apps link against that. But the old apps continue to work, because you can keep them both around easily.
And similar I figure the IVI people could agree on a runtime. Or if you are a TV manufacturer you can do a runtime for your series of TVs, and people can hack against that.
And even certain smaller open source projects could define their own runtime, like let's say some media center thing like XBMC or so. They could do a runtime for their major releases, that people can write plugins a again, and then support a couple of the runtimes in parallel.
And so on, you get the idea.
And note that runtimes are not necessarily something you completely make up of thin air. If you did, you would make yourself a lot of work, because then you have to do CVE fixes and shit, which most people don't want to be burdened with. So if I'd be KDE or GNOME I would build my runtime out of existing distro packages. That way, one can take benefit of the good work the distros already do in the CVE area. And then I pick a couple of packages that I think should make up my runtime, and there you go. Or you could even base your runtime on some packaged stuff you get from a distro (so that you don't have to maintain glibc yourself), but then you add compiled versions of the libraries you actually care about yourself. IF you do that, you can take benefit of the CVE work of the distro you built on, and only have to do the CVE work for the stuff you added yourself on top.
That all said, I ultimately don't think that one the usual desktops we will really see that many different runtimes. My hopes at least is that there will be KDE's and GNOME's and maybe a couple of more, but that would be it. And I think this will be self-regulating a bit, since these will be well maintained, and you will get frequent CVE updates for a long time for, and that are likely already installed on your system when you first installed it. If apps otoh pick random exotic runtimes, then this would already mean a much bigger download since you would have to get the runtime first.
Posted Sep 2, 2014 1:52 UTC (Tue)
by torquay (guest, #92428)
[Link] (1 responses)
Posted Sep 2, 2014 18:27 UTC (Tue)
by daniels (subscriber, #16193)
[Link]
Posted Sep 2, 2014 8:02 UTC (Tue)
by imunsie (guest, #68550)
[Link]
Apps choose one single runtime.
Any library the app needs that is not in that runtime must be provided by the app.
Therefore, the app is responsible for security updates of all libraries it used that were not provided by the runtime.
Fail.
Posted Sep 2, 2014 17:26 UTC (Tue)
by rich0 (guest, #55509)
[Link] (1 responses)
Suppose your software wants to do arbitrary-precision math. Do you think the Gnome devs will include libraries for that?
How about using some ANSI standard for data exchange written in the 70s?
How about OCR?
There are lots of libraries that do things other than draw menus and render the top 3 video/image/audio formats.
Posted Sep 2, 2014 19:12 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 2, 2014 17:54 UTC (Tue)
by paulj (subscriber, #341)
[Link]
I havn't tried it recently, because my experience says "next to no chance".
Have the distros converged significantly ABI wise (for same abstract runtime) somehow, that I missed? Or do the ABI issues go much deeper?
Posted Sep 1, 2014 12:58 UTC (Mon)
by rsidd (subscriber, #2582)
[Link]
Posted Sep 1, 2014 13:08 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Sep 1, 2014 14:00 UTC (Mon)
by gracinet (guest, #89400)
[Link] (2 responses)
If distributions provide an easy way to install these snapshots/building blocks, if that can be automated by a descriptor in which the downloaded version of the end app would prescribe the blocks it's been validated for…
Posted Sep 1, 2014 22:10 UTC (Mon)
by sramkrishna (subscriber, #72628)
[Link] (1 responses)
It would be harder to brand differentiation between distros I think.
Posted Sep 3, 2014 19:19 UTC (Wed)
by ermo (subscriber, #86690)
[Link]
In theory at least, this could potentially free up currently duplicated resources across 'brands' to work on something more productive (like security updates), possibly to the benefit of the entire ecosystem.
For someone like Ubuntu, this means more resources can be allocated towards the UX, for instance, making for even stronger 'branding'.
For someone like Scientific Linux, this means that more more resources can be allocated towards developing and including the software used in academic circles.
For someone like CentOS, this means that more resources can be allocated towards creating DevOps documentation and service bundles that sysadmins can leverage in the deployment of their services and infrastructure.
In other words, this will potentially help create and define fewer but stronger and more well-supported brands (or frameworks/platforms/runtimes), which better serve the needs of a particular set of users than they do now due to each brand/distribution having to wear a lot of different hats.
At least that's my take on it in an ideal world. Things probably won't work out quite like that, be one can dream can't he?
Posted Sep 1, 2014 14:05 UTC (Mon)
by flussence (guest, #85566)
[Link] (3 responses)
Posted Sep 1, 2014 16:28 UTC (Mon)
by ovitters (guest, #27950)
[Link]
This likely impacts the quality of the software itself. As in, you might more easily get the runtime but the packaging might take more work than it did before.
I've read the Google+ comments, and then it was explained that the expectation is that a runtime is actually made using a distribution. Various distributions have as feature/goal that they can be used to make a distribution. Instead of ending up being used in another distribution, they'd be used for creating a runtime.
The other expectation that there are a limited amount of runtimes. Taking this into account, I have less reservations about the idea.
Posted Sep 1, 2014 16:50 UTC (Mon)
by bjartur (guest, #67801)
[Link]
Don't force people into dependancy on software repositories by braking indie packages. Lure them by promising quality, simplicity and loyalty (i.e. less crapware and less malware). Allow people to lock down their systems and install only signed packages, as Lennart et al. propose. Make that option attractive, please do. But don't stop that inexperienced kid from toying with software from a variety of sources unless you really must (e.g. to prevent him from accidentally spamming everyone in his contact list). Free software is more easily locked down in a custom fashion. Ease of packaging will enable larger sofware repositories. More people will be able to package their software for the same platform. More vendors will be able to repackage applications targeting either the same software stack—or repackage applications from a single, standard format to their custom non-standard to suit their niche. More software distributors will be able to recommend their own sub- or superset of packages to more administrators with access to standard tools. More software repositories competing in a larger market means you can hopefully choose the best binary repository, not just the best deb repository. Of course, those of us who like e.g. source packages will continue to do things differently. But we don't need to force Arch and Fedora spend more time on repackaging common software when they really want to be the first ones to package some bleeding edge library or application. Nor force Red Hat and users to risk anything when an exact application image has been widely tested already. Debian didn't kill Ubuntu. It laid the ground for it. Downstream can change things if they wish. With standardization, those changes can hopefully be applied in a more systematic fashion. Standardization doesn't necessarily kill competition. It shifts it. Especially if oddballs are allowed to break them. And unlike, say, an extended libc like glibc, this doesn't need developers to adopt as much of an incompatible interface. It just presents you with the option of skipping the whole repackaging step, if ever you so desire.
Posted Sep 2, 2014 3:46 UTC (Tue)
by quotemstr (subscriber, #45331)
[Link]
Posted Sep 1, 2014 14:13 UTC (Mon)
by ken (subscriber, #625)
[Link] (27 responses)
What do you do when the base OS runs a version that is incompatible with what the app is expecting? somehow force two versions to run anyway ??
Posted Sep 1, 2014 16:00 UTC (Mon)
by mezcalero (subscriber, #45103)
[Link] (26 responses)
In our concept we put a lot of emphasis on the kernel/userspace interfaces, on the bus interfaces and on file system formats. So far the kernel API has probably been the most stable API for all we ever had on Linux, so we should be good on that. File formats tend to stay pretty compatible too over a long period of time. Currently we suck though at keeping bus API stable. If we ever want to get to a healthy app ecosystem, this is where we need to improve things.
Posted Sep 1, 2014 16:21 UTC (Mon)
by ken (subscriber, #625)
[Link] (25 responses)
I once used to have home directories on NFS and mount it from different computers. But that has not worked well in years as the "desktop" people apparently cant handle having different versions of the same program.
I won't repeat what came out of my mouth when I tested a new version of a distro but used my old home directory and evolution converting the on disk storage format to a new one but failing to understand its own config so nothing worked in the new version and obviously totally broke everything for the old version. I don't run evolution anymore or try to use the same homedir from different distro versions.
in the 90s I used the same nfs home dir for solaris and diffrent linux versions now doing it with only linux and different version of the same distor is just asking for trouble.
Posted Sep 1, 2014 22:52 UTC (Mon)
by droundy (subscriber, #4559)
[Link]
Interestingly, if this plan were to take off, it might force desktop developers to be more considerate in what they do with your home directory, since at its essence the scheme uses a single home directory for multiple running operating systems.
Posted Sep 2, 2014 0:16 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link] (23 responses)
At parsing old file formats our software generally has been pretty good. Doing concurrent access on the same files is a much much harder problem. And quite frankly I don't think it is really worthy goal, and something people seldom test. In particular, since NFS systems are usually utter crap, and you probably find more NFS setups where locking says it works but actually doesn't and is a NOP.
If it was about me I'd change GNOME to try to lock the home directory as soon as you logged in, so that you can only have a single GNOME session at a time on the same home directory. It's the only honest thing to do, since we don't test against this kind of parallel use. However, we can't actually really do such a singleton lock, since NFS is a pile of crap, and as mentioned locking more often doesn't work than it does. And you cannot really emulate locking with the renames, and stuff, because then you get not automatic cleanup of the locks when the GNOME session abnormally terminates.
Or in other words: concurrent graphical sessions on the same $HOME are fucked...
Posted Sep 2, 2014 9:34 UTC (Tue)
by nix (subscriber, #2304)
[Link] (17 responses)
I've had $HOME on NFS for eighteen years now, with mail delivery and MUAs *both* on different clients (neither on the server) for the vast majority of that time. Number of locking-related problems in all that time? Zero -- I can remember because I too expected locking hell, and was surprised when it didn't happen. NFS locking works perfectly well, or at least well enough for long-term practical use.
Really, the only problem I have with NFS is interfaces like inotify which totally punt on the problem of doing file notification on networked filesystems, and desktop environments that then assume that inotify is sufficient and that they don't need to find a way to ship inotifies on servers over to interested clients, when for people with $HOME on NFS single-machine inotify is utterly useless.
That's the only problem I have observed -- and because people like me exist, you can be reasonably sure that major NFS regressions will get reported fairly promptly, so there won't be many other problems either.
Oh yeah -- there is one other problem: developers saying 'NFS sucks, we don't support it' and proceeding to design their software in the expectation that all the world is their development laptop. Not so.
Posted Sep 2, 2014 20:23 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link] (16 responses)
And yeah, this happens, because BSD locks are per-fd and hence actually usable to use. And POSIX locks are per-process, which makes them very hard to use (especially as *any* close() invoked by the fd on the file drops the lock implicitly), but then again they support byte-range locking. Hence people end up using both inter-mixed quite frequently, maybe not on purpose, but certainly in real-life.
So yeah, file locking is awful on Linux anyway, and it's particularly bad on NFS.
Posted Sep 2, 2014 21:36 UTC (Tue)
by bfields (subscriber, #19510)
[Link] (9 responses)
For what it's worth, note Jeff Layton's File-private POSIX locks have been merged now.
Posted Sep 3, 2014 19:30 UTC (Wed)
by ermo (subscriber, #86690)
[Link] (8 responses)
"File-private POSIX locks are an attempt to take elements of both BSD-style and POSIX locks and combine them into a more threading-friendly file locking API."
Sounds like the above is just what the doctor ordered?
Posted Sep 3, 2014 23:07 UTC (Wed)
by nix (subscriber, #2304)
[Link] (7 responses)
I don't see a way to solve this without a new protocol revision :(
Posted Sep 4, 2014 14:03 UTC (Thu)
by foom (subscriber, #14868)
[Link]
Yet, on Linux, local POSIX locks interoperate properly with POSIX locks via NFS, so, if software all switches to using POSIX locks, it'll work properly when used both locally and remotely at the same time.
Of course, very often, nothing is ever running on the NFS server that touches the exported data (or at least, nothing that needs to lock it) -- the NFS server is *just* a fileserver. In such an environment, using BSD locks over NFS on linux works properly too.
Posted Sep 5, 2014 0:44 UTC (Fri)
by mezcalero (subscriber, #45103)
[Link] (5 responses)
Just pretending that locking works, even if it doesn't, and returning success to apps is really the worst thing to do...
Posted Sep 8, 2014 16:12 UTC (Mon)
by nix (subscriber, #2304)
[Link] (4 responses)
Posted Sep 8, 2014 18:56 UTC (Mon)
by bfields (subscriber, #19510)
[Link] (1 responses)
That wouldn't help. I think he's suggesting just returning -ENOLCK to BSD locks unconditionally. I agree that that's cleanest but in practice I suspect it would break a lot of existing setups.
I suppose you could make it yet another mount option and then advocate making it the default. Or just add support NFS protocol support for BSD locks if it's really a priority, doesn't seem like it should be that hard.
Posted Sep 9, 2014 13:56 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Posted Sep 9, 2014 14:43 UTC (Tue)
by foom (subscriber, #14868)
[Link] (1 responses)
Linux also had that behavior a long time ago IIRC. Not sure why it changed, that was before I paid attention.
Posted Sep 9, 2014 15:27 UTC (Tue)
by bfields (subscriber, #19510)
[Link]
If a file is locked by a process through flock(), any record within the file will be seen as locked from the viewpoint of another process using fcntl(2) or lockf(3), and vice versa. Recent linux's flock(2) suggests the Linux behavior was an attempt to match BSD behavior that has since changed?:
Since kernel 2.0, flock() is implemented as a system call in its own right rather than being emulated in the GNU C library as a call to fcntl(2). This yields classical BSD semantics: there is no interaction between the types of lock placed by flock() and fcntl(2), and flock() does not detect deadlock. (Note, however, that on some modern BSDs, flock() and fcntl(2) locks do interact with one another.) Strange. In any case, changing the local Linux behavior is probably out of the question at this point.
Posted Sep 3, 2014 23:05 UTC (Wed)
by nix (subscriber, #2304)
[Link] (5 responses)
What I really want -- and still seems not to exist -- is something that gives you the POSIXness of local filesystems (and things like ceph, IIRC) while retaining the 'just take a local filesystem tree, possibly constituting one or many or parts of local filesystems, and export them to other machines' property of NFS: i.e,. not needing to make a new filesystem or move things around madly on the local machine just in order to export the fs. I know, this property is really hard to retain due to the need to make unique inums on the remote machine without exhausting local state, and NFS doesn't quite get it right -- but it would be very nice if it could be done.
Posted Sep 4, 2014 15:09 UTC (Thu)
by bfields (subscriber, #19510)
[Link] (4 responses)
What exactly are you missing?
"not needing to make a new filesystem or move things around madly on the local machine just in order to export the fs. I know, this property is really hard to retain due to the need to make unique inums on the remote machine without exhausting local state"
I'm not sure I understand that description of the problem. The problem I'm aware of is just that it's difficult to determine given a filehandle whether the object pointed to by that filehandle is exported or not.
"NFS doesn't quite get it right"
Specifically, if you export a subtree of a filesystem then it's possible for someone with a custom NFS client and access to the network to access things outside that subtree by guessing filehandles.
Posted Sep 8, 2014 15:55 UTC (Mon)
by nix (subscriber, #2304)
[Link] (3 responses)
Clearly NFS can't do all this: silly-rename and the rest are intrinsic to (the way NFS has chosen to do) statelessness. So I guess we need something else.
As for the not-quite-rightness of NFS's lovely ability to just ad-hoc export things, I have seen spurious but persistent -ESTALEs from nested exports and exports crossing host filesystems in the last year or two, and am still carrying round a horrific patch to make them go away (I was going to submit it, but it's a) horrific and b) I have to retest and make sure it's actually still needed: the underlying bug may have been fixed).
Posted Sep 8, 2014 16:30 UTC (Mon)
by rleigh (guest, #14622)
[Link] (1 responses)
Posted Sep 8, 2014 18:49 UTC (Mon)
by bfields (subscriber, #19510)
[Link]
The actual kernel client code is pretty trivial, so the bug's probably either in the FreeBSD server or the client-side nfs4-acl-tools. Please report the problem.
Posted Sep 8, 2014 18:46 UTC (Mon)
by bfields (subscriber, #19510)
[Link]
The spec does require that it be implemented, but you're not required to use it. If you're using NFS between two hosts with a recent linux boxes then you're likely already using NFSv4. (It's default since RHEL6, for example.)
See the discussion of OPEN4_RESULT_PRESERVE_UNLINKED in RFC 5661. It hasn't been implemented. I don't expect it's hard, so will probably get done some time depending on the priority, at which point you'll no longer see sillyrenames between updated 4.1 clients and servers.
Do let us know what you figure out (linux-nfs@vger.kernel.org, or your distro).
Posted Sep 2, 2014 11:01 UTC (Tue)
by helge.bahmann (subscriber, #56804)
[Link]
How about vnc/nx & friends?
Posted Sep 2, 2014 18:11 UTC (Tue)
by paulj (subscriber, #341)
[Link] (2 responses)
Note, version here doesn't just specify the release version of the software concerned, but ABI issues like 64 v 32bit. You might have some software where one ABI's version can read and upgrade files from the other but not other way around.
Does this mean $HOME may need to have dependencies on apps?
Posted Sep 3, 2014 23:09 UTC (Wed)
by nix (subscriber, #2304)
[Link] (1 responses)
This is a really hard problem to solve as long as you permit more than one instance of an application (not a desktop!) requiring configuration to run at once :( which is clearly a desirable property!
Posted Sep 4, 2014 21:54 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
And more to the point, old config versions shouldn't be wiped as a matter of course, they should exist in parallel. Of course, this then has the side effect that when the second, older version gets upgraded it doesn't upgrade the old config but will spot and use the pre-existing newer config. Is that good or bad? I don't know.
Cheers,
Posted Sep 8, 2014 14:51 UTC (Mon)
by Arker (guest, #14205)
[Link]
And yet they worked just fine until GNOME came along...
Posted Sep 1, 2014 14:16 UTC (Mon)
by NAR (subscriber, #1313)
[Link] (10 responses)
Posted Sep 1, 2014 16:06 UTC (Mon)
by mezcalero (subscriber, #45103)
[Link] (9 responses)
Posted Sep 2, 2014 0:54 UTC (Tue)
by aquasync (guest, #26654)
[Link] (7 responses)
Realistically I see this working best if a single distro provides these runtimes and commits to long-term support for these (maybe RHEL/Centos-based?). Anyway I think they're worthy goals and hope the project succeeds.
Posted Sep 2, 2014 9:35 UTC (Tue)
by nix (subscriber, #2304)
[Link]
Posted Sep 2, 2014 10:01 UTC (Tue)
by fmuellner (subscriber, #70150)
[Link] (5 responses)
Posted Sep 2, 2014 11:10 UTC (Tue)
by NAR (subscriber, #1313)
[Link] (4 responses)
Posted Sep 2, 2014 13:36 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (3 responses)
Posted Sep 2, 2014 14:02 UTC (Tue)
by NAR (subscriber, #1313)
[Link] (2 responses)
Posted Sep 2, 2014 15:43 UTC (Tue)
by raven667 (subscriber, #5198)
[Link] (1 responses)
You are right, what I said didn't sound right, I re-read the proposal and there are several levels which are related so this is better abstracted. So you have a root filesystem which is unique and you can have several of these, they depend on a /usr filesystem which is from a distro and is shared and is a dependency of various runtimes which are shared infrastructure that apps depend on, additionally you have frameworks which are the devel libraries for building apps against. I'll have to read through again but it might also be that runtimes are supposed to be able to run against multiple different /usr distros but that doesn't seem possible because the ABIs of the /usr are different.
Examples taken from original article
Posted Sep 2, 2014 15:56 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 2, 2014 18:00 UTC (Tue)
by paulj (subscriber, #341)
[Link]
Posted Sep 1, 2014 14:50 UTC (Mon)
by guus (subscriber, #41608)
[Link] (21 responses)
And of course, the obligatory XKCD: http://xkcd.com/927/
Posted Sep 1, 2014 15:15 UTC (Mon)
by mpr22 (subscriber, #60784)
[Link] (17 responses)
In that case, you file a bug against the kernel.
Posted Sep 1, 2014 16:24 UTC (Mon)
by amck (subscriber, #7270)
[Link] (16 responses)
Thats missing the point: filing and fixing the bugs is the procedure for fixing the individual problem, but there is no guarantee that there will ever be a set of software (kernel, runtime, app) that works.
The "runtime" in this picture is a large set of libraries (all of the libs on a distro?). This will break several times a week (look at the security releases on lwn.net). Hence there is no guarantee that this stuff stays stable.
This is what distros do. Its essentially a guarantee : "we've tested this stuff to make sure it all works together, and fixed it when it didn't and froze it when it did to get you release X. There will be point releases of X as there are security fixes, but they won't break your apps' ABI".
This included the kernel. Now you're breaking that by removing the kernel, in order to avoid fixing a problem that has to be fixed within the distro anyway (versioning, compatibility checking): why not look at the work that does on in distros like Debian to ensure that library ABIs and APIs work and learn?
Posted Sep 1, 2014 16:31 UTC (Mon)
by ovitters (guest, #27950)
[Link]
Posted Sep 1, 2014 17:04 UTC (Mon)
by ggiunta (guest, #30983)
[Link] (14 responses)
What if app X relies on runtime A-1 which has known security bugs and does not claim compatibility with the current runtime A-99? End user will just run unsafe libraries to get X running while the developer of X can still claim his app is fully functional and has little incentive to fix it.
Bloat: even in the age of large ssds, keeping 5 versions times 5 OS-packages installed "just for fun" is not something I'd like to do. Heck, I already resent the constant stream of updates I get on Android for apps I barely use, I really do not need to clog the pipe with 25 x downloads from security.linuxdDistibutionZ.org.
What about userland apps which keep open ports? Say LibreOffice plugin X which integrates with Pidgin-29, while Audacity plugin Y integrates with Pidgin-92. Even if there was a namespace for sockets, I'd not like to run 2 concurrent copies of the same IM application.
I wish there was a magical hammer allowing the to move on the other direction instead, and force-push the ABI-stability concept into the mind of each oss developer... (in fact I use windows as my everyday os, mainly because its core apis are stable, and I can generally upgrade any app independently of each other and expect them to work together. True, it is nowhere near linux in flexibility)
Posted Sep 1, 2014 18:11 UTC (Mon)
by xtifr (guest, #143)
[Link] (13 responses)
And its not just disk use that will skyrocket. One of the advantages of shared libraries on Linux is shared memory use. If my browser, editor, and compiler each use a different version of glibc, that means a lot more memory used up on different copies of glibc. Not to mention the various applets and daemons I have running. Then factor in the various versions of all the other libraries these various things use. The term "combinatorial explosion" comes to mind.
Posted Sep 1, 2014 19:01 UTC (Mon)
by mjthayer (guest, #39183)
[Link]
Posted Sep 1, 2014 20:03 UTC (Mon)
by robclark (subscriber, #74945)
[Link]
So... running things in a separate VM or chroot (which is what this is an alternative to) is somehow less wasteful?
Posted Sep 3, 2014 16:15 UTC (Wed)
by nye (subscriber, #51576)
[Link] (10 responses)
I've just had a look at the nearest Linux machine to hand: this is only a rough estimate based on what top reports, but it appears that, out of a little under 30GB RSS, there's about 30MB shared - and that's just by adding up the 'shared' column, so I guess it's probably counting memory multiple times if it's used by multiple processes(?)
Either way, I'm not going to lose much sleep over a memory increase on the order of a tenth of a percent if it makes other things simpler.
Posted Sep 3, 2014 17:47 UTC (Wed)
by Trelane (subscriber, #56877)
[Link] (9 responses)
Posted Sep 3, 2014 20:50 UTC (Wed)
by mjthayer (guest, #39183)
[Link] (8 responses)
Posted Sep 3, 2014 21:52 UTC (Wed)
by Trelane (subscriber, #56877)
[Link] (7 responses)
Posted Sep 3, 2014 22:03 UTC (Wed)
by zlynx (guest, #2285)
[Link] (6 responses)
This is because the static linker reads .a libraries and includes required .o files.
A badly put-together static library has one, or just a few, .o files in it. Using any function from the library pulls in all of the unrelated code as well.
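A minimal sketch of the difference (file and function names invented):

    cc -c foo.c bar.c             # foo.o defines foo(), bar.o defines bar()
    ar rcs libdemo.a foo.o bar.o  # well-factored archive: one function per member
    cc main.c libdemo.a -o main   # main() calls only foo(), so only foo.o is linked in

    # Had foo() and bar() been compiled into a single demo.o, calling foo()
    # would drag in bar() -- and everything bar() references -- as well.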
Posted Sep 4, 2014 5:51 UTC (Thu)
by mjthayer (guest, #39183)
[Link] (5 responses)
Posted Sep 4, 2014 6:55 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (1 responses)
Each time you loaded a library, it checked the list of unsatisfied functions in the program against the list of functions in the library, and pulled them across.
So if one library function referenced another function in the same library, you often had to load the library twice to satisfy the second reference.
I've often felt that was better than the monolithic "just link the entire library", but it does prevent the "shared library across processes" approach.
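The same per-member behaviour is still visible with today's GNU linker, and the classic workarounds survive (a sketch with invented names; assumes the -L search paths are set up):

    cc main.o -lfoo -lbar -lfoo -o prog    # list an archive twice, or...
    cc main.o -Wl,--start-group -lfoo -lbar -Wl,--end-group -o prog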
Cheers,
Posted Sep 4, 2014 7:20 UTC (Thu)
by mjthayer (guest, #39183)
[Link]
Posted Sep 4, 2014 14:14 UTC (Thu)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Sep 5, 2014 7:54 UTC (Fri)
by mjthayer (guest, #39183)
[Link] (1 responses)
Posted Sep 8, 2014 15:42 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Sep 1, 2014 16:05 UTC (Mon)
by bokr (guest, #58369)
[Link] (1 responses)
Is there a way to get something like a dmesg/ftrace log for the
I'd like to see exactly what was accessed and how.
Also how the comparable trace would look under the proposed system.
Posted Sep 1, 2014 16:20 UTC (Mon)
by amacater (subscriber, #790)
[Link]
Notably, Lennart is used to Red Hat - enough said about the problems of running any software requiring multiple third party repositories ... :(
Posted Sep 2, 2014 7:21 UTC (Tue)
by imunsie (guest, #68550)
[Link]
Posted Sep 1, 2014 16:43 UTC (Mon)
by LightBit (guest, #88716)
[Link] (1 responses)
Posted Sep 4, 2014 16:14 UTC (Thu)
by hkario (subscriber, #94864)
[Link]
Posted Sep 1, 2014 17:10 UTC (Mon)
by mgb (guest, #3226)
[Link] (25 responses)
lennart.shark_jumped_p = true;
Posted Sep 1, 2014 17:39 UTC (Mon)
by Trelane (subscriber, #56877)
[Link] (24 responses)
Perhaps future Poettering / systemd articles should be paywalled.
Posted Sep 1, 2014 20:44 UTC (Mon)
by edomaur (subscriber, #14520)
[Link]
Posted Sep 2, 2014 1:44 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
Posted Sep 2, 2014 13:32 UTC (Tue)
by javispedro (guest, #83660)
[Link]
Posted Sep 11, 2014 11:19 UTC (Thu)
by ksandstr (guest, #60862)
[Link] (20 responses)
What's more, I don't see your response to the actual question being asked: what if a per-app btrfs subvolume depends on a version of Lennartware that's fundamentally incompatible with the One True systemd In The Sky which the outer system is based around? Will there be multiple concurrent instances? How does the division of authority work? "Where's the spec, Eggman?"
It's usually the case that when a difficult question is posed, a bright young spark comes along and responds with practiced derision in order to conceal his/her inability to feel like the question has been adequately addressed previously, let alone make an answer that could be discussed further. Yet it's not these people that attract demands for bannination but the so-called "trolls", who (we're supposed to accept) are infinitely worse for reasons that've gone entirely unstated.
As it stands, systemd continues to be backed by this particular branch of schoolyard debate tactics, and anyone who doesn't appreciate the fact will be branded a troll and (as the self-nominated inquisition would have) excluded by administrative fiat. This alone should cause immediate and severe dissatisfaction in the systemd movement's actions.
Posted Sep 11, 2014 23:12 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (19 responses)
That can't be said about SysV scripts.
Posted Sep 12, 2014 1:53 UTC (Fri)
by mgb (guest, #3226)
[Link] (18 responses)
http://www.freedesktop.org/wiki/Software/systemd/Interfac...
However systemd's ABIs are a relatively minor problem, as are systemd's bugs. The serious problem with systemd is the endless churning of the plumbing layer every time RH decides that all systemd distros must downgrade yet another feature to match Fedora.
"One of the main goals of systemd is to unify basic Linux configurations and service behaviors across all distributions."
Posted Sep 12, 2014 2:00 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (16 responses)
>The serious problem with systemd is the endless churning of the plumbing layer every time RH decides that all systemd distros must downgrade yet another feature to match Fedora.
Posted Sep 12, 2014 2:23 UTC (Fri)
by mgb (guest, #3226)
[Link] (15 responses)
But people who value Debian Stable shouldn't have to suffer endless churn as systemd drags it down to Fedora's level.
Posted Sep 12, 2014 2:30 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (14 responses)
Posted Sep 12, 2014 3:54 UTC (Fri)
by mgb (guest, #3226)
[Link] (13 responses)
For a better perspective you'll have to read a few months of debian-user and debian-devel.
Or you can save some time by deducing the scope of the churn from RH's admission that "One of the main goals of systemd is to unify basic Linux configurations and service behaviors across all distributions." RH can't do that without throwing away great features and millions of person hours in every non-RH systemd distro.
Posted Sep 12, 2014 4:29 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (12 responses)
>For a better perspective you'll have to read a few months of debian-user and debian-devel.
>Or you can save some time by deducing the scope of the churn from RH's admission that "One of the main goals of systemd is to unify basic Linux configurations and service behaviors across all distributions." RH can't do that without throwing away great features and millions of person hours in every non-RH systemd distro.
Posted Sep 12, 2014 4:54 UTC (Fri)
by mgb (guest, #3226)
[Link] (11 responses)
Posted Sep 12, 2014 6:07 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (9 responses)
A good solution requires a bit of rethinking of how services are organized, resulting in a more robust system: https://lists.debian.org/debian-devel/2014/09/msg00403.html which was accepted by the author: https://lists.debian.org/debian-devel/2014/09/msg00434.html And it really is better, because it's possible to introspect the running system with the usual tools, without knowing the magic OpenVPN-specific init.d arguments.
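The usual systemd pattern for this kind of multi-instance service is a template unit; a minimal sketch (not the exact unit any distro ships):

    # openvpn@.service (sketch): one instance per config file
    [Service]
    Type=simple
    ExecStart=/usr/sbin/openvpn --config /etc/openvpn/%i.conf

    # each VPN then becomes an ordinary, introspectable unit:
    #   systemctl start openvpn@office
    #   systemctl status openvpn@office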
Posted Sep 12, 2014 13:52 UTC (Fri)
by mgb (guest, #3226)
[Link] (8 responses)
If DDs want to spend their time making things work with systemd they are of course at liberty to choose how they spend their time.
But using unnecessary dependencies to force Debian users to switch to systemd is a serious violation of Debian's Social Contract and very very wrong.
Posted Sep 12, 2014 14:26 UTC (Fri)
by johannbg (guest, #65743)
[Link] (7 responses)
Please provide a reference to the information that shows maintenance within Fedora is less functional and reliable than it is in Debian, as you are indicating with this remark.
Thanks
Posted Sep 12, 2014 14:42 UTC (Fri)
by mgb (guest, #3226)
[Link] (6 responses)
Fedora is bleeding edge which is appropriate for some use cases.
Debian Stable is magnificently reliable and upgrades smoothly from one release to the next without reinstalling.
Posted Sep 12, 2014 15:00 UTC (Fri)
by anselm (subscriber, #2796)
[Link] (4 responses)
What gives you the idea that this will no longer be the case once systemd is Debian's default init system? If there are snags with the upgrade from wheezy to jessie then bugs will be filed and the problems will be fixed, like Debian has worked for the last 20 years. That's what the pre-release freeze is for, after all.
Posted Sep 12, 2014 15:14 UTC (Fri)
by mgb (guest, #3226)
[Link] (3 responses)
Forcing systemd on unwilling Debian users is an egregious violation of Debian's Social Contract.
Leaving servers inaccessible or even unbootable after an upgrade is distinctly below the standard achieved by previous Debian upgrades.
Posted Sep 12, 2014 16:55 UTC (Fri)
by anselm (subscriber, #2796)
[Link]
Inittab isn't exactly rocket science. If there was sufficient demand it would certainly be possible to come up with a program that looks at a system's /etc/inittab and generates a corresponding set of systemd configuration files, at least for the common use cases. This already works for other legacy configuration files like /etc/fstab.
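A sketch of what such a conversion would amount to for the commonest inittab use case, a respawning getty (systemd in fact already ships a serial-getty@.service template covering this):

    # /etc/inittab:
    #   S0:2345:respawn:/sbin/agetty -L ttyS0 9600 vt100
    # rough unit-file equivalent:
    [Service]
    ExecStart=/sbin/agetty -L ttyS0 9600 vt100
    Restart=always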
Note that, even before systemd, Debian never guaranteed you a “seamless upgrade” if you'd tweaked the hell out of your /etc/inittab file, or for that matter any configuration file. Debian does make an honest effort not to break things but we are not, and never were, in the business of promising miracles.
So far the only change is that Debian has (for a variety of reasonable reasons) decided that new installs of the distribution will use systemd unless otherwise specified. SysV init still exists as a Debian package. Package maintainers are still free to include SysV init support in their packages, and users who would rather use SysV init are still free to contribute SysV init scripts for packages that don't come with them (while maintainers are encouraged to include these). So far nothing is being “forced” on anybody.
As far as the Social Contract is concerned, nothing in it says that Debian shouldn't use systemd. If anything, it stipulates that, in the interest of its users, the software in Debian – especially the software that is installed by default – should embody the appropriate state of the art. In 2014, the state-of-the-art solution for basic Linux plumbing seems to be systemd, and this is further corroborated by the fact that all other mainstream Linux distributions seem to concur with this.
You may have noticed that so far no stable Debian release actually involved systemd in an upgrade. Therefore there is no evidence whatsoever that upgrading Debian from one stable version to the next will, in fact, leave “servers inaccessible or even unbootable” on account of systemd. On the contrary, it is safe to say that the upgrade from wheezy to jessie will be extensively tested by Debian maintainers, and showstopping problems will hopefully be corrected before jessie is actually released.
Posted Sep 13, 2014 1:20 UTC (Sat)
by mchapman (subscriber, #66589)
[Link] (1 responses)
Why is this such a problem? Upstart ignores all service definitions in inittab too, and yet I don't remember much complaint about that.
Different init systems are different, just as different windows managers are different and different text editors are different. There is no reason they should be "compatible" with one another in any sense. Where they *are* compatible is simply a bonus.
Posted Sep 13, 2014 1:23 UTC (Sat)
by mchapman (subscriber, #66589)
[Link]
To clarify, I too think claims of "drop-in compatibility" are meaningless. But that's OK, since I don't consider "drop-in compatibility" to be a necessary feature anyway.
Posted Sep 12, 2014 15:33 UTC (Fri)
by johannbg (guest, #65743)
[Link]
Fedora is not "bleeding edge" (Rawhide is, which is comparable to Debian unstable, I suppose) or "First" for that matter, even if it claims to be (Arch took that title a long time ago), and people have been (proudly) upgrading Fedora since FC1, boasting about doing so with every new release.
On the one hand, Fedora ships newer releases of a component, containing bug fixes and enhancements, more often than Debian does, given its shorter release cycles; Debian, with its longer release cycle, chooses instead to backport those fixes into the release.
You are claiming that the maintainership and quality-assurance community in Fedora is of lesser quality than Debian's, yet both of those distributions work closely with their upstreams to the best of their ability, as far as I know. So please, by all means, enlighten me: elaborate on how Debian manages to maintain its components "better" than the maintainers within Fedora do, and help Fedora's maintainers and its quality-assurance community understand what they need to improve to be on par with Debian's maintainership.
Posted Sep 12, 2014 8:44 UTC (Fri)
by anselm (subscriber, #2796)
[Link]
So? As far as I can tell that thread came up with a few sensible suggestions on how to make OpenVPN work with systemd, and the Debian OpenVPN maintainer seemed to like them.
As far as »systemd makes every distribution into Fedora« is concerned: The systemd developers seem to be looking for good solutions, not necessarily Fedora solutions. In point of fact some of systemd's approaches had been patterned on Debian (rather than Fedora) long before Debian declared that systemd would be the new default init system. If some distribution finds that whatever systemd does is too limited they can (a) lobby the systemd developers to adapt, which if there are good technical reasons they probably will, or (b) replace that particular bit of systemd (which, you know, is pretty modular as these things go) with one that is closer to their requirements.
Posted Sep 12, 2014 10:50 UTC (Fri)
by johannbg (guest, #65743)
[Link]
I have to ask why are you under the impression that Red Hat decides anything that happens in the systemd community?
Where does that thought originate from?
Posted Sep 1, 2014 17:38 UTC (Mon)
by dashesy (guest, #74652)
[Link]
Posted Sep 1, 2014 17:58 UTC (Mon)
by ibukanov (subscriber, #3942)
[Link] (6 responses)
As Lennart's proposal does not explain how the new architecture can provide such resilience against bugs, I do not see how it would simplify the job for Linux distributions. They still need to fix critical bugs in all applications they provide one way or another.
The good part of the proposal is that it wants to ensure that all updates are atomic and can be safely reverted. But then again, a 100% safe revert is not that useful if the only choice it offers is between a working but exploitable application and one that does not work.
Posted Sep 1, 2014 18:12 UTC (Mon)
by mjthayer (guest, #39183)
[Link] (1 responses)
Posted Sep 1, 2014 19:15 UTC (Mon)
by ibukanov (subscriber, #3942)
[Link]
Posted Sep 2, 2014 0:25 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link] (3 responses)
Security fixes must happen, there is no way around that. However, we need to make sure that they are done by people who have the expertise and focus on fixing them. Hence: programs like Firefox or Google Earth that you download from their respective websites usually come with a ton of bundled libraries, in the versions Mozilla or Google has tested their stuff with. Now, these vendors are actually not that interested in those libraries; they are primarily interested in their own app code.

So, the runtime concept is an attempt to put together a fixed set of libraries in a fixed set of versions that is basically immutable (modulo the minimal changes necessary to do CVE fixes), maintained by people who actually care about the library code. This way, you give the app vendors what they want (a fixed set of libraries, in specific versions, that they can test their stuff with and where they know it is exactly this version the stuff will ultimately run on), but at the same time you retain the ability to minimally update the libraries for CVEs, because the runtimes are still maintained by the runtime vendor, and not by a mostly-disinterested app vendor.
Posted Sep 2, 2014 5:59 UTC (Tue)
by ibukanov (subscriber, #3942)
[Link] (2 responses)
And that is the reason I am rather skeptical about the compatibility claims in the proposal. On the other hand, anything that can deliver 100% reliable and revertible updates, together with goodies like a read-only /usr, is extremely welcome.
Posted Sep 2, 2014 10:08 UTC (Tue)
by roc (subscriber, #30627)
[Link] (1 responses)
Posted Sep 7, 2014 17:51 UTC (Sun)
by pabs (subscriber, #43278)
[Link]
Posted Sep 1, 2014 18:16 UTC (Mon)
by mjthayer (guest, #39183)
[Link]
Posted Sep 1, 2014 18:45 UTC (Mon)
by NightMonkey (subscriber, #23051)
[Link] (35 responses)
Posted Sep 1, 2014 20:13 UTC (Mon)
by Wol (subscriber, #4433)
[Link] (34 responses)
Not because I use Gnome (I personally can't stand it), but because I'd like to have multiple stations on a single PC. That comes by default with systemd, apparently; as for OpenRC, well, I don't know - I got the impression it couldn't.
Dunno when, but I suspect I will be upgrading my dev system soon, in preparation for upgrading the main system later (I'm currently upgrading both systems to raid).
Cheers,
Posted Sep 1, 2014 22:57 UTC (Mon)
by NightMonkey (subscriber, #23051)
[Link] (3 responses)
Pardon? You mean like multiboot? Or different runlevels? Or radio stations? ;)
Posted Sep 1, 2014 23:16 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Posted Sep 2, 2014 0:01 UTC (Tue)
by NightMonkey (subscriber, #23051)
[Link] (1 responses)
Posted Sep 2, 2014 8:34 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
Cheers,
Posted Sep 2, 2014 19:18 UTC (Tue)
by jackb (guest, #41909)
[Link] (29 responses)
Likewise. Except in my case, I really just want the feature of automatically restarting crashed daemons, and automatically funnelling all their console output into the journal.

I've made a lot of progress converting a lot of my (virtualized guest) machines to systemd, but it's been incredibly difficult, and I can't convert the host machines due to bugs with systemd's dmcrypt/mdraid/lvm setup (apparently they only test on Fedora or something). I've had to deal with problems like how one release of systemd-networkd worked perfectly, but the next release consistently failed to set an IP address as a DHCP client. No errors, no warnings, no indication of what the problem might have been at all. After one update I had ~30 guest machines that couldn't get networking parameters, which I had to manually log into and set up with static IP addressing to get working again.

If it wasn't for the two features I listed at the beginning of this comment, I wouldn't even bother with systemd at all, and even given those improvements it's barely worth it to switch.
Posted Sep 2, 2014 20:20 UTC (Tue)
by NightMonkey (subscriber, #23051)
[Link] (22 responses)
Those are behaviors that shouldn't be in the init system, if you like UNIX philosophical models of "do one thing and do it well." These complicate an already complex job that is better done by task-specific, narrowly-scoped tools. Monit, Puppet, Chef, watchdog, and many other programs can do that simply-defined task and do it well. And fix those crashing daemons! Crashing should never become accepted, routine program behavior! :)
For the console output, can't syslog-ng (or rsyslog, or similar) do that?
Posted Sep 2, 2014 20:51 UTC (Tue)
by dlang (guest, #313)
[Link] (6 responses)
There's nothing preventing you from running the output of any program into logger (or equivalent) to have that data sent to syslog-ng or rsyslog, tagging stdout and stderr separately (see the sketch below).
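Spelled out in bash, one way to do that (daemon and tag names invented):

    # stdout and stderr each go to syslog under their own tag
    ./mydaemon > >(logger -t mydaemon.out) 2> >(logger -t mydaemon.err)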
Posted Sep 4, 2014 15:20 UTC (Thu)
by xslogic (guest, #97478)
[Link] (5 responses)
Posted Sep 4, 2014 15:51 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (4 responses)
Posted Sep 4, 2014 22:10 UTC (Thu)
by Darkmere (subscriber, #53695)
[Link] (3 responses)
Posted Sep 4, 2014 22:36 UTC (Thu)
by dlang (guest, #313)
[Link] (2 responses)
why do so few people see the problems with this sort of thing?
Posted Sep 4, 2014 23:25 UTC (Thu)
by anselm (subscriber, #2796)
[Link]
The non-forking daemon approach as recommended for systemd is what basically every init system other than System-V init prefers (check out Upstart or, e.g., the s6 system mentioned here earlier). It allows the init system to notice when the daemon quits because it will receive a SIGCHLD, and the init system can then take appropriate steps like restart the daemon in question. In addition, it makes it reasonably easy to stop the daemon if that is necessary, because the init process always knows the daemon's PID (systemd uses cgroups to make this work even if the daemon forks other processes).
The »double-forking« strategy is needed with System-V init so that daemon processes will be adopted by init (PID 1). The problem with this is that in this case the init process does receive the notification if the daemon exits but has no idea what to do with it. The init process also has no idea which daemons are running on the system in the first place and where to find them, which is why many »proper Linux daemons« need to write their PID to a file just so the init script has a fighting chance of being able to perform a »stop« – but this is completely non-standardised, requires custom handling in every daemon's init script, and has a certain (if low) probability of being wrong.
In general it is a good idea to push this sort of thing down into the infrastructure rather than to force every daemon author to write it out themselves. That way we can be reasonably sure that it actually works consistently across different services and is well-debugged and well-documented. That this hasn't happened earlier is only too bad but that is not a reason not to start doing it now. People who would like to run their code on System-V init are free to include the required song and dance as an option, but few if any systems other than Linux actually use System-V init these days – and chances are that the simple style of daemon that suits systemd better is also more appropriate for the various init replacements that most other Unix-like systems have adopted in the meantime.
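A minimal sketch of the contrast (daemon name invented):

    # systemd style: stay in the foreground; the supervisor keeps the PID
    # /etc/systemd/system/mydaemon.service
    [Service]
    Type=simple
    ExecStart=/usr/sbin/mydaemon --foreground
    Restart=on-failure

    # System-V style: the same daemon must instead fork twice, detach from
    # the tty, and write /var/run/mydaemon.pid so that "stop" can find it.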
Posted Sep 5, 2014 8:34 UTC (Fri)
by cortana (subscriber, #24596)
[Link]
Posted Sep 2, 2014 22:47 UTC (Tue)
by jackb (guest, #41909)
[Link]
That's the kind of philosophy that's useless to me. I have a lot of work to get done. I don't have time to fix all the broken daemons in the world. I welcome tools that help me get my work done and reject tools that get in my way.
Unfortunately systemd is complicated because it contains a mixture of both so I'm always ambivalent.
Posted Sep 3, 2014 10:26 UTC (Wed)
by cortana (subscriber, #24596)
[Link] (13 responses)
If they do it by running '/etc/init.d/foo status' then, no, they can't.
Posted Sep 3, 2014 15:26 UTC (Wed)
by NightMonkey (subscriber, #23051)
[Link] (12 responses)
Posted Sep 3, 2014 15:46 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (11 responses)
Posted Sep 3, 2014 16:09 UTC (Wed)
by NightMonkey (subscriber, #23051)
[Link] (10 responses)
More work is needed to make the relationship between users and developers LESS obscured, not more. When I reported a core Apache bug to the Apache developers in 1999, I had a fix in 24 hours, and so did everyone else. Now, if instead I just relied on some system to restart Apache, that bug might have gone unnoticed and unfixed, at least for longer.
Systems like this are a band-aid. Putting more and more complex systems in place as substitutes for bug fixing and for human-to-human communication is the problem.
Posted Sep 3, 2014 16:23 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (9 responses)
I'm not sure what you are talking about; nagios check_procs, for example, just shells out to ps to walk /proc. It is not the parent of the services and doesn't have the same kind of iron-clad handle on their execution state that something like runit or daemontools or systemd has.
> Systems like this are a band-aid.
You are never going to fix every possible piece of software out in the world so that it never crashes. The first step is to admit that such a problem is possible; then you can go about mitigating the risks - not by building fragile systems that pretend the world is perfect and fall apart as soon as something doesn't go right, and especially not as some form of self-punishment, causing pain in order to force bug fixes.
Posted Sep 3, 2014 16:34 UTC (Wed)
by NightMonkey (subscriber, #23051)
[Link] (8 responses)
I just think a lot of this is because of decisions made to keep the binary-distro model going.
I'm not interested in fixing all the softwares. :) I am interested in getting the specific software I need, use and am paid to administer in working order. There are certainly many ways to skin that sad, sad cat. ;)
Posted Sep 4, 2014 22:05 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (7 responses)
The sysadmin did a "shutdown -r". The system (using init scripts) made the mistake of shutting the network down before it shut bind down. Bind - unable to access the network - got stuck in an infinite loop. OOPS!
The sysadmin, 3000 miles away, couldn't get to the console or the power switch, and with no network couldn't contact the machine ...
If a heavily-used program like bind can have such disasters lurking in its sysvinit scripts ...
And systemd would just have shut it down.
Cheers,
Posted Sep 4, 2014 23:02 UTC (Thu)
by NightMonkey (subscriber, #23051)
[Link] (6 responses)
I don't care how "robust" the OS is. It's just being cheap that gets your organization into these kinds of situations (and that *is* an organizational problem, not just yours as the sysadmin.)
Posted Sep 4, 2014 23:15 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (5 responses)
And then it breaks because of a race condition that happens in only about 10% of cases.
That sysadmin in this dramatization was me, and the offending script was: http://anonscm.debian.org/cgit/users/lamont/bind9.git/tre... near the line 92.
Of course, REAL Unix sysadmins must live in the server room, spending 100% of their time tweaking init scripts and auditing all the system code to make sure it can NEVER hang. NEVER. Also, they disable memory protection because their software doesn't need it.
Posted Sep 4, 2014 23:20 UTC (Thu)
by NightMonkey (subscriber, #23051)
[Link] (4 responses)
Again, the answer to system hangs (which are *inevitable* - this is *commodity hardware* we're talking about most of the time, not mainframes) is remote power booters. I don't like living in the DC, myself.
Posted Sep 4, 2014 23:33 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
> Again, the answer to system hangs (which are *inevitable* - this is *commodity hardware* we're talking about most of the time, not mainframes) is remote power booters.
Posted Sep 5, 2014 4:09 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (2 responses)
That sounds dangerously close to internalizing and making excuses for unreliable software rather than engineering better systems that work even in this crazy, imperfect world. Duct-taping an RPS to the side of a machine is in addition to, not a replacement for, making it work right in the first place.
Posted Sep 5, 2014 4:43 UTC (Fri)
by NightMonkey (subscriber, #23051)
[Link] (1 responses)
I don't think those are actually separate tasks. All software has bugs.
Posted Sep 5, 2014 14:45 UTC (Fri)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 4, 2014 23:46 UTC (Thu)
by flussence (guest, #85566)
[Link] (5 responses)
I've found runit handles both those things flawlessly. Half of my system daemons are running under it — the other half's still on OpenRC, but that's mostly due to laziness.
Posted Sep 5, 2014 8:50 UTC (Fri)
by cortana (subscriber, #24596)
[Link] (4 responses)
Posted Sep 6, 2014 0:21 UTC (Sat)
by flussence (guest, #85566)
[Link] (3 responses)
Posted Sep 6, 2014 0:47 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Sep 6, 2014 9:38 UTC (Sat)
by cortana (subscriber, #24596)
[Link]
Posted Sep 6, 2014 15:45 UTC (Sat)
by rahulsundaram (subscriber, #21946)
[Link]
Posted Sep 1, 2014 21:02 UTC (Mon)
by roskegg (subscriber, #105)
[Link] (34 responses)
Around 2007, the Linux desktop was just about sweet, and poised to start taking on MacOS and Windows. Then, like a flood, Lennart and friends start dumping half-baked bloatware into the OS.
In the last 7 years, things haven't gotten better, we've just experienced a lot of code churn in the Linux world.
The BSDs don't have this problem, because they have integrated systems. Kernel and base go together, and things get fixed at the right abstraction layer. Coupling and cohesion are properly handled. It is doubtful that Lennart and his friends even know about the Demeter principle.
It seems Lennart is trying to impose a "base system" for the Linux kernel. Which is a good idea, but he is doing it badly, not in the Unix/BSD spirit, but in the spirit of VMS/Win32 and OSX.
If there was a simple way to port a modern web browser to Plan9, I'd seriously consider switching. When everyone is trying to re-implement Plan9 poorly, why not go back to the real deal? It fulfills the vision of the Unix creators.
Oh, and as of recently, Plan9 source is available under the GPLv2 license.
Posted Sep 1, 2014 21:22 UTC (Mon)
by roskegg (subscriber, #105)
[Link] (27 responses)
Posted Sep 2, 2014 1:48 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (26 responses)
Posted Sep 2, 2014 2:41 UTC (Tue)
by roskegg (subscriber, #105)
[Link] (25 responses)
Imagine if every new operating system had to implement PL/I and COBOL and FORTRAN to be usable. It's kind of like that.
The pcc compiler was a valiant effort, but network effects defeated it. Network effects are so strong that the top coders, even those who invent the technology, can't make much headway. Backward compatibility is a corker.
How hard is it to write an HTML5 renderer anyway? I mean, parsing HTML5 is easy. Parsing CSS is easy. But rendering... really, how much C code would it take? Seriously, why is C++ creeping into and infecting everything?
Posted Sep 2, 2014 7:44 UTC (Tue)
by ncm (guest, #165)
[Link]
C++ exists and grows because it *uniquely* meets a real need. It's far from a perfect solution to that need, but it persists because hardly anyone else seems to be trying. Rust seems to be trying, but it's a decade or two away from sufficient maturity. Worse, it could make some fatal mistake any day, not recognize it for a little too long, and then never get there. (Might have done already, I haven't kept up.)
Posted Sep 2, 2014 10:49 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link]
You say this, forgetting that both specify what happens when the syntax is broken (e.g., <i><b><p>text</i></b>) and that CSS is such a mess in the semantics department. Take a dive into the WebKit codebase sometime and tell me how C would have been simpler, easier to read, and shorter. I'm sure the world would be thrilled to get that monstrosity simplified.
Posted Sep 2, 2014 10:51 UTC (Tue)
by ibukanov (subscriber, #3942)
[Link]
10 man-years is a minimum if you want to render a substantial portion of the web...
Posted Sep 2, 2014 13:04 UTC (Tue)
by jwakely (subscriber, #60262)
[Link]
Maybe developers choose to use it. You know, the people who are actually doing the work not just complaining and criticising.
But no, probably not that, it's probably some kind of fungal infection like ophiocordyceps unilateralis that corrupts them away from your idea of purity.
Posted Sep 2, 2014 13:53 UTC (Tue)
by mpr22 (subscriber, #60784)
[Link]
An assortment of reasons, frequently involving one or more of the features that your tone suggests you think should be set on fire, indented six feet down, and covered with dirt.
Posted Sep 2, 2014 15:28 UTC (Tue)
by JGR (subscriber, #93631)
[Link] (18 responses)
Because it enables developers to get more done for less effort than C, with minimal performance penalties, better type safety/compile-time checking, and fewer bugs (or at least, different bugs).
Posted Sep 2, 2014 22:05 UTC (Tue)
by zlynx (guest, #2285)
[Link] (8 responses)
Anyone who has ever seen shared_ptr implemented in pure C will run to C++ and grab it with open arms.
Sure, it is done in the Python and Perl runtime interpreters. And it sucks. Macros, macros everywhere, and so very easy to make mistakes. So very easy to forget an increment or decrement at some important point, or to use the wrong macro.
And GObject. Come on. GObject is an argument in favor of C++ if I've ever seen one!
Posted Sep 3, 2014 9:50 UTC (Wed)
by dgm (subscriber, #49227)
[Link] (7 responses)
I would be interested in seeing some example of this. Really. I'm tempted to say it's simply impossible, because C lacks destructors.
Unless... you mean plain old reference counting, which is rather trivial and easy to understand. Much easier than, for instance, the subtle differences between all the smart pointer templates in the STL. And you can add BOOST and/or Qt or Microsoft's for extra fun. Easy-peasy.
Posted Sep 3, 2014 18:26 UTC (Wed)
by Trelane (subscriber, #56877)
[Link]
Posted Sep 3, 2014 18:52 UTC (Wed)
by zlynx (guest, #2285)
[Link] (2 responses)
Posted Sep 5, 2014 8:56 UTC (Fri)
by quotemstr (subscriber, #45331)
[Link] (1 responses)
I don't agree that manual reference counting is particularly hard. Practically the entire world does it, and it works fine.
I've read my share of interpreters. Reference counting isn't particularly hard, although you want to use tracing GC if you don't want your users ripping their hair out over cycles.
I've seen far more problems with shared_ptr.
Posted Sep 5, 2014 18:13 UTC (Fri)
by zlynx (guest, #2285)
[Link]
Posted Sep 4, 2014 5:25 UTC (Thu)
by jra (subscriber, #55261)
[Link] (2 responses)
Posted Sep 23, 2014 13:05 UTC (Tue)
by dgm (subscriber, #49227)
[Link] (1 responses)
Thanks for the pointer (pun intended).
Posted Sep 23, 2014 16:56 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link]
http://sgallagh.wordpress.com/2010/03/17/why-you-should-u...
Posted Sep 3, 2014 10:47 UTC (Wed)
by jb.1234abcd (guest, #95827)
[Link] (8 responses)
@jwakely
@mpr22
@JGR
Why do you spread misinformation?
Now, to support your claims:
jb
Posted Sep 3, 2014 11:29 UTC (Wed)
by jb.1234abcd (guest, #95827)
[Link] (4 responses)
Well, the market has spoken, thanks to a minority of people in the know.
Now be nice, and would the last one out please turn off the light :-)
jb
Posted Sep 3, 2014 23:22 UTC (Wed)
by nix (subscriber, #2304)
[Link] (3 responses)
(Given that you clearly have no idea even how long the committee has been in existence -- hint, it's more than twice as long as you suggested -- the likelihood of this seems low.)
Posted Sep 4, 2014 10:14 UTC (Thu)
by jb.1234abcd (guest, #95827)
[Link] (2 responses)
Firstly, the C++ Standards Committee is a technical, but also a political, body.
Secondly, you have to understand what C++ is, and its history.
Thirdly, there is an interesting inverse relationship between the expansion of the semantics and syntax of C++ (C++11, soon C++14), called "featurism" by some, and the rapid decline in C++ acceptance shown in the chart I quoted. The OOP part of "a new paradigm" contributed to it as well.
jb
Posted Sep 4, 2014 14:29 UTC (Thu)
by nix (subscriber, #2304)
[Link]
Parroting bits of D&E at me would be more impressive if there were any sign you'd understood it -- Stroustrup doesn't exactly display any signs there of wanting to cover any parts of C with dirt (other than decrying the use of the C preprocessor in C++ programs, which is pretty justified, I'd say).
btw, C *has* evolved since C++ was created: you even mention one example. Nobody much likes having the languages drift into incompatibility, but not because of some nefarious plot on the part of either committee: rather because nobody wants 'extern "C"' and link-compatibility to break.
If the C++ committee wanted to cover C with dirt, would the two committees really have spent so much time and effort making sure their newly-formalized memory models were to some degree compatible? And yes, though C11 did incorporate the model from C++11 rather than the other way round there was definitely attention paid on the part of the people defining the C++11 memory model to make sure they weren't specifying something that made no sense for C.
Posted Sep 5, 2014 14:03 UTC (Fri)
by jwakely (subscriber, #60262)
[Link]
Posted Sep 3, 2014 11:32 UTC (Wed)
by niner (subscriber, #26151)
[Link] (1 responses)
Posted Sep 3, 2014 11:59 UTC (Wed)
by jb.1234abcd (guest, #95827)
[Link]
Now please spread the knowledge instead of misinformation.
jb
Posted Sep 4, 2014 7:04 UTC (Thu)
by Wol (subscriber, #4433)
[Link]
Is that because of all the new web script kiddies that have appeared? Really, I don't know. But a shrinkage in percentage can easily hide a rise in absolute numbers. And if the target audience hasn't grown, then those stats are lying.
"Statistics tell you how to get from A to B. What they don't tell you is that you're all at C."
Cheers,
Posted Sep 2, 2014 18:38 UTC (Tue)
by daniels (subscriber, #16193)
[Link]
Posted Sep 1, 2014 21:33 UTC (Mon)
by JGR (subscriber, #93631)
[Link] (4 responses)
> Around 2007, the Linux desktop was just about sweet, and poised to start taking on MacOS and Windows.
> In the last 7 years, things haven't gotten better, we've just experienced a lot of code churn in the Linux world.
It's not 1980 any more, and if some other "Way" turns out to be better for modern systems/requirements than the "Unix Way" of old, then so be it. This implies some new ideas, experimentation and pushing of boundaries, rather than just sticking with whatever was cool 30 years ago.
Posted Sep 1, 2014 22:18 UTC (Mon)
by sramkrishna (subscriber, #72628)
[Link]
Blind allegiance to the Unix Way is an injustice to.. the Unix Way.
Posted Sep 2, 2014 1:25 UTC (Tue)
by efitton (guest, #93063)
[Link] (1 responses)
Posted Sep 2, 2014 6:36 UTC (Tue)
by eru (subscriber, #2753)
[Link]
Me too. I have sadly found that the last Mandriva releases to ship with KDE 3.x (2008 or thereabouts) were probably the best desktop distributions, ever. Featureful, the software was well integrated, easy to administer, yet lightweight enough to run satisfactorily on a Pentium-M ThinkPad.
I guess one problem is the people developing the desktops want to have fun and an interesting time doing it, therefore change things. But for end users the desktop (and the OS in general) is a "necessary evil". What they really are interested in are the applications. The desktop system is nothing but a way to manage them, and arbitrate screen space and other "desktop peripherals" (which may include removable disks, speakers, cameras or USB sticks). Otherwise it should stay out of the way.
Posted Sep 3, 2014 14:12 UTC (Wed)
by jb.1234abcd (guest, #95827)
[Link]
I assume you were around at the time of the introduction of GNOME 3?
It is judged that half of former GNOME 2 users switched to other spins (KDE, XFCE, even LXDE, and others), most never to return.
This is what happens when the "new wave" of devs in the Linux OS ecosystem think
Posted Sep 2, 2014 9:47 UTC (Tue)
by k3ninho (subscriber, #50375)
[Link]
In the details, the idea that [storage] is a file and [program] is a server, that's a clear sense of what Unix wanted anyway. We probably want to go from Bell Labs into Outer Space next, with distances and comms times factored into scheduling the work that [program] has to do.
Now that I think about it, why dedup and btrfs-send/recv when there exists Git Annex or Venti? Particularly when you can use the hash of the library's interfaces, graphics and sounds to find it within the git or Venti storage.
K3n.
Posted Sep 1, 2014 23:32 UTC (Mon)
by bojan (subscriber, #14302)
[Link] (13 responses)
According to this plan, in order to be able to run things properly, I will have to have a number of copies of pretty much the same thing loaded into memory, doing pretty much the same thing, all at once. Great.
And why? Because, after over a decade, we cannot agree on some basic stuff.
Communications breakdown indeed.
Posted Sep 2, 2014 0:28 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link] (12 responses)
Posted Sep 2, 2014 1:17 UTC (Tue)
by bojan (subscriber, #14302)
[Link]
Posted Sep 2, 2014 8:59 UTC (Tue)
by oldtomas (guest, #72579)
[Link] (9 responses)
So "lib compatibility" for an app means hash-equality of libs. Thanks for making that clear. Now I know this whole thing ain't for me.
I still cling to the old dream that an app has the responsibility of working with a whole range of environments (file system layout, minor variances in lib versions, etc.)
I don't care about the lazy app developers whose bloated monsters stop working because some file ain't in /etc/foo or because $libbar went from 1.2.7.15 to 1.2.7.16. I don't want to cater to that -- not on the boxes I'm responsible.
Posted Sep 2, 2014 16:00 UTC (Tue)
by dskoll (subscriber, #1630)
[Link] (3 responses)
+1 We work very hard to ensure our product runs on any Linux distro out there, as well as FreeBSD and pretty much any UNIX-like system. That's what proper programmers do.
Posted Sep 3, 2014 2:57 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (2 responses)
Depends on the layer you're targeting. If you're writing a Linux driver, caring about FreeBSD doesn't make much sense.
Posted Sep 3, 2014 6:58 UTC (Wed)
by oldtomas (guest, #72579)
[Link]
It not only makes others' lives easier (the poor FreeBSD gal/guy wanting to use said device will thank you if she can rip off parts of your code, which for in-house use would be perfectly OK), but it ends up making the code clearer, more readable, and in the long term healthier.
Now I'd grant you that writing a kernel driver might put the tradeoff at a different point than writing (say) an SMTP daemon, but "doesn't make much sense" seems to me too sweeping for any case.
It's more work (as dskoll put it), but it's definitely worth it. And every time I see code like that, I thank the likes of dskoll.
Posted Sep 3, 2014 15:44 UTC (Wed)
by dskoll (subscriber, #1630)
[Link]
Well, yes. :) I'm referring to applications, not kernel or driver programming.
Posted Sep 3, 2014 22:31 UTC (Wed)
by nix (subscriber, #2304)
[Link] (4 responses)
Posted Sep 4, 2014 7:04 UTC (Thu)
by oldtomas (guest, #72579)
[Link] (3 responses)
I think this is a very valid concern. Still, I think it's worth taking a step back and looking at it from some distance: tests, after all, are just a last line of defense. To keep software correct (or "as correct as possible"), we need first and foremost good interfaces. Meaning simple, understandable, well-designed. Small is paramount here - you can't fulfill a contract you don't understand (and bad examples abound!).
By all means, test -- but first you gotta get a feeling that your software is doing the right thing.
Posted Sep 4, 2014 14:15 UTC (Thu)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Sep 5, 2014 6:35 UTC (Fri)
by oldtomas (guest, #72579)
[Link] (1 responses)
Strongly agree: not yet, and by a far stretch.
But utopia is a place to "move towards" and not to "be in", anyway. So watch me making uncomfortable noises whenever I think the direction is wrong.
And yes, designing a good interface is definitely the hard part. But it's rewarding. And we as a profession should insist on getting that reward :-)
Posted Sep 5, 2014 16:17 UTC (Fri)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 2, 2014 21:35 UTC (Tue)
by martin.langhoff (subscriber, #61417)
[Link]
Is this up to date? https://btrfs.wiki.kernel.org/index.php/Deduplication -- it seems fairly limited. Yes, you can run "hardlink"-style programs telling btrfs that files are dupes instead of hardlinking them. However, that does not scale very well at all: (a) to get savings across VMs/containers you need to see "everything", and (b) "everything" in a large system is far too many files for that strategy.
Netapp filers have a fast-and-small hash for each block, computed and saved at write time, and use those to get a hint of dedupe candidates. This solves the issue of finding dedupe candidates across large volumes, without having a "user land" that "can see everything". The cost is a ~7% slowdown in writes...
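For the out-of-band case, the current tooling looks roughly like this (a sketch; duperemove is a real tool driving the extent-same ioctl described on that wiki page, but the paths here are invented):

    # hash file contents, then ask btrfs to share identical extents
    duperemove -dr /var/lib/machines /srv/runtimes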
Posted Sep 2, 2014 2:42 UTC (Tue)
by ofranja (guest, #11084)
[Link] (4 responses)
It seems to me that everything outlined in this article could already be done using bind mounts (with some specialized filesystem hierarchy), LVM (for snapshots), and namespaces/chroot. No hard dependency on any init system or any specific filesystem feature seems required. In fact, I wonder if the current systemd interfaces are not responsible for making it inherently harder to do.
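A sketch of those moving parts (volume and path names invented; run as root):

    lvcreate --size 2G --snapshot --name os-test /dev/vg0/os   # snapshot the OS LV
    mount /dev/vg0/os-test /mnt/os-test
    mount --bind /srv/runtimes/gnome-3.12 /mnt/os-test/usr     # swap in a runtime
    chroot /mnt/os-test /usr/bin/some-app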
Posted Sep 2, 2014 5:36 UTC (Tue)
by kloczek (guest, #6391)
[Link] (1 responses)
Posted Sep 3, 2014 17:56 UTC (Wed)
by ofranja (guest, #11084)
[Link]
Well, the Linux kernel is much more than two decades old technology, but here we are. :)
Posted Sep 3, 2014 0:21 UTC (Wed)
by kreijack (guest, #43513)
[Link] (1 responses)
The snapshots have a totally different scope: they are needed to take an *atomic* photo of a filesystem.
Posted Sep 3, 2014 18:09 UTC (Wed)
by ofranja (guest, #11084)
[Link]
The specific technology used is actually not that important, as long as it stays as a different layer - if you are creative enough, you could even use union mounts for that.
My point is: there are mechanisms to implement snapshots which do not create any dependency on a specific filesystem feature.
You might not agree that this is important - but for me, it's a must.
Posted Sep 2, 2014 2:51 UTC (Tue)
by roskegg (subscriber, #105)
[Link] (4 responses)
in the late 70s, Edsger Dijkstra and Tony Hoare advocated the
in summary, a "humbly-written" program will not unnecessarily waste the
End of Quote
Lennart and his crowd aren't humble. And it is damaging the whole Free Software movement, and will continue to do so until we route around the damage.
Posted Sep 2, 2014 17:42 UTC (Tue)
by flussence (guest, #85566)
[Link] (1 responses)
Allow me to respond to your biblical quotation tirade with another from the same mythology:
"Patches welcome."
Posted Sep 3, 2014 23:27 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Awesome! Can I have one? I think I may need it. (The mind-patching technology would be very useful too.)
Posted Sep 2, 2014 18:42 UTC (Tue)
by daniels (subscriber, #16193)
[Link] (1 responses)
Posted Sep 4, 2014 16:43 UTC (Thu)
by luzy (subscriber, #90204)
[Link]
Posted Sep 2, 2014 2:55 UTC (Tue)
by BradReed (subscriber, #5917)
[Link] (1 responses)
Say some game company writes a new game they distribute via Steam or HumbleBundle. How is this made into a "runtime?" Who keeps it updated?
If Kovid Goyal releases a new version of Calibre every week, who makes the "runtime?"
I personally don't see the problem this runtime-based system is "fixing."
Posted Sep 2, 2014 3:32 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 2, 2014 5:32 UTC (Tue)
by kloczek (guest, #6391)
[Link]
Bollocks. The main problem is not doing something "quickly" but handling any install or upgrade issue so as not to leave installed or half-installed resources in an unknown state.
Posted Sep 2, 2014 7:53 UTC (Tue)
by suckfish (guest, #69919)
[Link] (2 responses)
I rely heavily on maintaining various parts of my systems in subvols that are regularly cloned & manipulated in various ways (chroots/containerisation, backups, major upgrades). I believe I am far from unique here.
This relies on me being able to decide what constitutes a subvolume. As soon as a different idea gets imposed on my system (e.g., fragmenting my OS install into multiple subvols) my ability to use subvols to manage my system will be impeded (e.g., I could no longer create a standalone environment just by "btrfs subvol clone"ing my OS).
OTOH a stow-like (remember that?) system of isolating packages into their own directories sounds great to me, not least because I can choose to put a package into its own subvol if I wish.
Posted Sep 2, 2014 18:49 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link] (1 responses)
Posted Sep 3, 2014 6:37 UTC (Wed)
by suckfish (guest, #69919)
[Link]
If upstream packaging imposes a decision on how the filesystem is split into subvolumes, then the sysadmin no longer gets to choose.
This makes it much harder for the sysadmin to use subvolumes to maintain their system.
The bottom line is that sysadmins want to choose what 'btrfs subvol clone ...' clones, not have that choice imposed by upstream.
Posted Sep 2, 2014 11:13 UTC (Tue)
by helge.bahmann (subscriber, #56804)
[Link] (3 responses)
Standardizing containers for runtimes to catch odd-ball applications is one thing; declaring it the primary paradigm for application distribution is another. There is not even consideration of integration (which is the really hard problem), only of isolation (which is the trivial problem).
Posted Sep 2, 2014 18:48 UTC (Tue)
by mezcalero (subscriber, #45103)
[Link] (2 responses)
Chrome in your example would probably get full access to the XDG userdir download directory. It would be mounted into the app's sandbox at the exact same place it appears externally, so Chrome wouldn't have to care...
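In mount-namespace terms that is nearly a one-liner; a rough sketch (paths invented; run as root):

    unshare --mount sh -c '
      mount --bind /home/user/Downloads /srv/sandbox/home/user/Downloads
      exec chroot /srv/sandbox /opt/chrome/chrome
    '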
Posted Sep 3, 2014 23:32 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted Sep 7, 2014 18:19 UTC (Sun)
by helge.bahmann (subscriber, #56804)
[Link]
Posted Sep 2, 2014 14:08 UTC (Tue)
by mst@redhat.com (subscriber, #60682)
[Link]
Posted Sep 2, 2014 16:21 UTC (Tue)
by bokr (guest, #58369)
[Link]
Posted Sep 2, 2014 18:51 UTC (Tue)
by kim (subscriber, #73716)
[Link] (1 responses)
1) Requiring btrfs for the "runtime" is fine, but if, say, the GNOME or KDE devs switch their development model to "we provide binaries and the runtime and that's it", then I can foresee developers saying "well, since btrfs (or any other dependency) is required for the runtime, and we are the ones releasing the runtime, we can make use of btrfs features in our binaries as well (i.e. in GNOME, KDE, LibreOffice) directly". All that maybe with the attitude of "hey, if you want OUR program to run on YOUR system, then YOU have to provide a shim or an emulation layer for the btrfs features that are missing".
2) If, again, say KDE or GNOME or LibreOffice is only released by means of a binary runtime (and of course source code that, with time, will make no effort to compile with/for anything else than the runtime), then I find it doubtful that distros will be able to package gnome/kde/libreoffice the old-fashioned way. They will just bundle a huge "runtime_libreoffice.deb", for instance, with the whole image inside, thus defeating their purpose as a package distribution.
So when Lennart says that distros serve a certain use case, that's all fine, but his solution de facto means that distros will not be able to provide runtime-based apps (unless they do so in a clumsy way).
3) I don't believe for one second that KDE/GNOME/LibreOffice can commit to providing N and N-1 and LTS security support for their runtimes. Debian can do it because of the sheer number of Debian volunteers (seriously, thank you guys and girls). Red Hat and Canonical can do it because they have a business model and can somehow pay for it (and Canonical also benefits from the work of Debian in that area).
Posted Sep 3, 2014 9:59 UTC (Wed)
by HelloWorld (guest, #56129)
[Link]
Posted Sep 2, 2014 22:35 UTC (Tue)
by martin.langhoff (subscriber, #61417)
[Link] (12 responses)
Of course, with btrfs you get some pixie dust (dedupe, "mountpoints" without a disk partition) that makes it work well; but the underlying problems persist...
* bundled libraries and their (unpredictable level of) maintenance
It is the first move from the "systemd cabal" that leaves me scratching my head. Everything else so far has been IMO very well defined.
As an alternative view into a related problem-space...
Back at Sugar/OLPC we ended up building our own "bundles" for sugar apps (.xo packages). Our goal was to install sandboxed apps in the users' homedirs, not requiring root or sudo access.
The main alternative at the time, in my view, was to teach the yum/rpm toolchain to make truly "relocatable" packages, which could be installed under an arbitrary prefix. So you could install rpms in your homedir, without system-wide privileges.
This would allow you to install a base OS, then say "yum install --relocateprefix /foo/postgres9.2" and have all the dependencies under /foo/postgres9.2 . It would combine very well with Copr (PPAs in Debian/Ubuntu) to install experimental versions of an app, toolchain or desktop; and it would not be hard to imagine it being "yum install --relocateprefix /foo/pg9.2 --makeitasnapshot"; at which point we get to Lennart's dream scenario with a toolchain improvement that has many valuable uses.
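For what it's worth, rpm already has a (little-used) relocation mechanism for packages explicitly built as relocatable, which is roughly the seed of this idea; a sketch (package names invented):

    rpm --install --prefix /foo/postgres9.2 postgresql-server.x86_64.rpm
    # or remap individual payload paths:
    rpm --install --relocate /usr=/home/me/.local/usr something.rpm

    # only works if the package was built relocatable -- hence the
    # "teach the toolchain" part above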
A similar thing would be doable with the dpkg toolchain. It will take a long list of fixups (path, ld, a secondary yum/apt db, etc...), but there are no mysteries to solve, it is essentially a SMoP.
Posted Sep 3, 2014 14:36 UTC (Wed)
by mjthayer (guest, #39183)
[Link]
* Experience of which libraries link well statically and which not. E.g. glibc vs uclibc.
On the other hand I can also imagine popular hosting services adding build services which would improve the security problem. A developer who did not have the resources to follow all security updates could just let the service re-build and re-package the software whenever there was a security update to a bundled library, and they could use a standard (statically linked) library to check for and download updates at the hosting service on software start-up.
Posted Sep 3, 2014 15:17 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (10 responses)
I think the idea is that with this proposal, by using mounts and mount namespaces, you don't have to retro-fit the whole world to use some new lookup scheme to find what they want on the filesystem, every service sees a standard and consistent filesystem that just works as it always has.
Posted Sep 3, 2014 16:02 UTC (Wed)
by martin.langhoff (subscriber, #61417)
[Link] (9 responses)
Am I going to trust OpenSSL libs bundled in a dozen apps? When a significant bug is found, I will be depending on the responsiveness and expertise of a dozen teams -- some will patch/update early and correctly, some will mess it up, some teams will be dormant and never get updated.
This is not a theory, it happens on OSX today. I have had OSX as a secondary desktop env for ~10 years now, alongside Debian/Ubuntu/Fedora desktops.
On these mainstream Linux OSs we have a fantastic thing: even old, lightly-maintained applications get active care from packagers working as a team (with some level of consistency). Apps get updates to their libraries, and the packagers are knowledgeable enough to sort out minor compatibility issues, or to bring them to upstream's attention with good diagnostics in hand.
"Fat" app bundles forgo all that. It is a gigantic loss.
Posted Sep 3, 2014 16:13 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (8 responses)
Posted Sep 3, 2014 18:13 UTC (Wed)
by HelloWorld (guest, #56129)
[Link] (7 responses)
Posted Sep 3, 2014 19:02 UTC (Wed)
by raven667 (subscriber, #5198)
[Link]
That remains to be seen, there is room in this scheme for a whole marketplace of different pre-baked /usr filesystems with as much or as little included, how popular those runtimes are can help determine where the sweet spot is. There will surely be many different runtimes for different workloads under this scheme, very minimal ones for embedded and maximal ones for desktops, which can be built using the same tools that distros use now for yum grouplists and such.
There are definitely a group of people though who have a visceral reaction against anything they are not actively using being on their system and feel unclean, they will be resistant to a generic /usr because of the amount of stuff that must go in it. I figure for a generic desktop you can afford to spend maybe 10G on system libraries and apps given that most desktops have at least a 128G SSD, which is enough for 2 maybe 3 maximal /usr filesystems, for a generic server maybe 1-2G is appropriate and one /usr, although for a Docker-like container server it may be appropriate to have 50G or more with dozens of /usr filesystems for each distro and server framework, much like AWS images.
> This whole thing doesn't solve any real problem, and that's not a surprise given that the underlying problem is basically unsolvable: you want bugfixes for the libraries, but no regressions or incompatibilities. Given that there's no way to tell them apart, you're hosed.
You are right in that it doesn't solve this problem; it sidesteps it entirely, because the problem is practically unsolvable, as you state. By having a standardized scheme for managing multiple /usr filesystems it lowers the friction and increases the integration of a mix-and-match system, compared to running each app in a VM with nested kernels and limited shared system state (no shared services like D-Bus or X or Wayland). Instead of picking one way or the other, do both: cut the Gordian knot.
Posted Sep 3, 2014 19:08 UTC (Wed)
by martin.langhoff (subscriber, #61417)
[Link] (5 responses)
OSX and Windows have not, and the price they have paid is a "base" install that is bloated and includes much of the graphical stack even for servers, yet is so lean that it forgets to include important libraries, so app authors have to bundle them.
Wherever you draw the line, you are doing it wrong :-)
Fedora seems poised to draw some line between a "base" and "stacks", but that line is a lot more fluid because it is still underpinned by yum. The promise is that each stack will be better integrated and easy to install "whole", providing a better-defined platform. And still, you get your openssl security patches from one high-quality source.
Posted Sep 3, 2014 19:33 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (4 responses)
This whole thing is about not having to make a singular choice; you can have 2 or 3 or more different /usr systems which draw the line in different places for different apps. Over time, under this proposed system, I would expect a few natural groupings to fall out, and a feedback loop between app and runtime developers to negotiate what makes sense at what layer, so app developers are not ultimately responsible for system libraries they don't care about.
Posted Sep 3, 2014 20:46 UTC (Wed)
by martin.langhoff (subscriber, #61417)
[Link] (3 responses)
Looking at it from this perspective, things look even weirder. If application (and app stack) developers were keen on this kind of distribution, tools like Klik and ZeroInstall would be much more popular amongst app developers and users than they are today.
I honestly believe that most projects are happy to leave packaging, with all the specialized knowledge it entails about ABI changes in different distro releases, to folks on the distro side. The exceptions are very large projects, with the manpower and "ecosystem" to sustain that role. Those projects are hosting their PPA/Copr style repositories already -- but it's not that many, and you can see those repos are not that well maintained.
Posted Sep 3, 2014 21:04 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (2 responses)
I think the proposal is for the exact opposite, an app package will depend on exactly one runtime and will not claim to work with anything else, reducing the test matrix from all popular distros which a user might conceivably have installed to just the one type of system they built the app on in the first place.
> Klik and ZeroInstall would be much more popular
This does cover some of the same ground as those utilities, this is a discussion about how to solve this in a generic and standard way across all of the different kinds of Linux, maybe avoiding some of the pitfalls those utilities have had.
> I honestly believe that most projects are happy to leave packaging, with all the specialized knowledge it entails about ABI changes in different distro releases, to folks on the distro side.
Which is a system this proposal preserves, as the distros are the ones maintaining /usr volumes. As an application developer you get to choose which /usr fits your needs and can rely on the end user being able to easily get a copy of that /usr, supported by its distributor, when they want to run your application, without disturbing the rest of their system preferences.
Posted Sep 3, 2014 21:17 UTC (Wed)
by martin.langhoff (subscriber, #61417)
[Link] (1 responses)
a - that each bundle will match only one "base OS runtime", but that you expect app packagers to produce one bundle for each popular "base OS runtime"? In this case the test matrix is 1:1 for each bundle, but large for the app packaging team...
b - that each app dev team will publish _one_ bundle, matched to one "base OS runtime". In this scenario, it is the "end user"/sysadmin that might be in a situation of having to install and run a particular base OS runtime because the app bundle demands it.
Or perhaps both depending on manpower. I don't like an ecosystem that spans the gamut between these two dynamics.
Most app projects are under-resourced, so I suspect case "b" will be the most common case. So if on my desktop I'm running a dozen apps, they might in turn each be coupled to a specific OS runtime, so perhaps I'd be running bits and pieces of 6 OS runtimes.
Posted Sep 3, 2014 21:36 UTC (Wed)
by raven667 (subscriber, #5198)
[Link]
So yes, you might have 6 different base OS /usr runtimes installed to run 6 different apps, although it seems clear that there will be a lot of pressure to standardize on just a few /usr runtimes in each use case (Desktop, Server, Phone, Tablet, IVI, Network appliance, other embedded, etc.). And since this reduces the friction of maintaining different /usr spaces (no re-install or booting of VMs), it makes the process of shaking out what developers and users really want from /usr smoother.
Maybe this could lead to a resurgence of LSB, defining the ABI for an entire /usr for applications to target rather than a uselessly small subset. By changing where the pain points are for supporting applications a very different marketplace could stably emerge.
Posted Sep 3, 2014 5:32 UTC (Wed)
by mrdocs (guest, #21409)
[Link] (4 responses)
And to be provocative, most of them suck at it and find it a point of pain. I'd love to be proven wrong.
Packaging is an afterthought. I've dealt with this in two prior jobs: undoing an unholy mess because developer A set up his dev environment in such and such a fashion, and then dev B cannot get stuff to run right.
Moreover, developers and coders often approach packaging as a programming paradigm, when there are long-settled rules about how to package things properly on Linux. This creates unneeded complexity and fragility. I had one developer with a Ph.D. completely redefine every rpm macro in a spec file because he thought he was smarter than enterprise distros ;-)
Lastly, I'm wondering how my Fortune 1000 customers are going to look at this and ask themselves: "How can I audit this properly? How can I maintain golden images without going insane? Now I need another way to manage what is on my systems?"
Posted Sep 5, 2014 20:37 UTC (Fri)
by sjj (guest, #2020)
[Link] (1 responses)
I haven't really thought this thing through but I'm cautiously positive by default on rethinking systems. I do like systemd because it brings sanity into the twisted nest of hacks that is SysV init.
That being said, this smells of some kind of desktop environment oriented hackery I'm not at all sure is useful on stable servers.
Posted Sep 5, 2014 21:03 UTC (Fri)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 8, 2014 14:06 UTC (Mon)
by ebassi (subscriber, #54855)
[Link] (1 responses)
Shockingly, when "packaging" is a set of policies that change in order to ensure that "packaging" cannot be automated as it should be, in order to keep fiefdoms around and OCD levels of control over them, the people actually writing the software have issues complying with all the little rules and incompatibilities.
Posted Sep 8, 2014 19:02 UTC (Mon)
by tao (subscriber, #17563)
[Link]
The policies are there to make sure that all packages can either co-exist or, in cases where they're by nature conflicting, said conflicts are formalised in terms of package dependencies/conflicts/etc.
Sure, there are things you'll need to do manually (generally only once per package, though, unless the package drastically changes), but for the average Debian developer most of the effort is spent on:
* Fixing things that should've been done upstream (symbol versioning, portability, ... even things you'd think would obviously be included with every piece of software, such as manual pages)
* Backporting fixes when upstream cannot be bothered to release security fixes for older versions of their software
* Modifying code to be able to run with older versions of libraries, to ease situations where software A, B, C all depend on library version 2.4 but software D depends on library version 2.5
* Reinstating functionality that upstream has removed with the intent to replace it with something better (but have not yet done so)
* Ensuring that the end product is actually legally redistributable (care must always be taken so that the license of software A is compatible with the licenses for library B, C, D, E, ...)
Most of all though, the main reason upstream developers are generally not good packagers is that the developer has software A to care for (and only needs to make sure that it works as long as its dependencies are available), but the packager needs to ensure not only that A works, but also that A doesn't cause breakage in totally unrelated packages B, C, D, ...
The packagers also need to worry about annoying little details such as unrelated software sharing the same name (git, node, epiphany for instance).
PS: I cannot speak for other distributions, but none of the packaging policies in Debian change to "ensure that packaging cannot be automated". If you have something you believe to be a counter-example, please share.
Posted Sep 3, 2014 8:45 UTC (Wed)
by lottin (guest, #98688)
[Link] (1 responses)
I don't know the details, but in a recent demo I saw it looked like it was based on "profiles" which allowed you to install multiple incompatible versions of a package system-wide or in the user's home directory.
Posted Sep 5, 2014 22:44 UTC (Fri)
by thoughtpolice (subscriber, #87455)
[Link]
But rather than using subvolumes or btrfs, you instead calculate hashes of system dependencies (roughly speaking) based on their own input dependencies, and use that to separate things on the filesystem - this means 'firefox 29' and 'firefox 30' can coexist, because they have separate hashes, and are stored on the FS in separate locations.
The final 'system' is then 'built' using symlinks to these paths, as POSIX (IIRC) guarantees symlink renames are atomic on any filesystem. So the package manager is by design transactional on any filesystem. This means you can rollback any transaction like installing a package that might break.
It also means you can do things like take the 'closure' of a package (the package + the transitive closure of its dependencies) and ship them to other machines using SSH or whatnot.
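A sketch of the atomic-rename trick described above (hypothetical paths; error handling abbreviated):

#include <stdio.h>
#include <unistd.h>

/* Repoint the "current system" symlink at a new store path.  Because
   rename(2) over an existing name is atomic, readers see either the
   old profile or the new one, never a half-written state; rollback is
   just another rename to an older target. */
int switch_profile(const char *store_path)
{
    const char *live = "/profiles/current";      /* hypothetical */
    const char *tmp  = "/profiles/current.tmp";

    unlink(tmp);  /* ignore failure if the temp name doesn't exist */
    if (symlink(store_path, tmp) == -1) { perror("symlink"); return -1; }
    if (rename(tmp, live) == -1)        { perror("rename");  return -1; }
    return 0;
}

int main(void)
{
    return switch_profile("/store/abc123-system") ? 1 : 0;
}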
The Guix developers are also working on a distro (I don't remember the name) based on Guix and the GNU Daemon Managing Daemon (dmd), an alternative to systemd.
In contrast, Nix has NixOS, built on the original Nix package manager. NixOS uses systemd for its init system (and used upstart before).
(For full disclosure, I'm a NixOS developer as well.)
Posted Sep 3, 2014 13:11 UTC (Wed)
by HelloWorld (guest, #56129)
[Link] (1 responses)
In Lennart's new model, an “app” is essentially allowed to depend on a single collection of libraries (“runtime”), which will receive updates from the vendor of that collection, leading to the same problem as before: inadvertent compatibility breaks. And of course the “runtime” is never going to ship all the libraries a given app needs, so libraries will have to be bundled with the app, leading to the aforementioned problem of missing security updates and bugfixes. Really, how does this help anyone? And that's not the only problem I see. It says that a “framework” is supposed to ship everything that is necessary to develop an “app” for a given “runtime”, including compilers and the like. What if I develop an application in a language that the “framework” doesn't provide for, or if I need a newer compiler than what ships with the “framework”?
I think I must be missing something, because so far most of what the systemd crew has shipped seemed reasonable and well thought-out. But to me, it doesn't seem that way this time around.
Posted Sep 3, 2014 16:08 UTC (Wed)
by raven667 (subscriber, #5198)
[Link]
You are right that because the /usr runtime is fixed and read-only it needs to define what is included and what is left out; not every library in existence is going to be in every runtime, so for some apps this will lead to library bundling. But for others, the ability to specify which /usr the app runs on means that they can now depend on it and stop bundling. Until this is tried and some lines are drawn as to what is commonly included in /usr, it is unclear how this will play out.
As far as developing goes, if you want to develop using particular tools then you need to use a framework (a whole /usr filesystem) that includes those tools, or you need to include those tools with your app. How much of a problem this actually is in practice depends on how maximal the frameworks are in what tools they include. The /usr filesystems will be provided by distros under this scheme, so if your tools are currently packaged by a distro you have a better chance that they will be commonly included in frameworks.
Posted Sep 3, 2014 19:19 UTC (Wed)
by xxiao (guest, #9631)
[Link] (2 responses)
Posted Sep 3, 2014 19:35 UTC (Wed)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 4, 2014 21:00 UTC (Thu)
by mezcalero (subscriber, #45103)
[Link]
Lennart
Posted Sep 3, 2014 21:57 UTC (Wed)
by gvy (guest, #11981)
[Link] (4 responses)
Posted Sep 3, 2014 22:11 UTC (Wed)
by corbet (editor, #1)
[Link]
Posted Sep 3, 2014 22:18 UTC (Wed)
by mjg59 (subscriber, #23239)
[Link] (2 responses)
Posted Sep 4, 2014 1:55 UTC (Thu)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
Posted Sep 5, 2014 12:33 UTC (Fri)
by carenas (guest, #46541)
[Link]
The funny thing is that I self-censored myself when he was introducing systemd and decided to deploy it in /usr/bin. I had to admit, too, that I was not sporting the required Unix beard to support my point, but it was obvious that there could be less friction around this if we were a little more willing in general to listen to users' (or other developers') concerns in a positive light.
Looking forward to meeting Junio Hamano sometime, and buying him a beer instead.
Posted Sep 4, 2014 21:44 UTC (Thu)
by jb.1234abcd (guest, #95827)
[Link] (3 responses)
jb
Posted Sep 4, 2014 22:51 UTC (Thu)
by JGR (subscriber, #93631)
[Link] (2 responses)
Posted Sep 21, 2014 8:21 UTC (Sun)
by jb.1234abcd (guest, #95827)
[Link] (1 responses)
http://uselessd.darknedgy.net/
jb
Posted Sep 21, 2014 20:36 UTC (Sun)
by anselm (subscriber, #2796)
[Link]
What we have here are the buggy whip manufacturers railing against the ascent of the automobile. Give them another two or three years and the issue will have taken care of itself.
Posted Sep 5, 2014 1:09 UTC (Fri)
by ms-tg (subscriber, #89231)
[Link] (2 responses)
I perceive that adoption of this proposal will facilitate the construction of systems out of separately maintained, read-only components, each of which can be separately distributed and updated.
My favorite implication of this proposal is the market effects it will have. Popular "substrates" of a system will have many others depending on them, focusing community support and security attention in useful places.
Distros can become sources of some of these substrates, but perhaps get out of the business of being responsible for all of them.
I hope it happens!
Posted Sep 5, 2014 1:35 UTC (Fri)
by dlang (guest, #313)
[Link] (1 responses)
Or this proposal will facilitate the construction of systems out of separately unmaintained, read-only components, each of which is separately distributed and abandoned.
I wonder how much experience correlates to opinion between the two extremes?
Posted Sep 5, 2014 4:17 UTC (Fri)
by raven667 (subscriber, #5198)
[Link]
Posted Sep 8, 2014 11:04 UTC (Mon)
by etienne (guest, #25256)
[Link] (2 responses)
Posted Sep 8, 2014 14:47 UTC (Mon)
by raven667 (subscriber, #5198)
[Link] (1 responses)
Posted Sep 8, 2014 15:30 UTC (Mon)
by etienne (guest, #25256)
[Link]
I was speaking of "gcc -Wl,-rpath=/usr/lib-rhel6.5" so no change needed to ld.
> changing how userspace works
Point taken, but it will be difficult to maintain and to keep in sync...
Posted Sep 8, 2014 18:45 UTC (Mon)
by jb.1234abcd (guest, #95827)
[Link]
Adopting it to Linux, UNIX, BSD* ecosystems (independent projects, distros,
The argument that the current status of that ecosystem is not better does not take into account the model of its development, which is mostly voluntary, contributory, and of bazaar type. Also-ran distros and unmaintained projects are a natural part of it if you accept the idea of market forces at work, or just the freedom to experiment and educate.
So, learn from the systemd "voluntary enforcement" debacle, please.
jb
their lessons very well from Microsoft...
dependencies the way "systemd" does, and the way this does, is a Bad Idea®
software like environmental models -- I have only a few thousand clients worldwide, I don't have the resources for all these builds on who knows how many versions of what distributions, and the clients don't either -- it looks like he's freezing us out: Linux for the masses (only), and niche users can just forget about it.
> software like environmental models -- I have only a few thousand
> clients worldwide, I don't have the resources for all these builds on
> who knows how many versions of what distributions, and the clients
> don't either -- it looks like he's freezing us out: Linux for the
> masses (only), and niche users can just forget about it.
Obviously, this relies on other distributions to support such bundles. Otherwise, it just becomes one more package you need to provide.
almost always comes from not having thought the thing through -- and I've seen lots of it over the course of thirty years' experience with systems architecture.
# awk idiom: read each line of external command cmd's output, accumulating it in xx
while ((cmd | getline x) == 1) xx = xx x "\n";
close(cmd);  # close the pipe so the same cmd string can be run again later
So static libraries are only a workaround until people and distributions behave more stably.
Security
(b) most people weren't using it
Wol
Am I missing something? It seems to me that any binaries compiled against the version of the header that didn't have the volatile in it may display incorrect behaviour when accessing it, and on that basis it seems reasonable to call it an ABI break (since the only way to fix the break is to recompile).
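A sketch of the failure mode being described (hypothetical header and names):

/* shared.h originally declared:      extern int stop_requested;
   the fixed header declares:         extern volatile int stop_requested;
   The code below was compiled against the *old* declaration: */
extern int stop_requested;
extern void do_work(void);

void worker(void)
{
    /* Without volatile, the compiler may load stop_requested once and
       hoist the load out of the loop, so a store from a signal handler
       or another thread is never observed.  Only recompiling against
       the new header fixes it, which is why calling the change an ABI
       break seems fair. */
    while (!stop_requested)
        do_work();
}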
in the WRF weather-forecast modeling system between WRF-3.0.1 and 3.0.2. Or at least, there were that many that I found and had to re-program for...
They are free provided you have given your monetary offering to the shrine of Jobs recently enough.
I have a laptop running Kubuntu. Occasionally, I can persuade it to make a wireless connection, but mostly it just doesn't - no messages, just nothing happening. It also doesn't properly manage the wired network, especially if the cable is unplugged or replugged. NetworkManager is essentially just a black box that might work, and if it doesn't then you're screwed. I eventually persuaded it to kind-of work without NM, though it then required some manual effort to switch between wired and wireless networks.
For several consecutive releases, the Ubuntu live CD images didn't have fully working wired networking out of the box, due to some DHCP cock-up. Sadly I no longer remember the specifics for certain, but I *believe* it was ignoring the DNS setting specified by the DHCP server, and manually disabling the DHCP client and editing resolv.conf fixed the problem. There may have been something more to it than that, like getting the default gateway wrong; I don't recall.
Not every package, but at least _something_. And dedup partially solves the space problem.
There are no standards right now. None. And LSB has shown us that when a group of vendors dictates their vision of a standard, distros simply ignore them.
I'm hoping that the "market" is going to dictate the standard. Application developers will prefer to use a runtime that is well-supported and easy to maintain. And perhaps in time it will become _the_ runtime.
Exactly. However, as an app developer I won't have to play against network effects of distributions - my software will run on ALL distributions supporting the /usr-based packaging.
No. Most developers want pretty much the same basic feature set with small custom additions.
No they don't. The distro model is exclusionary - I can't just ship RHEL along with my software package (well, I can, but it's impractical). So either I have to FORCE my clients to use a specific version of RHEL or I have to test my package on lots of different distros.
Bullshit. The minimal Docker image for Ubuntu is less than 100MB and it contains lots of software. There's no reason at all for the basic system to be more than 100MB in size.
Who cares. All existing software, except for high-profile stuff like browsers, is insecure like hell. Get over it.
> people with "visions" instead of trying to be a common ground of disparaging APIs.
http://blogs.gnome.org/tthurman/2009/07/02/cascade-of-att...
running the "module" run-time-package-management system, for
which there are over two dozen different compiler versions,
all of them declared incompatible with each other by the Powers
That Be, each with its own shared libraries, so that whether
a program you've built will run or not depends upon whether
the current state if "module laod" is correct --- and the odds
are that it is not!
> share the data on disk, but also later on in RAM.
> statically linking all binaries while de-duplicating their embedded object files by means of btrfs-only features. :/
The fact that these subvolumes share some files is like having a hard link between those files (remember, these subvolumes are RO, so a hard link works).
Please, can we do without this kind of stuff? If you have specific technical objections then by all means express them. But this kind of comment helps nobody.
Agreed, and LWN should stop tolerating this sort of thing. LWN comment threads on anything controversial have become a sewer, which is particularly disturbing because we are paying for this. If the LWN staff don't want to censor, they could let subscribers upvote and downvote comments Slashdot-style and people can then set a score threshold for what they want to see.
Voting is one of those things that has been on the list for a while. One of these days.
Comment quality
I'd rather have a setting where I can just hide all the comments by guest accounts than Slashdot-style comment voting. If people want to troll, make them contribute something to running LWN.
I agree that if anything is to be done, it should be the ability to filter guests' comments instead of voting.
It's then GNOME's job to do security fixes, and push out minimally modified updates to GNOME_3_38. Then one day, you actually invest some time, want to make use of newer GNOME features, so you rebase your app onto GNOME_3_40.
They've spent roughly the last two decades acting like the worst stereotypes of teenagers
If we had runtimes, which would be distributed directly to end-users by upstream, the potential benefit of fixes would increase significantly. Thus one would at least hope it would happen more frequently.
My hope at least is that there will be KDE's and GNOME's and maybe a couple more, but that would be it. And I think this will be self-regulating a bit, since these will be well maintained, will get frequent CVE updates for a long time, and are likely already installed on your system when you first installed it.
Going by the past behaviour of Gnome, this is wishful thinking.
Just hoping it won't become a case of perpetual motion, with the same story repeating in 10-20 years ;-)
Quality control
especially the complete lack of quality control it enables
And quite frankly I don't think it is a really worthy goal, and it's something people seldom test. In particular, since NFS setups are usually utter crap, you will probably find more NFS setups where locking says it works but actually doesn't and is a NOP.
This might be true in some interoperable situations, but it hasn't been true of Linux-only deployments for a very long time indeed (well over a decade, more like a decade and a half).
And yeah, this happens, because BSD locks are per-fd and hence actually usable.
That wouldn't help. I think he's suggesting just returning -ENOLCK to BSD locks unconditionally. I agree that that's cleanest but in practice I suspect it would break a lot of existing setups.
Given how awful POSIX locks are (until you have a very recent kernel and glibc 2.20), and how sane people therefore avoided using the bloody things, I'd say it would break almost every setup relying on locking over NFS at all. A very bad idea.
A BSD lock will block a POSIX lock, and vice versa. (At least that's what happens locally; no idea what the BSDs' NFS clients do.)
Huh. A FreeBSD man page agrees with you:
https://www.freebsd.org/cgi/man.cgi?query=flock&sektion=2
http://man7.org/linux/man-pages/man2/flock.2.html
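To make the difference concrete, a minimal local sketch of the two lock families on Linux (hypothetical path; error handling mostly elided):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/file.h>

int main(void)
{
    int fd1 = open("/tmp/lockdemo", O_RDWR | O_CREAT, 0600);
    int fd2 = open("/tmp/lockdemo", O_RDWR);  /* second description */

    /* flock(): the lock lives on the open file description, so even
       the same process is refused on a second descriptor. */
    flock(fd1, LOCK_EX);
    if (flock(fd2, LOCK_EX | LOCK_NB) == -1)
        perror("flock(fd2)");                 /* EWOULDBLOCK */

    /* fcntl(): the lock belongs to the process, so taking it again on
       another descriptor just succeeds - and closing *any* descriptor
       of the file drops it, the classic POSIX-lock trap. */
    struct flock fl = { .l_type = F_WRLCK, .l_whence = SEEK_SET };
    fcntl(fd1, F_SETLK, &fl);
    if (fcntl(fd2, F_SETLK, &fl) == 0)
        puts("fcntl lock also granted via fd2");

    /* Note: on local Linux filesystems the two families do not
       interact at all; the FreeBSD page above says they do there,
       which is the discrepancy being discussed. */
    close(fd2);
    close(fd1);
    return 0;
}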
With a FreeBSD server and Linux client, NFSv4 ACL support isn't working for me, though the standard ownership and perms work correctly. I put this down to the Linux NFS client being less sophisticated and/or buggy, but I can't rule out some configuration issue.
* I think its requirements for strong authentication are getting in my way
* silly-rename and the rest are intrinsic
* spurious but persistent -ESTALEs from nested exports and exports crossing host filesystems
One of the problems is that it removes any pressure from upstreams to adhere to any overarching standard like the LSB; they can just code against whatever distribution they have. It also removes the pressure to make sure their code runs on newer versions of that distribution. That may not seem so important, but in fact there are many libraries that need to be updated when the kernel and its drivers are updated. Just an example: the binary nVidia driver needs to be the same version as the nVidia GLX library, otherwise 3D acceleration does not work. Old versions of Glibc might not work correctly on the latest Linux kernel. Or what about when you want to run multiple apps, and some use ALSA directly, others PulseAudio? There are good reasons why you want to stick to a single distribution.
Old versions of Glibc might not work correctly on the latest Linux kernel.
I have seen the rise of "composer" in php-land, which uses a somewhat-related scheme (each app magically gets all the dependencies it needs) and the times for dependency resolution and download are ugly.
Bloat: even in the age of large SSDs, keeping 5 versions times 5 OS packages installed "just for fun" is not something I'd like to do.
conditional on booted kernel name, and a few other things, but not chrooted, wondering what needful thing might I make inaccessible.
entire boot process up to presentation of console login prompt?
systemd has an actual ABI compatibility promise: http://www.freedesktop.org/wiki/Software/systemd/Interfac...
Forcing everyone (including Fedora!) to reimplement broken features is a nice side effect of systemd's adoption.
Care to point out this problem?
I'm browsing debian-devel, and I don't see anything worse than the usual bugfixing cycle.
These 'great features' being? Fragmentation for the sake of fragmentation? Buggy init scripts?
Debian Stable is magnificently reliable and upgrades smoothly from one release to the next without reinstalling.
systemd ignores inittab and therefore any claims of "drop-in compatibility" are meaningless.
Forcing systemd on unwilling Debian users is an egregious violation of Debian's Social Contract.
Leaving servers inaccessible or even unbootable after an upgrade is distinctly below the standard achieved by previous Debian upgrades.
a) To use later versions of libraries than distros are shipping. This lets us fix security and other bugs faster.
b) To expose interfaces and functionality that aren't widely deployed yet and possibly won't ever go upstream.
c) To increase consistency across platforms. This helps reduce our bug load.
Except, as a Gentoo user, I want to use systemd!
Not because I use Gnome (I personally can't stand it), but because I'd like to have multiple stations on a single PC. That comes by default with systemd, apparently; with OpenRC, well, I don't know - I got the impression it couldn't.
Those are behaviors that shouldn't be in the init system, if you like UNIX philosophical models of "do one thing and one thing well." These complicate an already complex job, better done by task-specific narrowly-scoped tools. Monit, Puppet, Chef, watchdog, and many other programs can do that simply defined task and do it well. And fix those crashing daemons! Crashing should never become accepted, routine program behavior! :)
Oh, I've witnessed mainframe hangups. Remote reboot is nice, but that server was from around 2006 so it didn't have IPMI and the datacenter where it was hosted offered only manual reboots.
Seriously, why is C++ creeping into and infecting everything?
In particular, memory management and string handling are fairly key activities in a browser engine, and in C these are labour-intensive and therefore error-prone. (Not that C++ is some language of perfection, but it is better in this regard).
"C++ exists and grows because it *uniquely* meets a real need."
"Maybe developers choose to use it. You know, the people who are actually doing the work not just complaining and criticising."
"An assortment of reasons, frequently involving one or more of the features that your tone suggests you think should be set on fire, indented six feet down, and covered with dirt."
This LWN.net site has some ambition to become a source of good technical knowledge about Linux, UNIX, and Computer Science in general.
http://imagebin.org/318679
C++ popularity as a language of choice has declined from 18% in 2004 to 4.7% as of today.
If anything, this is a disaster!
apologists who for more than 10 years tried to cover C with dirt (C++, misinformation, and politics) in order to deny it improvements where needed and make it ready for a graveyard.
domain, but obviously require some mental effort to do it?
Otherwise you will not understand what people are talking about and react like a cat whose tail was stepped on.
You should understand the origin of the term "designed by committee".
C++ was built on C; Stroustrup originally called it "C with Classes".
What it means is that the majority of C became a "subset" and a hostage of C++.
So, it is clear that C++, through its governing body, the C++ Standards Committee, suffers from a split personality disorder - letting C evolve would shake the C++ boat. It would create C and C++ incompatibilities (C99, anybody?) that are not desired. This works both ways.
According to Stroustrup, there is another language trying to emerge from C++. The question is: with or without the C "subset" held hostage by C++?
http://www.tiobe.com/index.php/content/paperinfo/tpci/ind...
> C++ popularity as a language of choice has declined from 18% in 2004 to 4.7% as of today.
> If anything, this is a disaster!
Wol
This frankly does not encourage me to look to Plan9 for a solution.
Personally I suspect that this is somewhat optimistic.
All that "code churn" has resulted in better functionality and usability for end users. This is more important that paying homage to the "Unix Way".
Some honestly and sincerely feel like there is less usability for end users now than 7 years ago. I certainly fall in that camp.
> Personally I suspect that this is somewhat optimistic.
There was a downloads counter on Fedora's site. It included their default Gnome 3 desktop edition in a distant place, after KDE and XFCE.
Btw, the current counter does not show actual download numbers and does not include Gnome 3, just the spins. Oh well, if we do not like the message, let's kill the messenger ...
The other effect was that the officially unsupported Gnome 2 was resurrected, which tells us a lot.
that users are just a bunch of exorcists.
Poettering: Revisiting the fragmentation
I beg to differ: programming is abstraction. It always pays off to (a) make as few assumptions as might make sense and (b) make as many of those assumptions explicit as you ever can.
I still cling to the old dream that an app has the responsibility of working with a whole range of environments (file system layout, minor variances in lib versions, etc.)
It would be nice... but in practice this means an exponential explosion of test environments, and what it really means is that your personal environment has never been tested by anyone but you, ever. Which means you get your own personal bugs. Now, I like this -- it means I get to help fix those bugs, and improve the quality of the software for everyone -- but for end users? Not so good.
Each snapshot adds its own overhead and slows down everything.
LVM, in its fundamentals, is almost two-decade-old technology.
> done by using bind mounts (w/some specialized filesystem hierarchy), LVM (for snapshots) and namespaces/chroot.
For what Lennart needed, it is simpler to hardlink the common files during the "package" installation: you need a database of hashes and paths, and when a file's hash is already in the database you create a hard link instead of a copy of the file. This should work because these trees are RO.
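A sketch of that hash-and-hardlink pass (toy in-memory database and a non-cryptographic FNV-1a hash for brevity; a real tool would use a strong hash and byte-compare the files before linking):

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

/* FNV-1a hash of a file's contents. */
static uint64_t fnv1a_file(const char *path)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    FILE *f = fopen(path, "rb");
    int c;
    if (!f) return 0;
    while ((c = fgetc(f)) != EOF)
        h = (h ^ (uint8_t)c) * 0x100000001b3ULL;
    fclose(f);
    return h;
}

/* Toy hash -> canonical-path database. */
static uint64_t db_hash[1024];
static char db_path[1024][4096];
static int db_len;

/* If this content hash was seen before, drop the duplicate and
   hard-link it to the canonical copy; since the trees are read-only,
   the shared inode can never diverge afterwards. */
static void dedup(const char *path)
{
    uint64_t h = fnv1a_file(path);
    for (int i = 0; i < db_len; i++)
        if (db_hash[i] == h) {
            unlink(path);
            link(db_path[i], path);
            return;
        }
    if (db_len < 1024) {
        db_hash[db_len] = h;
        snprintf(db_path[db_len], sizeof db_path[db_len], "%s", path);
        db_len++;
    }
}

int main(int argc, char **argv)
{
    for (int i = 1; i < argc; i++)
        dedup(argv[i]);
    return 0;
}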
This is what Lennart consistently Misses:
"humble-programmer" philosophy, which says that humans tend to
overestimate their ability to handle complexity in software and
consequently one should strive (in addition to one's other objectives)
to pessimize the complexity (measured in lines of code) of the software
one relies on. often, this is achieved by finding a novel way of
viewing or conceptualizing the problem (like per-process namespaces).
they pointed out the programmer who can meet a set of requirements with
fewer lines of code is the better programmer because a smaller program
will usually be easier for the user to control, more likely to behave
the way the programmer thinks it will behave and easier for future
programmers to modify to do something the original programmer did not
provide for.
time and the attention of the programmer trying to modify it or the user
trying to control it.
OMG are they going to blow it up?
BTW: these guys should really have a look at Solaris IPS, which has been around for more than 5 years. Incredible, but it seems none of these "inventors" has been looking at what was done up to now to solve similar issues.
Again the NIH Linux syndrome .. what a shame :->
Upstream projects are notoriously understaffed and underfunded.
Poettering: Revisiting how we put together Linux systems
I think that the only way to do this properly is to get upstream involved, and if Lennart's proposal achieves that, I'm all for it.
Poettering: Revisiting how we put together Linux systems
* infrastructure and ABI changes in the underlying OS
* trust -- instead of trusting one distro team, I have to trust N teams dealing with bundling, security and distribution
Poettering: Revisiting how we put together Linux systems
* Experience of what ABIs one can depend on on a random system. E.g. the Linux kernel system call interface, the glibc dynamic interface (as long as one knows a few tricks).
* Avoiding statically linking to high-frequency-update libraries. E.g. piping and shelling out to openssl(1) rather than linking in the library (see the sketch below).
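As a sketch of that last point, here is one way to hash data by shelling out to openssl(1) instead of linking libcrypto, so a distro security update to the openssl package takes effect without relinking the application (the helper name is made up; this assumes the openssl binary is on $PATH):

    import subprocess

    def sha256_hex(data):
        # Pipe the data through the command-line tool rather than
        # linking against libcrypto.
        out = subprocess.run(
            ["openssl", "dgst", "-sha256", "-hex"],
            input=data, capture_output=True, check=True,
        ).stdout.decode()
        # Output looks like "SHA256(stdin)= <hex>"; keep only the digest.
        return out.strip().rsplit("= ", 1)[-1]

The cost is one process spawn per call, which is fine for occasional operations but not for hot paths.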
Poettering: Revisiting how we put together Linux systems
One thing no one has brought up: in my experience, most developers are not very good packagers.
Poettering: Revisiting how we put together Linux systems
How does this solve any problem?
regarding 'cabal' folks
Do I really have to ask again for people to cool it with this kind of stuff? Nothing is accomplished with personal attacks; please take them somewhere else.
Please.
Poettering: Revisiting how we put together Linux systems
The "systemd cabal" under attack.
New Group Calls For Boycotting Systemd.
http://www.phoronix.com/scan.php?page=news_item&px=MT...
Poettering: Revisiting how we put together Linux systems
If they receive more support, half of the original systemd's misfeatures will be gone soon.
Poettering: Revisiting how we put together Linux systems
/usr/include-rhel6.5
/usr/lib-rhel6.5
and setting up "gcc"/"ldd" to the right directory tree from some stable applications.
Anyway there won't be any identical files (unless you are running rhel6.5 - then use symbolic or hard links).
But the problems of managing one set of library (incompatibilities ...) may not be solved by adding another set, and it seem people are statically including because they need *newer* versions than available...
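As a sketch, a trivial wrapper along these lines could point gcc at such a tree (the tree names match the hypothetical ones above; nothing here is a real, existing layout):

    import subprocess
    import sys

    DISTRO = "rhel6.5"
    INCDIR = "/usr/include-" + DISTRO
    LIBDIR = "/usr/lib-" + DISTRO

    def gcc_for_distro(args):
        # Prepend the per-distro header and library trees, and bake the
        # library path into the binary so it resolves at run time too.
        cmd = ["gcc", "-I" + INCDIR, "-L" + LIBDIR,
               "-Wl,-rpath," + LIBDIR] + args
        return subprocess.run(cmd).returncode

    if __name__ == "__main__":
        sys.exit(gcc_for_distro(sys.argv[1:]))

A fuller version would use gcc's --sysroot to redirect the default search paths entirely, rather than just prepending to them.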
Poettering: Revisiting how we put together Linux systems
OSs) would be a disaster, a chaos of components and people (many of whom are volunteers).
This model does not prevent the formation of professional organizations around it, which are free to pick and choose and mold it all according to their idea of the next best "Slowaris", on their own terms, but outside of it. The point is to not allow them to monopolize it or force their ideas onto it.
The title of this article should instead be:
"Poettering: Revisiting how we put together Linux systems at Red Hat"