
Some unreliable predictions for 2015

By Jonathan Corbet
January 7, 2015
Welcome to the first LWN Weekly Edition for 2015. We hope that the holiday season was good to all of you, and that you are rested and ready for another year of free-software development. It is a longstanding tradition to start off the year with a set of ill-informed predictions, so, without further ado, here's what our notoriously unreliable crystal ball has to offer for this year.

We will hear a lot about the "Internet of things" of course. For larger "things" like cars and major appliances, Linux is the obvious system to use. For tiny things with limited resources, the picture is not so clear. If the work to shrink the Linux kernel is not sufficiently successful in 2015, we may see the emergence of a disruptive competitor in that space. We may feel that no other kernel can catch up to Linux in terms of features, hardware support, and development community size, but we could be surprised if we fail to serve an important segment of the industry.

We'll hear a lot about "the cloud" too, and we'll be awfully tired of it by the end of the year. Some of the hype over projects like OpenStack will fade as the project deals with its growing pains. With some luck, we'll see more attention to projects that allow users to own and run their own clouds rather than depending on one of the large providers — but your editor has often been overly optimistic about such things.

While we're being optimistic: the systemd wars will wind down as users realize that their systems still work and that Linux as a whole has not been taken over by some sort of alien menace. There will still be fights — we, as a community, do seem to like fighting about such things — but most of us will increasingly choose to simply ignore them.

There is a wider issue here, though: we are breaking new ground in systems design, and that will necessarily involve doing things differently than they have been done in the past. There will certainly be differences of opinion on the directions our systems should take; if there aren't, we are doing something wrong. There is a whole crowd of energetic developers out there looking to do interesting things with the free software resources we have created. Not all of their ideas will be good ones, but it is going to be fun to watch what they come up with.

There will be more Heartbleed-level security incidents in 2015. There are a lot of dark, unmaintained corners in our software ecosystem, many of which undoubtedly contain ancient holes that, if we are lucky, nobody has yet discovered. But they will be discovered, and we'll not be getting off the urgent-update treadmill this year.

Investments in security will grow considerably as a consequence of 2014's high-profile vulnerabilities, high-profile intrusions at major companies, and ongoing spying revelations. How much good that investment will do remains to be seen; much will be swallowed up by expensive security companies that have little interest in doing the hard work required to actually make our systems more secure.

Investments in other important development areas will grow more slowly despite the great need in many areas. We all depend on code which is minimally maintained, if at all, and there are many unsolved problems out there that nobody seems willing to pick up. The Linux Foundation's Critical Infrastructure Initiative is a good start, but it cannot come close to addressing the whole problem.

Speaking of important development areas, serious progress will be made on the year-2038 problem in 2015. The pace picked up in 2014, but developers worked mostly on the easy part of the problem — internal kernel interfaces. But a real solution will involve user-space changes, and the sooner those are made, the better. The relevant developers understand the need; by the end of this year we'll know at least what the shape of the solution will be.
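
To make the boundary concrete, here is a minimal C sketch (an illustration, assuming a signed 32-bit time_t as found on most 32-bit systems today) showing exactly where the problem bites:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    /* The largest value a signed 32-bit time_t can hold. */
    time_t last = (time_t)INT32_MAX;
    printf("last 32-bit second: %s", asctime(gmtime(&last)));
    /* Prints: Tue Jan 19 03:14:07 2038 (UTC) */

    /* One second later, a 32-bit counter wraps to INT32_MIN, which a
       32-bit time_t interprets as a date in December 1901.  The
       narrowing conversion below is implementation-defined, but wraps
       this way on ordinary two's-complement systems. */
    int32_t wrapped = (int32_t)((int64_t)INT32_MAX + 1);
    printf("seconds counter after the wrap: %d\n", wrapped);
    return 0;
}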

Some long-awaited projects will gain some traction this year. The worst Btrfs problems are being addressed thanks to stress testing at Facebook and real-world deployment in distributions like openSUSE. Wayland is reaching a point of usability for brave early adopters. Even Python 3, which has been ready for a while, will see increasing use. We'll have programs like X.org and Python 2 around for a long time, but the world does eventually move on.

There has been some talk of a decline in the number of active Linux distributions. If that is indeed the case, any decline in the number of distributions will be short-lived. We may not see a whole lot more general-purpose desktop or server distributions; that ground has been pretty well explored by now, and, with the possible exception of the systemd-avoidance crowd, there does not appear to be a whole lot to be done in that area. But we will see more and more distributions that are specialized for particular applications, be it network-attached storage, routing, or driving small gadgets. The flexibility of Linux in this area is one of its greatest strengths.

Civility within our community will continue to be a hot-button issue in 2015. Undoubtedly somebody will say something offensive and set off a firestorm somewhere. But, perhaps, we will see wider recognition of the fact that the situation has improved considerably over the years. With luck, we'll be able to have a (civil!) conversation on how to improve the environment we live in without painting the community as a whole in an overly bad light. We should acknowledge and address our failures, but we should recognize our successes as well.

Finally, an easy prediction is that, on January 22, LWN will finish its 17th year of publication. We could never have predicted that we would be doing this for so long, but it has been a great ride and we have no intention of slowing down anytime soon. 2015 will certainly be an interesting year for those of us working in the free software community, with the usual array of ups, downs, and surprises. We're looking forward to being a part of it with all of you.



Some unreliable predictions for 2015

Posted Jan 8, 2015 5:55 UTC (Thu) by okusi (guest, #96501) [Link]

Civility within our community? oh shut up!

Some unreliable predictions for 2015

Posted Jan 8, 2015 7:09 UTC (Thu) by alison (subscriber, #63752) [Link] (3 responses)

Jonathan writes: 'For larger "things" like cars and major appliances, Linux is the obvious system to use.'

It may be obvious to LWN readers that Linux is the best choice for automotive, but unfortunately it wasn't obvious to Ford Motor Company or VW Group, who have chosen to go with the incumbent leader, QNX. These choices occurred even though QNX is closed-source and available only from a single vendor. Ford notably ditched Windows but chose QNX. Contributors to Linux should not be so smug as to believe that its triumph everywhere is inevitable. We have to win our position as market leader by continuing to improve.

Some unreliable predictions for 2015

Posted Jan 8, 2015 9:42 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (1 responses)

QNX is 'traditional' for in-vehicle systems. It's often used as a real-time OS in the ECU and other auxiliary systems.

It's understandable that Ford doesn't want to experiment anymore after years of failure with Sync, much of which was caused by an outdated WinCE-based software stack.

Some unreliable predictions for 2015

Posted Jan 8, 2015 22:16 UTC (Thu) by alison (subscriber, #63752) [Link]

> It's often used as a real-time OS in the ECU and other auxiliary systems.

QNX is the market leader in the large-touchscreen head units that provide "infotainment" (maps, music, traffic data, etc.) as well as Advanced Driver Assistance Systems (ADAS) that will eventually evolve into autonomous driving controllers. In other words, QNX is competing head-on with Linux and often winning straight-up battles. Automotive Grade Linux and the Linux-based GENIVI Consortium are also thriving, but the Linux community cannot take a win in automotive for granted by any means.

Some unreliable predictions for 2015

Posted Jan 8, 2015 21:32 UTC (Thu) by mm7323 (subscriber, #87386) [Link]

On the plus side, QNX userspace is somewhat Unix-like to program for and has excellent POSIX compatibility. While I've only ever had experience porting from Linux user-space to QNX (which was really very easy for most code I needed), going the other way might not be too difficult either; it could eventually be an advantage for Linux.

Obviously, if you are using device drivers, realtime, or any of the advanced QNX features, porting from QNX to Linux is going to be difficult, and you might be needing those QNX features anyway.

Some unreliable predictions for 2015

Posted Jan 8, 2015 8:43 UTC (Thu) by patrakov (subscriber, #97174) [Link] (6 responses)

I find it rather strange that no predictions at all are made about the BSDs.

Some unreliable predictions for 2015

Posted Jan 8, 2015 9:17 UTC (Thu) by ppedroni (subscriber, #6592) [Link] (1 responses)

Maybe because this is LWN (Linux Weekly News), not BWN (BSDs Weekly News). ;-)))

Lack of BSD predictions in LWN

Posted Jan 11, 2015 3:43 UTC (Sun) by giraffedata (guest, #1954) [Link]

While the name was originally derived from "Linux Weekly News", it hasn't meant that for many years now, and the topic is really free software.

LWN has covered the BSDs.

Some unreliable predictions for 2015

Posted Jan 8, 2015 15:05 UTC (Thu) by Uraeus (guest, #33755) [Link] (3 responses)

I predict that the people who care about BSDs will keep being surprised this year by most of the world forgetting about the existence of the BSDs :)

Some unreliable predictions for 2015

Posted Jan 8, 2015 16:49 UTC (Thu) by smitty_one_each (subscriber, #28989) [Link] (2 responses)

Inviting the obvious prediction that this year Theo de Raadt comes out with some splashy riff that
(a) makes a strong technical point,
(b) reminds everyone that the BSDs still have their audience, and
(c) ensures the pot stays stirred.

Some unreliable predictions for 2015

Posted Jan 9, 2015 8:46 UTC (Fri) by dgm (subscriber, #49227) [Link] (1 responses)

This is actually a very good prediction. I'm thinking of saving it for next year too.

Some unreliable predictions for 2015

Posted Jan 9, 2015 9:19 UTC (Fri) by smitty_one_each (subscriber, #28989) [Link]

Why, thank you, sir.

Re: Internet of things not running Linux (Some unreliable predictions for 2015)

Posted Jan 8, 2015 16:28 UTC (Thu) by fredrik (subscriber, #232) [Link] (2 responses)

I'm curious: what would be a good pick of operating system - open-source friendly, wide choice of supported devices, etc. - when scaling down below Linux's limits? AFAIU, Linux stops making sense when you have less than a few MB of memory on the device; is that correct?

What then is the next smaller OS that developers pick to put on their IoT devices? Are L4 microkernel OSes, say seL4, relevant in the real world? Is that a realistic competitor to QNX?

Re: Internet of things not running Linux (Some unreliable predictions for 2015)

Posted Jan 8, 2015 19:44 UTC (Thu) by pj (subscriber, #4506) [Link]

With such devices so cheap that they're almost always turned to a dedicated purpose, I foresee the OS at that level going away to some extent, replaced by purpose-written firmware à la the Arduino or the more recent ESP8266 firmware toolchain.

Re: Internet of things not running Linux (Some unreliable predictions for 2015)

Posted Jan 9, 2015 13:10 UTC (Fri) by aleXXX (subscriber, #2742) [Link]

Some unreliable predictions for 2015

Posted Jan 9, 2015 10:15 UTC (Fri) by mirabilos (subscriber, #84359) [Link] (7 responses)

The systemd war is only settling down because each side has to cut its losses. The new year sees the advent of several source packages whose binary packages have been renamed with the suffix “-without-systemd” in my APT repository. There may still be more to come: the fight for a Debian with the freedom to choose the init system you want is still ongoing, despite Debian not helping.

Some unreliable predictions for 2015

Posted Jan 9, 2015 16:30 UTC (Fri) by johannbg (guest, #65743) [Link]

Debian, Debian-based distributions, and distributions in general that ship and thus "support" more than one init system will have to ship two components, or two sub-components, dependent on the init system in question.

In Fedora, FESCo/FPC in its infinite wisdom allowed sysv initscripts to be shipped in a separate sub-component after the unit migration, even though that made no sense and had no practical purpose.

If I recall correctly, the maintainers of about 30 components decided to take advantage of that, thus supporting both systemd units and legacy sysv or upstart (no distro migrated to native upstart configuration files; otherwise everybody would have been done arguing, since it would have been the same amount of transition pain). But none of the maintainers of the components that make up the core/base OS and depend on an init system did, so even if you wanted to use those components on a systemd-free system you could not, since you could not boot one ;)

I assume other distros went through the same or a similar "migration" process, but in the case of Debian, Debian-based distros, or any distro supporting more than one init system, the maintainership burden will be multiplied by the number of supported init systems, as will the bug reports and the user frustration that goes hand in hand with things not working as expected (things work fine on init system X but not Y, a component only ships a configuration file for init system X and not Y, and so on).

And I don't think the fight for a Debian with the freedom to choose your init system will be ongoing; rather, the community will simply refer those who want that to the self-proclaimed VUA crowd and the fork they based on Debian.

Some unreliable predictions for 2015

Posted Jan 15, 2015 16:44 UTC (Thu) by phred14 (guest, #60633) [Link] (2 responses)

Implicit in this prediction appears to be the assumption that "systemd will win", and sysv, upstart, and even those Linux distributions using a bsd-style init will all move to systemd.

NOTHING in Unix or Linux has been like this - ever. Both emacs and vi are still with us, decades later. We still have sendmail, postfix, courier, exim, etc. We still have DJB versions of various programs, along with the non-DJB versions. About the closest thing we have to a monoculture is X.org, but that didn't come about by X.org trying to stamp out SVGAlib, and even now X.org isn't trying to stamp out Mir or Wayland.

I would like to see the systemd wars die down, too. I would just like to see them die down with several distributions using systemd, several distributions using other init systems, and the flaming stopped.

Some unreliable predictions for 2015

Posted Jan 16, 2015 11:47 UTC (Fri) by hitmark (guest, #34609) [Link]

There is a certain difference though. Most of those are either end programs (they don't interact with other programs, only the user) or have strongly defined interfaces between themselves and the rest of the ecosystem.

Some unreliable predictions for 2015

Posted Jan 16, 2015 16:28 UTC (Fri) by raven667 (subscriber, #5198) [Link]

> Implicit in this prediction appears to be the assumption that "systemd will win", and sysv, upstart, and even those Linux distributions using a bsd-style init will all move to systemd.

systemd has won: the big three PC distros (Debian, Red Hat, SUSE) are using it, and on mobile, Tizen and Jolla are using it; that's a sufficient de facto standard. It doesn't matter if Gentoo or Slackware uses it; it'd be better if they didn't, so that there is always room around the edges for people to get together and try different things, bsd or sysv or whatever.

Poppycock!

Posted Jan 16, 2015 14:22 UTC (Fri) by ksandstr (guest, #60862) [Link] (2 responses)

This so-called "settling down" in the ongoing systemd war is a misconception stemming from the end-of-year holidays around the world, and this narrative is being pushed by journalists with financial ties to Red Hat. Unsurprisingly, examples of the latter will go out of their way to present systemd's brave new world as a foregone conclusion. According to this narrative we're to accept that everything pre-systemd will now break, and that the only way to fix it is to lube up and bend over; or become a ridiculous dinosaur for life.

In the meantime, power management in Debian testing and unstable remains broken unless systemd or systemd-equivalent components are installed. Relatedly, cryptsetup still interacts incorrectly with sysvinit boot scripts and the boot console, preventing it from setting up encrypted block devices at boot even when the correct passphrase is given. And since power management no longer detects that the computer is attached to a power supply, anacron only runs its jobs at a fresh reboot, and only if that is done while leashed; this keeps automatic backups from happening as specified.

The situation where a power-management failure prevents automatic suspend, and the automatic switch to "lose at most 30 seconds' work instead of 60 minutes'" in low-battery conditions, is a recipe for catastrophic data loss. The failure of regular automated backups aggravates such a catastrophe into outright disaster by preventing recovery from any backup besides those that were created manually or pre-date Debian's systemd madness.

The constellation of outright egregious systemd breakage persists, so the war is not over. However, as in any war, propagandists will sacrifice truth and journalistic integrity for an appearance of pious conformity as their patrons require.

Poppycock!

Posted Jan 16, 2015 16:32 UTC (Fri) by raven667 (subscriber, #5198) [Link]

> this narrative is being pushed by journalists with financial ties to Red Hat.

> propagandists will sacrifice truth and journalistic integrity for an appearance of pious conformity as their patrons require.

hahahahahahahahahahaha, I'm _sure_ that is what is going on (sarcasm), no one actually has real opinions, the entire world is a conspiracy of paymasters against you and that's not even a little bit paranoid.

Poppycock!

Posted Jan 16, 2015 20:41 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

Those sound like things which should be filed as bugs. Also, finally, a concrete list of things broken in the systemd transition (versus previous complaints of "all kinds of things *hand wave*").

Some unreliable predictions for 2015

Posted Jan 9, 2015 12:14 UTC (Fri) by HIGHGuY (subscriber, #62277) [Link] (44 responses)

> There has been some talk of a decline in the number of active Linux distributions. If that is indeed the case, any decline in the number of distributions will be short-lived.

Actually, I would rather propose that the distribution landscape could be stirred up significantly by more thought going into API/ABI compatibility and application deployment (think of Docker, Lennart's suggestion, Ubuntu's efforts, ...).

Application deployment systems will make much of the effort that distributions duplicate today simply redundant, to the point where that duplication is stripped away. Distributions can then focus on their core mission and values rather than spending the bulk of their time on packaging.
This will also free up a lot of time that can be spent elsewhere, improving the global Linux picture.

Some unreliable predictions for 2015

Posted Jan 10, 2015 2:39 UTC (Sat) by dlang (guest, #313) [Link] (43 responses)

what do you see as the "core mission and values" of a distribution if it's not to select software and versions and make sure that they all work together?

different distros will select different versions of an application (and configure that version in different ways) depending on how that distro opts to balance newness vs risk and many other subtle things.

Expecting that that sort of variation will "go away" and that everyone will run things with the compile time options, default configuration, and exact patches that the upstream maintainer provides is just ignoring the benefits of Open Source and wishing that it was like Proprietary Software where you didn't have those sorts of options.

Some unreliable predictions for 2015

Posted Jan 11, 2015 19:40 UTC (Sun) by HIGHGuY (subscriber, #62277) [Link] (42 responses)

Well, that question is answered by another question:
Why do people create a new distro or fork an existing one?
I doubt it is because they find that they can do package management better...

They typically have some use cases in mind that they want to tackle or integrate into the OS a bit better. Perhaps they want to be more user-friendly, or target power users. It's those values and that mission that ultimately decide what a distro looks like, not whether it uses RPM, xPKG, yum, apt, or <insert your favorite system here>.
Once application deployment is done properly, I think the boring and labor-intensive part of what a distro is will fade.

Some unreliable predictions for 2015

Posted Jan 11, 2015 20:07 UTC (Sun) by viro (subscriber, #7872) [Link] (7 responses)

Er... So you trust $BIGNUM app duhvelopers to deal with e.g. security fixes? Including such things as "the version of a library it's currently using has a hole"? Really?

Some unreliable predictions for 2015

Posted Jan 11, 2015 21:23 UTC (Sun) by HIGHGuY (subscriber, #62277) [Link]

Unfortunately, no ;)

But if anything, I think this is _the_ big problem to overcome for those new deployment schemes.
Also, I'm not saying app packaging will cease to exist. Instead, I think that such schemes may lower the duplication that comes from many distros doing the same thing identically or slightly differently.

But of course, they'll first need to see daylight and we'll surely learn a lot on the way there.

Some unreliable predictions for 2015

Posted Jan 11, 2015 22:08 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

The problem is, I also don't trust OpenSource developers not to break an ABI during a security update.

Some unreliable predictions for 2015

Posted Jan 12, 2015 4:14 UTC (Mon) by viro (subscriber, #7872) [Link] (4 responses)

And your point is...? Other than "library authors have atrocious habits", that is (in other news: sudden loud noises in the vicinity of Mephitis mephitis might be inadvisable).

There is no easy solution; the whole point is that it's bloody hard work that has to be done. And no, "just leave the libraries as the app authors shipped them" is not a solution either.

If somebody is trying to claim that this will be the year when said library writers suddenly acquire a less stinky attitude towards compatibility (and better interface design - which is *also* bloody hard work), well... there's a nice bridge in NY they might want to buy.

Some unreliable predictions for 2015

Posted Jan 12, 2015 4:28 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

> And your point is...?
Just stop pretending that libraries are secure and stable. Package everything and then provide strong isolation (using containers, seccomp, SELinux or whatever) for as much stuff as you can.

Perhaps ultra-important urgent bugfixes can be provided in an ad-hoc manner by patching the affected libraries.

Some unreliable predictions for 2015

Posted Jan 12, 2015 4:32 UTC (Mon) by dlang (guest, #313) [Link] (2 responses)

it's this packaging that is the hard work that the distros do.

depending on every app developer to package things sanely and blindly running whatever combination they happen to have used at the time of release is even less sane.

Some unreliable predictions for 2015

Posted Jan 12, 2015 20:23 UTC (Mon) by HIGHGuY (subscriber, #62277) [Link] (1 responses)

Actually, that's not what I'm saying in my comment.
I'm saying that we might see the redundant work go away if a good deployment solution is found. I don't think we'll see packaging go away; I think we'll see less of it, because there will no longer be 30 flavors of distro each using their own packaging system.

And going one step further, I think it could make sense to let a package 'container' be updated with ABI-stable backports and fixes. That is, you first build a container with app X and libs Y and Z, then provide stable updates to each through regular incremental intra-container updates that keep the ABI stable. Packaging is then no longer a client-side activity, but just a means of updating an app container server-side and offering those incremental updates to us users.

Or, maybe I'm just dreaming and this will all fade away by 2016 ;)

Some unreliable predictions for 2015

Posted Jan 13, 2015 0:56 UTC (Tue) by dlang (guest, #313) [Link]

I don't think that there are 30 different packaging systems in use. The difference between distros is less in the packaging system used and more in the choices related to how each package is created.

Tools like alien can mechanically convert packages from one packaging system to another with pretty good reliability, but that makes no impact on the work the different distros do.

Some unreliable predictions for 2015

Posted Jan 11, 2015 21:54 UTC (Sun) by dlang (guest, #313) [Link] (33 responses)

you are mistaken in what the work of creating packages is.

It's not compiling the software into a package that's the hard work.

The hard work is in deciding which version of the software you want to put in the distro and how you want to configure it (which options do you want compiled in? do you want it to depend on MySQL, PostgreSQL, or neither?). Are there any patches you want in it that aren't in the upstream release yet? Are there security problems that show up from the combination of library versions and software versions that you have picked?

if it was just compiling the software from a git repository into a package, that could be easily automated and would not be a significant amount of work.

Some unreliable predictions for 2015

Posted Jan 12, 2015 20:32 UTC (Mon) by HIGHGuY (subscriber, #62277) [Link] (32 responses)

That clarifies some of the 'invisible' work there is to it, but it still keeps me wondering: if you take each and every package in every distro, isn't there some subset of choices so common that reusing it would remove a lot of duplicate effort?

Also, if one distro ships patch X for a security issue, wouldn't the others want to follow shortly? If a distro finds an issue with app X and lib Y, wouldn't the others also want to benefit from that knowledge (without finding out the hard way)?

I've always found the whole distro/packaging idea a huge waste of valuable manpower. I'd say the best thing you can do for packagers is to remove as much duplication as possible across distros.

Some unreliable predictions for 2015

Posted Jan 13, 2015 0:54 UTC (Tue) by dlang (guest, #313) [Link] (31 responses)

> If you take each and every package in every distro, isn't there some subset of choices so common that reusing it would remove a lot of duplicate effort?

not by that much. The number of choices and versions are so large that if two distros independently pick the same options, it's rare. If two distros do have the same options, then it's almost certain that one is derived from the other and just didn't make any changes to that package.

> I've always found the whole distro/packaging idea a huge waste of valuable manpower. I'd say the best thing you can do for packagers is to remove as much duplication as possible across distros.

remember that the packagers are the distro maintainers, and they are very aware of what other distro packagers do for that package. The fact that they make the choice to do something different isn't an accident.

Everyone wants the variety to be reduced, they just would like everyone else to change to match what they do ;-) The smart ones accept that this is not going to happen and accept that different people want different things.

Some unreliable predictions for 2015

Posted Jan 14, 2015 1:44 UTC (Wed) by raven667 (subscriber, #5198) [Link] (30 responses)

It seems that the way to fix this is for one scheme and set of options or another to "win"; that doesn't mean the others go away, but that they stop being relevant and stop consuming undue resources. If we can't do that, then containers which include a whole user space might be a useful compromise.

Some unreliable predictions for 2015

Posted Jan 14, 2015 5:17 UTC (Wed) by dlang (guest, #313) [Link] (29 responses)

why should one set of options "win"? it's not going to be right for every case, and may not even be right for a very wide range of cases.

There was a reason for people to code the different options in the first place, if they thought it was important enough to write, who are you to decide that "there must be only one" and it can't be used?

"there is only one way to do things" can be argued to be right for language syntax, but outside of that it's very much the wrong approach to take.

Some unreliable predictions for 2015

Posted Jan 14, 2015 16:44 UTC (Wed) by raven667 (subscriber, #5198) [Link] (28 responses)

> why should one set of options "win"?

> Everyone wants the variety to be reduced, they just would like everyone else to change to match what they do ;-)

Standardization is all about one set of options winning because it works better for the widest variety of cases, so that others can then trust the software. That doesn't mean someone can't create non-standard options and use the software in non-standard ways, but then it is clear that what they are doing is outside the ecosystem. Right now, if you want to grow, you have to reduce the variety which is expected to be supported; that is what everyone wants, and what distro vendors currently do. But because none of them has "won" sufficiently outside of individual vertical markets, there is no "default" ABI for developers to target, and that slows progress. There are reasons the most successful Linux desktop has been Android, which is a standard controlled by a single vendor, and not the cacophony of existing X11 software.

Some unreliable predictions for 2015

Posted Jan 14, 2015 19:14 UTC (Wed) by dlang (guest, #313) [Link] (27 responses)

have fun with your standardized system that does exactly what that standard defines.

But don't try to make me use your standardized system that doesn't do what I want it to do.

If you really believed in standardization, you would be using only microsoft products and prohibit anything else from existing.

Some unreliable predictions for 2015

Posted Jan 14, 2015 19:26 UTC (Wed) by dlang (guest, #313) [Link] (26 responses)

Putting it another way: people can't even decide to use only one text editor; what makes you think they will agree to configure that editor only one way?

GNOME keeps trying this "this is the only way you should work" approach, and every time it pushes, it loses lots of users. Why do you keep thinking that things need to be so standardized?

Some unreliable predictions for 2015

Posted Jan 14, 2015 21:28 UTC (Wed) by raven667 (subscriber, #5198) [Link] (25 responses)

> people can't even decide to use only one text editor,

That's kind of moving the goal posts, isn't it? Weren't we talking about having a better-defined standard ABI of system libraries that applications could depend on, like a much more comprehensive version of the LSB, rather than each distro, and each version of that distro, effectively being its own unique snowflake ABI that has to be specifically targeted by developers because of the lack of standards across distros? Nobody cares about what applications you use, but there is some care about file formats and a large amount of concern about libraries and ABI.

Standardization has had some success in network protocols like IP, kernel interfaces like POSIX, and file formats like JPEG; why could that success not continue forward with an expanded LSB? Right now the proposals on the table are to give up on standardizing anything and just make it as easy as possible to package up whole distros for containers to rely on. I guess it really is that hard to define a userspace ABI that would be useful, or we are at a Nash equilibrium where no one can do better on their own by making the large global change needed to kickstart the process of defining standards.

Some unreliable predictions for 2015

Posted Jan 14, 2015 21:48 UTC (Wed) by dlang (guest, #313) [Link] (18 responses)

> That's kind of moving the goal posts, isn't it? Weren't we talking about having a better-defined standard ABI of system libraries that applications could depend on

I didn't think that is what we were talking about.

We were talking about distro packaging and why the same package of a given application isn't suitable for all distros (the reason being that they opt for different options when they compile the application)

As far as each distro having a different ABI for applications, the only reason that some aren't a subset of others (assuming they have the same libraries installed) is that the library authors don't maintain backwards compatibility. There's nothing that a distro can do to solve this problem except all ship exactly the same version and never upgrade it.

And since some distros want the latest, up-to-the-minute version of that library, while other distros want to use a version that's been tested more, you aren't going to have the distros all ship the same version, even for distro releases that happen at the same time (and if one distro releases in April, and another in June, how would they decide which version of the library to run?)

Some unreliable predictions for 2015

Posted Jan 14, 2015 22:35 UTC (Wed) by raven667 (subscriber, #5198) [Link] (6 responses)

> I didn't think that is what we were talking about.

That confusion may be my fault as well, reading back in the thread. You can blame me, it scales well 8-)

> the same package of a given application isn't suitable for all distros
> they opt for different options when they compile

I'd be interested in taking a serious look at what kinds of options are more likely to change and why to see if there are any broad categories which could be standardized if some other underlying problem (like ABI instability) was sufficiently resolved. My gut feeling (not data) is that the vast majority of the differences are not questions of functionality but of integrating with the base OS. Of course distros like Gentoo will continue to stick around for those who want to easily build whatever they want however they want but those distros aren't leading the pack or defining industry standards now and I don't expect that to change. From my Ameri-centric view this would seem to require RHEL/Fedora, Debian/Ubuntu and (Open)SuSE to get together and homogenize as much as possible so that upstream developers effectively have a single target (Linux-ABI-2020) ecosystem and could distribute binaries along with code for the default version of the application. I guess I just don't care about the technical differences between these distros, it all seems like a wash to me, pointless differentiation for marketing purposes. It's not too much to ask developers to package once, if the process is straightforward enough.

Some unreliable predictions for 2015

Posted Jan 14, 2015 23:51 UTC (Wed) by dlang (guest, #313) [Link] (5 responses)

RHEL wants to run extremely well tested versions of software, even if they are old

Fedora wants to run the latest version of software, even if they aren't well tested

how are these two going to pick the same version?

other differences between distros: do you compile it to depend on GNOME, KDE, etc.? Do you compile it to put its data in SQLite, MySQL, or PostgreSQL?

Some distros won't compile some options in because they provide support for (or depend on) proprietary software.

Is an option useful, or bloat? Different distros will decide differently for the same option.

Gentoo says "whatever the admin wants" for all of these things, other distros pick a set of options, or a few sets of options and make packages based on this.

As an example, Ubuntu has multiple versions of the gnuplot package: one that depends on X (and requires all the X libraries to be installed), one that requires Qt, and one that requires neither.

Some unreliable predictions for 2015

Posted Jan 15, 2015 4:39 UTC (Thu) by raven667 (subscriber, #5198) [Link] (2 responses)

All interesting points to be sure, but I'd be more interested in imagining what would need to happen to make this work. A Linux Foundation standard ABI would look a lot like an enterprise distro, maybe shipping new versions of leaf software (like Firefox) but never breaking core software, or like a proprietary system such as Mac OS X. Right now every distro claims to be suitable for end-user deployment and to be a target for third-party development; that would be true if there were comprehensive compatibility standards, or only one dominant vendor to support, but there are not, so every distro's advertising is false on this point.

These deployment and compatibility problems have been solved on proprietary platforms, some of which are even based on Linux, and the Linux kernel team provides the same or better ABI compatibility than many proprietary systems offer; why can't userspace library developers and distros have the same level of quality control that the kernel has?

Some unreliable predictions for 2015

Posted Jan 15, 2015 9:28 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

> why can't userspace library developers and distros have the same level of quality control that the kernel has?

Maybe because too much userspace software is written by computer-science guys, and the kernel is run by an irascible engineer?

There is FAR too much reliance on theory in the computer space in general, and linux (the OS, not kernel) is no exception. Indeed, I would go as far as to say that the database realm in particular has been seriously harmed by this ... :-)

There are far too few engineers out there - people who say "I want it to work in practice, not in theory".

Cheers,
Wol

Some unreliable predictions for 2015

Posted Feb 2, 2015 20:34 UTC (Mon) by nix (subscriber, #2304) [Link]

> There is FAR too much reliance on theory in the computer space in general, and linux (the OS, not kernel) is no exception.

This may be true in the database realm you're single-mindedly focused on (and I suspect in that respect it is only true with respect to one single theory which you happen to dislike and which, to be honest, RDBMSs implement about as closely as they implement flying to Mars), but it's very far from true everywhere else. GCC gained hugely from its switch to a representation that allowed it to actually use algorithms from the published research. The developers of most things other than compilers and Mesa aren't looking at research of any kind. In many cases, there is no research of any kind to look at.

Some unreliable predictions for 2015

Posted Jan 15, 2015 17:50 UTC (Thu) by HIGHGuY (subscriber, #62277) [Link] (1 responses)

Actually, you haven't made any point that I think is not technically solvable. Let's say we build this system:
- app/lib-developers do as they've always done: write software
- packagers do as they've always done, with the exception that their output is a container that contains the app and any necessary libraries. They can even choose to build a small number of different containers, each with slightly different options.
- There's S/W available to aid in creating new and updating/patching existing containers. Much like Docker allows you to modify part of an existing container and call it a new one, you can apply a patched (yet backwards compatible) library or application in a container and ship it as an update to the old one.
- the few "normal-use" distros that are left (i.e., sorry Gentoo, you're not it ;) ) then pick from the existing packages and compose according to their wishes. Fedora would likely be a spawning ground for the latest versions of all packages, while Red Hat might pick some older (but well-maintained) package with all the patching that it has seen. This also means that Red Hat could reuse packages that originally spawned in Fedora or elsewhere.
- Those that care enough can still build and publish a new container for a package with whatever options they like.

In this scheme, a package with a particular set of options gets built just once. Users and distros get to pick and match as much as they like. Distros can reuse work done by other distros and users, and differentiate only where they need to. Much of the redundant work is gone.

Some unreliable predictions for 2015

Posted Jan 15, 2015 18:32 UTC (Thu) by dlang (guest, #313) [Link]

The problem has never been that it's not technically solvable.

The problem is that the software (app and library) authors don't do what everyone thinks they should (which _is_ technically impossible, because people have conflicting opinions about what they should do)

Let's talk about older, but well maintained versions of packages.

Who is doing that maintenance? In many cases, software developers only really support the current version of an application; they may support one or two versions back, but any more than that is really unusual.

It's usually the distro packagers/maintainers who do a lot of the work of maintaining the older versions that they ship. And the maintenance of the old versions has the same 'include all changes' vs. 'only include what's needed (with the problem of defining what's needed)' issue that the distros face in deciding which versions to ship in the first place.

Some unreliable predictions for 2015

Posted Jan 16, 2015 1:43 UTC (Fri) by vonbrand (subscriber, #4458) [Link]

Add that distributions (or users) select different packages for the same functionality: different web servers, C/C++ compilers, editors, document/image viewers, ...

Some unreliable predictions for 2015

Posted Jan 16, 2015 11:51 UTC (Fri) by hitmark (guest, #34609) [Link] (9 responses)

Much of the problem goes away if package managers could just tolerate having multiple versions of the same lib installed. Versioned sonames exist for a reason...

Some unreliable predictions for 2015

Posted Jan 16, 2015 12:41 UTC (Fri) by anselm (subscriber, #2796) [Link] (7 responses)

I don't see why that would be a problem. On my Debian system I have multiple versions of, say, libreadline, libprocps and libtcl installed at the same time, in each case from separate packages, so the support seems to be there already.

Some unreliable predictions for 2015

Posted Jan 16, 2015 15:44 UTC (Fri) by cortana (subscriber, #24596) [Link] (6 responses)

This doesn't do what you think it does.

$ ls -l /lib/x86_64-linux-gnu/libreadline.so.{5,6}
lrwxrwxrwx 1 root root 18 Apr 27  2013 /lib/x86_64-linux-gnu/libreadline.so.5 -> libreadline.so.5.2
lrwxrwxrwx 1 root root 18 Jan 13 03:25 /lib/x86_64-linux-gnu/libreadline.so.6 -> libreadline.so.6.3

Looks promising at first glance, but what if my application wants libreadline 5.1?

The SONAME is not strongly tied to the version of the library, but to the compatibility level.
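
A minimal sketch of the consequence (assuming a glibc system; build with -ldl): a program can ask the dynamic linker only for the compatibility level, never for a particular release within it:

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* "libreadline.so.5" names the soname, i.e. the compatibility
       level.  There is no way to insist on 5.1 rather than 5.2; the
       dynamic linker hands back whichever 5.x is installed. */
    void *handle = dlopen("libreadline.so.5", RTLD_NOW);
    printf("libreadline.so.5: %s\n",
           handle ? "loaded (some 5.x release)" : dlerror());
    if (handle)
        dlclose(handle);
    return 0;
}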

Some unreliable predictions for 2015

Posted Jan 16, 2015 15:56 UTC (Fri) by mathstuf (subscriber, #69389) [Link] (5 responses)

Then someone goofed. Either you're depending on unspecified behavior (which should be un-exported if possible), or a soname bump was missed upstream between 5.1 and 5.2.

Some unreliable predictions for 2015

Posted Jan 16, 2015 16:19 UTC (Fri) by raven667 (subscriber, #5198) [Link] (2 responses)

I think it is true that there are a ton of library ABI breaks where the soname isn't changed, because many library authors don't know when the changes they make break it; they just recompile, which hides a lot of issues. Distros like Gentoo and tools like OBS rebuild applications when a library dependency changes even if the soname is the same, for exactly this reason; they wouldn't bother to do this if the soname were a reliable indicator of compatibility.

Some unreliable predictions for 2015

Posted Jan 16, 2015 17:45 UTC (Fri) by peter-b (guest, #66996) [Link] (1 responses)

So we have a perfectly adequate system that some people don't use properly, in fact. This is news?

Some unreliable predictions for 2015

Posted Jan 16, 2015 18:55 UTC (Fri) by cortana (subscriber, #24596) [Link]

No, we have an inadequate system that is only useful as long as upstream developers, downstream developers and distributors never make mistakes.

Some unreliable predictions for 2015

Posted Jan 16, 2015 18:56 UTC (Fri) by cortana (subscriber, #24596) [Link] (1 responses)

BTW, what happens if I actually need libreadline.so.4?

Some unreliable predictions for 2015

Posted Jan 16, 2015 21:22 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

Either bundle or ask for that version to be packaged. Or provide it in your repo (since you're not likely to have such a package shipped by Debian itself without them handing back patches to update it).

Some unreliable predictions for 2015

Posted Jan 16, 2015 13:55 UTC (Fri) by cesarb (subscriber, #6266) [Link]

> Much of the problem goes away if package managers could just tolerate having multiple versions of the same lib installed. Versioned sonames exist for a reason...

It's not that simple. Suppose a program is linked against library A which in turn is linked against libpng2, and that program is also linked against library B which in turn is linked against libpng3.

Now imagine the program gets from library A a pointer to a libpng structure, which it then passes to library B.
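
A self-contained sketch of that failure mode (with hypothetical structure layouts standing in for the two libpng versions; the real structures differ, but the hazard is the same):

#include <stdio.h>

/* The layout our hypothetical libpng2 uses... */
struct png_v2 { int width; int height; };

/* ...and the layout libpng3 uses, after a field was inserted. */
struct png_v3 { int bit_depth; int width; int height; };

/* "Library A", built against v2, hands out a v2 structure. */
static struct png_v2 image = { 640, 480 };

/* "Library B", built against v3, reads the same pointer with the v3
   layout.  Because the pointer crosses the library boundary as an
   opaque type, the compiler cannot warn about the mismatch. */
static void b_use(const void *png)
{
    const struct png_v3 *p = png;
    printf("bit_depth=%d width=%d\n", p->bit_depth, p->width);
}

int main(void)
{
    b_use(&image);   /* prints bit_depth=640 width=480: silent garbage */
    return 0;
}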

Some unreliable predictions for 2015

Posted Jan 15, 2015 20:22 UTC (Thu) by flussence (guest, #85566) [Link] (5 responses)

> Standardization has had some success in network protocols like IP, kernel interfaces like POSIX, and file formats like JPEG

JPEG is a bad example to use there... everyone dropped the official reference implementation after its maintainer went off the rails and started changing the format in backwards-incompatible ways: http://www.libjpeg-turbo.org/About/Jpeg-9

Some unreliable predictions for 2015

Posted Jan 15, 2015 21:40 UTC (Thu) by raven667 (subscriber, #5198) [Link]

The point is that there is more than one interoperable implementation, not that everyone has forked the reference implementation to get interoperability. A JPEG made on Ubuntu 12.04 will work on Ubuntu 10.10, Ubuntu 14.10, Fedora 21, and Fedora 15 without "recompiling" or converting, whereas software built on one can't run on the others (while using OS-provided shared libraries), because they are too different in the tiny details even though they are broadly running the same software.

JPEG / JFIF

Posted Jan 16, 2015 11:24 UTC (Fri) by tialaramex (subscriber, #21167) [Link] (3 responses)

Well, the deal is a bit more complicated than you've suggested. You've focused on the IJG's library, but there's much more.

JPEG per se isn't a file format. The committee weren't focused on storing the data from their compression algorithm as files; they were thinking you'd transmit it somewhere, where it would get decompressed and then used. So the actual international standard is completely silent about files on disk.

Early on people who did think we should store data in files wrote JPEG compressed data to the pseudo-standard TIFF. But TIFF is a complete mess, conceived as the minimal way to store output from a drum or flatbed scanner on a computer and thus permitting absolutely everything but making nothing easy - and its attempt to handle JPEG led to incompatible (literally, as in "I sent you the file" "No, my program says it's corrupt" "OK, try this" "Did you mean for it to be black and white?") implementations. There were then a series of Adobe "technical notes" for TIFF that try to fix things, several times attempting a fresh start with little success.

JFIF is the "real" name for the file format we all use today, and it's basically where the IJG comes into the picture. Instead of TIFF's mess of irrelevant or nonsensical parameters you've got the exact parameters needed for the codec being used, and then you've got all this raw data to pass into the decoder. And there's this handy free library of code to read and write the files, so everybody just uses that.

So initially the IJG are great unifiers: instead of three or four incompatible attempts to store JPEG data in a TIFF, you get these smaller and obviously non-TIFF JPG files, and either the recipient can read them or they can't; no confusion as to what they mean. But then they proved (and libpng followed them for a while) incapable of grasping what an ABI is.

JPEG / JFIF

Posted Jan 16, 2015 12:35 UTC (Fri) by peter-b (guest, #66996) [Link] (2 responses)

In TIFF's defence, I used TIFF files and libtiff extensively during my PhD, since they're the only sane way of storing and communicating remote sensing datasets (complex 32-bit fixed point pixel values on 6 image planes? no problem).

I didn't experience any problems that weren't due to my own incompetence.

JPEG / JFIF

Posted Jan 16, 2015 13:27 UTC (Fri) by rleigh (guest, #14622) [Link]

TIFF is certainly complex, but that's made up for by its unmatched power and sophistication. I've recently been working on TIFF reading/writing of microscopy imaging data, and testing all the different combinations of PhotometricInterpretation with and without LUTs, pixel type and depth (including complex floating point types), orientation, large numbers of channels, all sorts of combinations of tile and strip sizes, bigtiff, etc. It's quite surprising how many programs and graphics libraries get it wrong. The worst I found was the Microsoft TIFF support on Windows, e.g. for thumbnailing and viewing, which was incorrect for most cases, and apparently it's been much improved! Support on FreeBSD and Linux with free software viewers was better, but still not perfect for many cases.

I think this is primarily due to most authors staying well inside the 8-bit grey/RGB "comfort zone" (sometimes extending to 12/16-bit or maybe float) and not testing with more sophisticated data.

Most of that is simply the author screwing up. For example, when dealing with strips and tiles, it's amazing how many people mess up the image data by failing to deal with the strip/tile overlapping the image bounds when it's not a multiple of the image size, sometimes only for particular pixel types, e.g. 1- or 2-bit data. Just a simple miscalculation, or a failure to check particular TIFF tags.

I'm not sure what the solution is here. A collection of images which exercise usage of all the baseline tags with all the special cases (and their interactions) would be a good start. I currently generate a test set of around 4000 64×64 TIFF images for the set of tags I care about, but it's still far from comprehensive. I know it works for the most part, but even then it's going to fail for tags I don't currently code for.

JPEG / JFIF

Posted Jan 17, 2015 10:32 UTC (Sat) by tialaramex (subscriber, #21167) [Link]

I am, since the name probably doesn't ring a bell, responsible for GIMP's TIFF loader, and in another previous life as a PhD research student I read and wrote a great many complex tiled TIFFs created by art historians studying and preserving various great European works.

So, I'm not saying it's garbage because I don't understand how to use it, I'm saying it's garbage because I do understand and I don't sympathise.

Some unreliable predictions for 2015

Posted Jan 19, 2015 17:45 UTC (Mon) by Baylink (guest, #755) [Link] (8 responses)

The problem with systemd, as much as any other problem, is that Lennart and the distro managers who've drunk his FlavorAID decided that my time was theirs to allocate; I had other more important things to do than to spend time learning an entirely new core system component that gives me no measurable advantage.

That's *aside* from how thoroughly it violates nearly every precept of three decades of Unix design philosophy.

Some unreliable predictions for 2015

Posted Jan 19, 2015 22:18 UTC (Mon) by anselm (subscriber, #2796) [Link] (7 responses)

> The problem with systemd, as much as any other problem, is that Lennart and the distro managers who've drunk his FlavorAID decided that my time was theirs to allocate

Which is of course a proposition that is entirely different from your deciding that the distribution managers' time is yours to allocate, by requiring them to keep the existing hodgepodge of accidentally-combined components on life support forever.

> I had other more important things to do than to spend time learning an entirely new core system component that gives me no measurable advantage.

Great, so go use Slackware.

> That's *aside* from how thoroughly it violates nearly every precept of three decades of Unix design philosophy.

That claim has been debunked so often it isn't funny anymore. The main distinguishing characteristic of the existing haphazard setup is that, unlike systemd, it has no discernible design philosophy whatsoever – it is a thrown-together mixture of components from a variety of unrelated sources, with lots of almost-duplication, a disparate zoo of configuration file formats, and dismal documentation.

As far as the famous and much-maligned “Unix design philosophy” is concerned, there can be no doubt that in the traditional setup there are lots of bits and pieces that each do one thing – but it requires quite a fanciful imagination to claim that they do that thing well.

Some unreliable predictions for 2015

Posted Jan 20, 2015 19:31 UTC (Tue) by flussence (guest, #85566) [Link] (2 responses)

> Which is of course a proposition that is entirely different from your deciding that the distribution managers' time is yours to allocate, by requiring them to keep the existing hodgepodge of accidentally-combined components on life support forever.

What a tactless insult to the people whose work you've been freeloading off of for years before systemd.

Some unreliable predictions for 2015

Posted Jan 20, 2015 23:02 UTC (Tue) by anselm (subscriber, #2796) [Link]

It's not the distribution maintainers' fault that the traditional setup is so terrible. It isn't even the upstream software authors' fault. They all had an itch and they scratched it. The problem is that the various bits and pieces arose over a period of 20 years or so, and that nobody really talked to anybody else. For quite some time, the traditional setup used to be the best available solution to the problem at hand, and the distribution maintainers did great and mostly thankless work integrating and improving it. That does not detract from the fact that seen as a complete software system the traditional setup leaves a lot to be desired, especially compared to modern approaches like systemd.

Many systemd detractors do not seem to appreciate that one reason why so many distributions fell over themselves to adopt systemd is that systemd actually makes a distribution maintainer's life a lot easier. It basically saves a distribution from having to implement and maintain a lot of (often distro-specific) basic plumbing that is a hassle to design, build, and keep going. People who instead prefer their Linux distribution to be untainted by systemd are free to do that work themselves, or (as the distribution maintainers have mostly done it for them already) at least shoulder the burden of maintaining it going forward once the original distribution maintainers have decided – as is their privilege – that systemd is a more worthwhile use of their time. In that sense something like Devuan is a great idea because it will hopefully soak up all those folks who would otherwise keep hassling the Debian maintainers about sysvinit.

Some unreliable predictions for 2015

Posted Jan 21, 2015 15:31 UTC (Wed) by judas_iscariote (guest, #47386) [Link]

Truth is a bitter pill. The matter of fact is that the old stuff barely has any manpower to keep it in a working state; the way it is integrated is also beyond terrible.

Some unreliable predictions for 2015

Posted Feb 2, 2015 20:30 UTC (Mon) by nix (subscriber, #2304) [Link] (3 responses)

I dunno. In the world after the horrible but astonishingly still not dead BSD networking so-called "API" and the Unix wars, the claim could be made that being a haphazard mess with no discernible design philosophy *is* the Unix design philosophy. It's just not the one envisaged by Ken Thompson and the early designers, alas :( systemd is definitely a step back towards that early vision.

Some unreliable predictions for 2015

Posted Feb 2, 2015 22:41 UTC (Mon) by raven667 (subscriber, #5198) [Link] (2 responses)

Is the lesson that those who don't understand Multics (and Plan 9) are doomed to reinvent them badly? For all the "bloat" that was removed from the Multics design to make a "simple" UNIX, every bit of it was eventually added back because it was needed, but organically, without any coherent design; the result is that the most deployed, standard system, the one that largely destroyed all others, is a creeping inconsistent horror that we all love dearly 8-)

Reminds me of http://mjg59.livejournal.com/136274.html

A bit of off-topic

Posted Feb 3, 2015 8:43 UTC (Tue) by dgm (subscriber, #49227) [Link] (1 responses)

I think Matthew missed it big time with this post. Firstly, LightDM is *still* Ubuntu's dm of choice (and working great, if you ask me). But more importantly, the main reason given for GDM starting a full session is profoundly flawed. In fact, the idea that power management and accessibility belong to the user's session is flawed. And funnily, it's Matthew himself who gives the reason:

> Closing the lid of my laptop should suspend the system regardless of whether it's logged in or not.

Absolutely. That's why power policy should *never* be related to the user's desktop session.

A bit of off-topic

Posted Feb 3, 2015 15:55 UTC (Tue) by raven667 (subscriber, #5198) [Link]

I agree this is off-topic, but power management does interact with the console and requires communication with whatever is driving the console output: power management changes depending on whether the keyboard is being used, whether a video is being played, and other factors that the UI program knows about, and there should be UI elements which allow the user present at the console to suspend or power off the device. There may be a backend daemon which coordinates these changes, but they are driven by policy encoded in the console UI itself.

Once you need software running to interact with the console-present user, you need a context to run that software in; having a user session dedicated to the login screen allows it to have the same protections as any user and makes it less of a special case.

I dunno, makes sense to me anyway.


Copyright © 2015, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds