Some unreliable predictions for 2015
We will hear a lot about the "Internet of things" of course. For larger "things" like cars and major appliances, Linux is the obvious system to use. For tiny things with limited resources, the picture is not so clear. If the work to shrink the Linux kernel is not sufficiently successful in 2015, we may see the emergence of a disruptive competitor in that space. We may feel that no other kernel can catch up to Linux in terms of features, hardware support, and development community size, but we could be surprised if we fail to serve an important segment of the industry.
We'll hear a lot about "the cloud" too, and we'll be awfully tired of it by the end of the year. Some of the hype over projects like OpenStack will fade as the project deals with its growing pains. With some luck, we'll see more attention to projects that allow users to own and run their own clouds rather than depending on one of the large providers — but your editor has often been overly optimistic about such things.
While we're being optimistic: the systemd wars will wind down as users realize that their systems still work and that Linux as a whole has not been taken over by some sort of alien menace. There will still be fights — we, as a community, do seem to like fighting about such things — but most of us will increasingly choose to simply ignore them.
There is a wider issue here, though: we are breaking new ground in systems design, and that will necessarily involve doing things differently than they have been done in the past. There will certainly be differences of opinion on the directions our systems should take; if there aren't, we are doing something wrong. There is a whole crowd of energetic developers out there looking to do interesting things with the free software resources we have created. Not all of their ideas will be good ones, but it is going to be fun to watch what they come up with.
There will be more Heartbleed-level security incidents in 2015. There are a lot of dark, unmaintained corners in our software ecosystem, many of which undoubtedly contain ancient holes that, if we are lucky, nobody has yet discovered. But they will be discovered, and we'll not be getting off the urgent-update treadmill this year.
Investments in security will grow considerably as a consequence of 2014's high-profile vulnerabilities, high-profile intrusions at major companies, and ongoing spying revelations. How much good that investment will do remains to be seen; much will be swallowed up by expensive security companies that have little interest in doing the hard work required to actually make our systems more secure.
Investments in other important development areas will grow more slowly despite the great need in many areas. We all depend on code which is minimally maintained, if at all, and there are many unsolved problems out there that nobody seems willing to pick up. The Linux Foundation's Critical Infrastructure Initiative is a good start, but it cannot come close to addressing the whole problem.
Speaking of important development areas, serious progress will be made on the year-2038 problem in 2015. The pace picked up in 2014, but developers worked mostly on the easy part of the problem — internal kernel interfaces. But a real solution will involve user-space changes, and the sooner those are made, the better. The relevant developers understand the need; by the end of this year we'll know at least what the shape of the solution will be.
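For the curious, the deadline itself is easy to see; here is a minimal C illustration of the arithmetic (not anything from the kernel work itself):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* A signed 32-bit time_t counts seconds since 1970-01-01 UTC,
         * so the last representable moment is 2^31 - 1 seconds in. */
        time_t last = 2147483647;
        printf("%s", asctime(gmtime(&last)));  /* Tue Jan 19 03:14:07 2038 */
        /* One second later, a 32-bit time_t wraps to -2^31, which
         * reads back as a date in December 1901. */
        return 0;
    }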
Some long-awaited projects will gain some traction this year. The worst Btrfs problems are being addressed thanks to stress testing at Facebook and real-world deployment in distributions like openSUSE. Wayland is reaching a point of usability for brave early adopters. Even Python 3, which has been ready for a while, will see increasing use. We'll have programs like X.org and Python 2 around for a long time, but the world does eventually move on.
There has been some talk of a decline in the number of active Linux distributions. If that is indeed the case, any decline in the number of distributions will be short-lived. We may not see a whole lot more general-purpose desktop or server distributions; that ground has been pretty well explored by now, and, with the possible exception of the systemd-avoidance crowd, there does not appear to be a whole lot to be done in that area. But we will see more and more distributions that are specialized for particular applications, be it network-attached storage, routing, or driving small gadgets. The flexibility of Linux in this area is one of its greatest strengths.
Civility within our community will continue to be a hot-button issue in 2015. Undoubtedly somebody will say something offensive and set off a firestorm somewhere. But, perhaps, we will see wider recognition of the fact that the situation has improved considerably over the years. With luck, we'll be able to have a (civil!) conversation on how to improve the environment we live in without painting the community as a whole in an overly bad light. We should acknowledge and address our failures, but we should recognize our successes as well.
Finally, an easy prediction is that, on January 22, LWN will finish its
17th year of publication. We could never have predicted that we would be
doing this for so long, but it has been a great ride and we have no
intention of slowing down anytime soon. 2015 will certainly be an
interesting year for those of us working in the free software community,
with the usual array of ups, downs, and surprises. We're looking forward
to being a part of it with all of you.
Posted Jan 8, 2015 5:55 UTC (Thu)
by okusi (guest, #96501)
[Link]
Posted Jan 8, 2015 7:09 UTC (Thu)
by alison (subscriber, #63752)
[Link] (3 responses)
It may be obvious to LWN readers that Linux is the best choice for automotive, but unfortunately it wasn't obvious to Ford Motor Company or VW Group, who have chosen to go with the incumbent leader, QNX. These choices occurred even though QNX is closed-source and available only from a single vendor. Ford notably ditched Windows but chose QNX. Contributors to Linux should not be so smug as to believe that its triumph everywhere is inevitable. We have to win our position as market leader by continuing to improve.
Posted Jan 8, 2015 9:42 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 responses)
It's understandable that Ford doesn't want to experiment anymore after years of failures with Sync, many of which were caused by the outdated WinCE-based software stack.
Posted Jan 8, 2015 22:16 UTC (Thu)
by alison (subscriber, #63752)
[Link]
QNX is the market leader in the large-touchscreen head units that provide "infotainment" (maps, music, traffic data, etc.) as well as Advanced Driver Assistance Systems (ADAS) that will eventually evolve into autonomous driving controllers. In other words, QNX is competing head-on with Linux and often winning straight-up battles. Automotive Grade Linux and the Linux-based GENIVI Consortium are also thriving, but the Linux community cannot take a win in automotive for granted by any means.
Posted Jan 8, 2015 21:32 UTC (Thu)
by mm7323 (subscriber, #87386)
[Link]
Obviously, if you are relying on device drivers, realtime behavior, or any of the advanced QNX features, any porting from QNX to Linux is going to be difficult, and you might need those QNX features anyway.
Posted Jan 8, 2015 8:43 UTC (Thu)
by patrakov (subscriber, #97174)
[Link] (6 responses)
Posted Jan 8, 2015 9:17 UTC (Thu)
by ppedroni (subscriber, #6592)
[Link] (1 responses)
While the name was originally derived from Linux Weekly News, it hasn't meant that for many years now, and the topic is really free software.
Posted Jan 11, 2015 3:43 UTC (Sun)
by giraffedata (guest, #1954)
[Link]
LWN has covered the BSDs.
Posted Jan 8, 2015 15:05 UTC (Thu)
by Uraeus (guest, #33755)
[Link] (3 responses)
Posted Jan 8, 2015 16:49 UTC (Thu)
by smitty_one_each (subscriber, #28989)
[Link] (2 responses)
(a) makes a strong technical point,
(b) reminds everyone that the BSDs still have their audience, and
(c) ensures the pot stays stirred.
Posted Jan 9, 2015 8:46 UTC (Fri)
by dgm (subscriber, #49227)
[Link] (1 responses)
Posted Jan 9, 2015 9:19 UTC (Fri)
by smitty_one_each (subscriber, #28989)
[Link]
Posted Jan 8, 2015 16:28 UTC (Thu)
by fredrik (subscriber, #232)
[Link] (2 responses)
What then is the next smaller OS that developers pick to put on their IoT devices? Are L4 microkernel OSes, say seL4, relevant in the real world? Is that a realistic competitor to QNX?
Posted Jan 8, 2015 19:44 UTC (Thu)
by pj (subscriber, #4506)
[Link]
Posted Jan 9, 2015 13:10 UTC (Fri)
by aleXXX (subscriber, #2742)
[Link]
Posted Jan 9, 2015 10:15 UTC (Fri)
by mirabilos (subscriber, #84359)
[Link] (7 responses)
Posted Jan 9, 2015 16:30 UTC (Fri)
by johannbg (guest, #65743)
[Link]
In Fedora, FESCo/FPC in its infinite wisdom allowed sysv initscripts to be shipped in a separate subcomponent after the unit migration, even though that made no sense and served no practical purpose.
If I recall correctly, the maintainers of about 30 components decided to take advantage of that, thus supporting both systemd units and legacy sysv or upstart (no distro migrated to native upstart configuration files; otherwise everybody would have been done arguing, since it would have been the same amount of transition pain). But none of the maintainers of the components that make up the core/base OS and depend on an init system did, hence even if you wanted to use those components on a systemd-free system you could not, since you could not boot one ;)
I assume other distros went through the same or a similar "migration" process, but in the case of Debian, Debian-based distros, or any distro that supports more than one init system, the maintainership burden will be multiplied by the number of supported init systems, as will the bug reports and the user frustration that goes hand in hand with that when things don't work as expected (things work fine on init system X but not Y, the component only ships a configuration file for init system X and not Y, and so on).
And I don't think the fight over the freedom to choose the init system will go on within Debian; rather, the community will simply refer those who want that to the self-proclaimed VUA crowd and the fork they based on Debian.
Posted Jan 15, 2015 16:44 UTC (Thu)
by phred14 (guest, #60633)
[Link] (2 responses)
NOTHING in Unix or Linux has been like this - ever. Both emacs and vi are still with us, decades later. We still have sendmail, postfix, courier, exim, etc. We still have DJB versions of various programs, along with the non-DJB versions. About the closest thing we have to a monoculture is X.org, but that didn't come about by X.org trying to stamp out svgalib, and even now X.org isn't trying to stamp out Mir or Wayland.
I would like to see the systemd wars die down, too. I would just like to see it die down with several distributions using it, several distributions using other init systems, and just have the flaming stop.
Posted Jan 16, 2015 11:47 UTC (Fri)
by hitmark (guest, #34609)
[Link]
Posted Jan 16, 2015 16:28 UTC (Fri)
by raven667 (subscriber, #5198)
[Link]
systemd has won: the big three PC distros (Debian, Red Hat, SuSE) are using it, and on mobile Tizen and Jolla are using it; that's a sufficient de-facto standard. It doesn't matter if Gentoo or Slackware uses it; it'd be better if they didn't, so that there is always room around the edges for people to get together and try different things, BSD or sysv or whatever.
Posted Jan 16, 2015 14:22 UTC (Fri)
by ksandstr (guest, #60862)
[Link] (2 responses)
In the mean time, power management in Debian testing and unstable remains broken unless systemd or systemd-equivalent components are installed. Relatedly, cryptsetup still interacts incorrectly with sysvinit boot scripts and the boot console, preventing it from setting up encrypted block devices at boot even when the correct passphrase is given. Since power management no longer detects that the computer is attached to a power supply, anacron only runs its jobs on a fresh reboot, and only if that is done while leashed; this keeps automatic backups from happening as specified.
The situation where power management failure prevents automatic suspend and automatic switch to "lose at most 30 seconds' work instead of 60 minutes" in low-battery conditions is a recipe for catastrophic data loss. The failing of regular automated backups aggravates such a catastrophe to outright disaster by preventing recovery from any backup besides those that were created manually, or pre-date Debian's systemd madness.
The constellation of outright egregious systemd breakage persists, so the war is not over. However, as in any war, propagandists will sacrifice truth and journalistic integrity for an appearance of pious conformity as their patrons require.
Posted Jan 16, 2015 16:32 UTC (Fri)
by raven667 (subscriber, #5198)
[Link]
> propagandists will sacrifice truth and journalistic integrity for an appearance of pious conformity as their patrons require.
hahahahahahahahahahaha, I'm _sure_ that is what is going on (sarcasm), no one actually has real opinions, the entire world is a conspiracy of paymasters against you and that's not even a little bit paranoid.
Posted Jan 16, 2015 20:41 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
Posted Jan 9, 2015 12:14 UTC (Fri)
by HIGHGuY (subscriber, #62277)
[Link] (44 responses)
> There has been some talk of a decline in the number of active Linux distributions. If that is indeed the case, any decline in the number of distributions will be short-lived.

Actually, I would rather propose that the distribution landscape could be stirred significantly by more thought going into API/ABI compatibility and application deployment (think of Docker, Lennart's suggestion, Ubuntu's efforts, ...).
Application deployment systems will make quite a bit of what distributions duplicate in effort today simply redundant, up to the point where that duplication is stripped.
Distributions can then focus on their core mission and values rather than spending the bulk of their time on packaging. This will also free up a lot of time that can be spent elsewhere, improving the global Linux picture.
Posted Jan 10, 2015 2:39 UTC (Sat)
by dlang (guest, #313)
[Link] (43 responses)
different distros will select different versions of an application (and configure that version in different ways) depending on how that distro opts to balance newness vs risk and many other subtle things.
Expecting that that sort of variation will "go away" and that everyone will run things with the compile time options, default configuration, and exact patches that the upstream maintainer provides is just ignoring the benefits of Open Source and wishing that it was like Proprietary Software where you didn't have those sorts of options.
Posted Jan 11, 2015 19:40 UTC (Sun)
by HIGHGuY (subscriber, #62277)
[Link] (42 responses)
Why do people create a new distro or fork an existing one? I doubt it is because they find that they can do package management better...

They typically have some use-cases in mind that they want to tackle or integrate into the OS a bit better. Perhaps they want to be more user-friendly or target power users. It's those values and that mission that ultimately decide what a distro looks like, not whether it's RPM, xPKG or yum or apt or <insert your favorite system here>.

Once application deployment is done properly, I think the boring and labor-intensive part of what a distro is will fade.
Posted Jan 11, 2015 20:07 UTC (Sun)
by viro (subscriber, #7872)
[Link] (7 responses)
Posted Jan 11, 2015 21:23 UTC (Sun)
by HIGHGuY (subscriber, #62277)
[Link]
Also, I'm not saying app packaging will cease to exist. Instead, I think that such schemes may lower the duplication that goes into many distros doing the same thing equally or slightly differently.

But if anything, I think this is _the_ big problem to overcome for those new deployment schemes.
But of course, they'll first need to see daylight and we'll surely learn a lot on the way there.
Posted Jan 11, 2015 22:08 UTC (Sun)
by Cyberax (✭ supporter ✭, #52523)
[Link] (5 responses)
Posted Jan 12, 2015 4:14 UTC (Mon)
by viro (subscriber, #7872)
[Link] (4 responses)
There is no easy solution; the whole point is that it's bloody hard work that has to be done. And no, "just leave the libraries as the app authors shipped them" is not a solution either.
If somebody is trying to claim that this will be the year when said library writers will suddenly acquire a less stinky attitude towards compatibility (and better interface design - which is *also* bloody hard work), well... there's a nice bridge in NY they might want to buy.
Posted Jan 12, 2015 4:28 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (3 responses)
Just stop pretending that libraries are secure and stable. Package everything and then provide strong isolation (using containers, seccomp, SELinux or whatever) for as much stuff as you can.

Perhaps, ultra-important urgent bugfixes can be provided in an ad-hoc manner by patching the affected libraries.
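For a rough sense of what "strong isolation" can mean in practice, here is a minimal sketch using the libseccomp API; the three-call whitelist is purely illustrative, not a real policy:

    #include <errno.h>
    #include <seccomp.h>    /* build with: gcc sandbox.c -lseccomp */
    #include <unistd.h>

    int main(void)
    {
        /* Deny every system call by default, then allow a handful;
         * a bundled app would be started under a filter like this. */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ERRNO(EPERM));
        if (!ctx)
            return 1;
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(read), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
        if (seccomp_load(ctx) < 0)
            return 1;
        write(1, "sandboxed\n", 10);   /* allowed */
        /* open(), socket(), etc. would now fail with EPERM. */
        return 0;
    }

A container launcher would install such a filter before exec()ing the bundled application, so even a compromised, unpatched library inside the container can only use the system calls the policy allows.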
Posted Jan 12, 2015 4:32 UTC (Mon)
by dlang (guest, #313)
[Link] (2 responses)
depending on every app developer to package things sanely and blindly running whatever combination they happen to have used at the time of release is even less sane.
Posted Jan 12, 2015 20:23 UTC (Mon)
by HIGHGuY (subscriber, #62277)
[Link] (1 responses)
I'm saying that we might see the redundant work going away if a good deployment solution is found. I don't think we'll see packaging go away; I think we'll see less of it, because there won't be 30 flavors of distro each using their own packaging system.

And going one step further, I think it could make sense to let a package 'container' be updated with ABI-stable backports and fixes. I.e., you first make a container with app X and libs Y and Z, then provide stable updates to each through regular incremental intra-container updates that keep the ABI stable. Packaging is then no longer a client-side app, but just a means of updating an app-container server-side and offering these incremental updates to us, users.
Or, maybe I'm just dreaming and this will all fade away by 2016 ;)
Posted Jan 13, 2015 0:56 UTC (Tue)
by dlang (guest, #313)
[Link]
Tools like alien can mechanically convert packages from one packaging system to another with pretty good reliability, but that makes no impact on the work the different distros do.
Posted Jan 11, 2015 21:54 UTC (Sun)
by dlang (guest, #313)
[Link] (33 responses)
It's not compiling the software into a package that's the hard work.
The hard work is in deciding what version of the software you want to put in the distro, how you want to configure that software (what options do you want to have compiled in, do you want to have it depend on MySQL, PostgreSQL, or neither for example), are there any patches you want to have in it that aren't in the upstream release yet? Are there security problems that show up as a combination of the library versions and software versions that you have picked?
if it was just compiling the software from a git repository into a package, that could be easily automated and would not be a significant amount of work.
Posted Jan 12, 2015 20:32 UTC (Mon)
by HIGHGuY (subscriber, #62277)
[Link] (32 responses)
Also, if one distro ships patch X for a security issue, wouldn't the others want to follow shortly? If a distro finds an issue with app X and lib Y, wouldn't the others also want to benefit from that knowledge (without finding out the hard way?).
I've always found the whole distro/packaging idea a huge throwaway of valuable manpower. I'd say the best thing you can do for packagers is remove as much duplication as possible within distro's.
Posted Jan 13, 2015 0:54 UTC (Tue)
by dlang (guest, #313)
[Link] (31 responses)
Not by that much. The number of choices and versions is so large that it's rare for two distros to independently pick the same options. If two distros do have the same options, then it's almost certain that one is derived from the other and just didn't make any changes to that package.
> I've always found the whole distro/packaging idea a huge throwaway of valuable manpower. I'd say the best thing you can do for packagers is remove as much duplication as possible within distro's.
remember that the packagers are the distro maintainers, and they are very aware of what other distro packagers do for that package. The fact that they make the choice to do something different isn't an accident.
Everyone wants the variety to be reduced, they just would like everyone else to change to match what they do ;-) The smart ones accept that this is not going to happen and accept that different people want different things.
Posted Jan 14, 2015 1:44 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (30 responses)
Posted Jan 14, 2015 5:17 UTC (Wed)
by dlang (guest, #313)
[Link] (29 responses)
There was a reason for people to code the different options in the first place; if they thought it was important enough to write, who are you to decide that "there must be only one" and that it can't be used?
"there is only one way to do things" can be argued to be right for language syntax, but outside of that it's very much the wrong approach to take.
Posted Jan 14, 2015 16:44 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (28 responses)
> Everyone wants the variety to be reduced, they just would like everyone else to change to match what they do ;-)
Standardization is all about one set of options winning because it works better for the widest variety of cases, so that others can then trust the software. That doesn't mean someone can't create non-standard options and use the software in non-standard ways, but then it is clear that what they are doing is outside of the ecosystem. Right now, if you want to grow, you have to reduce the variety which is expected to be supported (which is what everyone wants, and what distro vendors currently do), but because none of them has "won" sufficiently outside of individual vertical markets, there is no "default" ABI for developers to target, and that slows progress. There are reasons the most successful Linux desktop has been Android, which is a standard controlled by a single vendor, and not the cacophony of existing X11 software.
Posted Jan 14, 2015 19:14 UTC (Wed)
by dlang (guest, #313)
[Link] (27 responses)
But don't try to make me use your standardized system that doesn't do what I want it to do.
If you really believed in standardization, you would be using only Microsoft products and would prohibit anything else from existing.
Posted Jan 14, 2015 19:26 UTC (Wed)
by dlang (guest, #313)
[Link] (26 responses)
GNOME keeps trying this "this is the only way you should work" approach, and every time it pushes, it loses lots of users. Why do you keep thinking that things need to be so standardized?
Posted Jan 14, 2015 21:28 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (25 responses)
That's kind of moving the goal posts, isn't it? Weren't we talking about having a better-defined standard ABI of system libraries that applications could depend on, like a much more comprehensive version of the LSB, rather than each distro and each version of that distro effectively being its own unique snowflake ABI that has to be specifically targeted by developers because of the lack of standards across distros? Nobody cares about what applications you use, but there is some care about file formats and a large amount of concern about libraries and ABI.
Standardization has had some success in network protocols like IP or kernel interfaces like POSIX or file formats like JPEG, why could that success not continue forward with an expanded LSB? Right now the proposals on the table are to give up on standardizing anything and just making it as easy as possible to package up whole distros for containers to rely on. I guess it really is that hard to define a userspace ABI that would be useful, or we are at a Nash Equilibrium where no one can do better on their own to make the large global change to kickstart the process to define standards.
Posted Jan 14, 2015 21:48 UTC (Wed)
by dlang (guest, #313)
[Link] (18 responses)
I didn't think that is what we were talking about.
We were talking about distro packaging and why the same package of a given application isn't suitable for all distros (the reason being that they opt for different options when they compile the application)
As far as each distro having a different ABI for applications goes, the only reason that some aren't a subset of others (assuming they have the same libraries installed) is that the library authors don't maintain backwards compatibility. There's nothing a distro can do to solve this problem except for all of them to ship exactly the same version and never upgrade it.
And since some distros want the latest, up-to-the-minute version of that library, while other distros want to use a version that's been tested more, you aren't going to have the distros all ship the same version, even for distro releases that happen at the same time (and if one distro releases in April, and another in June, how would they decide which version of the library to run?)
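For what it's worth, library authors who do care can keep old ABIs working without freezing the version; GNU symbol versioning, which glibc uses heavily, is the usual mechanism. A minimal sketch follows, with a hypothetical library LIBFOO (building it also requires a linker version script defining the LIBFOO_1.0 and LIBFOO_2.0 nodes):

    /* libfoo.c - one .so that carries two ABIs of do_thing() */
    int do_thing_old(int x)            { return x; }         /* 1.0 behavior */
    int do_thing_new(int x, int flags) { return x + flags; } /* 2.0 ABI */

    /* Binaries linked long ago keep resolving do_thing@LIBFOO_1.0;
     * newly linked ones get the default do_thing@@LIBFOO_2.0. */
    __asm__(".symver do_thing_old,do_thing@LIBFOO_1.0");
    __asm__(".symver do_thing_new,do_thing@@LIBFOO_2.0");

Compiled with something like "gcc -shared -fPIC -Wl,--version-script,libfoo.map libfoo.c", the point is that "never upgrade" and "break old binaries" are not the only two options, just the cheapest ones.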
Posted Jan 14, 2015 22:35 UTC (Wed)
by raven667 (subscriber, #5198)
[Link] (6 responses)
That confusion may be my fault as well, reading back in the thread. You can blame me, it scales well 8-)
> the same package of a given application isn't suitable for all distros
> they opt for different options when they compile
I'd be interested in taking a serious look at what kinds of options are more likely to change and why to see if there are any broad categories which could be standardized if some other underlying problem (like ABI instability) was sufficiently resolved. My gut feeling (not data) is that the vast majority of the differences are not questions of functionality but of integrating with the base OS. Of course distros like Gentoo will continue to stick around for those who want to easily build whatever they want however they want but those distros aren't leading the pack or defining industry standards now and I don't expect that to change. From my Ameri-centric view this would seem to require RHEL/Fedora, Debian/Ubuntu and (Open)SuSE to get together and homogenize as much as possible so that upstream developers effectively have a single target (Linux-ABI-2020) ecosystem and could distribute binaries along with code for the default version of the application. I guess I just don't care about the technical differences between these distros, it all seems like a wash to me, pointless differentiation for marketing purposes. It's not too much to ask developers to package once, if the process is straightforward enough.
Posted Jan 14, 2015 23:51 UTC (Wed)
by dlang (guest, #313)
[Link] (5 responses)
Fedora wants to run the latest version of software, even if it isn't well tested; other distros want to ship only versions that have been tested for a while.
How are these two going to pick the same version?
Other differences between distros: do you compile it to depend on GNOME, KDE, etc.? Do you compile it to put its data in SQLite, MySQL or PostgreSQL?
Some distros won't compile some options in because they provide support for (or depend on) proprietary software.
Is an option useful or bloat? different distros will decide differently for the same option.
Gentoo says "whatever the admin wants" for all of these things, other distros pick a set of options, or a few sets of options and make packages based on this.
As an example, Ubuntu has multiple versions of the gnuplot package: one that depends on X (and requires all the X libraries to be installed), one that requires Qt, and one that requires neither.
Posted Jan 15, 2015 4:39 UTC (Thu)
by raven667 (subscriber, #5198)
[Link] (2 responses)
These deployment and compatibility problems have been solved on the proprietary platforms, some of which are even based on Linux, and the Linux kernel team provides the same or better ABI compatibility that many proprietary systems offer, why can't userspace library developers and distros have the same level of quality control that the kernel has?
Posted Jan 15, 2015 9:28 UTC (Thu)
by Wol (subscriber, #4433)
[Link] (1 responses)
Maybe because too much userspace software is written by Computer Science guys, and the kernel is run by an irascible engineer?
There is FAR too much reliance on theory in the computer space in general, and linux (the OS, not kernel) is no exception. Indeed, I would go as far as to say that the database realm in particular has been seriously harmed by this ... :-)
There are far too few engineers out there - people who say "I want it to work in practice, not in theory".
Cheers,
Wol
Posted Feb 2, 2015 20:34 UTC (Mon)
by nix (subscriber, #2304)
[Link]
> There is FAR too much reliance on theory in the computer space in general, and linux (the OS, not kernel) is no exception.

This may be true in the database realm you're single-mindedly focussed on (and I suspect in that respect it is only true with respect to one single theory which you happen to dislike and which, to be honest, RDBMSes implement about as closely as they implement flying to Mars), but it's very far from true everywhere else. GCC gained hugely from its switch to a representation that allowed it to actually use algorithms from the published research. The developers of most things other than compilers and Mesa aren't looking at research of any kind. In many cases, there is no research of any kind to look at.
Posted Jan 15, 2015 17:50 UTC (Thu)
by HIGHGuY (subscriber, #62277)
[Link] (1 responses)
- app/lib developers do as they've always done: write software.
- packagers do as they've always done, except that their output is a container holding the app and any necessary libraries. They can even choose to build a small number of different containers, each with slightly different options.
- there's software available to aid in creating new containers and in updating/patching existing ones. Much like Docker allows you to modify part of an existing container and call it a new one, you can apply a patched (yet backwards-compatible) library or application in a container and ship it as an update to the old one.
- the few "normal-use" distros that are left (sorry Gentoo, you're not it ;)) then pick from the existing packages and compose according to their wishes. Fedora would likely be a spawning ground for the latest versions of all packages, while Red Hat might pick some older (but well-maintained) package with all the patching that it has seen. This also means that Red Hat could reuse packages that originally spawned in Fedora or elsewhere.
- those that care enough can still build and publish a new container for a package with whatever options they like.

In this scheme, a package with a particular set of options gets built just once. Users and distros get to pick and match as much as they like. Distros can reuse work done by other distros or users and differentiate only where they need to. Much of the redundant work is gone.
Posted Jan 15, 2015 18:32 UTC (Thu)
by dlang (guest, #313)
[Link]
The problem is that the software (app and library) authors don't do what everyone thinks they should (which _is_ technically impossible, because people have conflicting opinions about what they should do)
Let's talk about older, but well maintained versions of packages.
Who is doing that maintenance? In many cases, software developers only really support the current version of an application; they may support one or two versions back, but any more than that is really unusual.
It's usually the distro packagers/maintainers who do a lot of the work of maintaining the older versions that they ship. And the maintenance of the old versions has the same 'include all changes' vs. 'only include what's needed' issue (with the problem of defining what's needed) that the distros face in deciding which versions to ship in the first place.
Posted Jan 16, 2015 1:43 UTC (Fri)
by vonbrand (subscriber, #4458)
[Link]
Add that distributions (or users) select different packages for the same functionality: different web servers, C/C++ compilers, editors, document/image viewers, ...
Posted Jan 16, 2015 11:51 UTC (Fri)
by hitmark (guest, #34609)
[Link] (9 responses)
Posted Jan 16, 2015 12:41 UTC (Fri)
by anselm (subscriber, #2796)
[Link] (7 responses)
I don't see why that would be a problem. On my Debian system I have multiple versions of, say, libreadline, libprocps and libtcl installed at the same time, in each case from separate packages, so the support seems to be there already.
Posted Jan 16, 2015 15:44 UTC (Fri)
by cortana (subscriber, #24596)
[Link] (6 responses)
This doesn't do what you think it does.

$ ls -l /lib/x86_64-linux-gnu/libreadline.so.{5,6}
lrwxrwxrwx 1 root root 18 Apr 27 2013 /lib/x86_64-linux-gnu/libreadline.so.5 -> libreadline.so.5.2
lrwxrwxrwx 1 root root 18 Jan 13 03:25 /lib/x86_64-linux-gnu/libreadline.so.6 -> libreadline.so.6.3
Looks promising at first glance, but what if my application wants libreadline 5.1?
The SONAME is not strongly tied to the version of the library, but to the compatibility level.
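To make that distinction concrete (using the same Debian paths as above; output trimmed, and the exact formatting may vary), the SONAME baked into the library names only the compatibility level, so a hypothetical 5.1 would advertise itself exactly as 5.2 does:

$ objdump -p /lib/x86_64-linux-gnu/libreadline.so.5.2 | grep SONAME
  SONAME               libreadline.so.5

An application records that SONAME at link time; there is no field in the ELF dynamic section for asking for "5.1 exactly", which is the point here.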
Posted Jan 16, 2015 15:56 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link] (5 responses)
Posted Jan 16, 2015 16:19 UTC (Fri)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Posted Jan 16, 2015 17:45 UTC (Fri)
by peter-b (guest, #66996)
[Link] (1 responses)
Posted Jan 16, 2015 18:55 UTC (Fri)
by cortana (subscriber, #24596)
[Link]
Posted Jan 16, 2015 18:56 UTC (Fri)
by cortana (subscriber, #24596)
[Link] (1 responses)
Posted Jan 16, 2015 21:22 UTC (Fri)
by mathstuf (subscriber, #69389)
[Link]
Posted Jan 16, 2015 13:55 UTC (Fri)
by cesarb (subscriber, #6266)
[Link]
It's not that simple. Suppose a program is linked against library A which in turn is linked against libpng2, and that program is also linked against library B which in turn is linked against libpng3.
Now imagine the program gets from library A a pointer to a libpng structure, which it then passes to library B.
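A self-contained simulation of that hazard; the struct layouts are invented stand-ins (real libpng structures are much bigger), but the failure mode is the same:

    #include <stdio.h>

    /* "libpng2" layout, as library A was compiled against it. */
    struct png_info_v2 { int width, height, channels; };
    /* "libpng3" layout, as library B was compiled against it. */
    struct png_info_v3 { int bit_depth, width, height; };

    /* Imagine this function lives in library A. */
    static void *liba_load(void)
    {
        static struct png_info_v2 info = { 640, 480, 3 };
        return &info;
    }

    /* Imagine this one lives in library B. */
    static void libb_render(void *p)
    {
        struct png_info_v3 *info = p;                  /* wrong layout! */
        printf("%dx%d\n", info->width, info->height);  /* prints 480x3 */
    }

    int main(void)
    {
        libb_render(liba_load());  /* compiles and runs, silently wrong */
        return 0;
    }

In the real two-copies-of-libpng case it can be worse still: if both libraries' symbols are visible, the dynamic linker may resolve a png_* call in library B to libpng2's code, mixing implementations as well as layouts.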
Posted Jan 15, 2015 20:22 UTC (Thu)
by flussence (guest, #85566)
[Link] (5 responses)
JPEG is a bad example to use there... everyone dropped the official reference implementation after its maintainer went off the rails and started changing the format in backwards-incompatible ways: http://www.libjpeg-turbo.org/About/Jpeg-9
Posted Jan 15, 2015 21:40 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
Posted Jan 16, 2015 11:24 UTC (Fri)
by tialaramex (subscriber, #21167)
[Link] (3 responses)
JPEG per se isn't a file format. The committee weren't focused on storing the data from their compression algorithm as files; they were thinking you'd transmit it to somewhere, and it'd get decompressed and then used. So the actual international standard is completely silent about files on disk.
Early on people who did think we should store data in files wrote JPEG compressed data to the pseudo-standard TIFF. But TIFF is a complete mess, conceived as the minimal way to store output from a drum or flatbed scanner on a computer and thus permitting absolutely everything but making nothing easy - and its attempt to handle JPEG led to incompatible (literally, as in "I sent you the file" "No, my program says it's corrupt" "OK, try this" "Did you mean for it to be black and white?") implementations. There were then a series of Adobe "technical notes" for TIFF that try to fix things, several times attempting a fresh start with little success.
JFIF is the "real" name for the file format we all use today, and it's basically where the IJG comes into the picture. Instead of TIFF's mess of irrelevant or nonsensical parameters you've got the exact parameters needed for the codec being used, and then you've got all this raw data to pass into the decoder. And there's this handy free library of code to read and write the files, so everybody just uses that.
So initially the IJG are great unifiers - instead of three or four incompatible attempts to store JPEG data in a TIFF you get these smaller and obviously non-TIFF JPG files and either the recipient can read them or they can't, no confusion as to what they mean. But then they proved (and libpng followed them for a while) incapable of grasping what an ABI is.
Posted Jan 16, 2015 12:35 UTC (Fri)
by peter-b (guest, #66996)
[Link] (2 responses)
I didn't experience any problems that weren't due to my own incompetence.
Posted Jan 16, 2015 13:27 UTC (Fri)
by rleigh (guest, #14622)
[Link]
I think this is primarily due to most authors staying well inside the 8-bit grey/RGB "comfort zone". Sometimes this extends to 12/16-bit or maybe float, and not testing with more sophisticated data.
Most of that is simply the author screwing up. For example, when dealing with strips and tiles, it's amazing how many people mess up the image data by failing to deal with the strip/tile overlapping the image bounds when the image size is not a multiple of the strip/tile size, sometimes only for particular pixel types, e.g. 1- or 2-bit data. Just a simple miscalculation, or a failure to check particular TIFF tags.
I'm not sure what the solution is here. A collection of images which exercise usage of all the baseline tags with all the special cases (and their interactions) would be a good start. I currently generate a test set of around 4000 64×64 TIFF images for the set of tags I care about, but it's still far from comprehensive. I know it works for the most part, but even then it's going to fail for tags I don't currently code for.
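The arithmetic that tends to go wrong is the last-strip clamp; a minimal sketch (plain C, no libtiff, the function name is invented):

    #include <stdint.h>
    #include <stdio.h>

    /* A TIFF's last strip usually overhangs the image, so its row
     * count must be clamped; forgetting this corrupts the tail. */
    static uint32_t rows_in_strip(uint32_t image_length,
                                  uint32_t rows_per_strip,
                                  uint32_t strip)
    {
        uint32_t first_row = strip * rows_per_strip;
        uint32_t remaining = image_length - first_row;
        return remaining < rows_per_strip ? remaining : rows_per_strip;
    }

    int main(void)
    {
        /* 100 rows in strips of 16: six full strips plus a 4-row tail. */
        uint32_t strips = (100 + 16 - 1) / 16;   /* rounds up to 7 */
        for (uint32_t s = 0; s < strips; s++)
            printf("strip %u: %u rows\n", s, rows_in_strip(100, 16, s));
        return 0;
    }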
Posted Jan 17, 2015 10:32 UTC (Sat)
by tialaramex (subscriber, #21167)
[Link]
So, I'm not saying it's garbage because I don't understand how to use it, I'm saying it's garbage because I do understand and I don't sympathise.
Posted Jan 19, 2015 17:45 UTC (Mon)
by Baylink (guest, #755)
[Link] (8 responses)
The problem with systemd, as much as any other problem, is that Lennart and the distro managers who've drunk his FlavorAID decided that my time was theirs to allocate. I had other more important things to do than to spend time learning an entirely new core system component that gives me no measurable advantage.

That's *aside* from how thoroughly it violates nearly every precept of three decades of Unix design philosophy.
Posted Jan 19, 2015 22:18 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (7 responses)
> The problem with systemd, as much as any other problem, is that Lennart and the distro managers who've drunk his FlavorAID decided that my time was theirs to allocate.

Which is of course a proposition that is entirely different from your deciding that the distribution managers' time is yours to allocate, by requiring them to keep the existing hodgepodge of accidentally-combined components on life support forever.

> I had other more important things to do than to spend time learning an entirely new core system component that gives me no measurable advantage.

Great, so go use Slackware.

> That's *aside* from how thoroughly it violates nearly every precept of three decades of Unix design philosophy.

That claim has been debunked so often it isn't funny anymore. The main distinguishing characteristic of the existing haphazard setup is that, unlike systemd, it has no discernible design philosophy whatsoever – it is a thrown-together mixture of components from a variety of unrelated sources, with lots of almost-duplication, a disparate zoo of configuration file formats, and dismal documentation.
As far as the famous and much-maligned “Unix design philosophy” is concerned, there can be no doubt that in the traditional setup there are lots of bits and pieces that each do one thing – but it requires quite a fanciful imagination to claim that they do that thing well.
Posted Jan 20, 2015 19:31 UTC (Tue)
by flussence (guest, #85566)
[Link] (2 responses)
What a tactless insult to the people whose work you've been freeloading off of for years before systemd.
Posted Jan 20, 2015 23:02 UTC (Tue)
by anselm (subscriber, #2796)
[Link]
It's not the distribution maintainers' fault that the traditional setup is so terrible. It isn't even the upstream software authors' fault. They all had an itch and they scratched it. The problem is that the various bits and pieces arose over a period of 20 years or so, and that nobody really talked to anybody else. For quite some time, the traditional setup used to be the best available solution to the problem at hand, and the distribution maintainers did great and mostly thankless work integrating and improving it. That does not detract from the fact that seen as a complete software system the traditional setup leaves a lot to be desired, especially compared to modern approaches like systemd.
Many systemd detractors do not seem to appreciate that one reason why so many distributions fell over themselves to adopt systemd is that systemd actually makes a distribution maintainer's life a lot easier. It basically saves a distribution from having to implement and maintain a lot of (often distro-specific) basic plumbing that is a hassle to design, build, and keep going. People who instead prefer their Linux distribution to be untainted by systemd are free to do that work themselves, or (as the distribution maintainers have mostly done it for them already) at least shoulder the burden of maintaining it going forward once the original distribution maintainers have decided – as is their privilege – that systemd is a more worthwhile use of their time. In that sense something like Devuan is a great idea because it will hopefully soak up all those folks who would otherwise keep hassling the Debian maintainers about sysvinit.
Posted Jan 21, 2015 15:31 UTC (Wed)
by judas_iscariote (guest, #47386)
[Link]
stuff barely has any manpower to keep it in a working state. the way it is integrated is also beyond terrible.
Posted Feb 2, 2015 20:30 UTC (Mon)
by nix (subscriber, #2304)
[Link] (3 responses)
Posted Feb 2, 2015 22:41 UTC (Mon)
by raven667 (subscriber, #5198)
[Link] (2 responses)
Reminds me of http://mjg59.livejournal.com/136274.html
Posted Feb 3, 2015 8:43 UTC (Tue)
by dgm (subscriber, #49227)
[Link] (1 responses)
> Closing the lid of my laptop should suspend the system regardless of whether it's logged in or not.
Absolutely. That's why power policy should *never* be related to the user's desktop session.
Posted Feb 3, 2015 15:55 UTC (Tue)
by raven667 (subscriber, #5198)
[Link]
Once you need to have software running to interact with the user present at the console, you need a context to run that software in; having a user session dedicated to the login screen allows it to have the same protections as any user and makes it less of a special case.
I dunno, makes sense to me anyway.