What to expect tomorrow? Integration with PulseAudio, maybe?
Changes coming for systemd and control groups
Posted Jun 22, 2013 8:46 UTC (Sat) by heijo (guest, #88363)
The only issue: it must be maintained by people who are well regarded and effective maintainers.
To solve this, I think it might be a good idea to consider merging systemd into the Linux repository, so that Linus, who probably has a better reputation than Lennart, would have ultimate authority.
Posted Jun 22, 2013 13:52 UTC (Sat) by drag (subscriber, #31333)
I agree. The Linux kernel provides a unifying effect for all Linux distros.
The same thing can be done at the 'Linux plumbing' level, where you have low-level functionality that needs to be available to all Linux users for all common cases. Things like: hardware detection and configuration; how to configure, load, and manage kernel modules; network configuration management; USB devices with userland drivers; firmware management; system boot-up, etc. etc.
So from the 'higher' level application layers this low-level functionality would be essentially a black box. Just common APIs to interact with.
As long as those 'Linux plumbing' APIs remain constant, it shouldn't matter what is providing them. Testing suites and API documentation would be needed to make sure that everybody's version is consistent...
Then you could create 'personalities' that sit on top of all of this using containers or whatever that distro makers could create and modify for the stuff that end users and application developers actually care about.
Gnome personality, KDE personality, Canonical/Mir/Ubuntu personality. LAMP stack personality. "Ruby on rails" style distro, Jboss distro, Android-compatible personality, etc etc.
Each 'vertical' application stack then provides its own group of software that sits in a container and manages cgroups, IP addresses/ports, etc. in its own little userland. Applications would be allowed to interact with D-Bus (and whatever) APIs provided by the Linux plumbing layer, or have things delegated to their own daemons in their own containers.
Something like that.
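For a rough idea of what "delegating a cgroup subtree to a personality" could look like with a current systemd, here is a sketch of a container unit. The unit name, image path, and resource limits are all invented for illustration; only the directives themselves (Delegate=, CPUQuota=, MemoryMax=) are real systemd options:

```ini
# /etc/systemd/system/gnome-personality.service  (hypothetical unit)
[Unit]
Description=Example "personality" container with its own cgroup subtree

[Service]
# Boot a container from a hypothetical image directory
ExecStart=/usr/bin/systemd-nspawn --boot --directory=/var/lib/machines/gnome
# Hand the container its own cgroup subtree to manage for its daemons
Delegate=yes
# Resource limits enforced by the host's plumbing layer
CPUQuota=50%
MemoryMax=2G
```

The host's init stays in charge of the outer resource limits, while everything inside the delegated subtree is the personality's own business.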
Posted Jun 22, 2013 21:22 UTC (Sat) by cataliniacob (guest, #91150)
It's certainly true that most of the people who want alternative init systems are content with one kernel. Pretty much everyone agrees that alternative browsers, for example, are a good idea; how low in the stack to draw the line for common components is worth pondering, instead of assuming the kernelspace/userspace boundary is that line.
Posted Jun 23, 2013 3:45 UTC (Sun) by Tobu (subscriber, #24111)
People also go to the trouble of patching kernels, and the kernel's rigid ABI policy wouldn't survive if you couldn't put compatibility hacks in outside libraries.
On the other hand I'm not too convinced by the syscall personality idea, because different semantics would ripple through and prevent processes from interoperating well with each other. It might be useful for running completely incompatible environments like Wine, which is oblivious to all Linux APIs, but for multiple Linux processes, introducing rarely-tested incompatibilities would not be worth it.
Posted Jun 23, 2013 18:13 UTC (Sun) by marcH (subscriber, #57642)
Posted Jun 23, 2013 20:19 UTC (Sun) by khim (subscriber, #9252)
Yes and no. Drawing the line at the system call level gave us a few quite popular full-blown OSes (not just Android but also NAS systems, wifi routers, and so on).
But it's also easy to see that only OSes which define yet another layer are successful among non-geeks.
This strongly hints that both levels are important: the system call level for OS builders, and another, higher-level one for application/hardware system builders. A system where you can mix and match everything (see Debian or any other desktop distribution) is the embodiment of mediocrity: it does a lot of things, basically everything you can ever imagine. The only problem: it does everything equally poorly. Conflicts among the low-level plumbing components are more or less constant, and the system often breaks in strange and unusual ways. These things are relatively easy to fix (a symlink here, a small config file there, and everything works again), but it's not something Joe Average can do, and it's not something Joe Geek wants to do.
Posted Jun 24, 2013 8:44 UTC (Mon) by justincormack (subscriber, #70439)
Posted Jun 22, 2013 18:00 UTC (Sat) by nwm (guest, #91288)
Posted Jun 25, 2013 5:22 UTC (Tue) by drag (subscriber, #31333)
The nice thing about standards is that there are so many to choose from.
Software is an evolving thing, right? So you take a kernel, and after it becomes good enough you can build an operating system on it. Then once the operating system is good enough you can build it into a server, desktop, or some other appliance. Once you get that built you can write software for managing hundreds of users, make application development easy, and distribute software to millions of users effortlessly. (Oh wait... traditional Linux has spent multiple decades and still hasn't gotten to that level, whereas Android did it within months.)
You see, each 'layer' performs a particular function, and then people encapsulate that functionality into bigger and more complex balls of functionality to solve bigger and harder problems.
Imagine now: a TCP/IP ethernet network.
Each layer does a job. The 'physical' layer is made up of wires and transistors and the signals that run over those wires. The data link layer performs low-level MAC addressing. Layer 3 provides routing. Then you have, mixed into that somewhere, IP packets; then TCP segments or UDP datagrams. Then on top of that you have the application-specific protocols, then protocols built on other protocols, OS network stacks with varying abilities to peer into different layers to filter, send, and forward packets and their contents to other software and hardware. And then finally the applications that take the raw data presented to them by system libraries and kernels and turn it into something that is actually useful.
It's all dirty, but effective, continuously evolving, with formal layers and rules that people must abide by. It doesn't follow the imagined purity of the ISO/OSI network stack model, but there are rules you can't break, and people are free to do whatever they want in their particular product, with the knowledge that they can test and produce practical products people can use.
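The encapsulation described above can be sketched in a few lines. This is a toy model only: the "headers" are drastically simplified stand-ins, not the real Ethernet/IP/TCP wire formats, and the addresses and ports are made up.

```python
import socket
import struct

def tcp_segment(payload: bytes, src_port: int, dst_port: int) -> bytes:
    # Toy TCP-like header: just source and destination ports (real TCP has far more)
    return struct.pack("!HH", src_port, dst_port) + payload

def ip_packet(segment: bytes, src: str, dst: str) -> bytes:
    # Toy IP-like header: just the two IPv4 addresses, 4 bytes each
    return socket.inet_aton(src) + socket.inet_aton(dst) + segment

def ethernet_frame(packet: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    # Ethernet II framing: destination MAC, source MAC, EtherType 0x0800 (IPv4)
    return dst_mac + src_mac + struct.pack("!H", 0x0800) + packet

# Each layer only wraps the layer above it and knows nothing about its contents
payload = b"GET / HTTP/1.1\r\n\r\n"
frame = ethernet_frame(
    ip_packet(tcp_segment(payload, 49152, 80), "192.0.2.1", "192.0.2.2"),
    src_mac=bytes.fromhex("020000000001"),
    dst_mac=bytes.fromhex("020000000002"),
)
# Peel the headers off in reverse order (14 + 8 + 4 bytes) to recover the payload
assert frame[14 + 8 + 4:] == payload
```

The point is that each function neither inspects nor cares what it is wrapping; that indifference between layers is exactly what lets each one evolve independently.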
Now imagine if the world ran ethernet networks like they run Linux distributions.
Which ethernet distribution are you going to go with? They all use just about the same wires and the same protocols, but they all decide to do things differently for arbitrary reasons that are mostly meaningless to other people.
Maybe they like the way the colors look in a different order in the RJ45 jacks. Maybe the TCP/IP stack uses some different ports. Like Red Hat ethernet prefers to have HTTP on port 9234.1G5, while Debian still uses the old AF port and SUSE Linux is still stuck on port 80.
Maybe one of the distributions decided that it preferred IPv6, so they made a dual-stack IPv4/IPv6 setup a huge pain in the ass to do. Maybe another distribution decided that an IPv5.3 transition protocol would be terrific. Maybe Debian decided that it's going to continue to offer IPv4, IPv5.3, IPv6, and IPX/SPX, just in case one or the other might be better. Meanwhile, KDE-brand routers decided it would be great to let people randomly choose whatever firmware version they wanted for their routers, while Gnome people figured that the only thing that should work on their stuff is their version of IPv6.2, in order to make everything simple for novice network administrators.
Now keep in mind that they are all using the same stuff. The same Ethernet hardware, same wires, same network stacks. Just all configured differently, and people have to design and support every potential combination if they want things to work on more than one ethernet distro.
But this is all for the sake of potential innovation, is it not?
How can you continue to improve networks without people just randomly being able to subtly change how all of it works?
For anybody running a large network this is just insane thinking.
With Linux distributions, however, there are only two layers: there is the 'kernel' and then there is 'everything else'. There are a few attempts at standards, but it all seems really very arbitrary. Since 2000 my X server has supported 200+ different window managers, and I can always expect my X clients to be at least mostly functional regardless of which one I choose; yet I still can't really expect that I can use any 'Linux' audio recorder program I want to record voice off of my headset.
Change one thing, like needing to apply a security patch to a library, and it's like a volcano blowing: all the packages that depend on it are automatically regenerated and sent out to the mirrors.
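That rebuild cascade is easy to picture as a walk over a reverse-dependency graph. The package names below are invented purely to illustrate how one patched library fans out:

```python
from collections import deque

# Toy reverse-dependency graph: package -> packages that depend on it.
# These names and edges are made up for illustration.
reverse_deps = {
    "libssl": ["curl", "python", "apache"],
    "curl": ["git"],
    "python": ["meld"],
    "apache": [],
    "git": [],
    "meld": [],
}

def rebuild_set(changed: str) -> set[str]:
    """Everything that must be rebuilt when `changed` is patched."""
    seen: set[str] = set()
    queue = deque([changed])
    while queue:
        pkg = queue.popleft()
        for dependent in reverse_deps.get(pkg, []):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# One security patch to libssl triggers rebuilds all the way up the stack
print(sorted(rebuild_set("libssl")))
```

With stable layer boundaries, the cascade could stop at the first layer whose API is unchanged; without them, it runs to the leaves.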
All I am imagining (I can't even say suggesting, because at this point it is just fantasy) is that Linux distributions decide to formalize more layers, so that people are free to innovate within their layer without causing huge headaches for everybody else. Of course it will never be perfect, but nothing is.
Posted Jun 24, 2013 3:11 UTC (Mon) by russell (guest, #10458)
Posted Jun 24, 2013 5:29 UTC (Mon) by mjg59 (subscriber, #23239)
Posted Jun 24, 2013 8:20 UTC (Mon) by ebiederm (subscriber, #35028)
Posted Jun 24, 2013 12:46 UTC (Mon) by Karellen (subscriber, #67644)
Posted Jun 24, 2013 16:54 UTC (Mon) by dlang (subscriber, #313)
Posted Jun 24, 2013 23:04 UTC (Mon) by marcH (subscriber, #57642)
One gene mutation or refactoring at a time.
Posted Jun 24, 2013 23:17 UTC (Mon) by dark (guest, #8483)
Posted Jun 26, 2013 15:44 UTC (Wed) by Karellen (subscriber, #67644)
If the old variation is fitter, the new one dies out/is ignored. If the new one is fitter, the old one dies out/is discarded. If they are equally fit, they both spread throughout the population.
(Note, "fitness" is a wide-ranging property. With regards to code changes, if two variants are equally performant and equally simple, then one might be "more fit" simply because it is more widely adopted, and minimising the set of patches you carry relative to others increases the "fitness" of your tree.)
Posted Jun 27, 2013 17:42 UTC (Thu) by marcH (subscriber, #57642)
Posted Jun 28, 2013 1:49 UTC (Fri) by jschrod (subscriber, #1646)
That's not evolution; that's a social-Darwinist misinterpretation of evolution.
In nature, something doesn't die out because there's another species that's "more fit". If that were the case, we wouldn't have so many species on earth. A species dies out if it is _not fit enough_ to adapt to changing environmental circumstances (which might be the pressure of other inhabitants of that environment, but that's seldom the case).
Selection does *not* mean only the fittest survive. It means the unfit-to-adapt die.
Posted Jun 25, 2013 0:14 UTC (Tue) by rgmoore (✭ supporter ✭, #75)
The evolution moves to a different part of the stack.
Posted Jun 27, 2013 2:43 UTC (Thu) by russell (guest, #10458)
Posted Jun 28, 2013 11:23 UTC (Fri) by dgm (subscriber, #49227)
Posted Jun 22, 2013 13:32 UTC (Sat) by ovitters (subscriber, #27950)
Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds