
Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:24 UTC (Mon) by amck (subscriber, #7270)
In reply to: Poettering: Revisiting how we put together Linux systems by mpr22
Parent article: Poettering: Revisiting how we put together Linux systems

That's missing the point: filing and fixing bugs is the procedure for fixing individual problems, but it offers no guarantee that there will ever be a set of software (kernel, runtime, app) that works together.

The "runtime" in this picture is a large set of libraries (all of the libs on a distro?). This will break several times a week (look at the security releases on lwn.net). Hence there is no guarantee that this stuff stays stable.

This is what distros do. It's essentially a guarantee: "we've tested this stuff to make sure it all works together, fixed it when it didn't, and froze it when it did, to give you release X. There will be point releases of X as security fixes arrive, but they won't break your apps' ABI."

That guarantee includes the kernel. Now you're breaking it by taking the kernel out of the picture, in order to avoid fixing a problem that has to be fixed within the distro anyway (versioning, compatibility checking). Why not look at the work that goes on in distros like Debian to ensure that library ABIs and APIs stay compatible, and learn from it?



Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 16:31 UTC (Mon) by ovitters (guest, #27950) [Link]

This is explained in more detail in the Google+ comments. The expectation is that the runtime would be based on a distribution; the runtime is basically just a collection of packages from a distribution. There is overlap, but you wouldn't duplicate the effort that the distributions are already making.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 17:04 UTC (Mon) by ggiunta (guest, #30983) [Link] (14 responses)

I kinda agree that this might lead to less-stable and less-observable systems in the long run, even though it can make life simpler for many people and thus see quick widespread adoption.

What if app X relies on runtime A-1, which has known security bugs, and does not claim compatibility with the current runtime A-99? The end user will just run unsafe libraries to get X working, while the developer of X can still claim his app is fully functional and has little incentive to fix it.

Bloat: even in the age of large SSDs, keeping 5 versions times 5 OS-packages installed "just for fun" is not something I'd like to do. Heck, I already resent the constant stream of updates I get on Android for apps I barely use; I really do not need to clog the pipe with 25x downloads from security.linuxDistributionZ.org.
I have seen the rise of "composer" in PHP-land, which uses a somewhat related scheme (each app magically gets all the dependencies it needs), and the times for dependency resolution and download are ugly.

What about userland apps which keep open ports? Say LibreOffice plugin X integrates with Pidgin-29, while Audacity plugin Y integrates with Pidgin-92. Even if there were a namespace for sockets, I'd rather not run two concurrent copies of the same IM application.

I wish there were a magic hammer that let us move in the other direction instead, and force-push the concept of ABI stability into the mind of every OSS developer... (In fact I use Windows as my everyday OS, mainly because its core APIs are stable, and I can generally upgrade apps independently of each other and expect them to keep working together. True, it is nowhere near Linux in flexibility.)

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 18:11 UTC (Mon) by xtifr (guest, #143) [Link] (13 responses)

> Bloat: even in the age of large SSDs, keeping 5 versions times 5 OS-packages installed "just for fun" is not something I'd like to do.

And it's not just disk use that will skyrocket. One of the advantages of shared libraries on Linux is that their memory is shared between processes. If my browser, editor, and compiler each use a different version of glibc, that means a lot more memory tied up in different copies of glibc. Not to mention the various applets and daemons I have running. Then factor in the various versions of all the other libraries these things use. The term "combinatorial explosion" comes to mind.
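
As a rough sketch of what that sharing looks like in practice: the read-only text of a shared library appears as the same file, same device and inode, mapped into every process that uses it (the PID below is illustrative):

    # The same libc image shows up in any two processes' mappings:
    grep libc /proc/self/maps
    grep libc /proc/1234/maps   # 1234: any other PID you may inspect
    # Matching device:inode pairs mean the kernel backs both mappings
    # with the same page-cache pages, so the library text is stored once.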

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 19:01 UTC (Mon) by mjthayer (guest, #39183) [Link]

I haven't tested this personally, but I have heard claims that correctly optimised static libraries can actually beat shared ones on both disk and memory usage, because each application only pulls in the parts it really needs.

Poettering: Revisiting how we put together Linux systems

Posted Sep 1, 2014 20:03 UTC (Mon) by robclark (subscriber, #74945) [Link]

> And it's not just disk use that will skyrocket. One of the advantages of shared libraries on Linux is that their memory is shared between processes. If my browser, editor, and compiler each use a different version of glibc, that means a lot more memory tied up in different copies of glibc. Not to mention the various applets and daemons I have running. Then factor in the various versions of all the other libraries these things use. The term "combinatorial explosion" comes to mind.

So... running things in a separate VM or chroot (which is what this is an alternative to) is somehow less wasteful?

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 16:15 UTC (Wed) by nye (subscriber, #51576) [Link] (10 responses)

In practice, the idea that using shared libraries reduces memory usage is basically irrelevant - even assuming it's true at all, which it may not be, as michaeljt points out.

I've just had a look at the nearest Linux machine to hand: this is only a rough estimate based on what top reports, but it appears that, out of a little under 30GB RSS, there's about 30MB shared - and that's just by adding up the 'shared' column, so I guess it's probably counting memory multiple times if it's used by multiple processes(?)
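
For anyone who wants to reproduce that estimate without squinting at top: field 3 of /proc/<pid>/statm is the number of resident shared pages. A minimal sketch, assuming 4 KiB pages; note it still counts a page once per process that maps it, so it overstates the true total:

    # Sum resident shared pages across all processes:
    cat /proc/[0-9]*/statm 2>/dev/null |
        awk '{ shared += $3 } END { printf "%.1f MB shared\n", shared * 4096 / 1048576 }'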

Either way, I'm not going to lose much sleep over a memory increase on the order of a tenth of a percent if it makes other things simpler.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 17:47 UTC (Wed) by Trelane (subscriber, #56877) [Link] (9 responses)

It would be interesting to have two Gentoo installs on the same machine: one compiled statically and one not, but otherwise identical.

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 20:50 UTC (Wed) by mjthayer (guest, #39183) [Link] (8 responses)

I will just point out to both that I was talking about correctly optimised static libraries. I suspect that these days the only correctly optimised ones are those which specifically target embedded development. I just tried statically linking the X11 libraries (all other libraries were dynamically linked) into a pretty trivial client, xkey for those who know it, and the resulting binary was one megabyte in size after stripping. I actually expected X11 to be reasonably well optimised, though that probably only applied before the days when libX11 became a wrapper around libxcb.
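
For the curious, a rough recipe for the same experiment; it assumes the static .a files for libX11 and its dependencies are installed, and the exact list of archives varies by distro:

    # Link only the X11 stack statically, everything else dynamically:
    gcc -o xkey xkey.c \
        -Wl,-Bstatic -lX11 -lxcb -lXau -lXdmcp -Wl,-Bdynamic
    strip xkey
    ls -l xkey   # around 1 MB here, as described above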

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 21:52 UTC (Wed) by Trelane (subscriber, #56877) [Link] (7 responses)

Pardon my ignorance, but what does "correctly optimised" mean, precisely?

Poettering: Revisiting how we put together Linux systems

Posted Sep 3, 2014 22:03 UTC (Wed) by zlynx (guest, #2285) [Link] (6 responses)

I believe that a properly put together static library has multiple .o files inside it. Each .o file should contain one function, possibly along with any required helper functions that aren't shared.

This is because the static linker reads .a libraries and includes only the required .o files.

A badly put together static library has one, or just a few, .o files in it. Using any function from the library pulls in all of the unrelated code as well.
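
A small demonstration of that difference with the GNU toolchain; foo.c, bar.c and main.c are hypothetical one-function files:

    # Two members, one function each:
    gcc -c foo.c bar.c
    ar rcs libfb.a foo.o bar.o
    ar t libfb.a                 # lists foo.o and bar.o as separate members
    # main() calls only foo(), so the linker extracts foo.o
    # and never touches bar.o:
    gcc -o demo main.c -L. -lfb
    nm demo | grep bar           # no output: bar() was left out

Had foo() and bar() been compiled into a single member, calling either one would have dragged in both.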

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 5:51 UTC (Thu) by mjthayer (guest, #39183) [Link] (5 responses)

Exactly. It is actually something I would expect the compiler and linker to be able to handle, say by having the compiler create multiple .o files, each containing as few functions as possible (one, or several where there are circular references).

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 6:55 UTC (Thu) by Wol (subscriber, #4433) [Link] (1 responses)

That sounds like libraries and the loader on Pr1mos. If your library was recursive it could catch you out (as could having different functions with the same name in different libraries).

Each time you loaded a library, it checked the list of unsatisfied functions in the program against the list of functions in the library, and pulled them across.

So if one library function referenced another function in the same library, you often had to load the library twice to satisfy the second reference.

I've often felt that was better than the monolithic "just link the entire library", but it does prevent the "shared library across processes" approach.

Cheers,
Wol

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 7:20 UTC (Thu) by mjthayer (guest, #39183) [Link]

That is how static linking works today with the standard GNU toolchain. If you link a binary statically, you sometimes have to list a given library twice on the linker command line for exactly that reason.
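
A minimal illustration, with hypothetical archives liba and libb that reference each other (liba's a1() calls libb's b1(), and libb's b2() calls back into liba's a2()):

    # One left-to-right pass leaves the back-reference unresolved:
    gcc -o prog main.o -L. -la -lb     # undefined reference to 'a2'
    # Listing the first archive again gives the linker a second pass:
    gcc -o prog main.o -L. -la -lb -la
    # GNU ld can also rescan a group until no new symbols resolve:
    gcc -o prog main.o -L. -Wl,--start-group -la -lb -Wl,--end-group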

Poettering: Revisiting how we put together Linux systems

Posted Sep 4, 2014 14:14 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

Yeah. -ffunction-sections -fdata-sections -Wl,--gc-sections can do that, in theory, but it makes binaries a lot bigger (and thus *more* memory-hungry) due to ELF alignment rules, and is rarely tested, complicated, and extremely prone to malfunction as a result. Use only if you are a wizard, or highly confident.
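
For reference, a sketch of how those flags fit together on a hypothetical hello.c (GNU toolchain assumed):

    # Give every function and data item its own ELF section:
    gcc -ffunction-sections -fdata-sections -c hello.c
    # Let the linker discard the sections nothing references:
    gcc -Wl,--gc-sections -o hello hello.o
    # --print-gc-sections reports exactly what was thrown away:
    gcc -Wl,--gc-sections -Wl,--print-gc-sections -o hello hello.o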

Poettering: Revisiting how we put together Linux systems

Posted Sep 5, 2014 7:54 UTC (Fri) by mjthayer (guest, #39183) [Link] (1 responses)

Yes, I can see that. Is there any reason though (I am asking you as you know considerably more about the subject than I do) why the linker would not be able to merge ELF sections during the final link if they were not yet relocated?

Poettering: Revisiting how we put together Linux systems

Posted Sep 8, 2014 15:42 UTC (Mon) by nix (subscriber, #2304) [Link]

No reason that I can see (though obviously it would have to be optional behaviour: some programs really *want* one section per function, unlike people who are just using it for GCing).

