Applications and bundled libraries
Package installation for Linux distributions has traditionally separated libraries and application binaries into different packages, so that only one version of a library would be installed and it would be shared by applications that use it. Other operating systems (e.g. Windows, Mac OS X) often bundle a particular version of a library with each application, which can lead to many copies and versions of the same library co-existing on the system. While each model has its advocates, the Linux method is seen by many as superior because a security fix in a particular commonly-used library doesn't require updating multiple different applications—not to mention the space savings. But it would seem that both Mozilla and Google may be causing distributions to switch to library-bundling mode in order to support the Firefox and Chromium web browsers.
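As a rough illustration of why the shared model pays off, one can ask the dynamic linker which installed programs pull in a given library; a single update to that one copy then covers all of them. The library name and search path below are only examples, not part of any particular distribution's tooling:

    # List programs under /usr/bin that are dynamically linked against libpng.
    # Fixing the single system copy of libpng fixes every program printed here.
    for f in /usr/bin/*; do
        ldd "$f" 2>/dev/null | grep -q 'libpng' && echo "$f"
    done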
One of the problems that distributions have run into when packaging Chromium—the free software version of Google's Chrome browser—is that it includes code for multiple, forked libraries. As Fedora engineering manager Tom "spot" Callaway put it: "Google is forking existing FOSS code bits for Chromium like a rabbit makes babies: frequently, and usually, without much thought." For distributions like Fedora, with a "No Bundled Libraries" policy, that makes it very difficult to include Chromium. But it's not just Chromium.
Mozilla is moving to a different release model, which may necessitate distribution changes. The idea is to include feature upgrades as part of minor releases—many of which are done to fix security flaws—which would come out every 4-6 weeks or so. Major releases would be done at roughly six-month intervals, and older major releases would stop being supported soon after a subsequent release. Though the plan is controversial—particularly merging security and features into the minor releases—it may work well for Mozilla and for the bulk of its users, who are on Windows.
Linux distributions often extend support well beyond six months or a year, though. While Mozilla is still supporting a particular release, that's easy to do, but once Mozilla stops that support, it becomes more difficult. Distributions have typically backported security fixes from newer Firefox versions into the versions that they shipped, but as Mozilla moves to a shorter support window that gets harder to do. Backporting may also run afoul of the Mozilla trademark guidelines—something that led Debian to create "Iceweasel". The alternative, updating Firefox to the most recent version, has its own set of problems.
A new version of Firefox is likely to use updated libraries, different from those that the other packages in the distribution use. Depending on the library change, it may be fairly straightforward to use it for those other applications, but there is a testing burden. Multiple changed libraries have a ripple effect as well. Then there is the problem of xulrunner.
Xulrunner is meant to isolate applications that want to embed Mozilla components (e.g. the Gecko renderer) from changes in the Mozilla platform. But xulrunner hasn't really committed to a stable API, so updates to xulrunner can result in a cascade of other updates. There are many different packages (e.g. Miro, epiphany, liferea, yelp, etc.) that use xulrunner, so changes to that package may require updates to those dependencies, which may require other updated libraries, and so on.
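The size of that ripple can be estimated with the distributions' own packaging tools; the package names below (xulrunner-1.9 on Debian-style systems, xulrunner on Fedora) are examples and vary by release:

    # Debian/Ubuntu: which packages depend on the xulrunner runtime?
    apt-cache rdepends xulrunner-1.9

    # Fedora (yum-utils): ask the same question of the configured repositories
    repoquery --whatrequires xulrunner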
The Windows/Mac solution has the advantage that updates to Firefox do not require any coordination with other applications, but it has its own set of downsides as well. Each application needs some way to alert users that there are important security fixes available and have some mechanism for users to update the application. Rather than a central repository that can be checked for any pending security issues, users have to run each of their installed applications to update their system. Furthermore, a flaw in a widely used library may require updating tens or hundreds of applications, whereas, in the Linux model, just upgrading the one library may be sufficient.
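That central clearinghouse is visible in the package tools themselves: one query covers every installed package, regardless of which applications are affected. The commands are illustrative and assume the yum security plugin or a Debian-style apt setup:

    # Fedora/RHEL with yum-plugin-security: list only the pending security updates
    yum --security check-update

    # Debian/Ubuntu: simulate an upgrade to see what would be pulled in
    apt-get -s upgrade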
It would appear that Ubuntu is preparing to move to the bundled library approach for Firefox in its upcoming 10.04 (Lucid Lynx) release. That is a "long-term support" (LTS) release that Ubuntu commits to supporting for three years on the desktop. One can imagine that it will be rather difficult to support Firefox 3.6 in 2013, so the move makes sense from that perspective. But there are some other implications of that change.
For one thing, the spec mentions the need to "eliminate embedders" because they could make it difficult to update Firefox: "non-trivial gecko embedders must be eliminated in stable ubuntu releases; this needs to happen by moving them to an existing webkit variant; if no webkit port exists, porting them to next xulrunner branch needs to be done." Further action items make it clear that finding WebKit alternatives for Gecko-embedders is the priority, with removal from Ubuntu (presumably to "universe") being the likely outcome for most of the xulrunner-using packages.
In addition, Ubuntu plans to use the libraries that are bundled with Firefox, rather than those that the rest of the system uses, at least partially because of user experience issues: "enabling system libs is not officially supported upstream and supporting this caused notable work in the past while sometimes leading to a suboptimal user experience due to version variants in the ubuntu released compared to the optimize version shipped in the firefox upstream tarballs." While it may be more in keeping with Mozilla's wishes, it certainly violates a basic principle of Linux distributions. It doesn't necessarily seem too dangerous for one package, but it is something of a slippery slope.
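The switch Ubuntu is declining to flip lives in Firefox's build configuration; a distribution build typically sets a handful of system-library options in its mozconfig, roughly like the sketch below. The exact option names varied across Firefox 3.x releases, so treat the list as illustrative:

    # Fragment of a distro-style mozconfig: prefer system copies over the bundled ones
    ac_add_options --with-system-nspr
    ac_add_options --with-system-nss
    ac_add_options --with-system-zlib
    ac_add_options --with-system-jpeg
    ac_add_options --with-system-png
    ac_add_options --enable-system-cairo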
The release model for Chromium is even more constricting, as each new version is meant to supplant the previous one. As Callaway described, it contains various modified versions of libraries, which makes it difficult for distributions to officially package it in any way other than with bundled libraries. If that happens in Ubuntu, for example, it would double the number of applications shipped with bundled libraries. Going from one to two may seem like a fairly small thing, but will other upstreams start heading down that path?
The Fedora policy linked above is worth reading for some good reasons not to bundle libraries, but there are some interesting possibilities in a system where that was the norm. Sandboxing applications for security purposes would be much easier if all of the code lived in one place and could be put into some kind of restrictive container or jail. Supporting multiple different versions of an application also becomes easier.
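A very crude version of that container idea can be sketched with nothing more than a dedicated user and a chroot; the user name and paths here are invented for the example, and a real sandbox would need considerably more care:

    # Run a self-contained, bundled application as its own unprivileged user
    # inside a chroot, so an exploited browser only sees its own files.
    useradd --system -d /srv/jail/app appuser
    mkdir -p /srv/jail/app
    cp -a /opt/bundled-app /srv/jail/app/    # the app plus its bundled libraries
    # Note: the jail also needs the dynamic loader and libc unless the bundle
    # is truly self-contained or statically linked.
    chroot --userspec=appuser /srv/jail/app /bundled-app/bin/app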
It is fundamentally different from the way Linux distributions have generally operated, but some of that is historical. While bandwidth may not be free, it is, in general, dropping in price fairly quickly. Disk space is cheap, and getting cheaper; maybe there is room to try a different approach. The distribution could still serve as a central repository for packages and, perhaps more importantly, as a clearinghouse for security advisories on those packages.
Taking it one step further and sandboxing those applications, so that any damage caused by an exploit is limited, might be a very interesting experiment. The free software world is an excellent candidate for that kind of trial; in fact, it is hard to imagine it being done any other way, since the proprietary operating systems don't have as free a hand to repackage the applications that they run. It seems likely that the negatives will outweigh the advantages, but we won't really know until someone gives it a try.
Posted Mar 17, 2010 20:27 UTC (Wed)
by lotzmana (subscriber, #3052)
[Link]
Many times upstream developers simply don't care, and rightfully so. Their time should be focused on the actual development of features, bug fixes or security fixes. If developing free software is not paid and done after hours, then there is not much time left to squander in the compatibility impasse that sometimes distributions throw at you. Such an approach can greatly help GUI developers in general, who face a myriad of combinations of compatible library versions on the variety of target distributions.
Posted Mar 17, 2010 20:30 UTC (Wed)
by agl (guest, #4541)
[Link] (37 responses)
Keep in mind that we build on Windows, so much of the code in our third_party directory is there because we need that code on Windows. There
Having said that we do have a number of forks. Here's an unrepresentative
libevent: we needed bug fixes and we needed to be able to run on systems
icu: we need a more recent version than was even provided on Karmic.
libjingle: upstream appears to be unmaintained.
sqlite: we added full-text indexing (now upstream) and several performance
nss: we push patches upstream, but we are working on this heavily. Even so,
I'm going to avoid making philosophical points here. I only wanted to give
Posted Mar 17, 2010 20:37 UTC (Wed)
by jospoortvliet (guest, #33164)
[Link] (20 responses)
I don't care much about disk space or bandwidth, but memory is a different
Posted Mar 17, 2010 21:39 UTC (Wed)
by agl (guest, #4541)
[Link] (19 responses)
In practice, when just running Firefox or Chromium, the duplication is
Probably ICU's tables are the largest amount of duplication and that might
Posted Mar 17, 2010 22:02 UTC (Wed)
by jospoortvliet (guest, #33164)
[Link] (18 responses)
Posted Mar 17, 2010 22:02 UTC (Wed)
by jospoortvliet (guest, #33164)
[Link]
Posted Mar 17, 2010 22:25 UTC (Wed)
by yokem_55 (subscriber, #10498)
[Link] (16 responses)
Posted Mar 18, 2010 4:51 UTC (Thu)
by djao (guest, #4263)
[Link] (14 responses)
It still uses enormous amounts of memory (192M), but alternatives such as evince (147M) are not exactly efficient either. There's no embedded copy of GTK (I checked). It feels just as fast as evince, and in certain important objective respects (such as the proportion of screen space wasted by the chrome) it improves on evince by quite a large margin. In terms of stability, it's superior to the free alternatives -- I've never seen acroread crash, whereas I've seen plenty of crashes with the free pdf readers.
Finally, acroread uses system settings for font antialiasing, including subpixel antialiasing on LCD screens, which evince does not do (although my own copy of evince is patched to support this feature, because I really like it).
As I said above, I dislike the idea of proprietary software, but all things being equal, I'd much prefer acroread to be good than to be bad, and I have to admit it's getting good.
Posted Mar 18, 2010 5:38 UTC (Thu)
by roelofs (guest, #2599)
[Link] (5 responses)
Hoo boy, I sure have. Of course, I use an old version (37M) in preference to the JS-infested privacy disaster Adobe is currently shipping, so that probably has something to do with it. But after a few weeks or months of use, it frequently either locks up with/on an X grab or just blows itself out of the water altogether. It's not frequent enough to be a showstopper, but it is mildly annoying, and I use both Evince and xpdf as well (especially for forms).
Greg
Posted Mar 18, 2010 6:53 UTC (Thu)
by djao (guest, #4263)
[Link] (4 responses)
The best software for PDF forms, without question, is flpsed. It allows arbitrary annotations, even on PDFs that don't include embedded forms. You can save your work at any time and edit it later (hardly worth advertising as a feature, except for the fact that Acrobat Reader doesn't allow it), and the resulting output files are small and correct. It's also free software (GPL) and quite robust and stable.
Posted Mar 18, 2010 7:40 UTC (Thu)
by evgeny (subscriber, #774)
[Link] (3 responses)
I suggest you take a look at xournal (<http://xournal.sourceforge.net/>). Although it was primarily intended for use with a tablet input, later versions also have support for entering typeset text from keyboard. I use it exclusively for PDF annotations.
Posted Mar 18, 2010 8:10 UTC (Thu)
by djao (guest, #4263)
[Link] (2 responses)
Posted Mar 18, 2010 8:33 UTC (Thu)
by evgeny (subscriber, #774)
[Link] (1 responses)
Posted Mar 18, 2010 9:06 UTC (Thu)
by djao (guest, #4263)
[Link]
I still believe, however, that flpsed is better for PDF forms, which invariably consist largely of data entry.
Posted Mar 18, 2010 7:43 UTC (Thu)
by jospoortvliet (guest, #33164)
[Link] (7 responses)
But I find Okular far superior to adobe's product, mostly in the UI
Posted Mar 18, 2010 8:15 UTC (Thu)
by djao (guest, #4263)
[Link] (6 responses)
Perhaps you're using an old or misconfigured version of Acrobat Reader?
Posted Mar 18, 2010 9:46 UTC (Thu)
by jospoortvliet (guest, #33164)
[Link] (5 responses)
In Okular you can just scroll on, no extra movements needed. It's a small touch, extremely intuitive and I only figured out I was using it when someone pointed it out ;-)
Posted Mar 18, 2010 18:33 UTC (Thu)
by djao (guest, #4263)
[Link] (4 responses)
Posted Mar 18, 2010 21:34 UTC (Thu)
by jospoortvliet (guest, #33164)
[Link] (3 responses)
It's tiny, hard to understand (clearly) if you haven't seen it, yet
Similarly nice is finding stuff in Okular, btw. I find the search bar on
Also the automatic scrolling (shift-arrow down) is very nice, I've used
Again, tiny differences, but as I don't use any of the advanced stuff but
Posted Mar 18, 2010 22:17 UTC (Thu)
by dlang (guest, #313)
[Link]
I don't know if this is implemented by the window manager or by the individual app.
Posted Mar 18, 2010 22:22 UTC (Thu)
by djao (guest, #4263)
[Link] (1 responses)
One problem is that, most of the time, when I'm scrolling through a PDF, I want to read the pages from the top down (i.e. scroll forward through the file). However, in order to scroll forward by dragging the main page with the left mouse button, the mouse cursor itself actually has to move up in order for the page content to move down. So my mouse cursor never hits the bottom of the screen like you describe, unless I'm scrolling backwards, which happens very rarely. When I scroll forward, the cursor hits the top of the screen, and when it hits the top, it certainly doesn't automatically wrap the cursor to the bottom.
Since I cannot reproduce this behavior, I have to make certain assumptions about what you mean. Assuming you meant that the mouse cursor wraps from top to bottom, I can see how it would be a worthwhile option, but I would never use it myself. Most of my pdf reading occurs on a laptop, with a touchpad, in which case dragging the page is worse than useless -- it requires holding down a button as well as moving a finger along the touchpad, whereas the scrollwheel is built in to the touchpad and only requires moving a finger along the touchpad, and thus involves strictly less work. The only time I use dragging is for fine (pixel-level) scrolling control that cannot be achieved with the scroll wheel, but in such cases wraparound is unnecessary.
In addition to the lack of utility, my own opinion is that the bottom of the screen should be an absolute boundary to movement, not an invitation to wrap the cursor around to the top of the screen, no matter how worthy the justification may be. Moreover, if the PDF is displayed in a window, rather than full screen, then automatic cursor wraparound would be even more jarring, as it would jump from the top of the window to the bottom of the window rather than the top of the screen to the bottom of the screen.
Posted Mar 19, 2010 17:06 UTC (Fri)
by jospoortvliet (guest, #33164)
[Link]
But I guess everyone's habits are different, as are preferences ;-)
I just wanted to illustrate a very small yet nice feature Okular has which makes it (to me) nicer than Acrobat. It has more of those, of course ;-)
Posted Mar 18, 2010 13:06 UTC (Thu)
by mjthayer (guest, #39183)
[Link]
Posted Mar 17, 2010 20:45 UTC (Wed)
by jspaleta (subscriber, #50639)
[Link] (3 responses)
Can things be engineered such that functionality from rejected/yet-to-be-approved patches to upstream can be disabled cleanly in rebuilds? You've alluded that this is the case for nss; can that also be the case for sqlite and others where there is an active upstream?
For libjingle... if the upstream project is verifiably dead, why doesn't Google spin up its libjingle as a separate project for distributors to pull releases from?
-jef
Posted Mar 17, 2010 21:38 UTC (Wed)
by tzafrir (subscriber, #11501)
[Link] (2 responses)
Posted Mar 17, 2010 21:41 UTC (Wed)
by jspaleta (subscriber, #50639)
[Link] (1 responses)
-jef
Posted Mar 20, 2010 12:51 UTC (Sat)
by man_ls (guest, #15091)
[Link]
Posted Mar 18, 2010 0:15 UTC (Thu)
by shahms (subscriber, #8877)
[Link] (1 responses)
As much griping as goes on about Google's forks and patches I generally
For open source projects, it's usually not an issue. You publish the
1) Updating to the upstream version that has your patches
Yes, it's a little extra work for the repository maintainer, but not much.
Now, for closed source apps, the burden is on the application developer
I'm not entirely up to speed on the Firefox/Mozilla situation, but the last
Posted Mar 22, 2010 18:39 UTC (Mon)
by lkundrak (subscriber, #43452)
[Link]
Posted Mar 18, 2010 3:59 UTC (Thu)
by blitzkrieg3 (guest, #57873)
[Link]
No one is decrying your use of a third_party directory. Mozilla does this too.
However, it's a bit harder to use system libraries than you make it seem. For example, looking through spot's src.rpm, I see no less than 7 patches to hardcode stuff to use system libs. Take this commit for example. Pretty much every file references ../third_party/icu, but when you look for the actual directory, it isn't there!
This sounds like an excuse. I don't think that Mozilla is that much better, and spot is incorrect that they don't bundle libraries (--with-system-png is commented out in F13 alpha and I know Mozilla has their own brand of cairo), but with a few changes we can duplicate less work and simultaneously get a better product.
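For what it's worth, Chromium's gyp build later grew defines for some of this, so at least part of the hardcoding can be expressed as build flags rather than per-file patches. The flag names below are from memory of those trees and should be treated as illustrative rather than authoritative:

    # Ask Chromium's gyp step to wire up system copies of some bundled libraries
    GYP_DEFINES="use_system_zlib=1 use_system_bzip2=1 use_system_libxml=1" ./build/gyp_chromium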
Posted Mar 18, 2010 9:48 UTC (Thu)
by jengelh (guest, #33263)
[Link] (5 responses)
So what? If $distro ships libevent2-1.4.9 for example, there are two choices:
sqlite: if upstream is not interested, rename it, and let the distro have a sqlite-google package created (this also requires that you make it use a different SONAME than the pristine sqlite), or, if it's API-compatible, have the distro replace the original sqlite/import your patch.
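Giving the fork its own SONAME is a one-line change at link time, after which the pristine and patched libraries can be installed side by side without the dynamic linker confusing them. The "google" suffix and file names here are just example choices:

    # Build the patched fork with a distinct SONAME so it cannot shadow the real sqlite
    gcc -shared -fPIC -Wl,-soname,libsqlite3-google.so.0 \
        -o libsqlite3-google.so.0.8.6 sqlite3.c -lpthread -ldl

    # Verify the SONAME that got embedded
    objdump -p libsqlite3-google.so.0.8.6 | grep SONAME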
Posted Mar 18, 2010 15:04 UTC (Thu)
by NAR (subscriber, #1313)
[Link] (4 responses)
What happens if libevent.so.2-1.4.10 breaks some other application? There are a surprising number of applications that actually depend even on specific bugs being present in a library (because e.g. the bug was there years ago and they've implemented a workaround which will be broken by the proper fix)...
Posted Mar 18, 2010 15:11 UTC (Thu)
by jengelh (guest, #33263)
[Link]
Posted Mar 18, 2010 15:28 UTC (Thu)
by cortana (subscriber, #24596)
[Link] (2 responses)
Posted Mar 19, 2010 11:30 UTC (Fri)
by smurf (subscriber, #17840)
[Link] (1 responses)
Posted Mar 19, 2010 11:36 UTC (Fri)
by jengelh (guest, #33263)
[Link]
Posted Mar 18, 2010 14:25 UTC (Thu)
by alex (subscriber, #1355)
[Link]
One of the reasons I run a source based distro is so I can run with recent
Of course this would involve keeping careful track of the divergence of
Posted Mar 18, 2010 15:42 UTC (Thu)
by judas_iscariote (guest, #47386)
[Link] (1 responses)
Simple, abort build saying libevent x.y.z is needed, distributors will
>icu: we need a more recent version than was even provided on Karmic.
Same here.
>libjingle: upstream appears to be unmaintained.
Huh, what about asking your own company to support its own code?
> sqlite: we added full-text indexing (now upstream) and several
Contact distributors and ask them to add your patches to the system
Posted Mar 22, 2010 18:43 UTC (Mon)
by lkundrak (subscriber, #43452)
[Link]
Posted Mar 17, 2010 20:39 UTC (Wed)
by cdamian (subscriber, #1271)
[Link]
At the moment I see the distributions as the only voice which stops the bundling of libraries. And with this they also reduce the number of forks for these libraries. For an application developer it is always easier to bundle libraries and patch them up to fit the application's needs instead of fighting for these changes with upstream.
This will slow down the development of the libraries and also reduce the quality. And once this happens the applications will have nothing left to bundle.
In my opinion one reason why open source is so good is the reuse and improvement of the libraries all the applications are using.
And once libraries get bundled, what will stop the bundling of applications? It might make sense for Firefox or OpenOffice to bundle stuff like MySQL, Gimp, ImageMagick or maybe sshd in slightly changed and "improved" versions.
And then there are still the already mentioned disk space, memory usage and security problems.
Posted Mar 17, 2010 21:03 UTC (Wed)
by ikm (guest, #493)
[Link] (6 responses)
Posted Mar 17, 2010 23:59 UTC (Wed)
by cmccabe (guest, #60281)
[Link] (5 responses)
It sounds like the only thing you're really getting from using the official binary is the branding.
Posted Mar 18, 2010 0:59 UTC (Thu)
by ikm (guest, #493)
[Link] (4 responses)
I understand the need for the distributor when the package is distributed upstream in source form only, without dependencies etc, but I really see no need for that in case of a self-contained self-updating Firefox -- the distributor here only adds latencies, complexities and quirks. Or so is my experience.
p.s. And yes, last and the least, I hate this idiotic 'iceweasel' name.
Posted Mar 18, 2010 1:26 UTC (Thu)
by clump (subscriber, #27801)
[Link] (3 responses)
Iceweasel has always worked well for me, and I like Debian's commitment to security. An added benefit of Iceweasel is how many architectures it runs on.
Posted Mar 18, 2010 19:23 UTC (Thu)
by sytoka (guest, #38525)
[Link] (2 responses)
amd64 armel hppa i386 ia64 kfreebsd-amd64 kfreebsd-i386 mips mipsel powerpc s390 sparc
Now look at the Mozilla foundation: Windows, Mac OS X, Linux.
Many upstream projects don't care about their software on many architectures...
Support for other architectures also improves software quality.
Posted Mar 18, 2010 20:27 UTC (Thu)
by ikm (guest, #493)
[Link] (1 responses)
On a side note, I'd like to warn you that the mere availability of a package in Debian for some rare arch doesn't really mean that the program in that package actually works on that arch just fine. I am, for example, the author of an app which is present in Debian, and mind you, despite the fact that it currently just doesn't work correctly on any big-endian architecture, in Debian it is present for all arches, including the big-endian ones. Of course, there are no bug reports pertaining to those problems -- no one has ever actually tried to use those packages there.
Posted Mar 20, 2010 1:00 UTC (Sat)
by jrn (subscriber, #64214)
[Link]
Posted Mar 17, 2010 21:04 UTC (Wed)
by Frej (guest, #4165)
[Link] (22 responses)
Distro's tend to want monolithic control over everything, even if it potentially hurts users and
In practice it's similar to iphone/appstore, but they are evil and distro's are heroes.... ;)
Posted Mar 17, 2010 23:51 UTC (Wed)
by shahms (subscriber, #8877)
[Link] (7 responses)
And it's not like the distros are iron-fisted dictators. All of the major
Posted Mar 18, 2010 2:11 UTC (Thu)
by drag (guest, #31333)
[Link] (6 responses)
Posted Mar 18, 2010 3:11 UTC (Thu)
by thedevil (guest, #32913)
[Link] (1 responses)
I don't consider myself "average". Why should I welcome a product explicitly targeted at "the average population"? I don't care about world domination one bit. Windows can have 99% of the market for all I care. I want something that serves me (a programmer) well. Windows isn't it, and Linux which works just like Windows (hi, Gnome!) wouldn't be it, either.
Besides, you contradict yourself:
"... diverse needs of the average population."
"... the better off *all* of us are going to be."
It can't very well be both.
Posted Mar 18, 2010 9:14 UTC (Thu)
by dgm (subscriber, #49227)
[Link]
The same logic applies to software. If we hide in the proverbial Ivory Tower, the OS that so wonderfully works for us will languish and die.
The moral being: be a little more humble, my friend.
Posted Mar 18, 2010 18:47 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
Strongly disagree.
First, developers are interested in developing, so let them do that. Let others take over the packaging, lest you lose developers (ouch!).
Second, not all distributions are equal. They do have different aims: there are "enterprise" distributions, which are committed to UI/API/ABI stability at almost any cost; there are "end user" distributions committed to making the ordinary user's life as simple as possible; then there are the "technology showcase" ones, always shipping the latest and greatest; and "source code only" distributions which take pride in shipping (almost) unchanged upstream sources, mostly from the bleeding edge. Asking any developer to ship packages for each of those use cases means driving them to sheer madness.
Third, just shipping some "standard package" doesn't help a bit. Witness what became of LSB, which aimed at providing a common base for shipping binaries. The stuff they standardized on was too minimal and too soon outdated to be of any relevance.
Fourth, in FLOSS we have the option to fix the source, and package that. Most closed source applications are bought once, and users would scream if they had to pay again just because the operating system vendor decided to update some random library. And this works both ways, the applications are forced to aim at a low common denominator (or ship their own environment), while the vendor has to bend over backwards so as to minimize breakage (the story of the Microsoft "fix" for a bug in Simcity is just a case in point). And support the resulting mess for 10 or more years (take Windows XP, which refuses to die to this day, after two successor systems shipped) and even longer (there are "backward compatibility" layers in current Windows systems dating back who knows when).
Lastly, one of the strengths of Linux (and Unixy systems in general) is precisely their diversity. Your comment talks only of Linux distributions, and predicts there will only be one distribution in a short while. Sorry to disappoint you, this has been predicted often during the last 20 or so Linux years, and if anything is farther from reality now than ever before. And then you have to figure in other systems like Solaris, Apple's OSX, the swarm of BSDs, and even stuff like CygWin, not to mention closed and/or obsolescent systems, not to mention the sprouting of embedded systems due to ever growing capacities of "small" systems (my cellphone is way more of a computer than the first machine I programmed ever was...). Think of it as a sort of darwinian breeding ground for software: If it is able to survive in many of those environments, it is probably fit for human consumption. Sure enough, it is also our greatest curse...
Posted Mar 21, 2010 18:31 UTC (Sun)
by ikm (guest, #493)
[Link] (1 responses)
Funnily enough, that's exactly what happens under Windows. When you release a piece of software for Windows, all you have to provide is an .exe installer. Why? Because that's what end users expect and because it's quite easy to do (you only have one single platform and packaging format). And everyone's happy! Yes, libraries get bundled, they waste space, have bugs, but this doesn't seem like a major issue for most programs.
In contrast, under Linux you just can't provide 1) debs for three flavors of Debian and two flavors of Ubuntu, 2) rpms for each of the existing RH-derivatives, 3) ebuilds for gentoo and gentoo-derived distros, 4) whatever else other formats are out there. This is just crazy! Thus the only reliable and easy form is to provide compilable source. And let all those zillions of distros do the rest.
So what plagues linux, in my opinion, is that hailed "diversity". It's just too diverse to provide an easy way to install a piece of software. Linux needs its own Microsoft to make something a standard at last.
Posted Mar 22, 2010 16:06 UTC (Mon)
by Frej (guest, #4165)
[Link]
In short this way normal users actually have a chance of
But i agree, if you want to manage your computer (i think it's fun...) linux is for you. But if you don't, package systems are pretty annoying.
Posted Mar 28, 2010 11:11 UTC (Sun)
by cas (guest, #52554)
[Link]
distributions are made by "big picture" systems people. they want the ENTIRE system to work smoothly as an integrated whole. i.e. they're mostly systems administrators rather than programmers (although there's a lot of crossover there's also a very obvious distinction between the two).
applications like firefox, chrome, etc are made by programmers. their focus is far narrower, all they really care about is their application - even at the expense of the larger system that it will be installed on.
this is not to say one kind of developer or the other is "better" - they're not, and BOTH are absolutely essential. but software works best when programmers and sysadmins work together, rather than try to work around each other.
Posted Mar 18, 2010 0:11 UTC (Thu)
by cmccabe (guest, #60281)
[Link] (2 responses)
Giving developers more power is almost never what you want to do. You want to give power to the users and system administrators.
Some system administrators are conservative. They just don't want to apply any patches except security updates. They might use RHEL 5 or something like this. They ought to be able to follow this policy without interference from developers.
If developers have to add an #ifdef somewhere in the code to make this happen, it's a small price to pay for stability and security.
> Distro's tend to want monolithic control over everything, even if it
Users shouldn't have to manually update every piece of software on their computer. If it weren't the distro's responsibility, security would fall on to the users and system administrators-- another burden.
Microsoft would love to have a single update button that you could press to update all the software on your Windows PC. They've made that a reality for all the software they directly control. But they can't do it for third party software.
Posted Mar 18, 2010 8:40 UTC (Thu)
by Frej (guest, #4165)
[Link] (1 responses)
Good point and I agree. The current system is great for sysadmins. But we need a system that
But the problem isn't just ifdefs. It's about shipping software to end-users.
>> Distro's tend to want monolithic control over everything, even if it
I never said anyone should update manually. Some software can update itself ;). And True, the
But it should be possible to create a system that can solve both, we just need to
I'm aware that money pours in by keeping sysadmins happy, and thus the system is designed
>Microsoft would love to have a single update button that you could press to update all the
I can't argue with what you think MS would love to do.
Posted Mar 20, 2010 23:10 UTC (Sat)
by cmccabe (guest, #60281)
[Link]
I use Fedora Core 12 and I turned on automatic updates.
That is "a system that works for me" and I didn't need to pay or hire anyone to support the system.
There's a lot of areas where the Linux desktop is behind Windows. But in the area of automatic updates, Linux is way ahead. This matters not only because you get nice features, but because updating regularly is an important part of securing your system.
Firefox and Chrome rolled their own update system because their main audience is Windows. There is no system-wide update on Windows, except for Microsoft's code. They could nicely strip out all their updater code on Linux, and cooperate with upstream, but it's more work than just doing things the same way on both platforms.
The bundled libraries issue is the same problem. On Windows, you have to bundle all your libraries with your app, because there's no dependency management framework. You can't really trust the DLLs in the Windows folder because someone else might have put a different version there. And you can't put your version there because you might break somebody else who needs to use the earlier version.
Anyway, web browsers are kind of special. They've almost grown into mini operating systems over the last decade. Unfortunately, most of the wheels they've reinvented were rounder the first time around. At least it's an open platform, by and large.
Posted Mar 18, 2010 3:26 UTC (Thu)
by blitzkrieg3 (guest, #57873)
[Link] (10 responses)
First of all, this is the job of the system packager, not the app dev. Second of all, major distros have package dependencies with automatic resolution these days. I know on my system it is 'yum update firefox'. Why is it better to have a dedicated application installer pulling in /usr/local/lib/libpng.so than to have the dependency resolver pull in an update to libpng?
Posted Mar 18, 2010 8:07 UTC (Thu)
by Frej (guest, #4165)
[Link] (9 responses)
Why do we even have a system packager?
Every software must be packaged N times for N distros, and the system packager will always
For a (private) laptop user and the app developer it's a mess. The the distro in complete
The point of a self-controlled application is being able to update the actual app, not libpng.
PS: Anything that requires the terminal is not a real solution for 95% of the world ;)
Posted Mar 18, 2010 11:53 UTC (Thu)
by tzafrir (subscriber, #11501)
[Link] (1 responses)
Posted Mar 18, 2010 20:14 UTC (Thu)
by Frej (guest, #4165)
[Link]
But the point is that the app developer can fix the error (including version incompatibilities) and
I'm not saying this is the best solution for everybody!
Posted Mar 18, 2010 16:51 UTC (Thu)
by viro (subscriber, #7872)
[Link] (1 responses)
Posted Mar 18, 2010 20:34 UTC (Thu)
by Frej (guest, #4165)
[Link]
It's a question of trust: why force the user to only trust the distro? Sure, other models have the
A distro letting go of control does not mean that the sysadmin has less control/more work..
Posted Mar 18, 2010 19:02 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link] (3 responses)
You are picking the wrong distro, methinks.
If you want the absolute latest source from the developer's keyboard, grab something like Gentoo or Arch. If you want to run something tested to death and guaranteed stable, pick some Enterprise distribution.
As an end user (and sometime sysadmin, and developer at times) I don't want some random developer dictating what version of their package I should run. I'm happy to run the development snapshots of some stuff at my own risk for non-critical use, but only thoroughly vetted and QA'd software on an enterprise distribution for real-world uses.
Sometimes I might decide that what the distro ships is too outdated, and (carefully counting in the extra cost of keeping track of upstream myself) replace selected packages with newer versions. But never across the board.
Posted Mar 18, 2010 20:45 UTC (Thu)
by Frej (guest, #4165)
[Link] (2 responses)
>You are picking the wrong distro, methinks. If you want the absolute latest source from the
>As a end user (and sometime sysadmin, and developer at times) I don't want some random
I'm sure a sensible system would allow a user to refuse appliation updates.
>Sometimes I might decide that what the distro ships is too outdated, and (carefully counting
Nobody can argue with that, but the app developer still has to wait/hope for N
Posted Mar 19, 2010 18:16 UTC (Fri)
by vonbrand (subscriber, #4458)
[Link] (1 responses)
I'm sorry, but "stable distro" and "latest versions of A, B, C" just don't jibe.
Whenever I really needed to install non-official software like that, after much looking around I usually decided not to do it. And when I did, when the "application A" for which truly, really, no other option worked had to get a later version, it was something very localized (not exactly (pieces of) newest Gnome or KDE), and I installed that from source (and created a package for simple installation/update). The "dependency hell problems" mentioned mostly just weren't.
Where I did install a larger set of stuff was when we had Suns with Solaris, where many pieces were almost useless (like the infamous cc or its klunky sh, or its bloated-beyond-recognition version of X). There the first step was what somebody called
Posted Mar 21, 2010 0:13 UTC (Sun)
by nye (subscriber, #51576)
[Link]
That is because in your mindset every application is inextricably part of The System. That isn't the way anyone thinks outside of the Linux ecosystem, and it's frustratingly difficult for one side to understand the other.
The average user wants to continue with the same stable system (with the appropriate fixes if they're towards the higher end of the average), but with the option of whatever software versions they choose. A Windows user doesn't expect that updating Firefox may require them to reinstall every other application on their system to support a complex web of interdependencies - the idea would be beyond ludicrous. This highly desirable goal is currently achieved by bundling libraries - perhaps it always will be.
It doesn't have to be that way, but the [overly IMO] rapid pace of Linux distribution releases means that having separate system and applications package trees would rapidly lead to massive combinatorial explosion.
Posted Mar 19, 2010 20:58 UTC (Fri)
by dirtyepic (guest, #30178)
[Link]
You've answered your own question. Because every app developer thinks their app is so important that users need to be using the latest version the day it's released, with no thought to system stability or security. Who cares about libpng indeed.
Posted Mar 17, 2010 21:13 UTC (Wed)
by jmorris42 (guest, #2203)
[Link] (3 responses)
We really have two choices, either invest the effort to create an HTML5 compatible browser for *NIX or accept the fact we are utterly dependent on a port and accept the consequences that logically follow from that. Those being we have to remember we are a parasite and thus must exert every effort to put as little strain on their resources as possible, even if it means WE have to invest considerable effort to adapt to their alien customs and compromise our best practices and even compromise security.
And no, WebKit isn't any better. It may have started as a KDE effort but it is now an Apple project so if we grow a dependency on that we still are tied to the needs of an alien system.
But does Linux even follow the "UNIX Way" itself? No, just look at the horrors the distributions suffer keeping a stable kernel through a long term release. So let's not get the pitchforks out at Moz or Google; the problem is a lot bigger.
What would really, really go a long way to solving the problem is if the libraries could actually get to a stable release that wouldn't require chasing the bleeding edge so much. Look through the dependencies of any major software and note how many 0.x versions of libraries they are linking against. Note the comment above from a Google dev, they aren't patching the heck out of libraries to be evil, they are mostly patching because they have to patch bugs and add needed features.
Just one thought on the notion of adopting Windows' every app carries copies of every lib habit. Forget EVER nailing down security because that would be as crazy as expecting Windows to ever be secure.
As for xulrunner, all I can say is Doh! It was pitched as a platform to build other apps with but there was never the slightest promise of the longterm stability that would be required to make it practical and by now we have enough actual history to show it ain't going to happen. Like all browser type products it is a roach motel so not updating isn't really an option and old versions aren't going to get patched. So anyone who was an early adopter can perhaps be forgiven for falling for hype but anyone still using it has to accept they are equally guilty, suck it up and either do the heavy lifting once to rebase on something else or keep up the constant churn involved with chasing the taillights. But either way, no bitchin' allowed.
Posted Mar 17, 2010 22:38 UTC (Wed)
by kov (subscriber, #7423)
[Link]
You say: "And no, WebKit isn't any better. It may have started as a KDE effort but it is now an Apple project so if we grow a dependency on that we still are tied to the needs of an alien system." This means you're likely not really aware of what WebKit is, and how its development model works. Why do you think it is only Apple's, when Collabora, Igalia, Google, Nokia, RIM, and Samsung are all investing work in it? What port of WebKit are you talking about? Apple's ports? Google's port? GTK+, Qt, EFL, WinCE, WX, which one? The GTK+ and Qt ports are very similar to any normal GTK+ and Qt projects you'll find in your normal distribution, with API stability and all you'd expect from a normal library.
The reason Canonical is going with the WebKit branches of all the software it is able to (Epiphany, Devhelp, Yelp, Gwibber) is that the WebKitGTK+ port does not suffer from most of the badness that was listed above, and provides API/ABI stability, being pushed by a Free-Software-friendly team that is mostly GNOME developers.
It's a pity WebKitGTK+ and QtWebKit have been largely ignored by most of the articles related to this issue, though =(.
Posted Mar 18, 2010 13:26 UTC (Thu)
by gerv (guest, #3376)
[Link] (1 responses)
For Mozilla, that's simply not true - at least not "ported from _Windows_". All the way back to Netscape Navigator, the code has run on Linux and Unix. The port was never an afterthought. And today, if you surveyed Mozilla core developers' laptops for their preferred OS, if any came out on top, it would probably be Mac OS X. And yet no-one claims that the Linux version of Moz is "ported from Mac as an afterthought".
Gerv
Posted Mar 18, 2010 13:38 UTC (Thu)
by pizza (subscriber, #46)
[Link]
Posted Mar 17, 2010 21:25 UTC (Wed)
by Banis (guest, #59011)
[Link]
Games are another example; some folks would like more games developed for Linux. But a game developer has to ship their software, and when they look at Linux they see an unholy mess of utterly different distributions. They are either forced to pick one, or to package their game in a completely self-contained way, more or less like Google chose to do with Chrome.
Another class of Linux software that hits this problem is commercial engineering packages. Matlab, for instance, comes in a crazy 700M package complete with a Java runtime, ghostscript, a Firefox rendering engine, Mesa, TeX, a libstdc++ library, a libXm library, a termcap library, and more (I got tired of looking). They either do this or they cannot ship their package in a distro-neutral fashion. They probably can't even get out of internal testing without doing it.
This is something the distros would be well served in working towards a better way to handle.
Posted Mar 17, 2010 22:57 UTC (Wed)
by vomlehn (guest, #45588)
[Link] (3 responses)
Posted Mar 18, 2010 9:30 UTC (Thu)
by dgm (subscriber, #49227)
[Link]
Maybe what we need is a scaled down start, attacking the worst actual problems first, and ignoring collateral -and irrelevant- issues like which package format is to be used.
Posted Mar 18, 2010 21:07 UTC (Thu)
by aleXXX (subscriber, #2742)
[Link] (1 responses)
So, I have a binary software built on SUSE 10 (SLES). Somewhere I found
That symbol came from libstdc++, something in std::string, and it was
What was the reason ?
So what to do if you want to ship a binary application ?
Use some other STL implementation, like STLport or stdcxx from Apache ?
Alex
Posted Mar 18, 2010 22:14 UTC (Thu)
by dlang (guest, #313)
[Link]
if the application only uses the things that are defined by LSB, then it would work on either distro.
but if the application uses things that are outside the scope of the LSB, then it may not work.
Posted Mar 18, 2010 10:15 UTC (Thu)
by __alex (guest, #38036)
[Link] (6 responses)
Posted Mar 18, 2010 10:56 UTC (Thu)
by hummassa (subscriber, #307)
[Link] (3 responses)
Posted Mar 18, 2010 12:20 UTC (Thu)
by __alex (guest, #38036)
[Link] (2 responses)
I wonder if people would still be complaining if Google had implemented their versions of some of
Posted Mar 18, 2010 17:55 UTC (Thu)
by dlang (guest, #313)
[Link]
if all the applications link to the system library you update that and everything just works.
if an application ships its own copy of the library, you have a chance of finding it if you search for it and can then replace that copy (although if it's been tweaked, you may still break that application, but at least you know that application is unsafe after that point)
if an application statically links the library, you have no way of knowing that the application is using that library, and unless the application developer notices the security alert and ships an update to the application, you won't be able to patch the vulnerability, but even worse, you won't be able to find out that the application is vulnerable in the first place.
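The difference between those last two cases shows up in how a system would be audited: a shipped copy can be found by name, while a statically linked copy can only be guessed at, for instance by searching binaries for the library's version string. The zlib name and "1.2.3" version below are just examples:

    # Find privately shipped copies of zlib outside the normal library directories
    find / -name 'libz.so*' -not -path '/lib*' -not -path '/usr/lib*' 2>/dev/null

    # Guess at statically linked copies by looking for zlib's embedded banner string
    for f in /usr/bin/* /opt/*/bin/*; do
        strings "$f" 2>/dev/null | grep -q 'deflate 1\.2\.3 Copyright' && \
            echo "possible static zlib in: $f"
    done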
Posted Mar 19, 2010 10:57 UTC (Fri)
by hummassa (subscriber, #307)
[Link]
Posted Mar 18, 2010 12:54 UTC (Thu)
by clugstj (subscriber, #4020)
[Link] (1 responses)
All other options like
Posted Mar 19, 2010 15:28 UTC (Fri)
by mgedmin (subscriber, #34497)
[Link]
Posted Mar 18, 2010 16:39 UTC (Thu)
by davecb (subscriber, #1574)
[Link]
Shipping modified system libraries may not cause the end-user "DLL hell", but they will cause the application authors to be out of sync with the core developers, and also out of sync with every other application vendor, whether or not they use standard or their own hacked libraries.
That won't scale. It sounds like it's order 2^n in the number of versions outstanding.
That in turn means that each additional application vendor who does this makes the situation worse for everyone, by increasing n and therefore doubling the work for the community.
Not a friendly act!
--dave
Posted Mar 18, 2010 17:22 UTC (Thu)
by MattPerry (guest, #46341)
[Link] (1 responses)
Posted Mar 18, 2010 19:44 UTC (Thu)
by vonbrand (subscriber, #4458)
[Link]
Yes, with ELF's shared library stuff you can have several versions of a library installed side by side, systemwide, as long as they advertise their ABIs are different.
But the trouble is that application A uses a hacked version of library L, while application B uses another hack on the same base version of L. Both versions of L are "almost" compatible... but not interchangeable.
Decent solution: Don't hack L, fix the application. If an extension is required, pack that as a separate library. If a fix is warranted, push it upstream and require a new enough version of the library. This being FLOSS, everybody is free to get the extension or the fixed version. If none of the above works, fork (but commit to maintaining said fork and/or merge with upstream later on).
Posted Mar 18, 2010 17:23 UTC (Thu)
by marcH (subscriber, #57642)
[Link]
This just looks like the same trade-off as branching or forking source code. Divergence is a slippery slope but sometimes you need the extra speed. Instead of a "pie in the sky" approach, better to acknowledge this need for speed, so the acceleration is kept under control and things can eventually fall back on mainstream/upstream.
I do not think anyone likes the hassles of bundling libraries. Once a library becomes rock-solid all developers eventually rely on the version provided by the operating system. I just ran "ldd" on the latest stable firefox binary and the number of "libXXX" files bundled is way smaller than the total.
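That check is easy to repeat: pointing ldd at the firefox-bin from an upstream tarball shows which libraries resolve to the copies shipped alongside it and which come from the system. The install path is just an example:

    # Libraries that resolve inside the unpacked tarball are bundled;
    # everything else comes from the system's shared libraries.
    cd /opt/firefox              # wherever the upstream tarball was unpacked
    LD_LIBRARY_PATH=$PWD ldd ./firefox-bin | sort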
Posted Mar 18, 2010 19:13 UTC (Thu)
by pochu (subscriber, #61122)
[Link]
Liferea only supports WebKit in its latest stable release (1.6). Same for Epiphany (since 2.28),
Posted Mar 18, 2010 19:58 UTC (Thu)
by russell (guest, #10458)
[Link] (1 responses)
Posted Mar 18, 2010 22:11 UTC (Thu)
by dlang (guest, #313)
[Link]
Posted Mar 18, 2010 20:20 UTC (Thu)
by mcatkins (guest, #4270)
[Link]
If people get used to shipping their own versions of libraries they might accidentally include such a library, causing horrible system-wide errors, or restrictions ("No you can't run X and Y at the same time" or worse "you can't install X and Y at the same time").
Just say no!
Posted Mar 19, 2010 3:54 UTC (Fri)
by mikov (guest, #33179)
[Link] (6 responses)
There is no equivalent in Linux. The only somewhat standard library
Posted Mar 19, 2010 16:12 UTC (Fri)
by tzafrir (subscriber, #11501)
[Link] (5 responses)
Alternatively, consider any library that comes with the package management system as "standard library" :-)
Now, check what libraries Firefox bundles in its installation for Windows (those are not system libraries)
Posted Mar 19, 2010 17:28 UTC (Fri)
by mikov (guest, #33179)
[Link] (4 responses)
The GUI libraries are the biggest problems I still experience
But there are plenty of other things which come bundled with
The bottom line is that bundling for Linux will create larger and
What would be a "proper" solution? For example, a global repository
It is complex, but it is worthwhile and doable.
Posted Mar 19, 2010 18:02 UTC (Fri)
by dlang (guest, #313)
[Link] (3 responses)
Posted Mar 19, 2010 18:12 UTC (Fri)
by mikov (guest, #33179)
[Link] (1 responses)
Posted Mar 25, 2010 21:38 UTC (Thu)
by rqosa (subscriber, #24136)
[Link]
> The LSB is not adequate to solve the problems to which bundling is perceived as a solution.
Why not? The whole point of the LSB is to have "libraries by default in the OS", and to have an unchanging ABI for those included libraries, just like they are with Windows or Mac OS X. (That is, for a single version of Windows or Mac OS X, at least, since the API/ABI has changed between releases.)
An application compiled for the LSB can depend on those libraries without bundling them with the application, and any other libraries must be bundled with the application (which can be done by linking it statically, or can be done by putting the library in a directory under /opt and then setting RUNPATH or RPATH to that directory). This is essentially what developers of apps for Windows and Mac OS X must do already. In short, your proposed solution for Linux to include "libraries by default in the OS" exists, and it's called the LSB.
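The /opt-plus-RPATH arrangement the LSB expects amounts to a single linker option; the application and library names here are hypothetical:

    # Link an application against a privately shipped library under /opt,
    # so it is found at run time without touching the system library path.
    gcc -o myapp myapp.c -L/opt/myapp/lib -lfoo -Wl,-rpath,/opt/myapp/lib

    # Confirm the RPATH/RUNPATH that was recorded
    readelf -d myapp | grep -E 'RPATH|RUNPATH'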
Posted Mar 19, 2010 18:48 UTC (Fri)
by halla (subscriber, #14185)
[Link]
Posted Mar 19, 2010 6:05 UTC (Fri)
by nikanth (guest, #50093)
[Link] (1 responses)
Static linking could achieve the goal without violating the Linux model.
Posted Mar 19, 2010 15:32 UTC (Fri)
by mgedmin (subscriber, #34497)
[Link]
Posted Mar 19, 2010 15:38 UTC (Fri)
by dbruce (guest, #57948)
[Link] (6 responses)
There is just simply no chance in hell that a centrally managed, closed system like the current package management status quo can ever hope to scale and meet the diverse needs of the average population."
I don't buy that at all. The fact that there are "millions" of Windows programs available (assuming that is even true, which I doubt) just points out the inefficiency of closed-source development, where it is not possible to build on someone else's program without explicit licensing, fees, etc. The "millions" of programs are the problem with Windows, not an advantage of Windows. When I show people a Linux system, they invariably are more impressed by centralized package management than by any argument concerning freedom.
If you look at everything that is in Debian, it is hard to come up with a niche that isn't covered. Some of the OSS programs may lack important functionality, but in principle they could be improved to cover everything that their commercial counterparts provide.
I just don't see any need to provide "millions" of programs. If you have some need that isn't addressed by the ~25K packages in Debian, it probably requires a custom application anyway.
Posted Mar 19, 2010 18:33 UTC (Fri)
by marcH (subscriber, #57642)
[Link]
Going even further, most people actually need only the same small fraction of these 25K packages. So it would be nice to make this centralized repository much more modular, because every apt-XXX invocation is F.... dead slow on any low-end machine.
Posted Mar 31, 2010 6:30 UTC (Wed)
by oblio (guest, #33465)
[Link]
You really think that 25.000 applications can replace millions? No they can't.
First of all, if you look really closely, those aren't 25.000 applications. They're 25.000 *packages* - including libraries (who cares about those?), documentation, meta packages, maybe even development versions of packages (sources).
Secondly, you're forgetting about statistics and evolution. More variation and competition means there's a greater chance of success. Only a small percentage of applications are any good.
Thirdly, for each niche there are many design decisions. Out of 10.000 crappy applications for a certain niche, you'll have 100 decent applications, 10 really good ones, and 3 great ones, each having a different design philosophy (therefore you can't replace great application A with B or C).
Should I write my own "custom" application for everything that doesn't fit that tight collection of 25.000 "applications"? (answer: no, on Windows you find a tool already made for 99% of regular desktop activities)
Posted Apr 8, 2010 18:27 UTC (Thu)
by wookey (guest, #5501)
[Link] (3 responses)
On the energy monitoring front there are plenty of things like mango, diyzoning, owfs and temploggerd which are basic apps for this stuff, but none are in Debian yet (or weren't last time I looked). No doubt they will be at some point but until then anyone wanting to use them has an installation problem. I tried local building but found this to be very hard indeed for the two java-based apps there when targeting an arm box. Obviously that wasn't something the developers had tried.
I do agree with those who say that developers should not be doing packaging and system integration. They are not well-placed to understand the issue of less-common architectures, complex system interactions and really have better things to be doing with their time.
I am inclined to agree that a bit more modularisation of distro repositories would be a good thing. Ubuntu's model of 'core stuff', 'common stuff' and 'everything else' is a step in this direction. Debian's monolithic tools are becoming stretched, especially on small systems (where the package management overhead can amount to 40% of the rootfs storage requirement). I'm not sure there are 'millions' of applications, but there are many tens of thousands, and dealing with that efficiently is a challenge.
One other thing which I don't think has been said loudly enough in this debate is that users really do value stability. The central repository model does provide a lot more of that than the 'install random stuff off the net' model. It really is worth something, and whilst they also value being able to install 'latest' of a few apps there is a real potential tradeoff there which needs to be managed somehow. The users I deal with are _much_ more interested in having a stable system than they are in the latest and greatest. They only upgrade apps when they find they need to for some significant feature. Making that easy and reliable _and discoverable_ for them is where we have much room for improvement.
I find that the Debian backports model works pretty well for this, and could be greatly extended to provide a much larger chunk of the new stuff users want. However it is not a particularly discoverable mechanism - not helped at all by the way many users are used to the Windows model of 'just click here to install this stuff'. There is a significant element of user education to get them to stop and think for a mo and see if what they want is already in the packaging system, or would be if they/it added a suitable repo. Most software websites don't help at all with this, as they encourage the Windows model and rarely mention the 'distro' model.
So, it's a thorny issue, and I agree there is room for improvement, but I also feel that the good part of what we have is very valuable, to everybody, including naive users on laptops, even if they don't really appreciate it. Make it easier to go outside that model by all means, when necessary, but try hard not to just make things unreliable and/or insecure as a result.
Posted Apr 8, 2010 18:41 UTC (Thu)
by dlang (guest, #313)
[Link] (2 responses)
Posted Apr 9, 2010 13:20 UTC (Fri)
by wookey (guest, #5501)
[Link] (1 responses)
Posted Apr 9, 2010 18:46 UTC (Fri)
by dlang (guest, #313)
[Link]
if you don't do an apt-get clean after doing the update debian will keep the .deb files of the packages that you downloaded around (in /var/cache/apt/archives)
if you are wanting your product to update directly from debian's public servers then you need to plan to support the large package list (any way for them to split the package list is going to put _someone's_ critical package in an optional repository), but you can run your own repository and only put packages in it that you want to make available to your product. This will also save you time in the updates.
a minor point, I'll also point out that most of this space is in /var, which is not what most people would think of when you said it was in the rootfs
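Running such a trimmed-down repository is not much work; a flat directory of .debs plus a generated Packages file is enough for apt. The paths and URL are examples:

    # Server side: build a minimal flat apt repository from a directory of .deb files
    cd /srv/myrepo
    dpkg-scanpackages . /dev/null | gzip -9 > Packages.gz

    # Client side: add the repository, then update as usual
    echo 'deb http://repo.example.com/myrepo ./' >> /etc/apt/sources.list
    apt-get update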
Posted Mar 19, 2010 20:57 UTC (Fri)
by gurulabs (subscriber, #10753)
For example:
VMWARE_USE_SHIPPED_GTK="force" /usr/bin/vmware
Posted Mar 22, 2010 20:38 UTC (Mon)
by Bones0 (guest, #8041)
If the result of those KSM scans could be remembered between reboots, increased CPU usage should not be an issue. And if the file system supported shared chunks analogously to KSM's shared pages, disk space might not be a problem either.
Posted Mar 25, 2010 16:19 UTC (Thu)
by DarthCthulhu (guest, #50384)
It's already possible (and easy!) for a user to get the benefits of both models. All you need is the right distribution. And that distro is GoboLinux.
Software installation is very, very easy on GoboLinux, and you can have multiple versions of a piece of software all running happily together without the need for each program to bundle its own libraries. If some bit of software needs a specific library version, no problem! It will install that version, which will still live happily alongside the others.
How is it able to do this? Through a better directory layout, some symlink magic, and Recipes. Recipes are the means to install software; at its base, it's really just a URL with the software location and a list of prerequisites. The system will automatically go through and download/copy/compile all the needed libraries and so on before installing the main software. Recipes can install both source and precompiled binaries.
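Roughly, the layout looks like this (paths are approximate and the library name is made up; the details differ between GoboLinux releases):

    /Programs/LibFoo/1.0/lib/libfoo.so.1     # each version in its own tree
    /Programs/LibFoo/2.0/lib/libfoo.so.2     # coexists with 1.0
    /System/Index/lib/libfoo.so.2 -> /Programs/LibFoo/2.0/lib/libfoo.so.2   # shared symlink index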
Currently, the distro is centrally-based, but there's no reason why that has to be the case. It's quite easy for individual developers to make their own Recipes for software installation and publish them. There's a local recipe store just for that, in fact.
It's also very easy to write a Recipe. In fact, there's a tool that does it mostly for you (albeit command line). Most of the time you only need to know the URL of the software you're interested in.
The main downside of the distro is that it's basically developed by two guys at present, the official release is positively ancient (you'll need to set aside a day or so to download and upgrade Recipes), and that, if things go wrong, it can take a little technical knowledge to fix (never try compiling glibc -- always go precompiled with that one). I've also, personally, never had the automatic kernel upgrade work correctly; it always requires manually moving the kernel modules to the proper place in the directory tree. Irritating, but fairly straightforward once you understand the way the new directory tree works.
GoboLinux really is the best of both worlds. Give it a try.
Posted Mar 28, 2010 10:53 UTC (Sun)
by cas (guest, #52554)
What this describes is classic "DLL Hell".
If Mozilla and Chrome want to import that disastrous practice into Linux, they're going to find that a lot of people say "thanks, but no thanks". (*)
Firefox is good, but it's not so good or so irreplaceable that it's worth tolerating that nightmare.
(*) quoted text passed through an extreme politeness translator.
Such an approach can greatly help GUI developers in general, who otherwise have to cope with the myriad of combinations of compatible library versions on the variety of target distributions. Developer time should be focused on the actual development of features, bug fixes or security fixes. If developing free software is not paid and done after hours, then there is not much time left to squander in the compatibility impasse that sometimes distributions throw at you.
Applications and bundled libraries
Keep in mind that we build on Windows, so much of the code in our third_party directory is there because we need that code on Windows. There are a number of configure time flags to switch between the third_party and system versions of libraries like zlib, libevent etc.
Having said that, we do have a number of forks. Here's an unrepresentative selection of them covering some of the reasons:
libevent: we needed bug fixes and we needed to be able to run on systems which didn't have them.
icu: we need a more recent version than was even provided on Karmic.
libjingle: upstream appears to be unmaintained.
sqlite: we added full-text indexing (now upstream) and several performance improvements which are rather specific to our use case. We don't want to do without them and upstream aren't too interested. (That said, the default Chromium build uses the system version and disables the features that we've added.)
Anyway, those are some of the real-life reasons why we carry forks of some of this code.
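To make the "configure time flags" concrete, here is a rough sketch of how a distribution build can ask for the system copies instead of the third_party/ ones. The exact gyp define names are from memory and have varied between Chromium releases, so treat them as illustrative rather than authoritative:

    # illustrative only -- check the build files of the release being packaged
    export GYP_DEFINES="use_system_zlib=1 use_system_libevent=1 use_system_sqlite=1"
    ./build/gyp_chromium
    make chrome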
Applications and bundled libraries
Doesn't that mean the same library ends up in memory twice? And if all apps would do that, would that increase the memory load of an average system? Any idea how much, if so? Maybe it's small enough that it doesn't really matter...
Applications and bundled libraries
The overhead is pretty minimal. The big stuff (GTK etc) is certainly shared (at least with Chromium). The rest of the bundled code might run to 5MB (to about a factor of two).
Applications and bundled libraries
As long as apps don't bundle stuff like GTK or Qt, it's not a big issue. If all apps would start doing it, and if they would require specific versions of Qt/KDElibs/Gnome/GTK, things would get ugly, I suppose...
Applications and bundled libraries
I installed acroread the other day and I was, frankly, shocked at how good it is now. And I say this as one who strongly supports free software.
Applications and bundled libraries
I've never seen acroread crash
Applications and bundled libraries
Yes, the old versions of acroread were terrible, and that's why I was surprised at the stability of the latest version.
Applications and bundled libraries
I was not aware of xournal; thanks for the tip. I just checked it out. I find that flpsed is a better fit for my needs, for the following reasons:
Applications and bundled libraries
I may be wrong about xournal's capabilities, as I only tried it out for a few minutes, so please correct me if I am wrong about xournal. I'm happy to hear about it, since it is more useful than flpsed in some situations (mainly when one has to annotate a graphical page of some kind, rather than a data form).
Applications and bundled libraries
windows if you don't have admin rights ;-)
Acrobat is just horrible in that department, while Okular has 95% of the features yet a clean and efficient interface. Just try using your mouse to scroll through a page - when you hit the bottom of the page, things stop in Acrobat. Okular just goes on; you won't even notice such features, but they matter once you have to go back to Acrobat.
Applications and bundled libraries
When you grab the page with the mouse (keep the left mouse button pressed), you can drag it to the bottom of the screen. But then the dragging stops, doesn't it, as you've reached the bottom of the screen... In Okular, though, you can continue to drag, because it 'wraps' by moving the mouse pointer back to the top of the screen and you just drag on. It's completely intuitive and unobtrusive. Just a nice touch.
There is also the search panel on the left, which only shows the pages where the search results show up, so you get a quick overview of where the term you were looking for is - far superior to Acrobat's approach. Acrobat lacks such a simple yet effective search; you have to go through everything with F3.
I use the automatic scrolling in Okular a lot to read from the screen, adjusting speed with the arrow keys (shift-arrow, again). Press shift to stop scrolling, shift again to continue. Space to move one page further, shift-space to go back. Sure, Acrobat offers auto-scrolling with the mouse, like Word and most web browsers, but it's far less nice imho. If you just read and search for stuff (and annotate sometimes), Okular is perfect.
Applications and bundled libraries
I have okular installed here (Fedora 12) and I cannot reproduce the behavior you describe.
Applications and bundled libraries
>qt4 libraries which can be easily replaced with symlinks to the system libs.
VirtualBox provides packages for several distributions, as well as a script-based installer
which is supposed to run "anywhere". Only the packages which have to run on older
distributions ship their own copies of Qt4.
Applications and bundled libraries
It may seem obvious, but I have to ask -- why not revive libjingle and push the patches upstream?
Applications and bundled libraries
Personally, I find their approach to be responsible and practical. The focus is on getting something working well, even if that means patching and bundling system libraries, followed by pushing those patches upstream and unbundling, if possible. Or, occasionally, becoming upstream.
The reasonable way to do that is to carry the needed changes alongside the source of your application: you publish patches to, or patched versions of, the necessary dependencies. Packagers/maintainers then have the option of:
2) Patching the shipped version
3) Bundling the library with the application (in some manner)
It's also (very) little extra work for the application developer to make sure the application can be built and linked against the system version of the library. As a former Fedora maintainer, I've had to resort to some version of that several times, including coordinating with the maintainers of dependent packages. Of course, Ubuntu has kind of shot themselves in the foot on this issue with the LTS stuff, but that's (kind of) beside the point.
Proprietary applications are a different matter, and, frankly, I don't want them trying to use the more change-prone system libraries. Bundle. Install into /opt/<application> and go away. That's all they usually do anyway.
Last time I checked, at least part of the problem was distributions packaging mostly-internal unstable libraries as "system" libraries and then linking other packages against them. Oh, and failing to understand/use ELF versioning. Most distributions are perfectly capable of having multiple versions of libraries installed and linked appropriately, provided the library authors follow the .so versioning rules.
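For readers unfamiliar with those .so versioning rules, a minimal sketch with a made-up libfoo; the point is that incompatible releases carry different SONAMEs and can therefore be installed side by side:

    # two incompatible releases of a hypothetical libfoo get distinct SONAMEs
    gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.2.0 foo.c
    gcc -shared -fPIC -Wl,-soname,libfoo.so.2 -o libfoo.so.2.0.0 foo.c
    # ldconfig maintains the libfoo.so.1 / libfoo.so.2 links; both versions can
    # be installed at once, and each binary records which SONAME it needs:
    readelf -d ./someapp | grep NEEDED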
Applications and bundled libraries
Applications and bundled libraries
>Keep in mind that we build on Windows, so much of the code in our third_party directory is there because we need that code on Windows. There are a number of configure time flags to switch between the third_party and system versions of libraries like zlib, libevent etc. Having said that we do have a number of forks. Here's an unrepresentative selection of them covering some of the reasons:
>libevent: we needed bug fixes and we needed to be able to run on systems which didn't have them.
>icu: we need a more recent version than was even provided on Karmic.
These are the distributor's problem. Either they configure --with-system-libevent and get the libevent maintainer to update their part, or they build and ship with the bundled library.
>libjingle: upstream appears to be unmaintained.
Several have mentioned Google is the upstream maintainer. Either whip the Talk guys into shape or get commit access to their tree.
>sqlite: we added full-text indexing (now upstream) and several performance improvements which are rather specific to our use case. We don't want to do without them and upstream aren't too interested.
Applications and bundled libraries
>
>libevent: need new version for bugfixes
>icu: we need a more recent version than was even provided on Karmic.
>[and so on]
If $distro ships libevent2-1.4.9 for example, there are two choices:
1. you need libevent.so.2-1.4.10: fine, make it a dependency of chromium and let the distro update their libevent2.
2. you need libevent.so.3-1.5.0: fine, make it a dependency of chromium and let the distro add a package libevent3.
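On the packaging side, the two cases look roughly like this in a debian/control sketch (the package names are illustrative; the real binary package name for a given libevent SONAME may differ):

    # case 1: same SONAME, just a newer upstream release is needed
    Depends: libevent-1.4-2 (>= 1.4.10)

    # case 2: the SONAME was bumped; depend on the new, co-installable package
    Depends: libevent-2.0-5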
Applications and bundled libraries
The world would probably have a lot more compile/runtime failures and churn if distros did not do what they are already doing.
Applications and bundled libraries
A new package is only needed when the SONAME is bumped (that is, going from libevent.so.2 to libevent.so.3 in the example).
Applications and bundled libraries
Help for the source built systems?
If the autoconf (or whatever build system Chromium uses) could do a version check on the system library at build time, this might reduce the pain somewhat. It would also let me build against the libs on my system, which is handy as a developer. It would be nice, too, to have an easy way to see the patches between the bundled libs and upstream. I'm guessing this is a wider problem in free software that needs a decent solution.
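A sketch of such a build-time check using pkg-config; the library name, minimum version and variable are illustrative, and it assumes the library ships a .pc file:

    # fall back to the bundled copy only if the system library is too old
    if pkg-config --atleast-version=1.4.10 libevent; then
        use_system_libevent=1
    else
        use_system_libevent=0   # build the copy in third_party/ instead
    fi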
Applications and bundled libraries
>which didn't have them.
figure it out.
>performance improvements which are rather specific to our use case. We
>don't want to do without them and upstream aren't too interested.
libraries.
Applications and bundled libraries
My experience
Does your program have a test suite? Even basic tests can do a lot of good, both in making non-functional functionality more obvious and in spurring people to fix it. You might be interested in Mike Hommey's recent work on Mozilla: before and after.
Test suites can help
Applications and bundled libraries
Bundling lets you ship your customers/users an application without being dependent on N different distributions' somewhat random selection of software. It also avoids "Oh, to get our next update, you need to update your distro", which is completely unacceptable from an administration point of view.
The current arrangement potentially hurts users and developers. The problem with security updates wouldn't be the distro's responsibility if they didn't have control of every bit of software on your computer.
Yes, I know about freedom in principle, but time is also a cost...
Applications and bundled libraries
The centralized model gives a lot of power to users. It might make developers' lives slightly more difficult by requiring them to actually think about deployment instead of wrapping up their /usr/lib and pushing it out in one friggin' huge binary, but the end result is a *better* experience for users, as they get regular bug fixes, security updates and (hopefully) tested upstream changes rather than whatever "fixes" the application developers thought expedient to cobble together so they could get the app out the door.
Distributions aren't closed shops, either: the good ones do development in the open and are generally welcoming of input, patches, etc.
Still, though, packaging should be treated as part of the developer's responsibility and not as something that is 'downstream'.
Applications and bundled libraries
It simply is not realistic to depend on third parties to know the proper way to build everything
and know exactly the right combination of dependency versioning and compile flags that you
(as a developer) have intended and tested against.
It can work easily enough for a few hundred packages to do it the 'apt-get way'. It can scale upwards to several thousand. But to be on par with something like what Windows provides, you have to scale to millions, and you have a shitload of programs that nobody in the Debian project (or Fedora or Red Hat or anybody else) will ever be aware of, much less know enough about to package and build them for end users.
There is simply no chance in hell that a centrally managed, closed system like the current package management status quo can ever hope to scale and meet the diverse needs of the average population.
It has to be a distributed system. And the only logical way (as I see it) to do a distributed packaging system is to have it treated as part of the programming of the software and have all packaging happen 'upstream'. "Make install" should not drop binaries into /usr/local; it should produce a 'deb' or 'rpm' file. Software then should not be distributed through tarballs or central source code repositories, but through built packages.
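Tools that point in this direction already exist; for instance, checkinstall wraps "make install" and hands the result to the package manager (the package name and version below are made up, and checkinstall will prompt for the remaining metadata):

    ./configure && make
    # produces and installs a .deb (or .rpm/.tgz) instead of scattering files
    sudo checkinstall --pkgname=myapp --pkgversion=1.0 make install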
Distributors would then work with upstream package creation, aid and correct problems as they come up, and collect as many popular packages as possible for the convenience of the end users. Distributions cannot behave as if they are the sole source of the software and libraries people are going to want and require.
To do that is going to require substantial changes in how Linux, as an operating system, is managed and coordinated. Centralized repositories are a stop-gap at best and only exist through the pure brute force of volunteers. It won't continue forever; people will get burned out doing the same old thing again and again.
The quicker people realize this and become willing to abandon technically excellent solutions for real-world practical/useful ones, the better off all of us are going to be.
Applications and bundled libraries
(emphasis mine)
Applications and bundled libraries
With OS X it's better: for many apps you just drag the 'icon' to wherever you want. For users, this simplicity has the same appeal as wanting everything to be a file; it can be expressed as 'everything is just an object' for users. Of course this can't cover all cases, but the simplicity is attractive.
Why isn't a program just a file?
1) Locating the program after 'installing' it is easy (since you decided the location).
2) Uninstalling is just dragging the same file (program) to the trash.
Applications and bundled libraries
> your customers/users application without being dependent on N different
> distributions somewhat random selection of software, it also avoids "Oh to
> get our next update, you need to update your distro" which is completely
> unacceptible from an administration point of view.
> potentially hurts users and developers. The problem with security updates
> wouldn't be the distro's responsibility if they didn't have control every
> bit of software on your computer.
Applications and bundled libraries
>> your customers/users application without being dependent on N different
>> distributions somewhat random selection of software, it also avoids "Oh to
>> get our next update, you need to update your distro" which is completely
>> unacceptible from an administration point of view.
>Giving developers more power is almost never what you want to do. You want to give power
>to the users and system administrators.
>Some system administrators are conservative. They just don't want to apply any patches
>except security updates. They might use RHEL 5 or something like this.
>They ought to be able to follow this policy without interference from developers. If
>developers have to add an #ifdef somewhere in the code to make this happen, it's
>a small price to pay for stability and security.
We need a system that works for users, devs and sysadmins alike. Currently it works for companies (they pay), where there are people dedicated to supporting the system. You don't have that on a personal computer ;).
>> potentially hurts users and developers. The problem with security updates
>> wouldn't be the distro's responsibility if they didn't have control every
>> bit of software on your computer.
>Users shouldn't have to manually update every piece of software on their computer. If it
>weren't the distro's responsibility, security would fall on to the users and system
>administrators-- another burden.
N different updaters aren't a good solution either. It doesn't really matter for the home user, but the sysadmin would hate it.
What we could do is separate mechanism (fetching new code) from policy (per-app lib or system lib, where to check for updates). It's not a simpler system, but it could solve different needs for more than just sysadmins.
The current model works for them. Changing that is difficult from a business point of view, but that is OK!
>software on your Windows PC. They've made that a reality for all the software they directly
>control. But they can't do it for third party software.
Applications and bundled libraries
> need a system that works for both users,devs and sysadmins. Currently it
> works for companies (they pay), where there are people dediated to support
> the system. You don't have that on a personal computer ;).
Applications and bundled libraries
> your customers/users application without being dependent on N different
> distributions somewhat random selection of software, it also avoids "Oh to
> get our next update, you need to update your distro" which is
> completely unacceptible from an administration point of view.
Applications and bundled libraries
>distros have package dependencies with automatic resolution these days. I know on my system it is
>'yum update firefox'. Why is it better to have a dedicated application installer pulling in
>/usr/local/lib/libpng.so and having the dependency resolver pull in an update to libpng?
It's a source of contention: say you just released important update 1.4.0, but no users can get it! And you have no control over when they get it. It may even be that they never get it, because the distro thinks it's too big a change. Distros will say "NO!" to any 1.2->2.0 update for a stable dist, even if the developer wants it and some of the users want it.
Sure, the distro gives you complete control of your laptop, but nobody really needs that. The current complete control/handling of your system is only great for admins who want to control a multiuser system or a webserver. Who cares about libpng? And it's better because the app developers are in control of their own app and in direct contact with its users.
Applications and bundled libraries
Note that the app developer can choose which subset K of the libraries to bundle, so that the remaining part, M-K, are the ones that are less likely to cause problems. And if something does break, the developer can push an update, fixing any problems without the middle layer of a distro.
Applications and bundled libraries
Including laptops. _Wonderful_.
Applications and bundled libraries
That can happen, but I don't agree the assumption is always true ;). I see the problem you state - but why can't we build a system where it doesn't happen? Just because applications bundle stuff, they can still hook into the same updater the admin/user runs. We can do both; it's just harder and new territory.
Applications and bundled libraries
>developer's keyboard, grab something like Gentoo or Arch. If you want to run something
>tested to death and guaranteed stable, pick some Enterprise distribution.
What if I want a stable distro/system, but with the newest versions of apps a, b and c? And the developer of a, b or c doesn't want to support N distros? They can just bundle the libs they need.
>developer dictating what version of their package I should run. I'm happy to run the
>development snapshots of some stuff at my own risk for non-critical use, but only thoroughly
>wetted and QA'd software on a enterprise distribution for real-world uses.
>in the extra cost of keeping track of upstream myself) replace selected packages with newer
>versions. But never across the board.
distros/packagers.
Applications and bundled libraries
GNU software went into /usr/local (which did include X Windows, TeX and an assortment of other pieces). But it was still done carefully and kept as limited as reasonable.
Applications and bundled libraries
Note the common thread here
What is the same with both Moz and Chromium is interesting: both are highly complex packages PORTED from Windows as an afterthought.
Applications and bundled libraries
In the ideal world, these compatibility issues would be dealt with by organizations such as the Linux Standard Base (LSB). To really make that work, however, requires a very aggressive approach both to adding features and to developing and maintaining the test suites that would be required. I've heard of very few standards-type organizations that are funded for and capable of this kind of sustained effort. And asking for much funding in the open source world is an uphill battle, at best. (Disclaimer: I used to be a member of the LSB)
Applications and bundled libraries in the Ideal World (LSB)
Maybe I did it wrong or I had wrong expectations.
The claim is that SUSE 10 is LSB 3.0 compatible. So is RHEL 4/CentOS 4, according to some webpage. So I expected that software built on SUSE 10 would also run on CentOS 4. It didn't; it was expecting a symbol not present on CentOS 4.
The missing symbol was referenced by log4cplus. So I actually expected that log4cplus wouldn't compile on CentOS 4, since that symbol was missing there, but it nevertheless compiled successfully.
The reason: std::string has a lot of inline functions, which call more or less internal functions of libstdc++. Since they are inline, these calls to internal functions end up in the resulting binary, instead of being hidden inside the libstdc++ binary.
What to do about it? Not sure. Link statically against libstdc++? Not too easy, as the web told me. Ship with a copy of libstdc++.so which works? I tried; it seems to work, but I'm not sure whether this has other issues of its own. Maybe, I haven't tried it yet.
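One way to see exactly which versioned libstdc++ symbols a binary pulled in, and what the target system offers (the binary name is hypothetical and the libstdc++ path varies by distribution):

    # symbol versions the binary requires
    objdump -T ./mybinary | grep -o 'GLIBCXX_[0-9.]*' | sort -u
    # symbol versions the target's libstdc++ provides
    strings /usr/lib/libstdc++.so.6 | grep GLIBCXX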
Applications and bundled libraries in the Ideal World (LSB)
Applications and bundled libraries
of security patching.
Applications and bundled libraries
Not sure why it would affect security patches, though.
Chromium seems to have become the de-facto upstream for the things it depends on, instead of using existing libraries with its own patches. It seems like, just because Chromium depends on things that have the same name as things already in the distro, everyone thinks they *are* the same thing. sqlite is a pretty good example of this: upstream don't want the changes, so essentially the 'sqlite' used in Chromium isn't 'sqlite' anymore. It's an entirely new project tied to Chromium. It makes no sense to think of it as a library now.
Applications and bundled libraries
one.
About modified-sqlite: yes, it is a library. If it is required by Chrome and supersedes regular sqlite (without any API/ABI incompatibilities), it should be packaged as another, modified, version (and SONAMEd accordingly); otherwise, it should be packaged as another package altogether. (And anyway, yes, security patching must be done in each package, regular-sqlite and chrome-sqlite, but in the first case you can have only chrome-sqlite in memory even if another program wants to use sqlite.)
Applications and bundled libraries
1) Getting fixes upstream
2) Changing the name of forked libraries
3) Linking statically
all require more work on the part of the developer, so they skip out on doing it and just bundle mystery versions with their product and push the headache down to the distributors/users.
Applications and bundled libraries
What happens to Linus's "get it upstream first" policy?
Applications and bundled libraries
Epiphany has already moved to WebKit, and Yelp has a WebKit branch in git (that we use in Debian, rather than the Gecko one).
Applications and bundled libraries
There is a big difference between Windows and Linux here. Windows already carries a huge amount of libraries by default in the OS. That may not be apparent, but all the thousands of Win32 APIs are in essence standard libraries available everywhere (more or less). This includes GUI controls, etc.
On Linux, about the only thing an application can count on being there is libc. A "bundled" Linux app would have to bundle everything under the sun - X11 libraries, GUI toolkits, etc. This is crazy and in no way comparable to Windows.
Applications and bundled libraries
The libraries beyond libc and the package management are not really standard, or we wouldn't be having this discussion at all. I still remember the horror of trying to get vmware-server-console working on a new system. There are also services that come with Windows, like secure sockets (Firefox doesn't use them, but Chrome does AFAIK) and so on.
Bundling everything would give us much bigger and clumsier applications than their Windows equivalents. I am not advocating against it - it is a solution, but I think it is a really, really ugly solution. In this case it is better to work on a proper solution than to settle for a horrible one.
A proper solution might be a namespace for bundled libraries which can all exist side by side. For example, if Firefox wants to bundle a library "libfoo", it will be called "libfoo-firefox-e0b146e7-26e5-4c2c-90e0-bec9bac7218e.so.1.2". Other packages can use it, and decrease the duplication. There will have to be extensive metadata, etc.
Applications and bundled libraries
the problems to which bundling is perceived as a solution. Neither is
distribution package management.
Applications and bundled libraries
Yes, they are. Both Qt4 and GTK2 are part of LSB. See http://refspecs.linux-foundation.org/LSB_4.0.0/LSB-Desktop-generic/LSB-Desktop-generic.html.
Applications and bundled libraries
Static Library
Static linking has all the problems of bundling, with the addition that you can't easily see which apps bundle which libraries, making e.g. security support that much harder.
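A small illustration of that visibility problem (the binary name is hypothetical): dynamic linking leaves a record that tools can inspect, static linking does not:

    # dynamically linked dependencies are recorded in the binary...
    ldd /usr/bin/someapp
    objdump -p /usr/bin/someapp | grep NEEDED
    # ...but a statically linked copy of, say, zlib leaves no such marker;
    # at best you can hunt for known version strings
    strings /usr/bin/someapp | grep -i 'deflate 1\.'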
Applications and bundled libraries
How VMWare does it
I wonder what implications page sharing via KSM has for the memory use discussion. As long as all the private libraries are built with the same compiler options, isn't there a good chance that the majority of the memory could be shared behind the scenes?
KSM
It would only help where the page contents are identical and aligned at the same point within a page.
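For anyone wanting to check this on a running system: KSM only merges anonymous pages that an application has registered with madvise(MADV_MERGEABLE), so ordinary file-backed .so mappings are not candidates, but the sysfs counters show what is actually being shared:

    # enable KSM and watch the counters (kernel 2.6.32+, requires root)
    echo 1 | sudo tee /sys/kernel/mm/ksm/run
    grep . /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing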
