
Ubuntu's multisearch surprise

Posted Aug 8, 2009 2:10 UTC (Sat) by drag (guest, #31333)
In reply to: Ubuntu's multisearch surprise by MattPerry
Parent article: Ubuntu's multisearch surprise

The ultimate solution is that packaging gets done upstream, not by distributions.

Instead of it being:
./configure
make && make install

It should really be:

./configure
make && make rpm
rpm --install *.rpm
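
Something close to this can be done already, assuming the tarball ships its own spec file; a rough sketch (the file names and output paths are invented):

# build binary packages straight from a tarball that contains a spec file
rpmbuild -ta gnome-games-2.24.0.tar.gz
# the resulting packages typically land under ~/rpmbuild/RPMS/<arch>/
rpm --install ~/rpmbuild/RPMS/x86_64/gnome-games-2.24.0-1.x86_64.rpm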

Take a mundane example like 'gnome-games'. You have OpenSuse, Redhat, Debian, Ubuntu, Gentoo, etc.

So you have the 'gnome-games' source code associated with Gnome 2.24.

Each distro has to individually package, build, and troubleshoot the packages. There are subtle and rather insignificant implementation differences that make it difficult to track bugs and work together to create better software. And each distro has to re-do all the work that every other distro does.

So with OpenSuse, Fedora, Debian, Ubuntu, and Mandriva... assuming each organization has one person in charge of maintaining 'gnome-games', you have 5 individuals doing the same packaging of the same software versions, with the same goals, aiming at the same potential audience.

That's 500% more work than it takes to produce an installer for Windows. And it doesn't really make sense. For every 1 man-hour that is needed to get software packaged, you waste 4 man-hours re-doing the same work.

That's 4 units of wasted effort for every 1 unit of real work done. That's 4 units of time and money and effort that could go into bug squashing, or improving documentation, or working on some other piece of software.

And, ironically, those binaries produced for Debian vs Ubuntu vs Fedora, etc... are all compatible with each other.

When people like Opera produce proprietary software for Linux they have dozens and dozens of packages, one for each distro... but if you actually look at the checksums on the binaries and library files they install, it's all identical. Opera installs the exact same hunk of software on every distro, except very old ones (and that is due to the C++ ABI breakage from years ago).
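
This sort of claim is easy to check yourself; a sketch, with invented package names and install paths:

# unpack an RPM and a Debian package of the same Opera release side by side
mkdir fedora-tree debian-tree
(cd fedora-tree && rpm2cpio ../opera-10.00.i386.rpm | cpio -id)
dpkg-deb -x opera_10.00_i386.deb debian-tree
# identical checksums mean byte-identical binaries despite different packaging
md5sum fedora-tree/usr/lib/opera/opera debian-tree/usr/lib/opera/opera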

So it's mostly just stuff as silly as version numbers and how large projects are carved up into smaller packages that is causing the incompatibilities.



Ubuntu's multisearch surprise

Posted Aug 8, 2009 2:19 UTC (Sat) by drag (guest, #31333) [Link] (5 responses)

Well besides my bad math I am sure everybody gets the point. :)

Really Debian needs Ubuntu and Ubuntu needs Debian.

They have to work together. It's reality.

Ubuntu could not exist without the high quality packages that Debian produces. It would be _impossible_.

And Ubuntu is able to provide something that Debian has so far utterly failed to provide: a friendly Linux desktop. Debian _can't_ do it. I don't know why. But they can't.

And I actually prefer Debian over Ubuntu for my desktop. But compared to Fedora or Ubuntu, the Debian desktop experience is nothing but a huge PITA. It takes about 4-5 hours of nerd work to make their Gnome stuff as usable as what I can get from Fedora in 10 minutes.

I can't even get PA to work properly for me. The volume controls for Gnome and Compiz and all that are all goofed up. I don't know what to do to fix it. With Fedora it was just right.

Debian needs Ubuntu for that sort of stuff.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 3:03 UTC (Sat) by srivasta (guest, #7075) [Link] (4 responses)

I am not sure why you think Debian needs Ubuntu to make a friendly desktop -- when a couple of lines down you confess you prefer Debian's desktop. (As do I, for what it's worth.)

What does Debian gain from _other_ operating systems having an (allegedly) friendlier desktop anyway?

Ubuntu's multisearch surprise

Posted Aug 8, 2009 7:01 UTC (Sat) by tajyrink (subscriber, #2750) [Link] (3 responses)

They can import the usability-enhancing patches where applicable?

Ubuntu's multisearch surprise

Posted Aug 8, 2009 7:34 UTC (Sat) by drag (guest, #31333) [Link] (2 responses)

And design.

There is more to getting a good desktop than just doing the 'make install'. A user-friendly desktop must provide all-working functionality as well as the normal set of expected functionality.

Something like that. It's a total package thing. Just providing working software is a first step.

-----

The trick is that, whatever it takes, Debian hasn't been able to do it. If Debian had the ability to get good default configurations out to users on a timely basis then there would never have been any need for Ubuntu in the first place. There would have been no market for it... people would just have used Debian.

Ubuntu's multisearch surprise

Posted Aug 9, 2009 17:15 UTC (Sun) by srivasta (guest, #7075) [Link] (1 responses)

Hi,

Part of the so-called improvements in the desktop has been the pruning away of choices presented to the user. Instead of 23 MUAs, ship 3. Who gets to decide which 3? Why, your distro overlords, of course. The other part consists of streamlining the distribution down to a couple of thousand packages, relegating the rest to a non-core multiverse. However, the former is probably the major reason for the popularity: when one removes choices, and makes decisions on behalf of the end user, one can offer a slick presentation -- as long as you like the decisions made; and most people usually do not care.

Debian went the direction of choices, allowing people to tailor the distribution to their liking. This makes for more questions, and perhaps more configuration choices, and perhaps, confusion for the novice. But one is only a novice for so long, and I am glad that the choices proffered by Debian exist.

If there is a way of creating something that is slick _and_ manages to offer the choices, I think Debian folk would be happy to hear about it.

manoj


Ubuntu's multisearch surprise

Posted Aug 11, 2009 17:03 UTC (Tue) by kov (subscriber, #7423) [Link]

You're helping him make his point, from my perspective.

Need EASY for END-USER install of source code

Posted Aug 8, 2009 14:45 UTC (Sat) by dwheeler (guest, #1216) [Link] (5 responses)

I agree in principle that a source code install should work with the package manager (so that you can easily remove it later, upgrade, know what you're getting, etc.). And behind-the-scenes, the list of operations seems reasonable in principle.

But what is needed is a trivial point-and-click interface that auto-configures, makes, and installs (working with the distro package manager). One that works well enough automatically so that people don't NEED to read the README file to install it the "usual" way. If developers would simply follow the standard conventions for source distribution, this would work well; it'd be even easier with software that would automate DESTDIR (I intend to release software soon to do the latter). That wouldn't solve all the problems (in particular, there need to be conventions so that dependencies are downloaded and installed), but that would help.

Need EASY for END-USER install of source code

Posted Aug 8, 2009 17:39 UTC (Sat) by nix (subscriber, #2304) [Link] (3 responses)

Nice paper.

One point though. You said:

> In most software, the "make install" command only uses a few simple commands to actually install the software. In my experience, the most common command by far is "install", which is hardly surprising. Other common commands used in "make install" that might need redirecting from privileged directories (like /bin, /usr, and /etc) include cp, mkdir, ln, mv, touch, chmod, chown, ls, rm, and rmdir.

You forgot the horror which is libtool. That runs the compiler at install time to do a link, so you have to wrap the linker! You may get away without wrapping it if it turns out that it always asks the compiler to put things in the srcdir, and then uses 'install' to move them out.

Need EASY for END-USER install of source code

Posted Aug 9, 2009 1:35 UTC (Sun) by dwheeler (guest, #1216) [Link] (2 responses)

> Nice paper.

Thanks.

> You forgot the horror which is libtool. That runs the compiler at install time to do a link, so you have to wrap the linker! You may get away without wrapping it if it turns out that it always asks the compiler to put things in the srcdir, and then uses 'install' to move them out.

Ah, libtool. Well, I'm definitely aware of libtool; I wrote the Program Library HOWTO, after all. But from what I've seen, someone who uses libtool will typically support DESTDIR as well (I speculate that if you're a libtool user, you're probably more interested in portability and thus more likely to support DESTDIR).

And even if that's not true, my current approach can accommodate it anyway. I've ended up deciding that to automate DESTDIR, it's best to simply create "wrappers" of the same name and put their location at the front of the PATH. It seems hackish, but it has absolutely NO security issues, and it runs really quickly. Which means that it is more likely to actually be ACCEPTABLE to distros and users. I haven't tried to wrap libtool (yet), but I think that should work well if it turns out to be necessary. I've already wrapped cp, ln, and so on, and tried them out on several test programs without problems.
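
To give the flavour of the approach, a toy version of such a wrapper might look like this (a sketch, not the real Auto-DESTDIR code; it assumes DESTDIR is exported and that the destination is the last argument):

#!/bin/sh
# Toy "cp" wrapper: put its directory at the front of PATH so that
# "make install" picks it up instead of the real /bin/cp.
real=/bin/cp
# the destination is the last argument
for dest in "$@"; do :; done
case $dest in
/bin/*|/etc/*|/usr/*)
    # privileged destination: redirect it under $DESTDIR
    mkdir -p "$DESTDIR$(dirname "$dest")"
    # rebuild the argument list with the destination rewritten
    n=$#
    i=1
    for a in "$@"; do
        [ $i -eq $n ] && a=$DESTDIR$dest
        set -- "$@" "$a"
        i=$((i+1))
    done
    shift $n
    exec "$real" "$@"
    ;;
*)
    exec "$real" "$@"
    ;;
esac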

Need EASY for END-USER install of source code

Posted Aug 9, 2009 10:26 UTC (Sun) by nix (subscriber, #2304) [Link]

> But from what I've seen, someone who uses libtool will typically support DESTDIR as well (I speculate that if you're a libtool user, you're probably more interested in portability and thus more likely to support DESTDIR).
True. A quick audit here shows few examples: APR/apr-util/apache are the primary ones, but we know they're seriously weird to the point of not supporting biarch building without ugly hacks (and they have INSTALL_ROOT which you can use instead). (I mean, keeping around a copy of libtool from your configure process and reusing it in other projects? Ew.)

Auto-DESTDIR released

Posted Aug 17, 2009 18:25 UTC (Mon) by dwheeler (guest, #1216) [Link]

By the way, my "Auto-DESTDIR" program has now been released at http://www.dwheeler.com/auto-destdir - it supports DESTDIR in source installs, even when the original makefile doesn't support DESTDIR. Once you have Auto-DESTDIR installed, just run make-redir instead of make when you do an install, i.e., use "DESTDIR=... make-redir install" instead of "make install".

Need EASY for END-USER install of source code

Posted Aug 9, 2009 14:07 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

And then you'll need it to be able to upgrade itself, track dependencies, uninstall, etc.

In short, you need a PACKAGE system. So back to square 1.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 15:46 UTC (Sat) by MattPerry (guest, #46341) [Link] (3 responses)

> The ultimate solution is that packaging gets done upstream, not by
> distributions.

Yes, exactly.

> ./configure
> make && make rpm
> rpm --install *.rpm

That's a great idea. If that could be reduced to an action that happens when someone double-clicks on an installable package then it would be ideal.

> For every 1 man-hour that is needed to get software packaged you waste 4
> man-hours re-doing all the same work. [...] And, ironically, it's those
> binaries produced for Debian vs Ubuntu vs Fedora, etc etc... all are
> compatible with each other.

I thought that the Linux Standard Base was supposed to make it so that LSB-compliant binaries would work on LSB-compliant distros. If more effort were put into LSB compliance by the distros and application providers, maybe we could achieve single-click installation and execution for applications. It would be a win for end users, and for the distros too, as they wouldn't spend time duplicating work.

Ubuntu's multisearch surprise

Posted Aug 8, 2009 19:15 UTC (Sat) by drag (guest, #31333) [Link] (1 responses)

LSB works for some things, but obviously it's limited in scope.

Different standards organizations work in different ways. For example, Freedesktop.org has its XDG stuff, which was 'next gen' when it was developed. This has helped considerably in giving users a consistent experience and in addressing many of the annoyances that come from having different desktop environments.

A big example: before things like "xdg-open", application developers who wanted to call a separate browser for documentation or online links had to hardcode one into their installation scripts or their applications. The best they could do was to allow people to go into the application preferences and set a default browser. Most people just hardcoded 'mozilla', since that was the premiere browser at the time.

Obviously this is less than preferable: either forcing everybody to use 'mozilla', or forcing everybody to configure a browser for each and every application that happened to need to call one.
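
These days the application can just delegate the choice to the desktop, e.g. (URL invented):

# opens the URL in whatever browser the user's desktop has configured
xdg-open http://www.example.com/docs/manual.html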

LSB is a bit different, though. It is a 'best practices' type of organization that goes and finds what is consistent and supportable across all distros and then formalizes it. The amount of stuff that LSB tries to impose on distros is relatively small, and distros tend to respond very negatively when it does (like standardizing on RPM packages, or forcing some new directory structure).

So they don't assume a position of making people work together. Their position is much more centered around documenting and formalizing the ways in which distros already do. That way, third-party developers (ISVs) and people new to Linux have a place to go to learn what parts of the "Linux OS" they can depend on.

So when things that LSB doesn't cover cause incompatibility and problems for developers trying to target multiple distros, it's mostly because distros refuse to work together on those things. It's not LSB's position to try to force those sorts of issues.

Traditionally the biggest problem is going to be with GUI applications. The 'GNU' part in 'GNU/Linux' is normally very consistent, but the differences in how people set up KDE vs Gnome vs whatever cause all sorts of headaches.

------------------

The latest thing that LSB is trying to deal with is GUI applications, through the Moblin v2 specifications developed by Intel.

This is only targeting smaller devices, but I am hoping that it will extend to desktops and workstations.

Moblin dictates that you have a Gnome-based environment with all the dependencies it says you should have. It specifies particular libraries, and particular versions of those libraries. It also assumes that if you have libraries newer than the specified versions, those libraries will be backwards compatible with the versions it dictates. There are bunches of PDFs outlining all of this, plus test cases and a certification process.

So far it seems that Novell, Ubuntu, and Fedora are releasing Moblin-compliant versions of their distros, as are smaller custom-commercial companies like Xandros.

--------------

The main problem with Moblin v2 is that it's aimed more at smaller devices, but I am hoping that it'll move 'up' to include desktops and workstations. It's heavily influencing the direction of Gnome 3.0, I believe.

The other big problem is that it's very Gnome-centric, and thus KDE fans and people who care about older and slower devices will have a tendency to dismiss it early on.

My thoughts on this go like this:

Gnome stuff, while heavyweight when running, does not really take up that much disk space. Thus the price you pay for Moblin compatibility is rather low. Even old computers should be able to handle the additional disk space required without blinking.

And what it gets you is a solid and consistent base for users to build on.

----------------------------

I learned this when dealing with OpenLDAP.

People have, in the past, had huge issues learning LDAP with OpenLDAP. There is a wealth of documentation and examples on LDAP, but very little actually covers the initial setup and deployment of OpenLDAP. When you install OpenLDAP from Debian or Fedora it's built correctly, but it's a blank canvas...

So users will read the documentation and want to experiment with OpenLDAP, but the initial setup to get an actual working example is a very high barrier, and they get confused and dismayed and usually give up.

So if they were given a nice default setup, even if it isn't ultimately what they need, it would make things MUCH easier to learn and deploy, and would increase the value of the software by a large amount.

This is the advantage that companies get with Microsoft's Small Business Server. Everything is designed to work together by default. Even if SBS is not a good match for the company initially, having a consistent and working basis (as well as documentation) is invaluable to build on and modify so that it does end up being a good match.

------------------------

Linux desktops and desktop application development are not like that. There is no solid basis, no solid and consistent foundation.

It's like walking on shifting sand... everything is moving around under your feet. People get the feeling that they are building a house on wet clay... no matter how well and how strong you build your house, if the foundations are not done correctly and are not consistent then you're screwed. The more effort/time/money you spend on it, the worse off you are.

If users and developers are presented with a solid, working desktop that has all the basic features and services done correctly and working, then using and customizing the desktop becomes much easier. Even if 'Moblin' or 'Gnome' is not what they want, having that solid working base to fall back on and build off of makes things much, much easier.

What is the fun in getting Awesome WM configured and all EXCELLENT if you have to struggle with PulseAudio or ALSA, and need to compile patches into NetworkManager (or uninstall it entirely), just to get basic OS features working correctly?

With everything 'working by default', developers only have to worry about the stuff they care about. They don't have to worry about things that they don't want to care about and don't have in-depth knowledge of.

--------------------------

'Usable By Default' is along the same lines as 'Secure By Default'.

When a user installs a secure OS like OpenBSD, or Debian's 'base install', it has almost no chance of providing, out of the box, what the user ultimately wants. They want a network router or a web server or an email server or something like that.

However, what it gets you is that you can deploy an HTTP server without having to worry about SNMP security and SMTP security and SMB security and NFS security, etc. You only have to concentrate on the stuff you need to concentrate on, and you still end up with a secure system.

---------------------------

Like if I am developing a video game using Blender... I don't want to care about sound APIs. I don't want to have to worry about whether PulseAudio or ALSA or OSS is being used. I don't want to worry about whether users can connect to wireless networks correctly for playing multiplayer, or whether they have a decent browser that is compatible with my documentation. I don't want to worry about whether their distro has a good Mesa implementation, whether the user is running proprietary drivers, or whether the user does not know that they need drivers installed at all. I don't want to care whether they can fulfill the Python requirements, or have goofed-up Python libraries all the time.

If I have to worry about and deal with every little detail of the desktop, and have to provide workarounds and documentation and support for every little thing that can screw my game up... then I won't have any time to make the actual game!

OpenLDAP

Posted Aug 12, 2009 4:15 UTC (Wed) by TRS-80 (guest, #1804) [Link]

> People have, in the past, had huge issues learning LDAP with OpenLDAP. There is a wealth of documentation and examples on LDAP, but very little actually covers the initial setup and deployment of OpenLDAP. When you install OpenLDAP from Debian or Fedora it's built correctly, but it's a blank canvas...
>
> So users will read the documentation and want to experiment with OpenLDAP, but the initial setup to get an actual working example is a very high barrier, and they get confused and dismayed and usually give up.
>
> So if they were given a nice default setup, even if it isn't ultimately what they need, it would make things MUCH easier to learn and deploy, and would increase the value of the software by a large amount.

The Debian packages these days actually provide a base entry and an admin user in the database, although this isn't well documented and requires a purge to do it again if you mess up. There are a few documents that properly cover setup, like Debian GNU: Setting up OpenLDAP (although that assumes you'll use Kerberos instead of SSL to protect passwords) and my own LDAP for the Lazy Sysadmin, which aims to be useful, if a bit ranty. But yes, most LDAP documentation assumes you already know LDAP and is more of a reference, or is just cargo-cult "do this to make it work" with no insight into what's going on and how to fix it when it breaks.
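
For the curious, the sort of starting point being described is tiny. A rough sketch of seeding it by hand (example.com placeholders, not Debian's exact postinst output; assumes cn=admin,dc=example,dc=com is the rootdn configured for slapd):

# seed a minimal tree: a base entry plus an admin user
cat > base.ldif <<'EOF'
dn: dc=example,dc=com
objectClass: dcObject
objectClass: organization
o: Example Organization
dc: example

dn: cn=admin,dc=example,dc=com
objectClass: simpleSecurityObject
objectClass: organizationalRole
cn: admin
userPassword: secret
EOF
ldapadd -x -D cn=admin,dc=example,dc=com -W -f base.ldif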

Ubuntu's multisearch surprise

Posted Aug 9, 2009 17:20 UTC (Sun) by srivasta (guest, #7075) [Link]

I think you are paying far too little attention to systems integration. Anyone can package individual packages, but a bunch of independently packaged software does not a good distribution make.

By far the most impressive bit of Debian is the technical policy manual, and how the packages create a more integrated whole by following the dictates of policy (which, BTW, is not a static, dead set of rules, but a vibrant, breathing, changing document). Very little upstream-packaged software pays much attention to it, as can be seen by running lintian -vi on the packages produced.
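
(That check is cheap to run on a freshly built package; the file name here is invented:)

# check a locally built package against Debian policy, verbosely,
# with explanations of each tag found
lintian -vi gnome-games_2.24.0-1_i386.changes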

manoj

Ubuntu's multisearch surprise

Posted Aug 10, 2009 0:42 UTC (Mon) by ringerc (subscriber, #3071) [Link] (5 responses)

That looks good in principle, but tends to break easily when faced with different apps requiring different versions of libraries that don't maintain ABI compatibility. You can change the soname, but you're still in trouble if you encounter a linkage chain like:

  • theapplication
    • libthirdparty
      • libx1.1
    • libx1.2

(theapplication links libx1.2 directly, and pulls in libx1.1 indirectly through libthirdparty)

Symbol versioning of libx can help, but doesn't seem to solve all the issues.
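
For concreteness, the two mitigations mentioned look roughly like this at build time (x.c and libx.map are stand-ins):

# incompatible releases get different sonames, so both can be installed
gcc -shared -fPIC -Wl,-soname,libx.so.1.1 -o libx.so.1.1 x.c
gcc -shared -fPIC -Wl,-soname,libx.so.1.2 -o libx.so.1.2 x.c

# symbol versioning: tag the exported symbols with a linker version script
# libx.map:  LIBX_1.2 { global: x_init; x_frob; local: *; };
gcc -shared -fPIC -Wl,--version-script=libx.map -o libx.so.1.2 x.c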

Ubuntu's multisearch surprise

Posted Aug 10, 2009 11:21 UTC (Mon) by mjthayer (guest, #39183) [Link] (4 responses)

That is more a consequence of problems in the *nix linking model, where all shared objects land in a global namespace. That is desirable for a few things like the C library, but for most shared objects it causes more problems than it solves. It would be so nice if build-time linking had the equivalent of dlopen's RTLD_LOCAL flag. I think this should even work without breaking LD_PRELOAD, as preloaded libraries would still be loaded as "RTLD_GLOBAL" libraries, and so would still override the subsequently RTLD_LOCAL-loaded ones.
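
The dlopen() behaviour in question can be demonstrated in a few lines; a self-contained sketch (all file and symbol names invented, error handling kept minimal):

# two libraries that export the same symbol name
cat > liba.c <<'EOF'
const char *whoami(void) { return "liba"; }
EOF
cat > libb.c <<'EOF'
const char *whoami(void) { return "libb"; }
EOF
cat > main.c <<'EOF'
#include <dlfcn.h>
#include <stdio.h>
int main(void)
{
    /* RTLD_LOCAL keeps each library's symbols out of the global
       namespace, so the two whoami() definitions cannot clash */
    void *a = dlopen("./liba.so", RTLD_NOW | RTLD_LOCAL);
    void *b = dlopen("./libb.so", RTLD_NOW | RTLD_LOCAL);
    if (!a || !b)
        return 1;
    const char *(*fa)(void) = (const char *(*)(void))dlsym(a, "whoami");
    const char *(*fb)(void) = (const char *(*)(void))dlsym(b, "whoami");
    printf("%s %s\n", fa(), fb()); /* prints "liba libb" */
    return 0;
}
EOF
gcc -shared -fPIC -o liba.so liba.c
gcc -shared -fPIC -o libb.so libb.c
gcc -o main main.c -ldl && ./main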

Ubuntu's multisearch surprise

Posted Aug 10, 2009 18:53 UTC (Mon) by nix (subscriber, #2304) [Link] (3 responses)

It can still be a problem even if you have RTLD_LOCAL, if objects managed by one version of a shared library end up being passed to another version in the same address space, or if both of them are contending to manage some shared resource (classic examples, both from libc: wtmp and the malloc arena).

What RTLD_LOCAL fixes is the 'whoops, symbol name clashes between totally different shared libraries because they dared to use a symbol with the same name' problem.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 19:08 UTC (Mon) by mjthayer (guest, #39183) [Link] (2 responses)

Right, those are the "few things like the C library" that I meant: things which are global to the application by nature. Having a link-time RTLD_LOCAL would not be a panacea, and would be bound to introduce a few problems of its own. On the other hand, since it would be an opt-in thing for each shared object linked into any ELF binary, the problems could be sorted out incrementally.

Ubuntu's multisearch surprise

Posted Aug 11, 2009 9:22 UTC (Tue) by ringerc (subscriber, #3071) [Link] (1 responses)

It affects more than just a few C library details. In particular, 'static' variables can be a problem. Link-time RTLD_LOCAL appears to imply the presence of multiple instances of the library executable mapped into memory, or at least multiple copies of its rw segments. Otherwise you'd encounter funky behaviour differences depending on whether two unrelated libs happened to link to the same version of some third lib, or to different versions of it.

That makes it hard when libraries at the same linkage "level" (say libh and libi) want to pass around objects that may, either overtly or behind the scenes, rely on shared static data (library-internal caches or the like) in a library (say libj) they both use. Of course, they can't pass libj objects across library boundaries anyway unless they know they're using at least a compatible version of libj, since the objects' layout and/or meaning might not be compatible; so they may as well share the same instance of libj's rw data segment too, as if it were RTLD_GLOBAL.

This sort of thing is very common in things like application frameworks (qt/gtk/etc) and plugins.

I seem to remember that Mac OS X tackles this by defaulting to something like RTLD_LOCAL linkage, but allowing objects to specify global linkage instead. That's vague memory only though, and I haven't hunted down any documentation to confirm it.

Ubuntu's multisearch surprise

Posted Aug 11, 2009 9:34 UTC (Tue) by mjthayer (guest, #39183) [Link]

Still, I would have thought that whoever links in a shared object has a good idea at link time of whether or not they need to share objects from that library with others. To take your example, if libh and libi share objects created by libj, then they should both know to link libj into the global scope -- that is, not to opt in to local linkage for it.

Ubuntu's multisearch surprise

Posted Aug 10, 2009 8:13 UTC (Mon) by mjthayer (guest, #39183) [Link]

> The ultimate solution is that packaging gets done upstream, not by distributions.

I would suggest a slight correction to your assertion. At least for "true" FOSS applications, the ultimate solution might be for the distribution packagers to work directly on the upstream code base instead of on a distribution fork. The upstream code base would then support your "make rpm", but it would be Redhat/Fedora/SUSE/whoever who would actually run it to create the packages. The distribution packagers would then have the additional tasks of monitoring the changes made by packagers for other distributions and harmonising them with their own needs (or, at the very worst, adding upstream "#ifndef FEDORA"s and suchlike), and of making sure that upstream version branches fit in with their own versioning requirements.

