
LCA: Disintermediating distributions

By Jonathan Corbet
February 6, 2008
One of the mini-confs which happened ahead of linux.conf.au proper was the "distribution summit," meant to be a place where representatives and users of all distributions could talk about issues of interest to all. The highlight of this event, perhaps, was Jeff Waugh's talk on disintermediating distributions - or, as he rephrased it, "distributed distributions." If his ideas take hold, they could be the beginning of a new relationship between free software projects and their users.

It all started, says Jeff, some years ago, when he ran into Mark Shuttleworth fresh from a visit to Antarctica. Mark's pitch, says Jeff, "sounded like crack" at the time. By 2003 or so, it just didn't seem like there was a whole lot of room for a new distribution. But Mark had some interesting ideas, and Jeff signed on; the result, of course, was Ubuntu.

Ubuntu has clearly had some success, but, in some important ways, it has failed to work out - at least for Jeff. He found himself distracted by Ubuntu's lack of participation in Debian, from which it derived its product. There was a real tension between tracking Debian and tracking upstream projects more directly. Despite Jeff's insistence that Ubuntu should be tracking (and pushing updates into) Debian's unstable distribution, Ubuntu often chose to go with upstream, resulting in what is, in effect, a fork of the Debian distribution - in terms of both the technology and the community.

[Jeff Waugh]
What Ubuntu was doing was taking upstream packages, modifying them, bringing in shiny new features, and generally looking for ways to differentiate itself from the other distributors. So, for example, the first Ubuntu release contained a great deal of Project Utopia work (aimed at making hardware "just work" with Linux) which had been done by developers from other distributions; Ubuntu shipped it first, though, and got a lot of credit for it. Novell's behind-closed-doors development of Xgl was motivated primarily by the wish to keep Ubuntu from shipping it first. Meanwhile, Red Hat had slowly learned that trying to differentiate itself by diverging from upstream was a path to pain. So Red Hat's developers created AIGLX, in an open, community-oriented manner; the result is that AIGLX has proved to be the winning technology.

Events like these led Jeff to wonder about just where the integration of packages should be done - upstream or downstream? From Jeff's (GNOME-based) upstream point of view, he wonders why he doesn't have a direct relationship with his users. While most projects deliver their code through middlemen (distributors), there is an example of a project which has managed to maintain a much more direct relationship: Firefox. Most Firefox users are direct clients of the project - though most of them are Windows users. The Firefox trademark has been used to ensure that, even when distributors are involved, the upstream developers get a say in what is delivered to users.

So, what happens if you take out the middleman? It's instructive to look back at what life was like before there were distributors. It was, Jeff says, much like pigs playing in mud; perhaps they enjoyed it, but it was messy. There are, in fact, a lot of good things that distributors have done for us. You can get a fully integrated stack of software from one source, and the distributor acts, in a way, as the user's advocate toward the upstream project. We don't want to lose out on all that.

But, if one were to look at facilitating a more direct relationship between development projects and their users, one would want to take advantage of a number of maturing technologies. These include:

  • OpenID. Any process of distributing distributions must look at distributed identity, and OpenID is the way to do it.

  • DOAP. "Sounds terrible" but it's a useful way of describing a project with XML. With a DOAP description, a user can find a project's mailing lists, bug tracker, source repository, etc.

  • Atom. This is how projects can distribute information about what they are doing.

  • XMPP. This is a Jabber-based message queueing and presence protocol. It can be used for more active publishing of information than Atom allows.

  • Distributed revision control. Lots of functionality for integration between projects, and between upstream and downstream. Jeff sees git as a step backward, though; some of the other offerings, he thinks, have much better user interfaces.

Also important are the packaging efforts which are underway in a number of places. These include Fedora, which is "becoming competitive with Debian" as a community project. OpenSUSE has put together a build system which can create packages for a number of distributions. Debian has had a community build system for years; there is interest in Debian in going the next step, though - ideas like building packages directly from a distributed version control system. Ubuntu's Launchpad was "a spectacular vision," though the reality is "a bit of a snore"; it didn't achieve its goal of helping upstream and downstream work together.

Then there's Bugzilla, which is the "bug filing gauntlet" between projects and their users. The Debian bug tracking system has done a better job of facilitating bug reports by allowing them to be submitted by email. But most big projects are using Bugzilla. It would be much improved by using OpenID (so that users would not have to register to file bugs) and some sort of Atom-based feed which would make querying bugs easy.

If you take out the distribution, what do you replace it with? How do we achieve consistency? We need to create standards for how we interact with each other. And we can, in fact, be very good at consistency and standards when the need is clear. Good release management is a step toward that goal. GNOME once had very bad release management, but has pulled it together. Doing time-based releases was a hard sell, but few developers would want anything else now. Now GNOME release management just works.

Consistency in source management is needed. Once upon a time that was done through CVS, but CVS is no longer up to the job, and now every project is using a different distributed version control system. But, sooner or later, one of the competing projects will win out and "hopefully we'll have clarity again." Autotools and pkgconfig can also go a long way toward creating consistency between projects.

So, if we can push the available tools up into the upstream projects, those projects can get better at producing packages for distributions themselves. Once the tools (like bug trackers) can talk to each other, people will start making more use of them and network effects will take over. But, at the moment, the knowledge about integration remains at the distribution level.

Debian, Jeff thinks, is well placed to take on a project like this and push its integration knowledge upstream. While Debian has typically been ten years ahead of everybody else in its packaging and integration abilities, it currently has a "relevancy problem." Finding ways to help upstream projects support their users more directly while maintaining overall integration and consistency would be a perfect way for Debian to maintain its leadership in this area. That could change the game for everybody, bringing projects closer to their users and making us all "happy as pigs in mud."



LCA: Disintermediating distributions

Posted Feb 6, 2008 18:02 UTC (Wed) by aleXXX (subscriber, #2742) [Link] (109 responses)

> Once upon a time that was done through CVS, but CVS is no longer up to 
> the job, and now every project is using a different distributed version 
> control system. But, sooner or later, one of the competing projects 
> will win out and "hopefully we'll have clarity again." Autotools and
> pkgconfig can also go a long way toward creating consistency between
> projects.

He is right about CVS.
IMO he obviously isn't right about autotools. For some time now, a lot of projects have been switching away from them, mainly to CMake (and SCons).

Autotools (besides being really hard for developers to learn) have one real problem: they require a shell. Free software is becoming more and more portable, including to Windows. Typical Windows users/developers don't have a UNIX shell, so autotools don't work there (yes, there are cygwin and various levels of mingw with a shell, but IMO these are all work-arounds). Look e.g. at Python: it has an autotools-based build system and, in addition to that, MSVC project files. I don't know how e.g. Mozilla or Gimp or OpenOffice handle this, but having to maintain multiple independent build systems in parallel is obviously not a good solution.

And, honestly, pkgconfig has a big design flaw. It stores information about how a library is to be used, but it stores it in a format which can be used for just one purpose: executing it and feeding the results directly as arguments into gcc. If the information were available in some structured way, it could actually be processed and used for other things (e.g. linking a library with pkgconfig information with MSVC, or getting the include dirs and definitions for use in an IDE so it can do proper autocompletion).
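
For illustration, the usage pattern in question looks roughly like this in a Makefile (a sketch assuming GNU make; glib-2.0 is just an example module name):

# pkg-config's output is only meaningful as flags spliced straight
# into the compiler and linker command lines.
CFLAGS += $(shell pkg-config --cflags glib-2.0)
LIBS   += $(shell pkg-config --libs glib-2.0)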

Alex

LCA: Disintermediating distributions

Posted Feb 6, 2008 18:49 UTC (Wed) by stevenj (guest, #421) [Link] (88 responses)

Autotools (besides being really hard for developers to learn) have one real problem: they require a shell. Free software is becoming more and more portable, including to Windows.

Windows is now the only major platform that doesn't have a POSIXish shell; I'm somewhat suspicious of claims that the free software community should change its entire build toolchain because of one proprietary Microsoft operating system.

You call Cygwin/Mingw "workarounds", but this is not too different from requiring that any other build tool (cmake or scons or whatever) be installed on the user's system. It takes 5 minutes to install Cygwin. And Windows developers participating in FLOSS projects are going to need to know something about Unix since that is the primary target of most such projects.

LCA: Disintermediating distributions

Posted Feb 6, 2008 19:42 UTC (Wed) by drag (guest, #31333) [Link]

Pretty much. 

There are only two major platforms that really matter nowadays... and those are Windows and Linux.

Every major Unix platform now has Linux compatibility as a high priority, and most Linux developers consider cross-platform support important, so it works out that OS X, FreeBSD, Solaris, and friends can essentially be treated as 'Linux' with a little effort. Most of these sorts of differences are abstracted away from common app developers by the GNU stuff.

The temptation is that by supporting Windows as a primary OS, open source projects can dramatically increase their profile. They can attract media attention, get financing, and hopefully attract developers. These are all good things.

The bad thing about supporting Windows, as I see it, is that you're going to massively increase your support costs and make a number of things that are very easy to do on Linux very hard, because now you have to do them the Windows way as well.

There is one major project that I know of that has a Windows port, more or less, but refuses to release it. This is Ardour, a significant audio production application for Linux. They don't release it because they feel the cost of supporting Windows users with common audio problems is too much for them to handle. (Something like that; Ardour folks, forgive me if I am wrong.) If somebody who is an expert in Windows were willing to spend all their time supporting Ardour there, they'd probably release it.

Another effect of Windows on Linux stuff that I see is that popular high-level languages tend to shun forking and sockets for IPC, and instead concentrate on making dozens of different high-level schemes for shared memory IPC and multi-threading. Most of these have little significant advantage over traditional 'posix-style' methods in a Linux-only world, but matter dramatically in the Windows one. (At least that was my impression.)

LCA: Disintermediating distributions

Posted Feb 6, 2008 20:13 UTC (Wed) by aleXXX (subscriber, #2742) [Link] (86 responses)

> Windows is now the only major platform that doesn't have a POSIXish
> shell; I'm somewhat suspicious of claims that the free software
> community should change its entire build toolchain because of one
> proprietary Microsoft operating system.

There are several points here.
While Windows is the only major platform which doesn't have a POSIX 
shell, it is on the other hand the platform which has like 80 or 90% 
market share. So one could also say "only 10 to 20% of installed systems 
have a POSIX shell".

I also have the impression that the open source projects which are portable between UNIX and Windows are the most successful projects: Firefox, Thunderbird, OOo, gcc, Python, Ruby, Apache, MySQL, php, ...
Also, if you are in a setting where you have Windows and UNIX users, software which runs on both systems has a significant advantage. So IMO it makes sense to also support Windows if you want to spread the use of free software and give people power over their data again.

Besides portability, there are enough other reasons to switch from autotools to something easier to use. I know there are gurus who understand autotools, but my impression is that the big majority of Linux developers don't really understand them. That's my personal experience in KDE and with other developers I know from real life. It's also hard to convince e.g. students who have a hard time learning C or C++ that they additionally have to learn autoconf, automake, libtool, shell, m4 and make syntax. It just doesn't make sense if the build tool is more complicated than the programming language itself.

About cygwin: I really like it, but I come from a Linux background. While it doesn't take too long to install, you end up with a UNIXish environment, which takes maybe around 100 MB or more on the hard disk and which feels really alien to Windows-only users/developers. And all that just to get a tool to generate build files?

One of the nice things about cmake is that it doesn't have any additional requirements like libraries or scripting languages; you just install it and it will work together with your native build tool (make, Xcode, Visual Studio).

Alex

LCA: Disintermediating distributions

Posted Feb 6, 2008 20:50 UTC (Wed) by asamardzic (guest, #27161) [Link] (1 responses)

Besides portability, there are enough other reasons to switch from autotools to something easier to use. I know there are gurus who understand autotools, but my impression is that the big majority of Linux developers don't really understand them.

I can only second that. I consider myself rather experienced, having developed mostly for Unix, both on my open-source projects and at my work, for a number of years, and having used autotools heavily for much of that time; still, I was never able to confidently accomplish anything more complicated with them, for example creating my own set of macros to find a given library or package. Discovering CMake, while it certainly has its own set of quirks, was like nirvana for me - I was finally able to properly understand my build system, I was able to read the generated makefiles, and - yes - I was able to write modules (in CMake terminology) for finding other stuff on the system without any problem.

So, while I couldn't care less about Windows portability (well, at least for my open-source work), I really couldn't find the call for some kind of uniting around autotools (if I understood that part of the article properly) compelling.

Thumbs up on CMake

Posted Feb 7, 2008 6:48 UTC (Thu) by robla (guest, #424) [Link]

We're in the process of switching over to CMake for the Second Life viewer, and from what
we've seen so far, it looks great.  When you have a lot of Windows and Mac programmers on your
team, it's a huge win for those people to be able to use the dev tools they are used to (e.g.
Visual Studio and XCode). The fact that CMake generates build files for those and a lot more
is a pretty unique characteristic among build systems.

LCA: Disintermediating distributions

Posted Feb 6, 2008 22:56 UTC (Wed) by stevenj (guest, #421) [Link] (79 responses)

It's also hard to convince e.g. students who have a hard time learning C or C++ that they additionally have to learn autoconf, automake, libtool, shell, m4 and make syntax. It just doesn't make sense if the build tool is more complicated than the programming language itself.

Skilled programmers don't have a hard time learning a new syntax. What makes a tool like autoconf difficult is that the underlying portability problem it solves is quite hard and intrinsically complicated.

In my experience, non-trivial projects that avoid autoconf tend to start re-inventing it, badly, as soon as they move beyond compiling just on GNU/Linux systems and interacting in non-trivial ways with the build environment. There's a reason why most major free projects use it (except for Python and Perl programs, which live in their own self-contained little build universes, although once they start depending on other external non-Python/Perl libraries it can quickly get messy again).

Similar things happen with automake, although admittedly some of automake's complexity comes from having to deal with the underlying deficiencies of 'make'. But there is also a fair amount of intrinsic complexity from the fact that you want to support a lot of make targets (make dist, make clean, make distclean, make check, make install, make uninstall, etc. etc.), VPATH builds, source files that are built by scripts or programs in some projects, multiple languages and compilers, different compile/link flags for different programs in the same projects, building shared libraries on multiple platforms with very different shared-library build procedures and semantics, etcetera etcetera.

By the time you understand the underlying problem, using the actual tool is pretty straightforward and well documented (see e.g. the "autobook").

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:26 UTC (Wed) by vmole (guest, #111) [Link] (11 responses)

What makes a tool like autoconf difficult is that the underlying portability problem it solves is quite hard and intrinsically complicated.

Actually, the problems that autoconf "solves" are the *easy* parts of portability. Actually dealing with the semantic differences between OS's is the hard part. The worst part is that autoconf encourages a really bad programming style of ifdefs and the use of the 6 slightly OS-specific versions of foofunc() rather than just following the dang standard. Henry Spencer told us why that was a bad idea [warning: PDF] 15 years ago, but the GNU people failed to heed him.

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:42 UTC (Wed) by vapier (guest, #15768) [Link] (7 responses)

the void you describe is largely addressed by things like the gnulib project.  it takes care
of all the OS-specific issues by checking to see if the function in question is usable on the
host system.  if it isn't, the gnulib version is provided.  thus the code that *you* write is
able to freely assume the function in question is available.

as for encouraging ifdefs, there are simply some things you cannot solve otherwise.  how do
you propose people integrate optional support for addon libraries?  perhaps you can support a
wide range of graphic formats, but only if the external library is available ... yes, you can
write autotool code to largely avoid this (make most pieces standalone files that are
optionally compiled), but it doesn't make the issue magically go away due to the nature of the
code (assuming C/C++ here).

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:01 UTC (Thu) by vmole (guest, #111) [Link] (6 responses)

No, you can't get rid of them entirely, and Mr. Spencer doesn't claim you can. They were added to the language for a reason, after all. But all too much autoconf code has stuff like this:

#ifdef HAVE_INDEX
p = index(s, c);
#elif HAVE_STRCHR
p = strchr(s, c);
#endif

Yeah, that's a really old example, but while the names of the functions change, the style doesn't.

And while I'm at it, why do I still have to sit through messages like

Checking for C89...okay
Checking for stdio.h...found
Checking for strcpy()...found
...

Hey, if it's C89, then ALL THOSE FUNCTIONS AND HEADER FILES ARE THERE. If not, the implementation is broken WAY beyond what autoconf can solve.

And, not to go off on a rant (too late!), why do I have to sit through 3 minutes of autoconf masturbation just to go to a 3 second compile of 300 lines of standard C?

Yes, I know that isn't really autoconf's "fault". It can be used in reasonable ways. But the autoconf culture encourages such bad choices, because 95% of the users don't understand it and just copy other people's bad choices.

Feh.

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:28 UTC (Thu) by stevenj (guest, #421) [Link] (3 responses)

In my experience with developing many actual programs using autoconf, the checks in the configure script are a direct consequence of porting to some platform or another. ("Crap, MinGW doesn't support gettimeofday, we'll need to check for it and whatever alternative is available.")

Most new code these days assumes C89 support at least, and doesn't check any more for functions like "strcpy". On the other hand, as recently as a few years ago the default Solaris compiler didn't support "const" by default unless you jumped through hoops, and hence a lot of programs had checks for things like that.

It's true that, at some point, older programs might be able to remove checks for things that only break on ancient platforms, but it can be really hard to decide exactly when to remove such checks. It's safer to leave them in and wait the extra seconds than to break the build on an extant architecture.

Regarding the time for configure to run, it's true that the configure script is often slower these days than compiling the program. For large programs, parallel make speeds things up a lot, while parallelizing configure tests is extremely tricky (although it has been discussed a lot by the autoconf developers). For small projects, the configure script may call the compiler more times than the Makefile itself, but long experience has shown that actually trying to compile something in the configure script is by far the most reliable way of performing a feature test. In any case, the number of times that one has to run "configure" is small (most people don't run it at all anymore, since prepackaged binaries from distros are ubiquitous), and it's better to sacrifice a little build time than to sacrifice robustness.

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:52 UTC (Thu) by vmole (guest, #111) [Link] (1 responses)

In my experience with developing many actual programs using autoconf, the checks in the configure script are a direct consequence of porting to some platform or another.

My experience says otherwise. Things like checking C++-related stuff (and even Fortran!) in projects that are pure C. And the successful check for C89 followed by checks for individual C89-standardized functions is widespread, almost certainly because there's some autoconf macro that does all of it. It's just stupid and annoying.

LCA: Disintermediating distributions

Posted Feb 7, 2008 1:40 UTC (Thu) by stevenj (guest, #421) [Link]

The checks for C++ and Fortran in C-only programs were due to a libtool 1.5 bug that has since been fixed, IIRC.

And it's true that AC_PROG_CC (the check for a C compiler) automatically calls a check to make sure the compiler is in C89 (ISO C90) mode, and if not it tries to find an option to put the compiler in C89 mode. Reliably checking whether the compiler is in C89 mode involves, among other things, checks for stdio.h. These checks were still required fairly recently — e.g. on AIX circa 2003 you had to use "-qlanglvl=extc89" or it didn't handle macro parameters in completely ANSI fashion, and on HPUX the compiler was non-ANSI by default until at least the late 90s. And many of these systems were still running long, long after their release dates (e.g. I heard from Solaris users with a compiler that defaulted non-ANSI as recently as a few years ago).

When autoconf has a default check, there's usually a good reason for it; most developers don't have experience on a wide enough variety of platforms to appreciate this. Try to get a patch accepted into autoconf sometime and you'll see what they have to deal with and why so many complaints about autoconf are founded in ignorance.

(And if your configure time is dominated by the default check for an ANSI compiler, your project is pretty small indeed. As I said, in my experience most configure scripts times for projects of decent size are dominated by checks that the programmers explicitly included in response to portability problems --- or by checks that are invoked by those checks like the case above, which the autoconf developers put in there for good reason. And, really, it's not like this is a huge problem—how often do you run configure scripts, and how many seconds would you be willing to shave off at the risk of losing portability to some machine?)

LCA: Disintermediating distributions

Posted Feb 7, 2008 7:51 UTC (Thu) by aleXXX (subscriber, #2742) [Link]

> In my experience with developing many actual programs using autoconf,
> the checks in the configure script are a direct consequence of porting
> to some platform or another. ("Crap, MinGW doesn't support
> gettimeofday, we'll need to check for it and whatever alternative is
> available.")

Yes, basically the configure checks are necessary.
Autoconf itself wasn't so bad. But then additionally you have to 
understand how it works together with automake and libtool. This is what 
made it really hard for me.

And I'm sure this is also the reason why many programs run many more configure checks than they actually need. They take the autotools stuff from some existing project and just change it so that it can build their program. With this approach you end up with all the configure checks the original script had, and maybe later on you have to add some more. IMO it is simply too hard to build a simple program using autotools. Who wants to learn all that just to build hello world?

Using other build systems, such as cmake, makes this trivial:

add_executable(helloworld hello.c)

That's all; cmake will do a few checks to find the compiler, check that it actually works, and figure out whether it's a 32- or 64-bit compiler.
I guess for SCons it's similar.

Also, autotools don't push you toward a modular style; at least in KDE we ended up with a few huge scripts which did a lot of magic. CMake, e.g., strongly encourages a modular system (it is of course still possible to throw everything into one file, but then you are intentionally working around its features).

Alex

LCA: Disintermediating distributions

Posted Feb 7, 2008 20:04 UTC (Thu) by vapier (guest, #15768) [Link] (1 responses)

the issues you raise i don't really see being "solved" or even really addressed any differently
with other build systems (cmake/scons/whatever).  a build system cannot really compensate for
a coder's inability to write clean code.  what do you see in other build systems that
encourages a different style of coding?

the example you cite could easily be relegated to a header file without making the real code
messy.  while it is an older example, there are many similar situations that could generally
be solved the same way: take care of OS differences in one location (a separate source or
header file) and keep everything else clean.

LCA: Disintermediating distributions

Posted Feb 7, 2008 21:22 UTC (Thu) by vmole (guest, #111) [Link]

the example you cite could easily be relegated to a header file without making the real code messy.

Bingo. But they're not. Why not? Because people just copy the way previous programmers did things. Maybe the autotools problem is that there are just way too many bad examples out there.

As for other build systems, all I can do is quote the comment immediately above yours, from Alex, apparently a KDE developer: "CMake strongly encourages a modular system (while it is of course still possible to throw everything into one file, but then you intentionally work around its features)." Autoconf makes bad usage patterns as easy as (or, at first glance, even easier than) good ones.

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:55 UTC (Wed) by stevenj (guest, #421) [Link] (2 responses)

There are lots of cases where just following the dang standard is not practical, or not sufficient. For one thing, not all platforms implement the dang standard, and if you don't want to fail completely when this happens you need some workaround. For another thing, in some applications it's extremely useful to support functionality that may not be available on all platforms—for example, SSE instructions or high-resolution timers.

Also, a lot of what autoconf deals with is checking for things in the build environment which are essential but not standardized, such as how to link shared libraries. e.g. POSIX threads and OpenMP are two examples of formally standardized libraries that you can depend on, but each compiler and OS has its own command to link with them (see here and here). Or suppose you want to use features from the 1999 ANSI C standard, which has been out for 9 years now but compilers (including gcc) still make you jump through hoops to enable support for it, and of course each compiler has its own hoop (which autoconf will detect).

Also, free-software projects often build upon other projects so as to avoid re-inventing the wheel, and there are lots of extremely useful libraries (from GNU readline to HDF5 to LAPACK to Expat to Boost to...I'm just picking things at random) that are not standardized by any standards body. Part of autoconf's job is to help you detect whether such a library is present and contains the function you want (it may not, e.g. if it is the wrong version).

And heaven help you if you want to link together multiple languages, e.g. you have a C++ program and you want to link Fortran numerical libraries (e.g. LAPACK), without autoconf to help you detect how to do it with your compiler (each one has a different incantation).

Also...well, just look at the autoconf documentation for the variety of kinds of things one has to check for. As I said, there's a reason for its popularity, which extends far beyond "the GNU people"...it fills a real need. People who don't understand what it does are doomed to reinvent it badly.

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:12 UTC (Thu) by vmole (guest, #111) [Link] (1 responses)

I agree that those are all problems that autoconf/libtool/etc. claim to solve. My experience (which is extensive) is that they don't reliably work, and I spend a *LOT* more time figuring out the problem and fixing it than I did with packages that simply ask me to set a few variables in the beginning of the Makefile.

I'd guess that if all you ever work with is Linux, BSD, and possibly Solaris, these tools do work, mostly. OTOH, those are the really easy ones.

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:39 UTC (Thu) by stevenj (guest, #421) [Link]

The autoconf developers go to great lengths to support systems beyond Linux and BSD, and it's simply untrue that the tools break on other systems. I personally work on software that has run for years on everything from HPUX to Tru64 to UNICOS to AIX to MinGW using the autotools.

It's true that many people don't know what to do when configure fails. The usual mistake is to start poring over the configure script (which is essentially object code) rather than RTFM. configure --help gives a clue: most problems can be solved by setting an environment variable on the command line with configure LDFLAGS=... or whatever. The most common problems in my experience are due to libraries installed in nonstandard locations, and in this case there's simply no way around requiring the user to tell you where things are (which in autoconf is done by setting LDFLAGS and CPPFLAGS).

It's also true that some programmers misuse autoconf. e.g. even though the autoconf manual strenuously recommends doing feature tests by actually compiling and linking things, autoconf also provides a macro to get a canonical target name (e.g. i386-linux-gnu) and some programmers take the shortcut of explicitly testing this when they shouldn't. The difficulty is that if you are testing for a feature that does not have a built-in autoconf test, writing a portable feature test is hard, especially if you don't have many platforms to test on—but again, I think this is somewhat intrinsic to the problem and is not really autoconf's fault.

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:35 UTC (Wed) by DonDiego (guest, #24141) [Link] (13 responses)

What 'underlying deficiencies of make' are you talking about?  I don't see any such thing.

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:45 UTC (Wed) by vapier (guest, #15768) [Link] (8 responses)

autotools allows you to create a build system that will work on any system that has a shell
and make.  that make may not be GNU make.  it may have bugs.  its feature set may be severely
limited compared to what you're used to.  many projects out there that set out to write their
own build system do so using GNU make, which means they write code that Works For Them.
non-portable GNU-isms creep in, which means the build system is no longer portable to many
targets, which sort of defeats the original intent: making it portable.
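
a few of the GNU-isms in question, for illustration (none of these will work with a plain POSIX or vendor make):

# GNU make extensions; a vendor make will typically choke on all of them
SRCS := $(wildcard *.c)              # ':=' assignment and the wildcard function
OBJS := $(patsubst %.c,%.o,$(SRCS))  # the patsubst function
ifeq ($(CC),gcc)                     # ifeq/else/endif conditionals
CFLAGS += -Wall
endif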

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:17 UTC (Thu) by vmole (guest, #111) [Link] (7 responses)

"Don't write portable Makefiles, use a portable make." I don't remember the source of that quote, but I do know that trying to deal with each broken vendor version of make leads to nothing pain and tears before bedtime. If you're building software, just install gmake and be done with it.

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:38 UTC (Thu) by nix (subscriber, #2304) [Link] (5 responses)

Well, I know Paul D. Smith (the GNU make maintainer) has said that in the past, and I agree with him (for what very little it's worth).

LCA: Disintermediating distributions

Posted Feb 7, 2008 1:11 UTC (Thu) by stevenj (guest, #421) [Link] (4 responses)

Note that GNU make itself is built using automake, so it seems the GNU make developers have voted with their feet on this one.

LCA: Disintermediating distributions

Posted Feb 7, 2008 1:41 UTC (Thu) by stevenj (guest, #421) [Link] (3 responses)

(Although, to be fair, it's not like they could require GNU make to build.  =)

LCA: Disintermediating distributions

Posted Feb 7, 2008 7:05 UTC (Thu) by madscientist (subscriber, #16861) [Link] (2 responses)

Although GNU make does come with an automake environment, it also provides a shell script that
can be used to build make.  Obviously this will recompile and relink everything every time you
run it, but GNU make is not such a huge program that this is a problem.  And once you've got
it built once, you can use that make for subsequent builds.  Having this avoids the catch-22
of needing some make to build make.

That said, automake is awesome especially if you're developing highly portable tools, which
most of the GNU tools are.  For GNU make I don't so much care about the portability aspects,
although that's nice too (but the shell script above would be enough of an "out" for GNU make
itself).  The great thing about automake is all the default rules it provides, including
things like distcheck for building new packages, etc.  These rules save huge amounts of time
and effort for package developers/maintainers.

LCA: Disintermediating distributions

Posted Feb 15, 2008 11:51 UTC (Fri) by ekj (guest, #1524) [Link] (1 responses)

The catch-22 of needing a make-program to compile gnu make isn't that much of a problem
really.

You need a C compiler to compile GCC too. If you want it self-compiled you need to compile it
twice:

First use whatever C-compiler you happen to have lying around to compile GCC. 

Then use your fresh gcc to compile gcc. 

LCA: Disintermediating distributions

Posted Feb 15, 2008 22:00 UTC (Fri) by nix (subscriber, #2304) [Link]

With recent versions, thanks to the magic of top-level bootstrap, `make' 
should give you a compiler and libraries byte-for-byte identical to what 
you'd have got if you did the recompile-it dance. (Older versions wouldn't 
have recompiled libiberty with the new compiler before linking that 
compiler with it; top-level bootstrap has fixed that.)

LCA: Disintermediating distributions

Posted Feb 7, 2008 20:11 UTC (Thu) by vapier (guest, #15768) [Link]

it's really a mindset.  do you tell every user their system sucks and they should install
Linux and up-to-date utilities?  or do you use automake so that everyone out there can build your
project without a problem?  some people will choose the former while others will pick the
latter.  in the latter case, automake is the only realistic solution.

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:15 UTC (Thu) by stevenj (guest, #421) [Link] (3 responses)

Try using raw 'make' in a project with subdirectories sometime.

Another obstacle is that 'make' relies on 'sh' (a fairly primitive language) if you want to do anything nontrivial, and portable shell programming requires extreme care. (This is also a source of unfortunate complexity in autoconf and automake, but they provide tools to lessen your dependence on sh, at least.)

LCA: Disintermediating distributions

Posted Feb 7, 2008 4:39 UTC (Thu) by roelofs (guest, #2599) [Link] (1 responses)

Try using raw 'make' in a project with subdirectories sometime.

Have:

        $(MAKE) -C subdir
or
        cd subdir && $(MAKE)

I believe even MS nmake circa 1990 supported that much, and it's not even close to a standard make. Every Unix make I've used (a couple dozen) was better than that.

But it has been quite a few years since I messed with any of this stuff, so craniorectal impaction is always a possibility.

Greg

LCA: Disintermediating distributions

Posted Feb 7, 2008 10:05 UTC (Thu) by epa (subscriber, #39769) [Link]

Have a look at the article linked from the parent post.  Calling one 'make' from another
makefile tends to lead to all sorts of crufty problems.  The paper gives examples, and my own
experience certainly confirms it.  That said, you can use make in a large project, just not by
recursive invocation.

LCA: Disintermediating distributions

Posted Feb 7, 2008 11:52 UTC (Thu) by cortana (subscriber, #24596) [Link]

> Try using raw 'make' in a project with subdirectories sometime. 

Then you have already failed. Why on earth would anyone want to call make recursively? The
only reason I have ever heard is "because I don't understand what I'm doing, and I cargo
culted my build system from someone else, who also did not understand what he was doing".

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:46 UTC (Wed) by wingo (guest, #26929) [Link] (43 responses)

Side note: I'm enjoying this discussion, really nice comments.

I agree with you, stevenj, except that I think that GNU make is now sufficient: there is no
need for automake. The creator of automake thinks so, too: http://tromey.com/blog/?p=394

automake vs. GNU make

Posted Feb 7, 2008 0:05 UTC (Thu) by stevenj (guest, #421) [Link] (42 responses)

I'm not sure what the context of that post is, but I suspect he's talking about a rather specialized case. Realize what automake gives you. With two lines:
bin_PROGRAMS = hello
hello_SOURCES = hello.c hello.h

automake will generate a Makefile that will support all the GNU standard targets (make install, uninstall, clean, distclean, dist, ...), support VPATH builds properly, integrate with autoconf's detection of the compiler and compiler flags, automatically do dependency tracking (e.g. it will figure out that hello.c depends on hello.h, assuming it does), and so on.

GNU make does none of that for you. The advantages of automake over raw make, even GNU make, are even more dramatic for projects with subdirectories (recursive "make" is notoriously difficult to get right), or for projects that have to build shared libraries (the only reasonable way to do this portably is with GNU libtool, but calling libtool by hand is a PITA).

automake vs. GNU make

Posted Feb 7, 2008 0:33 UTC (Thu) by vmole (guest, #111) [Link] (40 responses)

recursive "make" is notoriously difficult to get right

And there's no good reason for it, except habit. Peter Miller has a nice paper on why recursive make is bad, and a solution. And yes, I converted a fairly large build system from recursive make to his approach, without moving any code. The no-act case, where everything is up to date, went from over a minute to about 5 seconds. More importantly, builds got much more reliable and much more efficient, re-building everything that needed to be rebuilt, and nothing that didn't. Lovely.
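
The overall shape of that approach is a single make invocation that includes per-directory fragments; a rough sketch, assuming GNU make (file and directory names here are made up):

# Top-level Makefile: one make process sees the whole dependency graph.
SRC :=
include foo/module.mk      # one include line per subdirectory
OBJ := $(SRC:.c=.o)

prog: $(OBJ)
	$(CC) $(CFLAGS) -o $@ $(OBJ)

# foo/module.mk: the per-directory fragment is little more than a source list.
SRC += foo/main.c foo/helper.c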

the only reasonable way to do this portably is with GNU libtool

HAHAHAHAHAHA. You funny. Libtool is the devil's spawn. If it worked, it would be worth it. Maybe.

automake vs. GNU make

Posted Feb 7, 2008 0:51 UTC (Thu) by stevenj (guest, #421) [Link] (32 responses)

Regarding recursive make considered harmful, his solution is to use one gigantic Makefile. Why not put all of your source code in one gigantic .c file while you're at it, and then you can skip "make" entirely? In any case, to each his own, but many many projects have voted with their feet on this one.

I've used libtool for years, and it works (on essentially every Unix-like system, and even on Windows with MinGW); implying it doesn't is baseless FUD. Yes, it has had bugs from time to time. Yes, it is slow as hell because it launches a shell process for each compile. Yes, it is hard to use properly if you don't use automake too. But what is the alternative if you want to build shared libraries portably?

automake vs. GNU make

Posted Feb 7, 2008 1:05 UTC (Thu) by vmole (guest, #111) [Link] (27 responses)

No, his solution is NOT to put everything in one giant makefile. You use "include" (yes, that makes it GNU make specific; see previous post). I've done this. It's a lot easier to maintain than recursive makefiles, because most of the sub-directory makefile fragments are just the list of source files.

My experience is that libtool does not work reliably on AIX, and it never gets fixed (or it gets fixed, then broken again). I found that simply writing a small (<10 line) shell script for each implementation was far more reliable and easier to debug. Admittedly, this was 2001-2006. Maybe it's better now. But an 8500-line shell script is inherently fragile.

automake vs. GNU make

Posted Feb 7, 2008 1:51 UTC (Thu) by stevenj (guest, #421) [Link] (19 responses)

With AIX there are difficulties with shared libraries because of the limitations of that operating system: it doesn't (or didn't, at least) allow inter-library dependencies between shared libraries. MinGW (Windows) is the same way. If you want to use libtool on these systems you have to pass the "-no-undefined" flag to tell it you have no shared-library dependencies, which libtool cannot assume for you in general.
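
In an automake-based build, that flag goes into the libtool library's link flags; a minimal Makefile.am sketch (the library name is hypothetical, and libtool/automake are assumed to already be set up in configure.ac):

lib_LTLIBRARIES = libfoo.la
libfoo_la_SOURCES = foo.c foo.h
# Promise libtool that the library has no unresolved symbols, so that it
# can be built as a shared library on platforms such as AIX and MinGW.
libfoo_la_LDFLAGS = -no-undefined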

Of course, this is documented in the manual.

Again, you're blaming the tool for the complexity of the underlying problem.

(And your suggestion of writing a script for every system that you want to support shared libraries on is basically saying that every programmer should re-implement libtool themselves, probably badly because they aren't aware of all the platform variations in shared-library semantics/syntax or even a large subset thereof.)

automake vs. GNU make

Posted Feb 7, 2008 3:59 UTC (Thu) by jamesh (guest, #1159) [Link] (1 responses)

I think you've got things backwards: the -no-undefined flag doesn't really make sense _unless_
shared library dependencies exist.

With the -no-undefined flag set, your library must directly link to all the libraries whose
symbols you used (leaving no "undefined" symbols).

Even on platforms where -no-undefined is not required, it is usually worth using since the
linker can see which library each referenced symbol comes from, making the job of runtime
linking easier.

automake vs. GNU make

Posted Feb 7, 2008 5:02 UTC (Thu) by stevenj (guest, #421) [Link]

Even if there are no library inter-dependencies in your library, you must specify -no-undefined or libtool will refuse to build it on AIX or MinGW, since libtool is apparently unable to detect automatically whether there are unresolved symbols at link time. But you're right that the flag can also be used if you specify library dependencies with -l flags explicitly at link time.

I've also heard that this is useful for prelinking as well, as you mention.

(Apparently, -no-undefined used to be the default, which seems sensible to me, but supposedly they got too many complaints.)

automake vs. GNU make

Posted Feb 7, 2008 16:59 UTC (Thu) by vmole (guest, #111) [Link] (10 responses)

No, my point is that the problem is so complex that pretending you can hide it with libtool is misleading. The developers have to know and care that AIX and MinGW are different and need special variations, or the end-user, trying to build the package, has to track down the docs and figure out where to add the magic option.

You're right: I'm pretty ignorant about the internal details of autoconf et al. But as an end user, who has 20+ years of experience building software on a wide variety of platforms, I've found that I spent a lot more time fighting autoconf/automake/libtool problems than I did with packages that asked me to uncomment the appropriate variable settings in the Makefile. Why? Because if the appropriate variable was available in the Makefile, then someone had actually, really, already built it on that platform. If not, then it was pretty easy to see what needed to be done.

I should clarify: by "autoconf problems" I don't necessarily mean problems with the autoconf system itself. It does what it's told. The problem is software developers who believe that using autoconf et al. magically solves all their portability issues, and that all they need to do is copy a few scripts/templates from some other project. This is the "autoconf culture" problem I mentioned way back in my first comment. It may not be what the autoconf developers intended or wanted, but it sure is what has happened.

That's all for me.

automake vs. GNU make

Posted Feb 7, 2008 21:36 UTC (Thu) by nix (subscriber, #2304) [Link] (9 responses)

No, my point is that the problem is so complex that pretending you can hide it with libtool is misleading. The developers have to know and care that AIX and MinGW are different and need special variations, or the end-user, trying to build the package, has to track down the docs and figure out where to add the magic option.
Actually, if you use libtool and libltdl you *don't* need to care that Linux, Solaris, MacOS X, AIX, HP-UX 10 and Windows all use quite different methods to build shared libraries with different names and semantics; at least not unless you're trying to go beyond what static libraries allow and do symbol versioning or something like that.

The end-user need have no clue at all.

libtool has its problems (astonishing sloth being one of them: why anyone complains that configure takes too long to run for very small projects, when libtool slows down building drastically regardless of project size, I have no idea) but you don't seem to know what those problems are. At least you're focusing on men made of straw when there are perfectly visible giants of problems looming a few feet away.

automake vs. GNU make

Posted Feb 7, 2008 21:55 UTC (Thu) by vmole (guest, #111) [Link] (8 responses)

Actually, if you use libtool and libltdl you *don't* need to care that Linux, Solaris, MacOS X, AIX, HP-UX 10 and Windows all use quite different methods to build shared libraries with different names and semantics ... The end-user need have no clue at all.

Yes, that's the promise. But in my experience (and that of many others), as the end user of several libtool-using projects, it's not the reality. Is it that all these projects are setting things up wrong? Possibly. I don't know, and I don't really care, because if so, it's apparently as hard to get right as using the actual OS tools would be, which would at least be debuggable by normal human beings.

And I'm well aware of, and have suffered extensively from, libtool's sloth. And its insanely long invocation lines, which make make logs painful to read and hide errors.

automake vs. GNU make

Posted Feb 7, 2008 23:18 UTC (Thu) by nix (subscriber, #2304) [Link]

Your reality differs from my reality. I've had some cosmetic problems on 
Solaris 2.4 and 2.5.1 (we're talking prehistoric here) and I needed to 
install GNU sed on HP-UX and AIX, but other than that, nothing really.

(Mind you, the need for GNU sed *was* unacceptable --- it was also a bug.)

automake vs. GNU make

Posted Feb 7, 2008 23:25 UTC (Thu) by aleXXX (subscriber, #2742) [Link] (3 responses)

> [libtool is] apparently as hard to get right as using the actual OS
> tools would be, which would at least be debuggable by normal human
> beings.

"at least be debuggable" - very well put, I completely agree.
Why does automake actually need libtool at all? I mean, it generates the makefile code; it could just as well generate the code for calling the actual OS tools directly in the makefiles. This would remove this one layer of indirection.

Alex

automake vs. GNU make

Posted Feb 8, 2008 0:44 UTC (Fri) by nix (subscriber, #2304) [Link] (2 responses)

automake can't generate the commands libtool executes because automake 
runs on the distributor's machine, not the builder's, and doesn't have a 
clue what sort of system the build will take place on.

automake vs. GNU make

Posted Feb 8, 2008 7:20 UTC (Fri) by aleXXX (subscriber, #2742) [Link] (1 responses)

Ah, yes, indeed.
(I haven't worked with autotools in the last few years.)

Then, couldn't the configure script handle that? It runs on the build machine.

Alex

automake vs. GNU make

Posted Feb 8, 2008 21:28 UTC (Fri) by nix (subscriber, #2304) [Link]

Yes, it could: but in libtool 1.5 it doesn't :(

I never said libtool didn't suck. It's just better than anything else 
around right now for the job it does, and it has a *lot* of hard-won 
knowledge of shared library weirdness on manifold systems encoded into it 
(as autoconf does of other cross-system variation).

automake vs. GNU make

Posted Feb 7, 2008 23:54 UTC (Thu) by stevenj (guest, #421) [Link] (2 responses)

You didn't read the manual; as a result it failed on AIX because you asked libtool to support semantics unavailable on that platform, and now you loudly complain here that libtool is broken and buggy. This is not a convincing critique of libtool.

libtool solves a big part of the problem: its manual specifies the lowest common denominator of shared library semantics, tells you how to indicate whether you obey those semantics, and builds the resulting library on every system that supports the semantics you request. The fact that you still have to know that there might be some differences between systems, and that you might have to read the manual to learn how to deal with these differences, is not a reason to throw it out and rewrite everything yourself from scratch (the only other "solution" you have suggested). No matter what portability tool you use, developers will still need to know something about the differences between platforms.

automake vs. GNU make

Posted Feb 8, 2008 0:23 UTC (Fri) by vmole (guest, #111) [Link] (1 responses)

You didn't read the manual, and as a result it failed on AIX because you asked libtool to support semantics unavailable on that platform

*I* didn't ask libtool to do anything. I'm the end-user. I'm not supposed to have to know about libtool, or the variations in shared library implementations. Right? Isn't that the whole friggin point?

And if your response is "But there's too many differences, libtool can't hide everything", then we are in 100% agreement. Where we differ is whether or not it's worth the attempt, and whether libtool is the correct direction. So be it. But if you continue to ignore those of us who have problems, and blame the users, well, it will never get better.

automake vs. GNU make

Posted Feb 8, 2008 0:42 UTC (Fri) by stevenj (guest, #421) [Link]

You haven't suggested any way for things to get better, you've just flamed a tool because you've encountered a couple of buggy packages that misused it, and it didn't correct for all the deficiencies of the developers. Then you tried to debug the problem yourself and failed because you didn't bother to read the fine manual.

Your suggestions, as far as I can tell, have been either for every developer to reinvent the wheel by rolling their own platform-dependent scripts, or for the developers to push the whole problem onto the end-users by giving them a raw Makefile and telling them to fix the compiler options themselves. Neither of these seems like an improvement.

automake vs. GNU make

Posted Feb 8, 2008 5:42 UTC (Fri) by lovelace (guest, #278) [Link] (5 responses)

Again, you're blaming the tool for the complexity of the underlying problem.

But, isn't this tool supposed to take care of the underlying complexity? So, haven't you completely invalidated your argument for it? It's supposed to make the task of creating libraries similar across dissimilar systems yet by your own description it does not.

I don't remember all the details now since it's been a while, but back when we were trying to port KDE 3.x to native Qt on the Mac libtool was a constant source of problems.

automake vs. GNU make

Posted Feb 8, 2008 7:19 UTC (Fri) by aleXXX (subscriber, #2742) [Link] (1 responses)

libtool on the Mac (a native binary for dealing with libraries on the 
Mac) is something different than the autotools libtool (portable shell 
script which should deal with shared libs on all systems).

Alex

automake vs. GNU make

Posted Feb 8, 2008 16:05 UTC (Fri) by lovelace (guest, #278) [Link]

Hi Alex!

Yep, I'm aware of that, but that's not what I was referring to.  I was referring to the
autotools libtool.  Unfortunately, like I said, that was about 5 years ago and I cannot
remember the specifics.  So, rather than make more unsubstantiated accusations, I'll just
leave it at that.  I will mention, though, that that episode in particular cemented my general
dislike of the autotools libtool that still stands to this day.

automake vs. GNU make

Posted Feb 8, 2008 21:26 UTC (Fri) by nix (subscriber, #2304) [Link] (2 responses)

It reduces the complexity, but fundamentally these systems have different 
*runtime* semantics, which can't be entirely hidden. For simple uses 
(DT_NEEDED-style simple symbol lookup of libraries with no undefined 
symbols, and dlopen()/dlsym()/dlclose()-style dynamic loading), libtool 
does a good job. Anything more complex will hit trouble on one system or 
another.

automake vs. GNU make

Posted Feb 8, 2008 21:46 UTC (Fri) by lovelace (guest, #278) [Link] (1 responses)

Ah, now I'm remembering more about what the problems were.

1. Shared libraries and modules are the same on Linux (and lots of other Unices) but are
different on the Mac.  Libtool had a difficult time understanding this.
2. Until fairly recently (Tiger, iirc) shared libraries couldn't be easily dlopened on the
mac, only modules could.

Since KDE makes extensive use of dlopen-ing modules to accomplish things this made things
quite tricky and libtool wasn't really that much help.

So, yeah, quite a different runtime system.  Newer versions of OS X have gotten quite a bit
better on the dlopen-ing front, but they are still fundamentally different.  And, I wouldn't
even try to use libtool to create OS X frameworks....

automake vs. GNU make

Posted Feb 8, 2008 23:18 UTC (Fri) by nix (subscriber, #2304) [Link]

I'd expect MacOS X support for modules to have been OK since

2003-03-20  Peter O'Gorman  <peter@pogma.com>

        * ltmain.in: Always use $echo not echo for consistency.
        Changes for darwin building. Warn if linking against libs linked
        with -module. Use module_cmds if available and building a module,
        move convenience double lib check,

What else is wrong?

And, yes, your point 2 is hard for libltdl to overcome: if you build the 
library as a lib, not a module, you'd have been stuck whatever libtool 
did.

automake vs. GNU make

Posted Feb 7, 2008 1:53 UTC (Thu) by sward (guest, #6416) [Link] (6 responses)

I've also used his include-based Makefile structure on projects that were originally written
using recursive make.  The Makefiles got smaller, simpler, faster, and more reliable.
Hierarchy still takes *some* thought, but it really wasn't difficult.

automake vs. GNU make

Posted Feb 7, 2008 3:04 UTC (Thu) by stevenj (guest, #421) [Link] (5 responses)

I have no doubt about that, if you compare hand-written recursive make with GNU make+include. I'm more doubtful if you compare to automake input files, which are pretty compact (and support a portable include directive if you have definitions you want to share between directories etc.).

Of course, it's possible that Tom Tromey or someone may come up with some very clever include file that we can all use in our GNU Makefiles that gives us 95% of automake's functionality in a cleaner and faster way (at the expense of requiring GNU make to build, which in this day and age isn't too much of a burden). And then we'll just use 'include' for subdirectories. But until such a thing gets released, the alternative is to write the include files yourself, support all of the standard targets yourself, be careful to use only portable sh constructs yourself, and so on; in the vast majority of cases, automake seems a simpler and more robust solution at present.

A technical question about using include instead of recursive make via automake (recursive make by hand is crazy): how do you separate the namespaces for the subdirectories? For example, if you have a source file with the same name in two subdirectories (e.g. main.c), do you have to jump through hoops to prevent it from getting confused? Or what if you want a 'make clean' target in two subdirectories? I didn't see any obvious way to handle such separation cleanly in the GNU make manual.

automake vs. GNU make

Posted Feb 7, 2008 4:25 UTC (Thu) by masuel (guest, #28661) [Link]

> For example, if you have a source file with the same name in two 
> subdirectories (e.g. main.c), do you have to jump through hoops to 
> prevent it from getting confused?

The way I do it is to always reference the tree from the build root,

so a/main.c never looks like b/main.c.

and yes I can start a build from the subdirs etc.

non-recursive is the way to go. 

Just require GNU make; don't try supporting version 3.79 (RH9) clients, just 
make people upgrade....


automake vs. GNU make

Posted Feb 7, 2008 4:41 UTC (Thu) by sward (guest, #6416) [Link] (3 responses)

Files (source or derived) in separate directories are *always* in separate namespaces, in
Miller's approach.  The 'make' is run once, in the root directory, and all sources and targets
are described from that point of view.  So you don't build 'main.c', you build 'foo/main.c'.
You use target-specific overrides if you need to customize things, e.g.:

   # in foo/include.mk:
   $(FOO_OBJ): CPPFLAGS += -Ifoo

Now, if you wanted a selective clean (for one subdir), I agree there isn't a good way to do
this in the include-based structure (other than calling it something else, e.g. 'foo/clean').
Of course, you could also include a local Makefile in the subdir just to make things more
familiar:

   # in foo/Makefile:
   clean: 
       cd ..; $(MAKE) foo/clean

Did that help, or would a more complete example be clearer?

automake vs. GNU make

Posted Feb 7, 2008 4:55 UTC (Thu) by vmole (guest, #111) [Link]

Even neater: stick the following as GNUmakefile in each of your subdirs:
CURMOD:=$(shell pwd | sed -e 's,.*/src/\(.*\),\1,')
all :
        @cd $${PWD%$(CURMOD)} && $(MAKE) --no-print-directory $(CURMOD)

% :: FORCE
        @cd $${PWD%$(CURMOD)} && $(MAKE) --no-print-directory $(CURMOD)/$@

FORCE:
which automatically transfers any make command up to the top level and re-invokes with the same arguments. (You'll need to tweak the sed command to match your layout.)

automake vs. GNU make

Posted Feb 7, 2008 5:18 UTC (Thu) by stevenj (guest, #421) [Link] (1 responses)

That's clear enough. Yes, using foo/main.c and bar/main.c in the Makefile rules will clearly work (unless you have one of the rare compilers that don't support -c -o, although you can hack around that with enough effort I suppose).

It seems a bit ugly to me, though, that your subdirectory Makefiles (to be included) must be written differently depending on the name of the directory they are in. The analogous thing in a programming language would be, if you had a subroutine foo(), to require that all local variables be named with a "foo_" prefix.

automake vs. GNU make

Posted Feb 7, 2008 5:54 UTC (Thu) by sward (guest, #6416) [Link]

That isn't really necessary for variable names (ones you don't need to reference later); only
the sources and targets need to be qualified.  A fairly typical example:

# Define sources for this component:
MY_SRC = $(wildcard foo/*.c)
MY_OBJ = $(MY_SRC:.c=.o)
MY_TGT = foo/libfoo.a

# Additional flags and dependencies for building this component:
$(MY_OBJ): CPPFLAGS += -I foo
$(MY_TGT): $(MY_OBJ)

# Add these to the master Makefile's lists:
SOURCE += $(MY_SRC)
TARGET += $(MY_TGT)
CLEAN  += $(MY_OBJ)

The next component can reuse MY_SRC etc.  You wouldn't even need these variables, if you had
no need for target-specific variables or dependencies.  (But of course, you usually do).  If
you wanted to keep a reference to 'foo/libfoo.a', for use in other Makefiles, you might
replace MY_TGT with LIBFOO.

In short, you only need unique names for global variables; local ones can be overwritten.

You do need the component path ('foo' in this example) in a few spots, but it is trivial to
change that if the directory structure changes.
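
For what it's worth, here is a minimal sketch of the kind of top-level Makefile such
fragments plug into.  None of this comes from the thread itself: the bar/ component, the
generic rules and the flags are illustrative assumptions only, written for GNU make.

# Top-level Makefile (illustrative sketch, not from the thread)
CC     ?= gcc
AR     ?= ar
CFLAGS ?= -O2 -Wall

# Each component fragment appends to these lists.
SOURCE :=
TARGET :=
CLEAN  :=

include foo/include.mk
include bar/include.mk

all: $(TARGET)

# Everything is named relative to this directory, so foo/main.o and
# bar/main.o never collide.
%.o: %.c
	$(CC) $(CPPFLAGS) $(CFLAGS) -c -o $@ $<

# The fragments declare per-target prerequisites ($(MY_TGT): $(MY_OBJ));
# this generic recipe archives whatever those prerequisites turn out to be.
%.a:
	$(AR) rcs $@ $^

clean:
	rm -f $(TARGET) $(CLEAN)

.PHONY: all clean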

automake vs. GNU make

Posted Feb 7, 2008 7:41 UTC (Thu) by aleXXX (subscriber, #2742) [Link] (3 responses)

> But what is the alternative if you want to build shared libraries
> portably?

I don't know about Scons, but you can use CMake, and it will just work:

add_library(foo SHARED foo.c bar.c)

This will give you, depending on what you need:
-UNIX Makefiles (not quite sure if it requires GNU make, I don't think 
so)
-MS nmake makefiles
-Borland Makefiles
-Watcom Makefiles
-cygwin Makefiles
-XCode projects
-Visual Studio >= 6 projects.

I wondered for years why suddenly the compiler couldn't be called directly 
anymore, only to find out that it's just libtool and you can perfectly well 
use the compiler directly (which removes one layer of indirection -> makes 
it easier to understand).

Alex

automake vs. GNU make

Posted Feb 7, 2008 17:11 UTC (Thu) by stevenj (guest, #421) [Link] (2 responses)

> UNIX Makefiles (not quite sure if it requires GNU make, I don't think so)
You are sorely mistaken if you think that there is one way to build shared libraries that works on all flavors of Unix. If you only care about Windows, GNU/Linux and similar systems with gcc, and MacOS X, then yes, generating three makefiles will work, but that's not solving anything like the whole problem.

automake vs. GNU make

Posted Feb 7, 2008 23:17 UTC (Thu) by aleXXX (subscriber, #2742) [Link]

CMake knows how to build shared libs on all supported platforms (which 
support shared libs) with all supported toolchains (GNU, IBM, Sun, 
Borland, MS, Portland, Intel, HP and more). For several 
platforms/toolchains this is tested every night (unfortunately there 
aren't nightly tests for all supported combinations).
http://www.cmake.org/Testing/Dashboard/20080206-0100-Nigh...

Alex

automake vs. GNU make

Posted Feb 8, 2008 5:50 UTC (Fri) by lovelace (guest, #278) [Link]

The makefiles CMake generates are not portable and can only be used on the system that they're generated on. If you move to a different system, you use CMake to generate makefiles for that system. That's how they can reliably create shared libraries on multiple systems using makefiles.

automake vs. GNU make

Posted Feb 7, 2008 7:15 UTC (Thu) by madscientist (subscriber, #16861) [Link] (6 responses)

libtool isn't so bad except for one thing, which makes it unusable.  No support for
cross-compilation.  It's so utterly lame that every part of the GNU build environment
(binutils, gcc, autoconf, automake) is very cognizant of cross-compilation and works very,
very hard to make setting it up and using it as simple as possible... except libtool.

Whenever I see a package come through that uses libtool I absolutely cringe because I know it
will be a huge PITA.

automake vs. GNU make

Posted Feb 7, 2008 17:17 UTC (Thu) by stevenj (guest, #421) [Link] (5 responses)

That's totally untrue; automake+libtool supports cross-compiling just fine (note that you can hardly use automake for libraries without using libtool, so saying that automake supports cross-compiling but libtool doesn't is a little strange). I use libtool for cross-compiling all the time. (e.g. on one of my projects we cross-compile shared libraries for Windows from GNU/Linux using mingw).

automake vs. GNU make

Posted Feb 7, 2008 22:57 UTC (Thu) by aleXXX (subscriber, #2742) [Link] (4 responses)

I don't know much about the cross compiling support of autotools.
Do autotools support cross compiling for systems e.g. with 8 bit 
microcontrollers, which maybe don't have an operating system at all ?

Alex

automake vs. GNU make

Posted Feb 7, 2008 23:23 UTC (Thu) by nix (subscriber, #2304) [Link]

8-bit, I'm not sure. Things like arm-elf (without an OS), sure, and they 
have for next to forever.

automake vs. GNU make

Posted Feb 11, 2008 1:43 UTC (Mon) by GreyWizard (guest, #1026) [Link] (2 responses)

Autotools support cross compilation in the abstract: to the extent that they work for one
cross compiling target they should work for any that the underlying compiler can support.  As
for that, GCC does support at least some 8-bit microcontrollers, as attested here:

    http://gcc.gnu.org/ml/gcc/2002-08/msg01809.html

automake vs. GNU make

Posted Feb 11, 2008 8:37 UTC (Mon) by aleXXX (subscriber, #2742) [Link] (1 responses)

What about compilers != gcc, e.g. sdcc ?

Alex

automake vs. GNU make

Posted Feb 11, 2008 20:01 UTC (Mon) by nix (subscriber, #2304) [Link]

Other compilers are generally supported. autoconf supports anything which 
runs as $CC and supports -g, -o, and -c with the customary meanings; 
automake supports anything with the caveat that dependency analysis may be 
inaccurate without compiler support (said support being a few lines of 
shell script in the depcomp script: if you need it for your compiler, 
please submit a patch!)

libtool's support is necessarily compiler-by-compiler and 
platform-by-platform, and so cannot cover everything (its entire purpose 
being to smooth out variation in the way different compilers and platforms 
create shared objects): nonetheless, on Linux, 1.5.26 supports GCC, KCC, 
Intel C++, the Portland Group's compiler, Compaq C++, and even Sun C, 
although who'd be using that on a Linux box I have no clue. (For a 
complete list of compiler * platform combinations, look in libtool.m4.)

(libtool obviously doesn't support sdcc, since as far as I can tell sdcc 
can't generate PIC code at all :) )

automake vs. GNU make

Posted Feb 7, 2008 0:37 UTC (Thu) by nix (subscriber, #2304) [Link]

The intent is to do it all for you with a suitably-smart include file, 
using nothing but GNU make language and portable shell within it (perhaps 
with some extensions on the make front).

I've done similar things in the past (replacing horrible proprietary build 
systems). It's doable.

LCA: Disintermediating distributions

Posted Feb 7, 2008 1:21 UTC (Thu) by stevenj (guest, #421) [Link] (5 responses)

By the way, don't get me wrong, the autotools are far from perfect. Life would be much much simpler for the autotools developers and users alike if they could rely on more than a small portable subset of sh and make being available on the build hosts. For one thing, it means that they have to use three languages (sh, m4, and make) instead of one. For another thing, portable sh doesn't have functions, so the generated configure scripts are huge and slow (launching lots of processes). For another thing, make is not integrated with the configure tests, so you can't parallelize the tests or run only those tests that are needed when something changes, and so on.

The problem is that valid, constructive criticisms of the autotools largely get lost in the noise of people who argue from ignorance of the real complexity of the problems that these tools have to address, and which aren't addressed by most of the suggested alternatives.

LCA: Disintermediating distributions

Posted Feb 7, 2008 7:57 UTC (Thu) by aleXXX (subscriber, #2742) [Link] (3 responses)

> who argue from ignorance of the real complexity of the problems that
> these tools have to address, and which aren't addressed by most of the
> suggested alternatives. 

As maintainer of the KDE4 build system, which is able to build KDE4 on
-Linux
-FreeBSD
-Solaris
-Mac OSX (makefiles or XCode)
-Windows (mingw + GNU make, cl + nmake or Visual Studio, not cygwin)

I am well aware of the complexities and cmake handles them all.

Alex

LCA: Disintermediating distributions

Posted Feb 7, 2008 17:21 UTC (Thu) by stevenj (guest, #421) [Link] (2 responses)

Libtool handles many, many more Unix flavors than that. Yes, if you are willing to restrict yourself to recent versions of five platforms, it simplifies life a lot.

LCA: Disintermediating distributions

Posted Feb 7, 2008 19:09 UTC (Thu) by bronson (subscriber, #4806) [Link]

In theory.

In practice, I've spent many awful hours chasing libtool breakage on even the most common
platforms.

libtool adds extreme complexity to packages in an attempt to make them build everywhere.  More
software means more bugs.  The end result is predictable: libtool packages often fail to build
on even the most common platforms.

First you need to track down and install the exact version of automake used by the packager,
twiddle the configure file to remove syntax errors, try to figure out the magic combination of
undocumented --enable and --disable switches that will actually compile, and so on...  This
whole process is more abstract and opaque than editing Makefiles directly, but is it better?
Does it actually save time?

libtool is a bad abstraction.  Like CORBA, it tries to gloss over horrifyingly complex
problems but, instead of solving them, it seems to just push them around.  Perhaps I'm getting
philosophical, but I think they try too hard to insulate the user from the real world.  D-Bus
doesn't try as hard as CORBA and, as a result, it's much more usable and reliable.  And a heck
of a lot smaller.  I feel like libtool could learn the same lesson.

I worked as a release engineer, so I admittedly had to wrestle with libtool a lot more than
the average person.  The missing ' in configure was the winner; finding that took 7 hours
of binary searching an unreadable 350K shell script.  For comparison, it takes 3-4 hours to
learn how to use cmake from scratch!  libtool solves some problems, creates others, and the
end result seems to be a wash.

Anyhow, that's just my experience.  Not good.  Allow me to cast my vote for a simpler libtool
in the future.

LCA: Disintermediating distributions

Posted Feb 7, 2008 22:52 UTC (Thu) by aleXXX (subscriber, #2742) [Link]

The platforms I listed are the platforms where I know that KDE4 
successfully builds.

I don't know if anybody already tried to build KDE4 on NetBSD, AIX, QNX, 
HP-UX, IRIX, BeOS, DragonFly, Hurd or any of the other platforms CMake 
supports. I'm quite sure the bigger problems in getting KDE4 to work there 
will be to get all the required libraries working on these platforms 
(dbus, some multimedia lib like xine, a compatible X, etc.) and to get 
the compiler to actually compile the C++ code without errors (some compilers 
don't accept everything that gcc accepts, some because they are older, 
some because they are more picky).

Alex

LCA: Disintermediating distributions

Posted Feb 10, 2008 20:40 UTC (Sun) by oak (guest, #2786) [Link]

My experience (mostly from about 10 years ago) has also been that 
Autotools (not just how they are used) are a much larger problem for 
portability than the software they try to build.  Autotools scripts work 
only on platforms that Autotools officially support.  If you have just 
(GNU) make and a shell, as claimed above, Autotools falls down on its face, 
soils itself and cries for Mama.

It was much easier just to build GNU make and re-write the build scripts 
as cleaner Makefiles instead of trying to port first the huge mass of 
Autotools dependencies (Perl being one of the first/larger roadblocks) and 
then debug what other software the scripts need (after a long build fails) 
+ iterate that.

For me the solution seems obvious.  Solve the problem instead of kludging 
around it.

It seems insane/impossible to try to make the scripts autotools generates 
for projects portable to "everything".  Why not instead reduce Autotools 
script dependencies and make sure that those dependencies do everything 
needed and are portable to/buildable everywhere in addition to being 
standards compliant, small and of exemplary clean design?

The very small downside would be that those couple of binaries would need 
to be built for a given platform before the much cleaned up, debuggable, 
faster, saner and otherwise better new autotools scripts can be run.  But 
as those programs are small & very portable, that should be trivial, and 
including their binaries in Cygwin and GNU coreutils would solve this for 
>90% of the developers.

As a result, all world's embedded developers would thank you!

LCA: Disintermediating distributions

Posted Feb 7, 2008 8:49 UTC (Thu) by aleXXX (subscriber, #2742) [Link] (2 responses)

> Skilled programmers don't have a hard time learning a new syntax. 

Yes, true (still, why do I need multiple languages just to build 
software ?).
Anyway, I was talking about students starting to learn C or C++. They 
have enough problems with C/C++ itself. Under Linux you can 

-require them to learn basic Makefile syntax: ok as long as the software 
is very simple (write makefile, then run make)

-require them to learn autoconf+automake+shell+Makefile: this is 
unrealistic (write Makefile.am, then write configure.in, then run 
automake, then run autoconf, then run configure, then run make)

-require them to learn e.g. cmake: ok, also for slightly more complicated 
things, as e.g. building a shared library and installing it (write 
CMakeLists.txt, run cmake, then run make or hit F8 in kdevelop)

Alex

LCA: Disintermediating distributions

Posted Feb 7, 2008 17:28 UTC (Thu) by stevenj (guest, #421) [Link] (1 responses)

Students just learning C and C++ for the first time are not going to be able to write non-trivial software that builds on a dozen different systems without modification. Or perhaps I should say, they are probably not going to be able to write non-trivial software on their own. (Or complex projects with more than a handful of source files, or shared libraries, or mix languages, etcetera.) No matter what build system you use, they are going to need help with packaging, porting, and distribution, not to mention many other aspects of the software development.

In short, the case of students just learning how to program is simply not relevant for the autotools, because the autotools address problems that those students aren't even close to dealing with. Yes, such students should be perfectly fine with raw 'make', and shouldn't try for configure scripts.

LCA: Disintermediating distributions

Posted Feb 7, 2008 22:38 UTC (Thu) by aleXXX (subscriber, #2742) [Link]

I agree, but not completely.

Having students write makefiles like:

foo: main.c foo.c
    gcc -Wall -O2 main.c foo.c -o foo

is ok, splitting this into separate rules for compiling and linking may 
still be ok. Actually I think it is a good thing if they learn how 
makefiles work.
But this is only acceptable for very simple projects (maybe I have become 
lazy). The next step from that to autotools is huge IMO. The step to 
cmake is small:

add_library(foolib SHARED foo.c bar.c)
add_executable(hello main.c)
target_link_libraries(hello foolib)

install(TARGETS hello DESTINATION bin)
install(TARGETS foolib DESTINATION lib)
install(FILES foo.h DESTINATION include)

I really think this is doable. I'd say that this is understandable even 
without reading documentation. 
It works on all platforms, including the shared lib. It creates MSVC 
projects if you want it, or Eclipse projects if you want that. It also 
builds out-of-source. It gives you complete targets: all, foolib, hello, 
install, clean, help (!), foo.o, foo.s, foo.E (!).
The good thing is that basic things are very simple with cmake, and, 
building on that, you can add functionality/required configure checks one 
by one (not for students, but if somebody actually wants to make the 
software portable).

Alex

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:38 UTC (Wed) by vapier (guest, #15768) [Link] (2 responses)

your statement on the availability of POSIX shells, while true, is irrelevant.  autotools does
not depend on a POSIX shell.  it is written (by design) in portable shell, not POSIX shell.
the difference is that portable shell works on all of those fun ancient systems that lack POSIX
features and/or have broken aspects to them.  that is why the generated code is so large.  so
that it works *everywhere* (barring the systems which lack shells).

scons in such a case is a pipe dream.  your system is too old to have a stable POSIX shell but
new enough to have python usable for scons ?  doubtful.

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:36 UTC (Thu) by vmole (guest, #111) [Link] (1 responses)

Solaris /bin/sh is one of those ancient shells. Python works fine on Solaris. Or you can edit everything to look for /usr/xpg4/bin/ksh, if it's there. Have fun.

LCA: Disintermediating distributions

Posted Feb 7, 2008 20:30 UTC (Thu) by vapier (guest, #15768) [Link]

i dont need to edit anything when using autotools.  it's already aware of the Solaris
idiosyncrasy to store up-to-date utilities in /usr/xpg4/bin/.  with literally no effort on my
part or anyone building one of my projects on Solaris, autotools will locate that directory
and use those utilities.  fantastic ! :)

LCA: Disintermediating distributions

Posted Feb 7, 2008 11:27 UTC (Thu) by mdz@debian.org (guest, #14112) [Link]

> While Windows is the only major platform which doesn't have a POSIX 
> shell, it is on the other hand the platform which has like 80 or 90% 
> market share. So one could also say "only 10 to 20% of installed systems 
> have a POSIX shell".

On the third hand, there is a POSIX shell, and build tools based on it, available for
approximately 100% of these.  They aren't necessarily installed by default (many Linux systems
don't have a C compiler by default either these days), but developers are more than capable of
installing a software package for this purpose.

LCA: Disintermediating distributions

Posted Feb 6, 2008 20:19 UTC (Wed) by jengelh (subscriber, #33263) [Link] (4 responses)

The fun thing of autotools is that you don't need perl, python, cmake, scons or any additional
package (ignoring the Microsoft mess for a moment), but only a handful of sh tools which most
unix-style OSes have.

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:00 UTC (Wed) by bronson (subscriber, #4806) [Link] (3 responses)

Yes, you just need bash...  a bunch of core utils...  and one heck of a lot of time and
specialized knowledge.

Autotools is a nightmare for any non-trivial task (speaking as an ex-release-engineer).  How
many different autoconfs do they ship with Debian now?  Three?  Four?  How much time has been
lost to otherwise rational people being forced to wade through impenetrable shell scripts?

I really look forward to the day that standardization comes to packaging.  I doubt that it
will involve autotools though!

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:32 UTC (Wed) by vapier (guest, #15768) [Link] (2 responses)

hardly correct.  nothing in autotools depends on bash, and the "core utils" you refer to is a
pretty small list that any and all *nix system provides (awk/grep/sed/etc...).

anyone doing serious work does not care about older versions of autotools (there are actually
only 2 major versions of autoconf), which means you really have to work with the latest
release.

autotools does provide a consistent interface for distribution maintainers to utilize and
package up things.  in fact, most autotool based packages can trivially be handled with the
same template build code.

LCA: Disintermediating distributions

Posted Feb 7, 2008 9:24 UTC (Thu) by bronson (subscriber, #4806) [Link] (1 responses)

> anyone doing serious work does not care about older versions of autotools

You ever try to compile an older package with a newer version of autotools?  Failure city.
There's a reason Debian is forced to include so many versions of autotools....  and, I assure
you, they are doing serious work.

LCA: Disintermediating distributions

Posted Feb 7, 2008 17:32 UTC (Thu) by stevenj (guest, #421) [Link]

You are talking about different things. Anyone actively developing a program is going to want to use a recent version of autotools. The older versions in Debian are only there to build programs that have not been seriously updated in several years.

This is no different from the reason why Debian ships multiple versions of Python, or gcc, or many other development tools.

LCA: Disintermediating distributions

Posted Feb 6, 2008 21:11 UTC (Wed) by nix (subscriber, #2304) [Link]

The information *is* available in a structured way: search PKG_CONFIG_PATH 
and open the file named ${package}.pc therein (or in /usr/lib/pkgconfig if 
PKG_CONFIG_PATH is not set) and just parse the file directly. Its format 
is *really* not that complicated. I've parsed it in a few dozen lines of 
shell...
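
For illustration only (this is not nix's script, just a rough sh sketch of the lookup he
describes; the package name "foo" is made up, and real .pc files also contain ${var}
definitions at the top that a complete parser would have to expand):

#!/bin/sh
# Find foo.pc on PKG_CONFIG_PATH (falling back to /usr/lib/pkgconfig)
# and print its Cflags line.
pkg=foo
pc=
IFS=:
for dir in ${PKG_CONFIG_PATH:-/usr/lib/pkgconfig}; do
    if [ -f "$dir/$pkg.pc" ]; then
        pc="$dir/$pkg.pc"
        break
    fi
done
unset IFS
[ -n "$pc" ] && sed -n 's/^Cflags: *//p' "$pc"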

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:37 UTC (Wed) by vapier (guest, #15768) [Link] (13 responses)

too bad cmake and scons continue to re-solve the same problems autotools solved eons ago
(cross-compiling / tool discovery / custom flags).  scons in particular is just terrible and
not in any sort of usable shape for serious projects.

LCA: Disintermediating distributions

Posted Feb 7, 2008 1:33 UTC (Thu) by modernjazz (guest, #4185) [Link] (12 responses)

So why are so many projects, which already have "working" autoconf build 
systems, doing all that "useless work" to switch to CMake? The next thing 
that happens is most of them seem to express great satisfaction and 
relief with the result. Is this just self-delusion to justify the work?

LCA: Disintermediating distributions

Posted Feb 7, 2008 10:25 UTC (Thu) by drag (guest, #31333) [Link] (6 responses)

Sometimes. 

Like the migration lots of projects have done from going from CVS to SVN to Git or whatever
else.

When mucking around with a program, have you ever noticed that once you've programmed something,
but then find some reason to rewrite it, the rewrite turns out to be better than your original version?
Sure, you could have spent that time bugfixing the old code or hacking new features into it,
but you're going to be almost certain that the new code is going to make your job of maintaining
and improving it just that much better.

I have a feeling that if many of those projects had just stripped out all the stuff they used make
(or autotools or whatever) for and then reimplemented it from scratch, they would have been
nearly as happy with it.

Also, projects that are ho-hum about converting to cmake are not all of a sudden going to turn
around and broadcast to the world that they spent a great deal of their time on something that
ended up not mattering a whole lot.  So they are not going to end up as the examples you hear
about, while the projects that just love it are.

LCA: Disintermediating distributions

Posted Feb 7, 2008 10:59 UTC (Thu) by modernjazz (guest, #4185) [Link] (5 responses)

Sure, a simple project works great with any of several build strategies, 
so of course not everyone will be thrilled by switching. But if CMake 
only lacks the "I got burned by #!@%*& CMake" contingent, it's already 
ahead of the competition.

LCA: Disintermediating distributions

Posted Feb 7, 2008 12:36 UTC (Thu) by nix (subscriber, #2304) [Link] (4 responses)

My *eyes* got burned by cmake's language.

Haven't they learned that capital letters make things *harder* to read? Have we learned
nothing since the days when Lisp was written in capitals?

LCA: Disintermediating distributions

Posted Feb 7, 2008 14:26 UTC (Thu) by aleXXX (subscriber, #2742) [Link] (3 responses)

Since CMake 2.4.3, released July 2006 (or around that version), the 
commands can be written lowercase; with the coming version 2.6 this is 
even the preferred style (i.e. the one used in the documentation).

Alex

LCA: Disintermediating distributions

Posted Feb 7, 2008 21:28 UTC (Thu) by nix (subscriber, #2304) [Link] (2 responses)

YES! (Time to upgrade. Is my cmake really that old?

... 2.4.1. dammit.)

LCA: Disintermediating distributions

Posted Feb 7, 2008 22:10 UTC (Thu) by aleXXX (subscriber, #2742) [Link] (1 responses)

2.4.1 was a beta version; a lot of bugs were fixed for 2.4.3. Version 
2.4.8 was released a few weeks ago, and I recommend you use this. If 
there is no package for your distro, just download the binary package 
from www.cmake.org and unpack it in some place you like; it will 
work.

Alex

LCA: Disintermediating distributions

Posted Feb 7, 2008 23:19 UTC (Thu) by nix (subscriber, #2304) [Link]

Yeah, like I said, it was really stupid of me not to upgrade. In fact I've 
*got* a more recent version installed: it's just this bloody old version 
in /usr/local/bin was hiding it... *sigh* chkdupexe time, I think.

LCA: Disintermediating distributions

Posted Feb 7, 2008 20:24 UTC (Thu) by vapier (guest, #15768) [Link] (4 responses)

i didnt say it wasnt working *for them*.  they probably wouldnt have made the build system
change if it wasnt working *for them*.  the trouble is when *anyone else* tries to build the
package.  you want to cross-compile it ?  build it on a different platform ?  build with your
own compiler/flags ?  sorry, but the $flavor-of-the-month build system never thought of that.
time to go re-implement the wheel even though autotools already had it solved.

LCA: Disintermediating distributions

Posted Feb 7, 2008 21:47 UTC (Thu) by nix (subscriber, #2304) [Link]

The worst I've found for this is Boost.Jam. It should *not* take 7Kb of 
diffs and a kilobyte of build-system shell scripting to do the equivalent 
of setting --prefix and CXXFLAGS when building Boost!

LCA: Disintermediating distributions

Posted Feb 7, 2008 22:17 UTC (Thu) by aleXXX (subscriber, #2742) [Link] (2 responses)

I'm not sure I get your point here, so I just state what cmake offers 
here:

CMake cvs (2.6.0 will be released soon) supports cross compiling (without 
scratchbox or any other emulators, but of course it can also be used 
inside scratchbox).

If you want to use your own CFLAGS/CXXFLAGS with CMake, you have at least 
two ways to do it:
set CFLAGS/CXXFLAGS in the environment when you run cmake, and cmake will use them.
Or, later on, run "make edit_cache" and edit the 
CMAKE_C_FLAGS/CMAKE_CXX_FLAGS directly to what you want.
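
For instance (an illustration of the two ways just described, not from the comment; the
flags and paths are arbitrary):

  mkdir build && cd build
  CFLAGS="-O2 -g" CXXFLAGS="-O2 -g" cmake /path/to/source   # picked up on the first run
  make edit_cache    # later: adjust CMAKE_C_FLAGS / CMAKE_CXX_FLAGS interactively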

If you build the software on some system where it has never been built 
before, it may very well be necessary to do something to the 
buildsystem: add some more checks, some other locations, other names 
for libraries (e.g. zlib has a lot of different names on Windows). I 
guess this is true for any buildsystem.

Alex

LCA: Disintermediating distributions

Posted Feb 7, 2008 22:40 UTC (Thu) by vapier (guest, #15768) [Link] (1 responses)

your comment backs up my point.  all of these alternative build systems consistently re-solve
the exact same problems that autotools solved eons ago.

cmake *just* added support for cross-compiling (and it isnt even in any released version) ?
without even looking at anything else about cmake, that tells me the project is useless to me.
i'm not saying my needs are the same as everyone out there, just that you cant champion a
replacement for autotools if it isnt a replacement.  i'm glad *you've* found it useful, but if
your target compiling audience is more than just you, then i feel sorry for those poor chaps
(where chaps != you).

the things i cite are just common examples ive come across quite frequently when dealing with
non-autotooled packages as a distribution maintainer (whether it be cmake or scons or hand
rolled or whatever).  they certainly aren't a complete list.  i imagine there are numerous other
portability fixes autotools has which these "replacements" lack.

LCA: Disintermediating distributions

Posted Feb 7, 2008 23:22 UTC (Thu) by nix (subscriber, #2304) [Link]

autoconf-generated configure scripts also support the godsends which are 
site-config files. Nobody else seems to remember about that, which means 
you need to wrap another build system around your build system just to get 
your CFLAGS et al consistent.

LCA: Disintermediating distributions

Posted Feb 6, 2008 18:49 UTC (Wed) by smitty_one_each (subscriber, #28989) [Link]

> Doing time-based releases was a hard sell, but few developers would want anything else now.
> Now GNOME release management just works.

So, maybe the pain isn't in the ad-hoc vs. scheduled releases, but rather the transition
between management styles.

LCA: Disintermediating distributions

Posted Feb 6, 2008 19:01 UTC (Wed) by tzafrir (subscriber, #11501) [Link] (9 responses)

So the Mozilla foundation, OpenOffice.org and the Mono project should handle all the
integration issues on their own?

The Mozilla distribution is a really good (bad?) example that shows how many wheels a vendor
needs to reinvent:

* A build farm for many platforms (and they're always too far behind on Linux)
* Independent bug tracking system
* flawed integration (because the project needs to be self contained)

We will end up with a system made of klik packages.

A few other good programs whose author wanted to have the final say on their usage are qmail
and the rest of the djb programs. Great programs. But they never got properly integrated.

LCA: Disintermediating distributions

Posted Feb 6, 2008 20:35 UTC (Wed) by vonbrand (subscriber, #4458) [Link]

> A few other good programs whose author wanted to have the final say on their usage are qmail and the rest of the djb programs. Great programs. But they never got properly integrated.

Well, if you can get away with "all problems are the platform's fault alone" and do not allow distributing modified versions, your software isn't going to integrate anywhere...

LCA: Disintermediating distributions

Posted Feb 6, 2008 21:52 UTC (Wed) by TRS-80 (guest, #1804) [Link] (6 responses)

As you say, getting upstream to make packages for distros seems like an enormous waste of time, unless you tacitly admit there are only a few distros worth caring about. Integration is what distros do, and is driven by the overarching decisions of the distro - do they allow the user flexibility (Debian), or do they shortcut and say "my way or the highway" (Ubuntu)? Should developers have to know about the idiosyncrasies of integration on OpenSolaris, instead of just writing portable code and letting the OpenSolaris specialists handle the integration? A far better approach is madduck's proposal to get distros to work together. And even then distros like Ubuntu are expecting upstream to waste their time on bureaucracy instead of accepting clearly stable changes. If "standards for how we interact with each other" means taking upstream's word that the changes in the stable tree are really stable, then that's a good move, but at the moment Ubuntu's procedures seem cargo-culted from Solaris without an understanding of how to make them work effectively. Solaris being the home of Architecture Review Committees, interface stability and contracts between modules for use of private interfaces, which OpenSolaris is hoping to streamline and make more appropriate to open source development.

Other bits from the main article: there's been an OpenID plugin for Bugzilla for 2.5 years (now bitrotted), and Bugzilla converted from RSS to Atom feeds for searches nearly 2 years ago. What I suspect jdub really wants to prevent is distros doing their development internally, as happened with the example in the article (Xgl) and intlclock.

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:16 UTC (Wed) by jdub (guest, #27) [Link]

No, absolutely not -- experiences such as these were merely jumping-off points for the
philosophical challenge (particularly for someone such as myself, who deeply values the
differentiation that the existence of distributions provides us against our proprietary
competitors).

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:18 UTC (Wed) by jdub (guest, #27) [Link] (4 responses)

Note that the thought experiment was not about "making packages for distributions", but "what
would happen if we take distributions out of the picture?". I think that misunderstanding is
the basis for your response.

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:33 UTC (Wed) by TRS-80 (guest, #1804) [Link] (3 responses)

And I don't think you can take distros out of the picture - as I said, "Integration is what distros do, and is driven by the overarching decisions of the distro" so you're forcing upstream to make these decisions. Ubuntu's choice of upstart is great, but OpenSolaris uses SMF - why should an upstream project have to choose one or be burdened by supporting both? Maybe D-Bus activation is a better choice for some programs, but not all.

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:40 UTC (Wed) by jdub (guest, #27) [Link] (2 responses)

If your conclusion is that distributions can't be taken out of the picture, then you're not
taking part in the thought experiment. That's okay, but I'm more interested in the thought
experiment than defending the status quo (which I can do very well for myself, but find it
unchallenging and lacking in philosophical value). :-)

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:52 UTC (Wed) by tzafrir (subscriber, #11501) [Link] (1 responses)

The "experiment" is ongoing. You can always install packages directly and not through a
distribution. 

The fact is that most of us use a proper distribution and don't build Linux from scratch. 

Binary packages provided by third parties to Linux are known to be of lower quality (to say
the least). And if you want to completely eliminate distributions, you won't really have
reference platforms, and binary packages may be even more of a pain.

LCA: Disintermediating distributions

Posted Feb 7, 2008 0:02 UTC (Thu) by jdub (guest, #27) [Link]

... but the key to this is figuring out what could be better, rather than actively choosing
the worst outcome. :-)

LCA: Disintermediating distributions

Posted Feb 7, 2008 22:49 UTC (Thu) by man_ls (guest, #15091) [Link]

> The Mozilla distribution is a really good (bad?) example that shows how many wheels a vendor needs to reinvent: [...]
Don't forget:
  • A notification method for detecting new versions,
  • A download mechanism for new versions,
  • And online upgrading, which can also detect and turn off incompatible extensions.
Any of these problems is hard enough on its own; the solutions given by the Mozilla project don't even work properly on many platforms. On Windows they do work because the platform is mostly dumb about software upgrades (except for Microsoft stuff).

Note that on GNU/Linux systems these jobs are provided by distributions in a quite labor-intensive fashion. Someone must track upstream versions, repackage them, and rebuild all depending and dependent software. For software providers there is no easy generic mechanism to integrate with the distribution; adding repositories for each independent package does not scale at all.

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:19 UTC (Wed) by bronson (subscriber, #4806) [Link] (1 responses)

> Once the tools (like bug trackers) can talk to each other...

Amen brother!  I keep hoping that Trac will sprout this feature.  Why can't I run my own
personal Trac which just keeps track of all the bugs I've filed upstream?  Alas, Trac seems to
be stumbling over its 0.11 release at the moment.  Just not enough contributors I guess.

Just think if Canonical had decided to modularize and help Trac instead of putting untold
man-years into creating their do-everything cathedral named Launchpad?

Sigh.  Oh well.  Guess I'll keep hoping!

LCA: Disintermediating distributions

Posted Feb 8, 2008 14:23 UTC (Fri) by holstein (guest, #6122) [Link]

What's even worse with Launchpad is how they keep it closed source. 

What I understand of the move is the desire to make launchpad.net the center-point of
focus for the projects they host. I suppose they probably fear seeing various projects popping
up with separate installations of launchpad (I recall reading something about that but I
can't find it back ATM).

We looked at "source-forge-like" tools at work some time ago; most of them were not in a
shape that fit what we wanted. But something like Launchpad would have been really great:
all the tools we needed, built in, with the concept of fetching upstream releases. But it's
closed. So, no bug fixes from us...

LCA: Disintermediating distributions

Posted Feb 6, 2008 23:59 UTC (Wed) by vapier (guest, #15768) [Link] (11 responses)

it's good to see that distributions are slowly awakening to the idea that there are others out
there.  i've always found it backward to have distributions championing this "open source"
stuff while turning around and adding features, fixing bugs, etc... and keeping all these
changes local.  why does an upstream maintainer need to search through all the random places
that package their code in case someone has changed something ?  distributions should be
taking more responsibility for working with upstream developers.  what is great for
$random-distribution is usually great for everyone.  redhat was horrendous in this respect,
but hopefully the fedora project will help them mend their ways.

may also be worthwhile to mention that this isnt just something to look at for linux
distributions as the bsd projects are largely the same thing.  they bring to the table the
core pieces (libc/kernel), but beyond that they are often just using the same packages.  also
good to point out that they are terrible at pushing code back to upstream.  once code gets
imported into their system, it's like a black hole where it never comes out again :(.

getting interconnectivity integrated in bugzilla would have a lovely trickle down effect i
would think.  we can talk about how people should be doing XYZ, but by simply making XYZ
readily available, it would come into its own more.

LCA: Disintermediating distributions

Posted Feb 7, 2008 13:31 UTC (Thu) by zooko (guest, #2589) [Link] (9 responses)

Here are a couple of distributed bug tracking projects:

http://www.distract.wellquite.org/

http://www.ditrack.org/

I like your idea about running your own Trac instance which tracs bugs you've reported.

Launchpad is working okay for me, so far.

LCA: Disintermediating distributions

Posted Feb 7, 2008 16:41 UTC (Thu) by bronson (subscriber, #4806) [Link] (8 responses)

Launchpad is still not free software and I have little hope it ever will be.

All these free software fanatics, all those volumes of email slagging Linus for putting vital
information irretrievably into a closed system, where are you now?? Why aren't you keeping
Launchpad honest?   At least BK left your info on your hard drive, it just kept it in a binary
format.  Launchpad swallows it completely and only returns it on its terms!  It's a FAR worse
scenario than BK, yet nobody seems to care.  It's weird.

All you Launchpad users, how can you bring yourselves to rely on a system that can be taken
away at any time?

Why is Canonical so serious about solving distributed problems with Bazaar and yet so ignorant
about them with Launchpad?  They should have named it "Cathedral".

LCA: Disintermediating distributions

Posted Feb 7, 2008 18:20 UTC (Thu) by tzafrir (subscriber, #11501) [Link] (6 responses)

So I gather you also disapprove of the fact that some projects are hosted in SourceForge,
Berlios.de and Google-Code, right?

As for Bazaar: It is a distributed version control system. Hence if you happen to have an
up-to-date copy of the repository of project FooBar that is hosted on EvilBazaarServer and a
moment later the owner of EvilBazaarServer decides to shut it down, you still have a copy of
FooBar. Complete with the history. And you can start your own repository.

(And for the record: I personally avoid using LaunchPad until they keep their word about
releasing it)

LCA: Disintermediating distributions

Posted Feb 7, 2008 19:50 UTC (Thu) by bronson (subscriber, #4806) [Link] (5 responses)

Erm, SourceForge and BerliOS are open source.  Google code, maybe...  But Google doesn't
present itself as a paragon of openness the way Canonical does, so I don't really find
Google's behavior surprising.

As for Bazaar, that's exactly my point!  Why is Canonical so aware of the benefits of
distributing source code, yet so ignorant about the benefits of distributing bugs?  It's truly
baffling.

LCA: Disintermediating distributions

Posted Feb 7, 2008 21:04 UTC (Thu) by tzafrir (subscriber, #11501) [Link] (4 responses)

SourceForge used to be free software. GForge (e.g. GNU Savannah) is based on that. Berlios.de
is based on an older version of that, but IIRC they don't publish its source with the various
adjustments they made there.

As for a distributed bug-tracking system: I'm not sure this can actually work well. How do you
merge the comment I have added to the bug and the comment you have added to the bug? And what
if I also closed that bug?

LCA: Disintermediating distributions

Posted Feb 7, 2008 21:17 UTC (Thu) by zooko (guest, #2589) [Link] (3 responses)

"How do you merge the comment I have added to the bug and the comment you have added to the
bug? And if I also closed that bug?"

Isn't this the same problem as decentralized revision control?

:-)

LCA: Disintermediating distributions

Posted Feb 7, 2008 21:50 UTC (Thu) by nix (subscriber, #2304) [Link] (1 responses)

Threading of bug report comment threads, with individual thread termini 
having their own closedness state, branchpoints inheriting the closedness 
state of their downstream termini, and the bug as a whole marked as closed 
when all termini are closed or when the owner closes it...

... well, I can dream, can't I? Anyway, it's only a bit more than what 
threaded newsreaders were doing in 1988.

LCA: Disintermediating distributions

Posted Feb 7, 2008 23:01 UTC (Thu) by tzafrir (subscriber, #11501) [Link]

Apart from the coolness factor, can you give any practical advantages?

What I read from your description is that when you submit a bug report you really can't be
sure about its state. Until it got merged by "upstream". 

So reporting bugs becomes more time consuming.

LCA: Disintermediating distributions

Posted Feb 7, 2008 23:10 UTC (Thu) by graydon (guest, #5009) [Link]

Yeah. I initially meant to support storing and associating bugs and test-cases /
test-run-states inside monotone databases, transferring them around with revs etc; the crypto
stuff in there was to support independent quality auditing. I think it's viable, it just needs
time and energy.

Never got around to it. Getting the software itself right was hard enough!

LCA: Disintermediating distributions

Posted Feb 9, 2008 0:53 UTC (Sat) by cortana (subscriber, #24596) [Link]

> All you Launchpad users, how can you bring yourselves to rely on a system 
> that can be taken away at any time?

Unfortunately, if I want to report bugs in Ubuntu, I have no other choice, since the Ubuntu
folks don't seem to give a damn about the (far superior) reportbug utility.

LCA: Disintermediating distributions

Posted Feb 7, 2008 18:01 UTC (Thu) by tzafrir (subscriber, #11501) [Link]

Debian's bug tracking system supports linking Debian bugs to upstream bugs:

http://bts-link.alioth.debian.org/

The Middleman Seems to Work

Posted Feb 7, 2008 3:24 UTC (Thu) by clump (subscriber, #27801) [Link]

I switched my home machines to Debian in 2001 mainly because I found building from upstream so
difficult. I'm happy that I can build from source, but installing a Debian package meant that
the software worked (was tested and supported) and was easily managed within my system.  In
2001 software for Linux often didn't "just work".  Much of my time was spent chasing
dependencies (development libraries or binary dependencies) and once I was able to compile
something I'd generally leave it alone.

My experience has been that distros have made life better for users by packaging software.
Logically this seems counterintuitive as you do have a "middleman" but prior to the Debian
software model you had chaos.  "Pigs playing in mud" wasn't a fun or effective use of my time.

LCA: Disintermediating distributions

Posted Feb 8, 2008 2:09 UTC (Fri) by omez (guest, #6904) [Link]

I have 683 packages installed on my machine. If a distributor is not going to provide
dependency resolution information, I might spend three hours resolving the dependencies
manually for each package update. Alternatively, I have (potentially) 683 different people who
don't communicate with each other providing incongruent dependency information. How is this a
good thing?

Bug fix cycle for most users is 100 x slower than it need be

Posted Feb 8, 2008 7:48 UTC (Fri) by Richard_J_Neill (subscriber, #23093) [Link] (2 responses)

There is one important thing that hasn't been mentioned so far, namely the speed of the
bug-fix cycle. Consider the majority of LWN readers: technically literate, able to file good
bug reports, and probably writing some software of our own. Yet, we don't want to delve into
every single program. 

For example, I run Ubuntu, and I update every 6 months. I can't deal with all the
instabilities of running a devel version on my primary system. Let's say I find a bug in KDE,
which particularly annoys me.

(1) The KDE folks will quite reasonably expect me to test with the latest release before
filing the bug report. But I'm running a version somewhere between 3-9 months old, even though
I track the latest Ubuntu release. It's a major effort to re-build KDE, just to verify a
bug-report.

(2) If the bug gets fixed, (or has already been fixed), it will take an average of 3 months
before I get to evaluate, comment on, or benefit from the fix.

As a result:
  * The vast majority of technical users produce bug reports of limited value to upstream;
this is a huge waste of potential talent.
  * Motivation is limited, since most people don't get the benefit of the fixes for the bugs
that they report, in a timely manner.
  * Multiple people hit the same bug, even if it's fixed, because it doesn't get pushed
downstream. Again, a huge waste of time.
  * The oops-report-fix-enjoy cycle takes up to 9 months, instead of 3 days.

My suggestion is that distros should automatically backport every package on a nightly basis,
and provide binaries of the latest CVS, as well as the latest stable-release. These packages
must be built against the stable distro, and should be installable via the package manager.
(They should not be brought in by default, though.)

I also suggest that most end-users should file most of their bug reports upstream.
Distro-developers only have the resources to be dealing with bug-fixes that they can push out
to all users, eg security and application-crash bugs. 

Bug fix cycle for most users is 100 x slower than it need be

Posted Feb 8, 2008 10:10 UTC (Fri) by tzafrir (subscriber, #11501) [Link]

You can use Ubuntu Hardy right now. Or Debian Unstable. Or Fedora Rawhide. Or whatever. But
most users are not able to cope with the nightly upgrades and weekly breakages.

It takes some debugging to get a stable distribution that will "just work" for a user.

Also: what do you mean by "must be built against the stable release"? A KDE 4 program should
be built against the KDE4 libraries in the stable version or from the nightly build? The libc
version from the stable or from the nightly? Xorg from the stable or from the nightly?

Bug fix cycle for most users is 100 x slower than it need be

Posted Feb 15, 2008 2:28 UTC (Fri) by dkite (guest, #4577) [Link]

I have always gravitated towards rolling release distros, or in the case 
of KDE, rolling cvs/svn builds.

Derek


Partially distributed distributions

Posted Feb 8, 2008 18:44 UTC (Fri) by JLCdjinn (guest, #1905) [Link] (8 responses)

Over the past two weeks, I have periodically encountered a crashing bug in VIM. I am running Kubuntu 7.10, and I run the update manager whenever it indicates that there is a pending update. vim --version tells me that the current binary includes patches 1-56. As of this writing, however, there are a total of 244 patches for VIM 7.1, and a bit of investigation indicates that a number of the remaining 188 patches fix crashing bugs; one of them could fix my bug. Naturally, I could compile, run, and test an upstream version, but, if it fixes the problem, I want to inject my upstream version into my system until I "return control" to the package manager. I would like to tell Kubuntu that I want to take full control of the VIM package, that I will be responsible for it, but I don't know how to do that.

This is a specific case of a general problem. There are many situations in which I would like to manually maintain the artifacts of a package on my system. I might want to do this in order to fine tune a package, such as the Linux kernel; when I want to test a bug fix, such as with KDE as described by Richard_J_Neill above; or when I want to actively develop a particular piece of software in the context of my overall system. Certainly, these cases are far from mutually exclusive. In order to do this cleanly, I need support from the distribution. I need a way to flip a switch in the package management system, which would tell the distribution that I will provide the dependencies filled by a particular package. I need the distribution to provide me with sufficient information that I can faithfully fill those dependencies. And then I need a way to relinquish control to the distribution, when I no longer need to closely control a particular package. It would be great if I could link any findings to both a distribution issue tracking system as well as any upstream issue tracking system.

To what degree do distributions provide this functionality now? How much documentation about this functionality exists? For example, on Debian- or Ubuntu-based distributions, how could I install a custom Linux kernel without getting in the way of the package management system? Having this ability might help us move in the direction suggested by Mr. Waugh, if I understand his intent.

Partially distributed distributions

Posted Feb 8, 2008 22:03 UTC (Fri) by magnus (subscriber, #34778) [Link] (5 responses)

I wish there was a "get source" command in the package manager, that would download the source
for a package configured exactly the same way as the installed package was configured. The
package would be flagged as "customized" in the package database. You could then experiment
with different patches, code changes etc and when you're done either revert back to the
original package or keep your custom changes. It could also support automatically generating
patches to send to the distributor or upstream (in case you managed to fix a bug), creating
custom binary packages etc. 

I think something like that would help a lot to reduce the barrier between users and
developers. Does any distribution already have such a system? 

Partially distributed distributions

Posted Feb 9, 2008 1:59 UTC (Sat) by tzafrir (subscriber, #11501) [Link]

'apt-get source' has been around for a while.

Debian also now has 'debcheckout', but that is aimed more at the package maintainer's version control system.

You can certainly build your own package and install it; apt also takes locally-installed packages into account.
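
A minimal sketch of both, using vim as an example package and assuming the deb-src lines for your release are already in /etc/apt/sources.list:

$ apt-get source vim      # fetch and unpack the source of the packaged version
$ debcheckout vim         # or, if the package declares a Vcs field: check out the maintainer's packaging repository (devscripts)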

Partially distributed distributions

Posted Feb 9, 2008 14:49 UTC (Sat) by alextingle (guest, #20593) [Link] (3 responses)

$ apt-get source PACKAGE
$ sudo apt-get build-dep PACKAGE
Then, from inside the unpacked source directory, once you've made your change / applied the patch / whatever...
$ dpkg-buildpackage -rfakeroot -b
...and hey-presto, your very own fixed package.
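
To actually use the result, install the .deb that dpkg-buildpackage leaves in the parent directory; the exact filename depends on the package version and architecture, so this is only illustrative:

$ sudo dpkg -i ../PACKAGE_1.2-3_i386.deb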

Partially distributed distributions

Posted Feb 10, 2008 15:55 UTC (Sun) by Lennie (subscriber, #49641) [Link] (2 responses)

You can also use apt pinning to get a newer version from Debian (Ubuntu?) testing or even experimental.

You add the other 'version' of the distribution (like Debian testing) to your /etc/apt/sources.list and specify which package you want to upgrade in /etc/apt/preferences (that's the pinning part). apt will then automatically upgrade any dependencies as well.

This way you can test a newer version and, just as easily, go back to the distribution's original version from, for example, Debian stable.
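
A rough sketch of what that looks like, assuming you want vim from Debian testing (the mirror URL and priorities are only examples):

# added to /etc/apt/sources.list
deb http://ftp.debian.org/debian testing main

# /etc/apt/preferences
Package: vim
Pin: release a=testing
Pin-Priority: 990

Package: *
Pin: release a=stable
Pin-Priority: 900

$ sudo apt-get update && sudo apt-get install vim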

If you created a patch for a package (after doing apt-get source, dpkg-source -x and dpkg-buildpackage), don't forget to bump the version number in debian/changelog: append your initials or organisation plus a 1 or 2, etc. to the end of the existing version number, so that you can still upgrade the package later.
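
For example, roughly (dch comes from the devscripts package; the suffix and directory name are only illustrative):

$ cd vim-7.1                     # directory name will vary with the package version
$ dch --local +local "Applied upstream patches fixing a crash."
$ dpkg-buildpackage -rfakeroot -b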

Another way to upgrade a package is to add only the source (deb-src) lines to your /etc/apt/sources.list and do apt-get source, dpkg-source and dpkg-buildpackage. You might need to change the debian/control file where it specifies a newer version of a dependency, or install that newer dependency first.
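
Roughly, the only addition /etc/apt/sources.list needs for that is a deb-src entry for the newer release (the mirror URL is just an example):

deb-src http://ftp.debian.org/debian testing main

$ sudo apt-get update && apt-get source vim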

On an upgrade (apt-get -u dist-upgrade), or even a routine update of your distribution, apt will automatically upgrade your packages if needed.

Partially distributed distributions

Posted Feb 10, 2008 16:03 UTC (Sun) by Lennie (subscriber, #49641) [Link] (1 responses)

And I forgot to mention the apt-get build-dep command, which will install the build dependencies of the package.

apt-get build-dep: better than sliced bread?

Posted Feb 22, 2008 7:24 UTC (Fri) by goaty (guest, #17783) [Link]

Have you ever tried apt-get build-dep on a fresh install that doesn't have any build tools installed? It will pull in the entire build system: gcc, header files, everything, with one command! With a fast disk and a fast network it is insanely pleasurable.

Kids growing up in the Age of APT may never realise that building stuff used to be *hard*. It's lucky there are other operating systems around for them to use and understand what pain means.

Going upstream

Posted Feb 9, 2008 2:00 UTC (Sat) by JoeBuck (subscriber, #2330) [Link]

For relatively simple, self-contained programs that you invoke from a terminal, it's not hard to experiment with upstream versions. The key point is that the familiar (at least to developers) configure, make, make install sequence installs programs into /usr/local by default, which is an area that distros don't touch. Alternatively, you can build it in a special place, to make it easy to blow away whenever your distro fixes the problem.

So, if you want the upstream vim, download the source tarball, and do

tar jxf vim-7.1.tar.bz2
cd vim71
./configure --prefix=/usr/local/vim71
make
sudo make install

You now have built the upstream vim and installed it in /usr/local/vim71/bin, without disturbing your distro's vim.

(You will of course need to have the appropriate development tools installed).
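
To try it out, and to get rid of it again once the distro's package catches up, something along these lines should do (paths follow the example above):

$ /usr/local/vim71/bin/vim --version       # run the upstream build explicitly
$ export PATH=/usr/local/vim71/bin:$PATH   # or put it first in your PATH for this shell
$ sudo rm -rf /usr/local/vim71             # blow it away later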

Partially distributed distributions

Posted Feb 17, 2008 21:48 UTC (Sun) by muwlgr (guest, #35359) [Link]

Of course you could build your own kernel, pack it into a proper .deb file, and install it with dpkg. Just run "aptitude install kernel-package", then "man make-kpkg".
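
A typical run looks roughly like this (the source path, revision string and resulting filename are only examples):

$ cd /usr/src/linux-2.6.24                  # your unpacked kernel source
$ make menuconfig                           # configure the kernel
$ make-kpkg clean
$ fakeroot make-kpkg --initrd --revision=custom.1 kernel_image
$ sudo dpkg -i ../linux-image-2.6.24_custom.1_i386.deb   # actual filename will differ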


Copyright © 2008, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds