KS2011: Afternoon topics

By Jonathan Corbet
October 26, 2011
2011 Kernel Summit coverage
The afternoon discussion on the second day of the 2011 Kernel Summit covered a wide range of topics, including shared libraries, failure handling, the media controller, the kernel build and configuration subsystem, and the future of the event itself.

Writing sane shared libraries

Lennart Poettering and Kay Sievers, it seems, have grown tired of dealing with the mess that results from kernel developers trying to write low-level user-space libraries. So they proposed and ran a session intended to convey some best practices. For the most part, their suggestions were common sense:

  • "Use automake, always." Nobody wants to deal with the details of writing makefiles. Automake is ugly, but the ugliness can be ignored; like democracy, it is messy but it is still the best system we have. Nobody, they said, wants to see a kernel developer's makefile creativity. There is a bit of a learning curve, but, it was suggested, it should be well within the capabilities of somebody who has mastered kernel programming.

  • Licensing: they recommended using LGPLv2 with the "or any later version" clause. This drew some complaints from developers who thought they were being told which license to use for their code. It is really just a matter of compatibility with other code, though; a GPLv2-only (or LGPLv2-only) library doesn't mix with many other licenses, so distributors may have a hard time shipping it.

  • Never have any global state. Code should also be thread aware, "but not thread-safe." Thread-level locking can create problems at fork() time, so it is best avoided, especially in low-level libraries. GCC constructors should be avoided for the same reason.

  • Files should be opened with O_CLOEXEC, always. There is no telling when another thread might do something and carry off a library's file descriptors with it.

  • Basic namespace hygiene: no exporting variables to applications, use prefixes on all names, and use versioned symbols for all drop-in library use. It is also best to use naming conventions that application developers will expect.

  • No structure definitions in header files; they will only cause trouble when the ABI evolves in the future.

Kay and Lennart had more suggestions, but their time ran out. They have developed a skeleton shared library that they intend to post for developers to work from when creating these libraries in the future.
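
To make a few of those conventions concrete, here is a minimal sketch in C (a hypothetical "foolib", not the skeleton library Kay and Lennart are preparing): a prefixed, opaque handle type whose structure definition stays out of the public header, no global state, and file descriptors opened with O_CLOEXEC.

    /*
     * foolib.h -- the hypothetical public header exposes only an opaque,
     * prefixed type and prefixed functions; no structure definition leaks out.
     */
    typedef struct foolib_context foolib_context;
    int foolib_open(foolib_context **ctx, const char *path);
    void foolib_close(foolib_context *ctx);

    /* foolib.c -- all state lives in the per-handle structure; no globals. */
    #include <fcntl.h>
    #include <stdlib.h>
    #include <unistd.h>

    struct foolib_context {
            int fd;     /* private; the layout can change without breaking the ABI */
    };

    int foolib_open(foolib_context **ctx, const char *path)
    {
            foolib_context *c = calloc(1, sizeof(*c));

            if (!c)
                    return -1;
            /* O_CLOEXEC so another thread's fork()/exec() cannot carry the fd off */
            c->fd = open(path, O_RDONLY | O_CLOEXEC);
            if (c->fd < 0) {
                    free(c);
                    return -1;
            }
            *ctx = c;
            return 0;
    }

    void foolib_close(foolib_context *ctx)
    {
            if (!ctx)
                    return;
            close(ctx->fd);
            free(ctx);
    }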

Failure handling

Roland Dreier ran a session on failure handling; his core point was that error paths in the kernel tend to be buggy, and those bugs are a problem: things that should be recoverable errors turn into system crashes instead. But, since error paths tend to get a lot less testing, we end up shipping a lot of those bugs. He noted that he lacked any sort of silver-bullet fix for the problem; his hope was that the group would fill that in during the talk.

Roland's examples were mostly in the filesystem and storage area. He noted that unplugging a block device can still bring the system down. The interfaces in that area, he said, approach a score of -10 on the famous Rusty Scale: they are nearly impossible to get right. A number of filesystems also run into all kinds of problems if memory allocations fail. It would be nice to do a better job there, he said.

Work is being done in that area. Dave Chinner noted that xfs is slowly getting a full transaction rollback mechanism into place; once that happens, it will be possible to return ENOMEM to user space for almost any operation that fails with memory allocation errors. Whether that is a good thing is another question: applications tend not to be on the lookout for that kind of failure, so better out-of-memory handling in the filesystems could turn up a lot of strange bugs in user space. Ted Ts'o said that he is more than open to patches improving allocation failure handling in ext4, but, in practice, those bugs tend not to bite users. What happens instead is that the out-of-memory killer starts killing processes before allocations start to fail within the filesystem.
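
To illustrate the user-space side of that concern, here is a minimal sketch (ordinary application code, not anything shown at the summit): very few programs check for an ENOMEM return from a plain filesystem call, which is exactly the path that better allocation-failure handling in the filesystems would start exercising.

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
            static const char buf[] = "hello\n";

            /* Few applications expect write() to fail with ENOMEM; if filesystems
             * start propagating allocation failures, this path becomes real. */
            if (write(STDOUT_FILENO, buf, sizeof(buf) - 1) < 0) {
                    if (errno == ENOMEM)
                            fprintf(stderr, "transient allocation failure; retry or back off\n");
                    else
                            fprintf(stderr, "write failed: %s\n", strerror(errno));
                    return 1;
            }
            return 0;
    }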

Andrew Morton reminded the room that the kernel does have a nice fault injection facility that works well for the testing of error paths. But, he said, nobody bothers to use it. Meanwhile, in the real world, people are not hitting bugs related to memory allocation failures.

Alan Cox asserted that the design of the block layer is wrong, that it destroys data structures too soon when a device is removed. In fact, the layer was designed to do the right thing: it tries to keep the relevant structures around until there are no more users of them. The problem is that all the reference counting logic was added late in the game - pluggable devices were not an issue when a lot of that code was written - and the job has not been done completely or well. There will be a fair amount of work required to fix things properly; after some talk, it was agreed that the design of an improved block subsystem could be handled over email.
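
The mechanism being retrofitted is the kernel's ordinary reference-counting idiom; a minimal kref sketch (a hypothetical "gadget" structure, not actual block-layer code) shows the intended behavior: the structure is freed only when its last user drops its reference, so removing the underlying device cannot pull it out from under in-flight I/O.

    #include <linux/kernel.h>
    #include <linux/kref.h>
    #include <linux/slab.h>

    struct gadget {
            struct kref refcount;
            /* ... device state ... */
    };

    static void gadget_release(struct kref *kref)
    {
            struct gadget *g = container_of(kref, struct gadget, refcount);

            kfree(g);
    }

    static struct gadget *gadget_alloc(void)
    {
            struct gadget *g = kzalloc(sizeof(*g), GFP_KERNEL);

            if (g)
                    kref_init(&g->refcount);
            return g;
    }

    /* Every user takes a reference before touching the structure... */
    static void gadget_get(struct gadget *g)
    {
            kref_get(&g->refcount);
    }

    /* ...and drops it when done; the memory goes away only on the last put. */
    static void gadget_put(struct gadget *g)
    {
            kref_put(&g->refcount, gadget_release);
    }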

Media controller

Mauro Carvalho Chehab talked for a bit about the media controller subsystem. His main point was that, while Video4Linux2 is the first user of the media controller, the two are not equivalent. The media controller is a generic API that allows the configuration of data paths from user space; it is applicable in a number of places. There is currently interest in using it in the sound, fbdev, graphics, and DVB subsystems; thus far, only ALSA has preliminary patches available.
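
As a small example of what "configuration of data paths from user space" looks like in practice, here is a sketch (assuming a device node at /dev/media0 and the ioctls from <linux/media.h>) that enumerates the entities a media device exposes; pads and links are queried and reconfigured through the same descriptor-based interface.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/media.h>

    int main(void)
    {
            struct media_entity_desc entity;
            __u32 id = 0;
            int fd;

            fd = open("/dev/media0", O_RDONLY);
            if (fd < 0) {
                    perror("open /dev/media0");
                    return 1;
            }

            /* Entities are walked by OR-ing MEDIA_ENT_ID_FLAG_NEXT into the
             * previous id; the ioctl fails when there are no more entities. */
            for (;;) {
                    memset(&entity, 0, sizeof(entity));
                    entity.id = id | MEDIA_ENT_ID_FLAG_NEXT;
                    if (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) < 0)
                            break;
                    printf("entity %u: %s (%u pads, %u links)\n",
                           entity.id, entity.name,
                           (unsigned)entity.pads, (unsigned)entity.links);
                    id = entity.id;
            }

            close(fd);
            return 0;
    }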

An upcoming challenge for the media controller is the advent of programmable hardware, where the entities and their connections can now come and go dynamically.

There was a question about integration with GStreamer; the answer is that they have different domains (software for GStreamer, hardware for the media controller), but that the media controller developers have tried to at least match their terminology with that used by GStreamer. It wasn't really discussed in the room, but the idea of using GStreamer pipeline notation to configure hardware via the media controller seems like it could be interesting: that sort of configuration is currently only done by proprietary applications that understand a specific piece of hardware. It would not be easy, but making the configuration more generic in this manner could maybe make the whole thing more accessible to users.

Kbuild and kconfig

Michal Marek is the current maintainer of the kernel build system. He took that position, he said, because nobody else wanted it; the room gave him a round of applause for having stepped up to the job. The system "more or less works," he said, but there are always things that could be done better. He had a small to-do list to start with, but hoped that the group would tell him about other things they would like to see improved.

For the most part, the discussion covered various little glitches and annoyances. The one substantive discussion had to do with dependency resolution. Anybody who has spent time configuring kernels knows how irritating it can be when a specific option cannot be enabled because one of its dependencies is missing; nobody disagreed with the notion that it would be nice to turn on dependencies automatically instead of forcing developers to go digging through the source to figure out what is missing.

There is a Summer of Code project out there which hooks a SAT solver into the kernel configuration system to automatically figure out dependencies, but it hasn't gotten a lot of attention on the lists. Michal will try to pull that code in and see what happens with it. There was a fair amount of talk about whether the solver is overkill and whether it might bog down the kernel build process; Linus noted that the history in this area is not entirely pleasant. He seemed a bit frustrated that this problem has been discussed many times, but no solution has emerged yet. A patch is said to be coming soon; perhaps, this time, the necessary pieces for a real solution will be there.

The future of the Kernel Summit

Kernel Summit program committee member Steve Rostedt ran a session on the organization of the event itself. The format of the summit was changed a bit for 2011; did those changes work out? Additionally, he said, finding suitable topics for the summit has gotten harder over the years; there aren't that many things that are of interest to the whole crowd. That, he said, is why we end up talking about things like kbuild and git.

The discussion was unstructured and hard to summarize. Everybody agrees that minisummits (which made up the first day of the event this year) are a good thing, but it's not entirely clear if they should all be brought together with the kernel summit or not. The closed session clearly has some value and will probably continue to exist in some form, even though Linus said he didn't think it worked all that well. The practice of bringing in high-profile users - a common feature at previous summits - may not return; if nothing else, the increasing presence of companies like Google at the summit ensures that there is plenty of visibility into the problems of large data centers.

It is hard to say what changes will come to the summit next year. About the only thing that had widespread agreement was that more unstructured time (including the return of the hacking session) would be a good thing, as would more beer (not that beer has been in short supply in Prague).

Closing

The day concluded with elections for the Linux Foundation's technical advisory board (TAB) and a key-signing party. The TAB election saw late candidacies by James Morris and Mauro Carvalho Chehab, but, in the end, the five incumbents (Alan Cox, Thomas Gleixner, Jonathan Corbet, Theodore Ts'o, and Greg Kroah-Hartman) were re-elected for another two years. (The other board members, whose terms end next year, are James Bottomley, Chris Mason, John Linville, Grant Likely, and Hugh Blemings). The key signing will, with any luck, result in a core web of trust that can be used to secure access to kernel.org and to verify pull requests.

The attendees then rushed off for some unstructured time with beer while surrounded by suits of armor in a downtown Prague restaurant.



"drop-in library use"

Posted Oct 26, 2011 15:01 UTC (Wed) by xav (guest, #18536) [Link]

Can someone explain to me what "drop-in library use" means? (In the userspace libs context)

Thanks

"drop-in library use"

Posted Oct 26, 2011 22:41 UTC (Wed) by BenHutchings (subscriber, #37955) [Link]

I took that to mean a statically-linked library.

"drop-in library use"

Posted Oct 31, 2011 18:58 UTC (Mon) by mezcalero (subscriber, #45103) [Link]

A library whose .c and .h files you just drop in your own code, use internally and which should never be visible to the outside.

KS2011: Afternoon topics

Posted Oct 26, 2011 17:16 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

>Automake is ugly, but the ugliness can be ignored; like democracy, it is messy but it is still the best system we have.

The faster Automake dies the better off we'll all be. Anything is better than automake: scons, waf, cmake, etc.

Please, start to use them. Samba switched to waf for samba4 and the results are great so far. And it's hard to find a body of code more complicated than samba4.

KS2011: Afternoon topics

Posted Oct 26, 2011 17:37 UTC (Wed) by nix (subscriber, #2304) [Link]

From a distributor's perspective scons is an utter nightmare and not at all a replacement for the Autoconf/Automake combination. scons is not a build system: it's a framework which lets you write your own build system in Python. This Python is rather ugly (because it's single-use), and, worse, every distributor has to read it and understand it and, often, extend it, because it is almost guaranteed not to support analogues of --prefix, --exec-prefix and other paths at configure or install time (the former for encoding paths into the binary, the latter for installing into temporary directories for packaging), DESTDIR support, CFLAGS/LDFLAGS/*FLAGS overriding, cross-compilation...

The list of things autoconf and automake do for you in a consistent fashion is endless, and scons does next to none of them. Neither do projects which use scons, and some of those facilities (notably CFLAGS overriding and install-time prefix or DESTDIR support) are essential for any project to be easily packaged. The best you'll get with scons is a means of doing this which is inconsistent for every single package: not very useful.

(cmake used to be bad in this regard too, but is much better these days, though it still supports no analogue of the useful site-config files, so you need wrapper scripts to simulate that. waf appears to provide some of them, but in a profoundly non-extensible fashion, apparently under the belief that if you want to configure CFLAGS the same way across many packages, you'll be happy to hack the waf scripts for every one of them to ensure they're not overriding it. Wrong.)

KS2011: Afternoon topics

Posted Oct 26, 2011 19:03 UTC (Wed) by cjwatson (subscriber, #7322) [Link]

Amen to all of that. The analogy with democracy is very apt.

KS2011: Afternoon topics

Posted Oct 26, 2011 19:46 UTC (Wed) by tetromino (subscriber, #33846) [Link]

Indeed. Using scons or waf may be a reasonable solution for something that will only be built on your personal development machine, but it makes it a pain to package your software for distributions, and imho counts as a black mark against your package.

KS2011: Afternoon topics

Posted Oct 27, 2011 0:19 UTC (Thu) by nix (subscriber, #2304) [Link]

It also makes it a pain for people to build your software themselves when it turns out it's not packaged. "What? I have to install *another* build tool? Where's it going to be installed? What? I have to read a bunch of Python to figure it out? WTF?"

KS2011: Afternoon topics

Posted Oct 27, 2011 20:04 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Waf can create a self-contained configure script which does the build. The only tool you need in that case is a compiler (and the required libs, of course).

And installing scons is just as simple as installing one package. So what's the big deal with it?

KS2011: Afternoon topics

Posted Oct 27, 2011 22:15 UTC (Thu) by nix (subscriber, #2304) [Link]

The installation isn't that bad. The "oh heck now I have to learn it and (possibly) learn enough Python to be getting on with *and* figure out how this one-package build system works just so I can set CFLAGS and libdir appropriately for my biarch system, since they used lib and I use lib64 or vice versa or something like that"... *that* is not nice at all. In fact it was sufficient reason for me to avoid scons (and everything that used it to build completely) until recently.

(Sure, the problem goes away if it's a Python package... but if it's a Python package, why the heck aren't they using distutils, which supports prefix and destdir and everything necessary already, with zero effort needed from the packager?)

KS2011: Afternoon topics

Posted Oct 28, 2011 10:28 UTC (Fri) by Rudd-O (subscriber, #61155) [Link]

waf supports CFLAGS and all the GNU dirs.

You don't have to learn anything new to use waf. If you know how to use configure, you know how to use waf.

KS2011: Afternoon topics

Posted Oct 28, 2011 16:50 UTC (Fri) by nix (subscriber, #2304) [Link]

No site-config files, which means I have to figure out which of the autoconfish variables I set are actually supported by waf (not many) and pass them on the command-line instead. Yes, it's a valid approach, but it *is* something new to handle and it does need special handling by packagers / autobuilders. You cannot treat it just like autoconf.

KS2011: Afternoon topics

Posted Oct 31, 2011 9:57 UTC (Mon) by pkern (subscriber, #32883) [Link]

Wasn't waf the insane thing that didn't want to keep any backwards compatibility to recreate its configure stuff from old sources because it's ok to stick with an old version? (And hence it got removed from Debian, for instance?) So you need to ship it with your sources?

Also wasn't it the insane thing which actually shipped a compressed tarball with Python libs in sources that use it? (The "waf" binary alongside "configure".)

I know Debian had to patch a bunch of packages because waf was stupid and the result didn't build on hppa. You couldn't just regenerate the output because you needed the right version of waf to do that, i.e. the shipped one. So that needed patching. And you couldn't just apply the same patch either, because the build system is in a compressed blob.

Oh my, please spare us this ridiculous thing.

KS2011: Afternoon topics

Posted Oct 31, 2011 13:43 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Nope, the situation is more complex.

Waf can produce a 'wafscript', which is basically a 'configure' file. That's not the preferred form for modifications, just as you wouldn't want to modify the 'configure' script directly (rather than 'configure.ac'). Why waf won't build on hppa I have no idea; it's a pure Python app.

Their stance regarding system-wide installation is curious, but not a problem.

KS2011: Afternoon topics

Posted Oct 26, 2011 20:05 UTC (Wed) by pj (subscriber, #4506) [Link]

What about one of the newer build systems like fabricate (<http://code.google.com/p/fabricate/>) that autodiscovers dependencies via strace() ?

KS2011: Afternoon topics

Posted Oct 26, 2011 21:44 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Build systems can be replaced and things are fine. Make less so. There are many things that replace make that do not support things like -j, -k, -n, or other niceties for debugging broken builds.

Personally, I'm more partial to CMake than other build systems for C/C++. Some languages have it better: Python has setuptools or distribute, and Haskell has cabal. These are built with the language in mind and work fine until other things are needed (e.g., asciidoc/xmlto for man pages, doxygen for documentation, and non-standard testing suites (especially when language bindings are concerned and you can't build/use the bound language without hacks)). Other languages still have it hard, such as Java with maven or whatever the XML horror du jour is.

What I *really* want is a build system that generates Makefiles (or Ninja files, or Visual Studio projects, or XCode projects, or what have you) (CMake), defaults to out of source builds (CMake and whatever autotools magic glibc has that no one bothers to copy), has a declarative syntax (cabal), and has no need to ship generated files (CMake, raw Makefiles).

I have hacks for CMake to handle LaTeX builds (including double and triple+bibtex passes) with out-of-source builds, symlinking my dotfiles, generating cross-referenced doxygen, and more, but I think a build system that supports something more akin to make's generator rules (something like Prolog/Mercury maybe?) would be nicer to work with (CMake's escaping and argument parsing is less than ideal to manage with complicated things). Implicit know-how of supporting system copies and bundled libraries with automatic switches (which can be disabled if there are patches which make in-the-wild copies not work) would be wonderful as well. CMake's external_project_add gets close, but still has some rough edges (such as needing manual switches for system copies support).

KS2011: Afternoon topics

Posted Oct 27, 2011 19:52 UTC (Thu) by pbonzini (subscriber, #60935) [Link]

Autotools supports out-of-tree builds with zero effort from the autotools user. All programs using autotools should support mkdir build; cd build; ../configure && make.

KS2011: Afternoon topics

Posted Oct 27, 2011 20:11 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

Yes, you can configure out-of-tree, but you can't autoreconf -i -f out-of-tree. I want *zero* files in the source tree that are not meant to be committed. So yes, autotools does support out-of-tree builds, but it still leaves a dirty source tree behind.

Out-of-source *build* is probably bad wording. More precisely, .gitignore should be empty and git status (and the equivalent in the other VCS's) should also be empty starting with no generated files.

KS2011: Afternoon topics

Posted Oct 29, 2011 19:05 UTC (Sat) by fuhchee (guest, #40059) [Link]

"I want *zero* files in the source tree that are not meant to be committed."

The cure to that is to mean to commit autoconf/automake-generated files.

KS2011: Afternoon topics

Posted Oct 31, 2011 20:46 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

> The cure to that is to mean to commit autoconf/automake-generated files.

Which, IMO, is worse than having them in the source tree after their generation. The autoconf/automake *generated* files belong in the build directory.

KS2011: Afternoon topics

Posted Oct 31, 2011 20:55 UTC (Mon) by fuhchee (guest, #40059) [Link]

"The autoconf/automake *generated* files belong in the build directory."

If you say so. :-)

KS2011: Afternoon topics

Posted Nov 1, 2011 14:30 UTC (Tue) by nix (subscriber, #2304) [Link]

Well, that's true if your source directory is a version-control checkout, because anything else is a recipe for conflict hell as different developers use slightly different versions of the generating tools (producing slightly different output).

It's certainly not true if your source directory is a release tarball (or other release medium). Autoconf et al should have been run for you by that point, and the result tested. That way end users don't need anything but a shell and ordinary build tools to run the build. (This is one area where cmake falls down: all the builders need a copy of it.)

KS2011: Afternoon topics

Posted Nov 1, 2011 14:41 UTC (Tue) by fuhchee (guest, #40059) [Link]

"anything else is a recipe for conflict hell"

In practice, if people are pragmatic, it's fine.
Developers can regenerate the files at will with any version that works.
In the case of version control branch merge problems, regenerate them again.

KS2011: Afternoon topics

Posted Oct 28, 2011 18:51 UTC (Fri) by anatolik (subscriber, #73797) [Link]

Most build systems work fine as long as you have a small project (less than 10K files), but all of them suck when the project grows. Basically, none of these build systems is scalable.

I really like the idea of the Tup build system (http://github.com/gittup), which stores the graph in a local sqlite database and reparses/rebuilds the graph only when files are changed - this makes iterative development much more pleasant. Another cool feature is dependency autodiscovery - under the hood it uses a fuse (or fuse-like) library for that (this works on linux, macosx and windows). And the third feature that I like is the "monitor" - an inotify-based daemon that updates the dependency graph in the background while you change files in your editor.

I did some experiments with my project (100K) and found that a null build takes ~1.6 sec without the monitor and 0.09 sec with the monitor. A null build for my gmake-based build system on the same project takes 42 secs (it parses the makefiles, builds the dependency graph, and scans files for modification, but does not run any commands).

KS2011: Afternoon topics

Posted Oct 28, 2011 20:13 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

That actually looks to be close to what I'd like to see for the build system itself. The rules for "how to build" are declared in the top-level and then piped through later. It also appears to have very little in the way of hard-coded knowledge about C/C++ and extending it to also support things like latex pipelines, documentation builds, and more would be well-supported.

It doesn't support -B or -n make flags though. The percentage output is nice. However, it doesn't seem to support out-of-source builds (I sort of hacked bootstrap.sh to do the bootstrap successfully, but there's no further support to get a full build working). The code base looks clean, so maybe I can get a patch and convince the maintainer to accept out-of-source as an option.

KS2011: Afternoon topics

Posted Oct 28, 2011 20:38 UTC (Fri) by anatolik (subscriber, #73797) [Link]

> It doesn't support -B
One of the mottos of the project is "you'll never need to do a clean build". Things like "clean" and -B are used in case the build is broken by incorrect dependency information (as often happens with recursive make). Tup provides "build consistency" - the output is always correct because the dependencies are always correct.

> make -n
"tup todo"

> it doesn't seem to support out-of-source builds
AFAIK the tup author is in favor of adding it. It is better to contact the mailing list, as I am not sure about his plans.

Oh, yeah, I remembered a 4th thing that I like in tup - "buffered output". Output from commands is always printed atomically. No more interlaced output! The interlaced output is especially annoying if you have an error in one of the widely used header files - this makes error messages really difficult to read.

KS2011: Afternoon topics

Posted Oct 28, 2011 21:00 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

> One of the mottos of the project is "you'll never need to do a clean build". Things like "clean" and -B are used in case the build is broken by incorrect dependency information (as often happens with recursive make). Tup provides "build consistency" - the output is always correct because the dependencies are always correct.

So if I update a system library or header, it will relink the relevant parts? That's usually the corner case I've run into.

> AFAIK the tup author is in favor of adding it. It is better to contact the mailing list, as I am not sure about his plans.

That's good.

> Oh, yeah, I remembered a 4th thing that I like in tup - "buffered output". Output from commands is always printed atomically. No more interlaced output! The interlaced output is especially annoying if you have an error in one of the widely used header files - this makes error messages really difficult to read.

I usually do `make -kj8` followed by `make` to do this, so this should help that. I rarely do parallel builds from vim however (<F8> is bound to "build", autodetecting CMake, make, pdflatex, cabal, rpmbuild, and a few others based on the current file) so interlaced output never really bothered me there.

KS2011: Afternoon topics

Posted Oct 27, 2011 0:06 UTC (Thu) by mhelsley (subscriber, #11324) [Link]

Discovering dependencies with strace is not reliable. You would also need to check code coverage of the strace'd run to know if you have anything resembling a reliable set of dependencies. Even if you have 100% code coverage (and few *small* programs can claim that) obscure data-dependent or time-dependent code paths may still be hidden.

The alternative is probably something vaguely like static analysis of the code. Static analysis is notoriously complicated and often produces a flood of false-positives though.

So my guess is we'll still have humans involved in dependency generation and maintenance for quite some time -- even with use of tools like strace :).

KS2011: Afternoon topics

Posted Oct 27, 2011 2:28 UTC (Thu) by nlhepler (subscriber, #54047) [Link]

Fabricate uses strace to find all uses of open() during the build (gcc or whichever compilation tool) to build a dependency tree for each of the sources specified, for each target built, so that later re-builds only re-compile the files modified since the last build. This way, you don't need to specify the dependency structure yourself.

As for standardizing on a build system, I have mixed feelings about using automake. It's a PITA, even the presenters admit this. All the problems with regenerating the configure, autoconf-ing, incompatibilities between versions, etc make it an absolute pain to use. Extending something like waf or fabricate to perform all the tests that are needed (is libA around? what about libB?, etc), and to build a monolithic C function to grab platform-specific information seems like a much less painful approach. Also, fabricate is a single python file that can easily be included with your package -- not the best approach, but it could give something like a make.py a fallback if it's not available system-wide.

KS2011: Afternoon topics

Posted Oct 27, 2011 20:27 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Why do you need to modify CFLAGS externally? It should be done at most to pass additional optimizations flags.

And automake sucks at that, by blindly propagating flags even where they shouldn't be propagated. To get the same effect in SCons you just need to add ENV=os.environ to the Environment constructor.

Scons supports cross-compilation just fine; toolchain files are much easier to use than tons of --with-yet-another-fscking-lib switches that automake requires (and then fails nonetheless). And if you have a complex build that requires building an executable tool as a part of the process... Well, scons and cmake are much better in that regard.

Autotools might be doing things consistently, but it's doing them consistently fugly.

KS2011: Afternoon topics

Posted Oct 27, 2011 22:23 UTC (Thu) by nix (subscriber, #2304) [Link]

Why would I want to modify CFLAGS? Let's see. -fstack-protector, might want that if I want a bit of extra security. -fomit-frame-pointer, might want that if I like a fairly large speedup on x86-32. -g, might want that (particularly if I'm a distributor generating packages with separated debugging information). -D__NO_STRING_INLINES, might want that because the string inlines still slow glibc down more than they speed anything up in my benchmarks. -L and -I, might well need them if some packages are installed in non-standard prefixes. This is a small subset of the reasons I tweak CFLAGS, and most of them are applicable to any random distributor you could name.

Automake gets it *just* right: the CFLAGS, LDFLAGS and so on are propagated correctly, supplemented with makefile-specific flags for those targets where it is necessary, and possibly occasionally overridden for those very very few targets where that is necessary (though this elicits a warning). This is done *by default*: there is no need for the makefile author to hack about with anything or remember to do anything (which nearly all will of course forget).

Automake doesn't take any --with- switches at all: are you thinking of Autoconf? Even *that* only requires them if autodetection fails, in which case you are installing things in nonstandard prefixes and should expect to have to do things by hand.

Oh, and it doesn't require building any kind of 'executable tool' at all, only a portable shell script. The entire *reason* why configure scripts are a shell script is because the distributor / builder needs nothing but a shell and core Unix shell tools: they do *not* need Autoconf or Automake. For scons you need a whole flipping Python installation -- fairly common these days, you think? I wish you were right: Python is by no means ubiquitous. The shell is. Until recently Autoconf's shell requirements were so lax that it actually allowed Solaris 2.6-and-below /bin/sh, which doesn't support shell functions: so we're talking a 1980s shell here!

KS2011: Afternoon topics

Posted Oct 27, 2011 22:35 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> Automake gets it *just* right: the CFLAGS, LDFLAGS and so on are propagated correctly, supplemented with makefile-specific flags for those targets where it is necessary, and possibly occasionally overridden for those very very few targets where that is necessary (though this elicits a warning). This is done *by default*: there is no need for the makefile author to hack about with anything or remember to do anything (which nearly all will of course forget).

So create a proper configuration with these flags instead of hacking them in using the command line or system environment. That way they'll be cleanly visible in the build file instead of being tucked away in some obscure driver script.

>Automake doesn't take any --with- switches at all: are you thinking of Autoconf? Even *that* only requires them if autodetection fails, in which case you are installing things in nonstandard prefixes and should expect to have to do things by hand.

Yes, sorry. Of course I meant autoconf, and you have to do a lot of --with if you try to do cross-compilation with even remotely non-standard libdirs.

>Oh, and it doesn't require building any kind of 'executable tool' at all, only a portable shell script. The entire *reason* why configure scripts are a shell script is because the distributor / builder needs nothing but a shell and core Unix shell tools: they do *not* need Autoconf or Automake.

Portable? They don't work with MSVC which is probably the most popular C++ compiler. I can hardly call them 'portable'. Even between Unix platforms it's a hit-and-miss - most Linux software requires tweaking to build correctly on HP-UX.

KS2011: Afternoon topics

Posted Oct 28, 2011 16:57 UTC (Fri) by nix (subscriber, #2304) [Link]

> So create a proper configuration with these flags instead of hacking them in using the command line or system environment. That way they'll be cleanly visible in the build file instead of being tucked away in some obscure driver script.

So... I pointed out that scons requires understanding and hacking every single build script rather than just putting configuration state in one centralized place (be that a site-config file, as autoconf, or a wrapper that runs the tool, as cmake)... and you suggest that I should fix this by... understanding and hacking every build script!

I suspect that you don't understand what I'm driving at. A build system which requires distributors to hack every build script in order to make simple adjustments to flags and paths will never be popular among distributors, and packages that use it will forever be discriminated against and make packagers and builders despair when they meet them.

> Portable? They don't work with MSVC which is probably the most popular C++ compiler.

Now I know you're just trying to win regardless of logic. POSIX programs are very rarely portable to MSVC: I can count the number of programs on my Linux system that can also be built with MSVC on the fingers of two hands (and if I rule out SDL and its subsidiary packages, that's one hand). Generally such packages need radical internal surgery to make them work on Windows at all, and working without an abstraction layer such as Cygwin is even harder.

So, sorry, portability to MSVC is rarely worth considering when looking at Unix build systems.

Even for such programs, msys provides a platform specifically intended to allow configure scripts to run. (It's not heavyweight.)

KS2011: Afternoon topics

Posted Oct 30, 2011 23:09 UTC (Sun) by cortana (subscriber, #24596) [Link]

Actually I believe there is work going into making Automake work with MSVC. With CC=cl, automake creates a 'compile' script in the source directory which takes GCC-like options and invokes MSVC with the equivalents (where possible).

KS2011: Afternoon topics

Posted Nov 23, 2011 22:33 UTC (Wed) by oak (guest, #2786) [Link]

> Oh, and it doesn't require building any kind of 'executable tool' at all, only a portable shell script. The entire *reason* why configure scripts are a shell script is because the distributor / builder needs nothing but a shell and core Unix shell tools: they do *not* need Autoconf or Automake. For scons you need a whole flipping Python installation

I think python-minimal is quite a bit smaller than a POSIX base system (shell, perl, awk, GNU core utils etc). I don't understand why Autotools generate scripts that are supposed to work with some foobar non-POSIX shells that don't support functions (which makes configure scripts pretty much unreadable), but then require Perl etc.

KS2011: Afternoon topics

Posted Nov 25, 2011 18:46 UTC (Fri) by nix (subscriber, #2304) [Link]

> Autotools generate scripts that are supposed to work with some foobar non-POSIX shells that don't support functions

This was dropped some time ago. configure scripts now use shell functions to some degree.

KS2011: Afternoon topics

Posted Oct 28, 2011 10:27 UTC (Fri) by Rudd-O (subscriber, #61155) [Link]

waf fixes all your complaints about scons.

the only problem with waf is that the documentation is arcane and the code even more so.

but those are problems with the autocrap too, so nothing new there.

KS2011: Afternoon topics

Posted Oct 28, 2011 11:33 UTC (Fri) by josh (subscriber, #17465) [Link]

waf also encourages projects to copy waf into their source tree, a nightmare for distro maintainers.

KS2011: Afternoon topics

Posted Oct 30, 2011 17:20 UTC (Sun) by foom (subscriber, #14868) [Link]

I don't see how it's any worse than autotools, which does pretty much the same thing. Some distros try to have an "always rerun automake/autoconf" rule, but that frequently runs into problems because of version incompatibilities.

I'm sure a similar rule could be made for waf-using packages, with similarly problematic results.

KS2011: Afternoon topics

Posted Oct 30, 2011 18:12 UTC (Sun) by josh (subscriber, #17465) [Link]

Automake has remained mostly backward compatible since after 1.4. waf makes no guarantees whatsoever, and replacing the version of waf included in a package requires much more work.

KS2011: Afternoon topics

Posted Oct 30, 2011 21:31 UTC (Sun) by foom (subscriber, #14868) [Link]

> Automake has remained mostly backward compatible since after 1.4

Yet, it's still necessary for Debian to include automake 1.4, 1.7, 1.9, 1.10, and 1.11, and autoconf 2.13, 2.59, 2.64, and 2.67.

If it were that easy to upgrade, I'd expect that everything would've been converted to at least automake 1.10 by now (which came out in 2006).

KS2011: Afternoon topics

Posted Oct 30, 2011 23:11 UTC (Sun) by cortana (subscriber, #24596) [Link]

Are those old autoconf/automake packages shipped in order to resolve build-dependencies for other packages in the archive, or are they shipped so that end users can still build their software which may be less well-maintained than your typical open source project?

KS2011: Afternoon topics

Posted Nov 1, 2011 2:42 UTC (Tue) by foom (subscriber, #14868) [Link]

> Are those old autoconf/automake packages shipped in order to resolve build-dependencies for other packages in the archive

Yes. Some reference material.

http://lists.debian.org/debian-qa/2011/03/msg00039.html
http://lists.debian.org/debian-devel/2011/10/msg00373.html

KS2011: Afternoon topics

Posted Nov 4, 2011 0:36 UTC (Fri) by geofft (subscriber, #59789) [Link]

Debian includes those because there hasn't been a huge reason not to. There's been discussion about removing some of them, and some thoughts about removing 1.4 -- it's definitely only in Debian for users' convenience, not because other things in Debian depend on it.

KS2011: Afternoon topics

Posted Nov 7, 2011 0:23 UTC (Mon) by foom (subscriber, #14868) [Link]

Nope, still not unused: there's 3 packages remaining build-depending on it in unstable, down from 30 in lenny and 8 in squeeze. (counting packages in main only). And of course that's not looking at packages whose maintainers aren't calling automake at all, but are shipping pre-built files generated with automake 1.4.

Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds