
KS2011: Afternoon topics

Posted Oct 26, 2011 17:16 UTC (Wed) by Cyberax (✭ supporter ✭, #52523)
Parent article: KS2011: Afternoon topics

> Automake is ugly, but the ugliness can be ignored; like democracy, it is messy but it is still the best system we have.

The faster Automake dies the better off we'll all be. Anything is better than automake: scons, waf, cmake, etc.

Please start using them. Samba switched to waf for samba4, and the results are great so far. And it's hard to find a body of code more complicated than samba4.



KS2011: Afternoon topics

Posted Oct 26, 2011 17:37 UTC (Wed) by nix (subscriber, #2304) [Link]

From a distributor's perspective scons is an utter nightmare and not at all a replacement for the Autoconf/Automake combination. scons is not a build system: it's a framework which lets you write your own build system in Python. This Python is rather ugly (because it's single-use), and, worse, every distributor has to read it and understand it and, often, extend it, because it is almost guaranteed not to support analogues of --prefix, --exec-prefix and other paths at configure or install time (the former for encoding paths into the binary, the latter for installing into temporary directories for packaging), DESTDIR support, CFLAGS/LDFLAGS/*FLAGS overriding, cross-compilation...

The list of things autoconf and automake do for you in a consistent fashion is endless, and scons does next to none of them. Neither do projects which use scons, and some of those facilities (notably CFLAGS overriding and install-time prefix or DESTDIR support) are essential for any project to be easily packaged. The best you'll get with scons is a means of doing this which is inconsistent for every single package: not very useful.
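To make that concrete, here is a minimal sketch of the boilerplate a scons-based package would have to invent for itself (the option names are hypothetical, precisely because scons standardizes none of them):

```python
# SConstruct -- hypothetical sketch. Every name below is per-package
# convention, since scons defines no standard prefix/DESTDIR/CFLAGS handling.
import os

env = Environment()

# Each project invents its own spelling for these options.
prefix = ARGUMENTS.get('prefix', '/usr/local')
destdir = ARGUMENTS.get('DESTDIR', '')

# Honoring the packager's CFLAGS is also opt-in.
if 'CFLAGS' in os.environ:
    env.Append(CCFLAGS=os.environ['CFLAGS'].split())

prog = env.Program('hello', ['hello.c'])

# Install under DESTDIR + prefix, again entirely by hand.
env.Alias('install', env.Install(destdir + prefix + '/bin', prog))
```

With autoconf/automake, every one of those knobs exists automatically and is spelled the same way in every package.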

(cmake used to be bad in this regard too, but is much better these days, though it still supports no analogue of the useful site-config files, so you need wrapper scripts to simulate that. waf appears to provide some of them, but in a profoundly non-extensible fashion, apparently under the belief that if you want to configure CFLAGS the same way across many packages, you'll be happy to hack the waf scripts for every one of them to ensure they're not overriding it. Wrong.)

KS2011: Afternoon topics

Posted Oct 26, 2011 19:03 UTC (Wed) by cjwatson (subscriber, #7322) [Link]

Amen to all of that. The analogy with democracy is very apt.

KS2011: Afternoon topics

Posted Oct 26, 2011 19:46 UTC (Wed) by tetromino (subscriber, #33846) [Link]

Indeed. Using scons or waf may be a reasonable solution for something that will only be built on your personal development machine, but it makes it a pain to package your software for distributions, and imho counts as a black mark against your package.

KS2011: Afternoon topics

Posted Oct 27, 2011 0:19 UTC (Thu) by nix (subscriber, #2304) [Link]

It also makes it a pain for people to build your software themselves when it turns out it's not packaged. "What? I have to install *another* build tool? Where's it going to be installed? What? I have to read a bunch of Python to figure it out? WTF?"

KS2011: Afternoon topics

Posted Oct 27, 2011 20:04 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Waf can create a self-contained configure script which does the build. The only tool you need in that case is a compiler (and the required libs, of course).

And installing scons is just as simple as installing one package. So what's the big deal with it?

KS2011: Afternoon topics

Posted Oct 27, 2011 22:15 UTC (Thu) by nix (subscriber, #2304) [Link]

The installation isn't that bad. The "oh heck now I have to learn it and (possibly) learn enough Python to be getting on with *and* figure out how this one-package build system works just so I can set CFLAGS and libdir appropriately for my biarch system, since they used lib and I use lib64 or vice versa or something like that"... *that* is not nice at all. In fact it was sufficient reason for me to avoid scons (and everything that used it to build completely) until recently.

(Sure, the problem goes away if it's a Python package... but if it's a Python package, why the heck aren't they using distutils, which supports prefix and destdir and everything necessary already, with zero effort needed from the packager?)
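(For reference, a minimal distutils setup.py really is this small, and --prefix and staged-root installs come built in; the package name here is made up:

```python
# setup.py -- minimal distutils sketch; 'example' is a hypothetical package.
# Prefix and staging support need no code at all:
#   python setup.py install --prefix=/usr --root=/tmp/stage
from distutils.core import setup

setup(
    name='example',
    version='1.0',
    py_modules=['example'],
)
```

That is the zero-effort baseline a Python package gives up by switching to scons.)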

KS2011: Afternoon topics

Posted Oct 28, 2011 10:28 UTC (Fri) by Rudd-O (guest, #61155) [Link]

waf supports CFLAGS and all the GNU dirs.

You don't have to learn anything new to use waf. If you know how to use configure, you know how to use waf.

KS2011: Afternoon topics

Posted Oct 28, 2011 16:50 UTC (Fri) by nix (subscriber, #2304) [Link]

No site-config files, which means I have to figure out which of the autoconfish variables I set are actually supported by waf (not many) and pass them on the command-line instead. Yes, it's a valid approach, but it *is* something new to handle and it does need special handling by packagers / autobuilders. You cannot treat it just like autoconf.

KS2011: Afternoon topics

Posted Oct 31, 2011 9:57 UTC (Mon) by pkern (subscriber, #32883) [Link]

Wasn't waf the insane thing that didn't want to keep any backward compatibility, so you can't recreate its configure stuff from old sources, on the theory that it's OK to stick with an old version? (And hence it got removed from Debian, for instance?) So you need to ship it with your sources?

Also, wasn't it the insane thing that actually ships a compressed tarball of Python libs inside the sources that use it? (The "waf" binary alongside "configure".)

I know Debian had to patch a bunch of packages because waf was stupid and the result didn't build on hppa. You couldn't just regenerate the output, because you needed the right version of waf to do that, i.e. the shipped one. So that needed patching. And you couldn't just apply the same patch either, because the build system is in a compressed blob.

Oh my, please spare us this ridiculous thing.

KS2011: Afternoon topics

Posted Oct 31, 2011 13:43 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Nope, the situation is more complex.

Waf can produce a 'wafscript', which is basically a 'configure' file. That's not the preferred form for modification, just as you wouldn't modify the 'configure' script directly rather than 'configure.ac'. Why waf wouldn't build on hppa, I have no idea - it's a pure Python app.

Their stance regarding system-wide installation is curious, but not a problem.

KS2011: Afternoon topics

Posted Oct 26, 2011 20:05 UTC (Wed) by pj (subscriber, #4506) [Link]

What about one of the newer build systems like fabricate (<http://code.google.com/p/fabricate/>), which autodiscovers dependencies via strace?

KS2011: Afternoon topics

Posted Oct 26, 2011 21:44 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Build systems can be replaced and things are fine. Make, less so. There are many things that replace make that do not support -j, -k, -n, or other niceties for debugging broken builds.

Personally, I'm more partial to CMake than other build systems for C/C++. Some languages have it better: Python has setuptools or distribute, and Haskell has cabal. These are built with the language in mind and work fine until other things are needed (e.g., asciidoc/xmlto for man pages, doxygen for documentation, and non-standard test suites, especially where language bindings are concerned and you can't build/use the bound language without hacks). Other languages still have it hard, such as Java with maven or whatever the XML horror du jour is.

What I *really* want is a build system that generates Makefiles (or Ninja files, or Visual Studio projects, or Xcode projects, or what have you) (CMake), defaults to out-of-source builds (CMake, and whatever autotools magic glibc has that no one bothers to copy), has a declarative syntax (cabal), and has no need to ship generated files (CMake, raw Makefiles).

I have hacks for CMake to handle LaTeX builds (including double and triple+bibtex passes) with out-of-source builds, symlinking my dotfiles, generating cross-referenced doxygen, and more, but I think a build system that supports something more akin to make's generator rules (something like Prolog/Mercury maybe?) would be nicer to work with (CMake's escaping and argument parsing are less than ideal for complicated things). Implicit know-how for supporting system copies versus bundled libraries, with automatic switches (which can be disabled if there are patches that make in-the-wild copies not work), would be wonderful as well. CMake's ExternalProject_Add gets close, but still has some rough edges (such as needing manual switches for system-copy support).

KS2011: Afternoon topics

Posted Oct 27, 2011 19:52 UTC (Thu) by pbonzini (subscriber, #60935) [Link]

Autotools supports out-of-tree builds with zero effort from the autotools user. All programs using autotools should support mkdir build; cd build; ../configure && make.

KS2011: Afternoon topics

Posted Oct 27, 2011 20:11 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

Yes, you can configure out-of-tree, but you can't autoreconf -i -f out-of-tree. I want *zero* files in the source tree that are not meant to be committed. So yes, autotools does support out-of-tree builds, but it still leaves a dirty source tree behind.

Out-of-source *build* is probably bad wording. More precisely: .gitignore should be empty, and git status (and the equivalent in other VCSes) should also be empty, starting from no generated files.

KS2011: Afternoon topics

Posted Oct 29, 2011 19:05 UTC (Sat) by fuhchee (guest, #40059) [Link]

"I want *zero* files in the source tree that are not meant to be committed."

The cure for that is to mean to commit the autoconf/automake-generated files.

KS2011: Afternoon topics

Posted Oct 31, 2011 20:46 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

> The cure to that is to mean to commit autoconf/automake-generated files.

Which, IMO, is worse than having them in the source tree after their generation. The autoconf/automake *generated* files belong in the build directory.

KS2011: Afternoon topics

Posted Oct 31, 2011 20:55 UTC (Mon) by fuhchee (guest, #40059) [Link]

"The autoconf/automake *generated* files belong in the build directory."

If you say so. :-)

KS2011: Afternoon topics

Posted Nov 1, 2011 14:30 UTC (Tue) by nix (subscriber, #2304) [Link]

Well, that's true if your source directory is a version-control checkout, because anything else is a recipe for conflict hell as different developers use slightly different versions of the generating tools (producing slightly different output).

It's certainly not true if your source directory is a release tarball (or other release medium). Autoconf et al should have been run for you by that point, and the result tested. That way end users don't need anything but a shell and ordinary build tools to run the build. (This is one area where cmake falls down: all the builders need a copy of it.)

KS2011: Afternoon topics

Posted Nov 1, 2011 14:41 UTC (Tue) by fuhchee (guest, #40059) [Link]

"anything else is a recipe for conflict hell"

In practice, if people are pragmatic, it's fine.
Developers can regenerate the files at will with any version that works.
In the case of version control branch merge problems, regenerate them again.

KS2011: Afternoon topics

Posted Oct 28, 2011 18:51 UTC (Fri) by anatolik (subscriber, #73797) [Link]

Most build systems work fine while the project is small (fewer than 10K files), but all of them suck once the project grows. Basically, none of these build systems are scalable.

I really like the idea of the Tup build system (http://github.com/gittup), which stores the graph in a local sqlite database and reparses/rebuilds the graph only when files change - this makes iterative development much more pleasant. Another cool feature is dependency autodiscovery - under the hood it uses a FUSE (or FUSE-like) library for that (this works on Linux, Mac OS X and Windows). And the third feature I like is the "monitor" - an inotify-based daemon that updates the graph of dependencies in the background while you change files in your editor.

I made some experiments with my project (100K files) and found that a null build takes ~1.6 sec without the monitor and 0.09 sec with it. A null build with my gmake-based build system on the same project takes 42 secs (it parses the makefiles, builds the graph of dependencies, and scans files for modification, but does not run any commands).

KS2011: Afternoon topics

Posted Oct 28, 2011 20:13 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

That actually looks close to what I'd like to see in a build system. The rules for "how to build" are declared at the top level and then piped through later. It also appears to have very little hard-coded knowledge about C/C++, and extending it to also support things like LaTeX pipelines, documentation builds, and more looks like it would be well supported.

It doesn't support make's -B or -n flags, though. The percentage output is nice. However, it doesn't seem to support out-of-source builds (I sort of hacked bootstrap.sh to do the bootstrap successfully, but there's no further support to get a full build working). The code base looks clean, so maybe I can write a patch and convince the maintainer to accept out-of-source builds as an option.

KS2011: Afternoon topics

Posted Oct 28, 2011 20:38 UTC (Fri) by anatolik (subscriber, #73797) [Link]

> It doesn't support -B
One of the mottos of the project is "you'll never need to do a clean build". Things like "clean" and -B are used when the build has been broken by incorrect dependency information (which happens especially often with recursive make). Tup provides "build consistency" - the output is always correct because the dependencies are always correct.

> make -n
"tup todo"

> it doesn't seem to support out-of-source builds
AFAIK the tup author is in favor of adding it. It's better to contact the mailing list, as I am not sure about his plans.

Oh yeah, I remembered a fourth thing that I like in tup - "buffered output". Output from commands is always printed atomically. No more interleaved output! Interleaved output is especially annoying when you have an error in a widely used header file - it makes the error messages really difficult to read.

KS2011: Afternoon topics

Posted Oct 28, 2011 21:00 UTC (Fri) by mathstuf (subscriber, #69389) [Link]

> One of the mottos of the project is "you'll never need to do a clean build". Things like "clean" and -B are used when the build has been broken by incorrect dependency information (which happens especially often with recursive make). Tup provides "build consistency" - the output is always correct because the dependencies are always correct.

So if I update a system library or header, it will relink the relevant parts? That's usually the corner case I've run into.

> AFAIK the tup author is in favor of adding it. It's better to contact the mailing list, as I am not sure about his plans.

That's good.

> Oh yeah, I remembered a fourth thing that I like in tup - "buffered output". Output from commands is always printed atomically. No more interleaved output! Interleaved output is especially annoying when you have an error in a widely used header file - it makes the error messages really difficult to read.

I usually do `make -kj8` followed by a plain `make` to untangle that, so this should help. I rarely do parallel builds from vim, however (<F8> is bound to "build", autodetecting CMake, make, pdflatex, cabal, rpmbuild, and a few others based on the current file), so interleaved output never really bothered me there.

KS2011: Afternoon topics

Posted Oct 27, 2011 0:06 UTC (Thu) by mhelsley (guest, #11324) [Link]

Discovering dependencies with strace is not reliable. You would also need to check code coverage of the strace'd run to know if you have anything resembling a reliable set of dependencies. Even if you have 100% code coverage (and few *small* programs can claim that) obscure data-dependent or time-dependent code paths may still be hidden.

The alternative is probably something vaguely like static analysis of the code. Static analysis is notoriously complicated and often produces a flood of false-positives though.

So my guess is we'll still have humans involved in dependency generation and maintenance for quite some time -- even with use of tools like strace :).

KS2011: Afternoon topics

Posted Oct 27, 2011 2:28 UTC (Thu) by nlhepler (subscriber, #54047) [Link]

Fabricate uses strace to find all calls to open() during the build (by gcc or whichever compilation tool) and builds a dependency tree for each of the sources specified, for each target built, so that later re-builds only re-compile the files modified since the last build. This way, you don't need to specify the dependency structure yourself.

As for standardizing on a build system, I have mixed feelings about using automake. It's a PITA; even the presenters admit this. All the problems with regenerating configure, autoconf-ing, incompatibilities between versions, etc. make it an absolute pain to use. Extending something like waf or fabricate to perform all the tests that are needed (is libA around? what about libB? etc.), and to build a monolithic C function to grab platform-specific information, seems like a much less painful approach. Also, fabricate is a single Python file that can easily be included with your package -- not the best approach, but it could give something like a make.py a fallback if fabricate isn't available system-wide.
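For illustration, a fabricate build script is roughly this (a sketch based on the run()/main() interface its documentation describes; the file names are made up):

```python
# build.py -- hypothetical fabricate build script. No dependencies are
# declared: fabricate straces each command and records every file it opens,
# so later runs re-execute only commands whose inputs have changed.
from fabricate import main, run, autoclean

def build():
    run('gcc', '-c', 'program.c')
    run('gcc', '-o', 'program', 'program.o')

def clean():
    autoclean()  # delete everything fabricate observed being created

main()
```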

KS2011: Afternoon topics

Posted Oct 27, 2011 20:27 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

Why do you need to modify CFLAGS externally? At most it should be done to pass additional optimization flags.

And automake sucks at that, by blindly propagating flags even where they shouldn't be propagated. To get the same effect in SCons you just need to pass ENV=os.environ to the Environment constructor.
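That is, something like this at the top of the SConstruct (by default scons runs tools in a deliberately scrubbed environment):

```python
import os

# Opt back in to the caller's environment (PATH, CC, CFLAGS and friends);
# scons scrubs it by default for reproducibility.
env = Environment(ENV=os.environ)
```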

Scons supports cross-compilation just fine; toolchain files are much easier to use than the tons of --with-yet-another-fscking-lib switches that automake requires (and then fails nonetheless). And if you have a complex build that requires building an executable tool as part of the process... well, scons and cmake are much better in that regard.

Autotools might be doing things consistently, but it's doing them consistently fugly.

KS2011: Afternoon topics

Posted Oct 27, 2011 22:23 UTC (Thu) by nix (subscriber, #2304) [Link]

Why would I want to modify CFLAGS? Let's see. -fstack-protector, might want that if I want a bit of extra security. -fomit-frame-pointer, might want that if I like a fairly large speedup on x86-32. -g, might want that (particularly if I'm a distributor generating packages with separated debugging information). -D__NO_STRING_INLINES, might want that because the string inlines still slow glibc down more than they speed anything up in my benchmarks. -L and -I, might well need them if some packages are installed in non-standard prefixes. This is a small subset of the reasons I tweak CFLAGS, and most of them are applicable to any random distributor you could name.

Automake gets it *just* right: the CFLAGS, LDFLAGS and so on are propagated correctly, supplemented with makefile-specific flags for those targets where it is necessary, and possibly occasionally overridden for those very very few targets where that is necessary (though this elicits a warning). This is done *by default*: there is no need for the makefile author to hack about with anything or remember to do anything (which nearly all will of course forget).

Automake doesn't take any --with- switches at all: are you thinking of Autoconf? Even *that* only requires them if autodetection fails, in which case you are installing things in nonstandard prefixes and should expect to have to do things by hand.

Oh, and it doesn't require building any kind of 'executable tool' at all, only a portable shell script. The entire *reason* why configure scripts are a shell script is because the distributor / builder needs nothing but a shell and core Unix shell tools: they do *not* need Autoconf or Automake. For scons you need a whole flipping Python installation -- fairly common these days, you think? I wish you were right: Python is by no means ubiquitous. The shell is. Until recently Autoconf's shell requirements were so lax that it actually allowed Solaris 2.6-and-below /bin/sh, which doesn't support shell functions: so we're talking a 1980s shell here!

KS2011: Afternoon topics

Posted Oct 27, 2011 22:35 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

> Automake gets it *just* right: the CFLAGS, LDFLAGS and so on are propagated correctly, supplemented with makefile-specific flags for those targets where it is necessary, and possibly occasionally overridden for those very very few targets where that is necessary (though this elicits a warning). This is done *by default*: there is no need for the makefile author to hack about with anything or remember to do anything (which nearly all will of course forget).

So create a proper configuration with these flags instead of hacking them in via the command line or the system environment. That way they'll be cleanly visible in the build file instead of being tucked away in some obscure driver script.

>Automake doesn't take any --with- switches at all: are you thinking of Autoconf? Even *that* only requires them if autodetection fails, in which case you are installing things in nonstandard prefixes and should expect to have to do things by hand.

Yes, sorry. Of course I meant autoconf, and you have to do a lot of --with if you try to do cross-compilation with even remotely non-standard libdirs.

>Oh, and it doesn't require building any kind of 'executable tool' at all, only a portable shell script. The entire *reason* why configure scripts are a shell script is because the distributor / builder needs nothing but a shell and core Unix shell tools: they do *not* need Autoconf or Automake.

Portable? They don't work with MSVC, which is probably the most popular C++ compiler. I can hardly call them 'portable'. Even between Unix platforms it's hit-and-miss - most Linux software requires tweaking to build correctly on HP-UX.

KS2011: Afternoon topics

Posted Oct 28, 2011 16:57 UTC (Fri) by nix (subscriber, #2304) [Link]

> So create a proper configuration with these flags instead of hacking them in via the command line or the system environment. That way they'll be cleanly visible in the build file instead of being tucked away in some obscure driver script.

So... I pointed out that scons requires understanding and hacking every single build script rather than just putting configuration state in one centralized place (be that a site-config file, as with autoconf, or a wrapper that runs the tool, as with cmake)... and you suggest that I should fix this by... understanding and hacking every build script!

I suspect that you don't understand what I'm driving at. A build system which requires distributors to hack every build script in order to make simple adjustments to flags and paths will never be popular among distributors, and packages that use it will forever be discriminated against and make packagers and builders despair when they meet them.

> Portable? They don't work with MSVC, which is probably the most popular C++ compiler.

Now I know you're just trying to win regardless of logic. POSIX programs are very rarely portable to MSVC: I can count the number of programs on my Linux system that can also be built with MSVC on the fingers of two hands (and if I rule out SDL and its subsidiary packages, that's one hand). Generally such packages need radical internal surgery to make them work on Windows at all, and working without an abstraction layer such as Cygwin is even harder.

So, sorry, portability to MSVC is rarely worth considering when looking at Unix build systems.

Even for such programs, msys provides a platform specifically intended to allow configure scripts to run. (It's not heavyweight.)

KS2011: Afternoon topics

Posted Oct 30, 2011 23:09 UTC (Sun) by cortana (subscriber, #24596) [Link]

Actually I believe there is work going into making Automake work with MSVC. With CC=cl, automake creates a 'compile' script in the source directory which takes GCC-like options and invokes MSVC with the equivalents (where possible).
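As a toy illustration only (the real 'compile' wrapper is a shell script, and handles many more cases), the translation is along these lines:

```python
# gcc-to-cl option translation, heavily simplified: the real automake
# 'compile' script also distinguishes object vs. executable output,
# handles libraries, and more.
import subprocess
import sys

def gcc_to_cl(args):
    out = []
    it = iter(args)
    for a in it:
        if a == '-c':
            out.append('/c')              # compile only
        elif a == '-o':
            out.append('/Fo' + next(it))  # object output (simplified)
        elif a.startswith('-I'):
            out.append('/I' + a[2:])      # include path
        elif a.startswith('-D'):
            out.append('/D' + a[2:])      # preprocessor define
        else:
            out.append(a)
    return out

if __name__ == '__main__':
    sys.exit(subprocess.call(['cl'] + gcc_to_cl(sys.argv[1:])))
```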

KS2011: Afternoon topics

Posted Nov 23, 2011 22:33 UTC (Wed) by oak (guest, #2786) [Link]

> Oh, and it doesn't require building any kind of 'executable tool' at all, only a portable shell script. The entire *reason* why configure scripts are a shell script is because the distributor / builder needs nothing but a shell and core Unix shell tools: they do *not* need Autoconf or Automake. For scons you need a whole flipping Python installation

I think python-minimal is quite a bit smaller than a POSIX base system (shell, perl, awk, GNU core utils, etc). I don't understand why Autotools generates scripts that are supposed to work with some foobar non-POSIX shells that don't support functions (which makes configure scripts pretty much unreadable), but then requires Perl etc.

KS2011: Afternoon topics

Posted Nov 25, 2011 18:46 UTC (Fri) by nix (subscriber, #2304) [Link]

> Autotools generates scripts that are supposed to work with some foobar non-POSIX shells that don't support functions

This was dropped some time ago. configure scripts now use shell functions to some degree.

KS2011: Afternoon topics

Posted Oct 28, 2011 10:27 UTC (Fri) by Rudd-O (guest, #61155) [Link]

waf fixes all your complaints about scons.

the only problem with waf is that the documentation is arcane and the code even more so.

but those are problems with the autocrap too, so nothing new there.

KS2011: Afternoon topics

Posted Oct 28, 2011 11:33 UTC (Fri) by josh (subscriber, #17465) [Link]

waf also encourages projects to copy waf into their source tree, a nightmare for distro maintainers.

KS2011: Afternoon topics

Posted Oct 30, 2011 17:20 UTC (Sun) by foom (subscriber, #14868) [Link]

I don't see how it's any worse than autotools, which does pretty much the same thing. Some distros try to have a "always rerun automake/autoconf" rule, but that frequently runs into problems because of version incompatibilities.

I'm sure a similar rule could be made for waf-using packages, with similarly problematic results.

KS2011: Afternoon topics

Posted Oct 30, 2011 18:12 UTC (Sun) by josh (subscriber, #17465) [Link]

Automake has remained mostly backward compatible since after 1.4. waf makes no guarantees whatsoever, and replacing the version of waf included in a package requires much more work.

KS2011: Afternoon topics

Posted Oct 30, 2011 21:31 UTC (Sun) by foom (subscriber, #14868) [Link]

> Automake has remained mostly backward compatible since after 1.4

Yet, it's still necessary for Debian to include automake 1.4, 1.7, 1.9, 1.10, and 1.11, and autoconf 2.13, 2.59, 2.64, and 2.67.

If it were that easy to upgrade, I'd expect that everything would've been converted to at least automake 1.10 by now (which came out in 2006).

KS2011: Afternoon topics

Posted Oct 30, 2011 23:11 UTC (Sun) by cortana (subscriber, #24596) [Link]

Are those old autoconf/automake packages shipped in order to resolve build-dependencies for other packages in the archive, or are they shipped so that end users can still build their software which may be less well-maintained than your typical open source project?

KS2011: Afternoon topics

Posted Nov 1, 2011 2:42 UTC (Tue) by foom (subscriber, #14868) [Link]

> Are those old autoconf/automake packages shipped in order to resolve build-dependencies for other packages in the archive

Yes. Some reference material.

http://lists.debian.org/debian-qa/2011/03/msg00039.html
http://lists.debian.org/debian-devel/2011/10/msg00373.html

KS2011: Afternoon topics

Posted Nov 4, 2011 0:36 UTC (Fri) by geofft (subscriber, #59789) [Link]

Debian includes those because there hasn't been a huge reason not to. There's been discussion about removing some of them, and some thoughts about removing 1.4 -- it's definitely only in Debian for users' convenience, not because other things in Debian depend on it.

KS2011: Afternoon topics

Posted Nov 7, 2011 0:23 UTC (Mon) by foom (subscriber, #14868) [Link]

Nope, it's still not unused: there are 3 packages remaining in unstable that build-depend on it, down from 30 in lenny and 8 in squeeze (counting packages in main only). And of course that's not looking at packages whose maintainers aren't calling automake at all, but are shipping pre-built files generated with automake 1.4.

