libabc: a demonstration library for kernel developers

At the 2011 Kernel Summit, Lennart Poettering and Kay Sievers talked about how kernel developers could write better low-level shared libraries. They have now released a sample library called libabc that is intended to demonstrate their recommended practices. "Please have a look, and even if you are not a kernel hacker there might be something useful to know in it, especially if you work on the lower layers of our stack."

libabc: a demonstration library for kernel developers

Posted Nov 2, 2011 19:38 UTC (Wed) by quotemstr (subscriber, #45331) [Link]

Maybe I'm getting old, but libabc's README comes across as unprofessional and more than a little sloppy. I also disagree with some particular points made in this README:

> Make your library threads-aware, but *not* thread-safe!

This piece of advice is confusing. I think the author meant to make libraries thread-agnostic: don't bend over backwards to accommodate access to the same data from multiple threads, but don't unnecessarily couple different pieces of data either.

> avoid hidden fork()/exec() in libraries

Great advice. We can just use posix_spawn instead: err, wait. How do I call this function under Linux? The author also should be more specific about pthread_atfork's alleged brokenness.

> You must place #ifndef libabc, #define libabc, #endif in your header files. There is no other way.

Actually, there is a better way: #pragma once (http://en.wikipedia.org/wiki/Pragma_once). It's shorter than traditional header guards (one line versus three), and it eliminates the risk of name collisions. #pragma once is nonstandard, but it's supported by every compiler that matters.
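
For reference, the two forms being compared look like this (the guard name is illustrative, not taken from libabc):

/* foo.h, guarded the way the README recommends */
#ifndef LIBABC_FOO_H
#define LIBABC_FOO_H

int abc_foo_frob(int x);

#endif /* LIBABC_FOO_H */

/* foo.h, using the nonstandard but widely supported directive instead */
#pragma once

int abc_foo_frob(int x);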

> executing out-of-process tools and parsing their output is not acceptable in libraries. Ever.

I strongly disagree with this statement. Doing work out-of-process gives robustness (and sometimes security) guarantees that are just not possible with calls into libraries. The world would be a better place if PAM, for example, did authentication separately. (I once had to debug a nasty file descriptor leak in pam_krb5 that wouldn't have been an issue had the PAM work been done in an ephemeral context.)
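
As a rough sketch of the kind of design being argued for here (and which the README advises against), a library could confine the risky work to a short-lived child and read a single result back over a pipe; the helper path and function name below are hypothetical, not part of any real library:

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run a (hypothetical) helper binary and capture one line of its output.
 * A crash or descriptor leak stays confined to the short-lived child. */
static int run_helper(const char *helper, char *out, size_t outlen)
{
    int pipefd[2];
    pid_t pid;

    if (pipe(pipefd) < 0)
        return -1;

    pid = fork();
    if (pid < 0) {
        close(pipefd[0]);
        close(pipefd[1]);
        return -1;
    }

    if (pid == 0) {                     /* child: do nothing but exec */
        dup2(pipefd[1], STDOUT_FILENO);
        close(pipefd[0]);
        close(pipefd[1]);
        execl(helper, helper, (char *)NULL);
        _exit(127);                     /* exec failed */
    }

    close(pipefd[1]);                   /* parent: read the one result */
    FILE *f = fdopen(pipefd[0], "r");
    int ok = (f && fgets(out, (int)outlen, f) != NULL);
    if (f)
        fclose(f);
    else
        close(pipefd[0]);
    waitpid(pid, NULL, 0);
    if (!ok)
        return -1;
    out[strcspn(out, "\n")] = '\0';
    return 0;
}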

> separate 'mechanism' from 'policy'

I don't think this phrase means what the author thinks it means.

libabc: a demonstration library for kernel developers

Posted Nov 2, 2011 19:44 UTC (Wed) by bagder (subscriber, #38414) [Link]

> supported by every compiler that matters

For those of us who aim at C89-level compatibility and want the code to build on just about anything 32-bit or better, that's just not true.

libabc: a demonstration library for kernel developers

Posted Nov 2, 2011 19:59 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

And this is relevant for Linux exactly how?

Besides, the question of autotools then arises. It certainly does not work on 'anything 32-bit', or even on more than about 10% of the computers in the world.

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 7:43 UTC (Thu) by bagder (subscriber, #38414) [Link]

It is relevant to Linux because lots of libraries have this idea of portability, including many of the most popular ones on Linux.

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 20:11 UTC (Thu) by bfields (subscriber, #19510) [Link]

The audience for this is kernel developers writing low-level libraries that use Linux-specific interfaces.

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 13:01 UTC (Fri) by HelloWorld (guest, #56129) [Link]

Portability is a really good reason not to use autotools. You may have heard of this thing called Windows, which doesn't have a Bourne-like shell.

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 18:53 UTC (Fri) by raven667 (subscriber, #5198) [Link]

Windows is so different though that you might as well complain about portability to MVS as well while you are at it.

libabc: a demonstration library for kernel developers

Posted Nov 5, 2011 0:01 UTC (Sat) by cmccabe (guest, #60281) [Link]

Even if we're just talking about POSIX operating systems, I would argue that autotools doesn't have a great record on portability.

Basically, autotools puts the burden of ensuring portability on you, the programmer. Did you create an m4 macro that expanded out to some shell code that is interpreted differently on Solaris or HPUX? Suddenly your supposedly portable project is not portable any more. And since hardly anyone has 100 different UNIX flavors around to test on, the odds that your project is broken somewhere are practically 100%.

CMake gets around this problem by defining its own language, which works the same on all platforms. This removes a great burden from the programmer and enables the creation of actually (as opposed to theoretically) portable code. (CMake also allows you to specify a minimum CMake version or activate compatibility modes if you want.)

If you can not test something - it's broken.

Posted Nov 5, 2011 1:06 UTC (Sat) by khim (subscriber, #9252) [Link]

Basically, autotools puts the burden of ensuring portability on you, the programmer.

Sure, but this is always the case.

CMake gets around this problem by defining its own language, which works the same on all platforms.

Does it come with its own portable POSIX implementation, too? Practically speaking, when I've faced portability problems in the past, differences between the Solaris or HPUX shells were the simplest problems to solve. Bugs in the standard library were much harder to code around.

This removes a great burden from the programmer and enables the creation of actually (as opposed to theoretically) portable code.

It solves about 5% of the problem and creates its own problems instead: where in autoconf I can just write shell code which can do practically anything, in CMake I'm in a straitjacket which makes it harder to make mistakes but also makes sure they are harder to fix.

Sorry, but from my experience CMake only makes sense for projects which decided, for one reason or another, to support Windows. And we are talking about low-level Linux libraries here.

If you can not test something - it's broken.

Posted Nov 5, 2011 4:37 UTC (Sat) by cmccabe (guest, #60281) [Link]

As you correctly pointed out, build systems can only make portability harder, never easier. They can't write the portable C (or whatever) code for you. Autotools makes portability harder by closing off an entire platform to you (Windows), and by forcing you to write possibly non-portable shell code to do routine tasks.

You can also write shell code in CMake if you desire. 99% of the time, it just is not necessary because the build system, you know, manages the build for you.

As a "bonus" automake-infested projects are impossible to subject to any kind of automated analysis, because of the many Turing-complete steps you need to go through to even figure out what the heck you are supposed to be building. As a further bonus, those steps add 20 seconds of coffee break time to each build, even if you're only building one thing.

Nope, automake is not my favorite thing...

Hmm...

Posted Nov 5, 2011 9:28 UTC (Sat) by khim (subscriber, #9252) [Link]

Autotools makes portability harder by closing off an entire platform to you (Windows)

s/harder/easier/

Windows is so alien compared to the rest of the universe that it's better to leave it be. Whoever wants to mess with that monstrosity has my condolences, but I don't see why I should bother crippling things for Windows' sake. In general I think it's the right approach: compare Git (which started with the assumption that Windows is not important) and Hg (which was developed with Windows in mind). I've worked with a lot of projects, and the number of kludges they had for Windows usually exceeds the number of kludges for all sane (POSIX-compatible) platforms combined. This is not an accident, but the result of deliberate sabotage on Microsoft's part (NT was sold as a POSIX-compatible system, remember?), so I don't think we should spend our time thinking about Windows. Let the poor sods who are forced to use that platform do whatever they want - as long as it does not affect the rest of us much.

by forcing you to write possibly non-portable shell code to do routine tasks.

IMNSHO it's better to write routine tasks in a well-known (albeit weird) language, rather than learn "yet another language of the day".

Nope, automake is not my favorite thing...

Ah, no objections here. Automake is horrible. But it's the standard - and that's what counts. Kind of the reverse of the usual saying: all automake projects are unhappy - but they are unhappy in the same way; all CMake/scons/whatever projects [may] be happy (I'm not sure), but they are happy in their own way and demand an individual approach.

Besides, you can use autoconf without automake - often that's a good enough approach, albeit for libraries automake is better because of the libtool integration.

As a further bonus, those steps add 20 seconds of coffee break time to each build, even if you're only building one thing.

Well, this is true to some degree, but then those "20 seconds" are split as 15 seconds / 5 seconds + 1 second for libabc (./autogen.sh or ./configure, make), and you are not supposed to rerun ./autogen.sh constantly. In fact only libabc developers should run it. Yes, autotools are pigs, but today's systems are more than powerful enough to cope with them.

Hmm...

Posted Nov 7, 2011 0:31 UTC (Mon) by raven667 (subscriber, #5198) [Link]

Nope, automake is not my favorite thing...
Ah, no objections here. Automake is horrible. But it's the standard - and that's what counts. Kind of the reverse of the usual saying: all automake projects are unhappy - but they are unhappy in the same way; all CMake/scons/whatever projects [may] be happy (I'm not sure), but they are happy in their own way and demand an individual approach.

This is exactly the point that Lennart seemed to be trying to make: not that autotools are particularly awesome, but that they are a de facto standard. Downstream users such as sysadmins, packagers, automated build tools, etc. know how to deal with automake and not with any other flavor-of-the-week system.

Hmm...

Posted Nov 7, 2011 3:09 UTC (Mon) by alankila (guest, #47141) [Link]

Maybe it would be possible to make autotools not-crappy by adopting a strategy similar to cmake's:

1) treat configure.ac as the definition of the project's build rules, like the CMakeLists.txt file;
2) the build system uses this new utility, let's call it "fastconf", to read configure.ac and directly generate a Makefile without ever invoking the shell or m4, or generating any other files;
3) all build rules are written to a single Makefile, to avoid the horrors of recursive make invocations.

It's obvious that a whole lot of the functionality of autotools would be lost, but maybe this transition would nevertheless be worth it. Because my experience with autotools is that every time this tool updates, there's always a project which no longer builds because something that used to work doesn't work anymore for whatever reason. And it's surprisingly hard to force autotools to rebuild all of its intermediate files, which makes debugging autotools build issues doubly frustrating.

I think autotools' problem is the fundamental design: a utility written in one programming language generates code to be run in another programming language, which in turn generates code/data for yet more programming languages. Use of code generators always seems to result in an unworkable mess, so they really ought to be off the table. (Ideally there would not even be a Makefile as the output, but maybe that file is acceptable enough...)

Hmm...

Posted Nov 7, 2011 8:56 UTC (Mon) by khim (subscriber, #9252) [Link]

Maybe it would be possible to make autotools not-crappy by adopting a strategy similar to cmake's.

No, it won't work.

the build system uses this new utility, let's call it "fastconf", to read configure.ac and directly generate a Makefile without ever invoking the shell or m4, or generating any other files;

You've lost me here. How will the usual sequence of "./configure ; make ; make install" work after that? How will your system interoperate with other packages?

It's obvious that a whole lot of the functionality of autotools would be lost, but maybe this transition would nevertheless be worth it.

No. You'll just create the N+1st build system, which will be ostracized by distributions and avoided if at all possible. Sure, if you are a large and established package (like KDE) you can push whatever you want - people will grumble but accept it - but if it's just a small library without a lot of cheerleaders then it's not a good choice.

Note how a similar approach (let's redo everything and drop a bunch of features to make everything better) is "welcomed" by the GNOME community (we have a long thread right here) or by the KDE community (well, that happened a few years ago, so the screams are not as loud now). The important prerequisite for that approach is abandonment of the "old style": only then can you push the "new style" down [unwilling] people's throats. And I don't believe people plan to abandon autotools support.

Because my experience with autotools is that every time this tool updates, there's always a project which no longer builds because something that used to work doesn't work anymore for whatever reason.

Funny. I experience this phenomenon with "modern" build systems, too. In fact (if you count the number of autoconfiscated projects and the number of projects built with "modern" tools) I'd say autotools are more robust.

Distributions cope with this problem just fine: they just provide a few versions of autotools and allow selection.

Use of code generators always seems to result in an unworkable mess, so they really ought to be off the table. (Ideally there would not even be a Makefile as the output, but maybe that file is acceptable enough...)

Somehow, in my experience, the worst problems arose from systems like SCons which replace make completely. Autotools are in fact the most robust solution - simply because their big changes are in the past; today it's mostly small bugfixes.

Hmm...

Posted Nov 7, 2011 14:56 UTC (Mon) by cmccabe (guest, #60281) [Link]

[snip windows discussion]

Some projects are definitely better off not supporting Windows. But it ought to be up to the developer to dictate that, not the build system. CMake manages to support Windows, Mac, Linux, and many other platforms, so I don't see why we should accept anything less out of a supposedly portable build system.

> Ah, no objections here. Automake is horrible. But it's the standard - and
> that's what counts

This is the same argument most people use for using Windows instead of Linux, especially in a corporate environment. I have to ask: don't your developers and users deserve something a little better than "barely good enough"? Are you really too old to learn something new?

Hmm...

Posted Nov 7, 2011 16:46 UTC (Mon) by khim (subscriber, #9252) [Link]

CMake manages to support Windows, Mac, Linux, and many other platforms, so I don't see why we should accept anything less out of a supposedly portable build system.

Because it refuses to support the most important platform: GNU. The GNU standard is "./configure ; make ; make install". If you don't support that standard, it means you've put portability to other systems ahead of portability to the GNU system. Which is kinda stupid for a Linux library, because Linux is often used as part of a GNU system. Even libraries which end up installed on non-GNU systems (like Android or OpenWRT) are usually built on a GNU system and should follow GNU conventions.

I have to ask: don't your developers and users deserve something a little better than "barely good enough"?

Yes. They deserve a system where hundreds of packages can be configured using the same approach (config.site). They deserve a system where said packages can be multiarch-compiled in a regular manner and installed using the same approach (make "install-exec" / make "install-data"). They deserve a system where you can easily combine different packages into one "superpackage". In short: they deserve the capabilities of autotools.

This all works only if you use autotools exclusively. It may be possible to create something like this starting from CMake, I don't know, but since GNU is built around autotools... autotools it is.

Are you really too old to learn something new?

I'm not too old to learn something new, but I'm definitely too old to start a pointless crusade with the goal of totally replacing autotools with CMake. And a world with a mix of autotools and CMake is much, much worse than a world of pure autotools.

Hmm...

Posted Nov 7, 2011 19:47 UTC (Mon) by cmccabe (guest, #60281) [Link]

[snip discussion of other platforms]

Android doesn't use autotools. They knew better than to touch that mess. They use plain old makefiles. However, they do use GNU Make extensions. I have not used OpenWRT, so I don't know what build system it uses.

> I'm not too old to learn something new, but I'm definitely too old to
> start a pointless crusade with the goal of totally replacing autotools with
> CMake. And a world with a mix of autotools and CMake is much, much worse
> than a world of pure autotools.

We already live in a world with a mix of autotools and other things.

The kernel doesn't use autotools (which makes kernel hackers recommending autotools unintentionally ironic.) KDE uses CMake. MySQL is transitioning from autotools to CMake. Anything that is portable to Windows (like OpenOffice, the GIMP, etc.) will need a parallel build system to handle that platform, because autotools can't.

Android itself has serious NIH syndrome...

Posted Nov 8, 2011 6:15 UTC (Tue) by khim (subscriber, #9252) [Link]

Android doesn't use autotools. They knew better than to touch that mess.

I know. They created their own mess instead. As usual it makes life easier for them (albeit not by much: lots of Android developers, just like lots of Chrome developers, hate GYP - but since they must support Windows, it stays). When you develop software for Android you don't need to bother with all that and can easily use the NDK and autotools.

We already live in a world with a mix of autotools and other things.

No. We live in a world of autotools with some additional abominations here and there. Every time you hit yet another such project (be it a pure Makefile-based project like libgd or a CMake-based project like OpenSceneGraph) you have a problem. Often the problem is eventually solved by autoconfiscation (the last version of libgd uses autotools like a sane package), sometimes you must tolerate the abomination (bzip2 is the standard example), but if your project needs such a library it's always a problem.

KDE uses CMake.

Thankfully KDE is its own "distribution in the distribution": it's hard to use KDE libraries outside of KDE for reasons other than the build system, thus it's not important what it uses.

MySQL is transitioning from autotools to CMake.

This one is problematic, yes, but one can always switch to MariaDB.

As I've said: non-autoconfiscated libraries are always a problem, but if you are a well-established and important project you can get away with it. A new library will just be rejected.

The kernel doesn't use autotools (which makes kernel hackers recommending autotools unintentionally ironic.)

This is not a recommendation of kernel hackers. This is a recommendation for kernel hackers by people who develop userspace plumbing. The kernel uses its own conventions because, in a sense, the kernel is its own universe: it cannot use standard libraries at all, it must use some foul tricks to make the whole thing work on bare metal, etc.

Android itself has serious NIH syndrome...

Posted Nov 8, 2011 17:39 UTC (Tue) by cmccabe (guest, #60281) [Link]

> I know. They created their own mess instead. As usual it makes life easier
> for them (albeit not by much: lots of Android developers, just like lots of
> Chrome developers, hate GYP - but since they must support Windows, it
> stays). When you develop software for Android you don't need to bother
> with all that and can easily use the NDK and autotools.

I work on Android for NVidia. We do not use gyp. In fact, I am not familiar with that build system. We use the Android build system, which is simply makefiles which use GNU extensions. If you want to learn more about it, you can check it out here: http://git.android-x86.org/?p=platform/build.git;a=summary

I have developed software for Android before, including using the NDK. The NDK also relies on makefiles. There is no automake or autotools component. You have to write an Android.mk file to build whatever you're building. The Java software you develop on Android builds with ant. Again, there is no autotools.

What it boils down to is this: we both agree that autotools is mediocre at best. Are you willing to tolerate mediocrity? I'm not.

Android itself has serious NIH syndrome...

Posted Nov 8, 2011 20:26 UTC (Tue) by mpr22 (subscriber, #60784) [Link]

Unfortunately, it appears that no potential replacement for autotools is sufficiently widely regarded as having a clear and compelling edge over it - indeed, most of them appear to be Marmite.

Hmm...

Posted Nov 10, 2011 14:04 UTC (Thu) by nix (subscriber, #2304) [Link]

As an aside, I don't have a problem with cmake, except that it is pointlessly hard to get the equivalent of 'configure --help': you have to make a new directory, cd into it, then do ccmake .. and hit 'c', then note down the flags you want, quit, and wipe the directory (assuming you want to note those flags down for your autobuilder rather than make it by hand).

But as for supporting an equivalent of config.site... well, all my config.site does is run a shell script under 'eval', passing in as arguments those 'variables set by options' by configure whose default value is non-NULL. All I have to do for cmake is arrange for my autobuilder to add default arguments to the cmake invocation which are computed by calling the same shell script and doing a few trivial transformations of the output with sed (to change configure-style variable names to cmake-style command-line arguments). This is not hard. My configure-running script is 85 lines; my cmake-running script is 32, mostly comments. I don't see the point of whining over 32 lines and half an hour's shell script hacking.

Hmm...

Posted Nov 10, 2011 16:17 UTC (Thu) by cladisch (✭ supporter ✭, #50193) [Link]

> it is pointlessly hard to get the equivalent of 'configure --help'

And why isn't there a wrapper script named 'configure' that offers an autotools-compatible interface? This should be possible for all build systems that can generate makefiles.

Hmm...

Posted Nov 11, 2011 16:50 UTC (Fri) by nix (subscriber, #2304) [Link]

Quite. The command-line interface of CMake is pretty horrible all round: the -DCAPITAL_LETTERS=FOO interface is very silly. That D is pure excise. Why the cmake authors thought that modelling the command line on the C preprocessor rather than on getopt_long() -- a far more common program invocation syntax -- was a good idea is quite beyond me.

Hmm...

Posted Nov 7, 2011 15:12 UTC (Mon) by cmccabe (guest, #60281) [Link]

> Kind of the reverse of the usual saying: all automake
> projects are unhappy - but they are unhappy in the same way; all
> CMake/scons/whatever projects [may] be happy (I'm not sure), but they are
> happy in their own way and demand an individual approach.

Ok, I know I already replied to this, but I have to make one other comment here. automake projects are not all "unhappy in the same way." There are a lot of different ways of using autotools, and there is no standard.

For example, some projects use libtool. Other projects don't.

Some projects use autoconf, but not automake. The end result is that given a file name, I can't tell whether it's a generated file or an input file without digging around in the source tree and figuring out the overall structure.

Some projects have a shell file that you're expected to run before the build, to encapsulate the necessary autotools commands for that project. Other projects don't provide that.

Different versions of autotools behave differently-- sometimes something valid for one version is a syntax error for another.

In contrast, in CMake, there is one program-- CMake-- that you either have installed, or you don't. There is one way of doing things. That is real standardization.

Well, there are...

Posted Nov 7, 2011 16:31 UTC (Mon) by khim (subscriber, #9252) [Link]

Ok, I know I already replied to this, but I have to make one other comment here. automake projects are not all "unhappy in the same way." There are a lot of different ways of using autotools, and there is no standard.

"./configure ; make ; make install" is pretty well-known standard. Most automake projects follow it.

For example, some projects use libtool. Other projects don't.

Some projects use autoconf, but not automake. The end result is that given a file name, I can't tell whether it's a generated file or an input file without digging around in the source tree and figuring out the overall structure.

Why does it matter?

Some projects have a shell file that you're expected to run before the build, to encapsulate the necessary autotools commands for that project. Other projects don't provide that.

I've not seen many projects which require that from the library user. Sure, someone who'd like to pull files from a VCS will need to do that, but a mere developer? Rarely, if ever.

Different versions of autotools behave differently-- sometimes something valid for one version is a syntax error for another.

Apparently you are doing something very obscure if you go beyond the usual things like CFLAGS, --prefix, etc., which work everywhere.

In contrast, in CMake, there is one program-- CMake-- that you either have installed, or you don't. There is one way of doing things. That is real standardization.

Somehow this does not stop anyone from creating a bunch of FindFoo.cmake files.

I think our fundamental assumptions are different: you assume the needs of the developer who writes the library are the most important. This is so stupid it's not even funny. If you write a library for your own use, then you can use whatever you want. But if you write a library for someone else, then the amount of human time a new developer needs to build it from a tarball is the most important metric. If "./configure ; make ; make install" works - then you are golden. If not - then your library is part of the problem and it would be good to replace it with something else.

Well, there are...

Posted Nov 7, 2011 19:33 UTC (Mon) by cmccabe (guest, #60281) [Link]

> [snip comment about FindFoo.cmake]

The purpose of CMake find scripts is to allow you to find a given library in a standard way. It's similar to pkgconfig. You can also just use pkgconfig from CMake if you want.

> I think our fundamental assumptions are different: you assume the needs of
> the developer who writes the library are the most important. This is so
> stupid it's not even funny. If you write a library for your own use, then
> you can use whatever you want. But if you write a library for someone else,
> then the amount of human time a new developer needs to build it from a
> tarball is the most important metric. If "./configure ; make ; make
> install" works - then you are golden. If not - then your library is part
> of the problem and it would be good to replace it with something else.

First of all, as I made clear, CMake provides a better experience for both developers and people building tarballs.

Second of all, most users never have to build tarballs. They simply install the binary package provided by their distribution.

Maybe what you're trying to refer to, in a very indirect way, is that the distribution guys who package software are more familiar with autotools than with CMake. That is true, but it's just the popularity-contest aspect again. Most system administrators are more familiar with Windows than Linux; does that mean we should switch?

This is stupid argument...

Posted Nov 8, 2011 5:47 UTC (Tue) by khim (subscriber, #9252) [Link]

Second of all, most users never have to build tarballs. They simply install the binary package provided by their distribution.

So what? For them, CMake or autoconf makes no difference. You may as well talk about the preferences of people who have never seen a computer at all.

First of all, as I made clear, CMake provides a better experience for both developers and people building tarballs.

Maybe. But most people who touch your build system are neither developers of your project nor people who are building tarballs - they are people who are using these tarballs. And for them SCons/CMake/etc. suck - simply because they offer nothing new over autotools and must be treated quite differently to get the same result.

Maybe what you're trying to refer to, in a very indirect way, is that distribution guys who package software are more familiar with autotools than CMake.

Not just distribution guys. Developers, too. If I'm a developer and your library is autoconfiscated, then I can drop the directory with this library into my project, add a few lines to configure.ac/Makefile.am - and that's all. If your library uses CMake or SCons, I need to do a lot of manipulation to convince it to play along.

Most system administrators are more familiar with Windows than Linux; does that mean we should switch?

This depends on your goal. Sometimes it's a good idea to start with Windows and add a Linux port later. Note that some administrators know Windows and some know Linux, but few know both well. The same is true for developers. When you try to use some "universal solution" you usually just make life miserable for everyone. It's much better to use Visual Studio projects on Windows and autotools on Linux than to try to use SCons or CMake for both.

This is stupid argument...

Posted Nov 8, 2011 21:57 UTC (Tue) by cmccabe (guest, #60281) [Link]

> Not just distribution guys. Developers, too. If I'm a developer and your
> library is autoconfiscated, then I can drop the directory with this library
> into my project, add a few lines to configure.ac/Makefile.am - and that's
> all. If your library uses CMake or SCons, I need to do a lot of
> manipulation to convince it to play along.

Bundling unrelated libraries with your project is a big no-no on Linux. Whether or not you agree with it, most distributions have a policy of "no bundled libraries."

http://fedoraproject.org/wiki/Packaging:No_Bundled_Libraries

Bundling libraries is common and expected on Windows. But as we've discussed ad nauseam, autotools does not support that platform.

> When you try to use some "universal solution" you usually just make life
> miserable for everyone. It's much better to use Visual Studio projects on
> Windows and autotools on Linux than to try to use SCons or CMake for
> both.

Have you ever actually developed a piece of cross-platform software using CMake or Scons? I have. You will be able to develop software faster and with fewer hassles.

Imagine if Lennart had been discussing systemd with someone making the same arguments as you.

Well, Lennart, this systemd stuff looks good, but you know, people are familiar with SysV init scripts, and they won't be familiar with your new stuff. Alternate init systems are "like crack." "It will come back to you if you choose anything else, sooner or later. Why? think... installation/uninstallation,... standard adherence... portability between distros, ..."

Would Lennart have accepted (his own) argument, and abandoned the systemd project? Of course not. He knew that making better software sometimes requires breaking compatibility with the old, obsolete software.

But when it comes to build systems, people are still repeating the old myth that there are no viable alternatives to autotools-- that everything else is somehow suspect or tainted, that nobody will ever be able to learn the new thing. This is complete BS.

Anyway, I can't keep posting in this thread. I just want to say to anyone reading this, don't be afraid to try something new. Your productivity will be much higher than people using the obsolete stuff. If your project is small enough, a plain old makefile can also be a good choice. Just don't use something you know is terrible.

That's why I've said "not just distribution guys".

Posted Nov 9, 2011 13:42 UTC (Wed) by khim (subscriber, #9252) [Link]

Bundling unrelated libraries with your project is a big no-no on Linux.

I don't know why you would ever want to add an "unrelated" library - usually you bundle a related library :-)

Indeed, even GNU Hello does that.

Whether or not you agree with it, most distributions have a policy of "no bundled libraries."

That's different. That's for packagers in distributions. And yes, autotools supports them just fine, too. The aforementioned GNU Hello will only use the bundled library if gettext is not available on the host system (and on a host system with glibc it's always available).

Have you ever actually developed a piece of cross-platform software using CMake or Scons?

Sure. In fact that's why I'm so against them. I do know them - and I hate them. We use SCons here for Native Client development - and I hate it. Simple things which are trivial with autotools (for example: "compile the same sources for four different platforms: one native and three cross" or "compile the same sources twice using different compilers to compare output") become a serious problem and require a lot of kludges.

You will be able to develop software faster and with fewer hassles.

Somehow I'm not seeing this. Not only is the build unbearably long (this is an SCons problem; CMake does not share this particular problem), but I have nothing like "make distcheck". A lot of similar simple amenities are lost when you start using "modern advanced build systems".

Imagine if Lennart had been discussing systemd with someone making the same arguments as you.

I'd laugh very loud.

installation/uninstallation

...are hard to do with SysV init scripts...

standard adherence

...every distribution is unique; there is no standard among sysv-init-based systems...

portability between distros

...does not exist: if you try to move a startup script from Red Hat to Debian or back, it won't work... even if we forget about things like Gentoo...

Would Lennart have accepted (his own) argument, and abandoned the systemd project?

Why would he abandon it? The right answer (and this is exactly what Lennart did) is to discuss the common tasks people need to perform and translate "sysv-init solutions" to "systemd solutions". In fact Lennart spent sizable effort doing exactly that.

He knew that making better software sometimes requires breaking compatibility with the old, obsolete software.

Bwa-ha-ha. Don't make me explode. The very description of systemd is "systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts" (emphasis mine). One of the PulseAudio FAQs is "Can I get OSS and ALSA applications to work with PulseAudio?" and the answer is, of course, yes.

Lennart is a low-level guy, not a GNOME guy. He knows compatibility is important. That is why his creations are accepted as a replacement for the "old way". Apparently the "new, improved" build systems are created by guys who don't know that. Oh well, their loss, then.

Compatibility can never be perfect, I understand that, but when I'm presented with a "new super-duper build system" and ask how to do simple and important (to me) tasks, I usually hear either "nobody needs that" or "you can probably invent a kludge" or even "patches are welcome". Sorry, guys, but it's your responsibility to provide backward compatibility with "the old way" in your creation, not mine. I can live with rare and insignificant problems, but when I ask the trivial, simple, obvious question "how do I add an autoconfiscated library to my build" and hear some tales about how I should not do that... well, sorry, no way I'd accept that build system.

But when it comes to build systems, people are still repeating the old myth that there are no viable alternatives to autotools-- that everything else is somehow suspect or tainted, that nobody will ever be able to learn the new thing. This is complete BS.

Sorry, but no, this is not "complete BS". You can say anything you want about Lennart, but the important fact is that both systemd and pulseaudio do include support for their important predecessors, while CMake, SCons, Waf and the others in their arrogance ignore autotools. Well, it's their problem, not mine. They may be interesting for people who need to support Windows more than they need to support GNU, but in the GNU world autotools will remain the standard until someone invents a good and compatible replacement.

That's why I've said "not just distribution guys".

Posted Nov 10, 2011 1:22 UTC (Thu) by cmccabe (guest, #60281) [Link]

I've never actually used SCons. I used Cons, which was the predecessor system. Cons was fairly slow, but not more so than autotools. CMake is quite fast.

> Simple things which are trivial with autotools (for example: "compile the
> same sources for four different platforms: one native and three cross" or
> "compile the same sources twice using different compilers to compare
> output") become a serious problem and require a lot of kludges.

I don't see why cross-compilation would be any harder (or easier) on CMake versus autotools.

I think you are missing the point of my systemd vs. SysV init comparison. SysV init sucks. Everyone who is technically knowledgeable in this area admits this. But system administrators are very familiar with it. It is standardized in the Filesystem Hierarchy Standard.

You might argue that that standardization is a mirage, because every distribution uses it differently. In the same way, autotools standardization is a mirage because everyone uses a slightly different set of macros and generator programs. Some projects don't even use automake at all, but simply autoconf. Some projects have wrapper scripts, and some do not. Some projects check in certain generated files and others don't. And so on, as we discussed.

Like autotools, SysV wins in every way except the way that's actually important: actually being good! If you are going to recommend that people use an inferior build system because of compatibility and the difficulty of retraining, you should also recommend that they use SysV init, for the same reasons.

In fact, whenever systemd comes up in a discussion, you see a lot of posts by system administrators who are scared of the change-- and rightly so. They know that there will be a period when they will not be as familiar with the new system as they are with the old. They also know that the new system will have rough edges and possibly a regression or two. But guess what-- positive change can never happen when people refuse to learn something new.

That's why I've said "not just distribution guys".

Posted Nov 10, 2011 6:32 UTC (Thu) by khim (subscriber, #9252) [Link]

Cons was fairly slow, but not more so than autotools.

Sadly this is not true. Autotools scale, scons does not - it's that simple.

CMake is quite fast.

Yup. That's why I'm not concentrating on speed when I discuss "modern build systems". Some of them can be rejected outright for that reason alone, but some of them are fast.

I don't see why cross-compilation would be any harder (or easier) on CMake versus autotools.

It's harder with SCons. CMake is actually somewhat acceptable here. The fact that cross-compilation was added so late is still felt sometimes, but yes, the difference is not that large today. It's still large enough to be a pain, though.

In the same way, autotools standardization is a mirage because everyone uses a slightly different set of macros and generator programs.

Sure, but there is a big difference: I cannot drop an init script from Debian into Red Hat and hope that it'll work. I can add any autoconfiscated project into my own project - and that works (sometimes with minor issues, but it works).

And so on, as we discussed.

And as discussed, it's a minor issue. I'm deeply involved (enough for that difference to matter) with just a couple of projects. Ok, maybe half a dozen, but still few enough to count on my hands. But I'm using many dozens of other projects, and they all look the same from the outside: "./configure ; make ; make install".

If you are going to recommend that people use an inferior build system because of compatibility and the difficulty of retraining, you should also recommend that they use SysV init, for the same reasons.

Nope. As I've said: all "new" init systems (upstart, systemd) have support for "old" SysV init scripts - and they had SysV init compatibility support from day one. They understand that they must support the "old way" of doing things until the "new way" takes over. The transition takes years - and all that time the old way should be supported seamlessly. Autotools compatibility in build systems either does not exist at all or is an afterthought. The question arises: if the developers of these systems cannot get even such a fundamental thing right, then what can they do right? And indeed - there are numerous other shortcomings in these build systems.

In fact, whenever systemd comes up in a discussion, you see a lot of posts by system administrators who are scared of the change-- and rightly so.

Sure. And then you see a lot of posts where they are relieved to find out that they can still use most of their SysV init tricks with systemd.

But guess what - positive change can never happen when people refuse to learn something new.

Sure. But positive change can and should happen piecemeal. People must have the right to learn "something new" at their own pace. Sometimes (when the change is big, as with pulseaudio or systemd) disruption is inevitable - but the authors should still try to reduce it as much as possible. The build system creators' stance is "forget everything you knew, you should learn our new system right away" - and this just will not do.

For example, cross-compilation: I cannot use my usual way ("./configure --target=blah-blah-blah"). Instead I must learn to write .cmake files right away. Why? I can understand the desire to get rid of M4. Ok. But why get rid of all the amenities offered by autoconf at the same time? Ah, you want to "build a new world free from the problems of the old one". Okay. That's your choice. If you just want to create a bunch of new problems - you can do that. Just... do that somewhere where I'm not involved, Ok?

That's why I've said "not just distribution guys".

Posted Nov 10, 2011 21:43 UTC (Thu) by cmccabe (guest, #60281) [Link]

> Sadly this is not true. Autotools scale, scons does not - it's that simple.

As I said before, I don't have any direct experience with scons. Thanks for posting the link to Electric Cloud's analysis. It does seem that scons has significant overhead.

Just because scons doesn't scale, though, we can't conclude that autotools does scale. The comparison in that article is not against automake, but against gmake, which is basically an implementation of plain old make. In my experience, CMake is much faster than autotools.

You keep claiming that not having a "configure" command is some kind of fundamental stumbling block for you when understanding other build systems. Really? The 'cmake' command is pretty much the CMake equivalent of configure. Like configure, you can use it to set up CFLAGS, specify build options, and so forth. Also like autotools, once you've invoked this command once, you don't have to do it again.

I have no doubt that upstart and systemd have some kind of support for dropping in sysV init scripts. But to a system administrator trying to understand why some daemon is not starting the way he wants, that is cold comfort. In practice, using the new system requires retraining and maybe even (gasp!) learning some new commands. Positive change won't happen until we make it happen.

It's not important

Posted Nov 10, 2011 23:28 UTC (Thu) by khim (subscriber, #9252) [Link]

Just because scons doesn't scale, though, we can't conclude that autotools does scale. The comparison in that article is not against automake, but against gmake, which is basically an implementation of plain old make.

Automake uses plain make "behind the scenes", so its speed is similar to make's.

In my experience, CMake is much faster than autotools.

Yes, but the difference is constant; it does not depend on the size of the project. I can live with a one-minute null build: not perfect, but acceptable. But SCons can easily take minutes just to say that you don't need to build anything!

You keep claiming that not having a "configure" command is some kind of fundamental stumbling block for you when understanding other build systems.

This is not a stumbling block. This is a litmus test. If people don't bother to even provide such a simple thing, then what else have they decided to redo without a good reason? In the case of CMake the answer is: almost everything. Perhaps first-class Windows support requires that, I don't know, but since I'm not interested in first-class Windows support...

The 'cmake' command is pretty much the CMake equivalent of configure.

It's not backward compatible, it assumes the whole world has adopted CMake already, and it provides no way to use existing autoconfiscated projects as subprojects. Basically: it's my way or the highway. Not a good way to offer a replacement. When KDE 4 and GNOME 3 did that, lots of people decided to keep the old version alive for as long as possible. With autotools this decision makes even more sense, because the autotools developers have no intention of dropping everything in favor of CMake, so you can continue to use autotools and ignore CMake.

I have no doubt that upstart and systemd have some kind of support for dropping in sysV init scripts.

They both work as drop-in replacements for SysV init. You can just replace SysV init with upstart or systemd - and everything just continues to work. However, systemd decided not to support upstart compatibility - and as a result Ubuntu ignored it.

In practice, using the new system requires retraining and maybe even (gasp!) learning some new commands.

Sure. This is where systemd largely dropped the ball. But at least it provided a cheat sheet and tried to support the same capabilities. CMake (like most "modern" systems) decided to redo everything. For example: why does "./configure CC=/my/super/duper/cc" become "cmake -DCMAKE_C_COMPILER=/my/super/duper/cc"?

CMake was clearly built by people who wanted to "throw away all the legacy" - eventually they added most of it back, but in a mangled and crippled form. And they arrogantly assume that everyone will want to adopt CMake: all the tutorials assume I want to convert everything to CMake right away and that I'll never want to go back. This level of arrogance does not inspire confidence, sorry.

That's why I've said "not just distribution guys".

Posted Nov 11, 2011 16:48 UTC (Fri) by nix (subscriber, #2304) [Link]

FWIW, autotools scales reasonably well. It only really falls over speed-wise when you start using things like gnulib, which, because it tests for every bug you can possibly imagine, tends to produce multi-megabyte configure scripts that take minutes to run.

The actual build, well, since cmake generates makefiles, the problem must be something other than cmake-versus-GNU Make. And it is. Firstly, a lot of projects, especially older projects, are rife with recursive make, which the advent of parallel compilation has made particularly clear was a bad idea from the start. But nonrecursive makefiles aren't hard to write with Automake, even for larger projects (see e.g. ImageMagick for an example). The payoff is just relatively low for the work that needs doing. The second problem is Libtool, which, though it is much faster than it was, really works at the wrong level: it should be part of Automake, so that the generated makefiles directly contain the appropriate magic incantations to generate shared libraries. Unfortunately if you fix that you break the people who are running Libtool directly, and those were its original user base: Automake integration came later.

That's why I've said "not just distribution guys".

Posted Nov 16, 2011 0:17 UTC (Wed) by cmccabe (guest, #60281) [Link]

Yeah, but having to cram everything into one giant Makefile sucks. It's like putting all your code into one .c file because your compiler is too dumb to handle multiple files. It's 2011. We shouldn't have to work around these kinds of tool limitations.

That's why I've said "not just distribution guys".

Posted Nov 16, 2011 2:53 UTC (Wed) by nix (subscriber, #2304) [Link]

You don't have to cram everything into one giant makefile. You just say

include $(shell find . -mindepth 2 -name Makefile)

then have subsidiary makefiles say, in part, something like

ifeq ($(words $(MAKEFILE_LIST)),1)
# invoked directly in this subdirectory: redirect to the toplevel makefile
all:
	$(MAKE) -C .. # path to toplevel makefile from here
else
# included from the toplevel makefile: everything else goes here
endif

And now you have lots of independent fragments of makefiles drawing on a common library declared in the toplevel makefile, new ones are automatically picked up, the makefiles can themselves be automatically built using make rules and the changes picked up, and anyone typing make in subdirectories gets an automatic make from the toplevel instead. (With further trivial adjustments you can arrange to automatically build only the subdir's targets, and automatically compute the toplevel makefile, and have all of that boilerplate inserted with a single $(eval ...) so the people writing those subsidiary makefiles never need to know all this stuff.)

Not only is this not hard, it is stuff which it is quite easy to autogenerate if you so desire. It is also stuff which is designed into GNU Make (hence the auto-reloading feature of "include" if the included makefiles are rebuilt), so it is not in any way a "workaround".

(Non-GNU makes are often a lost cause in this area. Anyone who's not using GNU Make is just trying to be difficult by this point: GNU Make runs everywhere and is more capable than just about all the competition.)

That's why I've said "not just distribution guys".

Posted Nov 16, 2011 21:55 UTC (Wed) by cmccabe (guest, #60281) [Link]

I believe you just described how the Android build system works, except that they use a Python script to locate Android.mk files, rather than the find command. If you could find a way to apply this trick to autoconf projects, you probably could speed up quite a few builds-- at the cost of making things even more complex. The other alternative is just to use CMake, which will do this all for you, without any boilerplate. But I guess you're probably sick of hearing about that by now :)

libabc: a demonstration library for kernel developers

Posted Nov 13, 2011 14:42 UTC (Sun) by Baylink (guest, #755) [Link]

> Windows is so different though that you might as well complain about portability to MVS as well while you are at it.

s/MVS/VMS/

libabc: a demonstration library for kernel developers

Posted Nov 2, 2011 20:43 UTC (Wed) by RCL (subscriber, #63264) [Link]

>> executing out-of-process tools and parsing their output is not acceptable in libraries. Ever
> I strongly disagree with this statement. Doing work out-of-process gives robustness (and sometimes security) guarantees that are just not possible with calls into libraries.

But then you rely on parsing the tool's output - the worst thing ever to rely on. This is not to mention that you rely on the ability to start an external program - that is, you make *a lot* of assumptions (about mounted filesystems and their layout, etc.).

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 21:54 UTC (Thu) by nix (subscriber, #2304) [Link]

It's quite all right to rely on parsing a tool's output if you control the tool as well. (Although in this case you should probably provide a library interface to it as well as one that requires exec()ing something, this is not always practical or possible.)

libabc: a demonstration library for kernel developers

Posted Nov 2, 2011 20:58 UTC (Wed) by aliguori (subscriber, #30636) [Link]

> This piece of advice is confusing. I think the author meant to make
> libraries thread-agnostic: don't bend over backwards to accommodate
> access to the same data from multiple threads, but don't unnecessarily
> couple different pieces of data either.

My take is: simply don't use global variables or anything else that would break a threaded application. Return contexts, such that when you hold a lock around a context, things work as expected.
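
A minimal sketch of that kind of interface, with hypothetical names (not necessarily libabc's actual API): all state hangs off an opaque context that the caller allocates, so the library itself stays free of globals and the caller decides how, or whether, to lock.

#include <stdlib.h>

/* Opaque per-caller context: all library state lives here, no globals.
 * abc_ctx, abc_ctx_new, etc. are illustrative names only. */
struct abc_ctx {
        int log_priority;
        void *userdata;
};

struct abc_ctx *abc_ctx_new(void)
{
        return calloc(1, sizeof(struct abc_ctx));
}

void abc_ctx_free(struct abc_ctx *ctx)
{
        free(ctx);
}

/* Thread-aware, not thread-safe: safe to call from many threads as long as
 * each thread uses its own ctx, or the caller serializes access to one. */
int abc_set_log_priority(struct abc_ctx *ctx, int priority)
{
        if (!ctx)
                return -1;
        ctx->log_priority = priority;   /* touches only the caller's context */
        return 0;
}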

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 21:55 UTC (Thu) by nix (subscriber, #2304) [Link]

I'd say that you can use global variables even in something supporting unlimited numbers of contexts, but if you do they should be threadsafe, not just thread-agnostic. (This is sometimes useful for cross-context caching and the like.)
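
As a rough illustration of that point (names invented for the example): the per-context state stays lock-free, while the one process-wide cache shared across contexts gets its own mutex.

#include <pthread.h>
#include <stdio.h>

/* One process-wide cache shared by every context.  Because it is global it
 * must be thread-safe, unlike per-context state, which only needs to be
 * thread-agnostic.  Purely illustrative. */
static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;
static char cached_value[256];
static int cache_valid;

/* Returns 1 and copies the cached value on a hit, 0 on a miss. */
int cache_lookup(char *out, size_t outlen)
{
        int hit;

        pthread_mutex_lock(&cache_lock);
        hit = cache_valid;
        if (hit)
                snprintf(out, outlen, "%s", cached_value);
        pthread_mutex_unlock(&cache_lock);
        return hit;
}

void cache_store(const char *value)
{
        pthread_mutex_lock(&cache_lock);
        snprintf(cached_value, sizeof(cached_value), "%s", value);
        cache_valid = 1;
        pthread_mutex_unlock(&cache_lock);
}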

I think it's pretty clear, actually...

Posted Nov 2, 2011 21:04 UTC (Wed) by khim (subscriber, #9252) [Link]

This piece of advice is confusing. I think the author meant to make libraries thread-agnostic: don't bend over backwards to accommodate access to the same data from multiple threads, but don't unnecessarily couple different pieces of data either.

The advice is to leave threading issues to the higher-level libraries or to the program itself, but to be ready to be called from different threads simultaneously for different datasets. It's explained in quite some detail there.

Great advice. We can just use posix_spawn instead: err, wait. How do I call this function under Linux?

Easy: you don't.

The author also should be more specific about pthread_atfork's alleged brokenness.

It's hard to be more specific in a README file. No, really. It's a README file, not a PhD thesis. If you actually follow the suggested advice and read the POSIX man page, you'll see that half of it is dedicated to a detailed explanation of why pthread_atfork should never be used.

Actually, there is a better way: #pragma once (http://en.wikipedia.org/wiki/Pragma_once). It's shorter than traditional header guards (one line versus three), and it eliminates the risk of name collisions.

Sadly it guarantees collisions when your library is embedded in another, bigger library. #pragma once can be a good choice for an application, but it's not acceptable for a library.

Doing work out-of-process gives robustness (and sometimes security) guarantees that are just not possible with calls into libraries.

You know, I think the author of PulseAudio would agree with you here - it uses a daemon for a reason.

But since there is no way to create an "ephemeral context" (no fork/exec, remember), the only way to do that is to create a separate daemon which is accessible over some RPC mechanism (D-Bus, for example).

> separate 'mechanism' from 'policy'

I don't think this phrase means what the author thinks it means.

I think it means exactly what it means. The "price" of a new process may vary wildly in different contexts. If the library creates such processes itself, then it embeds the process-creation policy - and that is just wrong. Think Android: its process manager keeps processes around if there are enough resources and kills them if it's under memory or CPU pressure. Library-created processes just mess everything up in such a situation.
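
One way to picture that separation, as a hedged sketch with invented names: the library supplies the mechanism (it knows what work needs doing) but leaves the policy of how and whether to spawn anything to a hook installed by the application, which is the only party that knows whether it runs under an Android-style process manager, inside a daemon, and so on.

/* Purely illustrative: the library never spawns workers on its own. */
typedef int (*spawn_fn)(void *userdata, const char *task);

struct lib_ctx {
        spawn_fn spawn;         /* policy, supplied by the application */
        void *spawn_userdata;
};

void lib_set_spawn_hook(struct lib_ctx *ctx, spawn_fn fn, void *userdata)
{
        ctx->spawn = fn;
        ctx->spawn_userdata = userdata;
}

/* Mechanism: the library knows *what* background work is needed... */
int lib_compress_in_background(struct lib_ctx *ctx, const char *path)
{
        if (!ctx->spawn)
                return -1;      /* ...but not *how* or *whether* to run it. */
        return ctx->spawn(ctx->spawn_userdata, path);
}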

I think it's pretty clear, actually...

Posted Nov 2, 2011 21:08 UTC (Wed) by quotemstr (subscriber, #45331) [Link]

> Sadly it guarantees collisions when your library is embedded in another, bigger library.

How so?

Simple

Posted Nov 2, 2011 21:31 UTC (Wed) by khim (subscriber, #9252) [Link]

When include files are copied around, they are sometimes treated as the same file from the "#pragma once" point of view (if the copying process keeps the timestamps) and sometimes as different files (if you put them in Git and pull them back).

Thus "#pragma once" is great way to create unreproducible build failures. With explicit include guard you sometimes trigger the GCC optimization (GCC does not reread file with include guard it it can understand that it's the same file) and sometimes it fails and GCC actually loads and parses file again - but it only affects compilation speed, never correctness.

Simple

Posted Nov 2, 2011 21:36 UTC (Wed) by quotemstr (subscriber, #45331) [Link]

> When include files are copied around, they are sometimes treated as the same file from the "#pragma once" point of view (if the copying process keeps the timestamps) and sometimes as different files (if you put them in Git and pull them back).

I've never seen this behavior. Any decent implementation of #pragma once should not flag two files with different contents as identical. Very old compilers were sometimes confused by various kinds of links, but this issue hasn't cropped up in a very long time.

Please provide a pointer to a bug report or a set of steps to demonstrate the behavior you describe.

Hmm... Very simple test...

Posted Nov 3, 2011 13:20 UTC (Thu) by khim (subscriber, #9252) [Link]

$ mkdir test1
$ mkdir test2
$ echo '#pragma once' > test1/test.h
$ echo 'abc' >> test1/test.h
$ cp -ai test1/test.h test2/test.h
$ echo '#include "test1/test.h"' > test.c
$ echo '#include "test2/test.h"' >> test.c
$ gcc -E test.c
# 1 "test.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "test.c"
# 1 "test1/test.h" 1

abc
# 2 "test.c" 2
$ touch test2/test.h
$ gcc -E test.c
# 1 "test.c"
# 1 "<built-in>"
# 1 "<command-line>"
# 1 "test.c"
# 1 "test1/test.h" 1

abc
# 2 "test.c" 2
# 1 "test2/test.h" 1

abc
# 2 "test.c" 2
$ gcc --version
gcc (Ubuntu 4.4.3-4ubuntu5) 4.4.3
Copyright (C) 2009 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

This is just a preprocessor test, but the compiler does the same thing.

Note that usually we do want this behavior (these files are identical - they come from the same source, after all) - and usually it works, but sometimes, when you do complex manipulations (Git in my case, though of course that's not the only possibility), everything blows up.

There is always a well-known solution to every human problem--neat, plausible, and wrong.

Well, "#pragma once" is such a solution - don't use it.

Simple

Posted Nov 3, 2011 0:56 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link]

We have a project with 20 identically named .h files (don't ask). #pragma once works just fine.

I think it's pretty clear, actually...

Posted Nov 2, 2011 21:45 UTC (Wed) by gus3 (guest, #61103) [Link]

> No, really. It's a README file, not a PhD thesis.
As a demonstration of "do this, not that," the why's and therefore's are entirely appropriate. Sure, the authors state that "even the POSIX standard admits it's broken," but a URL would be nice.

I think it's pretty clear, actually...

Posted Nov 2, 2011 21:49 UTC (Wed) by quotemstr (subscriber, #45331) [Link]

> half of it is dedicated to the detailed explanation for why pthread_atfork should never be used.

No it isn't. Did you read the linked manpage? Its lengthy rationale section explains why one *would* want to use pthread_atfork, and why a bare fork (i.e. *not* using pthread_atfork) is discouraged.

Also, most of these criticisms don't apply to fork-exec: who cares whether a mutex is held in the child when all that child will ever do is exec?
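
As a rough sketch (not any particular library's API): a child that does nothing but exec can safely ignore whatever locks other threads held at fork() time, as long as it sticks to async-signal-safe calls:

#include <unistd.h>

/* Sketch: spawn a helper; the child only makes async-signal-safe
 * calls (execv, _exit) before being replaced, so mutexes held by
 * other threads in the parent at fork() time are irrelevant to it. */
static pid_t spawn_helper(const char *path, char *const argv[])
{
        pid_t pid = fork();
        if (pid == 0) {
                execv(path, argv);
                _exit(127);             /* exec failed */
        }
        return pid;                     /* parent; -1 if fork failed */
}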

I think it's pretty clear, actually...

Posted Nov 2, 2011 22:33 UTC (Wed) by gus3 (guest, #61103) [Link]

The pthread_atfork() page (check the link I provide in my earlier comment) does list in the "Rationale" section several reasons why using the function is a dicey proposition.

I think it's pretty clear, actually...

Posted Nov 3, 2011 3:54 UTC (Thu) by RCL (subscriber, #63264) [Link]

> does list in the "Rationale" section several reasons why using the function is a dicey proposition

Where exactly?

It describes problems with fork() and the "solution" of using fork() then exec().

Then it proceeds to describe pthread_atfork() as a means to resolve the problem.

Then it describes an example usage of pthread_atfork().

In the last two lines it describes the order of registering atfork() handlers.

*Nowhere* in the document does it warn against using pthread_atfork, nor does it acknowledge its 'brokenness'.

I think it's pretty clear, actually...

Posted Nov 6, 2011 22:18 UTC (Sun) by foom (subscriber, #14868) [Link]

Well, in my experience (writing private software), every time someone has wanted to use pthread_atfork, I've recommended that they not. For one very simple reason: it does not distinguish between the common activity of spawning a process (fork/exec) and the relatively rare activity of forking and keeping both halves.

So, for example, you might shut down some auxiliary threads in a pthread_atfork prefork handler, to ensure that the threads aren't in the middle of corrupting your library's state when the child process wants to call into your library. But that's entirely unnecessary work if the next action was going to be exec! It just makes fork/exec slower, for no good reason.

Instead, we use an explicit teardown function that you can call if you like, before non-exec forks.
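
Something along these lines, with made-up names (a sketch of the idea, not our actual code): the caller quiesces the library's worker thread explicitly, and only when it really intends to fork without exec'ing.

#include <pthread.h>
#include <stdatomic.h>
#include <unistd.h>

static pthread_t   worker;
static atomic_bool stop_requested;
static int         worker_started;

static void *worker_main(void *arg)
{
        (void)arg;
        while (!atomic_load(&stop_requested))
                sleep(1);               /* stand-in for real background work */
        return NULL;
}

void mylib_start(void)
{
        if (!worker_started &&
            pthread_create(&worker, NULL, worker_main, NULL) == 0)
                worker_started = 1;
}

/* Call this before a fork() that will *not* be followed by exec(). */
void mylib_quiesce_before_fork(void)
{
        if (!worker_started)
                return;
        atomic_store(&stop_requested, 1);
        pthread_join(worker, NULL);     /* wait until the thread is really gone */
        worker_started = 0;
}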

Actually fork does THAT for you...

Posted Nov 7, 2011 17:32 UTC (Mon) by khim (subscriber, #9252) [Link]

> So, for example, you might shut down some auxiliary threads in a pthread_atfork prefork handler, to ensure that the threads aren't in the middle of corrupting your library's state when the child process wants to call into your library.

You don't need to do that. After fork just one thread survives. But you do need to either restart the "zombie" threads or free the resources assigned to them. And to do that correctly, different libraries must cooperate - and it's not clear how. If the whole machinery described in pthread_atfork does not look to you like something designed to give you countless problems, then I'm not sure you should be writing libraries at all.

Actually fork does THAT for you...

Posted Nov 15, 2011 3:39 UTC (Tue) by foom (subscriber, #14868) [Link]

> You don't need to do that. After fork just one thread survives.

Sure, but cleaning up after whatever the other thread was doing at the instant of the fork is generally impossible. You either need to grab a lock (or similar) in your atfork prefork handler to force the other thread into a known quiescent state, or shut it down. But you probably don't really want to do either one before a fork-exec; it's just a waste of time.

> If the whole machinery described in pthread_atfork does not look to you like something designed to give you countless problems, then I'm not sure you should be writing libraries at all.

I think the only thing pthread_atfork is *really* useful for is to ensure that libc's malloc() will keep working after fork().
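
The classic pattern there (a sketch with invented names) is to take the allocator's - or the library's - lock in the prepare handler, so the child inherits it in a consistent, unlocked state:

#include <pthread.h>

static pthread_mutex_t lib_lock = PTHREAD_MUTEX_INITIALIZER;

static void lib_prepare(void) { pthread_mutex_lock(&lib_lock); }
static void lib_parent(void)  { pthread_mutex_unlock(&lib_lock); }
static void lib_child(void)   { pthread_mutex_unlock(&lib_lock); }

void lib_init(void)
{
        /* register once, e.g. from the library's init path */
        pthread_atfork(lib_prepare, lib_parent, lib_child);
}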

libabc: a demonstration library for kernel developers

Posted Nov 2, 2011 21:54 UTC (Wed) by wahern (subscriber, #37304) [Link]

> The world would be a better place if PAM, for example, did authentication separately
There is such a better world, if by Linux you mean OpenBSD and by PAM you mean BSDAuth.

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 12:14 UTC (Thu) by aleXXX (subscriber, #2742) [Link]

One more: it is false to claim that there is no reasonable alternative to autotools.
CMake, SCons, waf, and probably more. CMake (I'm one of the developers) is of course the best choice, without doubt ;-)

Alex

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 18:19 UTC (Thu) by cmccabe (guest, #60281) [Link]

I agree. We ought to be putting our effort into next generation build systems like CMake, rather than figuring out yet more ways to work around unfixable problems in autotools.

I've done a few conversions of projects from custom makefiles and autotools to CMake. In every case, the conversion was done in two days, and productivity was much higher afterwards.

I think some people are still wary of CMake because Linux distribution packaging folks are not as familiar with it as with autotools. But really, it's just a matter of time.

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 16:31 UTC (Fri) by HelloWorld (guest, #56129) [Link]

I have used CMake for a few years, and the CMake language is a horrible pain in the ass. You should have embedded an interpreter for a sensible language, like Lua or Elk or whatever.

libabc: a demonstration library for kernel developers

Posted Nov 2, 2011 22:35 UTC (Wed) by andresfreund (subscriber, #69562) [Link]

Is it just me? The number of imperatives in that document is so large that I inevitably want to do the contrary just to spite them - completely independent of the fact that I agree with most of what they are saying.
Compared to David Zeuthen's series - which I recall was formulated as guidelines that are reasonable in most, but not all, circumstances - the positive effect of this piece isn't large.

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 4:21 UTC (Thu) by RCL (subscriber, #63264) [Link]

No, it's not just you. Lennart is a member of the open-source ego-maniac elite, who always speak as if they knew better than anyone else. Compare Greg K-H, Drepper, Linus (sometimes), de Raadt, etc.

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 4:43 UTC (Thu) by dlang (subscriber, #313) [Link]

the issue isn't how they talk initially, it's how they react to people disagreeing with them

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 0:38 UTC (Fri) by HelloWorld (guest, #56129) [Link]

> Lennart is a member of open source ego-maniac elite who always speak as if they knew better than anyone else.
Yeah, and guess what? More often than not, he *does* know better than most other people. Which is why many distros ship with pulseaudio today and most will ship with systemd at some point (well, except for Ubuntu because of NIH syndrome).

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 15:41 UTC (Fri) by alankila (guest, #47141) [Link]

I agree. I have the utmost respect for the fact that Poettering is one of the few people who are trying to do global optimization of the Linux system stack. If we had a few more people like him, I think we'd make serious progress toward making it easier to write good applications on Linux.

I'm impressed by the fact that a lot of people are starting to realize that choice is not good but in fact very bad, and that forcing people to do things in a particular way has benefits. It might be that eventually a new mindset takes root which is less interested in things like the "unix philosophy" and cares more about "how can we make it easier for mere mortals to write kick-ass applications".

It is necessary to design the system software stack well once. I think pulseaudio is an excellent piece of engineering from a programmer's point of view, as the simple API is indeed simple. I only wish someone took the time to add all sorts of nice goodies to it, like Windows 7-style audio effects and a global equalizer and things like that. I have shit like that written for CyanogenMod, but integrating it into PA makes my head hurt for various reasons. However, because PA is in principle capable of doing this kind of stuff, it'd be lovely to show it off as evidence that you get interesting new features that you did not have before when running that daemon.

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 15:41 UTC (Fri) by alankila (guest, #47141) [Link]

Apologies. When _not_ running that daemon, of course.

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 23:52 UTC (Fri) by cmccabe (guest, #60281) [Link]

I'm pretty sure that Lennart would disagree with you about your dislike of UNIX philosophy.

Apps are fun, but let's not forget that most Linux systems that exist now in 2011 are either embedded devices or servers, and have little use for them. Linux is flexible enough to be adapted to the biggest and the smallest systems precisely because we do not "force people to do things a particular way."

libabc: a demonstration library for kernel developers

Posted Nov 5, 2011 1:07 UTC (Sat) by alankila (guest, #47141) [Link]

I don't dislike the unix philosophy. I'm just saying that it's irrelevant. Results matter, especially those that make Linux appealing to the 99% of people who apparently do not use it yet.

libabc: a demonstration library for kernel developers

Posted Nov 5, 2011 0:13 UTC (Sat) by dlang (subscriber, #313) [Link]

if you think choice is bad, go use an Apple, where you have no choice.

oh, you think _your_ ability to choose is good, but other people shouldn't be annoyed by the ability to choose? sorry, you have crossed the line.

without the ability to choose among multiple options, none of these projects that now claim the ability to choose is a bad thing would have been allowed to start in the first place.

It's not the question of if, it's question when

Posted Nov 5, 2011 0:55 UTC (Sat) by khim (subscriber, #9252) [Link]

> oh, you think _your_ ability to choose is good, but other people shouldn't be annoyed by the ability to choose? sorry, you have crossed the line.

It's not "the ability to choose". It's the list of choices. There are "Linux way": you can pick your distribution which have different init systems, different desktop environments, different set of compilers and support libraries... and handful of games or office suites. Or you can have "Android way": only one set of libraries (but evolving one), but 500'000 applications including dozen of office suites and thousands of games.

Sure, 500'000 Android applications are greatly inflated number (most of them is trivial crap), but there are a lot of good ones, too. Linux can never achieve anything remotely similar till it'll provide one supported environment and not dozens of incompatible ones with hudnreds of possibilties and subpossibilities.

As was said in a good article on the subject: flexibility begets complexity, and complexity begets problems. Flexibility is good and problems are bad, so you need the right balance.

The Linux world is skewed toward flexibility (and the accompanying problems) to such a degree that it's useless for most people unless someone removes most of the choices. At some point we may reach a state where we don't have enough flexibility, but we are so, sooo, SOOO far from that point that we can stop worrying about it for the next 10 years.

P.S. Kernel people understand this better than userspace people, again. If you take a look at the list of options presented in "make config" you'll see literally hundreds of them. But if you try to push something in without very serious justification... you'll be disappointed. Think LVM2 vs EVMS. Or, further up the stack: linuxthreads vs NPTL vs NGPT. A few years ago we had "choice"; now only NPTL remains... but is that such a bad thing? Yet further up the stack we have a mess where dozens of choices coexist just because no one has bothered to eliminate them.

It's not the question of if, it's question when

Posted Nov 6, 2011 20:54 UTC (Sun) by jlokier (guest, #52227) [Link]

FYI, on some architectures (e.g. ARM without an MMU) we're still using LinuxThreads, because NPTL isn't ported to them.

There is nothing wrong with use of the old version

Posted Nov 6, 2011 21:54 UTC (Sun) by khim (subscriber, #9252) [Link]

Do you use glibc 2.5 or have you ported linuxthreads to glibc 2.14?

Last time I checked, MMU-less ARM had no support for glibc at all and could compile only a tiny number of programs - usually old versions.

There is nothing wrong with playing with old/unusual hardware and software, but it just makes no sense to worry about MMU-less ARM compatibility when you develop libraries for modern Linux. Probably even less sense than trying to support Windows by default.

There is nothing wrong with use of the old version

Posted Nov 7, 2011 22:58 UTC (Mon) by jlokier (guest, #52227) [Link]

The GP comment seemed to imply that we should eliminate choice where it's no longer useful, and that NPTL is now good enough to be universal on Linux. Eventually I hope it is - there's no technical reason why it shouldn't be - but for the moment there is still useful work being done on Linux systems that NPTL doesn't yet support. That's all I wanted to point out.

No, I haven't ported glibc. You wouldn't do that for a low-memory system (glibc is not small, though I look hopefully at eglibc); uclibc is a better choice. I've built a Gentoo x86 system using uclibc in the past, so it's not impractical to use. There aren't a lot of glibc-isms in apps, and uclibc tries fairly hard to be compatible.

ARM with no MMU runs uClinux and uclibc, like anything with no MMU, and that's not too bad; there's quite a lot of software that builds on it.

There are only two real issues: no fork() (which affects fewer programs than you'd think), and no ELF shared objects, which is a bigger restriction since lots of things assume shared objects nowadays. Some no-MMU platforms do support ELF, and ARM could, but nobody has implemented it for ARM; I started, but had to move on. Almost everything else works the same as with an MMU; file mmap is limited, but very few programs depend on it. And of course there's low memory, but that's not unique to no-MMU systems.

I accept the charge of unusual hardware, but just for some perspective: (a) count the number of home routers and set-top boxes in the world; and (b) in the last 2 years I've worked for 4 different companies developing on no-MMU hardware, 3 of them ARM, plus another architecture that I probably shouldn't name because it's not mainlined. Another no-MMU-only architecture got merged in Linux 3.1. It is niche, but it's still part of current Linux activity in the embedded universe, not limited to museum pieces. I would still strongly recommend choosing a chip with an MMU if at all possible, though, even for tiny systems :-)

Responding to "it just makes no sense to worry about ARM for MMU compatibility": Generally in a library, there's no need for specific "ARM-without-MMU" support. Assuming it works effortlessly on ARM (which it should these days), it would be just about the MMU, independent of CPU architecture. Which most things do fine, and a few need small changes.

I wouldn't be surprised if GNOME and Firefox worked with fewer changes than you'd expect on a no-MMU target (so long as it had ELF), even one still using LinuxThreads, if there was enough memory. But of course there isn't enough memory :-)

Well, people are free to do whatever they want with your library...

Posted Nov 8, 2011 7:23 UTC (Tue) by khim (subscriber, #9252) [Link]

Responding to "it just makes no sense to worry about ARM for MMU compatibility": Generally in a library, there's no need for specific "ARM-without-MMU" support. Assuming it works effortlessly on ARM (which it should these days), it would be just about the MMU, independent of CPU architecture. Which most things do fine, and a few need small changes.

That's fine. If your library is used on some exotic platform (be it Windows, VMS, or ARM without an MMU), then I see no problem with it. But if you start changing everything from the start to support them... that's another thing.

libabc: a demonstration library for kernel developers

Posted Nov 5, 2011 1:13 UTC (Sat) by alankila (guest, #47141) [Link]

I'm already an OS X user. And I think it does a whole lot of things so much better that the transition from Linux to OS X has been painless. After adding MacPorts and a bunch of open-source software, the practical differences are:

1) ready availability of development environments like Xcode or flash studio;
2) the ability to install and play some games on Steam;
3) 2.5x the battery life I used to get with my Linux laptop (I'm mostly complaining about the significantly increased power drain associated with new kernel versions).

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 5:14 UTC (Thu) by josh (subscriber, #17465) [Link]

Standards documents always contain a long list of imperatives on what to do and what not to do. This project attempts to provide a standards document, hence the tone.

libabc: a demonstration library for kernel developers

Posted Nov 3, 2011 21:24 UTC (Thu) by alan (subscriber, #4018) [Link]

Tone aside, putting out an example library and a best-practices document at the very least provoked this conversation on the topic, and thus has value. I wouldn't go so far as to want to attack Lennart & Co for their efforts, and even the tone of this particular message isn't too far off from an imperative standards document. However, when you have built a reputation for being disagreeable, it tends to color further communications.

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 0:40 UTC (Fri) by wahern (subscriber, #37304) [Link]

It's all for naught. The coding style in the Linux kernel is horrendous. For one thing, they have their names backwards. In the Linux kernel (and often in older GNU projects) it's common to write do_foo_to_bar(). In other words, verb, object, subject, plus often a superfluous literal "do". Best practice in userland is bar_foo_do() - subject, object, verb - or maybe bar_dofoo() - subject, verb, object.

Linux kernel code is extremely difficult to follow because the horrible naming conventions mean that it's quite difficult to spot related code across the tree. Also, symbol names are often too literal, which is similar to writing superfluous comments - they tell you what the code is doing, but not why. In other words, kernel folks are helluva lazy. This isn't going to change, and neither will the quality of Linux kernel developer libraries. It's the culture, for better or worse. (I think worse, but the end product is still pretty nice.)

Interestingly, BSD kernel folks tend to be much better in this regard. Grokking a *BSD kernel is insanely easier. And that goes even for bleeding-edge features: the code for a feature is usually much better structured. There's a strong correlation between how well a feature is modeled conceptually and how regular and sane the symbol names are.

There's a difference between pretty and useful, of course. But for what it's worth, BSD code is definitely prettier.
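
To illustrate the naming point with made-up identifiers:

#include <stddef.h>

struct cache;

/* kernel / older-GNU style: verb first, with a literal "do" */
int do_write_to_cache(struct cache *c, const void *buf, size_t len);

/* typical userland library style: the object's prefix comes first */
int cache_write(struct cache *c, const void *buf, size_t len);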

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 2:32 UTC (Fri) by sfeam (subscriber, #2841) [Link]

"it's common to write do_foo_to_bar(). In other words, verb, object, subject"

I suppose it's pointless to note that this is really [implicit subject], verb, direct object, indirect object. In other words, normal English sentence order. As in
"What do you do?"
"[I] give food to trolls."

Anyhow, that's a strange definition of "coding style".

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 3:02 UTC (Fri) by cpeterso (guest, #305) [Link]

const char *response = troll_feed(&food);
(void)response;

libabc: a demonstration library for kernel developers

Posted Nov 4, 2011 18:54 UTC (Fri) by dpquigl (guest, #52852) [Link]

Maybe it is because I first learned kernel coding with Linux, but I find it to be the exact opposite. I had a much harder time figuring out BSD and OpenSolaris kernel code for my NFS work than I did with the Linux NFS implementation. In terms of coding style, I often run into projects that I wish had the same coding style as the kernel. The code is formatted in a way that is easy to read. The sizes of functions are mostly pretty manageable. The organization of the code in the tree's hierarchy makes sense. The layering in the kernel is handled pretty well. It may just be a case of familiarity, since I've been working on Linux for so long, but I had a hard time with other Unix operating systems.

libabc: a demonstration library for kernel developers

Posted Nov 5, 2011 11:25 UTC (Sat) by adobriyan (guest, #30858) [Link]

do_*() functions are clustered near system call entry points and signal an entry to non-arch specific code.

Oh, and there is do_div(). :-)

libabc: a demonstration library for kernel developers

Posted Nov 6, 2011 3:00 UTC (Sun) by butlerm (guest, #13312) [Link]

>In other words, kernel folks are helluva lazy. This isn't going to change, and neither will the quality of Linux kernel developer libraries. It's the culture, for better or worse.

And this is why the Linux kernel is deployed on three orders of magnitude more devices than *BSD kernels are? Laziness?

libabc: a demonstration library for kernel developers

Posted Nov 6, 2011 6:16 UTC (Sun) by nybble41 (subscriber, #55106) [Link]

Of course. Everyone knows that smart programmers who are also lazy write better code; after all, low-quality code means more maintenance and debugging work down the road. It's the short-sighted ones you have to look out for. I don't think anyone would argue that the kernel developers are lacking in either intelligence or vision, ergo in their case laziness is definitely a virtue.

