
Michaelsen: One

On his blog, LibreOffice hacker Bjoern Michaelsen celebrates the conversion to make for LibreOffice builds. Michael Meeks congratulated Michaelsen and the others responsible for "killing our horrible, legacy, internal dmake". Michaelsen looks at the speed improvements that came with the new build system, which reduced the "null build" (nothing to do) from 5 minutes (30 minutes on Windows) to 37 seconds. "There are other things improved with the new build system too. For example, in the old build system, if you wanted to add a library, you had to touch a lot of places (at minimum: makefile.mk for building it, prj/d.lst for copying it, solenv/inc/libs.mk for others to be able to link to it, scp2 to add it to the installation and likely some other things I have forgotten), while now you have to only modify two places: one to describe what to build and one to describe where it ends up in the install. So while the old build system was like a game of jenga, we can now move more confidently and quickly."

want speed? get rid of make(1)

Posted Mar 1, 2013 3:15 UTC (Fri) by HelloWorld (guest, #56129) [Link]

37 seconds to work out that nothing needs to be done? That's WAY too slow if you ask me. Perhaps they should try tup.

http://gittup.org/tup/make_vs_tup.html

want speed? get rid of make(1)

Posted Mar 1, 2013 4:24 UTC (Fri) by tnoo (subscriber, #20427) [Link]

I'm not so sure about the introduction on that site: that beam of light (e.g. radio waves) traverses my 30 cm laptop within 1 ns. In that time, a processor running at 3 GHz executes 3 instructions. Is that enough for tup? ;-)

want speed? get rid of make(1)

Posted Mar 1, 2013 4:32 UTC (Fri) by rsidd (subscriber, #2582) [Link]

Compared to the build time of LibreOffice, 37 seconds is negligible and certainly the saving is not worth the pain of ripping out make and replacing it with something new and non-standard.

want speed? get rid of make(1)

Posted Mar 1, 2013 5:22 UTC (Fri) by HelloWorld (guest, #56129) [Link]

37 seconds may be negligible if you need to build all of LibreOffice from scratch. But that's not the use case that matters: most people don't need to do that, as they'll obtain binary releases from somebody else. What matters is developer turnaround time, i.e. how long does it take for me to see the effects of the 5-line code change I just made? And for this, 37 seconds are anything but negligible: they're enough to disrupt one's train of thought, which is about the worst thing you can do to a software developer.

Oh, and I don't care at all whether some stick-in-the-mud considers tup a "standard" tool or not. Being "standard" doesn't make make any faster.

want speed? get rid of make(1)

Posted Mar 1, 2013 5:40 UTC (Fri) by edomaur (subscriber, #14520) [Link]

Well, that's still a standard tool, with many users who know how to get work done with it. Tup? This is the first time I've heard of it... Sorry.

Also, I don't think moving to make required them to rewrite too many makefiles (but I should verify this)

want speed? get rid of make(1)

Posted Mar 1, 2013 13:59 UTC (Fri) by pbonzini (subscriber, #60935) [Link]

It actually was a complete rewrite.

want speed? get rid of make(1)

Posted Mar 1, 2013 9:26 UTC (Fri) by Wol (guest, #4433) [Link]

And how do you know tup will be any faster?

Bearing in mind MOST of the stuff this is checking is likely to NOT be in cache, what's the betting a lot of that 37 seconds is spent waiting for the disk to retrieve the data?

In case you're wondering, I'm a casual LO developer. I know from experience how long it takes to build. And on my 16Gb X3 that is a few hours. 37 seconds is nothing.

Cheers,
Wol

want speed? get rid of make(1)

Posted Mar 1, 2013 9:49 UTC (Fri) by rsidd (subscriber, #2582) [Link]

> Oh, and I don't care at all whether some stick-in-the-mud considers tup a "standard" tool or not. Being "standard" doesn't make make any faster.
Way to win new users! The tup website is not very persuasive, and all the numbers are for fake projects (as far as I can see). A real-world example of converting a make-based project (with a few dozen subdirectories and a few hundred or few thousand C files) to tup, the time it took to do the job, and the speed gain in building, would be much more interesting. If it is feasible to convert libreoffice to tup, it should certainly be feasible to do, say, gimp or inkscape or something like that, and explain exactly how hard it is and what the gain is.

want speed? get rid of make(1)

Posted Mar 1, 2013 10:02 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

Gimp or Inkscape? Nah. Nethack. For bonus points, do it without rearranging the source code.

want speed? get rid of make(1)

Posted Mar 1, 2013 10:07 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

By the time you've factored in "recompile touched object file, relink target, relaunch LO, re-run test case", 37 seconds doesn't seem a particularly burdensome disruption (whereas five minutes clearly is).

want speed? get rid of make(1)

Posted Mar 1, 2013 16:21 UTC (Fri) by HelloWorld (guest, #56129) [Link]

Sure, there are other things that also need to be optimised. And there has been progress in recent years, clang and gold are way faster than what was there before. However, at some point you need to make more intrusive changes to improve things further. The build system is one example, and there are people who work on a module system for the C language family (you may want to check out this talk: http://lambda-the-ultimate.org/node/4649). So clearly there are people besides me who value a quick turnaround time.

want speed? get rid of make(1)

Posted Mar 2, 2013 0:55 UTC (Sat) by mathstuf (subscriber, #69389) [Link]

I've been following the Tup mailing list for a while and there are things that Tup is missing for the big time. The first one is install targets; Tup does nothing to help with installing anything (granted, make doesn't either, but I don't think you can tell Tup to do "all but X" builds; maybe I'm mistaken). The second is that if you have any processes with indeterminate inputs (files read) or outputs (files written) (say, a git command, a make dist equivalent, LaTeX, Doxygen, or probably any documentation generation tool for that matter), Tup can't handle them, since you need to list these yourself. The suggestion for this is to…use make for those parts. The other problem I have is that Tup requires FUSE to do its full thing properly (IIRC, it can work without it, but not optimally). On Windows, Tup needs to match your architecture so it can DLL-inject your tools to do open/read/write tracking. Maybe an strace backend would be better for *nix, but who knows.

Tup has some good ideas, but I don't know if anything that uses Tup is going to have an easy time fitting with distro build systems (the lack of install targets is really a killer).

individual sub-module makes for iterations

Posted Mar 4, 2013 14:58 UTC (Mon) by mmeeks (subscriber, #56090) [Link]

> But that's not the use case that matters:

Agreed :-)

> What matters is developer turnaround time, i.e. how long does
> it take for me to see the effects of the 5-line code change
> I just made ?

Absolutely agreed; so - as a thoroughly clueful LWN reader, you can see which module you edited the file in - eg. vcl/ and - assuming it's not an ABI impacting header change - you can just do:

make vcl

from the top-level, and (very much more rapidly) - order of a few seconds you'll have a new vcl built.

Of course, for the case when you change: 'obscureheaderfoo.hxx' it is nice to know that 37 seconds + 3 re-builds scattered across the project later - that you have a consistent set of C++ object files again ;-) - ABI ripple is also something that can bite big projects quite badly.

So - it really is rather cool in my view at least. Certainly having one make system, instead of two and being in transition, is a really nice feature.

individual sub-module makes for iterations

Posted Mar 4, 2013 19:05 UTC (Mon) by HelloWorld (guest, #56129) [Link]

Hi Michael,

> Absolutely agreed; so - as a thoroughly clueful LWN reader, you can see which module you edited the file in - eg. vcl/ and - assuming it's not an ABI impacting header change - you can just do:
>
> make vcl
>
> from the top-level, and (very much more rapidly) - order of a few seconds you'll have a new vcl built.
But this is the kind of thing I don't even want to think about. I just want to start the build system and it should do exactly the amount of work needed and nothing more, and it should figure out what that is in as little time as possible, preferably less than one second. I shouldn't need to know what I modified and tell the build system about it, because I will forget something at some point.
My ideal build system would also detect whether changes to header files actually affect the files that include said header file. For example, adding a new function prototype should be possible without recompiling everything. But this is hard; perhaps something like this will come when the C language family obtains a proper module system. A few people seem to be working on this:
http://isocpp.org/blog/2012/11/modules-update-on-work-in-...

individual sub-module makes for iterations

Posted Mar 4, 2013 19:59 UTC (Mon) by andresfreund (subscriber, #69562) [Link]

> My ideal build system would also detect whether changes to header files actually affect the files that include said header file. For example, adding a new function prototype should be possible without recompiling everything. But this is hard;
It seems pointlessly hard. You'd need to make sure there are no conflicts due to the new prototype (e.g. same function, same parameters, different return type). That's not an argument against proper module support, but I think this level of incremental compilation support is not realistic.
The effort is better spent improving compilation speed and the like.

individual sub-module makes for iterations

Posted Mar 4, 2013 22:29 UTC (Mon) by HelloWorld (guest, #56129) [Link]

It doesn't have to be perfect; it would already be an improvement if recompilation could be avoided in certain common cases, e.g. adding a function prototype with a new identifier (so you don't have to worry about overloading).

> The effort can be better spent improving compilation speed et al.
There's no way to make compilation as fast as not compiling at all.

individual sub-module makes for iterations

Posted Mar 4, 2013 23:11 UTC (Mon) by Sweetshark (guest, #89619) [Link]

"But this is the kind of thing I don't even want to think about."

You don't. As a developer in a project as big as LibreOffice you are aware, without thinking about it, whether you are meddling with e.g. the spreadsheet or the word processor.

While even quicker execution over complete trees is a nice academic exercise, it's irrelevant for practical purposes: unless it opens up new workflows, there is no win in being faster than the human in front of the machine can process.

want speed? get rid of make(1)

Posted Mar 1, 2013 9:56 UTC (Fri) by renox (subscriber, #23785) [Link]

Thanks for the link, I didn't know about tup.
Also thanks for volunteering to convert the existing build system to tup ("That's WAY too slow if you ask me. Perhaps they should try tup.") to help them "try tup".

want speed? get rid of make(1)

Posted Mar 1, 2013 10:39 UTC (Fri) by HelloWorld (guest, #56129) [Link]

I can't fix every build system out there, and I hardly ever use LibreOffice, or any other office suite for that matter. What I can do is educate people about the deficiencies in make's design that prevent it from being fast for very large projects.

want speed? get rid of make(1)

Posted Mar 1, 2013 11:01 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

The one thing all these new build tools have in common is an apparent lack of friendly and enthusiastic advocates who are willing to put their labour where their mouth is.

want speed? get rid of make(1)

Posted Mar 1, 2013 11:09 UTC (Fri) by Company (guest, #57006) [Link]

Build systems are currently in the stage where VCS's were when Subversion was new: Everyone had decided CVS sucks, nobody had yet figured out how to do it better, there were myriads of differently crappy evolutions of CVS around with lots of fanboys and loooong mailing list discussions everywhere about which VCS to switch to. These days there is git. I still hold out for the git of build systems, but so far haven't found anything that's more than an evolution of autotools.

want speed? get rid of make(1)

Posted Mar 1, 2013 17:52 UTC (Fri) by nix (subscriber, #2304) [Link]

Sorry, but make, GNU Make at least, *doesn't* suck. Make has some ugly sucky parts (notably the large set of filenames it can't deal with, e.g. those containing spaces or brackets), but a nonrecursive make is actually quite capable. And fixing those warts often brings far more disadvantages than advantages (a requirement to quote every filename or shell line when filenames and shell lines are by far the predominant thing in makefiles, a requirement for a nonstandard and barely-maintained tool...)

Most of the really ugly things you had to do with make of yore to do trivial stuff (e.g. writing out makefile fragments to do dependency analysis) are now either done for you by tools like the compiler (which means that a non-make tool will have to translate those things into the new non-make format), or are obviated entirely by things like $(eval ...), which by now is old enough that virtually every GNU Make user has access to it.

want speed? get rid of make(1)

Posted Mar 1, 2013 18:10 UTC (Fri) by daglwn (subscriber, #65432) [Link]

> Sorry, but make, GNU Make at least, *doesn't* suck.

Thank you!

GNU make is incredibly powerful. I wrote a build & configure system using non-recursive make that handles all of the configuration tasks usually done by autoconf and builds the project as well.

Why do that? Besides the well-documented functional deficiencies of autoconf, it is painfully slow. With make I can run all the config tests in parallel along with the build and project regression tests. I have literally seen regression tests run at the same time the build config part is working on setting up the build for a different project component. It's _fast_.

Usually slow "don't build anything" problems are caused by the use of recursive make. I don't know if that's the problem with LO but I have seen several orders of magnitude of speed improvement for this case by converting recursive make to non-recursive make.

want speed? get rid of make(1)

Posted Mar 3, 2013 11:31 UTC (Sun) by paulj (subscriber, #341) [Link]

As someone who reaches for auto* by default, because I know it, but does find the long run time of "configure" (even minimal ones) annoying, would you have an example of a readable, understandable, and entirely GNU Make based build that handles feature-tests, user-set options, etc.. nicely?

want speed? get rid of make(1)

Posted Mar 4, 2013 15:50 UTC (Mon) by daglwn (subscriber, #65432) [Link]

The system I have is certainly not a complete replacement for all of autoconf's tests. I only implemented the tests I needed for the project. But I don't see any issue with adding more. It's just sh script, after all.

"Nicely," of course, is in the eye of the beholder. I make extensive use of $(eval) and GNU make's rules on making makefiles. It gets a bit hairy at the low level but I tried to make the developer-facing interface reasonable,

The current project is not in a releasable state but I will be starting another project soon that I hope will be releasable in a few months. I plan to re-use all the build infrastructure as it was designed to be generic enough to use in many varieties of projects.

want speed? get rid of make(1)

Posted Mar 4, 2013 16:20 UTC (Mon) by paulj (subscriber, #341) [Link]

Ok, would love to see examples of nice, clean, wholly GNU Make based build-systems, that have idioms for feature tests and handling config variables nicely, whenever you or anyone else have them. :) Very curious to see what it'd take to get away from auto*.

want speed? get rid of make(1)

Posted Mar 5, 2013 1:52 UTC (Tue) by bronson (subscriber, #4806) [Link]

Here's something I extracted from an internal project a few years ago

https://github.com/bronson/makefile-death

It worked really, really well. I wanted to finish polishing and documenting it but the project got cancelled and I haven't needed C/C++ makefiles ever since (yay interpreted languages).

Happy to answer any questions.

want speed? get rid of make(1)

Posted Mar 5, 2013 11:19 UTC (Tue) by etienne (subscriber, #25256) [Link]

> examples of nice, clean, wholly GNU Make

And something which can handle a kind of "make menuconfig" to add/remove features of a large build, and which does not rebuild everything when the configuration is changed... I have a 10 Gb tree here...
- Having a single "general_config" file that everything depends on will rebuild everything when adding a feature.
- Having multiple small files in the sub-parts where they have dependencies is difficult to manage (symbolic links if multiple sub-parts depend on one), but you can use "find subprog -L -anewer subprog/<target>"
- Using dynamic configuration variables and some "copy_if_changed" (i.e. do not touch the file if unchanged), and having a dependency on that file, may work (see the sketch below), but I have seen some bad stuff (someone copies the magic file into the repository...) and the process of evolving "make menuconfig" is error prone.
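
A rough sketch of that copy-if-changed idea (everything here is hypothetical, including the gen-feature-header.sh generator): regenerate small per-feature headers from the single configuration file, but only replace the ones whose content actually changed, so only the sub-parts using the changed feature are rebuilt.

    FEATURES := gui net crypto

    # config/feature-gui.h etc. are what the sources include.
    # The recipe re-runs cheaply whenever general_config is newer, but the
    # header itself is only touched when its content really changes.
    config/feature-%.h: general_config
            @mkdir -p config
            @./gen-feature-header.sh $* > $@.tmp
            @if cmp -s $@.tmp $@; then rm $@.tmp; else mv $@.tmp $@; fi

    configure-features: $(patsubst %,config/feature-%.h,$(FEATURES))
    .PHONY: configure-features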

want speed? get rid of make(1)

Posted Mar 5, 2013 20:18 UTC (Tue) by daglwn (subscriber, #65432) [Link]

> And something which can handle a kind of "make menuconfig" to add/remove
> features of a large build, which do not rebuild everything when the
> configuration is changed... I have a 10 Gb tree here...

I should be able to enhance the system I'm working on to provide that. It uses GNU make's ability to know when to rebuild makefiles to keep every configuration result independent. It even knows which config decisions depend on other config decisions and adds rule dependencies to enforce it.

It doesn't yet track which sources depend on which configuration questions but it should be entirely possible to add interfaces to specify that information. It makes the developer interface more complicated but developers have to communicate this information somehow.

Perhaps I should do a git subtree split and just post the thing somewhere for people to bang on and tear apart. It needs lots of documentation. :)

want speed? get rid of make(1)

Posted Mar 5, 2013 9:10 UTC (Tue) by BlueLightning (subscriber, #38978) [Link]

How well does your system deal with cross-compilation? Is there a way to pre-seed/force the test results (the way we deal with most tests with autotools)?

want speed? get rid of make(1)

Posted Mar 5, 2013 20:19 UTC (Tue) by daglwn (subscriber, #65432) [Link]

Right now it doesn't support cross-compilation. I just started building it with a new project. I'm trying to design it to not preclude cross-compilation.

want speed? get rid of make(1)

Posted Mar 1, 2013 18:34 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

LOL.

Make sucks so much it has sucked in the whole Atlantic Ocean. Makefiles are:
1) Hard to maintain.
2) Limited (hard to do dynamic targets, generators, etc)
3) Poor support for autodependency resolution.
4) Poor support for out-of-tree builds.
And finally, it's non-portable and doesn't play well with IDEs.

It's high time for something else to replace it (scons, waf, cmake). Preferably something written in a real programming language.

want speed? get rid of make(1)

Posted Mar 1, 2013 18:38 UTC (Fri) by daglwn (subscriber, #65432) [Link]

I have done all of your 2-4 with GNU make. Makefiles are no more hard to maintain than source files of any other language. Sometimes the syntax is ugly but that's a separate issue.

GNU make is exceedingly portable. Any IDE that doesn't understand Makefiles is nearly worthless to me.

Yes, you need cygwin for Windows. A native port would be possible but so far no one seems itched enough to do that.

want speed? get rid of make(1)

Posted Mar 1, 2013 22:25 UTC (Fri) by nix (subscriber, #2304) [Link]

Agreed. Dynamic targets and generators are not just trivial with GNU Make but there are *multiple* trivial ways to do them (makefile fragments and include and $(eval ...), and heck in the latest unreleased version a built-in Guile interpreter!). Equally, automatic dependency generation (assuming that's what you mean by 'autodependency resolution') is trivially done assuming only that you have something that knows what those dependencies are (and virtually all compilers and the like can these days spit out dependency graphs in GNU Make notation). As for out-of-tree builds, VPATH is decades old, where have you been?

There *are* problems in GNU Make still -- it needs a proper macro system to reduce the verbosity of endlessly re-specified obvious paths in non-recursive make fragments, for example: you can automatically throw your makefiles through M4 or something but that is very ugly because it introduces too much additional notation -- but they never seem to be the problems the condemners of make complain about.

I suspect the real problem here is that most of the tricks you need in order to do anything powerful with make are not well known, even though most of them are not very complicated. (Here, I cannot recommend John Graham-Cumming's _Gnu Make Unleashed_ highly enough. Also, RTFM for GNU Make every ten years or so, or at least the NEWS for the versions you haven't noticed get released. It's gained some very powerful facilities over the last decade or so...)
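
As an illustration of the $(eval ...) and include-fragment machinery mentioned above, a small self-contained sketch (the module names are invented) that stamps out per-module library rules in a non-recursive makefile:

    # Hypothetical module directories.
    MODULES := vcl sw sc

    all: $(MODULES:%=lib%.a)
    .PHONY: all

    # A template; the $$ escapes survive until the per-module $(eval) below.
    define module-rules
    $(1)_SRCS := $$(wildcard $(1)/*.c)
    $(1)_OBJS := $$($(1)_SRCS:.c=.o)

    lib$(1).a: $$($(1)_OBJS)
            $$(AR) rcs $$@ $$^
    endef

    # One $(eval) per module expands the template into real rules;
    # the built-in %.o: %.c rule does the actual compiling.
    $(foreach m,$(MODULES),$(eval $(call module-rules,$(m))))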

want speed? get rid of make(1)

Posted Mar 1, 2013 23:04 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

>and virtually all compilers and the like can these days spit out dependency graphs in GNU Make notation
Except that you have to actually run them. Scons and other systems have native dependency resolution which is much faster.

>As for out-of-tree builds, VPATH is decades old, where have you been?
I don't know, but lots of Makefile writers probably were vacationing on Mars. It's rare to find a Makefile-based program that can correctly be assembled out-of-tree.

want speed? get rid of make(1)

Posted Mar 2, 2013 0:16 UTC (Sat) by anselm (subscriber, #2796) [Link]

> It's rare to find a Makefile-based program that can correctly be assembled out-of-tree.

That would be pretty much every Tcl/Tk extension (and Tcl/Tk itself). Enabling out-of-tree builds has been policy for Tcl/Tk for considerably more than a decade.

want speed? get rid of make(1)

Posted Mar 2, 2013 1:38 UTC (Sat) by Sweetshark (guest, #89619) [Link]

"I don't know, but lots of Makefile writers probably were vacationing on Mars. . It's rare to find a Makefile-based program that can correctly be assembled out-of-tree."

FWIW, the LibreOffice build (the make based stuff) can builds out-of-tree -- except for one thing: it expects the ./configure output in the source dir, but that would be trivial to fix and is just a convenience for now.
Apart from that the source tree is read-only, although by default it builds in a newly created subdirectory in the source tree, which is reasonable.

want speed? get rid of make(1)

Posted Mar 2, 2013 11:10 UTC (Sat) by cortana (subscriber, #24596) [Link]

Are dependency files that are produced as a side-effect of compilation really slower than a separate tool that has to parse the source files a second time (often incorrectly) in order to generate the dependency information?

And 'make distcheck' should perform a test of the build system using VPATH. I would not be surprised if many autotools users don't bother with it though. :(

want speed? get rid of make(1)

Posted Mar 2, 2013 20:17 UTC (Sat) by nix (subscriber, #2304) [Link]

As you correctly imply, getting the compiler to provide dependency information is much faster than having a separate pass to do it. It's also more reliable: CMake needs to support every little quirk of every compiler and needs to understand every detail of its include path-searching algorithm or it will go wrong sooner or later. And it doesn't. (Does it even support #include_next<>?)

want speed? get rid of make(1)

Posted Mar 4, 2013 15:53 UTC (Mon) by daglwn (subscriber, #65432) [Link]

> Except that you have to actually run them. Scons and other systems have
> native dependency resolution which is much faster.

If you're doing dependency generation separately from the build in make you're doing it wrong. You should have the compiler generate the dependency information as it builds each source file. Here's the right way to do this:

http://mad-scientist.net/make/autodep.html
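
A minimal sketch of the pattern described at that link, assuming a GCC- or Clang-compatible compiler (other compilers spell the flags differently): the .d fragments are produced as a side effect of each compilation and simply included on the next run.

    SRCS := $(wildcard *.c)
    OBJS := $(SRCS:.c=.o)

    prog: $(OBJS)
            $(CC) $(LDFLAGS) -o $@ $^

    # -MMD writes foo.d next to foo.o while compiling; -MP adds dummy
    # targets for headers so that deleting one doesn't break the build.
    %.o: %.c
            $(CC) $(CFLAGS) -MMD -MP -c $< -o $@

    # On a clean tree there are no .d files yet, which is fine: everything
    # has to be compiled anyway. Later runs only need to stat() the files
    # these fragments name.
    -include $(OBJS:.o=.d)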

want speed? get rid of make(1)

Posted Mar 4, 2013 16:10 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

You STILL need a compiler to generate dependencies. Scons/waf can do it by itself, giving much faster compilation time for small changes.

want speed? get rid of make(1)

Posted Mar 4, 2013 16:24 UTC (Mon) by andresfreund (subscriber, #69562) [Link]

> You STILL need a compiler to generate dependencies. Scons/waf can do it by itself, giving much faster compilation time for small changes.
How does that follow? If done right, the compiler generates the dependencies whenever it compiles a file. They don't need to be gathered just to check whether a file needs to be recompiled; that's why they are stored on disk. Make then uses those stored dependencies the next time it checks whether a file might need to be recompiled.

In fact I would guess that this way is far faster, because the source files only need to be stat()ed, not opened and read as they are by most of the tools that compute the dependencies themselves, since most of those recompute the dependencies on every run.

want speed? get rid of make(1)

Posted Mar 4, 2013 17:11 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Catch 22.

Scons/waf can start compiler immediately for all affected dependencies (because they can scan them directly). You'd have to wait for the compiler to finish.

want speed? get rid of make(1)

Posted Mar 4, 2013 17:38 UTC (Mon) by andresfreund (subscriber, #69562) [Link]

> Scons/waf can start compiler immediately for all affected dependencies (because they can scan them directly). You'd have to wait for the compiler to finish.
That's not how it works with any sensible compiler-based solution. You don't need to start the compiler for any dependency resolution when recompiling: it uses *already computed* dependency information. There is no need for dependency information before anything has been compiled, because at that point everything has to be built anyway.

want speed? get rid of make(1)

Posted Mar 5, 2013 20:22 UTC (Tue) by daglwn (subscriber, #65432) [Link]

Does scons/waf understand all of the tricky compiler idiosyncrasies and compiler-specific configuration information?

It is not a trivial task to extract dependency information. Consider preprocessor effects.

want speed? get rid of make(1)

Posted Mar 5, 2013 20:28 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

In my experience, they deal with dependencies much better than make-based systems (including cmake). It's not that hard to extract dependencies for C/C++ if you're OK with being a little bit over-conservative.

Here's a very interesting take on it from tridge:
http://article.gmane.org/gmane.network.samba.internals/47290

want speed? get rid of make(1)

Posted Mar 1, 2013 23:59 UTC (Fri) by daglwn (subscriber, #65432) [Link]

I suspect you are right about people not knowing the features of GNU make. It took me a good study of the manual before I realized how powerful it is.

Gnu Make Unleashed

Posted Mar 2, 2013 1:42 UTC (Sat) by ntl (subscriber, #40518) [Link]

Thanks for recommending this; I had read some of Graham-Cumming's articles but didn't know about the book. I'd kinda outgrown _Managing Projects with GNU Make_.

Gnu Make Unleashed

Posted Mar 13, 2013 15:22 UTC (Wed) by jwakely (subscriber, #60262) [Link]

My copy arrived the other day and I think it's going to be life-changing!
Thanks for the recommendation, nix.

Gnu Make Unleashed

Posted Mar 13, 2013 21:32 UTC (Wed) by nix (subscriber, #2304) [Link]

Warning: use of the techniques described in this book can lead to other people needing to buy the book to understand what you did :P

want speed? get rid of make(1)

Posted Mar 1, 2013 22:22 UTC (Fri) by Aliasundercover (subscriber, #69009) [Link]

Sure, I agree, mostly. Make doesn't suck. I use it, works well.

But then why do most users of make actually use it like a compiler's intermediate code pass with some program generating makefiles automatically? Those makefiles suck big time. I rarely see hand-written makefiles, but when I do, I find they build fast and are easy to maintain.

Is it really necessary to track every last dependency of every last blasted header file? We all know the result, all the world depends on *.h anyway. Just go to the top and do a clean build. You know that is the only thing you will really trust. My current bit of work is 240,000 lines, clean build in 9 seconds on a good computer. (Is make -j 6 on a six core cheating? Then 22 seconds for non-parallel.) Just what are people doing to turn this in to such a problem?

Is it by refusing to modularise code and building 240,000,000 lines as one cohesive unit? Or is it by using C++ and making a million tiny .cpp files so the header universe gets recompiled a million times? Autotools? They seem to take a year and a day. No, don't tell me, there are many answers.

I don't think the problem is make.

want speed? get rid of make(1)

Posted Mar 2, 2013 0:03 UTC (Sat) by daglwn (subscriber, #65432) [Link]

> But then why do most users of make actually use it like a compiler's
> intermediate code pass with some program generating makefiles
> automatically?

Autotools, though the problem goes back at least to Imake. But autotools is by far the most common system and it sets people up to think this is the standard, thus best, way to do things.

If you're talking about makedepend or gcc -M that's something else entirely.

> Is it really necessary to track every last dependency of every last
> blasted header file?

No it is not and gcc has some controls to limit its output.

> We all know the result, all the world depends on *.h anyway.

That's likely a design flaw of the project code, not the build system.

> I don't think the problem is make.

I entirely agree.

want speed? get rid of make(1)

Posted Mar 2, 2013 17:37 UTC (Sat) by pboddie (subscriber, #50784) [Link]

> But then why do most users of make actually use it like a compiler's intermediate code pass with some program generating makefiles automatically? Those makefiles suck big time.

Step forward, automake! I once maintained a build system for a small project where the benefits of automating the configuration were enough for me to learn autoconf, and the Makefiles were all fairly concise. After all, going from Makefile.in to Makefile is just a matter of applying substitutions. The real problem is automake which generates huge Makefile templates for no good reason.

I won't deny that the generated configure scripts are nasty, even without automake, however.
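
To make the "applying substitutions" point concrete, here is a toy stand-in (not the real autoconf machinery, and the values are invented). A hand-written Makefile.in might contain little more than:

    CC = @CC@
    CFLAGS = @CFLAGS@
    prefix = @prefix@

    prog: prog.c
            $(CC) $(CFLAGS) -o $@ $<

    install: prog
            install -D prog $(DESTDIR)$(prefix)/bin/prog

and the configure step, at its core, just rewrites the placeholders:

    sed -e 's|@CC@|gcc|' -e 's|@CFLAGS@|-O2 -g|' -e 's|@prefix@|/usr/local|' \
        Makefile.in > Makefile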

want speed? get rid of make(1)

Posted Mar 7, 2013 13:30 UTC (Thu) by ortalo (subscriber, #4654) [Link]

automake appeared in the early 90s.
The best reasons to use it (or write it, or extend it through libtool) from that time may have faded away. Nobody will regret them, as they were the most painful (don't ask if you did not code through that era).
Current good reasons are probably associated with standard habits (esp. wrt make's standard targets).

want speed? get rid of make(1)

Posted Mar 7, 2013 20:00 UTC (Thu) by hummassa (subscriber, #307) [Link]

What about cross-compiling?

want speed? get rid of make(1)

Posted Mar 1, 2013 12:12 UTC (Fri) by rvfh (subscriber, #31018) [Link]

Exactly! "Talk is cheap." you know the rest of the quote.

What we all want is a replacement for make/autotools which would be faster, easier to learn and just as portable. This does not really exist [yet] so moving to make was the right move IMNSHO.

want speed? get rid of make(1)

Posted Mar 1, 2013 11:28 UTC (Fri) by bgilbert (subscriber, #4738) [Link]

> What I can do is educate people about the deficiencies in make's design

Its arrows go in the wrong direction?

want speed? get rid of make(1)

Posted Mar 1, 2013 15:27 UTC (Fri) by HelloWorld (guest, #56129) [Link]

There are papers on the tup homepage that describe the inherent limitations of make's design. There's no point in explaining them again here. FYI, I have no affiliation whatsoever with that project.

want speed? get rid of make(1)

Posted Mar 1, 2013 18:30 UTC (Fri) by daglwn (subscriber, #65432) [Link]

I read one of the papers and it is very good. There are legitimate problems identified.

The problem I have with all these build systems is that they intentionally go out of their way to be incompatible. I don't *want* a build system that makes all sorts of assumptions about what file extensions mean. I turn off that stuff in GNU make. I want my build description to be explicit.

There's absolutely no reason the enhancements provided by tup couldn't be incorporated into GNU make. It's a different update algorithm, yes, but from the user's point of view the Makefile should still work.

want speed? get rid of make(1)

Posted Mar 2, 2013 2:52 UTC (Sat) by Sweetshark (guest, #89619) [Link]

OTOH tup does all its estimates of performance under the assumption that there is only one goal, not a choice of multiple ones. A naive assumption for all projects of the size where the direction of the DAG matters.
Depending on the number of goals available, it would either need to keep and update multiple DAGs or tag the edges of the graph with the goals they lead to.

So much for the theory. And then there is the problem that the implementation is barely more than a research project for now.

want speed? get rid of make(1)

Posted Mar 1, 2013 12:51 UTC (Fri) by stephane (subscriber, #57867) [Link]

You'll find a very detailed answer from Michaelsen himself to this proposition to use tup:

http://skyfromme.wordpress.com/2013/02/28/one/comment-pag...

I'm surprised that all the comments in this thread focus on the 37-second build time for the null case.

Do you know what it really includes?
You'll find more information in Michaelsen's comment.

want speed? get rid of make(1)

Posted Mar 1, 2013 23:55 UTC (Fri) by heijo (guest, #88363) [Link]

It seems to me that's perhaps an issue with the GNU Make implementation, not a fundamental limitation of makefiles.

Once you have read all file times from disk, "make" should be able to determine that nothing needs to be done in negligible CPU time.

Avoiding reading all file times from disk (or dcache...) requires some sort of "file modification log", which is however an orthogonal issue to the specifics of the build system.

So, this argument seems bullshit.

Michaelsen: One

Posted Mar 1, 2013 10:31 UTC (Fri) by macson_g (subscriber, #12717) [Link]

Ninja does a null build in null time. And if you use CMake 2.8 or newer, you can ask it to generate a Ninja rules file instead of a Makefile. I use it for a huge project with 1000+ targets; it has never failed me, neither CMake nor Ninja.

I don't understand why they didn't just convert to CMake. Maybe they will at some point in the future.

Ninja: http://martine.github.com/ninja/, apt-get install ninja-build

Michaelsen: One

Posted Mar 1, 2013 11:11 UTC (Fri) by ms (subscriber, #41272) [Link]

Ahh, I was wondering when someone was going to mention Ninja - I've come across that one too recently.

That said, there do seem to be some cases where nothing but Make will do. One of these is when you have dynamic dependency generation - for example lots of use of GNU Make's `eval` function. Now I'm fully open to the suggestion that you just shouldn't be doing that sort of thing in the build tool (and indeed if you use other build tools, you can't), but supposing you still need the functionality, what would people suggest you use?

Michaelsen: One

Posted Mar 1, 2013 16:07 UTC (Fri) by dashesy (subscriber, #74652) [Link]

Is it possible to do that in cmake, before the build actually starts?

Michaelsen: One

Posted Mar 1, 2013 17:54 UTC (Fri) by nix (subscriber, #2304) [Link]

If it's happening before the build starts, it's not a replacement. Think a build system where the build process involves running something which then emits dependency rules. (This is a good description of basically everything that uses GCC's automatic dependency rule generation machinery, the whole point of which was to remove the need to do an early 'make depend' which just worked out stuff the compiler had to work out later on anyway.)

Michaelsen: One

Posted Mar 1, 2013 18:45 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

CMake-generated buildfiles handle C header dependencies just fine. In more complicated cases it's possible to generate dependencies during the build (Boost does this to build several variants of itself at the same time) but hardly anyone needs it.

Michaelsen: One

Posted Mar 1, 2013 22:27 UTC (Fri) by nix (subscriber, #2304) [Link]

What? Thus speaks someone who's not worked on the build system for a project of any complexity. Dependencies generated during the build really *are* needed on real projects quite often. (Doesn't Boost use Jam, not CMake, anyway? I loathe Jam so much that I'd be very happy to hear it had switched, but as of Boost 1.52.0 it was still using its own hacked-for-Boost variant of Jam v2. A one-project build system, wonderful.)

Michaelsen: One

Posted Mar 1, 2013 22:31 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

I have actually contributed at various times to Jam, Waf and scons. I've also moved quite complicated build systems to cmake and jam.

Your loathing of Jam is interesting, because it's actually the only build system where the notion of dynamically generated dependencies has been refined to a high level. In BJam.v2 _all_ dependencies are produced from the 'virtual' dependencies 'instantiated' with the current environment settings.

And yes, Boost now uses both cmake and bjam.

Michaelsen: One

Posted Mar 2, 2013 2:28 UTC (Sat) by mathstuf (subscriber, #69389) [Link]

Boost is actually moving[1] over to git (yay!) and CMake (yay, no more bjam[2]). I believe that the multi-target build logic has been ported over to the CMake build of Boost. It'd be interesting to get that factored out and pushed upstream as a module…

[1]https://svn.boost.org/trac/boost/wiki/CMakeModularization...
[2]My main complaint is that pre-build Jamfile reading can take minutes on Windows…which is a pain when I need to build 8 or 9 versions of it (times 2 for release and debug each needing a separate shell environment *shakes fist at Microsoft*).

Michaelsen: One

Posted Mar 2, 2013 3:53 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

Most of the time in BJam was actually spent building the dependency graph using a very non-optimal, dumb language (Jam). I worked a little bit on trying to make it go faster, but ultimately had to quit.

I actually quite liked BJam's idea of resources that can be automatically built and linked into a project. Good idea, bad implementation.

Michaelsen: One

Posted Mar 2, 2013 20:22 UTC (Sat) by nix (subscriber, #2304) [Link]

I loathe Jam because it is incredibly slow, needs bootstrapping before you can compile which is even *more* complexity, is hilariously unconfigurable (you can't even change the CFLAGS without hacking the build system, as far as I can see) and is punishingly slow. It takes 21s of churning on my not-at-all-slow system before it can even *start* running the compiler for a Boost build, and Boost isn't a very large project as these things go. The fact that it includes an actual message '...patience...' which is printed out while it thinks interminably about the dependency graph is not a good sign.

(I hope Boost is transitioning over to CMake completely, and that it's actual upstream CMake. Boost is one of only two projects I still use that has its own build tool because no existing one is good enough for it, and the other is, oh yes, LibreOffice.)

Michaelsen: One

Posted Mar 2, 2013 21:33 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

>you can't even change the CFLAGS without hacking the build system
You certainly can: "bjam release compileflags=blah"

Bjam is really slow, yes. It would be great if someone rewrote it in a nice real language (Python), but nobody is really interested. That's a shame, because the underlying ideas in Boost.Build are really interesting.

Michaelsen: One

Posted Mar 1, 2013 18:33 UTC (Fri) by daglwn (subscriber, #65432) [Link]

Yep. $(eval) is a killer make feature. I use it a lot to do just the dynamic dependency generation you describe.

Michaelsen: One

Posted Mar 1, 2013 11:52 UTC (Fri) by Sweetshark (guest, #89619) [Link]

A few points on this:
- This was started in 2010, ninja just created their repo back then
- compared to LibreOffice, a 1000-target project is tiny. Even the sw/ (Writer) submodule has 1000 targets -- and then there are >200 other modules
- "null builds in null time" is of course a cheat, as nobody writes ninja rules by hand -- it's used as e.g. a backend for CMake. That means CI tools like tinderboxes always have to run both, thus killing the performance gains
- when we checked CMake in 2010 it still generated recursive makefiles -- something that is truly to be avoided
- if you look at http://cgit.freedesktop.org/libreoffice/core/tree/sw/Libr... you will find the makefiles that define targets aren't that different from what CMake does

As for using CMake: when your project takes three years to migrate because you are building some 100000 targets on three platforms, you worry about your dependencies. OpenOffice got burned by this, as they needed to become the maintainers of dmake when upstream dried up. GNU Make OTOH:
- will be around for a long time
- is 33KLOC according to ohloh, so if we had to take it over it would be rather doable
- CMake OTOH is 782KLOC and does not build itself, so you need to add at least one native code generator on each platform to that. You don't want to become the nonvolunteer heir to that.

Linus once said: "The slogan of Subversion for a while was "CVS done right", or something like that, and if you start with that kind of slogan, there's nowhere you can go." I feel the same way about CMake as it keeps the faulty assumption that it is a bright idea to generate Makefiles -- at which point, when you run in trouble, you have four things to debug:

1/ The target definitions given to the makefile generator
2/ The makefile generator (CMake)
3/ The make implementation/replacement (e.g. ninja -- but possibly even a different one for each platform)
4/ funky interactions between all of the above

There is a joke that the unofficial slogan of LibreOffice is "based on technology breaking your toolchain since 1985". In fact, that is only half a joke -- and the toolchain is to blame just as often as LibreOffice is, if not more. We have already found quite a few bugs in GNU make in corner cases, and that is a truly mature and tested and -- as said above -- _small_ project. I don't want to imagine how many phantom bugs we would chase with something more complex.

Now, if you want to prove me wrong: it should be possible to generate ninja files from GNU make. While that would not be an essential asset for the LibreOffice project, if this is your itch to scratch, you are invited. ;)

see also: http://skyfromme.wordpress.com/2013/02/28/one/comment-pag...

Michaelsen: One

Posted Mar 1, 2013 12:02 UTC (Fri) by boudewijn (subscriber, #14185) [Link]

"As for using for CMake, when your project takes three years to migrate because you are building some 100000 targets on three platforms you take concern of your dependencies. OpenOffice got burned by this as they needed to because the maintainers of dmake when upstream dried up. GNU Make OTOH will be:
- around for a long time"

All of KDE uses CMake and is a hell of a lot bigger than LibreOffice, and has been using CMake since 2007. We're not afraid that CMake will disappear -- it's too widely used for that. We also didn't need three years to migrate, but maybe KDE just started out with a saner system (although using the word "sane" in connection with autotools is kind of insane).

Anyway, reading http://wiki.openoffice.org/wiki/Build_System_Analysis#CMake, I am not impressed by the technical reasoning there: most of the problems raised are just plain wrong.

Michaelsen: One

Posted Mar 1, 2013 12:22 UTC (Fri) by Sweetshark (guest, #89619) [Link]

"We're not afraid that CMake will disappear -- it's too widely used for that."
Still, the main steward is KDE, and the 'widely used' derives from there. So it's kind of tautological that KDE isn't afraid to lose 'their' tool. OTOH, tightly linking OOo's fate to KDE in 2010 would have been risky. Esp. since KDE is relevant only on one platform.

"... maybe KDE just started out with a saner system."
Absolutely. The cost was mainly associated with what we migrated from, not to. The definition of what to build is now abstract enough that it could probably even be translated automatically to CMake or whatever for 80% of the targets. Except that there is currently little incentive to do so, and a lot of special-case handling would need to get added to CMake for little gain.

Esp. since KDE is relevant only on one platform

Posted Mar 1, 2013 12:35 UTC (Fri) by Wol (guest, #4433) [Link]

Which platform is that? Windows?

Sorry for being facetious, but I gather Qt (and, presumably, KDE) has Windows as a *targeted* environment.

Unlike Gnome, which seems to be rather Windows-hostile, despite the gtk toolkit being available there.

Cheers,
Wol

Esp. since KDE is relevant only on one platform

Posted Mar 1, 2013 12:54 UTC (Fri) by Sweetshark (guest, #89619) [Link]

"Sorry for being facetious, but I gather Qt (and, presumably, KDE) has Windows as a *targeted* environment."

A valid technical point, but the economics have to be factored in too: the size of the user base and developer base of KDE on Windows is not yet that big.

I'm not saying anything about the prospects or technical goals of the initiative, but from an economic standpoint that had little impact on the decision back in 2010.

Esp. since KDE is relevant only on one platform

Posted Mar 1, 2013 12:58 UTC (Fri) by hummassa (subscriber, #307) [Link]

I would go even further: there is no functional Windows KDE nor kdelibs nor kdelibs app.

Esp. since KDE is relevant only on one platform

Posted Mar 1, 2013 18:23 UTC (Fri) by bjartur (guest, #67801) [Link]

Sure there is. Marble for example.

Esp. since KDE is relevant only on one platform

Posted Mar 3, 2013 1:15 UTC (Sun) by hummassa (subscriber, #307) [Link]

Marble is sub-functional, even on Linux. Sorry.

Esp. since KDE is relevant only on one platform

Posted Mar 2, 2013 9:11 UTC (Sat) by Sho (subscriber, #8956) [Link]

For the record, I use various KDE apps regularly on Windows - Okular for document viewing, Gwenview for image viewing, Kate as my text editor - and they work well in my experience. There are some remaining problems on the platform integration front (e.g. working with network paths), but considering I get work done with them I'd call them quite functional.

Esp. since KDE is relevant only on one platform

Posted Mar 3, 2013 1:14 UTC (Sun) by hummassa (subscriber, #307) [Link]

Which versions, and where did you download them, or how did you build them? I am trying to make this (and OSX) work, without any success for some time now.

Esp. since KDE is relevant only on one platform

Posted Mar 3, 2013 4:03 UTC (Sun) by mathstuf (subscriber, #69389) [Link]

My sister has gotten hooked on kshishen, kpatience, and other games shipped with KDE, so I usually have to end up putting them on her laptop. I don't remember which version she has at the moment, but I think it's 4.5.x or 4.6.x. I used installers available[1] from the KDE Windows team.

[1]http://windows.kde.org/

Esp. since KDE is relevant only on one platform

Posted Mar 4, 2013 7:22 UTC (Mon) by Sho (subscriber, #8956) [Link]

I'm using KDE's Windows installer. I've also built a bunch of individual apps from their git master branches on top of that, which was easy to do by installing the devel packages from the installer, installing MinGW and CMake separately, and setting up some environment vars (by running a .bat file as documented on the KDE Windows wiki). It's basically just cmake, make, make install, just as on Linux.

Michaelsen: One

Posted Mar 1, 2013 12:45 UTC (Fri) by boudewijn (subscriber, #14185) [Link]

"Still the main steward is KDE, and the 'widely used' derives from there. So its kind of tautological that KDE isnt afraid to loose 'their' tool. OTOH, linking OOo fate to KDE in 2010 tightly would have been risky. Esp. since KDE is relevant only on one platform."

Well, no, not really. KDE isn't the main steward, that is Kitware. The KDE guy who worked hardest on the migration now works for Kitware on CMake, but KDE actually doesn't steward CMake in any meaningful way.

And CMake is widely used outside KDE as well. For instance, most of the free vfx/movie software uses CMake (like OpenColorIO), and there are many other free and closed projects. But KDE probably is the biggest project using CMake, just like it's the biggest project using Qt.

Michaelsen: One

Posted Mar 1, 2013 13:03 UTC (Fri) by Sweetshark (guest, #89619) [Link]

Let me put it this way:

LibreOffice uses a build system now that:

- does not do recursive make
- has a full dependency tree
- hasn't yet been found to be limited in scalability
- simplifies the work of the developers
- works on all platforms
- and is not using automake or dmake or build.pl

Do we really need to bikeshed over the tools it used to achieve that? If so, why?

Michaelsen: One

Posted Mar 1, 2013 13:18 UTC (Fri) by boudewijn (subscriber, #14185) [Link]

"bikeshed"? Nice way to shut down a discussion. All I wanted to make clear is that the assumptions the OpenOffice people used way back were just wrong. They rejected a perfectly good tool for the wrong reasons. As were your remarks on the time needed to migrate to CMake, the probably longevity of CMake or the suitability of CMake for really big projects.

You may be right that LibreOffice is in makefile heaven now, and I'm fine with that. Just don't answer an honest question like "I don't understand why they didn't just convert to CMake" with incorrect statements about other tools.

Fine, you rejected CMake. You did so for the wrong reasons, however.

Michaelsen: One

Posted Mar 1, 2013 13:28 UTC (Fri) by ovitters (subscriber, #27950) [Link]

So after all the work is done and perfectly happy, why not be impressed?

I don't get these "why didn't you use X" after the work has been done.

Michaelsen: One

Posted Mar 1, 2013 15:57 UTC (Fri) by welinder (guest, #4699) [Link]

> So after all the work is done and perfectly happy, why not be impressed?

Indeed. That is the first order of business.

The next, I think, is to suggest to the Make guys that LibreOffice is
now an obvious profiling case waiting to happen.

Michaelsen: One

Posted Mar 1, 2013 14:00 UTC (Fri) by epa (subscriber, #39769) [Link]

Doesn't this point still stand?
- when we checked CMake in 2010 it still generated recursive makefiles -- something that is truly to be avoided
Or is CMake now generating a global makefile so all dependencies can be analysed at once? (From the article, I understood that giving 'make' a global view of the project was the most important thing giving a speedup.)

Michaelsen: One

Posted Mar 1, 2013 17:47 UTC (Fri) by wahern (subscriber, #37304) [Link]

Not only does a global view give a speed up, it's the _only_ way any build system should work. Not having a global view of the build is fundamentally broken.

I've non-recursive-make-ized a tree at least as big as LibreOffice. CMake just isn't an option for large, heterogeneous trees where you're doing lots of crap like generating code which generates code which generates code in an ad hoc manner. CMake makes too many assumptions to try to make the simple cases dead-simple; those assumptions become road-blocks for the hard cases.

Make is very elegant in its simplicity, but CMake totally throws this elegance out in order to cater to cookie-cutter projects. CMake targets the mid-scale project. Make is better suited to the very simple and the very complex.

I was originally going to post saying that 37 seconds is too long, because I've gotten it down to <5 seconds for comparably sized trees. But then I remembered where all the slow parts were, and that LibreOffice just may have been doing lots of run-time dynamic dependency generation, which often requires lots of disk I/O.

Michaelsen: One

Posted Mar 1, 2013 19:56 UTC (Fri) by k8to (subscriber, #15413) [Link]

CMake is also significantly more workable when one of your major platforms is Windows.

But agreed that the makefiles it produces are anything but speedy.

Michaelsen: One

Posted Mar 2, 2013 11:00 UTC (Sat) by cortana (subscriber, #24596) [Link]

Check out the paper on the tup build system web page. It at least opened my eyes to alternatives to a global view that still guarantee that a build is reproducible/correct etc. I haven't used a tup-like system in practice, however.

Michaelsen: One

Posted Mar 3, 2013 0:04 UTC (Sun) by Sweetshark (guest, #89619) [Link]

"I was originally going to post saying that 37 seconds is too long, because I've gotten it down to <5 seconds for comparably sized trees."

Note that the 37 seconds includes 8 seconds for running some basic unittests and was on the slower GNU make 3.82. Excluding the unittests and running on make 3.81 brings it down to <20sec.
see also:
http://skyfromme.wordpress.com/2013/02/28/one/comment-pag...

Michaelsen: One

Posted Mar 1, 2013 13:23 UTC (Fri) by pboddie (subscriber, #50784) [Link]

If the editor could find someone to write an article about the sociological (or other) reasons for people crossing the boundary, starting out on one side by just letting others know about tools that may be useful, and ending up on the other side refusing to accept that others might use tools other than the one they personally favour, then I would find that worth reading.

Sometimes I suspect ulterior motives, like people's businesses depending on a steady stream of adopters of whichever tool or product is being promoted, or the ability to point to a long list of users as a way of building reputation, but having seen this behaviour recur rather often, I find it quite destructive.

Michaelsen: One

Posted Mar 1, 2013 13:52 UTC (Fri) by wazoox (subscriber, #69624) [Link]

That's not that hard. People are petty, lazy, quick to see other's mistakes while forgetful of their own, and somewhat stupid. I'm no exception by the way :)

Michaelsen: One

Posted Mar 1, 2013 16:19 UTC (Fri) by pboddie (subscriber, #50784) [Link]

Yes, but none of that explains some of the petty "don't let people know about that other thing" antics, often wrapped up in "we must make choices easier (by removing alternatives to my project)", that I've seen in my time.

Michaelsen: One

Posted Mar 1, 2013 16:47 UTC (Fri) by raven667 (subscriber, #5198) [Link]

Uncertainty is stressful, so being presented with alternatives can cause stress, which is one reason why people are naturally resistant to new ideas.

Evaluating alternatives

Posted Mar 1, 2013 23:44 UTC (Fri) by man_ls (subscriber, #15091) [Link]

Alternatives are not free, evaluation takes time. If I have to select a tool and there are only two alternatives, the job is much easier than if there are 200.

A good selection process will first cull crazily unfit alternatives, then discard unfit alternatives, finally discuss the relative merits of viable candidates and use a suitable measure, e.g. time to migrate. But unfortunately things change all the time, so unless you are in continuous evaluation mode all the time there will come other alternatives along the way after you have made the choice, or the first round selection, or whatever costly step you are currently in. So when people ask "why didn't you use Y which is much better than your X?" it may take considerable effort just to answer that question. And 90% of the time people will not listen to the reasons given and just keep asking nonsense, so it is often better to say "it looked like a good idea at the time" and get on with your life.

Another solution: try to postpone decisions as late as possible, and then choose one option based on your instincts and stick with it no matter what. At least you will refine your instincts as time passes.

Evaluating alternatives

Posted Mar 2, 2013 17:46 UTC (Sat) by pboddie (subscriber, #50784) [Link]

I'm not denying that a huge choice of alternatives can cause indecision and even stress, but when the idea is to just document things that are out there, leaving it to other resources to weigh up whether any of them are any good, it's interesting to observe the tendencies of some people to actively limit the scope of things that are effectively catalogues of resources.

It's like saying that 90% of the people in the phone book (or online equivalent) should be removed because they aren't famous enough. It would be absurd to pander to such demands because "people might be confused into thinking Mrs J. Random is a celebrity that someone might want to book for their party", which makes me think that either the more insistent advocates are unable to deal with the idea of different kinds or layers of resources or there's another agenda involved.

Michaelsen: One

Posted Mar 3, 2013 2:53 UTC (Sun) by rgmoore (✭ supporter ✭, #75) [Link]

Fanboys exist in every sphere, not just the choice of programming tool. I've never seen a 100% satisfactory explanation for the phenomenon, but given how widespread it is, I think it must have deep roots in human psychology.

Fanbois

Posted Mar 3, 2013 16:30 UTC (Sun) by bjartur (guest, #67801) [Link]

People like those who are similar. A world where more people choose like I do would be a world with more likable people and thus a better world. And the people would be better for they were making superior choices, my choices. It would be a world with more of 'us', even if with fewer of 'them'.

Michaelsen: One

Posted Mar 1, 2013 18:36 UTC (Fri) by daglwn (subscriber, #65432) [Link]

> I feel the same way about CMake as it keeps the faulty assumption that it
> is a bright idea to generate Makefiles -- at which point, when you run in
> trouble, you have four things to debug:

Right on. This is also the fundamental flaw of automake and friends. It's not just debugging multiple things, it's developer confusion (do I check in a generated Makefile?) and end-user frustration (What?!? I have to install automake?!?).

Michaelsen: One

Posted Mar 1, 2013 19:44 UTC (Fri) by Serge (guest, #84957) [Link]

> It's not just debugging multiple things, it's developer confusion (do I check in a generated Makefile?)

What are the options here? A developer could also be confused by generated *.o files.

> and end-user frustration (What?!? I have to install automake?!?).

Assuming the distributed source package was generated with `make dist-bzip2` (and optionally checked with `make distcheck`), the end user does not need automake to build the source, just shell+make.

Michaelsen: One

Posted Mar 1, 2013 20:00 UTC (Fri) by k8to (subscriber, #15413) [Link]

I can say, as a user of many source-distributed packages, that a majority of them are produced by developers who don't understand the correct way to bless a package for release.

Not only do I have to install autoconf and automake, I frequently have to install the correct versions of them. Sometimes I have to debug the tools too, or hack around them by changing their wrong answers.

Michaelsen: One

Posted Mar 1, 2013 21:15 UTC (Fri) by daglwn (subscriber, #65432) [Link]

Said end user DOES need automake if he or she wants to fix a buggy build system, which is not unusual to encounter.

Which is the whole point of Free Software.

Michaelsen: One

Posted Mar 7, 2013 10:39 UTC (Thu) by wookey (subscriber, #5501) [Link]

I've used cmake quite a lot and liked it. And I have had a lot to do with autotools and make, and a little to do with other build systems (perl, apache's apr, makemaker). I'm sure you are correct that recursive make is not particularly fast, but it's not something that's ever got in my way (unlike the speed of configure, which for a smaller project can be nearly all the build time).

I spend time building and cross-building distro packages. That has different pain points from developers building the same thing over and over again. We always build everything, for a start. Speed just needs to be 'adequate'. What does matter (to me at least) is that it crosses as well as it builds natively, and most build systems fall down badly on this point. Autotools has many bad points but it does do a pretty good job of crossing. CMake is also perfectly adequate (from 2.4 onwards), although it's much easier to write a non-cross-friendly CMake file.

Other systems are generally hopeless: apr clearly intends to support this but seems to be broken by design and is full of seddery to fix it up; perl's system is a massive PITA, saying the same thing tens of times in different ways, and being useless if the target hardware is not available yet. MakeMaker and most of the other less-common build systems I've had anything to do with just have no concept of cross-building and so don't do the right thing. Most homebrew makefile systems are no use in this regard, and even ones which try to do this (nss) make a mess of it (nss can cross from one OS to another but not between two architectures on the same OS, Linux x86 -> Linux ARM for example, without patches). I've not had to fight scons yet, so maybe that's OK.

Is ninja any good at this? tup? ant? As a distro/build person my plea would be: please don't use anything that can't get cross-building right. I guess I'll find out if the new libreoffice build system gets this right soon enough.

On a related point, I do think that cmake's 'work out what to do _on the build system_ and then do it' scheme is a great deal more sensible than autotools' 'write a load of complicated stuff that can be shipped to another machine where _that_ run will actually configure and build things'. This just adds a load of extra complexity that doesn't help at all. Autotools remains a great deal more popular sadly, but it does at least work pretty well, even if it is slow and ugly. Ultimately that's what matters. If we can have working, crossable _and_ fast then everybody will be happy. I'm not sure such a thing exists yet.

Michaelsen: One

Posted Mar 7, 2013 14:53 UTC (Thu) by ortalo (subscriber, #4654) [Link]

Canadian cross-building (development machine != build machine != run machine) was in fact a requirement for some early use cases of autotools. IIRC, that's where the "generate a program that, when run, does the build" idea appeared: in order to manage differing development and build systems (not to speak of the actual target CPU).

Personally, I have never faced such a situation in practice, BTW.

Michaelsen: One

Posted Mar 14, 2013 3:02 UTC (Thu) by daglwn (subscriber, #65432) [Link]

> Personally, I have never faced such a situation in practice, BTW

Well, it's typically used for building compilers, so unless you work on compilers you would have been unlikely to encounter it. :)

Compilers for embedded systems are tricky. It's not uncommon to have to build the compiler on a host, run it on another machine and generate code for a third.

Michaelsen: One

Posted Mar 7, 2013 15:35 UTC (Thu) by mathstuf (subscriber, #69389) [Link]

> Is ninja any good at this?

Ninja itself isn't really something people are meant to write by hand. CMake or some other tool is meant to generate the build.ninja file for you. I believe there is some Python code shipped with ninja which can be used to generate rules.

> tup?

Doubtful. With install rules not being understood natively, it may build, but getting it not to go to /usr/local might be an issue (if it installs at all). IME, nobody other than packagers and (proper) build system authors really knows that DESTDIR exists, never mind that it's kind of required.

> ant?

I've never seen this used for anything other than Java. What does cross-compiling even mean there?

Michaelsen: One

Posted Mar 2, 2013 2:38 UTC (Sat) by mathstuf (subscriber, #69389) [Link]

> CMake OTOH is 782KLOC

To be fair, CMake ships with copies of libarchive, libcurl, expat, zlib, and bzip2 (for ease of building on Windows). The Source/ directory is "only" 288k lines (via wc -l). Modules/ is another 55k of which most could be tossed out for LibreOffice if it came to that. 90k lines come from the bzip2 manual.ps file.

That said, I don't think CMake or GNU Make is going anywhere anytime soon. They're both used many many places and I'm sure between the distro maintainers, there'd be at least some support to keep things rolling as necessary.

Michaelsen: One

Posted Mar 1, 2013 15:44 UTC (Fri) by Serge (guest, #84957) [Link]

The major difference between build systems is what you need to build the software from source. And that's what makes `make` great — you need nothing but `make`. For the same reason autotools are good — when done right (i.e. `make dist-bzip2`) to build it you need just shell+make.

With properly done autotools project you won't get compatibility problems (like "I can't build software just because my cmake version is different from the one used by developer"). And you'll still get all the features (like "now I need a static build, what's the cmake equivalent of --disable-shared --enable-static?").

Other build systems are trying to make things easier for developers. But autotools also make it easy for users. And it's important because there are always more people using the software than people developing it.

Michaelsen: One

Posted Mar 1, 2013 16:32 UTC (Fri) by HelloWorld (guest, #56129) [Link]

autotools does *not* make it easy for users. The whole notion of an end user compiling someone else's code is insane, as is everything else that is more complicated than double-clicking an rpm file.

Michaelsen: One

Posted Mar 1, 2013 17:29 UTC (Fri) by meuh (subscriber, #22042) [Link]

A quite famous quote: UNIX is user-friendly, it just chooses its friends (source).
Your users are different from the users targeted when GNU software was first distributed on tape. At that time, autotools was trying to make it easy for those users.
Users always need to be defined for the concept to be relevant.

Michaelsen: One

Posted Mar 1, 2013 17:37 UTC (Fri) by marduk (subscriber, #3831) [Link]

> The whole notion of an end user compiling someone else's code is insane, as is everything else that is more complicated than double-clicking an rpm file.

This "insane" Gentoo user is speechless :-o

Michaelsen: One

Posted Mar 7, 2013 15:01 UTC (Thu) by ortalo (subscriber, #4654) [Link]

I have never used Gentoo, but am speechless too!
It's so sloooowwww to use the mouse when you just need to type ten keys or so to "make world" (not to speak of the time to reinstall those multi-megabyte binary packages versus regenerating a few small .o files).

Michaelsen: One

Posted Mar 1, 2013 17:37 UTC (Fri) by JoeBuck (subscriber, #2330) [Link]

autotools were written to solve a problem that has faded in importance: get the GNU tools to build on dozens of flavors of Unix/Linux/BSD systems, each of which had many different versions that can affect whether features are present, where many of the flavors have serious deficiencies in their support of basic Posix functionality.

Michaelsen: One

Posted Mar 1, 2013 17:57 UTC (Fri) by wahern (subscriber, #37304) [Link]

+1

POSIX support is really phenomenal these days across various systems. Gone are the days when the same routine behaved differently across platforms. IME, using autotools is generally more complicated than, e.g., handling linker flags manually. Just keep a copy of the OpenGroup HTML spec on your desktop.

I really upped my POSIX game recently when I installed a Solaris VM. If I can make some code work on Solaris, it's pretty much guaranteed to work everywhere else.

I do cheat and use GNU Make. It makes it easier to ifdef make code.

Michaelsen: One

Posted Mar 1, 2013 22:00 UTC (Fri) by daniels (subscriber, #16193) [Link]

> POSIX support is really phenomenal these days across various systems. Gone are the days when the same routine behaved differently across platforms

Take it from someone who used to work on X11 until quite recently: every single word in that sentence is totally incorrect.

Even just getting your definitions exposed across platforms is an amazing dance. Does #define _BSD_SOURCE mean 'expose extra BSD functions' or 'pretend to be BSD libc and don't expose anything it does by default'? What does _XOPEN_SOURCE mean, for all its versions? What is the content of a struct fd_set? It gets pretty existential pretty quickly.

Michaelsen: One

Posted Mar 1, 2013 23:07 UTC (Fri) by wahern (subscriber, #37304) [Link]

IME it's not that big of a deal, and I regularly maintain all of my projects across Linux, OS X, NetBSD, FreeBSD, OpenBSD, and Solaris. I also maintain a PORTING database (in my cqueues project) for portability issues, including issues with FD_SETSIZE, the .msg_iovlen type, the CMSG_SPACE macro, SCM_RIGHTS bugs (on OS X), NOSIGPIPE alternatives, shutdown(2) (more OS X bugs), fchmod on AF_UNIX, pselect oddities, and a recent bug (or, at least, regression) I found in Linux connect(2). Most of those are pretty esoteric, not that big of a deal, and in any event no amount of autotools hackery will fix them.

Here's the problem with features macros IME.

On Linux if you specify -std=c99, then that's exactly what you'll get... just C99 and nothing else. So then you need to _enable_ other interfaces. And this gets hairy, because what if you want something in POSIX but also something in BSD. So you set _XOPEN_SOURCE and _BSD_SOURCE.

The problem is that on NetBSD and FreeBSD you get the opposite behavior. With -std=c99 all the native interfaces are still exposed. But then you set _BSD_SOURCE and all of a sudden everything disappears. Now you're playing around with _XOPEN_SOURCE and friends, hitting conformance bugs and one's own misunderstandings.

In short, on Linux feature macros are additive, on NetBSD and FreeBSD they're more subtractive. OpenBSD and OS X are somewhere in between, and Solaris is byzantine.

Long story short, the best solution IME is to only ever set feature macros on Linux or Solaris. For everything else just don't set them, period. The vast majority of headaches will quickly disappear. For unit files where you really want conformance checking, fiddle with them if you want. And of course, if you steer clear of all non-standard APIs (rare, because there are so many useful ones which are also widely supported), then actual _XOPEN_SOURCE conformance really is pretty good, IME.
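
For illustration, a minimal sketch of the additive behaviour described above, assuming glibc's 2013-era semantics; the particular macro and function chosen are only an example, not from the comment:

/* With a strict -std=c99 (or -std=c++11) build on glibc, nothing beyond the
 * base standard is declared unless a feature macro is defined before any
 * header is included. */
#define _XOPEN_SOURCE 700   /* additively request POSIX.1-2008 / XSI interfaces   */
                            /* (_BSD_SOURCE was the era's way to add BSD extras)  */
#include <string.h>
#include <stdlib.h>

int main(void)
{
    /* strdup() is POSIX, not C99, so without the macro above a strict
     * glibc build would not declare it. */
    char *copy = strdup("feature macros are additive here");
    free(copy);
    return 0;
}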

Michaelsen: One

Posted Mar 1, 2013 23:20 UTC (Fri) by daniels (subscriber, #16193) [Link]

I'm not entirely sure how you reconcile such a massive comment with 'not that big of a deal'.

Michaelsen: One

Posted Mar 1, 2013 23:46 UTC (Fri) by wahern (subscriber, #37304) [Link]

For something like fchmod on AF_UNIX files, there's no workaround. The behavior is different across platforms, and you either live with it or don't bother trying to use it at all.

For missing features, like a CMSG_SPACE macro, you either provide your own or don't use them.

For broken behavior, like pselect on OS X, NetBSD, and OpenBSD, or the litany of OS X bugs which come and go every release, what can you do? You can't easily paper over system interfaces. Usually they're system interfaces precisely because they couldn't be done from user-land well or at all.

In all cases, I found autotools and its framework for compile-time introspection to be no better than other, dependency-less techniques. Long story short, the cost/benefit isn't there for autotools anymore, IME. C and POSIX conformance has crossed a threshold where you can deal with each individual issue without involving a framework.

Rather than spend my time writing code to check for these errors, where practical I instead submit bug patches to all the relevant projects. It's harder for Apple, but not impossible. I've submitted a fix to a conformance issue with htons and htonl which quickly made it into the next release, for example.

Michaelsen: One

Posted Mar 7, 2013 15:10 UTC (Thu) by ortalo (subscriber, #4654) [Link]

To me, your view sounds sensible with respect to the evolution over the last decade.
However, I would really like it if daniels could also give us some examples of the things that make him doubt such homogeneity in POSIX support.
(I suspect the X11 code base really is a nice source of experience on that topic.)

Michaelsen: One

Posted Mar 1, 2013 18:42 UTC (Fri) by dps (subscriber, #5725) [Link]

GNU autoconf is very good at figuring out fine differences between un*x systems including, but not limited to, the location of libraries and what functions are available. A lot of developers think this is worth a lot.

I can understand the sort of end user who thinks in terms of double-clicking on an rpm not being aware of the benefits to those who maintain and build these packages. There are reasons why sane end users might need or want to build their own version, including having an older version of a library than the developer does.

My work, where I am a developer, is an end user for lots of things, and we need to recompile some of them. If we did not, then required extra patches would not be applied, and things that should not be installed on internet-facing servers would have to be installed on them.

Michaelsen: One

Posted Mar 1, 2013 18:54 UTC (Fri) by daglwn (subscriber, #65432) [Link]

> the location of libraries and what functions are available. A lot of
> developers think this is worth a lot.

The first I agree with, the second not so much.

I have never understood why developers bother to write code that uses system function "foo" on one type of system but also provides an implementation of "foo" for systems that don't have it.

Just use the custom "foo"! I don't want my code littered with #ifdefs.

But in any case, autoconf was fine in its day but it doesn't scale and isn't suitable for many of the large projects we see today.

Michaelsen: One

Posted Mar 1, 2013 20:32 UTC (Fri) by zlynx (subscriber, #2285) [Link]

The system strlcpy and strndup will probably be faster than my for-loop version that copies one byte at a time.

Of course in some cases it is the opposite. Windows, for example, is hideously slow executing its inet_ntop when it should be a simple sprintf call for IPv4.

Michaelsen: One

Posted Mar 1, 2013 23:27 UTC (Fri) by wahern (subscriber, #37304) [Link]

Not true for strlcpy. NetBSD and FreeBSD just use OpenBSD's version. I think OS X might have rolled their own using memcpy and strnlen, for which I believe they have optimized assembly implementations, but that's kind of a wash at the end of the day.

strndup is POSIX and available everywhere I've seen, because all the commonly used Unix variants are pretty up-to-date (bleeding edge by historical standards, although the BSDs are a little behind on some of the new interfaces RedHat has been adding). But in any event, it's trivially just strnlen + malloc.
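
For illustration, a minimal sketch of the "strnlen + malloc" fallback described above, assuming a platform that provides strnlen (POSIX 2008); the my_strndup name and the HAVE_STRNDUP guard are placeholders, not part of any real project:

#include <stdlib.h>
#include <string.h>

#if !defined(HAVE_STRNDUP)   /* hypothetical configure-style guard */
/* Copy at most n bytes of s into a freshly malloc()ed, NUL-terminated buffer. */
static char *my_strndup(const char *s, size_t n)
{
    size_t len = strnlen(s, n);            /* length of s, capped at n */
    char *copy = (char *)malloc(len + 1);
    if (copy == NULL)
        return NULL;
    memcpy(copy, s, len);
    copy[len] = '\0';
    return copy;
}
#endif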

Windows, of course, has neither strndup nor strnlen. But portability across Windows and Unix is just a sick joke. There is no elegant solution except re-writing 20 years of POSIX evolution, or switching to some non-C environment, like Qt.

Michaelsen: One

Posted Mar 1, 2013 22:02 UTC (Fri) by daniels (subscriber, #16193) [Link]

X11 used to have a lot of open-coded strcasecmps for exactly that reason. #ifdefs are bad, let's just reimplement them ourselves! Then we find in profiling that our badly-reimplemented (and buggy) strcasecmp is like 30% of the server startup time. Oops.

Several years of doing that gave me a deep appreciation of the systemd/Wayland approach to portability.

Michaelsen: One

Posted Mar 1, 2013 23:59 UTC (Fri) by wahern (subscriber, #37304) [Link]

Case in point: strncasecmp is supported everywhere now (I can't confirm AIX or HP-UX, though). Times have changed. Conformance is heads-and-shoulders better than it was 10 years ago.

systemd/Wayland is at the opposite extreme. They refuse portability not because of trivial issues, but because their favorite kernel features don't exist everywhere else. They've chosen a no-compromise approach.

What's often left out of their diatribes about the BSDs is that they often have features which Linux lacks, or implement some features in superior ways. Yet they're fine settling with the Linux deficiencies.

At the end of the day systemd is Linux-only because that's a simple decision they made. It was their choice. But to compare their gripes with the nightmare that was Unix compatibility 10 or 20 years ago is just silly.

Michaelsen: One

Posted Mar 2, 2013 0:02 UTC (Sat) by daniels (subscriber, #16193) [Link]

Yes, the BSDs do have their own versions of features which are available in Linux. But just because something is there with feature parity doesn't mean it's zero-cost to integrate. They have completely different interfaces - which puts you in #ifdef hell yet again - and subtly different semantics that can make them very difficult to integrate, even behind an abstraction.

So I'm not sure how that kind of divergence helps your argument in any way.

Michaelsen: One

Posted Mar 2, 2013 1:58 UTC (Sat) by wahern (subscriber, #37304) [Link]

Specific examples please?

I ask because my point is that times have changed. There was a time not too long ago when one was constantly confronted with different interfaces for even trivial operations. But I don't think that it's that way anymore.

There are differences for non-trivial operations, like O(1) polling, file notification, process management, etc. But people make too much of these and falsely equate them with the portability nightmare that once existed. These differences are well known, stable, and frankly not that difficult to surmount unless you have very specific and peculiar requirements.

I've written small libraries from scratch to support O(1) polling, pollable signals, file change notifications, and similar interfaces across Solaris, the *BSDs, and Linux. After having tackled them, I can tell you that they're not that big of a deal. Some projects have a gigantic mess of ugly code to wrap these interfaces, but in my repeated experience dealing with many of these issues, I can say that the common wisdom--that you need a complex library to obfuscate and abstract them--is mostly misguided.

For example, regarding O(1) polling, the existing libraries are overly complex because they were started many years ago, and most of the complexity comes not from being cross-platform, but from supporting select or poll.

If you look at the _present_ state of affairs, with the _present_ state of POSIX conformance and feature parity across platforms, portability has never been easier.

Michaelsen: One

Posted Mar 2, 2013 2:21 UTC (Sat) by daniels (subscriber, #16193) [Link]

Yes, it's definitely a hell of a lot easier than before, but that doesn't make it easy (or necessarily worthwhile) in absolute terms.

Michaelsen: One

Posted Mar 2, 2013 19:38 UTC (Sat) by renox (subscriber, #23785) [Link]

>systemd/Wayland is at the opposite extreme. They refuse portability not because of trivial issues, but because their favorite kernel features don't exist everywhere else.

This isn't true for Wayland: they received a patch to port the library to FreeBSD, it isn't integrated yet, but they plan to integrate it.

Michaelsen: One

Posted Mar 1, 2013 20:06 UTC (Fri) by welinder (guest, #4699) [Link]

> autotools does *not* make it easy for users

Very wrong!

autotools is what makes applications fairly easily available on systems that are different from the one the application was built on. That's true even if the end user isn't the one doing the compilation.

No autotools -> more work in porting -> fewer applications.

In other words, autotools make it easier to find an application.

Yes, autotools are heavy to dance with and severely show their age, but most alternatives I see hawked are based on pretending there is no problem and that everyone who disagrees can go **** themselves.

Michaelsen: One

Posted Mar 1, 2013 20:28 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

>autotools is what make applications fairly easily available on systems that are different from the one the application was built on.
Can I use autotools on Windows with MSVC? That's, like, the most popular development platform for C/C++.

MSVC

Posted Mar 2, 2013 2:23 UTC (Sat) by tialaramex (subscriber, #21167) [Link]

MSVC is _deliberately_ not a modern C compiler. MSVC targets C89, modulo certain features that by chance were later adopted in C99 after having been common in C++. That's Microsoft policy, there's no reason to waste your time trying to mess about getting stuff to work in spite of the vendor's clear policy that you're unwelcome.

You might as well argue that something is no good because it doesn't work with the 1980s K&R C compiler that came with your SunOS. Here's a dime kid, buy yourself a real compiler.

MSVC

Posted Mar 2, 2013 3:51 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

Get real.

Nobody in Windows development cares about pure C. And so Microsoft doesn't really care about updating its C compiler.

C++ compiler in MSVS and other support (autocompletion, refactoring, on-the-fly code analysis), on the other hand, is top-notch.

MSVC

Posted Mar 3, 2013 12:28 UTC (Sun) by tialaramex (subscriber, #21167) [Link]

OK, so we've disposed of one half of your claim. MSVC is not, in fact, "the most popular development platform for C". The remaining argument, which seems to be that C++ projects should choose their build system based on how convenient it is to use with a third party proprietary toolchain doesn't really stand up very well. Why should they make themselves hostages to fortune in this way?

MSVC

Posted Mar 4, 2013 0:26 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Yes, I meant "the most popular development platform for C/C++".

> The remaining argument, which seems to be that C++ projects should choose their build system based on how convenient it is to use with a third party proprietary toolchain doesn't really stand up very well. Why should they make themselves hostages to fortune in this way?
Because this toolchain is used on freaking 95% of PCs? Because the IDE in this toolchain is actually BETTER than any other C++ IDE?

MSVC

Posted Mar 4, 2013 9:49 UTC (Mon) by tialaramex (subscriber, #21167) [Link]

C and C++ are two quite distinct programming languages from the same family. When you try to lump them together you reveal profound ignorance which would be surprising on LWN if not for the rest of this thread where you've shown such ignorance on a wide array of topics.

Generally when we see this mistake it's from people who have mistakenly assumed that C++ is "the next version" of C, and no doubt are expecting Perl 6 to be the logical next step from Perl 5. Nope.

FWIW If you want to be a success on Windows you'll need to stop slavishly following the brand owner. Microsoft's continued existence is owed in some measure to their ISVs refusing to follow them when they run full tilt off a cliff. The internal teams don't take their own medicine either, that's why VSS was allowed to live for so long, nobody of consequence at Microsoft used the thing it was just being sold to outsiders.

MSVC

Posted Mar 4, 2013 15:24 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Yeah, yeah, I know. Nevertheless, on Windows plain C is not used much - people just use C++ because it's much nicer.

And anyway, any build system that claims to be portable MUST support MSVC whether you like it or not. It's the most used compiler on Windows and lack of its support means that your build system is instantly disqualified.

MSVC

Posted Mar 4, 2013 13:33 UTC (Mon) by jwakely (subscriber, #60262) [Link]

Wow, telling someone to get real in the same post as claiming MSVC is a top-notch C++ compiler. Well played.

MSVC

Posted Mar 4, 2013 14:17 UTC (Mon) by hummassa (subscriber, #307) [Link]

Funny, actually... :-)

MSVC

Posted Mar 4, 2013 14:20 UTC (Mon) by hummassa (subscriber, #307) [Link]

Although, to be fair, he didn't say that MSVS's C++ *compiler* was top-notch, but its C++ *IDE support* (autocompletion, refactoring &c)

MSVC

Posted Mar 4, 2013 15:17 UTC (Mon) by jwakely (subscriber, #60262) [Link]

Check it again:

> C++ compiler in MSVS and other support [...] is top-notch.

MSVC

Posted Mar 4, 2013 15:23 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

It IS a top-notch compiler. For example, its compilation speed is much better than that of gcc (which helps during development) and some of its features are absent in competitors (edit & continue - I simply LOVE it).

Its C++11 support is decent: https://wiki.apache.org/stdcxx/C++0xCompilerSupport and C++03 support is near perfect (except consciously omitted two-phase name lookup and template export).

MSVC

Posted Mar 4, 2013 17:13 UTC (Mon) by hummassa (subscriber, #307) [Link]

> Its C++11 support is decent:
> https://wiki.apache.org/stdcxx/C++0xCompilerSupport

according to the link, msvc's c++11 support is poor at best -- I would call it, more realistically, nonexistent; it has 15 missing items against just one for gcc 4.8 and two for clang 3.1. Not to mention that the missing items, like char types, raw string literals, user-defined literals, and freaking *constexpr*, are defining items of c++11.

MSVC

Posted Mar 4, 2013 17:41 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Nope. It's quite reasonable, most of the "main" C++11 features work just fine: lambdas, auto, decltype, move constructors (modulo a couple of known bugs). Microsoft can't really implement features as fast as GCC - they really care about compatibility and full support in all tools.

BTW, even GCC doesn't yet support several C++11 features. But we all know that GCC is totally useless, right?

MSVC

Posted Mar 4, 2013 17:52 UTC (Mon) by hummassa (subscriber, #307) [Link]

Come on, you chose to ignore the main part of my comment, where I argued that C++11 without attributes, alignas, and inheriting constructors (as in gcc 4.7) is still closer to c++11 than to c++03, but if you go on yanking constexpr AND defaulted and deleted functions AND char types AND literals AND non-static member initializers, then you are in a whole other realm.

MSVC

Posted Mar 4, 2013 17:58 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Not really. Of all the missing features only inherited constructors really hurt. The other features are "nice to have" or can be implemented using compiler-specific intrinsics (alignas).

From my practice - MSVC is most definitely NOT closer to C++03. Don't believe me? Look into Boost libraries for compiler-specific workarounds.

Anyway, and I'm repeating myself - it doesn't matter at all. MSVC is used by a huge number of projects and your Linux fanboi's automatic disdain of anything with the word "Microsoft" doesn't make it less so.

MSVC

Posted Mar 4, 2013 18:12 UTC (Mon) by pizza (subscriber, #46) [Link]

>Not really. Of all the missing features only inherited constructors really hurt. The other features are "nice to have" or can be implemented using compiler-specific intrinsics (alignas).

No, what really hurts is MSVC's C99 preprocessor support, or rather, lack thereof. Its lack of other C99 features (eg named initializers) is also quite frustrating, because their use results in far more maintainable code.

It's one thing when you have to write platform-specific hooks to handle OS differences, but it's another thing entirely when you can't use standard language features.

MSVC

Posted Mar 4, 2013 18:37 UTC (Mon) by hummassa (subscriber, #307) [Link]

Whoa, red herring followed by straw man followed by ad hominem. Cool moves!

First, the red herring: I didn't say a single word about the usefulness of MSVC. Gcc is useful, Clang is useful, MSVC is useful and Embarcadero is useful. People can make good programs in any of the above.

I was disputing your quote:

> Its C++11 support is decent

No, it isn't, not by a long shot. Clang>3.1 and Gcc>4.6 C++11 support is decent. Neither is complete. But -- bear in mind I am working in a "strictly c++11" project -- MSVC's is not enough. Constexprs are needed. Custom literals are important to the expressiveness and maintainability of a great part of our code. UTF8 is a requirement. Why should we fiddle with compiler-specific intrinsics if we can have clang 3.2 and gcc 4.8 and our program will run on Linux, OSX, FreeBSD AND Windows?

> From my practice - MSVC is most definitely NOT closer to C++03. Don't believe me? Look into Boost libraries for compiler-specific workarounds.

Here's the straw man: because you are not using enough of the C++11 goodies, why would anyone need to use them?

> your Linux fanboi's automatic disdain of anything with the word "Microsoft" doesn't make it less so

Oh, ignoring the ad hominem, I'll admit it: I am a Linux fanboi. I use two Linux desktops (at home and at work) and four or so mobile Linux devices (kindle, kindle touch, kindle fire, galaxy nexus phone). And an AppleTV with linux installed on it as a file server at home. And a colocated server. And a raspberry pi.

I *do* think Linux is great for a lot of things. And I *do* think FLOSS is great, for economic and ethical reasons.

Oh, but I use a Windows desktop at work too. And OSX in my notebook. And I have an iPad. This means I know I don't live in what I think would be a perfect world, where all software is FLOSS, and I have the maturity of training myself in other things.

Contrary to what you say, I don't even disdain proprietary software. I just happen to think it is harmful to society, but currently almost inevitable, like junk food. And I estimate that it is technically inferior to free software approximately 75% of the time.

I know MSVC's compiler is faster than g++. If you had cited good sources, I could even believe that it produced better/faster/smaller code, that it had less internal or codegen bugs, or that it had any other advantage.

But that was not what I was arguing. I was arguing that, compared to gcc and clang in the more recent than 2011 versions, MSVC's compiler does not have a decent c++11 support, or IOW, if you are planning (like my company is) on using C++11, you should steer away from it and stick to clang and gcc -- probably gcc would be better, because clang's stdlib is kind of c++11-buggy also right now and only works ok in OSX.

And I am not saying gcc is perfect... Gcc's stdlib's regular expressions are giving me too much grief lately (and we would also like to use them a lot).

MSVC

Posted Mar 4, 2013 21:07 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

>Constexprs are needed.
Arguable. They are a dirty hack that shouldn't have been included.

>Custom literals are important to the expressiveness and maintainability of a great part of our code.
Quite the converse - they don't add anything useful.

>UTF8 is a requirement.
And MSVC supports it just fine (for string literals).

> But that was not what I was arguing. I was arguing that, compared to gcc and clang in the more recent than 2011 versions, MSVC's compiler does not have a decent c++11 support, or IOW, if you are planning (like my company is) on using C++11, you should steer away from it and stick to clang and gcc
Why? The minor missing features are not a deal-breaker. All the major features (lambdas, auto and template goodness) are present. And bundled with the greatest ever C++ IDE (with real code analysis and refactoring!).

And yes, we actually have a sizeable body of C++ code that we are happily migrating to C++11. I can't say that I ever even wanted constexprs, and literals are simply irrelevant for us (what are they for?).

MSVC

Posted Mar 4, 2013 22:09 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

> And bundled with the greatest ever C++ IDE (with real code analysis and refactoring!).

<vim-user>Too bad it doesn't have a useful editor</vim-user> :P . When I am working on Windows, I typically use Vim to edit files, cmake --build to build them and only end up in VS if I need to get error lists from the unreadable dump the compiler spits out. Other than that, it's mainly to make sure that the file loads up correctly (auto-generated target names can't be too long after all), files for targets are sorted into folders correctly, etc.

MSVC

Posted Mar 5, 2013 5:48 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Actually, there's http://www.viemu.com/ for MSVC...

MSVC

Posted Mar 5, 2013 6:21 UTC (Tue) by mathstuf (subscriber, #69389) [Link]

Yeah, $100 when I could just share configs/plugins/etc. with my other machines and use the real thing for free. Plus, only the versions you pay for support extensions anymore[1], so there's another chunk of change. For as little as I actually develop *on* Windows, I'll stick with vim, git bash, and cmake --build to keep me from getting an aneurysm from having to deal with sub-par keyboard navigation.

[1] Which means I have to close VS after every CMake generate to avoid answering "yes, reload the target" 100+ times. There's a plugin that makes it ask just once, but again, that doesn't work for Express versions.

MSVC

Posted Mar 5, 2013 20:37 UTC (Tue) by hummassa (subscriber, #307) [Link]

You still seem not to be reading my entire comment, even if this time you didn't argue your points to death with fallacies. But, hey, I'm admittedly not perfect, so I'll try to clarify:

IN OUR PROJECT, constexpr, custom literals, and u8 literals ARE IMPORTANT. It may not be the case in your projects, but it is for us, and our project is well underway. They are not "minor missing features"; they would be dealbreakers. AND we have access to them via g++ -- we neither need nor intend to waste time tweaking the code for other compilers.

> (what are they for?)

constexpr is good for compile-time calculations, so the application starts faster;

custom literals (combining freely with constexpr) are especially good when you have many different units and you use type-safety to keep them all in check, e.g.,

auto len = 36km; // ourlib::units::distance len = 36000 (m)
auto passed = 1h; // ourlib::units::time passed = 3600 (s)
auto v = len/passed; // ourlib::units::velocity v = 10 (m/s)

when someone, for some reason, needs to put

auto newlen = 12miles;

somewhere, things keep working. We have a lot of polar-coordinate conversions also, and in some places in our thousands of constants it's easier and less error-prone to say:

constexpr ourlib::units::complex COMPLEX_THRESHOLD = 4delta * 30degrees;

than it would be to convert manually and say:

constexpr ourlib::units::complex COMPLEX_THRESHOLD = 2i + 3.46410161514;

(which is just plain wrong, because the real part would be imprecise) -- and, to boot, the constants would be calculated at compile-time.

MSVC

Posted Mar 5, 2013 21:36 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

Custom literals are just thin syntactic sugar (designed mostly to confuse).
>auto len = 36km; // ourlib::units::distance len = 36000 (m)
auto len = km(36);

>auto passed = 1h; // ourlib::units::time passed = 3600 (s)
auto passed = hour(1);

>auto v = len/passed; // ourlib::units::velocity v = 10 (m/s)
Works just fine as it is.

Hardly what I call 'critical'. Ditto for constexprs - they can already be optimized by the compiler and you'd be hard-pressed to do anything complex within a 512-level recursion limit. I've only seen constexprs in practice used for template brainfskery magic.

MSVC

Posted Mar 5, 2013 23:53 UTC (Tue) by hummassa (subscriber, #307) [Link]

As I said before, critical because we already have tens of thousands of SLOCs written in this particular dialect of C++11... we are not going to stop the work to reverse them just so we could compile the code with msvc... :-)

MSVC

Posted Mar 6, 2013 12:22 UTC (Wed) by jwakely (subscriber, #60262) [Link]

You still don't know what you're talking about. I don't care about user-defined literals but constexpr is very important. Do you consider solving the static initialization order fiasco to be unimportant?

A constexpr constructor ensures a global object is guaranteed to be initialized during the static initialization phase, i.e. before any code starts running, so code that runs before main() can be guaranteed the object will be valid, even across different translation units where the order of initialization is not defined. This is useful e.g. for a global std::mutex that is accessed by different translation units. That doesn't work with MSVC because its std::mutex is not only non-constexpr but it allocates memory in its default constructor.
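
For illustration, a minimal sketch of that guarantee, assuming a conforming C++11 compiler; the Counter and Registrar names are invented for the example, not taken from the thread:

struct Counter {
    int value;
    constexpr Counter(int v) : value(v) {}   // eligible for constant initialization
    void bump() { ++value; }
};

// Because the constructor is constexpr and the argument is a constant
// expression, g_counter undergoes constant (static) initialization: it is
// fully initialized before any dynamic initializer in any translation unit runs.
Counter g_counter(0);

// Even if this object lived in a different translation unit, its dynamic
// initializer would still see g_counter already valid, since constant
// initialization happens before all dynamic initialization.
struct Registrar {
    Registrar() { g_counter.bump(); }
};
Registrar g_registrar;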

> Ditto for constexprs - they can already be optimized by the compiler and you'd be hard-pressed to do anything complex within a 512-level recursion limit. I've only seen constexprs in practice used for template brainfskery magic.

There is no problem with instantiation depth, not even any templates involved. "Can be optimized by the compiler" is not the same as "guaranteed to be optimized by the compiler" ... but you need a decent C++11 compiler to actually get that guarantee of course.

MSVC

Posted Mar 6, 2013 17:39 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

I don't use non-trivial global static constructors, so I definitely consider them unimportant.

> That doesn't work with MSVC because its std::mutex is not only non-constexpr but it allocates memory in its default constructor.
Yep, and that's exactly why.

>There is no problem with instantiation depth, not even any templates involved.
The standard guarantees only 512 levels of recursion. Everything else is implementation-specific.

>"Can be optimized by the compiler" is not the same as "guaranteed to be optimized by the compiler" ... but you need a decent C++11 compiler to actually get that guarantee of course.
So don't use it for anything important.

MSVC

Posted Mar 6, 2013 23:03 UTC (Wed) by nix (subscriber, #2304) [Link]

>So don't use it for anything important.

Um... even if it's *not* evaluated at compile-time, it is still guaranteed to be evaluated before the static initialization phase. Thus you can use them to construct things that are used *during* the static initialization phase.

Now maybe you don't care about the static initialization phase, but claiming that lack of support for a feature that makes that phase useful is insignificant merely because *you* don't use it is myopic.

MSVC

Posted Mar 7, 2013 21:13 UTC (Thu) by jwakely (subscriber, #60262) [Link]

> Now maybe you don't care about the static initialization phase, but claiming that lack of support for a feature that makes that phase useful is insignificant merely because *you* don't use it is myopic.

This is entirely typical for Cyberax. "I can't make Linux work on the desktop, therefore it's impossible, even though other people are telling me they've done it", "The compiler I claim is top-notch doesn't support a feature and I don't care about it, therefore it's unimportant, even though other people have said how it's important to them."

*plonk*

MSVC

Posted Mar 8, 2013 19:50 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

>I can't make Linux work on the desktop, therefore it's impossible, even though other people are telling me they've done it
Wrong. I _can_ make Linux work on desktop. I can't make it work every time with minimal maintenance for most of real world users out there.

>The compiler I claim is top-notch doesn't support a feature and I don't care about it, therefore it's unimportant, even though other people have said how it's important to them.
Yep, especially if it's such a minor feature.

Does gcc support C# interoperability, for example? It's a very real-world feature necessary for a lot of projects.

Or maybe "#pragma pack" to get a more fitting example?

MSVC

Posted Mar 8, 2013 21:29 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

GCC has a "pack" pragma, though it doesn't support the identifier feature of the Microsoft one.

MSVC

Posted Mar 8, 2013 21:58 UTC (Fri) by nix (subscriber, #2304) [Link]

If you're looking for GCC extensions among pragmas, you're looking in the wrong place. Look at attributes, where you will find not only packing, but also alignment control. (That Cyberax is arguing against GCC without even having searched the manual for the word 'pack' suggests that arguing with him is a waste of time.)

MSVC

Posted Mar 8, 2013 23:11 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Actually, I just forgot that a long time has passed since the gcc 3.2 release. I had to add __attribute__ to a metric ton of structures when I was porting a Windows-based application to Unix, so I quite vividly remember that #pragma pack was not universally supported back then.

MSVC

Posted Mar 7, 2013 21:11 UTC (Thu) by jwakely (subscriber, #60262) [Link]

> > There is no problem with instantiation depth, not even any templates involved.
> Standard guarantees only 512 level recursion. Everything else is implementation-specific.

The example I was talking about has *no recursion*, so there is no problem with a limit of 512 levels. Why are you even mentioning it?

MSVC

Posted Mar 8, 2013 19:49 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Ok. I'll bite.

You CAN'T use constexprs to solve the initialization ordering problem. By definition constexprs can't have any side effects (in fact, allowing side effects is a compiler bug). This means that constexprs can refer to each other and can have any ordering.

So using constexprs to allocate a mutex is right out. The only remaining usages:
1) Precalculating complex mathematical formulae. It's right out - you can't really do anything interesting for the real world within a 512-level recursion limit.
2) Template mindfskery - instead of using partial template specialization. That cleans up code somewhat, but it still doesn't change the fsked up nature of template metaprogramming.

Basically, that's it.

MSVC

Posted Mar 10, 2013 19:16 UTC (Sun) by jwakely (subscriber, #60262) [Link]

You *still* don't know what you're talking about.

> By definition constexprs can't have any side effects (in fact, allowing side effects is a compiler bug).

Citation needed.

I'll start you off with 7.1.5 [dcl.constexpr] which says "The definition of a constexpr constructor shall satisfy the following constraints: [...] every non-variant non-static data member and base class sub-object shall be initialized"

Please show where in the standard it contradicts that or how to initialize non-static data members without side-effects.

> So using constexprs to allocate a mutex is right out

On a grown-up OS mutexes don't need "allocating" so can be constexpr.

If you disagree please file a defect with the C++ standard, file a GCC bug (category "doing impossible things before breakfast") and tell MS to stop working on the fix to not allocate memory in std::mutex because it's an impossible task, so sayeth Cyberax the unknowing.

I don't think you know what a constexpr constructor is or how it works. Stop trying to educate me on C++, you're embarrassing yourself.

MSVC

Posted Mar 10, 2013 20:55 UTC (Sun) by Cyberax (✭ supporter ✭, #52523) [Link]

> Please show where in the standard it contradicts that or how to initialize non-static data members without side-effects.
I thought it was obvious to anyone who follows C++ development.

But OK. Stanza 7.1.5.5:
>For a constexpr function, if no function argument values exist such that the function invocation substitution would produce a constant expression (5.19), the program is ill-formed; no diagnostic required. For a constexpr constructor, if no argument values exist such that after function invocation substitution, every constructor call and full-expression in the mem-initializers would be a constant expression (including conversions), the program is ill-formed; no diagnostic required.

Given that a function can only involve operations on literals (or references to them) or other constexpr values we are immediately (7.1.5.3) limited in possibilities to refer to non-constexpr data like file handles, for example (because open/close do not return constexprs).

Can a body of a constexpr function involve side effects? No. See 5.19.1 - it explicitly notes:
>[Note: Constant expressions can be evaluated during translation. — end note ]

Then in stanza 5.19.2 the Standard lists tons of things that are verboten. Included among them:
>- a new-expression (5.3.4);
>- a delete-expression (5.3.5);
And you also can't use malloc since it's not a constexpr. So yes, dynamic allocation is impossible.

Basically, constepxrs can only use other constexprs (in evaluated paths). And the only available constexpr functions are simple mathematical functions that can't produce side-effects.

>On a grown-up OS mutexes don't need "allocating" so can be constexpr.
Then there's no problem with their initialization order. What's the problem that constexprs should be solving?

> I don't think you know what a constexpr constructor is or how it works. Stop trying to educate me on C++, you're embarassing yourself.
Yes, please do.

MSVC

Posted Mar 10, 2013 23:19 UTC (Sun) by jwakely (subscriber, #60262) [Link]

No one except you is talking about file handles or dynamic allocation in constexpr constructors. Of course dynamic allocation is impossible during static initialization.

A constexpr constructor is the language feature that guarantees objects with non-trivial constructors (such as std::mutex) can be initialized during the static initialization phase. Without a constexpr constructor you have no such guarantee. A constexpr constructor can set a global object's member variables, those member variables are state, setting them to one value or another is observable to the rest of the program. If that state has not been initialized before it's needed, you have a problem. A constexpr constructor ensures it is set before needed.

MSVC

Posted Mar 11, 2013 4:14 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

> Noone except you is talking about file handles or dynamic allocation in constexpr constructors.
Certain people in this thread do.

Let me quote:
> constexpr is good for compile-time calculations, so the application starts faster;
Which implies that it's possible to do something more complex other than trivial math in constexprs.

> You still don't know what you're talking about. I don't care about user-defined literals but constexpr is very important. Do you consider solving the static initialization order fiasco to be unimportant?
Constexprs don't solve static init order problems _AT_ _ALL_. In particular, constexprs can't work with forward declarations.

> A constexpr constructor can set a global object's member variables, those member variables are state, setting them to one value or another is observable to the rest of the program.
No it cannot. You can _create_ a constexpr member variable, but you can't _mutate_ it within a constexpr (stanza 5.19.2, "— an assignment or a compound assignment").

In fact, constexprs are designed in such a way that they form a purely functional language. So they can be evaluated in any order and at any time. You can ALWAYS replace them with static initialization, only you might have to do the evaluation manually.

MSVC

Posted Mar 11, 2013 10:19 UTC (Mon) by jwakely (subscriber, #60262) [Link]

I'm not talking about mutation or constexpr member variables. I'm talking about constexpr constructors. As I've said, I don't think you know what they are. You are the poster-child for the Dunning Kruger effect. Instead of rattling off a reply to something I didn't write try some basic reading comprehension.

MSVC

Posted Mar 11, 2013 10:32 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

I think you need a refresher:
http://en.wikipedia.org/wiki/Side_effect_%28computer_scie...

No, constexprs do not have side effects since they don't modify anything outside of their scope. No, constexpr constructors also do not have side effects.

Anyway, my other points still stand:
1) Constexprs are useless for reducing the startup speed.
2) Constexprs do not solve the static init order problem.

MSVC

Posted Mar 11, 2013 11:46 UTC (Mon) by jwakely (subscriber, #60262) [Link]

1) I never claimed this, go argue with someone else.
2) Then why do std::mutex, std::atomic and std::once_flag have constexpr constructors? What's that for?

MSVC

Posted Mar 11, 2013 14:08 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

Claim one was made by another author, yes.

std::mutex has a constexpr constructor (added to maintain compatibility with POSIX), but recursive and timed mutexes don't (check it). MSVC chose to ignore it completely because it's not practical to implement mutexes this way on Windows.

This requirement was just a codification of existing practice where POSIX mutexes can be statically initialized. And yet again, I repeat that constexprs can't solve the problems with initialization order. If your code works with constexprs then it works on MSVC.

MSVC

Posted Mar 12, 2013 11:52 UTC (Tue) by jwakely (subscriber, #60262) [Link]

I don't need to check it, thanks, I didn't ever claim the other mutex types have constexpr constructors.

Things don't get added to the C++ standard for POSIX compatibility if they don't work on other platforms. MS have representatives on the committee, so they don't need to "choose to ignore" requirements in the standard; they can prevent them being standardised in the first place. That's how the process is supposed to work: a "top-notch compiler" team doesn't keep quiet and then get forced to ship a non-conforming implementation. Having a constexpr constructor is by design, not a compatibility feature; remember that when MSVC eventually fixes their std::mutex.

https://connect.microsoft.com/VisualStudio/feedback/detai...

MSVC

Posted Mar 12, 2013 14:40 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

> Things don't get added to the C++ standard for POSIX compatibility if they don't work on other platforms, MS have representatives on the committee so they don't need to "choose to ignore" requirements in the standard, they can prevent them being standardised in the first place.
The standard committee can choose to ignore Microsoft's objections (that happens, really).

>Having a constexpr constructor is by design, not a compatibility feature, remember that when MSVC eventually fix their std::mutex.
No they haven't (yet). Empty std::mutex in MSVC 11 is NOT a constexpr since it uses an extern function in its constructor.

Windows synchronization primitives require initialization, so if you want to do a constexpr then you have to use a spinlock to protect a delayed initialization of the real synchronization primitive. Possible, but ugh.

BTW, your very link tells us that the next MSVC version supports constexprs.

MSVC

Posted Mar 11, 2013 14:23 UTC (Mon) by nye (guest, #51576) [Link]

>1) I never claimed this, go argue with someone else.

But that was a part of the argument that you loudly jumped into before almost immediately throwing insults about.

I think Cyberax must have a great deal of patience to be bothering with you, given your abusive behaviour in this thread.

MSVC

Posted Mar 12, 2013 12:14 UTC (Tue) by jwakely (subscriber, #60262) [Link]

I didn't jump into it, the first mention of constexpr was a descendant of my comment: http://lwn.net/Articles/541018/

MSVC

Posted Mar 11, 2013 15:21 UTC (Mon) by nix (subscriber, #2304) [Link]

I'm reasonably certain that nobody who was talking about constexprs with reference to compile-time calculation could possibly have thought that you could allocate memory or file handles at compile time. That's ridiculous.

MSVC

Posted Mar 11, 2013 15:24 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

So what kind of calculations can you do in constexprs that could noticeably reduce app startup time? I'm totally at a loss here.

Almost all more-or-less complex algorithms require something that is not really possible with constexprs.

MSVC

Posted Mar 11, 2013 16:52 UTC (Mon) by etienne (subscriber, #25256) [Link]

> So what kind of calculations you can do in constexprs that could noticeably reduce app startup time?

If you have a very long list of parameters with plenty of sub-parameters (say, 4096 channels with 50+ configuration options and their SNMP descriptors), you can have them pre-organised (with their next/previous fields pre-initialised) if you declare them "static" and not malloced.
Or if you have a long dictionary.

Note that you could try to support malloc/free inside constexprs by having another section in your ELF file (like .data but after .bss), so that the malloc/free functions (either in glibc or your own wrapper) do not start empty but call brk()/sbrk() to increase that new section... should work at least on Linux (excluding LD_PRELOAD of another malloc/free library).
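
For illustration, a minimal sketch of the statically pre-linked table idea described above, with names and sizes invented for the example; note that simple pointer linking like this is ordinary constant initialization and happens before any code runs:

struct Channel {
    int            id;
    const Channel *next;   // neighbour wired up statically, no startup work
};

// The array name is already in scope inside its own initializer, so the
// nodes can point at each other without any run-time list building.
const Channel g_channels[3] = {
    { 0, &g_channels[1] },
    { 1, &g_channels[2] },
    { 2, nullptr        },
};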

MSVC

Posted Mar 11, 2013 17:29 UTC (Mon) by Cyberax (✭ supporter ✭, #52523) [Link]

> If you have a very long list of parameters with plenty of sub-parameters (kind of 4096 channels with 50+ configuration options and their SNMP descriptors), you can have them pre-organised (with their next/previous fields pre-initialised) if you declare them "static" and not malloced.
Not really. You'll be able to initialize the 'prev' pointers. But you can do this just fine with the current static init.

> Or if you have a long dictionary.
Compile-time? I doubt it. Though you probably could with lots of template magic.

>Note that you could try to support malloc/free inside constexprs by having another section in your ELF file (like .data but after .bss), so that the malloc/free functions (either in glibc or your own wrapper) do not start empty but call brk()/sbrk() to increase that new section
No, you can't. The standard explicitly forbids any pointer arithmetic that might result in addresses outside of _initialized_ static storage.

MSVC

Posted Mar 6, 2013 10:01 UTC (Wed) by etienne (subscriber, #25256) [Link]

> auto len = 36km; // ourlib::units::distance len = 36000 (m)
> auto passed = 1h; // ourlib::units::time passed = 3600 (s)
> auto v = len/passed; // ourlib::units::velocity v = 10 (m/s)

Sorry for asking, I should check myself, but you probably have hit it already:
auto speed = 55km/h; // Does that compile (if h is not defined)?
auto acceleration = 9.8m²/s; // Does that compile?

MSVC

Posted Mar 6, 2013 12:09 UTC (Wed) by jwakely (subscriber, #60262) [Link]

It would have to be 55km/1h; km/h is not a single token, so it's two expressions separated by operator/, and h must be defined.

You might be able to define a UDL for m² if your compiler supports Unicode identifiers, but it would still need to be 9.8m²/1s, for the same reason as 55km/1h.
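For the record, a rough sketch of such unit literals, with made-up distance/duration/velocity types standing in for hummassa's ourlib::units. The suffixes carry a leading underscore because suffixes without one are reserved for the standard library; the thread's 55km and 1h are shorthand for the same idea.

#include <iostream>

struct distance { double metres;  };
struct duration { double seconds; };
struct velocity { double mps;     };

constexpr distance operator"" _km(long double v) { return distance{ double(v) * 1000.0 }; }
constexpr duration operator"" _h(long double v)  { return duration{ double(v) * 3600.0 }; }

constexpr velocity operator/(distance d, duration t) { return velocity{ d.metres / t.seconds }; }

int main() {
    // "55km/h" cannot work: km/h is not one token, and h alone names nothing.
    auto v = 55.0_km / 1.0_h;            // velocity, roughly 15.3 m/s
    std::cout << v.mps << " m/s\n";
}

In real code the operators would sit in a namespace and also cover integer literals (unsigned long long overloads), but the dimensional-analysis point is the same: dividing a distance by a duration yields a velocity, and anything else fails to compile.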

MSVC

Posted Mar 6, 2013 17:28 UTC (Wed) by etienne (subscriber, #25256) [Link]

> km/h is not a single token

The thing is, it seems hummassa was using different types for all those different things:
auto v = len/passed; // ourlib::units::velocity v = 10 (m/s)
That makes compilation fail if you compare a speed to an acceleration - a nice feature!
The problem is how to get the type from the unit; is it possible to do:
auto earth_gravitation = 9.8m²÷s;
GCC probably does UTF-8 without problems.

MSVC

Posted Mar 6, 2013 18:47 UTC (Wed) by hummassa (subscriber, #307) [Link]

> GCC probably do UTF8 without problem.

Nope, no UTF-8/Unicode identifiers in C++11; just ASCII letters, numbers and underscore. :-D In our code we try not to go too deep, so we have defined only the m, kg and s units and use them like:

constexpr auto G = 9.82m/(1s*1s);

MSVC

Posted Mar 6, 2013 20:11 UTC (Wed) by rleigh (subscriber, #14622) [Link]

From what I've read, it should certainly be possible to use UTF-8 suffixes like µm, Å, Ω etc. While I've started to use a number of C++11 features like auto, array initialisers, decltype and constexpr [no, my code will not build with MSVC while it continues to be so backward], I've not yet had a need for suffixes, but I would be interested to know if UTF-8 ones work with GCC.

MSVC

Posted Mar 7, 2013 23:16 UTC (Thu) by hummassa (subscriber, #307) [Link]

I tested both: the standard says a suffix is an identifier, and no UTF-8 identifiers (and hence no UTF-8 suffixes) are allowed by g++ 4.7 or clang++ 3.2.

MSVC

Posted Mar 7, 2013 23:57 UTC (Thu) by hummassa (subscriber, #307) [Link]

For the sake of completeness:
the latest standard draft, section 2.14.8 (user-defined literals), says that the suffix is an identifier;
section 2.11 defines identifier thusly:
identifier:
  identifier-nondigit
  identifier identifier-nondigit
  identifier digit
identifier-nondigit:
  nondigit
  universal-character-name
  other implementation-defined characters
nondigit: one of abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ _
digit: one of 0123456789

This means that UTF-8 identifiers are implementation-defined (and, as I told you, neither clang++ nor g++ accepts them currently).

MSVC

Posted Mar 4, 2013 22:06 UTC (Mon) by jwakely (subscriber, #60262) [Link]

MSVC cares about compatibility?! This is the compiler that changes its library ABI with every release, whereas I can happily link code compiled with GCC 3.4 and 4.8. You really don't know what you're talking about.

Michaelsen: One

Posted Mar 7, 2013 23:11 UTC (Thu) by marcH (subscriber, #57642) [Link]

> Can I use autotools on Windows with MSVC? That's, like, the most popular development platform for C/C++.

Like, software projects choose a language first and then decide which platform only later...

Michaelsen: One

Posted Mar 2, 2013 8:25 UTC (Sat) by marcH (subscriber, #57642) [Link]

Autotools are simply too complicated to be 100% reliable. Most projects using them have a configuration copied from another project, spending ages scanning for things which aren't even used. But more importantly, autotools errors are *impossible for mere mortals to root-cause*. Autotools are one of the best illustrations ever of the quote "... except for the problem of too many layers of indirection" (the absolutely crazy number of indirection layers can be seen on Wikipedia).

Whether a bug is in the autotools themselves (granted: rarely) or in the way they are used (the usual case) does not matter to the end user. Considering how complicated they are, it's amazing that they don't break more often, but when they do break, the game is over for mere mortals.

One good alternative to autotools is to have a few conditionals in a GNU Makefile supporting only the few most popular Unices of the day and to IGNORE older and broken Unices: THEY should evolve if they care about actually running software. Aren't most of them dead by now anyway?

Michaelsen: One

Posted Mar 2, 2013 23:28 UTC (Sat) by tpo (subscriber, #25713) [Link]

> One good alternative to autotools is to have a few conditionals in a GNU
> Makefile supporting only the few most popular Unices of the day and to
> IGNORE older and broken Unices: THEY ["older and broken Unices"] should
> evolve if they care about actually running software. Aren't most of them
> dead by now anyway?

I don't want to argue against that, just note that there might be users of those "older and broken Unices" who would be very happy to be able to use $SOFTWARE on their system (i.e. not to be (actively?) ignored), because their vendor is mostly happy with the lock-in and the support revenue and maybe doesn't care about its "fringe" users either.

Being mostly a Linux monogamist myself, I'm not actually able to assert with confidence that this hypothetical scenario is real; I only know that there are places (banks?) that run a *lot* of exotic Unix systems (HP-UX, AIX, Solaris, even Tru64) with their corresponding sysadmins, problems etc.
*t

Michaelsen: One

Posted Mar 3, 2013 8:54 UTC (Sun) by marcH (subscriber, #57642) [Link]

When the business of these Unix flavours is based on the "cash cow" model you describe, then yeah, anything that can make some of their customers even less happy is a Good Thing.

One should support broken systems ONLY when they are used by the majority of users. For broken AND unpopular systems it's a better choice to give Natural Selection a little boost and make them even more inconvenient to use. Unix Wars no more.

autotools / portability ...

Posted Mar 4, 2013 14:53 UTC (Mon) by mmeeks (subscriber, #56090) [Link]

Hi Morten,

> autotools is what make applications fairly easily available
> on systems that are different from the one the application
> was built on.

Of course, this is assuming that autotools is used properly :-) - that the right configure macros are used, and the right conditionals are inside the code. I've seen no panacea for portability. Having said that, we use autoconf as the right tool to generate our configure script.

On the other hand - from the LibreOffice perspective - when you have to have binary/assembler bridges [ok, so we need to migrate to libffi, and we know that] to de-mangle the compiler/ABI+architecture-specific quirks of C++ in order to do UNO bridging, having to write a few per-platform Makefile pieces to make things work nicely seems reasonably trivial, I think.

Then again - anything can be improved, patches most welcome :-)

Michaelsen: One

Posted Mar 1, 2013 20:36 UTC (Fri) by zlynx (subscriber, #2285) [Link]

Think "programmers" when you read end-users. The end-users for source code are usually other programmers. Programmers who have no idea what kind of build system and dependencies the original programmer used.

I once had to compile XTanks (I think it was) on a Linux system. Its build system was horrible - just a raw Makefile, I think, designed for Solaris. It assumed all kinds of things like Motif libraries and hardcoded locations of X libraries, and customizing build options required hand-editing a config.h file.

Autotools used correctly is a great advance over something like that.

Michaelsen: One

Posted Mar 1, 2013 21:17 UTC (Fri) by daglwn (subscriber, #65432) [Link]

> Autotools used correctly is a great advance over something like that.

Yes, it is.

But that doesn't mean there aren't better options.

Autotools often takes longer than the actual build and that's a huge problem.

Michaelsen: One

Posted Mar 2, 2013 0:28 UTC (Sat) by robert_s (subscriber, #42402) [Link]

> The whole notion of an end user compiling someone else's code is insane, as is everything else that is more complicated than double-clicking an rpm file.

I sure hope the end user checked the signature of that rpm file, wherever they got it from, before they went and double clicked it.

Michaelsen: One

Posted Mar 3, 2013 2:25 UTC (Sun) by rgmoore (✭ supporter ✭, #75) [Link]

No. They should be using a packaging system that checks the package for them and warns them if the signature comes from an unknown packager. That way they don't have to check signatures manually when installing from any source they've already configured. That's by far the most common use case, so the packaging system should be designed to do the right thing for the user rather than forcing the user to do it manually.

Michaelsen: One

Posted Mar 3, 2013 17:59 UTC (Sun) by jwarnica (subscriber, #27492) [Link]

Because a random RPM is safer than a random collection of .c files?

Michaelsen: One

Posted Mar 1, 2013 21:56 UTC (Fri) by daniels (subscriber, #16193) [Link]

CMake's wildly, wildly non-obvious, and I mean that relative to autotools. ./configure && make all install is a very well-understood concept. The CMake equivalent seems to be '????? bees aojksdf'. Building projects using CMake without very detailed and specific instructions is a nightmare, and when you turn to the documentation it's a several-thousand-page treatise on the theory of configuration and build management. Kind of like how Arch/TLA help was simultaneously very enlightening and totally useless.

CMake is a pain to use for developers. An install target? Who needs it? Sort it out yourself. And the lack of standardised, first-class pkg-config support is totally unforgivable. It's 2013 and you're making people roll their own. Not even close to good enough.

The most frustrating thing about CMake is that it was designed to replace and improve upon autotools. God knows there's plenty of room in that space, but CMake is worse for users, developers, and distributors.

(And yes, 'oh but you should just switch your project to some non-standard tool very few people understand or have installed' is really infuriating advice to be given.)

Michaelsen: One

Posted Mar 1, 2013 22:33 UTC (Fri) by nix (subscriber, #2304) [Link]

I don't much like CMake either, but many of these problems have been fixed. CMake's -DTHIS -DTHAT -DTHE_OTHER is unpleasant to read, but 'cmake -Dblahblah && make && sudo make install' is really not much different from './configure --blah-blah && make && sudo make install'. CMake does generate makefiles, after all.

It does, these days, support and generate install targets, and the FindPkgConfig macro has been around for some considerable time. (Unfortunately, there are lots of CMake modules that are older than that, and still call pkg-config themselves, by hand.)

Michaelsen: One

Posted Mar 1, 2013 22:36 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Pkg-config? WTF? How do you even use it on Windows?

CMake solves the very real problem of making a portable build system, unlike autotools, which only support it theoretically.

Michaelsen: One

Posted Mar 1, 2013 22:46 UTC (Fri) by daniels (subscriber, #16193) [Link]

So provide something akin to it on Windows then. Don't take the lowest-common-denominator approach of ensuring that everyone will get it wrong on every single platform.

Portability is fine. Being portably useless is not.

Michaelsen: One

Posted Mar 1, 2013 23:07 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

That means maintaining at least 3 separate build systems (there's also Xcode on the Mac). No, thanks.

It's much better to use CMake, which works just fine. And not hard-depending on tools like pkg-config actually forces you to write better build systems.

Michaelsen: One

Posted Mar 2, 2013 0:50 UTC (Sat) by welinder (guest, #4699) [Link]

No, it means you cross compile and forget about MSVC.

You seem to like every single technology that Microsoft has come out with as if it were the One True solution -- and that's fine. For you.

Michaelsen: One

Posted Mar 2, 2013 0:52 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

Are all these Linux fans so detached from the RealWorld(tm)(r) that it hurts to look at them?

Hint: it's not possible to cross-develop a moderately complex modern program for Windows.

Michaelsen: One

Posted Mar 2, 2013 12:36 UTC (Sat) by Company (guest, #57006) [Link]

Like LibreOffice, you mean? Or Chromium?

Michaelsen: One

Posted Mar 7, 2013 23:00 UTC (Thu) by marcH (subscriber, #57642) [Link]

You are seriously detached from the real world if you actually think every piece of software should care about Windows and Visual Studio.

Pure Unix portability is enough in (obviously not all, but) a very large part of the real world. What do you think powers the Cloud? Or smartphones?

I've read more informed things from you (hint: euphemism)

Michaelsen: One

Posted Mar 8, 2013 19:55 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link]

Sure.

But the piece of software that we're discussing actually runs on Windows. Also, there's iOS (which is poorly supported by autotools).

Michaelsen: One

Posted Mar 3, 2013 9:08 UTC (Sun) by HelloWorld (guest, #56129) [Link]

> No, it means you cross compile and forget about MSVC.
No open source compiler implements MSVC's C++ ABI, so you can't use MSVC-compiled libraries in your project. That's a deal-breaker for many if not most applications.

Michaelsen: One

Posted Mar 2, 2013 1:23 UTC (Sat) by HenrikH (guest, #31152) [Link]

You run it under Cygwin or MinGW, of course.

Michaelsen: One

Posted Mar 9, 2013 2:55 UTC (Sat) by ARealLWN (guest, #88901) [Link]

I would like to apologize for my late reply to this article, but also to congratulate Bjoern Michaelsen on this achievement. Having read some comments in this thread, I would like to know just what the alternatives to GNU make are (tup, cmake and ninja; is Ant considered an alternative? etc.) and whether they each have completely different syntax for the configuration and target options, or whether most of them follow standard make syntax with different dialects. This is just Mister Ignorance giving his $0.02. Thanks in advance for any replies.

Michaelsen: One

Posted Mar 9, 2013 15:56 UTC (Sat) by nix (subscriber, #2304) [Link]

All completely different syntax, I'm afraid, wildly different from each other as well as from make. This is not that surprising, since most people who dislike make consider its terse syntax one of its biggest flaws. (And then they go out and perpetrate syntactic horrors like Ant. The mind boggles.)

Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds