
Wielaard: Looking forward to GCC6 – Many new warnings

Mark Wielaard writes about some of the many new compiler warnings provided by the GCC6 release. "My favorite is still -Wmisleading-indentation. But there are many more that have found various bugs. Not all of them are enabled by default, but it makes sense to enable as many as possible when writing new code."
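(For readers who have not seen it in action, here is a minimal sketch of the kind of code -Wmisleading-indentation flags; the snippet is hypothetical, not taken from the blog post.)

// g++ -Wall -c example.cc   (-Wmisleading-indentation is enabled by -Wall in GCC 6)
#include <cstdio>

void report(int fd)
{
    if (fd < 0)
        std::printf("nothing to do\n");
        return;                          // indented as if guarded by the if,
                                         // but it runs unconditionally

    std::printf("handling %d\n", fd);    // therefore never reached
}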


Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 15, 2016 17:24 UTC (Mon) by juliank (guest, #45896) [Link] (56 responses)

Wow, -Wmisleading-indentation sounds useful.

Can't read the blog though, connection timed out. :(

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 15, 2016 17:40 UTC (Mon) by mpolacek (guest, #66426) [Link] (55 responses)

Mark is looking into that; I suppose the blog will be back up soon.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 15, 2016 17:53 UTC (Mon) by mjw (subscriber, #16740) [Link] (54 responses)

Sorry about that. Wordpress didn't scale and didn't survive the LWN-effect :)
Hopefully it is reachable again now.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 15, 2016 19:00 UTC (Mon) by mjw (subscriber, #16740) [Link] (53 responses)

BTW, in case the machine falls over again: most of these new warnings are also documented at https://gcc.gnu.org/gcc-6/changes.html; others I just stumbled upon in the manual pages https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html and https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Op...

In the above blog post all I am really doing is trying them all out and showing some examples where a warning really helps.

Various new warnings found real bugs in actual code. There are a couple that aren't on by default, or enabled by -Wall or -Wextra, because they might produce false positives, but they are IMHO really useful. In particular -Wnull-dereference, -Wduplicated-cond and -Wlogical-op found some real issues, and I would have liked it if there were some -Wextra-extras that enabled them, instead of people having to go through the list by hand and try them out one by one.
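(A few contrived examples of what those three options catch; these are my own sketches, not taken from the blog post, and -Wnull-dereference only becomes effective once optimization is enabled.)

// g++ -O2 -Wnull-dereference -Wduplicated-cond -Wlogical-op -c demo.cc

int classify(int flags)
{
    if (flags & 1)
        return 1;
    else if (flags & 1)        // -Wduplicated-cond: same condition as above,
        return 2;              // probably meant (flags & 2)
    return 0;
}

bool out_of_range(int value, int limit)
{
    return value > limit || value > limit;   // -Wlogical-op: logical 'or'
                                              // of equal expressions
}

int first_element(int *p)
{
    if (p == nullptr)
        return *p;             // -Wnull-dereference: p is known to be null here
    return p[0];
}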

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 15, 2016 20:02 UTC (Mon) by niner (subscriber, #26151) [Link] (7 responses)

Maybe instead of ever increasing -extra-extra chains, a way for the user to specify the target gcc version would be better. So -Wgcc6 would enable all warnings GCC 6 supports. This way the prudent user can always enable all the warnings the gcc version used during development provides without the risk of some future compiler version breaking existing code.

FWIW that's the same approach perl takes, where one can enable the features of Perl 5.22 (and all previous versions) by way of a "use v5.22;" statement.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 15, 2016 20:36 UTC (Mon) by josh (subscriber, #17465) [Link]

I'd like to see this as well, especially for -Werror. That way, projects that have fixed all the warnings produced by a given version of the compiler can enable and set as errors all of those warnings in one step, but if a new version of the compiler generates new warnings, they won't become errors until the project addresses them.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 0:47 UTC (Tue) by vonbrand (subscriber, #4458) [Link]

Perhaps -Wall and -Wall=gcc6 or something like that.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 9:34 UTC (Tue) by epa (subscriber, #39769) [Link] (4 responses)

Warnings won't ever "break existing code", unless you have asked for all warnings to be treated as fatal errors (-Werror). Maybe what is needed is a way to turn on all the latest and greatest warnings, but only treat those as errors which existed in a certain gcc version, for example -Werror-version=gcc-5.3.1. Then your program will continue to compile with newer releases, but you will get the benefit of newer warnings - and once the codebase is clean of those warnings, you can increase the -Werror-version number.

It is an interesting question what to do if a warning existed in some older gcc version but was then removed or disabled by default in a later version, but this nicety doesn't really get in the way of having versioned fatal warnings.
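(In the meantime, the closest existing approximation I know of is to promote only an explicit list of warnings to errors, either with -Werror=<name> on the command line or per file with a pragma, so that warnings added by future compilers stay non-fatal. A sketch:)

// Instead of a blanket -Werror, list the warnings the code is already clean for:
//   g++ -Wall -Wextra -Werror=misleading-indentation -Werror=duplicated-cond ...
// or equivalently, per translation unit:
#pragma GCC diagnostic error "-Wmisleading-indentation"
#pragma GCC diagnostic error "-Wduplicated-cond"
// Warnings introduced by later GCC releases are not on this list, so they are
// reported but do not break the build -- roughly the -Werror-version behaviour
// proposed above, maintained by hand.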

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 11:46 UTC (Tue) by juliank (guest, #45896) [Link] (3 responses)

-Werror-version=gcc-5.3.1 would also break things, as the warnings enabled in an earlier version may be detected in more places in later versions, for example due to changes in the optimiser. It's not realistic to prevent that.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 15:06 UTC (Tue) by epa (subscriber, #39769) [Link] (2 responses)

Good point. I guess you would need to make warnings fatal only if they can be detected before optimization. (Personally I can't see the value in fatal warnings, I'd rather just make sure somebody is paying attention to the build logs, but if maintainers insist on them they need some defensive measure like this.)

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 22:21 UTC (Tue) by Wol (subscriber, #4433) [Link] (1 responses)

> Personally I can't see the value in fatal warnings

Having had some warnings that I couldn't get rid of, I know it can be a real problem, but I'd rather have fatal warnings switched on if I can. That way, I know my code is as clean as possible, and having taken over code that originally had minimal warnings and lots of subtle bugs, that's the way I like it ... :-)

Cheers,
Wol

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 22, 2016 10:07 UTC (Mon) by epa (subscriber, #39769) [Link]

Oh I agree, I certainly see the value in making sure there are zero warnings during development - I just don't see why the source tarball given to distributions and end users needs to have fatal warnings set. There may well be warnings which pop up on some other architecture, but they are probably just warnings rather than hard errors, and it is the user's or packager's choice whether to treat them as serious enough to abandon the build.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 15, 2016 20:04 UTC (Mon) by juliank (guest, #45896) [Link] (44 responses)

Just adopt -Weverything from clang?

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 1:33 UTC (Tue) by pbonzini (subscriber, #60935) [Link] (43 responses)

-Weverything doesn't make much sense; for example, -Wtemplates definitely shouldn't be part of it.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 1:58 UTC (Tue) by Elv13 (subscriber, #106198) [Link] (41 responses)

The point of -Weverything is a case of opt-in vs. opt-out. You usually want those warnings, but many project devs either don't know they exist, don't follow compiler development, or don't bother enabling them. At least with -Weverything, you just have to disable the noise. The main problem is using this with -Werror: when a new version of the compiler is released, it might catch new cases and break the build. This is why enabling some -Werror-something only in debug builds makes sense, but you may still want to see those -Weverything warnings in all modes. I would also like a -Weverything=gcc6 type option instead of the CMake module I carry across multiple projects to track the compiler version and enable warnings (with an "else if clang then -Weverything -Wno-...").

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 3:58 UTC (Tue) by Sesse (subscriber, #53779) [Link] (40 responses)

I would go as far as to say you should never enable -Werror in released source code. It tends to create blockers for distributions to move to newer GCC versions, for no good reason at all.

Of course, when developing, -Werror is great.

/* Steinar */

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 4:11 UTC (Tue) by neilbrown (subscriber, #359) [Link] (35 responses)

I like -Werror in source code releases. It encourages people to report warnings on platforms I haven't tested.

Of course I support some easy way to disable it like
make CWFLAGS=

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 6:24 UTC (Tue) by Sesse (subscriber, #53779) [Link] (28 responses)

I'm sure it's great for you; it's not great when you have a hundred projects like that breaking during a distro rebuild, requiring manual review and action (at the very least, turning off the warnings) for all of them. :-)

/* Steinar */

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 6:57 UTC (Tue) by torquay (guest, #92428) [Link] (23 responses)

    it's not great when you have a hundred projects like that breaking during a distro rebuild, requiring manual review and action

It's clear that having per-project defaults is not scalable from a distro point of view. However, is the argument "it's bad for the distro, so the project is bad" ? I'd argue that it's the distro getting in the way, not the other way around.

Perhaps distros should get out of the business of providing mudballs requiring constant rebuilds of everything, and instead focus on providing a stable base operating system?

Ceterum Censeo

Posted Feb 16, 2016 7:57 UTC (Tue) by oldtomas (guest, #72579) [Link] (20 responses)

As in [https://en.wikipedia.org/wiki/Carthago_delenda_est].

Yeah, yeah. We know. Some disagree. Now back to GCC6, shall we?

Ceterum Censeo

Posted Feb 16, 2016 8:54 UTC (Tue) by torquay (guest, #92428) [Link] (19 responses)

Umm? Project XYZ wants to use -Werror (from GCC6 and beyond) because the developer is (rightly) paranoid about bugs. The distro throws a tantrum and says no, because it would take too much effort on its part as a distro. The problem here is not the developer, but the rather arbitrary overriding of developers' choices.

There is no burning of Carthage here, but merely an indication that distros in their current state are not fit for purpose. Rather than changing Project XYZ to fit into an arbitrary definition of a good project (which changes from distro to distro), how about we let the project developer compile the project as he/she sees fit, and then just sandbox the result? Let the distro evolve to provide the sandbox and the security policies that go with it.

Ceterum Censeo

Posted Feb 16, 2016 9:42 UTC (Tue) by epa (subscriber, #39769) [Link]

It is common for projects to have build targets make test, which can be done as part of package building every time, and make maintainer-test, which turns on extra checks and doodads intended for the program's maintainer or developers. Building with -Werror could be done as part of maintainer-test only.

It would sure be useful for compiler warnings in package building to be reported back upstream somehow, and automatically - but the amount of spam this would generate makes it impractical unless you have somebody employed full time on build cleanup janitorial tasks.

Ceterum Censeo

Posted Feb 16, 2016 11:37 UTC (Tue) by oldtomas (guest, #72579) [Link] (17 responses)

Look, I don't want to enter a spat about this with you.

Thing is, for me Project XYZ's developer isn't God; sometimes I trust the distro's maintainer more (especially for those programs on my box which get less attention from me). The few (alas, too few: my capacity is far too small) which get more attention I do compile myself.

Why do I trust the packager more?

Focus. The maintainer's focus is to integrate the program into a consistent whole, whereas Project XYZ's maintainer's focus sometimes is "my program is the single most important thing out there", which is understandable, but counterproductive[1]. As long as the distro's maintainer makes her decisions transparent to me, I'm infinitely thankful for her doing this job for me. And (for me, still) this job is becoming ever more essential the more strident those "open source thingies" are becoming out there. Sometimes they remind me of spoiled little children[1].

Now I know you strongly disagree -- and I don't think each of us will convince the other; so having clarified our standpoints enough, let's leave it at that. If you want to have the last word, go ahead... this is *my* last one.

[1] There are great exceptions to that, for sure.

Ceterum Censeo

Posted Feb 16, 2016 17:23 UTC (Tue) by mpr22 (subscriber, #60784) [Link] (15 responses)

Given the choice between trusting someone who turns on compiler warnings and puts -Werror in CFLAGS, and someone who turns off compiler warnings and/or removes -Werror from CFLAGS, I would definitely prefer to trust the former.

Ceterum Censeo

Posted Feb 16, 2016 18:50 UTC (Tue) by bronson (subscriber, #4806) [Link] (11 responses)

So, the type of person that creates the Debian SSL bug?

Pretty sure you've set up a false dichotomy there.

Ceterum Censeo

Posted Feb 17, 2016 11:29 UTC (Wed) by itvirta (guest, #49997) [Link] (10 responses)

That would be the type of people who deliberately access uninitialized memory.
They are surely not to be trusted, but that has little to do with warnings.

Ceterum Censeo

Posted Feb 17, 2016 12:05 UTC (Wed) by juliank (guest, #45896) [Link] (4 responses)

Fun fact: We use uninitialized memory in APT. That is, for the binary (mmap()able) cache, we allocate memory and initialise parts of that.

We store all of it though, and calculate a checksum of it all, including the not yet initialised parts.
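(A rough sketch of the pattern described, as I read it; this is not APT's actual code. The buffer is allocated, only parts of it are written, and the checksum and any later write-out still cover every byte, which is what tools like valgrind later object to.)

#include <cstddef>
#include <cstdint>
#include <cstring>

struct CacheHeader {
    uint32_t version;
    uint32_t entries;
};

static uint32_t toy_checksum(const unsigned char *p, std::size_t n)
{
    uint32_t sum = 0;
    for (std::size_t i = 0; i < n; ++i)
        sum = sum * 31 + p[i];      // toy rolling sum, just for illustration
    return sum;
}

uint32_t build_cache_sum(std::size_t size)
{
    unsigned char *buf = new unsigned char[size];   // contents indeterminate

    CacheHeader hdr = {1, 0};
    std::memcpy(buf, &hdr, sizeof hdr);             // only the front is initialised

    // The gaps are never written, but the checksum (and any later write of the
    // whole buffer to disk) reads every byte, including the uninitialised ones.
    uint32_t sum = toy_checksum(buf, size);

    delete[] buf;
    return sum;
}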

Ceterum Censeo

Posted Feb 17, 2016 19:20 UTC (Wed) by ksandstr (guest, #60862) [Link] (3 responses)

However, POSIX initializes fresh mmap() memory to all zeroes, whereas uninitialized automatic arrays in a function are truly uninitialized, in that they contain arbitrary detritus from the thread stack (whatever that was un-/initialized to). In essence, APT performs a well-defined function that yields consistent results despite "you" thinking it doesn't.

Ceterum Censeo

Posted Feb 17, 2016 19:43 UTC (Wed) by juliank (guest, #45896) [Link] (2 responses)

Thing is, we don't really mmap() for writing anymore (only for reading), but use new[] instead (or malloc, idk right now). We then do a nearly-atomic write to temporary cache & rename transaction.

Valgrind currently complains about that, but oh well, it's at least clearly identifiable. Should still get fixed at some point.

Ceterum Censeo

Posted Feb 17, 2016 20:52 UTC (Wed) by ksandstr (guest, #60862) [Link] (1 responses)

Whether it's using new[] or malloc is, in fact, very important here: new[] initializes values to the default constructor of the array subscript type (zeroes for integral and floating-point types; the same rule is applied to all members of structs to make up an implicit default constructor), while malloc() does no initialization at all.

So if you're seeing consistent results, you're either using new[] or malloc() in a process that hasn't freed much of the memory it's allocated and written yet. But sure enough, valgrind does complain about uninitialized values when reading from new[]'d arrays -- this should be reported and corrected.

Back in 2006 it was still realistic that valgrind wouldn't become yet another mindlessly-followed lint. It had innovations like data tracking instrumentation and what-not. Alas.

Ceterum Censeo

Posted Feb 21, 2016 13:34 UTC (Sun) by ms_43 (subscriber, #99293) [Link]

> new[] initializes values to the default constructor of the array subscript type

yes (if an explicit initialization expression is omitted).

https://github.com/cplusplus/draft/blob/master/source/exp...

> (zeroes for integral and floating-point types;

no, that would be zero-initialization!

operator new performs default-initialization, which calls an applicable no-argument constructor for class types, otherwise does nothing.

https://github.com/cplusplus/draft/blob/01c0772e922e5edcd...
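(The distinction is easy to demonstrate; my own snippet, unrelated to APT.)

#include <cstdio>

int main()
{
    int *a = new int[4];     // default-initialization: values are indeterminate
    int *b = new int[4]();   // value-initialization: all four elements are 0

    // Reading a[0] here would be reading an indeterminate value (valgrind/MSan
    // complain); b[0] is guaranteed to be zero.
    std::printf("b[0] = %d\n", b[0]);

    delete[] a;
    delete[] b;
    return 0;
}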

Ceterum Censeo

Posted Feb 17, 2016 16:05 UTC (Wed) by bronson (subscriber, #4806) [Link] (4 responses)

The direct reason the SSL bug entered Debian was to get rid of the warning.

Or, if you're saying that it's normal to go commenting out lines of code to eradicate warnings -- because it's ultimately the fault of the person who wrote the questionable code -- then I guess this is a fine example of the problem. Thank you!

Remember, warning-driven development only works when you're using a perfect compiler.

Ceterum Censeo

Posted Feb 17, 2016 21:20 UTC (Wed) by itvirta (guest, #49997) [Link] (3 responses)

Well, of course commenting out lines without understanding what they do is either silly or plain stupid.

But in that particular case, the ultimate reason seems to be that someone tried to be smarter than they should have been, and did something that is both wrong (in the language sense) and not very useful either.

Wrong, since as far as I can find, the point was to read uninitialized memory, and that results in undefined behaviour with the full demons-flying-from-one's-mouth possibility included.

Not very useful, since the whole thing was to "safeguard" against not having a good RNG by creating a really shitty RNG. Same goes for the fact that the library used the PID to seed the RNG. PIDs aren't really secret or random and make for a rather shitty RNG. Had the library not used such a weak addition, the final screw-up might have been detected sooner, since all the outputs would have been exactly the same, instead of having the 2^15 different possibilities (or whatever the number was).

All of this goes with a big IIRC disclaimer, so feel free to point out where I'm wrong. However, I do find the custom of solely blaming a silly mistake while ignoring the fucked-up design really tiring.

Ceterum Censeo

Posted Feb 17, 2016 23:58 UTC (Wed) by tialaramex (subscriber, #21167) [Link] (2 responses)

Version of the story which is popular and you're repeating to us:

* OpenSSL is so crappy it relies on uninitialised memory reads to do random number generation even though, er, that can't work on Unix
* Hero Debian Guy fixes the uninitialised reads
* Somehow this breaks OpenSSL because it needed uninitialised reads, even though er, they didn't work so... um.

Actual reality of the story

* OpenSSL is so crappy it has several similar-looking C functions that actually do different stuff
* Debian Guy spots that function A does a harmless uninitialised read and fixes it, arguably making the code more "correct" but with no functional change.
* Idiot Debian Guy sees a similar function B and blindly applies the same "fix".
* The "fix" to this function B disables its entire purpose, effectively removing the PRNG feature almost entirely

I don't know if either story has anything to tell us about GCC6. But I do know that spreading the former story is harmful.

Ceterum Censeo

Posted Feb 19, 2016 14:13 UTC (Fri) by cortana (subscriber, #24596) [Link] (1 responses)

Your version of the story misses out several details that were covered in "Debian, OpenSSL, and a lack of cooperation", posted to a certain web site some time ago.

Ceterum Censeo

Posted Feb 19, 2016 19:12 UTC (Fri) by bronson (subscriber, #4806) [Link]

It was way better than the guesses itvirta posted though! And nice and succinct.

Ceterum Censeo

Posted Feb 17, 2016 0:17 UTC (Wed) by nix (subscriber, #2304) [Link] (2 responses)

You mean the sort of person who makes compiler warnings fatal on every machine the thing is compiled on, even though many warnings vary per-architecture and almost all vary per-compiler-version?

Yeah, I'd trust that sort of person a whole lot -- to make mistakes through ignorance.

Ceterum Censeo

Posted Feb 18, 2016 21:48 UTC (Thu) by mpr22 (subscriber, #60784) [Link] (1 responses)

My C and C++ code builds without warnings on my system, because I do all my development with -Werror enabled. Thus, if my C and C++ code produces warnings when compiled on someone else's system, then either their toolchain is (defective|deficient|misconfigured) or there is an issue with my code which needs to be properly examined by a human, and that someone else should probably not be distributing binaries to other people until it has been.

Ceterum Censeo

Posted Feb 18, 2016 21:54 UTC (Thu) by Sesse (subscriber, #53779) [Link]

That's a false dichotomy. What's much more likely is that someone used a different compiler which has a different set of warnings. Not all the world is GCC.

Most warnings, especially in code that's been around for a while, are _not_ signs of actual bugs. (It's good style to fix them nevertheless, because some are, but that should not block the compile for an end user.)

/* Steinar */

Ceterum Censeo

Posted Feb 16, 2016 18:16 UTC (Tue) by drag (guest, #31333) [Link]

> Thing is, for me Project XYZ's developer isn't God; sometimes I trust the distro's maintainer more

Distro maintainers can still work with projects and review code for them without needing to be involved in modifying them to be packaged for a specific distribution.

Arguably, distro maintainers working to get changes into upstream and systematically eliminating distro-specific packaging requirements and build processes should allow a greater degree of collaboration between distributions. This should help significantly to maintain those packages which are critical, but do not warrant the full-time development staff that many other types of software require.

> The maintainer's focus is to integrate the program into a consistent whole, whereas Project XYZ's maintainer's focus sometimes is "my program is the single most important thing out there", which is understandable, but counterproductive[1].

It depends on the nature of the application.

Some applications are end-user focused and thus are considerably higher priority than other software that focuses on the infrastructure of the OS. Think about the difference between Firefox versus some sort of XML parsing library.

Once my imaginary XML parsing library works, is fast, and is stable, there doesn't really need to be a full-time team developing it. You still need maintainers, but new development is discouraged by the nature of it being used by a wide variety of other applications. 'If it isn't broke don't fix it' is the name of the game in this instance. Its purpose in life is just to serve as a basis for new development. Having it be almost 'abandonware' and only be maintained for security purposes can be an advantage.

In comparison, the Firefox browser has the triple duty of being an end-user facing application, dealing with an ever-evolving set of standards and practices, and facing new threats and problems related to users needing to access unpredictable sources of unvetted and potentially malicious data on the internet. In this case it is the job of the OS to bend for the application rather than the other way around. Firefox is significantly more important than the rest of the OS in a lot of ways and in almost all situations in which it gets used.

Since the primary purpose of an operating system is to make developing and using end-user applications easier (it is always the job of computers to 'do something' rather than sit there consuming electricity for the sake of correctness), it's the job of distributions to bend to the will of users and the applications they consume. It is true that an OS, as part of its job description of 'making life easier for users', does have to enforce a certain amount of correctness on applications, but the ideal solution is to 'make it easier to do it right' rather than to second-guess developers and modify their programs to suit the expectations of distributions.

Sometimes it's unavoidable, but in my mind, if distributions have to hack on software then they are the new upstream, the new developers, and thus catering to the whims of distribution packagers is just as troublesome as catering to the whims of the original developer. This also has obvious scalability issues, as you can't expect distributions to be experts in all the software in the known universe. A far better solution is to have distributions work together to get changes into upstream rather than trying to solve the problem 'downstream'.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 10:10 UTC (Tue) by roblucid (guest, #48964) [Link]

The point of rebuilds is to ensure you can build & modify the software delivered.
From the end user's point of view, a distro's security updates are a real benefit; building from source & re-updating soon gets boring, especially if you have the problem of embedded bundled libraries.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 17, 2016 21:09 UTC (Wed) by Sesse (subscriber, #53779) [Link]

Sure, when all upstream app developers commit to making their own packages for the 10–15 architectures people care about on Linux, and keep them up-to-date wrt. security and such.

/* Steinar */

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 9:22 UTC (Tue) by error27 (subscriber, #8346) [Link]

There should be a get_maintainer.pl for the whole distro so that reporting bugs can be done almost automatically. In the kernel I auto generate bug reports, look them over and hit send.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 10:03 UTC (Tue) by roblucid (guest, #48964) [Link] (2 responses)

Packaging lets you apply patches and tailor the build process. When building software from the developers, it's a good idea to actually look at their documentation and the options they chose; hopefully the distro wants to make packages with a consistent build & feature set and selects the config appropriately. Turn off -Werror in the package, there, once.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 22:28 UTC (Tue) by Wol (subscriber, #4433) [Link] (1 responses)

aiui, LibreOffice actually has config options in the build system for "each" distro.

So you can take an upstream git pull, and run make for your distro. So the distro can tweak things, and they go upstream so the user can see what's going on - that is, if they can understand the build system ... :-)

Cheers,
Wol

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 21, 2016 13:48 UTC (Sun) by ms_43 (subscriber, #99293) [Link]

> aiui, LibreOffice actually has config options in the build system for "each" distro.

Where "distro" means "Linux", "MacOSX", "Win32" or "Android".

https://cgit.freedesktop.org/libreoffice/core/tree/distro...

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 7:02 UTC (Tue) by lkundrak (subscriber, #43452) [Link] (5 responses)

Also, it breaks the build. Sounds like a bad deal.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 18, 2016 10:02 UTC (Thu) by mjw (subscriber, #16740) [Link] (4 responses)

I am somewhat surprised by the hostility against -Werror. Personally I use it religiously in my projects and take pride in having the build explicitly break if a warning slips through. I do select the set of warnings (normally at least -Wall and -Wextra, plus some of the non-default ones after making sure they catch some real bugs, and then I fix any false positives, like some of those shown in this blog post). My experience with distros is actually pretty positive. Package maintainers do warn (pun intended!) us when the build breaks because a new warning is detected on some setup we didn't test ourselves yet, and we work together to get the build clean again. In fact one real bug was found when Debian tried a mass rebuild with gcc6, because of -Wmisleading-indentation. It was an obscure bug that would probably not trigger in normal situations, but it was nice to find it because it would have been a pain to diagnose otherwise. You do have to have a good relationship with the package maintainers of your project to make -Werror work, but then it really is IMHO a pretty sweet deal.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 18, 2016 13:44 UTC (Thu) by pizza (subscriber, #46) [Link] (3 responses)

> I am somewhat surprised by the hostility against -Werror.

It's awesome for the tail end of development work leading up to a release, but IMO it has no place in a release package if that package is intended to be compiled by random third parties. The same code on different versions of the compiler (or even the same compiler on different platforms) will result in different sets of warnings.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 18, 2016 17:58 UTC (Thu) by mjw (subscriber, #16740) [Link]

But that is the whole point, isn't it? If your setup is so different from the supported setups of the project as released that you get new warnings, then you do need to carefully check them and ideally discuss with the project maintainers how to resolve them. In my experience that is exactly what happens. Distros that have architecture support not yet seen by the project contribute fixes back, and we work together to make sure newer compiler warnings are fixed (which often means there was some subtle bug in the code in the first place). It grows the project to include these new environments as supported. And it makes sure no subtle bugs slip in because people ignore warnings. IMHO it is exactly like making sure your testsuite has zero failures, and aborting the build if not, so that nothing which is known broken is ever released.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Mar 3, 2016 21:20 UTC (Thu) by nix (subscriber, #2304) [Link] (1 responses)

There is one special case where this doesn't apply: where the software is arch-dependent enough and delicate enough that it should only be compiled with specific compilers (or with compilers in a specific version range) and on supported arches. This is particularly true where the software is crucial and any new warnings are quite likely a sign of impending disaster. On such systems, -Werror is actually really useful.

glibc is the only such software I can think of right now, on Linux at least.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Mar 3, 2016 23:00 UTC (Thu) by bronson (subscriber, #4806) [Link]

Hyper-optimized codecs and graphics code... but that's still very specialized. Anyone who isn't sure if -Werror is necessary should ship with it off.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 7:24 UTC (Tue) by josh (subscriber, #17465) [Link] (3 responses)

I'd love to see a -Werror=gcc-6 option or similar, such that any warning introduced after gcc-6 will *not* turn into an error. In that circumstance, enabling -Werror for the specific compiler version you know you've fixed all the warnings for seems fine.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 7:49 UTC (Tue) by andresfreund (subscriber, #69562) [Link] (1 responses)

That sounds bothersome for GCC's authors: I bet people would immediately complain because GCC N+1 issues new warnings in existing categories, e.g. because of better control-flow analysis. Preventing that problem sounds like it might make changes noticeably more painful.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 17:52 UTC (Tue) by k3ninho (subscriber, #50375) [Link]

>That sounds bothersome for GCC's authors
I'm sure that we can imagine a layer of indirection which manages the mapping of versions to classes of error, holding a matrix of version and features as the release notes would list. And that might (maybe) appear in some man page content. What free/open source projects have been nearly-adequate at doing is explaining the orthodoxy of how a user is expected to use their tool.

You know that thing where a newcomer to a community shows up and complains that feature X looks like it should do Y but instead it's structured to do A, B and/or C? There's a habitual gap in the community record that makes sense of the journey from one problem to a solution, to broader appeal and then on to a growing userbase and contributory community. There's almost room for a maxim: "the current solution looks like it does because we started over there, went round the houses, painted a few bike-sheds and ended up where we are now; of course you know where you want to be and, if you were going there, you wouldn't want to be starting from here."

In light of that polish, the shorthand for someone taking on a compiler upgrade and not evaluating the full extent of changes they will face remains the less-friendly RTFM.

K3n.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 19, 2016 0:18 UTC (Fri) by dirtyepic (guest, #30178) [Link]

In half the cases it's not a new option throwing a warning at you, it's an old option that learned some new tricks (or likely, some new false positives).

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 18, 2016 9:49 UTC (Thu) by mjw (subscriber, #16740) [Link]

I agree that -Weverything is pretty silly. Not all warnings are equal. Some, like -Wtemplates or -Weffc++, are really about coding style (and don't normally signal a possible bug). And then there are warnings like -Wwrite-strings that change the type of some constructs, effectively changing the code. Most people are probably not interested in such warnings in the first place.

But for other categories of warnings (where there is the potential that it does indeed signal a subtle bug that might need some analysis even if it could be a false positive) it would be really nice to have some way to enable them all and just see what warning results you get. Is it really too hard to define two or three categories of warnings that can easily be enabled all at once?

I certainly appreciate the GCC documentation. It is pretty good. And it was fun to compare the info pages of the current and new version of GCC to get an overview of all the new warning options to try out. I learned a lot! But an easier way to discover new warnings (especially those not enabled by default or through -Wall or -Wextra) would be appreciated.

Many new warnings, but still not my pet peeve

Posted Feb 15, 2016 20:45 UTC (Mon) by darktjm (guest, #19598) [Link] (13 responses)

I know this isn't really the place to post it (and I don't really feel like posting a request to the gcc mailing list(s) or fixing this in gcc myself), but I'd really like the bug I used to most commonly see in C projects flagged with a warning: the linker doesn't care much about matching symbols, so if your source isn't consistent, problems can occur. In other words, an extern declaration in one file does not need to match the definition in another (not as big a deal with C++'s name mangling, but still possible for pure global data objects or return types).

The -Wmissing-declarations flag helps a little (and really should be part of -Wall, but instead it annoyingly generates a warning if you use it with C++), but I don't think there is a "warn if there is a global which was not declared in an included file, rather than in this file" flag, or an equivalent of -Wmissing-declarations for global data objects. I have not used llvm/clang to compile anything, so maybe it's available there.

When I take over others' software projects, the first thing I usually do is manually fix this issue (there are tools to help with this, just not gcc itself), and I often find problems as a result. I know that even with improved warnings (even if added to -Wall), it's still possible to screw up, but at least it encourages better coding practice, possibly reducing the number of gratuitously exported symbols from an object as a side benefit.
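(A minimal illustration of the class of bug meant here; a hypothetical two-file example. The mismatch links cleanly because the variable's symbol carries no type information, in C or C++.)

// counter.cc -- the definition
long counter = 0;                 // defined as long

// user.cc -- a hand-written declaration instead of a shared header
extern int counter;               // wrong type, but it still compiles and links
int bump() { return ++counter; }  // silently touches only part of the object
                                  // on an LP64 platform

// Declaring counter in a single header included by both files -- what the
// proposed warning would nudge people towards -- is what catches this.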

Re: the linker doesn't care much about matching symbols

Posted Feb 16, 2016 3:08 UTC (Tue) by ldo (guest, #40946) [Link] (11 responses)

You mean, checking type consistency between object modules? I wonder about that myself.

Ada (and other earlier languages) showed the way in the 1980s and even before. Why are linkers still so dumb? Why does C++ need to resort to name mangling to convey type information? Why can’t the object format be smarter?

Re: the linker doesn't care much about matching symbols

Posted Feb 16, 2016 4:01 UTC (Tue) by Sesse (subscriber, #53779) [Link] (5 responses)

I suppose this is plainly because UNIX is written in C, and not in C++. Thus, the entire OS ABI supports C really well, and everything else not so well.

While I would support more intelligent linker conventions (except it would break everything now that we finally have a pretty stable C++ ABI :-) ), I'm not entirely sure exactly what name mangling has to do with it. How would you represent a C++ type signature in such a format in a way that's meaningfully different from name mangling?

/* Steinar */

Re: more intelligent linker conventions

Posted Feb 16, 2016 4:28 UTC (Tue) by ldo (guest, #40946) [Link] (4 responses)

Bear in mind I’m talking about type information, which is quite separate from an ABI.

Even C could benefit from type-consistency checking across modules. This is the difference between Ada-style typesafe “separate” compilation, versus “independent” compilation which is all you can get in C and FORTRAN.

Re: more intelligent linker conventions

Posted Feb 16, 2016 4:43 UTC (Tue) by sfeam (subscriber, #2841) [Link] (1 responses)

Doesn't -funit-at-a-time already do this?

Re: Doesn't -funit-at-a-time already do this?

Posted Feb 16, 2016 4:52 UTC (Tue) by ldo (guest, #40946) [Link]

I don’t think so. From the gcc(1) man page:

-funit-at-a-time
This option is left for compatibility reasons. -funit-at-a-time has no effect, while -fno-unit-at-a-time implies -fno-toplevel-reorder and -fno-section-anchors.

Re: more intelligent linker conventions

Posted Feb 16, 2016 6:25 UTC (Tue) by Sesse (subscriber, #53779) [Link] (1 responses)

In C++, you need type information as part of the ABI.

/* Steinar */

Re: In C++, you need type information as part of the ABI.

Posted Feb 17, 2016 21:00 UTC (Wed) by ldo (guest, #40946) [Link]

The ABI can only handle a small subset of the necessary type information.

Consider the Ada declaration

type B is new A;
which means that type B is essentially a copy of type A, inheriting all the functionality defined for it. But B must still be considered distinct from A, and incompatible with it.

Then you have subtypes, which are considered compatible with the type they derive from, but are subject to additional constraints.

There can be no code or runtime data associated with these definitions, which is why you cannot represent them in any ABI. Yet they need to be processed across compilation units. Isn’t that a natural job for the linker?

Re: the linker doesn't care much about matching symbols

Posted Feb 16, 2016 14:54 UTC (Tue) by alonz (subscriber, #815) [Link] (3 responses)

All Ada compilers I had worked with (around 20 years ago…) also used name mangling.

I don't see why this should concern anybody – it's the tools' job to properly unmangle names. Just like it would have been their job to display whatever extra information a “smarter” object format would have stored.

What is a pity, in my opinion, is that C doesn't provide at least the option to keep type info in the object files (yes, using name mangling, why not).

Re: the linker doesn't care much about matching symbols

Posted Feb 16, 2016 18:08 UTC (Tue) by sytoka (guest, #38525) [Link]

> What is a pity, in my opinion, is that C doesn't provide at least the option to keep type info in the object files (yes, using name mangling, why not).

And Fortran has done it for you since F90, with .mod files...

Re: All Ada compilers I had worked with (around 20 years ago…) also used name mangling.

Posted Feb 16, 2016 20:02 UTC (Tue) by ldo (guest, #40946) [Link]

GNAT does not.

Re: the linker doesn't care much about matching symbols

Posted Feb 17, 2016 8:06 UTC (Wed) by niner (subscriber, #26151) [Link]

"it's the tools' job to properly unmangle names."

That's the problem right there: the assumption that there is "the" tool, rather than multiple tools. With multiple tools, they have to agree on how to mangle the names. If that's standardized (even if it's just de facto), like the bit of mangling that C does, interoperation is easy. That's why I can call an arbitrary C library's functions from Perl 6 without even needing a C compiler on the machine[1]. Binding to C++ libraries, on the other hand, is really hard. It needs special code to support each compiler[2].

[1] http://doc.perl6.org/language/nativecall
[2] https://github.com/rakudo/rakudo/blob/nom/lib/NativeCall/...

Re: the linker doesn't care much about matching symbols

Posted Feb 16, 2016 19:11 UTC (Tue) by darktjm (guest, #19598) [Link]

Yes, I mean type consistency between object modules for exported functions and data objects.

Ada was designed for type safety in this regard, and C wasn't. Simple as that. It's one of the many features that I love(d) about Ada. You could give C the benefit of the doubt, having been designed 10 years earlier than Ada. At least it's more type-safe than its predecessors. In any case, I'm not talking about doing this in the linker, although that is possible if debugging info with type information is used for that purpose (or if a GCC-specific section type were to be added for this purpose, I guess), and it would be better/more secure than the warnings I'm proposing. The warnings I'm proposing are simply to ensure that you declare all globals in a header file. That alone does not guarantee that the same header file is used to declare the globals the same way in multiple separate modules, but it does prevent people from sloppily declaring the same function/global data object in every C file and then forgetting to change it later on when the original changes (or doing it wrong in the first place). It also prevents library conflicts due to excessive exports, since you are more aware of what you're exporting. In other words, I'm proposing a warning enforcing good coding practice (which a linker fix would not do, by the way), similar to the indentation warning mentioned in the summary.

Many new warnings, but still not my pet peeve

Posted Feb 17, 2016 19:31 UTC (Wed) by ksandstr (guest, #60862) [Link]

Somewhat relatedly, this issue recently won the 2015 Underhanded C Competition.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 21:07 UTC (Tue) by xtifr (guest, #143) [Link] (2 responses)

Hmm, I can see the new meaning of -Wlogical-op causing some problems with platform-specific code. Imagine:

#if defined(PLATFORM1)
#define LIMIT1 64
#define LIMIT2 32
#elif defined(PLATFORM2)
#define LIMIT1 32
#define LIMIT2 64
#elif defined(PLATFORM3)
#define LIMIT1 32
#define LIMIT2 32
#endif
...
if (value > LIMIT1 || value > LIMIT2) ...

On Platform 3, this would generate a warning about a redundant comparison (error with -Werror). Of course, there are numerous ways to work around this problem, but, still...

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 17, 2016 14:13 UTC (Wed) by itvirta (guest, #49997) [Link]

Of course that particular check should probably be written as (value > min(limit1, limit2)), since that's what logically happens (and the optimiser will happily remove the non-dominating bound anyway).

Apparently a warning is not given if the limits are different, even though that might catch the case where the check was supposed to be (value > limit1 || value2 > limit2) but the other variable name was mistyped.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 17, 2016 15:45 UTC (Wed) by andresfreund (subscriber, #69562) [Link]

Indeed. Similar false positives are common with -Wtype-limits as well. E.g. a check like somechar >= 0, where common platforms will spew warnings about the check being redundant, but for a signed char the check might be important. Dealing with warnings like these can be quite frustrating :(
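(A tiny, hypothetical example of that -Wtype-limits case:)

// On targets where plain char is unsigned (ARM, s390, ...), -Wtype-limits
// reports "comparison is always true due to limited range of data type";
// on targets where char is signed, the c >= 0 check is a meaningful guard.
bool is_plain_ascii(char c)
{
    return c >= 0 && c < 127;
}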

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 21:35 UTC (Tue) by fratti (guest, #105722) [Link] (5 responses)

I highly recommend giving the SVN version a spin - while it took me over an hour to build GCC, it found 3 actual problems in the first codebase I ran it on. One was a compile error caused by returning "false" instead of a nullptr; the other two were caught by -Wmisleading-indentation and were cases of forgotten curly braces around if-conditions.

As far as I can tell, none of the uncovered issues had a real-world impact; while the code was different than intended, it did not lead to program misbehaviour. However, fixing them will probably save some headaches if the surrounding code is ever refactored.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 16, 2016 23:00 UTC (Tue) by rahvin (guest, #16953) [Link]

It didn't lead to misbehavior that you know of. My experience with computers and software in general is that if you let the computer make a decision about something (such as exactly where the brace ending the if statement actually is), the computer will assume something completely random that doesn't make any sense. Those are the kind of errors that lead to untraceable bugs, because the compiler made an assumption about where the if statement ended, and you know you would never see that missing brace while troubleshooting.

Though I have to admit that error name isn't very descriptive.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 17, 2016 10:36 UTC (Wed) by HelloWorld (guest, #56129) [Link] (3 responses)

> One was a compile error caused by returning "false" instead of a nullptr
Why would this be a problem? Every integral constant-expression with a value of 0 is a valid null pointer literal. It's certainly uncommon to use false for this purpose, but probably not a real problem.

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 17, 2016 12:09 UTC (Wed) by mathstuf (subscriber, #69389) [Link] (1 responses)

C++11 introduced nullptr which allows pointer logic with NULL to be much stricter. Relevant links:

https://stackoverflow.com/questions/32009694/why-implicit...
https://www.mail-archive.com/gcc-bugs@gcc.gnu.org/msg4645...

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 25, 2016 6:59 UTC (Thu) by HelloWorld (guest, #56129) [Link]

Yeah, so? Using nullptr here rather than false would lead to the exact same runtime behaviour, so how is this a genuine problem?

Wielaard: Looking forward to GCC6 – Many new warnings

Posted Feb 25, 2016 9:44 UTC (Thu) by jwakely (subscriber, #60262) [Link]

That used to be true, but isn't now, see http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_defects.h...

> A null pointer constant is an integer literal (2.13.2 [lex.icon]) with value zero or a prvalue of type std::nullptr_t.

So false and (0+0) and (1-1) are no longer null pointer constants and cannot be implicitly converted to pointer types.

If you want a pointer then use a pointer, not a bool or some other type.
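(A minimal reproducer of the "false" case; my own sketch, not from the codebase fratti mentioned.)

struct Thing { int value; };

Thing *find_thing(bool have_it)
{
    if (!have_it)
        return false;     // accepted by older compilers as a null pointer
                          // constant; GCC 6 rejects it ("cannot convert
                          // 'bool' to 'Thing*'") -- return nullptr instead
    static Thing t = {42};
    return &t;
}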

