Wielaard: Looking forward to GCC6 – Many new warnings
"My favorite is still -Wmisleading-indentation. But there are many more that have found various bugs. Not all of them are enabled by default, but it makes sense to enable as many as possible when writing new code."
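A minimal illustration (mine, not from the blog post) of the kind of mistake -Wmisleading-indentation is meant to catch; in GCC 6 it is enabled by -Wall:
// g++-6 -Wall -c indent.cc   (illustration only)
#include <cstdio>

void check(int fd)
{
    if (fd < 0)
        std::printf("bad descriptor\n");
        std::printf("cleaning up\n");   // warned about: indented as if guarded
                                        // by the if, but it always runs
}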
Posted Feb 15, 2016 17:24 UTC (Mon)
by juliank (guest, #45896)
[Link] (56 responses)
Can't read the blog though, connection timed out. :(
Posted Feb 15, 2016 17:40 UTC (Mon)
by mpolacek (guest, #66426)
[Link] (55 responses)
Posted Feb 15, 2016 17:53 UTC (Mon)
by mjw (subscriber, #16740)
[Link] (54 responses)
Hopefully it is reachable again now.
Posted Feb 15, 2016 19:00 UTC (Mon)
by mjw (subscriber, #16740)
[Link] (53 responses)
In the above blog post, all I am really doing is trying them all out and showing some examples where a warning really helps.
Various new warnings found real bugs in actual code. There are a couple that aren't on by default or enabled by -Wall or -Wextra, which might produce false positives but are IMHO really useful. In particular, -Wnull-dereference, -Wduplicated-cond and -Wlogical-op found some real issues, and I would have liked it if there were some -Wextra-extras option that enabled them, instead of people having to go through the list by hand and try them out one by one.
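As an illustration (my own, not an example from the post), this is the kind of code those options complain about, assuming they are enabled explicitly:
// g++-6 -Wduplicated-cond -Wlogical-op -c conds.cc
int classify(int x)
{
    if (x == 0)
        return 1;
    else if (x == 0)        // -Wduplicated-cond: duplicated 'if' condition
        return 2;

    if (x == 1 || 2)        // -Wlogical-op: logical 'or' applied to a
        return 3;           // non-boolean constant (x == 2 was probably meant)

    return -1;
}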
Posted Feb 15, 2016 20:02 UTC (Mon)
by niner (subscriber, #26151)
[Link] (7 responses)
FWIW that's the same approach perl takes where one can enable features of Perl 5.22 (and all previous versions) by way of a "use 5.22;" statement.
Posted Feb 15, 2016 20:36 UTC (Mon)
by josh (subscriber, #17465)
[Link]
Posted Feb 16, 2016 0:47 UTC (Tue)
by vonbrand (subscriber, #4458)
[Link]
Perhaps -Wall and -Wall=gcc6 or something like that.
Posted Feb 16, 2016 9:34 UTC (Tue)
by epa (subscriber, #39769)
[Link] (4 responses)
Warnings won't ever "break existing code", unless you have asked for all warnings to be treated as fatal errors (-Werror). Maybe what is needed is a way to turn on all the latest and greatest warnings, but only treat those as errors which existed in a certain GCC version, for example -Werror-version=gcc-5.3.1. Then your program will continue to compile with newer releases, but you will get the benefit of newer warnings - and once the codebase is clean of those warnings, you can increase the -Werror-version number.
It is an interesting question what to do if a warning existed in some older GCC version but was then removed or disabled by default in a later version, but this nicety doesn't really get in the way of having versioned fatal warnings.
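No such -Werror-version switch exists today; the closest source-level approximation I can think of (a sketch, with the two warning names picked purely as examples of GCC 6 additions, and relying on my understanding of the #pragma GCC diagnostic machinery) is to keep -Werror but demote the newer diagnostics back to plain warnings until the code base is clean, e.g. via a shared header pulled in with -include:
/* Sketch only: -Werror stays in CFLAGS, but these GCC 6 warnings are
 * demoted back to non-fatal warnings for now. */
#if defined(__GNUC__) && __GNUC__ >= 6
#pragma GCC diagnostic warning "-Wmisleading-indentation"
#pragma GCC diagnostic warning "-Wshift-negative-value"
#endif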
Posted Feb 16, 2016 11:46 UTC (Tue)
by juliank (guest, #45896)
[Link] (3 responses)
Posted Feb 16, 2016 15:06 UTC (Tue)
by epa (subscriber, #39769)
[Link] (2 responses)
Posted Feb 16, 2016 22:21 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (1 responses)
Having had some warnings that I couldn't get rid of, I know it can be a real problem, but I'd rather have fatal warnings switched on if I can. That way I know my code is as clean as possible, and having taken over code that originally had minimal warnings and lots of subtle bugs, that's the way I like it ... :-)
Cheers,
Wol
Posted Feb 22, 2016 10:07 UTC (Mon)
by epa (subscriber, #39769)
[Link]
Posted Feb 15, 2016 20:04 UTC (Mon)
by juliank (guest, #45896)
[Link] (44 responses)
Posted Feb 16, 2016 1:33 UTC (Tue)
by pbonzini (subscriber, #60935)
[Link] (43 responses)
Posted Feb 16, 2016 1:58 UTC (Tue)
by Elv13 (subscriber, #106198)
[Link] (41 responses)
Posted Feb 16, 2016 3:58 UTC (Tue)
by Sesse (subscriber, #53779)
[Link] (40 responses)
Of course, when developing, -Werror is great.
/* Steinar */
Posted Feb 16, 2016 4:11 UTC (Tue)
by neilbrown (subscriber, #359)
[Link] (35 responses)
Of course I support some easy way to disable it, like make CWFLAGS=
Posted Feb 16, 2016 6:24 UTC (Tue)
by Sesse (subscriber, #53779)
[Link] (28 responses)
it's not great when you have a hundred projects like that breaking during a distro rebuild, requiring manual review and action
/* Steinar */
Posted Feb 16, 2016 6:57 UTC (Tue)
by torquay (guest, #92428)
[Link] (23 responses)
It's clear that having per-project defaults is not scalable from a distro point of view. However, is the argument "it's bad for the distro, so the project is bad"? I'd argue that it's the distro getting in the way, not the other way around.
Perhaps distros should get out of the business of providing mudballs requiring constant rebuilds of everything, and instead focus on providing a stable base operating system?
Posted Feb 16, 2016 7:57 UTC (Tue)
by oldtomas (guest, #72579)
[Link] (20 responses)
Yeah, yeah. We know. Some disagree. Now back to GCC6, shall we?
Posted Feb 16, 2016 8:54 UTC (Tue)
by torquay (guest, #92428)
[Link] (19 responses)
Umm? Project XYZ wants to use -Werror (from GCC 6 and beyond) because the developer is (rightly) paranoid about bugs. The distro throws the dummy and says no, because it would take too much effort for it to keep functioning as a distro. The problem here is not the developer, but the rather arbitrary overriding of developers' choices.
There is no burning of Carthage here, but merely an indication that distros in their current state are not fit for purpose. Rather than changing Project XYZ to fit into an arbitrary definition of a good project (which changes from distro to distro), how about we let the project developer compile the project as he/she sees fit, and then just sandbox the result? Let the distro evolve to provide the sandbox and the security policies that go with it.
Posted Feb 16, 2016 9:42 UTC (Tue)
by epa (subscriber, #39769)
[Link]
It is common for projects to have build targets make test, which can be done as part of package building every time, and make maintainer-test, which turns on extra checks and doodads intended for the program's maintainer or developers. Building with -Werror could be done as part of maintainer-test only.
It would sure be useful for compiler warnings in package building to be reported back upstream somehow, and automatically - but the amount of spam this would generate makes it impractical unless you have somebody employed full time on build cleanup janitorial tasks.
Posted Feb 16, 2016 11:37 UTC (Tue)
by oldtomas (guest, #72579)
[Link] (17 responses)
Thing is, for me Project XYZ's developer isn't God; sometimes I trust the distro's maintainer more (especially for those programs on my box which get less attention from me). The few (alas, too few: my capacity is far too small) which get more attention I do compile myself.
Why do I trust the packager more?
Focus. The maintainer's focus is to integrate the program into a consistent whole, whereas Project XYZ's maintainer's focus sometimes is "my program is the single most important thing out there", which is understandable, but counterproductive[1]. As long as the distro's maintainer makes her decisions transparent to me, I'm infinitely thankful for her doing this job for me. And (for me, still) this job is becoming ever more essential the more strident those "open source thingies" are becoming out there. Sometimes they remind me of spoiled little children[1].
Now I know you strongly disagree -- and I don't think each of us will convince the other; so having clarified our standpoints enough, let's leave it at that. If you want to have the last word, go ahead... this is *my* last one.
[1] There are great exceptions to that, for sure.
Posted Feb 16, 2016 17:23 UTC (Tue)
by mpr22 (subscriber, #60784)
[Link] (15 responses)
Given the choice between trusting someone who turns on compiler warnings and puts -Werror in CFLAGS, and someone who turns off compiler warnings and/or removes -Werror from CFLAGS, I would definitely prefer to trust the former.
Posted Feb 16, 2016 18:50 UTC (Tue)
by bronson (subscriber, #4806)
[Link] (11 responses)
Pretty sure you've set up a false dichotomy there.
Posted Feb 17, 2016 11:29 UTC (Wed)
by itvirta (guest, #49997)
[Link] (10 responses)
They are surely not to be trusted, but that has little to do with warnings.
Posted Feb 17, 2016 12:05 UTC (Wed)
by juliank (guest, #45896)
[Link] (4 responses)
We store all of it though, and calculate a checksum of it all, including the not yet initialised parts.
Posted Feb 17, 2016 19:20 UTC (Wed)
by ksandstr (guest, #60862)
[Link] (3 responses)
Posted Feb 17, 2016 19:43 UTC (Wed)
by juliank (guest, #45896)
[Link] (2 responses)
Valgrind currently complains about that, but oh well, it's at least clearly identifiable. Should still get fixed at some point.
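The usual fix is simply to make the not-yet-used bytes deterministic before anything checksums them; a minimal sketch with made-up names (this is not APT's actual code):
#include <cstring>

struct CacheBlock {                 // hypothetical stand-in for the real structure
    char     tag[8];
    unsigned entries;
    char     reserved[52];          // not filled in yet, but still checksummed
};

void init_block(CacheBlock *b)
{
    std::memset(b, 0, sizeof *b);       // padding and reserved bytes become 0,
    std::memcpy(b->tag, "cache01", 8);  // so the checksum is reproducible and
    b->entries = 0;                     // valgrind has nothing to complain about
}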
Posted Feb 17, 2016 20:52 UTC (Wed)
by ksandstr (guest, #60862)
[Link] (1 responses)
So if you're seeing consistent results, you're either using new[] or malloc() in a process that hasn't freed much of the memory it's allocated and written yet. But sure enough, valgrind does complain about uninitialized values when reading from new[]'d arrays -- this should be reported and corrected.
Back in 2006 it was still realistic that valgrind wouldn't become yet another mindlessly-followed lint. It had innovations like data tracking instrumentation and what-not. Alas.
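For completeness, a tiny example of the new[] case being described (my own test program, not code from the thread); memcheck flags the read of the indeterminate element:
// g++ newarray.cc && valgrind ./a.out
#include <cstdio>

int main()
{
    int *a = new int[16];          // default-initialization: elements indeterminate
    std::printf("%d\n", a[3]);     // valgrind: use of uninitialised value
    delete[] a;
    return 0;
}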
Posted Feb 21, 2016 13:34 UTC (Sun)
by ms_43 (subscriber, #99293)
[Link]
yes (if an explicit initialization expression is omitted).
https://github.com/cplusplus/draft/blob/master/source/exp...
> (zeroes for integral and floating-point types;
no, that would be zero-initialization!
operator new performs default-initialization, which calls an applicable no-argument constructor for class types, otherwise does nothing.
https://github.com/cplusplus/draft/blob/01c0772e922e5edcd...
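In code, the distinction being drawn looks like this (my illustration):
struct S { int x; S() : x(7) {} };

int *a = new int[4];     // default-initialization: the ints stay indeterminate
int *b = new int[4]();   // value-initialization: the ints are zeroed
S   *s = new S;          // class type: the no-argument constructor runs, x == 7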
Posted Feb 17, 2016 16:05 UTC (Wed)
by bronson (subscriber, #4806)
[Link] (4 responses)
Or, if you're saying that it's normal to go commenting out lines of code to eradicate warnings -- because it's ultimately the fault of the person who wrote the questionable code -- then I guess this is a fine example of the problem. Thank you!
Remember, warning-driven development only works when you're using a perfect compiler.
Posted Feb 17, 2016 21:20 UTC (Wed)
by itvirta (guest, #49997)
[Link] (3 responses)
But in that particular case, the ultimate reason seems to be that someone tried to be smarter than they should have been, and did something that is both wrong (in the language sense) and not very useful either.
Wrong, since as far as I can find, the point was to read uninitialized memory, and that results in undefined behaviour, with the full demons-flying-from-one's-mouth possibility included.
Not very useful, since the whole thing was to "safeguard" against not having a good RNG by creating a really shitty RNG. Same goes for the fact that the library used the PID to seed the RNG. PIDs aren't really secret or random and make for a rather shitty RNG. Had the library not used such a weak addition, the final screw-up might have been detected sooner, since all the outputs would have been exactly the same, instead of having the 2^15 different possibilities (or whatever the number was).
However, I do find the custom of solely blaming a silly mistake while ignoring the fucked-up design really tiring.
All of this goes with a big IIRC disclaimer, so feel free to point me wrong.
Posted Feb 17, 2016 23:58 UTC (Wed)
by tialaramex (subscriber, #21167)
[Link] (2 responses)
The mythical version of the story:
* OpenSSL is so crappy it relies on uninitialised memory reads to do random number generation even though, er, that can't work on Unix
* Hero Debian Guy fixes the uninitialised reads
* Somehow this breaks OpenSSL because it needed uninitialised reads, even though er, they didn't work so... um.
Actual reality of the story:
* OpenSSL is so crappy it has several similar-looking C functions that actually do different stuff
* Debian Guy spots that function A does a harmless uninitialised read and fixes it, arguably making the code more "correct" but with no functional change.
* Idiot Debian Guy sees a similar function B and blindly applies the same "fix".
* The "fix" to this function B disables its entire purpose, effectively removing the PRNG feature almost entirely
I don't know if either story has anything to tell us about GCC6. But I do know that spreading the former story is harmful.
Posted Feb 19, 2016 14:13 UTC (Fri)
by cortana (subscriber, #24596)
[Link] (1 responses)
Your version of the story misses out several details that were covered in Debian, OpenSSL, and a lack of cooperation posted to a certain web site some time ago.
Posted Feb 19, 2016 19:12 UTC (Fri)
by bronson (subscriber, #4806)
[Link]
Posted Feb 17, 2016 0:17 UTC (Wed)
by nix (subscriber, #2304)
[Link] (2 responses)
Yeah, I'd trust that sort of person a whole lot -- to make mistakes through ignorance.
Posted Feb 18, 2016 21:48 UTC (Thu)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
My C and C++ code builds without warnings on my system, because I do all my development with -Werror enabled. Thus, if my C and C++ code produces warnings when compiled on someone else's system, then either their toolchain is (defective|deficient|misconfigured) or there is an issue with my code which needs to be properly examined by a human, and that someone else should probably not be distributing binaries to other people until it has been.
Posted Feb 18, 2016 21:54 UTC (Thu)
by Sesse (subscriber, #53779)
[Link]
Most warnings, especially in code that's been around for a while, are _not_ signs of actual bugs. (It's good style to fix them nevertheless, because some are, but that should not block the compile for an end user.)
/* Steinar */
Posted Feb 16, 2016 18:16 UTC (Tue)
by drag (guest, #31333)
[Link]
Distro maintainers can still work with projects and review code for them without needing to be involved in modifying them to be packaged for a specific distribution.
Arguably, distro maintainers working to get changes into upstream and systematically eliminating distro-specific packaging requirements and build processes should allow a greater degree of collaboration between distributions. This should help significantly in maintaining those packages which are critical, but do not warrant the full-time development staff that many other types of software require.
> The maintainer's focus is to integrate the program into a consistent whole, whereas Project XYZ's maintainer's focus sometimes is "my program is the single most important thing out there", which is understandable, but counterproductive[1].
It depends on the nature of the application.
Some applications are end-user focused and thus are considerably higher priority than other software that focuses on the infrastructure of the OS. Think about the difference between Firefox and some sort of XML parsing library.
Once my imaginary XML parsing library works, is fast, and is stable, there doesn't really need to be a full-time team developing it. You still need maintainers, but new development is discouraged by the nature of it being used by a wide variety of other applications. 'If it isn't broke, don't fix it' is the name of the game in this instance. Its purpose in life is just to serve as a basis for new development. Having it be almost 'abandonware' and only be maintained for security purposes can be an advantage.
In comparison, the Firefox browser has the triple duty of being an end-user-facing application, dealing with an ever-evolving set of standards and practices, and facing new threats and problems related to users needing to access unpredictable sources of unvetted and potentially malicious data on the internet. In this case it is the job of the OS to bend for the application rather than the other way around. Firefox is significantly more important than the rest of the OS in a lot of ways and in almost all situations in which it gets used.
Since the primary purpose of an operating system is to make developing and using end-user applications easier (it is always the job of computers to 'do something' rather than to sit there consuming electricity for the sake of correctness), it's the job of distributions to bend to the will of users and the applications they consume. It is true that an OS, as part of its job description of 'making life easier for users', does have to enforce a certain amount of correctness on applications, but the ideal solution is to 'make it easier to do it right' rather than to second-guess developers and modify their programs to suit the expectations of distributions.
Sometimes that's unavoidable, but in my mind, if distributions have to hack on software then they are the new upstream, the new developers, and catering to the whims of distribution packagers is just as troublesome as catering to the whims of the original developer. This also has obvious scalability issues, as you can't expect distributions to be experts in all the software in the known universe. A far better solution is to have distributions work together to get changes into upstream rather than trying to solve the problem 'downstream'.
Posted Feb 16, 2016 10:10 UTC (Tue)
by roblucid (guest, #48964)
[Link]
From an end user's point of view, a distro's security updates are a real benefit; building from source and re-updating soon gets boring, especially if you have the problem of embedded bundled libraries.
Posted Feb 17, 2016 21:09 UTC (Wed)
by Sesse (subscriber, #53779)
[Link]
/* Steinar */
Posted Feb 16, 2016 9:22 UTC (Tue)
by error27 (subscriber, #8346)
[Link]
Posted Feb 16, 2016 10:03 UTC (Tue)
by roblucid (guest, #48964)
[Link] (2 responses)
Posted Feb 16, 2016 22:28 UTC (Tue)
by Wol (subscriber, #4433)
[Link] (1 responses)
So you can take an upstream git pull and run make for your distro. That way the distro can tweak things, and the tweaks go upstream so the user can see what's going on - that is, if they can understand the build system ... :-)
Cheers,
Wol
Posted Feb 21, 2016 13:48 UTC (Sun)
by ms_43 (subscriber, #99293)
[Link]
Where "distro" means "Linux", "MacOSX", "Win32" or "Android".
https://cgit.freedesktop.org/libreoffice/core/tree/distro...
Posted Feb 16, 2016 7:02 UTC (Tue)
by lkundrak (subscriber, #43452)
[Link] (5 responses)
Posted Feb 18, 2016 10:02 UTC (Thu)
by mjw (subscriber, #16740)
[Link] (4 responses)
Posted Feb 18, 2016 13:44 UTC (Thu)
by pizza (subscriber, #46)
[Link] (3 responses)
It's awesome for the tail end of development work leading up to a release, but IMO it has no place in a release package if that package is intended to be compiled by random third parties. The same code on different versions of the compiler (or even the same compiler on different platforms) will result in different sets of warnings.
Posted Feb 18, 2016 17:58 UTC (Thu)
by mjw (subscriber, #16740)
[Link]
Posted Mar 3, 2016 21:20 UTC (Thu)
by nix (subscriber, #2304)
[Link] (1 responses)
glibc is the only such software I can think of right now, on Linux at least.
Posted Mar 3, 2016 23:00 UTC (Thu)
by bronson (subscriber, #4806)
[Link]
Posted Feb 16, 2016 7:24 UTC (Tue)
by josh (subscriber, #17465)
[Link] (3 responses)
Posted Feb 16, 2016 7:49 UTC (Tue)
by andresfreund (subscriber, #69562)
[Link] (1 responses)
Posted Feb 16, 2016 17:52 UTC (Tue)
by k3ninho (subscriber, #50375)
[Link]
I'm sure that we can imagine a layer of indirection which manages the mapping of versions to classes of error, holding a matrix of versions and features as the release notes would list. And that might (maybe) appear in some man page content. What free/open source projects have been nearly adequate at doing is explaining the orthodoxy of how a user is expected to use their tool.
You know that thing where a newcomer to a community shows up and complains that feature X looks like it should do Y but instead it's structured to do A, B and/or C? There's a habitual gap in the community record that makes sense of the journey from one problem to a solution, to broader appeal and then on to a growing userbase and contributory community. There's almost room for a maxim: "the current solution looks like it does because we started over there, went round the houses, painted a few bike-sheds and ended up where we are now; of course you know where you want to be and, if you were going there, you wouldn't want to be starting from here."
In light of that polish, the shorthand for someone taking on a compiler upgrade and not evaluating the full extent of changes they will face remains the less-friendly RTFM.
K3n.
Posted Feb 19, 2016 0:18 UTC (Fri)
by dirtyepic (guest, #30178)
[Link]
Posted Feb 18, 2016 9:49 UTC (Thu)
by mjw (subscriber, #16740)
[Link]
But for other categories of warnings (where there is the potential that it does indeed signal a subtle bug that might need some analysis, even if it could be a false positive) it would be really nice to have some way to enable them all and just see what warnings you get. Is it really too hard to define two or three categories of warnings that can easily be enabled all at once?
I certainly appreciate the GCC documentation. It is pretty good. And it was fun to compare the info pages of the current and new version of GCC to get an overview of all the new warning options to try out. I learned a lot! But an easier way to discover new warnings (especially those not enabled by default or through -Wall or -Wextra) would be appreciated.
Posted Feb 15, 2016 20:45 UTC (Mon)
by darktjm (guest, #19598)
[Link] (13 responses)
Posted Feb 16, 2016 3:08 UTC (Tue)
by ldo (guest, #40946)
[Link] (11 responses)
Ada (and other earlier languages) showed the way in the 1980s and even before. Why are linkers still so dumb? Why does C++ need to resort to name mangling to convey type information? Why can’t the object format be smarter?
Posted Feb 16, 2016 4:01 UTC (Tue)
by Sesse (subscriber, #53779)
[Link] (5 responses)
While I would support more intelligent linker conventions (except it would break everything now that we finally have a pretty stable C++ ABI :-) ), I'm not entirely sure exactly what name mangling has to do with it. How would you represent a C++ type signature in such a format in a way that's meaningfully different from name mangling?
/* Steinar */
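For reference, here is roughly what a mangled name already encodes under the Itanium C++ ABI used on Linux (my own example, not taken from the discussion):
// Declaration:
//     namespace ns { int f(double, const char *); }
// Mangled symbol:
//     _ZN2ns1fEdPKc
// i.e. the nested name ns::f followed by the parameter types
// (d = double, PKc = pointer to const char); the return type of an
// ordinary function is not encoded.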
Posted Feb 16, 2016 4:28 UTC (Tue)
by ldo (guest, #40946)
[Link] (4 responses)
Even C could benefit from type-consistency checking across modules. This is the difference between Ada-style typesafe “separate” compilation, versus “independent” compilation which is all you can get in C and FORTRAN.
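A classic example of what "independent" compilation lets through (an illustration of mine): the linker only matches the symbol name, so a declaration that has drifted out of sync still links. Building with -flto can catch this nowadays, as far as I know.
/* file1.c */
int limit = 42;                            /* the real definition */

/* file2.c */
extern long limit;                         /* wrong type, but the symbol name matches, */
long scaled(void) { return limit * 10; }   /* so it links without complaint */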
Posted Feb 16, 2016 4:43 UTC (Tue)
by sfeam (subscriber, #2841)
[Link] (1 responses)
Doesn't -funit-at-a-time already do this?
Posted Feb 16, 2016 4:52 UTC (Tue)
by ldo (guest, #40946)
[Link]
I don’t think so. From the gcc(1) man page:
Posted Feb 16, 2016 6:25 UTC (Tue)
by Sesse (subscriber, #53779)
[Link] (1 responses)
In C++, you need type information as part of the ABI.
/* Steinar */
Posted Feb 17, 2016 21:00 UTC (Wed)
by ldo (guest, #40946)
[Link]
The ABI can only handle a small subset of the necessary type information.
Consider the Ada declaration
type B is new A;
which means that type B is essentially a copy of type A, inheriting all the functionality defined for it. But B must still be considered distinct from A, and incompatible with it.
Then you have subtypes, which are considered compatible with the type they derive from, but are subject to additional constraints.
There can be no code or runtime data associated with these definitions, which is why you cannot represent them in any ABI. Yet they need to be processed across compilation units. Isn’t that a natural job for the linker?
Posted Feb 16, 2016 14:54 UTC (Tue)
by alonz (subscriber, #815)
[Link] (3 responses)
All Ada compilers I had worked with (around 20 years ago…) also used name mangling.
I don't see why this should concern anybody – it's the tools' job to properly unmangle names. Just like it would have been their job to display whatever extra information a "smarter" object format would have stored.
What is a pity, in my opinion, is that C doesn't provide at least the option to keep type info in the object files (yes, using name mangling, why not).
Posted Feb 16, 2016 18:08 UTC (Tue)
by sytoka (guest, #38525)
[Link]
And Fortran does it for you since F90, with .mod files...
Posted Feb 16, 2016 20:02 UTC (Tue)
by ldo (guest, #40946)
[Link]
GNAT does not.
Posted Feb 17, 2016 8:06 UTC (Wed)
by niner (subscriber, #26151)
[Link]
That's the problem right there: the assumption that there is "the" tool, rather than multiple tools. With multiple tools, they have to agree on how to mangle the names. If that's standardized (even if it's just de facto), like the bit of mangling that C does, interoperation is easy. That's why I can call an arbitrary C library's functions from Perl 6 without even needing a C compiler on the machine[1]. Binding to C++ libraries, on the other hand, is really hard. It needs special code to support each compiler[2].
[1] http://doc.perl6.org/language/nativecall
[2] https://github.com/rakudo/rakudo/blob/nom/lib/NativeCall/...
Posted Feb 16, 2016 19:11 UTC (Tue)
by darktjm (guest, #19598)
[Link]
Ada was designed for type safety in this regard, and C wasn't. Simple as that. It's one of the many features that I love(d) about Ada. You could give C the benefit of the doubt, having been designed 10 years earlier than Ada. At least it's more type-safe than its predecessors. In any case, I'm not talking about doing this in the linker, although that is possible if debugging info with type information is used for that purpose (or if a GCC-specific section type were to be added for this purpose, I guess), and it would be better/more secure than the warnings I'm proposing. The warnings I'm proposing are simply to ensure that you declare all globals in a header file. That alone does not guarantee that the same header file is used to declare the globals the same way in multiple separate modules, but it does prevent people from sloppily declaring the same function/global data object in every C file and then forgetting to change it later on when the original changes (or doing it wrong in the first place). It also prevents library conflicts due to excessive exports, since you are more aware of what you're exporting. In other words, I'm proposing a warning enforcing good coding practice (which a linker fix would not do, by the way), similar to the indentation warning mentioned in the summary.
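A sketch of the sloppy pattern being described (my example, with made-up names). For functions, GCC's existing -Wmissing-declarations already warns when a global is defined without a previous declaration from a header; I am not aware of an equivalent for data objects:
/* util.c */
long debug_level = 0;        /* definition was changed from int to long ... */

/* main.c */
extern int debug_level;      /* ... but this stale hand-written declaration
                                still compiles and links: the symbol name
                                matches, the type is never checked */
int verbose(void) { return debug_level > 2; }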
Posted Feb 17, 2016 19:31 UTC (Wed)
by ksandstr (guest, #60862)
[Link]
Posted Feb 16, 2016 21:07 UTC (Tue)
by xtifr (guest, #143)
[Link] (2 responses)
#if defined(PLATFORM1)
#define LIMIT1 64
#define LIMIT2 32
#elif defined(PLATFORM2)
#define LIMIT1 32
#define LIMIT2 64
#elif defined(PLATFORM3)
#define LIMIT1 32
#define LIMIT2 32
#endif
...
if (value > LIMIT1 || value > LIMIT2) ...
On Platform 3, this would generate a warning about a redundant comparison (error with -Werror). Of course, there are numerous ways to work around this problem, but, still...
Posted Feb 17, 2016 14:13 UTC (Wed)
by itvirta (guest, #49997)
[Link]
since that's what logically happens (and the optimiser will happily remove the non-dominating bound anyway).
Apparently a warning is not given if the limits are different, even though that might catch the case where the check was supposed to be (value > limit1 || value2 > limit2) but the other variable name was mistyped.
Posted Feb 17, 2016 15:45 UTC (Wed)
by andresfreund (subscriber, #69562)
[Link]
Posted Feb 16, 2016 21:35 UTC (Tue)
by fratti (guest, #105722)
[Link] (5 responses)
As far as I can tell, none of the uncovered issues had a real-world impact; while the code was different from what was intended, it did not lead to program misbehaviour. However, it'll probably save some headaches if the surrounding code is ever refactored.
Posted Feb 16, 2016 23:00 UTC (Tue)
by rahvin (guest, #16953)
[Link]
Though I have to admit that error name isn't very descriptive.
Posted Feb 17, 2016 10:36 UTC (Wed)
by HelloWorld (guest, #56129)
[Link] (3 responses)
Why would this be a problem? Every integral constant-expression with a value of 0 is a valid null pointer literal. It's certainly uncommon to use false for this purpose, but probably not a real problem.
Posted Feb 17, 2016 12:09 UTC (Wed)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
https://stackoverflow.com/questions/32009694/why-implicit...
Posted Feb 25, 2016 6:59 UTC (Thu)
by HelloWorld (guest, #56129)
[Link]
Posted Feb 25, 2016 9:44 UTC (Thu)
by jwakely (subscriber, #60262)
[Link]
> A null pointer constant is an integer literal (2.13.2 [lex.icon]) with value zero or a prvalue of type std::nullptr_t.
So false and (0+0) and (1-1) are no longer null pointer constants and cannot be implicitly converted to pointer types.
If you want a pointer then use a pointer, not a bool or some other type.
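Concretely, the rule being quoted plays out like this (my illustration; GCC 6 rejects the last two calls in its default C++14 mode):
void take(int *p);

void demo()
{
    take(0);         // OK: the literal 0 is still a null pointer constant
    take(nullptr);   // OK: the preferred spelling
    take(false);     // error: 'false' is no longer a null pointer constant
    take(1 - 1);     // error: a constant expression with value 0 is not enough
}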
https://www.mail-archive.com/gcc-bugs@gcc.gnu.org/msg4645...
