
GCC 12.1 Released

The GCC project has made the first release of the GCC 12 series, GCC 12.1. As the announcement notes, this month is the 35th anniversary of the GCC 1.0 release. There are lots of changes and fixes in this release, including:
This release deprecates support for the STABS debugging format and introduces support for the CTF debugging format. The C and C++ frontends continue to advance, extending support for features in the upcoming C2X and C++23 standards, and the C++ standard library improves support for the experimental C++20 and C++23 parts. The Fortran frontend now fully supports TS 29113 for interoperability with C.

[...] On the security side GCC can now initialize stack variables implicitly using -ftrivial-auto-var-init to help track down and mitigate uninitialized stack variable flaws. The C and C++ frontends now support __builtin_dynamic_object_size, compatible with the clang extension. The x86 backend gained mitigations against straight-line speculation with -mharden-sls. The experimental Static Analyzer gained uninitialized variable use detection and many other improvements.


From:  Richard Biener <rguenther-AT-suse.de>
To:  gcc-announce-AT-gcc.gnu.org
Subject:  GCC 12.1 Released
Date:  Fri, 06 May 2022 10:48:06 +0200
Message-ID:  <p3p5oo2-3221-p179-5547-o250q2p5r8sn@fhfr.qr>
Cc:  gcc-AT-gcc.gnu.org, info-gnu-AT-gnu.org
Archive-link:  Article


The GCC developers are proud to announce another major GCC release, 12.1.

This year we celebrated the 35th anniversary of the first GCC beta release
and this month we will celebrate 35 years since the GCC 1.0 release!

This release deprecates support for the STABS debugging format and
introduces support for the CTF debugging format [1].  The C and C++
frontends continue to advance, extending support for features
in the upcoming C2X and C++23 standards, and the C++ standard library
improves support for the experimental C++20 and C++23 parts.
The Fortran frontend now fully supports TS 29113 for interoperability with C.

GCC now understands clang's __builtin_shufflevector extension, making
it easier to share generic vector code.  Starting with GCC 12
vectorization is enabled at the -O2 optimization level using the
very-cheap cost model, which puts extra constraints on code-size expansion.

On the security side GCC can now initialize stack variables implicitly
using -ftrivial-auto-var-init to help track down and mitigate
uninitialized stack variable flaws.  The C and C++ frontends now support
__builtin_dynamic_object_size, compatible with the clang extension.
The x86 backend gained mitigations against straight-line speculation
with -mharden-sls.  The experimental Static Analyzer gained uninitialized
variable use detection and many other improvements.

The x86 backend gained support for AVX512-FP16 via _Float16.
The BPF backend now supports CO-RE, the RISC-V backend gained support
for many new ISA extensions.

Some code that compiled successfully with older GCC versions might require
source changes; see http://gcc.gnu.org/gcc-12/porting_to.html for
details.

See

  https://gcc.gnu.org/gcc-12/changes.html

for more information about changes in GCC 12.1.

This release is available from the WWW and FTP servers listed here:

 https://sourceware.org/pub/gcc/releases/gcc-12.1.0/
 https://gcc.gnu.org/mirrors.html

The release is in the gcc-12.1.0/ subdirectory.

If you encounter difficulties using GCC 12.1, please do not contact me
directly.  Instead, please visit http://gcc.gnu.org for information about
getting help.

Driving a leading free software project such as GCC would not be possible
without support from its many contributors: not only its developers,
but especially its regular testers and users who contribute to its
high quality.  The list of individuals is too large to thank
everyone individually!

----

[1] See https://ctfstd.org/




GCC 12.1 Released

Posted May 6, 2022 17:14 UTC (Fri) by josh (subscriber, #17465) [Link]

Another notable item in the GCC 12 release: it includes changes to libgccjit that support rustc_codegen_gcc, so that backend no longer requires a modified GCC tree.

GCC 12.1 Released

Posted May 6, 2022 17:29 UTC (Fri) by atai (subscriber, #10977) [Link] (2 responses)

Is Aple Silicon (m1) support in the release? (Or does an unofficial release exist that is not yet incorporated into the official release?)

GCC 12.1 Released

Posted May 6, 2022 17:29 UTC (Fri) by atai (subscriber, #10977) [Link]

Apple Silicon

GCC 12.1 Released

Posted May 8, 2022 2:26 UTC (Sun) by harrowm (guest, #158408) [Link]

If you want GCC for Apple Silicon, the Homebrew formula will automagically install a non-GNU patched version of 11.3.

GCC 12.1 Released

Posted May 6, 2022 18:28 UTC (Fri) by wtarreau (subscriber, #51152) [Link] (46 responses)

Already feeling anxious to discover what new breakage it brings to existing code and how to work around it without degrading the code further :-/

The improvements to the static analyzer could be nice however.

GCC 12.1 Released

Posted May 6, 2022 19:45 UTC (Fri) by flussence (guest, #85566) [Link] (20 responses)

-O2 enables autovectorisation now, so that should be fun. It may have gotten more stable since it was introduced but it used to be a constant source of misery.

GCC 12.1 Released

Posted May 7, 2022 4:25 UTC (Sat) by wtarreau (subscriber, #51152) [Link] (19 responses)

> -O2 enables autovectorisation now, so that should be fun. It may have gotten more stable since it was introduced but it used to be a constant source of misery.

Indeed, I noticed it causing trouble in the past due to alignment: when you access a struct allocated somewhere, which contains only small types and doesn't require 128-bit alignment, and vector instructions are used on it, suddenly your program dies miserably with an alignment trap. I anticipate we'll have to add "__attribute__((aligned(sizeof(void*))))" to each and every struct definition to avoid surprises, just in case...

GCC 12.1 Released

Posted May 7, 2022 16:44 UTC (Sat) by willy (subscriber, #9762) [Link] (15 responses)

IIRC, __aligned__ only increases the alignment of the struct. We'd also need to add __packed__ to prevent the use of autovec on a struct.

GCC 12.1 Released

Posted May 8, 2022 7:21 UTC (Sun) by pbonzini (subscriber, #60935) [Link] (14 responses)

If the concern is use of the FPU, then Linux just needs to disable vectorization at the Makefile level. If the concern is alignment, GCC knows when to use unaligned memory access instructions.

In any case my suggestion is to just *talk* to the developers.

GCC 12.1 Released

Posted May 8, 2022 18:31 UTC (Sun) by hmh (subscriber, #3838) [Link] (13 responses)

GCC used to switch from (non-vector) instructions that tolerate unaligned access to (vector) instructions that forbid it when autovectorizing for x86 targets. Even when there were alternative (slower?) vector instructions that would tolerate unaligned access.

This is going to expose bad source code that relied on UB related to unaligned access, and was therefore not compatible with autovectorization on x86, but did not disable it explicitly and instead depended on -O2 not enabling autovectorization.

Since it triggers at runtime, I foresee some "explicitly disable vectorization on anything using -O2" CFLAGS patching in the future...

GCC 12.1 Released

Posted May 8, 2022 18:57 UTC (Sun) by NYKevin (subscriber, #129325) [Link] (10 responses)

Of course, such code was already broken on most(?) non-x86 targets because the x86 is the weirdo. But I imagine quite a few developers are of the "unless it breaks on my laptop, I don't care" mentality...

GCC 12.1 Released

Posted May 8, 2022 21:14 UTC (Sun) by wtarreau (subscriber, #51152) [Link] (3 responses)

It's even worse (or better): ARM is also excellent with unaligned accesses nowadays, so if you don't run your code on a wide variety of platforms, you can have broken code that runs fine on the two most popular platforms without ever noticing.

GCC 12.1 Released

Posted May 8, 2022 22:47 UTC (Sun) by Paf (subscriber, #91811) [Link] (2 responses)

It’s also not unrealistic to write code aimed only at those platforms. I’m involved in a decent-sized project and we target those two plus a variant of PowerPC, and that last is for weird semi-historical reasons.

For a specific software project, it’s not crazy to only aim at ARM and x86, or even just x86 or ARM depending on what you’re up to.

How many non-embedded systems aren’t one of those two? Is it even 0.1% any more? I’m sure it’s not 1%.

GCC 12.1 Released

Posted May 8, 2022 23:38 UTC (Sun) by NYKevin (subscriber, #129325) [Link]

This is a valid position for application code to take, but library code IMHO generally should not be in the business of dictating architecture support unless it is doing something hardware-specific (e.g. if your library provides fast lock-free data structures, it's fair enough to say "the hardware must support certain atomic primitives," if your library does float math, it's fair enough to say "the hardware must conform to IEEE 754," and so on). Thing is, there's a lot of library code out there[citation needed], and it's hard to say with absolute certainty which libraries are getting used on more esoteric hardware configurations.

GCC 12.1 Released

Posted May 15, 2022 16:53 UTC (Sun) by wtarreau (subscriber, #51152) [Link]

That's typically what I'm doing with asm or arch-specific optimizations in general: try to make sure the code works on generic platforms (since it helps detect bugs) and only make efforts on relevant ones, typically x86 and armv8 in my case.

GCC 12.1 Released

Posted May 15, 2022 9:23 UTC (Sun) by anton (subscriber, #25547) [Link] (5 responses)

> Of course, such code was already broken on most(?) non-x86 targets because the x86 is the weirdo.

Of course, this is one of the claims commonly made by those who advocate that compilers break programs with undefined behaviour.

First of all, if a program works on some machine, and the compiler breaks it on that machine, the fact that earlier it may not have worked on some other machine does not help the user and is pure whataboutism.

Next, is it actually true? The surviving general-purpose architectures are AMD64, Aarch64, RV64GC, Power, s390. I just tried it on an Aarch64 (Odroid N2) and RV64GC (Starfive Visionfive) machine, and they performed the unaligned access without complaint. Power has supported unaligned accesses in big-endian mode for a long time, and AFAIK they also support it in their new little-endian mode (and the old little-endian mode has not been used in general-purpose computers). Even on the Alphas from the last century, unaligned accesses were supported in Linux by default, albeit very slowly (and with a report in dmesg), and I had to take special measures to trap unaligned accesses.

So, these days an architecture that traps on unaligned accesses is the weirdo. In particular, SSE is the weirdo (Intel did not repeat this misdesign with AVX, and AMD (but unfortunately not Intel) even supports a fix for SSE), but even SSE includes instructions that tolerate unaligned accesses, so the gcc maintainers could choose to use those to avoid the breakage.

Concerning the claim (not made here) that using the trap-on-unaligned-access instructions is faster, such claims usually come without any empirical support. I microbenchmarked that (with a microbenchmark based on code in a bug report where Jakub Jelinek had justified gcc's use of these instructions with this claim), and found that the claim is not true for this microbenchmark.

GCC 12.1 Released

Posted May 15, 2022 11:23 UTC (Sun) by excors (subscriber, #95769) [Link] (4 responses)

> The surviving general-purpose architectures are AMD64, Aarch64, RV64GC, Power, s390.

It does get a lot easier if you exclude ARMv7, though that transition is either pretty recent or hasn't happened yet, depending on what field you're working in.

If I'm reading it right, ARMv8-A says: Unaligned accesses to Device memory (i.e. MMIO) always fault. Most loads/stores to unaligned Normal memory are okay, but multi-register loads/stores will fault if the SCTLR_ELx.A bit is set (though I believe Linux doesn't set that), and Exclusive/Acquire/Release/Atomic accesses will fault unless your CPU is ARMv8.4 (or older with an optional feature) (but even when unaligned atomics are supported, they may (unpredictably) fault if they cross a 16-byte boundary).

ARMv7-A will fault in much less obscure cases, e.g. any unaligned multi-word access (LDM, LDRD, etc) regardless of SCTLR.A. That's a problem whenever you're loading an int64_t, or even two adjacent int32_ts (because the compiler likes to merge them into one instruction), and if it's not aligned you'll need to tell the compiler with __attribute__((packed)).

ARMv8-M also faults on unaligned multi-word accesses. An ARMv8-M Baseline implementation (which I think is the modern replacement for ARMv6-M) will even fault on unaligned single-word accesses.

GCC 12.1 Released

Posted May 15, 2022 12:48 UTC (Sun) by anton (subscriber, #25547) [Link] (2 responses)

> It does get a lot easier if you exclude ARMv7, though that transition is either pretty recent or hasn't happened yet, depending on what field you're working in.

In general-purpose computers, the transition to ARMv8-A happened quite a while ago (e.g., with the Raspi3 in 2016).

However, maybe it has more to do with the instruction set. In that case, Aarch32 seems to be pretty alive on RaspiOS (although even they have started releasing an Aarch64 version). However the Cortex-X2 and Cortex-A510 announced by ARM almost a year ago don't support Aarch32, so Aarch32 is a second-class citizen already, and I expect that there will be no hardware support on general-purpose computers for it in the not-too-distant future.

Personal experience: I just tried to run an EABI5 binary on all four ARMv8-A machines (with various distributions) we have around. On three I get "no such file or directory" (apparently the kernel does not understand the binary at all), the fourth (a Raspi4 with 64-bit Debian 10) eventually chokes on a missing library. It seems that Aarch32 is not very important for 64-bit Linux distributions.

Concerning the SCTLR_ELx.A bit, IA-32 and AMD64 have a similar bit since the 486, which I tried to use (for portability checking in a development environment), but had to give up on, because on IA-32 the ABI puts doubles at 4-byte boundaries, and the flag would cause fault on such accesses. Another attempt with AMD64 failed because gcc produces unaligned accesses from pairs of user-written aligned accesses. So if Linux has not set SCTLR_ELx.A in the past, setting it now would probably cause quite a bit of breakage.

Concerning atomics, they are no excuse for breaking code that does not perform atomic accesses (I doubt that the auto-vectorizer dares auto-vectorizing atomics).

ARMv8-M is irrelevant for general-purpose computers. To those who think it has anything to do with ARMv8-A: it has not. E.g., there is no Aarch64 (the headline feature of ARMv8-A) in ARMv8-M. Yes ARM's naming is confusing.

GCC 12.1 Released

Posted May 18, 2022 14:23 UTC (Wed) by excors (subscriber, #95769) [Link] (1 responses)

> In general-purpose computers, the transition to ARMv8-A has happened quite a while ago (e.g., with Raspi3 in 2016).
>
> However, maybe it has more to do with the instruction set. In that case, Aarch32 seems to be pretty alive on RaspiOS (although even they have started releasing an Aarch64 version).

True, my previous comment should have said "ARMv8-A AArch64" (not "ARMv8-A") - the rules for ARMv8-A AArch32 look essentially identical to ARMv7-A, so unaligned LDRD/LDM/etc will fault as you showed in a later comment. (And the compiler will happily transform assumedly-aligned loads into LDRD/LDM.)

> ARMv8-M is irrelevant for general-purpose computers.

Also true (well, assuming you mean the main user-visible processor and ignore the potentially dozens of microcontrollers in the same computer), but I'm not sure "general-purpose computer" is that useful a distinction in practice. There are plenty of libraries originally designed for Linux userspace that are quite usable and useful on higher-end microcontrollers, and it would be a shame if the only thing preventing them from working in that environment was an accidental reliance on misaligned data. It would also be a shame if GCC wasted performance on those microcontrollers by assuming all data might be misaligned and never using LDRD/LDM, given the vast majority of existing code does follow the alignment rules correctly and is currently benefiting from that optimisation. So I believe there's still value in following those alignment rules in new code, for portability to real systems that may realistically want to reuse your code.

GCC 12.1 Released

Posted May 18, 2022 21:07 UTC (Wed) by anton (subscriber, #25547) [Link]

> (And the compiler will happily transform assumedly-aligned loads into LDRD/LDM.)

I was somewhat surprised how hard it was to find ldrds in the binary in order to exercise them: only 32 non-sp/fp ldrds and 58 ldms in 19587 instructions. For comparison, an Aarch64 binary of (a later version of) the same program has 257 non-sp/fp ldps in 21745 instructions.

By general-purpose I mean, e.g., the Zen3 core that's targeted by free software developers and/or ISVs, not, e.g., AMD's PSPs, which are indeed Aarch32 cores last I heard, but which we unfortunately cannot program.
> There are plenty of libraries originally designed for Linux userspace that are quite usable and useful on higher-end microcontrollers, and it would be a shame if the only thing preventing them from working in that environment was an accidental reliance on misaligned data.

Indeed, ideally already the GPL prevents them from being used in such locked-down environments. But if gcc maintainers' willingness to break programs hurts the proprietary crowd for a change, that's less of a concern to me than when they hurt free software developers and users.
> It would also be a shame if GCC wasted performance on those microcontrollers by assuming all data might be misaligned and never using LDRD/LDM, given the vast majority of existing code does follow the alignment rules correctly and is currently benefiting from that optimisation.

On the contrary, I would find it a shame if programmers who know how to get good performance by using unaligned accesses would slow down their programs in order to cater for gcc's silliness.

GCC 12.1 Released

Posted May 17, 2022 20:39 UTC (Tue) by anton (subscriber, #25547) [Link]

I have now managed to indeed get a SIGBUS on a Raspi4 by running 32-bit code that uses ldrd with an unaligned address (but regular ldr does not produce such an exception). So SSE/SSE2, ldrd (and friends) are the last die-hards in a general-purpose world dominated by instructions that work with unaligned addresses.

GCC 12.1 Released

Posted May 9, 2022 12:03 UTC (Mon) by pbonzini (subscriber, #60935) [Link]

You may be confusing this with stack alignment. The x86-64 ABI promises 16-byte stack alignment, and if some function failed to preserve that, everything broke, because GCC used aligned-access instructions on the stack.

GCC 12.1 Released

Posted May 10, 2022 7:49 UTC (Tue) by kilobyte (subscriber, #108024) [Link]

Valgrind points out alignment violations. So run it -- it's yet another reason to do so.

GCC 12.1 Released

Posted May 8, 2022 9:38 UTC (Sun) by Sesse (subscriber, #53779) [Link] (2 responses)

If a struct only contains small types, its minimum alignment is not going to be 16, and GCC's autovectorization will of course not use instructions that expect such an alignment.

GCC 12.1 Released

Posted May 9, 2022 23:27 UTC (Mon) by foom (subscriber, #14868) [Link]

Yeah, the usual case that trips folks up is when they lie to the compiler and claim larger alignment but then don't actually provide it (e.g. using __attribute__((aligned(16))) but then using a custom allocator which only provides 8-byte-aligned memory).

The compiler believes what you tell it, and will emit instructions or do optimizations that depend upon that claimed alignment. But it won't _always_ choose instructions that will trap, so depending on optimizations you can get away with the incorrectly specified alignment until a future compiler upgrade causes a different instruction choice.

GCC 12.1 Released

Posted May 15, 2022 9:57 UTC (Sun) by anton (subscriber, #25547) [Link]

The problem happens with code that, e.g. copies a block of memory with 64-byte accesses from one unaligned address to another unaligned address. The auto-vectorized code uses movdqa for one of the addresses, and extra code is generated to align this address to a 16-byte boundary assuming that the original address is 8-byte aligned. However, the original address is not 8-byte-aligned, and the movdqa then traps.

gcc could have used movdqu instead and achieved the same performance for this loop even in the 8-byte-aligned case (plus the intended non-trap in the unaligned case).

GCC 12.1 Released

Posted May 6, 2022 21:45 UTC (Fri) by JoeBuck (subscriber, #2330) [Link] (4 responses)

The usual pattern is that the .1 release has cool new features and some new bugs, and the .2 release, which usually arrives about a month later, fixes most of the new bugs. So if you have quality concerns, the way to address them is to test promptly and submit bug reports.

GCC 12.1 Released

Posted May 7, 2022 4:21 UTC (Sat) by wtarreau (subscriber, #51152) [Link] (3 responses)

> So if you have quality concerns, the way to address them is to test promptly and submit bug reports.

Sure, but the concern is more, as usual, about breakage that is not considered a bug but rather an extended reading of the spec that "allows us to do that, so fix your program now even if it worked fine for 30 years on all compilers that way".

GCC 12.1 Released

Posted May 8, 2022 22:06 UTC (Sun) by ballombe (subscriber, #9523) [Link] (2 responses)

The opposite happened to me. The bug was not fixed because my code "interpreted the standard too rigidly..."

GCC 12.1 Released

Posted May 8, 2022 23:45 UTC (Sun) by NYKevin (subscriber, #129325) [Link] (1 responses)

I remember reading in the comments of an otherwise-unrelated GCC bug[1] that ISO C says memcpy() is permitted to clobber errno, and that therefore gcc is technically required to either prove that errno is not used by the application code, or to emit extra instructions to save and restore the variable. The reaction from the gcc developers to this revelation could be paraphrased as "haha, no." Which isn't a terribly surprising response when you consider just how liberally gcc emits memcpy calls, but I found it amusing. So yes, they will deviate from the standard in cases where the standard is ridiculous or otherwise problematic.

[1]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56888

GCC 12.1 Released

Posted May 9, 2022 0:47 UTC (Mon) by hvd (guest, #128680) [Link]

There is no requirement on compilers to support arbitrary libc implementations. libc and the compiler work together; either may depend on internals of the other to make the combined product conform to the relevant standards.

For instance, glibc relies on the compiler to define __STRICT_ANSI__ when invoked in standards-conforming mode. The C standard says nothing about this macro beyond that it's in the namespace that's reserved for any use by the implementation, and compilers are not required to define this macro, but that is not an issue: glibc is for use with compilers that do define it. If some other compiler, say, pcc, doesn't define it, fine, that just means pcc+glibc is not standards-conforming, but that's not a bug in either pcc or glibc; that's a problem for whoever decided to combine those two.

It works the other way around as well. The compiler relies on memcpy to not set errno. The C standard does not guarantee this and implementations are allowed to set it, but GCC is for use with libc implementations that don't set it. If some hypothetical elibc does make memcpy set errno, that just means the combination of GCC+elibc is non-conforming, but that's not a bug in either GCC or elibc; that's a problem for whoever decided to combine those two.

GCC 12.1 Released

Posted May 9, 2022 7:45 UTC (Mon) by wtarreau (subscriber, #51152) [Link] (19 responses)

... and that started already with a new awesome warning; it didn't take long! Note, this one is simplified; the real code instead complains in plenty of places where controls were already in place.

$ cat thankyougcc12.c
#include <sys/param.h>
#include <stdio.h>
#include <string.h>

char dir[MAXPATHLEN];
char file[MAXPATHLEN];
char fullpath[MAXPATHLEN];

/* returns -1 in case of error */
int makefullpath()
{
	if ((strlen(dir) + 1 + strlen(file) + 1) > sizeof(fullpath))
		return -1;

	snprintf(fullpath, sizeof(fullpath), "%s/%s", dir, file);
	return 0;
}

$ x86_64-linux-gcc -O2 -Wall -c thankyougcc12.c
thankyougcc12.c: In function 'makefullpath':
thankyougcc12.c:15:50: warning: '%s' directive output may be truncated writing up to 4094 bytes into a region of size between 1 and 4095 [-Wformat-truncation=]
   15 |         snprintf(fullpath, sizeof(fullpath), "%s/%s", dir, file);
      |                                                  ^~        ~~~~
thankyougcc12.c:15:9: note: 'snprintf' output between 2 and 8190 bytes into a destination of size 4096
   15 |         snprintf(fullpath, sizeof(fullpath), "%s/%s", dir, file);
      |         ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Sure... I just performed the length check before calling snprintf() and it believes I'm trying to stuff the sum of these into this string. So I have two options: either I conclude that I can remove all my now-useless length checks (since gcc12 doesn't trust them, so possibly it optimised them away; not checked), or I simply disable that warning that became stupid.

And it's really the handling of the previous check that is wrong, because if I lower the limit on the sum of strlen() in the first check to sizeof/2, it accepts to pass! So it looks like they've implemented a string length test for snprintf() that didn't consider that two strings could be concatenated by a single call (yes, we can do that!). It would be nice if they only enabled warnings after testing that they actually work on real code.

It's sad that each and every new version forces you to disable useful warnings that once were valid and became useless over time; it renders the code less secure by letting stupid bugs slip through. Because of this, in the long term I'll probably end up writing my own function and stop calling snprintf() directly so that the compiler stops being smart. Too bad if I introduce new bugs in doing so.

What would be needed is a diagnostic mode where you ask for suggestions or "are you sure" checks only as a developer, rather than stuff like this that proves the compiler didn't understand the code yet will cause build breakage at users' sites. It completely discourages programmers from putting error checks in their code, since regardless of what was done, the compiler complains anyway.

Ah, GNU Complainers Collection, I really love you :-(

GCC 12.1 Released

Posted May 9, 2022 10:08 UTC (Mon) by excors (subscriber, #95769) [Link] (18 responses)

> So I have two options, either I conclude that I can remove all my now useless length checks (since gcc12 doesn't trust them, so possibly it optimised them away, not checked) or I'll simply disable that warning that became stupid.

You could remove the length check and do "if (snprintf(...) >= sizeof(fullpath)) return -1;", because -Wformat-truncation=1 only warns if it heuristically estimates that truncation is likely *and* the return value is unused. That would make the code simpler and more robust, since it no longer relies on you manually replicating snprintf's length calculation, and would eliminate the warning.

> if I lower the limit on the sum of strlen() in the first check to sizeof/2, it accepts to pass!

I suspect the compiler is converting the check into "strlen(dir) + strlen(file) > 4096/2-2", and both values are unsigned so it can deduce strlen(dir) <= 2046 and strlen(file) <= 2046, but it forgets the relationship between them because it doesn't support multi-variable constraints on string lengths - it just has an integer upper/lower bound for each string independently (I think?). Then it knows the snprintf won't need more than 4094 bytes and can't overflow. In the original code, all it can deduce is strlen(dir) <= 4096 etc, which isn't sufficient to prove it won't overflow.

It appears this only fixes the warning at -O2, not -O1, seemingly because -O1 doesn't deduce string length constraints from strlen comparisons and it just uses the declared length instead.

The GCC documentation says:

> When the exact number of bytes written by a format directive cannot be determined at compile-time it is estimated based on heuristics that depend on the level argument and on optimization. While enabling optimization will in most cases improve the accuracy of the warning, it may also result in false positives.

so it's behaving as advertised (i.e. not stable or precise). And -Wall says:

> This enables all the warnings about constructions that some users consider questionable, and that are easy to avoid (or modify to prevent the warning), even in conjunction with macros.

which is also behaving as advertised, because C string functions are always questionable, and it's easy to avoid the warning by checking snprintf's return value.

GCC 12.1 Released

Posted May 9, 2022 10:44 UTC (Mon) by atnot (subscriber, #124910) [Link] (5 responses)

> It appears this only fixes the warning at -O2, not -O1

Wait wait what, the warnings change depending on optimization level? Am I the only one for whom this is surprising news?

GCC 12.1 Released

Posted May 9, 2022 11:40 UTC (Mon) by anselm (subscriber, #2796) [Link]

Wait wait what, the warnings change depending on optimization level?

That's not new. I seem to remember from back when I was programming in C more that some GCC warnings about unreachable code or uninitialised variables were only output under optimisation, because otherwise the analysis on which these warnings were based would not have been performed.

GCC 12.1 Released

Posted May 9, 2022 11:43 UTC (Mon) by pizza (subscriber, #46) [Link]

> Wait wait what, the warnings change depending on optimization level? Am I the only one for whom this is surprising news?

This would appear to be an obvious conclusion from different optimization levels producing different sets of warnings.

I don't know when I first became aware of this, but it's been at least a decade.

GCC 12.1 Released

Posted May 9, 2022 11:49 UTC (Mon) by excors (subscriber, #95769) [Link]

That's true for lots of compiler warnings. The optimisation passes provide a lot of information about control flow and data flow, especially when they remove function call boundaries by inlining, which helps determine whether code is probably buggy (and should be warned about) or probably safe (no warning). Without that information, the compiler can't be confident either way and will usually err on the side of not warning (because programmers get really annoyed by false positives, especially if there's no easy way to make the compiler shut up). So it will usually find and report more bugs when you turn on optimisation.

In this case, if the variables are declared as char* then the compiler has no idea of their probable length and doesn't warn. It's only because they're declared as char[MAXPATHLEN] that it becomes reasonably confident in its guess that the string might actually be MAXPATHLEN-1 in length, which is enough confidence to emit the (incorrect) warning. More sophisticated optimisation passes let it make a better guess of the string's length, reducing the false positives.

GCC 12.1 Released

Posted May 9, 2022 12:22 UTC (Mon) by tzafrir (subscriber, #11501) [Link] (1 responses)

On GCC 10 you get the same warning even without any -O flag. So this did somewhat improve in later GCC versions.

GCC 12.1 Released

Posted May 9, 2022 19:25 UTC (Mon) by wtarreau (subscriber, #51152) [Link]

> On GCC 10 you get the same warning even without any -O flag. So this did somewhat improve in later GCC versions.

In a sense, that's a way to see it... But 4.7 never got it wrong at all and used to provide meaningful warnings if you go in that direction :-) Plus it was 3 times faster.

GCC 12.1 Released

Posted May 9, 2022 19:24 UTC (Mon) by wtarreau (subscriber, #51152) [Link] (11 responses)

> You could remove the length check and do "if (snprintf(...) >= sizeof(fullpath)) return -1;", because -Wformat-truncation=1 only warns if it heuristically estimates that truncation is likely *and* the return value is unused. That would make the code simpler and more robust, since it no longer relies on you manually replicating snprintf's length calculation, and would eliminate the warning.

Sorry, but no. There are enough bogus snprintf() implementations in the wild that I'm not going to remove a security check from my code just to silence a bogus warning in gcc. Instead I added the check on snprintf()'s return value in addition to the existing one, making the code even uglier, and I even managed to get it wrong once by forgetting to add "> sizeof()" at the end. Fortunately it broke in the right direction and simply stopped working. A similar bug in the other direction could introduce a vulnerability, as quite often happens when playing dirty length tricks to shut up a compiler.
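The belt-and-braces approach described here might look like the following sketch (names hypothetical): the explicit pre-check is kept, and the C99 return-value check is added on top, so neither a broken snprintf implementation nor a miscounted manual check can let a truncated path through:

```c
#include <stdio.h>
#include <string.h>

#define MAXPATHLEN 4096

int join_path(char out[MAXPATHLEN], const char *dir, const char *file)
{
    /* manual pre-check, kept for non-conforming snprintf implementations */
    if (strlen(dir) + 1 + strlen(file) >= MAXPATHLEN)
        return -1;
    int n = snprintf(out, MAXPATHLEN, "%s/%s", dir, file);
    /* C99 return-value check: negative on error, >= size on truncation */
    if (n < 0 || (size_t)n >= MAXPATHLEN)
        return -1;
    return 0;
}
```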

> I suspect the compiler is converting the check into "strlen(dir) + strlen(file) > 4096/2-2", and both values are unsigned so it can deduce strlen(dir) <= 2046 and strlen(file) <= 2046, but it forgets the relationship between them because it doesn't support multi-variable constraints on string lengths - it just has an integer upper/lower bound for each string independently

That was exactly my feeling as well, which proves that the warning is totally bogus and should be reverted. But they never revert warnings, they just add tons more until the code becomes unreadable, buried in ifdefs and convoluted tests that end up totally insecure.

> > This enables all the warnings about constructions that some users consider questionable, and that are easy to avoid (or modify to prevent the warning), even in conjunction with macros.
> which is also behaving as advertised, because C string functions are always questionable, and it's easy to avoid the warning by checking snprintf's return value.

I get your point, but here we're reaching the question many of us have been seriously asking for a while: how long before we have to remove -Wall from projects built with gcc for good? That's sad, because it used to catch many programmers' bugs in the past but has become useless and unusable over time. Reminds me of the 90s, when compilers could almost compile /etc/passwd without breaking a sweat...

GCC 12.1 Released

Posted May 9, 2022 19:35 UTC (Mon) by mpr22 (subscriber, #60784) [Link] (6 responses)

This whole discussion does a very good job of convincing me that the real problem in this particular scenario is C's string model being profoundly wrong.

GCC 12.1 Released

Posted May 9, 2022 20:17 UTC (Mon) by NYKevin (subscriber, #129325) [Link] (5 responses)

It's not wrong, merely inadequate. Quoth James Mickens:

> You might ask, “Why would someone write code in a grotesque
> language that exposes raw memory addresses? Why not use
> a modern language with garbage collection and functional
> programming and free massages after lunch?” Here’s the
> answer: Pointers are real. They’re what the hardware understands.
> Somebody has to deal with them. You can’t just place
> a LISP book on top of an x86 chip and hope that the hardware
> learns about lambda calculus by osmosis.

https://www.usenix.org/system/files/1311_05-08_mickens.pdf

(I'm sure that Mr. Mickens is/was aware that LISP machines existed, once upon a time, but they're hardly relevant to the modern era.)

GCC 12.1 Released

Posted May 10, 2022 3:43 UTC (Tue) by wtarreau (subscriber, #51152) [Link] (4 responses)

That's exactly the way I see it. Nowadays people using C need it for low-level stuff, because someone has to do it, and the places where C is needed want to be able to trust the code's translation to machine code, because where it's used, it matters. Usually it's a mix of relying on the hardware (e.g. hoping the compiler will produce a ROL when using both a left and a right shift), the OS (e.g. causing a segfault when writing to address zero), and the libc (e.g. memcpy() doing what the standard says it does).

That's why for me it's important that a C compiler guesses less about improbable mistakes that are relevant to absolutely zero use cases for this language, and instead focuses on real mistakes that are easy to solve (e.g. operator precedence, missing braces, undefined use of self-increment in arguments, etc.).

I'm fine with having such unlikely-case analysis, but only at the developer's request (e.g. -Wsuspicious). It could then go further and report some uncommon constructs that are inefficient and suspicious because of that, without annoying users when -Wall is used to detect likely incompatibilities on their platforms (because that's why most of us use -Wall -Werror: to catch problems at build time on other platforms).

GCC 12.1 Released

Posted May 10, 2022 15:51 UTC (Tue) by dvdeug (guest, #10998) [Link] (3 responses)

If you want all warnings, use -Wall. If you want a specific set of warnings, turn just those warnings on. Turning arbitrary warnings on and turning warnings into errors turns the language into an ever-evolving, compiler-specific language; that's your choice, but something terribly silly to complain about.

GCC 12.1 Released

Posted May 10, 2022 17:22 UTC (Tue) by mpr22 (subscriber, #60784) [Link] (2 responses)

GCC's -Wall command-line option does not turn on all warnings, hasn't done for years, and quite possibly has never done so in the quarter-century I've been using GCC.

GCC 12.1 Released

Posted May 10, 2022 18:43 UTC (Tue) by wtarreau (subscriber, #51152) [Link] (1 responses)

Exactly. Plus, "enabling specific warnings" would only work if there were a portable way to enable (or silence) warnings across all compilers without having to perform a discovery pass first. Enabling a fixed set everywhere is trivial when you use *your* compiler for *your* project. When you distribute your code and it builds on a wide range of compilers, that's a totally different story.
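The discovery problem described here shows up even in the simple case of silencing one warning: each compiler needs its own pragma, guarded by its own feature macros. A rough sketch (the version guards shown are illustrative assumptions, not a tested compatibility matrix):

```c
/* Compiler-specific warning suppression: there is no portable spelling. */
#if defined(__GNUC__) && !defined(__clang__) && __GNUC__ >= 7
  /* -Wformat-truncation only exists since GCC 7 */
#  pragma GCC diagnostic ignored "-Wformat-truncation"
#elif defined(_MSC_VER)
  /* MSVC's "unsafe CRT function" warning, a different beast entirely */
#  pragma warning(disable: 4996)
#endif
```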

GCC 12.1 Released

Posted May 11, 2022 21:42 UTC (Wed) by NYKevin (subscriber, #129325) [Link]

The C standard doesn't really give compilers a whole lot to go on here. For certain issues, it directs the compiler to "emit a diagnostic," and for a superset of those issues, it says "the program is ill-formed," but that's it.

* Warnings vs. errors? Unspecified. The compiler is entirely within its rights to produce a binary even if the program is ill-formed and a diagnostic is required, so long as the compiler at least prints some sort of message diagnosing the issue.
* -Wall vs. -Wextra vs. -Wsome-random-thing? Nope. Most compilers emit all standardized warnings with no flags, so the additional warnings you can enable are all nonstandard, and it's purely an issue of implementation quality how they work, which flags toggle which warnings, and so on.
* Formatting of messages? No. The standard simply directs the compiler to "emit a diagnostic," and compiler writers are responsible for figuring out what that means and how to implement it. This is arguably a good thing, because it makes it possible to display warnings graphically or in an IDE (rather than e.g. requiring the use of stderr and then having the IDE parse the output from a separate process, which might be a good design but should not be mandatory), but it also means that different compilers can print totally different messages for the same problem.

About the best you can do is pick a set of warnings that you think is appropriate for your codebase (e.g. start with -Wall and add/subtract warnings as necessary), fix all of those warnings, and then aggressively WONTFIX any bugs that people file about warnings that are not on the list (unless it looks like the warning may have identified a real bug, in which case you might want to consider adding it to your list). If people don't like that, they can fork it.

GCC 12.1 Released

Posted May 10, 2022 20:39 UTC (Tue) by dvdeug (guest, #10998) [Link] (2 responses)

> There are sufficiently bogus snprintf() implementations in the wild,

So working implementations can't be improved because you have to be backward compatible with systems that can't properly implement functions released with 4.4BSD and standardized in C99? Where are these snprintf implementations? Especially "in the wild"? It doesn't sound like something that most GCC users or developers would care about.

GCC 12.1 Released

Posted May 11, 2022 8:11 UTC (Wed) by geert (subscriber, #98403) [Link] (1 responses)

Like the snprintf() you had to roll yourself, because VxWorks didn't provide one?

GCC 12.1 Released

Posted May 11, 2022 20:44 UTC (Wed) by wtarreau (subscriber, #51152) [Link]

Or an old Solaris one that returned 0 or -1 when the operation failed (I don't remember which, sorry), or the one in dietlibc that used to do something similar, etc. Even here, the snprintf() doc doesn't match what we do on most modern systems:

https://pubs.opengroup.org/onlinepubs/7908799/xsh/snprint...

RETURN VALUE
Upon successful completion, these functions return the number of bytes
transmitted excluding the terminating null in the case of sprintf() or snprintf()
or a negative value if an output error was encountered.

On Linux+glibc:
The functions snprintf() and vsnprintf() do not write more than size
bytes (including the terminating null byte ('\0')). If the output was
truncated due to this limit, then the return value is the number of
characters (excluding the terminating null byte) which would have been
written to the final string if enough space had been available.

That's what most modern systems do, allowing you to realloc() the area and try
again. Some do not support being passed size zero, others do.

snprintf() is one of the most important and least portable functions when it comes to good security practices. There's also %z (size_t), which is not very portable, and "%.*s", which often does fun things like shifting all the arguments by one when %.* is not understood as consuming an extra argument, so you usually segfault trying to print a string from a pointer that is in fact its maximum length.

GCC 12.1 Released

Posted May 16, 2022 19:50 UTC (Mon) by jpfrancois (subscriber, #65948) [Link]

But checking the return value is much more robust in a lot of cases:
If you change the format string, it still works.
You do not need to reimplement your security check across all call sites.
What if you have a slightly more complex format string? You would have to implement the size calculation correctly everywhere.

GCC 12.1 Released

Posted May 6, 2022 19:06 UTC (Fri) by nix (subscriber, #2304) [Link]

Note: ctfstd.org is, uh, message-residue (?) and basically doesn't exist, and nor do its mailing lists, at least not for now. We decided that integrating discussion of it into the overall binutils and GCC lists (until traffic is high enough to merit a list of its own) would make more sense -- after all, most of the traffic is GCC and binutils patches...

GCC 12.1 Released

Posted May 6, 2022 21:57 UTC (Fri) by randomguy3 (subscriber, #71063) [Link] (2 responses)

A more useful CTF link might be https://lwn.net/Articles/795384/

GCC 12.1 Released

Posted May 6, 2022 22:30 UTC (Fri) by nix (subscriber, #2304) [Link] (1 responses)

Thanks! Also the spec is in the binutils-gdb source tree now: if you have texinfo 6.3 or later it gets built with libctf, though it isn't installed: texi2pdf etc should work in libctf/doc as well. (If only I'd written libctf documentation, that *would* be installed, since that's what CTF consumers are really expected to want to use. One of these days I mean to do so. For now the format spec is what we have.)

GCC 12.1 Released

Posted May 7, 2022 7:15 UTC (Sat) by Wol (subscriber, #4433) [Link]

Fascinating read! Not my field :-) but it's always good to know what other people are doing ...

Cheers,
Wol


Copyright © 2022, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds