What's new in GCC 4.5?
Version 4.5 of the GNU Compiler Collection was released in mid-April with many changes under the hood, as well as a few important user-visible features. GCC 4.5 promises faster programs using the new link-time optimization (LTO) option, easier implementation of compiler extensions thanks to the controversial plugin infrastructure, stricter standards conformance for floating-point computations, and better debugging information when compiling with optimizations.
The GNU Compiler Collection is one of the oldest free software projects still around. Version 1.0 of GCC was released in 1987. More than twenty years later, GCC is still under active development, and each new version adds important features. Supporting these new features in such an old codebase often requires rewriting substantial parts of GCC. GCC 4.0 was an important milestone in this regard, and the GCC internals are still evolving at a rapid pace. However, these core improvements are sometimes not clearly visible as improvements for users. That is not the case with GCC 4.5. This article describes four new features in GCC 4.5, and also looks at an internal change that may radically alter how GCC is developed in the future.
Link-Time Optimization
Perhaps the most visible of the new features in GCC 4.5 is the Link-Time Optimization option: -flto. When source files are compiled and linked using -flto, GCC applies optimizations as if all the source code were in a single file. This allows GCC to perform more aggressive optimizations across files, such as inlining the body of a function from one file that is called from a different file, and propagating constants across files. In general, the LTO framework enables all the usual optimizations that work at a higher level than a single function to also work across files that are independently compiled.
The LTO option works almost like any other optimization flag. First, one needs to enable optimization (using one of the -O{1,2,3,s} options). In cases where compilation and linking are done in a single step, adding the option -flto is sufficient:
gcc -o myprog -flto -O2 foo.c bar.c
This effectively deprecates the old -combine option, which was too slow in practice and only supported for C.
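As a concrete illustration of what this enables (the two files below are made up for this article, not taken from the GCC documentation), with -flto GCC can inline square() into compute() even though the two functions live in different translation units:

/* bar.c */
int square (int x)
{
    return x * x;
}

/* foo.c */
extern int square (int x);

int compute (int n)
{
    return square (n) + 1;   /* with -flto this call can be inlined */
}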
With independent compilation steps, the option -flto must be specified at all steps of the process:
gcc -c -O2 -flto foo.c
gcc -c -O2 -flto bar.c
gcc -o myprog -flto -O2 foo.o bar.o
An interesting possibility is to combine the options -flto and -fwhole-program. The latter assumes that the current compilation unit represents the whole program being compiled, which means that most functions and variables can be optimized more aggressively. Adding -fwhole-program to the final link step in the example above makes LTO even more powerful.
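Continuing the example above, only the final link command needs to change:

gcc -o myprog -flto -fwhole-program -O2 foo.o bar.o

Since -fwhole-program lets GCC assume that nothing outside the listed files uses their functions and variables (apart from main()), it is generally not appropriate when building a shared library.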
When using multiple steps, it is strongly recommended to use exactly the same optimization and machine-dependent options in all commands, because conflicting options at compile time and link time may lead to strange errors. In the best case, the options used during compilation will be silently overridden by those used at link time. In the worst case, the different options may introduce subtle inconsistencies leading to unpredictable results at run time. This, of course, is far from ideal and, hence, the next minor release of GCC will identify such conflicting options and provide appropriate diagnostics. Meanwhile, some extra care should be taken when using LTO.
The current implementation of LTO is only available for ELF targets and, hence, LTO is not available on Windows or Darwin in GCC 4.5. However, the LTO framework is flexible enough to support those targets and, in fact, Dave Korn has recently proposed a patch that adds LTO support for Windows to GCC 4.5.1 and 4.6, and Steven Bosscher has done the same for Darwin.
Finally, another interesting ongoing project, called whole program optimization [PDF], aims to make LTO scale to very large programs (on the order of millions of functions). Currently, when compiling and linking with LTO, the final step loads information from all files involved in the compilation into memory. This approach does not scale well when there are many large files. In practice, some files interact very little, so the required information could be partitioned and optimized independently with little performance loss, or at least with the effectiveness of LTO degrading gracefully depending on the available resources. The experimental -fwhopr option is a first step in this direction, but this feature is still under development and even the name of the option is likely to change. GCC 4.6 will therefore probably bring further improvements in this area.
Plugins
Another long-awaited feature is the ability to load user code as plugins that modify the behaviour of GCC. A substantial amount of controversy surrounded the implementation of plugins. The possibility of proprietary plugins was probably the main factor stalling the development of this feature. However, the FSF recently reworked the Runtime Library Exception in order to prevent proprietary plugins. With the new Runtime Library Exception in place, the development of the plugins framework progressed rapidly. This, however, did not completely end the controversy surrounding plugins, and while some developers think that plugins are essential for the future of GCC and for attracting new users and contributors, others fear that plugins may divert efforts from improving GCC itself.
The plugin framework of GCC can work, in principle, on any system that supports dynamic libraries. In GCC 4.5, however, plugins are only supported on ELF-based platforms, that is, most Unix-like systems, but not Windows or Darwin. A plugin is loaded with the new option -fplugin=/path/to/file.so. GCC exposes a series of events for which the plugin code can register its own callback functions. The events already implemented in GCC 4.5 allow plugins to interact with the pass manager to add, reorder, and remove optimization passes dynamically, to modify the low-level representation used by the C and C++ front ends, to add custom attributes and compiler pragmas, and other possibilities described in the internal documentation.
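To give a feel for the interface, here is a rough sketch of a do-nothing plugin; the file name and message are invented, but PLUGIN_FINISH_UNIT is a real event that fires after each translation unit is compiled:

/* hello-plugin.c: minimal GCC 4.5 plugin sketch. */
#include "gcc-plugin.h"
#include "plugin-version.h"
#include <stdio.h>

int plugin_is_GPL_compatible;   /* required, or GCC refuses to load the plugin */

/* Called once per compiled translation unit. */
static void
finish_unit_callback (void *gcc_data, void *user_data)
{
    printf ("hello from the plugin\n");
}

int
plugin_init (struct plugin_name_args *plugin_info,
             struct plugin_gcc_version *version)
{
    /* Refuse to load into a GCC other than the one the plugin was built for. */
    if (!plugin_default_version_check (version, &gcc_version))
        return 1;

    register_callback (plugin_info->base_name, PLUGIN_FINISH_UNIT,
                       finish_unit_callback, NULL);
    return 0;
}

Building and loading it looks something like:

gcc -I`gcc -print-file-name=plugin`/include -fPIC -shared -o hello-plugin.so hello-plugin.c
gcc -fplugin=./hello-plugin.so -c foo.c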
Despite plugins being a new feature in GCC 4.5, several projects already make use of the plugin support. Among them are Dehydra, the static analysis tool for C++ developed by Mozilla, and MELT, a framework for writing optimization passes in a dialect of LISP. The ICI/MILEPOST research project also relies heavily on the new plugin framework in GCC 4.5.
Variable Tracking at Assignments
The Variable Tracking at Assignments (VTA) project aims to improve the debug information generated when optimizations are enabled. When GCC compiles code with optimizations enabled, variables are renamed, moved around, or even removed altogether. When debugging such code, trying to inspect the value of some variable often only gets a report from the debugger that the variable has been optimized out. With VTA enabled, the optimized code is internally annotated in such a way that optimization passes transparently keep track of the value of each variable, even if the variable is moved around or removed.
A small example of the differences between debug information in GCC 4.5 and previous releases is the following program:
typedef struct list {
    struct list *n;
    int v;
} *node;

node find_prev (node c, node w)
{
    while (c) {
        node opt = c;
        c = c->n;
        if (c == w)
            return opt;
    }
    return NULL;
}
Variable opt is removed when compiling with optimization. Hence, in previous GCC versions, or when compiling without VTA, one cannot inspect the value of opt even at the highest debugging level. In GCC 4.5, however, VTA enables inspection of the value of all variables at all points of the function.
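For example, building the function above (assume it lives in list.c and is exercised by a small, hypothetical main.c) and poking at it in GDB might go roughly like this:

gcc -g -O2 -o demo list.c main.c
gdb ./demo
(gdb) break find_prev
(gdb) run
(gdb) next
(gdb) print opt

After stepping into the loop, GCC 4.4 and earlier would typically have GDB answer that opt has been optimized out, while with GCC 4.5 (where VTA is enabled by default when optimizing with -g) the value can be displayed.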
The effect of VTA is even more noticeable for inlined functions. Before VTA, optimizations would often completely remove some arguments of an inlined function, making it impossible to inspect their values when debugging. With VTA, these optimizations still take place; however, appropriate debug information is generated for the missing arguments.
Finally, the VTA project has brought another feature, the new -fcompare-debug option, which tests that the code generated by GCC with and without debug information is identical. This option is mainly used by GCC developers to test the compiler, but it may be useful for users to check that their program is not affected by a bug in GCC, though at a significant cost in compilation time.
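Using it is just a matter of adding the flag to a normal build, for example:

gcc -O2 -fcompare-debug -c foo.c

GCC then compiles foo.c a second time with the debug-information setting toggled and reports an error if the two compilations do not generate the same code.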
Standard conforming excess precision
Perhaps the most reported bug in GCC is bug 323. The symptoms appear when different optimization levels produce different results in floating-point computations, or when two ways of performing the same calculation do not produce the same result. Although this is an inherent limitation of floating-point numbers, users are still surprised that different optimization levels lead to noticeably different results. One of the main culprits is the excess precision arising from the use of the x87 floating-point unit (FPU). That is, operations performed in the FPU have more precision than the double-precision numbers stored in memory. Hence, the final result of a computation may depend significantly on whether intermediate results are kept in the FPU or stored in memory.
This leads to some unexpected and counter-intuitive results. For example, the same piece of code may produce different results using the same compilation flags and the same machine depending on changes of seemingly unrelated code, because the unrelated code forces the compiler to save some intermediate result in memory instead of keeping it in a FPU register. One workaround to this behavior is the option -ffloat-store, which stores every floating-point variable in memory. This has, however, a significant cost in computation time. A more fine-grained workaround is to use the volatile qualifier in variables suffering from this problem.
While this problem will never be solved in computers with inexact representation of floating-point numbers, GCC 4.5 helps improve the situation by adding a new option -fexcess-precision=standard, currently only available for C, that handles floating-point excess precision in a way that conforms to ISO C99. This option is also enabled with standards conformance options such as -std=c99. However, standards-conforming precision incurs an extra cost in computation time. Therefore, users more interested in speed may wish to disable this behavior using the option -fexcess-precision=fast.
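The classic symptom looks roughly like the following sketch; whether the message is printed depends on the target, the optimization level, and the excess-precision setting:

#include <stdio.h>

static void check (double x, double y)
{
    double y2 = x + 1.0;      /* may be kept in an 80-bit x87 register */
    if (y != y2)
        printf ("surprise: y != y2\n");
}

int main (void)
{
    double x = 0.012;
    double y = x + 1.0;       /* rounded to 64 bits when stored in memory */
    check (x, y);
    return 0;
}

With -fexcess-precision=standard (or simply -std=c99) the excess precision is discarded at each assignment, so y and y2 compare equal; with -fexcess-precision=fast the outcome on x87 hardware can vary with the optimization level.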
C++ compatible
GCC 4.5 is the first release of GCC that can be compiled with a C++ compiler. This may not seem very interesting or useful at the moment (but take a look at the much-improved -Wc++-compat option). However, it is only the first step of an ongoing project to use C++ as the implementation language of GCC. Except for some front-end bits written in other languages, notably Ada, most of GCC is implemented in C. The internal structures of GCC are undergoing continuous improvement and modularization aimed at creating cleaner interfaces, and many GCC developers think that this work would be easier in C++ than in C. However, the proposal is not free of controversy, and it is not clear whether the switch will happen in GCC 4.6, later, or ever.
Other improvements
The above are only some examples of the many improvements and new features in GCC 4.5. A few other features are worth mentioning:
- GCC now makes better use of the information provided by the restrict keyword, which is also supported in C++ as an extension, to generate better-optimized code (see the sketch after this list).
- The libstdc++ profile mode tries to identify suboptimal uses of the standard C++ library and suggests alternatives that improve performance.
- Previous versions of GCC incorporated the MPFR library in order to consistently evaluate math functions with constant arguments at compile time. GCC 4.5 extends this feature to complex math functions by incorporating the MPC library.
- Many improvements have been made in the specific language front-ends, in particular from the very active Fortran front-end project. Also worth mentioning is the increasing support for the upcoming ISO C++ standard (C++0x).
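As mentioned in the first item above, here is a rough illustration of what restrict buys the optimizer (the function is invented for this article):

/* The restrict qualifiers promise that the three arrays do not overlap,
   so GCC may keep values in registers, reorder accesses, and vectorize. */
void
scale_add (int n, float *restrict a,
           const float *restrict b, const float *restrict c)
{
    int i;
    for (i = 0; i < n; i++)
        a[i] = b[i] + 2.0f * c[i];
}

In C this needs -std=c99 (or gnu99); in C++, where restrict is not part of the standard, GCC accepts the same qualifier spelled __restrict__.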
Conclusion
We are living in interesting times on the compiler front, and GCC 4.5 is an indication that we can still expect new developments in the future. The release of GCC 4.5 brings its users several important, and somewhat controversial, features. It also includes the typical long list of small fixes and improvements, where most users will be able to find at least one thing to their liking. GCC 4.5 may well be a transition point, where the foundational work done during the 4.x release series is starting to show up in user-visible features that would have been impossible in the GCC 3.x series. It is difficult to say what GCC 4.6 will bring us a year from now, as that will depend on what the contributors decide. Anyone can contribute to the future of GCC. This is free software, after all.
Acknowledgments
I would like to thank in general the community of GCC developers, and in particular, Ian Lance Taylor, Diego Novillo, and Alexandre Oliva, for their helpful comments and suggestions when writing this article.
Index entries for this article
GuestArticles: López-Ibáñez, Manuel
Posted May 12, 2010 15:16 UTC (Wed) by Trelane (subscriber, #56877)
Posted May 12, 2010 15:28 UTC (Wed) by eparis123 (guest, #59739)
Very comprehensive article; thanks a lot.
One workaround to this behavior is the option -ffloat-store, which stores every floating-point variable in memory. This has, however, a significant cost in computation time. A more fine-grained workaround is to use the volatile qualifier in variables suffering from this problem.
Is this use of the volatile keyword conforming with the (admittedly vague on this point) standard?
I understood that it was always the opposite: we use volatile to avoid the compiler caching memory access in registers.
Posted May 12, 2010 15:31 UTC (Wed) by eparis123 (guest, #59739)
Posted May 12, 2010 15:45 UTC (Wed) by foom (subscriber, #14868)
The problem is that the registers are 80bits but the memory is 64bits, and the datatype is defined to be a 64bit floating point value. By using volatile, you tell the compiler to always write the data back to memory instead of caching it in the larger register, thus ensuring the calculation is using the expected precision.
Posted May 13, 2010 0:26 UTC (Thu) by creemj (subscriber, #56061)
Posted May 13, 2010 10:09 UTC (Thu) by mpr22 (subscriber, #60784)
Posted May 17, 2010 6:19 UTC (Mon) by cph (guest, #1433)
I don't understand why the article didn't mention this. It's a simple fix that gives consistent results regardless of the memory/register optimization.
Posted May 18, 2010 21:31 UTC (Tue) by dark (guest, #8483)
This doesn't sound like a complete solution. I think you would also have to use 'double' everywhere and excise 'float' from all your code in order to get consistent results. Though it's probably still okay to use 'float' in arrays as long as you convert to 'double' for all calculations.
Posted May 13, 2010 15:52 UTC (Thu) by foom (subscriber, #14868)
Unfortunately most software for Linux/x86 is compiled without SSE2 enabled, because distros want to support pre-Pentium4 processors.
Posted May 15, 2010 6:02 UTC (Sat) by RCL (guest, #63264)
Posted May 22, 2010 14:00 UTC (Sat) by robert_s (subscriber, #42402)
They're still there, but there are just better replacements for both of them. The only situation where this might be true is if bit 29 of CPUID 0x80000001 is not set, in which case you can't use MMX in long mode.
x87 is always there.
Posted May 21, 2010 14:10 UTC (Fri) by foo-bar (guest, #22971)
Posted May 18, 2010 16:47 UTC (Tue) by pharm (guest, #22305)
Why people bang on about -ffloat-store instead of pointing people to -mpc64 if they want to truncate floats to 64 bits on Intel platforms I'm not sure.
Check out the FLDCW (Floating Point Load Control Word) instruction for the gory details.
Posted May 18, 2010 16:50 UTC (Tue) by pharm (guest, #22305)
I suppose you can set the control word to 53-bit mantissa & copy a value from one FP register to another. That would be a bit slow though.
Posted May 31, 2010 15:52 UTC (Mon) by Spudd86 (subscriber, #51683)
Posted May 12, 2010 16:12 UTC (Wed) by farnz (subscriber, #17727)
Using volatile instead of -ffloat-store forces the selected variable to memory every time it's changed, without forcing all floating point variables to memory on all modification. The goal is to avoid the compiler caching floating point numbers in registers; I don't understand why you think this is the opposite to the standard's use.
Posted May 12, 2010 16:42 UTC (Wed) by eparis123 (guest, #59739)
Yes, I misunderstood the context. I misread it as having the desire to put the variable in the 80-bit FPU register for extra precision, instead of the opposite.
Posted May 12, 2010 17:55 UTC (Wed) by arekm (guest, #4846)
Posted May 12, 2010 21:26 UTC (Wed) by nix (subscriber, #2304)
Posted May 14, 2010 18:13 UTC (Fri) by giraffedata (guest, #1954)
Explaining why LTO increases compile time:
the individual object files are driven all the way to assembler
What does that mean?
Posted May 14, 2010 19:21 UTC (Fri) by nix (subscriber, #2304)
When linking with -flto, only the GIMPLE form is used and the native code in the .o files (and .a files if gold(1) is in use) is thrown away; when linking without it, only the non-GIMPLE form is used, and the GIMPLE in the .o files is thrown away.
(IIRC, of course. I haven't been paying enough attention to GCC development for the last year or so for this to be more authoritative than the ramblings of any passing madman. I should really have waited for jwakely to answer more authoritatively...)
Posted May 14, 2010 21:18 UTC (Fri) by giraffedata (guest, #1954)
Thanks for the explanation.
I suppose the objective is not just to let someone choose a non-LTO link, but for the .o file to be useful by a linker that doesn't even know what LTO is.
I was going to say the time to write the GIMPLE shouldn't be enough to be a consideration against using -flto, but then I remembered that I once avoided compiling with debugging information because I was using NFS and writing the .o files took significantly longer with -g.
Posted May 14, 2010 21:45 UTC (Fri) by nix (subscriber, #2304)
Posted May 14, 2010 22:51 UTC (Fri) by giraffedata (guest, #1954)
First, to be clear, I'm using the term "linker" in the same sense as the phrase "link time" in the name LTO, which means the linker is GCC. GCC is the program to which you feed .o files and get an executable out.
If instead of using GCC to link my .o files I use GNU 'ld', it will still work, right? And it looks like 'ld' doesn't know what LTO is.
Even GCC doesn't always know what LTO is. GCC 3 doesn't.
LTO could have been designed so that 'ld' and GCC 3 could not link the .o files created by gcc -flto, but it looks to me like it was a design objective that they be able to.
Posted May 14, 2010 22:43 UTC (Fri) by stevenb (guest, #11536)
So the GIMPLE goes through the compiler pipeline twice: during compilation to an object file, and during link time optimizations. That is where the extra cost comes from.
We have our smartest people working on a solution for this... ;-)
Posted May 15, 2010 2:13 UTC (Sat) by jwakely (subscriber, #60262)
I only focus on the C++ library so I'm not up to speed on LTO either, but stevenb is :-)
My "favourite" FP bug
Posted May 13, 2010 15:58 UTC (Thu) by alex (subscriber, #1355)
I had a real head twister in a previous life caused by FP numbers getting pushed through the x87 when I didn't want them to be. A real pain when you're trying to emulate another architecture's FP behaviour as closely as possible.
Posted May 14, 2010 2:40 UTC (Fri) by pr1268 (guest, #24648)
GCC 4.5 is the first release of GCC that can be compiled with a C++ compiler.
I had to think about this for a moment. Why? Isn't GCC already working just fine (i.e., fast and [reasonably] efficient) as-is in C? Then, visiting GNU's GCC page linked in the article, I began to wonder if the developers want to use those features of C++ not present in C (like classes, OO, and templates) for the compiler. Of course, any compiler can be written in any Turing-complete language; even the 2nd edition of the Dragon Book has the source for a front end written in Java. My questions border on rhetorical, but perhaps I'm just trying to stimulate a discussion on this. Thanks!
Posted May 14, 2010 9:35 UTC (Fri) by jwakely (subscriber, #60262)
Compiling gcc with a C++ compiler has already uncovered a number of latent bugs, such as comparing values of enum_type_1 to values of enum_type_2. That's not an error in C, because enums are just ints, but in C++ they're distinct types and the compiler catches the problem.
As well as increased type-safety C++ gives you automatic memory management (via destructors) which could potentially replace the garbage collection used today.
Gcc uses lots of hash tables and vectors internally (the VEC type mentioned at the link you gave) which could be replaced by standard C++ containers - although that's a bit less certain, as it would require a working C++ standard library as well as C++ compiler to bootstrap.
There are of course downsides to C++, so let's not have a language war here :)
Posted May 19, 2010 21:46 UTC (Wed) by roelofs (guest, #2599)
Those are excellent benefits, and I've come to like C++ for such reasons--as long as one doesn't go overboard, of course. C++ can lead to "write-only" code, i.e., easy to write, impossible to maintain. One needs a little discipline and design sense, which I'm sure the GCC folks have in abundance. (Doug Crockford has made similar comments about JavaScript, btw. Just because the language officially supports something doesn't mean you should actually use it. :-) )
One unforeseen drawback we encountered, however: generated code size (that is, binaries) exploded. A 15 MB C-only executable grew to ~600 MB as parts of it were rewritten in C++. I still think it was worthwhile overall, but holy cow...don't underestimate the pain of creating, deploying, loading into memory, and core-dumping huge binaries. (Some of it might have been due to symbol visibility; I never had time to investigate. I think quite a bit was due to template use. No doubt you guys will figure out ways to keep it under control in GCC...)
Greg
Posted May 20, 2010 3:56 UTC (Thu) by quotemstr (subscriber, #45331)
A 15 MB C-only executable grew to ~600 MB as parts of it were rewritten in C++
That's huge! There's no good reason to tolerate that level of bloat. Was that with or without debugging symbols?
Part of the cause is almost certainly forced inline function generation. Using hidden symbols allows the compiler to skip the generation of certain functions --- if they're private symbols, the compiler can assume they're not overwritten at load-time.
Another thing to keep in mind is C++ template generation, as you mentioned. It's easy to achieve a combinatorial explosion of template instantiations when you have a template library used in many difficult circumstances. It's often worthwhile to have generic, templated code just be an inline-only, typesafe wrapper around concrete code; use function pointers to let that concrete code safely work with whatever the higher-level wrapper gives it.
Using that approach, you give up a tiny bit of runtime performance for a huge reduction in code size. Imagine the difference between qsort() and std::sort --- it's easy to write the latter such that the entire sorting algorithm implementation is emitted once per type sorted! (It's also possible for a C++ library implementor to write std::sort using the type erasure technique I mention.)
Posted May 20, 2010 10:03 UTC (Thu) by jwakely (subscriber, #60262)
Posted May 30, 2010 1:01 UTC (Sun) by roelofs (guest, #2599)
Was that with or without debugging symbols?
With. In this application, auto-gdb-backtrace was pretty much a necessity.
I'm no longer working on that particular project (or even in C++), but I'll keep jwakely's and your suggestions handy in case it crops up again.
Thanks,
Greg
Posted May 15, 2010 5:08 UTC (Sat) by arief (guest, #58729)
C "bugs" of taking-everything-programmers-throws-at-it is actually a "features".
A feature that force developers to think very carefully of what they are trying todo. Having to thought it for 5 times of whether it is possible to free up a pointer. Check a million times for dangling ones.
C is easy to comprehend and hard to master. While C++ is hard to understand and hard to master.
Posted May 15, 2010 10:28 UTC (Sat) by nix (subscriber, #2304)
Regarding the 'free up a pointer' thing, well, this proved so intractable to get right for GCC (where many objects have extremely hard-to-describe and interacting lifetimes crossing many passes) that it ended up with a garbage collector simply to lift the burden of manual memory management from the developers; it is not known how many bugs this fixed, but it was surely a lot. (Some heavily-used objects have since been shifted back from GC for speed reasons, but it's a case-by-case judgement whether to *not* garbage-collect, rather than vice versa.)
Posted May 15, 2010 14:57 UTC (Sat) by HelloWorld (guest, #56129)
This is *exactly* the kind of *bullshit* that keeps the same bugs happening over and over again in C programs.
Good programmers think about their code anyway, but no matter how good they are, they *will* make silly mistakes, and if the compiler (or whatever else) catches those, then that is a Good Thing.
Posted May 17, 2010 8:10 UTC (Mon) by mpr22 (subscriber, #60784)
And get a Schrödinbug when you (almost inevitably) miss one.
I like C. I like C++. I am not so enamoured of either to call it a flawless or even merely universally superior choice in all problem spaces.
Posted May 14, 2010 20:28 UTC (Fri) by daglwn (guest, #65432)
While this problem will never be solved in computers with inexact representation of floating-point numbers,
That's simply not true. Compilers have been dealing with this for a long time. For example, good Fortran compilers take great pains not to reorder floating-point computation. There are many solutions available for the x87 problem other than -ffloat-store. For the vast majority of x86 machines today, compiling for SSE2 works great. Usually the user cares more about consistency on one architecture (compiler flags not changing results) than consistency across architectures (bitwise matching results on different processors). The latter is indeed very difficult to achieve, but even that is possible with enough work. Maintaining consistency across flags (other than those designed to relax consistency) is not very hard at all.
Posted May 14, 2010 22:48 UTC (Fri) by stevenb (guest, #11536)
Posted May 15, 2010 5:37 UTC (Sat) by daglwn (guest, #65432)
Posted May 20, 2010 16:15 UTC (Thu) by zaitcev (guest, #761)
Posted May 20, 2010 17:47 UTC (Thu) by mjw (subscriber, #16740)
It became a lot faster!
http://gcc.gnu.org/ml/gcc/2010-04/msg00948.html
"In general GCC-4.5.0 became faster (upto 10%) in -O2 mode. This is first considerable compilation speed improvement since GCC-4.2. GCC-4.5.0 generates a better (1-2% in average upto 4% for x86-64 SPECFP2000 in -O2 mode) code too in comparison with the previous release. That is not including LTO and Graphite which can gives even more (especially LTO) in many cases."
Posted May 22, 2010 23:18 UTC (Sat) by Cosan (guest, #66500)
I've been working on a project that makes use of a lot of small functions. GCC 4.4, at -O3, inlines many of them and this gives a measurable boost in performance. The problem was that one of my source files was getting pretty large, and I wanted to split it up. Of course, splitting it up meant no more inlining (unless I moved a lot of code into headers, and I didn't want to do that).
Once I had 4.5 installed, I went ahead and did the split. Timing it without LTO showed that it was measurably slower, as expected. However, enabling LTO boosted it right back up to the speed it had been running at previously. There was no loss of performance and the code became much more manageable. Three cheers for LTO!
Open64's LTO (aka IPA) is similarly useful, for the record.