LWN: Comments on "Building a High-Performance Cluster with Gentoo" https://lwn.net/Articles/229770/ This is a special feed containing comments posted to the individual LWN article titled "Building a High-Performance Cluster with Gentoo". en-us Sat, 04 Oct 2025 10:12:19 +0000 Sat, 04 Oct 2025 10:12:19 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/231436/ https://lwn.net/Articles/231436/ pimlottc <p>Are you sure that the actual GCC compiler was installed?</p> <p>Looking at the <a rel="nofollow" href="http://packages.debian.org/stable/net/telnet">telnet package</a>, it does depend on two gcc runtime libraries, <a rel="nofollow" href="http://packages.debian.org/stable/libs/libgcc1">libgcc1</a> and <a rel="nofollow" href="http://packages.debian.org/stable/libs/libgcc4">libgcc4</a>. Each of these depends on <a rel="nofollow" href="http://packages.debian.org/stable/devel/gcc-4.1-base">gcc-4.1-base</a>, which sounds a lot like it's the main gcc package, but in fact it's just <a rel="nofollow" href="http://packages.debian.org/cgi-bin/search_contents.pl?searchmode=filelist&word=gcc-4.1-base&version=stable&arch=i386">licensing information and packaging documentation</a>.</p> <p> I can't find anything in telnet's dependency tree that depends on or recommends the actual gcc package (which is simply <a rel="nofollow" href="http://packages.debian.org/stable/devel/gcc">gcc</a>). Recommends aren't normally automatically installed by apt-get, although I don't know for sure about other package management front ends.</p> Mon, 23 Apr 2007 09:44:49 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/231427/ https://lwn.net/Articles/231427/ dlang the performance benefits of gentoo are going to vary wildly depending on your hardware.<br> <p> back in the days of the 1GHz Athlon, the distros were all compiling their code for 386s (with a few just starting to offer 586-optimized versions), and the optimizations for AMD chips were significantly different than for Intel chips<br> <p> in that environment there is room for a HUGE performance difference from a simple recompile, and that's when gentoo started (and so I'm sure that's part of where the proponents are looking when they talk about the speed benefits)<br> <p> however, even on modern 64 bit systems that haven't been out long enough to develop this much variability, there are compile options for packages that can make a huge performance difference to your apps<br> <p> a perfect example is unicode support. if you need it, you need it, no question. but if all the data you are dealing with is ascii, the performance difference between compiling with and without unicode support can be drastic (I see 2x and more on the postgres performance list). these are the compile flags that make the most difference nowadays.<br> <p> In addition, just being able to not include features that you don't need for your installation is a huge benefit. Last week I built a new Debian system; when I installed the ftp software it pulled in MySQL libraries, and when I installed telnet it pulled in gcc. now in the case of telnet I found that I could go back and remove most of gcc (most of it was recommended dependencies, not required), but on a gentoo system you just define -mysql and you won't have software installing mysql for you (I still haven't figured out why telnet requires the gcc libs yet)<br>
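<p> As a rough sketch of how this looks on the Gentoo side (the package and flag names here are only illustrative):<br> <pre>
# /etc/make.conf -- global USE flags; a leading '-' turns a feature off
USE="-mysql -unicode"

# or per package, in /etc/portage/package.use
net-ftp/proftpd -mysql

# preview which USE flags a build would actually pick up
emerge --pretend --verbose net-ftp/proftpd
</pre>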
Sun, 22 Apr 2007 22:41:15 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/231420/ https://lwn.net/Articles/231420/ emj <em>(some of) the apps that run on top of the system are insanely fragile.</em><p> Yes, I know at least one Fortran compiler that uses "get_moon_ray_ratio()" when compiling stuff. Sun, 22 Apr 2007 16:53:26 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/231070/ https://lwn.net/Articles/231070/ piggy I would question the claim that a source-based distro necessarily sees a higher risk of encountering obscure and subtle bugs than a binary-based distro. Your reasoning is sound, but my empirical experience suggests that the reverse may be true.<br> <p> My experience as a developer for a vendor of a binary-only commercial Unix clone demonstrates that the range of strange PC hardware out there is more than sufficient to exercise plenty of unique corner cases.<br> <p> My other stint of experience comes from working for an embedded Linux vendor. We saw a LOT more trouble from people trying to piece together tiny distributions from prebuilt binaries (even all from the same source) than from people willing to build everything from source. A very common problem we saw with people who tried to do all of their system work with binaries only was subtle version dependencies among libraries as people upgraded individual packages over time. These problems simply do not occur if every library is built successively against the existing set of binaries on the system.<br> <p> Thu, 19 Apr 2007 12:35:56 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/231055/ https://lwn.net/Articles/231055/ TRauMa What you missed is that the article talks about added flexibility when using gentoo. I'd understand people using Scientific Linux/RHEL for their clusters if it actually meant less administration overhead and more stability, but for most HPC applications (which are very, very fickle), it's the opposite. But I guess you'll find something acidic to say about that, too.<br> Thu, 19 Apr 2007 10:53:56 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/231054/ https://lwn.net/Articles/231054/ TRauMa Thanks for this very insightful comment that tells us a lot about managing clusters with gentoo.<br> Thu, 19 Apr 2007 10:49:27 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/230222/ https://lwn.net/Articles/230222/ pbardet You forgot one thing. They love to spend a day (or more) finding out why upgrading a package breaks everything. I guess that's the day saved over a week of intense computing.<br> <p> It took me 2 years to set up a usable Gentoo box. Only 1/2 hour to break it and another 1/2h to switch to Ubuntu to get a similar system. Not better, not worse, but no more guesses.<br> Thu, 12 Apr 2007 13:18:02 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/230218/ https://lwn.net/Articles/230218/ nix Indeed. This is one of the reasons *why* I run bleeding-edge software on all my systems for which stability is relatively unimportant: specifically so that I can find niggling portability bugs before other people.
I find a few a month, typically (sometimes a few a week, sometimes none for a month or two, but the trickle never stops completely).<br> <p> Thu, 12 Apr 2007 13:06:47 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/230180/ https://lwn.net/Articles/230180/ jschrod Probably they are afraid that the (most often flaky) interconnect driver won't work any more after the upgrade.<br> <p> As long as you're using Gbit Ethernet for interconnects, upgrades are easy. If you're using Myrinet or Infiniband, that's a whole other story.<br> <p> (My experience is from HPC work for automotive and aerospace companies.)<br> Thu, 12 Apr 2007 10:36:02 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/230138/ https://lwn.net/Articles/230138/ njs <font class="QuotedText">&gt;using the approach described above you don't have different systems running different versions of things (unless you want them to).</font><br> <p> You misunderstand -- the point is that all your systems might be the same, but they'll be different from everyone else's systems. For instance, they will be different from the systems of the people you let upgrade to the cool new version of Foobar2000 first, so that they could trip over the nasty bugs and get them fixed before you hit them. (Plus the maintainers tasked with fixing those bugs have a huge combinatorial space of configurations they are trying to support.)<br> <p> Thu, 12 Apr 2007 06:29:03 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/230106/ https://lwn.net/Articles/230106/ dw Thank you. This is the first time since Gentoo appeared that I've read what appears to be both an honest testimonial and a practical use for the system.<br> <p> Having worked on commercial software for Linux in the past, I often found myself rebuilding complex trees (e.g. Mozilla, wxWidgets) just to flip one or two extra flags (Unicode on/off? --some-strange-chunk-of-code=yes). It was, more often than not, easier to do this by just grabbing the original .tar.gz rather than doing a deb or rpm source build. In retrospect, Gentoo may have saved me a lot of pain and trouble in those days.<br> Thu, 12 Apr 2007 01:33:05 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/230056/ https://lwn.net/Articles/230056/ gnu_lorien I use Gentoo. Originally it was the dream of performance benefits that drew me to it, because I never have a lot of money and am always about three years behind on the greatest-bang-for-the-buck hardware.<br> <p> I can't honestly say whether Gentoo has ever given me any performance benefits. In the end, I've discovered that I don't care. I use Gentoo because it's constantly building from source. A lot of free software is about having the sources and the freedom to do what I want to with them. Gentoo's portage system is a massive archive of sources and the hoops you have to jump through to build them.<br> <p> That's the advantage to me. I have all of the software on my computer installed, with instructions to help me tweak it if I want to.<br> Wed, 11 Apr 2007 21:08:08 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/230036/ https://lwn.net/Articles/230036/ job It all boils down to whether you trust the package maintainer to select the best compiler optimizations or believe you can select better general ones. Many maintainers are highly competent people and the data seems to support that.<br> <p> Of course, rebuilding packages from source is really simple in Debian too, so it's not really only distribution dependent.<br>
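<p> For instance, a minimal sketch of a per-package rebuild on Debian (the package chosen is arbitrary, and whether custom CFLAGS are honoured depends on that package's debian/rules):<br> <pre>
apt-get source bzip2              # fetch the source package
sudo apt-get build-dep bzip2      # install its build dependencies
cd bzip2-*/
CFLAGS="-O2 -march=athlon-xp" dpkg-buildpackage -rfakeroot -us -uc
sudo dpkg -i ../bzip2_*.deb       # install the rebuilt package
</pre>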
<p> That said, most Gentoo people probably use the distribution because the build flags are quite handy, not because of some imaginary performance benefits.<br> Wed, 11 Apr 2007 20:06:40 +0000 compiler sensitive CPU architectures https://lwn.net/Articles/229971/ https://lwn.net/Articles/229971/ tgall Flame bait aside, the reality is that certain CPU architectures really do benefit from -mtune= settings at compile time. In some cases the differences are quite large (on the order of tens of percent, in my personal experience).<br> <p> For instance, the instruction scheduling characteristics of the PS3's Cell processor and its sister processor, the POWER4/970, are dramatically different. I'm not talking about the SPUs here, I'm talking about the PowerPC instruction pipelines. <a href="http://www.ps3coderz.com/index.php?option=com_content&amp;task=view&amp;id=31&amp;Itemid=43">http://www.ps3coderz.com/index.php?option=com_content&amp;...</a><br> <p> Getting the most out of your CPU means optimizing based on the features and characteristics your processor has. Distros tend to focus on the general case. HPC sites tend to care about the specific case and as a result do things like turning to commercial compilers such as xlC or Intel's compiler suite. Is making use of gcc features that are specific to the processor in the cluster so wrong? Of course not.<br> <p> Performance analysis is an art. Does Knuth's famous quote about premature optimization apply to this discussion? I'd say no. Donnie and the rest of the gentoo crowd are really just talking about utilizing the features of the distro to make the most of the tools one has available to tune for performance.<br> <p> If -mtune=power4 -O2 results in a slower system than a simple -O2 setup on a POWER4 box, then something is obviously wrong in gcc land. <br> <p> Perhaps there are those who distrust -mtune settings or -Ox settings. To those I would say: go back to assembler, since you obviously don't trust your compiler. <br> <p> Wed, 11 Apr 2007 16:51:00 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229969/ https://lwn.net/Articles/229969/ dlang using the approach described above you don't have different systems running different versions of things (unless you want them to). with the binary package server you have one box compile the code with the optimizations that you want, and then it makes the results available to all the other systems (assuming that they are identical)<br> <p> I haven't done head-to-head performance comparisons with gentoo, but I have seen cases where optimizing the kernel could result in 20-30% performance improvements in the past (back in the 1GHz athlon days). on modern 64 bit hardware it's less of an issue because there's less variability between hardware, and therefore less difference between optimized versions and the generic versions.<br>
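<p> A hedged sketch of that build-host/binary-package-server arrangement (the hostname and CFLAGS are placeholders, to be matched to the actual CPUs):<br> <pre>
# on the build host: /etc/make.conf
CFLAGS="-O2 -march=athlon-xp -pipe"
CXXFLAGS="${CFLAGS}"
FEATURES="buildpkg"        # keep a binary package of everything that gets built

# on the (identical) cluster nodes: /etc/make.conf
PORTAGE_BINHOST="http://buildhost.example/packages"

# nodes then install the prebuilt, optimized binaries
emerge --getbinpkg --usepkg world
</pre>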
<p> where I actually see the benefit of gentoo where I use it (my home server) is in the ability to configure the packages with the options and dependencies that I want them to have (this means turning on some that other distros would leave off, but mostly turning off options that other distros turn on but I don't care about)<br> Wed, 11 Apr 2007 16:21:35 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229954/ https://lwn.net/Articles/229954/ ajross <i><blockquote>So if you argue that system libraries and kernels don't matter to HPC performance, then why are HPC administrators so reluctant to change something so irrelevant?</blockquote></i> <p>Because, like any IT administrator queried about a big configuration change, they are afraid of breaking things. It doesn't matter how fast that kernel is; an inoperative cluster is still infinitely slower. I assure you that they would greet "can I install gentoo?!" with the same horror. <p>Hell, even if they were running gentoo, they would probably refuse to do the upgrade. At least they would if I were paying them. <p>Leave the distro wars for the kiddies. Real clusters need to be doing real work, and futzing with the installed software doesn't qualify. Wed, 11 Apr 2007 14:35:50 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229945/ https://lwn.net/Articles/229945/ drag Yeah, and with that it was shown that Gentoo compiled with 'optimizations' was slower than both Debian and Mandrake most of the time.<br> <p> Of course this is old and with GCC 3.x stuff. GCC 4.x is an entirely different beast.<br> <p> I think it's only fair to compare the latest release of whatever distribution you plan on using, with whatever software comes with it, more or less.<br> <p> Historically speaking, when you're presented with vague claims of improved performance by any software vendor that fails to back them up with facts, the vast majority of the time they were just full of it.<br> <p> I know that compile-time optimizations can be counter-intuitive in a lot of cases. The biggest example I know of is that the Linux kernel performs best when compiled optimized for size, not for speed.<br> <p> This is because when you optimize for speed you end up with larger binaries and larger logical steps. When compiled for size, the logic is tighter and the binaries are smaller. With this sort of setup more of the kernel runs in CPU cache, and your CPU is going to spend less time retrieving kernel bits from main memory and more time actually processing.<br> <p> So it's all a bit weird. I am sure that that isn't the only time compiling something with -Os or -O would end up faster than -O3.<br> <p> Wed, 11 Apr 2007 10:06:58 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229944/ https://lwn.net/Articles/229944/ ewan Pretty much everyone in particle physics (i.e. CERN et al) runs Scientific Linux, or RHEL, in a mixture of versions 3 and 4.
Also, the reason people are wary of upgrades is not that the packaging system isn't up to it (we wouldn't 'upgrade' cluster machines anyway - it would always be a nuke and re-kickstart); it's because (some of) the apps that run on top of the system are insanely fragile.<br> Wed, 11 Apr 2007 09:31:21 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229939/ https://lwn.net/Articles/229939/ DeletedUser32991 <p> The very <a href="http://web.archive.org/web/20031012081422/articles.linmagau.org/modules.php?op=modload&name=Sections&file=index&req=printpage&artid=227">classic</a> comparison not only comes with its own set of problems (starting with "identical hardware") but also seems to predate Fedora Core a bit. </p> Wed, 11 Apr 2007 07:26:35 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229937/ https://lwn.net/Articles/229937/ amacater The classic answer on the Beowulf list: It depends. It depends on whether you admin your own server or have to rely on central admin. It depends on the size of your cluster and, more importantly, who your hardware vendors are. If you buy a 2048-node cluster from IBM, to some extent it's easier to take the hardware vendor's choice of distro and cluster admin tools. HP's choice may be different from Penguin's. Two further considerations: fast interconnect hardware (Quadrics/Mellanox ...), which is essential for some classes of problem, needs drivers. The companies are relatively small in terms of staff size and are operating on tight margins in a small market. It may be that they haven't time to sort out a Debian/Gentoo/Yellow Dog ... hardware card driver. Lastly, there are the high-performance compiler writers and high-end proprietary software types: they want to debug a known kernel/memory combination when they get oops reports. You can run highly successful infrastructures on whichever distribution you like - as ever, your problem set, resources, time and effort will differ from everyone else's and, sometimes, it's easier to buy a system off the shelf so that your users can concentrate on coding and running jobs. Read the Beowulf list archives for this discussion and minor variants - many times :)<br> Wed, 11 Apr 2007 07:14:22 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229934/ https://lwn.net/Articles/229934/ njs There are now plenty of distributions that have excellent incremental upgrade support -- Debian is the classic leader here (and my preference), but it's not unique. AFAIK Red Hat does pretty well these days too. So portage might well beat out RH9 (which is what, 4 years old at this point?), but that's not really saying much.<br> <p> And, if your criterion is minimizing the risk of upgrades, then a source-based distribution like Gentoo will necessarily be worse than a binary-based one. With a binary-based distribution, everyone is running exactly the same executables, and the chance that you will be the first person to trip over some bug is minimized. With a source-based distribution, it's entirely possible that you are the only person in the world to have packages built with your exact combination of header files, compiler version, and USE and compiler flags -- so even if the bug tracker says that some piece of software has been out for 6 months with no reported problems, that's no guarantee that it'll work for *you*.
Of course, you can minimize this by sticking to well-known compiler versions and declining to fiddle with compile flags, but if you're doing that then why bother with a source-based distro at all?<br> Wed, 11 Apr 2007 04:34:54 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229933/ https://lwn.net/Articles/229933/ gdt <p>The main advantage of using Gentoo seems to be Portage. The incremental upgrade approach of Portage might well be worth the effort.</p> <p>The "big upgrade" approach of Red Hat doesn't seem to cut it. My experience [1] with pushing around Large Hadron Collider datasets is that there are still a lot of clusters running RH9, whose kernel can't push big fat network pipes to capacity. Whenever I ask an HPC administrator to upgrade to a recent kernel I get a look of horror.</p> <p>So if you argue that system libraries and kernels don't matter to HPC performance, then why are HPC administrators so reluctant to change something so irrelevant? Perhaps it is because the packaging solution they use makes the risk of a change too great. It would be well worth a look at Gentoo to see if its packaging system lowers the risk of changes.</p> <p><small>[1] a network engineer at an academic and research network with responsibility for end-to-end performance.</small></p> Wed, 11 Apr 2007 04:09:17 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229931/ https://lwn.net/Articles/229931/ nix It's not as if what was so disparaged by maks was impossible even in the *absence* of the binary package server. rsync works anywhere, after all, and you can rsync your speed-critical stuff around easily (at least you can if it's an even slightly competently written distributed app).<br> Wed, 11 Apr 2007 02:54:25 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229924/ https://lwn.net/Articles/229924/ dberkholz I haven't seen anything really formal, and I agree it needs to be done. But here's one example of RHEL vs Gentoo:<br> <p> <a href="http://www.mail-archive.com/gentoo-cluster@lists.gentoo.org/msg00150.html">http://www.mail-archive.com/gentoo-cluster@lists.gentoo.o...</a><br> <p> I tried to sum up the results at <a href="http://www.mail-archive.com/gentoo-cluster@lists.gentoo.org/msg00158.html">http://www.mail-archive.com/gentoo-cluster@lists.gentoo.o...</a> by saying:<br> <p> "Yeah, from the numbers it looks as if it would be dependent on the purpose of the cluster whether OS X or Gentoo would do better. On ppc, Gentoo does poorly on the first two benchmarks and also on context switching. On x86, the first two are more comparable with RH, but on the others, Gentoo has a small to large advantage over RH, just as on ppc."<br> <p> And the post at <a href="http://www.mail-archive.com/gentoo-cluster@lists.gentoo.org/msg00166.html">http://www.mail-archive.com/gentoo-cluster@lists.gentoo.o...</a> explains why the first two benchmarks may be synthetic.<br> <p> Of course, as you mention, benchmarks of real-world applications are really what we need, not just running a benchmark suite.<br> Wed, 11 Apr 2007 00:36:18 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229923/ https://lwn.net/Articles/229923/ dberkholz What I love about LWN is that we can have productive dialogs in the comments, but it's tough to do so with that kind of tone. Your suggested criticism was already addressed in the article.
Search for the section containing "binary package server".<br> Wed, 11 Apr 2007 00:29:44 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229919/ https://lwn.net/Articles/229919/ drag Well it's obvious now. <br> <p> We want benchmarks from the Gentoo community to make sure that it's worth our time to even give a crap about doing oddball compiler optimizations on a kernel or glibc as well as the rest of the OS.<br> <p> I'd like to see a comparison of Debian (cold) vs Fedora (warm) vs highly optimized Gentoo (hot) on identical hardware running:<br> <p> 1. Filesystem benchmarks.<br> 2. Network I/O benchmarks.<br> 3. Computational benchmarks.<br> 4. Database benchmarks.<br> 5. Application benchmarks.<br> (Stuff like rendering 3D scenes, data encoding, compiling and so on, to illustrate the impact the benchmarks may have on real-world use.)<br> <p> Just like the sort of stuff you'd find on any reputable hardware-oriented website trying to compare motherboards or CPUs.<br> <p> If anybody can do that and come away with conclusive evidence that all that time spent on Gentoo is actually worth anything, then I'd find that a very compelling reason to use Gentoo.<br> <p> But so far everything I've seen actually suggests that people playing around with optimizations often _reduce_ performance and increase bugginess. So not only do you end up wasting your time compiling everything, you're actually making it slower.<br> Wed, 11 Apr 2007 00:09:15 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229917/ https://lwn.net/Articles/229917/ maks losing time building boxes is the last thing you want to do while number crunching. that article needs to be shot. pure propaganda bullshit.<br> <p> in my experience almost any scientific cluster in Germany, the US or at CERN runs either Fedora or an old Red Hat.<br> Tue, 10 Apr 2007 23:14:53 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229909/ https://lwn.net/Articles/229909/ dberkholz Yes, both of you have good points. Although there is no need to compile your whole system with the optimal CFLAGS you discovered for the scientific and number-crunching libraries and programs, you might as well do it if you're running Gentoo. A big advantage I mentioned is that you can do this with your scientific libs and apps without leaving Portage behind -- at that point, you might as well compile everything else the same way.<br> <p> And I assume you've already done all you can by picking appropriate algorithms and doing profiling -- using a good compiler with good flags is just the last step, not the only step.<br> Tue, 10 Apr 2007 21:52:18 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229870/ https://lwn.net/Articles/229870/ rsidd <i>In my experience, the performance of system binaries is usually almost irrelevant to scientific code. What matters is how you compile your own code and perhaps a few key libraries like the BLAS, but in neither case do you need to recompile your whole OS.</i> <P> Exactly. It's not "almost" irrelevant, it's entirely irrelevant. (Well, the <i>kernel</i> may cause a tiny difference -- nothing you could measure, I imagine -- but no other system binary should matter at all.) Compile your program, and possibly dependent libraries, with the relevant C flags and you're fine. <P> And, as you say, tweaking compiler flags is a lazy and unproductive approach. Usually if you profile your program and find the time-consuming bits, you can optimise those and get the sort of speed boost no compiler could ever give you.
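<p> A minimal profiling sketch with the GNU tools (the program name is just a placeholder):<br> <pre>
gcc -O2 -pg -o mysim mysim.c        # build with profiling instrumentation
./mysim                             # run a representative workload; writes gmon.out
gprof ./mysim gmon.out | head -20   # flat profile: where the time actually goes
</pre>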
Tue, 10 Apr 2007 19:10:59 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229868/ https://lwn.net/Articles/229868/ asamardzic <p> Yeah, I supposed something like that :-) <p> Basically, I have the same concerns about this that Steven expressed below - for serious HPC work, it is more important to choose carefully between the available implementations of low-level libraries (for example, for the BLAS kernel, between <a href="http://math-atlas.sourceforge.net/">ATLAS</a>, a vendor implementation like the <a href="http://www.intel.com/software/products/mkl/">Intel Math Kernel Library</a>, or maybe <a href="http://www.tacc.utexas.edu/~kgoto/">Kazushige Goto</a>'s implementation) than to choose one Linux distribution over another. And, again as Steven stated below, it matters <b>much</b> more for overall efficiency how the given algorithm is actually parallelized... Tue, 10 Apr 2007 19:09:54 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229863/ https://lwn.net/Articles/229863/ ajross Of course not, this is Gentoo. :) <p>That's not as much of a joke as it sounds. People who are truly interested in implementing high-performance systems occupy themselves with writing and running benchmarks and other analysis tools. When they find issues, they document their findings and (hopefully) submit patches to the appropriate software maintainers. <p>People who just want to play with their compiler switches and talk about the need "to eke out every last bit of performance from hardware" without actually doing the hard work? They run gentoo. Tue, 10 Apr 2007 18:35:52 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229855/ https://lwn.net/Articles/229855/ stevenj <blockquote> If you can increase the speed of your code by 5%, you save a day and a half every month. The amount of work you can accomplish with that extra time really adds up when you consider hundreds or thousands of CPUs. </blockquote> <p>Not if you spend two days every month tweaking compiler flags, recompiling your system, and benchmarking the resulting executables... <p>I see claims of Gentoo's performance advantages being tossed around a lot; can someone point to benchmarks to back them up? <p>In my experience, the performance of system binaries is usually almost irrelevant to scientific code. What matters is how you compile your own code and perhaps a few key libraries like the <a href="http://en.wikipedia.org/wiki/Basic_Linear_Algebra_Subprograms">BLAS</a>, but in neither case do you need to recompile your whole OS. <p>(And in any case, compiler tricks are usually the least significant part of a serious optimization effort, if you know what you are doing.) Tue, 10 Apr 2007 17:47:38 +0000 Building a High-Performance Cluster with Gentoo https://lwn.net/Articles/229857/ https://lwn.net/Articles/229857/ asamardzic Any hard numbers to support the claim that a cluster built this way is indeed faster than an (OSCAR|Rocks|Warewulf) binary installation on the same hardware?<br> Tue, 10 Apr 2007 17:46:38 +0000