LWN: Comments on "All hail the speed demons (O'Reillynet)" https://lwn.net/Articles/158126/ This is a special feed containing comments posted to the individual LWN article titled "All hail the speed demons (O'Reillynet)". en-us Fri, 24 Oct 2025 01:36:42 +0000 Fri, 24 Oct 2025 01:36:42 +0000 https://www.rssboard.org/rss-specification lwn@lwn.net Real-life optimization work https://lwn.net/Articles/159198/ https://lwn.net/Articles/159198/ njhurst I think that the problem is that each application looks in 10 different places for 10 different files.<br> Wed, 09 Nov 2005 10:16:07 +0000 All hail the grammar checkers https://lwn.net/Articles/159178/ https://lwn.net/Articles/159178/ TwoTimeGrime Thanks for the tip. It wasn't clear that it was a web log. Knowing that now and looking at the page again, the only thing that gives any indication that it might be a web log is some breadcrumb navigation right above the author's photo. Everything else has O'Reilly branding that makes it look like regular editorial content. The fact that it's a weblog is not conspicuous.<br> <p> If by "we the editors" you mean that you work there, you might want to pass this comment on to someone there. I will see if there's a feedback address on the web page and email them as well.<br> Wed, 09 Nov 2005 02:43:32 +0000 All hail the grammar checkers https://lwn.net/Articles/159128/ https://lwn.net/Articles/159128/ chromatic It's a weblog, not an article. We the editors don't edit those.<br> Tue, 08 Nov 2005 22:32:26 +0000 Real-life optimization work https://lwn.net/Articles/159035/ https://lwn.net/Articles/159035/ piman You forgot to mention bloated things like file permissions and multiple terminals. :)<br> <p> Also, Unix code (meaning all those things the grandparent eschewed, like malloc and localtime) written in 1989-1991 would take a couple of days to port to a modern GNU/Linux distribution. And probably only a few days to port to whatever comes 15 years from now.<br> <p> So would it do more? Yeah. To start with, it would run in the first place. And without that ability, source code of any size is worthless.<br> Tue, 08 Nov 2005 08:21:48 +0000 Real-life optimization work https://lwn.net/Articles/159034/ https://lwn.net/Articles/159034/ piman <font class="QuotedText">&gt; You could spend all day twiddling bits and rearranging this or that and save maybe half a second off a 7-second load time, whereas you could spend the time rethinking how the configuration files work and save 5 seconds off the load time.</font><br> <p> I'll believe that when I see a profile.<br> <p> (Which, in my opinion, was rather the point of this article. Stop complaining and start profiling. That's how we'll get rid of "bloat".)<br> Tue, 08 Nov 2005 08:17:15 +0000 All hail the speed demons (O'Reillynet) https://lwn.net/Articles/159025/ https://lwn.net/Articles/159025/ nix Yes. Even so, that large chunk containing a LOADed section would be read *whether or not the other parts of it happen to be LOADed*, so, again, the worst that .comment does is to reduce packing efficiency of LOADed sections.
This is hardly a killer --- given that we have no effective tools to improve locality of reference in shared libraries anyway, we're wasting far more disk accesses on unnecessary paging due to poor packing of accessed functions.<br> Mon, 07 Nov 2005 23:37:59 +0000 All hail the grammar checkers https://lwn.net/Articles/158945/ https://lwn.net/Articles/158945/ TwoTimeGrime You may not value your time, but don't disparage me because you feel that Slashdot-quality journalism is acceptable on LWN. If someone has something to say, then I expect their sentences to be clearly formed so that their ideas can be properly communicated and understood. If the writer can't even take the time to communicate clearly, then why should I bother to read the article? I have better things to do than try to figure out what the author was really trying to say. A confusing grammatical error in the first sentence and an irrelevant detour about developers with the "coolest names" don't help me understand what he's saying about Linux application performance.<br> <p> What's even more disappointing is that O'Reilly is a publisher that I have always felt publishes quality books. It's a shame that their editors did such a poor job on this article before it was published.<br> <p> Mon, 07 Nov 2005 17:40:08 +0000 All hail the grammar checkers https://lwn.net/Articles/158931/ https://lwn.net/Articles/158931/ lypanov pathetic. grow up<br> Mon, 07 Nov 2005 15:36:41 +0000 All hail the speed demons (O'Reillynet) https://lwn.net/Articles/158929/ https://lwn.net/Articles/158929/ lypanov umm, you realize that disks are laid out in fairly large tracks<br> rather than 4kb sections, right?<br> Mon, 07 Nov 2005 15:35:32 +0000 Real-life optimization work https://lwn.net/Articles/158854/ https://lwn.net/Articles/158854/ nix I size(1)d them, of course; I just didn't want to spray the result all over the comments page.<br> <p> And, yes, I'd agree that normally shaving bytes off things isn't worth it: however, with that in mind I spent this weekend shaving a few bytes off one data structure in one program and reducing the number of instances of that data structure --- and reducing the program's peak memory consumption from many gigabytes to a few hundred Mb.<br> <p> But microoptimizations without major results, or pervasive ones, are indeed generally not worth it.<br> Mon, 07 Nov 2005 00:18:28 +0000 Real-life optimization work https://lwn.net/Articles/158817/ https://lwn.net/Articles/158817/ zblaxell Actually the car with airbags has probably also lost weight in other places thanks to materials and construction optimizations, and maybe has a more efficient and/or powerful engine. It most likely actually starts *more* quickly now. In my case, it actually does--my current car has airbags and starts in well under a second, whereas my previous car had no airbags and could not leave the safety of park without idling for at least three seconds *after* the starter motor finished getting the engine to run on its own power (and some days that took a while). The new one is bigger and heavier than the old, but consumes half the fuel (another thing naive Newtonian analysis suggests shouldn't happen). There's a lot of complexity in the new system, but it does something useful that the simpler system couldn't do, and the cost is reasonable given the benefits.<br> <p> Some people are alive today because of airbags--I doubt they would consider them bloat.
OTOH, some people do complain about airbags, although they don't use the word "bloat" to describe them ("dangerous" and "explosive" come to mind).<br> <p> Sometimes software gets better when it gets bigger. A 6809 machine running a 1.87MHz processor doesn't have a complicated buffer cache subsystem, because it would be slower to read data and copy it into a cache than to reread it from disk every time it is requested; however, a P4 machine running a 3.0GHz processor turns a complicated buffer cache subsystem into huge average-case improvements in I/O subsystem performance. Very few users would consider disk cache to be bloat on a modern machine (maybe people who write bootloaders that must share 512 total bytes of binary with a partition table, or embedded systems which use alternatives to cache, like execute-in-place, or high-end database systems which regard *everything* between the backend and the disk as bloat).<br> <p> Often, bigger software is just bloated. Program A that performs a task in 1000 steps in a loop is generally slower than Program B that performs the same task in only 500 steps, all else being equal. No amount of spiffy new hardware can change the fact that program A is still twice as slow as program B on the same machine. The complaints about bloat start when program A fails to demonstrate useful improvements over program B, especially when program A is new and typically doesn't do everything that program B does.<br> <p> And why shouldn't people complain when, all else being equal, someone proposes replacing their existing working software with something slower, larger, and more broken? Even entirely new software is bloated if its runtime cost far exceeds reasonable technical requirements for the problem it solves. It's one thing to say "it has a lot of new capabilities", but the people who are complaining care more about the things they're doing now than about things they could do at some future time.<br> <p> The text editor I used in 1991 (vi) is the same text editor I use in 2005, but its performance relative to CPU clock rate has been largely unchanged during that time (with the convenient side-effect that it is now 200 times faster than it was 14 years ago; or, put another way, I can start editing a 200MB file today in the same time I used to need to load a 1MB file).<br> <p> Today I expect a text editor to fit in well under 1MB of RAM (not including the file being edited, of course), support all the editing operations vi does, and go from "zero RAM usage" to "editing a 50MB text file" in three seconds or less. It's possible to double the startup speed of vi by removing the recovery feature, so I'm already tolerating nearly 50% overhead in that standard. Anything slower would be bloated--no matter what fonts or rendering capabilities it has. It's certainly possible to achieve this performance with a Unicode-capable, locale-aware text editor--the fact that nobody seems to have managed to do it yet doesn't mean it can't be done; it just means that all known attempts so far have been bloated monsters. To the people who are creating these monsters: don't deny this. Your code, or some code you have chosen to depend on, *is* bloated. Please, keep trying until you get it right. It *is* possible.<br> <p> OTOH, bloat is often tolerable, although still nothing to be proud of.
I have a gigabyte of RAM in this laptop because it's easier and cheaper than trying to make a bunch of huge, multi-modular, multi-layered applications smaller.<br> Sun, 06 Nov 2005 09:45:50 +0000 Real-life optimization work https://lwn.net/Articles/158816/ https://lwn.net/Articles/158816/ zblaxell The Windows registry is organized into a vaguely tree-like recursive structure, demand-paged and cached in RAM.<br> <p> The Linux filesystem is organized into a vaguely tree-like recursive structure, demand-paged and cached in RAM.<br> <p> Performance-wise there isn't much difference unless you're using a braindead filesystem. The frequently accessed and recently modified stuff will be in RAM, and everything else won't.<br> <p> It would be better to tweak the demand-paging of the executables. Reading 4K at a time according to quasi-random execution paths is stupid when it's faster to read 500K of data from disk than it is to read 4K, seek 492K ahead, and read 4K.<br> Sun, 06 Nov 2005 08:08:43 +0000 Real-life optimization work https://lwn.net/Articles/158794/ https://lwn.net/Articles/158794/ tialaramex Are you /sure/ it wouldn't do more?<br> <p> You see, it's so easy to write a Unicode-enabled, locale-sensitive program that you might easily do so by accident. Your new program might, without you really intending it, support a lot of extra things that a lot of people (maybe even you) would find useful. Things which weren't so much missing from the original as simply never considered. Remember also that the OS support functions are much more powerful and robust than their equivalents on your 6809. Depending on the APIs used, your "save file" routine may magically support saving a compressed file, over the network, with automatic versioning...<br> <p> Sun, 06 Nov 2005 00:58:48 +0000 Real-life optimization work https://lwn.net/Articles/158718/ https://lwn.net/Articles/158718/ vonbrand <p> Sorry, but comparing the size of the binaries is useless. Use <code>size(1)</code> for that. Also, from what I understand, on SPARC 64-bit binaries are much larger due to larger constants (pointers, integers, ...) all over the place. <p> Besides, what is the point? To get <em>anything</em> running on an 8-bit machine was a challenge; lots of things you take for granted today weren't even the stuff of wet dreams then. You also have to remember that today the expensive part of the mix is <em>people</em>, not machines. Sure, one could develop mean and lean applications doing most of what today's software does. With enough care, you could even figure out how to include just the features people really use, and shave off quite a bit more. But the development would be a whole lot more expensive, just for letting a few MiB of RAM lie around unused for a change. Sat, 05 Nov 2005 03:11:11 +0000 Real-life optimization work https://lwn.net/Articles/158648/ https://lwn.net/Articles/158648/ oak Only if your mass storage is slow at seeking.<br> <p> This is not the case if, instead of a hard disk, you use Flash memory, for example, as is done on many embedded devices.<br> <p> Fri, 04 Nov 2005 18:31:54 +0000 glibc https://lwn.net/Articles/158647/ https://lwn.net/Articles/158647/ oak And note that Glibc cannot even produce really static binaries...<br> Name resolution and security stuff are always loaded dynamically.<br> <p> However, it's silly to do static binaries with Glibc; you should use a C library that's "designed" for that.<br> For example uClibc.
:-)<br> <p> Fri, 04 Nov 2005 18:28:14 +0000 All hail the speed demons (O'Reillynet) https://lwn.net/Articles/158629/ https://lwn.net/Articles/158629/ dps I think bloat *is* a problem, and it applies to my code too... when I estimate 300--500 lines of code and the actual code is more like 1500 lines, that indicates a problem to me.<br> <p> That said, being big is not necessarily bad... my current choice of CGI library is big because it was designed to allow you to decompress zip archives and feed them to CGI programs uncompressed, and not be vulnerable to zips of death consuming all available disc space. This feature now works.<br> <p> The infrastructure for this is overkill for some applications, but using it made sense given that I had to have it anyway. The ability to replay a request when using gdb is an addictive side benefit. Unfortunately this library is not generally available, so don't bother asking for a copy.<br> <p> I reduced some string functions from 30% of a profile to under 5% by making them process 2 characters at a time. This was hard work, and those functions are a lot bigger and more complex than the natural implementation.<br> <p> Fri, 04 Nov 2005 17:16:50 +0000 Real-life optimization work https://lwn.net/Articles/158623/ https://lwn.net/Articles/158623/ dann "Paging in" does not make a big difference for small libraries during a cold startup; at the least, the symbol table and the _init need to be read from the<br> disk. Extra disk seeks are expensive.<br> Fri, 04 Nov 2005 16:29:08 +0000 Real-life optimization work https://lwn.net/Articles/158612/ https://lwn.net/Articles/158612/ hppnq <em><blockquote> The car doesn't start more slowly because of power airbags.</blockquote></em> <p> Of course it does, Newton proved that about 350 years ago. ;-) <p> The trick with optimization is knowing *what* to optimize. Most of the complaining about bloated and slow software is meaningless nonsense; it's like complaining that the tires of the average truck are so much bigger than my own car's -- and MY car runs fine, you know. Fri, 04 Nov 2005 16:04:47 +0000 All hail the speed demons (O'Reillynet) https://lwn.net/Articles/158598/ https://lwn.net/Articles/158598/ nix Sections which are not loaded are not read except if they happen to be in the same page as the loaded sections. You can ignore them except for their disk space consumption.<br> Fri, 04 Nov 2005 13:28:22 +0000 Real-life optimization work https://lwn.net/Articles/158597/ https://lwn.net/Articles/158597/ nix That is coming, now that the kernel exports that sort of info (as of 2.6.14).<br> Fri, 04 Nov 2005 13:27:00 +0000 Real-life optimization work https://lwn.net/Articles/158596/ https://lwn.net/Articles/158596/ nix Shared libraries are paged in, not `loaded from disk'; the overhead of using extra shared libraries on a prelinked system is very low indeed. (dlopen()ing is rather a lot more expensive, as you can't prelink dlopen()ed libraries.)<br> Fri, 04 Nov 2005 13:26:18 +0000 Real-life optimization work https://lwn.net/Articles/158595/ https://lwn.net/Articles/158595/ nix Indeed they could be compressed, but I think you might need a new relocation type for 64+32 base+offset... (I'm not sure and don't have the specs here).<br> Fri, 04 Nov 2005 13:22:50 +0000 All hail the speed demons (O'Reillynet) https://lwn.net/Articles/158581/ https://lwn.net/Articles/158581/ NAR I've just checked: it took 62 seconds after I typed <P> <CODE> oowriter 6k_long_file.sxw </CODE> <P> to get to a point where I can move the cursor in Writer.
And as a side effect, most of my other processes were swapped out - and this is a 1.6GHz processor with 512 MB RAM. At least it motivates me to write code so I can avoid writing implementation proposal documentation... <P> <CENTER>Bye,NAR</CENTER> Fri, 04 Nov 2005 10:58:03 +0000 Real-life optimization work https://lwn.net/Articles/158565/ https://lwn.net/Articles/158565/ drag In GNOME, much of the start-up time of programs doesn't have anything to do with the binary sizes, or how the program is written, or the libraries it's linked to, or anything like that.<br> <p> What it is is that the program is looking around on your hard drive for various configuration files and whatnot, polling files here and there. So for a large part of the start-up time the program is pretty much fine and ready to run, but it is waiting on disk I/O.<br> <p> With Windows you have the registry where all this stuff is stored, which I suppose is mostly in memory most of the time anyway. It's a much quicker interface than the Linux-style configuration files and directories stored in various places on your filesystem.<br> <p> That's the trouble with optimizing code. You could spend all day twiddling bits and rearranging this or that and save maybe half a second off a 7-second load time, whereas you could spend the time rethinking how the configuration files work and save 5 seconds off the load time.<br> <p> Linux itself also has numerous small things that have been developed and added to the kernel to greatly improve memory performance and whatnot, but nobody uses them because they are unaware of them, and when they are aware they often don't want to bother, because it's a hassle to write Linux-specific code when other systems like the BSDs aren't nearly as sophisticated desktop-wise.<br> <p> Or something like that. I am not a programmer though. But I found this interesting:<br> <a href="http://stream.fluendo.com/archive/6uadec/Robert_Love_-_Optimizing_GNOME.ogg">http://stream.fluendo.com/archive/6uadec/Robert_Love_-_Op...</a><br> Fri, 04 Nov 2005 07:25:35 +0000 Real-life optimization work https://lwn.net/Articles/158557/ https://lwn.net/Articles/158557/ zblaxell There is also a shift in the nature of the programming task.<br> <p> In 1989-1991 I wrote a personal calendaring application with the best programming tools available to me at the time: 6809 assembler. From scratch. (OK, I had Unix-like system calls, but no library functions, not even math with integers larger than 16 bits.)<br> <p> The application contained many of the usual personal calendar features and some unusual ones: alarm notifications, recurring events, a categorization and prioritization scheme, expiration dates, interactive editing, printable sorted deadline lists, colored text, a curses-like interface, etc. The particular combination of features was highly productive for me, and unfortunately a) I've never seen anyone else write a similar application, b) the source code is on an obsolete hard drive, and c) without it, I can't seem to organize my life to get the time to rewrite it.<br> <p> One thing that happens when you manually type in 1300 assembler instructions is that you don't waste them. There was nothing in that code that didn't need to be there. I entered each instruction by hand, using no assembler macros, only function calls.
Features were carefully designed to balance functional benefits against fairly painful coding cost--when 10% of your program is consumed by the functions that manipulate dates and intervals, you think twice before adding superfluous features, and you also find ways to *add* functionality by *removing* code.<br> <p> This calendaring application binary was about 3K. The smallest i386 binary I can get for the source code "int main(){return 0;}" is more than double that size, but it does less (now *that* is bloat ;-). Oddly enough, at the time I thought 3K was a huge investment in memory, since it would be resident in RAM all the time.<br> <p> If I cloned the old program line by line, but transliterated into C, it'd probably become 10 times larger (recall it became twice as large just by being replaced with a program that returns a constant integer). The i386 requires four bytes for memory addresses instead of two, many of the x86 instructions are longer than the 6809 equivalents, and C compilers don't usually find ways to exploit instructions that are designed for people who are writing date formatting functions by hand in assembler.<br> <p> If I designed an equivalent program using the tools I'd normally use for binary software development today (C, curses, etc.), it'd be 100 times larger. My program contains constant strings for terminal manipulation--this would be replaced with the whole curses/termcap/terminfo/etc. infrastructure. If I used malloc() instead of my own memory management library and ANSI C string functions instead of my own string management library, the memory overhead on each event would double. localtime() and mktime() are considerably larger than my date manipulation library--my library didn't have to support time zones, for one thing. A lot of data that was stored in packed bit structures would end up being spread out over bytes, ints, or even text strings in a "modern" design.<br> <p> On the other hand there is one saving--I won't need several hundred bytes of integer math library, since modern CPUs come with these functions *built right into the hardware*. ;-)<br> <p> If I designed an equivalent program in a scripting language, its source code might be somewhat smaller, but it would probably use more RAM at runtime than was available in the entire machine that used to run the application as a daemon--a bloat factor of over 200 (with a GUI, over 1000). It would also take me a single weekend, not three years, to write it.<br> <p> But would the program do anything more? No. It would be the same little program; it would just be sitting on top of a mountain of accreted infrastructure.<br> Fri, 04 Nov 2005 06:47:56 +0000 All hail the grammar checkers https://lwn.net/Articles/158542/ https://lwn.net/Articles/158542/ TwoTimeGrime No kidding. Hero's what?<br> Fri, 04 Nov 2005 01:40:34 +0000 Real-life optimization work https://lwn.net/Articles/158463/ https://lwn.net/Articles/158463/ mcm I guess the relocations could be compressed, as they can probably be represented as 32-bit offsets to a 64-bit base.<br> Thu, 03 Nov 2005 19:55:15 +0000 Real-life optimization work https://lwn.net/Articles/158449/ https://lwn.net/Articles/158449/ dann The crypto libraries are brought in because gnome-vfs is linked to them.<br> libgnomeui links to gnome-vfs, so any GNOME application that links to libgnomeui will be linked to the crypto libraries.
<br> It would be better if gnome-vfs dlopened the crypto libraries on demand when they are used; that would avoid linking all the GNOME applications to the crypto libraries (and probably avoid loading them from disk on startup, as they are probably not used).<br> <p> Thu, 03 Nov 2005 19:05:48 +0000 Real-life optimization work https://lwn.net/Articles/158446/ https://lwn.net/Articles/158446/ dann Well, it would be nice if the calendar and appointment functionality were loaded on demand. If one does not use Evolution, then there's little point in<br> loading all those libraries; it just slows down the startup. <br> <p> About pmap: it would be great if the Linux pmap printed more details about the maps, like the Solaris pmap -x does: <br> <p> Address Kbytes Resident Shared Private Permissions Mapped File<br> 00010000 1688 1616 1616 - read/exec emacs<br> 001C4000 4904 4816 1208 3608 read/write/exec emacs<br> ...<br> <p> This way you know more exactly how memory is used.<br> Thu, 03 Nov 2005 18:44:24 +0000 Real-life optimization work https://lwn.net/Articles/158413/ https://lwn.net/Articles/158413/ beoba The car doesn't start more slowly because of power airbags.<br> <p> With software, adding features is often a tradeoff, and because of that, different people have different ideas of what position is optimal for their case.<br> Thu, 03 Nov 2005 16:51:36 +0000 All hail the grammar checkers https://lwn.net/Articles/158361/ https://lwn.net/Articles/158361/ gravious first sentence: hero's<br> _groan_<br> why bother going on?<br> Thu, 03 Nov 2005 14:02:31 +0000 All hail the speed demons (O'Reillynet) https://lwn.net/Articles/158347/ https://lwn.net/Articles/158347/ ekj Not quite.<p> First, you save disk space since the binaries get smaller.<p> Secondly, your program starts quicker since there is less data to read (even if the extra sections are not loaded, they're still read; and even if they're not read, there's an extra seek to skip them; and even then the VFS might decide to do readahead anyway and thus physically read and transfer to RAM parts of the file which your application never touches or reads).<p> OK, so it's probably not major. But it wouldn't surprise me if the speedup was quite measurable. Thu, 03 Nov 2005 12:20:54 +0000 glibc https://lwn.net/Articles/158326/ https://lwn.net/Articles/158326/ nix Oh, and of course none of this is true of dynamically linked programs, and pretty much none of it is paged into memory for the vast majority of programs (whether statically or dynamically linked), so this explains no actual bloat at all.<br> Thu, 03 Nov 2005 11:45:24 +0000 glibc https://lwn.net/Articles/158321/ https://lwn.net/Articles/158321/ nix Here's the start of what happens with an empty file, simplifying where things start to explode. <p> An object file built from <pre> int main (void) { return 0; } </pre> obviously pulls in nothing. But it gets linked with crtn.o, and then this happens (every object file except for crtn.o herein is in libc.a; things which do not lead to a size explosion omitted for clarity): <pre> crtn.o: __libc_start_main libc-start.o: __cxa_atexit cxa_atexit.o: malloc malloc.o: fprintf, abort... [pulls in stdio, which pulls in libio, which pulls in i18n code, &amp;c] </pre> (There are other paths inside malloc() which also pull in stdio code, too).
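<p> For illustration, this is roughly what redefining __cxa_atexit() to break that chain at its first link looks like. It is only a sketch: the stub silently discards atexit() registrations and static destructors, file names in the build comment are placeholders, and on some glibc versions other undefined symbols will pull the same code back in anyway, so treat it as a demonstration of the mechanism rather than a supported fix. <pre> /* Sketch: provide a do-nothing __cxa_atexit so that, when linking statically, the linker never extracts cxa_atexit.o -- and through it malloc.o and stdio -- from libc.a. */ int __cxa_atexit (void (*func) (void *), void *arg, void *dso_handle) { (void) func; (void) arg; (void) dso_handle; return 0; /* claim success without registering anything */ } int main (void) { return 0; } /* Build statically and compare, e.g.: gcc -static -o shrunk shrink.c ; size shrunk (file names are placeholders) */ </pre>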
<p> The situation is basically unchanged since Zack Weinberg posted <a href="http://sources.redhat.com/ml/libc-hacker/1999-08/msg00042.html">this post</a> in 1999, except that changes in glibc since then mean that his solution won't quite work (you need to redefine __cxa_atexit()...). <p> Fixing it is difficult, and since everyone on the glibc team hates wasting time on static-linking-related stuff that doesn't affect the common case of dynamically linked programs, it's not likely to happen soon. Thu, 03 Nov 2005 11:44:12 +0000 Real-life optimization work https://lwn.net/Articles/158307/ https://lwn.net/Articles/158307/ nix <blockquote> A quick look in the memory map shows that about half of it is used by the clock applet itself (HEAP+STACK). The rest is used by the non-readonly segments (and so non-shared) of the shared libraries. </blockquote> /proc/*/smaps is <i>useful</i>, isn't it? Thu, 03 Nov 2005 10:08:57 +0000 Real-life optimization work https://lwn.net/Articles/158305/ https://lwn.net/Articles/158305/ nix Indeed. A lot of this is increased alignment constraints, but in binaries, as opposed to in memory, a pile is caused directly by increased address sizes. E.g.: <pre> -rwxr-xr-x 1 nix users 1165752 Nov 3 09:55 32/libcrypto.so.0.9.7 -rwxr-xr-x 1 nix users 1398112 Nov 3 09:55 64/libcrypto.so.0.9.7 </pre> That's two stripped UltraSPARC binaries, both built with -mcpu=ultrasparc (thus using almost identical instructions), one built with -m32 and one with -m64 with a biarch GCC. Major differences are thus alignment of data (25Kb size difference), code (20Kb size difference)... and relocations (100Kb difference: the 64-bit relocation sections are twice the size, because they're basically big tables of addresses and all the addresses have doubled in size). Thu, 03 Nov 2005 10:03:39 +0000 Real-life optimization work https://lwn.net/Articles/158292/ https://lwn.net/Articles/158292/ rossburton Ah, the classic "foo takes 20M, it's evil!" argument.<br> <p> 10M of virtual memory, most of which is shared. That's GTK+, Pango, GConf, Bonobo, for a start, and often the Evolution calendar libraries being loaded to display your appointments and tasks in the calendar. Heap-wise, the clock uses a meg, and the executable code itself is 72K.<br> <p> pmap is your friend. Banish the ignorance and see how memory is actually being used! I found an interesting bug in Evolution Data Server which resulted in vastly inflated "ps" memory counts: threads were not being destroyed correctly, and for every thread (read: contact search) 8M was added to the VM size. Of this 8M only 4 bytes were actually used (it's the thread stack, and the thread didn't return anything), but it's easy to get "ps" sizes in the hundreds of megabytes this way. One-line patch later, bug fixed.<br> <p> Thu, 03 Nov 2005 09:16:48 +0000 Re: All hail the speed demons (O'Reillynet) https://lwn.net/Articles/158279/ https://lwn.net/Articles/158279/ gvy I don't. :-/<br> <p> Thu, 03 Nov 2005 07:29:47 +0000 glibc https://lwn.net/Articles/158262/ https://lwn.net/Articles/158262/ chant Some small part of this may be glibc bloat.<br> <p> A 10-instruction AMD64 assembly program to<br> xor a register to 0<br> increment that register<br> exit when that register is 0 again<br> <p> assembled to 731 bytes (not stripped of symbols/tables/etc).<br> <p> When linked with gcc -static -o &lt;progname&gt; assembly.o<br> it becomes <br> 614,748 bytes.<br> <p> That is incredible.<br> <p> Thu, 03 Nov 2005 04:44:12 +0000
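<p> A rough way to see how much of those 614,748 bytes is start-up and stdio machinery rather than the program itself is to build an equivalent that bypasses the C library entirely. The sketch below is illustrative only: the file name, the choice of the raw exit_group system call, and the build command are assumptions; the loop mirrors the one described above (and on a 64-bit counter would in practice run essentially forever, it is only there to give the program a body); and exact sizes depend on the toolchain and glibc version. <pre> /* tiny.c (name is a placeholder) -- a hypothetical no-libc counterpart to the assembly program described above. Build without the C library: gcc -nostdlib -static -o tiny tiny.c Linking the same logic against static glibc instead ("gcc -static") drags in the start-up and stdio machinery discussed in the glibc sub-thread, hence the size difference. */ void _start (void) { /* zero a register-sized counter, increment it, stop when it wraps to 0 */ volatile unsigned long counter = 0; do { counter++; } while (counter != 0); /* no libc, so exit via the raw exit_group system call (x86-64) */ __asm__ volatile ("syscall" : : "a" (231), /* __NR_exit_group */ "D" (0) /* exit status 0 */ : "rcx", "r11", "memory"); } </pre> Built that way, the whole binary comes out in the low kilobytes rather than hundreds of kilobytes, which is broadly consistent with the 731-byte figure quoted above for the unlinked object file.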