An LCA 2012 summary
Freedom is always a strong theme at LCA, and the 2012 version was no
exception. That emphasis tends to be especially strong in the keynote
talks, as should be clear from the reports on the keynotes by Bruce Perens and Jacob Appelbaum. Karen Sandler's keynote was
just as concerned with freedom and just as noteworthy; it would have
merited an article of its own had we not covered some of her topics from
another talk back in 2010. Karen retold
her chilling story of trying to get access to the source code for an
implanted device designed to protect her from abrupt heart failure. Not
only is that source not available to her; even the regulatory agency in the
US (the FDA) charged with approving the device normally does not review the
code. The presence of catastrophic bugs seems guaranteed.
In addition to simple worries about whether the device will work as needed, there is another concern: these devices are increasingly given wireless communications capabilities that allow them to be reconfigured and controlled remotely. To the extent that the security associated with that access can be verified, it seems to be notable mostly in its absence. In other words, implanted medical devices would appear to be open to a variety of denial of service attacks with extreme consequences. Given that some of them are implanted into important people (she named Dick Cheney, who, as a result of his implanted device, no longer has a pulse), it seems like only a matter of time until somebody exploits one of these vulnerabilities in a high-profile way. Karen noted dryly that, given the type of people she hangs around with, it would be unwise to expose herself to such attacks; she went out of her way to get an older device with no wireless connectivity.
She pointed out that a lot of other safety-critical devices - automotive control systems were mentioned in particular - have similar problems. The solution to the problem is clear: we need more free software in safety-critical applications so that we can all review the code and convince ourselves that we can trust our lives to it. And that, she said, is why she made the move to the GNOME Foundation. GNOME's work to make free software usable and attractive in current and future systems is, she said, an important part of getting free software adopted in places where we need it to be.
Another theme at LCA has always been represented by the maker
contingent: whether it's Arduino, rockets, robots, or home automation, the
people who make their own toys always turn out in force at LCA. Notable
among a strong set of maker-oriented talks was "Rescuing Joe" by Andrew
"Tridge" Tridgell. The challenge here is to make an autonomous aircraft
that can search a defined area for a lost bushwalker ("hiker," in US
dialect), drop a water bottle nearby (without hitting him), then return
safely to its landing point. This challenge has been run for a few
years, but nobody has yet fully achieved its goals; Tridge's team hopes to
be the first to succeed.
Getting there requires the design of a complex system involving autonomous avionics, an independent failsafe mechanism that will crash the plane if it leaves the search area, computer vision systems to locate the hiker, mechanical systems to reliably drop the water bottle in the desired location, and high-bandwidth digital communications back to the launch base. The test systems currently run with a combination of Pandaboard and Arduino-based systems, but the limitations of the Arduino are becoming clear, so the avionics are likely to move to a second, Linux-based Pandaboard in the near future.
This project requires the writing of a lot of software, most of which is finding its way back upstream. The hardware requirements are also significant; Tridge noted that the team received a sophisticated phased-array antenna as a donation with a note reading "thanks for rsync." All told, "challenge" appears to not even begin to describe the difficulty of what this team has taken on. The whole talk, done in Tridge's classic informative and entertaining style, is well worth watching.
Rusty Russell and Matt Evans recently took a look at V6 Unix, as built for the PDP-11, and noted something obvious but interesting: it was a whole lot smaller than the systems we are running now. The cat binary on that system was all of 152 bytes - in an era when everything was statically linked - while cat on Ubuntu 11.10 weighs in at 47,696 bytes - and that is with dynamic linkage. We have seen similar growth in grep (2,190 bytes to 151,056) and ls (4,920 bytes to 105,776). So they asked: where is all this growth coming from, and what did we get for it?
What followed was an interesting look into how Unix-like systems have changed over the years; watching the video is highly recommended. Their first observation was that contemporary binaries could be reduced in size by about 30% by using the GCC -Os option, which causes the compiler to optimize for size. In other words, we are paying a 30% size penalty in order to gain some speed; the actual speed benefit they measured was about 9%. But there is a lot more to it than that.
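Readers who want to try this kind of measurement themselves need nothing fancier than a toy program; the source and build commands below are only an illustration (not the test code used in the talk), but the same comparison applied to real programs shows the effect more dramatically.

    /*
     * demo.c - build the same source twice and compare the results:
     *
     *   gcc -O2 -o demo demo.c && size demo
     *   gcc -Os -o demo demo.c && size demo
     *
     * The -Os build asks GCC to prefer smaller code over faster code.
     */
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        /* A little string handling to give the optimizers something to chew on. */
        char buf[256] = "";

        for (int i = 1; i < argc; i++) {
            strncat(buf, argv[i], sizeof(buf) - strlen(buf) - 1);
            strncat(buf, " ", sizeof(buf) - strlen(buf) - 1);
        }
        printf("%zu bytes of arguments: %s\n", strlen(buf), buf);
        return 0;
    }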
A simple program consisting of a single "return 42;" line on
Ubuntu will, when built statically, weigh in at about 500,000 bytes. Rusty
and Matt determined that this program, which makes no direct C library
calls, was pulling in about 17% of glibc anyway. Even the simplest
program must now make provisions for dynamic loading,
atexit() handling, proper setuid behavior, and more. So the
program gets huge but, in this case, only about 2% of the pulled-in code
actually gets run. In general, they found, most of the code dragged in by
contemporary programs is simply wasted space. That waste can be reduced
considerably by linking against dietlibc instead of glibc, though.
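Those who would like to poke at the static-linking side of this themselves can start with something like the sketch below. It is not the program used in the talk, just the same idea in miniature; the dietlibc comparison assumes that dietlibc and its "diet" compiler wrapper are installed.

    /*
     * ret42.c - the whole program.  It makes no direct C library calls, yet a
     * static glibc build still drags in startup, dynamic-loading, and atexit()
     * machinery.
     *
     *   gcc -static -o ret42-glibc ret42.c && size ret42-glibc
     *   diet gcc -o ret42-diet ret42.c && size ret42-diet   (if dietlibc is installed)
     */
    int main(void)
    {
        return 42;
    }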
How much does 64-bit capability cost? An amusing exercise in porting the V6 code to the imaginary 64-bit "PDP-44" architecture increased its size by about 50%; the size difference between 32-bit and 64-bit Ubuntu programs is rather smaller, at about 9%. Use of "modern infrastructure" (which, for example, forces malloc() to be used instead of sbrk() in all programs) bloats things by about 120%. The large growth in features (ls has 60 options) leads to a massive 440% increase in size; they also measured a 20% time overhead caused by rarely-used features in ls. It's worth noting that half of that time cost goes away when running with LANG=C, leading to the conclusion that locales and other flexibility built into contemporary systems have a large cost. In the end, though, these appear to be costs that we are willing to pay.
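The locale overhead, at least, is easy to see in isolation. The following little program is not from the talk; it is a rough sketch that times locale-aware string collation - the sort of work ls does when sorting file names - first under the C locale and then under whatever locale the environment (LANG and friends) selects. The gap on any particular system will vary, but it gives a feel for where some of the LANG=C savings come from.

    /* locale-cost.c - compare strcoll() under the C locale and the
     * environment's locale.  Build with:  gcc -O2 -o locale-cost locale-cost.c
     */
    #include <locale.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    /* Time n locale-aware comparisons of two nearly identical strings. */
    static double time_strcoll(long n)
    {
        char a[] = "locale-collation-benchmark-string-0";
        char b[] = "locale-collation-benchmark-string-1";
        long total = 0;
        clock_t start = clock();

        for (long i = 0; i < n; i++) {
            a[sizeof(a) - 2] = '0' + (i & 7);   /* vary the data so calls are not hoisted */
            total += strcoll(a, b);
        }
        if (total == 42)
            puts("surprising");                 /* keep 'total' (and the loop) alive */
        return (double)(clock() - start) / CLOCKS_PER_SEC;
    }

    int main(void)
    {
        const long n = 5000000;

        setlocale(LC_ALL, "C");
        printf("C locale:           %.2f seconds\n", time_strcoll(n));

        setlocale(LC_ALL, "");                  /* honor LANG/LC_*, as ls does */
        printf("environment locale: %.2f seconds\n", time_strcoll(n));
        return 0;
    }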
David Rowe gave a fascinating talk on the development of Codec2, a speech-oriented codec that is able to produce surprisingly good voice quality at data rates as low as 1,400 bits/second. To understand the math involved, one should watch the video. But even without following that aspect of things, the talk is an interesting discussion of the open development of a patent-free codec with interesting real-world applications - sufficiently interesting that it risked being classified as a munition and treated like cryptographic code.
In summary, LCA remains unique in its combination of strongly technical talks, its freedom-oriented and hands-on character, the wide variety of topics covered, and its infectious Australian humor. There is a reason some of us seem to end up there every year despite the painful air travel required. Linux Australia has put together a structure that allows the conference to be handed off to a new team in a new city every year, bringing a fresh view while upholding the standards set in previous years. On that score, LCA 2012 lived up to expectations in a seemingly effortless manner - a job well done indeed. The organizers have set a high bar for the 2013 team (Canberra, January 28 to February 2) to live up to.
[ Conference videos can be found
on YouTube,
in Ogg
format, and in
WebM. Your editor would like to thank the LCA 2012 organizers for
assisting with his travel to the event. ]
Index entries for this article
Conference: linux.conf.au/2012
Posted Jan 26, 2012 5:37 UTC (Thu) by jmorris42 (guest, #2203)
Imagine that Linux, from the kernel to the desktop, had made size and performance important, such that it could run a modern-looking desktop in half the RAM and at half the CPU usage. A couple of years ago there was a brief moment when Linux was actually shipping on netbooks. Imagine that this smaller and faster Linux had shipped on those machines and that, since a key design goal of the original netbook was low cost, the products were scaled down to match. Say 500MHz and 256MB RAM. Now imagine trying to shoehorn Windows XP onto those, even if Microsoft was essentially giving it away.
Now imagine resource usage low enough that the netbook revolution could have been launched a couple of years before ASUS partnered with Intel, but on REALLY low-spec units on the scale of the WinCE mini-laptops of that era.
Can anyone fault Google for dumping all of the GNU/X userspace for Android and just keeping the Linux kernel? The first Google phone had 128MB of RAM (after DSP/GPU overhead); imagine them trying to shoehorn GTK, ALSA, and a few *kits into that... oh wait, Maemo did it, so we don't have to imagine. Just picture a phone as slow as an N770. That would have been an iPhone killer fer sure!
Again, imagine we had cared about performance before the mad rush to own phones and tablets. We would now own that space. But developers write for developers' hardware.
Posted Jan 26, 2012 9:11 UTC (Thu) by fuhchee (guest, #40059)
Posted Jan 26, 2012 11:30 UTC (Thu) by njwhite (guest, #51848)
Posted Jan 30, 2012 20:52 UTC (Mon) by BenHutchings (subscriber, #37955)
"Say 500MHz and 256MB RAM. Now imagine trying to shoehorn Windows XP onto those, even if Microsoft was essentially giving it away."
You do realise that Windows XP was released in 2001, right? That was a very good system spec for that time, and it is more than adequate to run XP. Your point that the Linux desktop is bloated is even more valid than you think!
Posted Jan 31, 2012 9:22 UTC (Tue) by Fowl (subscriber, #65667)
If you're not savvy enough to run without anti-virus, you may as well give up.
Posted Jan 31, 2012 18:14 UTC (Tue) by Jonno (subscriber, #49613)
Even back in 2001, when Windows XP was released, it was "common knowledge" that the practical minimum requirement for Windows XP was 512 MB RAM. Much the same way, the "common knowledge" practical minimum requirement for Windows Vista is 2 GB RAM, even though the official system requirement is only 512 MB (with which I seriously doubt you would even be able to launch notepad.exe).
Posted Feb 2, 2012 13:58 UTC (Thu) by nye (subscriber, #51576)
It really wasn't. I got my first summer job in 2001, and used the money to build a machine with 384MB of RAM, which everyone considered ludicrous. It was a couple of years after that before 512MB became at all commonplace (and indeed three or four years ago 512MB was *still* the default on machines marketed to everyday users).
The release version of Windows XP runs just fine with 128MB; SP3 is basically unusable without at least 512 (though I still see people using it with 256).
Posted Feb 2, 2012 14:09 UTC (Thu) by renox (guest, #23785)
The thing is, even if you reduce desktop resource usage, modern applications need lots of resources, so this may not help so much.
Posted Jan 26, 2012 10:05 UTC (Thu) by appie (guest, #34002)
Posted Jan 26, 2012 13:26 UTC (Thu) by etienne (guest, #25256)
Posted Jan 26, 2012 15:59 UTC (Thu) by cesarb (subscriber, #6266)
Of course, the infrastructure responsible for loading these separate files still counts.
Posted Jan 27, 2012 15:50 UTC (Fri) by jeremiah (subscriber, #1221)
Posted Jan 27, 2012 19:58 UTC (Fri) by giraffedata (guest, #1954)
Considering just the base language, I think it's true that in V6 Unix programmers consciously avoided detailed messages in order to save the space to store the text. To the extent that the new, larger programs are that way because the error messages are more informative, we definitely got our money's worth.
But it looks to me like the three-word error message is still king in the Unix world, and messages longer than one line are rare. So is having separate messages for separate conditions. So I don't think better messages account for much of the bloat.
Posted Jan 27, 2012 13:51 UTC (Fri) by pflugstad (subscriber, #224)
<http://www.muppetlabs.com/~breadbox/software/tiny/teensy....>
Fun read...
Posted Jan 27, 2012 20:05 UTC (Fri) by giraffedata (guest, #1954)
I've always found this question interesting (what did we get in return for the huge increase in resource usage from prior generations). I think about it more in execution time, though. Often the new release of something is considerably slower than the old one even though, as far as I can tell, I use only the features that were in the old one.
Emacs is particularly vexing that way. I'm sure wonderful things were added in the last 10 years, but I still edit the way I did 10 years ago and it takes a lot more CPU time. Emacs is apparently doing wondrous new things for me with every scroll-one-line command, because on some computers I can no longer scroll as fast as the keyboard repeats (which, by the way, makes it much more difficult to use -- it makes it scroll in jumps). I've always wondered what I'm getting in return for that.
Posted Jan 27, 2012 22:00 UTC (Fri) by nix (subscriber, #2304)
(If it's anything else, please come to emacs-devel and discuss it there!)