Escaping the cold for 70-degree days in Los Angeles might be a reason for some—Colorado-based LWN editors, for example—but it is clearly not the reason that most folks choose to attend the Southern California Linux Expo (SCALE). Many of the approximately 1400 attendees already live in the region, so it is the speakers, participants, and the expo floor that bring them in. I attended the sixth annual SCALE (SCALE 6x), held February 8-10, and it didn't take me very long to see why it continues to grow and prosper.
SCALE is a three-day event, with two main conference days on Saturday and Sunday and a set of mini-conferences running in parallel on Friday. Each mini-conference covers a focused topic of interest to the community, with this year's topics examining Women in Open Source (WIOS), Open Source Software in Education (OSSIE), and Demonstrating Open Source Healthcare Solutions (DOHCS). It was a full day, as each mini-conference had eight or more hour-long sessions.
Allison Randal kicked off the WIOS track with a presentation aimed at encouraging more women to give presentations at conferences. Her talk, "The Art of Conference Presentations", was not particularly gender-specific, of course. It covered the process of proposing, creating, and giving talks at conferences. Randal's advice was cogent, from avoiding "cute" titles to establishing credibility via your biography without feeling like you are bragging. Her most important point was to not wait around until you are the perfect speaker, but to go out and start speaking; your voice and style will come with practice.
Over in the OSSIE track, Dan Anderson related his experiences teaching
computer science concepts to middle and high school students over the last
fourteen years. His approach
is to use computing as a bridge between math, science, and technology. He
discussed the process of creating, or trying to create, a stable curriculum
in the face of rapid technological change. Because the hardware, operating
systems, and languages all change quickly, his courses need to focus on
concepts that are not specific to any of those. Over the years he has
taught, the language used in the advanced placement course—dictated by
the College Board—has gone from Pascal, through C++, to Java,
with some rumblings being heard about moving to Python. As he points out,
"much of what a High School student learns about technology will be
outdated by the time they graduate from college."
He uses How to Design Programs as the core text for his courses. The book uses DrScheme, a graphical programming environment based on Scheme, which allows different subsets of the language to be enabled based on the skill level of the student. Anderson has integrated various peripherals, like cameras and audio equipment, into the environment so that students can interact with the real world in interesting ways. His students work on projects like voice authentication and computer vision; this year's project is to recognize tic-tac-toe as drawn on a whiteboard.
Other topics from OSSIE included a tutorial introduction to the Moodle content management system (CMS) for online learning. Much like other CMS projects, Moodle allows the creation of websites with various kinds of content—audio, video, images, and text—but organized as a course. It provides a framework and philosophy to guide the development of online classes. Students access the content via the web, completing tasks, taking quizzes, and participating in forums and chats with other students.
Charles Edge (no relation) spoke about the challenges of implementing directory services for educational institutions. One problem is that the term "directory services" covers a large amount of ground, from tracking users (both employees and students) to allowing single sign-on (SSO) into multiple machines and services throughout the school. The biggest challenge can be handling the sheer number of people to be tracked. Open source solutions do exist—OpenLDAP for storing the information, Kerberos for single sign-on, and Simple Authentication and Security Layer (SASL) for extending the reach of the SSO into other services—but they are complex to configure and administer. For scalability and robustness in large installations, Edge suggests Microsoft's Active Directory, which was not a particularly popular opinion with the open-source-oriented audience.
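To give a feel for where some of that complexity lives, here is a minimal configuration sketch of one piece of the puzzle: mapping Kerberos identities, authenticated via SASL/GSSAPI, onto entries in an OpenLDAP directory. The realm and base DN are hypothetical, and a real deployment needs considerably more than this.

```
# slapd.conf fragment (SCHOOL.EDU and the DNs below are made up
# for illustration - substitute your own realm and directory tree).
# Map a SASL/GSSAPI identity such as "jstudent@SCHOOL.EDU" onto the
# matching person entry in the directory.
sasl-realm      SCHOOL.EDU
sasl-regexp
    uid=(.*),cn=school.edu,cn=gssapi,cn=auth
    uid=$1,ou=people,dc=school,dc=edu
```

With a mapping like this in place, a user who has obtained a Kerberos ticket with kinit can query the directory (with "ldapsearch -Y GSSAPI", for example) without entering a password again - the single-sign-on behavior Edge described.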
The first day closed with a WIOS panel discussion, where six of the women presenting or showing at the conference discussed the issues facing women in open source. The discussion was informal and wide-ranging with a great deal of audience participation. Audience members asked questions as well as offered opinions and theories on why the participation of women is low and what can be done to make things better. No real conclusions were reached, as is usual for discussions of this topic; it is one of the more puzzling attributes of the free/open source community.
The animated and amusing Ubuntu community manager Jono Bacon gave a rousing keynote to start things off on Saturday. He tried to ensure that everyone was awake by leading a greeting in multiple languages (including Klingon). His main point was to describe the responsibilities of the various "factions" that jockey to determine the future of open source software—companies, distributions, and communities—trying to show that each has an important role. In fact, it is up to all constituents to ensure that the greater Linux ecosystem thrives and that each group works well with the others. It was all pretty much "motherhood and apple pie" stuff, but well described and illustrated—all with Chuck Norris to keep track of the score. Bacon did provide the quote of the show when he said that free software was "started by a guy with a beard who was pissed off at a printer."
Saturday was also the first day that the expo floor was open. Some 80 booths were there, representing companies large and small as well as lots of free software projects. One of the more interesting booths contained a working simulator of a 747 cockpit. All of the instruments were driven from a realtime Linux box and the FlightGear flight simulator was used to generate the cockpit window view. The two machines communicated over the network and various laptops were able to view the flight from other perspectives by getting updates from the simulator. It was rather impressive.
The linuxastronomy.org project was also on hand with their telescope prototype. The telescope will be controlled via a Linux machine, allowing it to be pointed at locations specified by users. A Linux desktop application will send locations to the telescope over the internet, allowing it to be remotely controlled so that it can be installed on a mountaintop or other location with (relatively) little light pollution and good viewing conditions. In addition, the project was demonstrating many of the free astronomy programs available for Linux.
A mobile audio studio product, Indamixx, did not have a booth, but could be seen all over the show. The company loaned two of the UMPC-based devices to the conference, which were used to do podcasts of interviews with speakers and attendees. The device runs Linux with Audacity and Ardour along with other free software. The company has tweaked things to make it all work well and be easy to use on the device. It looks to be quite capable as well as easily portable.
In another interesting talk, David Maxwell of Coverity gave an update on their project to scan free software for security holes. The US Department of Homeland Security gave Coverity a grant to work with free software projects to use the Coverity Prevent static code analysis tool (once known as the "Stanford Checker") on the code. The scan project has found over 7,000 defects in around a hundred free software projects since its inception. Maxwell is the Open Source Strategist for Coverity; he is looking for more projects to participate. He is encouraging any free/open source software project to get in touch with him to get signed up for the program.
Projects that join get their code scanned, with a report being generated on the Coverity website for project members to view. The projects can then fix any of the issues that are actually bugs, mark others as "not a bug", and resubmit the code. The Coverity system will pull the latest code from the project's source repository and scan it again. Once all issues that the tool finds are handled, the project can move up to a higher "rung on the scan ladder", which will allow it to be scanned by more recent versions of the Coverity tool.
Bdale Garbee had perhaps the geekiest talk of the show on Saturday afternoon with "Open Avionics for Model Rockets". Garbee gave an overview of the hobby, which has gone far beyond the Estes rockets that many of us dabbled with in our youth. These rockets can go to 10,000 feet and above; just how high they go is one of the questions that led folks to start outfitting them with instruments. Deploying the recovery system—typically a parachute—at apogee is very desirable and a barometric sensor with a little bit of logic tied to the ejection charge can do just that. Unfortunately, all of the commercially available options for these systems are completely closed; even the protocol to talk to the device is not released by the manufacturers.
Garbee decided to once again combine one of his hobbies with open source to design and build an open device. Both the hardware and software will be released under free licenses (GPL and Open Hardware License); he had version 0.1 of the hardware (missing the accelerometer due to a problem in the board layout) with him at the show. The AltusMetrum system also has an onboard barometric sensor and will be able to support things like GPS devices and radio transmitters—so that lost rockets do not stay lost. Garbee expects to flight test the board and design version 0.2 of the hardware over the coming months.
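The apogee-detection logic described above is simple enough to sketch. What follows is an illustrative model, not AltusMetrum's actual firmware: convert each barometric pressure sample to a standard-atmosphere altitude, track the running maximum, and fire the ejection charge once altitude has fallen a few meters below that peak.

```python
# Illustrative barometric apogee detection; the names and thresholds
# here are hypothetical, not taken from the AltusMetrum design.

def pressure_to_altitude(p_pa, p0_pa=101325.0):
    """Standard-atmosphere altitude (meters) from pressure (pascals)."""
    return 44330.0 * (1.0 - (p_pa / p0_pa) ** (1.0 / 5.255))

def detect_apogee(pressure_samples, margin_m=5.0):
    """Return the index of the sample at which to fire the ejection
    charge: the first point where altitude has dropped margin_m below
    the running peak.  Returns None if apogee is never detected."""
    peak = float("-inf")
    for i, p in enumerate(pressure_samples):
        alt = pressure_to_altitude(p)
        peak = max(peak, alt)
        if peak - alt > margin_m:   # clearly past the peak
            return i
    return None

# Simulated flight: pressure falls as the rocket climbs, then rises
# again as it descends after apogee.
ascent = [101325.0 - 30.0 * t for t in range(100)]
descent = [98325.0 + 30.0 * t for t in range(50)]
fire_at = detect_apogee(ascent + descent)
```

The margin serves the same purpose as the "little bit of logic" mentioned above: without it, sensor noise could trigger the charge on any momentary dip in the altitude reading.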
Sunday's keynote, by Stormy Peters of OpenLogic, was entitled "Would you do it again for free?". Peters looked at whether external rewards, usually money, affect the motivation of open source developers; in particular, if the pay stops, will the project work stop as well? She cited four separate "studies" (including two that weren't intended as studies) that seemed to show that adding a reward, or penalty, can sometimes have a counter-intuitive effect (see an entry in her weblog for more information).
Peters came to no firm conclusions about what the long-term effects of paying open source developers would be, but there are some mitigating factors that provide hope that developers would continue if the paychecks stopped. When a payment or reward is in line with expectations for doing a particular task, it is much less demotivating. Also, if the payment is for working on the project, rather than being tied to a specific goal or milestone, it is less of a problem. Both of those are typically the case for folks who are paid—40% of open source developers are, according to Peters—for their work in the community.
After a last wander through the show floor, I was able to catch a few minutes of the talk given by Ken Gilmer and Angel Roman of Bug Labs describing their modular embedded Linux gadget building system. The system consists of a core module along with various plug-in devices—camera, motion detector, GPS, and so on—that can be combined into a single Java-programmable device. Many additional peripheral modules are planned. The software that runs on the device is free and Bug Labs has a community site to share application code; they are clearly hoping that they can foster a community of users and developers.
As can be seen, SCALE offers a wide variety of technical content in a well organized and fun conference. It has grown beyond the capacity of the Airport Westin where it has been held for the last few years; expect a new, bigger venue somewhere in LA next year. Over the last few years, SCALE has drawn from more areas of the southwest US in moving from a small, local conference to a regional one. If things continue, in another few years it may grow into a national conference; one can only hope that if that happens, it will continue to be as well run and interesting as it is today.
Keith Packard is a fixture at Linux-related events, so it was no surprise to see him turn up at LCA. His talk covered X at a relatively high, feature-oriented level. There is a lot going on with X, to say the least. Keith started, though, with the announcement that Intel had released complete documentation for some of its video chips - a welcome move, beyond any doubt.
There are a lot of things that X.org is shooting for in the near future. The desktop should be fully composited, allowing software layers to provide all sorts of interesting effects. There should be no tearing (the briefly inconsistent windows which result from partial updates). We need integrated 2D and 3D graphics - a goal which is complicated by the fact that the 2D and 3D APIs do not talk to each other. A flicker-free boot (where the X server starts early and never restarts) is on most distributors' wishlist. Other desired features include fast and secure user switching, "hotplug everywhere," reduced power consumption, and a reduction in the (massive) amount of code which runs with root privileges.
So where do things stand now? 2D graphics and textured video work well. Overlaid video (where video data is sent directly to the frame buffer - a performance technique used by some video playback applications) does not work with compositing, though. 3D graphics does not always work that well either; Keith put up the classic example of glxgears running while the window manager is doing the "desktops on a cube" routine - the 3D application runs outside of the normal composite mechanism and so cannot be rotated with all the other windows.
On the tearing front, only 3D graphics supports tear-free operation now. Avoiding tearing is really just a matter of waiting for the vertical retrace before making changes, but the 2D API lacks support for that.
The integration of APIs is an area requiring some work still. One problem is that Xv (video) output cannot be drawn offscreen - again, a problem for compositing. Some applications still use overlays, which really just have no place on the contemporary desktop. It is impossible to do 3D graphics to or from pixmaps, which defeats any attempt to pass graphical data between the 2D and 3D APIs. On the other side, 2D operations do not support textures.
Fast user switching can involve switching between virtual terminals, which is "painful." Only one user session can be running 3D graphics at a time, which is a big limitation. On the hotplug front, there are some limitations on how the framebuffer is handled. In particular, the X server cannot resize the framebuffer, and it can only associate one framebuffer with the graphics processor. Some GPUs have maximum line widths, so the one-framebuffer issue limits the maximum size of the internal desktop.
With regard to power usage: Keith noted that using framebuffer compression in the Intel driver saves 1/2 watt of power. But there are a number of things to be fixed yet. 2D graphics busy-waits on the GPU, meaning that a graphics-intensive program can peg the system's CPU, even though the GPU is doing all of the real work. But the GPU could be doing more as well; for example, video playback does most of the decoding, rescaling, and color conversion in the CPU. But contemporary graphics processors can do all of that work - they can, for example, take the bit stream directly from a DVD and display it. The GPU requires less power than the CPU, so shifting that work over would be good for power consumption as well as system responsiveness.
Having summarized the state of the art, Keith turned his attention to the future. There is quite a bit of work being done in a number of areas - and not being done in others - which leads toward a better X for everybody. On the 3D compositing front, what's needed is to eliminate the "shared back buffers" used for 3D rendering so that the rendered output can be handled like any other graphical data. Eliminating tearing requires providing the ability to synchronize with the vertical retrace operation in the graphics card. The core mechanism to do this is already there in the form of the X Sync extension. But, says Keith, nobody is working on bringing all of this together at the moment. Getting rid of boot-time flickering, instead, is a matter of getting the X server properly set up sufficiently early in the process. That's mostly a distributor's job.
To further integrate APIs, one thing which must be done is to get rid of overlays and to allow all graphical operations (including Xv operations) to draw into pixmaps. There is a need for some 3D extensions to create a channel between GLX and pixmaps.
Supporting fast user switching means adding the ability to work with multiple DRM masters. Framebuffer resizing, instead, means moving completely over to the EXA acceleration architecture and finishing the transition to the TTM memory manager. In the process, it may become necessary to break all existing DRI applications, unfortunately. And multiple framebuffer support is the objective of a project called "shatter," which will allow screens to be split across framebuffers.
Improving the power consumption means getting rid of the busy-waiting with 2D graphics (Keith says the answer is simple: "block"). The XvMC protocol should be extended beyond MPEG; in particular, it needs work to be able to properly support HDTV. All of this stuff is currently happening.
Finally, on the security issue, Keith noted the ongoing work to move graphical mode setting into the kernel. That will eliminate the need for the server to directly access the hardware - at least, when DRM-based 2D graphics are being done. In that case, it will become possible to run the X server as "nobody," eliminating all privilege. There are few people who would argue against the idea of taking root privileges away from a massive program like the X server.
In a separate talk, Dave Airlie covered the state of Linux graphics at a lower level - support for graphics adapters. He, too, talked about moving graphical mode setting into the kernel, bringing an end to a longstanding "legacy issue" and turning the X server into just a rendering system. That will reduce security problems and help with other nagging issues (graphical boot, suspend and resume) as well.
Mode setting is the biggest area of work at the moment. Beyond that, the graphics developers are working on getting TTM into the kernel; this will give them a much better handle on what is happening with graphics memory. Then, graphics drivers are slowly being reworked around the Gallium3D architecture. This will improve and simplify these drivers significantly, but "it's going to be a while" before this work is ready. The upcoming DRI2 work will improve buffering and fix the "glxgears on a cube" problem.
Moving on to graphics adapters: AMD/ATI has, of course, begun the process of releasing documentation for its hardware. This happened in an interesting way, though: AMD went to SUSE in order to get a driver developed ahead of the documentation release; the result was the "radeonhd" driver. Meanwhile, the Avivo project, which had been reverse-engineering ATI cards, had made significant progress toward a working driver. Dave took that work and the AMD documentation to create the improved "radeon" driver. So now there are two competing projects writing drivers for ATI adapters. Dave noted that code is moving in both directions, though, so it is not a complete duplication of work. (As an aside, from what your editor has heard, most observers expect the radeon driver to win out in the end).
The ATI R500 architecture is a logical addition to the earlier (supported) chipsets, so R500 support will come relatively quickly. R600, instead, is a totally new processor, so R600 owners will be "in for a wait" before a working driver is available.
Intel has, says Dave, implemented the "perfect solution": it develops free drivers for its own hardware. These drivers are generally well done and well documented. Intel is "doing it right."
NVIDIA, of course, is not doing it right. The Nouveau driver is coming along, now, with 5-6 developers working on it. Dave had an RandR implementation in a state of half-completion for some time; he finally decided that he would not be able to push it forward and merged it into the mainline repository. Since then, others have run with it and RandR support is moving forward quickly. It was, he says, a classic example of why it is good to get the code out there early, whether or not it is "ready." Performance is starting to get good, to the point that NVIDIA suddenly added some new acceleration improvements to its binary-only driver. Dave is still hoping that NVIDIA might yet release some documents - if it happens by next year, he says, he'll stand in front of the room and dance a jig.

Part 4 of this retrospective ended in October, 2002, when LWN adopted its current subscription model. That change brought a certain amount of stability for LWN (too much, we might argue), but, in the wider Linux world, things continued to happen. This installment picks up where the last left off.
During this period, the business of Linux was relatively quiet - not that many acquisitions, but not many failures either. But quite a bit was happening around legal issues, copyright enforcement, and more...
BitKeeper flames were a more-or-less constant feature in those days, but BitKeeper became an established part of the kernel development process anyway. In the October 10, 2002 edition, your editor wrote: "If Larry McVoy (or his board of directors) wakes up hung over one morning and decides to end free access to BitKeeper, the show is over." That was, unfortunately, an example of your editor's crystal ball working rather better than usual.
The trojaning of sendmail was the first of a few such incidents. It looked like a scary trend for a while, but, in fact, the frequency of this kind of attack has dropped quite a bit in the intervening years.
By this point, there was a certain amount of discomfort over the direction SCO was taking. But nobody had any clue of just how weird it would actually get.
Remember the days of disruptive worms? The MS-SQL worm was one of the scariest, in that it did most of its propagation in just a few minutes. We don't see too many worms like that anymore; contemporary crackers prefer to turn systems into zombies and rent them out.
And so it began, with SCO telling the world that the Linux community could not possibly have achieved what it did unless the work had been stolen by IBM.
For the remainder of this retrospective, your editor will attempt to keep the number of SCO-related entries to a minimum. It has been quite an experience to go back and reread all of those McBride/Enderle/Boies/DiDio/Lyons/etc. quotes, and it is tempting to put them all here. But that temptation will be resisted; those who want to relive that bit of bizarre history in more detail can read the LWN pages directly or dig through the considerable resources at Groklaw.
SCO is about as scary as Y2K now, but, in 2003, the SCO suit was a frightening event. To many of us it seemed possible that, maybe, one out of thousands of developers might have slipped something improper into the kernel code base. And, in any case, we were under attack by a company with millions of dollars to burn and a loud-mouthed CEO. The whole thing cost us a lot of time and anxiety - and, for those most directly involved, money.
Nonetheless, your editor will reiterate his claim that, overall, the SCO attack has been good for us. We needed to improve our legal defenses; as Linux grew, there could be no doubt that people would attempt to use the legal system to grab a piece of the pie. In SCO we had an arrogant assailant with no substance; we were attacked by a clown. We got the chance to straighten up our processes, arrange better legal help, and prove that our code is clean without the inconvenience of facing a complaint with a bit of legitimacy. The community is now close to immune to copyright-based attacks, and is much better poised to deal with similar attackers (patent trolls, for example) who could still do us some serious damage.
Novell's claim was clearly significant at the time, though it fell below the radar again for several months. In the end, of course, this was the factor which killed SCO. That is convenient, but almost unfortunate too: there would have been value in seeing the substance of SCO's claims demolished in court.
In these days of fast releases, it is interesting to consider that, for the first half of 2003, there were no stable kernel releases at all.
OSDL was often controversial in the Linux community, but nobody doubted that providing a home for developers like Linus and Andrew was a good thing. Until then, neither had held a job where working on Linux was his primary duty.
Meanwhile, few suspected how big the software patent battle in Europe would become - or that the anti-patent side would emerge victorious (for now).
Selling Linux in boxes was how Red Hat got going, so the end of that business was a clear sign that things had changed. The separation of Mozilla and AOL (which had bought Netscape) was a little scary at the time; it seemed that the project could fade away before the Mozilla browser became truly ready and that it was an Internet Explorer future for all of us. Things were a little lean at Mozilla for a while. Now that Mozilla is bringing in tens of millions of dollars every year, the idea that it once sought donations is amusing.
SCO, remember, "encrypted" its slides of "copied" code by switching them to a Greek font - a scheme which the community, somehow, managed to overcome. The code in question was straight from ancient Unix; it had been contributed by SGI, and had already been removed by the time it was revealed. After this, nobody worried that SCO might come up with the "millions of lines" of code that, it said, it could prove it owned.
Fedora started with all kinds of talk about what a community-oriented project it would be. The reality was rather slower in coming, but is beginning to be visible now. Meanwhile, Fedora was a useful (and used) distribution from the outset.
The LinkSys settlement was the result of a long battle. It was an important early GPL enforcement action which led to a number of distributions built for the sole purpose of doing interesting things on LinkSys routers. The ironic result is that LinkSys almost certainly sold quite a few more units than it would have if it had continued to hold on to the code.
2.6.0 came out almost exactly three years after 2.4.0. For the few developers who had observed the 2.4 feature freezes, their code - which could be four years old at that point - was only now making it into an official mainline release. It was not yet understood at the time, but, once 2.6.0 came out, the "new kernel development model" started to take shape. Never again would we go years between major stable releases.
There had been trouble in XFree86 for a long time, but the license change brought it all to a head. This was the move which killed XFree86, led to the creation of the revitalized X.org, and, eventually, brought life back to X development.
The first Grumpy Editor article was never intended to be the beginning of a series; your editor was simply grumpy that the Galeon browser had gone the route of many early GNOME 2.x applications: less configurability, fewer features, and worse performance. The persona proved popular with readers, though, and the Grumpy Editor has been making irregular appearances on LWN ever since.
The attack on Linux users had been long foreshadowed - and feared. Regardless of the validity of its claims, SCO could certainly make life hard for Linux by attacking those who use it. The attacks were so laughable, though, that they had no appreciable effect, even in the short term.
For those who don't remember, OSRM was a scheme to sell insurance against legal attacks to users of free software. But, by this point, nobody was all that worried about SCO, and OSRM never did take off. On the other hand, MandrakeSoft did succeed in getting out of bankruptcy and is still with us.
This installment started with BitKeeper, and will end there. For all the complaints about BitKeeper and its associated "don't piss off Larry" license, few could contest the claim that kernel development was proceeding at a much faster pace. We needed a tool like that. To this day, it remains discouraging that we were not able to develop a distributed revision control system for ourselves until Larry McVoy and BitMover showed the way. If there was ever an itch in need of scratching, this was it.
The next installment (which will most likely appear two weeks from now) will start with April, 2004 and come fairly close to the present. Stay tuned.
Page editor: Jake Edge
Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds