The Mesa project (aka "Mesa 3D") is
"almost old enough to drink" (21 in the US), said Ian Romanick when
introducing Brian Paul's X.Org Developers
Conference (XDC) talk. Paul is "the father of Mesa", Romanick said; he gave
a look back at the 20 (or so) years of development on the open source
OpenGL implementation.
Paul noted that there is some question about when development on
Mesa actually began. He has been doing some archeology to try to figure it
out, but he doesn't think the August 1993 date noted on the Mesa web site
is correct. He thinks he may have gotten started in 1992 and hopes to
eventually find the Amiga floppies (with timestamps) to determine that.
He was following the comp.graphics newsgroup in 1992 when Silicon Graphics
(SGI) posted an
announcement of the OpenGL API. At the time, he was working at the University of
Wisconsin on a visualization package using SGI's IRIS GL, which was "terrible".
There were other contenders for a replacement (e.g. PHIGS), but SGI was
pretty dominant in the graphics world. After the announcement, he read all
the documentation "in a couple of days" and "jumped pretty quickly" into
OpenGL. It was "beautiful and slick", he said, which is an interesting
comparison to OpenGL today, which has 1000 functions and "tons of ways to
do the same thing".
Before all of that, though, he got started with computers (and graphics) as
a freshman in high school on the TRS-80 Model III. It had 128x48
monochrome graphics (with non-square pixels) and he wrote a paint program
for it in BASIC. That's what got him "hooked on graphics", he said.
His first computer was an Atari 800XL, which was a "quantum leap" as it was
the first color graphics he had access to (160x192 with four colors, or
320x192 monochrome). Using that, he wrote a 3D modeling program, a 3D
renderer using Painter's
algorithm, and a ray tracing program. The ultimate output from the ray
tracer was a ten-frame animation that took two-and-a-half days to render.
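For readers unfamiliar with the technique, the painter's algorithm simply
draws polygons from back to front, so that nearer surfaces overwrite farther
ones. A minimal illustrative sketch in Python (the names here are
hypothetical; this is not Paul's Atari code):

```python
def average_depth(polygon):
    """Mean z of a polygon's vertices; each vertex is (x, y, z)."""
    return sum(z for _, _, z in polygon) / len(polygon)

def painters_render(polygons, draw_polygon):
    # Larger z means farther from the viewer in this convention,
    # so sort descending and draw far-to-near.
    for polygon in sorted(polygons, key=average_depth, reverse=True):
        draw_polygon(polygon)

# Example: record the draw order of two overlapping triangles.
order = []
far_tri = [(0, 0, 10.0), (4, 0, 10.0), (2, 3, 10.0)]
near_tri = [(1, 1, 2.0), (5, 1, 2.0), (3, 4, 2.0)]
painters_render([near_tri, far_tri], order.append)
# far_tri is drawn first, so near_tri ends up on top
```

The appeal on memory-starved machines is that no Z-buffer is needed; the cost
is that the depth sort can be wrong for intersecting or cyclically
overlapping polygons.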
By the time he got to college, he had an Amiga 500, which had 640x400
resolution with 4K colors. Even though it had lots more RAM (1M vs. 64K on
the Atari), it still lacked the memory for a real Z-buffer. That didn't
prevent him from writing a scan-line rasterizer with lighting and
dithering as well as another ray tracer. That was all written in Modula-2
because that compiler was cheaper than the Amiga C compiler, he said with a
laugh. He did eventually switch to Matt Dillon's C compiler when that became
available.
In the early 90s, he was working on visualization software on "really
expensive" workstations (like $180K). The price of those workstations made
them prohibitive for most, but there were other barriers too. Each vendor
had its own API for graphics, so graphics programs had to have multiple
backends to support each. But SGI and IRIS GL were pretty dominant, and
that was most of what he was doing then.
There were lots of 2D workstations around at the time, but it was 3D
hardware that was lacking. The Vis5D 3D visualization package for
atmospheric science that he developed needed to be able to run on 2D
workstations. He found the VOGL library that was a small subset of IRIS
GL. It didn't have triangle rendering or hidden surface removal, both of
which he hacked in. "It worked but it was ugly", he said.
When OpenGL was announced, he immediately knew it was the direction he
wanted to go. SGI targeted it to replace all of the other alternatives, and
some industry players (DEC and IBM) were supportive of that, but
others (Sun and HP) were not. But it took a surprisingly long time before
vendors had OpenGL implementations. It was 1994 or 1995 before SGI
shipped anything, he said, and it wasn't until 1996 that it was available
across all SGI models.
Starting from scratch
He began work on Mesa (though it didn't yet have that name) in either 1992
or 1993. He was working on the Amiga at home and on Unix systems at work,
using floppy disks to move the code back and forth. He developed the device
driver interface to accommodate the two different drawing interfaces (Amiga
and X).
He had a "lot more spare time then", he said, so he had a mostly complete
OpenGL implementation in November 1994. It was still missing some
features, but he was using it at work and thought it might be useful to
others. So he started talking with SGI about open-sourcing the code.
SGI was "pretty reasonable", he said, contrasting that reaction to what he
thinks Microsoft would have said to a similar request. He and SGI went
back and forth a bit, but eventually the company said "go for it". SGI
said he shouldn't use the letters "GL" in the name and that it needed a
disclaimer saying it was not a "real OpenGL". The irony is that today Mesa
is a real OpenGL "and we have the certification to prove it", as
an audience member noted. The name, Mesa, "just popped into my
head one day", Paul said.
He thinks that SGI was receptive to the idea of releasing Mesa because it
wanted to get the interface out in front of people as much as possible.
Open source would "push that goal forward". Folks that didn't have
OpenGL for their workstation could get Mesa instead, so it filled a hole in
the market.
He got permission to release the code in early 1995 and announced
Mesa 1.0 beta on comp.graphics on February 3, 1995. When he did that,
he had no feel for how much interest there would be, but within days it became
clear that there was lots. He started receiving dozens of emails about the
project, ranging from "wow" or "thanks" to patches.
To support the project, he wrote his first page for the web, which was just
getting traction at that point. He also set up a mailing list. He only
had email access at work, so he had to try to deal with the
mailing list before work or on weekends. As the project quickly ramped up,
his boss, Bill Hibbard, was quite supportive and allowed him to use part of
his workday for Mesa.
The project owes a lot to Hibbard, he said.
In the early days, the project faced a number of issues. Supporting
multiple different systems (IRIX, SunOS, HPUX, ...) was sometimes
difficult due to compiler incompatibilities. Some of those kinds of problems
are still seen today. The displays could be problematic as well. 24-bit
displays were expensive, so most displays were 8 or 16-bit. That meant
dealing with color maps and dithering. He remembers spending "days and
weeks" on dithering patterns, which just "seems crazy" today.
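Ordered (Bayer) dithering is one classic way to fake extra shades on shallow
displays: each pixel is compared against a position-dependent threshold from
a small tiled matrix. A small illustrative sketch, not Mesa's actual code:

```python
# 4x4 Bayer threshold matrix; tiling it over the image spreads
# quantization error into a regular pattern the eye averages out.
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def dither_to_1bit(gray, width, height):
    """Threshold an 8-bit grayscale image (row-major list of 0-255
    values) down to 0/1, preserving average brightness."""
    out = []
    for y in range(height):
        for x in range(width):
            # Scale the matrix entry into the 0-255 range.
            threshold = (BAYER4[y % 4][x % 4] + 0.5) * 16
            out.append(1 if gray[y * width + x] > threshold else 0)
    return out

# A flat mid-gray (128) patch dithers to half on, half off.
pixels = dither_to_1bit([128] * 16, 4, 4)
```

Extending the same idea to color-mapped 8-bit displays, with per-channel
thresholds and a limited palette, is where the "days and weeks" of tuning
come in.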
Handling lots of patches and rolling out new releases without using
revision control was another problem area. He's not sure why he didn't use
SCCS or RCS, but he didn't have any revision control until Mesa moved to a
SourceForge predecessor in the late 1990s, he said.
Performance was another issue. PCs had 16MHz 386 processors. Workstations
were faster, but "still not super fast". Calling Mesa "interactive 3D
graphics" was sometimes an overstatement, he said.
But OpenGL was "pretty small back then", so he could still keep all of the
Mesa code in his head in those days. That "was a lot of fun", but with a
million lines of code today, Mesa is "pretty overwhelming".
In the late 1990s, 3D was really starting to take off. There was consumer
3D hardware available from companies like NVIDIA, ATI, Matrox, 3dfx, and
others, with price points that were fairly reasonable. Those cards used
Direct3D, Glide (for 3dfx cards), or some one-off vendor API, and there was
little OpenGL support.
SGI did open up its GLX client/server code so that programs could use OpenGL
in X windows. The Utah GLX project, which was an XFree86 server-side
driver that provided indirect rendering, also got started. Paul said he was not
directly involved in Utah, but it attracted some talented developers; those
included Keith Whitwell, who had done a lot of work to increase the
performance of Mesa. Around that time, John Carmack of id Software donated
$10,000 to Mesa, which funded more of Whitwell's work.
In addition, Precision Insight was formed as a company to work on X and
other related technology in 1998. Paul met the founders of the company at
the SIGGRAPH 99 conference and joined Precision Insight in September
1999. That meant that he could finally work on Mesa, Direct Rendering
Infrastructure (DRI), and GLX full time. Prior to that, most of his work
on Mesa had been done in his spare time.
In the 2000s, DRI became well established. Precision Insight was acquired
by VA Linux, but then the dot-com bubble popped "and we all got laid off",
so he helped form Tungsten
Graphics with other former Precision Insight employees in 2001. There were
multiple releases of Mesa (4.x–7.x) over that decade with
"lots of hardware drivers". The Gallium3D
project was started by Whitwell in 2008 to simplify creating new 3D
drivers. The existing device driver interface was not a good match to what
the graphics hardware was doing, so Gallium3D provided a
different interface that would help reduce the code duplication in Mesa
drivers, he said.
Paul said that he never expected Mesa to be so successful, nor to last as
long as it has. He would have guessed that vendors would put out their own
OpenGL libraries so that there would be no need for Mesa. But people
still need software rendering, using llvmpipe and other drivers. It's
also important to have an open source implementation available that people
can adapt and use for new projects, he said.
There have been as many as 1000 contributors to Mesa over the years, which
is far beyond what he expected. The project has grown in size and
complexity beyond his expectations as well. Mesa is in all of the Linux
distributions and used by thousands (or millions?) of users every day. By
his measure, it is a "very successful project".
Paul outlined what he saw as the keys to that success. Mesa was "the right
project at the right time". He happened to have a job that caused him to
follow comp.graphics, which led him to start the project. If he hadn't,
though, someone else would have around the same time, he said. He also has
always liked the perspective that Fred Brooks has espoused about computer
scientists as "tool makers". He took that idea to heart and made a tool
that others use to do their job. In the early days, that job was
visualization, today it is entertainment, which is "not as noble" as it
once was, "but still pretty cool".
As project lead, he let people "smarter than me" take control of areas they
were interested in. Beyond that, the Mesa development community has always
been "pretty civilized". There are no insults or screaming in the mailing
list. He has seen other open source leaders who do that and he doesn't
like it at all, he said. But perhaps the most significant reason for
Mesa's success is that it was following a specification. "We are the
carpenters" who are building a "house that someone else designed".
He closed by thanking the developers who contributed "amazing work" to the
project over the years. Mesa is now his calling card and the project has
greatly exceeded his expectations. In addition, he met some of his closest
friends through the project.
At the start of the talk, Paul said it
has been around five years since he last attended an XDC. He is now with
VMware (which bought Tungsten Graphics along the way). He said that he was
quite impressed to see all of the new faces of those involved with
X today. Keith Packard likely summed up the feelings of many in the room (and
beyond) when he
thanked Paul for his longtime work on an important project that is
available as open source so that it can be
used by all.
[I would like to thank the X.Org Foundation for travel assistance to
Portland for XDC.]
Veterans of The Great Google Reader Shutdown scattered in a number
of different directions once the search giant's RSS- and Atom-feed
reading service closed its doors in July 2013. Quite a few
proprietary web services stepped up in an attempt to attract former
Google Reader users, but free software projects gained prominence as
well. The ownCloud project added a feed reading application, for
example, but Tiny Tiny RSS (TT-RSS)
had a considerable head start. The project recently released its latest
update, the most recent in a post–Google Reader surge in development.
TT-RSS has been in development for more than eight years,
but prior to the March announcement
of the Google Reader shutdown, the pace
of development had slowed. The project still made regular
updates every few months, but there was not a steady influx of new
developers and the feature set grew in small increments with each new
release. The uptick in interest after the Reader announcement was
dramatic; one need only look at the Ohloh graphs for contributor
count and commits to
see the difference. From a low of 54 commits by two committers in
January, activity shot up to 564 commits by 31 committers in March.
Most of those commits came from lead developer Andrew Dolgov, but it
would be fair to say that the Reader deactivation reinvigorated
interest in the project.
The first major release following this flurry of activity was version 1.8
in June. That update brought changes to feed parsing (including
dropping the Simplepie parsing
library in favor of a native TT-RSS parser), moved several optional
plugins out of the core codebase into a separate contrib
repository, and reworked the user interface. The moved plugins
constitute a significant number of lines of code, including several
authentication modules (such as IMAP, LDAP, and RADIUS
authentication) and connections to a variety of third-party web
services. Those web-service plugins include the ability to share
items from TT-RSS via Twitter, Identi.ca, Google+, Pinterest,
ownCloud, and several other well-known sites.
Version 1.9 followed in late July. Several parser fixes were included, as were
translation updates and a refresh of the icon set. Arguably the most
visible changes were the addition of a plugin that serves a
feed of the user's shared articles and the migration of TT-RSS from
GPLv2 to GPLv3. Version 1.10 arrived on
September 21, containing only minor changes and bug fixes.
There is a glut of feed-reading options available these days, and
many look the same on the surface. The TT-RSS interface will feel
familiar to anyone who has used Google Reader, Feedly, Digg Reader,
NewsBlur, or the like. A scrollable list on the left-hand side shows
all of the subscribed feeds by category (plus special categories for
saved and starred items) and how many unread entries are available,
while the remainder of the screen is occupied by a split-pane news-reading
interface: one can view items by headline or by full text, mark or unmark
them as read, and so on.
But TT-RSS still offers a handful of features that the majority of
the competition does not—even though those features may not jump
out of the interface. For one thing, it is one of the few feed
readers that supports subscribing to username/password-protected
feeds. There are not a lot of feeds out there that require
authentication, but that is something of a chicken-and-egg problem:
lack of reader support decreases the demand for authenticated feeds, and
vice versa.
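As a rough sketch of what subscribing to such a feed involves, the reader
has to attach HTTP basic-auth credentials to its fetches. The URL and
credentials below are hypothetical placeholders, and TT-RSS's real fetcher
handles many more cases (digest auth, redirects, conditional GETs):

```python
import base64
import urllib.request

def basic_auth_header(username, password):
    """Build the value for an HTTP basic-auth Authorization header."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {token}"

def fetch_feed(url, username, password):
    # Attach the credentials to the request and return the raw feed.
    request = urllib.request.Request(
        url, headers={"Authorization": basic_auth_header(username, password)})
    with urllib.request.urlopen(request) as response:
        return response.read()

# fetch_feed("https://example.org/private.atom", "alice", "secret")
```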
Another example of TT-RSS's feature set is its support
for automatically detecting duplicate articles and removing the
duplicates from the unread item interface. This, too, is not a common
problem at the global scale, perhaps, but it is one that many free
software fans are likely to run into. Planet software aggregates
multiple blogs, but in doing so a single person's blog may get carried
by multiple planets if that person is involved in multiple projects.
Having reader software like TT-RSS filter out
the duplicates that come from extra-active bloggers is surely a better
solution than trying to persuade those bloggers to participate in fewer
planets.
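One plausible way to implement such duplicate detection (a guess at the
general technique, not TT-RSS's actual code) is to fingerprint each
article's normalized title and body, and drop items whose fingerprint has
already been seen:

```python
import hashlib

def content_key(article):
    """Stable fingerprint for an article dict with 'title' and
    'content' fields; case and whitespace are normalized so that
    trivially reformatted copies still match."""
    normalized = " ".join(
        (article["title"] + " " + article["content"]).lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

def deduplicate(articles):
    seen = set()
    unique = []
    for article in articles:
        key = content_key(article)
        if key not in seen:
            seen.add(key)
            unique.append(article)
    return unique

# The same post carried by two planets collapses to one item.
posts = [
    {"title": "Mesa at 20", "content": "A look back."},
    {"title": "Mesa at 20", "content": "A  look back."},  # reformatted copy
    {"title": "X server 1.15", "content": "Release plans."},
]
unique = deduplicate(posts)
```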
As for the changes in the latest releases, TT-RSS does feel
snappier than it used to. That may be attributable to the new,
lightweight feed parsing code, to the reduction in the number of
plugins installed, or even to the accumulated improvements of all of
those new contributors and Dolgov's surge of development in March.
Whatever the cause, I last used TT-RSS more than two years ago and
frequently found it pausing momentarily when switching to a new folder
or when loading large posts; I have noticed no such hangs with version 1.10.
A resurgence in interest is welcome news for any free software
project. The lone dark cloud in the recent uptick in TT-RSS's
popularity has been the project's hostility toward forum users who
bring suggestions or questions that the team does not like. The FAQ
directs users with Google Reader–related requests to a forum
thread where one such user is ridiculed, and the same FAQ page
defends Dolgov's right to "either mock or not deal with
people" who ask "stupid questions again and again because
new people gracing tt-rss community with their presence largely can't
be bothered to read anything." The justification given for
this approach is that community management is too time-consuming.
No doubt community management is a task that requires
a significant time investment, but it is hardly an unsolved
problem—and one would think the ten-fold increase in volunteers
seen last March could have provided some person-power that could be
directed toward community building. Prickly project leads are not
a new problem, but TT-RSS is quality code and it was in essence handed
a ready-made stable of millions of potential users when Google Reader
closed; it would be a shame to drive them away.
The X.Org Developers
Conference (XDC) is a three-day, one-track event with presentations
covering many different parts of the graphics stack. This year it was held
at Portland State University in Portland, Oregon—home of Voodoo Doughnuts,
which were provided daily. XDC is an intense
experience and this year's edition will lead to a few more articles in the
coming weeks.
There were also a few shorter sessions with some news and plans that seem
worth reporting on here.
X.Org Foundation board
member Peter Hutterer reported on the state of the
foundation. The most recent news about the foundation was that it had lost its
501(c)(3) (US) non-profit status in August. Hutterer was happy to report
that it had all been reversed. With help from the Software Freedom Law Center
(SFLC), which helped the foundation become a
non-profit in 2012, the foundation was able to regain its status. The
paperwork was "still in transit", he said, but the non-profit status was
back in place.
Part of the problem that led to the revocation of the non-profit status was
that it is a lot of work to maintain a 501(c)(3) organization. The members
of the foundation board are not lawyers or accountants, he said, and some
are not even living in the US, which makes it that much more difficult. So
the board decided to look into organizations that manage free software
projects, foundations, and the like. Umbrella organizations like the Software Freedom Conservancy (SFC), Apache Software Foundation (ASF), and Software in the Public Interest (SPI) are
set up to handle much of the paperwork for various types of projects.
The board has voted that SPI is the right umbrella
organization for the
X.Org Foundation. That switch is not finalized as it may require a vote of
the X.Org membership, Hutterer said. The board will be consulting with the SFLC
and looking at the by-laws to determine that. Assuming the change is made,
SPI would take 5% of any donations made to the foundation for the work that
it does, which "seems fair", he said.
The foundation has "a slab of money" that remains from a number of years
ago, when it
was getting donations of $100,000 or so per year. It uses that money to
put on XDC and to sponsor the travel of several participants (four this
year, including a Google Summer of Code student and an LWN editor). It also
funds GSoC students and participants in the X.Org Endless Vacation of Code
program. The pile of
money is enough to last for another four or five years, Hutterer said,
before the foundation needs to consider doing some fundraising—something
that's never been done since he became involved.
The foundation is also moving banks after HSBC closed its account for
unclear reasons. "Banks are fun", Hutterer said with a laugh. The current
plan is to move to Bank of America, he said.
The Board of
Directors consists of eight people and four of those seats turn over each
year.
There are 78 members of the foundation, which is "lots better than it was a
couple of years ago". Hutterer strongly encouraged those
present to think about joining, which allows
voting in elections and has a few other benefits.
X server 1.15 planning
Keith Packard took the floor on the first day to discuss plans for the X
server 1.15 release. It was supposed to have been released the week of XDC,
but in August he and others realized that the release itself was "really
boring". He asked the assembled developers if there were any features due
in 1.15 that they were "desperate to have". Hutterer mentioned that having
touch input working would be nice, but Packard noted that those changes had
been backported to 1.14.
In fact, as far as he knows, all of the security,
stability, and usability bug fixes have been added to 1.14. The 1.14
release manager has been making minor releases with those changes, which
are, of course, ABI-compatible with 1.14—unlike 1.15. At this point,
Packard said, 1.15 looks like "1.14 plus an ABI change".
There are, however, many features that are awaiting the release
of 1.15 before they get merged. So, an idea that had been batted around on
IRC was to delay the release of 1.15 until it had some features of note.
Those features might include a rewrite of the Xephyr nested X server (that
deleted "many thousands of lines of code, which is what we do best at
Xorg", Packard said, pronouncing the last word as "zorg"), Packard's own
DRI3 and Present
extensions which are getting close
to being ready to merge, some XWayland changes, Adam Jackson's GLX rewrite
(which removes around 40,000 lines of code), and possibly others.
Packard would talk about DRI3 and Present later in the conference, as would
Jackson about the GLX rewrite, so the final decision would be made after
those discussions. All of the proposed features seemed like
they would plausibly be ready in time for a code freeze at the end of
October. The normal pattern would be for a two-month stabilization period,
putting the release of 1.15 at the end of the year. "A Christmas present",
he said.
An informal straw poll of those in the room found all in favor of the
proposed change, but there wasn't any real emotion one way or the other.
"Consensus by apathy", one developer suggested, which is a "hallmark of
X.Org" added another—to chuckles around the room.
Packard encouraged anyone with additional features they would like to see
in 1.15 to "let us know".
In the end, the X.Org
calendar shows a final 1.15 release scheduled for December 25.
Links to the slides and videos for most of
the sessions can be found from the XDC schedule page.
[I would like to thank the X.Org Foundation for travel assistance to
Portland for XDC.]
Page editor: Jonathan Corbet
Inside this week's LWN.net Weekly Edition
- Security: Integrity and embedded devices; New vulnerabilities in chicken, glibc, kernel, sudo, ...
- Kernel: Transactional memory in the dentry cache; How much memory power management is useful?; NUMA scheduling.
- Distributions: Fedora and bug tracking; Debian Edu, GNU, FreeBSD, NetBSD, RHEL, Ubuntu, ...
- Development: AppData; VLC 2.1.0; Rust 0.8; Ten years working on Krita; ...
- Announcements: 30 years of GNU, events, ...