January 25, 2012
This article was contributed by Michael Gilbert
A recent X.org security flaw (CVE-2012-0064) was, by many measures, handled
well by those involved: the issue discloser, the X.org developers, and the
various distribution security response teams. In fact, the issue was fixed by
most distributions in less than a day, which helps demonstrate the progress
that the open source community has made in its security processes and
practices.
On January 19th, Gu1 (a member of the
Consortium of Pwners
computer security war gaming group) published
details of a flaw
he happened to come across in the latest X.org release. By pressing a particular
combination of keys while sitting locally at any machine running X.org 1.11 or
later (as well as a subset of release candidates), he found that he could terminate
any application holding a current screen grab (e.g. screensavers). This meant
that he, or anyone else with knowledge of that particular "code", would be able
to gain local access to machines for which they did not have appropriate
credentials. Some readers may be tempted to jump to the conclusion that such a simple
"code" is a sign of a
maliciously placed back-door, but the actual explanation is far more mundane.
This particular key combination simply happens to be a debugging feature — with known
and documented security implications — that, by default, was
appropriately disabled in the past.
Fortunately X.org 1.11 is currently so new that it hasn't yet shipped in most
distributions. Of the most common GNU/Linux distributions, the only stable
release affected was Fedora 16. Also affected were Debian testing and
unstable as well as Arch, all of which are either rolling or experimental
releases. No Ubuntu, Red Hat (including CentOS, of course), or openSUSE
releases were affected. So, first of all, there isn't much for most users to worry about
with respect to this particular problem. However, the events leading up to
and following publication of the flaw paint an interesting picture. In
one sense, this flaw was handled well by the security teams of the
affected distributions, but that doesn't mean there isn't room for
improvement.
Note that a comprehensive discussion of the technical details of the flaw
itself will not be included here. Peter Hutterer has already written an
excellent blog entry on the matter, and readers are encouraged to visit his
site for more information. Succinctly, the screen grab debugging key-press
combinations have now been removed from the default XKB keymap configuration
files. It is still possible to re-enable them, but that requires a
determined user who presumably knows what they are doing.
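For the curious, the fix amounted to taking the grab-break actions out of the
default keymap and hiding them behind an opt-in XKB option. As a rough sketch
of what re-enabling looks like (the option name grab:break_actions is, to our
understanding, the one added to xkeyboard-config along with the fix; treat the
exact spelling as an assumption and check your version's rules files):

    # Re-enable the grab-break debugging keys for the current session only
    setxkbmap -option grab:break_actions

    # Or persistently, via an xorg.conf InputClass section:
    #   Section "InputClass"
    #       Identifier "keyboard defaults"
    #       MatchIsKeyboard "on"
    #       Option "XkbOptions" "grab:break_actions"
    #   EndSection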
Timeline of the flaw
In the beginning (1984),
X was written.
At some point, developers recognized a need to debug screen-grabbing
applications, so they wrote some code to break such grabs. A
screen grab (in X.org speak) is simply a top-level overlay on the screen that
prevents events (key and mouse presses) from touching the windows underneath.
The grab breaks were assigned to the Control+Alt+KeypadMultiply and
Control+Alt+KeypadDivide key-press combinations. At the time, the X developers
recognized the security implications and made it a non-default option. They
even
documented
the problem in an effort to make it very clear to users.
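For concreteness, here is a minimal sketch of how a screen locker typically
takes such a grab using the standard Xlib calls (the display and window setup,
and all error handling, are assumed):

    #include <X11/Xlib.h>

    /* Grab the keyboard and pointer for "win" so that no other client
     * receives input events while the lock screen is up.  The grab-break
     * key combinations, when enabled, forcibly end exactly this state. */
    static int lock_input(Display *dpy, Window win)
    {
        int kb = XGrabKeyboard(dpy, win, False,
                               GrabModeAsync, GrabModeAsync, CurrentTime);
        int ptr = XGrabPointer(dpy, win, False,
                               ButtonPressMask | ButtonReleaseMask |
                               PointerMotionMask,
                               GrabModeAsync, GrabModeAsync,
                               None, None, CurrentTime);
        return (kb == GrabSuccess && ptr == GrabSuccess) ? 0 : -1;
    }

A client holding such grabs normally keeps them until it releases them or
exits, which is what makes breaking them from the keyboard so consequential
for screensavers.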
Many years passed...
In 2008, there was a great purge of xf86misc (a code clean up effort that
removed various unused X code that had accumulated over many years), which,
along with many other things, excised those particular debugging options (see
Daniel Stone's commits:
commit,
commit).
Recently, Daniel has been working on multi-pointer X. In that process, he
encountered quite a few situations where screen grab debugging would be
helpful. So, he dusted off that code and pushed for
its re-inclusion. In June of 2011, Peter Hutterer reviewed and applied said
patch.
However, lost in translation (and to the passage of time) was
the fact that the code did indeed have security implications. That
fact was not picked up on until around January 5th, when Gu1, finding himself
rather bored, decided to read some older X.org documentation. In particular,
he came across "AllowClosedownGrabs", which documented the
Control+Alt+KeypadMultiply key combination. He tried it with the latest X.org
expecting nothing, but to his surprise it worked. So, part of the problem was
that the documentation warning about the security considerations of the code
was not brought back along with it. That documentation does not appear to
have returned yet, but an important takeaway is that both code and
documentation should be restored when a feature returns, and that the
discussion in that documentation should be taken into consideration when doing
so. One solution could be to remove documentation in the same commit as the
code it describes; that way, if the commit is ever reverted, the documentation
automatically comes back as well.
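For reference, the old documentation Gu1 came across described a server flag
along these lines (reconstructed from the historical XF86Config/xorg.conf
man pages; the exact wording varied by version):

    Section "ServerFlags"
        # Historical, non-default option: lets Ctrl+Alt+KeypadMultiply
        # kill clients holding an active grab.  The man page explicitly
        # warned that enabling this opens a security hole.
        Option "AllowClosedownGrabs" "true"
    EndSection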
Not content with only finding the issue, Gu1 took the time to write a rather
detailed blog entry, which he published two weeks later on January 19th. He
even went so far as to research, bisect, and identify the commit introducing
the problem. This is an example of a well-written disclosure. It made it
possible for security teams to take rapid action to close the issue. In an
email interview, Gu1 stated that his motivation was not selflessness; he was
more interested in obtaining a discount to the Hackito Ergo Sum 2012
conference, which is offered to attendees who have disclosed CVE issues. It
may be worth thinking more about providing these kinds of simple incentives in
the future, to reduce the number of issues that currently sit undisclosed for
lack of motivation.
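The kind of bisection Gu1 performed is straightforward to reproduce with git.
A sketch of the process against the X server repository (the tag names here
are assumptions based on the repository's usual naming scheme):

    # Mark a release exhibiting the behavior and one that does not,
    # then let git binary-search the commits in between.
    git bisect start
    git bisect bad xorg-server-1.11.0
    git bisect good xorg-server-1.10.0
    # Build and test each candidate git checks out, then mark it:
    #   git bisect good    (behavior absent)
    #   git bisect bad     (behavior present)
    # Repeat until git names the first bad commit, then clean up:
    git bisect reset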
Note that one could argue that Gu1's decision to fully disclose the issue
with no advance notice to those involved was less than
ideal. The delayed disclosure (often framed as "responsible disclosure")
camp believes that vendors need some time to be able to do appropriate
analysis and testing of fixes, and thus disclosers should give those
vendors some time (though how much time is an open question). This issue
demonstrates a case where that preparation time didn't matter: the issue was
fully disclosed, and hours later security teams had the problem solved. That
is because Gu1's research was comprehensive
enough to be able to isolate and fix the problem right away. This kind
of detailed analysis should be sought as the norm. Whether that analysis
is shared with the vendor or project before being made public typically
depends on
which camp (full or responsible disclosure) the researcher is in.
In terms of affected releases, X.org 1.11 was originally shipped in June
2011. Shortly thereafter, distribution development branches started
picking it up. Debian unstable got it in August, Debian testing got it in
September, and the Fedora 16 stable release got it in November. A final
timeline demonstrates how impressively quickly the issue was resolved after
disclosure by the affected distributions:
| Date/Time (UTC) | Event |
| 01/05/2012 | Gu1 discovers issue |
| 01/19/2012 00:03 | Gu1 discloses issue on blog and oss-security |
| 01/19/2012 05:49 | workaround posted |
| 01/19/2012 10:19 | X.org fixed in Debian unstable |
| 01/19/2012 22:01 | X.org fixed in Fedora 16 |
| 01/19/2012 23:48 | X.org upstream fixed (actually in XKB) |
| 01/22/2012 16:39 | X.org fixed in Debian testing (delay due to testing's 2-day minimum migration policy) |
For the set of distributions actually affected by this issue, their
security teams reacted with admirable speed. The table below lists the time
it took to release a fix after Gu1's disclosure. Note that the
"underground potential" entry is the length of time that the
underground side of the computer security community may have been able to
exploit the problem. That said, there is no
way of ever knowing if or when it was actually discovered before the
disclosure. We do know at least that Gu1 knew about the issue two weeks
prior to
publishing it.
| Distribution | Vulnerability window | Underground potential |
| Debian unstable | ~10 hours | ~5 months |
| Fedora 16 | ~22 hours | ~2 months |
| X.org upstream (XKB) | ~23 hours | ~6 months |
| Debian testing | ~64 hours | ~4 months |
Conclusions
This particular case raises some questions about the prevailing wisdom that
it's always best to be running the latest and greatest software releases.
Every new release involves code modifications with varying levels of risk,
and, interestingly, in this case users were safer if they chose slower-moving
releases. As seen above, the fast-moving Debian unstable release had a
roughly five-month potential for underground abuse, whereas Debian testing,
which moves a bit slower, had a smaller four-month potential. Fedora 16 was
caught by this, whereas Ubuntu wasn't, since it played it a bit safer and
stuck with X.org 1.10 for its 11.10 release. Distributions have
to make their choices about which new releases to include based on their
interest in delivering "bleeding edge" packages to their users. Sometimes
that means that undiscovered security bugs come along for the ride.
By all measures, Daniel and Peter have extensive backgrounds working on
X.org. Daniel has worked on various aspects (including DRM/KMS drivers,
GStreamer, and kernel input drivers) for nine years, and Peter for six (he is
the input subsystem maintainer and has worked on libXi).
Even with this extensive experience, X.org is such a complex system that there
is always the potential for mistakes. We're all human after all.
Daniel had this to say:
Oh, at this stage I don't think we can say with a straight face that
we're able to create perfectly resilient and secure systems. The best
we can do is admit that failures will occur, try to pre-emptively
limit the damage they can do before they're found, and then make sure
our procedures for dealing with problems as they're found are
best-of-class. Even if all your components are extensively
documented, noting their various restrictions, requirements and
limitations, as well as being extensively tested, the reality is that
people are human so either your implementation will be subtly broken
in ways you don't expect, or one of your users will just use it wrong.
Saying that we have perfect security is just hubris.
I've got a lot of time for the school of thought that argues that as
complex systems are inherently less secure than simple ones, the best
thing to do is to build less complex software. Understanding the flow
of events between X and its myriad clients, and the effects even a
simple change will have, is really not an easy thing to do. I find
the setuid vs. capabilities issue that's been cropping up recently a
pretty entertaining example of the law of unintended consequences.
One could argue that Wayland
is the simplification needed to eliminate the complexities of X, and
it's good that most distributions are now on a long-term path toward that
goal. But even so, Wayland is not necessarily going to be a magic bullet as
some have argued.
It too will have its share of complexity, and there is always the possibility
of writing flaws into the new code, which will only be discovered given
time, interest, and motivation. Computer security is always a
matter of vigilance.
[ The author would like to thank Daniel Stone, Peter Hutterer, and Gu1
for taking the time to answer interview questions for this article. ]