
Pondering the X client vulnerabilities

By Jonathan Corbet
May 27, 2013
Certain projects are known for disclosing a large number of vulnerabilities at once; such behavior is especially common in company-owned projects where fixes are released in batches. Even those projects, though, rarely turn up with 30 new CVE numbers in a single day. But, on May 23, the X.org project did exactly that when it disclosed a large number of security vulnerabilities in various X client libraries — some of which could be more than two decades old.

The vulnerabilities

The X Window System has a classic client/server architecture, with the X server providing display and input services for a range of client applications. The two sides communicate via a well-defined (if much extended) protocol that, in theory, provides for network-transparent operation. In any protocol implementation, developers must take into account the possibility that one of the participants is compromised or overtly hostile. In short, that is what did not happen in the X client libraries.

In particular, the client libraries contained many assumptions about the trustworthiness of the data coming from the X server. Keymap indexes were not checked to verify that they fell in the range of known keys. Very large buffer size values from the server could cause integer overflows on the client side; that, in turn, could lead to the allocation of undersized buffers that could subsequently be overflowed. File-processing code could be forced into unbounded recursion by hostile input. And so on. The bottom line is that an attacker who controls an X server has a long list of possible ways to compromise the clients connected to that server.
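
As an illustration of the allocation-overflow pattern described above, here is a minimal, hypothetical C sketch; it is not code from any of the affected libraries, and the struct, field, and function names are invented for the example:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct reply {
        uint32_t nitems;    /* length field taken from the (untrusted) server */
    };

    /* Buggy pattern: the 32-bit multiplication can wrap before it is widened
     * for malloc(), so the buffer ends up far smaller than the copy below
     * assumes. */
    char *read_items_unsafe(const struct reply *r, const char *wire)
    {
        char *buf = malloc(r->nitems * 8);
        if (buf)
            memcpy(buf, wire, (size_t)r->nitems * 8);   /* heap overflow */
        return buf;
    }

    /* Checked version: reject counts that cannot possibly be valid before
     * doing any size arithmetic. */
    char *read_items_checked(const struct reply *r, const char *wire)
    {
        if (r->nitems > SIZE_MAX / 8)
            return NULL;
        char *buf = malloc((size_t)r->nitems * 8);
        if (buf)
            memcpy(buf, wire, (size_t)r->nitems * 8);
        return buf;
    }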

Despite the seemingly scary nature of most of these vulnerabilities, the impact on most users should be minimal. Most of the time, the user is in control of the server, which is usually running in a privileged mode. Any remote attacker who can compromise such a server will not need to be concerned with client library exploits; the game will have already been lost. The biggest threat, arguably, is attacks against setuid programs by a local user. If the user can control the server (perhaps by using one of the virtual X server applications), it may be possible to subvert a privileged program, enabling privilege escalation on the local machine. For this reason, applying the updates makes sense in many situations, but it may not be a matter of immediate urgency.

Many of these vulnerabilities have been around for a long time; the advisory states that "X.Org believes all prior versions of these libraries contain these flaws, dating back to their introduction." That introduction, for the bulk of the libraries involved, was in the 1990's. That is a long time for some (in retrospect) fairly obvious errors to go undetected in code that is this widely used.

Some thoughts

One can certainly make excuses for the developers who implemented those libraries 20 years or so ago. The net was not so hostile — or so pervasive — and it hadn't yet occurred to developers that their code might have to interact with overtly hostile peers. A lot of code written in those days has needed refurbishing since. It is a bit more interesting to ponder why that refurbishing took so long to happen in this case. X has long inspired fears of security issues, after all. But, traditionally, those fears have been centered around the server, since that is where the privilege lies. If you operate under the assumption that the server is the line of defense, there is little reason to be concerned about the prospect of the server attacking its clients. It undoubtedly seemed better to focus on reinforcing the server itself.

Even so, one might think that somebody would have gotten around to looking at the X library code before Ilja van Sprundel took on the task in 2013. After all, if vulnerable code exists, somebody, somewhere will figure out a way to exploit it, and attackers have no qualms about looking for problems in ancient code. The X libraries are widely used and, for better or worse, they do often get linked into privileged programs that, arguably, should not be mixing interface and privilege in this way. It seems fairly likely that at least some of these vulnerabilities have been known to attackers for some time.

Speaking of review

As Al Viro has pointed out, the security updates caused some problems of their own due to bugs that would have been caught in a proper review process. Given the age and impact of the vulnerabilities, it arguably would have been better to skip the embargo process and post the fixes publicly before shipping them. After all, as Al notes, unreviewed "security" fixes could be a way to slip new vulnerabilities into a system.

In the free software community, we tend to take pride in our review processes which, we hope, keep bugs out of our code and vulnerabilities out of our system. In this case, though, it is now clear that some of our most widely used library code has not seen a serious review pass for a long time. Recent kernel vulnerabilities, too, have shown that our code is not as well reviewed as we might like to think. Often, it seems, the black hats are scrutinizing our code more closely than our developers and maintainers are.

Fixing this will not be easy. Deep code review has always been in short supply in our community, and for easily understandable reasons: the work is tedious, painstaking, and often unrewarding. Developers with the skill to perform this kind of review tend to be happier when they are writing code of their own. Getting these developers to volunteer more of their time for code review is always going to be an uphill battle.

The various companies working in this area could help the situation by paying for more security review work. There are some signs that more of this is happening than in the past, but this, too, has tended to be a hard sell. Most companies sponsor development work to help ensure that their own needs are adequately met by the project(s) in question. General security work does not add features or enable more hardware; the rewards from doing this work may seem nebulous at best. So most companies, especially if they do not feel threatened by the current level of security in our code, feel that security work is something they can leave to others.

So we will probably continue to muddle along with code that contains a variety of vulnerabilities, both old and new. Most of the time, it works well enough — at least, as far as we know. And on that cheery note, your editor has to run; there's a whole set of new security updates to apply.

Index entries for this article
Security: X client



Pondering the X client vulnerabilities

Posted May 27, 2013 22:23 UTC (Mon) by hp (guest, #5220) [Link] (30 responses)

it's longstanding "common knowledge" (among people that think about such things) that an suid binary shouldn't link to Xlib, right? I think it's been long assumed that these vulnerabilities existed and people seemed to feel that the proper course was to not rely on their absence. Even now, who is willing to say they are comfortable these 30 were the only ones? And there's not so much value in linking Xlib alone (vs some toolkit also) for most purposes ... and auditing a full toolkit for this kind of thing sounds impossible.

I think perhaps this went unfixed for so long because many people just don't see the value in it (compared to other work at least). But with open source I guess all it takes is one person to do it!

Pondering the X client vulnerabilities

Posted May 28, 2013 1:26 UTC (Tue) by cesarb (subscriber, #6266) [Link] (26 responses)

> And there's not so much value in linking Xlib alone (vs some toolkit also) for most purposes ... and auditing a full toolkit for this kind of thing sounds impossible.

IIRC, at least one toolkit refuses to run if it finds out the process is setuid root.

Pondering the X client vulnerabilities

Posted May 28, 2013 1:38 UTC (Tue) by hp (guest, #5220) [Link] (25 responses)

http://www.gtk.org/setuid.html summarizes pretty well what I'd consider the expert consensus on a setuid program that connects to the X server. tldr it's a bad idea.

though I guess for the most part setuid is ALWAYS a bad idea ;-)

That gtk.org article nicely points out that almost all security bugs are also plain old bugs, though, so can't hurt to fix. But it's not like suid X apps are safe now.

Pondering the X client vulnerabilities

Posted May 28, 2013 2:24 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (3 responses)

Qt doesn't appear to provide such a warning or to treat security issues as seriously, but apparently there is some room for improvement within GTK as well

http://cansecwest.com/slides/2013/Assessing%20the%20Linux...

Pondering the X client vulnerabilities

Posted May 28, 2013 6:34 UTC (Tue) by halla (subscriber, #14185) [Link] (2 responses)

I've no doubt it's a very good presentation and all that... But the author is being silly with slide 7 -- ten years ago is 2003, which was firmly in the KDE 3 times, not the twm era. Desktops didn't look like that anymore, ten years ago, but pretty modern already. Heck, it's ten years this year that I started working on an abandoned KDE 3 application called Krita.

It's kind of a pet peeve with me, just like when articles assume that parents these days are internet newbies compared to their kids. Public access to internet is twenty years old now, in the Netherlands. By now, it's no longer a new thing.

Pondering the X client vulnerabilities

Posted May 28, 2013 6:45 UTC (Tue) by rahulsundaram (subscriber, #21946) [Link] (1 responses)

It is not even a decade back in exact terms as per author (7 to 10 years is quoted) and certainly KDE 3.x and GNOME 2.x were widely shipped as default by mainstream distros at that time including Slackware, Red Hat Linux and Debian and I remember my labs were running both just fine.

I noticed quite a few of such minor niggles such as calling the Unity interface a mix of Windows and OS X and putting up a screenshot of GNOME Shell but they aren't the focus of the presentation and I am more interested to hear thoughts on the security details.

Pondering the X client vulnerabilities

Posted May 28, 2013 8:41 UTC (Tue) by halla (subscriber, #14185) [Link]

"I am more interested to hear thoughts on the security details."

Yes, indeed. It's just something that irks me.

Pondering the X client vulnerabilities

Posted May 28, 2013 13:19 UTC (Tue) by jengelh (guest, #33263) [Link] (5 responses)

Not being suid does not mean one is safe: Think of an xterm in which a root shell is active, for example via su. If the X client gets sufficiently compromised, it could just send keystrokes down the pty.
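
A minimal sketch of the mechanism being described here (purely illustrative; it spawns its own throwaway shell rather than touching an existing one): the process holding a pty's master side, as a terminal emulator like xterm does, can write bytes that the shell on the slave side reads exactly as if they had been typed.

    #include <pty.h>        /* forkpty(); link with -lutil on glibc systems */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int master;
        pid_t pid = forkpty(&master, NULL, NULL, NULL);
        if (pid < 0)
            return 1;

        if (pid == 0) {                     /* child: a shell on the slave side */
            execlp("sh", "sh", (char *)NULL);
            _exit(1);
        }

        /* The parent holds the master, just as xterm would.  Anything written
         * here arrives at the shell as ordinary keyboard input. */
        write(master, "id\n", 3);
        sleep(1);

        char buf[512];
        ssize_t n = read(master, buf, sizeof buf);   /* echoed input plus output */
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);
        return 0;
    }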

Pondering the X client vulnerabilities

Posted May 28, 2013 14:35 UTC (Tue) by otaylor (subscriber, #4190) [Link] (4 responses)

If a hostile X server wants to send keystrokes to your root shell in an xterm, it doesn't need to exploit buffer overflows in Xlib, it can just send keystrokes to your root shell in an xterm.

Pondering the X client vulnerabilities

Posted May 29, 2013 9:02 UTC (Wed) by nirbheek (subscriber, #54111) [Link] (3 responses)

This is something that most people do not realise about the X server. There's almost no access control inside it. Any app can do almost anything to any other app. It's ridiculous.

Pondering the X client vulnerabilities

Posted May 29, 2013 18:20 UTC (Wed) by dtlin (subscriber, #36537) [Link] (1 responses)

Did XAce not happen? (I don't really know how well it works or if there's any users…)

Pondering the X client vulnerabilities

Posted May 29, 2013 18:47 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

It's supported in X, but kinda pointless (given the huge attack surface). Also XAce by itself doesn't define any ACL protocol, it just provides hooks for third-party modules.

Pondering the X client vulnerabilities

Posted Jul 15, 2013 11:34 UTC (Mon) by hummassa (subscriber, #307) [Link]

While you are kind of right (Windows apps are equally open to each other, etc), the point is moot when the owned part is the display/X11 server. Because it's the server that is sending your client the keystrokes that you type in your keyboard, and your client cannot distinguish real keystrokes from inserted keystrokes (well, using some sophisticated timing techniques, it can *try*, but all bets are off anyway.)

Pondering the X client vulnerabilities

Posted May 30, 2013 16:24 UTC (Thu) by richmoore (guest, #53133) [Link] (14 responses)

The comments about GTK there largely apply to Qt too. Writing setuid Qt applications is stupid. I'm the person being quoted in the presentation, and I stand by what I said - if people are writing setuid binaries that do this sort of thing then that's the buggy program not Qt. People writing setuid binaries have to learn how to make them secure if they want to make them for general use, there are many ways people can screw them up.

Any distro daft enough to package a setuid binary that links a GUI needs to buck their ideas up too.

Pondering the X client vulnerabilities

Posted May 30, 2013 16:54 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (13 responses)

Except that Qt also has quite good network support and threading primitives. Some people might want to use it just because of them. Is it bad?

Pondering the X client vulnerabilities

Posted May 30, 2013 17:45 UTC (Thu) by anselm (subscriber, #2796) [Link] (11 responses)

There's nothing wrong with Qt in general.

I would however be very suspicious of any SUID program that required the use of »network support and threading primitives«, whether it is based on Qt or anything else. (The X server comes to mind but that's another story.) If nothing else it should be possible to structure one's X client program such that any code that needs special privileges is put into a minimal separate executable that can then be SUID – and nowadays there are various other approaches one could take that may make SUID completely unnecessary in this context.

Pondering the X client vulnerabilities

Posted May 30, 2013 17:48 UTC (Thu) by Cyberax (✭ supporter ✭, #52523) [Link] (10 responses)

It's actually EXTREMELY common to have networked programs to be SUIDed because of the @#$@#(@ requirement to be root to bind to ports <1024 (yeah, you can use caps, but they are not present on all OSes).

So you need at least the code that parses config files to run as root.

Pondering the X client vulnerabilities

Posted May 30, 2013 22:27 UTC (Thu) by nix (subscriber, #2304) [Link] (4 responses)

BIND is a classic example of something which needs just that privilege (though it switches user after startup) and also happens to be threaded.

Pondering the X client vulnerabilities

Posted May 30, 2013 23:12 UTC (Thu) by anselm (subscriber, #2796) [Link] (3 responses)

Since when is BIND a SUID-root X11 client?

Pondering the X client vulnerabilities

Posted Jun 3, 2013 13:54 UTC (Mon) by nix (subscriber, #2304) [Link] (2 responses)

It's not, but the thread had drifted to saying that nobody needs to use the threading primitives and the like that Qt provides while running as root. This is tantamount to saying that nobody would want to write threaded code that runs as root, which is ridiculous. (syslog-ng and systemd both use Glib for much the same reason, and Glib provides rather worse facilities in this area than does Qt, IMNSHO.)

Qt is not just about GUI stuff, and hasn't been for as long as KDE has been using it, pretty much. (It only got properly separated out in Qt 4, though.)

Pondering the X client vulnerabilities

Posted Jun 3, 2013 14:21 UTC (Mon) by anselm (subscriber, #2796) [Link] (1 responses)

Nobody needs to use Qt's threading and networking primitives when running SUID root, which is a completely different ballgame than running as root as a daemon.

There's probably nothing wrong with using the non-GUI parts of Qt to implement a threaded networking daemon, if one doesn't mind the Qt haters' jumping all over one. But such a daemon would not run SUID root; it would be started as root initially (from something like SysV init or systemd) and then drop its root privileges ASAP.

It was actually Cyberax who claimed that »It's actually EXTREMELY common to have networked programs to be SUIDed«. This is apparently so common that so far he hasn't managed to come up with one single example.

Pondering the X client vulnerabilities

Posted Jun 5, 2013 20:09 UTC (Wed) by nix (subscriber, #2304) [Link]

Agreed. Really nothing at all should use the suid/sgid bits to gain privilege. They should use something like d-bus services (or, in an ideal world, userv, but nobody but me and Ian Jackson seems to have heard of that) to request that a process that is already root create a new instance of the appropriate thing on its behalf. Without a parent/child relationship you're free of the unexpected inherited state problems that cause so much trouble for SUID binaries, with the slight additional problem that you have to reflect stdout/stderr for the new process. But a library should be able to handle that (something else which, IIRC, userv does but D-Bus does not...)

Pondering the X client vulnerabilities

Posted May 30, 2013 23:15 UTC (Thu) by anselm (subscriber, #2796) [Link]

Programs that need to open ports below 1024 for listening are not usually X clients. They are servers/daemons that are commonly run as root to begin with (and drop their privileges as soon as they can), rather than SUID-root programs. This is a completely different ball game.
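
For what it's worth, a minimal sketch of that pattern (illustrative only; port 80 and the "nobody" account are arbitrary choices for the example): the daemon is started as root, grabs the privileged port, then drops to an unprivileged user before handling any untrusted input.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr;
        int s = socket(AF_INET, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);              /* <1024: needs root (or a capability) */
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof addr) < 0 ||
            listen(s, 16) < 0) {
            perror("bind/listen");
            return 1;
        }

        /* Drop privileges before touching untrusted data: group first,
         * then user, and check both calls. */
        struct passwd *pw = getpwnam("nobody");
        if (pw == NULL || setgid(pw->pw_gid) < 0 || setuid(pw->pw_uid) < 0) {
            perror("drop privileges");
            return 1;
        }

        /* ... the accept() loop runs here without any special privileges ... */
        return 0;
    }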

Pondering the X client vulnerabilities

Posted May 31, 2013 10:45 UTC (Fri) by jschrod (subscriber, #1646) [Link] (3 responses)

Could you please name some of these »EXTREMELY common programs« that are suid and bind to network ports < 1024?

I have been using Linux and Unix for decades, and I'm not aware of any, but I'm willing to learn more. (I've got daemons that bind to ports, but they are not suid; they are started by root. Even an X server is most often not suid nowadays, with xdm and friends.)

Pondering the X client vulnerabilities

Posted May 31, 2013 15:26 UTC (Fri) by hummassa (subscriber, #307) [Link] (2 responses)

well, actually...

in my 5308-packages-installed Kubuntu desktop, I found ONE setuid file linked with X libs: kppp (and none setgid). One moment of googling, and I found this:

> Why is KPPP installed with the setuid bit on?

> There is no need for the setuid bit, if you know a bit of UNIX® systems administration. Simply create a modem group, add all users that you want to give access to the modem to that group and make the modem device read/writable for that group. Also if you want DNS configuration to work with KPPP, then /etc/resolv.conf must be read/writable by the members of that group. The same counts for /etc/ppp/pap-secrets and /etc/ppp/chap-secrets if you want to use the built-in PAP or CHAP support, respectively.

> The KPPP team has lately done a lot of work to make KPPP setuid-safe. But it's up to you to decide if you install and how you install it.

http://docs.kde.org/development/en/kdenetwork/kppp/faq.ht...

I will not bother with it, for now...

Pondering the X client vulnerabilities

Posted May 31, 2013 15:32 UTC (Fri) by jschrod (subscriber, #1646) [Link] (1 responses)

X screen savers are sometimes installed suid, too.

FTR: I didn't mean to question that suid X programs exist. I question that they happen to be "EXTREMELY common" and that access to low network ports is the reason for them being so.

Pondering the X client vulnerabilities

Posted May 31, 2013 18:58 UTC (Fri) by hummassa (subscriber, #307) [Link]

That was my point:
X screensavers (I have a tonne of them installed, and none is suid) usually do not try to listen to restricted ports. Kppp is usually more secure if used suid, for the reasons described here (*). The "extremely common" thing is way off IMHO.

(*) http://www.oocities.org/margineantm/syspp/murphy.html

Pondering the X client vulnerabilities

Posted May 30, 2013 19:41 UTC (Thu) by richmoore (guest, #53133) [Link]

Find a problem in the core networking code, for example a problem that might lead to code execution due to incorrectly parsing responses and sure, we'd regard that as a problem and would fix it straight away.

The problem reported against Qt was simply someone using something incorrectly, and being told they've done so, then complaining it doesn't work. A couple of the issues spotted were genuine bugs, and should be addressed (I'm not aware that they actually filed them btw) but they aren't security holes in Qt.

Pondering the X client vulnerabilities

Posted May 28, 2013 20:44 UTC (Tue) by acoopersmith (subscriber, #72107) [Link] (2 responses)

> it's longstanding "common knowledge" (among people that think about such things) that an suid binary shouldn't link to Xlib, right?

While we’d like to hope so, we know it’s not true amongst the application developer population in general. Besides the KDE issues mentioned in the CanSecWest preso, and GTK’s prohibition of setuid use (though that’s not about not trusting Xlib, but not wanting to have to block or sandbox user-specified loadable modules), we know there’s other examples out there.

There’s even a few claims to the contrary:

> The amount of code in Xlib is very small, and has been extensively security audited.
> It is very unlikely that there are crashing bugs lurking in Xlib itself.
http://www.jwz.org/xscreensaver/toolkits.html

which sounds like someone who never looked at the XKB & XIM portions of Xlib. I never thought it would make me so unhappy to prove jwz wrong.

It also goes to show that extensive security audits in the past are no match for either evolving code bases or evolving attack profiles, and anything that hasn't been audited recently, by someone with the latest code and knowledge of the latest advances in security vulnerabilities and exploits, is likely to have some issues.

Pondering the X client vulnerabilities

Posted May 28, 2013 21:09 UTC (Tue) by dlang (guest, #313) [Link]

A lot of it depends on the threat model that you are thinking of when you do the audit.

If you are assuming the server is not a threat when doing your evaluation, then Xlib probably is fairly well covered.

However, once you add the server to the threat model, all sorts of new things start showing up. Not just this sort of "what if the server gives you bogus data" but also "what if it gives you legitimate data that isn't what the user wants", and from there things get _really_ ugly.

Pondering the X client vulnerabilities

Posted May 28, 2013 21:14 UTC (Tue) by viro (subscriber, #7872) [Link]

"Latest advances"? Like, say, "grep for strncpy and see if the code assumes that result will be NUL-terminated (usually under a couple of minutes per hit)"? Or "look for <alloc-style function>(<something> {*,<<} <something>); it's likely to be allocating an array and constitutes a potential overflow, unless the array size is constrained by something"? Something that recent?

Pondering the X client vulnerabilities

Posted May 28, 2013 3:14 UTC (Tue) by airlied (subscriber, #9104) [Link] (2 responses)

The code not having had a deep review for 13 years is bad, but really it got one eventually and it got fixed. If this class of vuln had been noticed 5 years ago we'd still have 30 CVEs and this article would still have been the same. 10 years ago pretty much the same.

Pondering the X client vulnerabilities

Posted May 28, 2013 17:52 UTC (Tue) by acoopersmith (subscriber, #72107) [Link] (1 responses)

> 10 years ago pretty much the same.

Ten years ago we'd have had 5 CVEs instead of 30, since we'd just say “The X11R6.8.5 monolith has these 5 classes of bug, upgrade your entire X stack to X11R6.8.6.” instead of having a separate CVE for each library that contains the same type of bug (or in some cases, the exact same bug copied & pasted from Xlib code).

It would be far less scary to see in the CVE lists, but far more painful to packagers, distro makers, and users to replace the entire X monolith to update some libraries they don't even use all of.

While the build & packaging of X has generally gotten better over the last ten years, there is always some price to pay, and a longer CVE list is a cost we can live with.

Pondering the X client vulnerabilities

Posted Jul 15, 2013 7:41 UTC (Mon) by yuhong (guest, #57183) [Link]

10 years ago was before X11R6.7 even existed.

Pondering the X client vulnerabilities

Posted May 28, 2013 3:29 UTC (Tue) by dlang (guest, #313) [Link] (2 responses)

The X codebase is far less of an opensource project than most that use that label.

Yes, the code is open, but it really was under the very tight control of one organization for a very long time. By the time it 'escaped', it was huge, requiring nonstandard tools to deal with it.

In recent years, it's also suffered from the fact that many of the core developers are publicly calling it obsolete, saying that Wayland should replace it and therefore that's where all the effort should be spent.

All of this has discouraged people from digging into the code (although the x.org project has made progress in getting the codebase to be easier for people to deal with, it's still a monster)

People need to realize that even if Wayland does take over the desktop, the X libraries and protocol are still going to be used.

Pondering the X client vulnerabilities

Posted May 28, 2013 11:24 UTC (Tue) by daniels (subscriber, #16193) [Link]

People need to realize that even if Wayland does take over the desktop, the X libraries and protocol are still going to be used.

Yeah, obviously they will, approximately forever. But even long before anyone was calling it obsolete and focusing on other work, it still didn't have the kinds of contributors needed to proactively find things like this. It's too enormous a surface of too terrible code to ever grow a proper community around it. LibreOffice managed thanks to its huge visibility and cross-platformness: X will never, ever, attain that critical mass.

The only way it managed before is because a few of us all took it in turns trying to be the hero and single-handedly do everything which needed doing, which obviously doesn't last long before you horrifically burn out. But it did a really good job of masking the enormous shortage of manpower we've always faced.

Pondering the X client vulnerabilities

Posted May 28, 2013 15:43 UTC (Tue) by dgm (subscriber, #49227) [Link]

> People need to realize that even if Wayland does take over the desktop, the X libraries and protocol are still going to be used.

Not only that. X11 code and history is full of lessons on what does and doesn't work in a graphics subsystem. What leads to simple and elegant solutions, and what to ugly piles of hack-over-hack.

I bet Wayland would not be possible without the experience gained maintaining X.org. The main developers are, after all, X.org maintainers.

Pondering the X client vulnerabilities

Posted May 28, 2013 8:42 UTC (Tue) by ortalo (guest, #4654) [Link]

I am not so sure that what we need from the security point of view is "more code reviews". I mean: of course more code reviews (and/or more tools to check code) would be (are) very beneficial. However, this is primarily a question of priority, not specific to open source by the way. If security (especially software security) was more of a priority at large, more authority (and more money) would be directed at it and more code reviews would occur.
I am hoping for such things to occur. However, in the meantime...

... What we probably need much more crucially is a way of preventing false security claims; or more specifically in this case a way to remember (for a long time) that this code has *not* been reviewed or taken care of from the security point of view.
Many people, especially *very serious people* (to paraphrase a famous blogger), falsely claim that security is a given on a specific device when they are trying to advance their own project (or career). We need to change the way we look at these claims (and probably at these people's knowledge) in order to prevent them from simply waiting until a piece of software gets wide usage before claiming any security (or risk management, as they call it).
These false claims are a pain to counter. We all know in this technical field that either evidence is available (usually a lot of it) to demonstrate that some security effort has been made, or the device is not secure.

Let's change our default motto! Unless we can provide hundreds of logs of activity, let's say frankly: "of course it's not secure".
(Furthermore, sometimes, this is preferable... but that would be another topic.)

Pondering the X client vulnerabilities

Posted May 28, 2013 11:18 UTC (Tue) by epa (subscriber, #39769) [Link] (22 responses)

But why is the software written in such an unsafe language to start with? If there is an integer overflow, why does it silently continue with a dud array size rather than raising an error? Surely only one per cent of integer overflows are desired by the programmer, and can be marked explicitly in the code - the rest should be fatal errors by default. Similarly if the array access is out of bounds, why is it a good idea to trample memory in some random place rather than just aborting?

So doing these checks at runtime would make the code run half as fast? So what? The X client library is hardly a bottleneck for most applications and it wasn't one twenty years ago either. If there are performance-critical sections of code they can be audited carefully and checks can be disabled there.

The saddest part is that today it is still just as easy to put these bugs into your code.
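
For what it's worth, GCC and Clang releases somewhat later than this discussion do let one get part of the way there; a minimal sketch (the function name is invented for the example):

    #include <stdlib.h>

    /* The multiplication either succeeds or the program stops, instead of
     * silently continuing with a wrapped size.  __builtin_mul_overflow() is
     * a GCC/Clang extension. */
    void *alloc_array_or_die(size_t nmemb, size_t size)
    {
        size_t bytes;
        if (__builtin_mul_overflow(nmemb, size, &bytes))
            abort();                    /* overflow is a fatal error by default */
        return malloc(bytes);
    }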

Pondering the X client vulnerabilities

Posted May 28, 2013 11:26 UTC (Tue) by daniels (subscriber, #16193) [Link] (2 responses)

Similarly if the array access is out of bounds, why is it a good idea to trample memory in some random place rather than just aborting?

You say it ('just') as if it's possible to fix with literally no trade-offs whatsoever.

Pondering the X client vulnerabilities

Posted May 28, 2013 11:42 UTC (Tue) by epa (subscriber, #39769) [Link] (1 responses)

The way C works means you cannot fix all unsafe memory accesses (at least not without severe performance penalties). But you can certainly take care of a good proportion of them, such as array bounds checking when the array is accessed simply as a[x], with only a moderate slowdown.

Pondering the X client vulnerabilities

Posted May 28, 2013 12:02 UTC (Tue) by daniels (subscriber, #16193) [Link]

Nevermind the cost of rewriting all extant C code to take this extended type information into account, including the myriad places where the bounds simply get lost, or are blurred thanks to tricks like malloc(sizeof(foo) + n * sizeof(bar)).

In the real world where we have billions of lines of C, it's not a choice between 'I'd love this to crash and be terrible', and 'I'd love this to not crash and be great'.

Pondering the X client vulnerabilities

Posted May 28, 2013 15:27 UTC (Tue) by tjc (guest, #137) [Link] (18 responses)

> But why is the software written in such an unsafe language to start with?

Because the project started in 1984. What else would they have used -- Ada?

Pondering the X client vulnerabilities

Posted May 29, 2013 20:12 UTC (Wed) by epa (subscriber, #39769) [Link] (17 responses)

I went along with the article's statement that this code was from the early 90s (though the first versions of X were much older). But really my comment is not so much a criticism of the developers as of the glacial pace at which programming environments improve for general use. You'd think that safer C variants such as Cyclone would have displaced plain C by now, or, better, that the standards committees would have incorporated most of their safety features into newer C standards - with a way to turn them off on a case-by-case basis.

Pondering the X client vulnerabilities

Posted May 29, 2013 21:05 UTC (Wed) by jg (guest, #17537) [Link]

Well, there are still a few lines of code from W (in a few header files): those date from 1982 and 1983.

Bob Scheifler and I started working on X1 in June of 1984.

X11 (the protocol framework version used today) was shipped in 1987, IIRC.

There were precious few options available to us.

One of the early X Window managers and a terminal emulator were written in Clu, however. http://en.wikipedia.org/wiki/CLU_(programming_language) Bob Scheifler worked for Barbara Liskov.

But Clu wasn't available on other platforms. (Project Athena, where I worked at the time as a Digital employee, was a joint project with IBM, and what became the IBM RT/PC was known to be in our future; Suns were also widespread.) So we needed a language that was cross-platform. C++ wasn't yet an option either.

So we "knew better", but there just wasn't any actually viable alternative to C in that era.

Pondering the X client vulnerabilities

Posted May 29, 2013 21:08 UTC (Wed) by smoogen (subscriber, #97) [Link] (15 responses)

Those things are controlled more by social pressure than by code advances. It doesn't matter if you have figured out how much better Esperanto is than every other European language out there... if no one else around you uses it then you are pretty much talking to yourself. And while you can code yourself a nice program in Cyclone... you aren't going to get a lot of help when you want to make it big.

That social inertia is what keeps certain languages in place even after you would think they should have been put away 20-30 years ago.

Pondering the X client vulnerabilities

Posted May 29, 2013 21:29 UTC (Wed) by epa (subscriber, #39769) [Link] (14 responses)

Yes, absolutely, I was referring to the social environment that makes C a common language despite it being far too easy to make dangerous mistakes. It's not as if safer ways of programming didn't exist even back in 1984 - but then, the choice of C was more understandable since alternatives were equally flawed in various ways. Why, though, doesn't the C language become a bit safer with each new version (C89, C99, C11) with code gradually being moved across? Unchecked 'thin pointers' ought to be only for performance-critical code by now.

Pondering the X client vulnerabilities

Posted May 30, 2013 9:27 UTC (Thu) by mpr22 (subscriber, #60784) [Link] (6 responses)

If someone is coding in C (or C++, for that matter), there's a decent chance they already believe all their code is performance-critical.

Pondering the X client vulnerabilities

Posted May 30, 2013 11:22 UTC (Thu) by hummassa (subscriber, #307) [Link] (5 responses)

in the late 1990's and early 2000's it was somewhat true that we had lots of space for non-performance-critical code... But these days, what we have are lots of mobile platforms and high-power-efficiency server platforms, where performance and cost-to-benefit-ratio are measured not only in "operations per second", but also in "operations per milliwatt-hour"... so, unless the other languages made great strides in things like "not wasting a lot of RAM space and CPU cycles to achieve the same end result" (and yes, some did...) C/C++ (esp. the latter) seems like a good solution in many cases.

Pondering the X client vulnerabilities

Posted May 30, 2013 21:17 UTC (Thu) by dvdeug (guest, #10998) [Link] (3 responses)

Android runs Java. The first Power Mac G4 in 1999 had problems with export because it was classified as a supercomputer; every smart phone on the planet is faster than those systems running at 400 MHz.

I've never understood why an increase in speed is worth having to tell your customers that their credit card numbers are in the hands of hackers, or having your website hosting child porn. Taking the website down for a week while your IT department (getting massive overtime) is trying to recover what it can from your compromised systems is bloody expensive. It's a classic surcharge on day-to-day operations to prevent possible disaster.

Android doesn't really run Java

Posted May 31, 2013 11:39 UTC (Fri) by alex (subscriber, #1355) [Link] (2 responses)

Well obviously I'm being a little trollish ;-)

However, an awful lot of the performance-critical areas of the code are written in lower-level code and exported to Java via JNI (not to mention key components like the browser, which is pretty much all C++). There is a debate to be had as to how much you can ameliorate some of the worst areas of C with C++, but once you start up the road to interpreted/JITed higher-level languages you will pay a performance penalty. It's fine when they are making fairly simple decisions that result in work lower down the stack, but when you are moving lots of stuff around you really don't want to be in even the most efficient JIT code.

Android doesn't really run Java

Posted May 31, 2013 14:01 UTC (Fri) by hummassa (subscriber, #307) [Link]

You are right. And, of course, the performance penalty theoretically, at least, would increase still more when we are talking about performance as battery/energy economy. BTW -- THAT is what I'd like to see measured in this context.

On the same topic, people demonize C mainly because of three things: null pointers, buffer overflows and integer overflows. You can write beautiful and as-efficient-as-C code in C++ without any of those. But PHP/Java-like vulns (mainly data scrubbing/encoding/quoting errors) would persist, and those are a little bit more difficult to avoid.

Android doesn't really run Java

Posted May 31, 2013 18:12 UTC (Fri) by dvdeug (guest, #10998) [Link]

Java doesn't need to be interpreted; I presume it makes it easier to limit Android code and it certainly makes MIPS and ARM Androids both possible. In any case, it's not really relevant to the question of use-after-frees and buffer overruns.

The low-level stuff can't be ignored, but I'm a lot more comfortable having the base libraries be written in C, since they're written by someone who presumably knows what they're doing, and having the programs running on that code written in Java.

As for the browser, you're talking about a tool that deals with a sustained barrage of untrusted material, and it's not hard to feed a web browser compromised material. Looking at Chrome's recent vulnerabilities, in May we had 8 use-after-frees and one out-of-bounds read (and four other), and in March we had 6 use-after-frees and 1 out-of-bounds read (and 15 other, including some "memory corruptions"). (Counts of CVEs listed in Debian's changelog.)

As hummassa says, there's other standard security errors that Python and Java as well as C and C++ get up to. But 45% of Chrome's recent CVEs wouldn't have existed in Java or similar systems. Pick the low-hanging fruit first.

Pondering the X client vulnerabilities

Posted Jun 5, 2013 11:03 UTC (Wed) by epa (subscriber, #39769) [Link]

I'm not going to pretend that because computers are faster, there is no longer performance-critical code. (Even though it is worth bearing in mind that today's mobile platforms or ARM-based servers are still far more powerful than the high-end workstations for which X was written.) My point is that some code is performance-critical, but not *all of it*. The 80-20 rule applies - actually more like 99-1, where a small part of the code is the inner loop where micro-optimizations like skipping bounds checking make a difference.

By all means profile the code, find the hot sections, and after careful review turn off safety for those. That does not justify running without bounds checking or overflow checking for the rest of the code.

Pondering the X client vulnerabilities

Posted May 30, 2013 11:18 UTC (Thu) by etienne (guest, #25256) [Link] (2 responses)

> Unchecked 'thin pointers' ought to be only for performance-critical code by now.

Please define 'thick pointers', the opposite of 'thin pointers'.
An address plus a size? The size of what?
OK, for a structure, a thick pointer to that structure is { pointer + size of structure}.
For an array of structures, a thick pointer to that array is { pointer + size of complete array}. To get a thick pointer to an element of that array, you now need the size of that element - possible.
Now you want to scan through an array, you do not want a thick pointer to an element of the array (because you will not be able to increment this thick pointer to point to the next element), you want a thick pointer to an array of element, each times you increment the thick pointer you decrement the size of this thick pointer by the size of the structure of the array.
Now someone wants to search something in the array from the end of the array, each times you decrement the thick pointer you increment this thick pointer size by the size of the structure of the array. But then what stops you to go below the start of the array? You need to include a minimum base address into the thick pointer?
Now imagine all this is about chars and arrays of chars - your thick pointer is bigger than the string you are manipulating, and your code initially only prints a hexadecimal value into an array of 256 bytes, so you know it will not overflow.
You think the compiler will be able to optimise and remove a field of a structure (thick pointer size) when it is obvious there will not be any problem? It is not going to happen if the function is not static inside the C file.
The "thick pointer size" can also be a variable inside the compiler (range analysis) instead of inside the compiled function, but that has its limits.

Pondering the X client vulnerabilities

Posted May 30, 2013 12:20 UTC (Thu) by mpr22 (subscriber, #60784) [Link]

I say the following as someone who loves C and codes in it for a hobby and for a living: As footguns go, the C-style (let's not use the word "thin", since it appears to have caused confusion) pointer is a Single Action Army with six in the cylinder.

Safer-than-C pointers have stricter semantics. You can't just casually toss them into, and fish them back out of, the semantics-free abyss that is void *. You can't use safe_pointer_to_foo as if it was safe_pointer_to_array_of_foo; if you want a mutable handle to an entry in an array_of_foo, you create an iterator_on_array_of_foo, which consists of a safe_pointer_to_array_of_foo and an index.

And the principle of defensive design indicates that since language and library safety features exist to protect the user from the effects of the programmer's idiocy, the syntax that is quick and easy to use should be the one that has all the safeties enabled, and if the compiler can't sufficiently intelligently optimize away redundant safeties then there should be an unsafe syntax carrying a significant dose of syntactic salt. Unfortunately, because people tend to resist even those changes from which they would get real benefits, getting such a change of outlook past the standards committee is very hard.

Pondering the X client vulnerabilities

Posted May 30, 2013 17:15 UTC (Thu) by tjc (guest, #137) [Link]

The "thick pointer" size problem could be solved by defining a thick pointer with just one additional member, an index (generated by the compiler or run-time code) into a global run-time data structure that includes the remaining information about the pointer. This way all thick pointers have the same size, and the remaining information doesn't have to be passed around on the stack.

Pondering the X client vulnerabilities

Posted May 30, 2013 20:48 UTC (Thu) by kleptog (subscriber, #1183) [Link] (3 responses)

The amount of code written in C is so vast that trying to fiddle with basic features of the language (like pointer arithmetic) will simply lead to vast swathes of C code failing to compile. To get people to advance to newer versions of the language requires not leaving a huge percentage of people behind. Witness Python 2 vs 3.

In any case, C doesn't need to become a safe language. There are plenty of safe languages to code in; you need at least one unsafe language to code all the safe languages in. It may as well be C.

What I think would really make a difference is if someone actually implemented a working C-like language (you couldn't call it C) that had a few extra restrictions, like: 'a pointer assigned an address in array X cannot point outside that array by pointer arithmetic'. And then demonstrated that it (a) performed ok and (b) ran >99% of existing C code.

I think the easiest way to achieve this would be to code a C interpreter in PyPy where you could implement the restrictions, and you get a JIT to make it perform. If you could make this work, my guess is you could convince a lot of people to run most C programs under such an interpreter and you would have solved the problems for a significant portion of the codebase.

But whether it's actually achievable....

Pondering the X client vulnerabilities

Posted Jun 5, 2013 10:10 UTC (Wed) by epa (subscriber, #39769) [Link] (1 responses)

Have you looked at Cyclone?

Pondering the X client vulnerabilities

Posted Jun 5, 2013 20:31 UTC (Wed) by kleptog (subscriber, #1183) [Link]

Interesting, but it looks like it punted and just created a new language that looks like C but isn't. Which means it's never going to catch on, because it would require changing the code.

I mean, valgrind manages to catch all sorts of memory errors without any source changes and with not many false positives. ISTM a compiler should be able to do a similar job but more efficiently (because valgrind is slow as molasses, but it points out the error to you right away, which is what makes it worth it).
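
A tiny example of the kind of thing memcheck catches with no source changes (illustrative only); running the unmodified binary under valgrind reports an invalid read at the marked line:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *a = malloc(4 * sizeof *a);
        if (!a)
            return 1;
        for (int i = 0; i < 4; i++)
            a[i] = i;

        printf("%d\n", a[4]);   /* reads one element past the allocation */
        free(a);
        return 0;
    }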

Pondering the X client vulnerabilities

Posted Jun 6, 2013 8:55 UTC (Thu) by kevinm (guest, #69913) [Link]

Actually, for the restriction you gave ("a pointer assigned an address in array X cannot point outside that array by pointer arithmetic"), you *could* implement pretty much that restriction and still call it C. In C, pointer arithmetic is only defined if it results in a pointer within the original object or one element past the end of an array.

The major obstacle would be that you'd need an ABI change to pass around the necessary object extent data to enforce this everywhere.
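
A small illustration of the rule being cited (hypothetical example code): pointer arithmetic is defined only inside the original array plus one element past its end, so a checking implementation could legitimately trap the last assignment below.

    #include <stdio.h>

    int main(void)
    {
        int a[4] = { 1, 2, 3, 4 };
        int *p = a;

        p = p + 4;      /* defined: one past the end (must not be dereferenced) */
        p = p - 4;      /* defined: back inside the array */
        p = p - 1;      /* undefined: now points before a[0] */

        printf("%p\n", (void *)p);
        return 0;
    }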

Pondering the X client vulnerabilities

Posted May 28, 2013 13:16 UTC (Tue) by clugstj (subscriber, #4020) [Link] (1 responses)

So why were 30 CVE numbers taken out instead of 1?

Pondering the X client vulnerabilities

Posted May 28, 2013 17:46 UTC (Tue) by acoopersmith (subscriber, #72107) [Link]

> So why were 30 CVE numbers taken out instead of 1?

Because that's what the CVE guidelines told us to do, for two reasons:

1) There’s several different types of bug here - integer overflow, deep stack recursion, missing return code validation, & buffer overflow.

2) These bugs were replicated in 20 different packages, each with their own version numbers and release history. The CVE assigners don't like having a single CVE that says “This covers libX11 1.5, libXv 1.0.8, libXi 1.7.1, ... ”because then you can't say “I have the fix for CVE-2013-XXXX” if you have installed the new libX11 but not the new libXi.

Pondering the X client vulnerabilities

Posted May 28, 2013 20:16 UTC (Tue) by kleptog (subscriber, #1183) [Link] (1 responses)

While introducing a code review process into a company, I have noticed that while the code review tool helps a lot and you do catch lots of stupid bugs and future problems early, what it's really good at catching is when people's expectations differ.

That is, when different team members expect something to be fixed in different ways because they have differing expectations. In a small group you talk it out and get a solution that everybody agrees with.

This, I think, makes it much more difficult for open source projects, because you don't get the real-time feedback, and when disagreements arise you don't have the offline high-bandwidth, low-latency channel to discuss the problem and come up with better solutions. Nothing is more demotivating than having your patch stuck in the review phase.

Fortunately many larger projects do successful patch review and it works reasonably well, but it's not always easy.

Pondering the X client vulnerabilities

Posted May 29, 2013 21:13 UTC (Wed) by jg (guest, #17537) [Link]

Key pieces of X were written by individuals, not teams. I did the original Xlib (under extreme time pressure for X11); Bob did the device independent part of the X server up until X11, and so on.

The code base in that era was very much smaller.

A community large enough to provide useful code reviews did not exist in the early years of X... And security problems in that era were very small: the bad guys weren't out to get us in the 1980's and early 1990's, so there was little incentive to go find them or do such code reviews once there was a viable community.

And then things stagnated, particularly after the X Consortium closed. Long, long, long story there....

Unfortunately, many mistakes of my era were cut and pasted into later X extension libraries, replicating bugs. And other people created new bugs as X continued to evolve over the decades as well. Creativity is found everywhere ;-).

Pondering the X client vulnerabilities

Posted May 30, 2013 17:52 UTC (Thu) by krakensden (subscriber, #72039) [Link]

I would be interested in an investigation of the non-X related problems he reported, like bad dbus configuration. The freedesktop.org, gnome, and kde layers of the stack are frankly a mystery to me (and others- google for 'disable networkmanager'), and there is a great deal of churn...


Copyright © 2013, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds