Pondering the X client vulnerabilities
The vulnerabilities
The X Window System has a classic client/server architecture, with the X server providing display and input services for a range of client applications. The two sides communicate via a well-defined (if much extended) protocol that, in theory, provides for network-transparent operation. In any protocol implementation, developers must take into account the possibility that one of the participants is compromised or overtly hostile. In short, that is what did not happen in the X client libraries.
In particular, the client libraries contained many assumptions about the trustworthiness of the data coming from the X server. Keymap indexes were not checked to verify that they fell in the range of known keys. Very large buffer size values from the server could cause integer overflows on the client side; that, in turn, could lead to the allocation of undersized buffers that could subsequently be overflowed. File-processing code could be forced into unbounded recursion by hostile input. And so on. The bottom line is that an attacker who controls an X server has a long list of possible ways to compromise the clients connected to that server.
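To make the integer-overflow case concrete, here is a minimal sketch of the pattern in illustrative C (not code from the libraries themselves): a count arrives in a server reply, and the multiplication in the allocation can wrap around unless it is checked first.

    /* Illustrative sketch of the bug class described above -- not
     * actual Xlib code.  "count" stands in for a length field taken
     * from a (possibly hostile) server reply. */
    #include <stdint.h>
    #include <stdlib.h>

    struct item { uint32_t data[4]; };

    struct item *read_items(uint32_t count)
    {
        /* Without this check, count * sizeof(struct item) could wrap
         * around a 32-bit size_t, yielding a tiny allocation that a
         * later copy of "count" items would then overflow. */
        if (count > SIZE_MAX / sizeof(struct item))
            return NULL;    /* reject absurd or hostile lengths */
        return calloc(count, sizeof(struct item));
    }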
Despite the seemingly scary nature of most of these vulnerabilities, the impact on most users should be minimal. Most of the time, the user is in control of the server, which is usually running in a privileged mode. Any remote attacker who can compromise such a server will not need to be concerned with client library exploits; the game will have already been lost. The biggest threat, arguably, is attacks against setuid programs by a local user. If the user can control the server (perhaps by using one of the virtual X server applications), it may be possible to subvert a privileged program, enabling privilege escalation on the local machine. For this reason, applying the updates makes sense in many situations, but it may not be a matter of immediate urgency.
Many of these vulnerabilities have been around for a long time; the advisory states that "X.Org believes all prior versions of these libraries contain these flaws, dating back to their introduction." That introduction, for the bulk of the libraries involved, was in the 1990s. That is a long time for some (in retrospect) fairly obvious errors to go undetected in code that is this widely used.
Some thoughts
One can certainly make excuses for the developers who implemented those libraries 20 years or so ago. The net was not so hostile — or so pervasive — and it hadn't yet occurred to developers that their code might have to interact with actively hostile peers. A lot of code written in those days has needed refurbishing since. It is a bit more interesting to ponder why that refurbishing took so long to happen in this case. X has long inspired fears of security issues, after all. But, traditionally, those fears have been centered on the server, since that is where the privilege lies. If you operate under the assumption that the server is the line of defense, there is little reason to be concerned about the prospect of the server attacking its clients. It undoubtedly seemed better to focus on reinforcing the server itself.
Even so, one might think that somebody would have gotten around to looking at the X library code before Ilja van Sprundel took on the task in 2013. After all, if vulnerable code exists, somebody, somewhere will figure out a way to exploit it, and attackers have no qualms about looking for problems in ancient code. The X libraries are widely used and, for better or worse, they do often get linked into privileged programs that, arguably, should not be mixing interface and privilege in this way. It seems fairly likely that at least some of these vulnerabilities have been known to attackers for some time.
Speaking of review
As Al Viro has pointed out, the security updates caused some problems of their own due to bugs that would have been caught in a proper review process. Given the age and impact of the vulnerabilities, it arguably would have been better to skip the embargo process and post the fixes publicly before shipping them. After all, as Al notes, unreviewed "security" fixes could be a way to slip new vulnerabilities into a system.
Fixing this will not be easy. Deep code review has always been in short supply in our community, and for easily understandable reasons: the work is tedious, painstaking, and often unrewarding. Developers with the skill to perform this kind of review tend to be happier when they are writing code of their own. Getting these developers to volunteer more of their time for code review is always going to be an uphill battle.
The various companies working in this area could help the situation by paying for more security review work. There are some signs that more of this is happening than in the past, but this, too, has tended to be a hard sell. Most companies sponsor development work to help ensure that their own needs are adequately met by the project(s) in question. General security work does not add features or enable more hardware; the rewards from doing this work may seem nebulous at best. So most companies, especially if they do not feel threatened by the current level of security in our code, feel that security work is something they can leave to others.
So we will probably continue to muddle along with code that contains a variety of vulnerabilities, both old and new. Most of the time, it works well enough — at least, as far as we know. And on that cheery note, your editor has to run; there's a whole set of new security updates to apply.
Posted May 27, 2013 22:23 UTC (Mon)
by hp (guest, #5220)
[Link] (30 responses)
I think perhaps this went unfixed for so long because many people just don't see the value in it (compared to other work at least). But with open source I guess all it takes is one person to do it!
Posted May 28, 2013 1:26 UTC (Tue)
by cesarb (subscriber, #6266)
[Link] (26 responses)
IIRC, at least one toolkit refuses to run if it finds out the process is setuid root.
Posted May 28, 2013 1:38 UTC (Tue)
by hp (guest, #5220)
[Link] (25 responses)
though I guess for the most part setuid is ALWAYS a bad idea ;-)
That gtk.org article nicely points out that almost all security bugs are also plain old bugs, though, so it can't hurt to fix them. But it's not like suid X apps are safe now.
Posted May 28, 2013 2:24 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link] (3 responses)
http://cansecwest.com/slides/2013/Assessing%20the%20Linux...
Posted May 28, 2013 6:34 UTC (Tue)
by halla (subscriber, #14185)
[Link] (2 responses)
It's kind of a pet peeve with me, just like when articles assume that parents these days are internet newbies compared to their kids. Public access to the internet is twenty years old now in the Netherlands; by now, it's no longer a new thing.
Posted May 28, 2013 6:45 UTC (Tue)
by rahulsundaram (subscriber, #21946)
[Link] (1 responses)
I noticed quite a few such minor niggles, such as calling the Unity interface a mix of Windows and OS X while putting up a screenshot of GNOME Shell, but they aren't the focus of the presentation, and I am more interested in hearing thoughts on the security details.
Posted May 28, 2013 8:41 UTC (Tue)
by halla (subscriber, #14185)
[Link]
Yes, indeed. It's just something that irks me.
Posted May 28, 2013 13:19 UTC (Tue)
by jengelh (guest, #33263)
[Link] (5 responses)
Did XAce not happen? (I don't really know how well it works or if there's any users…)
Posted May 28, 2013 14:35 UTC (Tue)
by otaylor (subscriber, #4190)
[Link] (4 responses)
Posted May 29, 2013 9:02 UTC (Wed)
by nirbheek (subscriber, #54111)
[Link] (3 responses)
Posted May 29, 2013 18:20 UTC (Wed)
by dtlin (subscriber, #36537)
[Link] (1 responses)
Posted May 29, 2013 18:47 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Jul 15, 2013 11:34 UTC (Mon)
by hummassa (subscriber, #307)
[Link]
Posted May 30, 2013 16:24 UTC (Thu)
by richmoore (guest, #53133)
[Link] (14 responses)
Any distro daft enough to package a setuid binary that links a GUI needs to buck their ideas up too.
Posted May 30, 2013 16:54 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (13 responses)
Posted May 30, 2013 17:45 UTC (Thu)
by anselm (subscriber, #2796)
[Link] (11 responses)
There's nothing wrong with Qt in general.
I would however be very suspicious of any SUID program that required the use of »network support and threading primitives«, whether it is based on Qt or anything else. (The X server comes to mind but that's another story.) If nothing else it should be possible to structure one's X client program such that any code that needs special privileges is put into a minimal separate executable that can then be SUID – and nowadays there are various other approaches one could take that may make SUID completely unnecessary in this context.
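As a rough sketch of that structure (the helper path and name below are hypothetical), the GUI client stays unprivileged and hands the one privileged operation to a tiny helper that alone carries the SUID bit:

    /* Sketch: the X client runs unprivileged; only the small helper
     * binary, which links no X libraries and validates its own
     * arguments, is installed SUID.  The path is made up. */
    #include <sys/wait.h>
    #include <unistd.h>

    int run_privileged_step(const char *arg)
    {
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {
            execl("/usr/libexec/foo-helper", "foo-helper", arg,
                  (char *)NULL);
            _exit(127);     /* exec failed */
        }
        int status;
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }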
Posted May 30, 2013 17:48 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (10 responses)
So you need at least to have the code that parses config files run as root.
Posted May 30, 2013 22:27 UTC (Thu)
by nix (subscriber, #2304)
[Link] (4 responses)
Posted May 30, 2013 23:12 UTC (Thu)
by anselm (subscriber, #2796)
[Link] (3 responses)
Since when is BIND a SUID-root X11 client?
Posted Jun 3, 2013 13:54 UTC (Mon)
by nix (subscriber, #2304)
[Link] (2 responses)
Qt is not just about GUI stuff, and hasn't been for as long as KDE has been using it, pretty much. (It only got properly separated out in Qt 4, though.)
Posted Jun 3, 2013 14:21 UTC (Mon)
by anselm (subscriber, #2796)
[Link] (1 responses)
Nobody needs to use Qt's threading and networking primitives when running SUID root, which is a completely different ballgame than running as root as a daemon.
There's probably nothing wrong with using the non-GUI parts of Qt to implement a threaded networking daemon, if one doesn't mind the Qt haters' jumping all over one. But such a daemon would not run SUID root; it would be started as root initially (from something like SysV init or systemd) and then drop its root privileges ASAP.
It was actually Cyberax who claimed that »It's actually EXTREMELY common to have networked programs to be SUIDed«. This is apparently so common that so far he hasn't managed to come up with one single example.
Posted Jun 5, 2013 20:09 UTC (Wed)
by nix (subscriber, #2304)
[Link]
Posted May 30, 2013 23:15 UTC (Thu)
by anselm (subscriber, #2796)
[Link]
Programs that need to open ports below 1024 for listening are not usually X clients. They are servers/daemons that are commonly run as root to begin with (and drop their privileges as soon as they can), rather than SUID-root programs. This is a completely different ball game.
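A minimal sketch of that start-as-root-then-drop pattern (the target user "daemon" is just an example): the daemon binds its low port while still root, then drops privileges irreversibly before handling any untrusted input.

    /* Sketch: call drop_privileges("daemon") right after binding
     * the privileged port; the user name is illustrative. */
    #include <grp.h>
    #include <pwd.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static void drop_privileges(const char *user)
    {
        struct passwd *pw = getpwnam(user);
        if (pw == NULL) {
            fprintf(stderr, "unknown user %s\n", user);
            exit(1);
        }
        /* Order matters: supplementary groups, then gid, then uid. */
        if (setgroups(0, NULL) != 0 ||
            setgid(pw->pw_gid) != 0 ||
            setuid(pw->pw_uid) != 0) {
            perror("dropping privileges");
            exit(1);
        }
        /* Paranoia: verify the drop cannot be undone. */
        if (setuid(0) == 0) {
            fprintf(stderr, "privilege drop did not stick\n");
            exit(1);
        }
    }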
Posted May 31, 2013 10:45 UTC (Fri)
by jschrod (subscriber, #1646)
[Link] (3 responses)
I've been using Linux and Unix for many decades, and I'm not aware of them, but I'm willing to learn more. (I've got daemons that bind to ports, but they are not suid; they are started by root. Even an X server is most often not suid nowadays, with xdm and friends.)
Posted May 31, 2013 15:26 UTC (Fri)
by hummassa (subscriber, #307)
[Link] (2 responses)
In my 5308-packages-installed Kubuntu desktop, I found ONE setuid file linked with X libs: kppp (and none setgid). One moment of googling, and I found this:
> Why is KPPP installed with the setuid bit on?
> There is no need for the setuid bit, if you know a bit of UNIX® systems administration. Simply create a modem group, add all users that you want to give access to the modem to that group and make the modem device read/writable for that group. Also if you want DNS configuration to work with KPPP, then /etc/resolv.conf must be read/writable by the members of that group. The same counts for /etc/ppp/pap-secrets and /etc/ppp/chap-secrets if you want to use the built-in PAP or CHAP support, respectively.
> The KPPP team has lately done a lot of work to make KPPP setuid-safe. But it's up to you to decide if you install and how you install it.
http://docs.kde.org/development/en/kdenetwork/kppp/faq.ht...
I will not bother with it, for now...
Posted May 31, 2013 15:32 UTC (Fri)
by jschrod (subscriber, #1646)
[Link] (1 responses)
FTR: I didn't mean to question that suid X programs exist. I question that this happens to be "EXTREMELY common" and that access to low network ports is the reason for it.
Posted May 31, 2013 18:58 UTC (Fri)
by hummassa (subscriber, #307)
[Link]
X screensavers (I have a tonne of them installed, and none is suid) usually do not try to listen to restricted ports. Kppp is usually more secure if used suid, for the reasons described here (*). The "extremely common" thing is way off IMHO.
Posted May 30, 2013 19:41 UTC (Thu)
by richmoore (guest, #53133)
[Link]
The problem reported against Qt was simply someone using something incorrectly, and being told they've done so, then complaining it doesn't work. A couple of the issues spotted were genuine bugs, and should be addressed (I'm not aware that they actually filed them btw) but they aren't security holes in Qt.
Posted May 28, 2013 20:44 UTC (Tue)
by acoopersmith (subscriber, #72107)
[Link] (2 responses)
While we’d like to hope so, we know it’s not true amongst the application developer population in general. Besides the KDE issues mentioned in the CanSecWest preso, and GTK’s prohibition of setuid use (though that’s not about not trusting Xlib, but not wanting to have to block or sandbox user-specified loadable modules), we know there’s other examples out there.
There’s even a few claims to the contrary:
> The amount of code in Xlib is very small, and has been extensively security audited.
> It is very unlikely that there are crashing bugs lurking in Xlib itself.
— http://www.jwz.org/xscreensaver/toolkits.html
which sounds like someone who never looked at the XKB & XIM portions of Xlib. I never thought it would make me so unhappy to prove jwz wrong.
It also goes to show that extensive security audits in the past are no match for either evolving code bases or evolving attack profiles, and anything that hasn't been audited recently, by someone with the latest code and knowledge of the latest advances in security vulnerabilities and exploits, is likely to have some issues.
Posted May 28, 2013 21:09 UTC (Tue)
by dlang (guest, #313)
[Link]
If you are assuming the server is not a threat when doing your evaluation, then Xlib probably is fairly well covered.
However, once you add the server to the threat model, all sorts of new things start showing up: not just this sort of "what if the server gives you bogus data", but also "what if it gives you legitimate data that isn't what the user wants"; and from there things get _really_ ugly.
Posted May 28, 2013 21:14 UTC (Tue)
by viro (subscriber, #7872)
[Link]
Posted May 28, 2013 3:14 UTC (Tue)
by airlied (subscriber, #9104)
[Link] (2 responses)
Posted May 28, 2013 17:52 UTC (Tue)
by acoopersmith (subscriber, #72107)
[Link] (1 responses)
Ten years ago we'd have had 5 CVEs instead of 30, since we'd just say “The X11R6.8.5 monolith has these 5 classes of bug, upgrade your entire X stack to X11R6.8.6.” instead of having a separate CVE for each library that contains the same type of bug (or in some cases, the exact same bug copied & pasted from Xlib code).
It would be far less scary to see in the CVE lists, but far more painful for packagers, distro makers, and users, who would have to replace the entire X monolith to update some libraries they don't even use all of.
While the build & packaging of X has generally gotten better over the last ten years, there is always some price to pay, and a longer CVE list is a cost we can live with.
Posted Jul 15, 2013 7:41 UTC (Mon)
by yuhong (guest, #57183)
[Link]
Posted May 28, 2013 3:29 UTC (Tue)
by dlang (guest, #313)
[Link] (2 responses)
Yes, the code is open, but it really was under the very tight control of one organization for a very long time. By the time it 'escaped', it was huge, requiring nonstandard tools to deal with it.
In recent years, it's also suffered from the fact that many of the core developers are publicly calling it obsolete, saying that Wayland should replace it and that, therefore, that's where all the effort should be spent.
All of this has discouraged people from digging into the code (although the x.org project has made progress in getting the codebase to be easier for people to deal with, it's still a monster)
People need to realize that even if Wayland does take over the desktop, the X libraries and protocol are still going to be used.
Posted May 28, 2013 11:24 UTC (Tue)
by daniels (subscriber, #16193)
[Link]
> People need to realize that even if Wayland does take over the desktop, the X libraries and protocol are still going to be used.

Yeah, obviously they will, approximately forever. But even long before anyone was calling it obsolete and focusing on other work, it still didn't have the kinds of contributors needed to proactively find things like this. It's too enormous a surface of too terrible code to ever grow a proper community around it. LibreOffice managed thanks to its huge visibility and cross-platformness: X will never, ever, attain that critical mass. The only way it managed before is because a few of us all took it in turns trying to be the hero and single-handedly do everything which needed doing, which obviously doesn't last long before you horrifically burn out. But it did a really good job of masking the enormous shortage of manpower we've always faced.
Posted May 28, 2013 15:43 UTC (Tue)
by dgm (subscriber, #49227)
[Link]
Not only that. X11 code and history is full of lessons on what does and doesn't work in a graphics subsystem. What leads to simple and elegant solutions, and what to ugly piles of hack-over-hack.
I bet Wayland would not be possible without the experience gained maintaining X.org. The main developers are, after all, X.org maintainers.
Posted May 28, 2013 8:42 UTC (Tue)
by ortalo (guest, #4654)
[Link]
I am hoping for such things to occur. However, in the meantime... what we probably need much more crucially is a way of preventing false security claims; or, more specifically in this case, a way to remember (for a long time) that this code has *not* been reviewed or taken care of from the security point of view.

Many people, especially *very serious people* (to paraphrase a famous blogger), falsely claim that security is a given on a specific device when they are trying to advance their own project (or career). We need to change the way we look at these claims (and at these people's knowledge, probably) in order to prevent them from simply waiting for a piece of software to get wide usage before claiming any security (or risk management, as they call it). These false claims are a pain to counter. We all know in this technical field that either pieces of evidence are available (usually a lot of them) to demonstrate that some security effort has been made, or the device is not secure.

Let's change our default motto! Unless we can provide hundreds of logs of activity, let's say frankly: "of course it's not secure."

(Furthermore, sometimes, this is preferable... but that would be another topic.)
Posted May 28, 2013 11:18 UTC (Tue)
by epa (subscriber, #39769)
[Link] (22 responses)
So doing these checks at runtime would make the code run half as fast? So what? The X client library is hardly a bottleneck for most applications and it wasn't one twenty years ago either. If there are performance-critical sections of code they can be audited carefully and checks can be disabled there.
The saddest part is that today it is still just as easy to put these bugs into your code.
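For scale, the check being discussed is typically a single comparison per access; here is a sketch of the keymap-index case from the article (names illustrative, not from the real libraries):

    /* Sketch: validate a server-supplied index before using it,
     * and fail loudly instead of reading out of bounds. */
    #include <stdio.h>
    #include <stdlib.h>

    static unsigned long keysym_for(const unsigned long *keymap,
                                    size_t nkeys, size_t index)
    {
        if (index >= nkeys) {       /* server sent a bogus index */
            fprintf(stderr, "keymap index %zu out of range\n", index);
            abort();
        }
        return keymap[index];
    }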
Posted May 28, 2013 11:26 UTC (Tue)
by daniels (subscriber, #16193)
[Link] (2 responses)
Similarly, if the array access is out of bounds, why is it a good idea to trample memory in some random place rather than just aborting? You say it ('just') as if it's possible to fix with literally no trade-offs whatsoever.
Posted May 28, 2013 11:42 UTC (Tue)
by epa (subscriber, #39769)
[Link] (1 responses)
Posted May 28, 2013 12:02 UTC (Tue)
by daniels (subscriber, #16193)
[Link]
In the real world where we have billions of lines of C, it's not a choice between 'I'd love this to crash and be terrible', and 'I'd love this to not crash and be great'.
Posted May 28, 2013 15:27 UTC (Tue)
by tjc (guest, #137)
[Link] (18 responses)
Because the project started in 1984. What else would they have used -- Ada?
Posted May 29, 2013 20:12 UTC (Wed)
by epa (subscriber, #39769)
[Link] (17 responses)
Posted May 29, 2013 21:05 UTC (Wed)
by jg (guest, #17537)
[Link]
Bob Scheifler and I started working on X1 in June of 1984.
X11 (the version of the protocol framework used today) was shipped in 1987, IIRC.
There were precious few options available to us.
One of the early X window managers and a terminal emulator were written in Clu, however (Bob Scheifler worked for Barbara Liskov): http://en.wikipedia.org/wiki/CLU_(programming_language)
But Clu wasn't available on other platforms. (Project Athena, where I worked at the time as a Digital employee, was a joint project with IBM, and what became the IBM RT/PC was known to be in our future; Suns were also widespread.) So we needed a language that was cross-platform. C++ wasn't yet an option either.
So we "knew better", but there just wasn't any actually viable alternative to C in that era.
Posted May 29, 2013 21:08 UTC (Wed)
by smoogen (subscriber, #97)
[Link] (15 responses)
That social inertia is what keeps certain languages in place even after you would think they should have been put away 20-30 years ago.
Posted May 29, 2013 21:29 UTC (Wed)
by epa (subscriber, #39769)
[Link] (14 responses)
Posted May 30, 2013 9:27 UTC (Thu)
by mpr22 (subscriber, #60784)
[Link] (6 responses)
If someone is coding in C (or C++, for that matter), there's a decent chance they already believe all their code is performance-critical.
Posted May 30, 2013 11:22 UTC (Thu)
by hummassa (subscriber, #307)
[Link] (5 responses)
Posted May 30, 2013 21:17 UTC (Thu)
by dvdeug (guest, #10998)
[Link] (3 responses)
I've never understood why an increase in speed is worth having to tell your customers that their credit card numbers are in the hands of hackers, or having your website hosting child porn. Taking the website down for a week while your IT department (getting massive overtime) is trying to recover what it can from your compromised systems is bloody expensive. It's a classic surcharge on day-to-day operations to prevent possible disaster.
Posted May 31, 2013 11:39 UTC (Fri)
by alex (subscriber, #1355)
[Link] (2 responses)
However, an awful lot of the performance-critical areas of the code are written in lower-level code and exported to Java via JNI (not to mention key components like the browser, which is pretty much all C++). There is a debate to be had as to how much you can ameliorate some of the worst areas of C with C++, but once you start up the road to interpreted/JITed higher-level languages you will pay a performance penalty. It's fine when they are making fairly simple decisions that result in work for lower down the stack, but when you are moving lots of stuff around you really don't want to be in even the most efficient JIT code.
Posted May 31, 2013 14:01 UTC (Fri)
by hummassa (subscriber, #307)
[Link]
On the same topic, people demonize C mainly because of three things: null pointers, buffer overflows, and integer overflows. You can write beautiful C++ code, as efficient as C, without any of those. But PHP/Java-like vulns (mainly data scrubbing/encoding/quoting errors) would persist; those are a little bit more difficult.
Posted May 31, 2013 18:12 UTC (Fri)
by dvdeug (guest, #10998)
[Link]
The low-level stuff can't be ignored, but I'm a lot more comfortable having the base libraries be written in C, since they're written by someone who presumably knows what they're doing, and having the programs running on that code written in Java.
As for the browser, you're talking about a tool that deals with a sustained barrage of untrusted material, and it's not hard to feed a web browser compromised material. Looking at Chrome's recent vulnerabilities, in May we had 8 use-after-frees and one out-of-bounds read (and four other), and in March we had 6 use-after-frees and 1 out-of-bounds read (and 15 other, including some "memory corruptions"). (Counts of CVEs listed in Debian's changelog.)
As hummassa says, there are other standard security errors that Python and Java, as well as C and C++, are prone to. But 45% of Chrome's recent CVEs wouldn't have existed in Java or similar systems. Pick the low-hanging fruit first.
Posted Jun 5, 2013 11:03 UTC (Wed)
by epa (subscriber, #39769)
[Link]
By all means profile the code, find the hot sections, and after careful review turn off safety for those. That does not justify running without bounds checking or overflow checking for the rest of the code.
Posted May 30, 2013 11:18 UTC (Thu)
by etienne (guest, #25256)
[Link] (2 responses)
Please define 'thick pointers', the opposite of 'thin pointers'.
An address plus a size? The size of what?

OK, for a structure, a thick pointer to that structure is {pointer + size of the structure}. For an array of structures, a thick pointer to that array is {pointer + size of the complete array}. To get a thick pointer to an element of that array, you now need the size of that element - possible.

Now you want to scan through an array: you do not want a thick pointer to an element of the array (because you would not be able to increment it to point to the next element); you want a thick pointer to the array of elements, and each time you increment the thick pointer you decrement its size by the size of the array's element. Now someone wants to search the array starting from its end: each time you decrement the thick pointer you increment its size by the element size. But then what stops you from going below the start of the array? You need to include a minimum base address in the thick pointer?

Now imagine all this is about chars and arrays of chars - your thick pointer is bigger than the string you are manipulating, and your code initially only prints a hexadecimal value into an array of 256 bytes, where you know it will not overflow. You think the compiler will be able to optimise away a field of a structure (the thick pointer size) when it is obvious there will not be any problem? That is not going to happen if the function is not static inside the C file. The "thick pointer size" can also be a variable inside the compiler (range analysis) instead of inside the compiled function, but that has its limits.
Posted May 30, 2013 12:20 UTC (Thu)
by mpr22 (subscriber, #60784)
[Link]
I say the following as someone who loves C and codes in it for a hobby and for a living: As footguns go, the C-style (let's not use the word "thin", since it appears to have caused confusion) pointer is a Single Action Army with six in the cylinder.

Safer-than-C pointers have stricter semantics. You can't just casually toss them into, and fish them back out of, the semantics-free abyss that is void *. You can't use safe_pointer_to_foo as if it was safe_pointer_to_array_of_foo; if you want a mutable handle to an entry in an array_of_foo, you create an iterator_on_array_of_foo, which consists of a safe_pointer_to_array_of_foo and an index.

And the principle of defensive design indicates that since language and library safety features exist to protect the user from the effects of the programmer's idiocy, the syntax that is quick and easy to use should be the one that has all the safeties enabled, and if the compiler can't sufficiently intelligently optimize away redundant safeties then there should be an unsafe syntax carrying a significant dose of syntactic salt. Unfortunately, because people tend to resist even those changes from which they would get real benefits, getting such a change of outlook past the standards committee is very hard.
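For concreteness, here is one way the iterator_on_array_of_foo idea might be sketched in plain C (purely illustrative, with made-up names, not a proposal from the comment above): a handle carrying base, length, and index, with the checked accessor as the easy default.

    /* Sketch of a checked "iterator on array of foo": the base
     * pointer, element count, and current index travel together,
     * and access is bounds-checked by default. */
    #include <stdlib.h>

    struct foo { int x; };

    struct foo_iter {
        struct foo *base;   /* start of the array */
        size_t      len;    /* number of elements */
        size_t      idx;    /* current position */
    };

    static struct foo *foo_iter_get(const struct foo_iter *it)
    {
        if (it->idx >= it->len)
            abort();        /* fail loudly; never trample memory */
        return &it->base[it->idx];
    }

    static int foo_iter_next(struct foo_iter *it)
    {
        return ++it->idx < it->len;   /* 0 once past the end */
    }

An unchecked accessor, by the reasoning above, would exist but carry deliberately heavier syntax.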
Posted May 30, 2013 17:15 UTC (Thu)
by tjc (guest, #137)
[Link]
Posted May 30, 2013 20:48 UTC (Thu)
by kleptog (subscriber, #1183)
[Link] (3 responses)
In any case, C doesn't need to become a safe language. There are plenty of safe languages to code in; you need at least one unsafe language to code all the safe languages in, and it may as well be C.
What I think would really make a difference is if someone actually implemented a working C-like language (you couldn't call it C) that had a few extra restrictions, like: 'a pointer assigned an address in array X cannot point outside that array by pointer arithmetic'. And then demonstrated that it (a) performed ok and (b) ran >99% of existing C code.
I think the easiest way to achieve this would be to code a C interpreter in PyPy, where you could implement the restrictions, and you get a JIT to make it perform. If you could make this work, my guess is you could convince a lot of people to run most C programs under such an interpreter, and you would have solved the problem for a significant portion of the codebase.
But whether it's actually achievable....
Posted Jun 5, 2013 10:10 UTC (Wed)
by epa (subscriber, #39769)
[Link] (1 responses)
Posted Jun 5, 2013 20:31 UTC (Wed)
by kleptog (subscriber, #1183)
[Link]
I mean, valgrind manages to catch all sorts of memory errors, without any source, with not many false positives. ISTM a compiler should be able to do a similar job more efficiently (valgrind is slow as molasses, but it points out the error to you right away, which is what makes it worth it).
Posted Jun 6, 2013 8:55 UTC (Thu)
by kevinm (guest, #69913)
[Link]
The major obstacle would be that you'd need an ABI change to pass around the necessary object extent data to enforce this everywhere.
Posted May 28, 2013 13:16 UTC (Tue)
by clugstj (subscriber, #4020)
[Link] (1 responses)
Posted May 28, 2013 17:46 UTC (Tue)
by acoopersmith (subscriber, #72107)
[Link]
Because that's what the CVE guidelines told us to do, for two reasons:
1) There’s several different types of bug here: integer overflow, deep stack recursion, missing return-code validation, and buffer overflow.
2) These bugs were replicated in 20 different packages, each with its own version numbers and release history. The CVE assigners don't like having a single CVE that says “This covers libX11 1.5, libXv 1.0.8, libXi 1.7.1, ...” because then you can't say “I have the fix for CVE-2013-XXXX” if you have installed the new libX11 but not the new libXi.
Posted May 28, 2013 20:16 UTC (Tue)
by kleptog (subscriber, #1183)
[Link] (1 responses)
That is, when different team members expect something to be fixed in different ways because they have differing expectations. In a small group you talk it out and get a solution that everybody agrees with.
This, I think, makes it much more difficult for open-source projects, because you don't get the real-time feedback, and when disagreements arise you don't have the offline high-bandwidth, low-latency channel to discuss the problem and come up with better solutions. Nothing is more demotivating than having your patch stuck in the review phase.
Fortunately many larger projects do successful patch review and it works reasonably well, but it's not always easy.
Posted May 29, 2013 21:13 UTC (Wed)
by jg (guest, #17537)
[Link]
The code base in that era was very much smaller.
A community large enough to provide useful code reviews did not exist in the early years of X... And security problems in that era were very small: the bad guys weren't out to get us in the 1980s and early 1990s, so there was little incentive to go find them or do such code reviews once there was a viable community.
And then things stagnated, particularly after the X Consortium closed. Long, long, long story there....
Unfortunately, many mistakes of my era were cut and pasted into later X extension libraries, replicating bugs. And other people created new bugs as X continued to evolve over the decades as well. Creativity is found everywhere ;-).
Posted May 30, 2013 17:52 UTC (Thu)
by krakensden (subscriber, #72039)
[Link]