
Leading items

Udev and firmware

By Jonathan Corbet
October 10, 2012
Those who like to complain about udev, systemd, and their current maintainers have had no shortage of company recently as the result of a somewhat incendiary discussion on the linux-kernel mailing list. Underneath the flames, though, lie some important issues: who decides what constitutes appropriate behavior for kernel device drivers, how strong is our commitment to backward compatibility, and which tasks are best handled in the kernel without calling out to user space?

The udev process is responsible for a number of tasks, most initiated as the result of events originating in the kernel. It responds to device creation events by making device nodes, setting permissions, and, possibly, running a setup program. It also handles module loading requests and firmware requests from the kernel. So, for example, when a driver calls request_firmware(), that request is turned into an event that is passed to the udev process. Udev will, in response, locate the firmware file, read its contents, and pass the data back to the kernel. The driver will get its firmware blob without having to know anything about how things are organized in user space, and everybody should be happy.
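
As a rough sketch of what that user-space step looks like, the code below approximates a minimal firmware agent driven by the kernel's sysfs firmware-class interface (the loading and data files created for each request). It is an illustration of the mechanism, not udev's actual implementation; the search path shown is the conventional /lib/firmware.

    # Minimal sketch of a user-space firmware agent. The kernel's firmware
    # uevent supplies DEVPATH and FIRMWARE in the environment; the agent finds
    # the blob and feeds it back through the sysfs loading/data files.
    import os

    def handle_firmware_request(devpath, firmware, search_dirs=("/lib/firmware",)):
        sysfs = "/sys" + devpath
        blob = None
        for directory in search_dirs:
            candidate = os.path.join(directory, firmware)
            if os.path.isfile(candidate):
                with open(candidate, "rb") as f:
                    blob = f.read()
                break

        if blob is None:
            # Tell the kernel the load failed so the driver's request_firmware()
            # call returns an error instead of waiting for a timeout.
            with open(os.path.join(sysfs, "loading"), "w") as f:
                f.write("-1")
            return False

        with open(os.path.join(sysfs, "loading"), "w") as f:
            f.write("1")                 # announce the start of the transfer
        with open(os.path.join(sysfs, "data"), "wb") as f:
            f.write(blob)                # hand the blob to the waiting driver
        with open(os.path.join(sysfs, "loading"), "w") as f:
            f.write("0")                 # transfer complete
        return True

    if __name__ == "__main__":
        handle_firmware_request(os.environ["DEVPATH"], os.environ["FIRMWARE"])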

Back in January, the udev developers decided to implement a stricter notion of sequencing between various types of events. No events for a specific device, they decided, would be processed until the process of loading the driver module for that device had completed. Doing things this way makes it easier for them to keep things straight in user space and to avoid attempting operations that the kernel is not yet ready to handle. But it also created problems for some types of drivers. In particular, if a driver tries to load device firmware during the module initialization process, things will appear to lock up. Udev sees that the module is not yet initialized, so it will hold onto the firmware request and everything stops. Udev developer Kay Sievers warned the world about this problem last January:

We might need to work around that in the current udev for now, but these drivers will definitely break in future udev versions. Userspace, these days, should not be in charge of papering over obvious kernel bugs like this.

The problem with this line of reasoning, of course, is that one person's kernel bug is another's user-space problem. Firmware loading at module initialization time has worked just fine for a long time — if one ignores little problems like built-in modules, booting with init=/bin/sh, and other situations where proper user-space support is not present when the request_firmware() call takes place. What matters most is that it works for a normal bootstrap on a typical distribution install. The udev sequencing change breaks that: users of a number of distributions have been reporting that things no longer work properly with newer versions of udev installed.

Breaking currently-running systems is something the kernel development community tries hard to avoid, so it is not surprising that there was some disagreement over the appropriateness of the udev changes. Even so, various kernel developers were trying to work around the problems when Linus threw a bit of a tantrum, saying that the problem lies with udev and needs to be fixed there. He did not get the response that he was hoping for.

Kay answered that, despite the problem reports, udev had not yet been fixed, saying "we still haven't wrapped our head around how to fix it/work around it." He pointed out that things don't really hang, they just get "slow" while waiting for a 30-second timeout to expire. And he reiterated his position that the real problem lies in the kernel and should be fixed there. Linus was unimpressed, but, since he does not maintain udev, there is not a whole lot that he can do directly to solve the problem.

Or, then again, maybe there is. One possibility raised by a few developers was pulling udev into the kernel source tree and maintaining it as part of the kernel development process. There was a certain amount of support for this idea, but nobody actually stepped up to take responsibility for maintaining udev in that environment. Such a move would represent a fork of a significant package that would take it in a new direction; current plans are to integrate udev more thoroughly with systemd. The current udev developers thus seem unlikely to support putting udev in the kernel tree. Getting distributors to adopt the kernel's version of udev could also prove to be a challenge. In general, it is the sort of mess that is best avoided if at all possible.

An alternative is to simply short out udev for firmware loading altogether. That is, in fact, what has been done; the 3.7 kernel will include a patch (from Linus) that causes firmware loading to be done directly from the kernel without involving user space at all. If the kernel is unable to find the firmware file in the expected places (under /lib/firmware and variants) it will fall back to sending a request to udev in the usual manner. But if the kernel-space load attempt works, then udev will never even know that the firmware request was made.

This appears to be a solution that is workable for everybody involved. There is nothing particularly tricky about firmware loading, so few developers seem to have concerns about doing it directly from the kernel. Kay supports the idea as well, saying "I would absolutely like to get udev entirely out of the sick game of firmware loading." The real proof will be in how well the concept works once the 3.7 kernel starts seeing widespread testing, but the initial indications are that there will not be a lot of problems. If things stay that way, it would not be surprising to see the direct firmware loading patch backported to the stable series — once it has gained a few amenities like user-configurable paths.

One of the biggest challenges in kernel design can be determining what should be done in the kernel and what should be pushed out to user space. The user-space solution is often appealing; it can simplify kernel code and make it easier for others to implement their own policies. But an overly heavy reliance on user space can lead to just the sort of difficulty seen with firmware loading. In this case, it appears, the problem was better solved in the kernel; fortunately, it appears to have been a relatively easy one for the kernel to take back without causing compatibility problems.


CIA.vc shuts down

October 10, 2012

This article was contributed by Joey Hess

CIA didn't seem important until it was gone. For developers and users on IRC networks like Freenode, CIA was just there in the background, relaying commit messages into the channels of thousands of projects in real time—until recently.

CIA.vc was a central clearinghouse for commit messages sent to it from ten thousand or more version control repositories. There were CIA hooks for Subversion, Git, Bazaar, and other systems, so a project just had to install such a hook into its repository and register on the CIA website. CIA handled the rest, collecting the commit messages as they came in and announcing them on the appropriate channels via its swarm of IRC bots. Here is an example from the #commits channel from April:

    <CIA-93> OpenWrt: [packages] fwknop: update to 2.0, use new startup commands
    <CIA-93> vlc: Pierre Ynard master * r31b5fbdb6d vlc/modules/lua/libs/equalizer.c:
	lua: fix memory and object leak and reset locale on error path
    <CIA-93> FreeBSD: rakuco * ports/graphics/autoq3d/files/
	(patch-src__cmds__cmds.cpp . patch-src__fgui__cadform.cpp): 
    <CIA-93> FreeBSD: Make the port build with gcc 4.6 (and possibly other compilers).
    <CIA-93> gentoo: robbat2 * gentoo/xml/htdocs/proj/en/perl/outdated-cpan-packages.xml:
	Automated update of outdated-cpan-packages.xml
    <CIA-93> compiz-fusion: a.j.buxton master * /fusion/plugins-main/src/ezoom/ezoom.c: 

For a decade, the CIA bots were part of the infrastructure of many projects; along with the bug tracker, mailing lists, wiki, and version control system, they helped tie communities together. Eric S. Raymond described the effect of the CIA service as follows:

It makes IRC conversations among a development group more productive. It also does something unquantifiable but good to the coherence of the development groups that use it, and the coherence of the open-source community as a whole — when the service was live it was hard to watch #commits for any length of time without being impressed and encouraged.

That stream of notifications dried up on September 26th, when CIA.vc was shut down due to a miscommunication with a hosting provider; it seems there were no backups. It is unclear whether CIA will return, but two possible replacements are available now.

irker

Irker is a simple replacement for CIA that was announced just three days after the shutdown. Raymond had been developing it even before CIA went down, and he designed it around a very different architecture from that of the centralized CIA service.

Irker consists of two pieces: a server that acts as a simple relay to IRC and a client that sends messages to be relayed. The server has no knowledge of version control systems or commits, and could be used to relay any sort of content. All the version-control-specific code necessary to extract and format the message is in the client, which is run by a version control hook script.
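
A hedged sketch of the client side appears below, assuming irker's documented JSON-over-socket submission format and irkerd's default port of 6659; the channel URL and commit line are invented, and a real hook would build the message from the repository's own metadata.

    # Minimal irker client, the sort of thing a post-commit hook might run.
    # It hands a JSON request to a local irkerd, which does the IRC delivery.
    import json
    import socket

    IRKER_HOST, IRKER_PORT = "localhost", 6659   # irkerd's default listening port

    def notify(channel_url, message):
        request = json.dumps({"to": channel_url, "privmsg": message})
        # irkerd accepts the same JSON over UDP or TCP; UDP keeps the hook cheap.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            sock.sendto(request.encode("utf-8"), (IRKER_HOST, IRKER_PORT))
        finally:
            sock.close()

    notify("irc://chat.freenode.net/#commits",
           "myproject: alice master * 1a2b3c4 / src/main.c: fix off-by-one")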

The irker client and server typically both run on the same machine or LAN, so each project or hosting site is responsible for running its own relay, rather than relying on a centralized service like CIA.

Irker has undergone heavy development since the announcement, and is now considered basically feature complete. Its simple and general design is likely to lead to other things being built on top of it. For example, there is a CIA to irker proxy for sites that want to retain their current CIA hooks.

KGB

Although irker made a splash when CIA died, another replacement had been quietly available, and largely overlooked, for years. KGB was developed by Martín Ferrari and Damyan Ivanov of the Debian project and released in 2009. KGB is shipped in the current Debian stable release, as well as in Ubuntu universe, making it easy to deploy as a replacement for CIA.

KGB is, like irker, a decentralized client-server system. Unlike irker's content-agnostic server, the KGB server is responsible for formatting notifications from the commit information it receives from its clients. Though this is a less flexible design, it does insulate the clients from some details of IRC, particularly message length limits.

KGB has enjoyed a pronounced upswing in feature requests and development since CIA went down, gaining features such as web links to commits, URL shortening, and the ability to broadcast all projects' notifications to a channel like #commits. Developer Martín Ferrari says:

For a small project that was mainly developed and maintained for our own use, this was quite some unexpected popularity!

Will CIA.vc return?

The CIA.vc website currently promises an attempt to revive the service. Any attempt to do so will surely face numerous challenges. Not least is the missing database, which configured much of CIA's behavior. Unless a recent backup of the database is found, any revived CIA.vc will certainly need much configuration to return it to its past functionality.

CIA's code base is still available, but it is large and complex, with many moving parts written in different languages; it is reputedly difficult to install and has been neglected for years. Raymond's opinion is that "CIA suffered a complexity collapse", and, as he put it, "It is notoriously difficult to un-collapse a rubble pile".

Even if CIA does eventually return, it seems likely that many projects will have moved away from it for good, deploying their own irker or KGB bots. The Apache Software Foundation, the KDE project, and Debian's Alioth project hosting site have already deployed their own bots. If the larger hosting sites like GitHub, SourceForge, and Savannah follow suit, any revived CIA may be reduced to being, at best, a third player.

Conclusion

CIA.vc was a centralized service, with code that is free software, but with a design and implementation that did not encourage reuse. The service was widely used by the community, which mostly seems to have put up with its instability, its UTF-8 display bugs, its odd formatting of git revision numbers, and its often crufty hook scripts.

According to CIA's author, Micah Dowty, it never achieved a "critical mass of involvement" from contributors. Perhaps CIA was not seen as important enough to work on. But with two replacements now being developed, there is certainly evidence of interest. Or perhaps CIA did not present itself as a free software project, and so was instead treated as simply the service that it appeared to be. CIA's website featured things like a commit leaderboard and a new-project list, which certainly helped entice people to use it. (Your author must confess to occasionally trying to fit enough commits into a day to get to the top of that leaderboard.) But the website did not encourage the filing of bug reports or patches.

In a way, the story of CIA mirrors the story of the version control systems it reported on. When CIA began in 2003, centralized version control was the norm. The Linux kernel used distributed version control only thanks to the proprietary BitKeeper, which itself ran a centralized commit publication service. These choices were entirely pragmatic, and the centralized CIA was perhaps in keeping with the times.

Much as happened with version control, the community has gone from being reliant on a centralized service to having a choice of decentralized alternatives. As a result, new features are rapidly emerging in both KGB and irker that CIA never provided. This is certainly a healthy response to CIA's closure, but it also seems that our many years of reliance on the centralized service held us back from exploring the space that CIA occupied.


Linux and automotive computing security

By Nathan Willis
October 10, 2012

There was no security track at the 2012 Automotive Linux Summit, but numerous sessions and the "hallway track" featured anecdotes about the ease of compromising car computers. This is no surprise: as Linux makes inroads into automotive computing, the security question takes on an urgency not found on desktops and servers. Too often, though, Linux and open source software in general are perceived as insufficiently battle-hardened for the safety-critical needs of highway-speed computing — reading the comments on an automotive Linux news story, it is easy to find a skeptic scoffing that he or she would not trust Linux to manage the engine, brakes, or airbags. While hackers in other embedded Linux realms may understandably feel miffed at such a slight, the bigger problem is said skeptic's presumption that a modern Linux-free car is a secure environment — which is demonstrably untrue.

First, there is a mistaken assumption that computing is not yet a pervasive part of modern automobiles. Likewise mistaken is the assumption that safety-critical systems (such as the aforementioned brakes, airbags, and engine) are properly isolated from low-security components (like the entertainment head unit) and are not vulnerable to attack. It is also incorrectly assumed that the low-security systems themselves do not harbor risks to drivers and passengers. In reality, modern cars have shipped with multiple embedded computers for years (many of them mandated by government regulation), presenting a large attack surface and exposing drivers to personal-safety risks, theft, eavesdropping, and other exploits. But rather than exacerbating this situation, Linux and open source adoption stand to improve it.

There is an abundance of research dealing with hypothetical exploits to automotive computers, but the seminal work on practical exploits is a pair of papers from the Center for Automotive Embedded Systems Security (CAESS), a team from the University of California San Diego and the University of Washington. CAESS published a 2010 report [PDF] detailing attacks that they managed to implement against a pair of late-model sedans via the vehicles' Controller Area Network (CAN) bus, and a 2011 report [PDF] detailing how they managed to access the CAN network from outside the car, including through service station diagnostic scanners, Bluetooth, FM radio, and cellular modem.

Exploits

The 2010 paper begins by addressing the connectivity of modern cars. CAESS did not disclose the brand of vehicle they experimented on (although car mavens could probably identify it from the photographs), but they purchased two vehicles and experimented with them on the lab bench, on a garage lift, and finally on a closed test track. The cars were not high-end, but they provided a wide range of targets. Embedded electronic control units (ECUs) are found all over the automobile, monitoring and reporting on everything from the engine to the door locks, not to mention lighting, environmental controls, the dash instrument panel, tire pressure sensors, steering, braking, and so forth.

Not every ECU is designed to control a portion of the vehicle, but due to the nature of the CAN bus, any ECU can be used to mount an attack. CAN is roughly equivalent to a link-layer protocol, but it is broadcast-only, does not employ source addressing or authentication, and is easily susceptible to denial-of-service attacks (either through simple flooding or by broadcasting messages with high-priority message IDs, which force all other nodes to back off and wait). With a device plugged into the CAN bus (such as through the OBD-II port mandatory on all post-1995 vehicles sold in the US), attackers can spoof messages from any ECU. Higher-level protocols are often layered on top of CAN, but CAESS was able to reverse-engineer the protocols in its test vehicles and found security holes that allow attackers to brute-force the challenge-response system in a matter of days.
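
To make the lack of source authentication concrete, here is a minimal sketch using Linux's SocketCAN interface (available to Python 3.3 and later) that puts a frame with an arbitrary arbitration ID on a bus; nothing in the frame identifies the sending node. The interface name, ID, and payload are placeholders chosen for illustration, not anything taken from the CAESS work.

    # Illustration of CAN's lack of source addressing: any node can transmit
    # a frame carrying any arbitration ID. Run against a virtual bus (vcan0)
    # set up for testing; the ID and payload below are placeholders.
    import socket, struct

    CAN_FRAME_FMT = "=IB3x8s"   # struct can_frame: id, length, padding, 8 data bytes

    def build_frame(can_id, data):
        return struct.pack(CAN_FRAME_FMT, can_id, len(data), data.ljust(8, b"\x00"))

    sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
    sock.bind(("vcan0",))       # any bus member will do; CAN has no access control

    # Low arbitration IDs win arbitration, so a flood of frames like this one
    # is exactly the denial-of-service case described above.
    sock.send(build_frame(0x010, b"\xde\xad\xbe\xef"))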

CAESS's test vehicles did separate the CAN bus into high-priority and low-priority segments, providing a measure of isolation. That separation proved to be inadequate, however: a number of ECUs were connected to both segments and could therefore be used to bridge messages between them. The set-up is not an error, though; despite common thinking on the subject, quite a few features demanded by car buyers rely on coordination between the high- and low-priority devices.

For example, electronic stability control involves measuring wheel speed, steering angle, throttle, and brakes. Cruise control involves throttle, brakes, speedometer readings, and possibly ultra-sonic range sensors (for collision avoidance). Even the lowly door lock must be connected to multiple systems: wireless key fobs, speed sensors (to lock the doors when in motion), and the cellular network (so that remote roadside assistance can unlock the car).

The paper details a number of attacks the team deployed against the test vehicles. The team wrote a tool called CarShark to analyze and inject CAN bus packets, which provided a method to mount many attacks. However, the vehicle's diagnostic service (called DeviceControl) also proved to be a useful platform for attack. DeviceControl is intended for use by dealers and service stations, but it was easy to reverse engineer, and subsequently allowed a number of additional attacks (such as sending an ECU the "disable all CAN bus communication" command, which effectively shuts off part of the car).

The actual attacks tested include some startlingly dangerous tricks, such as disabling the brakes. But the team also managed to create combined attacks that put drivers at risk even with "low risk" components — displaying false speedometer or fuel gauge readings, disabling dash and interior lights, and so forth. Ultimately the team was able to gain control of every ECU in the car, load and execute custom software, and erase traces of the attack.

Some of these attacks exploited components that did not adhere to the protocol specification. For example, several ECUs allowed their firmware to be re-flashed while the car was in motion, which is expressly forbidden for obvious safety reasons. Other attacks were enabled by run-of-the-mill implementation errors, such as components that re-used the same challenge-response seed value every time they were power-cycled. But ultimately, the critical factor was the fact that any device on the vehicle's internal bus can be used to mount an attack; there is no "lock box" protecting the vital systems, and the protocol at the core of the network lacks fundamental security features taken for granted on other computing platforms.
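
As a purely illustrative sketch of why a reused seed and a small key space are fatal, consider the toy challenge-response scheme below; the transform and key size are invented for the example, since the actual ECU protocols reverse-engineered by CAESS are not public.

    # Toy challenge-response scheme (invented for illustration): a fixed seed
    # means one sniffed exchange stays valid across power cycles, and a 16-bit
    # key falls to offline exhaustive search almost instantly.
    import hashlib

    def response(seed, challenge, key):
        h = hashlib.sha256(bytes([seed]) + challenge + key.to_bytes(2, "big"))
        return h.digest()[:4]

    # One exchange sniffed from the bus; the ECU reuses seed 0x42 after every
    # power cycle, so the attacker never has to start over.
    seed, challenge = 0x42, b"\x12\x34\x56\x78"
    secret_key = 0xBEEF
    observed = response(seed, challenge, secret_key)

    # Exhaustive search over all 65536 candidate keys.
    recovered = next(k for k in range(2 ** 16)
                     if response(seed, challenge, k) == observed)
    print(hex(recovered))   # 0xbeef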

Vectors

Of course, all of the attacks described in the 2010 paper relied on an attacker with direct access to the vehicle. That did not necessarily mean ongoing access; they explained that a dongle attached to the OBD-II port could work at cracking the challenge-response system while left unattended. But, even though there are a number of individuals with access to a driver's car over the course of a year (from mechanics to valets), direct access is still a hurdle.

The 2011 paper looked at vectors to attack the car remotely, to assess the potential for an attacker to gain access to the car's internal CAN bus, at which point any of the attacks crafted in the 2010 paper could easily be executed. It considered three scenarios: indirect physical access, short-range wireless networking, and long-range wireless networking. As one might fear, all three presented opportunities.

The indirect physical access involved compromising the CD player and the dealership or service station's scanning equipment, which is physically connected to the car while it is in the shop for diagnosis. CAESS found that the model of diagnostic scanner used (which adhered to a 2004 US government-mandated standard called PassThru) was internally an embedded Linux device, even though it was only used to interface with a Windows application running on the shop's computer. The scanner was also equipped with WiFi, and it broadcast its address and open TCP port in the clear. The diagnostic application's API was undocumented, but the team sniffed the traffic and found several exploitable buffer overflows — not to mention extraneous services like telnet running on the scanner itself. Taking control of the scanner and programming it to upload malicious code to vehicles was little additional trouble.

The CD player attack was different; it started with the CD player's firmware update facility (which loads new firmware onto the player if a properly-named file is found on an inserted disc). But the player can also decode compressed audio files, including undocumented variants of Windows Media Audio (.WMA) files. CAESS found a buffer overflow in the .WMA player code, which in turn allowed the team to load arbitrary code onto the player. As an added bonus, the .WMA file containing the exploit plays fine on a PC, making it harder to detect.

The short-range wireless attack involved attacking the head unit's Bluetooth functionality. The team found that a compromised Android device could be loaded with a Trojan horse application designed to upload malicious code to the car whenever the two paired. A second option was even more troubling: the team discovered that the car's Bluetooth stack would respond to pairing requests initiated without user intervention. Successfully pairing a covert Bluetooth device still required correctly guessing the four-digit authorization PIN, but since the pairing bypassed the user interface, the attacker could make repeated attempts without those attempts being logged — and, once successful, the paired device did not show up in the head unit's interface, so it could not be removed.

Finally, the long-range wireless attack gained access to the car's CAN network through the cellular-connected telematics unit (which handles retrieving data for the navigation system, but is also used to connect to the car maker's remote service center for roadside assistance and other tasks). CAESS discovered that although the telematics unit could use a cellular data connection, it also used a software modem application to encode digital data in an audio call — for greater reliability in less-connected regions.

The team reverse-engineered the signaling and data protocols used by this software modem, and were subsequently able to call the car from another cellular device, eventually uploading malicious code through yet another buffer overflow. Even more disturbingly, the team encoded this attack into an audio file, then played it back from an MP3 player into a phone handset, again seizing control over the car.

The team also demonstrated several post-compromise attack-triggering methods, such as delaying activation of the malicious code payload until a particular geographic location was reached, or a particular sensor value (e.g., speed or tire pressure) was read. It also managed to trigger execution of the payload by using a short-range FM transmitter to broadcast a specially-encoded Radio Data System (RDS) message, which vehicles' FM receivers and navigation units decode. The same attack could be performed over longer distances with a more powerful transmitter.

Among the practical exploits outlined in the paper are recording audio through the car's microphone and uploading it to a remote server, and connecting the car's telematics unit to a hidden IRC channel, from which attackers can send arbitrary commands at their leisure. The team speculates on the feasibility of turning this last attack into a commercial enterprise, building "botnet" style networks of compromised cars, and on car thieves logging car makes and models in bulk and selling access to stolen cars in advance, based on the illicit buyers' preferences.

What about Linux?

If, as CAESS seems to have found, the state of the art in automotive computing security is so poor, the question becomes how Linux (and related open source projects) could improve the situation. Certainly some of the problems the team encountered are out of scope for automotive Linux projects. For example, several of the simpler ECUs are unsophisticated microcontrollers; the fact that some of them ship from the factory with blatant flaws (such as a broken challenge-response algorithm) is the fault of the manufacturer. But Linux is expected to run on the higher-end ECUs, such as the in-vehicle infotainment (IVI) head unit and the telematics system, and those components were the nexus for the more sophisticated attacks.

Several of the sophisticated attacks employed by CAESS relied on security holes found in application code. The team acknowledged that standard defenses like stack cookies and address-space randomization, which are established practice in other computing environments, simply have not been adopted in automotive system development for lack of perceived need. Clearly, recognizing that risk and writing more secure application code would improve things, regardless of the operating system in question. But the fact that Linux is so widely deployed elsewhere means that more security-conscious code is available for the taking than there is for any other embedded platform.

Consider the Bluetooth attack, for example. Sure, with a little effort, one might envision a scenario in which unattended Bluetooth pairing is desirable — but in practice, Linux's dominance in the mobile device space means there is a greater likelihood that developers would quickly find and patch the problem than that any tier-one supplier working in isolation would do so.

One step further is the advantage gained by having Linux serve as a common platform used by multiple manufacturers. CAESS observed in its 2011 paper that the "glue code" linking discrete modules together was the greatest source of exploits (e.g., the PassThru diagnostic scanning device), saying "virtually all vulnerabilities emerged at the interface boundaries between code written by distinct organizations." It also noted that this was an artifact of the automotive supply chain itself, in which individual components were contracted out to separate companies working from specifications, then integrated by the car maker once delivered:

Thus, while each supplier does unit testing (according to the specification) it is difficult for the manufacturer to evaluate security vulnerabilities that emerge at the integration stage. Traditional kinds of automated analysis and code reviews cannot be applied and assumptions not embodied in the specifications are difficult to unravel. Therefore, while this outsourcing process might have been appropriate for purely mechanical systems, it is no longer appropriate for digital systems that have the potential for remote compromise.

A common platform employed by multiple suppliers would go a long way toward minimizing this type of issue, and that approach can only work if the platform is open source.

Finally, the terrifying scope of the attacks carried out in the 2010 paper (and if one does not find them terrifying, one needs to read them again) ultimately traces back to the insecure design of the CAN bus. The CAN bus needs to be replaced; working with a standard IP stack instead would mean not having to reinvent the wheel. The networking angle involves several factors not addressed in CAESS's papers, of course — most notably the still-emerging standards for vehicle ad-hoc networking (intended to serve as a vehicle-to-vehicle and vehicle-to-infrastructure channel).

On that subject, Maxim Raya and Jean-Pierre Hubaux recommend using public-key infrastructure and other well-known practices from the general Internet communications realm. While there might be some skeptics who would argue with Linux's first-class position as a general networking platform, it should be clear to all that proprietary lock-in to a single-vendor solution would do little to improve the vehicle networking problem.
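
As a minimal sketch of what that recommendation implies, the code below signs and verifies a hypothetical vehicle-to-vehicle beacon with a public-key signature. It assumes the third-party Python "cryptography" package, and the message fields and trust handling are invented for the example rather than drawn from any vehicular networking standard.

    # Sketch of PKI-style message authentication for V2V broadcasts: the sender
    # signs what it transmits, and receivers verify against a certified key.
    # Key management and certificates are omitted; fields are invented.
    import json
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    vehicle_key = Ed25519PrivateKey.generate()   # in practice, issued and certified by a CA
    public_key = vehicle_key.public_key()

    beacon = json.dumps({"speed_kph": 57, "heading": 183, "braking": True}).encode()
    signature = vehicle_key.sign(beacon)

    # A receiver rejects anything that does not verify against a key it trusts,
    # unlike CAN, where any node's broadcast is accepted at face value.
    try:
        public_key.verify(signature, beacon)
        print("beacon accepted")
    except InvalidSignature:
        print("beacon rejected")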

Those on the outside may find the recent push toward Linux in the automotive industry frustratingly slow — after all, there is still no GENIVI code visible to non-members. But to conclude that the pace of development indicates Linux is not up to the task would be a mistake. The reality is that the automotive computing problem is enormous in scope — even considering security alone — and Linux and open source might be the only way to get it under control.

