
LWN.net Weekly Edition for November 18, 2010

MeeGo conference: Intel's and Nokia's visions of MeeGo

By Jake Edge
November 17, 2010

We are just at the beginning of a massive change in the way we use computers, and traditional desktops and laptops will be giving way to more and more internet-connected devices—that's the vision presented in two keynotes at the first ever MeeGo conference. But in order for that vision to come about, there needs to be an open environment, where both hardware and software developers can create new devices and applications, without the innovation being controlled—often stifled—by a single vendor's wishes. Doug Fisher, Intel's VP of the Software and Services Group, and Nokia's Alberto Torres, Executive VP for MeeGo Computers, took different approaches to delivering that message, but their talks were promoting the same theme.

The conference was held November 15-17 at Aviva Stadium in Dublin, Ireland and hosted many more developers than conference organizers originally expected. It was very well put on, and at an eye-opening venue, which bodes well for future conferences. One that is more industry-focused is currently planned for May in San Francisco, while another developer-focused event is tentatively scheduled for November 2011 for somewhere other than the US.

"Strategic Freedom with MeeGo"

After an introduction by conference program committee chair Dirk Hohndel, Fisher kicked off his talk with a rueful reminiscence of his talk at the 2005 Ottawa Linux Symposium, where the person running the slide deck exited his presentation at the end, which put up a Windows desktop on the screen. That wasn't particularly popular with the assembled Linux crowd, so he was careful to show that he was presenting his slides using OpenOffice.org Impress on MeeGo this time.

Over the next few years, there will be one billion new internet-connected users and 15 billion connected devices, Fisher said. Intel and MeeGo want to ensure that they meet the needs of that growing market. It is these new devices that will be the main mechanism for connecting with the internet. They will "surpass the traditional way you interact with the internet". And we are "just at the beginning of where this device environment is going to go".

There are two models being proposed for this new environment: one that is controlled and one that is open. The controlled environment is one where a "single vendor provides the whole solution". But many of the people who want to innovate are outside of the box that the vendor has set up. In these closed environments, business models and the implementation of business models are controlled.

But, "the only way you can scale to all of those devices is to have an open environment", Fisher said. In the book Where Good Ideas Come From, author Steven Johnson "debunks the myth that great ideas come from a single person". Instead, it is a "social process as much as a technology process" to come up with these great ideas. Because we don't have any time to waste to build this new device environment, "we have to be able to work together".

"A controlled environment with a box around it will not be able to scale", to the vast array of devices and device types that are coming. But, Fisher cautioned, an open environment should not lead to fragmentation. There is a responsibility to make the platform consistent, so that companies can depend on it and make investments in it.

That is why MeeGo was moved under the Linux Foundation (LF), so that the LF can be "the steward of MeeGo". The governance of MeeGo is modeled after how Linux is governed; there is no membership required and it is architected in an open way. Both Intel and ARM chips are supported, and MeeGo is constructed to "ensure we meet the needs of a broad type of platforms".

Inclusion, meritocracy, transparency, and upstream first

[Aviva pitch]

Fisher then turned the stage over to Carsten Munk, who is known for his work on Nokia's Maemo and on the MeeGo N900 port. MeeGo "is trying to do something that has never been done before", Munk said, and there are four key elements to making it work: inclusion, meritocracy, transparency, and upstream first. The inclusive nature of MeeGo was embodied in the fact that he was on-stage with an Intel executive, as an independent developer who works on MeeGo ARM. "The MeeGo way is to include people", he said.

When asked by Fisher if the project had been living up to the four ideals, Munk said that it was "getting better over the last 8-9 months", but that "not everything is perfect". There have been arguments over governance and the like over that time, but the community is still figuring things out. In addition to developing MeeGo as an OS and MeeGo applications, the project is developing "the MeeGo way of working".

The upstream-first policy is "really important to avoid fragmentation", Fisher said after Munk left the stage. Avoiding fragmentation is critical for users and developers. Users want to be able to run their applications consistently on multiple devices, while developers want to be sure they can move to different vendors without rewriting their applications.

MeeGo is an OS that vendors can take and do what they want with it, but in order to call it MeeGo, it must be compliant with the MeeGo requirements. That ensures there is a single environment for developers. They can move their code from vendor to vendor, while avoiding the rework and revalidation that currently is required for embedded and other applications.

Intel wants to deliver the best operating environment for MeeGo, and power the best devices, which is why it has invested in the low-power Atom chip. As an example, he pointed to netbooks that are just getting better, some of which have MeeGo on them. There will be more and more of those in 2011 and 2012, Fisher said. In addition, Intel worked closely with Amino Communications on a MeeGo-based television set top box. What would normally take Amino 18 months to deliver was done in six using MeeGo.

One of the strengths of MeeGo is that in addition to allowing multiple vendors to use it, it also enables multiple device types. Intel was involved in helping with the MeeGo netbooks and set-top box that he mentioned, but he also listed two other vendors using MeeGo, where Intel wasn't involved at all. A German company that made a MeeGo-based tablet, and a Chinese company whose in-vehicle-infotainment (IVI) systems are shipping in cars now, are examples of the "power of open source", he said. They took the code and made it work for their devices and customers without having to ask for permission. The MeeGo community is going to be responsible for keeping that kind of innovation happening, he said.

One of the visions for MeeGo devices that was presented in a video at the beginning of the talk was the ability to move audio and video content between these devices. The idea is that someone can be watching a movie or listening to some music and move it to other devices, share it with their friends, and so on. Fisher had someone from Intel demonstrate a prototype of that functionality, where a video was paused on a netbook, restarted on a TV, then moved from there to a tablet.

That is an example of "the kind of innovation we need to drive into MeeGo", Fisher said. It's not just something that is unique and innovative on a single device but, because it is MeeGo, it can move between various devices from multiple vendors. It is a "compelling and challenging opportunity". Though it is an exciting vision for the future, there is still a potentially insurmountable challenge which Fisher left unsaid: finding a way to get the content industries on board with that kind of ubiquitous playback and sharing.

It turned out that the MeeGo tablet used in the demo was a Lenovo IdeaPad—an Atom-powered tablet/netbook. Fisher said that one lucky developer in attendance would be receiving one. When the envelope was opened, though, the name on the inside was "Everyone", so Intel would be giving each conference attendee an IdeaPad. He left it to Hohndel to later deliver the bad news to the roughly 200 Intel and Nokia employees in attendance; there would be no tablets for those folks.

"MeeGo Momentum and the Qt App Advantage"

[Alberto Torres]

Torres started his talk by "dispelling rumors" that Nokia might not be committed to MeeGo. He pointed to comments made by new CEO Stephen Elop that reiterated Nokia's commitment. Nokia plans to deliver a "new user experience" using MeeGo, Torres said. Furthermore, he believes that we are "redefining the future of computing" with the advent of widespread internet-connected mobile devices, and MeeGo has all the elements to foster that redefinition.

He looked back at some of the history of computers, noting that in the 1940s IBM's Thomas Watson suggested there was a total worldwide market for five computers. Since that time, the market has grown a bit, but the command line limited the use of computers to fairly technical users. That really changed in the 1970s, when Xerox PARC adopted the mouse and an interface with windows and icons. That interface is a far more human way to interact with a computer, and it is largely the same interface that we have today.

Moving away from the command line meant that you didn't have to be an expert to use a computer and got people "starting to think about every home having a computer". Today, almost every home in the developed world does have a computer. Beyond that, smartphones are computers in our pockets, which allows computers to go places they never went before. But we haven't figured out major new ways to interact with those devices. That is good, he said, because it leaves us free to define them.

There are advances being made in touch devices using gestures and in motion-sensing gaming interfaces, both of which are more natural to use. He said that his daughter, who is not yet 2 years old, can do things with his smartphone, like use the photo gallery application. Gestures are "bringing computing to a level that is far more intuitive", which is leading to the idea of even more computers in the home. We may not call them computers, he said, but instead they will be called cars or TVs.

All of these different devices need to work together in an integrated way, with interfaces that work in a "human way". One of the strengths of MeeGo is that it was created from the start to go on all of these different kinds of devices. He believes we are going to see a proliferation of devices with MeeGo, and with many different interaction models: driving a car, playing a game or video in the back of the car, at home watching TV, and so on.

Qt for application development

Torres then shifted gears a bit to talk about Qt. It is much more than just a library, he said, it is a development platform incorporating things like database access, network connectivity, inter-object communication, WebKit integration, and more. He said that Qt enables C++ programmers to be four times more productive in developing code, and he expects the addition of Qt Declarative UI to push that further, perhaps as far as a 10x productivity gain.

Qt is also multi-platform and is used "everywhere". It started out as a desktop platform, but is on "all kinds of devices today". As an example of that, he had another Nokia employee demonstrate the same application running on MeeGo, Windows, Symbian, and embedded Linux. The animated photo browsing application was developed using Qt Quick, and could be run, unmodified, on each of the platforms. A Qt Quick application can be placed on a USB stick and moved between the various devices.

Nokia is a company that makes devices, and it "wants to put devices into people's hands that they fall in love with". MeeGo offers them a great opportunity to do that because of its "unique innovation model", which includes both openness and differentiation. Companies like Nokia, mobile phone carriers, TV makers, and so on can add things on top of the MeeGo platform to make themselves stand out. The differentiator might be a different user experience or add-on services, but it can be built on top of a non-fragmented platform with stable APIs. This allows those companies to express their creativity and brand without fragmentation.

The plan for Nokia is to provide "delicious hardware", with great connectivity, and a "fantastic user experience" on top. He again noted Nokia CEO Elop's statement that Nokia would be delivering a new standard for user experience on mobile devices. There are those who think that the user experience for devices has already been decided, but he pointed out that it took decades to decide on the standard interface for driving a car—"and we may not be done", noting that alternatives for car interfaces may be on the horizon.

"Creating a set of devices that are so cool that developers want to develop for them" is the approach Nokia and others are taking with MeeGo, Torres said. Some of those devices will be announced by Nokia in 2011. Given the growth in the MeeGo community, Torres joked that next year's MeeGo developer conference might need to use the outdoor part of the stadium to hold all of the attendees.

While there was much of interest in the visions presented, it is still an open question how many hackable MeeGo devices will become available. There wasn't anything said in the keynotes about devices that can be altered by users with their own ideas of how their MeeGo device should work. Instead, the focus was clearly on the kinds of things that MeeGo enables device manufacturers to do, without any real nod toward user freedoms. With luck, there will be some device makers who recognize the importance of free devices and will deliver some with MeeGo.


MeeGo beyond the mobile device

November 17, 2010

This article was contributed by Nathan Willis

The majority of the sessions (and indeed, attendees) at the MeeGo Conference in Dublin were focused on the handheld and netbook form factors, because the project emerged from the union of Intel's netbook-oriented Moblin and Nokia's handheld Maemo distributions. As a result it is easy to overlook the fact that the project has added several significantly different target platforms since its inception in February. The "connected TV" and "in-vehicle infotainment" (IVI) platforms share a few common factors with handheld devices, such as near-instant-on boot requirements and remote-management capabilities, but as Monday's talks explained, they also stretch the MeeGo software stack at almost every level, from non-PC hardware support, to different audio and video middleware, to different user interfaces and I/O devices.

Set-top Linux

[Dominique Le Foll]

Dominique Le Foll of Cambridge, UK-based Amino Communications presented two talks about his company's work on the connected TV user experience (UX) for MeeGo. Amino builds MeeGo-based set-top boxes for Europe and North America, generally tailored for television service providers. Le Foll's first talk was one of the Monday-morning keynotes, and focused on Amino's decision to build its products on a "full Linux distribution" rather than a stripped-down embedded Linux platform.

MeeGo's structure as a full distribution lowers the company's development costs, he said, because it permits the team to automatically stay compatible with upstream projects. In contrast, typical embedded distributions tend to use a reduced set of packages and libraries, and usually take the freeze-and-fork approach to what they do include, thus forcing the developers to spend time backporting bug fixes and major updates. In addition, he said, building the company's products — which include custom applications written for each customer — takes less development time, because the team can use the standard desktop Linux development tools, and easily build on top of desktop projects that are rarely included in embedded distributions, such as VoIP, video conferencing, and social networking.

Le Foll's second talk looked in more depth at the MeeGo software stack and what it needs to become a ready-to-deploy set-top box platform. The five "required" services all set-top devices need to support, he said, are live broadcast television (based on DVB or ATSC program delivery), Internet video, access to home content (including video, audio, and other media), video-on-demand (VOD) service, and third-party, easy-to-install "apps" of the kind currently popular on consumer smartphones. Each brings its share of challenges to the MeeGo platform.

Broadcast television and VOD services both require some security mechanism with which service providers can implement mandatory access control on specific content streams. This includes DRM and hardware chain-of-trust as well as software modules that can prevent unauthorized applications from accessing protected content or driving special-purpose hardware. Internet video requires, yes, Adobe Flash support — specifically Flash support capable of running on the lower-resource system-on-chip hardware typically used to build set-top boxes.

Access to home content entails seamless playback of a glut of different, often unpredictable video and audio formats, which Le Foll suggested would best be handled by a single unified media-playback application that is decoupled from the content sources. The player, he argued, should not need to know whether the video is coming in live from an antenna, being streamed over IPTV or RTP, or is stored on a network drive. Amino uses GStreamer in its products, and says that it is capable of playing all of the necessary codecs, including broadcast HDTV, but that it lacks a few critical pieces, such as hardware video acceleration and integrated multi-language and subtitle/caption support. Here again, he said, the real need is for a simple playback application that can play back European teletext, US-style closed captioning, and DVD subtitles, without caring which format the underlying source originated in.
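
As a rough illustration of that decoupling, here is a minimal sketch (not Amino's code) built on GStreamer's playbin element ("playbin2" in the 0.10-era API current at the time): the application hands playbin a URI and neither knows nor cares whether it names a file, an HTTP stream, or an RTP source.

    /* Source-agnostic playback sketch: playbin resolves whatever URI
       it is given via installed plugins, so the player code is the
       same for broadcast, IPTV, or stored content. Error handling is
       omitted for brevity. */
    #include <gst/gst.h>

    int main(int argc, char *argv[])
    {
        gst_init(&argc, &argv);
        if (argc < 2)
            return 1;

        GstElement *player = gst_element_factory_make("playbin2", "player");
        g_object_set(player, "uri", argv[1], NULL);  /* any URI scheme */
        gst_element_set_state(player, GST_STATE_PLAYING);

        /* Block until end-of-stream or an error. */
        GstBus *bus = gst_element_get_bus(player);
        GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
                              GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

        gst_message_unref(msg);
        gst_object_unref(bus);
        gst_element_set_state(player, GST_STATE_NULL);
        gst_object_unref(player);
        return 0;
    }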

Regarding the access control measures and Flash, Le Foll was considering MeeGo set-top boxes as commercial products, of course, to be built by OS-integrators like Amino and sold and deployed by cable companies, satellite TV providers, IPTV distributors, and other content service providers. Do-it-yourself types with an aversion to Flash and no interest in DRM might bristle at the thought of adding them to a Linux distribution, but they would be under no obligation to make use of them, a point which Le Foll clarified in response to an audience question.

On top of the low-level media support, he added, there are several "invisible" things that MeeGo needs to add in order to be a robust connected-TV platform. These include support for remote software updates, automatic backup-and-recovery, and other management tasks that would be infeasible to require non-technical users to perform on their own, and difficult to execute with an infrared remote control. In many countries, he continued, there are legal certification requirements for set-top boxes that entail technical features, such as interfacing with the local emergency broadcast services. Support for infrared remotes is another area in which MeeGo needs significant development, he added, a feature that touches on both hardware drivers and the user interface. Set-top box products demand IR remotes and easy-to-decipher interfaces that can be used from ten feet ("or three meters") away on the couch. Though touch-screen support and gesture interfaces are all the rage in mobile MeeGo device development, he said, they are useless in the set-top environment.

Perhaps the most interesting feature in Le Foll's list of five required services is support for end user "apps." This, he explained, is the feature most often requested by the television service providers, who have watched the success of Apple's App Store on the iPhone with envy. In recent years, service providers have tried a number of means to dissuade customers from switching services, including (most recently) "bundling" television service with phone service and Internet access, and all have failed. They are now looking to differentiate their service from the competition with apps on set-top boxes, Le Foll said, which leaves MeeGo exceptionally well positioned to meet their needs. For open source developers, this opens up the possibility of developing MeeGo applications for handsets and netbooks that will also run, unaltered, on the next generation of set-top boxes.

Vehicular MeeGo

Another challenging difference in the set-top box environment that Le Foll touched on in his talks is that netbooks and handhelds are essentially single-user devices — while the TV and home theater are shared by the entire household. This distinction has an effect on all sorts of applications, from privacy concerns to customization issues, that developers need to consider when porting their code to the new environment.

[Rudolf Streif]

The same is true of the IVI platform; not only can one vehicle be driven by many members of a household, but an IVI system often needs to consider many users at once. The driver may be using navigation while passengers in the back seat watch rear-seat-entertainment (RSE) consoles, each displaying different content; yet the IVI system also needs to override all of the separate audio zones to sound an alert if the car's proximity sensor detects it is about to back into the curb.

Rudolf Streif, from the Linux Foundation's MeeGo IVI Working Group, presented an overview of the MeeGo IVI platform on Monday afternoon, including the missing pieces needed to build MeeGo into a solid IVI base. In addition to multi-zone audio and video, an IVI system also needs to support split-screen and layered video — for example, to permit alerts or hands-free phone call messages to pop up as higher-priority overlays on top of an existing video layer. But the human-machine-interface (HMI) layer in a vehicle system also has to cope with a different set of user input devices, such as physical buttons and knobs on dash units and steering wheels, and simple integration with consumer electronic devices like MP3 players and phones.

The hardware layer also needs to support a variety of device buses used to connect data sensors (speed, fuel level, etc.). There are several industry standards in wide deployment, Streif said, including Controller Area Network (CAN) and Media Oriented Systems Transport (MOST). Supporting them in open source is challenging, he added, because many car-makers have implemented their own brand-specific variations of the standard, and some (like MOST) are not freely or publicly available. For application developers, of course, MeeGo would also need to provide a bus-neutral common API to access this sensor data and (where applicable) to control vehicle hardware.

There are several areas of the middleware stack where MeeGo — and even Linux and open source in general — currently falls short. One (mentioned in Streif's talk and also raised in the IVI birds-of-a-feather session held later that afternoon) is voice control, specifically speech recognition and speech synthesis. There are few open source projects tackling these tasks, and most of those are academic in nature and not easily integrated with upstream projects. Because hands-free phone operation is critical (even a legal requirement in many areas), there is a need for good acoustic echo cancellation and noise suppression, neither of which is currently well supported by an open source project.

IVI devices are even more sensitive to fast boot times and fast application start-up than are entertainment devices, plus they must be prepared to cope with unregulated DC power from batteries and shut down safely and quickly when power is cut off. As with entertainment devices, most end users are neither prepared for nor interested in performing system updates, so remote management is a must. But unlike set-top boxes or even phones, car IVI systems are generally designed to have a ten-year lifespan. That poses a challenge not only for hardware makers, but for the MeeGo project itself and its application compliance program.

The IVI Working Group includes a diverse group of collaborators, including silicon vendors like Intel, car makers and Tier 1 automotive suppliers, industry consortia like GENIVI, and automotive software developers like Pelagicore AB. Involvement by the existing MeeGo development community has been slow to build, owing in no small part to the long product development cycle of the auto industry, but Streif and other members of the project were actively seeking input and participation from community members.

Where else can MeeGo go

At first blush, vehicle computing and set-top boxes sound like a radical departure from MeeGo's portable-device beginnings. Listening to the talks, however, it becomes clear that in both cases, there is an industry that up until now had been dominated by traditional embedded systems — and often proprietary operating systems and software stacks — which sees the success of Linux in smartphones and wants to emulate it. Open source software on smartphones took decades to arrive; at the very least the opportunity presented by MeeGo on the set-top box and IVI fronts is one where open source software can make a strong showing from the beginning. Beyond that, it may allow free software advocates to push back on some issues like closed and royalty-bearing standards that currently inhibit development.

The first big bullet point made in all of Monday morning's keynotes was that MeeGo is designed to present a unified Linux-based stack for the embedded market, averting the fragmentation that dogged early Linux smartphone development. That is clearly welcome news to the device makers. But the second big bullet point was that MeeGo presents a unified Linux distribution that is compatible with upstream projects and desktop distributions — which ought to be welcome news to open source developers. Le Foll and Streif both discussed examples of how industry product vendors (television service providers and car-makers, respectively) were eager to get on board with the mobile application craze; having those platforms be compatible with Linux desktops is a clear win. Don't think that it stops there, either — although there were no talks on the program about them, more MeeGo platforms kept cropping up in the middle of people's sessions, including everything from desktop video-phones to digital signage.


Ghosts of Unix past, part 3: Unfixable designs

November 16, 2010

This article was contributed by Neil Brown

In the second installment of this series, we documented two designs that were found to be imperfect and have largely (though not completely) been fixed through ongoing development. Though there was some evidence that the result was not as elegant as we might have achieved had the original mistakes not been made, it appears that the current design is at least adequate and on a path towards being good.

However, there are some design mistakes that are not so easily corrected. Sometimes a design is of such a character that fixing it is never going to produce something usable. In such cases it can be argued that the best way forward is to stop using the old design and to create something completely different that meets the same need. In this episode we will explore two designs in Unix which have seen multiple attempts at fixes but for which it isn't clear that the result is even heading towards "good". In one case a significant change in approach has produced a design which is both simpler and more functional than the original. In the other case, we are still waiting for a suitable replacement to emerge. After exploring these two "unfixable designs" we will try to address the question of how to distinguish an unfixable design from a poor design which can, as we saw last time, be fixed.

Unix signals

Our first unfixable design involves the delivery of signals to processes. In particular it is the registration of a function as a "signal handler" which gets called asynchronously when the signal is delivered. That this design was in some way broken is clear from the fact that the developers at UCB (The University of California at Berkeley, home of BSD Unix) found the need to introduce the sigvec() system call, along with a few other calls, to allow individual signals to be temporarily blocked. They also changed the semantics of some system calls so that they would restart rather than abort if a signal arrived while the system call was active.

It seems there were two particular problems that these changes tried to address. Firstly there is the question of when to re-arm a signal handler. In the original Unix design a signal handler was one-shot - it would only respond the first time a signal arrived. If you wanted to catch a subsequent signal you would need to make the signal handler explicitly re-enable itself. This can lead to races: if a signal is delivered before the signal handler is re-enabled, it can be lost forever. Closing these races involved creating a facility for keeping the signal handler always available, and blocking new deliveries while the signal was being processed.
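
To make the race concrete, here is a hedged sketch in the old one-shot style. Note that modern glibc's signal() already provides BSD semantics; reproducing the historical reset-to-default behavior on Linux today would require sysv_signal().

    /* Sketch of the original one-shot semantics: the handler must
       re-install itself, and a second SIGINT arriving in the window
       before the re-arm gets the default action (which, for SIGINT,
       kills the process) - the lost-signal race described above. */
    #include <signal.h>
    #include <unistd.h>

    static void on_int(int sig)
    {
        /* window: a SIGINT arriving here is handled by SIG_DFL */
        signal(SIGINT, on_int);          /* explicit re-arm */
        write(1, "interrupted\n", 12);   /* write() is async-signal-safe */
    }

    int main(void)
    {
        signal(SIGINT, on_int);
        for (;;)
            pause();                     /* wait for signals forever */
    }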

The other problem involves exactly what to do if a signal arrives while a system call is active. Options include waiting for the system call to complete, aborting it completely, allowing it to return partial results, or allowing it to restart after the signal has been handled. Each of these can be the right answer in different contexts; sigvec() tried to provide more control so the programmer could choose between them.
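
The consequences are still visible in the manual-restart idiom every Unix C programmer learns. A sketch (retrying_read() is a hypothetical helper):

    /* When a handled signal interrupts a slow read() and SA_RESTART is
       not in effect, read() fails with errno == EINTR and the caller
       must retry by hand. */
    #include <errno.h>
    #include <unistd.h>

    ssize_t retrying_read(int fd, void *buf, size_t len)
    {
        ssize_t n;
        do {
            n = read(fd, buf, len);
        } while (n == -1 && errno == EINTR);   /* interrupted: restart */
        return n;
    }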

Even these changes, however, were not enough to make signals really usable, so the developers of System V (at AT&T) found the need for a sigaction() call which adds some extra flags to control the fine details of signal delivery. This call also allows a signal handler to be passed a "siginfo_t" data structure with information about the cause of the signal, such as the UID of the process which sent the signal.
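
A minimal sketch of that interface, with SA_SIGINFO requesting the siginfo_t and SA_RESTART selecting the restart-system-calls behavior discussed above (the handler body is illustrative only):

    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t sender_pid;

    static void on_usr1(int sig, siginfo_t *si, void *context)
    {
        sender_pid = si->si_pid;   /* who sent the signal ... */
        (void)si->si_uid;          /* ... and under which UID */
        (void)context;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = on_usr1;
        sa.sa_flags = SA_SIGINFO | SA_RESTART;
        sigemptyset(&sa.sa_mask);  /* block nothing extra during delivery */
        sigaction(SIGUSR1, &sa, NULL);
        for (;;)
            pause();
    }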

As these changes, particularly those from UCB, were focused on providing "reliable" signal delivery, one might expect that at least the reliability issues would be resolved. Not so it seems. The select() system call (and related poll()) did not play well with signals so pselect() and ppoll() had to be invented and eventually implemented. The interested reader is encouraged to explore their history. Along with these semantic "enhancements" to signal delivery, both teams of developers chose to define more signals generated by different events. Though signal delivery was already problematic before these were added, it is likely that these new demands stretched the design towards breaking point.
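
The core of that history is a race: with select(), a signal arriving between a flag check and the call sleeps until the next file-descriptor event. A sketch of the pattern pselect() was invented to support, keeping the signal blocked except while sleeping:

    #include <signal.h>
    #include <string.h>
    #include <sys/select.h>

    static volatile sig_atomic_t got_usr1;

    static void on_usr1(int sig) { got_usr1 = 1; }

    void event_loop(int fd)
    {
        sigset_t block, orig;
        struct sigaction sa;

        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = on_usr1;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGUSR1, &sa, NULL);

        sigemptyset(&block);
        sigaddset(&block, SIGUSR1);
        sigprocmask(SIG_BLOCK, &block, &orig);  /* normally blocked */

        for (;;) {
            if (got_usr1) {
                got_usr1 = 0;
                /* ... respond to the signal ... */
            }
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(fd, &rfds);
            /* the signal can be delivered only inside this call,
               closing the window between check and wait */
            if (pselect(fd + 1, &rfds, NULL, NULL, NULL, &orig) > 0) {
                /* ... fd is readable ... */
            }
        }
    }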

An interesting example is SIGCHLD and SIGCLD, which are sent when a child exits or is otherwise ready for the parent to wait() for it. The difference between these two (apart from the letter "H" and different originating team) is that SIGCHLD is delivered once per event (as is the case with other signals) while SIGCLD would be delivered constantly (unless blocked) while any child is ready to be waited for. In the language of hardware interrupts, SIGCHLD is edge triggered while SIGCLD is level triggered. The choice of a level-triggered signal might have been another attempt to improve reliability. Adding SIGCLD was more than just defining a new number and sending the signal at the right time. Two of the new flags added for sigaction() are specifically for tuning the details of handling this signal. This is extra complexity that signals didn't need and which arguably did not belong there.

In more recent years the collection of signal types has been extended to include "realtime" signals. These signals are user-defined signals (like SIGUSR1 and SIGUSR2) which are only delivered if explicitly requested in some way. They have two particular properties. Firstly, realtime signals are queued so the handler in the target process is called exactly as many times as the signal was sent. This contrasts with regular signals which simply set a flag on delivery. If a process has a given (regular) signal blocked and the signal is sent several times, then, when the process unblocks the signal, it will still only see a single delivery event. With realtime signals it will see several. This is a nice idea, but introduced new reliability issues as the depth of the queue was limited, so signals could still be lost. Secondly (and this property requires the first), a realtime signal can carry a small datum, typically a number or a pointer. This can be sent explicitly with sigqueue() or less directly with, e.g., timer_create().
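
A hedged sketch of both properties, sending one queued signal with an integer payload to the process itself:

    /* Each sigqueue() call (queue limits permitting) produces one
       handler invocation, and the value sent arrives in si->si_value. */
    #include <signal.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t last_datum;

    static void on_rt(int sig, siginfo_t *si, void *context)
    {
        last_datum = si->si_value.sival_int;   /* the datum sent below */
        (void)context;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_sigaction = on_rt;
        sa.sa_flags = SA_SIGINFO;
        sigemptyset(&sa.sa_mask);
        sigaction(SIGRTMIN, &sa, NULL);

        union sigval v;
        v.sival_int = 42;
        sigqueue(getpid(), SIGRTMIN, v);   /* signal plus payload */
        /* for an unblocked self-directed signal, the handler normally
           runs before sigqueue() returns; in general one would wait */
        return last_datum == 42 ? 0 : 1;
    }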

It could be thought that this addition of more signals for more events is a good example of the "full exploitation" pattern that was discussed at the start of this series. However, when adding new signal types requires significant changes to the original design, it could equally seem that the original design wasn't really strong enough to be so fully exploited. As can be seen from this retrospective, though the original signal design was quite simple and elegant, it was fatally flawed. The need to re-arm signals made them hard to use reliably, the exact semantics of interrupting a system call was hard to get right, and developers repeatedly needed to significantly extend the design to make it work with new types of signals.

The most recent step in the saga of signals is the signalfd() system call which was introduced to Linux in 2007 for 2.6.22. This system call extends "everything has a file descriptor" to work for signals too. Using this new type of descriptor returned by signalfd(), events that would normally be handled asynchronously via signal handlers can now be handled synchronously just like all I/O events. This approach makes many of the traditional difficulties with signals disappear. Queuing becomes natural so re-arming becomes a non-issue. Interaction with system calls ceases to be interesting and an obvious way is provided for extra data to be carried with a signal. Rather than trying to fix a problematic asynchronous delivery mechanism, signalfd() replaces it with a synchronous mechanism that is much easier to work with and which integrates well into other aspects of the Unix design - particularly the universality of file descriptors.
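
A minimal sketch of the signalfd() style: the signals are blocked, and delivery records are then read like any other input.

    /* No handler, no re-arm race; the sender's PID even rides along
       in the fixed-size delivery record. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/signalfd.h>
    #include <unistd.h>

    int main(void)
    {
        sigset_t mask;
        sigemptyset(&mask);
        sigaddset(&mask, SIGINT);
        sigaddset(&mask, SIGTERM);
        sigprocmask(SIG_BLOCK, &mask, NULL);   /* stop async delivery */

        int sfd = signalfd(-1, &mask, 0);
        struct signalfd_siginfo ssi;

        while (read(sfd, &ssi, sizeof(ssi)) == sizeof(ssi)) {
            printf("signal %u from pid %u\n", ssi.ssi_signo, ssi.ssi_pid);
            if (ssi.ssi_signo == SIGTERM)
                break;
        }
        close(sfd);
        return 0;
    }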

It is a fun, though probably pointless, exercise to imagine what the result might have been had this approach been taken to signals when problems were first observed. Instead of adding new signal types we might have new file descriptor types, and the set of signals that were actually used could have diminished rather than grown. Realtime signals might instead be a general and useful form of interprocess communication based on file descriptors.

It should be noted that there are some signals which signalfd() cannot be used for. These include SIGSEGV, SIGILL, and other signals that are generated because the process tried to do something impossible. Just queueing these signals to be processed later cannot work; the only alternatives are switching control to a signal handler or aborting the process. These cases are handled perfectly by the original signal design. They cannot occur while a system call is active (system calls return EFAULT rather than raising a signal) and issues with when to re-arm the signal handler are also less relevant.

So while signal handlers are perfectly workable for some of the early use cases (e.g. SIGSEGV) it seems that they were pushed beyond their competence very early, thus producing a broken design for which there have been repeated attempts at repair. While it may now be possible to write code that handles signal delivery reliably, it is still very easy to get it wrong. The replacement that we find in signalfd() promises to make event handling significantly easier and so more reliable.

The Unix permission model

Our second example of an unfixable design which is best replaced is the owner/permission model for controlling access to files. A well-known quote attributed to H. L. Mencken is "there is always a well-known solution to every human problem - neat, plausible, and wrong." This is equally true of computing problems, and the Unix permissions model could be just such a solution. The initial idea is deceptively simple: six bytes per file gives simple and broad access control. For an operating system that must fit in 32 kilobytes of RAM (or less), such simplicity is very appealing, and thinking about how the model might one day be extended is not a high priority, which is understandable though unfortunate.

The main problem with this permission model is that it is both too simple and too broad. The breadth of the model is seen in the fact that every file stores its own owner, group owner, and permission bits. Thus every file can have distinct ownership or access permissions. This is much more flexibility than is needed. In most cases, all the files in a given directory, or even a directory tree, have the same ownership and much the same permissions. This fact was leveraged by the Andrew filesystem, which only stores ownership and permissions on a per-directory basis, with little real loss of functionality.
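
For concreteness, a small sketch showing those per-file bytes through the stat(2) interface (the fields are wider than 16 bits on modern systems, but the per-file model is unchanged):

    #include <stdio.h>
    #include <sys/stat.h>

    int main(int argc, char *argv[])
    {
        struct stat st;

        if (argc < 2 || stat(argv[1], &st) != 0)
            return 1;
        /* originally a 16-bit owner, 16-bit group, and 16 bits of mode,
           stored with every single file */
        printf("owner %u  group %u  mode %04o\n",
               (unsigned)st.st_uid, (unsigned)st.st_gid,
               (unsigned)(st.st_mode & 07777));
        return 0;
    }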

When this only costs six bytes per file it might seem a small price to pay for the flexibility. However, once more than 65,536 different owners are wanted, or more permission bits and more groups are needed, storing this information begins to become a real cost. The bigger cost, though, is in usability.

While a computer may be able to easily remember six bytes per file, a human cannot easily remember why various different settings might have been assigned, and so is very likely to create sets of permission settings which are inconsistent, inappropriate, and hence not particularly secure. Your author has memories from University days of often seeing home directories given "0777" permissions (everyone has any access) simply because a student wanted to share one file with a friend, but didn't understand the security model.

The excessive simplicity of the Unix permission model is seen in the fixed, small number of permission bits, and, particularly, that there is only one "group" that can have privileged access. Another maxim from computer engineering, attributed to Alan Kay, is that "Simple things should be simple, complex things should be possible." The Unix permission model makes most use cases quite simple but once the need exceeds that common set of cases, further refinement becomes impossible. The simple is certainly simple, but the complex is truly impossible.

It is here that we start to see real efforts to try to "fix" the model. The original design gave each process a "user" and a "group" corresponding to the "owner" and "group owner" in each file, and they were used to determine access. The "only one group" limit is limiting on both sides; the Unix developers at UCB saw that, for the process side at least, this limit was easy to extend. They allowed a process to have a list of groups for checking filesystem access against. (Unfortunately this list originally had a firm upper limit of 16, and that limit made its way into the NFS protocol, where it was hard to change and is still biting us today.)
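
The process side of that extension is visible through getgroups(); a sketch (the array size here is arbitrary):

    /* A process carries a supplementary group list; the original fixed
       array of 16 entries is the limit that leaked into the NFS
       protocol. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        gid_t groups[64];
        int i, n = getgroups(64, groups);

        for (i = 0; i < n; i++)
            printf("member of group %u\n", (unsigned)groups[i]);
        return n < 0;
    }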

Changing the per-file side of this limit is harder as that requires changing the way data is encoded in a filesystem to allow multiple groups per file. As each group would also need its own set of permission bits, a file would need a list of groups and permission bits, and these became known quite reasonably as "access control lists" or ACLs. The POSIX standardization effort made a couple of attempts to create a standard for ACLs, but never got past draft stage. Some Unix implementations have implemented these drafts, but they have not been widely successful.

The NFSv4 working group (under the IETF umbrella) was tasked with creating a network filesystem which, among other goals, would provide interoperability between POSIX and WIN32 systems. As part of this effort they developed yet another standard for ACLs which aimed to support the access model of WIN32 while still being usable on POSIX. Whether this will be more successful remains to be seen, but it seems to have a reasonable amount of momentum with an active project trying to integrate it into Linux (under the banner of "richacls") and various Linux filesystems.

One consequence of using ACLs is that the per-file storage space needed to store the permission information is not only larger than six bytes, it is not of a fixed length. This is, in general, more challenging than any fixed size. Those filesystems which implement these ACLs do so using "extended attributes" and most impose some limit on the size of these - each filesystem choosing a different limit. Hopefully most ACLs that are actually used will fit within all these arbitrary limits.

Some filesystems - ext3 at least - attempt to notice when multiple files have the same extended attributes and just store a single copy of those attributes, rather than one copy for each file. This goes some way to reduce the space cost (and access-time cost) of larger ACLs that can be (but often aren't) unique per file, but does nothing to address the usability concerns mentioned earlier. In that context, it is worth quoting Jeremy Allison, one of the main developers of Samba, and so with quite a bit of experience with ACLs from WIN32 systems and related interoperability issues. He writes: "But Windows ACLs are a nightmare beyond human comprehension :-). In the 'too complex to be usable' camp." It is worth reading the context and follow up to get a proper picture, and remembering that richacls, like NFSv4 ACLs, are largely based on WIN32 ACLs.

Unfortunately it is not possible to present any real example of replacing rather than fixing the Unix permission model. One contender might be that part of "SELinux" that deals with file access. This doesn't really aim to replace regular permissions but rather tries to enhance them with mandatory access controls. SELinux follows much the same model of Unix permissions, associating a security context with every file of interest, and does nothing to improve the usability issues.

There are however two partial approaches that might provide some perspective. One partial approach began to appear in Seventh Edition (V7) Unix with the chroot() system call. It appears that chroot() wasn't originally created for access control but rather to provide a separate namespace in which to create a clean filesystem for distribution. However it has since been used to provide some level of access control, particularly for anonymous FTP servers. This is done by simply hiding all the files that the FTP server shouldn't access. Anything that cannot be named cannot be accessed.
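
In code, that confinement is essentially two calls; a sketch assuming an /srv/ftp tree (it requires root, and a real server would drop privileges immediately afterward):

    /* "Anything that cannot be named cannot be accessed": after
       chroot(), path lookups cannot escape the jail - provided the
       working directory is moved inside it too. */
    #include <unistd.h>

    int confine_to_ftp_area(void)
    {
        if (chroot("/srv/ftp") != 0)
            return -1;
        if (chdir("/") != 0)    /* don't leave the cwd outside the jail */
            return -1;
        /* ... setgid()/setuid() to an unprivileged user here ... */
        return 0;
    }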

This concept has been enhanced in Linux with the possibility for each process not just to have its own filesystem root, but also to have a private set of mount points with which to build a completely customized namespace. Further it is possible for a given filesystem to be mounted read-write in one namespace and read-only in another namespace, and, obviously, not at all in a third. This functionality is suggestive of a very different approach to controlling access permissions. Rather than access control being per-file, it allows it to be per-mount. This leads to the location of a file being a very significant part of determining how it can be accessed. Though this removes some flexibility, it seems to be a concept that human experience better prepares us to understand. If we want to keep a paper document private we might put it in a locked drawer. If we want to make it publicly readable, we distribute copies. If we want it to be writable by anyone in our team, we pin it to the notice board in the tea room.
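
A hedged sketch of that per-mount style of control using the Linux API (paths are illustrative, privilege is required, and on systems where mounts are shared by default the tree would also need to be marked private first):

    /* Give the calling process its own mount namespace, then make its
       view of /shared read-only via a bind mount. Other namespaces
       keep their read-write view of the same filesystem. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <sys/mount.h>

    int readonly_private_view(void)
    {
        if (unshare(CLONE_NEWNS) != 0)   /* private mount table */
            return -1;
        if (mount("/shared", "/shared", NULL, MS_BIND, NULL) != 0)
            return -1;
        /* a bind mount becomes read-only only in a second, remount step */
        return mount(NULL, "/shared", NULL,
                     MS_BIND | MS_REMOUNT | MS_RDONLY, NULL);
    }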

This approach is clearly less flexible than the Unix model as the control of permissions is less fine grained, but it could well make up for that in being easier to understand. Certainly by itself it would not form a complete replacement, but it does appear to be functionality that is growing - though it is too early yet to tell if it will need to grow beyond its strength. One encouraging observation is that it is based on one of those particular Unix strengths observed in our first pattern, that of "a hierarchical namespace" which would be exploited more fully.

A different partial approach can be seen in the access controls used by the Apache web server. These are encoded in a domain-specific language and stored in centralized files or in ".htaccess" files near the files that are being controlled. This method of access control has a number of real strengths that would be a challenge to encode into anything based on the Unix permission model:

  • The permission model is hierarchical, matching the filesystem model. Thus controls can be set at whichever point makes most sense, and can be easily reviewed in their entirety. When the controls set at higher levels are not allowed to be relaxed at lower levels it becomes easy to implement mandatory access controls.

  • The identity of the actor requesting access can be arbitrary, rather than just from the set of identities that are known to the kernel. Apache allows control based on source IP address or username plus password; with plug-in modules, almost anything else that could be available can be used.

  • Access can be provided indirectly through a CGI program. Thus, rather than trying to second-guess all possible access restrictions that might be desirable and define permission bits for them in a new ACL, the model can allow any arbitrary action to be controlled by writing a suitable script to mediate that access.

It should be fairly obvious that this model would not be an easy fit with kernel-based access checking and, in any case, would have a higher performance cost than a simpler model. As such it would not be suitable to apply universally. However it could be that such a model would be suitable for that small percentage of needs that do not fit in a simple namespace based approach. There the cost might be a reasonable price for the flexibility.

While alternative approaches such as these might be appealing, they would face a much bigger barrier to introduction than signalfd() did. signalfd() could be added as a simple alternative to signal handlers. Programs could continue to use the old model with no loss, while new programs could make use of the new functionality. With permission models, it is not so easy to have two schemes running in parallel. People who make serious use of ACLs will probably already have a bunch of ACLs carefully tuned to their needs, and enabling an alternative parallel access mechanism is very likely to break something. So this is the sort of thing that would best be trialed in a new installation rather than imposed on an existing user-base.

Discerning the pattern

If we are to have a convincing pattern of "unfixable designs" it must be possible to distinguish them from fixable designs such as those that we found last time. In both cases, each individual fix appears to be a good idea addressing a real problem without obviously introducing more problems. In some cases this series of small steps leads to a good result; in others these steps only help you get past the small problems enough to be able to see the bigger problem.

We could use mathematical terminology to note that a local maximum can be very different from a global maximum. Or, using mountain-climbing terminology, it is hard to know the true summit from a false summit which just gives you a better view of the mountain. In each case the missing piece is a large scale perspective. If we can see the big picture we can more easily decide if a particular path will lead anywhere useful or if it is best to head back to base and start again.

Trying to move this discussion back to the realm of software engineering, it is clear that we can only head off unfixable designs if we can find a position that can give us a clear and broad perspective. We need to be able to look beyond the immediate problem, to see the big picture and be willing to tackle it. The only known source of perspective we have for engineering is experience, and few of us have enough experience to see clearly into the multiple facets and the multiple levels of abstraction that are needed to make right decisions. Whether we look for such experience by consulting elders, by researching multiple related efforts, or finding documented patterns that encapsulate the experience of others, it is vitally important to leverage any experience that is available rather than run the risk of simply adding bandaids to an unfixable design.

So there is no easy way to distinguish an unfixable design from a fixable one. It requires leveraging the broad perspective that is only available through experience. Having seen the difficulty of identifying unfixable designs early we can look forward to the final part of this series, where we will explore a pernicious pattern in problematic design. While unfixable designs give a hint of deeper problems by appearing to need fixing, these next designs do not even provide that hint. The hints that there is a deeper problem must be found elsewhere.

Exercises

  1. Though we found that signal handlers had been pushed well beyond their competence, we also found at least one area (i.e. SIGSEGV) where they were still the right tool for the job. Determine if there are other use cases that avoid the observed problems, and so provide a balanced assessment of where signal handlers are effective, and where they are unfixable.

  2. Research problems with "/tmp", attempts to fix them, any unresolved issues, and any known attempts to replace rather than fix this design.

  3. Describe an aspect of the IP protocol suite that fits the pattern of an "Unfixable design".

  4. It has been suggested that dnotify, inotify, and fanotify are all broken. Research and describe the problems and provide an alternate design that avoids all of those issues.

  5. Explore the possibility of using fanotify to implement an "apache-like" access control scheme with decisions made in user-space. Identify enhancements required to fanotify for this to be practical.

Next article

Ghosts of Unix past, part 4: High-maintenance designs



Security

A high-level view of the MeeGo security landscape

By Jake Edge
November 17, 2010

Several members of the MeeGo security team were on hand at the 2010 MeeGo conference to talk about what kinds of threats they will be trying to address—and why—as well as a security framework to enable MeeGo integrators and application developers to handle security tasks. MeeGo security architect Ryan Ware of Intel looked at the what and the why, while Elena Reshetova and Casey Schaufler of Nokia presented on the Mobile Simplified Security Framework (MSSF). As might be guessed from the presence of Schaufler, the Smack kernel security module plays a prominent role in the access control portion of MSSF. This week, we'll cover Ware's presentation and look at Reshetova and Schaufler's next week.

Ware started with a look back at 1990 by way of a justification of the need for MeeGo security solutions. In 1990, Intel had 25MHz 386 processors, the Simpsons were on TV, and there were all of 12 CERT security alerts for the year. All of those alerts "fit on one slide easily" and contained some amusing entries like "rumor of alleged attack" and "security probes from Italy". He listed, again on one slide, the conferences and other notable computer security news for the year. Things have changed just a little bit since then.

Fast-forwarding to the present, there have been 4221 CVEs so far this year, Intel has 3+GHz chips, and the Simpsons are still on TV. When looking at the growth of malware, there is an inflection point in 1996, which is probably associated with wider usage of the internet. "The internet is a petri dish" where all kinds of malware can grow and change. If you put a stock Windows XP system on the internet today without a firewall, it will be infected before you can get the updates installed; it only takes an average of four minutes before that happens, he said.

There is a huge financial incentive these days for those who write malware, which has changed the landscape significantly. You can now get "malware as a service" or rent botnets ($8-90/1000 bots "depending on quantity", he said). In the pwn2own contest at CanSecWest, someone with a working iPhone exploit was unwilling to release it for the $15,000 prize as they believed they could get more elsewhere—and did, with rumors of a six-figure sum.

There are also "spearphishing" efforts like Aurora that targeted Google and 30 other companies, including Intel, last year. It targeted specific individual employees, sending them an email that looked it came from someone they knew. When the PDF or JPG inside was opened, it appeared to be an innocuous file of that type, but actually infected their machine with a worm that looked for source code repositories. Once found, the contents of those repositories were slowly—so that intrusion detection systems weren't alerted—sent elsewhere. The Stuxnet worm/virus is another example of this new kind of "persistent" threat.

With MeeGo, there are new usage models where desktop data is migrating to mobile phones, which are much more easily lost, for example. People are doing banking from their phones as well. When Ware asked how many in the audience had used their phone for banking, he got quite a few hands; "you're all screwed", he said. Those credentials are stored somewhere in the phone for an attacker (or thief) to find. There are also various efforts to publish your location or turn your phone into a credit card, all of which have various dangers.

Because the number of Linux devices is growing quickly, it is becoming more of a target. For reference, he said there are more than a billion installed Windows systems—some botnets have more than a million bots—but the smartphone market is growing at a rate (35.5%/year) that will go beyond that soon. At that rate, the expected sales of smartphones in 2014 is 506 million. In addition, the smartphone market is getting less fragmented and he sees iOS and Linux as likely to be the only players before too long.

The focus on mobile Linux security is growing, he said. He noted the recent Coverity study of the Android kernel that found 88 high-risk defects and there were "some interesting things in there". The report will not be available for a while, as Coverity gave Google 60 days to fix the problems before releasing it. Ware noted that the study found that the defect rate for the code written for Android was "significantly higher than for the rest of the kernel".

MSSF was originally developed for smartphones, but has been broadened to support all of the MeeGo vertical markets (netbook, connected TV, in-vehicle-infotainment (IVI), ...). At a high level, the goals for MSSF are to provide protections for users of devices, the device itself, and for new services that are envisioned for MeeGo devices.

For users, that includes protecting things like login credentials and cookies, but also to try to prevent malicious software from being able to do things like making expensive phone calls without the knowledge or consent of the device owner. Protecting the device entails protecting the SIM lock and ensuring that regulatory requirements (for things like radio frequency emissions) are strictly adhered to. New services like mobile payment also need protection, he said.

The MeeGo security team is doing things beyond just MSSF. It ensures that the external-facing MeeGo infrastructure is kept secure. That includes things like source code repositories and Open Build Service packages. The team also ensures that MeeGo images are secure by not having insecure defaults on network services, patching packages for security vulnerabilities, and issuing MeeGo advisories.

MeeGo "can't be secure without you guys", he said. The team could do static analysis and code reviews for 80 hours a week and still not find everything. He asked that folks keep an eye out and point out any flaws they find to security@meego.com. There is also a new MeeGo-security-discussion mailing list and weekly IRC meetings of the security team are planned in the near future.

In answer to some audience questions, Ware said he was concerned about security issues surrounding "cloud" applications, but hadn't looked at it specifically yet. It is "something to look at in the future". He also was not interested in talking about DRM solutions, though some in the audience clearly were. He worked on DRM five years ago and was glad to not be working on it any more. "I don't want to fix someone's broken business model", he said. Others who need those kinds of "solutions" will undoubtedly come up with them.

Comments (10 posted)

Brief items

Security quote of the week

GSM equipment manufacturers and mobile operators have shown no interest in fixing gaping holes in their security system.
-- Harald Welte

Comments (none posted)

An OpenSSL race condition

The OpenSSL project has issued an advisory about a race condition that exists in versions prior to 0.9.8p or 1.0.0b. Successfully exploiting this race can enable a remote attacker to inject code into a server using OpenSSL. It's worth noting, though, that only servers which are (1) multi-threaded and (2) using OpenSSL's internal caching are vulnerable. So, in particular, Apache servers are not at risk.

Full Story (comments: 1)

New vulnerabilities

banshee: privilege escalation

Package(s): banshee
CVE #(s): CVE-2010-3998
Created: November 12, 2010
Updated: February 5, 2014
Description: From the CVE entry:

The (1) banshee-1 and (2) muinshee scripts in Banshee 1.8.0 and earlier place a zero-length directory name in the LD_LIBRARY_PATH, which allows local users to gain privileges via a Trojan horse shared library in the current working directory.

Alerts:
Gentoo 201402-05 banshee 2014-02-05
Mandriva MDVSA-2011:034 banshee 2011-02-21
Fedora FEDORA-2010-16907 banshee 2010-10-28
Fedora FEDORA-2010-16916 banshee 2010-10-28
Fedora FEDORA-2010-17021 banshee 2010-10-31

Comments (none posted)

bristol: privilege escalation

Package(s): bristol
CVE #(s): CVE-2010-3351
Created: November 15, 2010
Updated: November 17, 2010
Description: From the CVE entry:

startBristol in Bristol 0.60.5 places a zero-length directory name in the LD_LIBRARY_PATH, which allows local users to gain privileges via a Trojan horse shared library in the current working directory.

Alerts:
Fedora FEDORA-2010-16676 bristol 2010-10-27
Fedora FEDORA-2010-16687 bristol 2010-10-27
Fedora FEDORA-2010-16714 bristol 2010-10-28

Comments (none posted)

bugzilla: multiple vulnerabilities

Package(s): bugzilla
CVE #(s): CVE-2010-3764 CVE-2010-3172
Created: November 15, 2010
Updated: January 20, 2011
Description: From the CVE entries:

The Old Charts implementation in Bugzilla 2.12 through 3.2.8, 3.4.8, 3.6.2, 3.7.3, and 4.1 creates graph files with predictable names in graphs/, which allows remote attackers to obtain sensitive information via a modified URL. (CVE-2010-3764)

CRLF injection vulnerability in Bugzilla before 3.2.9, 3.4.x before 3.4.9, 3.6.x before 3.6.3, and 4.0.x before 4.0rc1, when Server Push is enabled in a web browser, allows remote attackers to inject arbitrary HTTP headers and content, and conduct HTTP response splitting attacks, via a crafted URL. (CVE-2010-3172)

Alerts:
Gentoo 201110-03 bugzilla 2011-10-10
openSUSE openSUSE-SU-2011:0020-1 perl-CGI-Simple 2011-01-10
openSUSE openSUSE-SU-2011:0064-1 perl 2011-01-20
Mandriva MDVSA-2010:252 perl-CGI-Simple 2010-12-14
Fedora FEDORA-2010-17235 bugzilla 2010-11-04
Fedora FEDORA-2010-17280 bugzilla 2010-11-04
Fedora FEDORA-2010-17274 bugzilla 2010-11-04

Comments (none posted)

gromacs: code execution

Package(s): gromacs
CVE #(s): CVE-2010-4001
Created: November 15, 2010
Updated: November 17, 2010
Description: From the Red Hat bugzilla:

Ludwig Nussel discovered that gromacs contained a script that could be abused by an attacker to execute arbitrary code.

The vulnerability is due to an insecure change to LD_LIBRARY_PATH, an environment variable used by ld.so(8) to look for libraries in directories other than the standard paths. When there is an empty item in the colon-separated list of directories in LD_LIBRARY_PATH, ld.so(8) treats it as '.' (the current working directory). If the given script is executed from a directory where a local attacker can write files, there is a chance for exploitation.
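This class of bug (which also underlies the banshee and bristol entries above) is easy to check for by hand. The sketch below is purely illustrative, not from the advisory; it flags the empty entries (a leading colon, a trailing colon, or a doubled colon) that typically result from a script prepending to LD_LIBRARY_PATH when the variable was previously unset.

    /*
     * Illustration only: detect the empty LD_LIBRARY_PATH entries that
     * ld.so(8) treats as "." (the current working directory).
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        const char *path = getenv("LD_LIBRARY_PATH");

        if (path == NULL || *path == '\0')
            return 0;   /* unset or empty: nothing to check */

        /* a leading colon, trailing colon, or "::" is an empty entry */
        if (path[0] == ':' || path[strlen(path) - 1] == ':' ||
            strstr(path, "::") != NULL)
            printf("empty entry: ld.so will search the cwd\n");
        return 0;
    }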

Alerts:
Fedora FEDORA-2010-17256 gromacs 2010-11-04
Fedora FEDORA-2010-17248 gromacs 2010-11-04

Comments (none posted)

kernel: privilege escalation

Package(s): kernel
CVE #(s): CVE-2010-3865
Created: November 11, 2010
Updated: August 9, 2011
Description:

From the openSUSE advisory:

CVE-2010-3865: A iovec integer overflow in RDS sockets was fixed which could lead to local attackers gaining kernel privileges.

Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1164-1 linux-fsl-imx51 2011-07-06
Ubuntu USN-1093-1 linux-mvl-dove 2011-03-25
Ubuntu USN-1119-1 linux-ti-omap4 2011-04-20
Ubuntu USN-1080-2 linux-ec2 2011-03-02
Ubuntu USN-1081-1 linux 2011-03-02
Ubuntu USN-1080-1 linux 2011-03-01
Ubuntu USN-1073-1 linux, linux-ec2 2011-02-25
SUSE SUSE-SA:2011:007 kernel-rt 2011-02-07
Red Hat RHSA-2011:0007-01 kernel 2011-01-11
CentOS CESA-2011:0004 kernel 2011-01-06
Red Hat RHSA-2011:0004-01 kernel 2011-01-04
openSUSE openSUSE-SU-2011:0003-1 kernel 2011-01-03
openSUSE openSUSE-SU-2011:0004-1 kernel 2011-01-03
SUSE SUSE-SA:2010:057 kernel 2010-11-11
openSUSE openSUSE-SU-2010:0933-1 kernel 2010-11-11

Comments (none posted)

kernel: denial of service

Package(s): kernel
CVE #(s): CVE-2010-3698
Created: November 11, 2010
Updated: August 9, 2011
Description:

From the Red Hat advisory:

A flaw was found in the way KVM (Kernel-based Virtual Machine) handled the reloading of fs and gs segment registers when they had invalid selectors. A privileged host user with access to "/dev/kvm" could use this flaw to crash the host. (CVE-2010-3698, Moderate)

Alerts:
Oracle ELSA-2013-1645 kernel 2013-11-26
Ubuntu USN-1187-1 kernel 2011-08-09
Ubuntu USN-1081-1 linux 2011-03-02
Ubuntu USN-1074-2 linux-fsl-imx51 2011-02-28
Ubuntu USN-1074-1 linux-fsl-imx51 2011-02-25
Ubuntu USN-1073-1 linux, linux-ec2 2011-02-25
Ubuntu USN-1072-1 linux 2011-02-25
Mandriva MDVSA-2011:029 kernel 2011-02-17
Fedora FEDORA-2010-18983 kernel 2010-12-17
CentOS CESA-2010:0898 kvm 2010-12-14
Red Hat RHSA-2010:0898-01 kvm 2010-12-06
Red Hat RHSA-2010:0842-01 kernel 2010-11-10

Comments (none posted)

libxml2: code execution

Package(s): libxml2
CVE #(s): CVE-2010-4008
Created: November 11, 2010
Updated: December 8, 2010
Description:

From the Ubuntu advisory:

Bui Quang Minh discovered that libxml2 did not properly process XPath namespaces and attributes. If an application using libxml2 opened a specially crafted XML file, an attacker could cause a denial of service or possibly execute code as the user invoking the program.

Alerts:
Scientific Linux SL-ming-20130201 mingw32-libxml2 2013-02-01
Oracle ELSA-2013-0217 mingw32-libxml2 2013-02-01
CentOS CESA-2013:0217 mingw32-libxml2 2013-02-01
Red Hat RHSA-2013:0217-01 mingw32-libxml2 2013-01-31
Oracle ELSA-2012-0324 libxml2 2012-03-09
Oracle ELSA-2012-0017 libxml2 2012-01-12
Scientific Linux SL-libx-20120112 libxml2 2012-01-12
CentOS CESA-2012:0017 libxml2 2012-01-11
Red Hat RHSA-2012:0017-01 libxml2 2012-01-11
Scientific Linux SL-libx-20111206 libxml2 2011-12-06
Red Hat RHSA-2011:1749-03 libxml2 2011-12-06
Gentoo 201110-26 libxml2 2011-10-26
SUSE SUSE-SR:2010:023 libxml2, tomboy, krb5, php5, cups, java-1_6_0-openjdk, epiphany, encfs 2010-12-08
openSUSE openSUSE-SU-2010:1004-1 libxml2 2010-12-02
Debian DSA-2128-1 libxml2 2010-12-01
Mandriva MDVSA-2010:243 libxml2 2010-11-29
Ubuntu USN-1016-1 libxml2 2010-11-10

Comments (none posted)

mod_fcgid: buffer overflow

Package(s): mod_fcgid
CVE #(s): CVE-2010-3872
Created: November 17, 2010
Updated: August 10, 2011
Description: The mod_fcgid Apache module is subject to a stack buffer overflow with uncertain effects (but code execution seems plausible).
Alerts:
Gentoo 201207-09 mod_fcgid 2012-07-09
SUSE SUSE-SU-2011:0885-1 apache2-mod_fcgid 2011-08-10
openSUSE openSUSE-SU-2011:0884-1 apache2-mod_fcgid 2011-08-10
Debian DSA-2140-1 libapache2-mod-fcgid 2011-01-05
Fedora FEDORA-2010-17472 mod_fcgid 2010-11-08
Fedora FEDORA-2010-17434 mod_fcgid 2010-11-08
Fedora FEDORA-2010-17474 mod_fcgid 2010-11-08

Comments (none posted)

moodle: cross-site scripting

Package(s): moodle
CVE #(s): CVE-2010-4207 CVE-2010-4208 CVE-2010-4209
Created: November 12, 2010
Updated: November 17, 2010
Description: From the openSUSE advisory:

CVE-2010-4207: Cross-site scripting vulnerability in the Flash component infrastructure in YUI allows remote attackers to inject arbitrary web script or HTML via charts/assets/charts.swf.

CVE-2010-4208: Cross-site scripting vulnerability in the Flash component infrastructure in YUI allows remote attackers to inject arbitrary web script or HTML via uploader/assets/uploader.swf.

CVE-2010-4209: Cross-site scripting vulnerability in the Flash component infrastructure in YUI allows remote attackers to inject arbitrary web script or HTML via swfstore/swfstore.swf.

Alerts:
Mageia MGASA-2013-0117 bugzilla 2013-04-18
SUSE SUSE-SR:2010:021 mysql, dhcp, monotone, moodle, openssl 2010-11-16
Fedora FEDORA-2010-16845 moodle 2010-10-28
Fedora FEDORA-2010-16782 moodle 2010-10-28
Fedora FEDORA-2010-16764 moodle 2010-10-28
openSUSE openSUSE-SU-2010:0937-1 moodle 2010-11-12

Comments (none posted)

mysql: denial of service

Package(s): mysql-5.1, mysql-dfsg-5.0, mysql-dfsg-5.1
CVE #(s): CVE-2010-3834
Created: November 11, 2010
Updated: July 19, 2011
Description:

From the Ubuntu advisory:

It was discovered that MySQL incorrectly handled materializing a derived table that required a temporary table for grouping. An authenticated user could exploit this to make MySQL crash, causing a denial of service. (CVE-2010-3834)

Alerts:
Ubuntu USN-1397-1 mysql-5.1, mysql-dfsg-5.0, mysql-dfsg-5.1 2012-03-12
Gentoo 201201-02 mysql 2012-01-05
openSUSE openSUSE-SU-2011:1250-1 mysql 2011-11-16
openSUSE openSUSE-SU-2011:0799-1 mysql-cluster 2011-07-19
openSUSE openSUSE-SU-2011:0774-1 mysql-cluster 2011-07-19
openSUSE openSUSE-SU-2011:0743-1 MariaDB 2011-07-06
Debian DSA-2143-1 mysql-dfsg-5.0 2011-01-14
Ubuntu USN-1017-1 mysql-5.1, mysql-dfsg-5.0, mysql-dfsg-5.1 2010-11-11

Comments (none posted)

openssl: remote code execution

Package(s): openssl
CVE #(s): CVE-2010-3864
Created: November 17, 2010
Updated: November 30, 2010
Description: The OpenSSL project has issued an advisory about a race condition that exists in versions prior to 0.9.8p or 1.0.0b. Successfully exploiting this race can enable a remote attacker to inject code into a server using OpenSSL. It's worth noting, though, that only servers which are (1) multi-threaded and (2) using OpenSSL's internal caching are vulnerable. So, in particular, Apache servers are not at risk. See this advisory for more information.
Alerts:
Gentoo 201110-01 openssl 2011-10-09
SUSE SUSE-SR:2010:022 gdm, openssl, poppler, quagga 2010-11-30
Ubuntu USN-1018-1 openssl 2010-11-18
Debian DSA-2125-1 openssl 2010-11-22
Slackware SSA:2010-326-01 openssl 2010-11-22
openSUSE openSUSE-SU-2010:0965-2 openssl 2010-11-22
Fedora FEDORA-2010-17847 openssl 2010-11-17
Fedora FEDORA-2010-17827 openssl 2010-11-17
openSUSE openSUSE-SU-2010:0965-1 openssl 2010-11-19
Mandriva MDVSA-2010:238 openssl 2010-11-17
Red Hat RHSA-2010:0888-01 openssl 2010-11-16

Comments (none posted)

openswan: code execution

Package(s): openswan
CVE #(s): CVE-2010-3752 CVE-2010-3753
Created: November 17, 2010
Updated: November 17, 2010
Description: From the Red Hat advisory: two input sanitization flaws were found in the Openswan client-side handling of Cisco gateway banners. A malicious or compromised VPN gateway could use these flaws to execute arbitrary code on the connecting Openswan client.
Alerts:
Mageia MGASA-2012-0300 openswan 2012-10-20
Red Hat RHSA-2010:0892-01 openswan 2010-11-16

Comments (none posted)

perl-CGI: multiple vulnerabilities

Package(s): perl-CGI
CVE #(s):
Created: November 16, 2010
Updated: November 17, 2010
Description: From the Mandriva advisory:

A new version of the CGI Perl module has been released to CPAN, which fixes several security bugs which directly affect Bugzilla (these two security bugs were first discovered as affecting Bugzilla, then identified as being bugs in CGI.pm itself).

Alerts:
Mandriva MDVSA-2010:237 perl-CGI 2010-11-16

Comments (none posted)

proftpd: code execution

Package(s): proftpd
CVE #(s): CVE-2010-4221
Created: November 11, 2010
Updated: December 24, 2010
Description:

From the proftpd bugzilla entry:

The flaw exists within the proftpd server component, which listens by default on TCP port 21. When reading user input, if a TELNET_IAC escape sequence is encountered, the process miscalculates a buffer length counter value, allowing a user-controlled copy of data to a stack buffer. A remote attacker can exploit this vulnerability to execute arbitrary code in the context of the proftpd process.

Alerts:
Gentoo 201309-15 proftpd 2013-09-24
Slackware SSA:2010-357-02 proftpd 2010-12-24
Fedora FEDORA-2010-17220 proftpd 2010-11-03
Mandriva MDVSA-2010:227 proftpd 2010-11-11
Fedora FEDORA-2010-17091 proftpd 2010-11-02
Fedora FEDORA-2010-17098 proftpd 2010-11-02

Comments (none posted)

systemtap: privilege escalation

Package(s): systemtap
CVE #(s): CVE-2010-4170
Created: November 17, 2010
Updated: November 23, 2010
Description: The staprun utility contains two vulnerabilities which can be exploited for privilege escalation by local users; see this advisory for (a little) more information.
Alerts:
Debian DSA-2348-1 systemtap 2011-11-17
CentOS CESA-2010:0895 systemtap 2010-11-17
Fedora FEDORA-2010-17868 systemtap 2010-11-18
Fedora FEDORA-2010-17873 systemtap 2010-11-18
Fedora FEDORA-2010-17865 systemtap 2010-11-18
CentOS CESA-2010:0894 systemtap 2010-11-17
Red Hat RHSA-2010:0894-01 systemtap 2010-11-17
Red Hat RHSA-2010:0895-01 systemtap 2010-11-17

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 2.6.37-rc2, released on November 15. "And it all looks the way I like to see my -rc2's: nothing really interesting there." It's mostly fixes, but there's also some residual big kernel lock removal work, the final removal of hard barrier support from the block layer, and a couple of new LED drivers. See the full changelog for the details.

A significant driver API change was merged after the -rc2 release: the SCSI midlayer queuecommand() function is now invoked without the host lock; the function's prototype has changed as well.
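In rough outline, the change looks like the sketch below; the SCSI types are stubbed here so the fragment stands alone, and the scsi_host.h in the patched tree is the authoritative reference for the exact signatures.

    /*
     * Sketch of the queuecommand() prototype change; struct names are
     * forward-declared so this compiles as a standalone fragment.
     */
    struct scsi_cmnd;
    struct Scsi_Host;

    struct queuecommand_sketch {
        /* before: invoked with the host lock held; completion is
         * signaled through the done() callback */
        int (*queuecommand_old)(struct scsi_cmnd *cmd,
                                void (*done)(struct scsi_cmnd *));

        /* after: invoked without the host lock; the host is now
         * passed explicitly instead */
        int (*queuecommand_new)(struct Scsi_Host *host,
                                struct scsi_cmnd *cmd);
    };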

Stable updates: there have been no stable updates released since October 29.

Comments (none posted)

Quotes of the week

We *all* want to build infrastructure; when other coders are forced to use it we rise up the kernel dominance hierarchy. Ook ook! (Every Unix app has its own config language for the same reason: the author distills the mental sweat of the users into some kind of Elixir of Coder Hubris).

Yet abstractions obfuscate: let's resist our primal urges to add another speed hump on the lengthening road to kernel expertise.

-- Rusty Russell

Finally, the whole "user space is more flexible" is just a lie. It simply doesn't end up being true. It will be _harder_ to configure some user-space daemon than it is to just set a flag in /sys or whatever. The "flexibility" tends to be more a flexibility to get things wrong than any actual advantage.
-- Linus Torvalds

Our real problem with tracing is lack of relevance, lack of utility, lack of punch-through analytical power.
-- Ingo Molnar

Comments (6 posted)

Coccinelle workshop: January 26, 2011

Julia Lawall has announced that a Coccinelle workshop will be held in Copenhagen on January 26, 2011. "I expect that the program will consist of some presentations about Coccinelle and associated tools, as well as some time for discussions and practical experiments." Anybody who is interested in attending should drop her a note.

Full Story (comments: none)

Announcing a new utility: 'trace'

A group of Linux tracing developers has announced the creation of a new top-level command, called simply "trace." "After years of efforts we have not succeeded in meeting (let alone exceeding) the utility of decades-old user-space tracing tools such as strace - except for a few new good tools such as PowerTop and LatencyTop. 'trace' is our shot at improving the situation: it aims at providing a simple to use and straightforward tracing tool based on the perf infrastructure and on the well-known perf profiling workflow." Obtaining the tool requires fetching a git tree for now.

Full Story (comments: 37)

Simple user-space tracing

One gets the sense that an extended tracing hacking session has been going on. Ingo Molnar has posted a simple patch to support user-space tracing. It is currently implemented as an extension to the prctl() system call which allows an application to inject tracing data into the kernel, where it will be properly mixed with kernel events. With some suitable user-space work (making DTrace tracepoints use this facility, for example), Linux may finally be on a path toward having proper integrated user- and kernel-space tracing.
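To give a feel for the shape of such an interface, here is a purely hypothetical sketch: the PR_USERSPACE_TRACE option number and the argument convention below are invented stand-ins, not names from Ingo's patch. The point is the general shape: the application hands the kernel a buffer of tracing data to be merged with the kernel's own event stream.

    #include <stdio.h>
    #include <string.h>
    #include <sys/prctl.h>

    #define PR_USERSPACE_TRACE 1000     /* hypothetical option number */

    static void trace_event(const char *msg)
    {
        /* inject msg into the kernel's trace stream (hypothetical) */
        if (prctl(PR_USERSPACE_TRACE, (unsigned long)msg,
                  (unsigned long)strlen(msg), 0, 0) != 0)
            perror("prctl");
    }

    int main(void)
    {
        trace_event("app: starting work");
        /* ... the work being traced ... */
        trace_event("app: done");
        return 0;
    }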

Comments (1 posted)

Punching holes in files

By Jonathan Corbet
November 17, 2010
The XFS and OCFS2 filesystems currently have the ability to "punch a hole" in a file - a portion of the file can be marked as unwanted and the associated storage released. Josef Bacik, noting that this capability may be added to other filesystems in the near future, came to the conclusion that the kernel should offer a standard interface for hole punching. The result is an extension to the fallocate() system call adding that ability.

In particular, this patch adds a new flag (FALLOC_FL_PUNCH_HOLE) which is recognized by the system call. If the underlying filesystem is able to perform the operation, the indicated range of data will be removed from the file; otherwise ENOTSUPP will be returned. The current implementation will not change the size of the file; if the final blocks of the file are "punched" out, the file will retain the same length. There has been some discussion of whether changing the size of the file should be supported, but the consensus seems to be that, for now, changing the file size would create more problems than it would solve.
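A minimal caller might look like the sketch below; the FALLOC_FL_PUNCH_HOLE value is taken from the posted patch, and the exact flag semantics could still change before the extension is merged.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #ifndef FALLOC_FL_PUNCH_HOLE
    #define FALLOC_FL_PUNCH_HOLE 0x02   /* from the proposed patch */
    #endif

    int main(int argc, char **argv)
    {
        int fd;

        if (argc != 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        fd = open(argv[1], O_WRONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* release 1MB of storage starting 4KB into the file; the file
         * size is unchanged and reads of the hole return zeroes */
        if (fallocate(fd, FALLOC_FL_PUNCH_HOLE, 4096, 1024 * 1024) != 0)
            perror("fallocate");        /* ENOTSUPP if unsupported */
        close(fd);
        return 0;
    }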

Comments (18 posted)

Kernel development news

TTY-based group scheduling

By Jonathan Corbet
November 17, 2010
As long as we have desktop systems, there will almost certainly be concerns about desktop interactivity. Many complex schemes for improving interactivity have come and gone over the years; most of them seem to leave at least a subset of users unsatisfied. Miracle cures are hard to come by, but it seems that a recent patch has come close, at least for some users. Interestingly, it is a conceptually simple solution that may not need to be in the kernel at all.

The core idea behind the completely fair scheduler is its complete fairness: if there are N processes competing for the CPU, each with equal priority, then each will get 1/N of the available CPU time. This policy replaced the rather complicated "interactivity" heuristics found in the O(1) scheduler; it yields better desktop response in most situations. There are places where this approach falls down, though. If a user is running ten instances of the compiler with make -j 10 along with one video playback application, each process will get a "fair" 9% of the CPU. That 9% may not be enough to provide the video experience that the user was hoping for. So it is not surprising that many users see "fairness" differently; wouldn't it be nice if the compilation job as a whole got 50%, while the video application got the other half?

The kernel has been able to implement that kind of fairness for years through a feature known as group scheduling. A set of processes placed within a group will each get a fair share of the CPU time allocated to the group as a whole, but groups will, themselves, compete for a fair share of the CPU. So, if the video player were to be placed in one group and the compilation in another, each group would get half of the available processor time. The various processes doing the compilation would then get a fair share of their group's half; they will compete with each other, but not with the video player. This arrangement will ensure that the video player gets enough CPU time to keep up with the stream and any interactivity requirements.

Groups are thus a nice feature, but they have not seen heavy use since they were merged for the 2.6.24 release. The reasons for that are clear: groups require administrative work and root privileges to set up; most users do not know how to tweak the knobs and would really rather not learn. What has been missing all these years is a way to make group scheduling "just work" for ordinary users. That is the goal of Mike Galbraith's per-TTY task groups patch.

In short, this patch automatically creates a group attached to each TTY in the system. All processes with a given TTY as their controlling terminal will be placed in the appropriate group; the group scheduling code can then share time between groups of processes as determined by their controlling terminals. A compilation job is typically started by typing "make" in a terminal emulator window; that job will have a different controlling TTY than the video player, which may not have a controlling terminal at all. So the end result is that per-TTY grouping automatically separates tasks run in terminals from those run via the window system.

This behavior makes Linus happy; Linus, after all, is just the sort of person who might try to sneak in a quick video while waiting for a highly-parallel kernel compilation. He said:

So I think this is firmly one of those "real improvement" patches. Good job. Group scheduling goes from "useful for some specific server loads" to "that's a killer feature".

Others have also reported significant improvements in desktop response, so this feature looks like one which has a better-than-average chance of getting into the mainline in the next merge window. There are, however, a few voices of dissent, most of whom think that the TTY is the wrong marker to use when placing processes in groups.

Most outspoken - as he often is - is Lennart Poettering, who asserted that "Binding something like this to TTYs is just backwards"; he would rather see something which is based on sessions. And, he said, all of this could better be done in user space. Linus was, to put it politely, unimpressed, but Lennart came back with a few lines of bash scripting which achieves the same result as Mike's patch - with no kernel patching required at all. It turns out that working with control groups is not necessarily that hard.
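Lennart's snippet was a few lines of bash run from a shell startup file; the same trick rendered in C looks roughly like the sketch below. The /sys/fs/cgroup/cpu mount point is an assumption here; where (and whether) the cpu controller is mounted varies from system to system.

    /*
     * Rough C rendering of the user-space approach: create one control
     * group per shell session and move the shell into it.
     */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        char dir[128], tasks[160];
        FILE *f;
        pid_t shell = getppid();        /* the invoking shell */

        /* one group per session, named after the shell's PID */
        snprintf(dir, sizeof(dir), "/sys/fs/cgroup/cpu/%d", (int)shell);
        snprintf(tasks, sizeof(tasks), "%s/tasks", dir);

        if (mkdir(dir, 0700) != 0)
            perror("mkdir");            /* may already exist; carry on */

        f = fopen(tasks, "w");
        if (f == NULL) {
            perror("fopen");
            return 1;
        }
        /* move the shell into the group; its children inherit it, so a
         * whole "make -j 10" competes as a single unit */
        fprintf(f, "%d\n", (int)shell);
        fclose(f);
        return 0;
    }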

Linus, however, still likes the kernel version, mainly because it can be made to "just work" with no user intervention required at all:

Put another way: if we find a better way to do something, we should _not_ say "well, if users want it, they can do this <technical thing here>". If it really is a better way to do something, we should just do it. Requiring user setup is _not_ a feature.

In other words, an improvement that just comes with a new kernel is likely to be available to more users than something which requires each user to make a (one-time) manual change.

Lennart isn't buying it. A real user-space solution, he says, would not come in the form of a requirement that users edit their .bashrc files; it, too, would be in a form that "just works." It should come as little surprise that the form he envisions is systemd; it seems that future plans involve systemd taking over session management, at which time per-session group scheduling will be easy to achieve. He believes that this solution will be more flexible; it will be able to group processes in ways which make more sense for "normal desktop users" than TTY-based grouping. It also will not require a kernel upgrade to take effect.

Another idea which has been raised is to add a "run in separate group" option to desktop application launchers, giving users an easy way to control how the partitioning is done.

Linus seems to be holding his line on the kernel version of the patch:

Anyway, I find it depressing that now that this is solved, people come out of the woodwork and say "hey you could do this". Where were you guys a year ago or more?

Tough. I found out that I can solve it using cgroups, I asked people to comment and help, and I think the kernel approach is wonderful and _way_ simpler than the scripts I've seen. Yes, I'm biased ("kernels are easy - user space maintenance is a big pain").

The next merge window is not due until January, though; that is a fair amount of time for people to demonstrate other approaches. If a solution based in user space turns out to be more flexible and effective in the long run, it may yet prevail. That is especially true because merging Mike's patch does not in any way inhibit user-space solutions; if a systemd-based approach shows better results, that may be what the distributors decide to enable. One way or the other, it seems like better interactive response is coming in the near future.

Comments (41 posted)

The media controller subsystem

By Jonathan Corbet
November 16, 2010
Over the course of the last decade, video acquisition hardware has evolved from relatively rare, bulky, external devices to being a standard feature in a large variety of gadgets. Increasingly, chipsets intended for embedded use have video support as a standard feature. This support is becoming more complex; contemporary video devices are not just frame grabbers anymore. That complexity is revealing limitations in the kernel's device model, prompting the proposal of a new "media controller" abstraction. This article will provide an overview and mild critical review of this new subsystem.

Video acquisition devices have never been entirely simple. Even a minimal camera device will usually be a composite of at least three distinct devices: a sensor, a DMA bridge to move frames between the sensor and main memory, and an I2C bus dedicated to controlling the sensor. Most devices coming onto the market now are more sophisticated than that. For example, the integrated controller in current VIA chipsets (still a very simple device) adds a "high-quality video" (HQV) unit which can perform image rotation and format conversions; that unit can be configured into or out of the processing pipeline depending on the application's needs. For a more complex example, consider the OMAP 3430, which is found in N900 phones; it has multiple video inputs, a white balance processor, a lens shading compensation processor, a resizer, and more.

Each of these components can be thought of as a separate device which can be powered up or down independently, and which, in some cases, can be configured in or out at any given time. The current V4L2 system wasn't designed with this kind of device structure in mind, and neither was the current Linux device model. An added problem is that these devices can be tied with devices managed by other subsystems - audio devices in particular - making it hard for applications to grasp the whole picture. The media controller is an attempt to rectify that situation.

The most recent version of the media controller patch was posted by Laurent Pinchart back in September; if all goes according to plan, it will be merged for 2.6.38. The patch creates a new media_device type which has the responsibility of managing the various components which make up a media-related device. These components are called "entities", and they can take many forms. Sensors, DMA engines, video processing units, focus controllers, audio devices, and more are all considered to be "entities" in this scheme.

Most entities will have at least one "pad," being a logical connection point where data can flow into or out of the device. "Data" in this sense can be multimedia data, but it might also be a control stream. Pads are exclusively input ("sink") or output ("source") ports, and an entity can have an arbitrary number of each. The final piece is called a "link"; it is a directional connection from a source pad to a sink. Links are created by the media device driver, but they can, in some cases, be enabled or disabled from user space.

Using this scheme, the simple VIA device described above could be represented with three entities and three links:

[Media controller]

The "sensor" entity has a single source pad which can be connected, via links, to the HQV unit or directly to the DMA controller. Only one of those paths can be active at once. The HQV unit has two pads - one sink, one source - allowing it to be slotted into the video pipeline if need be. The DMA controller has a single sink pad.

As an aside: entities also have a "group" number assigned to them; groups are intended to indicate hardware which is meant to function together. All of the units described above would probably be placed into the same group by the driver. If there were a microphone attached to the camera, then the associated audio entity would also be placed in the same group. This mechanism is intended to make it easier for applications to associate related devices with each other.

On the application side, there is a device (probably /dev/media0 or some such) which can be opened to gain access to this device. From there, the interface looks very much like the rest of V4L2 - lots of ioctl() calls to discover what is available and configure it; a sketch of their use follows the list. These calls include:

  • MEDIA_IOC_DEVICE_INFO to get overall information about the device: driver name, device model, etc.

  • MEDIA_IOC_ENUM_ENTITIES is used to iterate through all of the entities contained within the device. Information returned includes an ID number, a coarse entity type (e.g. V4L or ALSA), a subtype (few of these are defined in the patch; "sensor" is one of them), the group ID, the device number, and the numbers of pads and links.

  • MEDIA_IOC_ENUM_LINKS iterates through all of the links attached to source pads on a given entity. Thus, it is only possible to discover the outbound links from any entity; obtaining the whole graph requires iterating through all entities.

  • MEDIA_IOC_SETUP_LINK changes the properties of a specific link; in particular, it can enable or disable the link (though links can be marked "immutable" by the driver). Enabling a link will have the side effect of powering up all components reachable via that link, while disabling the last link to an entity will cause that entity to be powered down. Thus, changing the status of a link affects both the data path and the power configuration of a device.
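As a rough illustration, an application might walk the entity graph as below. The header, the media_entity_desc fields, and the MEDIA_ENT_ID_FLAG_NEXT iteration convention are all taken from the posted patch and may change before merging, so this builds only against a patched kernel tree.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/media.h>    /* from the media controller patch set */

    int main(void)
    {
        struct media_entity_desc entity;
        int fd = open("/dev/media0", O_RDONLY);

        if (fd < 0) {
            perror("open");
            return 1;
        }
        memset(&entity, 0, sizeof(entity));
        /* the "next" flag asks for the entity following entity.id */
        entity.id = MEDIA_ENT_ID_FLAG_NEXT;
        while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity) == 0) {
            printf("entity %u: %s (%u pads, %u links)\n",
                   entity.id, entity.name, entity.pads, entity.links);
            entity.id |= MEDIA_ENT_ID_FLAG_NEXT;
        }
        return 0;
    }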

Thus far, there have been no applications posted which actually use this framework (though a gstreamer source element is in the works). One can certainly see the utility of being able to discover and modify the configuration of a complex media device in this manner. But, at the Linux Plumbers Conference, your editor heard some concerns that the complexity of this interface could prove daunting to application developers. An application which is intended to work with a specific device (the camera application on a mobile handset, say) can be written with a relatively high level of awareness of that device and make good use of this interface. Writing an application which can make full use of any device - without requiring the developer to know about specific hardware - could be more challenging.

One other concern raised at LPC was that this functionality should really be exported via sysfs rather than through an ioctl()-based API. The information contained here would fit well within a sysfs hierarchy, with links represented by symbolic links in the filesystem. Given that the configuration interface (in its current form) changes a single bit at a time, there is no need for the sort of transactional functionality that can make ioctl() preferable to sysfs. On the other hand, V4L2 applications are already a big mass of ioctl() calls; the media controller API will be a natural fit while rooting through sysfs would be a new experience for V4L2 developers.

Something else is worth thinking about here: the problem may be bigger than just media devices. More complex devices are the norm, and it is becoming clear that the kernel's hierarchical device model is not up to the task of representing the structure of our systems. Back in 2009, Rafael Wysocki proposed a mechanism for representing power-management dependencies with explicit links. The media controller mechanism looks quite similar; it is even being used for power management purposes. That suggests that we should be looking for a data structure which can represent device connections and dependencies across the kernel, not just in one subsystem. Otherwise we run the risk of creating duplicated structures and multiple user-space ABIs, all of which must be supported indefinitely.

The media controller subsystem is aimed at solving a real problem, and it is certainly a credible solution. It is also a significant new user-space ABI, one which does not necessarily conform to current ideas of how interfaces should be done. The work done here may also be applicable well beyond the V4L2 and ALSA subsystems, but any attempt at a bigger-picture solution should probably be made before the code is merged and the ABI is set in stone. All of this suggests that the media controller code could benefit from review outside of the V4L mailing list, which tends to be inhabited by relatively focused developers.

(Thanks to Andy Walls, Hans Verkuil, and Laurent Pinchart for their comments on this article).

Comments (1 posted)

Making attacks a little harder

By Jonathan Corbet
November 17, 2010
Regardless of whether one believes that the security of the Linux kernel is as good as it should be, it is hard to disagree with the idea that it could be made more secure. For some years, it has seemed like much of the security-related work on the kernel has been directed toward the creation of new access control mechanisms. But access control is only so helpful if the kernel itself is vulnerable, allowing any access control system to be bypassed. Recently we have begun to see more work aimed at making small improvements to the security of the kernel itself; this article will survey some of that work.

One key to hardening a system against attackers is to make it harder for them to obtain information which could be used to compromise the kernel. So it is not surprising to see an increase in patches which lock down access to information. It turns out, though, that there is not universal agreement on the value of restricting any kind of information about the running system.

Marcus Meissner started things off with a simple patch removing world-read access from /proc/kallsyms. It is difficult to subvert the kernel without knowledge of how the kernel's memory is laid out, so, Marcus thought, there is no point in providing that information to anybody who asks. The problem with this change, as Ingo Molnar pointed out, is that there are many sources of that information. For example, the System.map file shipped by most distributors also has the locations of all symbols built into the kernel.

Now, one can certainly read-protect System.map as well, but that may not be particularly helpful. Most systems out there are running distributor-supplied kernels, and the packages for those kernels are widely available. So an attacker does not need to read /proc/kallsyms or System.map if the target system is running a stock kernel; they need only dig up a package file containing the needed information. For this reason, Ingo suggested that a complete solution would require restricting access to the running kernel version as well. Removing all of the globally-readable kernel version information from a system would be hard, but, if it could be done, attackers would no longer have easy access to the locations of functions and data structures within the kernel.

Suffice it to say that this idea was not received with universal acclaim. Critics claim that there are plenty of ways to determine which kernel version is running; hiding version information would just make life harder for legitimate applications (which may need that information to know which features are available) without appreciably slowing attackers. Ingo talked some about instrumenting the kernel to detect an attacker's attempts to determine the running kernel version, thus giving an early alarm, but this idea did not seem to gain a great deal of traction. So, chances are, kernel versions will not be hidden in any near-future release (the /proc/kallsyms patch has been merged for 2.6.37, though).

Dan Rosenberg has a similar concern: when the kernel exposes pointer values to user space, it gives information to potential attackers. These values can be found in a number of places, including the system log and numerous places in /proc. Keeping pointer values out of the system log seems like a hopeless task, but it is possible to better restrict access to that log. To that end, Dan has posted a patch adding a new sysctl knob controlling access to the syslog() system call. Later versions of the patch include a configuration option for the default setting of this knob; with that, distributors can make the system log off limits for unprivileged users starting at boot.
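What the knob gates is the syslog(2) system call, which tools like dmesg reach through the glibc klogctl() wrapper. A minimal reader looks like the sketch below; with the restriction enabled, the call should fail with EPERM for unprivileged users instead of handing out kernel addresses.

    #include <stdio.h>
    #include <sys/klog.h>

    #define SYSLOG_ACTION_READ_ALL 3    /* kernel's name for type 3 */

    int main(void)
    {
        static char buf[1 << 16];
        int n = klogctl(SYSLOG_ACTION_READ_ALL, buf, sizeof(buf));

        if (n < 0) {
            perror("klogctl"); /* EPERM when access is restricted */
            return 1;
        }
        fwrite(buf, 1, n, stdout);
        return 0;
    }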

Kernel addresses also show up in other places, though; for example, /proc/net/tcp contains the address of the sock structure associated with each open TCP connection. Dan worries about exposing the address of these structures, especially since many of them contain function pointers; if an attacker is somehow able to change the contents of kernel memory, this kind of address might facilitate the task of taking over the system. To raise the bar a bit, Dan posted a series of patches which replaces the pointer value with an integer value (often zero) if the process reading the associated /proc file is not suitably privileged.

Unlike the syslog patch, which has made it into the mainline, the /proc modification ran into some stiff opposition. It was described as "security theater," and developers worried that it would break applications which are legitimately using the pointer values. There were suggestions that, perhaps, pointer values could be hashed, or that a more general solution could be had by modifying the behavior of "%p" in format strings. We might see the "%p" patch at some point, but Dan has given up on the /proc patches for now, saying "It's clear that there's too much resistance to this effort for it to ever succeed, so I'm ceasing attempts to get this patch series through."

Making it difficult to find structures containing function pointers may make life harder for an attacker, but it still seems better to block the modification of those structures whenever possible, regardless of who knows their location. To that end, Kees Cook has announced his intent to try to lock down more of the kernel:

The proposal is simple: as much of the kernel should be read-only as possible, most especially function pointers and other execution control points, which are the easiest target to exploit when an arbitrary kernel memory write becomes available to an attacker.

Getting various structures marked const is an obvious starting point; "constification" patches have been produced by many developers over the years, but many structures still can be modified at run time. Beyond that, though, Kees would like to have working read-only and no-execute memory in loadable modules, "set once" pointers for things like the security module operations vector, and more; many of the changes he would like to see merged can currently be found in the grsecurity tree. It could be a long process, but Kees says that it would be a security win for everybody and that he would appreciate cooperation from subsystem maintainers.
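The basic idiom is simple enough. In this simplified, kernel-style example (the structure and function names are made up for illustration), declaring the operations vector const lets the compiler place it in a read-only section, out of reach of a stray kernel memory write:

    struct example_ops {
        int  (*open)(void *priv);
        void (*release)(void *priv);
    };

    static int ex_open(void *priv)     { (void)priv; return 0; }
    static void ex_release(void *priv) { (void)priv; }

    /* const: placed in .rodata rather than in writable data, so the
     * function pointers cannot be redirected at run time */
    static const struct example_ops ex_ops = {
        .open    = ex_open,
        .release = ex_release,
    };

    int main(void)
    {
        return ex_ops.open((void *)0);
    }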

Not all kernel vulnerabilities are in the core code; many, instead, are found in loadable modules. An attacker wishing to exploit a vulnerability in a module must first ensure that the module is loaded. Module loading is a privileged operation, but there are a number of ways in which an unprivileged user can cause the kernel to load a module anyway; the kernel normally goes out of its way to autoload modules on demand so that things "just work." It seems clear that a kernel which never allows users to trigger the loading of modules is less likely to be affected by any vulnerability which is found in a loadable module.

Dan has posted another patch (again, based on work done in the grsecurity tree) which makes the demand loading of modules harder. It replaces the existing modules_disabled sysctl knob with a more flexible version; if it is set to one, only root can trigger the loading of modules. Setting it to two disables module loading entirely until the next boot. The changing of the existing ABI was not well received, so a future version of the patch will keep the existing switch and its semantics. Beyond that, doubts have been expressed about whether administrators will enable this option, since demand loading is a convenient feature.

Hardening the kernel to make the exploiting of vulnerabilities more difficult seems like a good thing, but it would also be nice if we could find those vulnerabilities before anybody even tries to exploit them. One technique which can help in this regard is "fuzzing," the process of passing random values into system calls and looking for unexpected behavior. Some attackers certainly have good fuzzing tools, but the development community seems to be rather less well equipped. So it is good to see some recent work by Dave Jones aimed at the creation of a more intelligent fuzzer. It turns out that, by making system call parameters a bit less fuzzy, the tool is more likely to get past the trivial checks and turn up real problems; the improved fuzzer has already turned up one real bug.
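For contrast with a smarter tool, the crudest possible fuzzer just throws random numbers at random system calls, as in the sketch below (an illustration only, not Dave Jones's tool; the syscall count is approximate, and this should never be run on a machine you care about):

    #define _GNU_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/syscall.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        int i;

        srandom(time(NULL));
        for (i = 0; i < 1000; i++) {
            long nr = random() % 340;   /* rough syscall count */
            long a = random(), b = random(), c = random();
            long ret = syscall(nr, a, b, c);

            printf("syscall %ld(%#lx, %#lx, %#lx) = %ld\n",
                   nr, a, b, c, ret);
        }
        return 0;
    }

A smarter fuzzer shapes each argument to the syscall being exercised (valid file descriptors, plausible pointers, in-range lengths) so that calls survive the trivial sanity checks and reach deeper code paths; that is the improvement described above.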

The value of all this work may not be clear to everybody, and it probably will not all make it into the mainline kernel. But it does seem that we are seeing the beginning of a more focused effort to improve the security of the kernel and to make it harder to exploit the inevitable bugs. A more secure kernel may make it harder to gain true ownership of our gadgets in the future, but it still is generally a good thing.

Comments (7 posted)

Patches and updates

Kernel trees

Architecture-specific

Build system

Core kernel code

Development tools

Device drivers

Documentation

Filesystems and block I/O

Memory management

Security-related

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

A high-level search interface for Debian packages

November 17, 2010

This article was contributed by Raphaël Hertzog

The Debian archive is known to be one of the largest software collections available in the free software world. With more than 16,000 source packages and 30,000 binary packages, users sometimes have trouble finding packages that are relevant to them. Debian developer Enrico Zini has been working on infrastructure to solve this problem. During the recent mini-DebConf in Paris, Enrico gave a talk presenting what he has been working on over the last few years, which "hasn't gotten yet the attention it deserves".

Enrico is known in the Debian community for the introduction of debtags, a system used to classify all packages using facets. Each facet describes a specific kind of property: the type of user interface, the programming language it's written in, the type of document manipulated, the purpose of the software, and so on. His most recent work builds on that. It is available in Debian and Ubuntu in the apt-xapian-index package. Its purpose is to allow advanced queries over the database of available packages.

Users of apt-xapian-index

He started by presenting some early users of the infrastructure. The most widely known is Ubuntu's software center. Its search feature provides results almost instantly thanks to apt-xapian-index. But it is a very simple interface that doesn't exploit many of the advanced features provided by apt-xapian-index.

[GoPlay!]

Another early adopter, making use of some more advanced features, is GoPlay!. It's a graphical user interface for finding games. It makes use of debtags to classify games so that you can browse, for example, all 3D action/arcade games related to cars. GoPlay! has even been extended to be a more generic debtags-based package browser, and the package now also provides GoLearn!, GoAdmin!, GoNet!, GoOffice!, GoSafe!, and GoWeb!.

Fuss-launcher is an application launcher and not a package browser, but by using apt-xapian-index, it's able to reuse information provided at the package level to make it easier to find installed applications. Package descriptions tend to be more verbose than those embedded in .desktop files. Enrico also showed another nice feature to the audience: if you drag a document onto its window, it will show you a list of applications that can open it.

Last but not least, apt-xapian-index provides a command-line search tool that is vastly superior to the traditional apt-cache search: axi-cache search ("axi" stands for apt-xapian-index). Enrico compared the output of a search on the letter "r". While apt-cache spits out an endless list of packages containing this letter somewhere in the description, axi-cache only listed packages related to GNU R. He also demonstrated the contextual tab completion, which makes it easy to use debtags and to refine a search. Once you have typed a first keyword, the tab completion for the second one only contains keywords or debtags that are actually able to provide more restrictive results. Advanced queries with logical operations (AND, OR, NOT, XOR) are also supported.

Features of the backend

Enrico then dived into the internals. Xapian's search engine is at the root of this infrastructure. He likes it because it's a simple library (i.e. no daemon) and it has nice Python bindings. While apt-xapian-index's core work is to index the descriptions of all the packages, it actually stores much more and can be easily extended with plugins (written in Python).

For instance, the information stored encompasses:

  • words appearing in the description of the packages (including the translated descriptions if the user uses a non-English locale);

  • their origin;

  • their section;

  • their size and installed size;

  • the time they have been first seen;

  • icons, categories, descriptions from the .desktop files they contain (through app-install-data);

  • aliases for names of some popular applications that are not available on Linux (for instance "excel" maps to the debtag office::spreadsheet).

He already has plans to store more: adding popularity contest data (see wishlist bugs #602180 and #602182) will make it possible to sort query results in a useful way. The most widely used applications are good choices when it comes to community support, and they are likely of better quality due to the larger user base. Adding timestamps of the last installation/upgrade/removal will make it easier to pinpoint a regression to a specific package update.

The generated index is world-readable and can be used from any application provided it can use the Xapian library—which is written in C++ but has bindings for Perl, Python, PHP, Java, Tcl, C#, and Ruby.

Call for experimentation

Enrico believes that many useful applications have yet to be invented on top of apt-xapian-index's features. He's calling for experimentation and asking for new ideas. The only practical limit that he has encountered is the size of the index: currently it varies between 50MB (Debian unstable without translations) and 70MB (Debian stable/testing/unstable with one translation). He would like it not to grow beyond 100MB, since it's installed by default (due to aptitude recommending it), and he's not comfortable with the idea of using more than 20% of the disk footprint of a basic install just for this service. That's why the index was configured not to store the positions of terms: it's thus not possible to find packages whose description contains the word "statistical" immediately followed by the word "computing". You can, however, find those which have both terms somewhere in their description.

Enrico wondered if apt-xapian-index offers too much freedom. That could explain why few people have experimented with it despite his numerous blog posts with code samples and information on how to get started using it. But it's not difficult to imagine use cases for this data. It could be used to extend tools like rc-alert or wnpp-alert, for example, which list the packages installed on the machine that are looking for some help. With apt-xapian-index, it would be possible to restrict the results to the set of packages written in a specific programming language or for a particular desktop environment.

The more likely explanation is that too few people know about the tool. There are many more itches to scratch where apt-xapian-index's features could be very useful, and my guess is that Enrico's wishes will eventually come true.

Comments (9 posted)

Brief items

Distribution quotes of the week

If I go on record with my official opinion that you sir are indeed crazy, does that qualify me for a reimbursement check from Red Hat corporate for services rendered as an independent contractor?
-- jef"would love to get paid just for having an opinion"spaleta

Second, I believe that the 6-months release cycle everyone is doing right now is crazy. It gives just some weeks of development time between one release and another, and limited time to freeze and fix bugs, so the developers must run against the clock and depend on the time after the release to fix remaining or not that throughly tested issues with updates.
-- Eugeni Dodonov

Some folks at LPC suggested we should switch from grub to syslinux rather than grub2. Meego uses syslinux. I have little clue how both compare, but maybe it's worth considering syslinux given that we already use it for the cd booting and maybe we should consolidate our options and use syslinux everywhere?
-- Lennart Poettering

Comments (none posted)

openSUSE Medical Version 0.0.6 released

There is a new release of the openSUSE Medical Version available. It includes a long list of specialized software of interest to the medical industry. "TEMPO is open source software for 3D visualization of brain electrical activity. TEMPO accepts EEG file in standard EDF format and creates animated sequence of topographic maps. Topographic maps are generated over 3D head model and user is able to navigate around head and examine maps from different viewpoints."

Comments (none posted)

openSUSE 11.4 Milestone 3

The third milestone in the openSUSE 11.4 development series is available for testing. M3 includes LibreOffice, and systemd is available for testing.

Full Story (comments: none)

Ubuntu Developer Summit proceedings

A terse set of notes from the Ubuntu Developer Summit, recently concluded in Orlando, has been posted. "This page summarizes many of the outcomes of the event, and for each track there is a link to further detailed notes. Please note: these are proceedings and plans, and some of these things may not get completed as planned for whatever reason. As such, please read this list as a set of goals, and not a promise of what Ubuntu 11.04 will include."

Comments (none posted)

Distribution News

Debian GNU/Linux

Squeeze Release Update - Upgrades, deep freeze info, BSPs

The release team has an update on the release of Debian 6.0 "squeeze". Topics include release notes, freeze status, bug squashing parties, and current blocker bugs.

Full Story (comments: none)

bits from the DPL: sprints, events, delegations, assets

Debian Project Leader Stefano Zacchiroli has a few bits about what he's been up to recently. Topics include the squeeze release, sprints, events, and delegations.

Full Story (comments: none)

Bits from the Debian Multimedia Maintainers

The Debian Multimedia Maintainers have been busy getting multimedia applications ready for squeeze. Click below to see what's in and what's out.

Full Story (comments: none)

Debian linux-2.6 Paris meeting

The Debian kernel team had a very productive meeting in Paris recently. Click below for a summary.

Full Story (comments: none)

Debian Women IRC Training Sessions

The Debian Women project has announced a series of training sessions which will be held on IRC by experienced community members. "The main goal of this initiative is to encourage more people, and specifically women, to contribute to Debian while introducing them to different aspects of the Debian Project. Topics will span over a wide range of subjects related to daily Debian maintenance efforts as well as advanced tasks."

Full Story (comments: none)

Fedora

A "clarification" from Fedora on the SQLNinja decision

Fedora project leader Jared Smith has sent out a message intending to clarify the Fedora Board's decision to exclude SQLNinja. "Considering these questions against the other security tools that were commonly mentioned in feedback I received (such as tcpdump), it is pretty easy to see how they're different than SQLNinja. I should also note that much of the objections to our decision were against blocking security tools in general, not the SQLNinja package specifically. (In my own limited investigation, I have yet to find a single security professional who was actively using the tool before our decision.)" The question will apparently be revisited at some future time.

Full Story (comments: 10)

Welcoming New Fedora Program Manager Robyn Bergeron

John Poelstra welcomes Robyn Bergeron as the new Fedora Program Manager. "Through the end of 2010 and a little bit beyond, I will be working along side Robyn Bergeron to transition my official Fedora responsibilities to her. This will include getting the Fedora 15 team schedules into shape, feature wrangling, bugzilla maintenance, and any number of other things. Robyn and I are committed to making this transition as smooth, complete and timely as we can, and expect the transition to be completed before the Fedora 15 feature submission deadline."

Full Story (comments: none)

Fedora 12 end of life

The Fedora Project has sent out a reminder that Fedora 12 will reach end of life on December 2, 2010.

Full Story (comments: none)

Fedora Board Recap 2010-11-15

Click below for a recap of the November 15 meeting of the Fedora board. Topics include a draft charter for a Community Working Group, elections, and more. You can also see Máirín Duffy's summary of the meeting.

Full Story (comments: none)

Mandriva Linux

Next Mandriva release dates and schedule

Eugeni Dodonov looks at the schedule for Mandriva 2010.2 (expected December 22) and Mandriva 2011 (expected May 30, 2011). "Starting with Mandriva 2011 release, the release policy for Mandriva will change to 1 release per year. This will allow us to develop even greater releases, and - of course - will give us more time to test, validate and further improve the overall quality of the release."

Full Story (comments: 1)

Ubuntu family

Minutes from the Technical Board meeting, 2010-11-16

Click to see the minutes from the November 16 meeting of the Ubuntu technical board. Topics include KDE micro version, couchdb on lucid, and ARB exception proposal.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Red Hat Enterprise Linux 6 (The H)

The H has a review of RHEL 6. "Heaps of new features become apparent when comparing the RHEL 6 [2.6.32] kernel with the version 2.6.18 kernel of RHEL 5, although more than a few of them are already old hat in many other distributions. For instance, the Completely Fair Scheduler (CFS) highlighted by Red Hat has been part of the Linux kernel since version 2.6.23. The "tickless" kernel, which stops the timer interrupt from going off a hundred or a thousand times per second when a system is idle, is already well-tested. This trick reduces both the power consumption and the basic load of RHEL 6 systems that operate as virtualised guests, which frees up the host CPU for productive tasks."

Comments (11 posted)

Linux Mint 10 'Julia' Is Now Official (PCWorld)

PCWorld reviews Mint 10. "Launched in 2006, Linux Mint has quickly become the third most popular Linux distribution out there behind only Ubuntu and Fedora, and version 10 makes it easy to see why. Based on Ubuntu 10.10, or Maverick Meerkat, Julia offers numerous enhancements that put it at the forefront of usability."

Comments (none posted)

PCLinuxOS Releases a Slew of Quarterly Updates (Linux Journal)

Susan Linton takes a look at PCLinuxOS. "PCLinuxOS is a rolling release distribution, which means users can usually update through the package management rather than perform a fresh install every six months. But a few times a year developers release Quarterly Updates for new users or machines. Recently it was that time again when several varieties of PCLOS saw new releases."

Comments (none posted)

grml, the No-Frills Linux Rescue CD--USB (Linux Planet)

Linux Planet takes a look at grml. "You don't lack for options with grml. The boot menu not only offers the standard options to get into grml, but a FreeDOS option, a minimal BSD (MirOS bsd4grml), PXE boot, hardware detection tool, and Memtest. You also can choose to load grml entirely into RAM in case you need the CD-ROM for something, and it's faster. You can use it on a USB stick instead of a CD. There are several failsafe options if you have trouble booting grml due to incompatible hardware. In short - you have options." (LWN looked at grml back in April 2006.)

Comments (none posted)

Page editor: Rebecca Sobol

Development

The way to Wayland: Preparing for life after X

November 17, 2010

This article was contributed by Joe 'Zonker' Brockmeier.

One sure way to stir up Linux users and developers is to propose replacing a tried-and-true technology with an up-and-coming one, especially something as crucial as X. That is just what Mark Shuttleworth suggested might happen on Ubuntu, with Wayland taking X's place. The response to Shuttleworth's post, along with the comments and questions on development mailing lists since, shows that Wayland is not well understood in the larger Linux community. Moving to Wayland, though, isn't as far-fetched as one might initially think.

So what is it? Wayland is not, as it was initially reported, a "new X Server" for Linux. Wayland is the name for the protocol and the MIT-licensed implementation of a compositor that can run as a standalone display server, or under X as a client. Most importantly, Wayland (when running on its own) removes a few layers of complexity.

As explained in the Wayland architecture document, X runs on top of the Linux kernel and, when used with a compositor, in effect adds an extra layer between the kernel, the hardware, and the compositor. With X, windows and their contents are rendered to separate buffers, then "composited" together into the frame buffer by Compiz, KWin, or another compositing window manager. With Wayland, all of this happens in a single display server.

Wayland works directly on top of the kernel and lets clients handle their own rendering, without the intermediate layer; rendering is done directly through OpenGL or OpenGL ES. The extra layers that X imposes come at a cost in performance. As Shuttleworth wrote when tapping Wayland as the future for Ubuntu and Unity, "we don't believe X is setup to deliver the user experience we want, with super-smooth graphics and effects."
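To make the client/server relationship concrete, here is a minimal sketch of a Wayland client in C. It assumes the libwayland-client library and its wl_display_connect() entry point; the API is young and still moving, so treat the names as illustrative rather than stable. All the program does is open the connection that a real client would use to submit its rendered buffers to the compositor:

    /* Minimal Wayland client sketch: connect to the compositor and
     * disconnect. A real client would create a surface, render into a
     * buffer with OpenGL (ES), and hand that buffer to the server. */
    #include <stdio.h>
    #include <wayland-client.h>

    int main(void)
    {
        /* NULL selects the default display, normally named by the
         * WAYLAND_DISPLAY environment variable. */
        struct wl_display *display = wl_display_connect(NULL);

        if (display == NULL) {
            fprintf(stderr, "no Wayland compositor found\n");
            return 1;
        }
        printf("connected to the Wayland display server\n");
        wl_display_disconnect(display);
        return 0;
    }

With the library and its pkg-config file installed, something along the lines of "gcc hello-wayland.c $(pkg-config --cflags --libs wayland-client)" should build it.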

The initial reactions and discussion had more than a tinge of concern, much of it based on fairly breezy reports of Wayland as a replacement for X on Ubuntu. As Andrew Haley points out on the Fedora devel list, though, there's no immediate cause for alarm:

It looked like a bunch of kiddies who had never used remote X applications had decided we didn't need to do that anymore, and it was more important to get kewl features like smooth scrolling and rotating 3D whatnots. It seems that isn't true, and we don't need to worry. The lunatics have not, in fact, taken over the asylum.

Yes, the asylum remains in competent hands. Wayland is not a new idea; it started as a "secret" project by Kristian Høgsberg in 2008. Høgsberg's creation was outed by Phoronix, caused a brief wave of excitement in Linux circles, and then went back to being largely ignored by most of the Linux world.

But X folks have been thinking about Wayland, at least occasionally, for some time. Last year at the Linux Foundation Collaboration Summit, Keith Packard talked about turning "the graphics stack upside down" by moving device configuration out of X and into the kernel, which would pave the way for other systems like Wayland. At this year's Linux Plumbers Conference, Packard also hinted that a post-X era may be in the offing, and mentioned Wayland as a possible replacement, with X running as a client.

Why not simply extend X, yet again? It has been extended to add all sorts of features never envisioned when it was first developed. The FAQ instead describes Wayland as a way "of pushing X out of the hotpath between clients and the hardware and making it a compatibility option." X running as a client is a particularly important feature. As Adam Jackson points out on the Fedora devel mailing list, X applications need only be ported if they are to behave as native Wayland clients; otherwise they can run under Wayland within a nested X server "and you wouldn't ever know the difference." Note that Wayland can also run as an X client, which allows for development and testing during the transition.

It may be beneficial to look at Wayland as an opportunity rather than a potential problem. For example, while many games now run well on X, it is not particularly friendly for fullscreen 3D games. Høgsberg indicates that thought has already gone into the specific problems of fullscreen games and how to address problems like modesetting and handling the pointer.

Wayland is also poised to support GPU hotswapping, something that X does not currently support. As more hardware ships with more than one GPU as a power-saving measure, users will want Linux to support switching between the GPUs.

But we're not there yet. The big problem, of course, is that Wayland is not ready for prime time, or even for early morning between infomercials. Wayland may see an influx of contributors thanks to the attention it's getting, but there's a long way between vision and reality at the moment.

As Packard mentioned during his LPC talk, input is another problem for an alternative display system. Key mapping, accessibility features (using the keyboard for mouse movement, for instance), and the handling of more complex input devices like touchpads all need to be addressed.

Aside from Wayland's own general lack of readiness, it also lacks drivers. Nvidia has explicitly said it has no interest in Wayland, though Nouveau may be able to take up the slack there. Wayland can use the open source KMS drivers for ATI, Intel, and Nvidia hardware, but what about the new crop of video hardware coming with ARM-based devices? There we have a new set of video hardware without open source drivers or existing efforts to create them.

There's also the question of who's going to do the work to get Wayland ready. The work on Wayland up until now, and for the foreseeable future, has been done on Red Hat's and Intel's payroll: Høgsberg was a Red Hat employee when he first started Wayland, and is now working for Intel. Canonical has no resources currently assigned to work on Wayland. Canonical's Ted Gould has set up an import of Wayland's git tree into Launchpad to make it easier to build packages, but Gould says he's unaware of anyone on Canonical's payroll working directly on Wayland:

Most of our effort there is ensuring that the new stuff we are building (Unity, uTouch, etc.) is compatible with a post-X11 future. It seems like momentum is definitely switching in that direction with even Keith implying it at Plumber's.

Personally, my biggest worry with Wayland is graphics drivers, and I think that was partially what Mark's blog post was trying to help with. Establish a direction at a high level to let other companies know where we're going. I hope it's successful, otherwise the switch (which seems inevitable at this point) will be very painful.

Users who are itching to get their hands on bleeding-edge Wayland builds can look at the compile instructions, or add the "xorg crack pushers" PPAs for Ubuntu to install Wayland on Maverick (10.10) or Natty (11.04); breakage is quite likely. Developers interested in pitching in are welcome to do so: Wayland is part of freedesktop.org, and the git repository is open.
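On Ubuntu, the procedure amounts to something like the following sketch; the PPA path and package name here are placeholders rather than the real archive, so take the actual names from the instructions linked above:

    # Placeholder PPA path -- substitute the actual "xorg crack
    # pushers" archive named in the instructions:
    sudo add-apt-repository ppa:some-team/wayland
    sudo apt-get update
    sudo apt-get install wayland    # package name is also a guess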

But it will be some time before anyone needs to make the switch. With the renewed attention caused by Shuttleworth's post, Høgsberg has started working more actively on Wayland again; while he isn't quite going it solo, there are not many commits from other developers yet. Shuttleworth indicated that it would be a year before Ubuntu could seriously consider switching to Wayland. Fedora will probably package Wayland for F15, but when it will become the default is up in the air. Jackson says the "cabal" of Fedora graphics folks "don't even have a complete list of transition criteria yet, let alone a timeframe for switching the default." Replacing X has momentum, it would seem, but we are still a long way from making the switch.

Comments (47 posted)

Brief items

Quotes of the week

I'm not at all convinced that people and, more particularly, corporations, have really analysed the implications of web apps and cloud computing. When they do I don't think web apps will prove all that popular.
-- Harold Fuchs

Maybe months.
-- Shu Wang, Adobe, on when the Flash memcpy() bug might be fixed.

Comments (none posted)

Ten years of MPlayer

Initial MPlayer developer "A'rpi" has noted that November 11 is the tenth anniversary of the 0.01 release. One decade later, the project continues its asymptotic approach to version 1.0 and it clearly has a wide user base. Congratulations are in order.

Full Story (comments: 41)

notmuch 0.5 released

After a long slow period, development on the notmuch mail indexing system has picked up again; the 0.5 release is now available. "The major feature in notmuch 0.5 is the ability to automatically synchronize maildir flags, (so that if a mail file gets marked externally with the flag 'S' for 'seen' then the 'unread' tag in the notmuch database will be automatically removed). And of course, there are various fixes and improvements throughout."
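To make the new behavior concrete, here is roughly what the synchronization looks like from the command line, a sketch using the standard notmuch subcommands and assuming another mail client has just marked a message read (adding 'S' to its maildir file name):

    # Re-index; notmuch 0.5 notices the new 'S' flag on the renamed
    # maildir file and removes the 'unread' tag on its own:
    notmuch new

    # The message no longer turns up among unread mail:
    notmuch search tag:unread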

Full Story (comments: none)

Parrot 2.10.0 released

Version 2.10.0 of the Parrot virtual machine has been released. The code has been moved to GitHub, so work has been done to make various subsystems more Git-aware. There's also some new documentation on using Git to work on Parrot.
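Getting the source now means cloning from GitHub; assuming the repository lives at github.com/parrot/parrot (the announcement does not spell out the path), that is just:

    git clone git://github.com/parrot/parrot.git
    cd parrot
    git log --oneline -5    # a quick look at recent history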

Full Story (comments: none)

ActiveState announces the PyPM Index

ActiveState has announced an index for Python modules. It offers some useful features like searching by keywords or tags, finding all packages by a specific author, and more. "Although PyPM Index was originally intended to be a frontend to browse/search for packages available in the ActivePython PyPM repository, it evolved as a general purpose site to find information about almost any Python package."

Full Story (comments: 4)

systemd v12 released

The twelfth systemd release is out. New features include support for more services (to the point that a "normal distribution" can boot with no shell invocations), Ubuntu support, system-level passphrase support, a new "condition logic" mechanism, and more.
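As a sketch of what the condition logic looks like in practice, a unit file can declare preconditions and be silently skipped, rather than failed, when they do not hold. The directive name below is taken from systemd's documentation, but the unit itself is hypothetical:

    # /etc/systemd/system/example.service -- hypothetical unit
    [Unit]
    Description=Daemon that is pointless without its configuration
    # Skip (do not fail) this unit if the file is absent:
    ConditionPathExists=/etc/example.conf

    [Service]
    ExecStart=/usr/sbin/example-daemon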

Full Story (comments: 11)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

Libre Graphics Magazine 1.1

The Libre Graphics Magazine has launched. "This is one purpose of a Libre Graphics Magazine: to serve as a catalyst for discussion, to build a home for the users of Libre Graphics software, standards and methods." The first issue is now available; it's 64 pages of CC-licensed content available either in PDF format or as a for-purchase printed version. This issue includes articles on free fonts, free software in the classroom, a "unicorn tutorial," and more.

Comments (5 posted)

Page editor: Jonathan Corbet

Announcements

Commercial announcements

AMD joins MeeGo

AMD has announced that it is joining the MeeGo project. "We are glad to provide engineering resources to joint industry efforts like MeeGo and expect that this operating system will help drive our embedded plans and create expanded market opportunities for our forthcoming Accelerated Processing Units."

Comments (57 posted)

MeeGo portal and SDK

Intel has announced the MeeGo portal on the Intel AppUp center and associated storefronts. "On this site you will find resources & information to get started with MeeGo. Stay tuned and look for more resources from the AppUpDeveloper program on how to get apps on AppUp clients for various MeeGo devices."

MeeGo has also announced the beta release of MeeGo 1.1 SDK. "This release includes Qt Quick, the Javascript-inspired framework that enables rapid development of mobile applications. Although the heart of MeeGo SDK is Qt Creator, you can also use basic functionalities directly from the command line. MeeGo SDK is extensible and we are working on different plug-ins that introduce new tools and targets."

Comments (none posted)

Canonical Welcomes New Partners Following Latest Ubuntu 10.10 Release

Canonical has announced the signing of several partnerships. "Boxed Ice, Opsview, Riptano, Unoware, Vladster, Wavemaker and Zend all joined as Canonical Software Partners, their applications delivering solutions for business and personal use, and supported by commercial companies providing the highest level of service. Canonical Software Partners work closely with the development teams who deliver Ubuntu, and thus ensure that installation and operation are of the highest quality."

Full Story (comments: none)

Convirture and Canonical to Team Up to Provide Virtual Machine and Private Cloud Management

Convirture has announced a partnership with Canonical. "ConVirt 2.0 Open is now available in the Ubuntu Partner Repository. It provides a sophisticated set of tools which can also be used to manage virtual machines in a private cloud infrastructure."

Full Story (comments: none)

Articles of interest

Red Hat's Secret Patent Deal and the Fate of JBoss Developers (Gigaom)

Bruce Perens wonders about the outcome of a patent lawsuit against JBoss. "The suit in question - Software Tree LLC v. Red Hat, Inc. - claimed that JBoss, the well-known Java web software, infringed upon U.S. Patent No. 6163776, which essentially claims invention of the object-relational database paradigm. In that paradigm, an object in an object-oriented software language represents a database record, and the attributes of the object represent fields in the database, making it possible for programmers to access a database without writing any SQL. It's a common element in most web programming environments today."
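For readers unfamiliar with the paradigm being claimed, a toy illustration may help. The sketch below, in C with entirely invented names, shows the core idea: a struct stands in for a table row, and a generic mapper emits the SQL so the programmer never writes any by hand. No real ORM works this crudely; it is only meant to show the mapping the patent describes.

    /* Toy object-relational mapping: the struct represents a row in a
     * hypothetical "users" table, each field a column. */
    #include <stdio.h>

    struct user {
        int  id;        /* column "id"   */
        char name[64];  /* column "name" */
    };

    /* A trivial "mapper" that generates the INSERT statement, so the
     * caller never writes SQL directly. */
    static void orm_save(const struct user *u)
    {
        printf("INSERT INTO users (id, name) VALUES (%d, '%s');\n",
               u->id, u->name);
    }

    int main(void)
    {
        struct user alice = { 1, "alice" };
        orm_save(&alice);   /* no hand-written SQL at the call site */
        return 0;
    }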

Comments (131 posted)

Linaro group advances Linux on ARM with 10.11 release (ars technica)

Ars technica takes a look at the latest Linaro release. "Linaro, a nonprofit organization that aims to accelerate embedded Linux development for the ARM architecture, has announced its first software release. Version 10.11 of the group's software stack quietly debuted this week. The group appears to be attracting interest and making steady progress."

Comments (8 posted)

Navigating and Working in Scribus (Linux Journal)

Linux Journal attempts to demystify Scribus. "Designed for desktop publishing, Scribus is a specialty application, and not intended for general use the way that OpenOffice.org or LibreOffice is. Unlike a word processor, it is not intended primarily as a way to input text -- although you can use it for that -- but as a layout program for manipulating groups of objects for the printed page. With this orientation, it is perhaps closer to The GIMP or Inkscape, which can be disorienting to the general user."

Comments (8 posted)

New Books

Abrégé Dense Python 3.1 in English

The quick reference for Python 3.1 is available in English. "It is a one recto-verso page filled with language basics and advanced constructions."

Full Story (comments: none)

Data Analysis with Open Source Tools--New from O'Reilly

O'Reilly Media has released "Data Analysis with Open Source Tools", by Philipp K. Janert.

Full Story (comments: none)

Driving Technical Change--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "Driving Technical Change", by Terrence Ryan.

Full Story (comments: none)

Pragmatic Guide to Git--New from Pragmatic Bookshelf

Pragmatic Bookshelf has released "Pragmatic Guide to Git" by Travis Swicegood.

Full Story (comments: none)

Two New Python Books from O'Reilly Media

O'Reilly Media has released "Head First Python" by Paul Barry, and "Real World Instrumentation with Python" by John M. Hughes.

Full Story (comments: none)

Resources

Linux Foundation Monthly Newsletter: November 2010

The Linux Foundation newsletter for November covers the merger of the Linux Foundation and the Consumer Electronics Linux Forum, the announcement of the Yocto Project embedded Linux workgroup, China Mobile joining the Linux Foundation as a gold member, the release of a free self-assessment checklist as part of the Open Compliance Program, and a video of Linus Torvalds diving with sea life.

Full Story (comments: none)

Upcoming Events

Events: November 25, 2010 to January 24, 2011

The following event listing is taken from the LWN.net Calendar.

Date(s)           Event                                           Location
November 23-26    DeepSec                                         Vienna, Austria
November 24-26    Open Source Developers' Conference              Melbourne, Australia
November 27       Open Source Conference Shimane 2010             Shimane, Japan
November 27       12. LinuxDay 2010                               Dornbirn, Austria
November 29-30    European OpenSource & Free Software Law Event   Torino, Italy
December 4        London Perl Workshop 2010                       London, United Kingdom
December 6-8      PGDay Europe 2010                               Stuttgart, Germany
December 11       Open Source Conference Fukuoka 2010             Fukuoka, Japan
December 13-18    SciPy.in 2010                                   Hyderabad, India
December 15-17    FOSS.IN/2010                                    Bangalore, India
January 16-22     PyPy Leysin Winter Sprint                       Leysin, Switzerland
January 22        OrgCamp 2011                                    Paris, France

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2010, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds