
LWN.net Weekly Edition for November 23, 2011

That newfangled Journal thing

By Jonathan Corbet
November 20, 2011
Certain topics, it seems, are just meant to stress-test the LWN comment system; after the unveiling of "the Journal" on November 18, "the works of Lennart Poettering" will have to be added to that list. Lennart (along with Kay Sievers) has clearly hit a nerve with his latest proposal. There is probably enough in this concept to upset just about anybody, but the Journal deserves serious consideration, even if it represents a break from how logging has been handled in the past.

Start with this 2010 quote from Andrew Morton:

The kernel's whole approach to messaging is pretty haphazard and lame and sad. There have been various proposals to improve the usefulness and to rationally categorise things in way which are more useful to operators, but nothing seems to ever get over the line.

The point Lennart and Kay are trying to make is that "haphazard and lame and sad" doesn't just apply to kernel messaging; it describes the approach taken by Linux in general. Messages are human-readable strings chosen by developers, often without a whole lot of thought. These messages may have information useful to automated system monitoring tools, but their free-form nature makes them hard to find and the information contained therein hard to extract. A quick look at what goes on inside a tool like logwatch makes it clear that, whatever their benefits may be, free-form messages are not clean or easy for tools to deal with. The fact that developers can (and do) change messages at any time does not help matters either. Messages are stored in flat files that make them hard to access, and they are subject to manipulation and editing by attackers. There must be a better way.
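To make the problem concrete, here is a small, purely illustrative sketch of what a monitoring tool has to do today to extract data from a free-form disk-error message; the message text and the pattern are assumptions, and the pattern silently breaks the moment a developer rewords the message:

    import re

    # Hypothetical log line and pattern; tools like logwatch are built from
    # piles of expressions like this, each tied to one message's exact wording.
    line = "Nov 20 13:02:11 host kernel: end_request: I/O error, dev sda, sector 123456"
    match = re.search(r"I/O error, dev (\w+), sector (\d+)", line)
    if match:
        device, sector = match.group(1), int(match.group(2))
        print(f"disk error on {device} at sector {sector}")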

The Journal may or may not be a better way, but it does incorporate a number of interesting ideas - though some of them may be "interesting" in the Chinese curse sense. It is tied to systemd, which condemns it in some people's eyes even before the rest of the proposal can be considered. But hooking logging into systemd makes it easy to capture all messages from managed services and get them to their final resting place. It enables tricks like printing the last few messages from a service when querying its status. It also makes it possible for the Journal to verify the source of many messages and to add a bunch of additional metadata to those messages. From this point of view, at least, it looks like a more robust way of dealing with system logging.

When this subject was discussed at the 2011 Kernel Summit, one of the most controversial points was the request that developers attach 128-bit UUID values to most or all messages. Associating a unique identifier with each type of message does make some sense. Users (or their tools) can look up specific messages in a help database to learn what the message really means and how they should respond. Translation mechanisms can use the identifiers to localize messages. System monitoring utilities can immediately recognize relevant events and respond to them. But, while the value of the identifier is clear, there is still a lot of disagreement over its practicability. It is not clear that developers will bother to generate unique identifiers and to use them in a disciplined way.
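For illustration, a structured, ID-tagged log entry along the lines the proposal requests might look something like the sketch below; the field names (MESSAGE, MESSAGE_ID, and so on) and the identifier value are assumptions made for readability, not a defined interface:

    import uuid

    # A developer would generate this identifier once (e.g. with uuidgen) and
    # hard-code it for this particular type of message; the value here is made up.
    DISK_ERROR_MESSAGE_ID = uuid.UUID("6a3f9c22-41d7-4a8e-9b6e-2f0c5d8e7a11")

    entry = {
        "MESSAGE": "I/O error on /dev/sda, sector 123456",   # human-readable text, free to change
        "MESSAGE_ID": DISK_ERROR_MESSAGE_ID.hex,             # stable key for help databases, translations, tools
        "PRIORITY": 3,                                        # syslog "err"
        "DEVICE": "sda",
        "SECTOR": 123456,
    }
    print(entry)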

All complaints about UUIDs were quickly overshadowed by another issue once the full proposal was posted: one might charitably say that there is not, yet, a consensus around the proposed new logfile format. In a sense, that is unsurprising, since that format is deliberately undocumented and explicitly subject to change at any time. But it is clearly a binary format, and a reasonably complex one at that; each field of a log message is stored separately, and there is a mechanism for compressing out duplicated strings, for example. Binary-blob data is allowed as a part of a log message. The Journal's format is clearly a significant departure from the classic, all-text flat-file format used by Unix-like systems since before many of us were born.
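Since the actual on-disk format is deliberately undocumented, the following is only a toy sketch of the two ideas just mentioned, storing each field separately and "compressing out" strings that repeat across entries; it is not a description of the Journal's real data structures:

    # Toy model: every "FIELD=value" string is stored once in a table and
    # entries are just lists of references into it.
    string_table = {}   # text -> integer id
    strings = []        # id -> text
    entries = []        # each entry: list of string ids

    def intern(text):
        if text not in string_table:
            string_table[text] = len(strings)
            strings.append(text)
        return string_table[text]

    def append_entry(fields):
        entries.append([intern(f"{key}={value}") for key, value in fields.items()])

    append_entry({"_SYSTEMD_UNIT": "sshd.service", "PRIORITY": "6",
                  "MESSAGE": "Accepted publickey for root from 10.0.0.1"})
    append_entry({"_SYSTEMD_UNIT": "sshd.service", "PRIORITY": "6",
                  "MESSAGE": "Connection closed by 10.0.0.1"})

    # "_SYSTEMD_UNIT=sshd.service" and "PRIORITY=6" are stored once but
    # referenced by both entries.
    print(len(strings), "unique strings for", len(entries), "entries")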

There is value in a new log format. Current log files quickly become large and unwieldy; extracting useful information from them can be a slow and painful task. The new format, beyond holding much more information, promises to be space-efficient and very quick to answer queries like "please give me all the disk error messages from the last day." System administrators could probably find a way to get used to the new features of this format in a hurry.
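In the same toy spirit, a query like the one above becomes a filter over structured fields rather than a scan of free-form text; the field names here are again assumptions, not part of any real interface:

    import time

    def disk_errors_last_day(entries, now=None):
        """entries: list of dicts of structured fields (toy model, not the real format)."""
        now = now if now is not None else time.time()
        cutoff = now - 24 * 60 * 60
        return [e for e in entries
                if e.get("PRIORITY", 7) <= 3           # "err" or worse
                and e.get("SUBSYSTEM") == "block"      # assumed field identifying disk messages
                and e.get("TIMESTAMP", 0) >= cutoff]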

That said, the idea of using an undocumented binary file format for crucial system logging data sets off alarms in the head of just about anybody who has been managing systems for any period of time. The format isn't "secret," of course, but we still should not have to reverse-engineer the format of our logfiles from the Journal source. Unless it is very carefully done, this kind of format could lose a lot of logging data as the result of a relatively small corruption. If the file format really does shift over time, recovery of information from older logfiles could become difficult. It is not surprising that this proposal scares a lot of people; there may have to be some significant changes before the Journal becomes acceptable to a lot of system administrators.

In truth, it may make sense to separate systemd's role in the collection, verification, and annotation of log data from the role of a backend dedicated to distribution and storage of that data. The two tasks are different, and not all sites have the same needs in this area. Mixing the two risks limiting the usefulness of the Journal and muddying the debate about the concept as a whole.

Tradition

This proposal is a request for comments; it has garnered no end of those. What is interesting to see, though, is that a lot of these comments take the form of "syslog has been good for Unix for 40 years, this proposal is trying to change one of our grandest traditions." There is a lot to value in the Unix tradition, but we do need to be careful about sticking to it just because That's How Things Have Always Been Done. Even longstanding mechanisms are subject to improvement. In the case of logging, it seems fairly clear that what we have now is not a thing of timeless beauty. This article will take no position on whether the Journal is the right way to improve on Linux logging, but it will claim that the fact that people are looking at this problem and that they are willing to make significant changes to improve things is good for Linux as a whole.

For much of the history of the Linux system, the unwritten goal was probably something like "let's make something that looks sort of like SunOS 4.x, but which is free, works on current hardware and has the rough edges smoothed a bit." That goal - implement something firmly within the Unix tradition - served to unify Linux development for many years; in our own special way, we were all pulling in the same direction. But that goal has long since been met; there is no longer an obvious target for us to be chasing at this point.

That means that we are heading ever more deeply into new and uncharted territory; Linux is increasingly doing things that no Unix-like system has done before. In the absence of an obvious model, developers doing new things will do them in new ways. The results include Android, KDE4, GNOME3, and, yes, the collected works of Lennart Poettering. If the work is genuinely new, it will look different from what we are used to. Some of this work will certainly turn out to be a bad idea, while some of it will endure for a very long time. But none of it will exist unless developers are willing to take risks and rethink how things have always been done.

Unless a shinier SunOS 4.x is really all that we are after, we as a community are desperately in need of people who are willing to stir things up and try new things to solve problems. To restrict ourselves to what Unix did is an almost certain way to share Unix's fate. We can do better than that - indeed, we have done better than that - but, to do so, we need to create an environment where developers are willing to take some technical risks. Bad ideas need to be exposed as such whenever possible, but ideas should not be shot down just because they do not preserve the way our grandparents did things. The Journal may or may not turn into a sensible option, but the process that creates things like the Journal is one of the most important things we have.

Comments (251 posted)

Trinity Desktop Environment: Keeping KDE 3 alive

November 22, 2011

This article was contributed by Bruce Byfield

At this point, the Trinity Desktop Environment (TDE) hardly needs a conventional review. After all, as the numbering of the new 3.5.13 release suggests, TDE is a continuation of the KDE 3 series, and anyone who is even remotely interested must have seen the basic desktop many times. Instead, TDE raises some questions. For instance, how well can a desktop first released nearly a decade ago adapt to modern computing? Just as importantly, what are its chances of survival, and what directions might its development take in the future? To judge from the new release, these are important questions.

The latest version of TDE is available as packages for recent releases of Debian, Ubuntu, Red Hat, and Fedora. However, packages for releases made in the last month or so, such as Fedora 16, are not yet available, while packages for Slackware, openSUSE and Mandriva are mentioned as being in development. Alternatively, you can download a Kubuntu Live CD or build TDE from source.

If you install TDE, note that its packages are not interchangeable with KDE 3 packages from other sources, nor are they compatible with KDE 4 releases. Installing TDE requires dependencies not found in any version of mainstream KDE, including patched versions of the libcaldav and libcarddav libraries, as well as the version of Qt3 that Trinity maintains as a separate project.

Thankfully, all Trinity packages have a -trinity suffix attached to them, and the desktop maintains settings in a ~/.trinity directory separate from the ~/.kde directory used by KDE 4, so TDE can coexist safely beside other versions of KDE. The largest problem you are likely to encounter is finding TDE at the login screen, where it is listed under "TDE" rather than "KDE" or "Trinity," as you might first expect.

Running TDE

For long-time KDE users, TDE is a step back in time. The first login starts with the KPersonalizer wizard, complete with Konqi, the now seldom-used dragon mascot of KDE. However, this nostalgia is not an unalloyed delight.

The first impression is that TDE is generally faster than KDE 4. This impression wavers when you open KDE 4 apps and seems inconsistent when running applications not designed for a specific desktop, but the general impression remains for routine operations like logging in or out.

[TDE desktop]

If you are familiar with the KDE 3 release series, the next thing you are likely to notice is how little it has visibly changed, even though 3.5.13 is the third TDE release, 3.5.12 having been released in October 2010, and 3.5.11 in April 2010. Perhaps the most visible improvements in the 3.5.13 version, which was released November 1, are a Monitor and Display dialog that supports multiple monitors, the option to run a KDE-4-style menu, and a few themes. You have to go into the repositories to notice the addition of a few TDE-specific applications, such as kbookreader, kdbusnotification, kmymoney, and kstreamripper. Otherwise, most of the enhancements in the new release are behind the scenes, and largely unnoticeable. Ranging from bug-fixes to improvements in security, these enhancements are welcome, but, to the casual eye, TDE seems pretty much identical to the last of the KDE 3 series.

Although interface differences are obvious, in terms of basic features TDE compares favorably with the latest KDE 4 releases. In fact, although rearrangement of configuration options makes comparisons difficult, TDE's Control Center actually appears to have more administrative options than the latest KDE 4.7 release, especially for hardware peripherals. For many, an interface designed to the standards of a decade ago might at first seem a small price to pay for the speed and administration tools.

[TDE control center]

However, TDE also has some serious shortcomings. For example, according to the release notes, Bluetooth functionality is absent. So, too, is the ability to edit HTML messages and display images in KMail, which is the subject of a $2500 bounty. TDE is also at an obvious disadvantage in its lack of counterparts to advanced KDE 4 features such as Folder Views and Activities. Nor does it integrate any online applications or services into the desktop, the way that an increasing number of desktop environments do.

In addition, TDE would benefit from the ability to migrate email, addresses, and settings from other versions of KDE. Non-KDE applications such as Firefox can be used interchangeably in TDE and KDE, but not KDE-specific applications. Start KMail in TDE, for example, and you have to configure it separately from other versions of the same application. Admittedly, migration of information might be difficult, since KDE 4 applications such as KMail and Amarok store information in databases, but, without some sort of assistance, TDE installations are practical mainly on a fresh machine or else as experiments not intended for serious work.

Even more seriously, while the last releases of the KDE 3 series had a reputation for stability, TDE's patches seem to have taken a toll. Every now and then on my Debian installation, TDE takes three or four times as long as usual to log out. Random crashes also occur. For some reason, too, changing the theme creates a new panel on the left side of the screen, while logging in always starts the audio mixer.

Questions of viability

Judging by the ongoing complaints about the KDE 4 series, a market should exist for a continuation of KDE 3. Yet, for some reason, TDE remains a niche project. It has only four main developers, although, according to project lead Timothy Pearson, others have reported bugs and submitted patches. The same names appear over and over on TDE's mailing lists, and the project's Community Styles and Themes page includes just a single contribution.

Furthermore, in the last eleven months, the project's download page has averaged just 214 visits per month, with a high of 492 in February 2011. Even with the new release, this month's visits were only 252 with the month two-thirds over — and probably not all of those visits were followed by an installation.

Little wonder, then, that the project's home page includes a plea for help in every aspect of development. The current team has undertaken an ambitious project with limited resources, and is running hard just to stay in the same place.

That might lead to misgivings about TDE's viability, but there are some other things to consider. The project has managed to do three releases, with a fourth planned for April 2012. It has also successfully navigated major design decisions, avoiding switching to the Qt4 framework by taking over the maintenance of Qt3, and continuing to rely on DCOP, invoking D-Bus only when necessary for such operations as running KDE 4 applications.

Moreover, project members are well aware that much remains to be done. Asked about the priorities for the future, Pearson mentions a replacement for HAL, but emphasizes that the main concern will be bug-fixes. "TDE inherited a lot of bugs from KDE 3.5.10, and has added some of its own over the years," he acknowledged. The next release, he added, "is going to primarily be a bug fix release, although I imagine some new features will probably creep in between now and the release date."

Despite the challenges, Pearson remains optimistic that TDE will survive and define itself. Since forking from KDE 3, he said,

The two projects' goals have diverged significantly. We are not trying to compete with KDE 4 (or any other desktop for that matter) — instead, we are working on an alternative desktop environment with a different perspective on human-computer interaction compared to the environments currently available.

In other words, like Xfce, TDE is designed for those who view the desktop primarily as a place from which to launch local applications. To those who use online applications or services, or whose workflow now depends on enhanced features like KDE 4's Activities and alternative desktop arrangements, that may seem like an obsolete goal. Yet the continuing mutter of controversy about KDE 4 suggests that there are at least some who wish to continue working as they already have for a decade.

For these reasons, in its own quixotic way, TDE seems likely to struggle on, at least for the immediate future. For those who were happy with KDE 3, and opposed to the direction of KDE 4, the project seems a perfect opportunity for getting involved and making a difference.

Comments (16 posted)

Thoughts on conferences

By Jake Edge
November 22, 2011

Over the last four years or so, I have attended numerous conferences in many different locations. It has been, really without any exceptions, an incredible experience. Conferences are one of the main ways that our communities come together and meet face-to-face—something that's important to counterbalance the standard email and IRC development environment.

In that time, I have also seen many different ways to organize, schedule, and produce those conferences, and, as is the case with free software projects, there are bits and pieces that conferences can learn from each other. What follows is my—fairly opinionated obviously—distillation of what works well and less well, which will hopefully be useful as new conferences spring up, or as existing ones plan for next year.

Location location location

Conferences are generally located in interesting cities throughout the world. This is a good thing as it means that folks get a chance to see different cities and countries. Moving the conferences around every year also means that geographic distance will become less of a factor as the conference will be "nearby" every once in a while. Because of that, those on a limited budget still have a chance to attend. Smaller, regional conferences may not move around very much (or at all), but their focus is typically more on gathering people from the region, rather than trying to attract a large presence from elsewhere.

Regardless of which city the conference is held in, it is very important that it be held in a "downtown" location, close to restaurants, bars, and sightseeing. One of the more important parts of any conference is the so-called "hallway track", where informal gatherings of attendees take place. Nearby bars and restaurants are an integral part of the hallway track. In a well-situated conference setting, it is not unusual to stumble into several different discussions during the conference, when eating dinner or visiting a pub. Requiring taxis, public transportation, or rental cars to access food choices is definitely sub-optimal, especially for lunch breaks.

Being in a downtown location also likely increases the options for hotels or other accommodations. While the rates at the conference hotel are generally pretty reasonable compared to other, similar hotels, they certainly aren't cheap. Folks that have a limited budget will appreciate having other options that are still within easy walking (or public transport) distance of the conference site.

Putting together a map with the conference venue, the various accommodation options, restaurants, pubs, clubs, and so on, is extremely useful. Adding information about how to use any public transport systems, directions on getting to and from the airport or train stations, as well as a bit of sightseeing information is invaluable as well.

Scheduling

The conference lineup is getting larger each year it seems, which makes it difficult to avoid date conflicts. It's obviously beneficial to do so, but may ultimately be hard to pull off. Even when they don't overlap, it's not uncommon for two conferences, in different places, to bump right up against each other, or only be a few days apart. That will make it difficult or impossible for folks to attend both—making it important to at least try to avoid conflicts between conferences with closely related topics or themes. A recent movement to collect up multiple conferences and run them back-to-back (or even concurrently) in a single location certainly has its advantages, but it also leads to attendees being away from home for up to two weeks—and possibly falling prey to conference burnout part way through. Obviously, there are various pros and cons here.

Something that is more easily controlled by the conference organizers is the actual schedule of talks. There are tradeoffs there as well, of course, but it is important to find the right balance between time for presentations, time for hallway chats, and time to move between various parts of the venue. If there are long distances between the farthest meeting rooms, it may make sense to have larger breaks between sessions. It is frustrating for attendees to be sitting in on a good talk, perhaps one that is running a little bit late, and having to watch the clock so that there is enough time to get to another session all the way across the venue. Worse yet, arriving on-time sometimes means the room is already full.

Timing is tricky too, though, because a staggered schedule (with, say, 50-minute slots and 20 minutes in between) makes it harder for attendees to keep track of when the next session starts. On the other hand, starting each session on the hour (or half-hour) may lead to shortened slots or very short room switching times. One possible solution is to have longer and shorter slots that are alternated in some clear pattern—it is sometimes the case that a speaker doesn't really have enough material for a full slot anyway. For hallway track purposes, 30-minute breaks in the morning and afternoon, as well as a 90-minute lunch break, should be strongly considered. All of that is pretty difficult to balance and still get in plenty of talks—at least in a one or two-day conference—but fewer, higher-quality talks with lots of discussion and social time makes for a better conference in my experience.

Talks

While meeting up with friends and colleagues, and visiting an interesting city, are certainly part of the draw for a conference, it is the presentations that provide much of the impetus for attending. Make no mistake, there have been some excellent talks with great presenters at all of the conferences I have attended. Keynote talks are another story unfortunately. Some conferences choose keynote speakers based on their merits, but some of the bigger conferences give keynote slots to sponsors, which can be something of a mixed bag.

There is nothing inherently wrong with providing keynote slots to sponsors, but those talks often end up being extended advertisements for the company and its products. There are exceptions, and even some of the ad-oriented talks are still interesting, but it is unfortunate that they are often some of the weakest talks at the conference. It may be unavoidable, as large corporate sponsors expect to get something for the money they put in, but there is no reason that the technical level of the talks couldn't be better. There certainly is a lot to be gained both for the company and for the attendees if the talks are genuinely interesting and informative—even if they must stray a ways into marketing.

Conference organizers are largely at the mercy of the presentation proposals that they get, but it is nice to have related talks collected up into themes (or whole tracks if there are enough). It's also important to try to ensure that related subjects are not scheduled at the same time. Scheduling is undoubtedly a tricky balancing act at times, but having several related talks arranged back-to-back in the same room makes it that much easier for attendees.

A grab bag of other thoughts

Conferences often hand out bags with a schedule/program, a T-shirt, and a few handouts (flyers, pens, notebooks, etc.) from sponsors. Most of it is fairly useful (though the flyers generally don't do much for me) and can be used or given to a friend or family member—except the bag itself. One has to wonder how many of the bags—often fairly attractive, made of cloth, with a conference logo—never make it home with attendees.

The reason is that most of those bags are essentially useless for any purpose other than carrying around the small amount of stuff that originally comes in them. In general, they are much too small to do anything with, so they get tossed out—or re-gifted to people who don't know any better. Either spending less money on a paper/plastic bag or, better still, handing out a bag that can be used for other purposes (perhaps a laptop bag or even a cloth grocery bag) would be good. It just seems like most conference bags are an expensive waste. For some, T-shirts may fall into the same category, but at least those can be donated to someone else or to a local organization for the needy.

While schedule apps for mobile phones (or tablets) are all the rage, it's a little hard to see what they offer beyond what a simple—up-to-date!—web page could. If the schedule is changing frequently, an app may remind you to update its data, unlike a web page, but that value is lost in the cumbersome navigation that the apps tend to have. Is it really so hard to recognize what day it is and make that the default? Bonus points would be awarded for noting the time and auto-scrolling to the right place in the schedule. It seems wrong somehow to have a separate app per event that needs to be downloaded before and deleted after, but I guess that's part of what apps are all about.

Of course, either apps or a web page require reasonable WiFi at the conference venue. For the most part, WiFi works pretty well at conferences these days. It can be overwhelmed when the majority of attendees are in one place (like for keynotes), but the rest of the time it seems to work pretty reliably, which stands in (sometimes stark) contrast to five (or even fewer) years ago. It's important for attendees to be able to keep up with email and such, though it can be a distraction as well. It's pretty easy to recognize a talk that isn't going very well by the number of head-down, typing attendees.

In closing ...

Here are some of the things that I noted about some of the conferences I have attended in 2011. It is by no means comprehensive, but just some quick-hit thoughts about things that each did well that other conferences should at least consider as part of their planning. For example, one of the things that struck me about the Prague conferences in October (LinuxCon Europe, Embedded Linux Conference Europe, GStreamer Conference, and Kernel Summit) was the large open area in the center of the venue between all of the rooms. There was plenty of space for chatting, raiding the snack/beverage tables, and even sitting down (if you were quick or paying close attention anyway).

While it probably complicated scheduling even further, I certainly appreciated the 10am start for the second day of the GStreamer Conference. Staying up too late at conferences, whether carousing or writing/editing, is all too common, so a later start (with perhaps a correspondingly late finish) is welcome. Like many conferences, Ubuntu Developer Summit in Budapest had an excellent location, with plenty of nearby restaurants and a walking mall with lots of food and beverage choices just two blocks away. Not to mention the fact that it was close enough to walk to some of the sightseeing highlights of the city.

The Desktop Summit in Berlin was memorable for many reasons, but one thing that really stood out was the wiki with a great deal of useful information about the city, how to get around, and so on. The Embedded Linux Conference Europe had videos (in free formats) available soon after the conference from Free Electrons (the GStreamer conference also had prompt videos that were made by UbiCast). It should also be noted that the Linux Foundation either runs or assists with many of the conferences that I have attended and it does a uniformly great job in putting those conferences on, from the venues to the program and keeping everything running smoothly as well.

We are lucky to have so many high quality conferences for Linux and free software. The schedule is packed, and there are good choices for most any area of our communities. While there are clearly areas where conferences could do better, it's hardly a weak point. It's certainly worth finding the time and budget to attend those that are in your areas of interest.

[ I'd like to thank the various sponsors, LWN subscribers in particular, for making it possible for me to attend—and report on—a wide variety of conferences. ]

Comments (39 posted)

Page editor: Jonathan Corbet

Security

Sovereign Keys for certificate verification

November 23, 2011

This article was contributed by Nathan Willis

The certificate-authority (CA)-based approach to identity-checking on the web is under heavy fire these days. In theory, when a client like a web browser initiates a secure connection to a remote server, it relies on a chain of trusted cryptographic signatures presented by the server to verify that the public-key encryption certificate belongs to the site the browser is expecting. But as the Electronic Frontier Foundation (EFF) has pointed out numerous times in public, the CA system is too easily circumvented by fake CAs or attackers intercepting and replacing credentials during the handshake.

The problem affects SSL/TLS, SMTPS, and other secure protocols. There are plenty of proposals for replacing the CA portion of the system, including Convergence, MonkeySphere, and Public Key Pinning. EFF's chief technologist Peter Eckersley has proposed a new framework called Sovereign Keys that is intended to securely verify associations between domain names and cryptographic keys in a way that is more robust than the CA system in use today, but can piggyback on it.

The Sovereign Key system allows clients to verify the validity of a server's X.509 certificate by referencing a persistent, append-only database mirrored around the world. The Sovereign Key service is designed to prevent man-in-the-middle and server-impersonation attacks — even those where an attacker has compromised a CA — and provides a safe workaround when an attack on SSL/TLS would trigger one of those certificate warning messages that users have grown accustomed to ignoring. Although in theory it could be used to replace the CA system outright, the proposal instead presents it as a way to augment existing techniques, which maximizes compatibility with existing HTTPS and SSL/TLS infrastructure.

A November 18 blog post outlines the Sovereign Keys proposal at a high level, including the weaknesses in the current CA model and links to research showing the ineffectiveness of certificate warnings in modern web browsers. The proposal itself is hosted in the EFF Git repository, although Eckersley warns that it is in draft form only, and not a complete specification. In particular, it does not detail protocol encodings and numerical parameters that would be required before undertaking a real-world implementation.

There is also a set of "issues" with the current design hosted in the repository. Currently five in number, they outline possible attack vectors and the transitional or pragmatic hurdles in the way of adoption. Eckersley said that the EFF is working toward a real-world test, but that, because of the scope involved, it is still too soon to announce.

The design

As the proposal outlines it, there are two chief problems with the CA system in use today. First, it is vulnerable to a variety of attacks, including a compromise of the CA itself, a CA pressured by a government into maliciously replacing a certificate, a compromised router near either the client or the CA, or even a compromised DNS server in between the client and CA. Second, the default way for browsers to warn users about potential attacks is with a certificate warning message. But research has shown that the majority of certificate warnings are generated by "bureaucratic" non-attack incidents like mismatches and self-signed certificates, which has trained users to ignore the warnings.

The first goal of the Sovereign Keys design is to allow clients to check the validity of a server's certificate without relying on any third-party that could be compromised like the CA system can. The second is to provide an alternate method for clients to securely connect to the correct server in case the validity check fails (meaning the server the client actually wants to connect to, even if the server originally seen is an impostor).

In the Sovereign Keys system, a definitive "sovereign key" key-pair exists for each service on the Internet. There is a public, master database of all of the (public key) sovereign keys, and any client could use it to check the signature on a particular server's SSL/TLS certificate. The certificate could be changed, but as long as the verified sovereign key signs the new one, the clients would consider it valid. What makes the Sovereign Keys database more trustworthy than a chain of CA-signatures is that it is verifiably read-append-only — meaning that all clients can query it, but that the database server can only append new records, not modify old ones — and replicated by a set of global mirrors.

Because it can be appended to but not changed, the database is a "timeline" that includes a complete record of new sovereign key registrations, revocations, and other incidents. In the event that a timeline server experiences a fault (such as two events being recorded out-of-sequence), any client can recognize it, flag it as renegade, and stop trusting it. The timeline is a bit like the distributed, public transaction record used by Bitcoin — its authenticity is designed to be verifiable by clients and mirrors. Each entry includes a strictly-incrementing serial number and a monotonically-increasing timestamp, and the entries are cryptographically signed.
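As a minimal sketch of the kind of consistency check this design implies for clients and mirrors (the entry layout is an assumption; the draft does not yet specify encodings), serial numbers must keep increasing and timestamps must never move backwards:

    def timeline_is_consistent(entries):
        """entries: iterable of (serial, timestamp, signed_payload) tuples in log order.
        Signature verification is omitted here; a real client would check each entry too."""
        last_serial = last_time = None
        for serial, timestamp, _payload in entries:
            if last_serial is not None:
                if serial <= last_serial:     # duplicate or out-of-order serial number
                    return False
                if timestamp < last_time:     # timestamp moved backwards
                    return False
            last_serial, last_time = serial, timestamp
        return True

    # A mirror that observes a violation would flag the timeline server as
    # renegade and stop trusting it:
    assert timeline_is_consistent([(1, 100, b""), (2, 100, b""), (3, 105, b"")])
    assert not timeline_is_consistent([(1, 100, b""), (3, 99, b"")])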

The other half of the design is what a client is meant to do if a server's certificate fails the sovereign key-verification test. Rather than allow the user to continue to connect (which may be a man-in-the-middle attack, but even if it is not, reinforces the behavior that continuing to connect is acceptable), the client takes the sovereign key found in the timeline for the service, computes an SHA1 hash of the key, and attempts to connect to that hash value as a hidden .onion service via Tor. If the service does not accept connections on the .onion address, the client is to stop, and not connect at all. In other words, it may ultimately fail to connect, but that is better than making an unsafe connection.
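A rough sketch of that fallback path follows. It assumes the proposal reuses Tor's existing (v2) hidden-service naming, in which the .onion label is the base32 encoding of the first 80 bits of a SHA1 digest; the draft's exact derivation may differ:

    import base64
    import hashlib

    def fallback_onion_name(sovereign_key_bytes: bytes) -> str:
        """Derive a candidate .onion name from the sovereign key found in the timeline."""
        digest = hashlib.sha1(sovereign_key_bytes).digest()
        return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

    # The client would try to reach this hidden service over Tor; if that also
    # fails, it refuses to connect rather than fall back to an unsafe connection.
    print(fallback_onion_name(b"example sovereign key material"))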

Of course, the use of Tor as an alternate method for connecting to a service introduces its own set of additional requirements. The client must be running Tor and have a proxy set up to properly redirect requests using the .onion pseudo-domain. The server must also be configured to run the hidden service, because the system uses Tor routing rather than DNS or IP addressing to route traffic.

Multiple timelines?

The integrity of the timeline is paramount to making the Sovereign Keys system work, although, technically speaking, there are multiple trusted timelines, one for each timeline server. The timeline servers inform each other of events, but in keeping with the append-only nature of the timeline, each timeline server has its own timestamp. There are separate record types to distinguish between events that happen on the timeline server and those that are relayed from a peer.

The process starts when the owner of a domain registers a sovereign key for a new service (officially, this registration calls for both a protocol — say, HTTPS or SMTPS — and a domain) with a nearby timeline server. According to the proposal, the registrant must present evidence that he or she controls the domain at the time of the registration. The evidence can be either "a CA-signed DER-encoded X.509 certificate associating the name with the Sovereign Key, or DANE DNSSEC evidence associating the name with the Sovereign Key, if that is available for this TLD [top-level domain]." The timeline server must verify the evidence. The evidence is logged in the timeline entry along with other pertinent data: the timeline's serial number and timestamp, the sovereign key itself, the domain name and protocol(s) registered for it, an optional expiration date, and two auxiliary fields, "wildcard" and "inheriting name(s)."

The proposal estimates that the space required to store such timeline entries for all of the 200 million domains currently in existence is around 300GB (although no estimate is made for how quickly this set of domains is growing, which could also prove relevant). The wildcard field is set by the registrant; if 0 then the sovereign key applies only to the exact domain name listed; if 1 then the sovereign key is considered valid for matching subdomains as well. The "inheriting name(s)" are an optional list of other sovereign keys that are granted the right to register a new sovereign key for this domain if and when this current sovereign key is later revoked. Only self-signed revocations are accepted, so, in theory, designating "heirs" at the time of registration prevents later hijacking.
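For illustration only, the registration fields described above can be collected into a data structure like the one below; the names and types are assumptions made for readability, not the draft's (unspecified) wire format:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class SovereignKeyRegistration:
        serial: int                        # timeline serial number (strictly increasing)
        timestamp: int                     # timeline timestamp (monotonically increasing)
        name: str                          # domain name being registered
        protocols: List[str]               # e.g. ["https", "smtps"]
        sovereign_key: bytes               # the public sovereign key itself
        evidence: bytes                    # CA-signed X.509 certificate or DANE DNSSEC data
        expiration: Optional[int] = None   # optional expiration date
        wildcard: int = 0                  # 1 = key also covers matching subdomains
        inheriting_names: List[bytes] = field(default_factory=list)  # keys allowed to re-register after revocation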

Apart from new sovereign key registrations, the timeline contains three types of entries: registrations-by-reference (registrations that were initiated at other timeline servers), revocations, and reissuances (which might change parameters like the protocols accepted). These entries are shorter, consisting of (in the registration-by-reference example) the original timeline server's id, serial number, and timestamp data, plus the serial number and timestamp that the reference was logged at in this timeline.

The design calls for a set number of timeline servers (10-30); having multiple root timeline servers should increase performance and make the network more reliable, particularly if they are located in different geographic and network regions. The proposal does not address how they are bootstrapped, but they seem to require a priori knowledge of each other in order to propagate registrations-by-reference and other important timeline events. Mirrors, on the other hand, also increase availability and performance from the clients' perspective, but mirrors do not record any new events to the timeline, they only synchronize with the authoritative timeline servers.

Conflicts, mirrors, and sharing

Space in the proposal is dedicated to what happens when one timeline server seems to violate the rules, such as storing two entries in its timeline with the same or out-of-order serial numbers, recording a lower timestamp than a previous entry, or registering a new sovereign key without verifiable credentials. Because clients are expected to query the mirror servers and not the root timeline servers directly, mirrors are expected to catch the misbehavior first. They then record the incident (including the problematic timeline entries), stop trusting the timeline server, and propagate the incident to clients that ask. Mirrors are also supposed to propagate this "renegation" information to each other; presumably the managers of the root timeline servers would be notified as well.

A longer discussion is devoted to the mirroring and mirror-querying protocol. Clients are supposed to trust the oldest sovereign key found for a domain in all of the valid timelines — plus whatever revocations and renewals follow it — so replicating the entire timeline could be important to verifying any particular transaction. How long clients and mirrors should cache information is both a performance and security consideration. The proposal recommends a series of trust stages from 24 hours to two weeks in length.

It also discusses how Sovereign Keys could be implemented in parallel with the existing SSL/TLS CA system. Because clients can do sovereign key verification by checking one signature on the certificate presented by a server, the CA system is not needed at all, but Eckersley recommends implementing it at first with sovereign key signatures added-on to existing CA-signed (or self-signed) server certificates.

Issues

The issues/ directory in the Git repository discusses technical and practical problems with the proposal, including concerns with such a CA/Sovereign Keys transitional system. The first is that an attacker who compromises a victim's DNS or web server can generate fraudulent sovereign key registrations complete with the DANE or CA-backed credentials the system is supposed to trust. The authors recommend that domain owners combat this potential attack by preemptively creating a sovereign key with no associated protocols, and outline a series of steps that root timeline server administrators could take to protect against rogue registrations.

There are several potential problems with timeline management discussed as well, including attackers compromising a timeline server (which is particularly troublesome since the timeline servers trust each other unless a renegation is spotted), false alarms caused by system time adjustments, and clock synchronization problems. A separate issue addresses the assumption that the set of authoritative timeline servers is small and well-known. This assumption has implications for renegation tracking and synchronization, but it also limits the decentralization of the Sovereign Keys framework, and in a real sense places critical (if not ultimate) trust in the timeline overseers.

Two attack vectors are considered: denial of service by flooding the timeline servers with registrations or the timeline servers and mirrors with queries, and a Sybil attack against a client by overloading its list of Sovereign Keys mirrors with compromised servers.

Of the vulnerabilities discussed, the false registrations created by breaking the existing CA or DANE DNSSEC system clearly has the largest attack surface. After all, the Internet could not be "switched over" to Sovereign Keys instantaneously even if every server and CA decided to do so. In the meantime, as EFF's other publications regularly show, there are enough attacks on (and security holes in) the CA system now that the Sovereign Keys proposal's reliance on CA as the trust authority for new sovereign key registrations sounds almost illogical. The proposal details ideas for bootstrapping mirror servers in a secure way, but creating verifiable sovereign keys for the 200 million domains is a much larger problem to solve.

It is clear that the current CA system is seriously flawed, and it is clear that browser certificate warnings are an ineffective tool to prevent users from falling victim to attackers. Some parts of the draft Sovereign Keys proposal are intriguing (such as the verifiable, public timeline record), but in other ways it may sound to most end users like replacing one remote authority with another. Falling back on Tor might be good from a technical perspective, but it will be a challenge to implement. Whichever of the project's two goals one considers, then, its future will depend on how it is received by those outside of the EFF.

Comments (8 posted)

Brief items

Security quotes of the week

Most consumers don't care until they get their first $1,000 phone bill because their pirated Angry Birds has been calling Estonia all month.
-- Chester Wisniewski

If you read an analyst report about 'viruses' infecting ios, android or rim, you now know that analyst firm is not honest and is staffed with charlatans. There is probably an exception, but extraordinary claims need extraordinary evidence.

If you read a report from a vendor that [tries] to sell you something based on protecting android, rim or ios from viruses they are also likely as not to be scammers and charlatans.

-- Chris DiBona

CarrierIQ as seen in real world usage (HTC Devices especially) is nothing like the stock copies shown on the first page. All menus have been stripped, hiding it from users presence without advanced knowledge. The service also runs as user Root in ramdisk. It checks in to a server (or receives commands through other various access) with commands to allow someone undetected access.
-- Trevor Eckhart reports on a rootkit installed in many mobile phones by the carriers

Happily, [Trevor] Eckhart was not cowed by this ham-fisted effort to suppress his findings. Instead, he reached out to EFF. We're glad he did. As we explained in a letter (pdf) to Carrier IQ today, Eckhart's research is protected by fair use and the First Amendment right to free expression. He posted the training materials to teach the public about software that many consumers don't know about, even though it monitors their everyday activities and raises substantial privacy concerns.
-- Electronic Frontier Foundation (EFF) steps in to help defend against a Carrier IQ cease-and-desist

So on the exact same day, Adobe said "we recommend you upgrade, as the version you are using is vulnerable" and "we offer you no way to upgrade".

I'm left with the conclusion that Adobe's aggregate corporate message is "users of desktops based on free software should immediately uninstall AIR and stop using it".

If Adobe's software was free, and they had a community around it, they could turn over support to the community if they found it too burdensome. Instead, once again, users of proprietary tools on free systems get screwed by the proprietary vendor.

-- Daniel Kahn Gillmor (Thanks to Paul Wise.)

Comments (1 posted)

BIND 9 denial of service being seen in the wild

The BIND 9 DNS name server is undergoing a concerted denial of service attack, according to this Internet Systems Consortium advisory. "Organizations across the Internet reported crashes interrupting service on BIND 9 nameservers performing recursive queries. Affected servers crashed after logging an error in query.c with the following message: "INSIST(! dns_rdataset_isassociated(sigrdataset))" Multiple versions were reported being affected, including all currently supported release versions of ISC BIND 9. [...] An as-yet unidentified network event caused BIND 9 resolvers to cache an invalid record, subsequent queries for which could crash the resolvers with an assertion failure. ISC is working on determining the ultimate cause by which a record with this particular inconsistency is cached. At this time we are making available a patch which makes named recover gracefully from the inconsistency, preventing the abnormal exit." We should be seeing distributions releasing updated versions soon.

Comments (10 posted)

Certificate fraud: Protection against future "DigiNotars" (The H)

The H looks at a proposal from Google to protect against rogue or compromised certificate authorities. "[Google product manager Ian] Fette said that after that affair, other companies asked Google for a way to protect themselves against bogus certificates. As there are numerous CAs, the possibility that similar illegitimate certificates could be issued remains, explained the developer. However, Fette said that embedding the certification policy for all potential parties into browsers doesn't scale, and that he and his colleagues, Chris Evans and Chris Palmer, therefore advocate the dynamic pinning of public keys." The article goes on to look at the proposal and some complaints about it, along with an alternative based on DNSSEC.

Comments (9 posted)

Tool kills hidden Linux bugs, vulnerabilities (SC Magazine)

SC Magazine looks at a tool to help look for holes in Linux. "It identifies similar source files based on file names and content to identify relationships between source packages. Fuzzy hashing using ssdeep produces hashes that can be used to determine similar packages. Graph Theory is used to perform the analysis."

Comments (25 posted)

New vulnerabilities

bind9: denial of service

Package(s): bind9
CVE #(s): CVE-2011-4313
Created: November 17, 2011
Updated: November 30, 2011
Description:

From the ISC advisory:

Organizations across the Internet reported crashes interrupting service on BIND 9 nameservers performing recursive queries. Affected servers crashed after logging an error in query.c with the following message: "INSIST(! dns_rdataset_isassociated(sigrdataset))" Multiple versions were reported being affected, including all currently supported release versions of ISC BIND 9.

[...]

An as-yet unidentified network event caused BIND 9 resolvers to cache an invalid record, subsequent queries for which could crash the resolvers with an assertion failure. ISC is working on determining the ultimate cause by which a record with this particular inconsistency is cached. At this time we are making available a patch which makes named recover gracefully from the inconsistency, preventing the abnormal exit.

Alerts:
Gentoo 201206-01 bind 2012-06-02
CentOS CESA-2011:1496 bind 2011-11-29
Oracle ELSA-2011-1496 bind 2011-11-30
SUSE SUSE-SU-2011:1270-3 bind 2011-11-30
Scientific Linux SL-bind-20111129 bind 2011-11-29
CentOS CESA-2011:1496 bind 2011-11-29
Red Hat RHSA-2011:1496-01 bind 2011-11-29
Fedora FEDORA-2011-16002 bind 2011-11-17
Fedora FEDORA-2011-16036 bind 2011-11-17
SUSE SUSE-SU-2011:1270-2 bind 2011-11-23
SUSE SUSE-SU-2011:1270-1 bind 2011-11-22
SUSE SUSE-SU-2011:1268-1 bind 2011-11-22
openSUSE openSUSE-SU-2011:1272-1 bind 2011-11-22
Oracle ELSA-2011-1459 bind97 2011-11-18
Oracle ELSA-2011-1458 bind 2011-11-18
CentOS CESA-2011:1459 bind97 2011-11-18
Scientific Linux SL-bind-20111117 bind97 2011-11-17
Mandriva MDVSA-2011:176-1 bind 2011-11-17
Red Hat RHSA-2011:1458-01 bind 2011-11-17
Debian DSA-2347-1 bind9 2011-11-16
Fedora FEDORA-2011-16057 bind 2011-11-17
Oracle ELSA-2011-1458 bind 2011-11-18
CentOS CESA-2011:1458 bind 2011-11-18
Scientific Linux SL-bind-20111117 bind 2011-11-17
Mandriva MDVSA-2011:176-2 bind 2011-11-18
Red Hat RHSA-2011:1459-01 bind97 2011-11-17
Ubuntu USN-1264-1 bind9 2011-11-16
Mandriva MDVSA-2011:176 bind 2011-11-16

Comments (none posted)

chromium: multiple vulnerabilities

Package(s): chromium
CVE #(s): CVE-2011-3892 CVE-2011-3893 CVE-2011-3894 CVE-2011-3895 CVE-2011-3896 CVE-2011-3897 CVE-2011-3898 CVE-2011-3900
Created: November 21, 2011
Updated: August 30, 2012
Description: From the Gentoo advisory:

Multiple vulnerabilities have been discovered in Chromium and V8. A context-dependent attacker could entice a user to open a specially crafted web site or JavaScript program using Chromium or V8, possibly resulting in the execution of arbitrary code with the privileges of the process, or a Denial of Service condition. The attacker also could cause a Java applet to run without user confirmation.

Alerts:
Gentoo 201310-12 ffmpeg 2013-10-25
Mandriva MDVSA-2012:148 ffmpeg 2012-08-30
Mandriva MDVSA-2012:074-1 ffmpeg 2012-08-30
Mageia MGASA-2012-0218 avidemux 2012-08-18
Mageia MGASA-2012-0204 avidemux 2012-08-06
Mandriva MDVSA-2012:076 ffmpeg 2012-05-15
Mandriva MDVSA-2012:075 ffmpeg 2012-05-15
Mandriva MDVSA-2012:074 ffmpeg 2012-05-14
Debian DSA-2471-1 ffmpeg 2012-05-13
Gentoo 201111-05 chromium 2011-11-19

Comments (none posted)

drupal6-views: SQL injection

Package(s): drupal6-views
CVE #(s):
Created: November 21, 2011
Updated: November 23, 2011
Description: From the Drupal advisory:

The Views module enables you to list content in your site in various ways. The module doesn't sufficiently escape database parameters for certain filters/arguments on certain types of views with specific configurations of arguments.

Alerts:
Fedora FEDORA-2011-15399 drupal-views 2011-11-04
Fedora FEDORA-2011-15385 drupal6-views 2011-11-04
Fedora FEDORA-2011-15352 drupal6-views 2011-11-03

Comments (none posted)

freetype: code execution

Package(s): freetype
CVE #(s): CVE-2011-3439
Created: November 17, 2011
Updated: April 19, 2012
Description:

From the Red Hat advisory:

Multiple input validation flaws were found in the way FreeType processed CID-keyed fonts. If a specially-crafted font file was loaded by an application linked against FreeType, it could cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. (CVE-2011-3439)

Alerts:
SUSE SUSE-SU-2012:0553-1 freetype2 2012-04-23
Fedora FEDORA-2012-4946 freetype 2012-04-18
Oracle ELSA-2012-0467 freetype 2012-04-12
Red Hat RHSA-2012:0094-01 freetype 2012-02-02
Gentoo 201201-09 freetype 2012-01-23
openSUSE openSUSE-SU-2012:0015-1 freetype2 2012-01-05
SUSE SUSE-SU-2011:1307-1 freetype2 2011-12-08
SUSE SUSE-SU-2011:1306-1 freetype2 2011-12-08
Fedora FEDORA-2011-15964 freetype 2011-12-04
Fedora FEDORA-2011-15956 freetype 2011-11-15
Fedora FEDORA-2011-15927 freetype 2011-11-15
CentOS CESA-2011:1455 freetype 2011-11-18
Ubuntu USN-1267-1 freetype 2011-11-18
Oracle ELSA-2011-1455 freetype 2011-11-17
Oracle ELSA-2011-1455 freetype 2011-11-17
Oracle ELSA-2011-1455 freetype 2011-11-17
Red Hat RHSA-2011:1455-01 freetype 2011-11-16
Mandriva MDVSA-2011:177 freetype2 2011-11-21
Debian DSA-2350-1 freetype 2011-11-20
CentOS CESA-2011:1455 freetype 2011-11-18
Scientific Linux SL-free-20111116 freetype 2011-11-16

Comments (none posted)

kdeutils: directory traversal

Package(s): kdeutils
CVE #(s): CVE-2011-2725
Created: November 22, 2011
Updated: March 7, 2012
Description: From the Ubuntu advisory:

Tim Brown discovered that Ark did not properly perform input validation when previewing archive files. If a user were tricked into opening a crafted archive file, an attacker could remove files via directory traversal.

Alerts:
openSUSE openSUSE-SU-2012:0322-1 Ark 2012-03-06
Ubuntu USN-1276-1 kdeutils 2011-11-21

Comments (none posted)

kernel: multiple vulnerabilities

Package(s): kernel
CVE #(s): CVE-2011-4131 CVE-2011-4132
Created: November 21, 2011
Updated: July 10, 2012
Description: From the Red Hat bugzilla:

nfs4_getfacl decoding causes a kernel Oops when a server returns more than 2 GETATTR bitmap words in response to the FATTR4_ACL attribute request.

While the NFS client only asks for one attribute (FATTR4_ACL) in the first bitmap word, the NFSv4 protocol allows for the server to return unbounded bitmaps.

From the Red Hat bugzilla:

A flaw was found in the way Linux kernel's Journaling Block Device (JBD) handled invalid log first block value. An attacker able to mount malicious ext3 or ext4 image could use this flaw to crash the system.

Alerts:
SUSE SUSE-SU-2015:0812-1 kernel 2015-04-30
openSUSE openSUSE-SU-2013:0925-1 kernel 2013-06-10
SUSE SUSE-SU-2013:0786-1 Linux kernel 2013-05-14
openSUSE openSUSE-SU-2013:0927-1 kernel 2013-06-10
Red Hat RHSA-2013:0566-01 kernel-rt 2013-03-06
Red Hat RHSA-2012:1541-01 kernel 2012-12-04
Ubuntu USN-1530-1 linux-ti-omap4 2012-08-10
CentOS CESA-2012:0862 kernel 2012-07-10
Oracle ELSA-2012-0862 kernel 2012-07-02
Oracle ELSA-2012-2022 kernel 2012-07-02
Oracle ELSA-2012-2022 kernel 2012-07-02
SUSE SUSE-SU-2012:0789-1 Linux kernel 2012-06-26
Ubuntu USN-1471-1 linux-lts-backport-oneiric 2012-06-12
Fedora FEDORA-2012-8824 kernel 2012-06-07
Ubuntu USN-1470-1 linux-lts-backport-natty 2012-06-12
Red Hat RHSA-2012:0862-04 kernel 2012-06-20
Ubuntu USN-1457-1 linux 2012-05-31
SUSE SUSE-SU-2012:0554-2 kernel 2012-04-26
Oracle ELSA-2012-0481 kernel 2012-04-23
SUSE SUSE-SU-2012:0554-1 Linux kernel 2012-04-23
Oracle ELSA-2012-0350 kernel 2012-03-12
Oracle ELSA-2012-2003 kernel-uek 2012-03-12
Oracle ELSA-2012-2003 kernel-uek 2012-03-12
Scientific Linux SL-kern-20120308 kernel 2012-03-08
CentOS CESA-2012:0350 kernel 2012-03-07
Red Hat RHSA-2012:0350-01 kernel 2012-03-06
Red Hat RHSA-2012:0333-01 kernel-rt 2012-02-23
Ubuntu USN-1476-1 linux-ti-omap4 2012-06-15
Ubuntu USN-1472-1 linux 2012-06-12
SUSE SUSE-SU-2012:0153-2 Linux kernel 2012-02-06
SUSE SUSE-SU-2012:0153-1 kernel 2012-02-06
Ubuntu USN-1340-1 linux-lts-backport-oneiric 2012-01-23
Ubuntu USN-1330-1 linux-ti-omap4 2012-01-13
Oracle ELSA-2012-0007 kernel 2012-01-12
Scientific Linux SL-kern-20120112 kernel 2012-01-12
CentOS CESA-2012:0007 kernel 2012-01-11
Red Hat RHSA-2012:0010-01 kernel-rt 2012-01-10
Red Hat RHSA-2012:0007-01 kernel 2012-01-10
Ubuntu USN-1322-1 linux 2012-01-09
Ubuntu USN-1312-1 linux 2011-12-19
Ubuntu USN-1311-1 linux 2011-12-19
Ubuntu USN-1304-1 linux-ti-omap4 2011-12-13
Ubuntu USN-1303-1 linux-mvl-dove 2011-12-13
Ubuntu USN-1302-1 linux-ti-omap4 2011-12-13
Ubuntu USN-1301-1 linux-lts-backport-natty 2011-12-13
Ubuntu USN-1300-1 linux-fsl-imx51 2011-12-13
Ubuntu USN-1299-1 linux-ec2 2011-12-13
Fedora FEDORA-2011-16621 kernel 2011-11-30
Ubuntu USN-1293-1 linux 2011-12-08
Ubuntu USN-1292-1 linux-lts-backport-maverick 2011-12-08
Ubuntu USN-1291-1 linux 2011-12-08
Ubuntu USN-1286-1 linux 2011-12-03
Fedora FEDORA-2011-16346 kernel 2011-11-23
Fedora FEDORA-2011-15959 kernel 2011-11-15

Comments (none posted)

libcap: unauthorized directory access

Package(s): libcap  CVE #(s): CVE-2011-4099
Created: November 18, 2011  Updated: December 12, 2011
Description: From the openSUSE advisory:

capsh did not chdir("/") after calling chroot(). Programs could therefore access the current directory outside of the chroot.

Alerts:
Mandriva MDVSA-2011:185 libcap 2011-12-12
Scientific Linux SL-libc-20111206 libcap 2011-12-06
Red Hat RHSA-2011:1694-03 libcap 2011-12-06
openSUSE openSUSE-SU-2011:1259-1 libcap 2011-11-18

Comments (none posted)

moodle: multiple vulnerabilities

Package(s): moodle  CVE #(s):
Created: November 21, 2011  Updated: November 23, 2011
Description: Moodle 2.1.2, 2.0.5, and 1.9.14 contain multiple security fixes.
Alerts:
Fedora FEDORA-2011-14732 moodle 2011-10-22
Fedora FEDORA-2011-14733 moodle 2011-10-22
Fedora FEDORA-2011-14715 moodle 2011-10-22

Comments (none posted)

NetworkManager: man in the middle attack

Package(s): NetworkManager  CVE #(s): CVE-2006-7246
Created: November 22, 2011  Updated: January 19, 2012
Description: From the SUSE advisory:

When 802.11X authentication is used (ie WPA Enterprise) NetworkManager did not pin a certificate's subject to an ESSID. A rogue access point could therefore be used to conduct MITM attacks by using any other valid certificate issued by the same CA as used in the original network (CVE-2006-7246). If password based authentication is used (e.g. via PEAP or EAP-TTLS) this means an attacker could sniff and potentially crack the password hashes of the victims.

Alerts:
openSUSE openSUSE-SU-2012:0101-1 NetworkManager-gnome 2012-01-19
openSUSE openSUSE-SU-2011:1273-1 NetworkManager 2011-11-23
SUSE SUSE-SA:2011:045 NetworkManager, 2011-11-22

Comments (none posted)

openldap: denial of service

Package(s): openldap  CVE #(s): CVE-2011-4079
Created: November 17, 2011  Updated: November 23, 2011
Description:

From the Ubuntu advisory:

An OpenLDAP server could potentially be made to crash if it received specially crafted network traffic from an authenticated user.

[...]

It was discovered that slapd contained an off-by-one error. An authenticated attacker could potentially exploit this by sending a crafted LDIF entry containing an empty postalAddress.

Alerts:
Gentoo 201406-36 openldap 2014-06-30
Ubuntu USN-1266-1 openldap 2011-11-17

Comments (none posted)

phpMyAdmin: arbitrary file reading

Package(s): phpMyAdmin  CVE #(s): CVE-2011-4107
Created: November 23, 2011  Updated: January 2, 2012
Description:

From the CVE entry:

The simplexml_load_string function in the XML import plug-in (libraries/import/xml.php) in phpMyAdmin 3.4.x before 3.4.7.1 and 3.3.x before 3.3.10.5 allows remote authenticated users to read arbitrary files via XML data containing external entity references, aka an XML external entity (XXE) injection attack.

Alerts:
Debian DSA-2391-1 phpmyadmin 2012-01-22
Gentoo 201201-01 phpmyadmin 2012-01-04
Mandriva MDVSA-2011:198 phpmyadmin 2011-12-31
Fedora FEDORA-2011-15841 phpMyAdmin 2011-11-13
Fedora FEDORA-2011-15846 phpMyAdmin 2011-11-13
Fedora FEDORA-2011-15831 phpMyAdmin 2011-11-13

Comments (none posted)

software-center: man-in-the-middle attack/information disclosure

Package(s): software-center  CVE #(s): CVE-2011-3150
Created: November 21, 2011  Updated: November 23, 2011
Description: From the Ubuntu advisory:

David B. discovered that Software Center incorrectly validated server certificates when performing secure connections. If a remote attacker were able to perform a man-in-the-middle attack, this flaw could be exploited to view sensitive information or install altered packages and repositories.

Alerts:
Ubuntu USN-1270-1 software-center 2011-11-21

Comments (none posted)

spip: privilege escalation/cross-site scripting

Package(s): spip  CVE #(s):
Created: November 21, 2011  Updated: November 23, 2011
Description: From the Debian advisory:

Two vulnerabilities have been found in SPIP, a website engine for publishing, which allow privilege escalation to site administrator privileges and cross-site scripting.

Alerts:
Debian DSA-2349-1 spip 2011-11-19

Comments (none posted)

squid: denial of service

Package(s): squid  CVE #(s): CVE-2011-4096
Created: November 18, 2011  Updated: January 6, 2012
Description: From the Red Hat bugzilla:

An invalid free flaw was found in the way Squid proxy caching server processed DNS requests, where one CNAME record pointed to another CNAME record pointing to an empty A-record. A remote attacker could issue a specially-crafted DNS request, leading to denial of service (squid daemon abort).

Alerts:
Gentoo 201309-22 squid 2013-09-27
openSUSE openSUSE-SU-2012:0213-1 squid3 2012-02-09
Debian DSA-2381-1 squid3 2012-01-06
CentOS CESA-2011:1791 squid 2011-12-22
Oracle ELSA-2011-1791 squid 2011-12-17
Mandriva MDVSA-2011:193 squid 2011-12-27
Scientific Linux SL-squi-20111206 squid 2011-12-06
Red Hat RHSA-2011:1791-01 squid 2011-12-06
Fedora FEDORA-2011-15233 squid 2011-11-02
Fedora FEDORA-2011-15256 squid 2011-11-02
SUSE SUSE-SU-2016:1996-1 squid3 2016-08-09
SUSE SUSE-SU-2016:2089-1 squid3 2016-08-16

Comments (none posted)

system-config-printer: man-in-the-middle package installation

Package(s): system-config-printer  CVE #(s): CVE-2011-4405
Created: November 17, 2011  Updated: November 23, 2011
Description:

From the Ubuntu advisory:

Marc Deslauriers discovered that system-config-printer's cupshelpers scripts used by the Ubuntu automatic printer driver download service queried the OpenPrinting database using an insecure connection. If a remote attacker were able to perform a man-in-the-middle attack, this flaw could be exploited to install altered packages and repositories.

Alerts:
openSUSE openSUSE-SU-2011:1331-2 system-config-printer 2012-01-16
openSUSE openSUSE-SU-2011:1331-1 system-config-printer 2011-12-16
Ubuntu USN-1265-1 system-config-printer 2011-11-17

Comments (none posted)

tintin: multiple vulnerabilities

Package(s): tintin  CVE #(s): CVE-2008-0671 CVE-2008-0672 CVE-2008-0673
Created: November 21, 2011  Updated: November 23, 2011
Description: From the Gentoo advisory:

Multiple vulnerabilities have been discovered in TinTin++. Remote unauthenticated attackers may be able to execute arbitrary code with the privileges of the TinTin++ process, cause a Denial of Service, or truncate arbitrary files in the top level of the home directory belonging to the user running the TinTin++ process.

Alerts:
Gentoo 201111-07 tintin 2011-11-20

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel remains 3.2-rc2; no 3.2 prepatches have been released in the last week.

Stable updates: the 3.0.10 and 3.1.2 stable kernel updates were released on November 21. Both include another set of important fixes, though these updates are smaller than many.

The 2.6.32.49, 3.0.11, and 3.1.3 stable updates are in the review process as of this writing; they can be expected on or after November 25.

Comments (2 posted)

Quotes of the week

Merging code in-flight, just because I can. What timezone should I use?
-- Linus Torvalds (thanks to Cesar Eduardo Barros)

Fedora is now getting so weird and dependant on magic initrds it's become pretty unusable for kernel development work nowdays IMHO because it's undebuggable.
-- Alan Cox

...our goal remains to provide a desktop which by default works for the 99%, but that we also think that those we do like to tweak their desktops should have an easy method to do so.

Note my wording: 99% excludes kernel hackers. Not sure if we should say something about that explicitly or not.

-- Olav Vitters; prepare for "Occupy LKML" in the near future

Comments (82 posted)

Kernel development news

Drivers as documentation

By Jonathan Corbet
November 22, 2011
As a community, we are highly concerned with the quality of our code. Kernel code is reviewed for functionality, long-term maintainability, documentation, and more. Driver code is not always reviewed to the same degree, but it can be just as important - if our drivers do not work, our kernel does not work. There is an aspect to the long-term maintainability of drivers that could use more attention: the degree to which a driver documents how its hardware works.

One might argue that the job of documenting the hardware falls on whoever writes the associated datasheet. There is some truth to that claim, but, in many cases, only the original author of the driver has access to that datasheet. Those who come after can try to extract documentation from the vendor or to search for clandestine copies hosted on the net. But often the only option is to figure out the hardware from the one source of information that is actually available: the existing driver. If the driver source does not help that new developer, one can argue that the original author has fallen down on the job.

So, if a driver contains code like:

    writel(0xf4ee0815, devp->regs[42]);

it is missing something important. In the absence of the datasheet, there is no way for any other developer to have any clue of what that operation is actually doing.

The problem is worse than that, though; datasheets often omit useful information, obscure the truth, and lie through their teeth. The hardest part of getting a driver to work is often the process of figuring out what the hardware's features and special needs really are. It often seems, for example, that the datasheet is written before the process of designing the hardware begins. As time passes, the understanding of the problem grows, and deadlines loom, hardware engineers start to jettison features that cannot be made to work in time or that, in their sole and not-subject-to-appeal opinion, can be painlessly fixed in software. Updating the datasheet to match the actual hardware never happens.

Thoughtful driver developers will, on discovery of the imaginary nature of a specific hardware feature, add a comment to the driver; that way, no future maintainer has to figure out (the hard way, involving keyboard imprints on the forehead) why the driver does not use a specific, helpful-looking hardware capability.

Then there is the matter of "reserved" bits. There has not yet been a datasheet written that did not contain entries like:

Weird tangential functions register (offset 0xc8)

    Bits    Function
    17      Reserved: do not touch this bit or the terrorists will win

Somewhere, deep within the company, there will be a maximum of two engineers who know that the document is incomplete, but nobody has ever gotten around to updating it. If you can corner one of those people, you can usually get them to admit that this bit should be documented as:

Weird tangential functions register (offset 0xc8)

    Bits    Function
    17      0 = DMA engine randomly locks up
            1 = DMA engine functions as expected
            Default value = 0

A developer who cannot get his hands within range of the neck of at least one of those hardware engineers will likely spend a lot of time figuring out that they need to set the "make it work" bit. This effort can involve reverse-engineering proprietary drivers or, in cases of pure desperation, playing with random bits to see what changes. Once that bit has been located, it is natural for the tired and frustrated developer to quietly set the bit before heading off in a determined effort to eliminate the memory of the entire process through the application of large amounts of beer. A particularly forward-thinking developer might make a note on a printed version of the datasheet for future reference.

But handwritten notes are not usually helpful to the next developer who has to work on that driver. A moment spent documenting that bit:

    #define WTF_PRETTY_PLEASE  0x00020000 /* Always set this or it locks up */

may save somebody else hours of unnecessary pain.

It is tempting to think of a completed driver as being done. But driver code, like other kernel code, is subject to ongoing change. Kernel API changes must be dealt with, problems need to be fixed, and newer versions of the hardware must be supported. Depending on how much beer was involved, the original author may remember that device's peculiarities, but those who follow will not. Everybody would be better served if the driver did not just make the hardware work, but if it also made the reader understand how the hardware works.

Doing so is not usually hard. Define descriptive names for registers, bits, and fields rather than putting in hard-coded constants. Note features that are incompletely described, incorrectly described, or entirely science-fictional. Comment operations that have non-obvious ordering requirements or that do not play well together. And, in general, code with a great deal of sympathy for the people who will have to make changes to your work in the future. Some hardware can never be properly documented because the relevant information is simply not available; see this 2006 article for an example. But what information is available should be made available to others.
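
As a purely illustrative sketch of that advice (the register offset, bit name, and wtf_device structure below are invented for this article, not taken from any real driver), the difference between a magic-number write and one that carries the hardware knowledge with it might look like this:

    #include <linux/io.h>

    /* All names here are hypothetical; they exist only for illustration. */
    struct wtf_device {
        void __iomem *regs;             /* mapped register window */
    };

    #define WTF_CTRL_REG   0xc8         /* "weird tangential functions" register */
    /* Bit 17 is marked "reserved" in the datasheet; in reality the DMA engine
       locks up randomly unless it is set. */
    #define WTF_DMA_SANE   0x00020000

    static void wtf_hw_init(struct wtf_device *devp)
    {
        /*
         * The datasheet claims the power-on default is usable; on real
         * silicon it is not, so always set the "make it work" bit.
         */
        writel(WTF_DMA_SANE, devp->regs + WTF_CTRL_REG);
    }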

Core kernel hackers are occasionally heard to make dismissive remarks about driver developers and the work they do. But driver writers are often given a difficult task involving a fair amount of detective work; they get this task done and make our hardware work for us. Writing drivers that adequately document the hardware is not an unreasonable thing to ask of these developers; they have the hardware knowledge and the skills to do it. The harder problem may be asking driver reviewers to insist that this extra effort be made. Without pressure from reviewers, many drivers will never enable readers to really understand what is going on.

Comments (16 posted)

The pin control subsystem

By Jonathan Corbet
November 22, 2011
Classic x86-style processors are designed to fit into a mostly standardized system architecture, so they all tend, in a general sense, to look alike. One of the reasons why it is hard to make a general-purpose kernel for embedded processors is the absence of this standardized architecture. Embedded processors must be extensively configured, at boot time, to be able to run the system they are connected to at all. The 3.1 kernel saw the addition of the "pin controller" subsystem which is intended to help with that task; enhancements are on the way for (presumably) 3.2 as well. This article will provide a superficial overview of how the pin controller works.

A typical system-on-chip (SOC) will have hundreds of pins (electrical connectors) on it. Many of those pins have a well-defined purpose: supplying power or clocks to the processor, video output, memory control, and so on. But many of these pins - again, possibly hundreds of them - will have no single defined purpose. Most of them can be used as general-purpose I/O (GPIO) pins that can drive an LED, read the state of a pushbutton, perform serial input or output, or activate an integrated pepper spray dispenser. Some subsets of those pins can be organized into groups to serve as an I2C port, an I2S port, or to perform any of a number of other types of multi-signal communications. Many of the pins can be configured with a number of different electrical characteristics.

Without a proper configuration of its pins, an SOC will not function properly - if at all. But the right pin configuration is entirely dependent on the board the SOC is a part of; a processor running in one vendor's handset will be wired quite differently than the same processor in another vendor's cow-milking machine. Pin configuration is typically done as part of the board-specific startup code; the system-specific nature of that code prevents a kernel built for one device from running on another even if the same processor is in use. Pin configuration also tends to involve a lot of cut-and-pasted, duplicated code; that, of course, is the type of code that the embedded developers (and the ARM developers in particular) are trying to get rid of.

The idea behind the pin control subsystem is to create a centralized mechanism for the management and configuration of multi-function pins, replacing a lot of board-specific code. This subsystem is quite thoroughly documented in Documentation/pinctrl.txt. A core developer would use the pin control code to describe a processor's multi-function pins and the uses to which each can be put. Developers enabling a specific board can then use that configuration to set up the pins as needed for their deployment.

The first step is to tell the subsystem which pins the processor provides; that is a simple matter of enumerating their names and associating each with an integer pin number. A call to pinctrl_register() will make those pins known to the system as a whole. The mapping of numbers to pins is up to the developer, but it makes sense to, for example, keep a bank of GPIO pins together to simplify coding later on.
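
A minimal sketch of that registration step, patterned on the pinctrl documentation, might look like the following; the "foo" names are placeholders, and the exact structure fields and return conventions have varied between kernel versions, so treat it as an illustration rather than a reference:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/module.h>
    #include <linux/pinctrl/pinctrl.h>

    /* Enumerate the pins: a number and a name for each ("foo" is a placeholder SOC). */
    static const struct pinctrl_pin_desc foo_pins[] = {
        PINCTRL_PIN(0, "A1"),
        PINCTRL_PIN(1, "A2"),
        PINCTRL_PIN(2, "A3"),
        /* ... one entry per multi-function pin ... */
    };

    static struct pinctrl_desc foo_desc = {
        .name  = "pinctrl-foo",
        .pins  = foo_pins,
        .npins = ARRAY_SIZE(foo_pins),
        .owner = THIS_MODULE,
    };

    static struct pinctrl_dev *foo_pctl;

    static int __init foo_pinctrl_init(void)
    {
        /* Make the pins known to the pin control core. */
        foo_pctl = pinctrl_register(&foo_desc, NULL, NULL);
        if (!foo_pctl)
            return -EINVAL;
        return 0;
    }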

One of the interesting things about multi-function pins is that many of them can be assigned as a group to an internal functional unit. As a simple example, one could imagine that pins 122 and 123 can be routed to an internal I2C controller. Other types of ports may take more pins; an I2S port to talk to a codec needs at least three, while SPI ports need four. It is not generally possible to connect an arbitrary set of pins to any controller; usually an internal controller has a very small number of possible routings. These routings can also conflict with each other; pin 77, say, could be either an I2C SCL line or an SPI SCLK line, but it cannot serve both purposes at the same time.

The pin controller allows the developer to define "pin groups," essentially named arrays of pins that can be assigned as a group to a controller. Groups can (and often will) overlap each other; the pin controller will ensure that overlapping groups cannot be selected at the same time. Groups can be associated with "functions" describing the controllers to which they can be attached. Some functions may have a single pin group that can be used; others will have multiple groups.

There are some other bits and pieces (some glue to make the pin controller work easily with the GPIO subsystem, for example), but the above describes most of the functionality found in the 3.1 version of the pin controller. Using this structure, board developers can register one or more pinmux_map structures describing how the pins are actually wired on the target system. That work can be done in a board file, or, presumably, be generated from a device tree file. The pin controller will use the mapping to ensure that no pins have been assigned to more than one function; it will then instruct the low-level pinmux driver to configure the pins as described. All of that work is now done in common code.
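
On the board side, such a mapping might be written roughly as below, following the pinmux documentation of the time; the controller, function, group, and device names are all placeholders, and the structure was later reworked as the subsystem evolved:

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/pinctrl/machine.h>

    /* Board-specific wiring: which function is routed to which device. */
    static struct pinmux_map __initdata foo_board_pinmux_map[] = {
        {
            .ctrl_dev_name = "pinctrl-foo.0",
            .function      = "i2c0",
            .dev_name      = "foo-i2c.0",
        },
        {
            .ctrl_dev_name = "pinctrl-foo.0",
            .function      = "spi0",
            .group         = "spi0_grp_1",  /* pick one of the possible groups */
            .dev_name      = "foo-spi.0",
        },
    };

    static void __init foo_board_init(void)
    {
        /*
         * Hand the mapping to the pin control core during board setup;
         * conflicting assignments will be rejected there.
         */
        pinmux_register_mappings(foo_board_pinmux_map,
                                 ARRAY_SIZE(foo_board_pinmux_map));
    }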

The pin multiplexer on a typical SOC can do a lot more than just assign a pin to a specific function, though. There is typically a wealth of options for each pin. Different pins can be driven to different voltages, for example; they can also be connected to pull-up or pull-down resistors to bias a line to a specific value. Some pins can be configured to detect input signal changes and generate an interrupt or a wakeup event. Others may be able to perform debouncing. It adds up to a fair amount of complexity which is often reflected in the board-specific setup code.

The generic pin configuration interface, currently in its third revision, attempts to bring the details of pin configuration into the pin controller core. To that end, it defines 17 (at last count) parameters that might be settable on a given pin; they range from the value of the pull-up resistor to be used, through slew rates for rising or falling signals, to whether the pin can be a source of wakeup events. With this code in place, it should become possible to describe the complete configuration of complex pin multiplexers entirely within the pin controller.

The number of pin controller users in the 3.1 kernel is relatively small, but there are a number of patches circulating to expand its usage. With the addition of the configuration interface (in the 3.2 kernel, probably), there will be even more reason to make use of it. One of the more complicated bits of board-level configuration will be supported almost entirely in common code, with all of the usual code quality and maintainability benefits. It is hard to stick a pin into an improvement like that.

Comments (5 posted)

POSIX_FADV_VOLATILE

By Jonathan Corbet
November 22, 2011
Caching plays an important role at almost all levels of a contemporary operating system. Without the ability to cache frequently-used objects in faster memory, performance suffers; the same idea holds whether one is talking about individual cache lines in the processor's memory cache or image data cached by a web browser. But caching requires resources; those needs must be balanced with other demands on the same resources. In other words, sometimes cached data must be dropped; often, overall performance can be improved if the program doing the caching has a say in what gets removed from the cache. A recent patch from John Stultz attempts to make it easier for applications to offer up caches for reclamation when memory gets tight.

John's patch takes a lot of inspiration from the ashmem device implemented for Android by Robert Love. But ashmem functions like a device and performs its own memory management, which makes it hard to merge upstream. John's patch, instead, tries to integrate things more deeply into the kernel's own memory management subsystem. So it takes the form of a new set of options to the posix_fadvise() system call. In particular, an application can mark a range of pages in an open file as "volatile" with the POSIX_FADV_VOLATILE operation. Pages that are so marked can be discarded by the kernel if memory gets tight. Crucially, even dirty pages can be discarded - without writeback - if they have been marked volatile. This operation differs from POSIX_FADV_DONTNEED in that the given pages will not (normally) be discarded right away - the application might want the contents of volatile pages in the future, but it will be able to recover if they disappear.

If a particular range of pages becomes useful later on, the application should use the POSIX_FADV_NONVOLATILE operation to remove the "volatile" marking. The return value from this operation is important: a non-zero return from posix_fadvise() indicates that the kernel has removed one or more pages from the indicated range while it was marked volatile. That is the only indication the application will get that the kernel has accepted its offer and cleaned out some volatile pages. If those pages have not been removed, posix_fadvise() will return zero and the cached data will be available to the application.

There is also a POSIX_FADV_ISVOLATILE operation to query whether a given range has been marked volatile or not.
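
In application terms, the intended usage pattern might look something like the following sketch. The numeric values of the new POSIX_FADV_* constants are placeholders (the real values would come with John's patch), so this illustrates the flow rather than code that runs against any released kernel:

    #define _XOPEN_SOURCE 600
    #include <fcntl.h>
    #include <stdio.h>

    /* Proposed advice values; the numbers here are placeholders. */
    #ifndef POSIX_FADV_VOLATILE
    #define POSIX_FADV_VOLATILE     8
    #define POSIX_FADV_NONVOLATILE  9
    #define POSIX_FADV_ISVOLATILE  10
    #endif

    /* Offer a cached object (a range of an open file) back to the kernel. */
    static void cache_release(int fd, off_t offset, off_t len)
    {
        posix_fadvise(fd, offset, len, POSIX_FADV_VOLATILE);
    }

    /*
     * Take the object back before using it again.  A non-zero return means
     * the kernel discarded pages while the range was volatile, so the cached
     * contents must be regenerated.
     */
    static int cache_reacquire(int fd, off_t offset, off_t len)
    {
        if (posix_fadvise(fd, offset, len, POSIX_FADV_NONVOLATILE)) {
            fprintf(stderr, "cache range was purged; rebuilding\n");
            return -1;
        }
        return 0;   /* cached pages survived intact */
    }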

Rik van Riel raised a couple of questions about this functionality. He expressed concern that the kernel might remove a single page of a multi-page cached object, thus wrecking the caching while failing to reclaim all of the memory used to cache that object. Ashmem apparently does its own memory management partially to avoid this very situation; when an object's memory is reclaimed, all of it will be taken back. John would apparently rather avoid adding another least-recently-used list to the kernel, but he did respond that it might be possible to add logic to reclaim an entire volatile range once a single page is taken from that range.

Rik also worried about the overhead of this mechanism and proposed an alternative that he has apparently been thinking about for a while. In this scheme, applications would be able to open (and pass to poll()) a special file descriptor that would receive a message whenever the kernel finds itself short of memory. Applications would be expected to respond by freeing whatever memory they can do without. The mechanism has a certain kind of simplicity, but could also prove difficult in real-world use. When an application gets a "free up some memory" message, the first thing it will probably need to do is to fault in its code for handling that message - an action which will require the allocation of more memory. Marking the memory ahead of time and freeing it directly from the kernel may turn out to be a more reliable approach.

After the recent frontswap discussions, it is perhaps unsurprising that nobody has dared to observe that volatile memory ranges bear a more than passing resemblance to transcendent memory. In particular, it looks a lot like "cleancache," which was merged in the 3.0 development cycle. There are differences: putting a page into cleancache removes it from normal memory while volatile memory can remain in place, and cleancache lacks a user-space interface. But the core idea is the same: asking the system to hold some memory, but allowing that memory to be dropped if the need arises. It could be that the two mechanisms could be made to work together.

But, as noted above, nobody has mentioned this idea, and your editor would certainly not be so daring.

One other question that has not been discussed is whether this code could eventually replace ashmem, reducing the differences between the mainline and the Android kernel. Any such replacement would not happen anytime soon; ashmem has its own ABI that will need to be supported by Android kernels for a long time. Over years, a transition to posix_fadvise() could possibly be made if the Android developers were willing to do so. But first the posix_fadvise() patch will need to get into the mainline. It is a very new patch, so it is hard to say if or when that might happen. Its relatively non-intrusive nature and the clear need for this capability would tend to argue in its favor, though.

Comments (13 posted)

Patches and updates

Kernel trees

Architecture-specific

Core kernel code

Development tools

Device drivers

Filesystems and block I/O

Memory management

Networking

Security-related

Page editor: Jonathan Corbet

Distributions

Ubuntu, Banshee, and default applications

By Jake Edge
November 23, 2011

There have been rumblings for a few weeks now about the fate of the Banshee music player in the Ubuntu default install. The distribution switched from GNOME's default Rhythmbox music player to Banshee for the 11.04 release, but started discussing switching back for 12.04 LTS at the recent Ubuntu Developer Summit—to the surprise of the Banshee team. The reasons for the switch are largely technical, though there is some dispute about their validity, and there is a fair amount of unhappiness about the decision, not only over the switch itself but also over the way it was made. In any case, the distribution will be switching back to Rhythmbox for 12.04.

One of the biggest complaints is that the Banshee team had no warning that Ubuntu was even considering the switch. As Jo Shields put it in a blog post: "If Banshee was being considered for replacement due to unresolved technical issues, then perhaps it would have been polite to, I don't know, inform upstream that it was on the cards?" He also notes that this is not the first time a project has been blindsided by a UDS discussion to remove a package from the default install. The PiTiVi video editor project had a similar experience back in May with regards to removing that package for 11.10.

After the UDS meeting, Canonical's Ubuntu desktop manager Jason Warner posted a note to the ubuntu-desktop mailing list looking for "further feedback". He outlined some of the problems with Banshee that were brought up at UDS, including its stability, start-up time, resource usage, and the project's responsiveness to Ubuntu-specific concerns.

Ubuntu Banshee maintainer Chow Loong Jin posted a response that addressed each of those concerns, but seemed most puzzled by the lack of responsiveness complaint: "If there were patches that Canonical/Ubuntu wanted in, why haven't I heard about them?". Martin Pitt concurred that Chow and Banshee have been responsive, but he added another handful of concerns about the program, including GTK3 support, the amount of space required to ship Banshee (30M, which includes Mono), ARM support, and a "fuzzy" future for Mono.

Once again, Chow stepped in to address some of those complaints. In particular, there seems to be a disconnect regarding ARM support. Chow points to one bug that has been filed, which is being worked on, and is unaware of other problems. But, based on information from Tobin Davis and Oliver Grawert of the Ubuntu ARM team, there have been numerous problems supporting Mono and Banshee on ARM. Those problems have evidently taken a large portion of the ARM team's time to try to resolve. That, coupled with a perception that Mono is becoming less popular as a platform, has even led to consideration of dropping Mono from the packages that will be officially supported for the LTS as Sebastien Bacher mentions:

We also didn't take any decision on the topic but if mono is having low traction in the linux desktop world nowadays (it was pointed at UDS that we don't see a lot of new projects using mono since f-spot, banshee and tomboy, most new GNOME projects seems to go for vala rather for example) the cost of supporting it officially for 5 years might be higher than the benefits, it's not a made decision of any sort but a topic to discuss and take in consideration there.

The technical complaints seem reasonable, though it appears that some of the stability, resource usage, and start-up time issues are debatable, with some users having lots of those kinds of problems and others not seeing them at all. But the bigger question is why the project wasn't really notified that the problems were serious enough to consider removing Banshee as a default application until after the removal had been proposed, and mostly agreed upon, at UDS. One of the Ubuntu Mono packagers, Iain Lane, was actually present at the UDS, but was evidently never notified that Banshee's removal was under discussion. As Lane put it:

None of this should have come out of a UDS session. If there are such serious issues then they should have been raised months ago.

Lane is reacting to Warner's pronouncement, two weeks after bringing the subject up, that "it is clear the right decision for 12.04 is to make Rhythmbox the default music player". That message, in effect, brings the discussion to a close and makes the switch back to Rhythmbox. Lane and others felt that it was far from "clear" that the switch should be made. Warner clarified that he meant "clear to me", and went on to specify the reasons behind his decision. Most of it comes down to stability:

Most people using the traditional "Linux Desktop" are vocal, though forgiving, when it comes to applications. I firmly believe that mass market users will neither be vocal nor forgiving and they will not tolerate systems they can't trust or feel are unstable. They simply won't use our products and apps.

IMO, the default apps need to be the best foot forward, the showcase for Ubuntu. If a users first experience with our product is poor, we have a problem from which it will be very hard to recover. Rhythmbox, while not pretty or as featureful as Banshee, does the core function more reliably and more stably. And it isn't about number of features or [breadth] of feature scope, but rather how well the application does what it says it does.

For his part, though, Lane also notes the PiTiVi situation as an earlier example, and details the problems he sees in the process:

Most importantly, we need to be having more constant and constructive dialogs with upstreams throughout the whole cycle and not just at application selection time (actually in this case there has been very poor [communication] with upstream even since the discussion). Using the promise of being on or threat of being removed from the install to cudgel projects into the direction you want is not very satisfactory.

Chopping-and-changing doesn't do people any favours either. There should be some commitment from Ubuntu; being the default app brings upstreams a lot of users (which is great), but also increases the support and bug workload (which is not so great). If Ubuntu could be relied on more then upstreams would not be so burdened.

Bacher acknowledges some of Lane's complaints, saying that the "topic has been handled in a pretty suboptimal way". He also agreed that the PiTiVi situation was similar and that the distribution could have done a better job keeping that project in the loop. But he is leery of Lane's "cudgel" characterization:

While it would make sense to tell upstream what we like or dislike about their software, I'm not sure we should try to "use" our position to demand things to be done or fixed for us. I would feel uncomfortable telling any upstream "you should fix those bugs or your software will be out of the CD", while in practice it's true that some issues lead us to decide what to ship or not.

There's also the question of the importance of being in the default install. Even if Mono were to be removed from the packages supported for the five-year LTS period, both it and Banshee (Tomboy, F-Spot, ...) will still be available, but may be less prominent. As Lane pointed out, there are both good and bad points to being in the default install, but it's clear that projects which get removed feel it to be something of a slap in the face. But, as Jono Bacon points out, the way that Ubuntu users discover new software is changing:

While a few years ago apt-get and Synaptic provided a means to install software for us, they lacked the discoverability and ease of use required for many novice users. In addition to this, those tools did not provide a means for a novice to identify what were the best choices for them in the sea of software available for Ubuntu. As such, for software not included in the default install, end users were somewhat in the dark about the best software they could install to meet their needs.

The Ubuntu Software Center changed all of that. Now we have a simple, easy to use facility for browsing software, seeing ratings and reviews to get a feeling for what is best of breed, and installing it with just a click.

The Ubuntu Software Center provides an AppStore-like interface for users to discover new software, so they are less likely to be bound only by what's available in the default install, according to Bacon. However, he does recognize the importance of the default applications and why projects are concerned when they get dropped:

I am not surprised that some consternation occurs when applications or components are proposed to be added or removed from a default Ubuntu installation. Being on the disc provides a sense of validation and acceptance to our upstreams, and provides an incredible amount of visibility to these applications.

In the final analysis, it would seem that Ubuntu could do a much better job of working with the upstream projects that it ships in its default installation. It is mostly just a matter of communication, and this incident (and the PiTiVi situation before it) has raised the visibility of this type of problem. Rather than waiting until (nearly) the last minute, involving the upstream projects as well as the Debian/Ubuntu package maintainers early in the decision-making process is likely to cause a lot less stress and unhappiness. Projects that know about problems that may lead to their eventual removal from the default package set are quite likely to address them; giving them enough time to do so before suddenly tossing them out can only help with that.

Comments (10 posted)

Brief items

Distribution quotes of the week

Does Debian measure its success by the number of installs? The number of maintainers? The number of flamewars? The number of packages? The number of messages to mailing lists? The quality of Debian Policy? The quality of packages? The "freshness" of packages? The length and quality of maintenance of releases? The frequency or infrequency of releases? The breadth of derivatives?

Many of these metrics are in direct tension with one another; as a consequence, the fact that different DD's prioritise all of these (and other goals) differently makes for ... interesting debate. The sort of debate that goes on and on because there is no way to choose between the goals when everyone has different ones. You know the sort of debate I mean :-)

-- Mark Shuttleworth

The above is why I'm constantly encouraging people interested in running for DPL in the future to reach out to me and work on some tasks of the current DPL's TODO list. I swear it is not just a cheap attempt at slavery!. It is rather an attempt at DPL mentoring that could be beneficial: both to give future candidates more awareness of the task, and to reduce the potential downtime when handing over from one DPL to the next.
-- Stefano "Zack" Zacchiroli

Comments (none posted)

Chrome OS Linux

Chrome OS Linux is a free operating system built around the Google Chrome browser. The aim of this project is to provide a lightweight Linux distribution for the best web browsing experience on any x86 PC, notebook or Chromebook. Version 1.7.932 RC is based on openSUSE, the Google Chrome 17 web browser, and includes GNOME 2.32 and an experimental GNOME Shell desktop environment. Chrome OS Linux is an independent project, not to be confused with Google's Chrome OS.

Full Story (comments: 4)

Distribution News

Fedora

Re-appointment to the Fedora Board and reminder of upcoming elections

The Fedora Board consists of five elected seats and four appointed seats. There are two appointed seats available, one is announced before the election and one after. Toshio Kuratomi has agreed to serve in his appointed Board seat for another term. The other appointed seat will be announced after elections close. Elections for the Fedora Board, FAmSCo, and FESCo begin on November 23 and close at the end of the day on December 5.

Full Story (comments: none)

SUSE Linux

SUSE Linux Enterprise Server 10 Service Pack 3 regular support and maintenance has concluded

SUSE has discontinued support for SUSE Linux Enterprise 10 SP3. "This means that SUSE Linux Enterprise Server 10 SP3, SUSE Linux Enterprise Desktop [10] SP3[, and] SUSE Linux Enterprise SDK 10 SP3 are now unmaintained and further updates will only update the corresponding 10 SP4 product variants." SUSE Linux Enterprise 10 SP4 is still maintained and supported. Also, "Long Term Servicepack Support" is still available for SLE 10 SP3.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

People Behind Debian: Mark Shuttleworth

Raphaël Hertzog interviews Mark Shuttleworth about Ubuntu and its relationship with Debian. "Before Ubuntu, we have a two-tier world of Linux: there's the community world (Debian, Fedora, Arch, Gentoo) where you support yourself, and the restricted, commercial world of RHEL and SLES/SLED. While the community distributions are wonderful in many regards, they don't and can't meet the needs of the whole of society; one can't find them pre-installed, one can't get certified and build a career around them, one can't expect a school to deploy at scale a platform which is not blessed by a wide range of institutions. And the community distributions cannot create the institutions that would fix that."

Comments (10 posted)

Lefebvre: Linux Mint 12

Linux Mint developer Clement Lefebvre has some news about Linux Mint 12 "Lisa", which is currently available as a release candidate. "The feedback we got from the RC wasn't as straight forward as it usually is. As expected, the introduction of Gnome 3 is dividing the Mint community. We were delighted to see that MGSE [Mint GNOME Shell Extensions] was well received and that it helped people migrating to Gnome 3. MGSE received a lot of noticeable improvements since and the final release of Linux Mint 12 will come with a Gnome 3 experience that is significantly better than in the RC release."

Comments (3 posted)

What's new in openSUSE 12.1 (The H)

The H has a review of openSUSE 12.1. "While in the package manager it is worth taking a look around the manager's individual categories. For example, the Chromium browser has made it into openSUSE's standard repositories, and the distribution is also the first to include Google's new Go programming language. However, the biggest surprise is to be found in the Desktop category's KDE section, where packages containing the classical KDE 3 desktop have been re-listed. Rather than offering the version maintained by the Trinity project, the packages contain version 3.5.10 of the desktop, which is still released by the KDE project, together with various patches created by the openSUSE team and the Chakra and Trinity projects. One of the patches ensures that the media manager is no longer dependent on HAL."

Comments (1 posted)

Page editor: Rebecca Sobol

Development

Mozilla Popcorn: Enabling interactive video elements

November 22, 2011

This article was contributed by Nathan Willis

Prior to HTML5, web video lived exclusively inside browser plug-ins (like Flash), which put it outside the reach of JavaScript, CSS, and other techniques for interacting with the rest of the surrounding document. Mozilla's Popcorn project is an effort to reverse this by building an event-driven API to hook <video> content into other page elements and local browser settings. Popcorn makes videos fully interactive HTML elements, so that the timeline can both trigger and listen for events. Mozilla declared that the core Popcorn.js library had reached version 1.0 on November 4, and rolled out a set of auxiliary libraries to simplify authorship and to tie videos in to popular web services.

The cornerstone of the Popcorn project is HTML5's HTMLMediaElement (which, despite the heavy emphasis on video, is applicable to audio content as well). In particular, Popcorn relies on the currentTime attribute, which reports the position in the element's timeline. By wrapping a <video> or <audio> element in a Popcorn instance, the page developer can add, remove, and alter the other elements of the Document Object Model (DOM) based on the playback position of the media.

The simplest example is a subtitle track, which is a series of text fragments to be displayed and removed at particular moments in the timeline. But Popcorn builds a more flexible framework on top of HTMLMediaElement, so that developers can trigger custom functions, respond to events, and manipulate the playback of the media itself (such as jumping to particular time codes, pausing/unpausing playback, or changing the playback rate or volume). The code is modular, with base functionality provided in Popcorn.js, and separate "plugins" (arguably a poor choice of terminology, since Mozilla uses the nearly identical term "plug-in" to refer to media-enabling browser add-ons as well) for common use-cases and to link in to particular third-party web services. There are plugins for accessing Twitter, Flickr, and OpenStreetMap, to name just a few.

The core

Popcorn.js defines a Popcorn() JavaScript object, whose only required parameter is the #element_id of an HTMLMediaElement on the page. Primitive playback is controllable with Popcorn.play(), Popcorn.pause(), and Popcorn.currentTime(seconds), with which you can jump to any point in the timeline. The simplest action-triggering functionality is Popcorn.exec(time,callback_function), which merely executes the provided callback_function at the specified time. The exec function is also aliased as Popcorn.cue(), in a nod towards the lingo used by videographers.

Apart from simple time-based triggers, you can use Popcorn.listen(event,callback_function) to bind the callback function to a specified event. Built-in events are provided to handle typical web playback scenarios, such as "play," "pause," "loadstart," "stalled," "volumechange," and so on, or you can define custom events. You can directly trigger an event with Popcorn.trigger(event[,data]), where the data parameter allows you to send a data object to the listener.

The core API also defines pass-through methods for querying and setting basic properties of the media player (the state of the playback controls, the percentage of the media that is buffered, etc.), plus parsers for ancillary formats (including JSON, XML, and external video subtitle tracks), and static methods for utility operations like fetching remote scripts or getting locale and language information from the browser. Popcorn is designed primarily to work with native HTML5 media elements, but the library supports add-in "player" modules to implement compatibility for participating third-party video services, too. As of now there are just two, YouTube and Vimeo, plus a "null player"-like object called Baseplayer, which allows you to bind listeners and trigger events with a timeline that has no media attached.

The rest of Popcorn's functionality is implemented with plugins. This allows you to use the Popcorn build tool to create the smallest possible Popcorn.js to serve up to site visitors. The lowest-level plugin is "code", which allows you to specify a start and end time (in seconds or frames) and run JavaScript code in response. While the core API deals directly with the HTMLMediaElement content, most of the other plugins are designed to act on specific HTML elements elsewhere in the document — such as displaying plain text or images inside of another "target" element, which in the examples is usually a <div>. So-called "special effects" are provided through the applyClass method, which applies a CSS class (or a space-separated list of classes) to a plugin — and thus, by extension, to an element on the page.

The plugins

Although a handful of the existing 1.0 plugins implement utility functions, most are designed to simplify typical interactive tasks needed by video publishers. For example, the subtitle plugin allows you to define subtitles as a straightforward

    myvideo.subtitle({ 
        start: 12,
        end: 24,
        text: "Share your pain with me... and gain strength from the sharing.",
    })
rather than writing text to a page element in separate .exec() actions.

Interestingly enough, some of these plugins are extremely similar — for instance, the footnote plugin also permits you to render text onto an HTML element, with a specified start and end time; the only difference appears to be that subtitle does not require a target element, and will render the text over the video element if none is provided. Mozilla has been developing Popcorn for well over a year, so presumably the initial set of plugins reflects the input of at least part of the web video community. There are plugins designed to implement bottom-of-the-video "news crawls," to tag people (complete with an avatar and link to a profile page), and to list citation and attribution information. For the latter case, imagine a political ad and the brief citations that pop on screen as the damaging quotes about the candidate's opponent are read.

There are also general-purpose plugins, to do things like open arbitrary URIs inside of <iframe>s or to call other JavaScript libraries like Mustache templates or Processing sketches. However, many of the plugins are built for the express purpose of integrating a particular web service into the page. The initial list includes Facebook, LinkedIn, Wikipedia, Lastfm, Flickr, Twitter, Google Maps, and the open source mapping library OpenLayers.

These plugins enable you to display and hide everything from "like" boxes and profiles to maps and Twitter searches. Whether or not any particular plugin seems useful is probably in the eye of the beholder and is likely to be dependent on the API of the service involved. There are examples elsewhere in the Popcorn site where automatically bringing up informational content at the appropriate time really does seem to enhance a video — showing a map of the location being discussed, or related images and real-time Twitter commentary on the subject. On the other hand, simply popping up a static web page does not sound particularly innovative.

Authorship

To the average web developer, Popcorn will probably feel similar enough to existing client-side JavaScript libraries like jQuery to make getting started easy (and indeed, Mozilla refers to the project on the overview page and blog as "the jQuery of open video"). But because Mozilla is keen on attracting non-developers to the project — particularly video professionals — it has also set out to build "user-friendly" add-ons that make building Popcorn-enabled pages less like programming and more like the editing room. The centerpiece of this effort is Popcorn Maker, an in-browser editor that supports drag-and-drop rearranging of video clips and Popcorn primitives. The project describes it as a "creative tool" for "filmmakers, journalists, and creative people of all stripes." It is built on top of an editing abstraction toolkit with the cute-if-not-very-informative name Butter.js.

Popcorn Maker is still alpha quality, but it allows you to place subtitles, footnotes, and images into a project by dropping them onto a timeline, and stretching their start- and end-points out to the desired duration. At the moment, it does not seem to enable editing of the HTML source of the file, which limits where you can place elements, and if you read through the API reference, writing the JavaScript by hand is not much more complicated. One should expect that to change as more of the plugins are ported to Popcorn Maker. A blog post describing the history of the editor suggests that it is still undergoing high-level redesign. The post does emphasize that Popcorn Maker is a page editor, not a non-linear video editor. That is, you do not use Popcorn Maker to cut videos as you would with PiTiVi or Kdenlive — the video has to be completed before uploading to the application. Popcorn Maker is just a tool for crafting the interactivity of the rest of the HTML that surrounds the video at the center.

That distinction may be too fine a detail to get across; I suspect that ultimately, if Popcorn takes off, non-developers will look for a tool that handles both video editing and designing the interactivity to accompany it in-page. After all, once you start thinking about tying interactive content into your final work, it will start affecting the way you edit. Of course, aside from the demo films linked to from the project's site, virtually no one has attempted something similar to Popcorn, but you can already see how designing a video with web interactivity in mind affects the final product. The flashiest example of Popcorn is without question One Millionth Tower, a documentary project that incorporates captioning, layered video, and even animation. The end result is far more than just a video that triggers the display of page events.

As a 1.0 release, Popcorn is complete enough for people to begin designing their own interactive content. The supported third-party services cover the basics, and the home page links to a few other still-in-development add-on projects. The project certainly offers a compelling picture of what DOM-interactive audio/video content can do. Now if we can just get the video codec disputes solved once and for all....

Comments (7 posted)

Brief items

Quotes of the week

We need t-shirts with "I am a user too!"

Frankly, and GNOME is by far not the only one suffering from this, a project like this SHOULD have "open source contributor" as one of their prime audiences (note that I say "contributor" not "developer").

These contributors, like you and me, are the folks that will write bugreports in diff -u form. Heck we're the folks that conquer your bugzilla and FILE the bug in the first place. We're the folks that give you the translations you want. We're the folks who very vocally tell you what is wrong (and sometimes we even tell you what is right) with your software. We're the folks who advocate your software to our tech-noob friends. We're the folks that look at your code sometimes, to find security issues and tell you nicely about it before we go public. We're the folks who form the pool out of which you're going to be fishing your next developer set. We are the 99%^Wone percent, I will grant you that. But we are the one percent you as project NEED to get better in the long term, we are the one percent that your project needs to survive.

-- Arjan van de Ven (in the comments)

It is my belief that we are, right now, in the middle of a very large evolution in the ecology of open source. The language of contribution has infected a new generation of open source contributors. Much of the potential first imagined by open source pioneers is being realized by high school kids on a daily basis who contribute effectively with less effort than has ever been required.

The reason I am so convinced of the importance of this change is so simple it took me nearly a year to identify it. While the ethos of Apache may have been "Community over Code" it required those in the community to understand and internalize that ethos for it to be fully realized. Social problems became political problems because the ethos had to be enforced by the institution.

The new era, the "GitHub Era", requires no such internalization of ethos. People and their contributions are as transparent as we can imagine and the direct connection of these people to each other turn social problems back in to social problems rather than political problems. The barrier to getting a contribution somewhere meaningful has become entirely social, in other words it is now the responsibility of the community, whether that community is 2 or 2000 people.

-- Mikeal Rogers

Comments (6 posted)

Cappuccino 0.9.5

Cappuccino is a framework for the creation of web-based applications. New features in the 0.9.5 release include vanishing scrollbars, improved documentation, a new "popover" widget, tooltips, a predicate editor, and more.

Comments (none posted)

The first GNOME Boxes release

GNOME Boxes is, according to this blog post by one of its authors, "designed to be the easiest way to use or connect to applications running on another Windows, Mac, or Linux system. Whether the system is virtual and local, a home computer you need to access from the road, or a centrally hosted corporate login - we'll get you there." The first release is now available; see this post for more information and screenshots. (Thanks to Paul Wise).

Comments (41 posted)

The Journal - a proposed syslog replacement

Lennart Poettering and Kay Sievers discussed their concept of the "journal" at the 2011 Kernel Summit; now they have posted a detailed document describing how they think their syslog replacement should work. "Break-ins on high-profile web sites have become very common, including the recent widely reported kernel.org break-in. After a successful break-in the attacker usually attempts to hide his traces by editing the log files. Such manipulations are hard to detect with classic syslog: since the files are plain text files no cryptographic authentication is done, and changes are not tracked. Inspired by git, in the journal all entries are cryptographically hashed along with the hash of the previous entry in the file. This results in a chain of entries, where each entry authenticates all previous ones. If the top-most hash is regularly saved to a secure write-only location, the full chain is authenticated by it. Manipulations by the attacker can hence easily be detected." The plan is to get an initial implementation into the Fedora 17 release.
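
The hash-chaining scheme described in that quote is simple to illustrate. The sketch below (using OpenSSL's SHA-256 purely for demonstration, with made-up example entries; it says nothing about the Journal's actual on-disk format) chains each entry to the hash of its predecessor, so that a safely stored copy of the final hash vouches for the whole sequence:

    /* Sketch of the hash-chaining idea only.  Build with: cc chain.c -lcrypto */
    #include <openssl/sha.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *entries[] = {
            "example entry one",
            "example entry two",
            "example entry three",
        };
        unsigned char prev[SHA256_DIGEST_LENGTH] = { 0 };   /* chain anchor */

        for (size_t i = 0; i < sizeof(entries) / sizeof(entries[0]); i++) {
            SHA256_CTX ctx;
            unsigned char hash[SHA256_DIGEST_LENGTH];

            /* Each entry is hashed together with the previous entry's hash... */
            SHA256_Init(&ctx);
            SHA256_Update(&ctx, prev, sizeof(prev));
            SHA256_Update(&ctx, entries[i], strlen(entries[i]));
            SHA256_Final(hash, &ctx);
            memcpy(prev, hash, sizeof(prev));

            printf("%s\n  chained hash: ", entries[i]);
            for (int j = 0; j < 8; j++)
                printf("%02x", hash[j]);
            printf("...\n");
        }
        /*
         * ...so anyone holding a trusted copy of the final hash can detect
         * later tampering with any earlier entry.
         */
        return 0;
    }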

Comments (253 posted)

PyPy 1.7 released

Version 1.7 of the PyPy Python interpreter is out. As is usual, much of the work in this release is focused on performance. "The main topic of this release is widening the range of code which PyPy can greatly speed up. On average on our benchmark suite, PyPy 1.7 is around 30% faster than PyPy 1.6 and up to 20 times faster on some benchmarks." There is also improved (but incomplete) stackless support and improved (but still incomplete) NumPy support.

Full Story (comments: none)

Newsletters and articles

Development newsletters from the last week

Comments (none posted)

News from the GNOME Outreach Program for Women

GNOME has put out a press release that highlights the accomplishments of seven women in the Google Summer of Code program for GNOME along with eight participants in GNOME Outreach Program for Women internships. "The accomplishments of the women who participated in Google Summer of Code this year are impressive. For example, Nohemi Fernandez implemented a full-featured on-screen keyboard for GNOME Shell, which makes it possible to use GNOME 3.2 on tablets. Raluca Elena Podiuc added the ability to create an avatar in Empathy with a webcam. Srishti Sethi created four activities for children to discover Braille for the GCompris educational software. [...] There were also eight women who participated in the GNOME Outreach Program for Women internships during the same time period as Google Summer of Code. Five of them worked on documentation, creating new topic-based help for the core desktop, as well as for the Accerciser accessibility tool, Vinagre remote desktop viewer, Brasero CD/DVD burner, Cheese webcam application, and GNOME System Monitor."

Comments (1 posted)

Page editor: Jonathan Corbet

Announcements

Brief items

Koha creators asking for help in trademark dispute

Koha is a free library management system created by the Horowhenua Library Trust in New Zealand. This software has been the subject of an ongoing fight with a US company called LibLime, which seems to want to take the software proprietary; LWN reported on this dispute in 2010. Now the Horowhenua Library Trust is asking for help; it seems that LibLime now thinks it is entitled to a trademark on the Koha name in New Zealand. "The situation we find ourselves in, is that after over a year of battling against it, PTFS/Liblime have managed to have their application for a Trademark on Koha in New Zealand accepted. We now have 3 months to object, but to do so involves lawyers and money. We are a small semi rural Library in New Zealand and have no cash spare in our operational budget to afford this, but we do feel it is something we must fight."

Comments (16 posted)

Articles of interest

Linux Foundation Monthly Newsletter: November

The November edition of the Linux Foundation newsletter covers the OpenMAMA Project, a long-term stable kernel initiative to support consumer electronics makers, a paper titled "Making UEFI Secure Boot Work with Open Platforms", keynotes for the Automotive Linux Summit, new features for the Yocto project, and several other topics.

Full Story (comments: none)

Interview with Andrew Tanenbaum (LinuxFr.org)

LinuxFr.org is carrying an interview [French] with Andrew Tanenbaum (English version). In it, he has some criticisms of Linux (and Linus Torvalds) as well as the GPL, monolithic kernels, and so on. Standard Tanenbaum fare, along with the news that he has received a grant to commercialize MINIX 3 and that it will be ported to ARM starting in January. "The reason MINIX 3 didn't dominate the world has to do with one mistake I made about 1992. At that time I thought BSD was going to take over the world. It was a mature and stable system. I didn't see any point in competing with it, so I focused MINIX on education. Four of the BSD guys had just formed a company to sell BSD commercially. They even had a nice phone number: 1-800-ITS-UNIX. That phone number did them and me in. AT&T sued them over the phone number and the lawsuit took 3 years to settle. That was precisely the period Linux was launched and BSD was frozen due to the lawsuit. By the time it was settled, Linux had taken off. My mistake was not to realize the lawsuit would take so long and cripple BSD. If AT&T had not brought suit (or better yet, bought BSDI), Linux would never have become popular at all and BSD would dominate the world."

Comments (141 posted)

Open Source and the Open Road, Part 1 (Linux Insider)

Linux Insider looks at the role of Linux in the automotive industry. "One key factor driving the decision on what operating system car makers will use for this new generation of connected cars is uniqueness. Car makers want to differentiate their products the same way Apple has done with its smartphone technology, said [Peter Vescuso, executive vice president of Black Duck Software]. That same driving force exists with consideration for the Linux subset, Android. The same dynamics that thrust the use of Android into the mobile device market could have a huge impact on the automotive industry, he noted."

Comments (none posted)

Education and Certification

LPI Announces new Master Affiliate for the UK and Ireland

The Linux Professional Institute (LPI) has announced a new Master Affiliate for the UK and Ireland. "LPI UK will be managed and operated by TDM Open Source Services of Worcestershire, England. 'LPI has been working with key people at TDM Open Source Services for some time and welcomes this new partnership as they pursue Linux professional mentorship and apprenticeship programs both within the UK and the European Union. TDM will be an ideal addition to our network of European affiliates who are aggressively promoting the professional adoption of Linux and Open Source Solutions,' said Jim Lacey, president and CEO of LPI. Mr. Lacey noted in particular TDM's innovative work in providing for professional e-learning Linux apprenticeships which incorporate IT training towards LPI certification."

Full Story (comments: none)

Upcoming Events

Events: December 1, 2011 to January 30, 2012

The following event listing is taken from the LWN.net Calendar.

December 2 - December 4: Debian Hildesheim Bug Squashing Party (Hildesheim, Germany)
December 2 - December 4: Open Hard- and Software Workshop (Munich, Germany)
December 4 - December 9: LISA '11: 25th Large Installation System Administration Conference (Boston, MA, USA)
December 4 - December 7: SciPy.in 2011 (Mumbai, India)
December 27 - December 30: 28th Chaos Communication Congress (Berlin, Germany)
January 12 - January 13: Open Source World Conference 2012 (Granada, Spain)
January 13 - January 15: Fedora User and Developer Conference, North America (Blacksburg, VA, USA)
January 16 - January 20: linux.conf.au 2012 (Ballarat, Australia)
January 20 - January 22: Wikipedia & MediaWiki hackathon & workshops (San Francisco, CA, USA)
January 20 - January 22: SCALE 10x - Southern California Linux Expo (Los Angeles, CA, USA)
January 27 - January 29: DebianMed Meeting Southport2012 (Southport, UK)

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2011, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds