Back in 2008, the Emacs project was looking to move away from CVS and onto a new version control system. At that time, the three main distributed version control system (DVCS) choices, Git, Mercurial, and Bazaar, seemed roughly equivalent, so Richard Stallman chose Bazaar because it was a GNU project. That choice was always controversial, but Stallman has been steadfast in his support for Bazaar—until now.
The topic of switching has come up a number of times in the intervening years, but a decision was always deferred, usually because Stallman was unconvinced that Bazaar (aka bzr) was a demonstrably worse choice than the others. But his opinion seems to have changed. In response to a recent call for a switch to Git from Eric S. Raymond, Stallman indicated a different view:
The last time the subject of switching to Git came up was in March 2013, when John Wiegley broached the subject. He mentioned major Bazaar bugs that had remained unfixed for years, some of which affected Emacs directly. That problem is "significant enough that I think it justifies us switching over to Git all by itself". He addressed his argument to Stallman, who replied that he had contacted the maintainer about a specific bug (also reported here) and was hopeful that a solution would be found soon.
The problem was eventually solved in Bazaar 2.6, which was released in July 2013. But it had originally been reported in 2011 and evidently required a poke from Stallman to get it fixed—not good indicators of a lively project. In addition, as Karl Fogel pointed out, the 2.6 release process got stuck between a beta that was released in July 2012 and a planned full 2.6 release in August of that year. It was still languishing in that state in March 2013, without any news or updates to the status or plans, Fogel said. His internal "project health meter [...] is hovering down near the low end of the dial". The release delay is particularly worrisome:
At that time, Stallman was still hopeful that the bugs could get fixed and, furthermore, that it would be shown that Bazaar was being maintained. Several in the thread were convinced that even if the bugs got fixed at Stallman's behest, it would not indicate that the project was actively maintained. But, Stallman was adamant that the Bazaar maintainers be given some time:
In the intervening ten months or so, Stallman has evidently come around to recognize that the Bazaar project is, sadly, not keeping up with the maintenance that the code requires. His declaration in response to Raymond set off an immediate call by Fogel to move to Git "now that RMS [Stallman] has dropped the bzr requirement" though Fogel did recognize that there might be other DVCS preferences.
Raymond's original message offered a few reasons for moving away from Bazaar, for example "escaping the gravitational pull of bzr's failure" and making Emacs development friendlier for new hackers who wouldn't have to learn a different DVCS than they are used to. He also is in the Git camp, albeit reluctantly:
As part of the message, Raymond offered to help with any technical hurdles in converting from Bazaar to Git, but it turns out that is not really needed. Andreas Schwab maintains a Git mirror of the Emacs Bazaar repository that has already handled the cleanups and other problems that Raymond expected to find in the conversion process. So it would seem that migrating is largely a matter of switching to using Git and pointing it at the existing repository—at least from a technical perspective.
The only voice in support of staying with Bazaar was provided by Eli Zaretskii. In a sub-thread, he outlined his reasons for liking Bazaar and, perhaps more importantly, for disliking Git. Other than Zaretskii, though, little support for staying with Bazaar was heard. Some said they liked Bazaar far better than Git, but would be concerned if Emacs were to stick with an unmaintained VCS.
There is another alternative, though, of course: Mercurial (aka hg). Jordi Gutiérrez Hermoso raised that possibility. Part of his reasoning is that Mercurial is more aligned with the GNU project philosophically—its GPLv2+ license is more in line with GNU aims than Git's GPLv2-only license, for example. It is, he said, not really a technical argument, but rather a social one.
Several felt like they had heard some of that before: back in 2008 when the switch to Bazaar was made. While support for Mercurial seems a bit higher than that for Bazaar, and Mercurial is much more actively maintained, it suffers from one of the same problems as Bazaar: it isn't Git. Like it or not—and many of the biggest proponents of moving to Git did not like it—Git has a larger mindshare, bigger developer community, and is more popular than Mercurial. Git reasonably fills the hole, even if it is not the best social match. As David Kastrup put it:
The best social match was Bazaar when it was chosen, and folks are leery of making that mistake again. Stephen J. Turnbull, who maintains XEmacs (which uses Mercurial), described at length why he thinks Git is the right choice for Emacs:
Fogel also piled on with a handful of bullet points that explained Git's superiority—without really knocking Mercurial. So a switch to Git for Emacs would seem to be coming, even though that choice seems somewhat reluctantly made. How soon Emacs will switch is up in the air; Raymond's call for an immediate switch has been rebuffed by maintainer Stefan Monnier. There may not be a huge amount to do, but there are steps that need to be taken before a switch can be made. In particular, the 24.4 release should be completed before switching to Git, Monnier (and others) said.
Switching to a new VCS should clearly not be taken lightly. The Emacs project didn't, either now or in 2008, but there were, perhaps, additional considerations that were not looked at when the switch to Bazaar was made. It might have been difficult to predict the fate of the Bazaar project back then but, given where things are today, that fate will make Bazaar an unlikely choice for any new projects. Unfortunately, Mercurial may also be something of a casualty of Git's success (and popularity). One hopes not, as there should be room for more than one open-source DVCS.
Mozilla unveiled the first Firefox OS phones less than one year ago, but already the browser-maker is branching out into other classes of device. In a series of blog posts the first week of January, Mozilla announced that the lightweight, Web-centric platform would soon be available on smart TVs, tablets, and desktops, as well as on higher-powered smartphone hardware, and that the organization would be starting a developer program intended to accelerate development of the OS.
The first blog post was from Jay Sullivan, Mozilla's chief operating officer (COO), on January 6. He noted first that at the end of 2013, three different Firefox OS phone models were available in fourteen separate markets. Those numbers, he said, exceeded Mozilla's expectations. But, he continued, while Mozilla itself will continue to focus its Firefox OS efforts on "developing markets"—where supporting lower-end phone hardware has the biggest impact—hardware vendors have expressed interest in deploying it elsewhere. The first such example is phone vendor ZTE (who released the Firefox OS developer phones in 2013); Sullivan said to expect the announcement of higher-end Firefox OS phones within the week.
Targeting a high-end device is indeed a deviation from Firefox OS's initial plan, which sought to pair up the lower overhead of a minimal OS running only Web applications with less expensive hardware. But an even bigger departure is Panasonic's announcement that it would be using Firefox OS for a line of smart TVs.
Panasonic's press release is (as is typical) short on details, but says that "basic functions, such as menus and EPGs (Electronic Program Guide) which are currently written as embedded programs, will be written in HTML5, making it possible for developers to easily create applications for smartphones or tablets to remotely access and operate the TV." The press release also says that Panasonic is partnering with Mozilla to develop the products, which is undoubtedly good news to those who are wary of Mozilla's historic dependence on Google's search-engine placement fee as the organization's only significant source of revenue.
The second blog post addressing Firefox OS's next steps appeared (without a byline) on the official Mozilla Blog, shortly after Sullivan's. It repeats the information about Panasonic's smart TV plan and ZTE's higher-end phones, adding the detail that the new ZTE phones will include "dual core options like the Open C and Open II" (which are reputed to be ZTE product names).
Perhaps more interestingly, the post also announces that Mozilla is working with motherboard manufacturer VIA on a "nettop"-style desktop device powered by Firefox OS. The device in question comes from VIA's "APC" line, which initially marketed a $49 motherboard designed to run Android on the desktop. The new revision includes a bit more branding—a newer motherboard called the APC Rock is available based on the ARM Cortex-A9 processor, as is the APC Paper, which includes the same motherboard inside a diminutive case that looks like a hardbound book (down to the book-like cardboard cover). No word, so far, on anything called Scissors. VIA's own press release about the product includes a link to the source code for the Firefox OS image shipped on the Paper and Rock, hosted at GitHub, and notes that developers who fix known issues (tagged with the "Free APC" label in the GitHub repository's issue list) will be rewarded with a free device.
The final nugget of Firefox OS expansion news is Mozilla's own contributor program, which was outlined on the Mozilla Hacks blog. The program is aimed at adapting Firefox OS for the tablet form factor. The initial tablets (which will be seeded to accepted developers) will be 10-inch devices built by Foxconn. Sign-up is not yet open for the program, but will be announced later on Mozilla Hacks. Notably, the post specifies that not just core Firefox OS contributors, but other participants including localization and translation contributors and bug fixers, will all be accepted.
Expanding to tablets, TVs, and desktops all at once certainly sounds like rapid growth, perhaps even to the point where one might worry that the Firefox OS team could be spreading itself a bit too thin if it is not careful. On the other hand, some of these form factors are arguably a more natural fit for the Firefox OS model—where all applications are implemented in HTML5 and the rest of the operating system is bare bones—than they are for a full-fledged, traditional distribution.
Take smart TVs, for instance: despite the name "smart," the vast majority of the apps and services that run on current smart TV products are pretty minimal, concerned as they are with pushing content one way from a service provider directly to the screen. Rendering a video stream is usually handled by dedicated hardware, most apps do little but perform authentication and provide a content-selection UI, and generally only one app is run at a time—all factors that argue in favor of a lightweight OS. The South Korean electronics manufacturer LG, for one, licensed webOS (which is similarly low on resource consumption and tailored for HTML5 apps) from HP in February 2013, and has been shipping webOS smart TVs since. Tablets are much more similar to phones of course, but there, too, there is proven demand for less-interactive apps. As much as developers may (rightly) bemoan the consumption-only model that Apple has pushed with the iPad, there is little doubt that most consumer tablets are used primarily for accessing web services, read-only apps, and other computationally undemanding tasks.
VIA's Rock and Paper are certainly an oddity—even more so when one considers the cardboard case—but that does not mean that they will be unsuccessful. When Firefox OS was first announced, a lot of commentators dismissed it as untenable, particularly when compared to the smartphone juggernauts of iOS and Android. But the project has survived, is shipping, and even seems to be growing. That is good news for fans of web technology, as well as for fans of Mozilla, since success with Firefox OS will presumably enable the project to undertake still more work in the months and years to come.
On January 6, "press day" at the Consumer Electronics Show (CES) in Las Vegas, Google announced the Open Auto Alliance (OAA), a new consortium of companies with the mission of improving Android for use in cars. That makes OAA the third industry coalition focused around getting Linux into automobiles (after the GENIVI Alliance and Automotive Grade Linux), which complicates the picture somewhat for fans of free software—particularly since there are several companies that participate in more than one group.
The official Android blog carried the initial announcement, which points readers to the OAA site, where a lengthier press release can be found. The announcement of an Android-in-cars effort had been anticipated for several weeks; in December The Wall Street Journal predicted a partnership with Audi (the original article is paywalled, but others reference it), a prediction buoyed by Audi's adoption of Google Earth for its existing in-vehicle infotainment (IVI) systems.
Indeed, Audi was the most visible carmaker in the OAA announcement, and it later demonstrated an Android tablet that will ship with some of its future cars. But OAA surprised many onlookers who had expected a bilateral deal between Google and Audi; in particular, the alliance includes other automakers—General Motors, Honda, and Hyundai—as well as graphics chip vendor NVIDIA.
The announcement itself is light on detail; it notes that "there’s still an important device that isn’t yet connected as seamlessly to the other screens in our lives – the car" and promises "accelerating auto innovation with an approach that offers openness, customization and scale." As for the form this innovation takes, the announcement notes that a lot of people already take their Android phone or tablet with them in the car, but that it is "not yet a driving-optimized experience."
Wouldn't it be great, the announcement says, to "bring your favorite apps and music with you, and use them safely with your car's built-in controls and in-dash display." Google and the OAA will "enable new forms of integration with Android devices, and adapting Android for the car," the announcement says. The press release is similar in tone, albeit augmented with a quote from each of the companies mentioned.
Strictly speaking, though, none of that text actually says that OAA will be adapting Android to run as the operating system of the IVI system. In fact, it sounds much like the phone/IVI-head-unit integration already found in many current cars: the phone can be paired to the head unit with Bluetooth or connected over USB, then use the car's dash display, audio system, and inputs to control applications running on the phone. The OAA site FAQ does say that the group is "also developing new Android platform features that will enable the car itself to become a connected Android device," but puts the priority on better integration with existing Android devices.
On January 7, Audi previewed an Android tablet designed for use specifically inside the car. It connects to the car's IVI system over WiFi, and offers controls for the entertainment system and a digital instrument cluster display.
OAA's initial focus, therefore, seems to be on improving the tethering experience between Android handheld devices and IVI units—perhaps making IVI tethering a built-in feature, rather than requiring a custom app for each car, as is largely the case today. The announcement and the site also highlight driver safety and an "experience" optimized for driving, which might imply some sort of "car mode" intended to reduce the odds of distracting the user behind the steering wheel. There are also IVI products that implement Android app support by running Android inside a sandboxed virtual machine, so it is possible that Google is interested in improving that experience.
Nevertheless, a lot of the CES coverage given to the OAA announcement focused on the prospect of Android-powered IVI head units; perhaps that is simply the more exciting side of the equation, or perhaps the terse announcement and newly minted OAA site simply do not provide enough detail.
On the other hand, if the OAA is primarily focused on improving handheld-device tethering and integration, that would explain several peculiarities surrounding the announcement. For starters, GM, Honda, and Hyundai are also members of the GENIVI Alliance, which is focused on designing a Linux-based IVI platform. GENIVI's chief deliverable is its compliance program, which certifies products against a fairly detailed specification. Some bits of GENIVI's specification are "functional" requirements that specify only an API, but others mandate specific components—including PulseAudio, GStreamer, FUSE, and systemd—for which Android has specific alternatives.
Focusing on device integration also explains the statement in the press release from GM's Mary Chan: "We see huge opportunities for the Android platform paired with OnStar 4G LTE connectivity in future Chevrolet, Buick, GMC and Cadillac vehicles." The wording in a press-release quote is not accidental; GM is being careful to say that it looks forward to seeing Android work with GM's own OnStar platform. That makes sense; it would, after all, be quite a surprise for GM (or any other carmaker) to put financial resources toward developing two separate IVI platforms at the same time.
On the other hand, there are still several lingering questions about OAA. First, there is the absence of several automakers who already ship an Android-based IVI platform in their vehicles: Renault (with its R-Link system, which includes a full-fledged "app store"), Kia (with its Uvo platform), and Volvo (with Sensus). There is also the question of whether or not Audi is (as was predicted) adopting Android as its own IVI OS. Audi's own IVI platform, called Multi Media Interface (MMI), is based on QNX. The tablet demonstration, notably, did not indicate that Audi was migrating its head units to Android.
For developers, the biggest unanswered question remains just what Google has in mind to improve the in-car Android experience. There is an existing standard called MirrorLink, for instance, that is devoted entirely to tethering phones to IVI head units. MirrorLink is maintained by yet another industry alliance, the Car Connectivity Consortium (CCC), and it already supports Android. Direct WiFi connectivity like that demonstrated in the Audi tablet is also existing technology, for instance in the Wi-Fi Alliance's Miracast standard.
Using an Android device to control the IVI system in the car, as in Audi's demonstration, is more original, but the implementation challenge there is primarily one of work on the IVI unit side, rather than changes to Android itself on the device. That is, the Android tablet does not need new APIs just because the app it runs is connecting to a vehicle.
The biggest mystery about the OAA announcement, however, is why Google felt the need to start an industry consortium to improve Android for automotive usage, rather than simply joining the CCC, GENIVI, or other groups. Many CES bloggers (and blog commenters) drew immediate comparisons to Google's Open Handset Alliance (OHA), the organization Google launched in 2007 as the nominal vendor-neutral alliance backing Android. Although it is mostly forgotten now, OHA was initially touted as a collaborative project focused on the principle of improving the smartphone world through open source, of which Android was merely the first joint project. In practice, though, OHA is essentially just Google's partner program, and Android today is developed within Google and then delivered to hardware vendors.
And whether or not joining OHA is beneficial to Android device-makers is a matter of debate. Google has been criticized in the past for tightly controlling what OHA device makers are allowed to do, and there are precious few device makers building products on the Android source code without participating in Google's official partner program. R-Link and Uvo are both heavily customized forks of Android, so perhaps one of the reasons for launching OAA is to rein in incompatible Android derivatives in favor of an "official" flavor. The OAA announcement and press release both invite other companies to join; time will tell whether existing automotive Android system builders will take the offer.
GENIVI's community manager Jeremiah Foster posted some brief comments about the announcement on Google+. Although he welcomes Google's foray into the automotive space, he bemoans the lack of detail. Ultimately, he says, "this seems to be much more of a reactive, defensive approach which will likely fracture the marketplace a bit more rather than rally it behind a common standard." From the mobile side, Foster makes a good point about the defensive uses of OAA. Apple announced its own "iOS in the Car" initiative in June 2013, aiming to provide access to iPod and iPad devices from IVI head units.
As ars technica (and many others) have noted in the past, Google's real interest in maintaining control over Android is not in preventing competing forks of the OS itself, but in building business for the services it integrates, from search to GMail to Hangouts. If nothing else, Google taking an active interest in the burgeoning IVI space indicates that the company wants to be sure users can still access their Google accounts when behind the wheel. Precisely what effect that will have on the products released by carmakers and automotive suppliers remains to be seen.
There are a number of free software cryptographic libraries out there, but they tend to be focused on things like performance and being full-featured, rather than other attributes like readability or auditability. But the TweetNaCl project [PDF] is targeting its efforts at a simple, easy-to-read library that still provides strong cryptography and protects adopters from timing attacks. In fact, as the name might indicate, TweetNaCl is so small that it was sent out as 100 tweets on Twitter.
TweetNaCl is an outgrowth of the NaCl project (NaCl stands for "Networking and Cryptography library" and is pronounced "salt"), which is an effort to provide "all of the core operations needed to build higher-level cryptographic tools". It does so by providing a selected set of cryptographic algorithms with few or no options (e.g. key size), rather than take the toolkit approach of OpenSSL and other libraries that provide many different algorithms and options. The NaCl core team consists of Daniel J. Bernstein (djb of qmail and other networking-and-security tool fame), Tanja Lange, and Peter Schwabe. NaCl and the cryptography it implements are described at some length in a paper [PDF] by Bernstein.
TweetNaCl takes the NaCl API, some 25 functions that can be called by applications, and implements them in readable C code. The optimizations, assembly code, and other performance shortcuts taken by NaCl are gone.
Given that 100 tweets could contain at most 14,000 characters, it won't come as a surprise that tweetnacl.c (the source can also be found in the paper or slightly rearranged in this GitHub repository) comes in at 13,438 bytes in length. That is the size of the compressed version of the code (so it could fit in 100 tweets); the human-readable version is 809 lines and 16,621 bytes—still quite manageable.
According to the TweetNaCl paper (which was written by the core NaCl team plus Wesley Janssen), NaCl is "rapidly becoming the crypto library of choice for a new generation of applications". That is the reason it was chosen as the API for the simple, auditable cryptographic library that became TweetNaCl. The idea is that TweetNaCl is "short enough and simple enough for humans to audit against a mathematical description of the functionality in NaCl" such as that in Bernstein's NaCl paper. That doesn't complete the auditing process, though, as the TweetNaCl paper warns: "Of course, compilers also need to be audited (or to produce proofs of correct translations), as do other critical system components."
The readability of the code affects its performance somewhat, but the claim is that TweetNaCl is fast enough for typical applications that just need to do some low-volume cryptographic operations. So an application project could potentially audit the correctness of the code it uses for cryptography, which is something that is difficult or impossible to do with other libraries. In addition, the NaCl high-level API makes it difficult to choose a poor combination of algorithms that lead to bad security:
TweetNaCl also follows NaCl's lead in avoiding timing-based attacks. It does so by not having any branches or array indices that depend on secret data. It is also thread-safe and does no dynamic memory allocation. It is completely compatible with NaCl and has been verified using the NaCl test suite.
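The branch-free style can be illustrated with a constant-time byte-string comparison. This is a hypothetical Python sketch of the idea, not code from either library (TweetNaCl's C equivalents are its crypto_verify_16/crypto_verify_32 functions), and Python cannot give real constant-time guarantees; the point is the structure: instead of returning early at the first mismatch, which leaks through timing how many leading bytes matched, all differences are OR-ed together and checked only at the end.

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without data-dependent branches.

    A naive early-return loop (or `a == b`) can leak, via timing, how
    many leading bytes matched; here every byte is always examined.
    """
    if len(a) != len(b):      # lengths are assumed to be public
        return False
    acc = 0
    for x, y in zip(a, b):
        acc |= x ^ y          # nonzero iff any pair of bytes differs
    return acc == 0

print(ct_equal(b"secret-mac", b"secret-mac"))  # True
print(ct_equal(b"secret-mac", b"secret-mad"))  # False
```

The same pattern is what makes MAC verification safe against attackers who probe a server with guessed authenticators and measure response times.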
The algorithms provided by TweetNaCl (and, of course, NaCl) are generally ones that Bernstein has published including the Salsa20 stream cipher family, the Poly1305 message authentication code (MAC), and the Curve25519 Diffie-Hellman key exchange. It also uses SHA-512 for cryptographic hashing.
The code for both TweetNaCl and NaCl has been placed into the public domain, so they are freely available. It will be interesting to see if projects start to use TweetNaCl, or at least provide it as an option. NaCl is used in a number of recent applications including BitTorrent Live and DNSCrypt; it is destined for the next generation of the Tor protocol as well. Replacing NaCl with a compatible, auditable library seems like it might be a good option, especially for Tor.
The fact that TweetNaCl is small and auditable doesn't mean that the audit work has been done, however. It would be nice to see some project or projects take on the auditing of TweetNaCl and publicly report the results. A reasonably careful project would not rely on another project's audit alone, of course, but if enough projects do some auditing, some level of trust in the code will be built. That would lead to quite a different situation than we have with today's cryptographic applications.
It doesn't require leaking classified information. Nor does it violate the law. To pull off the quarter-Snowden with a twist, which requires even less than a quarter of Snowden's courage, an NSA employee need only resign their position, seek out a trustworthy journalist of their choice, and announce that while they aren't at liberty to reveal any state secrets, they believe that Congress ought to rein in the NSA immediately. "If Senators Dianne Feinstein and Ron Wyden, who are permitted to see classified information, are listening," the staffer could say, "I'd like to brief them on my concerns." At least one of those Senate Intelligence Committee members will take the plea seriously.
The quarter-Snowden with a twist requires giving up a lucrative, intellectually challenging job during a time when the economy continues to be slow. But it is the right thing to do. And as far as patriotic sacrifices go, it is far less burdensome than the price many have paid.
TAO is retail rather than wholesale.
That is, as well as TAO works (and it appears to work quite well indeed), they can't deploy it against all of us – or even most of us. They must be installed on each individual target's own equipment, sometimes remotely but sometimes through "supply chain interdiction" or "black bag jobs". By their nature, targeted exploits must be used selectively. Of course, "selectively" at the scale of NSA might still be quite large, but it is still a tiny fraction of what they collect through mass collection.
I blame our industry. Computing is such a lousy experience that you can't distinguish normal operation from enemy action.
Created: January 3, 2014; Updated: October 28, 2014
From the CVE entry:
If USCAN_EXCLUSION is enabled, uscan doesn't correctly handle filenames containing whitespace. This can be abused by a malicious upstream to delete files of their choice.
Created: January 6, 2014; Updated: January 21, 2014
Description: From the Debian advisory:
Several vulnerabilities have been discovered in uscan, a tool to scan upstream sites for new releases of packages, which is part of the devscripts package. An attacker controlling a website from which uscan would attempt to download a source tarball could execute arbitrary code with the privileges of the user running uscan.
Created: January 3, 2014; Updated: January 8, 2014
From the Red Hat bug report:
Gitolite was found to be vulnerable to a local filesystem information leak, where it could create world-writable files in the repositories (particularly the gitolite-admin one) depending on the umask of the user running gitolite setup.
Package(s): EC2 kernel; CVE #(s): CVE-2013-4588
Created: January 3, 2014; Updated: January 8, 2014
From the Ubuntu advisory:
A flaw was discovered in the Linux kernel's IP Virtual Server (IP_VS) support. A local user with the CAP_NET_ADMIN capability could exploit this flaw to gain additional administrative privileges.
Created: January 3, 2014; Updated: January 8, 2014
From the Ubuntu advisory:
Evan Huus reported a buffer overflow in the Linux kernel's radiotap header parsing. A remote attacker could cause a denial of service (buffer over-read) via a specially crafted header.
Package(s): kernel; CVE #(s): CVE-2013-4516 CVE-2013-7026
Created: January 3, 2014; Updated: January 8, 2014
From the Ubuntu advisory:
Nico Golde and Fabian Yamaguchi reported a flaw in the Linux kernel's driver for the SystemBase Multi-2/PCI serial card. An unprivileged user could obtain sensitive information from kernel memory. (CVE-2013-4516)
A race condition flaw was discovered in the Linux kernel's ipc shared memory implementation. A local user could exploit this flaw to cause a denial of service (system crash) or possibly have unspecified other impacts. (CVE-2013-7026)
Created: January 3, 2014; Updated: January 8, 2014
From the Ubuntu advisory:
Catalin Marinas reported a flaw in the get_user and put_user API functions in the Linux kernel on ARM platforms. An unprivileged local user could exploit this flaw to gain administrator privileges.
Created: January 8, 2014; Updated: November 24, 2014
Description: From the Red Hat bugzilla:
A buffer overflow flaw was reported in libsrtp, Cisco's reference implementation of the Secure Real-time Transport Protocol (SRTP), in how the crypto_policy_set_from_profile_for_rtp() function applies cryptographic profiles to an srtp_policy. This could allow for a crash of a client linked against libsrtp (like asterisk or linphone).
Created: January 6, 2014; Updated: January 21, 2014
Description: From the openSUSE advisory:
- CVE-2013-6436: Fix crashes in lxc memtune code, one of which results in DoS f8c1cb90-CVE-2013-6436.patch, 9faf3f29-LXC-memtune.patch bnc#854486
Created: January 8, 2014; Updated: January 29, 2014
Description: From the X.Org advisory:
Scanning of the libXfont sources with the cppcheck static analyzer included a report of:
[lib/libXfont/src/bitmap/bdfread.c:341]: (warning) scanf without field width limits can crash with huge input data.

Evaluation of this report by X.Org developers concluded that a BDF font file containing a longer than expected string could overflow the buffer on the stack. Testing in X servers built with Stack Protector resulted in an immediate crash when reading a user-provided specially crafted font.
As libXfont is used to read user-specified font files in all X servers distributed by X.Org, including the Xorg server which is often run with root privileges or as setuid-root in order to access hardware, this bug may lead to an unprivileged user acquiring root privileges in some systems.
Created: January 6, 2014. Updated: January 5, 2015.
Description: From the Red Hat bugzilla:
Raphael Geissert discovered multiple denial of service flaws in OpenJPEG. If a specially-crafted image were opened by an application linked against OpenJPEG, it could cause the application to crash.
Created: January 6, 2014. Updated: January 20, 2014.
Description: From the openSUSE advisory:
nagios was updated to fix a possible denial of service in CGI executables.
Created: January 7, 2014. Updated: January 22, 2014.
Description: From the CVE entry:
Net-SNMP 5.7.1 and earlier, when AgentX is registering to handle a MIB and processing GETNEXT requests, allows remote attackers to cause a denial of service (crash or infinite loop, CPU consumption, and hang) by causing the AgentX subagent to timeout.
Created: January 7, 2014. Updated: February 24, 2014.
Description: From the Debian advisory:
Anton Johannson discovered that an invalid TLS handshake package could crash OpenSSL with a NULL pointer dereference.
Created: January 7, 2014. Updated: April 2, 2014.
Description: From the CVE entry:
The XenAPI backend in OpenStack Compute (Nova) Folsom, Grizzly, and Havana before 2013.2 does not properly apply security groups (1) when resizing an image or (2) during live migration, which allows remote attackers to bypass intended restrictions.
Created: January 6, 2014. Updated: February 4, 2014.
Description: From the Red Hat bugzilla:
Poppler was recently reported to be vulnerable to a flaw, which can be exploited by malicious people to cause a DoS (Denial of Service) in an application using the library.
The vulnerability is caused due to a format string error when handling extraneous bytes within a segment in the "JBIG2Stream::readSegments()" method in JBIG2Stream.cc, which can be exploited to cause a crash.
The issue is said to be fixed in Poppler 0.24.5.
Package(s): typo3-src. CVE #(s): CVE-2013-7073, CVE-2013-7074, CVE-2013-7075, CVE-2013-7076, CVE-2013-7078, CVE-2013-7079, CVE-2013-7080, CVE-2013-7081.
Created: January 2, 2014. Updated: August 29, 2016.
Description: The TYPO3 advisory lists vulnerabilities of the following types: "Cross-Site Scripting, Information Disclosure, Mass Assignment, Open Redirection and Insecure Unserialize"
The CVE entries are as follows:
CVE-2013-7073: The Content Editing Wizards component in TYPO3 4.5.0 through 4.5.31, 4.7.0 through 4.7.16, 6.0.0 through 6.0.11, and 6.1.0 through 6.1.6 does not check permissions, which allows remote authenticated editors to read arbitrary TYPO3 table columns via unspecified parameters.
CVE-2013-7074: Multiple cross-site scripting (XSS) vulnerabilities in Content Editing Wizards in TYPO3 4.5.x before 4.5.32, 4.7.x before 4.7.17, 6.0.x before 6.0.12, 6.1.x before 6.1.7, and the development versions of 6.2 allow remote authenticated users to inject arbitrary web script or HTML via unspecified parameters.
CVE-2013-7075: The Content Editing Wizards component in TYPO3 4.5.0 through 4.5.31, 4.7.0 through 4.7.16, 6.0.0 through 6.0.11, and 6.1.0 through 6.1.6 allows remote authenticated backend users to unserialize arbitrary PHP objects, delete arbitrary files, and possibly have other unspecified impacts via an unspecified parameter, related to a "missing signature."
CVE-2013-7076: Cross-site scripting (XSS) vulnerability in Extension Manager in TYPO3 4.5.x before 4.5.32 and 4.7.x before 4.7.17 allows remote attackers to inject arbitrary web script or HTML via unspecified vectors.
CVE-2013-7078: ** RESERVED ** This candidate has been reserved by an organization or individual that will use it when announcing a new security problem. When the candidate has been publicized, the details for this candidate will be provided.
CVE-2013-7079: Open redirect vulnerability in the OpenID extension in TYPO3 4.5.0 through 4.5.31, 4.7.0 through 4.7.16, 6.0.0 through 6.0.11, and 6.1.0 through 6.1.6 allows remote attackers to redirect users to arbitrary web sites and conduct phishing attacks via unspecified vectors.
CVE-2013-7080: The creating record functionality in Extension table administration library (feuser_adminLib.inc) in TYPO3 4.5.0 through 4.5.31, 4.7.0 through 4.7.16, and 6.0.0 through 6.0.11 allows remote attackers to write to arbitrary fields in the configuration database table via crafted links, aka "Mass Assignment."
CVE-2013-7081: The (old) Form Content Element component in TYPO3 4.5.0 through 4.5.31, 4.7.0 through 4.7.16, 6.0.0 through 6.0.11, and 6.1.0 through 6.1.6 allows remote authenticated editors to generate arbitrary HMAC signatures and bypass intended access restrictions via unspecified vectors.
Created: January 6, 2014. Updated: January 8, 2014.
Description: From the CVE entry:
The dissect_sip_common function in epan/dissectors/packet-sip.c in the SIP dissector in Wireshark 1.8.x before 1.8.12 and 1.10.x before 1.10.4 does not check for empty lines, which allows remote attackers to cause a denial of service (infinite loop) via a crafted packet.
Page editor: Jake Edge
Brief items

The latest 3.13 prepatch was released on January 4. Linus says: "Anyway, things have been nice and quiet, and if I wasn't travelling, this would probably be the last -rc: there isn't really anything holding up a release, even if there are a couple of patches still going through discussions and percolating through maintainers. But rather than do a real 3.13 next weekend, I'll be on the road and decidedly *not* opening the merge window, so I'll do an rc8 next week instead, needed or not."
Perhaps this should be termed the Indiana Jones philosophy of validation.
Kernel development news

The current development kernel is 3.13-rc6. Linus has said that this cycle will almost certainly go to -rc8, even if things look stable (as they indeed do), to avoid opening the merge window while he is attending linux.conf.au. Your editor, wishing to avoid writing highly technical articles during that period for exactly the same reason, deems this the right time for our traditional, non-technical look at the 3.13 development cycle and where the patches came from this time around.
There have been just under 12,000 non-merge changesets pulled into the mainline kernel for 3.13 so far; the total will almost certainly exceed 12,000 by the time the final release happens. 3.13 is thus a significantly busier cycle than its immediate predecessors; indeed, only three previous cycles (2.6.25, 3.8, and 3.10) have brought in more changes. Those changes, which added 446,000 lines and deleted 241,000 for a net growth of 205,000 lines, were contributed by 1,339 developers. The most active of those developers were:
Most active 3.13 developers
By changesets
    Sachin Kamat         361   3.0%
    Jingoo Han           323   2.7%
    Marcel Holtmann      225   1.9%
    Viresh Kumar         169   1.4%
    Lars-Peter Clausen   150   1.3%
    H Hartley Sweeten    147   1.2%
    Ville Syrjälä        145   1.2%
    Joe Perches          135   1.1%
    Mark Brown           122   1.0%
    Takashi Iwai         120   1.0%
    Lee Jones            113   0.9%
    Linus Walleij        103   0.9%
    Peter Zijlstra        92   0.8%
    Wei Yongjun           88   0.7%
    Ben Widawsky          88   0.7%
    Al Viro               87   0.7%
    Ian Abbott            85   0.7%
    Russell King          83   0.7%
    Thierry Reding        80   0.7%
    Ingo Molnar           76   0.6%
By changed lines
    Ben Skeggs          19014   3.5%
    Greg Kroah-Hartman  17378   3.2%
    Jovi Zhangwei       16377   3.0%
    Guenter Roeck       13013   2.4%
    Eugene Krasnikov    10082   1.8%
    Patrick McHardy      8863   1.6%
    Joe Perches          7076   1.3%
    Ralf Baechle         6687   1.2%
    Archit Taneja        6246   1.1%
    Akhil Bhansali       6214   1.1%
    Aaro Koskinen        6164   1.1%
    Ard Biesheuvel       5814   1.1%
    Dave Chinner         5311   1.0%
    David Howells        5287   1.0%
    Russell King         5125   0.9%
    Hisashi Nakamura     4605   0.8%
    Ian Abbott           4452   0.8%
    Kent Overstreet      4349   0.8%
    Thierry Escande      4236   0.8%
    Jens Axboe           3745   0.7%
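Statistics like those in the tables above come from analyzing the kernel's git history. A toy sketch of the changeset tallying (the published numbers come from more elaborate tooling, such as the gitdm scripts; the function name here is hypothetical):

```python
# Toy sketch of tallying non-merge changesets per author, as one might do
# from "git log --no-merges --format=%aN" output. Not LWN's actual tooling.
from collections import Counter

def tally_changesets(author_lines):
    """Count changesets per author and compute each author's share."""
    counts = Counter(author_lines)
    total = sum(counts.values())
    return [(name, n, 100.0 * n / total)
            for name, n in counts.most_common()]

# Example with synthetic data standing in for real git output:
log = ["Sachin Kamat", "Jingoo Han", "Sachin Kamat", "Marcel Holtmann"]
for name, n, pct in tally_changesets(log):
    print(f"{name}: {n} ({pct:.1f}%)")
```

The "by changed lines" column would be computed the same way, summing the added and removed line counts from `git log --numstat` instead of counting commits.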
Sachin Kamat's and Jingoo Han's extensive janitorial work throughout the driver subsystem put them in the top two positions for changesets merged for the second cycle in a row. Marcel Holtmann did extensive surgery in the Bluetooth layer, Viresh Kumar did a lot of cleanup work in the cpufreq subsystem, and Lars-Peter Clausen did a lot of development in the driver tree, focusing especially on industrial I/O and audio drivers.
In the "lines changed" column, Ben Skeggs's work is concentrated, as always, on the nouveau driver. Greg Kroah-Hartman and Jovi Zhangwei do not properly belong on the list this month; they show up as a result of the addition of ktap to the staging tree (by Jovi) and its subsequent removal (by Greg). Guenter Roeck removed support for the Renesas H8/300 architecture, and Eugene Krasnikov contributed a single patch adding a driver for Qualcomm WCN3660/WCN3680 wireless adapters. Patrick McHardy's #6 position, resulting from the addition of the nftables subsystem, also merits a mention.
A minimum of 217 companies supported work on the 3.13 kernel; the most active of those were:
Most active 3.13 employers
By changesets
    Intel                        1428  11.9%
    (None)                       1323  11.1%
    Linaro                       1166   9.7%
    Red Hat                      1082   9.0%
    Samsung                       594   5.0%
    (Unknown)                     570   4.8%
    IBM                           419   3.5%
    (Consultant)                  342   2.9%
    SUSE                          328   2.7%
    Texas Instruments             263   2.2%
    Outreach Program for Women    218   1.8%
    Freescale                     206   1.7%
                                  198   1.7%
    NVidia                        180   1.5%
    Vision Engraving Systems      147   1.2%
    Oracle                        135   1.1%
    Renesas Electronics           123   1.0%
    Free Electrons                121   1.0%
    Huawei Technologies           119   1.0%
    ARM                           111   0.9%
By lines changed
    Red Hat               63583  11.7%
    Intel                 59780  11.0%
    (None)                51458   9.4%
    Linaro                32054   5.9%
    (Unknown)             26712   4.9%
    Texas Instruments     20219   3.7%
    Linux Foundation      18262   3.4%
    Huawei Technologies   18182   3.3%
    IBM                   15435   2.8%
    (Consultant)          14802   2.7%
    Samsung               14739   2.7%
    Ericsson              13722   2.5%
    NVidia                10884   2.0%
    Astaro                 8863   1.6%
    Wind River             8421   1.5%
    Renesas Electronics    7337   1.3%
    SUSE                   7230   1.3%
    Fusion-IO              6956   1.3%
    Western Digital        6590   1.2%
    Nokia                  6479   1.2%
The percentage of contributions from volunteers is up a bit this time around, but not by enough to suggest any real change in its long-term decline. Perhaps the biggest surprise here, though, is that, for the first time, Red Hat has been pushed down in the "by changesets" column by Linaro. If there was ever any doubt that the mobile and embedded industries are playing an ever-larger role in the development of the kernel, this should help to dispel it. That said, if one looks at the employers of the subsystem maintainers who merged these patches, the picture looks a bit different:
Employers with the most non-author signoffs
    Red Hat             2115  19.2%
    Intel               1704  15.5%
    Linux Foundation    1282  11.6%
    Linaro               912   8.3%
                         553   5.0%
    Samsung              464   4.2%
    (None)               403   3.7%
    Texas Instruments    350   3.2%
    Novell               348   3.2%
    IBM                  289   2.6%
The situation is changing here, with the mobile/embedded sector having a bigger presence than it did even one year ago, but, for the most part, entry into subsystem trees is still controlled by developers working for a relatively small number of mostly enterprise-oriented companies.
Finally, it can be interesting to look at first-time contributors — developers whose first patch ever went into 3.13. There were 219 of these first-time contributors in this development cycle. Your editor decided to look at the very first patch from each first-time contributor and see which files were touched. These changes are spread out throughout the kernel tree, but the most common places for first-time contributors to make their first changes in 3.13 were:
Directory           Contributors
    drivers/staging       24
    drivers/net           21
    include               21
    net                   19
    arch/arm              14
    drivers/gpu           10
    arch/powerpc          10
    arch/x86               7
    drivers/media          7
    Documentation          7
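A minimal sketch of one way such a table can be derived (assumed methodology and hypothetical names, not the actual script used): take each new author's earliest commit and count the directories that its changed files live in:

```python
# Sketch: for each first-time contributor, find their earliest commit and
# tally the directories touched by that commit. Hypothetical helper, not
# the actual analysis script.
from collections import Counter

def first_patch_dirs(commits):
    """commits: list of (author, timestamp, [changed file paths])."""
    first = {}
    for author, ts, files in sorted(commits, key=lambda c: c[1]):
        first.setdefault(author, files)   # earliest commit per author wins
    dirs = Counter()
    for files in first.values():
        for path in files:
            top = "/".join(path.split("/")[:2])  # e.g. "drivers/staging"
            dirs[top] += 1
    return dirs

commits = [
    ("alice", 100, ["drivers/staging/foo.c"]),
    ("alice", 200, ["net/core/dev.c"]),       # not her first patch: ignored
    ("bob",   150, ["drivers/staging/bar.c"]),
]
print(first_patch_dirs(commits).most_common(1))
```

In practice one would feed this from `git log --reverse --name-only`; the two-component directory truncation is a simplification, since the table above mixes depths (compare "include" with "arch/arm").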
One of the justifications behind the staging tree was that it would serve as an entry point for new developers; these numbers suggest that it is working. That said, if one looks at longer periods, more new contributors work in drivers/net than anywhere else.
Another interesting question is: what is the employment situation for first-time contributors to the kernel? Are new kernel hackers still volunteers, or do they have jobs already? The numbers are hazy, but there are still some conclusions that can be drawn:
Employer                      Count
    (Unknown)                    97
    Intel                        21
    Huawei Technologies           6
    Samsung                       6
    Linaro                        5
    (None)                        4
    AMD                           3
    Texas Instruments             3
    Outreach Program for Women    3
Another way to put this information is that 118 of the first-time contributors in 3.13 were working for companies, 97 of them were unknown, and four were known to be volunteers. Many (but not all) of the unknowns will eventually turn out to have been working on their own time. But, even if every single one of them were a volunteer, we would still have more first-time contributors coming from companies than working on their own. In a time when experienced kernel developers can be hard to hire, companies will have little choice but to grow their own; some companies, clearly, are working to do just that.
And that, in turn, suggests that the long-term decline in volunteer contributions may not be a big problem in the end. Getting code into the kernel remains a good way to get a job, but, it seems, quite a few developers are successful at getting the job first, and contributing afterward. With luck, that will help us to continue to have a stream of new developers coming into the kernel development community.

In the first part of this series, we discussed what Jailhouse is, had a look at its data structures, covered how it is enabled, and what it does to initialize CPUs. This part concludes the series with a look at how Jailhouse handles interrupts, what is done to create a cell, and how the hypervisor is disabled.
Modern x86 processors are equipped with a "local advanced programmable interrupt controller" (LAPIC) that handles delivery of inter-processor interrupts (IPIs) as well as external interrupts generated by the I/O APIC, which is part of the system's chipset. Currently, Jailhouse virtualizes only the LAPIC; the I/O APIC is simply mapped into the Linux cell, which is not quite safe because a malicious guest (or Linux kernel module) could reprogram it to tamper with other guests' work.
The LAPIC works in one of two modes: "xAPIC" or "x2APIC". The xAPIC mode is programmed via memory mapped I/O (MMIO), while the x2APIC uses model-specific registers (MSRs). x2APIC mode is backward-compatible with xAPIC, and its MSR addresses directly map to offsets in the MMIO page. When Jailhouse's apic_init() function initializes the LAPIC, it checks to see if x2APIC mode is enabled and sets up its apic_ops access methods appropriately. Internally, Jailhouse refers to all APIC registers by their MSR addresses. For xAPIC, these values are transparently converted to the corresponding MMIO offsets (see the read_xapic() and write_xapic() functions in apic.c as examples).
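The MSR-to-offset conversion mentioned above is simple arithmetic: per the Intel SDM, the x2APIC MSRs start at 0x800 and each register occupies 16 bytes of the xAPIC MMIO page. A sketch of the mapping (illustrative Python, not Jailhouse's actual C code):

```python
# The x2APIC MSR range begins at 0x800; each APIC register is 16 bytes
# apart in the xAPIC MMIO page, so the offset is (msr - 0x800) * 16.
X2APIC_MSR_BASE = 0x800

def msr_to_mmio_offset(msr):
    """Convert an x2APIC MSR address to the xAPIC MMIO page offset."""
    return (msr - X2APIC_MSR_BASE) << 4

print(hex(msr_to_mmio_offset(0x802)))  # APIC ID register -> 0x20
print(hex(msr_to_mmio_offset(0x830)))  # ICR -> 0x300
```

This direct correspondence is why Jailhouse can refer to every register by its MSR address internally and translate only at the access layer.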
Jailhouse virtualizes the LAPIC in both modes, but the mechanism differs slightly between them. For xAPIC mode, a special LAPIC access page (apic_access_page[PAGE_SIZE] defined in vmx.c) is mapped into the guest's physical address space at XAPIC_BASE (0xfee00000); this happens in vmx_cell_init(). Later, in vmcs_setup(), LAPIC virtualization is enabled; this way, every time a guest tries to access the virtual LAPIC MMIO region, a trap back to the hypervisor (a "VM exit") occurs. No data is really read from the virtual LAPIC MMIO page or written to it, so CPUs can share this page. For x2APIC, instead, normal MSR bitmaps are used. By default, Jailhouse traps access to all LAPIC registers; however, if apic_init() detects that the host LAPIC is in x2APIC mode, the bitmap is changed so that only ICR (interrupt control register) access is trapped. This happens when the master CPU executes vmx_init().
There is a special case when a guest tries to access a virtual x2APIC on a system where x2APIC is not enabled. In this case, the MSR bitmap remains unmodified. Jailhouse intercepts accesses to all LAPIC registers and passes incoming requests to the xAPIC using the apic_ops access methods, effectively emulating an x2APIC on top of the xAPIC. Since LAPIC registers are referred to in apic.c by their MSR addresses regardless of the mode, this emulation has very little overhead.
The main reason behind Jailhouse's trapping of ICR (and a few other registers) access is isolation: a cell shouldn't be able to send an IPI to a CPU that is not in its own CPU set, and the ICR is what defines an interrupt's destination. To achieve this isolation, apic_cpu_init() is called by the master CPU during initialization; it stores the mapping from the apic_id to the associated cpu_id in an array called, appropriately, apic_to_cpu_id. When a CPU is assigned a logical LAPIC ID, Jailhouse ensures that it is equal to cpu_id. This way, when an IPI is sent to a physical or logical destination, the hypervisor is able to map it to cpu_id and check if the CPU is in the cell's set. See apic_deliver_ipi() for details.
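The destination check can be modeled in a few lines (hypothetical names; the real logic is C code in apic.c):

```python
# Simplified model of Jailhouse's IPI isolation check: map the destination
# APIC ID to a cpu_id, then allow the IPI only if that CPU belongs to the
# sending cell's CPU set. Names are illustrative, not Jailhouse's own.
def allowed_ipi(apic_to_cpu_id, cell_cpu_set, dest_apic_id):
    cpu_id = apic_to_cpu_id.get(dest_apic_id)
    return cpu_id is not None and cpu_id in cell_cpu_set

apic_to_cpu_id = {0: 0, 2: 1, 4: 2, 6: 3}   # APIC ID -> cpu_id
linux_cell = {0, 1}                          # CPUs left in the Linux cell
print(allowed_ipi(apic_to_cpu_id, linux_cell, 2))  # CPU 1: in our set
print(allowed_ipi(apic_to_cpu_id, linux_cell, 6))  # CPU 3: another cell's
```

An IPI that fails this check is simply not delivered, so one cell cannot interrupt another cell's CPUs.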
Now let's turn to interrupt handling. In vmcs_setup(), Jailhouse does not enable traps to the hypervisor on external interrupts and sets the exception bitmaps to all zeroes. This means that the only interrupt that results in a VM exit is a non-maskable interrupt (NMI); everything else is dispatched through the guest's IDT and handled in guest mode. Since cells assert full control over their own resources, this makes sense.
Currently, NMIs can only come from the hypervisor itself, which uses them to control guest CPUs (arch_suspend_cpu() in apic.c is an example). When an NMI occurs in a guest, that guest exits VM mode and Jailhouse re-throws the NMI in host mode. The CPU dispatches it through the host IDT and jumps to apic_nmi_handler(). It schedules another VM exit using a virtual machines extensions (VMX) feature known as a "preemption timer." vmcs_setup() sets this timer to zero, so, if it is enabled, a VM exit occurs immediately after VM entry. The reason behind this indirection is serialization: this way, NMIs (which are asynchronous by nature) are always delivered after entry into the guest system and cannot interfere with the host-to-guest transition.
Jailhouse runs with interrupts disabled, so no interrupt other than an NMI can occur. Any exception in host mode is considered to be a serious fault and results in a panic.
To create a new cell, Jailhouse needs to "shrink" the Linux cell by moving hardware resources to the new cell. It also obviously needs to load the guest image and perform a CPU reset to jump to the guest's entry point. This process starts in the Linux cell with the JAILHOUSE_CELL_CREATE ioctl() command, leading to a jailhouse_cell_create() function call in the kernel. This function copies the cell configuration and guest image from user space (the jailhouse user-space tool reads these from files and stores them in memory). Then, the cell's physical memory region is mapped and the guest image is moved to the target (physical) address specified by the user.
After that, jailhouse_cell_create() calls the standard Linux cpu_down() function to offline each CPU assigned to the new cell; this is required so that the kernel won't try to schedule processes on those CPUs. Finally, the loader issues a hypercall (JAILHOUSE_HC_CELL_CREATE) using the VMCALL instruction and passes a pointer to a struct jailhouse_cell_desc that describes the new cell. This causes a VM exit from the Linux cell to the hypervisor; vmx_handle_exit() dispatches the call to the cell_create() function defined in hypervisor/control.c. In turn, cell_create() suspends all CPUs assigned to the cell except the one executing the function (if it is in the cell's CPU set) to prevent races. This is done in cell_suspend(), which indirectly signals an NMI (as described above) to each CPU and waits for the cpu_stopped flag to be set on the target's cpu_data. Then, the cell configuration is mapped from the Linux cell to a per-CPU region above FOREIGN_MAPPING_BASE in the host's virtual address space (the loader copies this structure into kernel space).
Memory regions are checked as with the Linux cell, and the new cell is allocated and initialized. After that, the Linux cell is shrunk: all of the new cell's CPUs are removed from the Linux cell's CPU set, the Linux cell's mappings for the guest's physical addresses are destroyed, and the new cell's I/O resources have their bits set in the Linux cell's io_bitmap, so accessing them will result in a VM exit (and panic). Finally, the new cell is added to the list of cells (which is a singly linked list having linux_cell as its head) and each CPU in the cell is reset using arch_cpu_reset().
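The shrinking step can be modeled roughly as set and bitmap operations (a toy illustration, not Jailhouse's code; in the VMX I/O bitmap, a set bit means access to that port traps):

```python
# Toy model of "shrinking" the Linux cell: the new cell's CPUs leave the
# Linux CPU set, and its I/O ports get their bits set in the Linux cell's
# io_bitmap so any further access from Linux traps to the hypervisor.
def shrink_linux_cell(linux_cpus, linux_io_bitmap, new_cpus, new_io_ports):
    linux_cpus -= new_cpus
    for port in new_io_ports:
        linux_io_bitmap[port // 8] |= 1 << (port % 8)   # 1 = access traps
    return linux_cpus, linux_io_bitmap

# Move CPUs 2 and 3 plus the first serial port (0x3f8) to a new cell:
cpus, bitmap = shrink_linux_cell({0, 1, 2, 3}, bytearray(8192),
                                 {2, 3}, {0x3f8})
print(sorted(cpus))        # only CPUs 0 and 1 remain in the Linux cell
print(bitmap[0x3f8 // 8])  # the byte covering port 0x3f8 now has a bit set
```

The 8192-byte bitmap covers the full 16-bit x86 I/O port space, one bit per port, matching the VMX I/O bitmap layout.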
On the next VM entry, the CPU will start executing code located at 0x000ffff0 in real mode. If one is running apic-demo according to the instructions in the README file, this is where apic-demo.bin's 16-bit entry point is. The address 0x000ffff0 is different from the normal x86 reset vector (0xfffffff0), and there is a reason: Jailhouse is not designed to run unmodified guests and has no BIOS emulation, so it can simplify the boot process and skip much of the work required for a real reset vector to work.
Cells are represented by struct cell, defined in x86/include/asm/cell.h. This structure contains the page table directories for use with the VMX and VT-d virtualization extensions, the io_bitmap for VMX, cpu_set, and other fields. It is initialized as follows. First, cell_init() copies a name for the cell from a descriptor and allocates cpu_data->cpu_set if needed (sets less than 64 CPUs in size are stored within struct cell in the small_cpu_set field). Then, arch_cell_create(), the same function that shrinks the Linux cell, calls vmx_cell_init() for the new cell; it allocates VMX and VT-d resources (page directories and I/O bitmap), creates EPT mappings for the guest's physical address ranges (as per struct jailhouse_cell_desc), maps the LAPIC access page described above, and copies the I/O bitmap to struct cell from the cell descriptor (struct jailhouse_cell_desc). For the Linux cell, the master CPU calls this function during system-wide initialization.
When the Linux cell is shrunk, jailhouse_cell_create() has already put the detached CPUs offline. Linux never uses guest memory pages since they are taken from the region reserved at boot as described in part 1. However, Jailhouse currently takes no action to detach I/O resources or devices in general. If they were attached to the Linux cell, they will remain attached, and it may cause a panic if a Linux driver tries to use an I/O port that has been moved to another cell. To prevent this, you should not assign these resources to the Linux cell.
As of this writing, Jailhouse has no support for cell destruction. However, this feature has recently appeared in the development branch and will likely be available soon. When a cell is destroyed, its CPUs and memory pages are reassigned back to the Linux cell, and other resources are also returned to where they originated from.
To disable Jailhouse, the user-space tool issues the JAILHOUSE_DISABLE ioctl() command, causing a call to jailhouse_disable(). This function calls leave_hypervisor() (found in main.c) on each CPU in the Linux cell and waits for these calls to complete. Then the hypervisor_mem mapping created in jailhouse_enable() is destroyed, the function brings up all offlined CPUs (which were presumably moved to other cells), and exits. From this point, the Linux kernel will be running on bare metal again.
The leave_hypervisor() call issues a JAILHOUSE_HC_DISABLE hypercall, causing a VM exit at the given CPU, after which vmx_handle_exit() calls shutdown(). For the first Linux CPU that called it, this function iterates over CPUs in all cells other than the Linux cell and calls arch_shutdown_cpu() for each of these CPUs. A call to arch_shutdown_cpu() is equivalent to suspending the CPU, setting cpu_data->shutdown_cpu to true, then resuming the CPU. As described above, this sequence transfers control to apic_handle_events(), but this time this function detects that the CPU is shutting down. It disables the LAPIC and effectively executes a VMXOFF; HLT sequence to disable VMX on the CPU and halt it. This way, the hypervisor is disabled on all CPUs outside of the Linux cell.
When shutdown() returns, VT-d is disabled and the hypervisor restores the Linux environment for the CPU. First, the cpu_data->linux_* fields are copied from the VMCS guest area. Then, arch_cpu_restore() is called to disable VMX (without halting the CPU this time) and restore various register values from cpu_data->linux_*. Afterward, the general-purpose registers are popped from the hypervisor stack, the Linux stack is restored, the RAX register is zeroed, and a RET instruction is issued. For the Linux kernel, everything will look like leave_hypervisor() has returned successfully; this happens to each CPU in the Linux cell. After that, any offlined CPUs (likely halted by arch_shutdown_cpu()) are brought back to the active state, as described earlier.
Jailhouse is a young project that is developing quickly. It is a lightweight system that does not intend to replace full-featured hypervisors like Xen or KVM, but this doesn't mean that Jailhouse itself is feature-limited. It is a rare project that has potential both in the classroom and in production, and we hope this article has helped you to understand it better.

The previous installment in LWN's ongoing series on the Btrfs filesystem covered multiple device handling: various ways of setting up a single filesystem on a set of physical devices. Another interesting aspect of Btrfs can be thought of as working in the opposite manner: subvolumes allow the creation of multiple filesystems on a single device (or array of devices). Subvolumes create a number of interesting possibilities not supported by other Linux filesystems. This article will discuss how to use the subvolume feature and the associated snapshot mechanism.
A typical Unix-style filesystem contains a single directory tree with a single root. By default, a Btrfs filesystem is organized in the same way. Subvolumes change that picture by creating alternative roots that function as independent filesystems in their own right. This can be illustrated with a simple example:
# mkfs.btrfs /dev/sdb5
# mount /dev/sdb5 /mnt/1
# cd /mnt/1
# touch a
Thus far, we have a mundane btrfs filesystem with a single empty file (called "a") on it. To create a subvolume and create a file within it, one can type:
# btrfs subvolume create subv
# touch subv/b
# tree
.
├── a
└── subv
    └── b

1 directory, 2 files
The subvolume has been created with the name subv; thus far, the operation looks nearly indistinguishable from having simply created a directory by that name. But there are some differences that pop up if one looks for them. For example:
# ln a subv/
ln: failed to create hard link ‘subv/a’ => ‘a’: Invalid cross-device link
So, even though subv looks like an ordinary subdirectory, the filesystem treats it as if it were on a separate physical device; moving into subv is like crossing an ordinary Unix mount point, even though it's still housed within the original btrfs filesystem. The subvolume can also be mounted independently:
# btrfs subvolume list /mnt/1
ID 257 gen 8 top level 5 path subv
# mount -o subvolid=257 /dev/sdb5 /mnt/2
# tree /mnt/2
/mnt/2
└── b

0 directories, 1 file
The end result is that each subvolume can be treated as its own filesystem. It is entirely possible to create a whole series of subvolumes and mount each separately, ending up with a set of independent filesystems all sharing the underlying storage device. Once the subvolumes have been created, there is no need to mount the "root" device at all if only the subvolumes are of interest.
Btrfs will normally mount the root volume unless explicitly told to do otherwise with the subvolid= mount option. But that is simply a default; if one wanted the new subvolume to be mounted by default instead, one could run:
btrfs subvolume set-default 257 /mnt/1
Thereafter, mounting /dev/sdb5 with no subvolid= option will mount the subvolume subv. The root volume's real subvolume ID is 5 (as the "top level 5" in the listing above shows), but subvolid=0 is accepted as an alias, so mounting with subvolid=0 will mount the root.
Subvolumes can be made to go away with:
btrfs subvolume delete path
For ordinary subvolumes (as opposed to snapshots, described below), the subvolume indicated by path must be empty before it can be deleted.
A snapshot in Btrfs is a special type of subvolume — one which contains a copy of the current state of some other subvolume. If we return to our simple filesystem created above:
# btrfs subvolume snapshot /mnt/1 /mnt/1/snapshot
# tree /mnt/1
/mnt/1
├── a
├── snapshot
│   ├── a
│   └── subv
└── subv
    └── b

3 directories, 3 files
The snapshot subcommand creates a snapshot of the given subvolume (the /mnt/1 root volume in this case), placing that snapshot under the requested name (/mnt/1/snapshot) in that subvolume. As a result, we now have a new subvolume called snapshot which appears to contain a full copy of everything that was in the filesystem previously. But, of course, Btrfs is a copy-on-write filesystem, so there is no need to actually copy all of that data; the snapshot simply has a reference to the current root of the filesystem. If anything is changed — in either the main volume or the snapshot — a copy of the relevant data will be made, so the other copy will remain unchanged.
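The sharing behavior can be illustrated with a toy copy-on-write model (conceptual only; Btrfs actually tracks shared extents through its b-trees and reference counts):

```python
# Toy copy-on-write model: a snapshot initially shares every block with
# its source; writing to either side replaces only the affected block,
# leaving the other side's view unchanged.
class Volume:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})   # block name -> contents

    def snapshot(self):
        # Copy only the (small) block map, not the data it points to.
        return Volume(self.blocks)

    def write(self, name, data):
        self.blocks[name] = data           # rebind just this one block

main = Volume({"a": "old contents"})
snap = main.snapshot()
main.write("a", "new contents")
print(main.blocks["a"])   # the main volume sees the new data
print(snap.blocks["a"])   # the snapshot still sees the old data
```

The key point mirrored here is that taking the snapshot costs almost nothing; space is consumed only as the two copies diverge.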
Note also that the contents of the existing subvolume (subv) do not appear in the snapshot. If a snapshot of a subvolume is desired, that must be created separately.
Snapshots clearly have a useful backup function. If, for example, one has a Linux system using Btrfs, one can create a snapshot prior to installing a set of distribution updates. If the updates go well, the snapshot can simply be deleted. (Deletion is done with "btrfs subvolume delete" as above, but snapshots are not expected to be empty before being deleted). Should the update go badly, instead, the snapshot can be made the default subvolume and, after a reboot, everything is as it was before.
Snapshots can also be used to implement a simple "time machine" functionality. While working on this article series, your editor set aside a Btrfs partition to contain a copy of /home. On occasion, a simple script runs:
rsync -aix --delete /home /home-backup
btrfs subvolume snapshot /home-backup /home-backup/ss/`date +%y-%m-%d_%H-%M`
The rsync command makes /home-backup look identical to /home; a snapshot is then made of that state of affairs. Over time, the result is the creation of a directory full of timestamped snapshots; returning to the state of /home at any given time is a simple matter of going into the proper snapshot. Of course, if /home is also on a Btrfs filesystem, one could make regular snapshots without the rsync step, but the redundancy that comes with a backup drive would be lost.
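A snapshot directory like this eventually invites a retention policy. A sketch of the pruning logic (assuming the timestamp naming scheme used by the script above; the function itself is hypothetical):

```python
# Sketch: given snapshot names in the %y-%m-%d_%H-%M format used above,
# pick the snapshots to delete, keeping only the newest N.
from datetime import datetime

def to_prune(names, keep=5):
    """Return the snapshot names older than the newest `keep` ones."""
    stamped = sorted(names,
                     key=lambda n: datetime.strptime(n, "%y-%m-%d_%H-%M"))
    return stamped[:-keep] if len(stamped) > keep else []

names = ["14-01-0%d_12-00" % d for d in range(1, 8)]   # seven dailies
print(to_prune(names, keep=5))   # the two oldest would be deleted
```

Each returned name would then be passed to "btrfs subvolume delete"; snapshot deletion, unlike ordinary subvolume deletion, does not require the subvolume to be empty.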
One can quickly get used to having this kind of resource available. This also seems like an area that is just waiting for the development of some higher-level tools. Some projects are already underway; see Snapper or btrfs-time-machine, for example. There is also an "autosnap" feature that has been posted in the past, though it does not seem to have seen any development recently. For now, most snapshot users are most likely achieving the desired functionality through their own sets of ad hoc scripts.
It typically will not take long before one starts to wonder how much disk space is used by each subvolume. A naive use of a tool like du may or may not produce a useful answer; it is slow and unable to take into account the sharing of data between subvolumes (snapshots in particular). Beyond that, in many situations, it would be nice to be able to divide a volume into subvolumes but not to allow any given subvolume to soak up all of the available storage space. These needs can be met through the Btrfs subvolume quota group mechanism.
Before getting into quotas, though, a couple of caveats are worth mentioning. One is that "quotas" in this sense are not normal, per-user disk quotas; those can be managed on Btrfs just like with any other filesystem. Btrfs subvolume quotas, instead, track and regulate usage by subvolumes, with no regard for the ownership of the files that actually take up the space. The other thing worth bearing in mind is that the quota mechanism is relatively new. The management tools are on the rudimentary side, there seem to be some performance issues associated with quotas, and there's still a sharp edge or two in there waiting for unlucky users.
By default, Btrfs filesystems do not have quotas enabled. To turn this feature on, run:
# btrfs quota enable path
A bit more work is required to retrofit quotas into an older Btrfs filesystem; see this wiki page for details. Once quotas are established, one can look at actual usage with:
    # btrfs qgroup show /home-backup
    qgroupid rfer        excl
    -------- ----        ----
    0/5      21184458752 49152
    0/277    21146079232 2872635392
    0/281    20667858944 598929408
    0/282    20731035648 499802112
    0/284    20733419520 416395264
    0/286    20765806592 661327872
    0/288    20492754944 807755776
    0/290    20672286720 427991040
    0/292    20718280704 466567168
    0/294    21184458752 49152
This command was run in the time-machine partition described above, where all of the subvolumes are snapshots. The qgroupid is the ID number (actually a pair of numbers — see below) associated with the quota group governing each subvolume, rfer is the total amount of data referred to in the subvolume, and excl is the amount of data that is not shared with any other subvolume. In short, "rfer" approximates what "du" would indicate for the amount of space used in a subvolume, while "excl" tells how much space would be freed by deleting the subvolume.
...or, something approximately like that. In this case, the subvolume marked 0/5 is the root volume, which cannot be deleted. "0/294" is the most recently created snapshot; it differs little from the current state of the filesystem, so there is not much data that is unique to the snapshot itself. If one were to delete a number of files from the main filesystem, the amount of "excl" data in that last snapshot would increase (since those files still exist in the snapshot) while the amount of free space in the filesystem as a whole would not increase.
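The interplay between "rfer" and "excl" can be modeled in a few lines of Python. This is a toy illustration of the accounting, not Btrfs's implementation: each subvolume is treated as a set of (extent-id, size) pairs that it references.

```python
# Toy model of qgroup accounting: a subvolume is the set of extents
# (id, size) it references.
def rfer(subvol):
    """All data the subvolume references, shared or not."""
    return sum(size for _eid, size in subvol)

def excl(subvol, others):
    """Data only this subvolume references -- what deleting it would free."""
    shared = set().union(*others) if others else set()
    return rfer(subvol - shared)

main = {("a", 1000), ("b", 500)}
snap = set(main)             # a fresh snapshot shares every extent
main.discard(("b", 500))     # ... until a file is deleted from the main volume
```

After the deletion, the extent for "b" is unique to the snapshot, so the snapshot's excl grows by 500 bytes while no space is actually freed, mirroring the behavior described above.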
Limits can be applied to subvolumes with a command like:
# btrfs qgroup limit 30M /mnt/1/subv
One can then test the limit with:
    # dd if=/dev/zero of=/mnt/1/subv/junk bs=10k
    dd: error writing ‘junk’: Disk quota exceeded
    2271+0 records in
    2270+0 records out
    23244800 bytes (23 MB) copied, 0.0334957 s, 694 MB/s
One immediate conclusion that can be drawn is that the limits are somewhat approximate at best; in this case, a limit of 30MB was requested, but the enforcement kicked in rather sooner than that. This happens even though the system appears to have a clear understanding of both the limit and current usage:
    # btrfs qgroup show -r /mnt/1
    qgroupid rfer     excl     max_rfer
    -------- ----     ----     --------
    0/5      16384    16384    0
    0/257    23261184 23261184 31457280
The 0/257 line corresponds to the subvolume of interest; the current usage is shown as being rather less than the limit, but writes were limited anyway.
There is another interesting complication with subvolume quotas, as demonstrated by:
    # rm /mnt/1/subv/junk
    rm: cannot remove ‘/mnt/1/subv/junk’: Disk quota exceeded
In a copy-on-write world, even deleting data requires allocating space, for a while at least. A user in this situation would appear to be stuck; little can be done until somebody raises the limit for at least as long as it takes to remove some files. This particular problem has been known to the Btrfs developers since 2012, but there does not yet appear to be a fix in the works.
The quota group is somewhat more flexible than has been shown so far; it can, for example, organize quotas in hierarchies that apply limits at multiple levels. Imagine one had a Btrfs filesystem to be used for home directories, among other things. Each user's home could be set up as a separate subvolume with something like this:
    # cd /mnt/1
    # btrfs subvolume create home
    # btrfs subvolume create home/user1
    # btrfs subvolume create home/user2
    # btrfs subvolume create home/user3
By default, each subvolume is in its own quota group, so each user's usage can be limited easily enough. But if there are other hierarchies in the same Btrfs filesystem, it might be nice to limit the usage of home as a whole. One would start by creating a new quota group:
# btrfs qgroup create 1/1 home
Quota group IDs are, as we have seen, a pair of numbers; the first of those numbers corresponds to the group's level in the hierarchy. At the leaf level, that number is zero; IDs at that level have the subvolume ID as the second number of the pair. All higher levels are created by the administrator, with the second number being arbitrary.
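A tiny, hypothetical helper makes the structure of these IDs concrete:

```python
def parse_qgroupid(qid):
    """Split a qgroup ID like '0/258' into (level, number).

    At level 0, the number is the subvolume ID; groups at higher
    levels are created by the administrator with arbitrary numbers.
    """
    level, number = qid.split("/")
    return int(level), int(number)
```

So "0/258" names the leaf-level group for subvolume 258, while "1/1" is an administrator-created group at level one.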
The assembly of the hierarchy is done by assigning the bottom-level groups to the new higher-level groups. In this case, the subvolumes created for the user-level directories have IDs 258, 259, and 260 (as seen with btrfs subvolume list), so the assignment is done with:
    # btrfs qgroup assign 0/258 1/1 .
    # btrfs qgroup assign 0/259 1/1 .
    # btrfs qgroup assign 0/260 1/1 .
Limits can then be applied with:
    # btrfs qgroup limit 5M 0/258 .
    # btrfs qgroup limit 5M 0/259 .
    # btrfs qgroup limit 5M 0/260 .
    # btrfs qgroup limit 10M 1/1 .
With this setup, any individual user can use up to 5MB of space within their own subvolume. But users as a whole will be limited to 10MB of space within the home subvolume, so if user1 and user2 use their full quotas, user3 will be entirely out of luck. After creating exactly such a situation, querying the quota status on the filesystem shows:
    # btrfs qgroup show -r .
    qgroupid rfer     excl     max_rfer
    -------- ----     ----     --------
    0/5      16384    16384    0
    0/257    16384    16384    0
    0/258    5189632  5189632  5242880
    0/259    5189632  5189632  5242880
    0/260    16384    16384    5242880
    1/1      10346496 10346496 10485760
We see that the first two user subvolumes have exhausted their quotas; that is also true of the upper-level quota group (1/1) that we created for home as a whole. As far as your editor can tell, there is no way to query the shape of the hierarchy; one simply needs to know how that hierarchy was built to work with it effectively.
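The hierarchical enforcement just demonstrated can be modeled in miniature. This is an illustrative toy, not the kernel's actual accounting: a write must fit within the leaf group's limit and every ancestor's limit, and it is charged to all of them.

```python
# Toy model of hierarchical qgroup limits, mirroring the example above.
MB = 1024 * 1024
LIMIT = {"0/258": 5 * MB, "0/259": 5 * MB, "0/260": 5 * MB, "1/1": 10 * MB}
PARENT = {"0/258": "1/1", "0/259": "1/1", "0/260": "1/1"}
usage = dict.fromkeys(LIMIT, 0)

def ancestry(group):
    """Yield the group and every higher-level group it belongs to."""
    while group is not None:
        yield group
        group = PARENT.get(group)

def write(group, nbytes):
    """Charge nbytes to a leaf group and all its ancestors, or fail."""
    if any(usage[g] + nbytes > LIMIT[g] for g in ancestry(group)):
        raise OSError("Disk quota exceeded")
    for g in ancestry(group):
        usage[g] += nbytes

write("0/258", 5 * MB)   # user1 fills their quota
write("0/259", 5 * MB)   # user2 does too; 1/1 is now exhausted
```

At this point a write to "0/260" fails even though that group itself is empty: the parent group "1/1" has no room left, which is exactly the situation user3 faces above.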
As can be seen, subvolume quota support still shows signs of being relatively new code; there is still a fair amount of work to be done before it is truly ready for production use. Subvolume and snapshot support in general, though, has been around for years and is in relatively good shape. All told, subvolumes offer a highly useful feature set; in the future, we may well wonder how we ran our systems without them.
At this point, our survey of the major features of the Btrfs filesystem is complete. The next (and final) installment in this series will cover a number of loose ends, the send/receive feature, and more.
Page editor: Jonathan Corbet
In something of a surprise, Red Hat and CentOS announced a partnership on January 7. CentOS has long been a path for folks to try out Red Hat Enterprise Linux (RHEL), which CentOS rebuilds and rebrands, and some of those who try CentOS may become Red Hat customers. That turns CentOS into something of a RHEL on-ramp for some portion of its users. While the announcement seemed to come a bit out of left field, it actually makes quite a bit of sense for both organizations and will likely benefit Fedora as well.
CentOS came about as the "Community Enterprise Operating System" in 2003 or so. In 2005, it ran afoul of Red Hat's legal department over its use of Red Hat trademarks on its then-new web site. That dispute was eventually patched up, though not without CentOS resorting to referring to Red Hat as a PNAELV (Prominent North American Enterprise Linux Vendor) for a time. Since then, CentOS and Red Hat have more or less happily coexisted. Now, it seems, there will be more than just coexistence.
Both CentOS and Red Hat put out their own announcement of the partnership, each in their own style. CentOS's was characteristically chatty, while Red Hat's was more "corporate". Both organizations see CentOS filling a niche in between the buttoned-down RHEL server distribution and the fast-developing, more experimental Fedora.
Another significant piece of the puzzle is that several of the main CentOS contributors are now Red Hat employees: Johnny Hughes Jr, Jim Perrin, Fabian Arrotin, and Karanbir Singh. They will be working on CentOS full time in the Open Source and Standards group. Beyond that, there is a new governing board (currently nine members) made up of existing core CentOS contributors, one community-nominated member (Arrotin), as well as three Red-Hat-appointed members. The board is now dominated by Red Hat employees—and it will always be a majority-Red-Hat board—but the governance goal for the project is to be "public, open, and inclusive", Singh said in the CentOS announcement.
Singh goes on to list several things that won't be changing as a result of the partnership, as well as some things that will. For the most part, the nuts and bolts of how CentOS operates today, how it is built and distributed, and its isolation from the internals of RHEL development, all will remain the same. Beyond just jobs, Red Hat will be providing more resources for build and delivery systems, access to the Red Hat legal team (which will help remove some of the barriers to community participation in QA, for example), and support for CentOS variants.
These variants seem to be one of the key new plans going forward. Somewhat akin to Fedora spins, CentOS variants will pair the long-lived CentOS base with a faster moving "enterprise-y" project like Gluster, OpenStack, OpenShift, oVirt, etc. to provide a stable platform for those kinds of projects. Given the development speed of those projects, there is currently an impedance mismatch with both RHEL (which moves far too slowly) and Fedora (which moves too quickly and tends to be more experimental in nature). The recent Xen4CentOS project should provide something of a template for variants.
The ownership of the CentOS trademarks, along with the requirement that the board have a majority of Red Hat employees makes it clear that, for all the talk of partnership and joining forces, this is really an acquisition by Red Hat. The CentOS project will live on, but as a subsidiary of Red Hat—much as Fedora is today. Some will disagree, but most would agree that Red Hat's stewardship of Fedora has been quite good over the years; one expects its treatment of CentOS will be similar. Like with Fedora, though, some (perhaps large) part of the development of the distribution will be directed by Red Hat, possibly in directions others in the CentOS community are not particularly interested in.
There are a number of advantages for Red Hat that are outlined in its FAQ about the partnership. Largely it boils down to filling the hole between RHEL and Fedora. CentOS was already doing that job, but with Red Hat's help it can do it better.
The git.centos.org repository is one area where Red Hat's assistance will be felt. Instead of relying on source RPMs, there will soon be an official Git repository for all of CentOS and its variants. That will presumably allow for easier access to Red Hat's patches for the kernel and other packages, which will not only help CentOS, but will be a boon to other rebranded RHELs like Scientific Linux. The fact that it may also help Oracle (which was one reason for Red Hat's switch to releasing kernel tarballs rather than patches) has evidently been discounted.
For Fedora, very little should change, which is the theme of project leader Robyn Bergeron's blog post. There are some obvious places where CentOS and Fedora could work more closely, she said, and those will be explored: "I would expect that over time, the things that make sense to collaborate on will become more obvious, and that teams from the two respective communities will gravitate towards one another when it makes sense." She also extended a welcome to the CentOS developer and user communities including, of course, her new coworkers.
As with any newly announced initiative, it will take some time, quite possibly a year or two, to see the effects of CentOS being absorbed into the Red Hat family. Even then, the full effects may take longer still to be recognized. It has been ten years since the RHEL/Fedora split, and it sometimes seems that the effects of that move are still being discovered. CentOS + Red Hat certainly appears to be a good move for all involved, but we will have to check back periodically over the coming years to be sure.
Red Hat Enterprise Linux
"We encourage customers to plan their migration from Red Hat Enterprise Linux 6.2 to a more recent version of Red Hat Enterprise Linux 6. As a benefit of the Red Hat subscription model, customers can use their active subscriptions to entitle any system on a currently supported Red Hat Enterprise Linux 6 release (6.3, 6.4, or 6.5 for which EUS is available)."
Page editor: Rebecca Sobol
GNU Radio is the leading free software package for working with software-defined radio (SDR). The project released its latest update, version 3.7.2, in December of 2013, but despite the minor-sounding version number, the update incorporates several noteworthy improvements.
For the benefit of the uninitiated, basic SDR (at least on the receiver side, which is the usual place to start, since transmitting on most frequencies requires a license) involves capturing radio signals to an analog-to-digital converter (ADC), resulting in a digital data stream. Then most or all of the signal-processing that would historically have been done in hardware can be performed using software instead: filtering, amplification, demultiplexing, modulation/demodulation, etc. Doing the signal processing in software has a number of benefits, starting with nearly complete flexibility: one can reconfigure the pipeline at will, and how much analysis and transformation one does is limited only by the power of the host computer, which itself is much faster than traditional radio hardware.
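As a first taste of what "signal processing in software" means, here is one of the simplest possible filters, a moving average, smoothing a synthesized noisy signal. The signal and filter length are illustrative; real SDR filters are more sophisticated, but the principle is the same.

```python
# A moving-average low-pass filter: each output sample is the mean of
# the last n input samples, which smooths away fast variations.
def moving_average(samples, n):
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - n + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

# A steady 1.0 signal with alternating +/-0.5 "noise" on top:
noisy = [1.0 + (0.5 if i % 2 else -0.5) for i in range(64)]
smooth = moving_average(noisy, 8)   # the alternating noise averages out
```

Once the pipeline is software, swapping this filter for a sharper one, or for a demodulator, is just a code change rather than a soldering job.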
But there are also entirely new kinds of processing that SDR permits and older technology does not. For example, most ADCs can sample extremely wide frequency ranges, so one can capture data from entire blocks of the radio spectrum and look for signals using software (as opposed to tuning in to a single channel first), or automatically adapt to changing conditions.
At heart, GNU Radio is a framework used to construct and execute SDR pipelines—although the official GNU Radio term is "flow graph." In that sense it is akin to GStreamer, although designed for different tasks and optimized for working with analog input and output (the fact that SDR involves a different part of the electromagnetic spectrum than visible light is just an implementation detail). Recent versions of GNU Radio include GNU Radio Companion (GRC), a GUI tool for creating, editing, and running flow graphs, but the GNU Radio project also provides Python bindings. In most cases, one might prefer to create and test a flow graph in GRC, but the finished product is often best distributed as a Python program.
The flow graph concept is a familiar one to many (we most recently looked at flow-based programming in MicroFlo), but GNU Radio's particular hardware needs mean that getting started requires some planning and investment. That is, while one can construct GStreamer pipelines to manipulate purely digital audio/video content, one really does need a radio receiver or transmitter to do anything particularly interesting with GNU Radio. The first and perhaps most well-known SDR hardware associated with GNU Radio is the Universal Software Radio Peripheral (USRP) line sold by Ettus Research. A USRP is a USB-attached transceiver that includes field-programmable gate arrays (FPGAs) to speed up common signal processing tasks. USRPs are quite pricey, however (starting around $700), which has led to a number of cheaper open hardware efforts like the HackRF (expected to be released soon for less than half the price of USRP).
By far the most economical option, though, is an FM-and-digital-television tuner using the Realtek RTL2832U chip. Its frequency range and sampling depth are more modest, but a USB tuner stick can be found as cheaply as $10. The RTL2832U family of USB tuners is supported by the RTL-SDR project, and GNU Radio has supported it as an input source since the GNU Radio 3.6 days. If one is just getting started with an RTL-based tuner (or with SDR altogether), it can be quite valuable to first explore the simpler command-line tools bundled with the RTL-SDR package, as well as other programs linked to from the RTL-SDR wiki, before diving right in to GNU Radio.
The GNU Radio project has an active community of developers, to the point that even an x.x.N release like the new 3.7.2 contains quite a few updates. Of course, another factor that makes GNU Radio rapidly changing is the fact that it is composed of a large set of individual modules, rather than a single, monolithic executable. Each release tends to incorporate a slate of new "blocks" (as they are called), as well as revisions to existing blocks.
The main idea is that individual SDR applications can be built by chaining together various GNU Radio blocks. Starting with a source block (such as the RTLSDR source block), one connects the output of each block to a converter, filter, modulator, or other intermediate block, finally ending with a sink block (such as an audio sink that plays on the soundcard, or a file sink that saves the data stream to disk). One of the main factors that actually makes GNU Radio powerful is the scope and quality of its collection of blocks.
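The source-to-sink chaining can be illustrated in miniature with plain Python. This is a conceptual model only, not GNU Radio's actual API: each "block" is a function over a stream of samples, and a flow graph is just their composition.

```python
# A flow graph in miniature: source -> intermediate block -> sink.
def source():
    return [0.0, 1.0, 0.0, -1.0] * 4      # stand-in for captured samples

def amplify(samples, gain):               # an intermediate block
    return [gain * s for s in samples]

def peak_sink(samples):                   # a sink reporting the peak level
    return max(abs(s) for s in samples)

peak = peak_sink(amplify(source(), gain=2.0))   # "connect" the blocks
```

In GNU Radio proper, the connections are made explicitly (in GRC or via the Python bindings) and the runtime streams samples through the graph continuously, but the mental model is the same.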
In addition to the signal-processing blocks mentioned previously, GNU Radio has a complete catalog of mathematical and logical functions, blocks for converting between data types, and for interleaving and de-interleaving streams, error correction, and much more. Taking advantage of SDR's ability to analyze a signal requires such a framework; it would hardly be possible to do analysis without the Fast Fourier Transform, and a lot of the common signal processing tasks require conversion from raw sample data to vectors or vice versa.
The changes in the 3.7.2 release include several new blocks, such as gr::digital::correlate_and_sync, which analyzes captured radio data looking for an expected signal, automatically determines the correct timing and phase parameters, and pushes that metadata downstream to other blocks. There are also several new "channel model" blocks (i.e., mathematical models of the medium through which a signal passes en route to the receiver), as well as a number of smaller utility blocks.
In addition to the new blocks, there are several updated components. GNU Radio includes tools for developing graphical user interfaces for flow graphs, at present supporting Qt and wxWidgets. The 3.7.2 release adds some new functionality to the Qt functions, allowing programs to trigger actions based on the state of the flow graph—so that the Qt code in the program can respond to changes in the signal being processed. There are also improvements to VOLK, the Vector-Optimized Library of Kernels, which is a library that attempts to optimize usage of the processor's SIMD instructions, as well as to the code for working with Orthogonal frequency-division multiplexing (OFDM) signals.
Finally, GNU Radio developer Philip Balister started using the Coverity static analyzer on GNU Radio in mid 2013. The fruit of that work can now be seen in 3.7.2, with the closing of 25 bugs detected through Balister's static analysis.
I tested GNU Radio 3.7.2 using the project's Live DVD, which includes the new release as well as RTL-SDR, drivers for the USRP and HackRF, and a number of auxiliary programs. I am by no means well-versed in SDR (or in radio work in general); the full extent of my own signal processing was scanning for and receiving local city-services radio dispatches. But I will testify that, despite the inherent complexity of the SDR problem space and the size of the GNU Radio tool suite, the project makes it easy to get in and learn one's way around. My reading of the project's recent history indicates that recent releases have put a big emphasis on making blocks easy to find in GRC and on improving the documentation. Case in point: every block in GRC has a complete manpage-style tooltip that pops up when the mouse cursor hovers over it, listing the block's parameters and usage.
The project has also managed to maintain an organized wiki containing both tutorial and reference documentation. Furthermore, the community produces the Comprehensive GNU Radio Archive Network (CGRAN), which is analogous to CPAN for Perl, and provides speedy access to example code in addition to full-blown GNU Radio applications. SDR is a wild world; follow any SDR-related blog and one will see daily reports on tracking satellites, picking up GSM signals, decoding digital highway signs, and just about everything else in the EM spectrum. Proof that GNU Radio is a well-organized project is the fact that it not only allows for such diverse usage, but it manages to make it comprehensible.
At dot.kde.org, Jos Poortvliet announces the availability of a "tech preview" of KDE Frameworks 5. This is the basis set of modules and libraries for the forthcoming KDE 5 generation of applications. The preview is based on Qt 5, but there are quite a few changes in the works, including redefining the frameworks as a set of Qt Addons. There is additional background information on the site for developers.
Newsletters and articles
I urge all Emacs developers to read this, then sleep on it, then read it again - not least because I think Emacs development has fallen into some of the same traps the author describes. But *that* is a discussion for another day; the conversation we need to have now is about escaping the gravitational pull of bzr's failure.
Libre Graphics World has an introduction to Valentina, an open source pattern design and editing application. Patternmaking is a niche with few options available; "fashion designers who are only starting their business are mostly locked between expensive software products they cannot afford, rather simplistic free-as-in-lunch applications, and various generic CAD systems (from affordable to pirated expensive ones) that don't make it easy"—where "expensive" evidently runs in the five-figure range. Valentina and a few other free software projects are making progress, though there is clearly quite a lot remaining to be done.
Page editor: Nathan Willis
Calls for Presentations
"We invite submissions of papers addressing all areas of audio processing and media creation based on Linux. Papers can focus on technical, artistic and scientific issues and should target developers or users. In our call for music, we are looking for works that have been produced or composed entirely/mostly using Linux."
|CFP deadline|Event date|Event|Location|
|January 10|January 18|Paris Mini Debconf 2014|Paris, France|
|January 15|February 28|FOSSASIA 2014|Phnom Penh, Cambodia|
|January 15|April 2|Libre Graphics Meeting 2014|Leipzig, Germany|
|January 17|March 26|16. Deutscher Perl-Workshop 2014|Hannover, Germany|
|January 19|May 20|PGCon 2014|Ottawa, Canada|
|January 19|March 22|Linux Info Tag|Augsburg, Germany|
|January 22|May 2|LOPSA-EAST 2014|New Brunswick, NJ, USA|
|January 28|June 19|USENIX Annual Technical Conference|Philadelphia, PA, USA|
|January 30|July 20|OSCON 2014|Portland, OR, USA|
|January 31|March 29|Hong Kong Open Source Conference 2014|Hong Kong, Hong Kong|
|January 31|March 24|Linux Storage Filesystem & MM Summit|Napa Valley, CA, USA|
|January 31|March 15|Women MiniDebConf Barcelona 2014|Barcelona, Spain|
|January 31|May 15|ScilabTEC 2014|Paris, France|
|February 1|April 29|Android Builders Summit|San Jose, CA, USA|
|February 1|April 7|ApacheCon 2014|Denver, CO, USA|
|February 1|March 26|Collaboration Summit|Napa Valley, CA, USA|
|February 3|May 1|Linux Audio Conference 2014|Karlsruhe, Germany|
|February 5|March 20|Nordic PostgreSQL Day 2014|Stockholm, Sweden|
|February 8|February 14|Linux Vacation / Eastern Europe Winter 2014|Minsk, Belarus|
|February 9|July 21|EuroPython 2014|Berlin, Germany|
|February 14|May 12|OpenStack Summit|Atlanta, GA, USA|
|February 27|August 20|USENIX Security '14|San Diego, CA, USA|
If the CFP deadline for your event does not appear here, please tell us about it.
|Real World Cryptography Workshop|NYC, NY, USA|
|QtDay Italy|Florence, Italy|
|Paris Mini Debconf 2014|Paris, France|
|January 31|CentOS Dojo|Brussels, Belgium|
|FOSDEM 2014|Brussels, Belgium|
|Config Management Camp|Gent, Belgium|
|Open Daylight Summit|Santa Clara, CA, USA|
|Django Weekend Cardiff|Cardiff, Wales, UK|
|devconf.cz|Brno, Czech Republic|
|Linux Vacation / Eastern Europe Winter 2014|Minsk, Belarus|
|conf.kde.in 2014|Gandhinagar, India|
|Southern California Linux Expo|Los Angeles, CA, USA|
|February 25|Open Source Software and Government|McLean, VA, USA|
|FOSSASIA 2014|Phnom Penh, Cambodia|
|Linaro Connect Asia|Macao, China|
|Erlang SF Factory Bay Area 2014|San Francisco, CA, USA|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds