LWN.net Weekly Edition for December 4, 2014
Checking out the OnePlus One
The CyanogenMod Android-based firmware for mobile phones has been around for five years or so now; we have looked at various versions along the way, most recently 11.0 M6 back in May. But within the last year, CyanogenMod (CM) has grown from its roots as a replacement firmware to actually being pre-installed—it now ships on the OnePlus One phone. The phones are only available for purchase by invitation (or via a Black Friday sale), but we were able to get our hands on one. Overall, the One makes for a nice showcase, both of CM's capabilities and of the hardware design work done at OnePlus.
![[Promo photo]](https://static.lwn.net/images/2014/oneplus-promo-sm.jpg)
Most everything about the phone is big—its dimensions, processor, memory, battery life, and screen are all oversized, seemingly—but its price is toward the low end. For around $350 you can get a phone with a Snapdragon 801 2.5 GHz quad-core CPU, 3GB of RAM, 64GB of storage, a 5.5-inch (14cm) 1920x1080 display, and a 3100mAh battery. The phone is noticeably bigger than my Galaxy Nexus that it replaced, but somehow doesn't seem too big. The weight is perfectly manageable, coming in at 5.7 ounces (162g). The battery life has been nothing short of phenomenal—it goes for several (four or five) days between charges with moderate usage—though it does seem to take quite some time to recharge (six or more hours). In addition, the construction seems solid, though I have thankfully avoided a drop test so far.
![[OnePlus One]](https://static.lwn.net/images/2014/oneplus-sm.jpg)
The phone runs CM 11S—a customized and expanded version of the "standard" CM 11.0, which is based on Android 4.4 (KitKat). The One and its software come from a partnership between OnePlus and Cyanogen Inc. Interestingly, OnePlus also made some arrangement with Google so that the standard apps (e.g. Maps, Play store, etc.) are shipped with the phone, rather than requiring a separate download as is the case when installing CM on other phones. OnePlus has committed to continue updating the phone software for two years; the first over-the-air update for the One came within a few days of receiving the device.
The standard theme is rather square—sparse—with icons and other elements that have simple images and sharp 90° corners. It is an interesting choice, if a little hard to get used to at first. The CM hexagon also appears: in the boot animation and "please wait" spinner, for example. All of that can be changed, of course, with a variety of free and inexpensive themes available for the phone. For anyone familiar with using Android, the One is, unsurprisingly, easy to use. There are differences from stock Android and standard CM, of course, but they largely show up in the margins—settings in particular.
![[Home screen]](https://static.lwn.net/images/2014/oneplus-homescreen-sm.png)
One of the more obvious differences is in the camera hardware and app. That combination provides many more shooting "modes" than other phone cameras I have used: things like high dynamic range (HDR), raw, posterize, sepia, and "clear image", which combines ten separate images into the final output image to produce more detail with less noise. In addition, the Gallery app shows 15 images (seen below at right) with different characteristics (though it is a bit unclear what, exactly, they represent) for editing purposes.
The sensor for the rear-facing camera is 13 megapixels—oversized for a phone, once again—while the front-facing camera is 5 megapixels. The main (rear-facing) camera has six lenses and an f/2.0 aperture for low-light picture taking. I am no photography expert (as my photos here and elsewhere will attest), but there appear to be lots of things to try out with this highly portable, "always present" camera in the coming years.
![[Gallery app]](https://static.lwn.net/images/2014/oneplus-gallery-sm.png)
As with all CM releases, it is the customization possibilities that really set it apart. The lock screen can be modified in various ways to provide shortcuts for functions like the camera or to start apps like the Chrome browser, phone, or text messaging. In addition, functions can be activated with gestures on the blank sleeping screen: "V" for the flashlight or a circle for the camera. Those can be individually enabled and disabled, though the ability to add custom gestures ("M" for maps?) would make a nice addition.
The "Profiles" feature is likely to be useful to many. One can associate a trigger, which is a particular WiFi network (by SSID) or near-field communication (NFC) tag, with a profile. Multiple preferences can be set automatically when the trigger is encountered. So, for instance, connecting to the home network might turn off the lock screen, disable mobile data, and enable data syncing. Connecting to the work network might, instead, ratchet up the security settings. There are a wide array of features that can be configured for each profile.
![[Photo from the One]](https://static.lwn.net/images/2014/oneplus-camera-sm.jpg)
Privacy Guard provides lots of control over the permissions that are granted to apps installed on the system. As was the case in our CM 11.0 M6 review, though, disallowing network access on a per-app basis is not one of the options. Disabling network access (thus, ads) might well annoy app developers (and Google itself), but there are controls to configure almost any other permission that was granted at install time. In addition, there is a wealth of information about which permissions have actually been used by each app and how recently, which should make it easier to determine which apps are sneaking around behind the owner's back—and lock them down.
The owner can unlock the bootloader in the usual way using the following command:
$ fastboot oem unlock

It is important to note that doing so will wipe all of the data off the phone, so it should only be done before doing anything else with the phone or after a backup. After that, a custom recovery image (e.g. ClockworkMod recovery) can be flashed to the device; from there, it is straightforward to switch to some other firmware. When the Lollipop-based CM 12 nightlies stabilize a bit, that seems like an obvious choice to be taken for a spin.
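Put together, the unlock-and-flash sequence described above might look like the following sketch, where recovery.img stands in for whichever custom recovery build has been downloaded for the One:

$ fastboot oem unlock                    # unlocks the bootloader; wipes all user data
$ fastboot flash recovery recovery.img   # flash a custom recovery built for the One
$ fastboot reboot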
As a debut, both for OnePlus and for pre-installed CyanogenMod, the One makes quite an impression. How, exactly, either of the two companies is making any money at that price point is rather unclear, but that is their business—literally. If you can get your hands on an invite, it is definitely a phone worth checking out.
Stunt Rally: Racing for Linux
Quality open-source racing games are not hard to come by. There's SuperTuxKart for those who like cartoonish kart racing games, Speed Dreams for something more realistic, and Extreme Tux Racer for casual gamers. Stunt Rally is another racing game that stands out from the crowd with its attention to detail, along with some whimsical tracks and vehicles.
Gameplay
![[Jungle track]](https://static.lwn.net/images/2014/stuntrally-jungle-sm.png)
Starting the game leads to a menu that could use a little aesthetic polish. There are a number of gameplay options available, but new users will probably want to start by playing a single course or by launching the tutorial. Once a vehicle and track are chosen, players compete against AI opponents. The game can be controlled with the keyboard or can be configured to use a game controller in the "Input" settings. Stunt Rally's graphics are impressive, and the focus on gravel tracks and 4-wheel-drive vehicles gives the game a nice, gritty feel.
Vehicles include several different types of cars, a futuristic spaceship, a hovercraft, and an alien spheroid starcraft. Overall, the game is fun, save for one annoyance: going off-road and landing in a ditch or deep in water doesn't lead to the vehicle respawning on the road after a few seconds, which is what I expected. Instead, you can hit a "rewind" button to go back in time; while this returns the car to the track, it doesn't lead to a penalty for the player, which did not feel right to me. Nonetheless, it's hard to complain about being able to ride a bouncy alien sphere on Mars. All told, the game is a blast.
![[Mars track]](https://static.lwn.net/images/2014/stuntrally-mars-sm.png)
There are over 150 different tracks to race on, including a desert, a jungle, the planet Mars, Greece, a metropolis shrouded by fog, and many more. Online multiplayer racing is theoretically possible using a master server list, and one can also host multiplayer games; unfortunately, not a single multiplayer match was listed during my playtime.
Technical details
Stunt Rally's lead developer is based in Poland and goes by the pseudonym Crystal Hammer. He described the game's technical details and history in an email conversation.
The game is not yet available for download from most Linux distribution repositories. This is due to a licensing issue: the entire project is open-source (under GPLv3) except for the sky textures, which have a non-commercial redistribution license. Crystal Hammer doesn't "have plans on replacing them, since I don't think there are any of such good quality and with a compatible license". He said it would be fairly easy to replace them with open-source textures, though "I suspect that would lower the game's quality".
To play the game, users must download a Linux binary tarball or Windows executable from the project's home page. The source code can be obtained from the project's Git repository.
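The project's own documentation is the authority on building from source; as a rough sketch, assuming a conventional CMake setup with the dependencies listed below already installed (the clone URL points at the project's GitHub repository, and the game's track data may need to be fetched separately):

git clone https://github.com/stuntrally/stuntrally.git
cd stuntrally
mkdir build && cd build
cmake ..
make -j"$(nproc)"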
The minimum hardware requirements are a dual-core, 2.0 GHz CPU and a GPU at least as strong as "a GeForce 9600 GT or Radeon HD 3870 with Shader Model 3.0 supported and 256 MB GPU RAM". The project notes that one can run the game on lower graphical settings with weaker hardware, and that "integrated graphics processors (from Intel and AMD) will work, but may be slow, especially the older ones." Nonetheless, I was able to play on the "High" graphical setting on my laptop with Intel HD 4000 graphics and a dual-core 2.50 GHz processor.
Crystal Hammer began Stunt Rally in 2009, when he forked the game VDrift; he saw the VDrift engine as a good base for his own work.
The project, written in C++, relies on a number of dependencies: "We use Boost, OGRE, SDL, Bullet collision (is also used in VDrift), MyGui, and for OGRE: PagedGeometry (for vegetation) and Shiny (material generator library)". For those unfamiliar with some of these tools: Boost is a collection of general-purpose C++ libraries, SDL is the Simple DirectMedia Layer (a cross-platform library commonly used for video game development among other uses), MyGUI is a graphical user interface (GUI) library for games and 3D applications, OGRE is a 3D graphics rendering engine, and Bullet is a physics engine that is widely used for things like collision detection.
The project is a true labor of love for Crystal Hammer; it's all done in his spare time, as he works full-time for a company as a C++ and C# developer. He has no intention to monetize the project, nor even to accept donations: "I may think about this again later, if a few people do want that". He particularly enjoys working on the art assets and new tracks, while he finds coding AI and realistic car damage "difficult to do (also too time consuming)". He sees Stunt Rally as substantially different from the VDrift base.
Crystal Hammer's knowledge of racing physics is self-taught, acquired by studying libraries like Bullet as well as VDrift's code base. He also read books on vehicle and tire dynamics, including materials on Pacejka's Magic Formula, which is a means to model tire forces when a vehicle is not perfectly following the curves of the road. Just working on tire physics was laborious and consumed weeks of Crystal Hammer's spare time, he said.
For those interested in contributing, there's lots of work to do. Crystal Hammer would mostly like new programmers to develop features or squash bugs, but help with localization and graphic design, as well as game testers, is also welcome. Currently, he is the only developer, so he'd appreciate the help: "There was a time, like a year, around 2012, when we were 4 guys. We still keep in contact and they still commit small patches once in a while". A roadmap page shows a list of tasks Crystal Hammer would like to have worked on, and a bug tracker is used to keep track of their progress. It'll be interesting to see what turns this racing game will take in the years to come.
A preview of darktable 1.6
The darktable project recently announced the first release-candidate (RC) builds for its upcoming version 1.6 release. The new version will add a slideshow presentation tool to darktable's primary photo-editing features, plus several new image operations and support for new digital cameras. This time around, several of the additions expand darktable's automatic-adjustment capabilities, making the application a bit more friendly for users who are new to high-end photo editing.
The first release candidate arrived on November 16 with the official version number 1.5.1. Indications from the IRC channel are that a second RC build should be expected imminently, with a final 1.6 release before the end of 2014. That would make the 1.6 release just under a year after the last stable upgrade, version 1.4, which we looked at in January.
![[darktable 1.6]](https://static.lwn.net/images/2014/11-dt-darkroom-sm.png)
The RC is tagged in the project's GitHub repository; users can download a source package from that location and compile it locally. As an alternative, binary packages are built regularly for many Linux distributions; in some cases, the packagers build the development series as well as the stable releases.
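For those compiling locally, the process follows the usual clone-and-build routine. A minimal sketch, with the caveat that the tag name below is an assumption (check the repository for the actual RC tag) and that installing the build dependencies is left out:

git clone https://github.com/darktable-org/darktable.git
cd darktable
git checkout release-1.5.1   # assumed tag name for the first RC
./build.sh                   # wrapper around the usual cmake-and-make steps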
New user-visible features include the slideshow mode, one new image-correction operation, better control over the image-import process, and enhancements to two existing tools. The slideshow mode is noteworthy for the fact that it extends darktable's feature set in a new direction—much as the addition of the geolocation "map mode" did in 1.4. The slideshow feature lets users step through an image collection ("collection," in this case, being darktable's terminology for a top-level image gallery). The feature set is comparable to that of most other slideshow tools, with automatic and manual advance.
There are clearly dozens upon dozens of applications that can present a slideshow of images these days. The advantages of using darktable's feature are that the collection shown can be generated by filtering one's image library (say, on image ratings, tags, geolocation, or any other metadata field) and that the slideshow can display images as they have been adjusted within darktable. In other words, the user can make color corrections, enhance elements, and apply filters, then run the slideshow without having to export anything first. For experimentation, this is a handy feature.
Image editing
![[darktable 1.6 defringing]](https://static.lwn.net/images/2014/11-dt-defringe-sm.png)
On the editing side (in darktable's "darkroom" mode), the new defringe image operation lets the user zero in on a specific type of color distortion: longitudinal chromatic aberration (LCA). LCA is an aberration caused by the fact that different wavelengths of light have slightly different focal lengths. In an extreme zoom shot, this is visible as a violet halo on objects next to a very bright part of the image. It is different from lateral chromatic aberration, which is the red and green fringing sometimes seen at the outside edges of an image.
![[darktable 1.6 gamut clipping]](https://static.lwn.net/images/2014/11-dt-clipping-sm.png)
Another new feature allows the user to selectively re-map the input color profile of an image into a color range more suitable for working with. Most of the time, an image's input profile (which should correspond to the camera's color space) can be easily converted to a standard working space (like AdobeRGB or L*a*b*).
But sometimes the profile conversion chosen by darktable causes some artifacts in extreme corner cases—such as highly saturated blue lights, which can end up converted to negative values—resulting in unsightly black pixels. For those situations, users can tweak the input profile settings manually to avoid such artifacts. Although experienced users may appreciate more control over the input profile settings, for many others the main benefit will be the simple "gamut clipping" option, which can instantly fix the black-pixel problem.
Several existing tools are upgraded in the new release. Most prominent is the basecurve tool, which is used to apply a "base" tone curve to the raw sensor data in an image file. Darktable's tool now includes an array of preset basecurves that correspond to various camera-maker presets. The manufacturers apply these curves in-camera when saving JPEG images, so by including such presets, darktable can create a tone curve that makes a raw image file match the in-camera JPEG. Of course, the manufacturer's preset may not be to the user's liking; luckily it can be deactivated if desired, and other tools used to adjust the image.
![[darktable 1.6 basecurve presets]](https://static.lwn.net/images/2014/11-dt-basecurve-sm.png)
For a lot of users, though, automatically adjusting raw files to match the camera's JPEG images is a major convenience. Many people shoot in RAW+JPEG mode, and even those who do not are used to seeing the in-camera JPEGs used as thumbnails. The notion of automatically doing the useful thing has also been applied to the levels (i.e., basic histogram) tool, which can now estimate a good setting automatically, rather than requiring the user to manually adjust levels.
![[darktable 1.6 sliders]](https://static.lwn.net/images/2014/11-dt-sliders-sm.png)
Finally, several of the existing tools have historically sported adjustment sliders that stopped at some arbitrarily chosen minimum and maximum values. This is easiest to see in the exposure-compensation slider, which runs from -3 to +3 (EV) by default. Those limits are usually sensible, but darktable now offers a way to get around them: right-clicking on the slider allows the user to enter any numeric value. The slider readjusts its scale to match the entered value.
Further polish
Beyond the tool set, the new darktable RC also extends the application's functionality in some lower-level features. For example, the new release supports "huge" image sizes—specifically, those requiring more than 32-bit pixel indexes, which means more than 4 gigapixels (a 65,536x65,536 image, for instance). Fortunately for those who wrestle with such enormous pictures, darktable now makes better use of multiple processor cores: color conversion and exporting to OpenEXR images are now multi-core operations. The application can also embed color profiles in PNG and TIFF image files, a capability it previously lacked.
One lower-level feature addition that will be an immediate boon to certain customers is support for Fujifilm's X-Trans image sensor. The X-Trans series uses a different pattern to arrange the red, green, and blue subpixels. Without explicit support for the design, raw images from many Fujifilm cameras are unusable.
Speaking of raw format support, darktable now uses the rawspeed library for image-file decoding, rather than LibRaw (although, like LibRaw, rawspeed builds on the same dcraw basic decoding functions used by most free-software photo editors). Rawspeed is a subproject from the same team that works on the competing photo editor Rawstudio; regardless of which editor one prefers, it is always refreshing to see such projects working together.
On the whole, darktable continues to improve with each release; in addition to adding new tools and editing features, the project is making steady improvements to usability—a process that will be appreciated by new and experienced users alike.
Touring the hidden corners of LWN
One of the more surprising outcomes (to us) of the recent systemd "debates" in our comments section was finding out that some subscribers did not know of our comment filtering feature. Subscribers have been able to filter out specific commenters since 2010, but knowledge of that feature seems to have dissipated over time. We certainly could do a better job of documenting all of our features, so this seems like a good time both to introduce a couple of new features and to refresh people's memories of some of the features we already offer.
New stuff
To start with, there are some new features to investigate. Inspired by some of the suggestions about our comment-filtering feature, we have now added the ability to filter out comments from non-subscribers (i.e. guests). As with configuring anything about comment filters (or any other LWN customization), visit the "My Account" page. The controls for the feature are under the "Comment Filtering" heading. Comment filtering is available for all subscribers at the "professional hacker" level or above.
As with filtering individual users, the guest filtering provides a JavaScript-based interface that will show the presence of comments, the number of replies, and the filtered comment's author. Clicking on the "+" icon will expose the comments (and any replies); the comment subtree can be collapsed again by using the "-" icon.
A much more wide-ranging change is that we are working on a new, responsive design for LWN—one that will scale well from small, high-DPI screens on phones and tablets up to desktop screens of varying resolutions. We offered a preview of that functionality to our "maniacal supporter" subscribers recently—we are now ready to give all of our subscribers a look.
To try it out, subscribers at any level can visit the "Customization" page from "My Account". Under "Display Preferences" there is an option to "Use the new (in-development) page engine"; simply check that box and save your preferences to see how things look. We are most definitely interested in feedback, especially regarding how it looks and works on the vast array of different devices out there. Please send any comments to the "sitecode" email address at lwn.net.
While there may not be many subscribers who are using Internet Explorer 8 to access LWN, a warning is in order for any that are. The new display code does not yet work correctly with IE 8.
Oldies but goodies
Another customization feature that has been around for a bit is the "Display old parent in unread comments screen" option, which shows some more context (the parent comment) when displaying unread comments. It is located in the "Display preferences" section of the "Account customization" screen. Subscribers at any level have access to the unread comments feature, so they can also set this option.
For those who get annoyed by the ads we show—count us among them at times—it is possible to turn off all advertisements for subscribers at the "professional hacker" level and above. That option can be found in the "Advertising preferences" section of the customization page.
Another feature that readers often miss is our mailing lists. We have two for subscribers: "Daily" and "Daily Headlines". Each of those sends at most one message per day with the news items (or headlines) posted that day. The "Notify" and "Just freed" lists are for anyone; "Notify" is a once-per-week notification that the weekly edition is available, while "Just freed" will send one message on any day where content has come out from behind the LWN paywall. Subscriptions to those lists can be adjusted in the "Mailing lists" section of your account page.
We also have a variety of RSS feeds. In addition, things posted to our daily page are also echoed in our Twitter feed.
Keeping up with the conferences and other events in our community is made easier with the LWN community calendar, which we maintain with lots of help from our readers. In addition, CFPs (calls for papers or proposals) can be tracked in the LWN CFP deadline calendar. Both calendars are summarized for the next few months in each week's Announcements page. As always, if your favorite event does not appear there, please submit it for inclusion.
The latest weekly edition always has new content for our subscribers, but we try to make it easy to find older content as well. Our "Archives" page is a good place to start. It has links to the ten most recent weekly editions, but it also links to several indexes that may be useful. For example, our conference coverage is indexed by conference name and by year; we have an index of guest author articles as well. Finally, both our kernel and security articles have their own indexes.
One more site "feature" that bears mentioning: the subscription page. All of the content and features you see here were supported almost entirely by our subscribers—many thanks to you! If you like what you see here and aren't a subscriber yet, please consider changing that. We have been reporting on the Linux and free software world for 16 years now and have been subscriber-supported for 12 of those years. We'd like to continue for many more, but can only do that with your support.
Do you have a favorite LWN feature that we missed listing here? Let's hear about it in the comments. The same goes for feature requests, though more complicated or elaborate changes are probably best sent to our inbox: the "lwn" alias here at lwn.net. We probably can only get to a small fraction of your suggestions, but our "ears" are certainly open.
Security
The GnuPG 2.1 release
GNU Privacy Guard (GnuPG) is the best-known free-software implementation of the OpenPGP cryptography standard. For the past few years, the GnuPG project has actively maintained its stable branch (version 2.0.x) and its "classic" branch (version 1.4), while also working on a more modern replacement that implements several important improvements. In early November, the project made its first official release of this development code: GnuPG 2.1.0. There are quite a few interesting changes to be found in version 2.1, although the decision to switch over from the 2.0 series should still be carefully considered.
The new release is available as source code bundles directly from the GnuPG project. Despite several beta releases of version 2.1 over the years (the first was in 2010), the project still emphasizes that the 2.1 series has not yet been subjected to extensive real-world testing. Nevertheless, it is referring to 2.1.0 as the "modern" series, rather than as "unstable" or some other designation suggesting that it is not ready for deployment.
It is vital to note, however, that version 2.1 cannot be installed simultaneously with the 2.0 series. In addition to affecting those users who are interested in compiling the new release for themselves, this also means it is likely to be some time before binary 2.1 packages make their way into many Linux distributions. The "classic" 1.4 series, though, can be installed alongside either GnuPG 2.0 or 2.1.
Interfaces and key storage
Several changes in 2.1 will be noticed immediately by GnuPG users because they introduce interface changes to the command set and differences in how secret material is stored. For example, previous GnuPG versions have all stored key pairs in two separate files. The secring.gpg file contained both the public and private keys for a user's key pairs, while the pubring.gpg file contained just the public half of those same pairs. That design decision meant that GnuPG had to work to ensure that the two files remained in sync, increasing code complexity.
The new design does away with the two-file setup and keeps private keys inside a key-store directory (~/.gnupg/private-keys-v1.d). In addition, the code required to manage the secring.gpg file has been factored out of the gpg binary. Instead, secret key management is handled entirely by the gpg-agent daemon. The new design also enables some other long-requested features, such as the ability to import a subkey into an existing secret key. gpg-agent is now started on demand by the GnuPG tools, whereas in past releases users needed to start it manually or add it to a session-startup script.
The storage of public keys has also changed in the new release. GnuPG 2.1 stores public keys in a "keybox" file that was originally developed for GnuPG's S/MIME tool, gpgsm. It is optimized for read efficiency; since the public keys a user has on file typically outnumber the private keys (often by a large margin), providing fast access to the public key store is important.
Several of the GnuPG command-line tools have also received a refresh. In particular, the key-generation interface is now quicker to work through: users need only enter a name and email address, while the many other possible parameters for a key are filled in with default values (which is likely to reduce errors in addition to saving time). This quick-generation behavior is used when gpg2 --gen-key is invoked; the full interface as found in earlier releases can be triggered with gpg2 --full-gen-key.
Other conveniences for key-generation are found in the new release. First, there are now "quick" versions of the key-generation and key-signing commands, developed in order to save time when performing repetitive tasks. Running
gpg2 --quick-gen-key 'John Doe <doe@example.net>'
or
gpg2 --quick-sign-key '1234 5678 90AB CDEF 1234 5678'
will prompt the user for a yes/no confirmation, but will otherwise perform the requested operations without further questions. Both commands, though, do perform basic sanity checks and will warn the user if (for example) asked to create a key for a name/email pair that already exists.
Second, key-revocation certificates are now created by default and saved in the directory ~/.gnupg/openpgp-revocs.d/. Each revocation certificate even includes brief instructions for usage at the top of the file. Since the preparation of revocation certificates before they are needed falls under the "good ideas that are easy to forget" umbrella, this is likely a change many users will appreciate.
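One can confirm that a certificate was written by listing the new directory; the fingerprint-derived filename shown here is hypothetical:

$ ls ~/.gnupg/openpgp-revocs.d/
0123456789ABCDEF0123456789ABCDEF01234567.rev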
Finally, the command-line key listing format has been changed to be more informative. For traditional encryption algorithms, the algorithm name has been reformatted for clarity (e.g., dsa2048 rather than 2048D). For elliptic curve cryptography (ECC), the name of the curve is displayed, rather than the algorithm.
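As a rough illustration of the difference, in comment form since the exact listing output varies (the "ed25519" curve name is an assumption for illustration):

$ gpg2 --list-keys
# A 2048-bit DSA key that earlier releases listed as "2048D" now
# appears as "dsa2048"; an ECC key is listed by its curve name,
# such as "ed25519", rather than by an algorithm code.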
Ellipses ...
ECC support, of course, is another major feature that debuts in GnuPG 2.1—for some users, it may even be the most significant change. According to the release notes, GnuPG 2.1 is the first "mainstream" implementation of public-key ECC in an OpenPGP tool, a fact that has both an upside and a downside. The downside, naturally, is that ECC keys are not widely deployed. The upside is that GnuPG's support for ECC should make deploying such keys relatively easy.
Nevertheless, GnuPG 2.1 still hides the ECC key-generation option by default. Users must use the --full-gen-key option and add the --expert flag to see it. ECC support is an OpenPGP extension documented in RFC 6637.
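Generating such a key therefore looks like the following; the curve itself is chosen from the menus that the expert interface presents:

$ gpg2 --expert --full-gen-key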
At the moment, GnuPG supports seven different ECC curves: Curve25519, NIST P-256, NIST P-384, NIST P-521, Brainpool P-256, Brainpool P-384, and Brainpool P-512. The Curve25519 support, for now, is limited to digital signatures; encryption is not yet available. Curve25519 is not part of the OpenPGP standard (although IETF approval is expected by many to arrive someday), but its inclusion is still noteworthy. It is regarded by many in the community as safer than the NIST (US National Institute of Standards and Technology) and Brainpool curves, which are suspected of being vulnerable to US government codebreakers.
On the subject of bad cryptography, all support for PGP-2 keys has been removed in GnuPG 2.1. PGP-2 keys are no longer regarded as safe, in particular because the algorithms mandate the use of the MD5 hash function. GnuPG 2.1 will no longer import PGP-2 keys, and the project recommends that users keep a copy of GnuPG 1.4 on hand if they need to decrypt data that has been previously encrypted with a PGP-2 key.
Additional features
There are, of course, many other smaller feature additions and enhancements to be found in the new release. X.509 certificate creation has been improved in a number of ways, for example. Users can create self-signed certificates, create batches of certificates based on a parameter file, and export certificates directly to PKCS#8 or PKCS#1 format. This last feature allows users to create certificates for immediate use with OpenSSL servers (requiring no conversion); the batch-generation mode mirrors a feature already found in OpenSSL.
Smartcard support has been updated, with support for several new card-reader devices and hardware token types. Most notable on this front are the ability to use USB sticks with a built-in smartcard exactly like other smartcard devices and full support for Gnuk tokens (a free-software cryptographic token based on the STM32F103 microcontroller).
Finally, there have been several changes to the way GnuPG interoperates with keyservers. In prior releases, GnuPG spawned temporary processes to connect to remote keyservers—which meant that the program could not maintain any persistent state about the keyserver. The new release merges in a formerly separate project called dirmngr that was previously limited to interacting with X.509 servers; it now manages keyserver connections as well.
One immediate benefit of using dirmngr to mediate keyserver access is that it can properly cope with keyserver pools. The issue is that keyserver pools tend to be configured in round-robin arrangements, which works well enough until the specific keyserver GnuPG has connected to goes down or becomes unreachable. In prior releases, GnuPG would continue trying to access such an unreachable keyserver until the DNS entry for it expired. Dirmngr, in contrast, flags unreachable keyservers and sends another DNS lookup request to the pool—which should return a new, working host in considerably less time.
A security-critical program like GnuPG obviously warrants a high degree of scrutiny before a new release is adopted. To be sure, no one wants to migrate their company to a new PGP key format only to discover that there is a serious cryptographic flaw in the implementation of the new cipher. That said, there are certainly many new benefits to be found in GnuPG 2.1 over the 2.0 series. With luck, widespread vetting will come quickly, so that more users can take advantage of ECC, updated smartcard support, and the many interface improvements offered.
Brief items
Security quotes of the week
Four-year-old comment security bug affects 86 percent of WordPress sites (Ars Technica)
Ars Technica reports on a recently discovered bug in WordPress 3 sites that could be used to launch malicious script-based attacks on site visitors’ browsers. "The vulnerability, discovered by Jouko Pynnonen of Klikki Oy, allows an attacker to craft a comment on a blog post that includes malicious JavaScript code. On sites that allow comments without authentication—the default setting for WordPress—this could allow anyone to post malicious scripts within comments that could target site visitors or administrators. A proof of concept attack developed by Klikki Oy was able to hijack a WordPress site administrator’s session and create a new WordPress administrative account with a known password, change the current administrative password, and launch malicious PHP code on the server. That means an attacker could essentially lock the existing site administrator out and hijack the WordPress installation for malicious purposes." WordPress 4.0 is not vulnerable to the attack.
New vulnerabilities
apparmor: privilege escalation
Package(s): apparmor
CVE #(s): CVE-2014-1424
Created: November 21, 2014
Updated: December 3, 2014
Description: From the Ubuntu advisory: An AppArmor policy miscompilation flaw was discovered in apparmor_parser. Under certain circumstances, a malicious application could use this flaw to perform operations that are not allowed by AppArmor policy. The flaw may also prevent applications from accessing resources that are allowed by AppArmor policy.
asterisk: denial of service
Package(s): asterisk
CVE #(s): CVE-2014-6610
Created: November 21, 2014
Updated: December 3, 2014
Description: From the Mandriva advisory: Remote crash when handling out of call message in certain dialplan configurations (CVE-2014-6610).
asterisk: multiple vulnerabilities
Package(s): asterisk
CVE #(s): (none)
Created: November 21, 2014
Updated: December 3, 2014
Description: From the Mandriva advisory: Mixed IP address families in access control lists may permit unwanted traffic. High call load may result in hung channels in ConfBridge. Permission escalation through ConfBridge actions/dialplan functions.
chromium-browser: two vulnerabilities
Package(s): chromium-browser
CVE #(s): CVE-2014-7899 CVE-2014-7906
Created: November 25, 2014
Updated: December 3, 2014
Description: From the CVE entries: Google Chrome before 38.0.2125.101 allows remote attackers to spoof the address bar by placing a blob: substring at the beginning of the URL, followed by the original URI scheme and a long username string. (CVE-2014-7899) Use-after-free vulnerability in the Pepper plugins in Google Chrome before 39.0.2171.65 allows remote attackers to cause a denial of service or possibly have unspecified other impact via crafted Flash content that triggers an attempted PepperMediaDeviceManager access outside of the object's lifetime. (CVE-2014-7906)
clamav: denial of service
Package(s): clamav
CVE #(s): CVE-2013-6497
Created: November 20, 2014
Updated: December 3, 2014
Description: From the Mandriva advisory: Certain javascript files cause ClamAV to segfault when scanned with the -a (list archived files) option (CVE-2013-6497).
clamav: buffer overflow
Package(s): clamav
CVE #(s): CVE-2014-9050
Created: November 26, 2014
Updated: December 11, 2014
Description: From the Mageia advisory: A heap buffer overflow was reported in ClamAV when scanning a specially crafted y0da Crypter obfuscated PE file.
drupal7: multiple vulnerabilities
Package(s): drupal7
CVE #(s): CVE-2014-9015 CVE-2014-9016
Created: November 21, 2014
Updated: December 3, 2014
Description: From the Debian advisory: CVE-2014-9015 - Aaron Averill discovered that a specially crafted request can give a user access to another user's session, allowing an attacker to hijack a random session. CVE-2014-9016 - Michael Cullum, Javier Nieto and Andres Rojas Guerrero discovered that the password hashing API allows an attacker to send specially crafted requests resulting in CPU and memory exhaustion. This may lead to the site becoming unavailable or unresponsive (denial of service).
drupal: cross-site scripting
Package(s): drupal6
CVE #(s): CVE-2012-6662
Created: December 3, 2014
Updated: December 3, 2014
Description: From the CVE entry: Cross-site scripting (XSS) vulnerability in the default content option in jquery.ui.tooltip.js in the Tooltip widget in jQuery UI before 1.10.0 allows remote attackers to inject arbitrary web script or HTML via the title attribute, which is not properly handled in the autocomplete combo box demo.
erlang: command injection
Package(s): erlang
CVE #(s): CVE-2014-1693
Created: December 2, 2014
Updated: March 30, 2015
Description: From the Red Hat bugzilla: An FTP command injection flaw was found in Erlang's FTP module. Several functions in the FTP module do not properly sanitize the input before passing it into a control socket. A local attacker can use this flaw to execute arbitrary FTP commands on a system that uses this module.
facter: privilege escalation
Package(s): facter
CVE #(s): CVE-2014-3248
Created: November 24, 2014
Updated: December 29, 2014
Description: From the CVE entry: Untrusted search path vulnerability in Puppet Enterprise 2.8 before 2.8.7, Puppet before 2.7.26 and 3.x before 3.6.2, Facter 1.6.x and 2.x before 2.0.2, Hiera before 1.3.4, and Mcollective before 2.5.2, when running with Ruby 1.9.1 or earlier, allows local users to gain privileges via a Trojan horse file in the current working directory, as demonstrated using (1) rubygems/defaults/operating_system.rb, (2) Win32API.rb, (3) Win32API.so, (4) safe_yaml.rb, (5) safe_yaml/deep.rb, or (6) safe_yaml/deep.so; or (7) operatingsystem.rb, (8) operatingsystem.so, (9) osfamily.rb, or (10) osfamily.so in puppet/confine.
ffmpeg: multiple vulnerabilities
Package(s): ffmpeg
CVE #(s): CVE-2014-5271 CVE-2014-5272 CVE-2014-8541 CVE-2014-8542 CVE-2014-8543 CVE-2014-8544 CVE-2014-8545 CVE-2014-8546 CVE-2014-8547 CVE-2014-8548
Created: November 21, 2014
Updated: December 3, 2014
Description: From the Mageia advisory: A heap-based buffer overflow in the encode_slice function in libavcodec/proresenc_kostya.c in FFmpeg before 2.0.6 can cause a crash, allowing a malicious image file to cause a denial of service (CVE-2014-5271). libavcodec/iff.c in FFmpeg before 2.0.6 allows an attacker to have an unspecified impact via a crafted iff image, which triggers an out-of-bounds array access, related to the rgb8 and rgbn formats (CVE-2014-5272). libavcodec/mjpegdec.c in FFmpeg before 2.0.6 considers only dimension differences, and not bits-per-pixel differences, when determining whether an image size has changed, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted MJPEG data (CVE-2014-8541). libavcodec/utils.c in FFmpeg before 2.0.6 omits a certain codec ID during enforcement of alignment, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted JV data (CVE-2014-8542). libavcodec/mmvideo.c in FFmpeg before 2.0.6 does not consider all lines of HHV Intra blocks during validation of image height, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted MM video data (CVE-2014-8543). libavcodec/tiff.c in FFmpeg before 2.0.6 does not properly validate bits-per-pixel fields, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted TIFF data (CVE-2014-8544). libavcodec/pngdec.c in FFmpeg before 2.0.6 accepts the monochrome-black format without verifying that the bits-per-pixel value is 1, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted PNG data (CVE-2014-8545). Integer underflow in libavcodec/cinepak.c in FFmpeg before 2.0.6 allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted Cinepak video data (CVE-2014-8546). libavcodec/gifdec.c in FFmpeg before 2.0.6 does not properly compute image heights, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted GIF data (CVE-2014-8547). Off-by-one error in libavcodec/smc.c in FFmpeg before 2.0.6 allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted Quicktime Graphics (aka SMC) video data (CVE-2014-8548).
flac: multiple vulnerabilities
Package(s): flac
CVE #(s): CVE-2014-8962 CVE-2014-9028
Created: November 28, 2014
Updated: August 18, 2015
Description: From the CVE entries: Stack-based buffer overflow in stream_decoder.c in libFLAC before 1.3.1 allows remote attackers to execute arbitrary code via a crafted .flac file. (CVE-2014-8962) Heap-based buffer overflow in stream_decoder.c in libFLAC before 1.3.1 allows remote attackers to execute arbitrary code via a crafted .flac file. (CVE-2014-9028)
glibc: code execution
Package(s): glibc
CVE #(s): CVE-2014-7817
Created: November 27, 2014
Updated: March 4, 2015
Description: From the Mageia advisory: The function wordexp() fails to properly handle the WRDE_NOCMD flag when processing arithmetic inputs in the form of "$((... ``))" where "..." can be anything valid. The backticks in the arithmetic expression are evaluated in a shell even if WRDE_NOCMD forbade command substitution. This allows an attacker to attempt to pass dangerous commands via constructs of the above form, and bypass the WRDE_NOCMD flag. This update fixes the issue (CVE-2014-7817).
icecast: information leak
Package(s): icecast
CVE #(s): CVE-2014-9018
Created: November 27, 2014
Updated: December 8, 2014
Description: From the Mageia advisory: Icecast did not properly handle the launching of "scripts" on connect or disconnect of sources. This could result in sensitive information from these scripts leaking to (external) clients. (CVE-2014-9018)
imagemagick: denial of service
Package(s): imagemagick
CVE #(s): CVE-2014-8716
Created: November 24, 2014
Updated: December 3, 2014
Description: From the Mageia advisory: ImageMagick is vulnerable to a denial of service due to out-of-bounds memory accesses in the JPEG decoder.
java-1.6.0-ibm: privilege escalation
Package(s): java-1.6.0-ibm
CVE #(s): CVE-2014-3065
Created: November 20, 2014
Updated: December 3, 2014
Description: From the Red Hat advisory: CVE-2014-3065 IBM JDK: privilege escalation via shared class cache
kdebase4-runtime, kwebkitpart: code execution
Package(s): kdebase4-runtime
CVE #(s): CVE-2014-8600
Created: November 21, 2014
Updated: December 8, 2014
Description: From the Mageia advisory: kwebkitpart and the bookmarks:// io slave were not sanitizing input correctly, allowing some JavaScript to be executed in the context of the referenced hostname (CVE-2014-8600).
kernel: multiple vulnerabilities
Package(s): kernel
CVE #(s): CVE-2014-7843 CVE-2014-7842 CVE-2014-7841 CVE-2014-7826 CVE-2014-7825
Created: November 21, 2014
Updated: March 3, 2015
Description: From the Red Hat bug reports: CVE-2014-7843 - It was found that a read of n*PAGE_SIZE+1 from /dev/zero will cause the kernel to panic due to an unhandled exception since it's not handling the single byte case with a fixup (anything larger than a single byte will properly fault.) A local, unprivileged user could use this flaw to crash the system. CVE-2014-7842 - It was found that reporting emulation failures to user space can lead to either local or L2->L1 DoS. In the case of local DoS the attacker needs access to the MMIO area or be able to generate port access. Please note that on certain systems HPET is mapped to userspace as part of vdso (vvar) and thus an unprivileged user may generate MMIO transactions (and enter the emulator) this way. CVE-2014-7841 - An SCTP server doing ASCONF will panic on malformed INIT ping-of-death in the form of: ------------ INIT[PARAM: SET_PRIMARY_IP] ------------> A remote attacker could use this flaw to crash the system by sending a maliciously prepared SCTP packet in order to trigger a NULL pointer dereference on the server. From the CVE entries: CVE-2014-7826 - kernel/trace/trace_syscalls.c in the Linux kernel through 3.17.2 does not properly handle private syscall numbers during use of the ftrace subsystem, which allows local users to gain privileges or cause a denial of service (invalid pointer dereference) via a crafted application. CVE-2014-7825 - kernel/trace/trace_syscalls.c in the Linux kernel through 3.17.2 does not properly handle private syscall numbers during use of the perf subsystem, which allows local users to cause a denial of service (out-of-bounds read and OOPS) or bypass the ASLR protection mechanism via a crafted application.
krb5: ticket forgery
Package(s): krb5
CVE #(s): CVE-2014-5351
Created: November 21, 2014
Updated: March 9, 2015
Description: From the Mageia advisory: The kadm5_randkey_principal_3 function in lib/kadm5/srv/svr_principal.c in kadmind in MIT Kerberos 5 (aka krb5) before 1.13 sends old keys in a response to a -randkey -keepold request, which allows remote authenticated users to forge tickets by leveraging administrative access.
libksba: denial of service
Package(s): libksba
CVE #(s): CVE-2014-9087
Created: November 27, 2014
Updated: March 29, 2015
Description: From the Mageia advisory: By using specially crafted S/MIME messages or ECC based OpenPGP data, it is possible to create a buffer overflow, which could lead to a denial of service (CVE-2014-9087).
libreoffice: code execution
Package(s): libreoffice
CVE #(s): (none)
Created: November 24, 2014
Updated: December 3, 2014
Description: From the freedesktop.org bug report: Crash while importing malformed .rtf file. According to valgrind there are several invalid writes, including near malloc'd block. Seems to be potentially exploitable.
lsyncd: command injection
Package(s): lsyncd
CVE #(s): CVE-2014-8990
Created: December 3, 2014
Updated: February 13, 2017
Description: From the Red Hat bugzilla: It was reported that lsyncd is vulnerable to command injection. If a filename contains backticks ("`"), whatever is between the backticks will be executed with lsyncd process privileges.
mariadb: denial of service
Package(s): mariadb
CVE #(s): CVE-2014-6564
Created: November 21, 2014
Updated: December 12, 2014
Description: From the CVE entry: Unspecified vulnerability in Oracle MySQL Server 5.6.19 and earlier allows remote authenticated users to affect availability via vectors related to SERVER:INNODB FULLTEXT SEARCH DML.
mod-wsgi: privilege escalation
Package(s): mod-wsgi
CVE #(s): CVE-2014-8583
Created: December 3, 2014
Updated: December 30, 2016
Description: From the Ubuntu advisory: It was discovered that mod_wsgi incorrectly handled errors when setting up the working directory and group access rights. A malicious application could possibly use this issue to cause a local privilege escalation when using daemon mode.
moodle: multiple vulnerabilities
Package(s): moodle
CVE #(s): CVE-2014-7830 CVE-2014-7832 CVE-2014-7833 CVE-2014-7834 CVE-2014-7835 CVE-2014-7836 CVE-2014-7837 CVE-2014-7838 CVE-2014-7845 CVE-2014-7846 CVE-2014-7847 CVE-2014-7848
Created: November 24, 2014
Updated: December 3, 2014
Description: From the Mageia advisory: In Moodle before 2.6.5, an XSS issue through $searchcourse in mod/feedback/mapcourse.php, due to the last search string in the Feedback module not being escaped in the search input field (CVE-2014-7830). In Moodle before 2.6.5, the word list for temporary password generation was short, therefore the pool of possible passwords was not big enough (CVE-2014-7845). In Moodle before 2.6.5, capability checks in the LTI module only checked access to the course and not to the activity (CVE-2014-7832). In Moodle before 2.6.5, group-level entries in Database activity module became visible to users in other groups after being edited by a teacher (CVE-2014-7833). In Moodle before 2.6.5, unprivileged users could access the list of available tags in the system (CVE-2014-7846). In Moodle before 2.6.5, the script used to geo-map IP addresses was available to unauthenticated users increasing server load when used by other parties (CVE-2014-7847). In Moodle before 2.6.5, when using the web service function for Forum discussions, group permissions were not checked (CVE-2014-7834). In Moodle before 2.6.5, by directly accessing an internal file, an unauthenticated user can be shown an error message containing the file system path of the Moodle install (CVE-2014-7848). In Moodle before 2.6.5, if web service with file upload function was available, user could upload XSS file to his profile picture area (CVE-2014-7835). In Moodle before 2.6.5, two files in the LTI module lacked a session key check, potentially allowing cross-site request forgery (CVE-2014-7836). In Moodle before 2.6.5, by tweaking URLs, users who were able to delete pages in at least one Wiki activity in the course were able to delete pages in other Wiki pages in the same course (CVE-2014-7837). In Moodle before 2.6.5, set tracking script in the Forum module lacked a session key check, potentially allowing cross-site request forgery (CVE-2014-7838).
mozilla: multiple vulnerabilities
Package(s): | firefox thunderbird seamonkey | CVE #(s): | CVE-2014-1587 CVE-2014-1590 CVE-2014-1592 CVE-2014-1593 CVE-2014-1594 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Created: | December 3, 2014 | Updated: | February 3, 2015 | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
Description: | From the Red Hat advisory:
Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2014-1587, CVE-2014-1590, CVE-2014-1592, CVE-2014-1593) A flaw was found in the Alarm API, which could allow applications to schedule actions to be run in the future. A malicious web application could use this flaw to bypass the same-origin policy. (CVE-2014-1594) | ||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
mozilla: multiple vulnerabilities
Package(s): firefox thunderbird seamonkey
CVE #(s): CVE-2014-1588 CVE-2014-1589 CVE-2014-1591
Created: December 3, 2014
Updated: February 3, 2015
Description: From the Ubuntu advisory:
Gary Kwong, Randell Jesup, Nils Ohlmeier, Jesse Ruderman, Max Jonas Werner, Christian Holler, Jon Coppeard, Eric Rahm, Byron Campen, Eric Rescorla, and Xidorn Quan discovered multiple memory safety issues in Firefox. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to cause a denial of service via application crash, or execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2014-1588)
Cody Crews discovered a way to trigger chrome-level XBL bindings from web content in some circumstances. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to bypass security restrictions. (CVE-2014-1589)
Muneaki Nishimura discovered that CSP violation reports did not remove path information in some circumstances. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to obtain sensitive information. (CVE-2014-1591)
mutt: denial of service
Package(s): mutt
CVE #(s): CVE-2014-9116
Created: December 1, 2014
Updated: January 2, 2017
Description: From the Debian advisory:
A flaw was discovered in mutt, a text-based mailreader. A specially crafted mail header could cause mutt to crash, leading to a denial of service condition.
openssl: TLS handshake problem
Package(s): openssl
CVE #(s): (none)
Created: November 24, 2014
Updated: December 3, 2014
Description: From the openSUSE bug report:
openssl-1.0.1i-2.1.4, which comes with openSUSE 13.2, is configured with 'no-ec2m'. This exposes a bug in openssl that lets the client advertise a non-prime-field curve that it does not actually support.
openstack-neutron: denial of service
Package(s): openstack-neutron
CVE #(s): CVE-2014-7821
Created: December 3, 2014
Updated: April 22, 2015
Description: From the CVE entry:
OpenStack Neutron before 2014.1.4 and 2014.2.x before 2014.2.1 allows remote authenticated users to cause a denial of service (crash) via a crafted dns_nameservers value in the DNS configuration.
openstack-trove: information disclosure
Package(s): openstack-trove
CVE #(s): CVE-2014-7231
Created: December 3, 2014
Updated: December 3, 2014
Description: From the CVE entry:
The strutils.mask_password function in the OpenStack Oslo utility library, Cinder, Nova, and Trove before 2013.2.4 and 2014.1 before 2014.1.3 does not properly mask passwords when logging commands, which allows local users to obtain passwords by reading the log.
openvpn: denial of service
Package(s): openvpn
CVE #(s): CVE-2014-8104
Created: December 2, 2014
Updated: March 29, 2015
Description: From the Debian advisory:
Dragana Damjanovic discovered that an authenticated client could crash an OpenVPN server by sending a control packet containing less than four bytes as payload.
oxide-qt: multiple vulnerabilities
Package(s): oxide-qt
CVE #(s): CVE-2014-7904 CVE-2014-7907 CVE-2014-7908 CVE-2014-7909 CVE-2014-7910
Created: November 20, 2014
Updated: December 3, 2014
Description: From the Ubuntu advisory:
A buffer overflow was discovered in Skia. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to cause a denial of service via renderer crash or execute arbitrary code with the privileges of the sandboxed render process. (CVE-2014-7904)
Multiple use-after-frees were discovered in Blink. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to cause a denial of service via renderer crash or execute arbitrary code with the privileges of the sandboxed render process. (CVE-2014-7907)
An integer overflow was discovered in media. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to cause a denial of service via renderer crash or execute arbitrary code with the privileges of the sandboxed render process. (CVE-2014-7908)
An uninitialized memory read was discovered in Skia. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to cause a denial of service via renderer crash. (CVE-2014-7909)
Multiple security issues were discovered in Chromium. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to read uninitialized memory, cause a denial of service via application crash or execute arbitrary code with the privileges of the user invoking the program. (CVE-2014-7910)
phpmyadmin: multiple vulnerabilities
Package(s): phpmyadmin
CVE #(s): CVE-2014-8958 CVE-2014-8959 CVE-2014-8960 CVE-2014-8961
Created: November 26, 2014
Updated: December 3, 2014
Description: From the Mandriva advisory:
Multiple vulnerabilities have been discovered and corrected in phpmyadmin:
* Multiple XSS vulnerabilities (CVE-2014-8958).
* Local file inclusion vulnerability (CVE-2014-8959).
* XSS vulnerability in error reporting functionality (CVE-2014-8960).
* Leakage of line count of an arbitrary file (CVE-2014-8961).
This upgrade provides the latest phpmyadmin version (4.2.12) to address these vulnerabilities.
php-smarty: cross-site scripting
Package(s): php-smarty
CVE #(s): CVE-2012-4437
Created: November 24, 2014
Updated: December 3, 2014
Description: From the CVE entry:
Cross-site scripting (XSS) vulnerability in the SmartyException class in Smarty (aka smarty-php) before 3.1.12 allows remote attackers to inject arbitrary web script or HTML via unspecified vectors that trigger a Smarty exception.
privoxy: denial of service
Package(s): privoxy
CVE #(s): (none)
Created: November 21, 2014
Updated: December 3, 2014
Description: From the Mageia advisory:
The logrotate configuration of the privoxy package did not function properly, causing its log files not to be rotated. The log file(s) could potentially fill up the disk.
python-djblets: cross-site scripting
Package(s): python-djblets
CVE #(s): CVE-2014-3995
Created: November 21, 2014
Updated: December 3, 2014
Description: From the Mageia advisory:
Cross-site scripting (XSS) vulnerability in gravatars/templatetags/gravatars.py in Djblets before 0.7.30 for Django allows remote attackers to inject arbitrary web script or HTML via a user display name (CVE-2014-3995).
python-imaging, python-pillow: code execution
Package(s): python-imaging, python-pillow
CVE #(s): CVE-2014-3007
Created: November 21, 2014
Updated: December 3, 2014
Description: From the Mageia advisory:
Python Image Library (PIL) 1.1.7 and earlier and Pillow 2.3 might allow remote attackers to execute arbitrary commands via shell metacharacters, due to an incomplete fix for CVE-2014-1932 (CVE-2014-3007).
ruby: denial of service
Package(s): ruby
CVE #(s): CVE-2014-8090
Created: November 21, 2014
Updated: December 3, 2014
Description: From the Mageia advisory:
Due to an incomplete fix for CVE-2014-8080, 100% CPU utilization can occur as a result of recursive expansion with an empty String. When reading text nodes from an XML document, the REXML parser in Ruby can be coerced into allocating extremely large string objects which can consume all of the memory on a machine, causing a denial of service (CVE-2014-8090).
rubygem-actionpack: two information leaks
Package(s): rubygem-actionpack-3_2
CVE #(s): CVE-2014-7818 CVE-2014-7829
Created: November 27, 2014
Updated: March 5, 2015
Description: From the openSUSE advisory:
- Arbitrary file existence disclosure (CVE-2014-7829).
- Arbitrary file existence disclosure (CVE-2014-7818).
rubygem-sprockets: directory traversal
Package(s): rubygem-sprockets
CVE #(s): CVE-2014-7819
Created: November 26, 2014
Updated: February 20, 2015
Description: From the CVE entry:
Multiple directory traversal vulnerabilities in server.rb in Sprockets before 2.0.5, 2.1.x before 2.1.4, 2.2.x before 2.2.3, 2.3.x before 2.3.3, 2.4.x before 2.4.6, 2.5.x before 2.5.1, 2.6.x and 2.7.x before 2.7.1, 2.8.x before 2.8.3, 2.9.x before 2.9.4, 2.10.x before 2.10.2, 2.11.x before 2.11.3, 2.12.x before 2.12.3, and 3.x before 3.0.0.beta.3, as distributed with Ruby on Rails 3.x and 4.x, allow remote attackers to determine the existence of files outside the application root via a ../ (dot dot slash) sequence with (1) double slashes or (2) URL encoding.
tcpdump: three vulnerabilities
Package(s): tcpdump
CVE #(s): CVE-2014-8767 CVE-2014-8768 CVE-2014-8769
Created: November 27, 2014
Updated: February 13, 2015
Description:
Bug #1165160 - CVE-2014-8767 tcpdump: denial of service in verbose mode using malformed OLSR payload
Bug #1165161 - CVE-2014-8768 tcpdump: denial of service in verbose mode using malformed Geonet payload
Bug #1165162 - CVE-2014-8769 tcpdump: unreliable output using malformed AODV payload
teeworlds: information leak
Package(s): teeworlds
CVE #(s): (none)
Created: December 2, 2014
Updated: December 4, 2014
Description: From the Mageia advisory:
A security flaw was found in the teeworlds server prior to 0.6.3 where an incorrect offset check could enable an attacker to read the memory or trigger a segmentation fault.
wireshark: multiple vulnerabilities
Package(s): wireshark
CVE #(s): CVE-2014-8710 CVE-2014-8711 CVE-2014-8712 CVE-2014-8713 CVE-2014-8714
Created: November 21, 2014
Updated: December 4, 2014
Description: From the Mageia advisory:
SigComp UDVM buffer overflow (CVE-2014-8710).
AMQP crash (CVE-2014-8711).
NCP crashes (CVE-2014-8712, CVE-2014-8713).
TN5250 infinite loops (CVE-2014-8714).
wordpress: multiple vulnerabilities
Package(s): wordpress
CVE #(s): CVE-2014-9031 CVE-2014-9032 CVE-2014-9033 CVE-2014-9034 CVE-2014-9035 CVE-2014-9036 CVE-2014-9037 CVE-2014-9038 CVE-2014-9039
Created: November 27, 2014
Updated: December 3, 2014
Description: From the Mageia advisory:
XSS in wptexturize() via comments or posts, exploitable for unauthenticated users (CVE-2014-9031).
XSS in media playlists (CVE-2014-9032).
CSRF in the password reset process (CVE-2014-9033).
Denial of service for giant passwords. The phpass library by Solar Designer was used in both projects without setting a maximum password length, which can lead to CPU exhaustion upon hashing (CVE-2014-9034).
XSS in Press This (CVE-2014-9035).
XSS in HTML filtering of CSS in posts (CVE-2014-9036).
Hash comparison vulnerability in old-style MD5-stored passwords (CVE-2014-9037).
SSRF: Safe HTTP requests did not sufficiently block the loopback IP address space (CVE-2014-9038).
Previously an email address change would not invalidate a previous password reset email (CVE-2014-9039).
xen: multiple vulnerabilities
Package(s): xen
CVE #(s): CVE-2014-8594 CVE-2014-8595 CVE-2014-9030
Created: December 2, 2014
Updated: December 12, 2014
Description: From the CVE entries:
The do_mmu_update function in arch/x86/mm.c in Xen 4.x through 4.4.x does not properly restrict updates to only PV page tables, which allows remote PV guests to cause a denial of service (NULL pointer dereference) by leveraging hardware emulation services for HVM guests using Hardware Assisted Paging (HAP). (CVE-2014-8594)
arch/x86/x86_emulate/x86_emulate.c in Xen 3.2.1 through 4.4.x does not properly check privileges, which allows local HVM guest users to gain privileges or cause a denial of service (crash) via a crafted (1) CALL, (2) JMP, (3) RETF, (4) LCALL, (5) LJMP, or (6) LRET far branch instruction. (CVE-2014-8595)
The do_mmu_update function in arch/x86/mm.c in Xen 3.2.x through 4.4.x does not properly manage page references, which allows remote domains to cause a denial of service by leveraging control over an HVM guest and a crafted MMU_MACHPHYS_UPDATE. (CVE-2014-9030)
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.18-rc7, released on November 30. Linus seems happy enough, despite the persistent lockup problem that has defied all debugging attempts so far. "At the same time, with the holidays coming up, and the problem _not_ being a regression, I suspect that what will happen is that I'll release 3.18 on time in a week, because delaying it will either mess up the merge window and the holiday season, or I'd have to delay it a *lot*."
3.18-rc6 was released on November 23.
Stable updates: 3.10.61, 3.14.25, and 3.17.4 were released on November 21.
Quotes of the week
static inline void *
someone_think_of_a_name_for_this(gfp_t gfp_mask, unsigned int order)
{
	return (void *)__get_free_pages(gfp, order);
}
McKenney: Stupid RCU Tricks: rcutorture Catches an RCU Bug
On his blog, Paul McKenney investigates a bug in read-copy update (RCU) in preparation for the 3.19 merge window. "Of course, we all have specific patches that we are suspicious of. So my next step was to revert suspect patches and to otherwise attempt to outguess the bug. Unfortunately, I quickly learned that the bug is difficult to reproduce, requiring something like 100 hours of focused rcutorture testing. Bisection based on 100-hour tests would have consumed the remainder of 2014 and a significant fraction of 2015, so something better was required. In fact, something way better was required because there was only a very small number of failures, which meant that the expected test time to reproduce the bug might well have been 200 hours or even 300 hours instead of my best guess of 100 hours."
Version 2 of the kdbus patches posted
The second version of the kdbus patches has been posted to the Linux kernel mailing list by Greg Kroah-Hartman. The biggest change since the original patch set (which we looked at in early November) is that kdbus now provides a filesystem-based interface (kdbusfs) rather than the /dev/kdbus device-based interface. There are lots of other changes in response to v1 review comments as well. "kdbus is a kernel-level IPC implementation that aims for resemblance to [the] protocol layer with the existing userspace D-Bus daemon while enabling some features that couldn't be implemented before in userspace."
Kernel development news
ACCESS_ONCE() and compiler bugs
The ACCESS_ONCE() macro is used throughout the kernel to ensure that code generated by the compiler will access the indicated variable once (and only once); see this article for details on how it works and when its use is necessary. When that article was written (2012), there were 200 invocations of ACCESS_ONCE() in the kernel; now there are over 700 of them. Like many low-level techniques for concurrency management, ACCESS_ONCE() relies on trickery that is best hidden from view. And, like such techniques, it may break if the compiler changes behavior or, as has been seen recently, contains a bug.
Back in November, Christian Borntraeger posted a message regarding the interactions between ACCESS_ONCE() and an obscure GCC bug. To understand the problem, it is worth looking at the macro, which is defined simply in current kernels (in <linux/compiler.h>):
#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
In short, ACCESS_ONCE() forces the variable to be treated as being a volatile type, even though it (like almost all variables in the kernel) is not declared that way. The problem reported by Christian is that GCC 4.6 and 4.7 will drop the volatile modifier if the variable passed into it is not of a scalar type. It works fine if x is an int, for example, but not if x has a more complicated type. For example, ACCESS_ONCE() is often used with page table entries, which are defined as having the pte_t type:
typedef struct { unsigned long pte; } pte_t;
In this case, the volatile semantics will be lost in buggy compilers, leading to buggy kernels. Christian started by looking for ways to work around the problem, only to be informed that normal kernel practice is to avoid working around compiler bugs whenever possible; instead, the buggy versions should simply be blacklisted in the kernel build system. But 4.6 and 4.7 are installed on a lot of systems; blacklisting them would inconvenience many users. And, as Linus put it, there can be reasons for approaches other than blacklisting:
One way of being less fragile would be to change the affected ACCESS_ONCE() calls to point to the scalar parts of the relevant non-scalar types. So, if code does something like:
pte_t p = ACCESS_ONCE(pte);
It could be changed to something like:
unsigned long p = ACCESS_ONCE(pte.pte);
This type of change requires auditing all ACCESS_ONCE() calls, though, to find the ones using non-scalar types; that would be a lengthy and error-prone process that would not prevent the addition of new bugs in the future.
Another approach to the problem explored by Christian was to remove a number of problematic ACCESS_ONCE() calls and just put in a compiler barrier with barrier() instead. In many cases, a barrier is sufficient, but in others it is not. Once again, a detailed audit is required, and there is nothing preventing new code from adding buggy ACCESS_ONCE() calls.
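To see the difference (an illustration of mine, not taken from Christian's patches): for a simple flag polled in a loop, a compiler barrier is enough, because its memory clobber forces the flag to be re-read on every pass:

    extern int done;	/* set by another thread */

    while (!done)
        barrier();	/* memory clobber: 'done' must be reloaded each iteration */

What barrier() cannot guarantee is that a value is accessed exactly once (neither torn nor refetched) within an expression, which is what ACCESS_ONCE() provides; that is why each conversion still has to be judged individually.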
So Christian headed down the path of changing ACCESS_ONCE() to simply disallow the use of non-scalar types altogether. In the most recent version of the patch set, ACCESS_ONCE() looks like this:
#define __ACCESS_ONCE(x) ({ \
	__maybe_unused typeof(x) __var = 0; \
	(volatile typeof(x) *)&(x); })
#define ACCESS_ONCE(x) (*__ACCESS_ONCE(x))
This version will cause compilation failures if a non-scalar type is passed into the macro. But what about the situations where a non-scalar type needs to be used? For these cases, Christian has introduced two new macros, READ_ONCE() and ASSIGN_ONCE(). The definition of the former looks like this:
static __always_inline void __read_once_size(volatile void *p, void *res, int size)
{
	switch (size) {
	case 1: *(u8 *)res = *(volatile u8 *)p; break;
	case 2: *(u16 *)res = *(volatile u16 *)p; break;
	case 4: *(u32 *)res = *(volatile u32 *)p; break;
#ifdef CONFIG_64BIT
	case 8: *(u64 *)res = *(volatile u64 *)p; break;
#endif
	}
}

#define READ_ONCE(p) \
	({ typeof(p) __val; __read_once_size(&p, &__val, sizeof(__val)); __val; })
Essentially, it works by forcing the use of scalar types, even if the variable passed in does not have such a type. Providing a single access macro that worked on both the left-hand and right-hand sides of an assignment turned out to not be trivial, so the separate ASSIGN_ONCE() was provided for the left-hand side case.
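As a brief usage sketch (my own example, not taken from the patch set, reusing the pte_t type shown above; the value-first argument order for ASSIGN_ONCE() is how it appears in Christian's posting):

    static void update_pte(pte_t *ptep, pte_t newval)
    {
        pte_t old = READ_ONCE(*ptep);	/* whole pte_t copied via a scalar access */

        if (old.pte != newval.pte)
            ASSIGN_ONCE(newval, *ptep);	/* write side: value first, location second */
    }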
Christian's patch set replaces ACCESS_ONCE() calls with READ_ONCE() or ASSIGN_ONCE() in cases where the latter are needed. Comments in the code suggest that those macros should be preferred to ACCESS_ONCE() in the future, but most existing ACCESS_ONCE() calls have not been changed. Developers using ACCESS_ONCE() to access non-scalar types in the future will get an unpleasant surprise from the compiler, though.
This version of the patch has received few comments and seems likely to make it into the mainline in the near future; backports to the stable series are also probably on the agenda. There are times when it is best to simply avoid versions of the compiler with known bugs altogether. But, as can be seen here, compiler bugs can also be seen as a signal that things could be done better in the kernel, leading to more robust code overall.
Splicing out syscalls for tiny kernels
It is no secret that the Linux kernel has grown over time; the constant addition of features and hardware support means that almost every development cycle adds more code than it removes. The good news is that, for most of us, the increase in hardware speed and size has far outstripped the growth of the kernel, so few of us begrudge the extra resources that a larger kernel requires. Developers working on tiny systems, though, are still concerned about every byte consumed by the kernel. Accommodating their needs seems likely to be a source of ongoing stress in the community.
The latest example comes from Pieter Smith's patch set to remove support for the splice() family of system calls, including sendfile() and tee(). There will be many tiny systems with dedicated applications that have no need for those calls; removing them from the kernel makes 8KB of memory available for other purposes. The Linux "tinification" developers see that as a worthwhile gain, but some others disagree.
In particular, David Miller opposed the change, saying "I think starting to compile out system calls is a very slippery slope we should not begin the journey down." He worries that, even if a specific system works today without splice(), there may be a surprise tomorrow when some library starts using that system call. Developers working on Linux systems, David appears to be arguing, should be able to count on having the basic system call set available to them anywhere.
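David's worry can be made concrete with a sketch (mine, not from the discussion): a system call that has been compiled out fails with ENOSYS, so a library that wanted to keep working on a splice-less kernel would need a fallback along these lines:

    #define _GNU_SOURCE
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Copy up to 'len' bytes from in_fd to out_fd, falling back to plain
     * read()/write() on kernels where the splice() family is absent. */
    static ssize_t copy_bytes(int in_fd, int out_fd, size_t len)
    {
        ssize_t n = splice(in_fd, NULL, out_fd, NULL, len, 0);
        if (n >= 0 || errno != ENOSYS)
            return n;

        char buf[4096];
        n = read(in_fd, buf, len < sizeof(buf) ? len : sizeof(buf));
        if (n > 0)
            n = write(out_fd, buf, n);
        return n;
    }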
The tinification developers have a couple of answers to this concern. One is that developers working on tiny systems know what they are doing and which system calls they can do without. As Josh Triplett put it:
The other response is that the kernel has, in fact, provided support for compiling out major subsystems since the beginning. Quoting Josh again:
(This list goes on for some time; see the original mail for all the details). Eric Biederman added that the SYSV IPC system calls have been optional for a long time, and Alan Cox listed more optional items as well. David finally seemed to concede that making system calls optional was not a new thing for the Linux kernel, but he stopped short of actually supporting the splice() removal patch.
Without his opposition, though, this patch may go in. But a look at the kernel tinification project list makes it clear that this discussion is likely to return in the future. The tinification developers would like to be able to compile out support for SMP systems, random number generation, signal handling, capabilities, non-root users, sockets, the ability for processes to exit, and more. Eventually, they would like to have an automated tool that can examine a user-space image and build a configuration removing every system call that the given programs do not use.
Needless to say, any kernel that has been stripped down to that extent will not resemble a contemporary Linux system. But, on the other hand, neither do the ancient (but much smaller) kernels that these users often employ now. If Linux wants to have a place on tiny systems, the kernel will have to adapt to the resource constraints that come with such systems. That will bring challenges beyond convincing developers to allow important functionality to be configured out; the tinification developers will also have to figure out a way to allow this configuration without introducing large numbers of new configuration options and adding complexity to the build system.
It looks like a hard line to walk. But the Linux kernel embodies the solution to a lot of hard problems already; where there are willing developers, there is usually a way. If the tinification developers can find a way here, Linux has a much better chance of being present on the tiny systems that are likely to be embedded in all kinds of devices in the coming years. That seems like a goal worth trying for.
Version 2 of the kdbus patch set
When the long-awaited kdbus patch set hit linux-kernel at the end of October, it ran into a number of criticisms from reviewers. Some developers might have given up in discouragement, muttering about how unfriendly the kernel development community is. The kdbus developers know better than that, though. This can be seen in the version 2 posting; the code has changed significantly in response to the comments that were received the first time around. Kdbus may still not be ready for immediate inclusion into the mainline, but it does seem to be getting closer.
No more device files
One of the biggest complaints about the first version was its use of device files to manage interaction with the system. Devices need to be named; that forced a hierarchical global naming system on kdbus domains — which were otherwise not inherently hierarchical. The global namespace imposed a privilege requirement, making it harder for unprivileged users to create kdbus domains; it also added complications for users wanting to checkpoint and restore containers.
The second version does away with the device abstraction, replacing it with a virtual filesystem called "kdbusfs." This filesystem will normally be mounted under /sys/fs/kdbus. Creating a new kdbus domain (a container that holds a namespace for one or more buses) is simply a matter of mounting an instance of this filesystem; the domain will persist until the filesystem is unmounted. No special privileges are needed to create a new domain — but mounting a filesystem still requires privileges of its own.
A newly created domain will contain no buses at the outset. What it does have is a file called control; a bus can be created by opening that file and issuing a KDBUS_CMD_BUS_MAKE ioctl() command. That bus will remain in existence as long as the file descriptor for the control file is held open. Only one bus may be created on any given control file descriptor, but the control file can be opened multiple times to create multiple buses. The control file can also be used to create custom endpoints for well-known services.
Each bus is represented by its own directory underneath the domain directory; endpoints are represented as files within the bus directory. Connecting to a bus is a matter of opening the kdbusfs file corresponding to the desired endpoint; for most clients, that will be the file simply called bus. Messages can then be sent and received with ioctl() commands on the resulting file descriptor.
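Putting those steps together, bus creation might look roughly like this from user space. This is a hypothetical sketch: the KDBUS_CMD_BUS_MAKE command name comes from the patch posting, but the real command structures are defined in the kdbus headers and are stubbed out with a raw buffer here.

    #include <fcntl.h>
    #include <sys/ioctl.h>
    #include <sys/mount.h>

    static int make_bus(void)
    {
        char cmd[4096];	/* stand-in for the real kdbus command structure */
        int control;

        /* Creating a domain is just mounting an instance of kdbusfs. */
        mount("kdbusfs", "/sys/fs/kdbus", "kdbusfs", 0, NULL);

        /* The new bus lives for as long as this descriptor stays open. */
        control = open("/sys/fs/kdbus/control", O_RDWR);
        ioctl(control, KDBUS_CMD_BUS_MAKE, cmd);
        return control;
    }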
As can be seen, the device abstraction is gone, but the interface is still somewhat device-like in that it is heavily based on ioctl() calls. There has been a small amount of discussion on whether it might make more sense to just use operations like read() and write() to interact with kdbus, but there appears to be little interest in making (or asking for) that sort of change.
Metadata issues
A significant change that has been made is in the area of security. In version 1, the recipient of a message could specify a set of credential information that must accompany the message. This information can include anything from the process ID through to capabilities, command line information, audit information, security IDs, and more. Some reviewers (Andy Lutomirski in particular) complained that this approach could lead to information leaks and, maybe, worse security issues; instead, they said, the sender of a message should be in control of the metadata that goes along with the message.
The updated patch set contains a response to that request by changing the protocol. When a client connects to the bus, it runs the KDBUS_CMD_HELLO ioctl() command to set up a number of parameters for the connection; one of those parameters is now a bitmask describing which metadata can be sent with messages. It is possible for the creator of the bus to specify a minimum set of metadata to go with messages, though; in that case, a client refusing to send that metadata will not be allowed to connect to the bus.
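Continuing the hypothetical sketch from above, a client connection would then involve the endpoint file and the hello handshake (again with the real command structure stubbed out, and "mybus" an invented bus name):

    static int connect_to_bus(void)
    {
        char hello[4096];	/* stand-in for the real hello structure, which
				 * would carry, among other connection parameters,
				 * the bitmask of metadata the client will send */
        int ep = open("/sys/fs/kdbus/mybus/bus", O_RDWR);

        ioctl(ep, KDBUS_CMD_HELLO, hello);
        return ep;
    }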
There is still some disagreement over which metadata should be sent, whether it's optional or not. Andy disagrees with providing command-line (and related) information, on the basis that it can be set by the process involved and thus carries no trustworthy information. This metadata is evidently used mostly for debugging purposes; Andy suggests that it should just be grabbed out of /proc instead. He is also opposed to the sending of capability information, noting that capabilities are generally problematic in Linux and their use should not be encouraged.
One other interesting bit of metadata that can be attached to messages is the time that the sending process started executing. It is there to prevent race conditions associated with the reuse of process IDs, which can happen quickly on a busy system. Andy dislikes that approach, noting that it will not work well with either namespaces or checkpointing. He prefers instead his own "highpid" solution. This patch adds a second, 64-bit, unique number associated with each process; interested programs can then detect process ID reuse by seeing if that number changes. Eric Biederman disagreed with that approach, saying "What we need are not race free pids, but a file descriptor based process management api." Andy was not opposed to that idea, but he would like to see something simple that can be of use to kdbus now.
Andy had a number of other comments, including pointing out a couple of places where, he contended, he could use kdbus to gain root access on any system where it was installed. Even so, he seems happy with the direction the code is going, saying "And thanks for addressing most of the issues. The code is starting to look much better to me."
Toward the mainline
In theory, resolving the remaining issues should be relatively straightforward, though it is not hard to see the "highpid" idea running into resistance at some point. But the number of reviewers for the second kdbus posting has been relatively small, perhaps as a result of the holidays in the US. The addition of a significant core API of this type requires more attention than kdbus has gotten so far. That suggests that there may still be significant issues that have not yet been raised by reviewers. Kdbus is getting closer to mainline inclusion, but it may well take a few more development cycles to get to a point where most developers are happy with it.
Some 3.18 development statistics
As of the 3.18-rc6 release, 11,186 non-merge changesets have been pulled into the mainline repository for the 3.18 development cycle. That makes this release about 1,000 changesets smaller than its immediate predecessors, but still not a slow development cycle by any means. Since this cycle is getting close to its end, it's a good time to look at where the code that came into the mainline during this cycle came from. (For those who are curious about what changes were merged, see 3.18 Merge window, part 1, part 2, and part 3).
1,428 developers have contributed code to the 3.18 release — about normal for the last year or so. The most active developers were:
Most active 3.18 developers
By changesets
H Hartley Sweeten        237   2.1%
Mauro Carvalho Chehab    179   1.6%
Ian Abbott               162   1.4%
Geert Uytterhoeven       121   1.1%
Hans Verkuil             100   0.9%
Ville Syrjälä             98   0.9%
Navin Patidar             98   0.9%
Sujith Manoharan          83   0.7%
Johan Hedberg             82   0.7%
Eric Dumazet              77   0.7%
Lars-Peter Clausen        75   0.7%
Antti Palosaari           72   0.6%
Fabian Frederick          71   0.6%
Daniel Vetter             70   0.6%
Florian Fainelli          70   0.6%
Felipe Balbi              70   0.6%
Benjamin Romer            68   0.6%
Laurent Pinchart          64   0.6%
Andy Shevchenko           62   0.6%
Malcolm Priestley         61   0.5%
By changed lines
Larry Finger             74831  10.2%
Greg Kroah-Hartman       73298  10.0%
Hans Verkuil             22266   3.0%
Alexander Duyck          16617   2.3%
Greg Ungerer             11981   1.6%
Linus Walleij            10628   1.5%
John L. Hammond          10269   1.4%
Navin Patidar             8148   1.1%
Philipp Zabel             7149   1.0%
Martin Peres              6890   0.9%
Mark Einon                6771   0.9%
Mauro Carvalho Chehab     6520   0.9%
Ian Munsie                5773   0.8%
H Hartley Sweeten         5134   0.7%
Alexei Starovoitov        4505   0.6%
Yan, Zheng                4485   0.6%
Antti Palosaari           4181   0.6%
Roy Spliet                3785   0.5%
Christoph Hellwig         3765   0.5%
Juergen Gross             3745   0.5%
As is usually the case, H. Hartley Sweeten tops the by-changesets list with the epic task of getting the Comedi drivers into shape in the staging tree. Mauro Carvalho Chehab, the Video4Linux2 maintainer, did a lot of cleanup work in that tree as well during this cycle, while Ian Abbott's changes were, once again, applied to the Comedi drivers. Geert Uytterhoeven did a lot of work in the ARM and driver trees, while Hans Verkuil also made a lot of improvements to the core Video4Linux2 subsystem.
On the "lines changed" side, Larry Finger removed the r8192ee driver from the staging tree, while Greg Kroah-Hartman removed two other drivers from staging. Alexander Duyck added the "fm10k" driver for Intel FM10000 Ethernet switch host interfaces, and Greg Ungerer removed a bunch of old m68k code.
Some 200 companies (that we were able to identify) supported development on the code merged for 3.18. The most active of those were:
Most active 3.18 employers
By changesets
(None)                            1244  11.0%
Intel                             1238  10.9%
Red Hat                            863   7.6%
(Unknown)                          828   7.3%
Samsung                            523   4.6%
Linaro                             370   3.3%
IBM                                340   3.0%
SUSE                               326   2.9%
Google                             324   2.9%
(Consultant)                       321   2.8%
Freescale                          238   2.1%
FOSS Outreach Program for Women    238   2.1%
Vision Engraving Systems           237   2.1%
Texas Instruments                  199   1.8%
Renesas Electronics                179   1.6%
MEV Limited                        162   1.4%
Free Electrons                     155   1.4%
Qualcomm                           141   1.2%
Oracle                             135   1.2%
ARM                                114   1.0%
By lines changed
(None)                  185247  25.3%
Linux Foundation         73354  10.0%
Intel                    73168  10.0%
(Unknown)                28460   3.9%
Cisco                    27939   3.8%
Red Hat                  27335   3.7%
Linaro                   23586   3.2%
Samsung                  19228   2.6%
IBM                      18194   2.5%
SUSE                     16736   2.3%
Google                   14110   1.9%
(Consultant)             12455   1.7%
Accelerated Concepts     11986   1.6%
Texas Instruments        11305   1.5%
C-DAC                     8400   1.1%
Pengutronix               8232   1.1%
Freescale                 7265   1.0%
(Academia)                7076   1.0%
Qualcomm                  5398   0.7%
Code Aurora Forum         5377   0.7%
(Note that the above table has been updated; the curious can see the original version published on this page here).
As is often the case, there are few surprises here. The level of contributions from developers working on their own time remains steady at about 11%, a level it has maintained since the 3.13 kernel. So it might be safe to say that, for now, the decline in volunteer contributions appears to have leveled out.
How important are volunteer contributions to the Linux kernel? Many kernel developers started that way, so it is natural to think that a decline in volunteers will lead, eventually, to a shortage of kernel developers overall. As it happens, the period starting with the 3.13 release (roughly calendar year 2014) saw first-time contributions from 1,521 developers. Looking at who those developers worked for yields these results:
Employer     Developers
(Unknown)    651
(None)       137
Intel        115
Google        37
Samsung       35
Huawei        33
IBM           32
Red Hat       25
Freescale     21
Linaro        17
All told, 733 first-time developers were identifiably working for some company or other when their first patch was accepted into the mainline. A large portion of the unknowns above are probably volunteers, so one can guess that a roughly equal number of first-time developers were working on their own time. So roughly half of our new developers in the last year were volunteers.
The picture changes a little, though, when one narrows things down to first-time developers who contributed to more than one release. When one looks at developers who contributed to three out of the last five releases, the picture is:
Employer                      Developers
(Unknown)                     48
Intel                         24
(None)                        21
Huawei                        10
IBM                            7
Samsung                        6
Outreach Program for Women     6
ARM                            4
Linaro                         4
Red Hat                        3
Broadcom                       3
Overall, 126 new developers contributing to at least three releases in the last year worked for companies at the time of their first contribution — rather more than the number of volunteers. So it seems fair to say that a lot of our new developers are getting their start within an employment situation, rather than contributing as volunteers then being hired.
Where are these new developers working in the kernel? If one looks at all new developers, the staging tree comes out on top; 301 developers started there, compared to 122 in drivers/net, the second-most popular starting place. But the most popular place for a three-version developer to make their first contribution is in drivers/net; 25 new developers contributed there, while 20 contributed within the staging tree. So, while staging is arguably helping to bring in new developers, a lot of the developers who start there appear to not stay in the kernel community.
Overall, the pattern looks reasonably healthy. There are multiple paths for developers looking to join our community, and it is possible for new developers to work almost anywhere in the kernel tree. That would help to explain how the kernel development community continues to grow over time. For now, there doesn't appear to be any reason to believe that we will not continue to crank out kernel releases at a high rate indefinitely.
Patches and updates
Kernel trees
Architecture-specific
Build system
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Filesystems and block I/O
Janitorial
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Term limits and the Debian Technical Committee
Debian's Technical Committee (often abbreviated as TC or, on Debian lists, ctte) has been in the news quite a bit lately. The TC acts as Debian's final arbitrator in disagreements between project members, and 2014 has seen more than the average number of such disagreements. In addition, some of the debates within the Debian community as a whole have evidently proved to be enough of a strain that several long-serving TC members have resigned from the committee in recent months. Naturally, high-profile technical disputes and resignations from the TC cause attention to turn to the makeup and processes of the TC itself. On December 1, former Debian Project Leader (DPL) Stefano Zacchiroli proposed a major change to how the TC operates: implementing limited terms for TC members.
An old idea
The idea of TC term limits was raised most recently in May, when Anthony Towns suggested adopting some set of rules that would change TC membership from its current de-facto "for life" appointment to something finite and well-defined. Towns speculated on a variety of possible options without promoting any one option.
Several other project members (including some on the TC) weighed in during the ensuing discussion, and the general consensus seemed to be that there were merits to the idea. For one, a never-changing TC could (theoretically) turn into a cabal or simply get trapped in "groupthink" caused by having a limited set of voices. For another, as Russ Allbery noted, the perpetual nature of a TC appointment may be causing appointments to skew toward cautious and conservative choices. In contrast, he said, "I think our DPL selection process works extremely well and benefits greatly from having a yearly election."
But the final major reason for considering time-limited terms is that—as pointed out by Allbery, Towns, and others—the TC's lack of a mechanism for stepping down can make a departure difficult. Towns said "it would be nice if there was a way out of the ctte that had more of a feeling of winning / leaving at the top of the game", while Allbery sought to find a way to give TC members "a clean break point where they can stop without any perceived implications of resigning, so they can either decide they've done enough or they can come back refreshed and with fresh eyes." On Allbery's final point, it is indeed easy to read comments and discussion threads about several of the recent TC resignations and find people speculating on the reasons behind and ramifications of each individual departure.
A new proposal
Nevertheless, the discussion started by Towns about term limits ended without a concrete plan of action. There were concerns about how to implement term limits without making arbitrary decisions about what constitutes "enough" time, as well as concerns about how to implement any term-limiting mechanism without causing undue turmoil—by (for example) immediately losing half of the TC's membership.
Perhaps the turmoil within Debian and in the TC itself over the past few months served to make the prospect of shaking up the TC membership rules seem less intimidating. Or perhaps with several seats opening up on the TC due to resignations, it was simply a good time to consider other changes as well. Either way, in mid-November, Zacchiroli sent out a message proposing a change to section 6 of the Debian Constitution to implement TC term limits. His proposal is a General Resolution (GR), which would require a vote by the entire project.
Zacchiroli's initial draft underwent multiple revisions during the last half of November, but by December 1, he made it a formal proposal. The current version of the proposal aims to set the maximum term for TC members at around four years, but with some flexibility built in to account for resignations and other departures. The goal is to replace two TC members each calendar year, so that all seats on the committee are rotated through every four years. In addition, former members must stay off the TC for at least one year before they can be re-appointed.
The specifics of the wording are worth looking at as well. Each year on January 1, if two senior TC members have served for more than 3.5 years, those two will have their memberships marked for expiration—in other words, their terms will end on the coming December 31. Because new appointments to the TC can happen at any time, there is some variation in how long a "full" term would last; as Towns observed, "the max age is 5.5 years (appointment on Jul 2nd, hitting 4.49 years on Jan 1st, then expiring at 5.49 years next Jan 1st)". Nevertheless, most on the list seemed to find the issues of regular rollover and requiring a one-year "mandatory vacation" (as current DPL Lucas Nussbaum called it) to be the most salient factors: precisely how long anyone sits on the TC is an implementation detail.
Dropped along the way were provisions to prevent the term-expiration mechanism from leaving the TC with fewer than four members (out of the total of eight seats), various suggestions to change the number of TC seats, and a suggestion that the remaining TC members decide whether or not to re-appoint a member whose term is expiring. Objections to these ideas varied, although the ones that seemed simply too different from Zacchiroli's core proposal (such as changing the size of the TC) were usually dropped on the grounds that proponents should raise them as separate GRs.
Similarly, Clint Adams proposed eliminating the TC altogether. The idea does not seem to have widespread support, although Allbery commented that he had considered making a similar proposal himself in the past—only to decide that whatever dispute-resolution method replaced it would not be any better.
That said, there was considerably more discussion of how the rules could be adjusted to place an upper limit on the amount of churn that the TC undergoes each year. This year, for example, three committee members are stepping down; if two additional seats were to expire automatically, then more than half of the TC would be replaced in a single year—an outcome few consider ideal for the health and stability of the project.
Some of the early discussions about the proposal included specifying a transition mechanism to let the current longstanding TC members rotate out gradually rather than all at once. Ultimately, some modifications to the two-senior-seats-automatically-expire plan arose that would throttle the turnover rate, and have the beneficial side effect of making the addition of a transition mechanism into the Constitution unnecessary.
Three alternatives (summarized by Nussbaum) to the original two-seats-expire-per-year plan were proposed. The first, which is known as the 2 − R plan, would have the two seats automatically expire if there are no other departures from the TC, but would subtract from those automatic expirations the number of resignations, retirements, or removals ("R") that happened during the past year—stopping at zero, of course.
The second alternative is a slight adjustment of the first, and is known as the 2 − R′ plan. It would subtract from 2 only the number of resignations or departures of people who would otherwise be candidates for seat expiration (that is, resignations by members with 3.5 years experience or more). In short, this plan would ensure that the resignation of junior TC members would not cause the most senior members to remain on the committee an additional year.
The third alternative, known as 2 − S, is a subtle modification of the 2 − R′ plan. It would subtract from 2 only the number of resignations in the past year by members whose terms would definitely have expired at the end of the year otherwise. That is, under the 2 − S plan, only a resignation by one of the two most senior seats can decrease the number of automatic term expirations. Under the 2 − R′ plan, it would be possible for the third-most-senior member to resign and cause a reduction in the number of automatic seat expirations, if at least three members had been on the TC for longer than 3.5 years.
Such a condition cannot arise when there have been several years of two-seat rotations in a row, of course. But it happens to be the case now, since so many of the existing members have been on the committee for a considerable length of time. And more importantly, as Raphaël Hertzog pointed out, it can happen again if there are several resignations (followed by several appointments) in the same year.
If one happens to find the distinctions between the various expiration formulae less than perfectly clear, fear not. Nussbaum outlined the practical effects of the main plans (the original, 2-seat plan and 2 − R). Under the original plan, Bdale Garbee and Steve Langasek's terms would expire on January 1, 2015. Subsequently:
2017-01-01: Keith is the oldest member with 3.09y, nobody expires
2018-01-01: Keith is the oldest member with 4.09y, nobody expires
2019-01-01: Keith membership expires, none of the other does
2020-01-01: we have 5 members over the 4.5y limit, two expire
2021-01-01: we have 3+2=5 members over the 4.5y limit, two expire
While under 2 − R, the resignations already announced in 2014 would mean no additional seats expire in January 2015, after which:
2017-01-01: Andi and Don expire, 2 replacements
2018-01-01: Keith is the oldest member with 4.09y, nobody expires
2019-01-01: Keith membership expires, none of the other does
2020-01-01: we have 3 members over the 4.5y limit, two expire
2021-01-01: we have 1+2=3 members over the 4.5y limit, two expire
The differences in the long term are, to be sure, subtle enough that most assessments of which plan is better will boil down to personal preference. Ultimately, Nussbaum added the 2 − R option as an amendment to Zacchiroli's proposal.
What's next
Zacchiroli's proposal quickly garnered enough seconds to move it forward for a vote. As per project procedure, at least two weeks of discussion will follow, after which any of the proposal's sponsors may call for a vote.
There seems to be little resistance to the idea of rotating TC members more frequently—if nothing else, to prevent burnout among qualified project members. But the term-limit idea would constitute a major change in how Debian functions, which is a notion that makes many people uneasy to one degree or another.
On the other hand, the main objection to too much rotation within the TC is the hard-to-define notion that it would weaken the project. Towns, for his part, contended that the idea of "newbies" on the TC causing weakness to Debian is "at the far end of hypothetical". There is, the argument goes, not a shortage of project members who would make positive contributions to the TC, and new committee members will still be selected by the sitting TC with the approval of the DPL. So fears about a TC composed of unqualified people apt to make poor, reckless decisions are unfounded.
The discussion process is taking place on the debian-vote mailing list. Whenever the final vote itself takes place, the outcome will be announced there as well. Although the exact form of the process has yet to be decided, the way things stand today it seems likely that Debian will soon have a formal process in place to regularly rotate members in and out of its top decision-making body.
Brief items
Distribution quotes of the week
[2] - And Linux, as we know, is all about choice.
> 2) It has an uninspiring installer.
Ok I need more information on what this means in comparison to what? I have installed pretty much every major Linux distribution and I have never found any one of them 'inspiring'. Even the Ubuntu one is more of "well at least its not the base Debian installer" versus "OMG I am alive and free because of this installer."
The "Devuan" Debian fork
A group of developers has announced the existence of a fork of the Debian distribution called "Devuan." "First mid-term goal is to produce a reliable and minimalist base distribution that stays away from the homogenization and lock-in promoted by systemd. This distribution should be ready about the time Debian Jessie is ready and will constitute a seamless alternative to its dist-upgrade. As of today, the only ones resisting are the Slackware and Gentoo distributions, but we need to provide a solid ground also for apt-get based distributions. All project on the downstream side of Debian that are concerned by the systemd avalanche are welcome to keep an eye on our initiative and evaluate it as an alternative base."
Distribution News
Debian GNU/Linux
BSP in Switzerland (St-Cergue)
There will be a Debian Bug Squashing Party from January 30-February 1 in St-Cergue, Switzerland. "We invite Debian Developers and Maintainers, regular contributors as well as new potential contributors to join this event. Regular contributors will be present to help newcomers fix their first bugs or scratch their itches in Debian."
Fedora
Fedora Council election results
The election results for the first Fedora Council election are available. Congratulations go to Rex Dieter and Langdon White, the newly elected representatives.
Fedora 21 betas for ARM and POWER
Fedora 21 Betas for ARM aarch64 and POWER architectures are available for testing.
openSUSE
Announcing openSUSE board election 2014/2015
The openSUSE board has three seats open for election and the election schedule has been announced. The initial phase, which is open now, allows openSUSE contributors who are not yet members to become members so that they may vote or stand for a seat. Nominations are also open.
Newsletters and articles of interest
Distribution newsletters
- Debian Misc Developer News (#37) (November 25)
- Debian Project News (December 1)
- DistroWatch Weekly, Issue 586 (November 24)
- DistroWatch Weekly, Issue 587 (December 1)
- 5 things in Fedora this week (November 19)
- 5 things in Fedora this week (December 2)
- Ubuntu Weekly Newsletter, Issue 393 (November 23)
- Ubuntu Weekly Newsletter, Issue 394 (November 30)
They make Mageia: David Walser (Mageia Blog)
The Mageia blog talks with David Walser, about his work in Mageia. "I stumbled into my current role at Mageia completely by accident. I had upgraded my sister’s laptop from Mandriva 2010.2 to Mageia 1, and noticed one Mandriva package left on the system because it had a newer release tag than the Mageia package. The reason was because Mandriva had done a security update for the package, but when it was imported into Mageia, the release version was imported rather than the updates version. I was concerned about other security updates that might have been missed, and began investigating this. I started filing bugs for missing security updates and helping the QA team test updates that got packaged, to help the updates get released more expeditiously."
Page editor: Rebecca Sobol
Development
The Rocket containerization system
The field of software-container options for Linux expanded again this week with the launch of the Rocket project by the team behind CoreOS. Rocket is a direct challenger to the popular Docker containerization system. The decision to split from Docker was, evidently, driven by CoreOS developers' dissatisfaction with several recent moves within the Docker project. Primarily, the CoreOS team's concern is Docker's expansion from a standalone container format to a larger platform that includes tools for additional parts of the software-deployment puzzle.
There is no shortage of other Linux containerization projects apart from Docker already, of course—LXC, OpenVZ, lmctfy, and Sandstorm, to name a few. But CoreOS was historically a big proponent of (and contributor to) Docker.
The idea behind CoreOS was to build a lightweight and easy-to-administer server operating system, on which Docker containers can be used to deploy and manage all user applications. In fact, CoreOS strives to be downright minimalist in comparison to standard Linux distributions. The project maintains etcd to synchronize system configuration across a set of machines and fleet to perform system initialization across a cluster, but even that set of tools is austere compared to the offerings of some cloud-computing providers.
Launch
On December 1, the CoreOS team posted an announcement on its blog, introducing Rocket and explaining the rationale behind it. Chief among its stated justifications for the new project was that Docker had begun to grow from its initial concept as "a simple component, a composable unit" into a larger and more complex deployment framework: [...]
The post also highlighted the fact that, early on in its history, the Docker project had published a manifesto that argued in favor of simple container design—and that the manifesto has since been removed.
The announcement then sets out the principles behind Rocket. The various tools will be independent "composable" units, security primitives "for strong trust, image auditing and application identity" will be available, and container images will be easy to discover and retrieve through any available protocol. In addition, the project emphasizes that the Rocket container format will be "well-specified and developed by a community." To that end, it has published the first draft of the App Container Image (ACI) specification on GitHub.
As for Rocket itself, it was launched at version 0.1.0. There is a command-line tool (rkt) for running an ACI image, as well as a draft specification describing the runtime environment and facilities needed to support an ACI container, and the beginnings of a protocol for finding and downloading an ACI image.
Rocket is, for the moment, certainly a lightweight framework in keeping with what one might expect from CoreOS. Running a containerized application with Rocket involves three "stages."
Stage zero is the container-preparation step; the rkt binary generates a manifest for the container, creates the initial filesystem required, then fetches the necessary ACI image file and unpacks it into the new container's directory. Stage one involves setting up the various cgroups, namespaces, and mount points required by the container, then launching the container's systemd process. Stage two consists of actually launching the application inside its container.
What's up with Docker
The Docker project, understandably, did not view the announcement of Rocket in quite the same light as CoreOS. In a December 1 post on the Docker blog, Ben Golub defends the decision to expand the Docker tool set beyond its initial single-container roots:
We think it would be a shame if the clean, open interfaces, anywhere portability, and robust set of ecosystem tools that exist for single Docker container applications were lost when we went to a world of multiple container, distributed applications. As a result, we have been promoting the concept of a more comprehensive set of orchestration services that cover functionality like networking, scheduling, composition, clustering, etc.
But the existence of such higher-level orchestration tools and multi-container applications, he said, does not prevent anyone from using the Docker single-container format. He does acknowledge that "a small number of vendors disagree with this direction", some of whom have "technical or philosophical differences, which appears to be the case with the recent announcement regarding Rocket." The post concludes by noting that "this is all part of a healthy, open source process" and by welcoming competition. It also, however, notes the "questionable rhetoric and timing of the Rocket announcement" and says that a follow-up post addressing some of the technical arguments from the Rocket project is still to come.
Interestingly enough, the CoreOS announcement of Rocket also goes out of its way to reassure users that CoreOS will continue to support Docker containers in the future. Less clear is exactly what that support will look like; the wording says to "expect Docker to continue to be fully integrated with CoreOS as it is today", which might suggest that CoreOS is not interested in supporting Docker's newer orchestration tools.
In any case, at present, Rocket and its corresponding ACI specification make use of the same underlying Linux facilities employed by Docker, LXC containers, and most of the other offerings. One might well ask whether or not a "community specification" is strictly necessary as an independent entity. But as containerization continues to make its way into the enterprise market, it is hardly surprising to see more than one project vie for the privilege of defining what a standard container should look like.
Moving some of Python to GitHub?
Over the years, Python's source repositories have moved a number of times,
from CVS on SourceForge to Subversion at Python.org and, eventually, to
Mercurial (aka hg), still on Python Software Foundation (PSF)
infrastructure. But the new Python.org site code lives at GitHub (thus in
a Git repository) and it looks like more pieces of Python's source may be
moving in that direction. While some are concerned about moving away from a
Python-based DVCS
(i.e. Mercurial)
into a closed-source web service, there is a strong pragmatic streak in the
Python community that may be winning out. For good or ill, GitHub
has won the popularity battle over any of the other alternatives, so new
contributors are more likely to be familiar with that service, which makes
it attractive for Python.
The discussion got started when Nick Coghlan posted some thoughts on his Python Enhancement
Proposal (PEP 474)
from July. It suggested creating a "forge" for hosting some Python
documentation repositories using Kallithea—a Python-based web
application for hosting
Git and Mercurial repositories—once it has a stable
release. More recently, though, Coghlan realized that there may not be a
need to require hosting those types of repositories on PSF
infrastructure as the PEP specified; if that is the case, "then the obvious candidate for Mercurial hosting that supports online editing + pull requests is the PSF's BitBucket account".
But others looked at the same set of facts a bit differently. Donald Stufft
compared the workflow of the current
patch-based system to one that uses GitHub-like pull requests (PRs). Both for
contributors and maintainers (i.e. Python core developers), the time
required to handle a simple patch was something like 10-15 minutes with the
existing system, he said, while a PR-based system would reduce that to less than a
minute—quite possibly much less.
Python benevolent dictator for life (BDFL) Guido van Rossum agreed, noting that GitHub has easily won the
popularity race. He was also skeptical that the PSF should be running
servers:
Moving the CPython code and docs is not a priority, but everything else
(PEPs, HOWTOs etc.) can be moved easily and I am in favor of moving to
GitHub. For PEPs I've noticed that for most PEPs these days (unless the
primary author is a core dev) the author sets up a git repo first anyway,
and the friction of moving between such repos and the "official" repo is a
pain.
GitHub, however, only supports Git, so
those who are currently using
Mercurial and want to continue would be out of luck. Bitbucket supports
both, though, so in Coghlan's opinion, it would
make a better interim solution. But Stufft is concerned that taking the
trouble to move, but choosing the less popular site, makes little sense.
On the other hand, some, including Coghlan, are worried about lock-in with GitHub (and other closed-source solutions, including Bitbucket).
The feature set that GitHub provides is what will keep the repositories there, though,
Stufft said: "You probably won’t want to get your
data out because Github’s features are compelling enough that you
don’t want to lose them
". Furthermore, he looked at the Python-affiliated repositories on the two sites
and found that there were half a dozen active repositories on GitHub and
three largely inactive repositories on Bitbucket.
The discussion got a bit testy at times, with Coghlan complaining that choosing GitHub based on its popularity was anti-community: "I'm very, very disappointed to see folks so willing to abandon fellow community members for the sake of following the crowd". He went on to suggest that perhaps Ruby or JavaScript would be a better choice for a language to work on since they get better press. Van Rossum called that "a really low blow" and pointed out: "A DVCS repo is a social network, so it matters in a functional way what everyone else is using."
Eventually, Stufft proposed another PEP (481) that would migrate three
documentation repositories (the Development Guide, the development system in a box
(devinabox), and the PEPs) to
GitHub. Unlike the situation with many PEPs, Van Rossum stated that he didn't feel it was his job to accept or reject the
PEP, though he made a strong case for moving to GitHub; he believes that
most of the community is probably already using GitHub in one way or
another, lock-in doesn't really concern him since the most important data
is already stored in multiple places, and, in his mind, Python does not have an "additional hidden agenda of bringing freedom to all software".
It turns out that Brett Cannon is the contact for two of the three repositories mentioned in the PEP (devguide and devinabox), so Van Rossum is leaving the decision to Cannon for those two. Coghlan is the largest contributor to the PEPs repository, so the decision on that will be left up to him. He is currently exploring the possibility of using RhodeCode Enterprise (a Python-based, hosted solution with open code, but one that has licensing issues that Coghlan did acknowledge). For his part, Cannon noted his preference for open, Mercurial-and-Python-based solutions, but he is willing to consider other options. There may be a discussion at the Python language summit (which precedes PyCon), but, if so, Van Rossum said he probably won't take part—it's clear he has tired of the discussion at this point.
There are good arguments on both sides of the issue, but it is a little sad to see Python potentially moving away from the DVCS written in the language and into the more popular (and feature-rich, seemingly) DVCS and hosting site (Git and GitHub). While Van Rossum does not plan to propose moving the CPython (main Python language code) repository to GitHub anytime soon, the clear implication is that he would not be surprised if that happens eventually. While it might make pragmatic sense on a number of different levels, and may have all the benefits that have been mentioned, it would certainly be something of a blow to the open-source Python DVCS communities. With luck, those communities will find the time to fill the functionality gaps, but the popularity gap will be much harder to overcome.
Kawa — fast scripting on the Java platform
Kawa is a general-purpose Scheme-based programming language that runs on the Java platform. It aims to combine the strengths of dynamic scripting languages (less boilerplate, fast and easy start-up, a read-eval-print loop or REPL, no required compilation step) with the strengths of traditional compiled languages (fast execution, static error detection, modularity, zero-overhead Java platform integration). I created Kawa in 1996, and have maintained it since. The new 2.0 release has many improvements.
Projects and businesses using Kawa include: MIT App Inventor (formerly Google App Inventor), which uses Kawa to translate its visual blocks language; HypeDyn, which is a hypertext fiction authoring tool; and Nü Echo, which uses Kawa for speech-application development tools. Kawa is flexible: you can run source code on the fly, type it into a REPL, or compile it to .jar files. You can write portably, ignoring anything Java-specific, or write high-performance, statically-typed Java-platform-centric code. You can use it to script mostly-Java applications, or you can write big (modular and efficient) Kawa programs. Kawa has many interesting features; below we'll look at a few of them.
Scheme and standards
Kawa is a dialect of Scheme, which has a long history in programming-language and compiler research, and in teaching. Kawa 2.0 supports almost all of R7RS (the Revised⁷ Report on the Algorithmic Language Scheme), the 2013 language specification. (Full continuation support is the major missing feature, though there is a project working on that.) Scheme is part of the Lisp family of languages, which also includes Common Lisp, Dylan, and Clojure.
One of the strengths of Lisp-family languages (and why some consider them weird) is the uniform prefix syntax for calling a function or invoking an operator:
(op arg1 arg2 ... argN)

If op is a function, this evaluates each of arg1 through argN, and then calls op with the resulting values. The same syntax is used for arithmetic:

(+ 3 4 5)

and program structure:

; (This line is a comment - from semi-colon to end-of-line.)
; Define variable 'pi' to have the value 3.14.
(define pi 3.14)
; Define single-argument function 'abs' with parameter 'x'.
(define (abs x)
  ; Standard function 'negative?' returns true if argument is less than zero.
  (if (negative? x) (- x) x))
Having a simple regular core syntax makes it easier to write tools and to extend the language (including new control structures) via macros.
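As a small illustration (my sketch, not from the Kawa sources), a while loop can be added with the standard R7RS syntax-rules facility that Kawa supports:

; 'while' as a macro: the body is re-run as long as the test holds.
; It expands into a tail-recursive named let, so it runs in constant
; stack space.
(define-syntax while
  (syntax-rules ()
    ((while test body ...)
     (let loop ()
       (when test
         body ...
         (loop))))))

; Usage: prints 3, 2, 1.
(define n 3)
(while (> n 0)
  (display n) (newline)
  (set! n (- n 1)))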
Performance and type specifiers
Kawa gives run-time performance a high priority. The language facilitates compiler analysis and optimization. Flow analysis is helped by lexical scoping and the fact that a variable in a module (source file) can only be assigned to in that module. Most of the time the compiler knows which function is being called, so it can generate code to directly invoke a method. You can also associate a custom handler with a function for inlining, specialization, or type-checking.
To aid with type inference and type checking, Kawa supports optional type specifiers, which are specified using two colons. For example:
(define (find-next-string strings ::vector[string] start ::int) ::string ...)
This defines find-next-string with two parameters: strings is a vector of strings, and start is a native (Java) int; the return type is a string.
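To make that concrete, here is a complete, hypothetical definition (mine, not from the Kawa manual); with the declared types, calling it with arguments of the wrong type can be reported at compile time:

; Sum the elements of a vector of doubles, scaled by 'factor'.
(define (scale-sum nums ::vector[double] factor ::double) ::double
  (let loop ((i ::int 0) (sum ::double 0.0))
    (if (< i (vector-length nums))
        (loop (+ i 1) (+ sum (* factor (vector-ref nums i))))
        sum)))

(scale-sum [1.0 2.0 3.0] 2.0) ; => 12.0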
Kawa also does a good job of catching errors, such as type mismatches, at compile time.
The Kawa runtime doesn't need to do a lot of initialization, so start-up is much faster than other scripting languages based on the Java virtual machine (JVM). The compiler is fast enough that Kawa doesn't use an interpreter. Each expression you type into the REPL is compiled on-the-fly to JVM bytecodes, which (if executed frequently) may be compiled to native code by the just-in-time (JIT) compiler.
Function calls and object construction
If the operator op in an expression like (op arg1 ... argN) is a type, then the Kawa compiler looks for a suitable constructor or factory method.
(javax.swing.JButton "click here") ; equivalent to Java's: new javax.swing.JButton("click here")
If the op is a list-like type with a default constructor and has an add method, then an instance is created, and all the arguments are added:
(java.util.ArrayList 11 22 33) ; evaluates to: [11, 22, 33]
Kawa allows keyword arguments, which can be used in an object constructor form to set properties:
(javax.swing.JButton text: "Do it!" tool-tip-text: "do it")
The Kawa manual has more details and examples. There are also examples for other frameworks, such as for Android and for JavaFX.
Other scripting languages also have convenient syntax for constructing nested object structures (for example Groovy builders), but they require custom builder helper objects and/or are much less efficient. Kawa's object constructor does most of the work at compile-time, generating code as good as hand-written Java, but less verbose. Also, you don't need to implement a custom builder if the defaults work, as they do for Swing GUI construction, for example.
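As a sketch of how that plays out (my example; it assumes the default-constructor/add rule above covers Swing containers, as the Swing remark suggests), a panel and its children can be declared in one nested expression:

; Child components appear as plain arguments; properties as keywords.
(define panel
  (javax.swing.JPanel
   (javax.swing.JButton text: "OK")
   (javax.swing.JButton text: "Cancel" tool-tip-text: "give up")))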
Extended literals
Most programming languages provide convenient literal syntax only for certain built-in types, such as numbers, strings, and lists. Other types of values are encoded by constructing strings, which are susceptible to injection attacks, and which can't be checked at compile-time.
Kawa supports user-defined extended literal types, which have the form:
&tag{text}

The tag is usually an identifier. The text can have escaped sub-expressions:

&tag{some-text&[expression]more-text}

The expression is evaluated and combined with the literal text. "Combined" is often just string concatenation, but it can be anything, depending on the &tag. As an example, assume:

(define base-uri "http://example.com/")

Then the following concatenates base-uri with the literal "index.html" to create a new URI object:
&URI{&[base-uri]index.html}
The above example gets de-sugared into:
($construct$:URI $<<$ base-uri $>>$ "index.html")
The $construct$:URI is a compound name (similar to an XML "qualified name") in the predefined $construct$ namespace. The $<<$ and $>>$ are just special symbols to mark an embedded sub-expression; by default they're bound to unique empty strings. So the user (or library writer) just needs to provide a definition of the compound name $construct$:URI as either a procedure or macro, resolved using standard Scheme name lookup rules; no special parser hooks or other magic is involved. This procedure or macro can do arbitrary processing, such as construct a complex data structure, or search a cache.
Here is a simple-minded definition of $construct$:URI as a function that just concatenates all the arguments (the literal text and the embedded sub-expressions) using the standard string-append function, and passes the result to the URI constructor function:
(define ($construct$:URI . args) (URI (apply string-append args)))
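Since the compound name is looked up like any other binding, it could equally be bound to a macro; a minimal sketch under that assumption:

; A macro version expands directly into the string-append call,
; with no runtime 'apply' needed.
(define-syntax $construct$:URI
  (syntax-rules ()
    ((_ part ...) (URI (string-append part ...)))))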
The next section uses extended literals for something more interesting: shell-like process forms.
Shell scripting
Many scripting languages let you invoke system commands (processes). You can send data to the standard input, extract the resulting output, look at the return code, and sometimes even pipe commands together. However, this is rarely as easy as it is using the old Bourne shell; for example command substitution is awkward. Kawa's solution is two-fold:
- A "process expression" (typically a function call) evaluates to a Java Process value, which provides access to a Unix-style (or Windows) process.
- In a context requiring a string, a Process is automatically converted to a string comprising the standard output from the process.
A trivial example:
#|kawa:1|# (define p1 &`{date --utc})
("#|...|#" is the Scheme syntax for nestable comments; the default REPL prompt has that form to aid cutting and pasting code.)
The &`{...} syntax uses the extended-literal syntax from the previous section, where the backtick is the 'tag', so it is syntactic sugar for:

($construct$:` "date --utc")

where $construct$:` might be defined as:

(define ($construct$:` . args) (apply run-process args))

This in turn translates into an expression that creates a gnu.kawa.functions.LProcess object, as you see if you write it:
#|kawa:2|# (write p1)
gnu.kawa.functions.LProcess@377dca04
An LProcess is automatically converted to a string (or bytevector) in a context that requires it. For example:
#|kawa:3|# (define s1 ::string p1) ; Define s1 as a string.
#|kawa:4|# (write s1)
"Wed Jan 1 01:18:21 UTC 2014\n"
#|kawa:5|# (define b1 ::bytevector p1) (write b1)
#u8(87 101 100 32 74 97 110 ... 52 10)
The display procedure prints the LProcess in "human" form, as an unquoted string:

#|kawa:6|# (display p1)
Wed Jan 1 01:18:21 UTC 2014
This is also the default REPL formatting:
#|kawa:7|# &`{date --utc}
Wed Jan 1 01:18:22 UTC 2014
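Because the conversion is driven by the receiving context, process output can feed straight into ordinary Scheme string procedures; a quick sketch:

; Capture the output once, then treat it as a plain string.
(define now ::string &`{date --utc})
(display (string-upcase now)) ; e.g. WED JAN 1 01:18:22 UTC 2014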
We don't have room here to discuss redirection, here documents, pipelines, adjusting the environment, and flow control based on return codes, though I will briefly touch on argument processing and substitution. See the Kawa manual for details, including its discussion of text vs. binary files.
Argument processing
To substitute the result of an expression into the argument list is simple using the &[] construct:
(define my-printer (lookup-my-printer))
&`{lpr -P &[my-printer] log.pdf}

Because a process is auto-convertible to a string, no special syntax is needed for command substitution:

&`{echo The directory is: &[&`{pwd}]}

though you'd normally use this short-hand:

&`{echo The directory is: &`{pwd}}
Splitting a command line into arguments follows shell quoting and escaping rules. Dealing with substitution depends on quotation context. The simplest case is when the value is a list (or vector) of strings, and the substitution is not inside quotes. In that case each list element becomes a separate argument:
(define arg-list ["-P" "office" "foo.pdf" "bar.pdf"]) &`{lpr &[arg-list]}
An interesting case is when the value is a string, and we're inside double quotes; in that case newline is an argument separator, but all other characters are literal. This is useful when you have one filename per line, and the filenames may contain spaces, as in the output from find:
&`{ls -l "&`{find . -name '*.pdf'}"}

This solves a problem that is quite painful with traditional shells.
Using an external shell
The sh tag uses an explicit shell, like the C system() function:
&sh{lpr -P office *.pdf}

This is equivalent to:

&`{/bin/sh -c "lpr -P office *.pdf"}
Kawa adds quotation characters in order to pass the same argument values as when not using a shell (assuming no use of shell-specific features such as globbing or redirection). Getting shell quoting right is non-trivial (in single quotes all characters except single quote are literal, including backslash), and not something you want application programmers to have to deal with. Consider:
(define authors ["O'Conner" "de Beauvoir"])
&sh{list-books &[authors]}

The command passed to the shell is the following:
list-books 'O'\''Conner' 'de Beauvoir'
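To make the quoting rule concrete, here is a simple-minded sketch (mine, not Kawa's actual implementation) of quoting a single argument the way the generated command above does:

; Wrap in single quotes; each embedded ' becomes '\'' (close quote,
; escaped quote, reopen quote). "O'Conner" => 'O'\''Conner'
(define (shell-quote arg ::string) ::string
  (let ((out (java.lang.StringBuilder "'")))
    (string-for-each
     (lambda (ch)
       (if (char=? ch #\')
           (out:append "'\\''")
           (out:append ch)))
     arg)
    (out:append "'")
    (out:toString)))

(display (shell-quote "O'Conner")) ; prints 'O'\''Conner'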
Having quoting be handled by the $construct$:sh implementation automatically eliminates common code injection problems. I intend to implement a &sql form that would avoid SQL injection the same way.
In closing
Some (biased) reasons why you might choose Kawa over other languages, concentrating on those that run on the Java platform: Java is verbose and requires a compilation step; Scala is complex, intimidating, and has a slow compiler; Jython, JRuby, Groovy, and Clojure are much slower in both execution and start-up. Kawa is not standing still: plans for the next half-year include a new argument-passing convention (which will enable ML-style patterns); full continuation support (which will help with coroutines and asynchronous event handling); and higher-level optimized sequence/iteration operations. I hope you will try out Kawa, and that you will find it productive and enjoyable.
Brief items
Quotes of the week(s)
Aaaaand what instead happened was:
- We announced and set up a Just Solve The Problem Wiki for the first problem.
- A lot of people worked on the Wiki.
- I got very busy.
- People kept working on the Wiki.
- It’s been two years.
GNU LibreJS 6.0.6 released
Version 6.0.6 of the LibreJS add-on for Firefox and other Mozilla-based browsers has been released. LibreJS is a selective JavaScript blocker that disables non-free JavaScript programs. New in this version are support for private-browsing mode and enhanced support for mailto: links on a page where non-free JavaScript has been blocked.
Firefox 34 released
Mozilla has released Firefox 34. This version changes the default search engine, includes the Firefox Hello real-time communication client, implements HTTP/2 (draft14) and ALPN, disables SSLv3, and more. See the release notes for details.
QEMU Advent Calendar 2014 unveiled
The QEMU project has launched its own "Advent calendar" site. Starting with December 1, each day another new virtual machine disk image appears and can be downloaded for exploration in QEMU. The December 1 offering was a Slackware image of truly historic proportions.
Rocket, a new container runtime from CoreOS
CoreOS has announced that it is moving away from Docker and toward "Rocket," a new container runtime that it has developed. "Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned."
Newsletters and articles
Development newsletters from the past two weeks
- What's cooking in git.git (November 26)
- Haskell Weekly News (November 15)
- LLVM Weekly (November 24)
- LLVM Weekly (December 1)
- OCaml Weekly News (November 25)
- OCaml Weekly News (December 2)
- OpenStack Community Weekly Newsletter (November 21)
- OpenStack Community Weekly Newsletter (November 28)
- Perl Weekly (November 24)
- Perl Weekly (December 1)
- PostgreSQL Weekly News (November 23)
- PostgreSQL Weekly News (November 30)
- Python Weekly (November 20)
- Python Weekly (November 27)
- Ruby Weekly (November 20)
- Ruby Weekly (November 27)
- This Week in Rust (November 24)
- This Week in Rust (December 1)
- Tor Weekly News (November 26)
- Tor Weekly News (December 3)
- Wikimedia Tech News (November 24)
Introducing AcousticBrainz
MusicBrainz, the not-for-profit project that maintains an assortment of "open content" music metadata databases, has announced a new effort named AcousticBrainz. AcousticBrainz is designed to be an open, crowd-sourced database cataloging various "audio features" of music, including "low-level spectral information such as tempo, and additional high level descriptors for genres, moods, keys, scales and much more." The data collected is more comprehensive than MusicBrainz's existing AcoustID database, which deals only with acoustic fingerprinting for song recognition. The new project is a partnership with the Music Technology Group at Universitat Pompeu Fabra, and uses that group's free-software toolkit Essentia to perform its acoustic analyses. A follow-up post digs into the AcousticBrainz analysis of the project's initial 650,000-track data set, including examinations of genre, mood, key, and other factors.
New features in Git 2.2.0
The "Atlassian Developers" site has a summary of interesting features in the recent Git 2.2.0 release, including signed pushes. "This is an important step in preventing man-in-the-middle attacks and any other unauthorized updates to your repository's refs. git push has learnt the --signed flag which applies your GPG signature to a "push certificate" sent over the wire during the push invocation. On the server-side, git receive-pack (the command that handles incoming git pushes) has learnt to verify GPG-signed push certificates. Failed verifications can be used to reject pushes and those that succeed can be logged in a file to provide an audit log of when and who pushed particular ref updates or objects to your git server."
Page editor: Nathan Willis
Announcements
Brief items
FSFE: Support FSFE’s work in 2015
The Free Software Foundation Europe is seeking donations for its work in 2015. "The best way to support the FSFE's work is to become a Fellow (a sustaining member of the FSFE). All Fellowship contributions directly benefit the FSFE’s work towards a free society. Fellows receive a state-of-the-art Fellowship smartcard which, together with the free GnuPG encryption software and a card reader, can be used to sign and encrypt e-mails, to secure SSH keys, to securely log into a computer from a potentially insecure machine, or to store the user’s hard disk encryption keys. Since the encryption key is stored on the card itself, it is almost impossible to steal."
Articles of interest
Free Software Supporter - Issue 80
The Free Software Foundation's newsletter for November is out. Topics include FSF is hiring, organize a Giving Guide Giveaway, ThinkPenguin router that respects your freedom, copyleft.org, lobbyists pushing forward on TPP agreements, GNU Tools Cauldron 2014 videos posted, LibrePlanet, and much more.
Mapping the world with open source (Opensource.com)
Opensource.com talks with Paul Ramsey, senior strategist at the open source company Boundless. "Boundless is the “Red Hat of geospatial”, which says a bit about our business model, but doesn’t really explain our technology. GIS professionals and IT professionals (and, really, anyone with a custom mapping problem) use our tools to store their data, in a spatial SQL database (PostGIS), publish maps and data over the web (GeoServer), and view or edit data in web browsers (OpenLayers) or on the desktop (QGIS). Basically, our tools let developers build web applications that understand and can attractively visualize location. We help people take spatial data out of the GIS department and use it to improve workflows and make decisions anywhere in the organization. This is part of what we see as a move towards what we call Spatial IT, where spatial data is used to empower decision-making across an enterprise."
The Impact of the Linux Philosophy (Opensource.com)
Starting with the premise that all operating systems have a philosophy, this article on Opensource.com looks at the Linux philosophy and how it differs from other operating systems. "Imagine for a moment the chaos and frustration that would result from attempting to use a nail gun that asked you if you really wanted to shoot that nail and would not allow you to pull the trigger until you said the word “yes” aloud. Linux allows you to use the nail gun as you choose. Other operating systems let you know that you can use nails but don't tell you what tool is used to insert the nails let alone allow you to put your own finger on the trigger."
New Books
Black Hat Python -- New from No Starch Press
No Starch Press has released "Black Hat Python" by Justin Seitz.
JavaScript for Kids -- New from No Starch Press
No Starch Press has released "JavaScript for Kids" by Nick Morgan.
Calls for Presentations
LCA2015 Debian Miniconf & NZ2015 mini-DebConf
There will be a New Zealand mini-DebConf preceding linux.conf.au, on January 10-11, 2015. That will be followed by the Debian Miniconf at LCA, on January 12. The call for presentations is open until December 21.
Prague PostgreSQL Developer Day 2015 call for papers
Prague PostgreSQL Developer Day 2015 will be held February 12, with some additional activities on February 11, in Prague, Czech Republic. The call for papers ends January 5. Most talks will be in Czech, but a few talks in English are welcome.
Embedded Linux Conference 2015 - Call for Participation
The Embedded Linux Conference will be held March 23-25 in San Jose, California. The theme for this year is "Drones, Things and Automobiles". The call for papers deadline is January 9. "Presentations should be of a technical nature, covering topics related to use of Linux in embedded systems. Topics related to consumer electronics are particularly encouraged, but any proposals about Linux that are of general relevance to most embedded developers are welcome."
Announcing netdev 0.1
"Netdev" is a new conference aimed at networking developers; it will be held February 14 to 17 in balmy Ottawa, Canada. The call for papers is open now, with a submission deadline of January 10. "Netdev 0.1 (year 0, conference 1) is a community-driven conference geared towards Linux netheads. Linux kernel networking and user space utilization of the interfaces to the Linux kernel networking subsystem are the focus. If you are using Linux as a boot system for proprietary networking, then this conference may not be for you."
Update: the conference organizers have posted more information on the CFP and the types of proposals they are looking for.
LSF/MM 2015 Call For Proposals
The 2015 Linux Storage, Filesystem, and Memory Management summit will be held March 9 and 10 in Boston. The call for agenda proposals has gone out, with a deadline of January 16. Attendance will be capped to facilitate discussions, so developers who are interested in attending this event might want to get their proposals in soon.
CFP Deadlines: December 4, 2014 to February 2, 2015
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
December 7 | January 31–February 1 | FOSDEM'15 Distribution Devroom/Miniconf | Brussels, Belgium |
December 8 | February 18–February 20 | Linux Foundation Collaboration Summit | Santa Rosa, CA, USA |
December 10 | February 19–February 22 | Southern California Linux Expo | Los Angeles, CA, USA |
December 14 | January 12 | LCA Kernel miniconf | Auckland, New Zealand |
December 17 | March 25–March 27 | PGConf US 2015 | New York City, NY, USA |
December 21 | January 10–January 11 | NZ2015 mini-DebConf | Auckland, New Zealand |
December 21 | January 12 | LCA2015 Debian Miniconf | Auckland, New Zealand |
December 23 | March 13–March 15 | FOSSASIA | Singapore |
December 31 | March 17–March 19 | OpenPOWER Summit | San Jose, CA, USA |
January 1 | March 21–March 22 | Kansas Linux Fest | Lawrence, Kansas, USA |
January 2 | May 21–May 22 | ScilabTEC 2015 | Paris, France |
January 5 | January 12 | Linux.conf.au 2015 Multimedia and Music Miniconf | Auckland, New Zealand |
January 5 | March 23–March 25 | Android Builders Summit | San Jose, CA, USA |
January 5 | February 11–February 12 | Prague PostgreSQL Developer Days 2015 | Prague, Czech Republic |
January 9 | March 23–March 25 | Embedded Linux Conference | San Jose, CA, USA |
January 10 | May 16–May 17 | 11th Intl. Conf. on Open Source Systems | Florence, Italy |
January 11 | March 12–March 14 | Studencki Festiwal Informatyczny / Academic IT Festival | Cracow, Poland |
January 11 | March 11 | Nordic PostgreSQL Day 2015 | Copenhagen, Denmark |
January 16 | March 9–March 10 | Linux Storage, Filesystem, and Memory Management Summit | Boston, MA, USA |
January 19 | June 16–June 20 | PGCon | Ottawa, Canada |
January 19 | June 10–June 13 | BSDCan | Ottawa, Canada |
January 24 | February 14–February 17 | Netdev 0.1 | Ottawa, Ontario, Canada |
January 30 | April 25–April 26 | LinuxFest Northwest | Bellingham, WA, USA |
February 1 | April 13–April 17 | ApacheCon North America | Austin, TX, USA |
February 1 | April 29–May 2 | Libre Graphics Meeting 2015 | Toronto, Canada |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
LCA 2015 and InternetNZ Diversity Program
LCA 2015 and InternetNZ are supporting diversity at linux.conf.au. "The InternetNZ Diversity Programme is one of the many ways we ensure that the LCA 2015 continues to be an open and welcoming conference for everyone. Together with InternetNZ this program has been created to assist under-represented delegates who contribute to the Open Source community but, without financial assistance, would not be able to attend LCA 2015."
Linux Foundation Announces 2015 Events Schedule
The Linux Foundation has announced the schedule for all their 2015 conferences. The announcement contains links to all the conferences, as well as call for participation deadlines.
Events: December 4, 2014 to February 2, 2015
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
December 5–December 7 | SciPy India | Bombay, India |
December 27–December 30 | 31st Chaos Communication Congress | Hamburg, Germany |
January 10–January 11 | NZ2015 mini-DebConf | Auckland, New Zealand |
January 12 | Linux.conf.au 2015 Multimedia and Music Miniconf | Auckland, New Zealand |
January 12–January 16 | linux.conf.au 2015 | Auckland, New Zealand |
January 12 | LCA Kernel miniconf | Auckland, New Zealand |
January 12 | LCA2015 Debian Miniconf | Auckland, New Zealand |
January 13 | Linux.Conf.Au 2015 Systems Administration Miniconf | Auckland, New Zealand |
January 23 | Open Source in the Legal Field | Santa Clara, CA, USA |
January 31–February 1 | FOSDEM'15 Distribution Devroom/Miniconf | Brussels, Belgium |
January 31–February 1 | FOSDEM 2015 | Brussels, Belgium |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol