
LWN.net Weekly Edition for December 4, 2014

Checking out the OnePlus One

By Jake Edge
December 3, 2014

The CyanogenMod Android-based firmware for mobile phones has been around for five years or so now; we have looked at various versions along the way, most recently 11.0 M6 back in May. But within the last year, CyanogenMod (CM) has grown from its roots as a replacement firmware to actually being pre-installed—it now ships on the OnePlus One phone. The phones are only available for purchase by invitation (or via a Black Friday sale), but we were able to get our hands on one. Overall, the One makes for a nice showcase, both of CM's capabilities and of the hardware design work done at OnePlus.

[Promo photo]

Most everything about the phone is big—its dimensions, processor, memory, battery life, and screen are all oversized, seemingly—but its price is toward the low end. For around $350 you can get a phone with a Snapdragon 801 2.5 GHz quad-core CPU, 3GB of RAM, 64GB of storage, a 5.5-inch (14cm) 1920x1080 display, and a 3100mAh battery. The phone is noticeably bigger than my Galaxy Nexus that it replaced, but somehow doesn't seem too big. The weight is perfectly manageable, coming in at 5.7 ounces (162g). The battery life has been nothing short of phenomenal—it goes for several (four or five) days between charges with moderate usage—though it does seem to take quite some time to recharge (six or more hours). In addition, the construction seems solid, though I have thankfully avoided a drop test so far.

[OnePlus One]

The phone runs CM 11S—a customized and expanded version of the "standard" CM 11.0, which is based on Android 4.4 (KitKat). The One and its software come from a partnership between OnePlus and Cyanogen Inc. Interestingly, OnePlus also made some arrangement with Google so that the standard apps (e.g. Maps, Play store, etc.) are shipped with the phone, rather than requiring a separate download as is the case when installing CM on other phones. OnePlus has committed to continue updating the phone software for two years; the first over-the-air update for the One came within a few days of receiving the device.

The standard theme is rather square—sparse—with icons and other elements that have simple images and sharp 90° corners. It is an interesting choice, if a little hard to get used to at first. The CM hexagon also appears: in the boot animation and "please wait" spinner, for example. All of that can be changed, of course, with various free and fairly inexpensive themes available for the phone. For anyone familiar with using Android, the One is, unsurprisingly, easy to use. There are differences from stock Android and from standard CM, of course, but they largely show up in the margins—settings in particular.

[Home screen]

One of the more obvious differences is in the camera hardware and app. That combination provides many more shooting "modes" than other phone cameras I have used: things like high dynamic range (HDR), raw, posterize, sepia, and "clear image", which combines ten separate images into the final output image to produce more detail with less noise. In addition, the Gallery app shows 15 images (seen below at right) with different characteristics (though it is a bit unclear what, exactly, they represent) for editing purposes.

The sensor for the rear-facing camera is 13 megapixels—oversized for a phone, once again—while the front-facing camera is 5 megapixels. The main (rear-facing) camera has six lenses and an f/2.0 aperture for low-light picture taking. I am no photography expert (as my photos here and elsewhere will attest), but there appear to be lots of things to try out with this highly portable, "always present" camera in the coming years.

[Gallery app]

As with all CM releases, it is the customization possibilities that really set it apart. The lock screen can be modified in various ways to provide shortcuts for functions like the camera or to start apps like the Chrome browser, phone, or text messaging. In addition, functions can be activated with gestures on the blank sleeping screen: "V" for the flashlight or a circle for the camera. Those can be individually enabled and disabled, though adding your own custom gestures ("M" for maps?) might make a nice addition.

The "Profiles" feature is likely to be useful to many. One can associate a trigger, which is a particular WiFi network (by SSID) or near-field communication (NFC) tag, with a profile. Multiple preferences can be set automatically when the trigger is encountered. So, for instance, connecting to the home network might turn off the lock screen, disable mobile data, and enable data syncing. Connecting to the work network might, instead, ratchet up the security settings. There is a wide array of features that can be configured for each profile.

[Photo from the One]

Privacy Guard provides lots of control over the permissions that are granted to apps installed on the system. As was the case in our CM 11.0 M6 review, though, disallowing network access on a per-app basis is not one of the options. Disabling network access (thus, ads) might well annoy app developers (and Google itself), but there are controls to configure almost any other permission that was granted at install time. In addition, there is a wealth of information about which permissions have actually been used by each app and how recently, which should make it easier to determine which apps are sneaking around behind the owner's back—and lock them down.

The owner can unlock the bootloader in the usual way using the following command:

    $ fastboot oem unlock

It is important to note that doing so will wipe all of the data off the phone, so it should only be done before doing anything else with the phone or after a backup. After that, a custom recovery image (e.g. ClockworkMod recovery) can be flashed to the device; from there, it is straightforward to switch to some other firmware. When the Lollipop-based CM 12 nightlies stabilize a bit, that seems like an obvious choice to be taken for a spin.
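For reference, the full unlock-and-flash sequence looks something like the following sketch; the recovery image filename is purely illustrative and would be whatever ClockworkMod (or other) recovery image was downloaded for the One:

```shell
# Unlocking wipes all user data -- take a backup first.
$ fastboot oem unlock

# Flash a custom recovery image (the filename here is illustrative):
$ fastboot flash recovery cwm-recovery-one.img
$ fastboot reboot
```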

As a debut, both for OnePlus and for pre-installed CyanogenMod, the One makes quite an impression. How, exactly, either of the two companies is making any money at that price point is rather unclear, but that is their business—literally. If you can get your hands on an invite, it is definitely a phone worth checking out.

Comments (12 posted)

Stunt Rally: Racing for Linux

December 3, 2014

This article was contributed by Adam Saunders

Quality open-source racing games are not hard to come by. There's SuperTuxKart for those who like cartoonish kart racing games, Speed Dreams for something more realistic, and Extreme Tux Racer for casual gamers. Stunt Rally is another racing game that stands out from the crowd with its attention to detail, along with some whimsical tracks and vehicles.

Gameplay

[Jungle track]

Starting the game leads to a menu that could use a little aesthetic polish. There are a number of gameplay options available, but new users will probably want to start by playing a single course or by launching the tutorial. Once a vehicle and track are chosen, players compete against AI opponents. The game can be controlled with the keyboard or can be configured to use a game controller in the "Input" settings. Stunt Rally's graphics are impressive, and the focus on gravel tracks and 4-wheel-drive vehicles gives the game a nice, gritty feel.

Vehicles include several different types of cars, a futuristic spaceship, a hovercraft, and an alien spheroid starcraft. Overall, the game is fun, save for one annoyance: going off-road and landing in a ditch or deep in water doesn't lead to the vehicle respawning on the road after a few seconds, which is what I expected. Instead, you can hit a "rewind" button to go back in time; while this returns the car to the track, it doesn't lead to a penalty for the player, which did not feel right to me. Nonetheless, it's hard to complain about being able to ride a bouncy alien sphere on Mars. Overall, the game is a blast.

[Mars track]

There are over 150 different tracks to race on, including a desert, a jungle, the planet Mars, Greece, a metropolis shrouded by fog, and many more. Online multiplayer racing is theoretically possible using a master server list. Unfortunately, not a single multiplayer match was listed during my playtime. One can also host multiplayer games.

Technical details

Stunt Rally's lead developer is based in Poland and goes by the pseudonym Crystal Hammer. He described the game's technical details and history in an email conversation.

The game is not yet available for download from most Linux distribution repositories. This is due to a licensing issue: the entire project is open-source (under GPLv3) except for the sky textures, which have a non-commercial redistribution license. Crystal Hammer doesn't "have plans on replacing them, since I don't think there are any of such good quality and with a compatible license". He said it would be fairly easy to replace them with open-source textures, though "I suspect that would lower the game's quality".

To play the game, users must download a Linux binary tarball or Windows executable from the project's home page. The source code can be obtained from the project's Git repository. The minimum hardware requirements are a dual-core, 2.0 GHz CPU, and a GPU at least as strong as "a GeForce 9600 GT or Radeon HD 3870 with Shader Model 3.0 supported and 256 MB GPU RAM". The project notes that one can run the game on lower graphical settings with weaker hardware, and that "integrated graphics processors (from Intel and AMD) will work, but may be slow, especially the older ones." Nonetheless, I was able to play on the "High" graphical setting on my laptop with Intel HD 4000 graphics and a dual-core 2.50 GHz processor.
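Getting the binary release running is just a matter of unpacking the tarball and launching the executable inside it; the archive and binary names below are placeholders, so check the actual download from the project's home page:

```shell
# Names are illustrative -- use those from the project's download page.
$ tar xf StuntRally-x.y-linux.tar.xz
$ cd StuntRally-x.y-linux
$ ./stuntrally
```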

Crystal Hammer began Stunt Rally in 2009, when he forked the game VDrift. He saw the engine as a good base for his own work:

I was looking for open source simulation code which I could use. I wanted to create my own game. I really liked the code from VDrift, but didn't like the gameplay much.

The project, written in C++, relies on a number of dependencies: "We use Boost, OGRE, SDL, Bullet collision (is also used in VDrift), MyGui, and for OGRE: PagedGeometry (for vegetation) and Shiny (material generator library)". For those unfamiliar with some of these tools: Boost is a collection of general-purpose C++ libraries, SDL is the Simple DirectMedia Layer (a cross-platform library commonly used for video game development among other uses), MyGUI is a graphical user interface (GUI) library for games and 3D applications, OGRE is a 3D graphics rendering engine, and Bullet is a physics engine that is widely used for things like collision detection.

The project is a true labor of love for Crystal Hammer; it's all done in his spare time, as he works full-time for a company as a C++ and C# developer. He has no intention of monetizing the project, nor even of accepting donations: "I may think about this again later, if a few people do want that". He particularly enjoys working on the art assets and new tracks, while he finds coding AI and realistic car damage "difficult to do (also too time consuming)". He sees Stunt Rally as substantially different from the VDrift base:

Firstly: gravel. This is a completely different style of driving, all our cars have 4WD, 3 differentials, and slide a lot, it's part of the gameplay. In VDrift you drive on asphalt, have more grip and less engine power. Secondly: stunt tracks. We have a lot (167) of tracks and some of them are very twisted and stunt-like. Thirdly, the features list. There are many more things implemented like the track editor (used to make our tracks), multiplayer or hovering spaceships to name just a few.

Crystal Hammer's knowledge of racing physics is self-taught, acquired by studying libraries like Bullet as well as VDrift's code base. He also read books on vehicle and tire dynamics, including materials on Pacejka's Magic Formula, which is a means to model tire forces when a vehicle is not perfectly following the curves of the road. Just working on tire physics was laborious and consumed weeks of Crystal Hammer's spare time, he said.

For those interested in contributing, there's lots of work to do. Crystal Hammer mostly would like to have new programmers to develop features or squash bugs, but localization, graphic design, and game testers are also welcome. Currently, he is the only developer, so he'd appreciate the help: "There was a time, like a year, around 2012, when we were 4 guys. We still keep in contact and they still commit small patches once in a while". A roadmap page shows a list of tasks Crystal Hammer would like to have worked on, and a bug tracker is used to keep track of their progress. It'll be interesting to see what turns this racing game will take in the years to come.

Comments (4 posted)

A preview of darktable 1.6

By Nathan Willis
December 1, 2014

The darktable project recently announced the first release-candidate (RC) builds for its upcoming version 1.6 release. The new version will add a slideshow presentation tool to darktable's primary photo-editing features, plus several new image operations and support for new digital cameras. This time around, several of the additions extend darktable's automatic adjustment capabilities, making the application a bit more friendly for users who are new to high-end photo editing.

The first release candidate arrived on November 16 with the official version number 1.5.1. Indications from the IRC channel are that a second RC build should be expected imminently, with a final 1.6 release before the end of 2014. That would make the 1.6 release just under a year after the last stable upgrade, version 1.4, which we looked at in January.

[darktable 1.6]

The RC is tagged in the project's GitHub repository; users can download a source package from that location and compile it locally. As an alternative, binary packages are built regularly for many Linux distributions; in some cases, the packagers build the development series as well as the stable releases.

New user-visible features include a slideshow mode, one new image-correction operation, better control over the image-import process, and enhancements to two existing tools. The slideshow mode is noteworthy for the fact that it extends darktable's feature set in a new direction—much as the addition of the geolocation "map mode" did in 1.4. The slideshow feature lets users step through an image collection ("collection," in this case, being darktable's terminology for a top-level image gallery). The feature set is comparable to that of most other slideshow tools, with automatic and manual advance.

There are clearly dozens upon dozens of applications that can present a slideshow of images these days. The advantages of using darktable's feature are that the collection shown can be generated by filtering one's image library (say, on image ratings, tags, geolocation, or any other metadata field) and that the slideshow can display images as they have been adjusted within darktable. In other words, the user can make color corrections, enhance elements, and apply filters, then run the slideshow without having to export anything first. For experimentation, this is a handy feature.

Image editing

[darktable 1.6 defringing]

On the editing side (in darktable's "darkroom" mode), the new defringe image operation lets the user zero in on a specific type of color distortion: longitudinal chromatic aberration (LCA). LCA is an aberration caused by the fact that different wavelengths of light have slightly different focal lengths. In an extreme zoom shot, this is visible as a violet halo on objects next to a very bright part of the image. It is different from lateral chromatic aberration, which is the red and green fringing sometimes seen at the outside edges of an image.

[darktable 1.6 gamut clipping]

Another new feature allows the user to selectively re-map the input color profile of an image into a color range more suitable for working with. Most of the time, an image's input profile (which should correspond to the camera's color space) can be easily converted to a standard working space (like AdobeRGB or L*a*b*).

But sometimes the profile conversion chosen by darktable causes some artifacts in extreme corner cases—such as in highly saturated, blue lights, which can end up converted to negative values—resulting in unsightly black pixels. For those situations, users can tweak the input profile settings manually to avoid such artifacts. Although experienced users may appreciate more control over the input profile settings, for many others the main benefit will be the simple "gamut clipping" option, which can instantly fix the black-pixel problem.

Several existing tools are upgraded in the new release. Most prominent is the basecurve tool, which is used to apply a "base" tone curve to the raw sensor data in an image file. Darktable's tool now includes an array of preset basecurves that correspond to various camera-maker presets. The manufacturers apply these curves in-camera when saving JPEG images, so by including such presets, darktable can create a tone curve that makes a raw image file match the in-camera JPEG. Of course, the manufacturer's preset may not be to the user's liking; luckily it can be deactivated if desired, and other tools used to adjust the image.

[darktable 1.6 basecurve presets]

For a lot of users, though, automatically adjusting raw files to match the camera's JPEG images is a major convenience. Many people shoot in RAW+JPEG mode, and even those who do not are used to seeing the in-camera JPEGs used as thumbnails. The notion of automatically doing the useful thing has also been applied to the levels (i.e., basic histogram) tool, which can now estimate a good setting automatically, rather than requiring the user to manually adjust levels.

[darktable 1.6 sliders]

Finally, several of the existing tools have historically sported adjustment sliders that stopped at some arbitrarily chosen minimum and maximum values. This is easiest to see in the exposure-compensation slider, which is set to +/-3 by default. Those limits are usually sensible, but darktable now offers a way to get around them: right-clicking on the slider allows the user to enter any numeric value. The slider readjusts its scale to match the entered value.

Further polish

Beyond the tool set, the new darktable RC also extends the application's functionality in some lower-level features. For example, the new release supports "huge" image sizes—specifically, those that require more than 32-bit indexes, which means images larger than 4 gigapixels. Fortunately for those who wrestle with such enormous pictures, darktable now makes better use of multiple processor cores: color conversion and exporting to OpenEXR images are now multi-core operations. The application also supports embedding color profiles in PNG and TIFF image files, a capability it had previously lacked.

One lower-level feature addition that will be an immediate boon to certain customers is support for Fujifilm's X-Trans image sensor. The X-Trans series uses a different pattern to arrange the red, green, and blue subpixels. Without explicit support for the design, raw images from many Fujifilm cameras are unusable.

Speaking of raw format support, darktable now uses the rawspeed library for image-file decoding, rather than LibRaw (although, like LibRaw, rawspeed builds on the same dcraw basic decoding functions used by most free-software photo editors). Rawspeed is a subproject from the same team that works on the competing photo editor Rawstudio; regardless of which editor one prefers, it is always refreshing to see such projects working together.

On the whole, darktable continues to improve with each release; in addition to new tools and editing features, however, the project is also making steady improvements to usability—a process that will be appreciated by new and experienced users alike.

Comments (10 posted)

Touring the hidden corners of LWN

By Jake Edge
November 29, 2014

One of the more surprising outcomes (to us) of the recent systemd "debates" in our comments section was finding out that some subscribers did not know of our comment filtering feature. Subscribers have been able to filter out specific commenters since 2010, but knowledge of that feature seems to have dissipated over time. We certainly could do a better job of documenting all of our features, but we thought it might be a good time to introduce a couple of new features while also refreshing people's memories of some of the features we already offer.

New stuff

To start with, there are some new features to investigate. Inspired by some of the suggestions about our comment-filtering feature, we have now added the ability to filter out comments from non-subscribers (i.e. guests). As with configuring anything about comment filters (or any other LWN customization), visit the "My Account" page. The controls for the feature are under the "Comment Filtering" heading. Comment filtering is available for all subscribers at the "professional hacker" level or above.

As with filtering individual users, the guest filtering provides a JavaScript-based interface that will show the presence of comments, the number of replies, and the filtered comment's author. Clicking on the "+" icon will expose the comments (and any replies); the comment subtree can be collapsed again by using the "-" icon.

A much more wide-ranging change is that we are working on a new, responsive design for LWN—one that will scale well from small, high-DPI screens on phones and tablets up to desktop screens of varying resolutions. We offered a preview of that functionality to our "maniacal supporter" subscribers recently—we are now ready to give all of our subscribers a look.

To try it out, subscribers at any level can visit the "Customization" page from "My Account". Under "Display Preferences" there is an option to "Use the new (in-development) page engine"; simply check that box and save your preferences to see how things look. We are most definitely interested in feedback, especially regarding how it looks and works on the vast array of different devices out there. Please send any comments to the "sitecode" email address at lwn.net.

While there may not be many subscribers who are using Internet Explorer 8 to access LWN, a warning is in order for any that are. The new display code does not yet work correctly with IE 8.

Oldies but goodies

Another customization feature that has been around for a bit is the "Display old parent in unread comments screen", which shows some more context (the parent comment) when displaying unread comments. It is located in the "Display preferences" section of the "Account customization" screen. Subscribers at any level have access to the unread comments feature, so they can also set this option.

For those who get annoyed by the ads we show—count us among them at times—it is possible to turn off all advertisements for "professional hackers" subscribers and above. That option can be found in the "Advertising preferences" section of the customization page.

Another feature that readers often miss is our mailing lists. We have two for subscribers: "Daily" and "Daily Headlines". Each of those sends at most one message per day with the news items (or headlines) posted that day. The "Notify" and "Just freed" lists are for anyone; "Notify" is a once-per-week notification that the weekly edition is available, while "Just freed" will send one message on any day where content has come out from behind the LWN paywall. Subscriptions to those lists can be adjusted in the "Mailing lists" section of your account page.

We also have a variety of RSS feeds. In addition, things posted to our daily page are also echoed in our Twitter feed.

Keeping up with the conferences and other events in our community is made easier with the LWN community calendar, which we maintain with lots of help from our readers. In addition, CFPs (calls for papers or proposals) can be tracked in the LWN CFP deadline calendar. Both calendars are summarized for the next few months in each week's Announcements page. As always, if your favorite event does not appear there, please submit it for inclusion.

The latest weekly edition always has new content for our subscribers, but we try to make it easy to find older content as well. Our "Archives" page is a good place to start. It has links to the ten most recent weekly editions, but it also links to several indexes that may be useful. For example, our conference coverage is indexed by conference name and by year; we have an index of guest author articles as well. Finally, both our kernel and security articles have their own indexes.

One more site "feature" that bears mentioning: the subscription page. All of the content and features you see here are supported almost entirely by our subscribers—many thanks to you! If you like what you see here and aren't a subscriber yet, please consider changing that. We have been reporting on the Linux and free software world for 16 years now and have been subscriber-supported for 12 of those years. We'd like to continue for many more, but can only do that with your support.

Do you have a favorite LWN feature that we missed listing here? Let's hear about it in the comments. The same goes for feature requests, though more complicated or elaborate changes are probably best sent to our inbox: the "lwn" alias here at lwn.net. We probably can only get to a small fraction of your suggestions, but our "ears" are certainly open.

Comments (93 posted)

Page editor: Jonathan Corbet

Security

The GnuPG 2.1 release

By Nathan Willis
December 3, 2014

GNU Privacy Guard (GnuPG) is the best-known free-software implementation of the OpenPGP cryptography standard. For the past few years, the GnuPG project has actively maintained its existing stable branch (version 2.0.x) and its "classic" branch (version 1.4), while continuing to work on a more modern replacement that implements several important improvements. In early November, the project made its first official release of this development code: GnuPG 2.1.0. There are quite a few interesting changes to be found in version 2.1, but the decision to switch over from the 2.0 series should nevertheless be carefully considered.

The new release is available as source code bundles directly from the GnuPG project. Despite several beta releases of version 2.1 over the years (the first was in 2010), the project still emphasizes that the 2.1 series has not yet been subjected to extensive real-world testing. Nevertheless, it is referring to 2.1.0 as the "modern" series, rather than as "unstable" or some other designation suggesting that it is not ready for deployment.

It is vital to note, however, that version 2.1 cannot be installed simultaneously with the 2.0 series. In addition to affecting those users who are interested in compiling the new release for themselves, this also means it is likely to be some time before binary 2.1 packages make their way into many Linux distributions. The "classic" 1.4 series, though, can be installed alongside either GnuPG 2.0 or 2.1.

Interfaces and key storage

Several changes in 2.1 will be noticed immediately by GnuPG users because they introduce interface changes to the command set and differences in how secret material is stored. For example, previous GnuPG versions have all stored key pairs in two separate files. The secring.gpg file contained both the public and private keys for a user's key pairs, while the pubring.gpg file contained just the public half of those same pairs. That design decision meant that GnuPG had to work to ensure that the two files remained in sync, increasing code complexity.

The new design does away with the two-file setup, and keeps private keys inside a key-store directory (~/.gnupg/private-keys-v1.d). In addition, the code required to manage the secring.gpg file has been factored out of the gpg binary. Instead, secret key management is handled entirely by the gpg-agent daemon. The new design also enables some other long-requested features, such as the ability to import a subkey into an existing secret key. gpg-agent is also started on demand by the GnuPG tools, whereas in past releases, users needed to start it manually or add it to a session-startup script.

The storage of public keys has also changed in the new release. GnuPG 2.1 stores public keys in a "keybox" file that was originally developed for GnuPG's S/MIME tool, gpgsm. It is optimized for read efficiency; since the number of public keys a user has on file typically exceeds the number of private keys (often by a large margin), providing fast access to the public key store is important.

Several of the GnuPG command-line tools have also received a refresh. In particular, the key-generation interface is now faster, by virtue of only requiring users to enter a name and email address: the many other possible parameters for a key can be filled by default values (which is likely to reduce errors in addition to saving time). This quick-generation behavior is used when gpg2 --gen-key is invoked; the full interface as found in earlier releases can be triggered with gpg2 --full-gen-key.

Other conveniences for key-generation are found in the new release. First, there are now "quick" versions of the key-generation and key-signing commands, developed in order to save time when performing repetitive tasks. Running

    gpg2 --quick-gen-key 'John Doe <doe@example.net>'

or

    gpg2 --quick-sign-key '1234 5678 90AB CDEF 1234 5678' 

will prompt the user for a yes/no confirmation, but will otherwise perform the requested operations without further questions. Both commands, though, do perform basic sanity checks and will warn the user if (for example) asked to create a key for a name/email pair that already exists.

Second, key-revocation certificates are now created by default and saved in the directory ~/.gnupg/openpgp-revocs.d/. Each revocation certificate even includes brief instructions for usage at the top of the file. Since the preparation of revocation certificates before they are needed falls under the "good ideas that are easy to forget" umbrella, this is likely a change many users will appreciate.
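Should a key later be compromised or lost, the pre-generated certificate can be imported to revoke the key locally (and then pushed to keyservers). The fingerprint-based filename below is a placeholder—GnuPG names each certificate after the corresponding key's fingerprint—and the instructions at the top of the file should be followed first:

```shell
# <FINGERPRINT> is a placeholder for the key's actual fingerprint;
# read the instructions inside the file before importing it.
$ gpg2 --import ~/.gnupg/openpgp-revocs.d/<FINGERPRINT>.rev
```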

Finally, the command-line key listing format has been changed to be more informative. For traditional encryption algorithms, the algorithm name has been reformatted for clarity (e.g., dsa2048 rather than 2048D). For elliptic curve cryptography (ECC), the name of the curve is displayed, rather than the algorithm.

Ellipses ....

ECC support, of course, is another major feature that debuts in GnuPG 2.1—for some users, it may even be the most significant change. According to the release notes, GnuPG 2.1 is the first "mainstream" implementation of public-key ECC in an OpenPGP tool, a fact that has both an upside and a downside. The downside, naturally, is that ECC keys are not widely deployed. The upside is that GnuPG's support for ECC should make deploying such keys relatively easy.

Nevertheless, GnuPG 2.1 still hides the ECC key-generation option by default. Users must use the --full-gen-key option and add the --expert flag to see it. ECC support is an OpenPGP extension documented in RFC 6637.
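In command form, that combination is simply the following; the session itself is interactive, and without the extra flag the ECC choices do not appear in the algorithm menu:

```shell
# The --expert flag exposes the ECC algorithm choices in the menu.
$ gpg2 --expert --full-gen-key
```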

At the moment, GnuPG supports seven different ECC curves: Curve25519, NIST P-256, NIST P-384, NIST P-521, Brainpool P-256, Brainpool P-384, and Brainpool P-512. The Curve25519 support, for now, is limited to digital signatures, not encryption. It is not part of the OpenPGP standard (although IETF approval is expected by many to arrive someday), but it is still noteworthy. It is regarded by many in the community as safer than the NIST (US National Institute of Standards and Technology) and Brainpool curves, which are suspected of being vulnerable to US government codebreakers.

On the subject of bad cryptography, all support for PGP-2 keys has been removed in GnuPG 2.1. PGP-2 keys are no longer regarded as safe, in particular because the algorithms mandate the use of the MD5 hash function. GnuPG 2.1 will no longer import PGP-2 keys, and the project recommends that users keep a copy of GnuPG 1.4 on hand if they need to decrypt data that has been previously encrypted with a PGP-2 key.

Additional features

There are, of course, many other smaller feature additions and enhancements to be found in the new release. X.509 certificate creation has been improved in a number of ways, for example. Users can create self-signed certificates, create batches of certificates based on a parameter file, and export certificates directly to PKCS#8 or PKCS#1 format. This last feature allows certificates to be used immediately with OpenSSL servers, with no conversion step required. The batch-generation mode mirrors a feature already found in OpenSSL.

Smartcard support has been updated, with support for several new card-reader devices and hardware token types. Most notable on this front are the ability to use USB sticks with a built-in smartcard exactly like other smartcard devices and full support for Gnuk tokens (a free-software cryptographic token based on the STM32F103 microcontroller).

Finally, there have been several changes to the way GnuPG interoperates with keyservers. In prior releases, GnuPG spawned temporary processes to connect to remote keyservers—which meant that the program could not maintain any persistent state about the keyserver. The new release merges in a formerly separate project called dirmngr that was previously limited to interacting with X.509 servers; it now manages keyserver connections as well.

One immediate benefit of using dirmngr to mediate keyserver access is that it can properly cope with keyserver pools. Such pools tend to be configured in round-robin DNS arrangements, which works well enough until the specific keyserver GnuPG has connected to goes down or becomes unreachable. In prior releases, GnuPG would continue trying to access the unreachable keyserver until its DNS entry expired. Dirmngr, in contrast, flags unreachable keyservers and sends a fresh DNS lookup request to the pool—which should return a new, working host in considerably less time.
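The failover idea can be sketched in a few lines of Python. This is a simplified illustration, not dirmngr's actual logic; the class, the resolver function, and the host names are hypothetical stand-ins.

```python
# Simplified illustration of keyserver-pool failover; not dirmngr's code.
# resolve_pool() stands in for a round-robin DNS query of a keyserver pool.
def resolve_pool(pool_name):
    # A real lookup would return whatever A records DNS hands back this time.
    return ["ks1.example.net", "ks2.example.net", "ks3.example.net"]

class PoolClient:
    def __init__(self, pool_name):
        self.pool_name = pool_name
        self.dead = set()                 # hosts flagged as unreachable
        self.hosts = resolve_pool(pool_name)

    def pick_host(self):
        alive = [h for h in self.hosts if h not in self.dead]
        if not alive:
            # All known hosts are down: re-query the pool's DNS instead of
            # retrying a dead host until its cached entry expires.
            self.hosts = resolve_pool(self.pool_name)
            alive = [h for h in self.hosts if h not in self.dead] or self.hosts
        return alive[0]

    def mark_dead(self, host):
        self.dead.add(host)

client = PoolClient("pool.sks-keyservers.net")
first = client.pick_host()
client.mark_dead(first)                   # simulate a failed connection
second = client.pick_host()
print(first, "->", second)
```

The key design point is that a failed host is remembered and routed around immediately, rather than waiting on DNS caching to rotate it away.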

A security-critical program like GnuPG obviously warrants a high degree of scrutiny before a new release is adopted. To be sure, no one wants to migrate their company to a new PGP key format only to discover a serious cryptographic flaw in the implementation of the new cipher. That said, there are certainly many new benefits to be found in GnuPG 2.1 over the 2.0 series. With luck, widespread vetting will come quickly, letting more users take advantage of ECC, updated smartcard support, and the many interface improvements on offer.

Comments (3 posted)

Brief items

Security quotes of the week

I think hard times are coming when we will be wanting the voices of writers who can see alternatives to how we live now and can see through our fear-stricken society and its obsessive technologies to other ways of being, and even imagine some real grounds for hope. We will need writers who can remember freedom. Poets, visionaries—the realists of a larger reality.
Ursula K. Le Guin (Thanks to Paul Wise.)

One particular executive had a malware infection on his computer from which the source could not be determined. The executive’s system was patched up to date, had antivirus and up to date anti-malware protection. Web logs were scoured and all attempts made to identify the source of the infection but to no avail. Finally after all traditional means of infection were covered; IT started looking into other possibilities. They finally asked the Executive, “Have there been any changes in your life recently”? The executive answer “Well yes, I quit smoking two weeks ago and switched to e-cigarettes”. And that was the answer they were looking for, the made in china e-cigarette had malware hard coded into the charger and when plugged into a computer’s USB port the malware phoned home and infected the system.
Jrockilla on reddit

The web browser is called 'telnet'. Support for http and html is very limited. Enter 'telnet <server> 80' at the shell prompt, then go type the http request. Check RfC 2616 for details. Due to lack of support for images, css and javascript the browser is not vulnerable to cross site scripting, web bugs and other modern attacks.
README from the QEMU advent calendar day 1 entry

Comments (3 posted)

Four-year-old comment security bug affects 86 percent of WordPress sites (Ars Technica)

Ars Technica reports on a recently discovered bug in WordPress 3 sites that could be used to launch malicious script-based attacks on site visitors’ browsers. "The vulnerability, discovered by Jouko Pynnonen of Klikki Oy, allows an attacker to craft a comment on a blog post that includes malicious JavaScript code. On sites that allow comments without authentication—the default setting for WordPress—this could allow anyone to post malicious scripts within comments that could target site visitors or administrators. A proof of concept attack developed by Klikki Oy was able to hijack a WordPress site administrator’s session and create a new WordPress administrative account with a known password, change the current administrative password, and launch malicious PHP code on the server. That means an attacker could essentially lock the existing site administrator out and hijack the WordPress installation for malicious purposes." WordPress 4.0 is not vulnerable to the attack.

Comments (18 posted)

New vulnerabilities

apparmor: privilege escalation

Package(s):apparmor CVE #(s):CVE-2014-1424
Created:November 21, 2014 Updated:December 3, 2014
Description:

From the Ubuntu advisory:

An AppArmor policy miscompilation flaw was discovered in apparmor_parser. Under certain circumstances, a malicious application could use this flaw to perform operations that are not allowed by AppArmor policy. The flaw may also prevent applications from accessing resources that are allowed by AppArmor policy.

Alerts:
Ubuntu USN-2413-1 apparmor 2014-11-20

Comments (none posted)

asterisk: denial of service

Package(s):asterisk CVE #(s):CVE-2014-6610
Created:November 21, 2014 Updated:December 3, 2014
Description:

From the Mandriva advisory:

Remote crash when handling out of call message in certain dialplan configurations (CVE-2014-6610).

Alerts:
Debian-LTS DLA-455-1 asterisk 2016-05-03
Mageia MGASA-2014-0490 asterisk 2014-11-26
Gentoo 201411-10 asterisk 2014-11-23
Mandriva MDVSA-2014:218 asterisk 2014-11-21

Comments (none posted)

asterisk: multiple vulnerabilities

Package(s):asterisk CVE #(s):
Created:November 21, 2014 Updated:December 3, 2014
Description:

From the Mandriva advisory:

Mixed IP address families in access control lists may permit unwanted traffic.

High call load may result in hung channels in ConfBridge.

Permission escalation through ConfBridge actions/dialplan functions.

Alerts:
Mandriva MDVSA-2014:218 asterisk 2014-11-21

Comments (none posted)

chromium-browser: two vulnerabilities

Package(s):chromium-browser CVE #(s):CVE-2014-7899 CVE-2014-7906
Created:November 25, 2014 Updated:December 3, 2014
Description: From the CVE entries:

Google Chrome before 38.0.2125.101 allows remote attackers to spoof the address bar by placing a blob: substring at the beginning of the URL, followed by the original URI scheme and a long username string. (CVE-2014-7899)

Use-after-free vulnerability in the Pepper plugins in Google Chrome before 39.0.2171.65 allows remote attackers to cause a denial of service or possibly have unspecified other impact via crafted Flash content that triggers an attempted PepperMediaDeviceManager access outside of the object's lifetime. (CVE-2014-7906)

Alerts:
Gentoo 201412-13 chromium 2014-12-13
openSUSE openSUSE-SU-2014:1626-1 chromium 2014-12-12
Mageia MGASA-2014-0485 chromium-browser-stable 2014-11-25
Red Hat RHSA-2014:1894-01 chromium-browser 2014-11-24

Comments (none posted)

clamav: denial of service

Package(s):clamav CVE #(s):CVE-2013-6497
Created:November 20, 2014 Updated:December 3, 2014
Description: From the Mandriva advisory:

Certain JavaScript files cause ClamAV to segfault when scanned with the -a option (list archived files) (CVE-2013-6497).

Alerts:
Mandriva MDVSA-2015:166 clamav 2015-03-29
Ubuntu USN-2488-2 clamav 2015-02-12
openSUSE openSUSE-SU-2014:1679-1 clamav 2014-12-21
SUSE SUSE-SU-2014:1571-1 clamav 2014-12-05
SUSE SUSE-SU-2014:1574-1 clamav 2014-12-05
openSUSE openSUSE-SU-2014:1560-1 clamav 2014-12-05
Ubuntu USN-2423-1 clamav 2014-11-26
Fedora FEDORA-2014-15463 clamav 2014-11-27
Mageia MGASA-2014-0487 clamav 2014-11-26
Fedora FEDORA-2014-15473 clamav 2014-11-22
Mandriva MDVSA-2014:217 clamav 2014-11-20

Comments (none posted)

clamav: buffer overflow

Package(s):clamav CVE #(s):CVE-2014-9050
Created:November 26, 2014 Updated:December 11, 2014
Description: From the Mageia advisory:

A heap buffer overflow was reported in ClamAV when scanning a specially crafted y0da Crypter obfuscated PE file.

Alerts:
Mandriva MDVSA-2015:166 clamav 2015-03-29
openSUSE openSUSE-SU-2014:1679-1 clamav 2014-12-21
Gentoo 201412-05 clamav 2014-12-10
SUSE SUSE-SU-2014:1571-1 clamav 2014-12-05
SUSE SUSE-SU-2014:1574-1 clamav 2014-12-05
openSUSE openSUSE-SU-2014:1560-1 clamav 2014-12-05
Ubuntu USN-2423-1 clamav 2014-11-26
Fedora FEDORA-2014-15463 clamav 2014-11-27
Mageia MGASA-2014-0487 clamav 2014-11-26

Comments (none posted)

drupal7: multiple vulnerabilities

Package(s):drupal7 CVE #(s):CVE-2014-9015 CVE-2014-9016
Created:November 21, 2014 Updated:December 3, 2014
Description:

From the Debian advisory:

CVE-2014-9015 - Aaron Averill discovered that a specially crafted request can give a user access to another user's session, allowing an attacker to hijack a random session.

CVE-2014-9016 - Michael Cullum, Javier Nieto and Andres Rojas Guerrero discovered that the password hashing API allows an attacker to send specially crafted requests resulting in CPU and memory exhaustion. This may lead to the site becoming unavailable or unresponsive (denial of service).

Alerts:
Mandriva MDVSA-2015:181 drupal 2015-03-30
Fedora FEDORA-2014-15522 drupal7 2014-12-03
Fedora FEDORA-2014-15528 drupal7 2014-12-03
Fedora FEDORA-2014-15515 drupal6 2014-12-03
Fedora FEDORA-2014-15519 drupal6 2014-12-03
Mageia MGASA-2014-0492 drupal 2014-11-26
Debian DSA-3075-1 drupal7 2014-11-20

Comments (none posted)

drupal: cross-site scripting

Package(s):drupal6 CVE #(s):CVE-2012-6662
Created:December 3, 2014 Updated:December 3, 2014
Description: From the CVE entry:

Cross-site scripting (XSS) vulnerability in the default content option in jquery.ui.tooltip.js in the Tooltip widget in jQuery UI before 1.10.0 allows remote attackers to inject arbitrary web script or HTML via the title attribute, which is not properly handled in the autocomplete combo box demo.

Alerts:
Scientific Linux SLSA-2015:1462-1 ipa 2015-08-03
Oracle ELSA-2015-1462 ipa 2015-07-29
Red Hat RHSA-2015:1462-01 ipa 2015-07-22
Scientific Linux SLSA-2015:0442-1 ipa 2015-03-25
Red Hat RHSA-2015:0442-01 ipa 2015-03-05
Fedora FEDORA-2014-15515 drupal6 2014-12-03
Fedora FEDORA-2014-15519 drupal6 2014-12-03

Comments (none posted)

erlang: command injection

Package(s):erlang CVE #(s):CVE-2014-1693
Created:December 2, 2014 Updated:March 30, 2015
Description: From the Red Hat bugzilla:

An FTP command injection flaw was found in Erlang's FTP module. Several functions in the FTP module do not properly sanitize the input before passing it into a control socket. A local attacker can use this flaw to execute arbitrary FTP commands on a system that uses this module.

Alerts:
Mandriva MDVSA-2015:174 erlang 2015-03-30
Mageia MGASA-2014-0553 erlang 2014-12-26
Fedora FEDORA-2014-17009 erlang 2014-12-23
Fedora FEDORA-2014-16214 erlang 2014-12-15
Fedora FEDORA-2014-15394 erlang 2014-12-01

Comments (none posted)

facter: privilege escalation

Package(s):facter CVE #(s):CVE-2014-3248
Created:November 24, 2014 Updated:December 29, 2014
Description: From the CVE entry:

Untrusted search path vulnerability in Puppet Enterprise 2.8 before 2.8.7, Puppet before 2.7.26 and 3.x before 3.6.2, Facter 1.6.x and 2.x before 2.0.2, Hiera before 1.3.4, and Mcollective before 2.5.2, when running with Ruby 1.9.1 or earlier, allows local users to gain privileges via a Trojan horse file in the current working directory, as demonstrated using (1) rubygems/defaults/operating_system.rb, (2) Win32API.rb, (3) Win32API.so, (4) safe_yaml.rb, (5) safe_yaml/deep.rb, or (6) safe_yaml/deep.so; or (7) operatingsystem.rb, (8) operatingsystem.so, (9) osfamily.rb, or (10) osfamily.so in puppet/confine.

Alerts:
Gentoo 201412-45 facter 2014-12-26
Gentoo 201412-15 mcollective 2014-12-13
Fedora FEDORA-2014-12699 facter 2014-11-22

Comments (none posted)

ffmpeg: multiple vulnerabilities

Package(s):ffmpeg CVE #(s):CVE-2014-5271 CVE-2014-5272 CVE-2014-8541 CVE-2014-8542 CVE-2014-8543 CVE-2014-8544 CVE-2014-8545 CVE-2014-8546 CVE-2014-8547 CVE-2014-8548
Created:November 21, 2014 Updated:December 3, 2014
Description:

From the Mageia advisory:

A heap-based buffer overflow in the encode_slice function in libavcodec/proresenc_kostya.c in FFmpeg before 2.0.6 can cause a crash, allowing a malicious image file to cause a denial of service (CVE-2014-5271).

libavcodec/iff.c in FFmpeg before 2.0.6 allows an attacker to have an unspecified impact via a crafted iff image, which triggers an out-of-bounds array access, related to the rgb8 and rgbn formats (CVE-2014-5272).

libavcodec/mjpegdec.c in FFmpeg before 2.0.6 considers only dimension differences, and not bits-per-pixel differences, when determining whether an image size has changed, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted MJPEG data (CVE-2014-8541).

libavcodec/utils.c in FFmpeg before 2.0.6 omits a certain codec ID during enforcement of alignment, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted JV data (CVE-2014-8542).

libavcodec/mmvideo.c in FFmpeg before 2.0.6 does not consider all lines of HHV Intra blocks during validation of image height, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted MM video data (CVE-2014-8543).

libavcodec/tiff.c in FFmpeg before 2.0.6 does not properly validate bits-per-pixel fields, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted TIFF data (CVE-2014-8544).

libavcodec/pngdec.c in FFmpeg before 2.0.6 accepts the monochrome-black format without verifying that the bits-per-pixel value is 1, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted PNG data (CVE-2014-8545).

Integer underflow in libavcodec/cinepak.c in FFmpeg before 2.0.6 allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted Cinepak video data (CVE-2014-8546).

libavcodec/gifdec.c in FFmpeg before 2.0.6 does not properly compute image heights, which allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted GIF data (CVE-2014-8547).

Off-by-one error in libavcodec/smc.c in FFmpeg before 2.0.6 allows remote attackers to cause a denial of service (out-of-bounds access) or possibly have unspecified other impact via crafted Quicktime Graphics (aka SMC) video data (CVE-2014-8548).

Alerts:
Ubuntu USN-2944-1 libav 2016-04-04
Gentoo 201603-06 ffmpeg 2016-03-12
Mandriva MDVSA-2015:173 ffmpeg 2015-03-30
Ubuntu USN-2534-1 libav 2015-03-17
Debian DSA-3189-1 libav 2015-03-15
Mageia MGASA-2014-0491 avidemux 2014-11-26
Mageia MGASA-2014-0473 ffmpeg 2014-11-21
Mageia MGASA-2014-0464 ffmpeg 2014-11-21

Comments (none posted)

flac: multiple vulnerabilities

Package(s):flac CVE #(s):CVE-2014-8962 CVE-2014-9028
Created:November 28, 2014 Updated:August 18, 2015
Description:

From the CVE entries:

Stack-based buffer overflow in stream_decoder.c in libFLAC before 1.3.1 allows remote attackers to execute arbitrary code via a crafted .flac file. (CVE-2014-8962)

Heap-based buffer overflow in stream_decoder.c in libFLAC before 1.3.1 allows remote attackers to execute arbitrary code via a crafted .flac file. (CVE-2014-9028)

Alerts:
Fedora FEDORA-2015-13160 flac 2015-08-18
Fedora FEDORA-2015-13145 flac 2015-08-15
Mandriva MDVSA-2015:188 flac 2015-04-02
Scientific Linux SLSA-2015:0767-1 flac 2015-04-01
Oracle ELSA-2015-0767 flac 2015-03-31
Oracle ELSA-2015-0767 flac 2015-03-31
CentOS CESA-2015:0767 flac 2015-03-31
CentOS CESA-2015:0767 flac 2015-04-01
Red Hat RHSA-2015:0767-01 flac 2015-04-01
Gentoo 201412-40 flac 2014-12-25
Fedora FEDORA-2014-16272 flac 2014-12-20
Mandriva MDVSA-2014:239 flac 2014-12-14
Fedora FEDORA-2014-16251 mingw-flac 2014-12-13
Fedora FEDORA-2014-16270 mingw-flac 2014-12-13
Fedora FEDORA-2014-16148 mingw-flac 2014-12-13
Fedora FEDORA-2014-16175 flac 2014-12-13
openSUSE openSUSE-SU-2014:1588-1 flac 2014-12-08
Fedora FEDORA-2014-16258 flac 2014-12-07
Mageia MGASA-2014-0499 flac 2014-11-29
Debian DSA-3082-1 flac 2014-11-30
Ubuntu USN-2426-1 flac 2014-11-27

Comments (none posted)

glibc: code execution

Package(s):glibc CVE #(s):CVE-2014-7817
Created:November 27, 2014 Updated:March 4, 2015
Description: From the Mageia advisory:

The function wordexp() fails to properly handle the WRDE_NOCMD flag when processing arithmetic inputs in the form of "$((... ``))" where "..." can be anything valid. The backticks in the arithmetic expression are evaluated in a shell even if WRDE_NOCMD forbade command substitution. This allows an attacker to attempt to pass dangerous commands via constructs of the above form, and bypass the WRDE_NOCMD flag. This update fixes the issue (CVE-2014-7817).
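The WRDE_NOCMD behavior can be exercised from Python through ctypes. The following is a minimal sketch, assuming a Linux system with a patched glibc; the structure layout and the WRDE_NOCMD/WRDE_CMDSUB constants below are glibc's values.

```python
import ctypes

# Sketch: probe wordexp()'s WRDE_NOCMD handling via ctypes.
# Assumes Linux/glibc; the constants are glibc's values.
WRDE_NOCMD = 1 << 2   # flag: forbid command substitution
WRDE_CMDSUB = 4       # error: command substitution was requested anyway

class wordexp_t(ctypes.Structure):
    _fields_ = [("we_wordc", ctypes.c_size_t),
                ("we_wordv", ctypes.POINTER(ctypes.c_char_p)),
                ("we_offs", ctypes.c_size_t)]

libc = ctypes.CDLL(None, use_errno=True)
libc.wordexp.argtypes = [ctypes.c_char_p, ctypes.POINTER(wordexp_t), ctypes.c_int]
libc.wordexp.restype = ctypes.c_int

# Plain command substitution must be rejected under WRDE_NOCMD.
we1 = wordexp_t()
rc = libc.wordexp(b"$(date)", ctypes.byref(we1), WRDE_NOCMD)

# The CVE-2014-7817 case: backticks hidden inside an arithmetic expansion.
# A patched glibc rejects this too, instead of running the command.
we2 = wordexp_t()
rc2 = libc.wordexp(b"$((`date`))", ctypes.byref(we2), WRDE_NOCMD)

print("plain $() rejected:", rc == WRDE_CMDSUB)
print("backticks in arithmetic rejected:", rc2 != 0)
```

On an unpatched glibc, the second call would have spawned a shell to run the backticked command despite WRDE_NOCMD.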

Alerts:
Gentoo 201602-02 glibc 2016-02-17
Mandriva MDVSA-2015:168 glibc 2015-03-30
openSUSE openSUSE-SU-2015:0351-1 glibc 2015-02-23
Oracle ELSA-2015-0327 glibc 2015-03-09
Fedora FEDORA-2015-2845 glibc 2015-03-04
Fedora FEDORA-2015-2837 glibc 2015-03-04
Oracle ELSA-2015-0092 glibc 2015-01-27
Debian DSA-3142-1 eglibc 2015-01-27
Scientific Linux SLSA-2015:0016-1 glibc 2015-01-07
Oracle ELSA-2015-0016 glibc 2015-01-07
CentOS CESA-2015:0016 glibc 2015-01-07
Red Hat RHSA-2015:0016-01 glibc 2015-01-07
Scientific Linux SLSA-2014:2023-1 glibc 2014-12-19
Oracle ELSA-2014-2023 glibc 2014-12-18
CentOS CESA-2014:2023 glibc 2014-12-19
Red Hat RHSA-2014:2023-01 glibc 2014-12-18
Ubuntu USN-2432-1 eglibc, glibc 2014-12-03
Mandriva MDVSA-2014:232 glibc 2014-11-27
Mageia MGASA-2014-0496 glibc 2014-11-26

Comments (none posted)

icecast: information leak

Package(s):icecast CVE #(s):CVE-2014-9018
Created:November 27, 2014 Updated:December 8, 2014
Description: From the Mageia advisory:

Icecast did not properly handle the launching of "scripts" on connect or disconnect of sources. This could result in sensitive information from these scripts leaking to (external) clients. (CVE-2014-9018)

Alerts:
Gentoo 201412-38 icecast 2014-12-25
Fedora FEDORA-2014-16483 icecast 2014-12-15
Fedora FEDORA-2014-16394 icecast 2014-12-15
Fedora FEDORA-2014-16435 icecast 2014-12-15
openSUSE openSUSE-SU-2014:1591-1 icecast 2014-12-08
openSUSE openSUSE-SU-2014:1593-1 icecast 2014-12-08
Mandriva MDVSA-2014:231 icecast 2014-11-27
Mageia MGASA-2014-0494 icecast 2014-11-26

Comments (none posted)

imagemagick: denial of service

Package(s):imagemagick CVE #(s):CVE-2014-8716
Created:November 24, 2014 Updated:December 3, 2014
Description: From the Mageia advisory:

ImageMagick is vulnerable to a denial of service due to out-of-bounds memory accesses in the JPEG decoder.

Alerts:
Ubuntu USN-3131-1 imagemagick 2016-11-21
Mandriva MDVSA-2015:105 imagemagick 2015-03-29
openSUSE openSUSE-SU-2014:1492-1 ImageMagick 2014-11-25
Mandriva MDVSA-2014:226 imagemagick 2014-11-25
Mageia MGASA-2014-0482 imagemagick 2014-11-22

Comments (none posted)

java-1.6.0-ibm: privilege escalation

Package(s):java-1.6.0-ibm CVE #(s):CVE-2014-3065
Created:November 20, 2014 Updated:December 3, 2014
Description: From the Red Hat advisory:

CVE-2014-3065 IBM JDK: privilege escalation via shared class cache

Alerts:
SUSE SUSE-SU-2015:0376-1 java-1_5_0-ibm 2015-02-25
SUSE SUSE-SU-2015:0392-1 java-1_6_0-ibm 2015-02-27
SUSE SUSE-SU-2014:1549-1 java-1_7_1-ibm 2014-12-03
SUSE SUSE-SU-2014:1526-2 IBM Java 2014-12-02
SUSE SUSE-SU-2014:1526-1 IBM Java 2014-11-28
Red Hat RHSA-2014:1880-01 java-1.7.1-ibm 2014-11-20
Red Hat RHSA-2014:1882-01 java-1.7.0-ibm 2014-11-20
Red Hat RHSA-2014:1881-01 java-1.5.0-ibm 2014-11-20
Red Hat RHSA-2014:1876-01 java-1.7.0-ibm 2014-11-19
Red Hat RHSA-2014:1877-01 java-1.6.0-ibm 2014-11-19

Comments (none posted)

kdebase4-runtime, kwebkitpart: code execution

Package(s):kdebase4-runtime CVE #(s):CVE-2014-8600
Created:November 21, 2014 Updated:December 8, 2014
Description:

From the Mageia advisory:

kwebkitpart and the bookmarks:// io slave were not sanitizing input correctly, allowing some JavaScript to be executed in the context of the referenced hostname (CVE-2014-8600).

Alerts:
openSUSE openSUSE-SU-2015:0573-1 kdebase4-runtime, 2015-03-23
Fedora FEDORA-2014-15124 kwebkitpart 2014-12-07
Fedora FEDORA-2014-15130 kwebkitpart 2014-12-06
Ubuntu USN-2414-1 kde-runtime 2014-11-24
Fedora FEDORA-2014-15532 kde-runtime 2014-11-25
Mageia MGASA-2014-0478 kdebase4-runtime 2014-11-21

Comments (none posted)

kernel: multiple vulnerabilities

Package(s):kernel CVE #(s):CVE-2014-7843 CVE-2014-7842 CVE-2014-7841 CVE-2014-7826 CVE-2014-7825
Created:November 21, 2014 Updated:March 3, 2015
Description:

From the Red Hat bug reports:

CVE-2014-7843 - It was found that a read of n*PAGE_SIZE+1 from /dev/zero will cause the kernel to panic due to an unhandled exception since it's not handling the single byte case with a fixup (anything larger than a single byte will properly fault.) A local, unprivileged user could use this flaw to crash the system.

CVE-2014-7842 - It was found that reporting emulation failures to user space can lead to either local or L2->L1 DoS. In the case of local DoS attacker needs access to MMIO area or be able to generate port access. Please note that on certain systems HPET is mapped to userspace as part of vdso (vvar) and thus an unprivileged user may generate MMIO transactions (and enter the emulator) this way.

CVE-2014-7841 - An SCTP server doing ASCONF will panic on malformed INIT ping-of-death in the form of:

     ------------ INIT[PARAM: SET_PRIMARY_IP] ------------>

A remote attacker could use this flaw to crash the system by sending a maliciously prepared SCTP packet in order to trigger a NULL pointer dereference on the server.

From the CVE entries:

CVE-2014-7826 - kernel/trace/trace_syscalls.c in the Linux kernel through 3.17.2 does not properly handle private syscall numbers during use of the ftrace subsystem, which allows local users to gain privileges or cause a denial of service (invalid pointer dereference) via a crafted application.

CVE-2014-7825 - kernel/trace/trace_syscalls.c in the Linux kernel through 3.17.2 does not properly handle private syscall numbers during use of the perf subsystem, which allows local users to cause a denial of service (out-of-bounds read and OOPS) or bypass the ASLR protection mechanism via a crafted application.

Alerts:
Oracle ELSA-2016-3502 kernel 2.6.39 2016-01-09
Oracle ELSA-2016-3502 kernel 2.6.39 2016-01-09
Scientific Linux SLSA-2016:0855-1 kernel 2016-06-16
Red Hat RHSA-2016:0855-01 kernel 2016-05-10
Scientific Linux SLSA-2015:2152-2 kernel 2015-12-21
Oracle ELSA-2015-2152 kernel 2015-11-25
Red Hat RHSA-2015:2152-02 kernel 2015-11-19
Scientific Linux SLSA-2015:0864-1 kernel 2015-04-21
Oracle ELSA-2015-0864 kernel 2015-04-21
CentOS CESA-2015:0864 kernel 2015-04-22
SUSE SUSE-SU-2015:0736-1 Real Time Linux Kernel 2015-04-20
Red Hat RHSA-2015:0864-01 kernel 2015-04-21
SUSE SUSE-SU-2015:0652-1 Linux kernel 2015-04-02
Scientific Linux SLSA-2015:0290-1 kernel 2015-03-25
SUSE SUSE-SU-2015:0581-1 kernel 2015-03-24
openSUSE openSUSE-SU-2015:0566-1 kernel 2015-03-21
Oracle ELSA-2015-3012 kernel 2015-03-19
Oracle ELSA-2015-3012 kernel 2015-03-19
SUSE SUSE-SU-2015:0529-1 the Linux Kernel 2015-03-18
Red Hat RHSA-2015:0695-01 kernel 2015-03-17
SUSE SUSE-SU-2015:0481-1 kernel 2015-03-11
Red Hat RHSA-2015:0290-01 kernel 2015-03-05
Oracle ELSA-2015-0290 kernel 2015-03-12
Red Hat RHSA-2015:0285-01 kernel 2015-03-03
Red Hat RHSA-2015:0284-01 kernel 2015-03-03
Oracle ELSA-2015-3005 kernel 2015-01-29
Oracle ELSA-2015-3005 kernel 2015-01-29
Oracle ELSA-2015-3004 kernel 2015-01-29
Oracle ELSA-2015-3004 kernel 2015-01-29
Oracle ELSA-2015-3003 kernel 2015-01-29
Oracle ELSA-2015-3003 kernel 2015-01-29
CentOS CESA-2015:0102 kernel 2015-01-30
CentOS CESA-2015:0102 kernel 2015-01-29
Scientific Linux SLSA-2015:0102-1 kernel 2015-01-28
Oracle ELSA-2015-0087 kernel 2015-01-28
Oracle ELSA-2015-0102 kernel 2015-01-28
CentOS CESA-2015:0087 kernel 2015-01-28
Red Hat RHSA-2015:0102-01 kernel 2015-01-28
Scientific Linux SLSA-2015:0087-1 kernel 2015-01-28
Red Hat RHSA-2015:0087-01 kernel 2015-01-27
Mandriva MDVSA-2015:027 kernel 2015-01-16
SUSE SUSE-SU-2015:0068-1 the Linux Kernel 2015-01-16
SUSE SUSE-SU-2014:1695-2 Linux kernel 2015-01-14
Ubuntu USN-2464-1 linux-ti-omap4 2015-01-13
Ubuntu USN-2467-1 linux-lts-utopic 2015-01-13
Ubuntu USN-2465-1 linux-lts-trusty 2015-01-13
Ubuntu USN-2463-1 kernel 2015-01-13
Ubuntu USN-2466-1 kernel 2015-01-13
Ubuntu USN-2468-1 kernel 2015-01-13
Fedora FEDORA-2014-17244 kernel 2015-01-05
SUSE SUSE-SU-2014:1695-1 kernel 2014-12-23
SUSE SUSE-SU-2014:1693-1 kernel 2014-12-23
SUSE SUSE-SU-2014:1693-2 kernel 2014-12-24
openSUSE openSUSE-SU-2014:1669-1 kernel 2014-12-19
openSUSE openSUSE-SU-2014:1677-1 kernel 2014-12-21
openSUSE openSUSE-SU-2014:1678-1 kernel 2014-12-21
Debian-LTS DLA-118-1 linux-2.6 2014-12-21
Ubuntu USN-2448-2 kernel 2014-12-19
Ubuntu USN-2447-2 kernel 2014-12-19
Ubuntu USN-2444-1 linux-ti-omap4 2014-12-11
Ubuntu USN-2447-1 linux-lts-utopic 2014-12-11
Ubuntu USN-2445-1 linux-lts-trusty 2014-12-11
Ubuntu USN-2448-1 kernel 2014-12-11
Ubuntu USN-2446-1 kernel 2014-12-11
Ubuntu USN-2443-1 kernel 2014-12-11
Ubuntu USN-2441-1 kernel 2014-12-11
Ubuntu USN-2442-1 EC2 kernel 2014-12-11
Debian DSA-3093-1 kernel 2014-12-08
Red Hat RHSA-2014:1943-01 kernel-rt 2014-12-02
Mandriva MDVSA-2014:230 kernel 2014-11-27
Fedora FEDORA-2014-15200 kernel 2014-11-20

Comments (none posted)

krb5: ticket forgery

Package(s):krb5 CVE #(s):CVE-2014-5351
Created:November 21, 2014 Updated:March 9, 2015
Description:

From the Mageia advisory:

The kadm5_randkey_principal_3 function in lib/kadm5/srv/svr_principal.c in kadmind in MIT Kerberos 5 (aka krb5) before 1.13 sends old keys in a response to a -randkey -keepold request, which allows remote authenticated users to forge tickets by leveraging administrative access.

Alerts:
Fedora FEDORA-2015-2382 krb5 2015-03-09
SUSE SUSE-SU-2015:0290-2 krb5 2015-02-16
SUSE SUSE-SU-2015:0290-1 krb5 2015-02-16
Ubuntu USN-2498-1 krb5 2015-02-10
openSUSE openSUSE-SU-2015:0255-1 krb5 2015-02-11
Gentoo 201412-53 mit-krb5 2014-12-31
Mandriva MDVSA-2014:224 krb5 2014-11-21
Mageia MGASA-2014-0477 krb5 2014-11-21

Comments (none posted)

libksba: denial of service

Package(s):libksba CVE #(s):CVE-2014-9087
Created:November 27, 2014 Updated:March 29, 2015
Description: From the Mageia advisory:

By using special crafted S/MIME messages or ECC based OpenPGP data, it is possible to create a buffer overflow, which could lead to a denial of service (CVE-2014-9087).

Alerts:
Mandriva MDVSA-2015:151 libksba 2015-03-29
Debian-LTS DLA-141-1 libksba 2015-01-29
openSUSE openSUSE-SU-2014:1682-1 libksba 2014-12-22
Fedora FEDORA-2014-15838 libksba 2014-12-07
Fedora FEDORA-2014-15847 libksba 2014-12-06
Ubuntu USN-2427-1 libksba 2014-11-27
Mandriva MDVSA-2014:234 libksba 2014-11-28
Debian DSA-3078-1 libksba 2014-11-27
Mageia MGASA-2014-0498 libksba 2014-11-26

Comments (none posted)

libreoffice: code execution

Package(s):libreoffice CVE #(s):
Created:November 24, 2014 Updated:December 3, 2014
Description: From the freedesktop.org bug report:

Crash while importing malformed .rtf file. According to valgrind there are several invalid writes, including near malloc'd block. Seems to be potentially exploitable.

Alerts:
Fedora FEDORA-2014-15486 libreoffice 2014-11-22

Comments (none posted)

lsyncd: command injection

Package(s):lsyncd CVE #(s):CVE-2014-8990
Created:December 3, 2014 Updated:February 13, 2017
Description: From the Red Hat bugzilla:

It was reported that lsyncd is vulnerable to command injection. If a filename contains backticks ("`"), whatever is between the backticks will be executed with the privileges of the lsyncd process.

Alerts:
Gentoo 201702-05 lsyncd 2017-02-11
Debian DSA-3130-1 lsyncd 2015-01-16
Fedora FEDORA-2014-15373 lsyncd 2014-12-03
Fedora FEDORA-2014-15393 lsyncd 2014-12-03

Comments (none posted)

mariadb: denial of service

Package(s):mariadb CVE #(s):CVE-2014-6564
Created:November 21, 2014 Updated:December 12, 2014
Description:

From the CVE entry:

Unspecified vulnerability in Oracle MySQL Server 5.6.19 and earlier allows remote authenticated users to affect availability via vectors related to SERVER:INNODB FULLTEXT SEARCH DML.

Alerts:
SUSE SUSE-SU-2015:0743-1 mariadb 2015-04-21
SUSE SUSE-SU-2015:0620-1 MySQL 2015-03-28
Fedora FEDORA-2014-16003 mariadb 2014-12-12
Oracle ELSA-2014-1859 mysql55-mysql 2014-11-17
Oracle ELSA-2014-1861 mariadb 2014-11-17

Comments (none posted)

mod-wsgi: privilege escalation

Package(s):mod-wsgi CVE #(s):CVE-2014-8583
Created:December 3, 2014 Updated:December 30, 2016
Description: From the Ubuntu advisory:

It was discovered that mod_wsgi incorrectly handled errors when setting up the working directory and group access rights. A malicious application could possibly use this issue to cause a local privilege escalation when using daemon mode.

Alerts:
Gentoo 201612-49 mod_wsgi 2016-12-30
Mandriva MDVSA-2015:180 apache-mod_wsgi 2015-03-30
Mandriva MDVSA-2014:253 apache-mod_wsgi 2014-12-15
openSUSE openSUSE-SU-2014:1590-1 apache2-mod_wsgi 2014-12-08
Mageia MGASA-2014-0513 apache-mod_wsgi 2014-12-05
Ubuntu USN-2431-1 mod-wsgi 2014-12-03

Comments (none posted)

moodle: multiple vulnerabilities

Package(s):moodle CVE #(s):CVE-2014-7830 CVE-2014-7832 CVE-2014-7833 CVE-2014-7834 CVE-2014-7835 CVE-2014-7836 CVE-2014-7837 CVE-2014-7838 CVE-2014-7845 CVE-2014-7846 CVE-2014-7847 CVE-2014-7848
Created:November 24, 2014 Updated:December 3, 2014
Description: From the Mageia advisory:

In Moodle before 2.6.5, an XSS issue through $searchcourse in mod/feedback/mapcourse.php, due to the last search string in the Feedback module not being escaped in the search input field (CVE-2014-7830).

In Moodle before 2.6.5, the word list for temporary password generation was short, therefore the pool of possible passwords was not big enough (CVE-2014-7845).

In Moodle before 2.6.5, capability checks in the LTI module only checked access to the course and not to the activity (CVE-2014-7832).

In Moodle before 2.6.5, group-level entries in Database activity module became visible to users in other groups after being edited by a teacher (CVE-2014-7833).

In Moodle before 2.6.5, unprivileged users could access the list of available tags in the system (CVE-2014-7846).

In Moodle before 2.6.5, the script used to geo-map IP addresses was available to unauthenticated users increasing server load when used by other parties (CVE-2014-7847).

In Moodle before 2.6.5, when using the web service function for Forum discussions, group permissions were not checked (CVE-2014-7834).

In Moodle before 2.6.5, by directly accessing an internal file, an unauthenticated user can be shown an error message containing the file system path of the Moodle install (CVE-2014-7848).

In Moodle before 2.6.5, if web service with file upload function was available, user could upload XSS file to his profile picture area (CVE-2014-7835).

In Moodle before 2.6.5, two files in the LTI module lacked a session key check, potentially allowing cross-site request forgery (CVE-2014-7836).

In Moodle before 2.6.5, by tweaking URLs, users who were able to delete pages in at least one Wiki activity in the course were able to delete pages in other Wikis in the same course (CVE-2014-7837).

In Moodle before 2.6.5, set tracking script in the Forum module lacked a session key check, potentially allowing cross-site request forgery (CVE-2014-7838).

Alerts:
Fedora FEDORA-2014-15102 moodle 2014-11-25
Mageia MGASA-2014-0483 moodle 2014-11-22

Comments (none posted)

mozilla: multiple vulnerabilities

Package(s):firefox thunderbird seamonkey CVE #(s):CVE-2014-1587 CVE-2014-1590 CVE-2014-1592 CVE-2014-1593 CVE-2014-1594
Created:December 3, 2014 Updated:February 3, 2015
Description: From the Red Hat advisory:

Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2014-1587, CVE-2014-1590, CVE-2014-1592, CVE-2014-1593)

A flaw was found in the Alarm API, which could allow applications to schedule actions to be run in the future. A malicious web application could use this flaw to bypass the same-origin policy. (CVE-2014-1594)

Alerts:
openSUSE openSUSE-SU-2015:1266-1 firefox, thunderbird 2015-07-18
Gentoo 201504-01 firefox 2015-04-07
Fedora FEDORA-2015-1133 seamonkey 2015-02-03
Fedora FEDORA-2015-1066 seamonkey 2015-02-03
openSUSE openSUSE-SU-2015:0138-1 Firefox 2015-01-25
Fedora FEDORA-2014-17217 seamonkey 2014-12-27
Fedora FEDORA-2014-17219 seamonkey 2014-12-27
Fedora FEDORA-2014-17126 seamonkey 2014-12-27
openSUSE openSUSE-SU-2014:1654-1 thunderbird 2014-12-17
openSUSE openSUSE-SU-2014:1656-1 seamonkey 2014-12-17
openSUSE openSUSE-SU-2014:1655-1 seamonkey 2014-12-17
SUSE SUSE-SU-2014:1624-1 Mozilla Firefox 2014-12-12
Mageia MGASA-2014-0518 iceape 2014-12-09
openSUSE openSUSE-SU-2014:1581-1 firefox 2014-12-07
Fedora FEDORA-2014-16242 thunderbird 2014-12-07
Fedora FEDORA-2014-16242 firefox 2014-12-07
Debian DSA-3092-1 icedove 2014-12-07
Ubuntu USN-2428-1 thunderbird 2014-12-03
Scientific Linux SLSA-2014:1924-1 thunderbird 2014-12-03
Scientific Linux SLSA-2014:1919-1 firefox 2014-12-03
Oracle ELSA-2014-1919 firefox 2014-12-03
Oracle ELSA-2014-1919 firefox 2014-12-03
Mageia MGASA-2014-0507 firefox, thunderbird 2014-12-03
Fedora FEDORA-2014-16259 thunderbird 2014-12-04
Fedora FEDORA-2014-16259 firefox 2014-12-04
Debian DSA-3090-1 iceweasel 2014-12-04
CentOS CESA-2014:1924 thunderbird 2014-12-03
CentOS CESA-2014:1924 thunderbird 2014-12-03
CentOS CESA-2014:1919 firefox 2014-12-04
CentOS CESA-2014:1919 firefox 2014-12-03
CentOS CESA-2014:1919 firefox 2014-12-03
Ubuntu USN-2424-1 firefox 2014-12-02
Slackware SSA:2014-337-01 thunderbird 2014-12-02
Oracle ELSA-2014-1924 thunderbird 2014-12-02
Oracle ELSA-2014-1919 firefox 2014-12-03
Red Hat RHSA-2014:1924-01 thunderbird 2014-12-02
Red Hat RHSA-2014:1919-01 firefox 2014-12-02

Comments (none posted)

mozilla: multiple vulnerabilities

Package(s):firefox thunderbird seamonkey CVE #(s):CVE-2014-1588 CVE-2014-1589 CVE-2014-1591
Created:December 3, 2014 Updated:February 3, 2015
Description: From the Ubuntu advisory:

Gary Kwong, Randell Jesup, Nils Ohlmeier, Jesse Ruderman, Max Jonas Werner, Christian Holler, Jon Coppeard, Eric Rahm, Byron Campen, Eric Rescorla, and Xidorn Quan discovered multiple memory safety issues in Firefox. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to cause a denial of service via application crash, or execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2014-1588)

Cody Crews discovered a way to trigger chrome-level XBL bindings from web content in some circumstances. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to bypass security restrictions. (CVE-2014-1589)

Muneaki Nishimura discovered that CSP violation reports did not remove path information in some circumstances. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to obtain sensitive information. (CVE-2014-1591)

Alerts:
Gentoo 201504-01 firefox 2015-04-07
Fedora FEDORA-2015-1133 seamonkey 2015-02-03
Fedora FEDORA-2015-1066 seamonkey 2015-02-03
Fedora FEDORA-2014-17217 seamonkey 2014-12-27
Fedora FEDORA-2014-17219 seamonkey 2014-12-27
Fedora FEDORA-2014-17126 seamonkey 2014-12-27
openSUSE openSUSE-SU-2014:1656-1 seamonkey 2014-12-17
openSUSE openSUSE-SU-2014:1655-1 seamonkey 2014-12-17
SUSE SUSE-SU-2014:1624-1 Mozilla Firefox 2014-12-12
Mageia MGASA-2014-0518 iceape 2014-12-09
openSUSE openSUSE-SU-2014:1581-1 firefox 2014-12-07
Fedora FEDORA-2014-16259 thunderbird 2014-12-04
Fedora FEDORA-2014-16259 firefox 2014-12-04
Ubuntu USN-2424-1 firefox 2014-12-02

Comments (none posted)

mutt: denial of service

Package(s):mutt CVE #(s):CVE-2014-9116
Created:December 1, 2014 Updated:January 2, 2017
Description: From the Debian advisory:

A flaw was discovered in mutt, a text-based mailreader. A specially crafted mail header could cause mutt to crash, leading to a denial of service condition.

Alerts:
Gentoo 201701-04 mutt 2017-01-01
Slackware SSA:2015-111-07 mutt 2015-04-21
Mandriva MDVSA-2015:078 mutt 2015-03-28
Arch Linux ASA-201503-6 mutt 2015-03-09
Fedora FEDORA-2014-16494 mutt 2015-02-15
Fedora FEDORA-2014-16782 mutt 2015-02-15
SUSE SUSE-SU-2015:0012-1 mutt 2015-01-06
openSUSE openSUSE-SU-2014:1635-1 mutt 2014-12-15
Mandriva MDVSA-2014:245 mutt 2014-12-14
Ubuntu USN-2440-1 mutt 2014-12-11
Mageia MGASA-2014-0509 mutt 2014-12-05
Debian DSA-3083-1 mutt 2014-11-30

Comments (none posted)

openssl: TLS handshake problem

Package(s):openssl CVE #(s):
Created:November 24, 2014 Updated:December 3, 2014
Description: From the openSUSE bug report:

openssl-1.0.1i-2.1.4 that comes with openSUSE 13.2 is configured with 'no-ec2m'. This exposes a bug in openssl that lets the client advertise a non-prime field curve that it does not actually support.

Alerts:
openSUSE openSUSE-SU-2014:1474-1 openssl 2014-11-24

Comments (none posted)

openstack-neutron: denial of service

Package(s):openstack-neutron CVE #(s):CVE-2014-7821
Created:December 3, 2014 Updated:April 22, 2015
Description: From the CVE entry:

OpenStack Neutron before 2014.1.4 and 2014.2.x before 2014.2.1 allows remote authenticated users to cause a denial of service (crash) via a crafted dns_nameservers value in the DNS configuration.

Alerts:
Fedora FEDORA-2015-5997 openstack-neutron 2015-04-21
Red Hat RHSA-2015:0044-01 openstack-neutron 2015-01-13
Red Hat RHSA-2014:1938-01 openstack-neutron 2014-12-02
Red Hat RHSA-2014:1942-01 openstack-neutron 2014-12-02

Comments (none posted)

openstack-trove: information disclosure

Package(s):openstack-trove CVE #(s):CVE-2014-7231
Created:December 3, 2014 Updated:December 3, 2014
Description: From the CVE entry:

The strutils.mask_password function in the OpenStack Oslo utility library, Cinder, Nova, and Trove before 2013.2.4 and 2014.1 before 2014.1.3 does not properly mask passwords when logging commands, which allows local users to obtain passwords by reading the log.

Alerts:
Red Hat RHSA-2014:1939-01 openstack-trove 2014-12-02

Comments (none posted)

openvpn: denial of service

Package(s):openvpn CVE #(s):CVE-2014-8104
Created:December 2, 2014 Updated:March 29, 2015
Description: From the Debian advisory:

Dragana Damjanovic discovered that an authenticated client could crash an OpenVPN server by sending a control packet containing less than four bytes as payload.

Alerts:
Mandriva MDVSA-2015:139 openvpn 2015-03-29
Gentoo 201412-41 openvpn 2014-12-26
SUSE SUSE-SU-2014:1694-1 openvpn 2014-12-23
Mandriva MDVSA-2014:246 openvpn 2014-12-14
Fedora FEDORA-2014-16234 pkcs11-helper 2014-12-13
Fedora FEDORA-2014-16273 pkcs11-helper 2014-12-13
Fedora FEDORA-2014-16234 openvpn 2014-12-13
Fedora FEDORA-2014-16273 openvpn 2014-12-13
Fedora FEDORA-2014-16060 openvpn 2014-12-12
Slackware SSA:2014-344-04 openvpn 2014-12-10
SUSE SUSE-SU-2014:1605-1 OpenVPN 2014-12-09
openSUSE openSUSE-SU-2014:1594-1 openvpn 2014-12-08
Mageia MGASA-2014-0512 openvpn 2014-12-05
Ubuntu USN-2430-1 openvpn 2014-12-02
Debian DSA-3084-1 openvpn 2014-12-01

Comments (none posted)

oxide-qt: multiple vulnerabilities

Package(s):oxide-qt CVE #(s):CVE-2014-7904 CVE-2014-7907 CVE-2014-7908 CVE-2014-7909 CVE-2014-7910
Created:November 20, 2014 Updated:December 3, 2014
Description: From the Ubuntu advisory:

A buffer overflow was discovered in Skia. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to cause a denial of service via renderer crash or execute arbitrary code with the privileges of the sandboxed render process. (CVE-2014-7904)

Multiple use-after-frees were discovered in Blink. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to cause a denial of service via renderer crash or execute arbitrary code with the privileges of the sandboxed render process. (CVE-2014-7907)

An integer overflow was discovered in media. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to cause a denial of service via renderer crash or execute arbitrary code with the privileges of the sandboxed render process. (CVE-2014-7908)

An uninitialized memory read was discovered in Skia. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to cause a denial of service via renderer crash. (CVE-2014-7909)

Multiple security issues were discovered in Chromium. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to read uninitialized memory, cause a denial of service via application crash or execute arbitrary code with the privileges of the user invoking the program. (CVE-2014-7910)

Alerts:
Gentoo 201412-13 chromium 2014-12-13
openSUSE openSUSE-SU-2014:1626-1 chromium 2014-12-12
Mageia MGASA-2014-0485 chromium-browser-stable 2014-11-25
Red Hat RHSA-2014:1894-01 chromium-browser 2014-11-24
Ubuntu USN-2410-1 oxide-qt 2014-11-19

Comments (none posted)

phpmyadmin: multiple vulnerabilities

Package(s):phpmyadmin CVE #(s):CVE-2014-8958 CVE-2014-8959 CVE-2014-8960 CVE-2014-8961
Created:November 26, 2014 Updated:December 3, 2014
Description: From the Mandriva advisory:

Multiple vulnerabilities have been discovered and corrected in phpmyadmin:

* Multiple XSS vulnerabilities (CVE-2014-8958).

* Local file inclusion vulnerability (CVE-2014-8959).

* XSS vulnerability in error reporting functionality (CVE-2014-8960).

* Leakage of line count of an arbitrary file (CVE-2014-8961).

This upgrade provides the latest phpmyadmin version (4.2.12) to address these vulnerabilities.

Alerts:
Debian-LTS DLA-336-1 phpmyadmin 2015-10-28
Debian DSA-3382-1 phpmyadmin 2015-10-28
Gentoo 201505-03 phpmyadmin 2015-05-31
openSUSE openSUSE-SU-2014:1561-1 phpMyAdmin 2014-12-05
Fedora FEDORA-2014-15535 phpMyAdmin 2014-12-01
Fedora FEDORA-2014-15538 phpMyAdmin 2014-12-01
Mageia MGASA-2014-0495 phpmyadmin 2014-11-26
Mandriva MDVSA-2014:228 phpmyadmin 2014-11-26

Comments (none posted)

php-smarty: cross-site scripting

Package(s):php-smarty CVE #(s):CVE-2012-4437
Created:November 24, 2014 Updated:December 3, 2014
Description: From the CVE entry:

Cross-site scripting (XSS) vulnerability in the SmartyException class in Smarty (aka smarty-php) before 3.1.12 allows remote attackers to inject arbitrary web script or HTML via unspecified vectors that trigger a Smarty exception.

Alerts:
Mandriva MDVSA-2014:221 php-smarty 2014-11-21

Comments (none posted)

privoxy: denial of service

Package(s):privoxy CVE #(s):
Created:November 21, 2014 Updated:December 3, 2014
Description:

From the Mageia advisory:

The logrotate configuration of the privoxy package did not function properly, causing its log files not to be rotated. The log file(s) could potentially fill up the disk.

Alerts:
Mageia MGASA-2014-0463 privoxy 2014-11-21

Comments (none posted)

python-djblets: cross-site scripting

Package(s):python-djblets CVE #(s):CVE-2014-3995
Created:November 21, 2014 Updated:December 3, 2014
Description:

From the Mageia advisory:

Cross-site scripting (XSS) vulnerability in gravatars/templatetags/gravatars.py in Djblets before 0.7.30 for Django allows remote attackers to inject arbitrary web script or HTML via a user display name (CVE-2014-3995).

Alerts:
Mageia MGASA-2014-0462 python-djblets 2014-11-21

Comments (none posted)

python-imaging, python-pillow: code execution

Package(s):python-imaging, python-pillow CVE #(s):CVE-2014-3007
Created:November 21, 2014 Updated:December 3, 2014
Description:

From the Mageia advisory:

Python Image Library (PIL) 1.1.7 and earlier and Pillow 2.3 might allow remote attackers to execute arbitrary commands via shell metacharacters, due to an incomplete fix for CVE-2014-1932 (CVE-2014-3007).

Alerts:
Mandriva MDVSA-2015:099 python-pillow 2015-03-28
Fedora FEDORA-2014-14980 python-pillow 2014-11-22
Fedora FEDORA-2014-14883 python-pillow 2014-11-22
Mageia MGASA-2014-0476 python-imaging, python-pillow 2014-11-21

Comments (none posted)

ruby: denial of service

Package(s):ruby CVE #(s):CVE-2014-8090
Created:November 21, 2014 Updated:December 3, 2014
Description:

From the Mageia advisory:

Due to an incomplete fix for CVE-2014-8080, 100% CPU utilization can occur as a result of recursive expansion with an empty String. When reading text nodes from an XML document, the REXML parser in Ruby can be coerced into allocating extremely large string objects which can consume all of the memory on a machine, causing a denial of service (CVE-2014-8090).

Alerts:
Debian-LTS DLA-200-1 ruby1.9.1 2015-04-15
Mandriva MDVSA-2015:129 ruby 2015-03-29
Debian DSA-3159-1 ruby1.8 2015-02-10
Debian DSA-3157-1 ruby1.9.1 2015-02-09
openSUSE openSUSE-SU-2015:0002-1 ruby20 2015-01-02
openSUSE openSUSE-SU-2015:0007-1 ruby2.1 2015-01-02
Gentoo 201412-27 ruby 2014-12-13
openSUSE openSUSE-SU-2014:1589-1 ruby19 2014-12-08
Scientific Linux SLSA-2014:1911-1 ruby 2014-12-01
Scientific Linux SLSA-2014:1912-1 ruby 2014-12-01
CentOS CESA-2014:1911 ruby 2014-12-01
CentOS CESA-2014:1912 ruby 2014-12-01
Oracle ELSA-2014-1911 ruby 2014-11-26
Oracle ELSA-2014-1912 ruby 2014-11-26
Red Hat RHSA-2014:1914-01 ruby200-ruby 2014-11-26
Red Hat RHSA-2014:1913-01 ruby193-ruby 2014-11-26
Red Hat RHSA-2014:1911-01 ruby 2014-11-26
Red Hat RHSA-2014:1912-01 ruby 2014-11-26
Mandriva MDVSA-2014:225 ruby 2014-11-25
Ubuntu USN-2412-1 ruby1.8, ruby1.9.1, ruby2.0, ruby2.1 2014-11-20
Mageia MGASA-2014-0472 ruby 2014-11-21

Comments (none posted)

rubygem-actionpack: two information leaks

Package(s):rubygem-actionpack-3_2 CVE #(s):CVE-2014-7818 CVE-2014-7829
Created:November 27, 2014 Updated:March 5, 2015
Description: From the openSUSE advisory:

- Arbitrary file existence disclosure (CVE-2014-7829).

- Arbitrary file existence disclosure (CVE-2014-7818).

Alerts:
Fedora FEDORA-2014-15371 rubygem-actionpack 2015-03-05
Fedora FEDORA-2014-15342 rubygem-actionpack 2015-02-15
openSUSE openSUSE-SU-2014:1515-1 rubygem-actionpack-3_2 2014-11-27

Comments (none posted)

rubygem-sprockets: directory traversal

Package(s):rubygem-sprockets CVE #(s):CVE-2014-7819
Created:November 26, 2014 Updated:February 20, 2015
Description: From the CVE entry:

Multiple directory traversal vulnerabilities in server.rb in Sprockets before 2.0.5, 2.1.x before 2.1.4, 2.2.x before 2.2.3, 2.3.x before 2.3.3, 2.4.x before 2.4.6, 2.5.x before 2.5.1, 2.6.x and 2.7.x before 2.7.1, 2.8.x before 2.8.3, 2.9.x before 2.9.4, 2.10.x before 2.10.2, 2.11.x before 2.11.3, 2.12.x before 2.12.3, and 3.x before 3.0.0.beta.3, as distributed with Ruby on Rails 3.x and 4.x, allow remote attackers to determine the existence of files outside the application root via a ../ (dot dot slash) sequence with (1) double slashes or (2) URL encoding.

Alerts:
Mageia MGASA-2015-0074 ruby-sprockets 2015-02-19
Fedora FEDORA-2014-15489 rubygem-sprockets 2015-02-15
Fedora FEDORA-2014-15413 rubygem-sprockets 2015-02-15
openSUSE openSUSE-SU-2014:1513-1 rubygem-sprockets 2014-11-27
openSUSE openSUSE-SU-2014:1514-1 rubygem-sprockets 2014-11-27
openSUSE openSUSE-SU-2014:1504-1 rubygem-sprockets-2_2 2014-11-26
openSUSE openSUSE-SU-2014:1502-1 rubygem-sprockets-2_1 2014-11-26

Comments (none posted)

tcpdump: three vulnerabilities

Package(s):tcpdump CVE #(s):CVE-2014-8767 CVE-2014-8768 CVE-2014-8769
Created:November 27, 2014 Updated:February 13, 2015
Description:

Bug #1165160 - CVE-2014-8767 tcpdump: denial of service in verbose mode using malformed OLSR payload

Bug #1165161 - CVE-2014-8768 tcpdump: denial of service in verbose mode using malformed Geonet payload

Bug #1165162 - CVE-2014-8769 tcpdump: unreliable output using malformed AODV payload

Alerts:
Mandriva MDVSA-2015:125 tcpdump 2015-03-29
Arch Linux ASA-201503-20 tcpdump 2015-03-20
openSUSE openSUSE-SU-2015:0284-1 tcpdump 2015-02-13
Gentoo 201502-05 tcpdump 2015-02-07
Fedora FEDORA-2014-16861 tcpdump 2014-12-18
Mandriva MDVSA-2014:240 tcpdump 2014-12-14
Debian DSA-3086-1 tcpdump 2014-12-03
Ubuntu USN-2433-1 tcpdump 2014-12-04
Fedora FEDORA-2014-15549 tcpdump 2014-12-04
Mageia MGASA-2014-0503 tcpdump 2014-12-01
Fedora FEDORA-2014-15541 tcpdump 2014-11-27

Comments (none posted)

teeworlds: information leak

Package(s):teeworlds CVE #(s):
Created:December 2, 2014 Updated:December 4, 2014
Description: From the Mageia advisory:

A security flaw was found in the teeworlds server prior to 0.6.3 where an incorrect offset check could enable an attacker to read memory or trigger a segmentation fault.

Alerts:
Fedora FEDORA-2014-15701 teeworlds 2014-12-04
Fedora FEDORA-2014-15733 teeworlds 2014-12-04
Mageia MGASA-2014-0502 teeworlds 2014-12-01

Comments (none posted)

wireshark: multiple vulnerabilities

Package(s):wireshark CVE #(s):CVE-2014-8710 CVE-2014-8711 CVE-2014-8712 CVE-2014-8713 CVE-2014-8714
Created:November 21, 2014 Updated:December 4, 2014
Description:

From the Mageia advisory:

SigComp UDVM buffer overflow (CVE-2014-8710).

AMQP crash (CVE-2014-8711).

NCP crashes (CVE-2014-8712, CVE-2014-8713).

TN5250 infinite loops (CVE-2014-8714).

Alerts:
Scientific Linux SLSA-2015:2393-1 wireshark 2015-12-21
Red Hat RHSA-2015:2393-01 wireshark 2015-11-19
Scientific Linux SLSA-2015:1460-1 wireshark 2015-08-03
Oracle ELSA-2015-1460 wireshark 2015-07-29
Red Hat RHSA-2015:1460-01 wireshark 2015-07-22
Debian-LTS DLA-198-1 wireshark 2015-04-22
Fedora FEDORA-2014-15244 wireshark 2014-12-04
openSUSE openSUSE-SU-2014:1503-1 wireshark 2014-11-26
Debian DSA-3076-1 wireshark 2014-11-25
Mandriva MDVSA-2014:223 wireshark 2014-11-21
Mageia MGASA-2014-0471 wireshark 2014-11-21

Comments (none posted)

wordpress: multiple vulnerabilities

Package(s):wordpress CVE #(s):CVE-2014-9031 CVE-2014-9032 CVE-2014-9033 CVE-2014-9034 CVE-2014-9035 CVE-2014-9036 CVE-2014-9037 CVE-2014-9038 CVE-2014-9039
Created:November 27, 2014 Updated:December 3, 2014
Description: From the Mageia advisory:

XSS in wptexturize() via comments or posts, exploitable for unauthenticated users (CVE-2014-9031).

XSS in media playlists (CVE-2014-9032).

CSRF in the password reset process (CVE-2014-9033).

Denial of service for giant passwords. The phpass library by Solar Designer was used in both projects without setting a maximum password length, which can lead to CPU exhaustion upon hashing (CVE-2014-9034).

XSS in Press This (CVE-2014-9035).

XSS in HTML filtering of CSS in posts (CVE-2014-9036).

Hash comparison vulnerability in old-style MD5-stored passwords (CVE-2014-9037).

SSRF: Safe HTTP requests did not sufficiently block the loopback IP address space (CVE-2014-9038).

Previously an email address change would not invalidate a previous password reset email (CVE-2014-9039).

Alerts:
Debian-LTS DLA-236-1 wordpress 2015-06-01
Fedora FEDORA-2014-15526 wordpress 2014-12-03
Fedora FEDORA-2014-15507 wordpress 2014-12-03
Debian DSA-3085-1 wordpress 2014-12-03
Mandriva MDVSA-2014:233 wordpress 2014-11-27
Mageia MGASA-2014-0493 wordpress 2014-11-26

Comments (none posted)

xen: multiple vulnerabilities

Package(s):xen CVE #(s):CVE-2014-8594 CVE-2014-8595 CVE-2014-9030
Created:December 2, 2014 Updated:December 12, 2014
Description: From the CVE entries:

The do_mmu_update function in arch/x86/mm.c in Xen 4.x through 4.4.x does not properly restrict updates to only PV page tables, which allows remote PV guests to cause a denial of service (NULL pointer dereference) by leveraging hardware emulation services for HVM guests using Hardware Assisted Paging (HAP). (CVE-2014-8594)

arch/x86/x86_emulate/x86_emulate.c in Xen 3.2.1 through 4.4.x does not properly check privileges, which allows local HVM guest users to gain privileges or cause a denial of service (crash) via a crafted (1) CALL, (2) JMP, (3) RETF, (4) LCALL, (5) LJMP, or (6) LRET far branch instruction. (CVE-2014-8595)

The do_mmu_update function in arch/x86/mm.c in Xen 3.2.x through 4.4.x does not properly manage page references, which allows remote domains to cause a denial of service by leveraging control over an HVM guest and a crafted MMU_MACHPHYS_UPDATE. (CVE-2014-9030)

Alerts:
Gentoo 201504-04 xen 2015-04-11
openSUSE openSUSE-SU-2015:0256-1 xen 2015-02-11
openSUSE openSUSE-SU-2015:0226-1 xen 2015-02-06
Debian DSA-3140-1 xen 2015-01-27
SUSE SUSE-SU-2015:0022-1 xen 2015-01-09
Fedora FEDORA-2014-15951 xen 2014-12-12
Fedora FEDORA-2014-15503 xen 2014-12-01
Fedora FEDORA-2014-15521 xen 2014-12-01

Comments (none posted)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 3.18-rc7, released on November 30. Linus seems happy enough, despite the persistent lockup problem that has defied all debugging attempts so far. "At the same time, with the holidays coming up, and the problem _not_ being a regression, I suspect that what will happen is that I'll release 3.18 on time in a week, because delaying it will either mess up the merge window and the holiday season, or I'd have to delay it a *lot*."

3.18-rc6 was released on November 23.

Stable updates: 3.10.61, 3.14.25, and 3.17.4 were released on November 21.

Comments (none posted)

Quotes of the week

"This is hard!!! Let's go do some single-threaded programming!"
Paul McKenney

I look forward to the patch that makes us all lazy by default.
Chris Mason; one might argue that it was applied years ago.

static inline void *
someone_think_of_a_name_for_this(gfp_t gfp_mask, unsigned int order)
{
    return (void *)__get_free_pages(gfp, order);
}
— API design, Andrew Morton style

Comments (2 posted)

McKenney: Stupid RCU Tricks: rcutorture Catches an RCU Bug

On his blog, Paul McKenney investigates a bug in read-copy update (RCU) in preparation for the 3.19 merge window. "Of course, we all have specific patches that we are suspicious of. So my next step was to revert suspect patches and to otherwise attempt to outguess the bug. Unfortunately, I quickly learned that the bug is difficult to reproduce, requiring something like 100 hours of focused rcutorture testing. Bisection based on 100-hour tests would have consumed the remainder of 2014 and a significant fraction of 2015, so something better was required. In fact, something way better was required because there was only a very small number of failures, which meant that the expected test time to reproduce the bug might well have been 200 hours or even 300 hours instead of my best guess of 100 hours."

Comments (18 posted)

Version 2 of the kdbus patches posted

The second version of the kdbus patches have been posted to the Linux kernel mailing list by Greg Kroah-Hartman. The biggest change since the original patch set (which we looked at in early November) is that kdbus now provides a filesystem-based interface (kdbusfs) rather than the /dev/kdbus device-based interface. There are lots of other changes in response to v1 review comments as well. "kdbus is a kernel-level IPC implementation that aims for resemblance to [the] protocol layer with the existing userspace D-Bus daemon while enabling some features that couldn't be implemented before in userspace."

Comments (14 posted)

Kernel development news

ACCESS_ONCE() and compiler bugs

By Jonathan Corbet
December 3, 2014
The ACCESS_ONCE() macro is used throughout the kernel to ensure that code generated by the compiler will access the indicated variable once (and only once); see this article for details on how it works and when its use is necessary. When that article was written (2012), there were 200 invocations of ACCESS_ONCE() in the kernel; now there are over 700 of them. Like many low-level techniques for concurrency management, ACCESS_ONCE() relies on trickery that is best hidden from view. And, like such techniques, it may break if the compiler changes behavior or, as has been seen recently, contains a bug.

Back in November, Christian Borntraeger posted a message regarding the interactions between ACCESS_ONCE() and an obscure GCC bug. To understand the problem, it is worth looking at the macro, which is defined simply in current kernels (in <linux/compiler.h>):

    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

In short, ACCESS_ONCE() forces the variable to be treated as being a volatile type, even though it (like almost all variables in the kernel) is not declared that way. The problem reported by Christian is that GCC 4.6 and 4.7 will drop the volatile modifier if the variable passed into it is not of a scalar type. It works fine if x is an int, for example, but not if x has a more complicated type. For example, ACCESS_ONCE() is often used with page table entries, which are defined as having the pte_t type:

    typedef struct {
	unsigned long pte;
    } pte_t;

In this case, the volatile semantics will be lost in buggy compilers, leading to buggy kernels. Christian started by looking for ways to work around the problem, only to be informed that normal kernel practice is to avoid working around compiler bugs whenever possible; instead, the buggy versions should simply be blacklisted in the kernel build system. But 4.6 and 4.7 are installed on a lot of systems; blacklisting them would inconvenience many users. And, as Linus put it, there can be reasons for approaches other than blacklisting:

So I do agree with Heiko that we generally don't want to work around compiler bugs if we can avoid it. But sometimes the compiler bugs do end up saying "you're doing something very fragile". Maybe we should try to be less fragile here.

One way of being less fragile would be to change the affected ACCESS_ONCE() calls to point to the scalar parts of the relevant non-scalar types. So, if code does something like:

    pte_t p = ACCESS_ONCE(pte);

It could be changed to something like:

    unsigned long p = ACCESS_ONCE(pte.pte);

This type of change requires auditing all ACCESS_ONCE() calls, though, to find the ones using non-scalar types; that would be a lengthy and error-prone process that would not prevent the addition of new bugs in the future.

Another approach to the problem explored by Christian was to remove a number of problematic ACCESS_ONCE() calls and just put in a compiler barrier with barrier() instead. In many cases, a barrier is sufficient, but in others it is not. Once again, a detailed audit is required, and there is nothing preventing new code from adding buggy ACCESS_ONCE() calls.

So Christian headed down the path of changing ACCESS_ONCE() to simply disallow the use of non-scalar types altogether. In the most recent version of the patch set, ACCESS_ONCE() looks like this:

    #define __ACCESS_ONCE(x) ({ \
	       __maybe_unused typeof(x) __var = 0; \
	       (volatile typeof(x) *)&(x); })
    #define ACCESS_ONCE(x) (*__ACCESS_ONCE(x))

This version will cause compilation failures if a non-scalar type is passed into the macro: the zero initialization of __var is only valid for scalar types, so a structure argument will no longer compile. But what about the situations where a non-scalar type needs to be used? For these cases, Christian has introduced two new macros, READ_ONCE() and ASSIGN_ONCE(). The definition of the former looks like this:

    static __always_inline void __read_once_size(volatile void *p, void *res, int size)
    {
    	switch (size) {
    	case 1: *(u8 *)res = *(volatile u8 *)p; break;
    	case 2: *(u16 *)res = *(volatile u16 *)p; break;
    	case 4: *(u32 *)res = *(volatile u32 *)p; break;
    #ifdef CONFIG_64BIT
    	case 8: *(u64 *)res = *(volatile u64 *)p; break;
    #endif
        }
    }
    
    #define READ_ONCE(p) \
          ({ typeof(p) __val; __read_once_size(&p, &__val, sizeof(__val)); __val; })

Essentially, it works by forcing the use of scalar types, even if the variable passed in does not have such a type. Providing a single access macro that worked on both the left-hand and right-hand sides of an assignment turned out to not be trivial, so the separate ASSIGN_ONCE() was provided for the left-hand side case.

Christian's patch set replaces ACCESS_ONCE() calls with READ_ONCE() or ASSIGN_ONCE() in cases where the latter are needed. Comments in the code suggest that those macros should be preferred to ACCESS_ONCE() in the future, but most existing ACCESS_ONCE() calls have not been changed. Developers using ACCESS_ONCE() to access non-scalar types in the future will get an unpleasant surprise from the compiler, though.

This version of the patch has received few comments and seems likely to make it into the mainline in the near future; backports to the stable series are also probably on the agenda. There are times when it is best to simply avoid versions of the compiler with known bugs altogether. But, as can be seen here, compiler bugs can also be seen as a signal that things could be done better in the kernel, leading to more robust code overall.

Comments (14 posted)

Splicing out syscalls for tiny kernels

By Jonathan Corbet
December 3, 2014
It is no secret that the Linux kernel has grown over time; the constant addition of features and hardware support means that almost every development cycle adds more code than it removes. The good news is that, for most of us, the increase in hardware speed and size has far outstripped the growth of the kernel, so few of us begrudge the extra resources that a larger kernel requires. Developers working on tiny systems, though, are still concerned about every byte consumed by the kernel. Accommodating their needs seems likely to be a source of ongoing stress in the community.

The latest example comes from Pieter Smith's patch set to remove support for the splice() family of system calls, including sendfile() and tee(). There will be many tiny systems with dedicated applications that have no need for those calls; removing them from the kernel makes 8KB of memory available for other purposes. The Linux "tinification" developers see that as a worthwhile gain, but some others disagree.

In particular, David Miller opposed the change, saying "I think starting to compile out system calls is a very slippery slope we should not begin the journey down." He worries that, even if a specific system works today without splice(), there may be a surprise tomorrow when some library starts using that system call. Developers working on Linux systems, David appears to be arguing, should be able to count on having the basic system call set available to them anywhere.

The tinification developers have a couple of answers to this concern. One is that developers working on tiny systems know what they are doing and which system calls they can do without. As Josh Triplett put it:

We're talking about embedded systems small enough that you're booting with init=/your/app and don't even call fork(), where you know exactly what code you're putting in and what libraries you use. And they're almost certainly not running glibc.

The other response is that the kernel has, in fact, provided support for compiling out major subsystems since the beginning. Quoting Josh again:

It's not a "slippery slope"; it's been our standard practice for ages. We started down that road long, long ago, when we first introduced Kconfig and optional/modular features. /dev/* are user-facing interfaces, yet you can compile them out or make them modular. /sys/* and /proc/* are user-facing interfaces, yet you can compile part or all of them out. Filesystem names passed to mount are user-facing interfaces, yet you can compile them out.

(The list goes on for some time; see the original mail for all the details.) Eric Biederman added that the SYSV IPC system calls have been optional for a long time, and Alan Cox listed more optional items as well. David finally seemed to concede that making system calls optional was not a new thing for the Linux kernel, but he stopped short of actually supporting the splice() removal patch.
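The mechanism involved is the familiar Kconfig one. A hypothetical entry for the splice family might look something like this (the option name, EXPERT dependency, and wording are illustrative assumptions based on the discussion, not text from the patch itself):

```
config SYSCALL_SPLICE
	bool "Enable splice(), sendfile() and tee() system calls" if EXPERT
	default y
	help
	  Saying N here saves roughly 8KB of kernel memory, but any
	  program or library that uses these system calls will receive
	  ENOSYS at run time.
```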

Without his opposition, though, this patch may go in. But a look at the kernel tinification project list makes it clear that this discussion is likely to return in the future. The tinification developers would like to be able to compile out support for SMP systems, random number generation, signal handling, capabilities, non-root users, sockets, the ability for processes to exit, and more. Eventually, they would like to have an automated tool that can examine a user-space image and build a configuration removing every system call that the given programs do not use.

Needless to say, any kernel that has been stripped down to that extent will not resemble a contemporary Linux system. But, on the other hand, neither do the ancient (but much smaller) kernels that these users often employ now. If Linux wants to have a place on tiny systems, the kernel will have to adapt to the resource constraints that come with such systems. That will bring challenges beyond convincing developers to allow important functionality to be configured out; the tinification developers will also have to figure out a way to allow this configuration without introducing large numbers of new configuration options and adding complexity to the build system.

It looks like a hard line to walk. But the Linux kernel embodies the solution to a lot of hard problems already; where there are willing developers, there is usually a way. If the tinification developers can find a way here, Linux has a much better chance of being present on the tiny systems that are likely to be embedded in all kinds of devices in the coming years. That seems like a goal worth trying for.

Comments (7 posted)

Version 2 of the kdbus patch set

By Jonathan Corbet
December 3, 2014
When the long-awaited kdbus patch set hit linux-kernel at the end of October, it ran into a number of criticisms from reviewers. Some developers might have given up in discouragement, muttering about how unfriendly the kernel development community is. The kdbus developers know better than that, though. This can be seen in the version 2 posting; the code has changed significantly in response to the comments that were received the first time around. Kdbus may still not be ready for immediate inclusion into the mainline, but it does seem to be getting closer.

No more device files

One of the biggest complaints about the first version was its use of device files to manage interaction with the system. Devices need to be named; that forced a hierarchical global naming system on kdbus domains — which were otherwise not inherently hierarchical. The global namespace imposed a privilege requirement, making it harder for unprivileged users to create kdbus domains; it also added complications for users wanting to checkpoint and restore containers.

The second version does away with the device abstraction, replacing it with a virtual filesystem called "kdbusfs." This filesystem will normally be mounted under /sys/fs/kdbus. Creating a new kdbus domain (a container that holds a namespace for one or more buses) is simply a matter of mounting an instance of this filesystem; the domain will persist until the filesystem is unmounted. Kdbus itself imposes no special privilege requirement for creating a new domain, though mounting a filesystem is, of course, normally a privileged operation in its own right.

A newly created domain will contain no buses at the outset. What it does have is a file called control; a bus can be created by opening that file and issuing a KDBUS_CMD_BUS_MAKE ioctl() command. That bus will remain in existence as long as the file descriptor for the control file is held open. Only one bus may be created on any given control file descriptor, but the control file can be opened multiple times to create multiple buses. The control file can also be used to create custom endpoints for well-known services.

Each bus is represented by its own directory underneath the domain directory; endpoints are represented as files within the bus directory. Connecting to a bus is a matter of opening the kdbusfs file corresponding to the desired endpoint; for most clients, that will be the file simply called bus. Messages can then be sent and received with ioctl() commands on the resulting file descriptor.
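Pulling the pieces together, the life cycle of a bus under the new interface looks roughly like this (C-style pseudocode; error handling and structure arguments are elided, and the message-passing command names are assumptions rather than text from the patch set):

```
/* pseudocode: the kdbusfs flow described above */
mount("kdbusfs", "/sys/fs/kdbus", "kdbusfs", 0, NULL); /* new domain */

cfd = open("/sys/fs/kdbus/control", O_RDWR);
ioctl(cfd, KDBUS_CMD_BUS_MAKE, &bus_params);  /* bus lives while cfd is open */

bfd = open("/sys/fs/kdbus/<bus>/bus", O_RDWR); /* connect to the endpoint */
ioctl(bfd, KDBUS_CMD_HELLO, &hello_params);    /* per-connection setup */
ioctl(bfd, KDBUS_CMD_MSG_SEND, &msg);          /* then exchange messages */
```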

As can be seen, the device abstraction is gone, but the interface is still somewhat device-like in that it is heavily based on ioctl() calls. There has been a small amount of discussion on whether it might make more sense to just use operations like read() and write() to interact with kdbus, but there appears to be little interest in making (or asking for) that sort of change.

Metadata issues

A significant change has been made in the area of security. In version 1, the recipient of a message could specify a set of credential information that had to accompany the message. That information could include anything from the process ID through to capabilities, command-line information, audit information, security IDs, and more. Some reviewers (Andy Lutomirski in particular) complained that this approach could lead to information leaks and, perhaps, worse security problems; instead, they said, the sender of a message should be in control of the metadata that goes along with the message.

The updated patch set responds to that request with a protocol change. When a client connects to the bus, it runs the KDBUS_CMD_HELLO ioctl() command to set up a number of parameters for the connection; one of those parameters is now a bitmask describing which metadata can be sent with messages. The creator of the bus can still specify a minimum set of metadata to go with messages, though; in that case, a client refusing to send that metadata will not be allowed to connect to the bus.

There is still some disagreement over which metadata should be sent at all, optional or not. Andy disagrees with providing command-line (and related) information, on the basis that it can be set by the process involved and thus carries no trustworthy information. This metadata is evidently used mostly for debugging purposes; Andy suggests that it should just be grabbed out of /proc instead. He is also opposed to the sending of capability information, noting that capabilities are generally problematic in Linux and their use should not be encouraged.

One other interesting bit of metadata that can be attached to messages is the time that the sending process started executing. It is there to prevent race conditions associated with the reuse of process IDs, which can happen quickly on a busy system. Andy dislikes that approach, noting that it will not work well with either namespaces or checkpointing. He prefers instead his own "highpid" solution. This patch adds a second, 64-bit, unique number associated with each process; interested programs can then detect process ID reuse by seeing if that number changes. Eric Biederman disagreed with that approach, saying "What we need are not race free pids, but a file descriptor based process management api." Andy was not opposed to that idea, but he would like to see something simple that can be of use to kdbus now.
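The start-time datum is already visible in /proc, which is where Andy would have debugging tools get it. A process that wants to guard against PID reuse without kernel help can capture it there; in this sketch the helper name is made up, and the field layout follows proc(5):

```python
import os

def start_time_ticks(pid):
    # /proc/<pid>/stat puts the start time (in clock ticks since
    # boot) in field 22.  The comm field may itself contain spaces
    # and parentheses, so split after the *last* ')' before counting.
    with open("/proc/%d/stat" % pid) as f:
        stat = f.read()
    fields = stat[stat.rindex(")") + 2:].split()
    return int(fields[19])          # field 22, counting from 1

# If a stored (pid, start_time) pair no longer matches, the PID has
# been recycled for a different process.
pid = os.getpid()
t = start_time_ticks(pid)
```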

Andy had a number of other comments, including pointing out a couple of places where, he contended, he could use kdbus to gain root access on any system where it was installed. Even so, he seems happy with the direction the code is going, saying "And thanks for addressing most of the issues. The code is starting to look much better to me."

Toward the mainline

In theory, resolving the remaining issues should be relatively straightforward, though it is not hard to see the "highpid" idea running into resistance at some point. But the number of reviewers for the second kdbus posting has been relatively small, perhaps as a result of the holidays in the US. The addition of a significant core API of this type requires more attention than kdbus has gotten so far. That suggests that there may still be significant issues that have not yet been raised by reviewers. Kdbus is getting closer to mainline inclusion, but it may well take a few more development cycles to get to a point where most developers are happy with it.

Comments (1 posted)

Some 3.18 development statistics

By Jonathan Corbet
November 25, 2014
As of the 3.18-rc6 release, 11,186 non-merge changesets have been pulled into the mainline repository for the 3.18 development cycle. That makes this release about 1,000 changesets smaller than its immediate predecessors, but still not a slow development cycle by any means. Since this cycle is getting close to its end, it's a good time to look at where the code that came into the mainline during this cycle came from. (For those who are curious about what changes were merged, see 3.18 Merge window, part 1, part 2, and part 3).

1,428 developers have contributed code to the 3.18 release — about normal for the last year or so. The most active developers were:

Most active 3.18 developers (by changesets)

  H Hartley Sweeten        237   2.1%
  Mauro Carvalho Chehab    179   1.6%
  Ian Abbott               162   1.4%
  Geert Uytterhoeven       121   1.1%
  Hans Verkuil             100   0.9%
  Ville Syrjälä             98   0.9%
  Navin Patidar             98   0.9%
  Sujith Manoharan          83   0.7%
  Johan Hedberg             82   0.7%
  Eric Dumazet              77   0.7%
  Lars-Peter Clausen        75   0.7%
  Antti Palosaari           72   0.6%
  Fabian Frederick          71   0.6%
  Daniel Vetter             70   0.6%
  Florian Fainelli          70   0.6%
  Felipe Balbi              70   0.6%
  Benjamin Romer            68   0.6%
  Laurent Pinchart          64   0.6%
  Andy Shevchenko           62   0.6%
  Malcolm Priestley         61   0.5%

Most active 3.18 developers (by changed lines)

  Larry Finger           74831  10.2%
  Greg Kroah-Hartman     73298  10.0%
  Hans Verkuil           22266   3.0%
  Alexander Duyck        16617   2.3%
  Greg Ungerer           11981   1.6%
  Linus Walleij          10628   1.5%
  John L. Hammond        10269   1.4%
  Navin Patidar           8148   1.1%
  Philipp Zabel           7149   1.0%
  Martin Peres            6890   0.9%
  Mark Einon              6771   0.9%
  Mauro Carvalho Chehab   6520   0.9%
  Ian Munsie              5773   0.8%
  H Hartley Sweeten       5134   0.7%
  Alexei Starovoitov      4505   0.6%
  Yan, Zheng              4485   0.6%
  Antti Palosaari         4181   0.6%
  Roy Spliet              3785   0.5%
  Christoph Hellwig       3765   0.5%
  Juergen Gross           3745   0.5%

As is usually the case, H. Hartley Sweeten tops the by-changesets list with the epic task of getting the Comedi drivers into shape in the staging tree. Mauro Carvalho Chehab, the Video4Linux2 maintainer, did a lot of cleanup work in that tree as well during this cycle, while Ian Abbott's changes were, once again, applied to the Comedi drivers. Geert Uytterhoeven did a lot of work in the ARM and driver trees, while Hans Verkuil also made a lot of improvements to the core Video4Linux2 subsystem.

On the "lines changed" side, Larry Finger removed the r8192ee driver from the staging tree, while Greg Kroah-Hartman removed two other drivers from staging. Alexander Duyck added the "fm10k" driver for Intel FM10000 Ethernet switch host interfaces, and Greg Ungerer removed a bunch of old m68k code.

Some 200 companies (that we were able to identify) supported development on the code merged for 3.18. The most active of those were:

Most active 3.18 employers (by changesets)

  (None)                            1244  11.0%
  Intel                             1238  10.9%
  Red Hat                            863   7.6%
  (Unknown)                          828   7.3%
  Samsung                            523   4.6%
  Linaro                             370   3.3%
  IBM                                340   3.0%
  SUSE                               326   2.9%
  Google                             324   2.9%
  (Consultant)                       321   2.8%
  Freescale                          238   2.1%
  FOSS Outreach Program for Women    238   2.1%
  Vision Engraving Systems           237   2.1%
  Texas Instruments                  199   1.8%
  Renesas Electronics                179   1.6%
  MEV Limited                        162   1.4%
  Free Electrons                     155   1.4%
  Qualcomm                           141   1.2%
  Oracle                             135   1.2%
  ARM                                114   1.0%

Most active 3.18 employers (by lines changed)

  (None)                 185247  25.3%
  Linux Foundation        73354  10.0%
  Intel                   73168  10.0%
  (Unknown)               28460   3.9%
  Cisco                   27939   3.8%
  Red Hat                 27335   3.7%
  Linaro                  23586   3.2%
  Samsung                 19228   2.6%
  IBM                     18194   2.5%
  SUSE                    16736   2.3%
  Google                  14110   1.9%
  (Consultant)            12455   1.7%
  Accelerated Concepts    11986   1.6%
  Texas Instruments       11305   1.5%
  C-DAC                    8400   1.1%
  Pengutronix              8232   1.1%
  Freescale                7265   1.0%
  (Academia)               7076   1.0%
  Qualcomm                 5398   0.7%
  Code Aurora Forum        5377   0.7%

(Note that the above table has been updated; the curious can see the original version published on this page here).

As is often the case, there are few surprises here. The level of contributions from developers working on their own time remains steady at about 11%, a level it has maintained since the 3.13 kernel. So it might be safe to say that, for now, the decline in volunteer contributions appears to have leveled out.

How important are volunteer contributions to the Linux kernel? Many kernel developers started that way, so it is natural to think that a decline in volunteers will lead, eventually, to a shortage of kernel developers overall. As it happens, the period starting with the 3.13 release (roughly calendar year 2014) saw first-time contributions from 1,521 developers. Looking at who those developers worked for yields these results:

  Employer     Developers
  (Unknown)           651
  (None)              137
  Intel               115
  Google               37
  Samsung              35
  Huawei               33
  IBM                  32
  Red Hat              25
  Freescale            21
  Linaro               17

All told, 733 first-time developers were identifiably working for some company or other when their first patch was accepted into the mainline. A large portion of the unknowns above are probably volunteers, so one can guess that a roughly equal number of first-time developers were working on their own time. So roughly half of our new developers in the last year were volunteers.

The picture changes a little, though, when one narrows things down to first-time developers who contributed to more than one release. When one looks at developers who contributed to three out of the last five releases, the picture is:

  Employer                      Developers
  (Unknown)                             48
  Intel                                 24
  (None)                                21
  Huawei                                10
  IBM                                    7
  Samsung                                6
  Outreach Program for Women             6
  ARM                                    4
  Linaro                                 4
  Red Hat                                3
  Broadcom                               3

Overall, 126 new developers contributing to at least three releases in the last year worked for companies at the time of their first contribution — rather more than the number of volunteers. So it seems fair to say that a lot of our new developers are getting their start within an employment situation, rather than contributing as volunteers then being hired.

Where are these new developers working in the kernel? If one looks at all new developers, the staging tree comes out on top; 301 developers started there, compared to 122 in drivers/net, the second-most popular starting place. But the most popular place for a three-version developer to make their first contribution is in drivers/net; 25 new developers contributed there, while 20 contributed within the staging tree. So, while staging is arguably helping to bring in new developers, a lot of the developers who start there appear to not stay in the kernel community.

Overall, the pattern looks reasonably healthy. There are multiple paths for developers looking to join our community, and it is possible for new developers to work almost anywhere in the kernel tree. That would help to explain how the kernel development community continues to grow over time. For now, there doesn't appear to be any reason to believe that we will not continue to crank out kernel releases at a high rate indefinitely.

Comments (21 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 3.18-rc7
Linus Torvalds Linux 3.18-rc6
Greg KH Linux 3.17.4
Luis Henriques Linux 3.16.7-ckt2
Greg KH Linux 3.14.25
Steven Rostedt 3.14.25-rt22
Jiri Slaby Linux 3.12.33
Steven Rostedt 3.12.33-rt47
Greg KH Linux 3.10.61
Steven Rostedt 3.10.61-rt65
Zefan Li Linux 3.4.105
Steven Rostedt 3.2.64-rt94

Architecture-specific

Build system

Core kernel code

Development tools

Device drivers

Sebastian Hesselbarth phy: add the Berlin USB PHY driver
Jarkko Sakkinen TPM 2.0 support

Device driver infrastructure

Filesystems and block I/O

Janitorial

Paolo Bonzini KVM: ia64: remove

Memory management

Networking

Security-related

Virtualization and containers

Miscellaneous

Page editor: Jonathan Corbet

Distributions

Term limits and the Debian Technical Committee

By Nathan Willis
December 3, 2014

Debian's Technical Committee (often abbreviated as TC or, on Debian lists, ctte) has been in the news quite a bit lately. The TC acts as Debian's final arbitrator in disagreements between project members, and 2014 has seen more than the average number of such disagreements. In addition, some of the debates within the Debian community as a whole have evidently proved to be enough of a strain that several long-serving TC members have resigned from the committee in recent months. Naturally, high-profile technical disputes and resignations from the TC cause attention to turn to the makeup and processes of the TC itself. On December 1, former Debian Project Leader (DPL) Stefano Zacchiroli proposed a major change to how the TC operates: implementing limited terms for TC members.

An old idea

The idea of TC term limits was raised most recently in May, when Anthony Towns suggested adopting some set of rules that would change TC membership from its current de-facto "for life" appointment to something finite and well-defined. Towns speculated on a variety of possibilities without promoting any one of them.

Several other project members (including some on the TC) weighed in during the ensuing discussion, and the general consensus seemed to be that there were merits to the idea. For one, a never-changing TC could (theoretically) turn into a cabal or simply get trapped in "groupthink" caused by having a limited set of voices. For another, as Russ Allbery noted, the perpetual nature of a TC appointment may be causing appointments to skew toward cautious and conservative choices. In contrast, he said, "I think our DPL selection process works extremely well and benefits greatly from having a yearly election."

But the final major reason for considering time-limited terms is that—as pointed out by Allbery, Towns, and others—the TC's lack of a mechanism for stepping down can make a departure difficult. Towns said "it would be nice if there was a way out of the ctte that had more of a feeling of winning / leaving at the top of the game", while Allbery sought to find a way to give TC members "a clean break point where they can stop without any perceived implications of resigning, so they can either decide they've done enough or they can come back refreshed and with fresh eyes." On Allbery's final point, it is indeed easy to read comments and discussion threads about several of the recent TC resignations and find people speculating on the reasons behind and ramifications of each individual departure.

Nevertheless, the discussion started by Towns about term limits ended without a concrete plan of action. There were concerns about how to implement term limits without making arbitrary decisions about what constitutes "enough" time, as well as concerns about how to implement any term-limiting mechanism without causing undue turmoil—by (for example) immediately losing half of the TC's membership.

A new proposal

Perhaps the turmoil within Debian and in the TC itself over the past few months served to make the prospect of shaking up the TC membership rules seem less intimidating. Or perhaps with several seats opening up on the TC due to resignations, it was simply a good time to consider other changes as well. Either way, in mid-November, Zacchiroli sent out a message proposing a change to section 6 of the Debian Constitution to implement TC term limits. His proposal is a General Resolution (GR), which would require a vote by the entire project.

Zacchiroli's initial draft underwent multiple revisions during the last half of November; on December 1, he made it a formal proposal. The current version of the proposal aims to set the maximum term for TC members at around four years, but with some flexibility built in to account for resignations and other departures. The goal is to replace two TC members each calendar year, so that all seats on the committee are rotated through every four years. In addition, former members must stay off the TC for at least one year before they can be re-appointed.

The specifics of the wording are worth looking at as well. Each year on January 1, if two senior TC members have served for more than 3.5 years, those two will have their memberships marked for expiration—in other words, their terms will end on the coming December 31. Because new appointments to the TC can happen at any time, there is some variation in how long a "full" term would last; as Towns observed, "the max age is 5.5 years (appointment on Jul 2nd, hitting 4.49 years on Jan 1st, then expiring at 5.49 years next Jan 1st)". Nevertheless, most on the list seemed to find the issues of regular rollover and requiring a one-year "mandatory vacation" (as current DPL Lucas Nussbaum called it) to be the most salient factors: precisely how long anyone sits on the TC is an implementation detail.

Dropped along the way were provisions to prevent the term-expiration mechanism from leaving the TC with less than four members (out of the total of eight seats), various suggestions to change the number of TC seats, and a suggestion that the remaining TC members decide whether or not to re-appoint a member whose term is expiring. Objections to these ideas varied, although the ones that seemed simply too different from Zacchiroli's core proposal (such as changing the size of the TC) were usually dropped on the grounds that proponents should raise them as separate GRs.

Similarly, Clint Adams proposed eliminating the TC altogether. The idea does not seem to have widespread support, although Allbery commented that he had considered making a similar proposal himself in the past—only to decide that whatever dispute-resolution method replaced it would not be any better.

That said, there was considerably more discussion of how the rules could be adjusted to place an upper limit on the amount of churn that the TC undergoes each year. This year, for example, three committee members are stepping down; if two additional seats were to expire automatically, then more than half of the TC would be replaced in a single year—an outcome few consider ideal for the health and stability of the project.

Some of the early discussions about the proposal included specifying a transition mechanism to let the current longstanding TC members rotate out gradually rather than all at once. Ultimately, some modifications to the two-senior-seats-automatically-expire plan arose that would throttle the turnover rate, and have the beneficial side effect of making the addition of a transition mechanism into the Constitution unnecessary.

Three alternatives (summarized by Nussbaum) to the original two-seats-expire-per-year plan were proposed. The first, which is known as the 2 − R plan, would have the two seats automatically expire if there are no other departures from the TC, but would subtract from those automatic expirations the number of resignations, retirements, or removals ("R") that happened during the past year—stopping at zero, of course.

The second alternative is a slight adjustment of the first, and is known as the 2 − R′ plan. It would subtract from 2 only the number of resignations or departures of people who would otherwise be candidates for seat expiration (that is, resignations by members with 3.5 years experience or more). In short, this plan would ensure that the resignation of junior TC members would not cause the most senior members to remain on the committee an additional year.

The third alternative, known as 2 − S, is a subtle modification of the 2 − R′ plan. It would subtract from 2 only the number of resignations in the past year by members whose terms would definitely have expired at the end of the year otherwise. That is, under the 2 − S plan, only a resignation by one of the two most senior seats can decrease the number of automatic term expirations. Under the 2 − R′ plan, it would be possible for the third-most-senior member to resign and cause a reduction in the number of automatic seat expirations, if at least three members had been on the TC for longer than 3.5 years.

Such a condition cannot arise when there have been several years of two-seat rotations in a row, of course. But it happens to be the case now, since so many of the existing members have been on the committee for a considerable length of time. And more importantly, as Raphaël Hertzog pointed out, it can happen again if there are several resignations (followed by several appointments) in the same year.
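The differences between the plans come down to simple arithmetic. A sketch of the seat-expiration count for the original and 2 − R plans (the function and argument names are invented for illustration):

```python
def seats_to_expire(senior, resignations=0, plan="2-R"):
    """How many TC seats expire on January 1.

    senior:       members who have served more than 3.5 years
    resignations: departures during the past year (the 2-R' and 2-S
                  variants would count only a subset of these)
    """
    base = min(2, senior)                  # at most two seats rotate
    if plan == "original":
        return base                        # resignations don't count
    return max(0, base - resignations)     # 2-R: floored at zero

# 2014 saw three resignations, so under 2-R no further seats
# expire in January 2015; under the original plan, two would:
print(seats_to_expire(senior=5, resignations=3))    # -> 0
print(seats_to_expire(senior=5, plan="original"))   # -> 2
```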

If one happens to find the distinctions between the various expiration formulae less than perfectly clear, fear not. Nussbaum outlined the practical effects of the main plans (the original, 2-seat plan and 2 − R). Under the original plan, Bdale Garbee and Steve Langasek's terms would expire on January 1, 2015. Subsequently:

2016-01-01: Andi and Don expire, 2 replacements
2017-01-01: Keith is the oldest member with 3.09y, nobody expires
2018-01-01: Keith is the oldest member with 4.09y, nobody expires
2019-01-01: Keith membership expires, none of the other does
2020-01-01: we have 5 members over the 4.5y limit, two expire
2021-01-01: we have 3+2=5 members over the 4.5y limit, two expire

While under 2 − R, the resignations already announced in 2014 would mean no additional seats expire in January 2015, after which:

2016-01-01: Bdale and Steve expire, 2 replacements
2017-01-01: Andi and Don expire, 2 replacements
2018-01-01: Keith is the oldest member with 4.09y, nobody expires
2019-01-01: Keith membership expires, none of the other does
2020-01-01: we have 3 members over the 4.5y limit, two expire
2021-01-01: we have 1+2=3 members over the 4.5y limit, two expire

The differences in the long term are, to be sure, subtle enough that most assessments of which plan is better will boil down to personal preference. Ultimately, Nussbaum added the 2 − R option as an amendment to Zacchiroli's proposal.

What's next

Zacchiroli's proposal quickly garnered enough seconds to move it forward for a vote. As per project procedure, at least two weeks of discussion will follow, after which any of the proposal's sponsors may call for a vote.

There seems to be little resistance to the idea of rotating TC members more frequently—if nothing else, to prevent burnout among qualified project members. But the term-limit idea would constitute a major change in how Debian functions, which is a notion that makes many people uneasy to one degree or another.

On the other hand, the main objection to too much rotation within the TC is the hard-to-define notion that it would weaken the project. Towns, for his part, contended that the idea of "newbies" on the TC causing weakness to Debian is "at the far end of hypothetical". There is, the argument goes, not a shortage of project members who would make positive contributions to the TC, and new committee members will still be selected by the sitting TC with the approval of the DPL. So fears about a TC composed of unqualified people apt to make poor, reckless decisions are unfounded.

The discussion process is taking place on the debian-vote mailing list. Whenever the final vote itself takes place, the outcome will be announced there as well. Although the exact form of the process has yet to be decided, the way things stand today it seems likely that Debian will soon have a formal process in place to regularly rotate members in and out of its top decision-making body.

Comments (6 posted)

Brief items

Distribution quotes of the week

In other words, the way I choose to look at this GR is that the project as a whole just voted to take away the sticks that we were using to beat each other with.
-- Russ Allbery (Thanks to Paul Wise)

This has long been the case. However, if it explains _why_, I forget, for the same reason that this never works. (Yeah yeah whatever, I just want to install my system now and keep using "godmode" as my root password just like I always have so I don't forget it.)
-- Matthew Miller

tl;dr it's a mess, sorry about that. Stable output naming isn't something that any of our desktop environments care about, afaik, so it's not something I'd ever see as a regression. In a sense, noticing this level of implementation detail is the price you pay for choosing not to run something that gets it right for you. [2]

[2] - And Linux, as we know, is all about choice.

-- Adam Jackson

On 2 December 2014 at 09:15, <jfm512-at-free.fr> wrote:

> 2) It has an uninspiring installer.

Ok I need more information on what this means in comparison to what? I have installed pretty much every major Linux distribution and I have never found any one of them 'inspiring'. Even the Ubuntu one is more of "well at least its not the base Debian installer" versus "OMG I am alive and free because of this installer."

-- Stephen John Smoogen

Comments (57 posted)

The "Devuan" Debian fork

A group of developers has announced the existence of a fork of the Debian distribution called "Devuan." "First mid-term goal is to produce a reliable and minimalist base distribution that stays away from the homogenization and lock-in promoted by systemd. This distribution should be ready about the time Debian Jessie is ready and will constitute a seamless alternative to its dist-upgrade. As of today, the only ones resisting are the Slackware and Gentoo distributions, but we need to provide a solid ground also for apt-get based distributions. All project on the downstream side of Debian that are concerned by the systemd avalanche are welcome to keep an eye on our initiative and evaluate it as an alternative base."

Comments (440 posted)

Distribution News

Debian GNU/Linux

BSP in Switzerland (St-Cergue)

There will be a Debian Bug Squashing Party from January 30-February 1 in St-Cergue, Switzerland. "We invite Debian Developers and Maintainers, regular contributors as well as new potential contributors to join this event. Regular contributors will be present to help newcomers fix their first bugs or scratch their itches in Debian."

Full Story (comments: none)

Fedora

Fedora Council election results

The election results for the first Fedora Council election are available. Congratulations go to Rex Dieter and Langdon White, the newly elected representatives.

Full Story (comments: none)

Fedora 21 betas for ARM and POWER

Fedora 21 Betas for ARM aarch64 and POWER architectures are available for testing.

Comments (none posted)

openSUSE

Announcing openSUSE board election 2014/2015

The openSUSE board has three seats open for election and the election schedule has been announced. The initial phase, which is open now, allows openSUSE contributors who are not yet members to become members so that they may vote or stand for a seat. Nominations are also open.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

They make Mageia: David Walser (Mageia Blog)

The Mageia blog talks with David Walser about his work in Mageia. "I stumbled into my current role at Mageia completely by accident. I had upgraded my sister’s laptop from Mandriva 2010.2 to Mageia 1, and noticed one Mandriva package left on the system because it had a newer release tag than the Mageia package. The reason was because Mandriva had done a security update for the package, but when it was imported into Mageia, the release version was imported rather than the updates version. I was concerned about other security updates that might have been missed, and began investigating this. I started filing bugs for missing security updates and helping the QA team test updates that got packaged, to help the updates get released more expeditiously."

Comments (none posted)

Page editor: Rebecca Sobol

Development

The Rocket containerization system

By Nathan Willis
December 3, 2014

The field of software-container options for Linux expanded again this week with the launch of the Rocket project by the team behind CoreOS. Rocket is a direct challenger to the popular Docker containerization system. The decision to split from Docker was, evidently, driven by CoreOS developers' dissatisfaction with several recent moves within the Docker project. Primarily, the CoreOS team's concern is Docker's expansion from a standalone container format to a larger platform that includes tools for additional parts of the software-deployment puzzle.

There is no shortage of other Linux containerization projects apart from Docker already, of course—LXC, OpenVZ, lmctfy, and Sandstorm, to name a few. But CoreOS was historically a big proponent of (and contributor to) Docker.

The idea behind CoreOS was to build a lightweight and easy-to-administer server operating system, on which Docker containers can be used to deploy and manage all user applications. In fact, CoreOS strives to be downright minimalist in comparison to standard Linux distributions. The project maintains etcd to synchronize system configuration across a set of machines and fleet to perform system initialization across a cluster, but even that set of tools is austere compared to the offerings of some cloud-computing providers.

Launch

On December 1, the CoreOS team posted an announcement on its blog, introducing Rocket and explaining the rationale behind it. Chief among its stated justifications for the new project was that Docker had begun to grow from its initial concept as "a simple component, a composable unit" into a larger and more complex deployment framework:

Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server.

The post also highlighted the fact that, early on in its history, the Docker project had published a manifesto that argued in favor of simple container design—and that the manifesto has since been removed.

The announcement then sets out the principles behind Rocket. The various tools will be independent "composable" units, security primitives "for strong trust, image auditing and application identity" will be available, and container images will be easy to discover and retrieve through any available protocol. In addition, the project emphasizes that the Rocket container format will be "well-specified and developed by a community." To that end, it has published the first draft of the App Container Image (ACI) specification on GitHub.

As for Rocket itself, it was launched at version 0.1.0. There is a command-line tool (rkt) for running an ACI image, as well as a draft specification describing the runtime environment and facilities needed to support an ACI container, and the beginnings of a protocol for finding and downloading an ACI image.

Rocket is, for the moment, certainly a lightweight framework in keeping with what one might expect from CoreOS. Running a containerized application with Rocket involves three "stages."

Stage zero is the container-preparation step; the rkt binary generates a manifest for the container, creates the initial filesystem required, then fetches the necessary ACI image file and unpacks it into the new container's directory. Stage one involves setting up the various cgroups, namespaces, and mount points required by the container, then launching the container's systemd process. Stage two consists of actually launching the application inside its container.

What's up with Docker

The Docker project, understandably, did not view the announcement of Rocket in quite the same light as CoreOS. In a December 1 post on the Docker blog, Ben Golub defends the decision to expand the Docker tool set beyond its initial single-container roots:

While Docker continues to define a single container format, it is clear that our users and the vast majority of contributors and vendors want Docker to enable distributed applications consisting of multiple, discrete containers running across multiple hosts.

We think it would be a shame if the clean, open interfaces, anywhere portability, and robust set of ecosystem tools that exist for single Docker container applications were lost when we went to a world of multiple container, distributed applications. As a result, we have been promoting the concept of a more comprehensive set of orchestration services that cover functionality like networking, scheduling, composition, clustering, etc.

But the existence of such higher-level orchestration tools and multi-container applications, he said, does not prevent anyone from using the Docker single-container format. He does acknowledge that "a small number of vendors disagree with this direction", some of whom have "technical or philosophical differences, which appears to be the case with the recent announcement regarding Rocket."

The post concludes by noting that "this is all part of a healthy, open source process" and by welcoming competition. It also, however, notes the "questionable rhetoric and timing of the Rocket announcement" and says that a follow-up post addressing some of the technical arguments from the Rocket project is still to come.

Interestingly enough, the CoreOS announcement of Rocket also goes out of its way to reassure users that CoreOS will continue to support Docker containers in the future. Less clear is exactly what that support will look like; the wording says to "expect Docker to continue to be fully integrated with CoreOS as it is today", which might suggest that CoreOS is not interested in supporting Docker's newer orchestration tools.

In any case, at present, Rocket and its corresponding ACI specification make use of the same underlying Linux facilities employed by Docker, LXC containers, and most of the other offerings. One might well ask whether or not a "community specification" is strictly necessary as an independent entity. But as containerization continues to make its way into the enterprise market, it is hardly surprising to see more than one project vie for the privilege of defining what a standard container should look like.

Comments (15 posted)

Moving some of Python to GitHub?

By Jake Edge
December 3, 2014

Over the years, Python's source repositories have moved a number of times, from CVS on SourceForge to Subversion at Python.org and, eventually, to Mercurial (aka hg), still on Python Software Foundation (PSF) infrastructure. But the new Python.org site code lives at GitHub (thus in a Git repository) and it looks like more pieces of Python's source may be moving in that direction. While some are concerned about moving away from a Python-based DVCS (i.e. Mercurial) into a closed-source web service, there is a strong pragmatic streak in the Python community that may be winning out. For good or ill, GitHub has won the popularity battle over any of the other alternatives, so new contributors are more likely to be familiar with that service, which makes it attractive for Python.

The discussion got started when Nick Coghlan posted some thoughts on his Python Enhancement Proposal (PEP 474) from July. It suggested creating a "forge" for hosting some Python documentation repositories using Kallithea—a Python-based web application for hosting Git and Mercurial repositories—once it has a stable release. More recently, though, Coghlan realized that there may not be a need to require hosting those types of repositories on PSF infrastructure as the PEP specified; if that is the case, "then the obvious candidate for Mercurial hosting that supports online editing + pull requests is the PSF's BitBucket account".

But others looked at the same set of facts a bit differently. Donald Stufft compared the workflow of the current patch-based system to one that uses GitHub-like pull requests (PRs). Both for contributors and maintainers (i.e. Python core developers), the time required to handle a simple patch was something like 10-15 minutes with the existing system, he said, while a PR-based system would reduce that to less than a minute—quite possibly much less.

Python benevolent dictator for life (BDFL) Guido van Rossum agreed, noting that GitHub has easily won the popularity race. He was also skeptical that the PSF should be running servers:

[...] We should move to GitHub, because it is the easiest to use and most contributors already know it (or are eager to learn thee). Honestly, the time for core devs (or some other elite corps of dedicated volunteers) to sysadmin their own machines (virtual or not) is over. We've never been particularly good at this, and I don't see us getting better or more efficient.

Moving the CPython code and docs is not a priority, but everything else (PEPs, HOWTOs etc.) can be moved easily and I am in favor of moving to GitHub. For PEPs I've noticed that for most PEPs these days (unless the primary author is a core dev) the author sets up a git repo first anyway, and the friction of moving between such repos and the "official" repo is a pain.

GitHub, however, only supports Git, so those who are currently using Mercurial and want to continue would be out of luck. Bitbucket supports both, though, so in Coghlan's opinion, it would make a better interim solution. But Stufft is concerned that taking the trouble to move, but choosing the less popular site, makes little sense.

On the other hand, some are worried about lock-in with GitHub (and other closed-source solutions, including Bitbucket). As Coghlan put it:

And this is why the "you can still get your data out" argument doesn't make any sense - if you aren't planning to rely on the proprietary APIs, GitHub is just a fairly mundane git hosting service, not significantly different in capabilities from Gitorious, or RhodeCode, or BitBucket, or GitLab, etc. So you may as well go with one of the open source ones, and be *completely* free from vendor lockin.

The feature set that GitHub provides is what will keep the repositories there, though, Stufft said: "You probably won’t want to get your data out because Github’s features are compelling enough that you don’t want to lose them". Furthermore, he looked at the Python-affiliated repositories on the two sites and found that there were half a dozen active repositories on GitHub and three largely inactive repositories on Bitbucket.

The discussion got a bit testy at times, with Coghlan complaining that choosing GitHub based on its popularity was anti-community: "I'm very, very disappointed to see folks so willing to abandon fellow community members for the sake of following the crowd". He went on to suggest that perhaps Ruby or JavaScript would be a better choice for a language to work on since they get better press. Van Rossum called that "a really low blow" and pointed out: "*A DVCS repo is a social network, so it matters in a functional way what everyone else is using.*" He continued:

So I give you that if you want a quick move into the modern world, while keeping the older generation of core devs happy (not counting myself :-), BitBucket has the lowest cost of entry. But I strongly believe that if we want to do the right thing for the long term, we should switch to GitHub. I promise you that once the pain of the switch is over you will feel much better about it. I am also convinced that we'll get more contributions this way.

Eventually, Stufft proposed another PEP (481) that would migrate three documentation repositories (the Development Guide, the development system in a box (devinabox), and the PEPs) to GitHub. Unlike the situation with many PEPs, Van Rossum stated that he didn't feel it was his job to accept or reject the PEP, though he made a strong case for moving to GitHub; he believes that most of the community is probably already using GitHub in one way or another, lock-in doesn't really concern him since the most important data is already stored in multiple places, and, in his mind, Python does not have an "additional hidden agenda of bringing freedom to all software".

It turns out that Brett Cannon is the contact for two of the three repositories mentioned in the PEP (devguide and devinabox), so Van Rossum is leaving the decision to Cannon for those two. Coghlan is the largest contributor to the PEPs repository, so the decision on that will be left up to him. He is currently exploring the possibility of using RhodeCode Enterprise (a Python-based, hosted solution with open code, but one that has licensing issues that Coghlan did acknowledge). For his part, Cannon noted his preference for open, Mercurial-and-Python-based solutions, but he is willing to consider other options. There may be a discussion at the Python language summit (which precedes PyCon), but, if so, Van Rossum said he probably won't take part—it's clear he has tired of the discussion at this point.

There are good arguments on both sides of the issue, but it is a little sad to see Python potentially moving away from the DVCS written in the language and into the more popular (and feature-rich, seemingly) DVCS and hosting site (Git and GitHub). While Van Rossum does not plan to propose moving the CPython (main Python language code) repository to GitHub anytime soon, the clear implication is that he would not be surprised if that happens eventually. While it might make pragmatic sense on a number of different levels, and may have all the benefits that have been mentioned, it would certainly be something of a blow to the open-source Python DVCS communities. With luck, those communities will find the time to fill the functionality gaps, but the popularity gap will be much harder to overcome.

Comments (66 posted)

Kawa — fast scripting on the Java platform

December 3, 2014

This article was contributed by Per Bothner

Kawa is a general-purpose Scheme-based programming language that runs on the Java platform. It aims to combine the strengths of dynamic scripting languages (less boilerplate, fast and easy start-up, a read-eval-print loop or REPL, no required compilation step) with the strengths of traditional compiled languages (fast execution, static error detection, modularity, zero-overhead Java platform integration). I created Kawa in 1996, and have maintained it since. The new 2.0 release has many improvements.

Projects and businesses using Kawa include: MIT App Inventor (formerly Google App Inventor), which uses Kawa to translate its visual blocks language; HypeDyn, which is a hypertext fiction authoring tool; and Nü Echo, which uses Kawa for speech-application development tools. Kawa is flexible: you can run source code on the fly, type it into a REPL, or compile it to .jar files. You can write portably, ignoring anything Java-specific, or write high-performance, statically-typed Java-platform-centric code. You can use it to script mostly-Java applications, or you can write big (modular and efficient) Kawa programs. Kawa has many interesting features; below we'll look at a few of them.

Scheme and standards

Kawa is a dialect of Scheme, which has a long history in programming-language and compiler research, and in teaching. Kawa 2.0 supports almost all of R7RS (Revised7 Report on the Algorithmic Language Scheme), the 2013 language specification. (Full continuation support is the major missing feature, though there is a project working on that.) Scheme is part of the Lisp family of languages, which also includes Common Lisp, Dylan, and Clojure.

One of the strengths of Lisp-family languages (and why some consider them weird) is the uniform prefix syntax for calling a function or invoking an operator:

    (op arg1 arg2 ... argN)
If op is a function, this evaluates each of arg1 through argN, and then calls op with the resulting values. The same syntax is used for arithmetic:
    (+ 3 4 5)
and program structure:
    ; (This line is a comment - from semi-colon to end-of-line.)
    ; Define variable 'pi' to have the value 3.14.
    (define pi 3.14)

    ; Define single-argument function 'abs' with parameter 'x'.
    (define (abs x)
      ; Standard function 'negative?' returns true if argument is less than zero.
      (if (negative? x) (- x) x))

Having a simple regular core syntax makes it easier to write tools and to extend the language (including new control structures) via macros.
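As a small illustration of that extensibility (a sketch using only standard R7RS syntax-rules, nothing Kawa-specific), a new conditional form can be defined in a few lines:

    ; Define 'my-unless': evaluate the body only when the test is false.
    (define-syntax my-unless
      (syntax-rules ()
        ((my-unless test body ...)
         (if test #f (begin body ...)))))

    (my-unless (negative? 5) (display "non-negative")) ; prints "non-negative"

Because the macro operates on the same uniform prefix syntax as everything else, my-unless is indistinguishable from a built-in control structure at its call sites.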

Performance and type specifiers

Kawa gives run-time performance a high priority. The language facilitates compiler analysis and optimization. Flow analysis is helped by lexical scoping and the fact that a variable in a module (source file) can only be assigned to in that module. Most of the time the compiler knows which function is being called, so it can generate code to directly invoke a method. You can also associate a custom handler with a function for inlining, specialization, or type-checking.

To aid with type inference and type checking, Kawa supports optional type specifiers, which are specified using two colons. For example:

    (define (find-next-string strings ::vector[string] start ::int) ::string
      ...)

This defines find-next-string with two parameters: strings is a vector of strings, and start is a native (Java) int; the return type is a string.

Kawa also does a good job of catching errors at compile time.

The Kawa runtime doesn't need to do a lot of initialization, so start-up is much faster than other scripting languages based on the Java virtual machine (JVM). The compiler is fast enough that Kawa doesn't use an interpreter. Each expression you type into the REPL is compiled on-the-fly to JVM bytecodes, which (if executed frequently) may be compiled to native code by the just-in-time (JIT) compiler.

Function calls and object construction

If the operator op in an expression like (op arg1 ... argN) is a type, then the Kawa compiler looks for a suitable constructor or factory method.

    (javax.swing.JButton "click here")
    ; equivalent to Java's: new javax.swing.JButton("click here")

If the op is a list-like type with a default constructor and has an add method, then an instance is created, and all the arguments are added:

    (java.util.ArrayList 11 22 33)
    ; evaluates to: [11, 22, 33]

Kawa allows keyword arguments, which can be used in an object constructor form to set properties:

    (javax.swing.JButton text: "Do it!" tool-tip-text: "do it")

The Kawa manual has more details and examples. There are also examples for other frameworks, such as for Android and for JavaFX.

Other scripting languages also have convenient syntax for constructing nested object structures (for example Groovy builders), but they require custom builder helper objects and/or are much less efficient. Kawa's object constructor does most of the work at compile-time, generating code as good as hand-written Java, but less verbose. Also, you don't need to implement a custom builder if the defaults work, as they do for Swing GUI construction, for example.

Extended literals

Most programming languages provide convenient literal syntax only for certain built-in types, such as numbers, strings, and lists. Other types of values are encoded by constructing strings, which are susceptible to injection attacks, and which can't be checked at compile-time.

Kawa supports user-defined extended literal types, which have the form:

    &tag{text}
The tag is usually an identifier. The text can have escaped sub-expressions:
    &tag{some-text&[expression]more-text}
The expression is evaluated and combined with the literal text. The combination is often just string concatenation, but it can be anything, depending on the &tag. As an example, assume:
    (define base-uri "http://example.com/")
then the following concatenates base-uri with the literal "index.html" to create a new URI object:
    &URI{&[base-uri]index.html}

The above example gets de-sugared into:

    ($construct$:URI $<<$ base-uri $>>$ "index.html")

The $construct$:URI is a compound name (similar to an XML "qualified name") in the predefined $construct$ namespace. The $<<$ and $>>$ are just special symbols to mark an embedded sub-expression; by default they're bound to unique empty strings. So the user (or library writer) just needs to provide a definition of the compound name $construct$:URI as either a procedure or macro, resolved using standard Scheme name lookup rules; no special parser hooks or other magic is involved. This procedure or macro can do arbitrary processing, such as construct a complex data structure, or search a cache.

Here is a simple-minded definition of $construct$:URI as a function that just concatenates all the arguments (the literal text and the embedded sub-expressions) using the standard string-append function, and passes the result to the URI constructor function:

    (define ($construct$:URI . args)
      (URI (apply string-append args)))

The next section uses extended literals for something more interesting: shell-like process forms.

Shell scripting

Many scripting languages let you invoke system commands (processes). You can send data to the standard input, extract the resulting output, look at the return code, and sometimes even pipe commands together. However, this is rarely as easy as it is using the old Bourne shell; for example, command substitution is awkward. Kawa's solution is two-fold:

  1. A "process expression" (typically a function call) evaluates to a Java Process value, which provides access to a Unix-style (or Windows) process.
  2. In a context requiring a string, a Process is automatically converted to a string comprising the standard output from the process.

A trivial example:

   #|kawa:1|# (define p1 &`{date --utc})

("#|...|#" is the Scheme syntax for nestable comments; the default REPL prompt has that form to aid cutting and pasting code.)

The &`{...} syntax uses the extended-literal syntax from the previous section, where the backtick is the 'tag', so it is syntactic sugar for

    ($construct$:` "date --utc")
where $construct$:` might be defined as:
    (define ($construct$:` . args) (apply run-process args))
This in turn translates into an expression that creates a gnu.kawa.functions.LProcess object, as you can see if you write it:
    #|kawa:2|# (write p1)
    gnu.kawa.functions.LProcess@377dca04

An LProcess is automatically converted to a string (or bytevector) in any context that requires one, such as a typed definition:

    #|kawa:3|# (define s1 ::string p1) ; Define s1 as a string.
    #|kawa:4|# (write s1)
    "Wed Nov  1 01:18:21 UTC 2014\n"
    #|kawa:5|# (define b1 ::bytevector p1)
    (write b1)
    #u8(87 101 100 32 74 97 110 ... 52 10)

The display procedure prints the LProcess in "human" form, as an unquoted string:

    #|kawa:6|# (display p1)
    Wed Nov  1 01:18:21 UTC 2014

This is also the default REPL formatting:

    #|kawa:7|# &`{date --utc}
    Wed Nov  1 01:18:22 UTC 2014

We don't have room here to discuss redirection, here documents, pipelines, adjusting the environment, and flow control based on return codes, though I will briefly touch on argument processing and substitution. See the Kawa manual for details, and here for more on text vs. binary files.

Argument processing

To substitute the result of an expression into the argument list is simple using the &[] construct:

    (define my-printer (lookup-my-printer))
    &`{lpr -P &[my-printer] log.pdf}
Because a process is auto-convertible to a string, no special syntax is needed for command substitution:
    &`{echo The directory is: &[&`{pwd}]}
though you'd normally use this short-hand:
    &`{echo The directory is: &`{pwd}}

Splitting a command line into arguments follows shell quoting and escaping rules. Dealing with substitution depends on quotation context. The simplest case is when the value is a list (or vector) of strings, and the substitution is not inside quotes. In that case each list element becomes a separate argument:

    (define arg-list ["-P" "office" "foo.pdf" "bar.pdf"])
    &`{lpr &[arg-list]}

An interesting case is when the value is a string, and we're inside double quotes; in that case newline is an argument separator, but all other characters are literal. This is useful when you have one filename per line, and the filenames may contain spaces, as in the output from find:

    &`{ls -l "&`{find . -name '*.pdf'}"}
This solves a problem that is quite painful with traditional shells.

Using an external shell

The sh tag uses an explicit shell, like the C system() function:

    &sh{lpr -P office *.pdf}
This is equivalent to:
    &`{/bin/sh -c "lpr -P office *.pdf"}

Kawa adds quotation characters in order to pass the same argument values as when not using a shell (assuming no use of shell-specific features such as globbing or redirection). Getting shell quoting right is non-trivial (in single quotes all characters except single quote are literal, including backslash), and not something you want application programmers to have to deal with. Consider:

    (define authors ["O'Conner" "de Beauvoir"])
    &sh{list-books &[authors]}
The command passed to the shell is the following:
    list-books 'O'\''Conner' 'de Beauvoir'
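Kawa is not alone in automating this. As a point of comparison (Python here, not Kawa, and purely illustrative), the standard-library function shlex.quote applies the same single-quote discipline to the same argument list, differing only in how it escapes the embedded quote:

```python
import shlex

# Quote each argument the way a POSIX shell requires; the embedded
# single quote in "O'Conner" is the tricky case.
authors = ["O'Conner", "de Beauvoir"]
command = "list-books " + " ".join(shlex.quote(a) for a in authors)
print(command)  # list-books 'O'"'"'Conner' 'de Beauvoir'
```

Both approaches make the same guarantee: argument values round-trip through the shell unchanged, without the programmer hand-writing escapes.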

Having quoting be handled by the $construct$:sh implementation automatically eliminates common code injection problems. I intend to implement a &sql form that would avoid SQL injection the same way.

In closing

Some (biased) reasons why you might choose Kawa over other languages, concentrating on those that run on the Java platform: Java is verbose and requires a compilation step; Scala is complex, intimidating, and has a slow compiler; Jython, JRuby, Groovy, and Clojure are much slower in both execution and start-up. Kawa is not standing still: plans for the next half-year include a new argument-passing convention (which will enable ML-style patterns); full continuation support (which will help with coroutines and asynchronous event handling); and higher-level optimized sequence/iteration operations. I hope you will try out Kawa, and that you will find it productive and enjoyable.

Comments (18 posted)

Brief items

Quotes of the week(s)

the unix philosophy: do 90% of one thing, and barely do it adequately
Adam Jackson

The original plan when I cooked up Just Solve The Problem Month was that there was a set of problems out there that just needed a few hundred people to contribute time and effort, and some otherwise seemingly insurmountable problems could be solved or really, really beaten down into a usable form.

Aaaaand what instead happened was:

  • We announced and set up a Just Solve The Problem Wiki for the first problem.
  • A lot of people worked on the Wiki.
  • I got very busy.
  • People kept working on the Wiki.
  • It’s been two years.
Jason Scott, who was, for the record, ultimately pleased with the resulting File Formats Wiki.

Special Black Friday deal for software developers: switch to open source tools for 100% off.
Jeff Atwood

Comments (1 posted)

GNU LibreJS 6.0.6 released

Version 6.0.6 of the LibreJS add-on for Firefox and other Mozilla-based browsers has been released. LibreJS is a selective JavaScript blocker that disables non-free JavaScript programs. New in this version are support for private-browsing mode and enhanced support for mailto: links on a page where non-free JavaScript has been blocked.

Full Story (comments: none)

Firefox 34 released

Mozilla has released Firefox 34. This version changes the default search engine, includes the Firefox Hello real-time communication client, implements HTTP/2 (draft14) and ALPN, disables SSLv3, and more. See the release notes for details.

Comments (18 posted)

QEMU Advent Calendar 2014 unveiled

The QEMU project has launched its own "Advent calendar" site. Starting with December 1, each day another new virtual machine disk image appears and can be downloaded for exploration in QEMU. The December 1 offering was a Slackware image of truly historic proportions.

Comments (2 posted)

Rocket, a new container runtime from CoreOS

CoreOS has announced that it is moving away from Docker and toward "Rocket," a new container runtime that it has developed. "Unfortunately, a simple re-usable component is not how things are playing out. Docker now is building tools for launching cloud servers, systems for clustering, and a wide range of functions: building images, running images, uploading, downloading, and eventually even overlay networking, all compiled into one monolithic binary running primarily as root on your server. The standard container manifesto was removed. We should stop talking about Docker containers, and start talking about the Docker Platform. It is not becoming the simple composable building block we had envisioned."

Comments (9 posted)

Newsletters and articles

Development newsletters from the past two weeks

Comments (none posted)

Introducing AcousticBrainz

MusicBrainz, the not-for-profit project that maintains an assortment of "open content" music metadata databases, has announced a new effort named AcousticBrainz. AcousticBrainz is designed to be an open, crowd-sourced database cataloging various "audio features" of music, including "low-level spectral information such as tempo, and additional high level descriptors for genres, moods, keys, scales and much more." The data collected is more comprehensive than MusicBrainz's existing AcoustID database, which deals only with acoustic fingerprinting for song recognition. The new project is a partnership with the Music Technology Group at Universitat Pompeu Fabra, and uses that group's free-software toolkit Essentia to perform its acoustic analyses. A follow-up post digs into the AcousticBrainz analysis of the project's initial 650,000-track data set, including examinations of genre, mood, key, and other factors.

Comments (none posted)

New features in Git 2.2.0

The "Atlassian Developers" site has a summary of interesting features in the recent Git 2.2.0 release, including signed pushes. "This is an important step in preventing man-in-the-middle attacks and any other unauthorized updates to your repository's refs. git push has learnt the --signed flag which applies your GPG signature to a "push certificate" sent over the wire during the push invocation. On the server-side, git receive-pack (the command that handles incoming git pushes) has learnt to verify GPG-signed push certificates. Failed verifications can be used to reject pushes and those that succeed can be logged in a file to provide an audit log of when and who pushed particular ref updates or objects to your git server."

Comments (none posted)

Page editor: Nathan Willis

Announcements

Brief items

FSFE: Support FSFE’s work in 2015

The Free Software Foundation Europe is seeking donations for its work in 2015. "The best way to support the FSFE's work is to become a Fellow (a sustaining member of the FSFE). All Fellowship contributions directly benefit the FSFE’s work towards a free society. Fellows receive a state-of-the-art Fellowship smartcard which, together with the free GnuPG encryption software and a card reader, can be used to sign and encrypt e-mails, to secure SSH keys, to securely log into a computer from a potentially insecure machine, or to store the user’s hard disk encryption keys. Since the encryption key is stored on the card itself, it is almost impossible to steal."

Full Story (comments: none)

Articles of interest

Free Software Supporter - Issue 80

The Free Software Foundation's newsletter for November is out. Topics include FSF is hiring, organize a Giving Guide Giveaway, ThinkPenguin router that respects your freedom, copyleft.org, lobbyists pushing forward on TPP agreements, GNU Tools Cauldron 2014 videos posted, LibrePlanet, and much more.

Full Story (comments: none)

Mapping the world with open source (Opensource.com)

Opensource.com talks with Paul Ramsey, senior strategist at the open source company Boundless. "Boundless is the “Red Hat of geospatial”, which says a bit about our business model, but doesn’t really explain our technology. GIS professionals and IT professionals (and, really, anyone with a custom mapping problem) use our tools to store their data, in a spatial SQL database (PostGIS), publish maps and data over the web (GeoServer), and view or edit data in web browsers (OpenLayers) or on the desktop (QGIS). Basically, our tools let developers build web applications that understand and can attractively visualize location. We help people take spatial data out of the GIS department and use it to improve workflows and make decisions anywhere in the organization. This is part of what we see as a move towards what we call Spatial IT, where spatial data is used to empower decision-making across an enterprise."

Comments (none posted)

The Impact of the Linux Philosophy (Opensource.com)

Starting with the premise that all operating systems have a philosophy, this article on Opensource.com looks at the Linux philosophy and how it differs from other operating systems. "Imagine for a moment the chaos and frustration that would result from attempting to use a nail gun that asked you if you really wanted to shoot that nail and would not allow you to pull the trigger until you said the word “yes” aloud. Linux allows you to use the nail gun as you choose. Other operating systems let you know that you can use nails but don't tell you what tool is used to insert the nails let alone allow you to put your own finger on the trigger."

Comments (24 posted)

New Books

Black Hat Python -- New from No Starch Press

No Starch Press has released "Black Hat Python" by Justin Seitz.

Full Story (comments: none)

JavaScript for Kids -- New from No Starch Press

No Starch Press has released "JavaScript for Kids" by Nick Morgan.

Full Story (comments: none)

Calls for Presentations

LCA2015 Debian Miniconf & NZ2015 mini-DebConf

There will be a New Zealand mini-DebConf preceding linux.conf.au, on January 10-11, 2015. That will be followed by the Debian Miniconf at LCA, on January 12. The call for presentations is open until December 21.

Full Story (comments: none)

Prague PostgreSQL Developer Day 2015 call for papers

Prague PostgreSQL Developer Day 2015 will be held February 12, with some additional activities on February 11, in Prague, Czech Republic. The call for papers ends January 5. Most talks will be in Czech, but a few talks in English are welcome.

Full Story (comments: none)

Embedded Linux Conference 2015 - Call for Participation

The Embedded Linux Conference will be held March 23-25 in San Jose, California. The theme for this year is "Drones, Things and Automobiles". The call for papers deadline is January 9. "Presentations should be of a technical nature, covering topics related to use of Linux in embedded systems. Topics related to consumer electronics are particularly encouraged, but any proposals about Linux that are of general relevance to most embedded developers are welcome."

Full Story (comments: none)

Announcing netdev 0.1

"Netdev" is a new conference aimed at networking developers; it will be held February 14 to 17 in balmy Ottawa, Canada. The call for papers is open now, with a submission deadline of January 10. "Netdev 0.1 (year 0, conference 1) is a community-driven conference geared towards Linux netheads. Linux kernel networking and user space utilization of the interfaces to the Linux kernel networking subsystem are the focus. If you are using Linux as a boot system for proprietary networking, then this conference may not be for you."

Update: the conference organizers have posted more information on the CFP and the types of proposals they are looking for.

Full Story (comments: 10)

LSF/MM 2015 Call For Proposals

The 2015 Linux Storage, Filesystem, and Memory Management summit will be held March 9 and 10 in Boston. The call for agenda proposals has gone out, with a deadline of January 16. Attendance will be capped to facilitate discussions, so developers who are interested in attending this event might want to get their proposals in soon.

Full Story (comments: none)

CFP Deadlines: December 4, 2014 to February 2, 2015

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline | Event Dates | Event | Location
December 7 | January 31-February 1 | FOSDEM'15 Distribution Devroom/Miniconf | Brussels, Belgium
December 8 | February 18-February 20 | Linux Foundation Collaboration Summit | Santa Rosa, CA, USA
December 10 | February 19-February 22 | Southern California Linux Expo | Los Angeles, CA, USA
December 14 | January 12 | LCA Kernel miniconf | Auckland, New Zealand
December 17 | March 25-March 27 | PGConf US 2015 | New York City, NY, USA
December 21 | January 10-January 11 | NZ2015 mini-DebConf | Auckland, New Zealand
December 21 | January 12 | LCA2015 Debian Miniconf | Auckland, New Zealand
December 23 | March 13-March 15 | FOSSASIA | Singapore
December 31 | March 17-March 19 | OpenPOWER Summit | San Jose, CA, USA
January 1 | March 21-March 22 | Kansas Linux Fest | Lawrence, Kansas, USA
January 2 | May 21-May 22 | ScilabTEC 2015 | Paris, France
January 5 | January 12 | Linux.conf.au 2015 Multimedia and Music Miniconf | Auckland, New Zealand
January 5 | March 23-March 25 | Android Builders Summit | San Jose, CA, USA
January 5 | February 11-February 12 | Prague PostgreSQL Developer Days 2015 | Prague, Czech Republic
January 9 | March 23-March 25 | Embedded Linux Conference | San Jose, CA, USA
January 10 | May 16-May 17 | 11th Intl. Conf. on Open Source Systems | Florence, Italy
January 11 | March 12-March 14 | Studencki Festiwal Informatyczny / Academic IT Festival | Cracow, Poland
January 11 | March 11 | Nordic PostgreSQL Day 2015 | Copenhagen, Denmark
January 16 | March 9-March 10 | Linux Storage, Filesystem, and Memory Management Summit | Boston, MA, USA
January 19 | June 16-June 20 | PGCon | Ottawa, Canada
January 19 | June 10-June 13 | BSDCan | Ottawa, Canada
January 24 | February 14-February 17 | Netdev 0.1 | Ottawa, Ontario, Canada
January 30 | April 25-April 26 | LinuxFest Northwest | Bellingham, WA, USA
February 1 | April 13-April 17 | ApacheCon North America | Austin, TX, USA
February 1 | April 29-May 2 | Libre Graphics Meeting 2015 | Toronto, Canada

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

LCA 2015 and InternetNZ Diversity Program

LCA 2015 and InternetNZ are supporting diversity at linux.conf.au. "The InternetNZ Diversity Programme is one of the many ways we ensure that the LCA 2015 continues to be an open and welcoming conference for everyone. Together with InternetNZ this program has been created to assist under-represented delegates who contribute to the Open Source community but, without financial assistance, would not be able to attend LCA 2015."

Full Story (comments: 2)

Linux Foundation Announces 2015 Events Schedule

The Linux Foundation has announced the schedule for all of its 2015 conferences. The announcement contains links to each conference, as well as call-for-participation deadlines.

Full Story (comments: none)

Events: December 4, 2014 to February 2, 2015

The following event listing is taken from the LWN.net Calendar.

Date(s) | Event | Location
December 5-December 7 | SciPy India | Bombay, India
December 27-December 30 | 31st Chaos Communication Congress | Hamburg, Germany
January 10-January 11 | NZ2015 mini-DebConf | Auckland, New Zealand
January 12 | Linux.conf.au 2015 Multimedia and Music Miniconf | Auckland, New Zealand
January 12-January 16 | linux.conf.au 2015 | Auckland, New Zealand
January 12 | LCA Kernel miniconf | Auckland, New Zealand
January 12 | LCA2015 Debian Miniconf | Auckland, New Zealand
January 13 | Linux.Conf.Au 2015 Systems Administration Miniconf | Auckland, New Zealand
January 23 | Open Source in the Legal Field | Santa Clara, CA, USA
January 31-February 1 | FOSDEM'15 Distribution Devroom/Miniconf | Brussels, Belgium
January 31-February 1 | FOSDEM 2015 | Brussels, Belgium

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds