LWN.net Weekly Edition for February 6, 2014
A look at Lightworks 11.5
After a lengthy development and beta-testing process, video editing fans finally saw the first general release of Lightworks for Linux on January 29. Lightworks is a non-linear editing (NLE) suite with a considerable history as a proprietary Windows application. Several years ago, the developers of the product announced an ambitious plan to port it to both Linux and Mac OS X and release the result under an open source license. The new release does not go that far, but it is a promising milestone along the way. Linux users unsatisfied with their other NLE options will encounter some limitations with Lightworks (particularly the free version), but are likely to find it more powerful than most of the competition.
The first Lightworks releases date back to the early 1990s, around the same time that Avid and Adobe (who have dominated the NLE software market for ages) started their respective video product lines as well. Over the years the product itself changed hands several times, and most recently (in 2009) it was acquired by EditShare when that company purchased a rival vendor of server-side video software. As the name suggests, EditShare's primary product lines had been in other areas, such as digital asset management.
Several months after the acquisition, EditShare announced its plan to port Lightworks to Macs and Linux systems and to release it as an open source project. As is often the case, though, the timeline involved in the process proved to be lengthier and less predictable than the initial estimate. The original prediction was for an open source release by the end of 2011; ultimately the reworked application was released (in closed form) for Windows in 2012, and the first Linux betas in May 2013. Mac versions have been previewed, but not yet released.
Enter the shark
![[Lightworks 11.5]](https://static.lwn.net/images/2014/02-lightworks-clips-sm.png) 
The January 29 release is numbered 11.5, and the Linux packages are available for download in both RPM and Debian format, in 32-bit and 64-bit form. The officially supported distributions are Ubuntu and derivatives (such as Linux Mint) and Fedora. The current pricing scheme makes the basic version of the application free, with a "pro" option available (either for a one-time fee or on a subscription) that unlocks support for several additional proprietary codecs and the ability to export at greater than 720p output resolution.
Users are also required to create a username/password combination and to sign in when the application launches. For the free version, the account info is simply the same as is used for the discussion forum and ostensibly has no other effect; for pro users the account info is also used to authenticate the availability of the paid components.
What is not clear is whether or not this sign-on process will survive to the open source release. Already the sign-on process has been revised more than once since the first beta releases of Lightworks 11; the Mac-supporting release is rumored to be numbered 12 and to incorporate several other changes. Obviously it is quite possible to make an open source application authenticate to a remote server—but such a feature is likely to rankle at least a few developers (particularly if it involves relaying information about the local machine to said remote server), and EditShare's public statements about where the application is going long term leave some wondering.
Nevertheless, the sign-on process is simple enough, and those using the free edition will be happy to discover that it does not employ "nag-ware" techniques to try to push the paid options. The application itself might take some getting used to, particularly for those who have only worked with smaller or more lightweight NLEs. Fortunately, there is copious documentation available online (at the "download" link).
The biggest difference is that most other NLEs available for Linux tend to use a static window layout: scrollable list of "clips" down one side, edit timeline along the bottom, and preview/playback frame in the remaining space. Lightworks, on the other hand, is geared around floating windows and palettes that can be freely rearranged, closed, and reopened. Import some video clips, and they appear in a "bin" window. Start working on a clip, and it opens up in its own player window with its own timeline. Start assembling a scene, and a new "edit" window pops up to hold it.
In practice, this is less confusing than it might sound. For one thing, Lightworks runs full screen, and has a fixed toolbar on one side and project headers that rest at the top of the screen (in addition, the playback controls can be docked to the bottom edge of the screen, rather than being duplicated on each window, so more savings are possible). But it is also clear after working through the User's Guide that a lot of Lightworks's interface decisions exist to maximize usability on large projects. Imported clips appear in a "bin" window of their own so you can minimize that window when you need room for other things; "bins" themselves can be renamed and stacked in "racks," and you can even save several screen layouts (called "rooms" in Lightworks slang) and switch between them. In a large production, that could be beneficial, because you might have different people working on color correction and sound editing, or any number of other tasks.
Editing features and effects
![[Lightworks 11.5]](https://static.lwn.net/images/2014/02-lightworks-editing-sm.png) 
Speaking of color correction, the good news is that Lightworks supports a wide range of effects and filters, including color correction (in a variety of color models), keying (better known to those of us outside Hollywood as "green screen" effects), titling, standard transitions, split-screening, and even stereoscopic 3D. Most of the complex effects include a full-featured control panel and a set of usable presets. Effects are a notoriously tricky feature for Linux NLEs; many open source projects implement a few, or implement them with a minimal set of options. Even for a small project, it is not hard to bump up against the limitations of the effects modules and experience frustration.
As far as the editing process itself is concerned, once one gets used to the window management model, Lightworks is actually fairly easy to work with. None of the controls or UI elements are difficult to figure out, which is an accomplishment—many open source NLEs struggle to find the right icons and cursor shapes to indicate what sort of operations are available, and Lightworks manages to be virtually self-explanatory. It even goes so far as to highlight related windows with a border of the same color (e.g., a playback window and the timeline window associated with it) to avoid confusion.
Similarly, for the most part the functionality is located where one would expect to find it. There are NLEs, for example, that list all of the available effects in a top-level menu, even though effects can only be applied to clips on the timeline. In Lightworks, effects are only accessible in windows to which they can be applied. Fans of big screens will notice that the UI toolkit is fully scalable, and although it attempts to default to a reasonable size, it can be manually scaled up or down.
This is not to say that there are no areas for potential improvement, of course. Depending on the color scheme and surrounding environment, it can be a tad difficult to tell which audio or video tracks are active and which are deactivated, since the only indicator is a "glow" effect around the track name. There are also some editing buttons ("Replace" and "Insert") which are visible only when the playback controls are in global mode—which is presumably a bug. More sensitive users who shudder at the memory of Microsoft's Clippy might not care for the cartoon shark who lives in the corner of the screen offering how-to tips, although, to be fair, "Chompy" (or whatever its real name is) is far less talkative and intrusive.
The major limitation, of course, is that the free version of Lightworks imposes a hard upper limit on output resolution (720p), and there is no support for exporting to some of the more common video codecs. This is reasonable in theory—after all, codec licensing fees for encoders are arguably the biggest money-makers for MPEG-LA and other commercial codec purveyors. I was, however, a bit surprised to find that "YouTube" was the only output option in version 11.5 (which evidently is a preset for .mp4 format). Not even exporting to a local file was available, although projects can be saved in Lightworks's native project format.
When we return...
The codec issue could prove to be a major obstacle with the open source community once EditShare begins releasing source code. The company's plans are not yet clear; in recent months there was talk of selling professional codecs as add-ons, in language that suggests a piecemeal approach rather than the all-or-nothing "pro license" option. But that could be reading too much into the specifics of the wording. Nevertheless, forum users have already asked about free codecs like Google's VP9; the official response was that the company is "investigating implementation".
Obviously it is up to the company to choose what to implement now, and even in an open source project there would be a good case to be made for restricting proprietary codecs to proprietary plugins. But things could get strained if the project attempts to prevent the addition of support for free codecs. Depending on presently unknown factors like the license and the architecture of the code, outside developers might just hack in the support they want in their own forks (in fact, some certainly will).
The bigger risk is the potential for alienating the larger development community over that sort of issue. Few users are likely to see spending money on a VP9 encoder as justified, particularly in light of the fact that Lightworks already uses open source components like FFmpeg under the hood. Declining a pull request that the community feels is a no-brainer could spawn acrimony if not handled correctly. If nothing else, a major difference of opinion means shedding outside talent who would otherwise be interested in participating in development. So far, there is absolutely no reason to expect things to go badly—but as those who follow the open source movement know, roll-outs of previously proprietary code can be tricky to manage.
Long term, it will certainly be interesting to watch where EditShare takes its project, especially what approach it takes to underwriting its development expenses. The initial decision to produce a free NLE and to release it as open source software means that the company is not out to maximize its revenue; perhaps it is not interested in competing head-to-head against larger established players like Avid and Apple, or perhaps it is more interested in linking Lightworks to its existing server-side products. At the same time, its decision to update and release paid Windows and Mac versions of the product suggests that the company is being careful not to alienate its existing customer base.
There are any number of other business models to consider, of course. The poster child for freeing a proprietary application and turning it into a profitable enterprise is Blender. As most people are aware, Blender funds development through a variety of means, including books and training classes. Interestingly enough, EditShare recently announced its own line of Lightworks training courses, in addition to paid support plans. Time will tell what approach it takes (and how successful it will be).
For users who have been anticipating the open source release of Lightworks since 2010, more waiting might sound like an exasperating prospect. The good news is that, based on this 11.5 release, Lightworks seems to be as solid a project as its Windows fans suggested. The existing open source NLE projects have other questions to consider, of course. After all, Lightworks is modern and featureful, which makes it a competitor; when it joins the ranks of the open source projects, however, the prospect for "coopetition" becomes a lot more interesting. Blender's open source release significantly cut into the development of other free 3D modeling applications; a Lightworks source release could have a similar effect—but it could also bring considerably more users and attention to the Linux NLE environment.
A possible setback for DRM in Europe
It's amazing how much the computing power of video game consoles has changed over time. For example, the Nintendo Wii, launched in 2006, features a 729 MHz CPU and 88 MB of RAM, which is quite a step up from the consoles of the 1980s. In fact, the Wii has enough power to browse the web, listen to music, and to handle most other general purpose tasks on a modern Linux distribution. That isn't just theoretical, either; it is actually something you can do. That is, it's something you can do if laws don't prohibit circumventing the DRM of the device; a recent ruling by Europe's Supreme Court involving Nintendo and an Italian company may directly affect that in the European Union (EU).
Thirteen years ago, in Directive 2001/29, the EU required its 28 member states, including Italy (where this case originated), to pass legislation that, among other things, "provide[s] adequate legal protection against the circumvention of any effective technological measures". This includes prohibiting:
However, a recent ruling by the European Court of Justice (ECJ) — the EU's Supreme Court — may have dramatically weakened the anti-circumvention prohibition. The ECJ adjudicated a dispute between the video game behemoth Nintendo and PC Box, a small Italian company. PC Box markets jailbreaking tools for the Wii console and DS, a dual-screen portable handheld gaming system also from Nintendo, which are both capable of running Linux. PC Box sold Wii and DS systems with hardware modifications that allowed the execution of arbitrary code, and came with homebrew video games pre-installed. The modifications broke the DRM on the consoles. Nintendo, less than happy with this, sued in Italy; the case was heard by the Tribunale di Milano (the Milan District Court, a lower court in Italy).
That court decided it best to refer the case to the ECJ to answer two questions that were dense, impenetrable, and filled with legalese. Those questions were essentially:
- Does the anti-circumvention provision also cover video game consoles which include access control hardware, deliberately made not to be interoperable with anything else, which checks to see if video games inserted into the console include a signature that allows them to be played on the console?
- How, if at all, are "the scope, the nature and the importance of the use of devices, products or components capable of circumventing those effective technological measures, such as PC Box equipment [...] relevant" to whether or not they fall afoul of the legal prohibition?
The court's answers were: Yes and Very, respectively.
With regard to the first, the court noted that the anti-circumvention clauses catch a lot of activity and devices: "the concept of 'effective technological measures' is defined broadly". Such measures include combining lock-out chips on game consoles with a requirement that authorized games contain authorization code that satisfies those chips.
While a first-glance reading of the court's answer to the second question might seem to appeal to DRM supporters, the last sentence of paragraph 38 provides an opening for reducing the scope of DRM:
That last phrase is crucial: "how often they are used for purposes which do not infringe copyright." That's the Achilles heel for Nintendo when the case goes back to Italy's courts. Applying the standard in paragraph 38 to this case, the ECJ is saying that it's up to the Italian court to decide whether or not there's a feasible alternative to DRM-circumvention (in the form of PC Box's hardware modding) for enabling the product PC Box claims to be marketing: homebrew game playing and audio/video playback on Nintendo's Wii and DS systems. If there isn't a feasible alternative, and the alleged PC Box product is, in fact, often used for non-infringing purposes, then Nintendo's case could fall apart.
But the Interactive Software Federation of Europe (ISFE) was upbeat about the ruling. Essentially, its argument is that any DRM-circumvention device for video game consoles can't have any commercially significant use besides allowing the playing of infringing copies of games: "ISFE is confident that the application of the test of proportionality set out by the CJEU will enable the Milan Court to determine that the sale of circumvention devices is unlawful".
But that's simply not true. There are significant applications for repurposing video game console hardware for general-purpose computing. The National Center for Supercomputing Applications made a supercomputer by clustering Sony PlayStation 2s in 2003. Two years ago, the United States Department of Defense (DoD) clustered over 1,700 PlayStation 3s to make a powerful Linux-based supercomputer; it was so powerful that the Air Force Research Laboratory, which made the cluster, called it "the fastest interactive computer system in the entire DoD, capable of executing 500 trillion floating point operations per second." This technical achievement was also a great financial success:
There are also substantial applications for individual users and small businesses, particularly as desktop computing solutions. Sony's PlayStation 4 features an eight-core 64-bit CPU, a dedicated GPU, 8 GB of GDDR5 RAM, WiFi, and USB 3.0 ports. That's much more powerful and also much cheaper than my relatively new laptop. Throw in a cheap monitor, keyboard, and mouse, and you could potentially have an affordable and powerful desktop computer ... if it was legal to break Sony's DRM.
Those are some of the reasons to be skeptical of the ISFE's argument. Another reason is the anxious reaction from some highly respected copyright lawyers with decades of experience. In an article titled "Does the CJEU ruling in Nintendo and Others v PC Box Srl raise serious implications for device manufacturers?", three experienced lawyers with the multinational law firm Osborne Clarke note: "The CJEU has effectively said that Nintendo may use TPMs [technological protection measures] to prevent illegal use of videogames but not to prevent other, non-infringing, uses of the consoles." They list some of their concerns about the effects of the ruling; in particular, they find it "worrying that device manufacturers potentially have no control over what their devices are used for".
It's important to emphasize that the European Court of Justice's ruling has already set a strong precedent for the entire EU when it comes to DRM anti-circumvention law. The ECJ is the EU's top court when it comes to interpreting Directives, such as the ones that deal with DRM. No lower courts can go against the ECJ's ruling.
The Milan District Court is a lower court, but its ruling will eventually provide an example throughout the EU of how the ECJ's test for permissible DRM circumvention can be applied. Any other court anywhere in the EU dealing with a similar issue will likely look at how the Milan District Court has grappled with the issue, although it won't be bound by the ruling.
Looking at the ECJ's ruling and at Osborne Clarke's reaction, there is a good chance that Nintendo will lose this case. Hardware hackers and open-source enthusiasts residing in Europe who want to repurpose the latest video game console hardware for fun and/or profit should keep their eyes on the Milan District Court, as it will rule on the case — at a date yet to be determined. Don't be surprised if Nintendo comes out losing in a big way, so stay tuned as we watch this case unfold.
Security
"Strong" stack protection for GCC
Stack buffer overflows are a longstanding problem for C programs, one that leads to all manner of ills, many of which are security vulnerabilities. The biggest problems have typically been with string buffers on the stack coupled with bad or missing length tests. A programmer who mistakenly leaves open the possibility of overrunning a buffer on a function's stack may be allowing attackers to overwrite the return pointer pushed onto the stack earlier. Since the attackers may be able to control what gets written, they can control where the function returns—with potentially dire results. GCC, like many compilers, offers features to help detect buffer overflows; the upcoming 4.9 release offers a new stack-protection mode with a different tradeoff between security and performance impact.
GCC has supported stack protection for some time; it currently offers two different types. Recently, Google engineers came up with another style that tries to chart a middle course between the two existing options. It has made its way into GCC 4.9 (expected later this year), and the upcoming 3.14 kernel has support for building with that option.
The basic idea behind stack protection is to push a "canary" (a randomly chosen integer) on the stack just after the function return pointer has been pushed. The canary value is then checked before the function returns; if it has changed, the program will abort. Generally, stack buffer overflow (aka "stack smashing") attacks will have to change the value of the canary as they write beyond the end of the buffer before they can get to the return pointer. Since the value of the canary is unknown to the attacker, it cannot be replaced by the attack. Thus, the stack protection allows the program to abort when that happens rather than return to wherever the attacker wanted it to go.
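To make the mechanism concrete, here is a classic vulnerable function (a minimal sketch; the function names are invented for illustration):

    #include <string.h>

    /* buf is a character array, so building with "gcc -fstack-protector"
     * places a canary between it and the saved return address in this
     * function's stack frame */
    static void copy_name(const char *input)
    {
            char buf[16];

            strcpy(buf, input);     /* no length check: input can overrun buf */
    }

    int main(int argc, char **argv)
    {
            if (argc > 1)
                    copy_name(argv[1]);
            return 0;
    }

Given an argument longer than the buffer, the overflow corrupts the canary on its way to the return pointer; the check in the function epilogue then prints "*** stack smashing detected ***" and aborts the program, rather than returning to an attacker-chosen address.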
There is a downside to using canaries. The value must be generated and checked, which takes some time, but more importantly there must be code added to handle the canary for each function that is protected that way. That extra code results in some level of performance degradation, perhaps mostly due to a larger cache footprint. For this reason, it can make sense to restrict stack protection to a subset of all the functions in a program.
So the question has always been: "Which functions should be protected?" Putting stack protection into every function is overkill and may hurt performance, so one of the GCC options chooses a subset of functions to protect. The existing -fstack-protector-all option will protect all functions, while the -fstack-protector option chooses any function that declares a character array of eight bytes or more in length on its stack. Some distributions have lowered that threshold (e.g. to four) in their builds by using the --param=ssp-buffer-size=N option.
That "character array" test catches the most "at risk" functions, but it leaves a number of other functions behind. As Kees Cook pointed out in a recent blog post, the Google Chrome OS team had been using -fstack-protector-all since the team is "paranoid", but a new -fstack-protector-strong option has been developed to broaden the scope of the stack protection without extending it to every function in the program.
In addition to the protections offered by -fstack-protector, the new option will guard any function that declares any type or length of local array, even those in structs or unions. It will also protect functions that use a local variable's address in a function argument or on the right-hand side of an assignment. Finally, any function that uses local register variables will be protected. According to Cook, Chrome OS has been using -fstack-protector-strong (instead of protecting all functions) for ten months or so.
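To illustrate the difference, consider two hypothetical functions (the names are invented). Neither declares a character array, so plain -fstack-protector would leave both unguarded; -fstack-protector-strong instruments both:

    /* a local array of any type or length triggers the strong variant */
    static int sum3(void)
    {
            int v[3] = { 1, 2, 3 };

            return v[0] + v[1] + v[2];
    }

    extern void fill_in(int *p);

    /* so does taking the address of a local variable and passing it
     * as a function argument */
    static int addr_taken(void)
    {
            int x = 0;

            fill_in(&x);
            return x;
    }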
During the 3.14 merge window, Linus Torvalds pulled Cook's patches to add the ability to build the kernel using the strong stack protection. In Ingo Molnar's pull request (and Cook's post), the results of using strong protection on the kernel were presented. The kernel with -fstack-protector turned on is 0.33% larger and covers 2.81% of the functions in the kernel. For -fstack-protector-strong, those numbers are an increase of 2.4% in code size over an unprotected kernel, but 20.5% of the functions are covered.
The CONFIG_CC_STACKPROTECTOR_STRONG kernel configuration option adds the strong protection, while the CONFIG_CC_STACKPROTECTOR option for the "regular" protection has been renamed to reflect that: CONFIG_CC_STACKPROTECTOR_REGULAR. The default CONFIG_CC_STACKPROTECTOR_NONE does just what its name would imply.
While stack protection certainly isn't a panacea for security woes, it will catch a significant portion of real-world attacks. Having an option that strikes a balance between the ultra-paranoid "all" and the regular variant (not to mention the wide-open "none" option) is likely to catch more bugs—and attack vectors. We will likely see some of the more security-conscious distributions building their user-space programs and kernels with the "strong" option moving forward.
Brief items
Security quotes of the week
OpenSSH 6.5 released
The "feature-focused" OpenSSH 6.5 release is available. Changes include new ciphers and key types, a new private key format, and more. "Add support for key exchange using elliptic-curve Diffie Hellman in Daniel Bernstein's Curve25519. This key exchange method is the default when both the client and server support it."
New vulnerabilities
bind: denial of service
| Package(s): | bind | CVE #(s): | CVE-2013-3919 |
| Created: | January 30, 2014 | Updated: | February 5, 2014 |
| Description: | From the CVE entry: resolver.c in ISC BIND 9.8.5 before 9.8.5-P1, 9.9.3 before 9.9.3-P1, and 9.6-ESV-R9 before 9.6-ESV-R9-P1, when a recursive resolver is configured, allows remote attackers to cause a denial of service (assertion failure and named daemon exit) via a query for a record in a malformed zone. |
curl: information disclosure
| Package(s): | curl | CVE #(s): | CVE-2014-0015 |
| Created: | January 31, 2014 | Updated: | February 24, 2014 |
| Description: | From the Debian advisory: Paras Sethia discovered that libcurl, a client-side URL transfer library, would sometimes mix up multiple HTTP and HTTPS connections with NTLM authentication to the same server, sending requests for one user over the connection authenticated as a different user. |
flite: insecure temporary files
| Package(s): | flite | CVE #(s): | CVE-2014-0027 |
| Created: | February 5, 2014 | Updated: | February 17, 2014 |
| Description: | From the CVE entry: The play_wave_from_socket function in audio/auserver.c in Flite 1.4 allows local users to modify arbitrary files via a symlink attack on /tmp/awb.wav. NOTE: some of these details are obtained from third party information. |
horde3: code execution
| Package(s): | horde3 | CVE #(s): | CVE-2014-1691 |
| Created: | February 5, 2014 | Updated: | February 5, 2014 |
| Description: | From the Debian advisory: Pedro Ribeiro from Agile Information Security found a possible remote code execution on Horde3, a web application framework. Unsanitized variables are passed to the unserialize() PHP function. A remote attacker could specially craft one of those variables, allowing her to load and execute code. |
kernel: privilege escalation
| Package(s): | kernel | CVE #(s): | CVE-2014-0038 |
| Created: | January 31, 2014 | Updated: | February 20, 2014 |
| Description: | From the Ubuntu advisory: Pageexec reported a bug in the Linux kernel's recvmmsg syscall when called from code using the x32 ABI. An unprivileged local user could exploit this flaw to cause a denial of service (system crash) or gain administrator privileges. |
libmicrohttpd: denial of service
| Package(s): | libmicrohttpd | CVE #(s): | CVE-2013-7038 |
| Created: | January 31, 2014 | Updated: | February 5, 2014 |
| Description: | From the Mageia alert: The MHD_http_unescape function in libmicrohttpd before 0.9.32 might allow remote attackers to obtain sensitive information or cause a denial of service (crash) via unspecified vectors that trigger an out-of-bounds read. |
libotr: information disclosure
| Package(s): | libotr | CVE #(s): | |
| Created: | January 31, 2014 | Updated: | February 5, 2014 |
| Description: | From the Debian bug report: It's been known [1] since 2006 that clients supporting both OTRv1 and v2 (such as libotr 3.x) are subject to protocol downgrade attacks. It's also been known for a while that OTRv1 has serious security issues (that were the main reason for a v2, actually). In short, supporting v2 only is the only safe way to go these days. |
libvirt: multiple vulnerabilities
| Package(s): | libvirt | CVE #(s): | CVE-2013-6457 CVE-2014-0028 |
| Created: | January 31, 2014 | Updated: | February 5, 2014 |
| Description: | From the Ubuntu advisory: Dario Faggioli discovered that libvirt incorrectly handled the libxl driver. A local user could possibly use this flaw to cause libvirtd to crash, resulting in a denial of service, or possibly execute arbitrary code. This issue only affected Ubuntu 13.10. (CVE-2013-6457) Eric Blake discovered that libvirt incorrectly handled certain ACLs. An attacker could use this flaw to possibly obtain certain sensitive information. This issue only affected Ubuntu 13.10. (CVE-2014-0028) |
libyaml: code execution
| Package(s): | libyaml | CVE #(s): | CVE-2013-6393 |
| Created: | February 3, 2014 | Updated: | April 7, 2014 |
| Description: | From the Debian advisory: Florian Weimer of the Red Hat Product Security Team discovered a heap-based buffer overflow flaw in LibYAML, a fast YAML 1.1 parser and emitter library. A remote attacker could provide a YAML document with a specially-crafted tag that, when parsed by an application using libyaml, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. |
moodle: multiple vulnerabilities
| Package(s): | moodle | CVE #(s): | CVE-2014-0008 CVE-2014-0009 CVE-2014-0010 |
| Created: | January 31, 2014 | Updated: | February 12, 2014 |
| Description: | From the Red Hat Bugzilla: Andrew Steele found that some password changes were visible in plain text to Administrators in the config changes report. This issue affected Moodle versions 2.6, 2.5 to 2.5.4, 2.4 to 2.4.7 and earlier unsupported versions. It has been fixed in versions 2.6.1, 2.5.4 and 2.4.8. (CVE-2014-0008) Itamar Tzadok found an issue in the group constraint checking for loginas. In some cases if a user had loginas privileges but not the site:accessallgroups capability, they could use this flaw to log in as a user not in their group. This issue affected Moodle versions 2.6, 2.5 to 2.5.4, 2.4 to 2.4.7, 2.3 to 2.3.10 and earlier unsupported versions. It has been fixed in 2.6.1, 2.5.4, 2.4.8 and 2.3.11. (CVE-2014-0009) Jun Zhu found that some profile fields were vulnerable to Cross-Site Request Forgery (CSRF). An attacker could use these flaws to perform actions on profiles (such as deleting categories). These issues affected Moodle versions 2.6, 2.5 to 2.5.4, 2.4 to 2.4.7, 2.3 to 2.3.10 and earlier unsupported versions. It has been fixed in 2.6.1, 2.5.4, 2.4.8 and 2.3.11. (CVE-2014-0010) |
mozilla: multiple vulnerabilities
| Package(s): | firefox, thunderbird, seamonkey | CVE #(s): | CVE-2014-1477 CVE-2014-1479 CVE-2014-1481 CVE-2014-1482 CVE-2014-1486 CVE-2014-1487 |
| Created: | February 5, 2014 | Updated: | February 24, 2014 |
| Description: | From the Red Hat advisory: Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2014-1477, CVE-2014-1482, CVE-2014-1486) A flaw was found in the way Firefox handled error messages related to web workers. An attacker could use this flaw to bypass the same-origin policy, which could lead to cross-site scripting (XSS) attacks, or could potentially be used to gather authentication tokens and other data from third-party websites. (CVE-2014-1487) A flaw was found in the implementation of System Only Wrappers (SOW). An attacker could use this flaw to crash Firefox. When combined with other vulnerabilities, this flaw could have additional security implications. (CVE-2014-1479) It was found that the Firefox JavaScript engine incorrectly handled window objects. A remote attacker could use this flaw to bypass certain security checks and possibly execute arbitrary code. (CVE-2014-1481) |
mumble: multiple vulnerabilities
| Package(s): | mumble | CVE #(s): | CVE-2014-0044 CVE-2014-0045 |
| Created: | February 5, 2014 | Updated: | May 8, 2014 |
| Description: | From the Debian advisory: CVE-2014-0044: It was discovered that a malformed Opus voice packet sent to a Mumble client could trigger a NULL pointer dereference or an out-of-bounds array access. A malicious remote attacker could exploit this flaw to mount a denial of service attack against a mumble client by causing the application to crash. CVE-2014-0045: It was discovered that a malformed Opus voice packet sent to a Mumble client could trigger a heap-based buffer overflow. A malicious remote attacker could use this flaw to cause a client crash (denial of service) or potentially use it to execute arbitrary code. |
openldap: denial of service
| Package(s): | openldap | CVE #(s): | CVE-2013-4449 |
| Created: | February 4, 2014 | Updated: | March 11, 2014 |
| Description: | From the Red Hat advisory: A denial of service flaw was found in the way the OpenLDAP server daemon (slapd) performed reference counting when using the rwm (rewrite/remap) overlay. A remote attacker able to query the OpenLDAP server could use this flaw to crash the server by immediately unbinding from the server after sending a search request. |
openstack-nova: information leak
| Package(s): | openstack-nova | CVE #(s): | CVE-2013-7130 |
| Created: | February 5, 2014 | Updated: | February 5, 2014 |
| Description: | From the Red Hat bugzilla: Loganathan Parthipan from Hewlett Packard reported a vulnerability in the Nova libvirt driver. By spawning a server with the same flavor as another user's migrated virtual machine, an authenticated user can potentially access that user's snapshot content resulting in information leakage. Only setups using KVM live block migration are affected. |
openstack-nova: information disclosure
| Package(s): | openstack-nova | CVE #(s): | CVE-2013-6491 |
| Created: | January 31, 2014 | Updated: | May 7, 2014 |
| Description: | From the Red Hat advisory: It was discovered that enabling "qpid_protocol = ssl" in the nova.conf file did not result in nova using SSL to communicate to Qpid. If Qpid was not configured to enforce SSL this could lead to sensitive information being sent unencrypted over the communication channel. |
perl-MARC-XML: information disclosure
| Package(s): | perl-MARC-XML | CVE #(s): | CVE-2014-1626 |
| Created: | January 31, 2014 | Updated: | February 5, 2014 |
| Description: | From the CVE entry: XML External Entity (XXE) vulnerability in MARC::File::XML module before 1.0.2 for Perl, as used in Evergreen, Koha, perl4lib, and possibly other products, allows context-dependent attackers to read arbitrary files via a crafted XML file. |
pidgin: multiple vulnerabilities
| Package(s): | pidgin | CVE #(s): | CVE-2012-6152 CVE-2013-6477 CVE-2013-6478 CVE-2013-6479 CVE-2013-6481 CVE-2013-6482 CVE-2013-6483 CVE-2013-6484 CVE-2013-6485 CVE-2013-6486 CVE-2013-6487 CVE-2013-6489 CVE-2013-6490 CVE-2014-0020 |
| Created: | February 4, 2014 | Updated: | June 2, 2014 |
| Description: | From the Mageia advisory: Many places in the Yahoo! protocol plugin assumed incoming strings were UTF-8 and failed to transcode from non-UTF-8 encodings. This can lead to a crash when receiving strings that aren't UTF-8 (CVE-2012-6152). A remote XMPP user can trigger a crash on some systems by sending a message with a timestamp in the distant future (CVE-2013-6477). libX11 forcefully exits causing a crash when Pidgin tries to create an exceptionally wide tooltip window when hovering the pointer over a long URL (CVE-2013-6478). A malicious server or man-in-the-middle could send a malformed HTTP response that could lead to a crash (CVE-2013-6479). The Yahoo! protocol plugin failed to validate a length field before trying to read from a buffer, which could result in reading past the end of the buffer which could cause a crash when reading a P2P message (CVE-2013-6481). NULL pointer dereferences in the MSN protocol plugin due to a malformed Content-Length header, or a malicious server or man-in-the-middle sending a specially crafted OIM data XML response or SOAP response (CVE-2013-6482). The XMPP protocol plugin failed to ensure that iq replies came from the person they were sent to. A remote user could send a spoofed iq reply and attempt to guess the iq id. This could allow an attacker to inject fake data or trigger a null pointer dereference (CVE-2013-6483). Incorrect error handling when reading the response from a STUN server could lead to a crash (CVE-2013-6484). A malicious server or man-in-the-middle could cause a buffer overflow by sending a malformed HTTP response with chunked Transfer-Encoding with invalid chunk sizes (CVE-2013-6485). A malicious server or man-in-the-middle could send a large value for Content-Length and cause an integer overflow which could lead to a buffer overflow in Gadu-Gadu HTTP parsing (CVE-2013-6487). A specially crafted emoticon value could cause an integer overflow which could lead to a buffer overflow in MXit emoticon parsing (CVE-2013-6489). A Content-Length of -1 could lead to a buffer overflow in SIMPLE header parsing (CVE-2013-6490). A malicious server or man-in-the-middle could trigger a crash in IRC argument parsing in libpurple by sending a message with fewer than expected arguments (CVE-2014-0020). |
qemu: denial of service
| Package(s): | qemu, qemu-kvm | CVE #(s): | CVE-2013-4377 |
| Created: | January 31, 2014 | Updated: | February 13, 2014 |
| Description: | From the Ubuntu advisory: Sibiao Luo discovered that QEMU incorrectly handled device hot-unplugging. A local user could possibly use this flaw to cause a denial of service. This issue only affected Ubuntu 13.10. |
tntnet: information leak
| Package(s): | tntnet | CVE #(s): | CVE-2013-7299 |
| Created: | February 5, 2014 | Updated: | February 17, 2014 |
| Description: | From the CVE entry: framework/common/messageheaderparser.cpp in Tntnet before 2.2.1 allows remote attackers to obtain sensitive information via a header that ends in \n instead of \r\n, which prevents a null terminator from being added and causes Tntnet to include headers from other requests. |
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 3.14-rc1, released on February 2. Everybody hoping for a π-oriented codename for this release will be disappointed: "I realize that as a number, 3.14 looks familiar to people, and I had naming requests related to that. But that's simply not how the nonsense kernel names work. You can console yourself with the fact that the name doesn't actually show up anywhere, and nobody really cares. So any pi-related name you make up will be *quite* as relevant as the one in the main Makefile, so don't get depressed." Instead, this kernel is named "Shuffling zombie juror."
Stable updates: no stable updates have been released in the last week. As of this writing, the 3.13.2 (140 patches), 3.12.10 (133 patches), 3.10.29 (104 patches) and 3.4.79 (37 patches) updates are in the review process; they can be expected on or after February 6.
The 2.6.34.15 update is also in the review process; it contains 213 patches. Paul Gortmaker writes: "This will be the last release on 2.6.34.x; people should be making migration plans to newer kernels. As such, the focus here has been with CVE items, data leaks, and bugs that could trigger BUG/oops."
Quotes of the week
kGraft — live kernel patching from SUSE
SUSE has announced the existence of kGraft, a mechanism for applying kernel patches without the need to reboot the system. It is similar to ksplice in functionality, but the implementation appears to be rather different and the developers plan to try to get it merged into the mainline kernel. "kGraft builds on technologies and ideas that are already present in the kernel: ftrace and its mcount-based reserved space in function headers, the INT3/IPI-NMI patching also used in jumplabels, and RCU-like update of code that does not require stopping the kernel. A kGraft patch is a kernel module and fully relies on the in-kernel module loader to link the new code with the kernel. Thanks to all that, the design can be nicely minimalistic." The first code release is planned for March.
Kernel development news
3.14 Merge window part 3
By the time Linus closed the merge window and released 3.14-rc1, a total of 10,622 non-merge changesets had been pulled into the mainline kernel repository. That makes this merge window the busiest since 3.10, though it beat 3.13 by a mere 104 patches. At the current rate, 3.10 (11,963 patches pulled during the merge window) is likely to hold its record for some time yet.

Interesting user-visible changes found in the 2000 patches pulled since last week's summary include:
- The zram compressed swap subsystem (described in this article from 2013) has been moved out of the staging tree and into the core memory management code. Minchan Kim's commit notes that zram is now used heavily in television sets; recent Android handsets have started using it as well.
- Support for user mode-setting in the Intel i915 driver has been deprecated, in preparation for removing it entirely roughly one year from now. Anybody who depends on this mode would do well to make their needs known in that time.
- The Btrfs filesystem now provides much more information via sysfs, including supported features, space utilization data, and more. Much of this information is available via ioctl(), but sysfs interfaces can be easier to use in scripts or from the command line.
- New hardware support includes:
  - Systems and processors: MIPS interAptiv processors.
  - Miscellaneous: ITE IT8603E hardware monitoring chips, Intel BayTrail IOSF-SB mailbox interface controllers, Broadcom BCM281xx watchdogs, Broadcom BCM2835 DMA controllers, MOXA ART SoC DMA controllers, and watchdogs controlled over GPIO lines.
  - Networking: RealTek RTL8821AE Wireless LAN NICs.
  - Video4Linux: TI OMAP4 camera controllers, Broadcom BCM2048 FM radio receivers, Silicon Labs Si4713 FM radio transmitters, Thanko Raremono AM/FM/SW radios, Montage M88DS3103 DVB-S/S2 demodulators, Montage M88TS2022 silicon tuners, and Samsung S5K5BAF camera sensors.
Changes visible to kernel developers include:
- The "immutable biovec" patch set has been merged; it introduces some significant API changes to the block layer, but it enables the creation of arbitrarily large I/O requests and improves efficiency. See Documentation/block/biovecs.txt for more information.
One final feature that might yet make it into 3.14 is the proposed renameat2() system call, which Linus wanted to review more deeply before committing to. That code might get pulled before 3.14-rc2, but, Linus said, "quite frankly it's more likely to be left pending for 3.15". Other than that, the feature set for the 3.14 kernel should be complete at this time. If the usual schedule holds, this kernel can be expected sometime toward the end of March.
An x32 local exploit
So far, the x32 ABI—a 32-bit ABI for running on x86 processors in 64-bit mode—is not widely used. Only a few distributions have enabled support for it in their kernels (notably Ubuntu), which somewhat reduces the impact of a recently discovered local privilege escalation, but the bug has been in the kernel since 2012. It's a nasty hole, one that required a quick fix for Ubuntu 13.10 (and two hardware enablement kernels for 12.04 LTS: linux-lts-raring and linux-lts-saucy).
It is the x32 version of recvmmsg() that has the bug. In the compat_sys_recvmmsg() function that is part of the compatibility shim for handling multiple ABIs in the kernel, a user-space pointer for the timeout value is treated as a kernel pointer (rather than copied using copy_from_user()) for the x32 ABI. The value of the timeout pointer is controlled by the user, but it gets passed as a kernel pointer that __sys_recvmmsg() (which implements the system call) will use. The kernel will dereference the pointer for both reading and writing, which allows a local, unprivileged user to get root privileges.
The problem was reported to the closed security@kernel.org and linux-distros mailing lists on January 28 by Kees Cook, after "PaX Team" reported it to the Chrome OS bug tracker (in a still-restricted entry). It was embargoed for two days to give distributions time to get fixes out. After that, "Solar Designer" reported it publicly since Cook was traveling. It is a serious bug, but is somewhat mitigated by the fact that few distributions have actually enabled the ABI.
The x32 ABI came about largely to combat the amount of memory wasted on x86_64 processors for 64-bit pointers (and long integers) in programs that did not require the extra 32 bits for each value. It allows programs to use the extra registers and other advantages that come with x86_64 without paying the penalty of extra memory usage. In theory, that should lead to less memory usage and faster programs due to a smaller cache footprint. So far, though, those benefits are somewhat speculative—and controversial.
X32 does exist in the kernel, however, and can be enabled with the CONFIG_X86_X32 flag. If it is enabled, any user can build an x32 program using GCC with the -mx32 flag. The kernel will recognize such a binary and handle it appropriately.
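For example, this trivial program (an illustration, not anything from the kernel tree), when built with gcc -mx32, produces a binary that runs in 64-bit mode but uses 32-bit pointers:

    #include <stdio.h>

    int main(void)
    {
            /* prints 4 when built with -mx32; 8 for a normal x86_64 build */
            printf("sizeof(void *) = %zu\n", sizeof(void *));
            return 0;
    }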
The bug was introduced in a February 2012 commit that was adding support for 64-bit time_t values to x32. The problematic code is as follows (from compat_sys_recvmmsg()):
    if (COMPAT_USE_64BIT_TIME)
            return __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
                                  flags | MSG_CMSG_COMPAT,
                                  (struct timespec *) timeout);
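    /* Note: the (struct timespec *) cast above discards the __user
     * annotation, so __sys_recvmmsg() will dereference this
     * user-controlled pointer as if it were a kernel pointer. */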
The timeout value is passed to that function as:
    struct compat_timespec __user *timeout
It is clearly annotated as a user-space pointer, but it just gets passed to __sys_recvmmsg(). The fix is to use compat_get_timespec() to copy the data from user space before the call to __sys_recvmmsg() and compat_put_timespec() to copy any changes back to user space afterward.
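Based on that description, the corrected code path looks roughly like the following sketch (the variable names are illustrative rather than copied from the actual commit):

    struct timespec timeout_ts;
    int datagrams;

    if (timeout == NULL)
            return __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
                                  flags | MSG_CMSG_COMPAT, NULL);

    /* safely copy the timeout value from user space into kernel memory */
    if (compat_get_timespec(&timeout_ts, timeout))
            return -EFAULT;

    datagrams = __sys_recvmmsg(fd, (struct mmsghdr __user *)mmsg, vlen,
                               flags | MSG_CMSG_COMPAT, &timeout_ts);

    /* ... and copy the updated remaining time back out afterward */
    if (datagrams > 0 && compat_put_timespec(&timeout_ts, timeout))
            datagrams = -EFAULT;

    return datagrams;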
Exploits have started to appear (for example, one by rebel and another by saelo). The basic idea is to use the fact that recvmmsg() will write the amount of time left in the timeout to the location specified by the timeout pointer. Since the value of that pointer is controlled by the user, it can be arranged to write known values (another exploit-controlled address, say) to somewhere "interesting", for example to a function pointer that gets called when the /proc/sys/net/core/somaxconn file is opened (as rebel's exploit does). The program will already have arranged to have "interesting" code (to gain root privileges) located at that address. When the function is called by the kernel via that pointer, the exploit's code is run.
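The underlying write primitive is easy to see in outline. Here is a minimal sketch (not a working exploit; target_addr stands in for whatever kernel address the attacker wants overwritten, such as a function pointer). x32 system calls are selected by setting bit 30 of the syscall number, and recvmmsg is entry 537 in the x32 syscall table, so even an ordinary 64-bit binary can reach the buggy compat path on a CONFIG_X86_X32 kernel:

    #define _GNU_SOURCE
    #include <sys/syscall.h>
    #include <unistd.h>

    #define X32_SYSCALL_BIT 0x40000000UL
    #define X32_NR_recvmmsg (X32_SYSCALL_BIT + 537)

    /* ask the kernel to write the remaining timeout through target_addr;
     * the buggy compat_sys_recvmmsg() dereferences it directly instead of
     * using copy_from_user()/copy_to_user() */
    static long x32_recvmmsg_write(int fd, void *msgvec, unsigned int vlen,
                                   void *target_addr)
    {
            return syscall(X32_NR_recvmmsg, fd, msgvec, vlen, 0, target_addr);
    }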
Users of Ubuntu 13.04 should note that it reached its end of life two days before the bug was found, so no update for that kernel has been issued. One possible solution for those who have not yet upgraded to 13.10 (or are running some other distribution kernel and do not want to patch and build their kernel) is a module that disables the x32 version of the recvmmsg() system call.
As PaX Team noted in the report (quoted by Solar Designer), the presence of this bug certainly calls into question how much testing (fuzz testing in particular) has been done on the x32 ABI. For a bug of that nature to exist in the kernel for two years would also seem to indicate that it isn't just testing that has fallen by the wayside—heavy use would also seem to be precluded. In any case, the problem was found, reported, and fixed; now it is up to users (and any distributions beyond Ubuntu, since we have received no other security advisories beyond those mentioned above) to update their kernels.
ARM, SBSA, UEFI, and ACPI
For some years now we have been promised that ARM-based servers were going to start showing up in data centers. Opinions differ on whether ARM processors can be successful in this market, but there tends to be widespread agreement on a related point: the free-form, highly differentiated nature of ARM-based systems would make them painful to support in large, server-oriented environments, where users expect to be able to treat servers like interchangeable parts. ARM Ltd. clearly understands this problem; its recently announced "Server Base System Architecture" (SBSA) is an attempt to improve the situation. SBSA has been greeted with generally optimistic reviews, but the requirements that are coming along for the ride may yet stir things up in the development community.

In truth, it can be hard to say for sure what the SBSA mandates; the standard is currently kept behind a restrictive license, limiting the number of people who have read it. Arnd Bergmann described it this way (in the comments):
Arnd went on to describe the requirements as "extremely reasonable". Olof Johansson, one of the maintainers of the arm-soc kernel tree, was also supportive of the idea:
In short, the SBSA is trying to create a base platform that can be assumed to be present as part of any compliant system. ARM has always lacked that platform, which is part of why supporting ARM systems has traditionally been a messy affair. To the extent that the SBSA succeeds, it will make life easier for kernel developers, hardware manufacturers, and server administrators; it is hard to be upset about that.
Nonetheless, the SBSA announcement has stirred up some heated discussion in the community. But the SBSA is mostly guilty by association; the controversial part of the platform is the firmware requirements, which are not addressed by the SBSA at all. Instead, these requirements will be released as part of a separate specification. The details of that specification are not known, but it has been made clear that it will mandate the use of both UEFI and ACPI on compliant systems.
UEFI is the low-level firmware found on most current PC systems. Like any firmware, UEFI has caused its share of misery, but, for the most part, developers are fine with its use in this context. UEFI works well enough, it has an open-source reference implementation, and supporting UEFI is not hard for the kernel to do. So there is no real opposition to the idea of supporting UEFI on ARM systems.
ACPI (the "Advanced Configuration and Power Interface") is another story. Getting ACPI working solidly on x86 systems was a long and painful process. Its detractors cite a few reasons to believe that it could be just as bad — if not worse — in the ARM world. For example, most ARM system-on-chip (SoC) vendors currently have no experience with ACPI, so they are going to have to come up to speed quickly, repeating lots of mistakes on the way. Each one of those mistakes is likely to find its way into deployed systems, meaning that the kernel would have to support them indefinitely.
There are also concerns about how well an ACPI-based ARM platform will come together. In the PC world, it became clear fairly quickly that the specification only meant so much when it came to hardware support. The real acid test was not compliance with a spec; instead, it was the simple question of "does Windows run on it?" Once Windows worked, firmware authors tended to stop fixing things. That led to numerous situations where the Linux kernel has to carefully do things exactly as Windows does, since that's the only well-tested mode of operation. Windows compatibility is not the most satisfying compliance test out there, but it did result in ACPI implementations converging sufficiently to allow them to be supported in a generic manner.
Windows does not have the same dominating position in the ARM server market; indeed, it's not clear that Windows will be offered on such systems at all. It is certainly possible that, say, Red Hat Enterprise Linux could play a similar role in this space. But it's also possible that vendors will just try to push lots of patches into the kernel to support their specific ACPI implementations. The result could be an incompatible, bug-ridden mess that takes many years to settle out.
Finally, there is the question of whether ACPI is needed at all. ACPI is, in the end, a standardized way to enable the operating system to discover and initialize the system's hardware. But, ACPI critics point out, the ARM architecture already has such a mechanism: device trees. The device tree work is reaching a point where it is reasonably mature; as Olof recently noted in a pull request to Linus:
The developers who have been working on getting this system working well are now asking: why should that work be pushed aside in favor of a PC standard with no history in the ARM world? Among certain developers one can easily pick up a feeling that the kernel should simply refuse ARM ACPI support and mandate the use of device trees on all ARM systems.
In the end, it is hard to see any such thing happening; Linux kernel development has almost always been done in such a way as to favor running on as many systems as possible. And there may well be technical reasons for favoring ACPI on some systems, especially in situations where strict compatibility has to be maintained for years. As Grant Likely put it in a lengthy posting about the upcoming firmware standards, ACPI can make it easier for manufacturers to keep things compatible by hiding the details of the hardware behind a firmware-supplied abstraction.
As Grant points out, that abstraction runs counter to the way things have traditionally been done on ARM-based systems; normal practice is to go through a great deal of pin configuration, regulator setup, clock programming, and more just to get things into an operational state. ACPI pushes a lot of that work into the firmware, taking it out of kernel developers' hands. That, perhaps, is where some of the resistance comes from: kernel developers like that control and are reluctant to cede it to firmware authors. It just doesn't feel right if you don't have to establish the right pinmux configuration before anything will work.
Still, ARM servers with ACPI are coming, and the kernel will almost certainly support them. The kernel will also, of course, continue to support device-tree-based systems; the chances of ACPI moving into the embedded world in the near future seem relatively small. After a while, ACPI on ARM will just be another configuration supported by the kernel, and people will be wondering why it was ever controversial. But "a while" may turn out to be a longer period of time than some people expect.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Documentation
Memory management
Security-related
Page editor: Jonathan Corbet
Distributions
Wrangling releases and messaging with openSUSE
The openSUSE project has historically made its releases at an eight-month pace. Since the most recent was 13.1, released in November 2013, that schedule would place 13.2's release date around July 2014. But the project recently decided to push the release date back another four months, and take advantage of the extra time to beef up some key parts of the distribution's infrastructure. The announcement of the plan, however, resulted in quite a bit of confusion among openSUSE volunteers—particularly as to how the paid SUSE employees would be participating—but, fortunately, the facts have since been sorted out.
The eight-month release cycle has been the standard for several years now, and—despite not dividing evenly into the year—is a predictable pace: releases arrive in November, then July, then March, and so on. But the July releases have proven problematic in the past (at least, recently), particularly due to the travel interruptions of conferences and vacations taken by those in the northern hemisphere. The last release slated for a July debut was 12.2 (in 2012), which was eventually delayed until September.
In the intervening time, however, openSUSE-the-project has also experienced considerable growth, especially where the number of packages is concerned. That growth has strained the release process, adversely affecting a number of stages—such as package building on the Open Build Service (OBS), quality assurance (QA), and the roll out of releases themselves. Back in December 2013, Jos Poortvliet first raised the possibility of skipping the next July release and allowing the openSUSE Team at SUSE to instead devote some time to improving OBS, QA, and other pieces of the release infrastructure. That proposal eventually evolved into the current plan, which pushes back the next release by four months, rather than skipping it entirely (and thus leaving 16 months between 13.1 and 13.2). In the interim, the SUSE openSUSE Team would work on long-delayed improvements to OBS, the openQA service, and other components of the release infrastructure.
Mixed-up confusion
On January 30, Michal Hrusecky posted a message alluding to this plan, but omitting many of the details. It was signed "The openSUSE Team", and noted that the team had made a decision about postponing the 13.2 release and pursuing other work, but fell short of being a clear announcement about the decision. The wording of that message sparked the concerns of many in the openSUSE community. In particular, Hrusecky noted that the team could not guarantee that it would complete its planned work in sufficient time to start back into the release schedule as normal and, as a result, it expected volunteers to help out. He also described the potential for less involvement from SUSE, including reduced maintenance and security updates.
The prospect of reduced QA and security updates from SUSE upset many in the community, who did not see why SUSE's Maintenance Team and Security Team should be unable to do their regular update work just because a particular release was coordinated by community volunteers. Hrusecky replied that he had not yet brought the subject up with the Maintenance and Security teams, and shortly thereafter provided an update indicating that both of those teams would indeed support the 13.2 release, despite the differing circumstances.
In addition, though, there was widespread confusion on several other points mentioned in the announcement. Jiri Slaby pointed out that the initial announcement made references such as "As you know we had a week-long meeting in Nuremberg" and "Quite a few of the things we've been working on (openQA, OBS workflow etc) were shared by now" when, in fact, people outside of SUSE's openSUSE Team were not clear on the meeting or the specifics of the openSUSE Team's plans.
Furthermore, the announcement's brief synopsis of the 13.2 delay apparently failed to communicate what would happen in the interim, which led to another problem. Several project members, such as Carlos E.R., took the message to mean that SUSE was reducing its involvement in openSUSE, or perhaps ending it altogether. Robert Schweikert (himself a SUSE employee, although in a different group) went so far as to thank the SUSE openSUSE Team for its work and call for volunteers to take on the team's now-open roles.
In reality, though, SUSE was not reducing its involvement in the project; that was a misreading of the announcement—albeit an easily understandable one, due to the "community release" comments and omission of key details. Once the reduction-in-involvement idea had gotten out there (particularly with an @suse.com email address attached to it), however, it proved hard to correct.
Although team members responded to most of the questions on the opensuse-factory and opensuse-project mailing lists, the confusion quickly spread elsewhere. Andrew Wafaa pointed out that the announcement and its most extreme misinterpretations had been picked up by many in the general public (including several comments here at LWN), thus "scaring the bejeezus out of the community", and complained that even after Hrusecky's clarification, there was still "no clear message of what was originally intended". Others were similarly critical of the messaging, noting that it did not help matters that the announcement and subsequent discussion were split up between multiple openSUSE mailing lists.
The SUSE openSUSE Team did indeed recognize that the initial announcement had resulted in considerable confusion—including the decidedly unexpected interpretation that SUSE was scaling back its commitment to the distribution—and, to its credit, began trying to set the record straight in earnest.
I shall be released
 Greg Freemyer reiterated the plan
for 13.2, starting with the reasons that led to the decision to push
back the release date.  openSUSE release manager Stephan Kulow posted a much shorter announcement in a
new thread that focused on the November release date and promised
users that milestones for 13.2 would follow approximately the same
schedule as last year's 13.1 release.  The team also posted a blog
entry on February 3, issuing its own mea culpa and
highlighting the news for those not following the relevant mailing
lists.
 By and large, the confusion appears to have settled down as of
press time, although there is still evidence that pockets of
misunderstanding persist, such as Marcus Moeller's February 5 message asking how the 13.2 release could
happen without Kulow.  The openSUSE team may find itself putting out
such smaller fires for quite some time.
 Arguably the bigger question, though, is whether or not there are
lessons to be learned from the confusing announcement and subsequent
flurry of disagreement.  Perhaps it is just one more example of how
quickly news can travel in Internet Time, and how difficult it can be
to spread a correction to a widespread rumor.  There have certainly
been other examples of Linux distributions having an announcement
misinterpreted and quickly spiraling into an unexpected bad-news
cycle; one that springs immediately to mind is Ubuntu's October 2012
announcement
that it would invite outside developers into previously in-house
teams, which was quickly misinterpreted to mean that the distribution
was taking
development closed.
 It is impossible to predict how an announcement will be
misread, of course, but there may be a few points to take
away from this particular incident.  As has already been noted, one of
the biggest early problems was Hrusecky's speculation that 13.2
would not receive maintenance support or security updates.  In his
defense, that statement was meant as an attempt to not speak on behalf of other
teams, but in practice he did get concrete answers from those teams
quickly once he asked, and had he made that inquiry beforehand,
considerable confusion could have been avoided.  Philipp Wagner pointed out that he was confused by the
term "openSUSE Team at SUSE," which he mistakenly took to mean "all
SUSE employees who work on openSUSE"—when it in fact refers to
just one of several openSUSE-tasked groups working at SUSE.
 Perhaps a rename would help in the future, but perhaps not.
Perhaps making an announcement in a blog post rather than on a mailing
list would help (on the grounds that many of the project's lists are specific enough that only a minority of participants subscribe to any list), but perhaps that is a red herring as well.
Ultimately, a poor choice of wording and lack of detail were the main
problems with the announcement, regardless of how it was published.  A
lengthier pre-publication review process might have saved a lot of
trouble.
 It could be argued that the important thing for a project is the
ability to recognize a communication breakdown and respond 
appropriately—and quickly—to it.  On that front, the
openSUSE team did a fair job.  But the real shame is that the incident so
overshadowed the team's actual news, which offers the prospect of better
tooling and testing for openSUSE releases over the coming months.
That should have been news that the openSUSE community was happy to hear.
 
Brief items
Distribution quotes of the week
At FOSDEM, we thought that we need a name for such a mindset.
Between beers, that name came to be "debops". (It's not just Debian, though: many other distributions get it right, too)
Mageia 4 released
Version 4 of the Mageia distribution is out. "There is a wide choice of desktop environments and languages, along with a variety of new and updated packages." See the release notes for details.
Distribution News
Debian GNU/Linux
Debian's technical committee starts another init system vote
Debian technical committee member Ian Jackson has posted a new call for votes on what the default init system should be in the upcoming "jessie" release. There are ten options to vote on, with various combinations of init systems and whether packages can require a specific init system; there are also options for "further discussion" or to punt the question to a general resolution. The technical committee rules allow a week for the vote to run its course; members can change their votes in the middle if they want.
Debian's architecture health check
The Debian release team has posted the results of its latest architecture health check. Interestingly, neither the HURD nor the kFreeBSD port — both of which have featured prominently in the init system debate — is currently on-track to be part of the "jessie" release. HURD looks hopeless, but kFreeBSD might be able to improve its fate if discussions with the release team on "reducing the scope" of the port can come to a mutually agreeable conclusion.
Fedora
FESCo announces acceptance of Fedora.next PRDs
The Fedora Engineering and Steering Committee (FESCo) has accepted Product Requirements Documents (PRDs) from the Workstation, Server, Cloud, and Environments and Stacks working groups. "We are now ready to move on to concrete technical plans for how the products can be created, tested, distributed and marketed."
openSUSE
Changes at openSUSE
After some confusing communications (example) the folks at SUSE have come clean on a change for the openSUSE distribution: paid SUSE staff will no longer work on creating openSUSE releases. It is claimed that the amount of work going into openSUSE is not decreasing, it is just being put into other areas. Meanwhile, the community is trying to figure out how to "release without full time paid worker bees". The current plan seems to be to put out 13.2 in November, with SUSE still providing security support thereafter.
Update: see also this note from Greg Freemyer. "The openSUSE team @ suse therefore has decided to take a 8-month period to push away from day-to-day issues and instead focus on the improvements needed in [the Open Build Service] and openQA to handle the requirements caused by the success of OBS."
Red Hat Enterprise Linux
Red Hat Enterprise Linux 3 Extended Life Cycle Support Retirement Notice
Red Hat's RHEL 3 Extended Life Cycle Support has reached its end of life. "In accordance with the Red Hat Enterprise Linux Errata Support Policy, Extended Life Cycle Support (ELS) for Red Hat Enterprise Linux 3 was retired on January 30, 2014, and support is no longer provided. Accordingly, Red Hat will no longer provide updated packages, including critical impact security patches or urgent priority bug fixes, for Red Hat Enterprise Linux 3 ELS after January 30, 2014. In addition, technical support through Red Hat's Global Support Services will no longer be provided after this date."
Other distributions
Update on Scientific Linux
Scientific Linux takes a look at what the Red Hat/CentOS merger means for them. "There are still many questions to pursue as the details of CentOS Special Interest Groups continue to evolve. The anticipated release of Red Hat Enterprise Linux 7 presents an opportunity to consider forming/joining a CentOS Special Interest Group and producing Scientific Linux 7 as a CentOS variant. The variant structure may allow greater flexibility in adapting the distribution to scientific needs. The framework and relationship structure of CentOS Special Interest Groups is still under heavy discussion on the CentOS development list. This is only being evaluated for Scientific Linux version 7." (Thanks to Scott Dowdle)
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 544 (February 3)
- Gentoo Monthly Newsletter (January 2014)
- Ubuntu Weekly Newsletter, Issue 353 (February 2)
Page editor: Rebecca Sobol
Development
Systemd programming part 1: modularity and configuration
Systemd's positive and negative features have been discussed at length; one of the first positives I personally noticed was seen from my perspective as an upstream package maintainer. As the maintainer of mdadm and still being involved in the maintenance of nfs-utils, one of my frustrations was the lack of control over, or even much visibility into, the way these packages were integrated into the "init" system on each distribution. Systemd has the potential to give back some of that control while still giving flexibility to distributors and administrators; this article (and the one that follows) will look at systemd's programming features to show how that works.
Once upon a time, all the major distributions were using SysVinit and each had their own hand-crafted scripts to handle the startup of "my" packages. While I could possibly have copied all of those scripts into the upstream package and tried to unify them, it would have been problematic trying to do this for all distributions, and it is unlikely that many distributions would have actually used what was offered. We did have scripts for Debian in the nfs-utils package for a while, but it turned out that this didn't really help the Debian developer at the time and they were ultimately removed when they proved to be more of a hindrance.
Not all packagers will care about having this visibility or control. However, with both mdadm and nfs-utils there are issues with recovery after failures which are not entirely straightforward to deal with. It is very possible for a distribution maintainer to put together an init script which works perfectly well in most cases, but can fail strangely in rare cases. I first came across this problem with nfs-utils, which has subtle ordering requirements for starting daemons on the server if a client held a lock while the server rebooted. Most (possibly all) distributions got this wrong. The best I could do to fix it was to get a fix into the distribution I was using at the time and update the README file. Whether that actually helped any other distribution is something I'll never know.
It was against this backdrop that I first considered systemd. The contrast to SysVinit was and is substantial. While the configuration files for SysVinit are arbitrary shell scripts with plenty of room for individual expressiveness and per-distribution policy, the configuration for systemd is much more constrained. Systemd allows you to say the things you need to say but provides very little flexibility in how you say them; systemd also makes it impossible to say things irrelevant to the task at hand, such as color coding status messages. This means that there is much less room for per-distribution differences and, thus, much less motivation for distribution packagers to deviate from a configuration provided by upstream.
So when it came time to replace the SysVinit scripts that openSUSE uses for nfs-utils with some systemd unit files I decided to see if I could do it "properly." My initial baseline definition for "properly" was that the unit files should be suitable for inclusion in the upstream nfs-utils package. This in turn means they must be suitable for every distribution to use, and must be of sufficient quality to pass review by my peers without undue embarrassment.
This effort, together with the work I had already done to convert mdadm to use systemd unit files, caused me to look at systemd from the perspective of programming and programming language design. Systemd is a powerful and flexible tool that can be programmed through a special-purpose language to unite the various tools in the packages that I help maintain, in order to provide holistic functionality.
Modularity
In the early days of Unix, there was a script called /etc/rc which started all the standard daemons, and maybe it would run /etc/rc.local to start a few non-standard ones. When you only have 32KB of RAM, you probably don't want to run so many daemons that the lack of modularity provided by these files becomes a problem. As the capacity of hardware increased so too did the need for modularity, with modern SysVinit allowing a separate script (or possibly scripts in the plural) for each distinct package.
Systemd takes this one step further by allowing, and in fact requiring, a separate unit file for each daemon or for each distinct task. For packages that just provide a single daemon this is probably ideal. For nfs-utils, this requirement borders on being clumsy.
The draft unit-file collection I recently posted for review has 14 distinct unit files with a total of 168 lines (including blanks and occasional comments). They replace two SysVinit scripts totaling 801 lines, so the economy of expression cannot be doubted. However, it does mean that I cannot simply open "the unit files" in an editor window and look over them to remind myself how it works or look for particular issues. For mdadm, which has 4 systemd unit files, this is a minor inconvenience. For nfs-utils it really feels like a barrier.
These 14 files include eight which actually run daemon processes, two which mount special virtual filesystems which the tools use for communicating with the kernel, and four which are "target" units. "Targets" are sometimes described as "synchronization points" in the systemd documentation. A target might represent a particular level of service such as network-online.target or multi-user.target. For nfs-utils we have, for example, nfs-server.target. This target starts all the various services required for, or useful to, NFS service. The individual daemons don't necessarily depend on each other, but the service as a whole depends on the collection and so gets a separate target.
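A target unit is little more than a named set of dependencies. As a rough sketch (the service names here are illustrative, not copied from the actual nfs-utils units), nfs-server.target might look like:

    [Unit]
    Description=NFS server and related services
    # Wants= pulls in each daemon when the target is started,
    # without making the daemons depend on one another.
    Wants=nfs-mountd.service nfs-idmapd.service nfs-server.service

Starting the target then starts all of the listed services, while each individual service can still be stopped or restarted on its own.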
These target units are well placed to provide a clean module structure. There is sometimes a need for unit files belonging to one package to reference unit files belonging to another package, such as the reliance on rpcbind.target in several nfs services. Restricting such references to target units would allow a clean separation between the API (.target) and the implementation (.service etc). Unfortunately the "systemctl" command handles an abbreviated unit name but it assumes a ".service" suffix rather than a ".target" suffix. This tends to discourage the use of targets and blurs the line between API and implementation.
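For example, assuming the units above are installed:

    systemctl start nfs-server          # acts on nfs-server.service
    systemctl start nfs-server.target   # the target must be named in full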
This ability to collect units together into a virtual unit while allowing the individual details of the units to be managed separately is a distinct "plus" for systemd from the modularity perspective. The insistence that units each have their own file, together with systemctl's unfortunate default behaviour, are small "minuses".
Configuration
As a programmer I try to choose sensible defaults but also to allow users of my code to customize or configure some settings to meet their particular needs. As a programmer using the systemd language I have two sorts of users to keep in mind: distribution maintainers who will package my project for particular distributions, and system administrators who will use it to create a useful computer system.
I want both of these groups to be able to make any configuration changes they need, but at the same time I want to retain some degree of control. If I find a bug, for example some subtle ordering issue for different daemons, then I want to be able to fix that in the upstream package and have some confidence that the changes will flow through to all users. In this I seem at variance with the systemd developers, or at least with an opinion expressed two and a half years ago in systemd for Administrators, part IX.
That document asserts that "systemd unit files do not include code", and that "they are very easy to modify: just copy them from /lib/systemd/system to /etc/systemd/system and edit them there".
The first of these points is largely a philosophical one, but one which the attentive reader will already see that I disagree with. Any time I am writing specific instructions to bring about a specific effect, I am writing code — whether it is written in C, Python, or systemd unit-file language.
The second is a more practical concern. If system administrators were to take this advice and replace one of my carefully crafted unit files, then the bug-fix I release may not have a chance to work for them — an outcome I would rather avoid.
So I don't want system administrators to feel the need to edit my unit files, but equally I don't want distribution packagers to edit them either. Partly this is a pride issue — I want upstream to be a perfect fit for everyone. Partly this is a quest for uniformity — packagers should feel free to send any patches they require upstream so that everyone benefits. And partly it is a support issue. If I get a bug report, I want to be able to ask the reporter to fetch and install the current upstream version and be fairly confident that it won't break any distribution-specific features.
So my goal, as a programmer, is to ensure my users can configure what they need without having to change my code. Once again systemd gets a mixed score-card.
Systemd allows for so-called "drop-ins" to extend any unit file. When looking for the file to load a particular unit, systemd will search a standard list of directories for a file with the right name. Early in this list are directories for local overrides such as /etc/systemd/system mentioned above. Late in the list is the location for the primary file to use — /lib/systemd/system in the quote above, though often /usr/lib/systemd/system on modern installations.
After finding the unit file, systemd will search again, this time for a directory with the same name as the file except with ".d" appended. If that is found, any files in the directory with names ending ".conf" are read and can extend the unit description. These ".conf" files are referred to as "drop-ins."
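To make the search concrete: for a hypothetical unit named foo.service, the lookup would consider (among other directories):

    /etc/systemd/system/foo.service              <- local replacement, wins if present
    /usr/lib/systemd/system/foo.service          <- the packaged (upstream) unit file
    /etc/systemd/system/foo.service.d/*.conf     <- drop-ins, merged into the unit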
The ordering of directives in unit files is largely irrelevant so any directive can be replaced or, if relevant, extended by a drop-in file. So for example an "ExecStartPre" directive could be given to add an extra program to be run before the service starts. This facility allows both the packager and sysadmin to impose a variety of configuration changes without having to edit my precious unit files. Things like WorkingDirectory, User and Group, CPUSchedulingPriority, and more can easily be set if that is desired to meet some local need. However, there is one common area of configuration that isn't so amenable to change through drop-in files: the command-line arguments for the process being run.
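To illustrate the "ExecStartPre" case just mentioned: a hypothetical drop-in, say /etc/systemd/system/nfs-server.service.d/prestart.conf, could contain:

    [Service]
    # run an extra (purely illustrative) setup program before the daemon starts
    ExecStartPre=/usr/local/sbin/nfs-prestart

The directive is merged into the unit as though it had appeared in the packaged file.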
When I examine the configuration that openSUSE SysVinit scripts allow for various daemons in nfs-utils, the one that stands out is the adding of extra command-line options. This may involve setting the number of threads that the NFS server might use, allowing mountd to run multi-threaded or provide alternate management of group IDs, or setting an explicit port number for statd to use, among others.
It is certainly true that a drop-in file can contain text like:
    ExecStart=
    ExecStart=/some/program --my --list=of arguments
where the assignment of an empty string removes any previous assignment (ExecStart is defined as a list, so that a "one-shot" service can run a list of processes) and the second assignment gives the desired complete list of command line arguments. However, this approach can only replace the full set of arguments; it cannot extend them.
This gets back to my desire for control. If an upstream fix adds or changes a command-line argument, then any installation which uses a drop-in to replace the arguments to that daemon will miss out on my change.
There is also a question of management here. Many distributions don't expect system administrators to edit files directly but, instead, provide a tool like YaST (on openSUSE) or debconf (on Debian) which manages the configuration through a GUI interface or a series of prompts. For such a tool, it is much easier to manipulate a file with a well-defined and simple structure — such as the /etc/sysconfig files in openSUSE and Fedora or the similar /etc/defaults files in Debian — which are essentially a list of variable assignments. It is quite straightforward for user-interface tools to manipulate this file, and then for the SysVinit scripts to interpret the values and choose the required command line arguments.
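Such files are nothing more than lists of shell-style variable assignments. A hypothetical /etc/sysconfig/nfs excerpt, using the variable names that appear in the examples below, might read:

    # number of kernel NFS server threads to start
    USE_KERNEL_NFSD_NUMBER="4"
    # explicit port for rpc.mountd; leave empty for a dynamically assigned port
    MOUNTD_PORT=""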
While systemd can certainly read these same configuration files (with the EnvironmentFile directive) and can expand environment variables in command line arguments to programs, it lacks any sophistication. Non-optional parameters are easily handled, so:
    /usr/sbin/rpc.nfsd -n $USE_KERNEL_NFSD_NUMBER
would work as expected, but optional parameters cannot be managed so easily. While the Bourne shell (which interprets SysVinit scripts) and Upstart would both support:
    /usr/sbin/rpc.mountd ${MOUNTD_PORT:+-p $MOUNTD_PORT}
to include the "-p" only if $MOUNTD_PORT were non-empty, systemd cannot do this. It could be argued that this syntax has little to recommend it and that systemd might be better off without it. However, there is a clear need for this sort of functionality, at least unless or until various configuration management tools learn to create complete command lines directly.
There are at least two possible responses that are worth considering to address this need. The first is to modify all the daemons to accept any command line configuration also from environment variables. For example mountd could be changed to examine the MOUNTD_PORT environment variable if no -p option were given, and use that value instead.
This could be seen as a rather intrusive change. However, since I am looking from the perspective of an upstream developer, writing code in each daemon is not really harder or easier than writing code in systemd unit files. It is also very much in the style of systemd, which seems to encourage authors of daemons to design them to work optimally with systemd, recommending use of libsystemd-daemon, for example, when appropriate. This can maximize the effective communication between systemd and the services it runs, and remove any "middle-men" which don't really add any long-term value.
So this seems the best solution for the longer term, but as it requires some degree of agreement among developers and co-ordination between distributions, it doesn't seem like the best short-term solution.
The other option is to create a "middle-man" exactly as suggested. That is, have a script which can read a distribution-specific configuration file, interpret the contents, and construct the complete command lines for a collection of daemons. These command lines would be written to a temporary file which systemd could read before running the daemons.
So we could have an "nfs-config.service" unit like:
    [Unit]
    Description=Preprocess NFS configuration
    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/usr/lib/systemd/scripts/nfs-utils_env.sh
which is tasked with creating /run/sysconfig/nfs-utils. The "nfs-utils_env.sh" script would be distribution-specific to allow the packager to create a perfect match for the configuration options that the local config tool supports.
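A minimal sketch of such a script, handling only the two variables used as examples in this article (a real version would cover every option the distribution supports), might be:

    #!/bin/sh
    # Sketch of a distribution-specific helper: read the local
    # configuration file and write complete command lines where the
    # unit files can pick them up via EnvironmentFile.
    [ -f /etc/sysconfig/nfs ] && . /etc/sysconfig/nfs

    mkdir -p /run/sysconfig
    {
        echo "NFSD_ARGS=-n ${USE_KERNEL_NFSD_NUMBER:-4}"
        # pass -p only when a port has actually been configured
        if [ -n "$MOUNTD_PORT" ]; then
            echo "MOUNTD_ARGS=-p $MOUNTD_PORT"
        else
            echo "MOUNTD_ARGS="
        fi
    } > /run/sysconfig/nfs-utils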
Then each nfs-utils unit file would contain something like:
    [Unit]
    Wants=nfs-config.service
    After=nfs-config.service
    [Service]
    EnvironmentFile=-/run/sysconfig/nfs-utils
    ExecStart=/usr/sbin/rpc.mountd $MOUNTD_ARGS
so that the "nfs-config" service would be sure to have run first and created the required EnvironmentFile, which would contain, among other things, the value of MOUNTD_ARGS.
Assessing this support that systemd provides (or fails to provide) for configuration objectively is hard. On the one hand, it seems to require me to create this "nfs-config" service, which feels a bit like a hack. On the other hand, it encourages me to press forward and change my daemon processes to examine the environment directly, which is probably a better solution than any sort of service that systemd could provide to convert environment variables directly.
So, generally, for configuration, systemd gets a thumbs-up, but I'm reserving final judgment for now.
Thus far, we have seen a pattern with modularity and service configuration, where systemd provides good functionality but also a few significant frustrations. This pattern will continue in the second part, when we look at how unit files may be "activated" to become running services, and at how the set of unit directives as a whole fares when viewed as a programming language.
Brief items
Quote of the week
Nevertheless, I also asked for reducing exceptions of copyrights, because to give a temporary access to non-free works at zero cost has the consequence of reducing the incentive for creating alternatives that can be copied, modified and redistributed freely.
LibreOffice 4.2 released
The LibreOffice 4.2 release is out. "LibreOffice 4.2 features a large number of performance and interoperability improvements targeted to users of all kinds, but particularly appealing for power and enterprise users. In addition, it is better integrated with Microsoft Windows." See this article from last October for more information on what the LibreOffice project has been working on.
tox-1.7.0 available
Version 1.7.0 of the tox generic test-running suite for Python has been released. This update adds several new features, including the ability to start tests with a random seed value, but it also drops support for Python 2.5.
cwrap 1.0.0 available
Version 1.0.0 of the cwrap libc
wrapper has been released.  Cwrap is "a set of tools to create a
fully isolated testing network environment to test client/server
components on a single host. It provides synthetic account
information, hostname resolution and privilege separation
support.
"  Although the project originated from within Samba,
it is independent and usable in many other environments.
The Eclipse Foundation turns 10
The Eclipse Foundation recently passed its tenth anniversary. In commemoration of the event, Mike Milinkovich posted a retrospective on his blog. Milinkovich was also interviewed about the milestone, where he notes "We became the first open source organization to show that real competitors could collaborate successfully within the community. When BEA and Borland joined in 2005 as strategic members, they effectively validated our claim of vendor neutrality. Both were important, but BEA in particular was a fierce IBM competitor at that time."
GNU Xnee 3.17 released
Version 3.17 of GNU Xnee is now available. Xnee is a framework for recording, replaying, and distributing X11 actions, usable for test automation or "macro recording" functionality.
Firefox 27 released
Firefox 27 is available. From the release notes: "You can now run more than one service at a time with Firefox SocialAPI, allowing you to receive notifications, chat and more from multiple integrated services." This version also enables TLS 1.1 and TLS 1.2 by default, adds support for the SPDY 3.1 protocol, and more.
Newsletters and articles
Development newsletters from the past week
- What's cooking in git.git (January 29)
- What's cooking in git.git (February 3)
- LLVM Weekly (February 3)
- OCaml Weekly News (February 4)
- OpenStack Community Weekly Newsletter (January 31)
- Perl Weekly (February 3)
- PostgreSQL Weekly News (February 3)
- Python Weekly (January 31)
- Ruby Weekly (January 30)
- This Week in Rust (February 1)
- Tor Weekly News (February 4)
Kelly: Defensive Patent Publication for Qt
Stephen Kelly has written a blog post describing his recent experience documenting type-erased container features in Qt5—a project he undertook to serve as a defensive patent publication. Defensive publications serve as documentation of prior art in the event that someone attempts to patent the ideas described, but many corners of the community are still getting the hang of the process involved. Kelly's effort required iterations to "extend the description of the method, make the description less-specific to C++ and particular operations on containers, add a diagram, and show how the prose of the description relates to the reference implementation," among other changes. "We are learning more about creating such publications in the process of doing them, and the results will grow better with time", he says. "A rule of thumb is that if an implementation of a method in Qt is worth blogging about or talking about at a conference, it is probably worth of a defensive patent publication."
Lukas: The (Sad) State of Mobile XMPP in 2014
Georg Lukas has posted a look at various factors hindering XMPP instant messaging in mobile environments, and how applications have tackled them on various platforms. In particular, that list includes the difficulties of transient connectivity, synchronization, and encryption. The discussion ends with a look at the next round of challenges: "The next big thing is to create an XMPP standard extension for end-to-end encryption of streaming data (files and real-time), to properly evaluate its security properties, and to implement it into one, two and all the other clients. Ideally, this should also cover group chats and group file sharing."
Libre Graphics World: Luminance HDR 2.4.0 gets FITS support and 32bit TIFF exporting
Libre Graphics World reviews a handful of new astrophotography features in the Luminance HDR image processor. Luminance now supports the Flexible Image Transport System (FITS) file format used for astronomy images and up to 32-bit-per-channel TIFF images.
Page editor: Nathan Willis
Announcements
Articles of interest
Free Software Supporter - Issue 70, January 2014
The Free Software Foundation's newsletter for January covers the FSF licensing team, LibrePlanet 2014, speaking out against the TPP, an interview with Joerg Henrichs of SuperTuxKart, a GNU toolchain update, and several other topics.
FSFE Newsletter – February 2014
The Free Software Foundation Europe presents the February edition of its newsletter. Topics include free software in Italy, good news and bad news in the EU, Munich's GNU/Linux migration, compulsory routers, and more.
Calls for Presentations
Linux Plumbers Conference Call for Microconference Proposals
The Call for Microconferences for the 2014 edition of the Linux Plumbers Conference is open. LPC will take place October 15-17 in Düsseldorf, Germany. "Microconference proposals are due as soon as possible, however we understand that with dynamic development, issues that need discussion can arise at any time. We will try to accommodate incoming proposals as well as we can, depending on the slots available."
oSC14 CfP and Registration Open
The openSUSE Conference (oSC14) will take place April 24-28 in Dubrovnik, Croatia. Registration is open, and the call for papers is open until February 28.
CFP Deadlines: February 6, 2014 to April 7, 2014
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location | 
|---|---|---|---|
| February 8 | February 14–February 16 | Linux Vacation / Eastern Europe Winter 2014 | Minsk, Belarus |
| February 9 | July 21–July 27 | EuroPython 2014 | Berlin, Germany |
| February 14 | May 12–May 16 | OpenStack Summit | Atlanta, GA, USA |
| February 27 | August 20–August 22 | USENIX Security '14 | San Diego, CA, USA |
| March 10 | June 9–June 10 | Erlang User Conference 2014 | Stockholm, Sweden |
| March 14 | May 20–May 22 | LinuxCon Japan | Tokyo, Japan |
| March 14 | July 1–July 2 | Automotive Linux Summit | Tokyo, Japan |
| March 14 | May 23–May 25 | FUDCon APAC 2014 | Beijing, China |
| March 16 | May 20–May 21 | PyCon Sweden | Stockholm, Sweden |
| March 17 | June 13–June 15 | State of the Map EU 2014 | Karlsruhe, Germany |
| March 21 | April 26–April 27 | LinuxFest Northwest 2014 | Bellingham, WA, USA |
| March 31 | July 18–July 20 | GNU Tools Cauldron 2014 | Cambridge, England, UK |
| March 31 | September 15–September 19 | GNU Radio Conference | Washington, DC, USA |
| March 31 | June 2–June 4 | Tizen Developer Conference 2014 | San Francisco, CA, USA |
| March 31 | April 25–April 28 | openSUSE Conference 2014 | Dubrovnik, Croatia |
| April 3 | August 6–August 9 | Flock | Prague, Czech Republic |
| April 4 | June 24–June 27 | Open Source Bridge | Portland, OR, USA |
| April 5 | June 13–June 14 | Texas Linux Fest 2014 | Austin, TX, USA |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: February 6, 2014 to April 7, 2014
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location | 
|---|---|---|
| February 7–February 9 | Django Weekend Cardiff | Cardiff, Wales, UK |
| February 7–February 9 | devconf.cz | Brno, Czech Republic |
| February 14–February 16 | Linux Vacation / Eastern Europe Winter 2014 | Minsk, Belarus |
| February 21–February 23 | conf.kde.in 2014 | Gandhinagar, India |
| February 21–February 23 | Southern California Linux Expo | Los Angeles, CA, USA |
| February 25 | Open Source Software and Government | McLean, VA, USA |
| February 28–March 2 | FOSSASIA 2014 | Phnom Penh, Cambodia |
| March 3–March 7 | Linaro Connect Asia | Macao, China |
| March 6–March 7 | Erlang SF Factory Bay Area 2014 | San Francisco, CA, USA |
| March 15–March 16 | Chemnitz Linux Days 2014 | Chemnitz, Germany |
| March 15–March 16 | Women MiniDebConf Barcelona 2014 | Barcelona, Spain |
| March 18–March 20 | FLOSS UK 'DEVOPS' | Brighton, England, UK |
| March 20 | Nordic PostgreSQL Day 2014 | Stockholm, Sweden |
| March 21 | Bacula Users & Partners Conference | Berlin, Germany |
| March 22–March 23 | LibrePlanet 2014 | Cambridge, MA, USA |
| March 22 | Linux Info Tag | Augsburg, Germany |
| March 24–March 25 | Linux Storage Filesystem & MM Summit | Napa Valley, CA, USA |
| March 24 | Free Software Foundation's seminar on GPL Enforcement and Legal Ethics | Boston, MA, USA |
| March 26–March 28 | Collaboration Summit | Napa Valley, CA, USA |
| March 26–March 28 | 16. Deutscher Perl-Workshop 2014 | Hannover, Germany |
| March 29 | Hong Kong Open Source Conference 2014 | Hong Kong, Hong Kong |
| March 31–April 4 | FreeDesktop Summit | Nuremberg, Germany |
| April 2–April 4 | Networked Systems Design and Implementation | Seattle, WA, USA |
| April 2–April 5 | Libre Graphics Meeting 2014 | Leipzig, Germany |
| April 3 | Open Source, Open Standards | London, UK |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
 
           