Many different languages have a read-eval-print loop (REPL) for interactive use, but almost inevitably the REPL implementation is too simple for some people. Features like history retrieval and editing, incorporating graphics for data visualization, adding annotation capabilities, and so on, are often things that users want and, thus, a part of any replacement. IPython, or "interactive Python", is a popular substitute for the simple, text-only REPL provided by the standard C Python interpreter. It can also be used in combination with the IPython Notebook to put IPython "documents" into interactive web pages.
The original target for IPython was data visualization, but it has been used for much more than that. It is one of the primary teaching tools used in various informal Python tutorials and explanations found on the web, and it is not uncommon to see conference talks that are accompanied by IPython notebooks. IPython is even included as an interactive shell on the newly redesigned Python home page (look for the ">_" icon).
IPython is similar, in some ways, to commercial numerical computing packages like Mathematica and MATLAB. It also takes on some of the roles of an integrated development environment (IDE) by providing syntax highlighting, tab completion, popping up function API references, and so on. In addition, it provides direct access to Python's "help" functionality with the "help()" (or help(object)) command. But that is just a tiny taste of what IPython does.
To start with, it provides extensive command-line editing features that work especially well with Python's idiosyncratic indentation rules. Pasting code into IPython works well, which is not something that can be said for the standard REPL. To use it, one must learn the difference between Shift-ENTER (execute the code being edited) and ENTER (add a newline at the cursor). That takes a little getting used to, but IPython makes editing Python much easier.
All of the input and output is saved and numbered, so that it can be recalled and manipulated in various ways. The "magic" commands (which start with "%", such as %hist for history) provide extra features outside of Python to do things like load and save files, create aliases and macros, time the execution of Python code, call out to the system shell, set up graphics consoles, and so on. The %quickref command is one of the more useful of these when getting started with IPython, though the "?" command, which gives a good overview of the program's features, is indispensable in the early going.
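The timing magic is built on machinery from the standard library; a rough plain-Python approximation of what %timeit does (outside of IPython, which adds automatic loop-count calibration and nicer reporting) looks like this:

```python
# A rough plain-Python equivalent of IPython's %timeit magic, using
# the standard library's timeit module. IPython adds automatic
# calibration of the loop count; here the count is fixed by hand.
import timeit

number = 10000
total = timeit.timeit("sum(range(1000))", number=number)
print(f"{total / number * 1e6:.2f} microseconds per loop")
```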
The program can be invoked in a number of different ways. For a text-based console in the current terminal window, a simple ipython will do, but the Qt-based console (with menus and tabs for multiple workspaces) can be invoked with the --qt-console command-line switch. The workspaces can either be connected to the same running Python instance (or "kernel") or can have their own. There are also some command-line parameters to control the loading of commonly used libraries. The --matplotlib[=qt4] option will preload the matplotlib library (with an optional Qt 4 backend), while the --pylab option loads numpy in addition to matplotlib. The --pylab option is semi-deprecated because it imports numpy's names directly into the interactive namespace (though it can still be useful for quick-and-dirty scratch sessions).
The preloading options clearly show the data visualization roots of IPython, but the program is not limited to numeric plotting by any means. There are many kinds of output that it can handle, including image formats, SVG, HTML, audio, video, and LaTeX. While any of that can be done within the console, it is the notebooks that show off IPython output best.
For example, the link in the previous paragraph goes to a notebook created to describe IPython Notebook output types. It uses the IPython Notebook Viewer (often abbreviated as "nbviewer") to display the contents of the notebook, which were created using IPython. Notebooks can be viewed with nbviewer, but interacting with them is done locally, using notebook mode (which we will get to momentarily). There are also extensions for Chrome, Firefox, and Safari that will add an nbviewer icon to the toolbar for even faster access.
There are a number of example notebooks shown on the nbviewer page, ranging from IPython, IRuby, and IJulia introductions to full-on textbooks, as well as notebooks covering the analysis of mathematical functions, an introduction to the pandas Python-based data analysis tool, and so on. There are even more notebooks listed in the gallery.
Notebook mode is invoked by running "ipython notebook", which starts a web server process running on the local machine and opens a browser tab to http://127.0.0.1:8888/ to access it. From there, one can create, edit, or interact with an IPython notebook. An existing notebook can be loaded from the local system or from the web, and edited using the same techniques (and keystrokes, commands, etc.) as the console version uses. The notebook can then be saved and shared with others.
As alluded to above, IPython is not restricted to the Python language. Though Python is the only "official" kernel, there are active kernel projects for Julia, Haskell, Ruby, R, and others. The IPython machinery supports multiple language kernels as well, so those languages can be mixed and matched as needed.
The stable version is 1.2.1 from February, but a 2.0 release is imminent. The first beta of 2.0 was released on March 7. Development is being funded in large part by a $1.15 million Sloan Foundation grant, though it should also be noted that Microsoft donated $100,000 toward IPython development in August 2013. There is a rather aggressive roadmap for future releases, with lots of new features.
We have only scratched the surface of what IPython is and can do in this look. It is an impressive tool that is worth a look for data visualization, data analysis, and a whole host of other applications. Moving to it permanently instead of the standard REPL seems like a no-brainer for anyone doing anything particularly complicated in that environment.
The One Laptop per Child (OLPC) project was launched in 2005, with considerable publicity. Its mission (providing low-cost laptops to schoolchildren in regions under-served by the technology industry) was one almost everyone could relate to, and it was headed by Nicholas Negroponte, the well-known founder of MIT's Media Lab. Thus it was with great sadness that, on March 11, many spread online news reports that OLPC was closing down. Except that the shutdown story turns out not to be true; OLPC issued a clarification once the rumor began to spread. The project is still active, it seems, although it has morphed considerably in recent years.
OLPC News, an independent blog covering OLPC and related educational-technology issues, posted a story titled Goodbye One Laptop per Child on March 11 that was quickly taken to mean that OLPC had closed its doors. The story noted that Negroponte had long since departed for other projects, that the iconic XO-1 laptop was no longer supported (and replacement parts difficult to find), that work on the Sugar software environment had slowed to a crawl, and that the OLPC Foundation's flagship Boston-area office had recently closed.
But that assessment of the situation turned out not to be entirely accurate. As commenters on the blog post mentioned, Sugar is now developed as an independent project (organized by Sugar Labs) and is still actively maintained, with a conference coming up in April. There are still new OLPC hardware deployments, and while the Boston office of the OLPC Foundation may have been closed, the Miami-based OLPC Association is still alive and well.
The existence of two distinct OLPC organizations may come as a confusing surprise to many casual fans, of course, to say nothing of the difference between them. Although the project does not provide a black-and-white explanation of the different pieces involved, historically the OLPC Foundation was the arm that handled fundraising duties and undertook the development of the hardware and software. The Association, in turn, handled working with governments and schools to arrange OLPC deployments and provided the support services to keep those deployments working.
On March 12, OLPC Association Vice President Giulia D'Amico spoke to the electronics blog Gizmodo in an attempt to clarify the current state of affairs. OLPC is "thriving and making more inroads at bringing education to those who can't easily access it," she said, and is actively working on deployments in Costa Rica and Uruguay. She did, however, shed a bit more light on the organization of the project and its backing organization:
We have more exciting things planned in the horizon including the implementation of very large scale projects in several regions of the world, so be sure to stay tuned.
Working on content creation with the Smithsonian may sound like a significant shift in focus, but it clearly is the action of an organization that still exists. In fact, one could make the case that hardware costs have dropped significantly enough since 2005 that internally developing special-purpose laptops is not financially worthwhile. OLPC critics have long noted that the project was never able to produce its XO series hardware for the $100 price tag it originally targeted; if today it is simpler and cheaper to purchase compatible devices for deployment from third-party OEMs, would that be the wrong move? Similarly, if Sugar and the other components of the software stack are running well as independent projects, then does it make sense for OLPC to devote paid staff to them?
Some of the commenters on the original OLPC News story suggested that its author, Wayan Vota, was more disappointed with the recent shifts in direction than was warranted, so his account of the closure of the Boston OLPC office came across more pessimistically than it should have. To Vota's credit, he did report on D'Amico's announcement as well, but on March 18 OLPC News announced that the site was closing.
Regrettably, it is a bit difficult to glean further information about OLPC's inner workings. The shutdown of the Boston office evidently happened in January, as The Boston Globe reported, and the shutdown was not accompanied by an announcement at the time. The organizational side of the OLPC project is not really explained on the laptop.org site, which focuses primarily on showcasing the XO platform and highlighting recent deployments.
Whatever else may be happening behind the scenes, though, it does not appear that OLPC has stopped putting affordable, Linux-powered computer hardware into classrooms around the world. That said, it is also clear that the OLPC project is not as focused on creating and mass-producing a revolutionary new computing device. More recent OLPC deployments have focused on tablets as the hardware form factor of choice, and inexpensive tablets are now a common sight. Nor is OLPC focused on developing a new software platform, as the recent devices have been built to run Android. Some disappointment, then, is understandable.
But it is harder to argue that OLPC has stopped working toward those goals because it failed at its stated mission. The XO-1 was a revolutionary product, and it spawned considerable rethinking of what goes into an educational laptop or tablet. The Asus Eee PC, which kicked off the "netbook" craze in 2007, is widely regarded as having been inspired by the XO-1. Sugar has proven to be successful in classrooms around the globe. The landscape for classroom computing has changed considerably since 2005, and OLPC deserves credit for much of the change. Perhaps, then, OLPC simply had to change as well.
Patent trolling — the aggressive assertion of weak or meritless patent claims by non-practicing entities — is a frequent target of disdain from open source enthusiasts. Thus it may be of some comfort to readers that the highest court in the US has recently decided the issue is worth looking into. Two cases have already been heard; another will be at the end of March. Decisions are, as usual, still a ways off.
On February 26, 2014, the US Supreme Court heard oral arguments in two separate cases. These cases both focus on the grounds for awarding legal fees for victorious defendants of weak-to-completely-baseless lawsuits for patent infringement. How the Supreme Court decides to rule in these cases might cripple patent trolling ... or it could give it a shot of adrenaline.
The issue of awarding legal fees seems like a dry, procedural matter at first glance. But this issue is crucial in the fight against patent trolling: if trolling means there's a good chance of losing $1-2 million (which is what legal fees can easily amount to in these types of cases, including those that never even go past the lower courts) for each organization that decides to fight back, it can really cripple the patent troll "business model". The profit from shaking down twenty or so companies for a few thousand dollars each in pre-trial settlements pales in comparison to the millions of dollars of losses from just one organization realizing it is threatened by a paper tiger, and fighting back. Facing that situation, why troll for money?
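The arithmetic behind that comparison is stark; a back-of-the-envelope version, with hypothetical figures drawn from the ranges mentioned above:

```python
# Back-of-the-envelope patent-troll economics. All figures are
# hypothetical, in line with the ranges discussed above: twenty
# nuisance settlements at $10,000 each versus a single fee award
# after one defendant fights back and wins.
settlements = 20 * 10_000      # revenue from pre-trial settlements
fee_award = 2_000_000          # legal fees owed after one loss
print(settlements - fee_award)  # prints -1800000: deeply in the red
```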
The legal basis for awarding attorney fees comes from a particular section of a patent law statute; Title 35 of the US Code, Section 285, which reads: "The court in exceptional cases may award reasonable attorney fees to the prevailing party." The Court of Appeals for the Federal Circuit (CAFC) outlined a two-step test for applying Section 285 in the 2005 Brooks Furniture case [PDF]. First, if there is "some material inappropriate conduct related to the matter in litigation" (such as unethical behavior on the part of the lawyers for a party), then legal fees can be awarded. If there isn't such misconduct, then fees can be awarded "only if both (1) the litigation is brought in subjective bad faith, and (2) the litigation is objectively baseless."
The two cases before the Supreme Court are arguing against the Brooks Furniture test. The first case heard was Octane Fitness v ICON Health and Fitness [PDF]. These two competing companies make, among other things, elliptical trainers. ICON holds patent 6,019,710, on a particular construction of elliptical trainers. ICON sued Octane for patent infringement in 2010 and lost, later losing at the CAFC as well. On appeal, Octane, among other claims, essentially accused ICON of trolling, and asked the CAFC to overturn the lower court's refusal to award attorney fees. The CAFC rejected Octane's claim for fees, refusing to lower its standard for Section 285.
In the hearing for the Octane case, the oral arguments focused on defining the legal meaning of the word "exceptional" in the context of that section. Notably, the Supreme Court justices seemed displeased with all the arguments they heard. When Octane's counsel argued that his client should receive legal fees under Section 285 because ICON's claims were "unreasonably weak" and "meritless", some of the justices seemed skeptical: they doubted that such a standard could be applied effectively and consistently by the lower courts, and questioned whether it fit the section's intent. When the Assistant to the Solicitor General (acting as amicus curiae, or "friend of the court", in favor of the petitioner's views) argued that Section 285 is meant "to prevent gross injustice", several justices countered that that phrase does nothing to clarify the section.
After opening with a weak argument based on First Amendment case law, ICON's counsel followed up by arguing that attorney fees should be awarded when a claim is brought that is "objectively baseless", which was sharply criticized by Justice Breyer. He openly mused about "send[ing] [the case] back and tell [the district court] that they were imposing a standard that was too narrow". Justice Scalia seemed sensitive to the general framework of patent trolling: "if the alternative for the defendant is either [...] spend $2 million defending or pay off the $10,000 [...] that the plaintiff demands to go away, hey, that's an easy call."
In his closing rebuttal, Octane's counsel urged the Court not to pick an "extreme" standard.
The amici briefs are written by the types of organizations one would expect to be interested in this type of case. For example, the Business Software Alliance (BSA) argued that "objectively unreasonable" should be the criterion, while the Electronic Frontier Foundation suggested that "bring[ing] an objectively weak case or us[ing] the cost of defense as a weapon" should be the standard. Google and thirteen other large corporations including Facebook, Netflix, Intel, HTC, Verizon, and Cisco joined in writing one brief, which also sought to lower the standard to objectively unreasonable but not meritless: "It should be sufficient to demonstrate that a patentee lacks an objectively reasonable prospect of prevailing on his overall claims, even if there is some merit to certain portions of them."
The second case heard was Highmark, Inc. v. Allcare Health Management System, Inc. [PDF]. Allcare holds a software patent on a patient management system for health care organizations. Concerned about its freedom to operate in the marketplace, Highmark initiated a proceeding for declaratory judgment in 2003 that it did not infringe that patent. Allcare fought back, claiming Highmark infringed its patent. Highmark won at the district court level, and got the district court judge to order Allcare to pay Highmark for its costs and legal fees. On appeal, the CAFC refused Allcare's request to overturn the district court judge's ruling on costs and legal fees.
The oral argument for the Highmark case concerned the extent to which appeals courts should respect the right of district court judges to use their discretion in making a determination of an "exceptional" case (and therefore the extent to which those determinations cannot be overturned on appeal). Again, this seems like just a procedural issue on its face. However, if a patent troll loses a case in the lower courts, is forced to pay attorneys fees, but has a chance to have the attorney fees sanction lifted on appeal, then the patent troll poses a more menacing threat to those companies willing to fight back against a troll. A victory at the district court level would seem hollow if years of appellate litigation could follow. Cutting off the ability of appellate courts to overturn a Section 285 finding could cripple patent trolls after a loss at the first trial.
None of the lawyers arguing this case fared well in front of the court either. Highmark's counsel started the session by accusing the CAFC of not properly respecting the Supreme Court's rulings on awarding attorneys' fees. Counsel argued that lower courts' decisions to grant attorneys' fees should almost always be upheld on appeal. Justice Ginsburg raised the concern that allowing this much discretion risks major discrepancies in the awards district courts give; counsel countered that district courts look at the entirety of a case, while appeals courts often look only at "one piece of it", so district courts have a better sense of what is "exceptional" and major discrepancies would not arise. The strongest part of Highmark's argument was its criticism of the "objective baselessness" criterion for the Section 285 test; according to Highmark, "exceptional" requires a fact-based approach, but never a purely legal examination of the merits of the patent suit.
The Assistant to the Solicitor General, representing the US government, argued that letting the CAFC have broad power to review claims for attorneys' fees would encourage wasteful litigation; that position faced harsh criticism from Justice Alito, who was left "wondering [...] whether there really is going to be any meaningful review of what district courts do in this situation" if broad deference to district courts were implemented.
Allcare's arguments revealed a coalition of sorts on the bench. Four justices (Breyer, Sotomayor, Scalia, and, to an extent, Chief Justice Roberts) took stances opposite those of Ginsburg and Alito: their criticism of Allcare's legalistic approach to Section 285 revealed their sympathy for an interpretation of the section as allowing broad deference to the decision of the lower courts on attorneys' fees.
In both hearings, the justices seemed dissatisfied with all of the arguments they heard: those from the petitioners, those from the Assistants to the Solicitor General, and those from the respondents. It appears that Justices Breyer, Sotomayor, and Scalia, and possibly Roberts, read Section 285 in a way that would hurt patent trolls, while Justices Alito, Ginsburg, and Kagan did not. Justice Kennedy did not reveal enough in his questions for me to predict how he'd rule, and Justice Thomas, as usual, was silent. Importantly, Chief Justice Roberts recognized that the CAFC is struggling to provide a united perspective on patent issues: "they seem to have a great deal of disagreement among themselves and are going back [and] forth in particular cases, in this area specifically".
Given the split in opinions expressed by the court, and the different concerns raised by different justices, it's reasonable to predict that the CAFC's Brooks Furniture test will be overturned in a slim 5-4 ruling, with a strong likelihood of at least one or two concurring opinions. However, the difficulty the Court had with determining the boundaries of Section 285 makes predicting the new test difficult. A fragmented ruling from the Court, which should come out in the next few months, is quite possible.
Frivolous patent litigation isn't the only patent issue before the Supreme Court. On March 31, the Court will listen to oral arguments in Alice Corporation Pty. Ltd. v. CLS Bank International, where the specific issue before the court is the eligibility as patentable subject material of "computer-implemented inventions" (i.e. software). Stay tuned for coverage of this important case.
Version 6.5 of the OpenSSH suite was released in late January, bringing with it a host of new features. A 6.6 release followed, primarily to provide an important bug fix, but most users will still find the feature set introduced in OpenSSH 6.5 to be the more significant enhancement. That feature set includes support for additional key exchange and signature functions, configuration improvements, and dropping support for several out-of-date options that are no longer regarded as secure.
OpenSSH, like the OpenBSD project where it originated, makes fairly frequent releases with automatic increments of the version number, but while OpenSSH 6.3 was released in September 2013 and 6.4 in November, neither of those releases debuted much new functionality. So when version 6.5 was made available on January 30 as a feature-focused release, it had been nearly a year since our last look at what has changed.
The headline cryptographic feature in 6.5 is the addition of support for Daniel J. Bernstein's Curve25519 key exchange function. Curve25519 is an elliptic curve (and accompanying parameter set) designed for use in the Elliptic Curve Diffie–Hellman (ECDH) key agreement protocol. Bernstein developed it to meet a set of specific characteristics, the first of which was extremely fast computation speed. When the paper [PDF] describing Curve25519 was published in 2006, Bernstein's implementation set new speed records for all strong Diffie-Hellman functions. It is also immune to timing attacks that can undermine related functions, and offers several other properties that make it efficient—such as the lack of any need to validate whether an input string is a valid public key (an operation that can significantly slow down the function, but which Bernstein notes is rarely reported in the advertised metrics of other functions).
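Curve25519 itself is well beyond a few lines of code, but the Diffie-Hellman key agreement it slots into can be illustrated with classic finite-field arithmetic. This toy sketch (using a Mersenne prime far too small and structured for real use) shows both sides deriving the same shared secret from exchanged public values:

```python
# Toy finite-field Diffie-Hellman key agreement, illustrating the
# protocol that Curve25519 provides a much stronger and faster
# function for. The parameters here are for illustration only and
# provide nothing like production-grade security.
import secrets

p = 2**127 - 1   # a Mersenne prime; real deployments use vetted groups
g = 3

# Each side picks a secret exponent and publishes g^x mod p.
a = secrets.randbelow(p - 2) + 1
b = secrets.randbelow(p - 2) + 1
A = pow(g, a, p)   # Alice's public value
B = pow(g, b, p)   # Bob's public value

# Each side combines its own secret with the other's public value;
# both arrive at the same shared secret.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```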
Not only can OpenSSH 6.5 use Curve25519 for ECDH key agreement, Curve25519 is now the default option when both endpoints of the connection support it. The curve is a popular choice in other security tools, and has become particularly popular since the 2013 revelation that the NSA deliberately included a weakness in the Dual_EC_DRBG function. That event, of course, subsequently cast doubt (in many minds) on other algorithms and functions approved by the US National Institute of Standards and Technology (NIST) or released by RSA, the vendor that implemented Dual_EC_DRBG.
A related addition is support for Ed25519 as a public key format for digital signatures. Ed25519 is a function for use in EdDSA, an elliptic curve signature scheme with stronger security guarantees than ECDSA and DSA—such as resilience to hash-function collision and timing attacks. Like Curve25519, Ed25519 was designed with high speed in mind. Bernstein developed it in conjunction with a team of researchers at other universities.
The new release also adds a new storage format for private keys that is based on bcrypt hashing. It is used automatically for Ed25519 keys and is an option for other key types. The release notes indicate that the OpenSSH team has plans to make it the default storage format for other key types in some future release.
There is also a new SSH2 transport cipher named "chacha20-poly1305@openssh.com" in OpenSSH 6.5. It combines Bernstein's ChaCha20 stream cipher and Poly1305 message-authentication code (MAC). A detailed description is included in the documentation. The cipher is similar to a stream cipher recently proposed for TLS; regardless of where the TLS proposal ends up, though, OpenSSH's Damien Miller wrote on his blog that a new option for OpenSSH was needed, since RC4 "is pretty close to broken now" and lacked authentication. Among the new cipher's interesting features is that it uses a separate cipher instance to encrypt packets' length fields. That protects against attackers learning anything about the message payload's contents merely by manipulating packet lengths.
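The general shape of that design (one keystream instance for the length field, another for the payload, and a single MAC tag over the whole encrypted packet) can be sketched with stand-in primitives from the standard library. This is a structural toy, not the real cipher: HMAC-SHA256 stands in for Poly1305 and a hash-derived XOR keystream stands in for ChaCha20.

```python
# Structural sketch of OpenSSH's ChaCha20/Poly1305 packet layout,
# using stand-in primitives: a SHA-256-based XOR keystream in place
# of ChaCha20, HMAC-SHA256 in place of Poly1305. A toy illustrating
# the packet structure only; do NOT use for real encryption.
import hashlib
import hmac
import struct

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from key and nonce (toy PRF)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + struct.pack(">Q", counter)).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, ks: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, ks))

def seal(len_key: bytes, payload_key: bytes, mac_key: bytes,
         seqnr: int, payload: bytes) -> bytes:
    nonce = struct.pack(">Q", seqnr)
    # The 4-byte length field gets its own keystream instance...
    enc_len = xor(struct.pack(">I", len(payload)), keystream(len_key, nonce, 4))
    # ...while the payload is encrypted under a separate key.
    enc_payload = xor(payload, keystream(payload_key, nonce, len(payload)))
    packet = enc_len + enc_payload
    # One MAC tag authenticates the entire encrypted packet.
    tag = hmac.new(mac_key + nonce, packet, hashlib.sha256).digest()
    return packet + tag

def open_(len_key: bytes, payload_key: bytes, mac_key: bytes,
          seqnr: int, packet: bytes) -> bytes:
    nonce = struct.pack(">Q", seqnr)
    body, tag = packet[:-32], packet[-32:]
    # Verify the MAC before decrypting anything.
    if not hmac.compare_digest(
            tag, hmac.new(mac_key + nonce, body, hashlib.sha256).digest()):
        raise ValueError("bad MAC")
    length = struct.unpack(">I", xor(body[:4], keystream(len_key, nonce, 4)))[0]
    return xor(body[4:4 + length], keystream(payload_key, nonce, length))
```

A round trip through seal() and open_() with the same keys and sequence number recovers the payload, while any bit flipped in the packet, including in the encrypted length field, makes open_() reject it.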
In addition to the new ciphers and functions, several other noteworthy changes landed in OpenSSH 6.5. Support has been dropped for older, proprietary SSH clients and servers that use the now-obsolete RSA+MD5 signature format, and those that use weak key-exchange hashes. The ssh and sshd programs will still connect to these old endpoints if they also support stronger key and hash options, but the release notes caution that support will eventually be dropped entirely.
There are several new configuration options and features that should, hopefully, make life easier for users and system administrators. ssh_config now supports conditional configuration rules that can be applied by matching against hostname, user, and command output. ssh now supports hostname canonicalization when looking for keys in known_hosts or when looking for host certificates. As a practical matter, this means that the forgetful among us can connect with ssh myserver when we really should have typed ssh myserver.mydomain.com. The canonicalization feature can be configured for better security—for example, by restricting it to a sensible set of DNS suffixes.
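A minimal ssh_config sketch of those options might look like the following (the hostnames and domain are hypothetical, and the option names are from the OpenSSH 6.5 documentation):

```
# Hypothetical ~/.ssh/config fragment using OpenSSH 6.5's new options.
# Canonicalize bare hostnames by appending a fixed DNS suffix, so that
# "ssh myserver" becomes "ssh myserver.mydomain.com". Restricting
# CanonicalDomains to one suffix keeps lookups from wandering.
CanonicalizeHostname yes
CanonicalDomains mydomain.com
CanonicalizeMaxDots 0

# Conditional rules, matched against the (canonicalized) hostname.
Match host myserver.mydomain.com user admin
    ForwardAgent no
```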
Other feature additions include the ability to blacklist and whitelist SFTP requests in sftp-server, support for calling fsync() on an open file handle through SFTP, and the ability to disable TTY allocation for clients connecting to sshd. Finally, in keeping with the recommendations from NIST Special Publication 800-57, the size of the Diffie-Hellman groups requested for each symmetric key size has been increased.
OpenSSH 6.6 was released on March 15, which marked a bit of a short development cycle since 6.5. The main reason was that the 6.5 release unintentionally made configuration-file parsing case-insensitive. In an email, Miller said that he decided to release a small update with the case-sensitivity fix early, since new OpenBSD and Ubuntu releases are on the horizon; better, he said, not to let parsing errors become entrenched in users' configurations.
Nevertheless, there were a few other changes added to 6.6. Support for the J-PAKE (Password Authenticated Key Exchange by Juggling) key-agreement protocol was removed. The OpenSSH implementation was only included as an experimental feature in previous releases, and was unmaintained. The hostname canonicalization feature was also improved; now whenever the canonicalization changes the hostname, the ssh_config configuration is re-parsed, so that any rules that match the now-canonical hostname (but did not match the hostname as originally entered) are not skipped. Finally, a security fix was included that prevents sshd from incorrectly matching wildcards in environment variables.
It is certainly possible for those of us who do not grapple with cryptography on a daily basis to occasionally get lost in the details of new algorithms and functions. But OpenSSH 6.5's Curve25519 and Ed25519 support has a practical side: increased speed and resistance to attack vectors that are exploitable even in other ECDH and signature implementations. That, plus the enhancements to configuration options, certainly make this a release worth exploring.
Created: March 14, 2014; Updated: April 1, 2014
From the Red Hat advisory:
It was discovered that the 389 Directory Server did not properly handle certain SASL-based authentication mechanisms. A user able to authenticate to the directory using these SASL mechanisms could connect as any other directory user, including the administrative Directory Manager account. This could allow them to modify configuration values, as well as read and write any data the directory holds.
Package(s): catfish; CVE #(s): CVE-2014-2093 CVE-2014-2094 CVE-2014-2095 CVE-2014-2096
Created: March 19, 2014; Updated: September 2, 2014
Description: From the CVE entries:
Untrusted search path vulnerability in Catfish through 0.4.0.3 allows local users to gain privileges via a Trojan horse catfish.py in the current working directory. (CVE-2014-2093)
Untrusted search path vulnerability in Catfish through 0.4.0.3, when a Fedora package such as 0.4.0.2-2 is not used, allows local users to gain privileges via a Trojan horse catfish.pyc in the current working directory. (CVE-2014-2094)
Untrusted search path vulnerability in Catfish 0.6.0 through 1.0.0, when a Fedora package such as 0.8.2-1 is not used, allows local users to gain privileges via a Trojan horse bin/catfish.pyc under the current working directory. (CVE-2014-2095)
Untrusted search path vulnerability in Catfish 0.6.0 through 1.0.0 allows local users to gain privileges via a Trojan horse bin/catfish.py under the current working directory. (CVE-2014-2096)
Package(s): freetype2; CVE #(s): CVE-2014-2240 CVE-2014-2241
Created: March 17, 2014; Updated: January 19, 2015
Description: From the Mageia advisory:
It was reported that Freetype before 2.5.3 suffers from an out-of-bounds stack-based read/write flaw in cf2_hintmap_build() in the CFF rasterizing code, which could lead to a buffer overflow (CVE-2014-2240).
It was also reported that Freetype before 2.5.3 has a denial-of-service vulnerability in the CFF rasterizing code, due to a reachable assertion (CVE-2014-2241).
Created: March 19, 2014; Updated: March 24, 2014
Description: From the bug report:
Florian Weimer and Eric Sesterhenn reported an issue with Jansson, a C library for encoding, decoding and manipulating JSON data.
The problem exists inside the hashing implementation and results in possible prediction of hash collisions.
Package(s): lighttpd; CVE #(s): CVE-2014-2323 CVE-2014-2324
Created: March 13, 2014; Updated: April 9, 2014
Description: From the Debian advisory:
CVE-2014-2323: Jann Horn discovered that specially crafted host names can be used to inject arbitrary MySQL queries in lighttpd servers using the MySQL virtual hosting module (mod_mysql_vhost). This only affects installations with the lighttpd-mod-mysql-vhost binary package installed and in use.
CVE-2014-2324: Jann Horn discovered that specially crafted host names can be used to traverse outside of the document root under certain situations in lighttpd servers using either the mod_mysql_vhost, mod_evhost, or mod_simple_vhost virtual hosting modules. Servers not using these modules are not affected.
|Package(s):||mantis||CVE #(s):||CVE-2014-1608 CVE-2014-1609 CVE-2014-2238|
|Created:||March 13, 2014||Updated:||September 22, 2014|
|Description:||From the Red Hat bugzilla entry:
CVE-2014-2238: It was reported that MantisBT suffers from an SQL injection vulnerability: admin_config_report.php relied on unsanitized, inlined query parameters, enabling a malicious user to perform an SQL injection attack. An administrative account is required to access this page, however.
From the oCERT advisory:
CVE-2014-1608: The MantisBT SOAP API uses the unsafe db_query() function allowing a specially crafted tag within the envelope of a mc_issue_attachment_get SOAP request to inject arbitrary SQL queries.
The reporting of this specific issue was followed by an investigation that led the MantisBT maintainers to find additional cases of unsafe db_query() use throughout the MantisBT code. (CVE-2014-1609)
|Package(s):||firefox thunderbird seamonkey||CVE #(s):||CVE-2014-1494 CVE-2014-1498 CVE-2014-1499 CVE-2014-1500 CVE-2014-1502 CVE-2014-1504 CVE-2014-1508|
|Created:||March 19, 2014||Updated:||January 26, 2015|
|Description:||From the Ubuntu advisory:
Benoit Jacob, Olli Pettay, Jan Varga, Jan de Mooij, Jesse Ruderman, Dan Gohman, Christoph Diehl, Gregor Wagner, Gary Kwong, Luke Wagner, Rob Fletcher and Makoto Kato discovered multiple memory safety issues in Firefox. If a user were tricked into opening a specially crafted website, an attacker could potentially exploit these to cause a denial of service via application crash, or execute arbitrary code with the privileges of the user invoking Firefox. (CVE-2014-1493, CVE-2014-1494)
David Keeler discovered that crypto.generateCRMFRequest did not correctly validate all arguments. An attacker could potentially exploit this to cause a denial of service via application crash. (CVE-2014-1498)
Ehsan Akhgari discovered that the WebRTC permission dialog can display the wrong originating site information under some circumstances. An attacker could potentially exploit this by tricking a user in order to gain access to their webcam or microphone. (CVE-2014-1499)
Tim Philipp Schäfers and Sebastian Neef discovered that onbeforeunload events used with page navigations could make the browser unresponsive in some circumstances. An attacker could potentially exploit this to cause a denial of service. (CVE-2014-1500)
Jeff Gilbert discovered that WebGL content could manipulate content from another site's WebGL context. An attacker could potentially exploit this to conduct spoofing attacks. (CVE-2014-1502)
Nicolas Golubovic discovered that CSP could be bypassed for data: documents during session restore. An attacker could potentially exploit this to conduct cross-site scripting attacks. (CVE-2014-1504)
Tyson Smith and Jesse Schwartzentruber discovered an out-of-bounds read during polygon rendering in MathML. An attacker could potentially exploit this to steal confidential information across domains. (CVE-2014-1508)
|Package(s):||firefox thunderbird seamonkey||CVE #(s):||CVE-2014-1493 CVE-2014-1497 CVE-2014-1505 CVE-2014-1508 CVE-2014-1509 CVE-2014-1510 CVE-2014-1511 CVE-2014-1512 CVE-2014-1513 CVE-2014-1514|
|Created:||March 19, 2014||Updated:||April 30, 2014|
|Description:||From the Red Hat advisory:
Several flaws were found in the processing of malformed web content. A web page containing malicious content could cause Firefox to crash or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2014-1493, CVE-2014-1510, CVE-2014-1511, CVE-2014-1512, CVE-2014-1513, CVE-2014-1514)
Several information disclosure flaws were found in the way Firefox processed malformed web content. An attacker could use these flaws to gain access to sensitive information such as cross-domain content or protected memory addresses or, potentially, cause Firefox to crash. (CVE-2014-1497, CVE-2014-1508, CVE-2014-1505)
A memory corruption flaw was found in the way Firefox rendered certain PDF files. An attacker able to trick a user into installing a malicious extension could use this flaw to crash Firefox or, potentially, execute arbitrary code with the privileges of the user running Firefox. (CVE-2014-1509)
|Created:||March 17, 2014||Updated:||April 23, 2015|
|Description:||From the SUSE bug report:
Multiple denial of service flaws were reported against various parts of Python's stdlib.
Unfortunately, upstream assigned a single CVE to all of these; however, I do not believe they can all use the same CVE, since they were fixed across so many different versions (2.6.9, 2.7.4, 2.7.6, 3.3.3, as well as future 2.7.x and 3.3.x versions). So this will likely require MITRE to untangle.
|Package(s):||samba||CVE #(s):||CVE-2013-4496 CVE-2013-6442|
|Created:||March 14, 2014||Updated:||April 9, 2014|
|Description:||From the Slackware advisory:
CVE-2013-4496: Samba versions 3.4.0 and above allow the administrator to implement locking out Samba accounts after a number of bad password attempts. However, all released versions of Samba did not implement this check for password changes, such as are available over multiple SAMR and RAP interfaces, allowing password guessing attacks.
CVE-2013-6442: Samba versions 4.0.0 and above have a flaw in the smbcacls command. If smbcacls is used with the "-C|--chown name" or "-G|--chgrp name" command options it will remove the existing ACL on the object being modified, leaving the file or directory unprotected.
|Created:||March 17, 2014||Updated:||March 19, 2014|
|Description:||From the Mageia advisory:
Webmin has been updated to version 1.680, which fixes some security issues in the PHP Configuration and Webalizer modules, as well as several other bugs.
Page editor: Jake Edge
Brief items
3.14-rc7 was released on March 16. Linus is feeling better about things now. "What a difference a week makes. In a good way. A week ago, cutting rc6, I was not a happy person: the release had much too much noise in it, and I felt that an rc8 and even an rc9 might well be a real possibility. Now it's a week later, and rc7 looks much better." He is now saying this might be the last -rc for 3.14.
Stable updates: the 3.12 series is now maintained by Jiri Slaby; his first release, 3.12.14, came out on March 14.
Kernel development news
Both of these patch sets implement variations on a feature that has often gone by the name volatile ranges. A volatile range is a region of memory in a process's address space that is used to store data that can be regenerated if need be. If the kernel finds itself short of memory, it can take pages from a volatile range, secure in the knowledge that the process using that range of memory can recover from the loss, albeit with a possible performance hit. But, as long as memory remains plentiful, volatile ranges will not be reclaimed by the kernel and the data cached there can be freely used by applications.
Much of the volatile range work is motivated by the desire to create a replacement for Android's ashmem mechanism that is better integrated with the core memory-management subsystem. But there are other potential users of this functionality as well.
There have been many versions of the volatile ranges patch set over the last few years. At times, volatile ranges were implemented with the posix_fadvise() system call; at other times, the functionality was added to fallocate() instead. Other versions have made it a feature of madvise(). But version 11 of the volatile ranges patch set from John Stultz takes none of those approaches. Instead, it adds a new system call:
int vrange(void *start, size_t length, int mode, int *purged);
In this incarnation, a vrange() call operates on the length bytes of memory beginning at start. If mode is VRANGE_VOLATILE, that range of memory will be marked as volatile. If, instead, mode is VRANGE_NONVOLATILE, the volatile marking will be removed. In this case, though, some or all of the pages previously marked as being volatile might have been reclaimed; in that case, *purged will be set to a non-zero value to indicate that the previous contents of that memory range are no longer available. If *purged is set to zero, the application knows that the memory contents have not been lost.
A process may continue to access memory contained within a volatile range. Should it attempt to access a page that has been reclaimed, though, it will get a SIGBUS signal to indicate that the page is no longer there. Thus, programs that are prepared to handle that signal can use volatile ranges without the need for a second vrange() call before actually accessing the memory.
This version of the patch differs from its predecessors in another significant way: it only works with anonymous pages while the previous versions worked only with the tmpfs filesystem. Working with anonymous pages satisfies the need to simplify the patch set as much as possible in the hope of getting it reviewed and eventually merged, but it has a significant cost: the inability to work with tmpfs means that volatile ranges are not a viable replacement for ashmem. The intent is to support the file-backed case (which adds more complexity) after there is consensus on the basic patch.
Internally, vrange() works at the virtual memory area (VMA) level. All pages within a VMA are either volatile or not; if need be, VMAs will be split or coalesced in response to vrange() calls. This should make a vrange() call reasonably fast since there is no need to iterate over every page in the range.
A different approach to a similar problem can be seen in Minchan Kim's MADV_FREE patch set. This patch adds a new command to the existing madvise() system call:
int madvise(void *addr, size_t length, int advice);
Like vrange(), madvise() operates on a range of memory specified by the caller; what it does is determined by the advice argument. Callers can specify MADV_SEQUENTIAL to tell the kernel that the pages in that range will be accessed sequentially, or MADV_RANDOM to indicate the opposite. The MADV_DONTNEED call causes the kernel to reclaim the indicated pages immediately and drop their contents.
The new MADV_FREE operation is similar to MADV_DONTNEED, but there is an important difference. Rather than reclaiming the pages immediately, this operation marks them for "lazy freeing" at some future point. Should the kernel run low on memory, these pages will be among the first reclaimed for other uses; should the application try to use such a page after it has been reclaimed, the kernel will give it a new, zero-filled page. But if memory is not tight, pages marked with MADV_FREE will remain in place; a future access to those pages will clear the "lazy free" bit and use the memory that was there before the MADV_FREE call.
There is no way for the calling application to know if the contents of those pages have been discarded or not without examining the data contained therein. So a program could conceivably implement something similar to volatile ranges by putting a recognizable structure into each page before the MADV_FREE operation, then testing for that structure's presence before accessing any other data in the pages. But that does not seem to be the intended use case for this feature.
Instead, MADV_FREE appears to be aimed at user-space memory allocator implementations. When an application frees a set of pages, the allocator will use an MADV_FREE call to tell the kernel that the contents of those pages no longer matter. Should the application quickly allocate more memory in the same address range, it will use the same pages, thus avoiding much of the overhead of freeing the old pages and allocating and zeroing the new ones. In short, MADV_FREE is meant as a way to say "I don't care about the data in this address range, but I may reuse the address range itself in the near future."
It's worth noting that MADV_FREE is already supported by BSD kernels, so, unlike vrange(), it would not be a Linux-only feature. Indeed, it would likely improve the portability of programs that use this feature on BSD systems now.
Neither patch has received much in the way of reviews as of this writing. The real review, in any case, is likely to happen at this year's Linux Storage, Filesystem, and Memory Management Summit, which begins on March 24. LWN will be there, and we promise to make at least a token effort to not be too distracted by the charms of California wine country; stay tuned for reports from that discussion.

A separately posted patch allows a process to determine which control group contains a process at the other end of a Unix-domain socket. The patch is relatively simple, but it still kicked off a lengthy discussion making it clear that, among other things, there is still resistance to using modern Linux kernel facilities to implement new features.
The patch in question adds a new command (SO_PEERCGROUP) to the getsockopt() system call. A process can invoke this command on an open Unix-domain socket and get back the name of the control group containing the process at the other end. Or something close to that: what is returned is the control group the peer process was in when the connection was established; that process may have moved in the meantime. The information may thus be a bit outdated, but SO_PEERCGROUP mirrors the existing SO_PEERCRED command in this regard. Connection-time information is deemed to be good enough for the targeted use case, which is allowing the system security services daemon (SSSD) to make policy decisions based on which container it is talking to.
The main critic of this patch was Andy Lutomirski, who had a number of complaints about it. In the end, though, the key point may have been his complaint that the mechanism would require processes to be aware of control groups.
Part of this complaint was a bit off the mark: the idea is to not require awareness of control groups for processes running inside containers. But, even without that, Andy appears to be against the use of control groups in general. He is certainly not alone in that point of view.
Andy came up with three alternative approaches by which a daemon process could identify which container is connecting to it, but those have run into resistance as well. The first of those was to put the containers inside user namespaces. The user-ID mapping performed by user namespaces would then allow each connecting process to be identified with the existing SO_PEERCRED mechanism or with an SCM_CREDENTIALS control message. Adding user namespaces to the mix should also make containers more secure, he said.
The objection to this approach was best summed up by Vivek, who worried that user namespaces are not yet mature enough to be relied upon.
Simo Sorce echoed these concerns and also added that he is not in a position to make the target container mechanism (Docker) use user namespaces. Eric Biederman, the developer of user namespaces, asked for specifics of any problems and observed: "It seems strange to work around a feature that is 99% of the way to solving their problem with more kernel patches."
Strange or not, there does not appear to be a lot of interest in exploring the use of user namespaces as a solution to this particular problem. Like control groups, user namespaces are a relatively new, Linux-specific mechanism; getting developers to adopt such features is often a challenge. In this case, concerns about a lack of maturity can only serve to deprive user namespaces of testing, prolonging any such immaturity further.
Andy's second suggestion was to get the container information out of /proc, using the process ID of the connecting process. Simo responded that use of process IDs can suffer from race conditions; processes can come and go quickly on some systems. The third idea was to just keep a separate socket open into each container; this idea was dismissed as being on the messy and inelegant side, but nobody said that it wouldn't work.
The end result was a conversation that, by all appearances, convinced nobody. In the process, it has highlighted a question that often comes up in the kernel community: once we add interesting new features, to what extent can we integrate those features with others or expect developers to use them? Expect to see this kind of debate more often as the kernel continues to develop and acquires more features that were never envisioned by any of the Unix standards bodies. A lot of work is going into adding new capabilities to the kernel; it would seem strange if we were so unconvinced by our own work that we did not expect others to make use of it.
Occasionally, the OOM killer will actually do something helpful: it will kill a rogue memory-hogging process that is leaking memory and unfreeze everything else that is trying to make forward progress. Most of the time, though, it sacrifices something of importance without any notification; it's these encounters that we remember. One of my goals in my work at Google is to change that. I've recently proposed a patchset to actually give a process a notification of this impending doom and the ability to do something about it. Imagine, for example, being able to actually select what process is sacrificed at runtime, examine what is leaking memory, or create an artifact to save for debugging later.
This functionality is needed if we want to do anything other than simply kill the process on the machine that will end up freeing the most memory — the only thing the OOM killer is guaranteed to do. Some influence on that heuristic is available through /proc/<pid>/oom_score_adj, which either biases or discounts an amount of memory for a process, but we can't do anything else and we can't possibly implement all practical OOM-kill responses into the kernel itself.
So, for example, we can't force the newest process to be killed in place of a web server that has been running for over a year. We can't compare the memory usage of a process with what it is expected to be using to determine if it's out of bounds. We also can't kill a process that we deem to be the lowest priority. This priority-based killing is exactly what Google wants to do.
There are two types of out-of-memory conditions of interest: those confined to a single memory controller cgroup (memcg) and those affecting the system as a whole.
User-space out-of-memory handling can address both. Either way, the interface is provided by the memory controller, since the handler should be implemented so that it does not care whether it is attached to a memory controller cgroup or not.
The memory controller allows processes to be aggregated together into memcgs and for their memory usage to be accounted together. It also prevents total memory usage from exceeding a configured limit, which provides very effective memory isolation from other processes running on the same system. Processes attached to a memcg may not cause the group as a whole to use more memory than the configured limit.
When a memcg usage reaches its limit and memory cannot be reclaimed, the memcg is out of memory. This happens because memory allocation within a memcg is done in two phases: the allocation, which is done with the kernel's page allocator, and the charge, which is done by the memory controller. If the allocation fails, the system as a whole is out of memory; if that succeeds and then the charge fails, the memcg is out of memory.
As your tour guide for the memory controller cgroup, I must first offer a warning: this functionality must be compiled into your kernel. If you're not in control of the kernel yourself, you may find that memcg is not enabled or mounted. Let's check my desktop machine running a common distribution:
$ grep CONFIG_MEMCG /boot/config-$(uname -r)
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
# CONFIG_MEMCG_SWAP_ENABLED is not set
# CONFIG_MEMCG_KMEM is not set
Ok, good, this kernel has the memory controller enabled. Now let's see if it's mounted:
$ grep memory /proc/mounts
cgroup /sys/fs/cgroup/memory cgroup rw,memory 0 0
It is, at /sys/fs/cgroup/memory. If it weren't mounted, we could mount it if we had root privileges with:
mount -t cgroup none /sys/fs/cgroup/memory -o memory
At the mount point, there are several control files that can be used to configure the memory controller. This memcg itself is the root memcg — the control group that contains all processes in the system by default. Memcgs can be added by creating directories with mkdir, just like any other filesystem. Those memcgs will include all of these control files too, and they can have children of their own.
There are four memcg control files of interest in current kernels: memory.limit_in_bytes (the memory limit for the group), memory.usage_in_bytes (its current usage), memory.use_hierarchy (whether descendants' usage is accounted to the parent), and memory.oom_control (out-of-memory control and notification).
My patch set adds another control file to this set: memory.oom_reserve_in_bytes.
The limit of the root memcg is infinite so that processes attached to it may charge as much memory as possible from the kernel.
When memory.use_hierarchy is enabled, the usage, limit, and reserves of descendant memcgs are accounted to the parent as well. This allows a memcg to overcommit its resources, an important aspect of memcg that we'll talk about later. If a memcg limits its usage to 512 MiB and has two child memcgs with limits of 512 MiB and 256 MiB each, for example, then the group as a whole is overcommitted.
When the usage of a memcg reaches its limit and the kernel cannot reclaim any memory from it or a descendant memcg, it is out of memory. By default, the kernel will kill the process attached to that memcg (or one of its descendant memcgs) that is using the most memory. It is possible to disable the kernel OOM killer by doing
echo 1 > memory.oom_control
in the relevant control directory. Now, when the memcg is out of memory, any process attempting to allocate memory will effectively deadlock unless memory is freed. This behavior may seem unhelpful, but that situation changes if user space has registered for a memcg OOM notification; to register for a notification when a memcg is out of memory, a process can use eventfd().
The process would then do something like:
uint64_t ret;
read(<fd of eventfd()>, &ret, sizeof(ret));
and this read() will block until the memory controller is out of memory. This will only wake up the process when it needs to react to an OOM condition, rather than requiring it to poll the out-of-memory state.
Unfortunately, there may not be much else that this process can do to respond to an OOM situation. If it has locked its text into memory with mlock() or mlockall(), or it is already resident in memory, it is now aware that the memory controller is out of memory. It can't do much of anything else, though, because most operations of interest require the allocation of more memory. If this process was a shell, for example, an attempt to run ps, ls, or even cat tasks would stall forever because no memory could be allocated. That leads to an obvious question: how is user space supposed to kill a process if it cannot even get a list of processes?
The goal of user-space out-of-memory handling is to transition that responsibility to user space so users can do anything they want under these conditions. This is only possible because of memory reserves. With my patchset, it's possible to use the new memory.oom_reserve_in_bytes to configure an amount of memory that the limit may be overcharged solely by processes that are registered for out-of-memory notifications. If you run:
echo 32M > memory.oom_reserve_in_bytes
then any process attached to this memcg that has registered for eventfd() notifications with memory.oom_control (including notifications from other memcgs) may overcharge the limit by 32 MiB. This allows user space to actually do something interesting: read a file, check memory usage, build a list of processes attached to out-of-memory memcgs, etc. The reserve should only need to be a few megabytes at most for these operations if the process is already locked in memory.
The user-space OOM handler does not necessarily need to kill a process. If it can free memory in other (usually creative) ways, no kill may be required. Or, it may simply want to create a record for examination later that includes the state of the memcg's memory, process memory, or statistics before re-enabling the kernel OOM killer with memory.oom_control. With a reserve, writing to memory.oom_control will actually work.
The memcg remains out of memory until the user-space OOM handler frees some memory (including the memory taken from the reserve), re-enables the kernel out-of-memory killer, or makes memory available by other means.
One possible "other means" would be to increase the memcg limit. If top-level memcgs represent individual jobs running on a machine, it's usually advantageous to set the memcg limit for each to be less than the full reservation for that job. The kernel will aggressively try to reclaim memory and push the memcg's usage below its limit before finally declaring it to be out of memory as a last resort. Then, and only then, can systems software increase the limit of the memcg if there is memory available on the system. Don't worry: this job would become the first process killed if the system is out of memory and there is a system-wide user-space OOM handler, which we'll describe next.
It's important that the out-of-memory reserve is configured appropriately for the user-space OOM handler. If an OOM handler deals with out-of-memory conditions in other memcgs, the memcg that the handler is attached to is the one that gets overcharged. If more than one user-space OOM handler is attached, then memory.oom_reserve_in_bytes must be sized for the maximum amount of memory that those handlers may allocate.
If the entire system is out of memory, then no amount of memory reserve granted by a memcg, including the root memcg, will allow a process to allocate more. In this case, it isn't the charge to the memcg that is failing but rather the allocation from the kernel.
Handling this situation requires a different type of reserve implementation in the kernel: an amount of memory set aside by the memory-management subsystem that allows user-space out-of-memory handlers to allocate in system OOM conditions when nobody else can. A per-zone reserve is nothing new: the min_free_kbytes sysctl knob has existed for years; it ensures that some small amount of memory is always free so that important allocations, such as those that are needed for reclaim or are required by exiting processes to free their own memory, will succeed. The user-space OOM handling reserve is simply a small subset of the min_free_kbytes reserve for system out-of-memory conditions.
The reserve would be pointless, however, if the kernel out-of-memory killer stepped in and killed something itself. Without the patchset, the OOM killer cannot be disabled for the entire system; the patchset makes it possible to disable the system OOM killer just like you can disable the OOM killer for a memcg. This is done via the same interface, memory.oom_control, in the root memcg.
Access to the reserve is implemented immediately before the kernel out-of-memory killer is called. We do a check with a new per-process flag, PF_OOM_HANDLER, to determine whether the process is waiting on an OOM notification. If it is, and the process is attached to the root memcg, then the kernel will try to allocate below the per-zone minimum watermarks. If the reserve is configured correctly, this effectively guarantees memory to be available for user space to handle the condition. Since the per-process flag is checked only in the page allocator's slow path, there is no performance downside to this feature: it is simply one more conditional for processes that aren't handling an out-of-memory condition.
An important aspect of this design is that the interface for handling system out-of-memory conditions and handling memory controller out-of-memory conditions is the same. User space should not need to have a different implementation depending on whether it is running on a system unconstrained by memcg or whether it's attached to a child memcg. The user-space OOM handler does not need to be changed in any way: if it's attached to the root memcg, it will handle system out-of-memory conditions and if it's attached to a descendant memcg, it will handle memcg out-of-memory conditions.
Earlier, we talked about the hierarchical nature of memcg and how it's possible to overcommit memory in child memcgs. This is the same at the top level: it's possible for the sum of the memcg limits of all of the root memcg's immediate children to exceed the amount of system memory. In this case, the memcg out-of-memory reserve is useless for handling system OOM conditions: since the memcg has not reached its limit, the charge would succeed, but the allocation fails first.
In configurations such as this, the system-level OOM killer may want to do priority-based killing. Rather than simply killing the process using the most memory on the system, which is the heuristic used by the kernel OOM killer, it may want to sacrifice the lowest priority process depending on business goals or deadlines. Top-level memcgs represent individual jobs running on the machine each with their own limit and priority. Given a memory reserve, it's trivial to kill a process from within the lowest priority memcg. This is exactly what Google wants to do.
It is also possible to give those jobs the same type of control. If system-level software does a chown so that the memcg control files (except for the limit or reserve, of course) are configurable by the job attached at the top level, then the job may create its own child memcgs and enforce its own out-of-memory policy. It may even overcommit its child memcgs so that the sum of their limits exceeds its own limit. When the job's limit is reached and the out-of-memory notification is sent, it may effect a policy as if it were a system out-of-memory condition: the interface is exactly the same. In this way, each top-level memcg is a virtualized environment in which all available memory is bounded by its limit.
When this idea has been proposed in the past, there has been some controversy as to whether the kernel really wants to start making a commitment to adding another memory reserve to the kernel. Some have suggested that it is a large maintenance burden to support such a feature and that the number of people who actually want to use it outside of Google is very small.
Google depends on user-space out-of-memory handling to effect a policy beyond the kernel default of selecting the largest process and killing it. The choice matters, especially for one of the most aggressive policies you'll find in Linux: the immediate termination of a process that hasn't necessarily done anything wrong. I believe that enabling something that is otherwise extremely difficult or impossible to achieve, and empowering users to control something as important as process termination, is worthwhile.
In the future, it will be possible to release a library that handles all of the above implementation details behind the scenes and allows users to implement their own HandleOOM() function. Such a library could also provide implementations for some of the common actions that a user-space OOM handler may do: read the list of processes on the system or attached to an OOM memory controller, check the memory usage of a particular process, etc.
It is also possible to replace the disabling of the OOM killer, either memcg or system OOM killer, with an out-of-memory delay. For example, another memcg control file could be added to allow user space a certain amount of time to respond and make memory available before the kernel steps in itself. This could be useful if the user-space OOM handler is buggy or has allocated more than its reserve allows; adding a delay isn't as dangerous as disabling the system OOM killer outright.
This would also allow your favorite Linux distribution to ship with an out-of-memory handler that could pop up a window and allow the user to select how to proceed or to diagnose the issue further. That could save the important document or presentation you've been working hard on, rather than having something killed out from under you because of the MP3 you just started playing.
User-space out-of-memory handling is a powerful tool that will give users and administrators more flexibility in controlling their systems and keep important processes running when they need to be. I've described some motivations for Google; others may use it for something completely different. The functionality can be used by anyone for their own needs exactly because the power is in user space.
Page editor: Jonathan Corbet
CAcert is an SSL/TLS certificate authority (CA) that seeks to be community driven and to provide certificates for free (gratis), which stands in sharp contrast to the other existing CAs. But, in order for CAcert-signed certificates to be accepted by web browsers and other TLS-using applications, the CAcert root certificate must be included in the "trusted certificate store" that operating systems use to determine which CAs to trust. For the most part, CAcert has found it difficult to get included in the distribution-supplied trusted root stores; the discussion in a recently closed Debian bug highlights the problem.
Debian has been distributing the CAcert root since 2005, when it was added to the ca-certificates package. That has ended with the removal of the certificates from the package by maintainer Michael Shuler in mid-March. That was in response to a bug filed in July 2013 asking for the removal of the CAcert root certificates for a variety of reasons, but mostly because the organization has not passed an audit of its practices. As one might guess, there are a number of different viewpoints regarding the validity and trustworthiness of CAcert-signed certificates; Debian community members were not shy about expressing them.
At the time CAcert was added, the inclusion of certificate roots was done on an ad hoc basis where popularity and "advocating votes from project members" played a role, according to Thijs Kinkhorst. That has changed to follow whatever Mozilla is doing with respect to which root certificates to include. CAcert itself withdrew its Mozilla inclusion request back in 2007, awaiting the results of CAcert's long-stalled internal audit.
Under most criteria, CAcert fails to provide enough assurance that its processes are secure enough to merit inclusion. In addition, the code it uses to manage certificates (which is open source) has some serious problems, as reported by Ansgar Burchardt. But CAcert is different from other CAs in fundamental ways that make including its root certificates attractive. As Kinkhorst put it: "CAcert is a bit of a special case because it's the only real community CA, and in that sense very different from the other CA's, and in that sense also close at heart to the way Debian operates." But even he was unsatisfied with the security of CAcert.
As Geoffrey Thomas pointed out, other CAs offer gratis certificates (GlobalSign for open source projects, StartCom for anyone), which moots the argument that CAcert is the only gratis provider, to some extent anyway. But Alessandro Vesely was not convinced.
Vesely was referring to CAcert's organizational structure, and to the fact that it releases its code under the GPL, when he called it "free as in free speech". CAcert certainly has a different philosophy than most other CAs, which is reflected in the goodwill that many in the free software world are willing to grant the organization.
Given that few other distributions (or any major browser vendors) include the CAcert root certificates, Debian's decision to do so doesn't really help, as several pointed out in the bug. If developers get CAcert certificates for their sites and test them from Debian only, they will get a false sense of what their users will see (i.e. the developers won't see the invalid certificate warnings that will pop up for users). The fact that Debian ships CAcert roots can be seen as something of an endorsement of CAcert, which might be intended, but also of its security practices, which probably isn't. But, as Vesely and others pointed out, the other CAs don't have spotless security records; furthermore we can't even see their code to find the kinds of problems Burchardt reported.
Shuler's announcement that the CAcert roots had been removed was met with a number of objections. Christoph Anton Mitterer complained that there was something of a double standard being applied since there are other "doubtful CAs" included in the ca-certificates package. In fact, that package is essentially just the Mozilla-distributed root store with one addition: the Software in the Public Interest (SPI) root certificate—because SPI runs some of the Debian infrastructure.
Axel Beckert suggested adding the CAcert roots back into the package, but disabling them by default. It had come up earlier in the discussion too. The ca-certificates package is a secure way for Debian users who do want those root certificates to get them. Removing them requires those users to find another path.
But Thomas R. Koll was quite supportive of the removal, and fairly dismissive of the arguments against it.
Daniel Kahn Gillmor doesn't see the issue as so clear-cut. While there are criteria that Mozilla uses to exclude some CAs, they aren't necessarily strictly applied to all:
This tension results in further concentration of business among the "too big to fail" CAs (since they're the only ones who can issue acceptable certs), which ironically results in them being even less accountable to relying parties in the future.
This is not a good long-term dynamic.
He is also skeptical of including the SPI root certificate simply because SPI runs some of the Debian infrastructure. In fact, Gillmor said, that's a good reason not to include it, as its presence makes it harder to switch away from the Debian infrastructure in the event that it gets compromised (or the user is being targeted by someone in charge of Debian infrastructure). "With SPI's root cert, stopping software updates or varying my choice of debian mirror does *not* defend me against malicious use of the CA, and an attack can be much more narrowly tailored and hard to detect."
There are plans to move from certificates signed by the SPI root to those from another CA, Gandi, but Mitterer, at least, is not fond of that plan. It just moves the problem from SPI to Gandi, he said. He suggested that Debian should run its own CA.
While there was a fair amount of support for shipping, but not enabling, the CAcert root certificates, that has not happened, at least yet. As most would agree, the CA system that we have is largely broken in multiple ways, so, to some at least, arbitrarily deciding that CAcert is "insecure" is a bit of a stretch. On the other hand, there is a Mozilla policy that, if followed, would allow the CAcert root into the Mozilla root store (and thus, likely, back into the Debian package), but CAcert has been unable to complete the process for financial or logistical reasons. For now, though, Debian users that want to include CAcert in their root store are on their own.
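For those on their own, Debian does at least provide a supported mechanism for trusting additional roots locally. A rough sketch (the download URL is illustrative, and the certificate's fingerprint should be verified out of band before trusting it):

```shell
# Install a locally trusted CA root on a Debian system (run as root).
# The URL below is illustrative; verify the fingerprint before use.
wget -O /usr/local/share/ca-certificates/cacert-root.crt \
    http://www.cacert.org/certs/root.crt
update-ca-certificates    # rebuilds /etc/ssl/certs with the new root
```

Certificates dropped into /usr/local/share/ca-certificates are picked up by update-ca-certificates without touching the distribution-managed store, so they survive package upgrades.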
For the most part, other distributions have not picked up CAcert either. Ubuntu followed the Debian lead for a while and continued that by recently removing the CAcert roots. Perhaps the most significant distributions that include CAcert roots are Mandriva, Arch Linux, Gentoo, and OpenBSD. Some are either descendants of Debian or use the Debian package, so that may change in light of Debian's removal. The best way forward for CAcert would seem to be completing the audit and getting included by Mozilla, but even that doesn't solve the whole problem. One guesses that Microsoft, Google, and Apple might be harder nuts to crack.
The proposer of the Debian general resolution on "freedom of choice in init systems" has now withdrawn that proposal. "I said that if I'd not received enough seconds by today that I would withdraw this GR proposal. Despite one person emailing me off-list to urge me to continue, I think it's important to do what I said I would do, so I hereby withdraw this GR proposal." The five sponsors needed to bring the GR to a vote never appeared; it seems that the Debian community has had enough of this discussion and is ready to move on.
Page editor: Rebecca Sobol
Although it has not yet been officially released, Git 2.0.0 has appeared in the Git project repository. The upcoming milestone release will bring with it some changes that could significantly affect users' workflows.
The most widely discussed change in 2.0.0 is a new default behavior for
git push <remotelocation>
when no branch is supplied as an argument. In previous releases, Git would push all of the local branches whose names matched an existing branch on <remotelocation>. This behavior is called the "matching" semantics.
In 2.0.0, however, the default behavior becomes the "simple" semantics instead. Under this scheme,
git push <remotelocation>
pushes only the current branch to the remote branch with the same name. Furthermore, if <remotelocation> is the same location that the local repository is fetched from, then the aforementioned push command will only push the current branch if it is tracking that remote branch. There is, however, a configuration option available for anyone who wishes to return to the old semantics: one must simply set the push.default variable to matching. This change does not expose new functionality, of course; its importance is primarily that the "simple" behavior seems to be what most users expect: a push with no branch argument pushes the current branch, not every branch.
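The difference is easy to see in a scratch repository pair; a hedged sketch (all paths and branch names here are invented for the example):

```shell
set -e
tmp=$(mktemp -d)                       # throwaway repositories
git init -q --bare "$tmp/remote.git"
git clone -q "$tmp/remote.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git config user.email you@example.com
git config user.name "You"
git commit -q --allow-empty -m "first"
git checkout -q -b topic               # a second local branch
git push -q -u origin HEAD             # give "topic" an upstream
git config push.default simple         # the new 2.0 default
git push origin                        # pushes only "topic"
git config push.default matching       # opt back into the old semantics
```

Under matching, the final push would consider every local branch whose name already exists on origin; under simple, only the current branch is considered.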
The git add command received some changes as well. In previous releases, when git add -u (which stages only changed files that are already in the index) or git add -A (which also stages files that are new in the working tree) were run inside a subdirectory and no path argument was given, they would operate only on the subdirectory. Starting with 2.0.0, both commands operate on the entire directory tree. The old behavior can be accomplished, however, with
git add -u .
git add -A .
The major benefit to changing this default behavior is consistency; in 2.0.0, add and commit behave the same way when there is no path argument.
git add path is also changing how it behaves regarding paths that have been removed. In previous releases, Git would ignore these removals; starting in 2.0.0, it will instead notice and record them. As with the other changes, there is a backward-compatibility workaround: add the --ignore-removal switch to emulate the old behavior.
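A quick scratch-repository illustration of both behaviors (all paths are invented for the example):

```shell
set -e
d=$(mktemp -d) && cd "$d"
git init -q
git config user.email you@example.com
git config user.name "You"
mkdir sub && touch sub/a sub/b
git add sub && git commit -q -m "initial"
rm sub/b
git add sub                    # 2.0: the removal of sub/b is staged
git status --short             # shows "D  sub/b"
git reset -q                   # undo, then try the old behavior
git add --ignore-removal sub   # the removal is ignored, nothing staged
```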
The -q option for git diff-files is being removed; it was evidently misinterpreted as "quiet" by some users, while its actual purpose, ignoring deletions, is also available via the --diff-filter=d switch.
Several of the commit-related commands will have bolstered support for GPG signatures in 2.0.0. The pull and rebase commands will accept the --gpg-sign flag on the command line. As a result, integrators can more easily sign the commits that are created when acting on pull requests, which is important for auditability. The commit command (which already understands --gpg-sign) will be configurable to always sign commits, by setting the commit.gpgsign configuration variable to true.
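In configuration terms, that looks something like the following sketch (the key ID is a made-up placeholder, and these commands assume an already-working GPG setup and a repository with a suitable remote):

```shell
git config user.signingkey "1234ABCD"   # hypothetical GPG key ID
git config commit.gpgsign true          # new in 2.0: sign every commit
git pull --gpg-sign origin topic        # sign the merge made by a pull
git rebase --gpg-sign master            # sign the commits a rebase rewrites
```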
There are also several new options for other common commands. git pull gains a new configuration variable: setting pull.ff to only makes pull refuse anything but a fast-forward merge, so a pull can no longer silently create an unexpected merge commit. That has been a long-requested feature. In addition, git config will be able to read from standard input when - is given as its --file parameter.
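Both additions are small enough to show in one sketch (throwaway repository, invented values):

```shell
set -e
d=$(mktemp -d) && cd "$d" && git init -q
git config pull.ff only     # refuse any pull that is not a fast-forward
git config pull.ff          # prints: only
# New in 2.0: "git config --file -" reads configuration from stdin.
printf '[user]\n\tname = Example\n' | git config --file - --get user.name
# prints: Example
```

Reading configuration from standard input is mostly useful for scripts that generate config fragments on the fly and want to query them without writing a temporary file.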
Naturally, there are a fair number of bug fixes in the 2.0.0 release as well, plus some under-the-hood changes, including performance improvements when serving objects from repositories that use bitmap indexes and optimizations to how git log --cc (which attempts to omit "uninteresting" hunks of a diff) displays diffs against multiple parents.
For the most part, however, Git 2.0.0 looks poised to be yet another steady, incremental improvement to the popular source-management tool. Much like Linux 3.0, the 2.0.0 version number does not signify a major API/ABI break or a particularly special new feature; it is simply a milestone along the way for a stable project. Changing options and defaults in such an established utility can be disruptive; fortunately, the Git development team continues to plan for these changes well in advance and to provide alternate options and workarounds to make the transition as smooth as it can be.
After several years of development, version 1.0.0 of the 3D character-modeling studio MakeHuman has been released. MakeHuman is often used to generate character models for games, animations, and simulations, and exports to a wide variety of 3D formats. Features include a character-rigging library, GUI design tools, and pre-set morphing parameters for common use cases.
Version 1.4.0 of the graphical file manager GNOME Commander has been released. Highlights include support for tabs, a reworked bookmarking system, a new file properties dialog, enhanced filename matching, and much more.

Python 3.4 has been released, bringing the new features we looked at recently. The "What's new in Python 3.4" page looks at the changes in even greater detail. Beyond the new features, there were also "hundreds of small improvements and bug fixes". You can get Python 3.4 from the download page or from distribution repositories before too long.
Steinar H. Gunderson has released version 1.0 of Movit, his GPU filter library. The filter set includes many common video effects, such as blur, sharpen, diffusion, convolution, glow, color-correction, overlay, and rescaling. As Gunderson notes in the README file: "Yes, that's a short list. But they all look great, are fast and don't give you any nasty surprises."
Version 1.2.0 releases of three GStreamer libraries are now available: gst-python, GNonLin, and GStreamer Editing Services (GES). All represent the first stable releases of the new 1.2.x series. The gst-python release, notably, adds support for Python 3.3. The GES and GNonLin releases do not introduce major new features, but both incorporate a large number of important bug fixes.
Version 2.0 of the repmgr tool suite for managing PostgreSQL database clusters has been released. Among the many new features are experimental support for auto-failover, support for daemonizing repmgr itself, the ability to detect master failures, and many new tunables and configuration parameters.
Newsletters and articles
At his blog, Florian Scholz has posted a write-up of the tools used by the Mozilla Developer Network (MDN) team to track the status and freshness of MDN documentation. "If you look at a content section on MDN, you can definitely identify more "health indicators" than just the dev-doc-needed bug list. To make the state of the documentation visible, we started to build documentation status pages for sections on MDN." The tools are built into the MDN site itself, and track (among other factors) tagging, editorial and technical review, documentation requests, and translation status.
Page editor: Nathan Willis
Calls for Presentations

"Flock was held last year for the first time in Charleston, SC, as a combined event replacing the former North America and Europe FUDCons. Unlike those barcamp-style events, Flock is a planned conference with talk submissions voted on by the Fedora community. It will alternate between North America and Europe each year." The call for proposals is open until April 3.
|Deadline||Event dates||Event||Location|
|March 21||April 26||LinuxFest Northwest 2014||Bellingham, WA, USA|
|March 31||July 18||GNU Tools Cauldron 2014||Cambridge, England, UK|
|March 31||September 15||GNU Radio Conference||Washington, DC, USA|
|March 31||June 2||Tizen Developer Conference 2014||San Francisco, CA, USA|
|March 31||April 25||openSUSE Conference 2014||Dubrovnik, Croatia|
|April 3||August 6||Flock||Prague, Czech Republic|
|April 4||June 24||Open Source Bridge||Portland, OR, USA|
|April 5||June 13||Texas Linux Fest 2014||Austin, TX, USA|
|April 7||June 9||DockerCon||San Francisco, CA, USA|
|April 14||May 24||MojoConf 2014||Oslo, Norway|
|April 17||July 9||PGDay UK||near Milton Keynes, UK|
|April 17||July 8||CHAR(14)||near Milton Keynes, UK|
|April 18||November 9||Large Installation System Administration||Seattle, WA, USA|
|April 18||June 23||LF Enterprise End User Summit||New York, NY, USA|
|April 24||October 6||Operating Systems Design and Implementation||Broomfield, CO, USA|
|April 25||August 1||PyCon Australia||Brisbane, Australia|
|April 25||August 18||7th Workshop on Cyber Security Experimentation and Test||San Diego, CA, USA|
|May 1||July 14||2014 Ottawa Linux Symposium||Ottawa, Canada|
|May 1||May 12||Wireless Battle Mesh v7||Leipzig, Germany|
|May 2||August 20||LinuxCon North America||Chicago, IL, USA|
|May 2||August 20||CloudOpen North America||Chicago, IL, USA|
|May 3||May 17||Debian/Ubuntu Community Conference - Italia||Cesena, Italy|
|May 4||July 26||Gnome Users and Developers Annual Conference||Strasbourg, France|
|May 9||June 10||Distro Recipes 2014 - canceled||Paris, France|
|May 12||July 19||Conference for Open Source Coders, Users and Promoters||Taipei, Taiwan|
|May 18||September 6||Akademy 2014||Brno, Czech Republic|
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events

A Python game-programming challenge "invites entrants to write a game in one week from scratch either as an individual or in a team". Entries must be developed in Python. Pre-registration will open April 11.

A FUDCon "will be the first premier event in the Fedora.next phase, spreading innovative ideas and helping make Fedora better than ever. FUDCon is a combination of sessions, talks, workshops, and hackfests in which contributors work on specific initiatives."

"AdaCamp is a conference dedicated to increasing women's participation in open technology and culture: open source software, Wikipedia-related projects, open data, open geo, library technology, fan fiction, remix culture, and more. AdaCamp brings women together to build community, share skills, discuss problems with open tech/culture communities that affect women, and find ways to address them."
|FLOSS UK 'DEVOPS'||Brighton, England, UK|
|March 20||Nordic PostgreSQL Day 2014||Stockholm, Sweden|
|March 21||Bacula Users & Partners Conference||Berlin, Germany|
|March 22||Linux Info Tag||Augsburg, Germany|
|LibrePlanet 2014||Cambridge, MA, USA|
|March 24||Free Software Foundation's seminar on GPL Enforcement and Legal Ethics||Boston, MA, USA|
|Linux Storage Filesystem & MM Summit||Napa Valley, CA, USA|
|16. Deutscher Perl-Workshop 2014||Hannover, Germany|
|Collaboration Summit||Napa Valley, CA, USA|
|March 29||Hong Kong Open Source Conference 2014||Hong Kong, Hong Kong|
|FreeDesktop Summit||Nuremberg, Germany|
|Networked Systems Design and Implementation||Seattle, WA, USA|
|Libre Graphics Meeting 2014||Leipzig, Germany|
|April 3||Open Source, Open Standards||London, UK|
|4th European LLVM Conference 2014||Edinburgh, Scotland, UK|
|ApacheCon 2014||Denver, CO, USA|
|Lustre User Group Conference||Miami, FL, USA|
|Open Source Data Center Conference||Berlin, Germany|
|PyCon 2014||Montreal, Canada|
|April 11||Puppet Camp Berlin||Berlin, Germany|
|State of the Map US 2014||Washington, DC, USA|
|Red Hat Summit||San Francisco, CA, USA|
|openSUSE Conference 2014||Dubrovnik, Croatia|
|LinuxFest Northwest 2014||Bellingham, WA, USA|
|Embedded Linux Conference||San Jose, CA, USA|
|Android Builders Summit||San Jose, CA, USA|
|Linux Audio Conference 2014||Karlsruhe, Germany|
|LOPSA-EAST 2014||New Brunswick, NJ, USA|
|Wireless Battle Mesh v7||Leipzig, Germany|
|OpenStack Summit||Atlanta, GA, USA|
|Samba eXPerience||Göttingen, Germany|
|ScilabTEC 2014||Paris, France|
|May 17||Debian/Ubuntu Community Conference - Italia||Cesena, Italy|
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
Copyright © 2014, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds