LWN.net Weekly Edition for January 14, 2016
The final act for Mozilla's Persona
Mozilla has announced that it will close down its Persona.org identity service in November 2016. The browser maker stopped developing the Persona software in 2014, citing low adoption, but has maintained Persona.org as a public service. With the announcement that the service will be discontinued, the question arose as to whether or not the software could survive as an independent, community-driven project. Questions also arose as to why Persona failed to take off, and whether Mozilla should have managed the project differently.
Persona is a sign-in system for web sites in which the responsibility for authenticating a user login attempt is handed to an email provider. In theory, the user enters only their email address (e.g., user@example.com) on the web site; that site then performs a handshake with a process running on the domain portion of the email address (example.com). The user proves to the mail server that the address is theirs by logging into their email account, at which point the email server returns a token to the web site that concludes the authentication process.
The scheme offers potential benefits to a number of parties. The site owner does not have to implement a login system from scratch and is able to get by with storing only the user's email address (which, in addition to being simple, prevents lock-in). The user can re-use their email address on any number of sites without having to create new accounts (and passwords) for every site. The whole process could be decentralized; users and site maintainers could therefore stop handing authentication over to the big proprietary social-media networks.
But theory rarely lives up to reality, and Mozilla found it difficult to persuade email providers to run the mail-server side of the authentication service. The Persona.org site was created as a stop-gap; if a user's email provider did not natively support the Persona authentication scheme, the user could verify that they had access to the email address through Persona.org. The Persona.org authentication flow, though, was not part of the Persona scheme itself. Instead, the Persona.org server worked by sending an email containing a challenge to the user's address. Clicking on the link inside the email verified that the user had access, at which point Persona.org completed the login transaction with the originating web site.
Disconnect
Sadly, because Mozilla never succeeded in convincing major email services to implement their own Persona authentication service, Persona itself became a scheme that relied almost entirely on the Persona.org site—which undercut the goal of making Persona a decentralized protocol. As a result, Persona.org was just one of many third-party authentication options—and a much smaller one than Facebook, Google, Twitter, and the like.
By March 2014, Mozilla decided that the writing was on the wall: without email-provider support, there was not sufficient interest in adding Persona support among web-site proprietors either. That made Persona unlikely to make a meaningful dent in the login-service space dominated by the social-media companies. So Mozilla stopped working on the code (or, as the official announcement put it, "transitioned Persona to community ownership") and moved the Persona developers over to work on a revamped Firefox Sync. But Persona.org remained active even after development wound down.
The Persona.org shutdown announcement was sent out on January 12 by Mozilla's Ryan Kelly. According to Kelly, the Persona.org site will finally be decommissioned on November 30, 2016. Afterward, Mozilla will destroy all user data stored on the servers, and will retain the persona.org domain name indefinitely, but with no services running on it (presumably to prevent a malicious third party from taking control of the domain and hijacking any lingering Persona transactions). Between now and then, Mozilla will continue to apply security updates to the Persona.org servers and will keep the mailing-list and IRC support channels functioning as normal.
The announcement says that the decision to shutter the service was
due to "low, declining usage", but Mozilla has still published a
transition guide to help the remaining users migrate their sites to a
new authentication provider before the shutdown occurs. The suggested
replacement systems include the rather obvious
options of using another identity provider (like Google or Facebook)
or self-hosting an authentication system. But the suggestions also
point out that there are other authentication systems that, like
Persona, rely solely on the user's email address to establish their
identity. For example, there is Passwordless, a Node.js
middleware that emails per-session login tokens to the user's
address—much like the authentication flow of the Persona.org site.
The adoption problem
No doubt Persona has far fewer adopters than the Facebook or Google
authentication systems, but some in the development community contend
that Mozilla failed to give Persona enough time to grow a user base. In December 2015, Stavros Korokithakis criticized
the short amount of time that the Persona team was given to develop
and deploy the system—a little under two years. Along the way, he
quoted Persona developer Dan Callahan, who reported that the team was
taken by surprise by requests to show adoption numbers.
The need to give a new protocol adequate time to gain acceptance
was also a theme in the Hacker News (HN) discussion
thread about the November-shutdown news, where Jan Wrobel made the
same point.
Others lamented that Mozilla did not make a concerted push to have
Persona established as a formal specification, or that the client
side of Persona in Firefox was implemented as a JavaScript shim
rather than natively in the browser. For many, however, the situation was
similar to the one seen with OpenID: large web service providers have
a vested interest in running their own centralized identity solutions, and
without a large userbase to rival Google or Facebook's, any
authentication scheme promoted by a small non-profit organization
stands little chance of success.
Persons of interest
Mozilla's shutdown does not necessarily spell the end for the
underlying Persona concept, of course. When news of the shutdown broke, Korokithakis was among those in
the HN thread who advocated taking the Persona code and developing it
further. The interested parties eventually pooled their resources and
formed
a GitHub group named Let's
Auth. The group has put together a roadmap,
which notes a desire to not have a single point of failure akin to
Persona.org as well as the importance of implementing native browser
support. The roadmap also highlights the importance of getting an
existing web-framework project (such as WordPress or Rails) on board.
The plan seems to be a move away from directly picking up where
Persona development left off and, instead, stripping the idea down to
basics and reimplementing what is necessary. It may be a wise choice;
Callahan weighed in on the revival effort, saying that "I'd strongly
suggest learning from Persona's design rather than directly
re-hosting the code."
In its own post-mortem
analysis, Mozilla noted many of the same issues raised in the HN
thread and by the Let's Auth project. It also pointed out that Persona
suffered feature creep, implementing session-management and
attribute-exchange features that distracted from the core
authentication function. If the attempt to reboot Persona outside of
Mozilla takes those lessons to heart, perhaps there is still a future
for the project's decentralized authentication concept.
Good intentions and lessons learned do not guarantee that a revival
effort will succeed, but it is nice to see interest in evolving the
concept of Persona further. As several people have pointed out, one
lingering gift that Persona gave to web developers was a simple exit
strategy. All of the site maintainers abandoned by the Persona.org
shutdown will still have their users' email addresses, so they can
easily move to a new authentication solution. Such would not be the
case had they chosen instead to delegate authentication to a
proprietary web-service provider.
Emergency app functionality with PanicKit
The past few years have seen a flurry of development effort
directed at building secure and anonymizing apps for smartphones; one
can run Tor on an Android device and has a choice of multiple
encrypted messaging solutions. But, until recently, there has been
comparatively little work in making mobile devices react in emergency
situations—to, say, lock down or wipe the device's storage clean
of sensitive information or to sound an alarm that something untoward
has happened to the user. Now the Guardian Project has proposed
an open-source framework to make "panic button" features available in
every Android app.
The Guardian Project is a non-commercial developer of mobile apps
with an emphasis on security and privacy. It is perhaps best known for the
Android Tor client Orbot and the
encrypted messaging app ChatSecure.
Those offerings are fairly straightforward fare for anyone wishing to
keep their communications private, but some of the project's other
work has offered additional features.
In 2012, it developed InTheClear,
an Android app that would securely wipe the device's storage when it
was activated. The Courier
news-reader incorporates a similar emergency-erase feature. So too
does the still-in-development CameraV app, which also has a built-in ability to disguise
the app's launcher icon. Similar ideas are found elsewhere, such as
in the Panic Button app developed
by Amnesty International with the goal of
providing human rights activists with a "panic button" to clear out
their phones in the event that they were arrested or otherwise placed
in harm's way.
Now the Guardian Project has developed an Android library that will
allow any app to respond to "panic" situations—by locking the
app, erasing its data, hiding the launcher icon, or any other appropriate
action. Called PanicKit, the system works by having each compatible
app accept an ACTION_TRIGGER Intent from a separate "panic
button" app. Thus, the user can activate a single panic button and have
every configured app respond automatically.
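To make that concrete, a compatible app essentially needs an activity (or receiver) that handles the trigger action. The following Java sketch is hypothetical: the action string follows the PanicKit documentation, while the class name and the response are made up for illustration; a real app would use the PanicKit library's helper classes and let the user configure the response.

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import java.io.File;

// Hypothetical PanicKit responder: clears the app's cache when the
// panic trigger Intent arrives, then exits quietly.
public class PanicResponderActivity extends Activity {
    static final String ACTION_TRIGGER =
            "info.guardianproject.panic.action.TRIGGER";

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        Intent intent = getIntent();
        if (intent != null && ACTION_TRIGGER.equals(intent.getAction())) {
            // Non-destructive default response: throw away cached data.
            deleteRecursively(getCacheDir());
        }
        finish();
    }

    // Remove a directory tree; File.delete() does not recurse on its own.
    private static void deleteRecursively(File f) {
        File[] children = f.listFiles();
        if (children != null)
            for (File c : children)
                deleteRecursively(c);
        f.delete();
    }
}

The activity would also need a matching intent-filter for that action in the app's manifest, so that a trigger app can discover it and fire the Intent.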
How an app responds to the panic trigger can vary, and could be
non-destructive or destructive (or perhaps provide the user with
several options). The Guardian Project's blog post recommends that
non-destructive responses (such as erasing caches or locking apps) be
the default, though it notes that more serious measures may be what
the user wants.
The scheme thus enables any interested developer to add "panic
button" support to an app, and it allows the user to choose from
potentially a multitude of possible "panic button" apps. The Guardian
Project has released one such app, Ripple, and PanicKit support has
been added to Amnesty International's Panic Button app. In addition,
the Guardian Project has released a non-functional demonstration app
called FakePanicButton.
Both Ripple and Panic Button take
essentially the same UI/UX approach: the user can trigger a panic
signal by opening the app and tapping an on-screen button. But, as
the blog post notes, there are other possible ways one could trigger a
panic signal—a "geo-fence" trigger that sends the panic signal
if the phone enters a dangerous area, detecting the proximity of a designated Bluetooth
or NFC "button," or even a "dead man's switch" that issues the panic
signal if the user does not check in regularly.
PanicKit response support is available now in Orweb, InTheClear,
Courier, and several other Guardian Project apps. In addition,
several third-party apps have added PanicKit support or are in the
process of doing so, such as the chat client Zom and the Lightning
web browser. The responses implemented in these apps vary, from
erasing browser history or deleting data to sending pre-defined
messages to specific, trusted contacts.
The blog post points out in several places that "panic button"
situations are, naturally, times when the user is under considerable
stress. Consequently, the project is taking care to work out design
patterns and best practices to help avoid mishaps. The Ripple app,
for example, takes two steps to send out a panic signal, and provides
a five-second window during which the user can easily cancel the
operation.
If PanicKit becomes a popular feature, though, there is
also the risk that it could become too complex for its own good.
Right now, for instance, one can install both Ripple and Panic
Button. Since each app on the device must register to accept an
Intent, the user can configure some apps to respond to Ripple and others
to respond to Panic Button. Throw in geo-fence triggers and
dead man's switches, then multiply by the configurable options of each
panic-response app, and the user quickly ends up with a great deal to
configure.
Consequently, the Guardian Project has formed the Panic Initiative as a
collaboration space where interested developers can address open
questions about system integration, usability, and the like. The
PanicKit wiki
documents the project's design work and implementation progress so
far.
Perhaps most Android users will never have any occasion to need a
panic button, and no doubt it is a feature no one looks forward to
using. But if it proves popular, PanicKit could ease the minds of
users simply by making responding to panic situations an issue that
they can think about once in advance, rather than in the heat of the moment.
Sandboxing with Firejail
The idea of sandboxing applications has a certain appeal. By restricting a
program's access to various features and parts of the system that it
shouldn't need, any harm that
can come from a compromise can be reduced—often, substantially so. But
putting together the required pieces for a given application is a tedious
task, which is part of why projects like Firejail have been
started. Firejail uses namespaces, seccomp BPF, Linux
capabilities, and other kernel features to apply restrictions to
arbitrary programs, but it also
has profiles targeting popular applications.
One of the goals of the project is to make using sandboxes easy or, as the
Documentation page puts it: "There is no difficult in Firejail, at
least not SELinux-difficult." To that end, running Firefox in a
sandbox is done with a simple command:
$ firejail firefox
That command will launch Firefox inside of a sandbox with a whole list
of pre-configured restrictions. One can also modify the profile or
create a new one with different restrictions.
The firejail command is a setuid-root program that sets up namespaces, seccomp filters, and capabilities before executing the desired program. It is a C program that is available under the GPLv2; it also comes with profiles for more than 30 different applications. There is a Qt-based GUI, called Firetools, available as well.
Simply invoking firejail will start a shell using the generic profile. That profile will remove all capabilities, create a user namespace with only one user (the current user, thus typically no mapping to the root user outside the namespace), disallow any network protocols other than Unix sockets, IPv4, and IPv6 using seccomp, blacklist access to certain files (by mounting empty root-owned files or directories on them), and so on. Profiles can also use the include directive to reference other profiles. So there are some commonly used profiles that are included by the generic profile to restrict access to a large number of home directory files and directories (e.g. .bashrc, .emacs, .ssh), system files (e.g. /etc/shadow), and management utilities (e.g. mount, su). While those lists cannot be exhaustive (especially for various application-specific configuration directories in the home directory), users can add their own entries to the blacklist.
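For a sense of what a profile looks like, here is a brief hypothetical example. The directives are those described in the firejail-profile man page, but the file name and the specific entries are illustrative:

# ~/.config/firejail/myapp.profile (illustrative name and path)
# Pull in the shared blacklist of sensitive dotfiles and system files
include /etc/firejail/disable-common.inc
# Drop all Linux capabilities and enable the default seccomp filter
caps.drop all
seccomp
# Create a user namespace without a root user
noroot
# Application-specific additions to the blacklist
blacklist ${HOME}/.gnupg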
In addition, as described on the Basic Usage page, Firejail can be started with the --private option to replace the user's home directory with an empty one. It does that by mounting a tmpfs atop the home directory; the tmpfs will be destroyed when Firejail exits. Alternatively, users can specify a persistent directory (--private=~/my_sandbox_dir) to store sandbox data.
The default behavior for Firejail (when invoked without the generic profile using the --noprofile option) is to create new mount, PID, and UTS namespaces, but it can also be invoked (or configured) to use new network and user namespaces as well. If invoked with --net=eth0 option, for example, the network namespace will use the system's eth0 device with the macvlan driver to create a new network device inside the namespace that can communicate with the outside world. Bridging is also supported. The --net=none option will create a new network namespace without any devices, so processes cannot communicate outside of the namespace.
There is lots more to Firejail; the highlights of its feature set are outlined on the Features page. There is also plenty of documentation, ranging from man pages for firejail and firejail-profile (which describes the configuration options for profiles) to information on building custom profiles and filtering system calls using Firejail and seccomp. It is, in short, a rather comprehensive framework for applying a sandbox to applications.
Firejail is not restricted to GUI applications like web browsers, email readers, BitTorrent clients, media players, and the like. It also supports running server processes in sandboxes. This is where capabilities are likely to come more into play. As described on the Linux Capabilities Guide page, programs like web servers and other network services can be restricted to just a handful of capabilities that are needed to do their job (e.g. CAP_NET_BIND_SERVICE, CAP_SETUID). That will reduce what a compromise of those processes can accomplish (though Linux capabilities are known to have weaknesses).
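For instance, a server might be confined with something along these lines (a hypothetical invocation; the option syntax and the capability list should be checked against the firejail man page):

$ firejail --caps.keep=net_bind_service,setuid,setgid nginx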
Over time, the number of profiles available should grow and additions will likely be made to the existing generic profile and the other commonly included profiles. Obviously, getting those profiles "right" is an important piece of the puzzle. For the most part, it is a blacklist approach (though support for using whitelists of files is present), which may allow some important things to be unprotected. That said, it is clearly far better than simply running these applications with all of the access and privileges of the user. Root-level compromises are certainly terrible, but for most regular users, their crown jewels live in their home directory anyway, so a full compromise is not substantially worse.
The idea of Firejail came from the sandbox that Google uses for the rendering processes in its Chrome browser, but it goes much further than that. It uses many of the security and isolation technologies that have been added to the kernel over the last decade or so—including control groups for resource limiting the sandboxes. We have covered many of those technologies over that time, so it is nice to see them being used in ways that can help users protect themselves from attacks of various kinds. The next time you want to run an unknown new program or visit a dodgy web site, Firejail might be a good option to reduce the harm that might otherwise occur.
[ Thanks to Raphaël Rigo for giving us a heads up about Firejail. ]
Security
User namespaces + overlayfs = root privileges
The user namespaces feature is conceptually fairly straightforward—allow users to run as root in their own space, while limiting their privileges on the system outside that space—but the implementation has, perhaps unsurprisingly, proven to be quite tricky. There are some assumptions about user IDs and how they operate that are deeply wired into the kernel in various subsystems; shaking those out has taken some time, which led to some hesitation about enabling the feature in distribution kernels. But that reluctance has largely passed at this point, which makes the recent discovery of a root-privilege escalation using user namespaces and the overlay filesystem (overlayfs) that much more dangerous.
The basic idea, as described by "halfdog" in a blog post, is that a regular user can create new mount and user namespaces, mount an overlayfs inside them, and exploit a hole in the overlayfs implementation to create a setuid-root binary that can be run from outside the namespace. Effectively, a regular user can create a root-privileged binary and do whatever they want with it—thus it is a complete system compromise.
The exploit uses another property of namespaces that has always seemed like something of a bug: the /proc filesystem provides a route for processes outside of a namespace to "see" inside it. In this case, the overlayfs mounted inside the namespace can be accessed from outside of it by using information on the mounts inside the namespace via /proc/PID/cwd.
The exploit [C program] works like this:
- New mount and user namespaces are created for the process.
- That process then mounts an overlayfs atop /bin using temporary directories for the overlayfs "upperdir" and "workdir" directories. A writable overlayfs must have both of these directories; upperdir holds the files/directories that have been changed, while workdir is used as a work space to enable atomic overlayfs operations.
- The process inside the namespaces changes its working directory to the overlayfs, thus making it visible outside of the namespaces by way of /proc/PID/cwd.
- The process changes the su binary (in /bin) to be world-writable, but does not change the owner. That results in a new file being created in the upper overlay directory.
- A process outside of the namespaces writes anything it wants to that file without changing the setuid bit (more on that below).
- The outer process then runs that su binary, which executes with root privileges.
That seems reasonably straightforward, but there is one difficulty: writing to a root-owned, setuid-enabled file from a non-root process will remove the setuid bit, which defeats the whole thing. So a variant of another exploit (SetgidDirectoryPrivilegeEscalation) described by halfdog is used to trick the setuid-root mount program into writing an ELF binary to the file. Since mount is owned by root, the write doesn't remove setuid, resulting in a setuid-root program with contents controlled by a regular user (attacker).
Because the process inside the namespaces made the file world-writable, a regular user outside the namespaces can run mount with its stderr hooked to the file of interest (which, crucially, only requires an open() that doesn't revoke setuid). Then, when mount writes to the file, it does so as root, so it doesn't trigger the setuid revocation either. The file could not simply be written from inside the namespaces (which would be easier, since it would not require the /proc/PID/cwd dance) because the write would be performed as root inside the user namespace; that is not the same as root outside the namespace, so the setuid revocation would still occur.
Perhaps a final entry could be made to the list above: "Profit!".
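For readers who want to see the moving parts, the following C sketch shows roughly what the first three steps look like. It is deliberately not the exploit itself: error handling is omitted, /tmp/upper and /tmp/work are assumed to already exist, the details differ from halfdog's program, and the overlayfs mount only succeeds on kernels that allow overlayfs mounts inside an unprivileged user namespace (as some distribution kernels of the era did).

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <sys/mount.h>
#include <unistd.h>

/* Write a short string to a /proc file (no error handling). */
static void write_file(const char *path, const char *s)
{
	int fd = open(path, O_WRONLY);
	write(fd, s, strlen(s));
	close(fd);
}

int main(void)
{
	char map[32];

	/* Step 1: new user and mount namespaces. */
	unshare(CLONE_NEWUSER | CLONE_NEWNS);

	/* Map this user and group to root inside the user namespace;
	   setgroups must be disabled before gid_map can be written. */
	write_file("/proc/self/setgroups", "deny");
	snprintf(map, sizeof(map), "0 %d 1", getuid());
	write_file("/proc/self/uid_map", map);
	snprintf(map, sizeof(map), "0 %d 1", getgid());
	write_file("/proc/self/gid_map", map);

	/* Step 2: mount an overlayfs atop /bin, with the upper and
	   work directories in /tmp (they must already exist). */
	mount("overlay", "/bin", "overlay", 0,
	      "lowerdir=/bin,upperdir=/tmp/upper,workdir=/tmp/work");

	/* Step 3: make the overlay reachable from outside the
	   namespaces via /proc/PID/cwd, then wait. */
	chdir("/bin");
	pause();
	return 0;
}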
The fix is fairly simple; it was committed by Miklos Szeredi in early
December and was merged into 4.4-rc4 (without any mention of the
security implications). According to Al Viro's commit message,
overlayfs was "too enthusiastic about optimizing ->setattr() away". It
combined two operations that should have been done separately, which led to
the creation of the setuid-root file in the upper overlay filesystem. After
the fix it won't be possible to change su to be world-writable
while still retaining root ownership and the setuid bit in the overlay.
The second exploit that was used to write the file appears to not yet have been fixed. Halfdog's description points to some email messages discussing the problem and possible solutions with several kernel developers, but no real resolution is evident. Obviously, if there was no way to write the overlay file, this particular avenue for exploiting the overlayfs bug would not have worked, but it is unknown if there are more ways to skin that particular cat.
All of the myriad interactions between various kernel subsystems (capabilities, namespaces, security modules, filesystems, /proc, and so on), especially given the "no regressions for user space" policy, make these kinds of bugs pretty much inevitable. One could imagine tossing out a bunch of currently expected behavior to simplify the complexity of those interactions, but that is not going to happen, so these types of problems will crop up from time to time.
This episode is also a reminder of the ingenuity that can go into an exploit of this kind. Other user namespace exploits and, indeed, exploits of bugs in other programs and kernel features have shown similar levels of cleverness. Often stringing together a few seemingly low-severity vulnerabilities results in something that almost appears as if it has exceeded the sum of its parts. Of course, white hats are not the only ones with the required level of skill, which makes efforts like the Kernel self-protection project, as well as other analysis and hardening projects, that much more important.
Brief items
Security quotes of the week
Your computerized things are talking about you behind your back, and for the most part you can't stop them -- or even learn what they're saying.
That’s the shape of the solution: the future of the Internet of Things should involve constant sensing by devices of other devices, looking for evidence of badness, making reports up the chain to humans or other authorities to do something about it.
The devil is in the details: we don’t want a system that makes it easy for your prankish neighbors to make the police think you’re harboring a massive radio-disrupter, driving like a madman, or tailpipe-spewing more than the rest of the city combined. You don’t want your devices to be tricked into tripping spurious alarms every night at 2AM. We also need to have a robust debate about what kind of radio-energy, driving maneuvers, network traffic, and engine emissions are permissible, and who enforces the limits, and what the rule of law looks like for those guidelines.
Mozilla: Man-in-the-Middle Interfering with Increased Security
Mozilla has run into a hitch with its plans to deprecate SHA-1 certificates. "However, for Firefox users who are behind certain 'man-in-the-middle' devices (including some security scanners and antivirus products), this change removed their ability to access HTTPS web sites. When a user tries to connect to an HTTPS site, the man-in-the-middle device sends Firefox a new SHA-1 certificate instead of the server’s real certificate. Since Firefox rejects new SHA-1 certificates, it can’t connect to the server." An update backing out the SHA-1 deprecation has been posted, but affected users will have to install it manually (assuming they don't use a distribution-supported version, of course).
US military still SHAckled to outdated DoD PKI infrastructure (Netcraft)
Netcraft reports that the US Department of Defense (DoD) is still issuing SHA-1 signed certificates, and using them to secure connections to .mil websites. "The DoD is America's largest government agency, and is tasked with protecting the security of its country, which makes its continued reliance on SHA-1 particularly remarkable. Besides the well known security implications, this reliance could already prove problematic amongst the DoD's millions of employees. For instance, Mozilla Firefox 43 began rejecting all new SHA-1 certificates issued since 1 January 2016. When it encountered one of these certificates, the browser displayed an Untrusted Connection error, although this could be overridden. If DoD employees become accustomed to ignoring such errors, it could become much easier to carry out man-in-the-middle attacks against them."
New vulnerabilities
armagetron: two vulnerabilities
Package(s): armagetron
CVE #(s): (none)
Created: January 11, 2016
Updated: January 13, 2016
Description: From the Mageia advisory:
A practically exploitable bug was fixed in the network error handling. In client mode, any received packet that causes an exception during processing would terminate the connection to the server. Another theoretically exploitable bug was fixed that allowed very short UDP packets to cause a memory reading beyond the input buffer. Several non-exploitable crash bugs and one pathological camera behavior were also fixed.
bugzilla: multiple vulnerabilities
Package(s): bugzilla
CVE #(s): CVE-2015-8508 CVE-2015-8509
Created: January 8, 2016
Updated: January 13, 2016
Description: From the Bugzilla advisory:
During the generation of a dependency graph, the code for the HTML image map is generated locally if a local dot installation is used. With escaped HTML characters in a bug summary, it is possible to inject unfiltered HTML code in the map file which the CreateImagemap function generates. This could be used for a cross-site scripting attack. (CVE-2015-8508)
If an external HTML page contains a <script> element with its src attribute pointing to a buglist in CSV format, some web browsers incorrectly try to parse the CSV file as valid JavaScript code. As the buglist is generated based on the privileges of the user logged into Bugzilla, the external page could collect confidential data contained in the CSV file. (CVE-2015-8509)
dhcpcd: denial of service
Package(s): dhcpcd
CVE #(s): CVE-2016-1503 CVE-2016-1504
Created: January 11, 2016
Updated: June 20, 2016
Description: From the Arch Linux advisory:
- CVE-2016-1503 (denial of service): An issue has been discovered that can lead to a heap overflow via malformed dhcp responses later in print_option (via dhcp_envoption1) due to incorrect option length values.
- CVE-2016-1504 (denial of service): A malformed dhcp response can lead to an invalid read/crash leading to denial of service. A remote attacker is able to send specially crafted packets leading to application crash resulting in denial of service.
ffmpeg: multiple vulnerabilities
Package(s): ffmpeg
CVE #(s): CVE-2015-8661 CVE-2015-8662 CVE-2015-8663
Created: January 13, 2016
Updated: January 13, 2016
Description: From the CVE entries:
The h264_slice_header_init function in libavcodec/h264_slice.c in FFmpeg before 2.8.3 does not validate the relationship between the number of threads and the number of slices, which allows remote attackers to cause a denial of service (out-of-bounds array access) or possibly have unspecified other impact via crafted H.264 data. (CVE-2015-8661)
The ff_dwt_decode function in libavcodec/jpeg2000dwt.c in FFmpeg before 2.8.4 does not validate the number of decomposition levels before proceeding with Discrete Wavelet Transform decoding, which allows remote attackers to cause a denial of service (out-of-bounds array access) or possibly have unspecified other impact via crafted JPEG 2000 data. (CVE-2015-8662)
The ff_get_buffer function in libavcodec/utils.c in FFmpeg before 2.8.4 preserves width and height values after a failure, which allows remote attackers to cause a denial of service (out-of-bounds array access) or possibly have unspecified other impact via a crafted .mov file. (CVE-2015-8663)
gajim: man-in-the-middle
Package(s): gajim
CVE #(s): CVE-2015-8688
Created: January 11, 2016
Updated: December 22, 2016
Description: From the Arch Linux advisory:
It was found that gajim doesn't verify the origin of roster pushes, thus allowing third parties to modify the roster. A remote attacker is able to intercept messages due to the unverified origin of roster pushes, resulting in a man-in-the-middle attack.
isc-dhcp: denial of service
Package(s): isc-dhcp
CVE #(s): CVE-2015-8605
Created: January 13, 2016
Updated: March 1, 2016
Description: From the Debian advisory:
It was discovered that a maliciously crafted packet can crash any of the isc-dhcp applications. This includes the DHCP client, relay, and server application. Only IPv4 setups are affected.
kea: denial of service
Package(s): kea
CVE #(s): CVE-2015-8373
Created: January 8, 2016
Updated: January 13, 2016
Description: From the CVE entry:
The kea-dhcp4 and kea-dhcp6 servers 0.9.2 and 1.0.0-beta in ISC Kea, when certain debugging settings are used, allow remote attackers to cause a denial of service (daemon crash) via a malformed packet.
libvirt: denial of service
Package(s): libvirt
CVE #(s): CVE-2015-5247
Created: January 13, 2016
Updated: January 13, 2016
Description: From the Ubuntu advisory:
Han Han discovered that libvirt incorrectly handled volume creation failure when used with NFS. A remote authenticated user could use this issue to cause libvirt to crash, resulting in a denial of service. This issue only applied to Ubuntu 15.10.
lighttpd: denial of service
Package(s): lighttpd
CVE #(s): (none)
Created: January 12, 2016
Updated: January 13, 2016
Description: From the Red Hat bugzilla:
The 1.4.39 release of lighttpd fixed the following flaw: this release fixes crashes resulting from a use after free (#2700) that was introduced in 1.4.36.
mariadb: multiple vulnerabilities
Package(s): mariadb
CVE #(s): (none)
Created: January 12, 2016
Updated: January 13, 2016
Description: From the Mageia advisory:
The mariadb package has been updated to version 10.0.23. An issue with client-side SSL certificate verification has been fixed, as have several other bugs. See the upstream release notes for more details.
mod_nss: enables insecure ciphersuites
Package(s): mod_nss
CVE #(s): CVE-2015-5244
Created: January 11, 2016
Updated: January 25, 2016
Description: From the Red Hat bugzilla:
The NSSCipherSuite option of mod_nss accepts OpenSSL-styled cipherstrings. It was found that the parsing of such cipherstrings is flawed. If this option is used to disable insecure ciphersuites using the common "!" syntax, e.g.:
NSSCipherSuite !eNULL:!aNULL:AESGCM+aRSA:ECDH+aRSA
it will actually enable those insecure ciphersuites.
openstack-nova: information leak
Package(s): openstack-nova
CVE #(s): CVE-2015-7548
Created: January 11, 2016
Updated: January 13, 2016
Description: From the Red Hat advisory:
A flaw was discovered in the OpenStack Compute (nova) snapshot feature when using the libvirt driver. A compute user could overwrite an attached instance disk with a malicious header specifying a backing file, and then request a snapshot, causing a file from the compute host to be leaked. This flaw only affects LVM or Ceph setups, or setups using filesystem storage with "use_cow_images = False".
oxide-qt: multiple vulnerabilities
Package(s): oxide-qt
CVE #(s): CVE-2015-8548 CVE-2015-8664
Created: January 12, 2016
Updated: January 13, 2016
Description: From the Ubuntu advisory:
Multiple security issues were discovered in V8. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to read uninitialized memory, cause a denial of service via renderer crash or execute arbitrary code with the privileges of the sandboxed render process. (CVE-2015-8548)
An integer overflow was discovered in the WebCursor::Deserialize function in Chromium. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit this to cause a denial of service via application crash, or execute arbitrary code with the privileges of the user invoking the program. (CVE-2015-8664)
perl: returns untainted strings
Package(s): perl
CVE #(s): CVE-2015-8607
Created: January 11, 2016
Updated: January 27, 2016
Description: From the Debian advisory:
David Golden of MongoDB discovered that File::Spec::canonpath() in Perl returned untainted strings even if passed tainted input. This defect undermines taint propagation, which is sometimes used to ensure that unvalidated user input does not reach sensitive code.
pitivi: code execution
Package(s): pitivi
CVE #(s): CVE-2015-0855
Created: January 11, 2016
Updated: January 13, 2016
Description: From the Mageia advisory:
In pitivi before 0.95, double-clicking a file in the user's media library with a specially-crafted path or filename allows for arbitrary code execution with the permissions of the user running Pitivi.
prosody: two vulnerabilities
Package(s): prosody
CVE #(s): CVE-2016-1231 CVE-2016-1232
Created: January 11, 2016
Updated: January 21, 2016
Description: From the Debian advisory:
CVE-2016-1231: Kim Alvefur discovered a flaw in Prosody's HTTP file-serving module that allows it to serve requests outside of the configured public root directory. A remote attacker can exploit this flaw to access private files including sensitive data. The default configuration does not enable the mod_http_files module and thus is not vulnerable.
CVE-2016-1232: Thijs Alkemade discovered that Prosody's generation of the secret token for server-to-server dialback authentication relied upon a weak random number generator that was not cryptographically secure. A remote attacker can take advantage of this flaw to guess at probable values of the secret key and impersonate the affected domain to other servers on the network.
python-rsa: signature forgery
Package(s): python-rsa
CVE #(s): CVE-2016-1494
Created: January 12, 2016
Updated: January 25, 2016
Description: From the Mageia advisory:
A signature forgery vulnerability in python-rsa allows an attacker to fake signatures for arbitrary messages for any key with a low exponent "e", such as the common value of 3.
qemu: multiple vulnerabilities
Package(s): qemu
CVE #(s): CVE-2015-7549 CVE-2015-8558 CVE-2015-8666 CVE-2015-8744 CVE-2015-8745
Created: January 12, 2016
Updated: January 20, 2016
Description: From the Red Hat bugzilla:
CVE-2015-7549: Qemu emulator built with the PCI MSI-X support is vulnerable to a null pointer dereference issue. It occurs when the controller attempts to write to the pending bit array (PBA) memory region, because the MSI-X MMIO support did not define the .write method. A privileged user inside the guest could use this flaw to crash the Qemu process, resulting in a DoS issue.
CVE-2015-8558: Qemu emulator built with the USB EHCI emulation support is vulnerable to an infinite loop issue. It occurs during communication between the host controller interface (EHCI) and a respective device driver. These two communicate via an isochronous transfer descriptor list (iTD) and an infinite loop unfolds if there is a closed loop in this list. A privileged user inside the guest could use this flaw to consume excessive CPU cycles and resources on the host.
CVE-2015-8666: Qemu emulator built with the Q35 chipset based pc system emulator is vulnerable to a heap based buffer overflow. It occurs during VM guest migration, as more (8 bytes) data is moved than the allocated memory area. A privileged guest user could use this issue to corrupt the VM guest image, potentially leading to a DoS. This issue affects q35 machine types.
CVE-2015-8744: Qemu emulator built with a VMWARE VMXNET3 paravirtual NIC emulator support is vulnerable to a crash issue. It occurs when a guest sends a Layer-2 packet smaller than 22 bytes. A privileged (CAP_SYS_RAWIO) guest user could use this flaw to crash the Qemu process instance, resulting in DoS.
CVE-2015-8745: Qemu emulator built with a VMWARE VMXNET3 paravirtual NIC emulator support is vulnerable to a crash issue. It could occur while reading Interrupt Mask Registers (IMR). A privileged (CAP_SYS_RAWIO) guest user could use this flaw to crash the Qemu process instance, resulting in DoS.
roundcubemail: path traversal
Package(s): roundcubemail
CVE #(s): (none)
Created: January 8, 2016
Updated: January 14, 2016
Description: From the Fedora advisory:
Path traversal vulnerability (CWE-22) in setting a skin.
rsync: unsafe destination path
Package(s): rsync
CVE #(s): CVE-2014-9512
Created: January 11, 2016
Updated: June 28, 2016
Description: From the Red Hat bugzilla:
A security fix was released in rsync 3.1.2, adding an extra check to the file list to prevent a malicious sender from using an unsafe destination path for a transferred file, such as a just-sent symlink. Affects versions older than 3.1.2.
rubygem-mail: SMTP injection
Package(s): rubygem-mail
CVE #(s): (none)
Created: January 11, 2016
Updated: January 15, 2016
Description: From the SUSE bug report:
The Mail library does not impose a length limit on email addresses, so an attacker can send a long spam message via a recipient address unless there is a limit on the application's side. The attacker-injected message in the recipient address is processed by the server. This type of vulnerability can be a real threat in inquiry forms, member signup forms, or any other application that delivers an email to a user-specified email address.
shellinabox: DNS rebinding
Package(s): shellinabox
CVE #(s): CVE-2015-8400
Created: January 8, 2016
Updated: December 22, 2016
Description: From the Red Hat bug report:
The shellinabox server, while using the HTTPS protocol, allows HTTP fallback through the "/plain" URL.
shotwell: validate TLS certificates
Package(s): shotwell
CVE #(s): (none)
Created: January 13, 2016
Updated: March 22, 2016
Description: From the GNOME bugzilla:
Seems Shotwell logs into Facebook, etc. without validating TLS certificates. Since you use WebKit1 you're responsible not just for all security bugs since security updates ended a year ago, but also for validating TLS certificates on the SoupSession used by your WebKitWebView, before sending any HTTP headers. I've never done this before, but I think the right way is to connect to WebKitWebView:resource-request-starting, grab the WebKitNetworkRequest, get the SoupMessage property from it, then connect to notify::tls-errors and cancel the message immediately in the signal handler (not sure how to do that). I think you also have to somehow tell libsoup to check for TLS errors in the first place; should be easy if you can find a way to get the SoupSession from WebKit.
wireshark: multiple vulnerabilities
Package(s): wireshark
CVE #(s): CVE-2015-8711 CVE-2015-8712 CVE-2015-8713 CVE-2015-8714 CVE-2015-8715 CVE-2015-8716 CVE-2015-8717 CVE-2015-8718 CVE-2015-8719 CVE-2015-8720 CVE-2015-8721 CVE-2015-8722 CVE-2015-8723 CVE-2015-8724 CVE-2015-8725 CVE-2015-8726 CVE-2015-8727 CVE-2015-8728 CVE-2015-8729 CVE-2015-8730 CVE-2015-8731 CVE-2015-8732 CVE-2015-8733
Created: January 8, 2016
Updated: March 14, 2016
Description: From the openSUSE advisory:
CVE-2015-8711: epan/dissectors/packet-nbap.c in the NBAP dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not validate conversation data, which allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via a crafted packet.
CVE-2015-8712: The dissect_hsdsch_channel_info function in epan/dissectors/packet-umts_fp.c in the UMTS FP dissector in Wireshark 1.12.x before 1.12.9 does not validate the number of PDUs, which allows remote attackers to cause a denial of service (application crash) via a crafted packet.
CVE-2015-8713: epan/dissectors/packet-umts_fp.c in the UMTS FP dissector in Wireshark 1.12.x before 1.12.9 does not properly reserve memory for channel ID mappings, which allows remote attackers to cause a denial of service (out-of-bounds memory access and application crash) via a crafted packet.
CVE-2015-8714: The dissect_dcom_OBJREF function in epan/dissectors/packet-dcom.c in the DCOM dissector in Wireshark 1.12.x before 1.12.9 does not initialize a certain IPv4 data structure, which allows remote attackers to cause a denial of service (application crash) via a crafted packet.
CVE-2015-8715: epan/dissectors/packet-alljoyn.c in the AllJoyn dissector in Wireshark 1.12.x before 1.12.9 does not check for empty arguments, which allows remote attackers to cause a denial of service (infinite loop) via a crafted packet.
CVE-2015-8716: The init_t38_info_conv function in epan/dissectors/packet-t38.c in the T.38 dissector in Wireshark 1.12.x before 1.12.9 does not ensure that a conversation exists, which allows remote attackers to cause a denial of service (application crash) via a crafted packet.
CVE-2015-8717: The dissect_sdp function in epan/dissectors/packet-sdp.c in the SDP dissector in Wireshark 1.12.x before 1.12.9 does not prevent use of a negative media count, which allows remote attackers to cause a denial of service (application crash) via a crafted packet.
CVE-2015-8718: Double free vulnerability in epan/dissectors/packet-nlm.c in the NLM dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1, when the "Match MSG/RES packets for async NLM" option is enabled, allows remote attackers to cause a denial of service (application crash) via a crafted packet.
CVE-2015-8719: The dissect_dns_answer function in epan/dissectors/packet-dns.c in the DNS dissector in Wireshark 1.12.x before 1.12.9 mishandles the EDNS0 Client Subnet option, which allows remote attackers to cause a denial of service (application crash) via a crafted packet.
CVE-2015-8720: The dissect_ber_GeneralizedTime function in epan/dissectors/packet-ber.c in the BER dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 improperly checks an sscanf return value, which allows remote attackers to cause a denial of service (application crash) via a crafted packet.
CVE-2015-8721: Buffer overflow in the tvb_uncompress function in epan/tvbuff_zlib.c in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 allows remote attackers to cause a denial of service (application crash) via a crafted packet with zlib compression.
CVE-2015-8722: epan/dissectors/packet-sctp.c in the SCTP dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not validate the frame pointer, which allows remote attackers to cause a denial of service (NULL pointer dereference and application crash) via a crafted packet.
CVE-2015-8723: The AirPDcapPacketProcess function in epan/crypt/airpdcap.c in the 802.11 dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not validate the relationship between the total length and the capture length, which allows remote attackers to cause a denial of service (stack-based buffer overflow and application crash) via a crafted packet.
CVE-2015-8724: The AirPDcapDecryptWPABroadcastKey function in epan/crypt/airpdcap.c in the 802.11 dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not verify the WPA broadcast key length, which allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted packet.
CVE-2015-8725: The dissect_diameter_base_framed_ipv6_prefix function in epan/dissectors/packet-diameter.c in the DIAMETER dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not validate the IPv6 prefix length, which allows remote attackers to cause a denial of service (stack-based buffer overflow and application crash) via a crafted packet.
CVE-2015-8726: wiretap/vwr.c in the VeriWave file parser in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not validate certain signature and Modulation and Coding Scheme (MCS) data, which allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted file.
CVE-2015-8727: The dissect_rsvp_common function in epan/dissectors/packet-rsvp.c in the RSVP dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not properly maintain request-key data, which allows remote attackers to cause a denial of service (use-after-free and application crash) via a crafted packet.
CVE-2015-8728: The Mobile Identity parser in (1) epan/dissectors/packet-ansi_a.c in the ANSI A dissector and (2) epan/dissectors/packet-gsm_a_common.c in the GSM A dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 improperly uses the tvb_bcd_dig_to_wmem_packet_str function, which allows remote attackers to cause a denial of service (buffer overflow and application crash) via a crafted packet.
CVE-2015-8729: The ascend_seek function in wiretap/ascendtext.c in the Ascend file parser in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not ensure the presence of a '\0' character at the end of a date string, which allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted file.
CVE-2015-8730: epan/dissectors/packet-nbap.c in the NBAP dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not validate the number of items, which allows remote attackers to cause a denial of service (invalid read operation and application crash) via a crafted packet.
CVE-2015-8731: The dissct_rsl_ipaccess_msg function in epan/dissectors/packet-rsl.c in the RSL dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not reject unknown TLV types, which allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted packet.
CVE-2015-8732: The dissect_zcl_pwr_prof_pwrprofstatersp function in epan/dissectors/packet-zbee-zcl-general.c in the ZigBee ZCL dissector in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not validate the Total Profile Number field, which allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted packet.
CVE-2015-8733: The ngsniffer_process_record function in wiretap/ngsniffer.c in the Sniffer file parser in Wireshark 1.12.x before 1.12.9 and 2.0.x before 2.0.1 does not validate the relationships between record lengths and record header lengths, which allows remote attackers to cause a denial of service (out-of-bounds read and application crash) via a crafted file.
wireshark-cli: multiple vulnerabilities
Package(s): wireshark-cli
CVE #(s): CVE-2015-8742 CVE-2015-8741 CVE-2015-8740 CVE-2015-8738 CVE-2015-8739 CVE-2015-8737 CVE-2015-8736 CVE-2015-8735 CVE-2015-8734
Created: January 11, 2016
Updated: January 13, 2016
Description: From the CVE entries:
The dissect_CPMSetBindings function in epan/dissectors/packet-mswsp.c in the MS-WSP dissector in Wireshark 2.0.x before 2.0.1 does not validate the column size, which allows remote attackers to cause a denial of service (memory consumption or application crash) via a crafted packet. (CVE-2015-8742)
The dissect_ppi function in epan/dissectors/packet-ppi.c in the PPI dissector in Wireshark 2.0.x before 2.0.1 does not initialize a packet-header data structure, which allows remote attackers to cause a denial of service (application crash) via a crafted packet. (CVE-2015-8741)
The dissect_tds7_colmetadata_token function in epan/dissectors/packet-tds.c in the TDS dissector in Wireshark 2.0.x before 2.0.1 does not validate the number of columns, which allows remote attackers to cause a denial of service (stack-based buffer overflow and application crash) via a crafted packet. (CVE-2015-8740)
The s7comm_decode_ud_cpu_szl_subfunc function in epan/dissectors/packet-s7comm_szl_ids.c in the S7COMM dissector in Wireshark 2.0.x before 2.0.1 does not validate the list count in an SZL response, which allows remote attackers to cause a denial of service (divide-by-zero error and application crash) via a crafted packet. (CVE-2015-8738)
The ipmi_fmt_udpport function in epan/dissectors/packet-ipmi.c in the IPMI dissector in Wireshark 2.0.x before 2.0.1 improperly attempts to access a packet scope, which allows remote attackers to cause a denial of service (assertion failure and application exit) via a crafted packet. (CVE-2015-8739)
The mp2t_open function in wiretap/mp2t.c in the MP2T file parser in Wireshark 2.0.x before 2.0.1 does not validate the bit rate, which allows remote attackers to cause a denial of service (divide-by-zero error and application crash) via a crafted file. (CVE-2015-8737)
The mp2t_find_next_pcr function in wiretap/mp2t.c in the MP2T file parser in Wireshark 2.0.x before 2.0.1 does not reserve memory for a trailer, which allows remote attackers to cause a denial of service (stack-based buffer overflow and application crash) via a crafted file. (CVE-2015-8736)
The get_value function in epan/dissectors/packet-btatt.c in the Bluetooth Attribute (aka BT ATT) dissector in Wireshark 2.0.x before 2.0.1 uses an incorrect integer data type, which allows remote attackers to cause a denial of service (invalid write operation and application crash) via a crafted packet. (CVE-2015-8735)
The dissect_nwp function in epan/dissectors/packet-nwp.c in the NWP dissector in Wireshark 2.0.x before 2.0.1 mishandles the packet type, which allows remote attackers to cause a denial of service (application crash) via a crafted packet. (CVE-2015-8734)
wordpress: cross-site scripting
Package(s): wordpress
CVE #(s): CVE-2016-1564
Created: January 11, 2016
Updated: January 18, 2016
Description: From the Arch Linux advisory:
A cross-site scripting vulnerability has been discovered that could allow a site to be compromised. A remote attacker is able to inject unescaped HTML and JavaScript, leading to cross-site scripting that could result in the site being compromised.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The 4.5 merge window is open, following the 4.4 release on January 10. See the separate article below for a summary of what has been merged thus far.
Stable updates: none have been released since December 14.
Kernel development news
The 4.5 merge window opens
As of this writing, just over 3,100 non-merge changesets have been pulled into the mainline repository for the 4.5 development cycle. As one would expect three days into the merge window, things are just getting started. Nonetheless, a number of significant changes have already been pulled. Some of the more interesting of those are:
- The device mapper's dm-verity subsystem, which is charged with validating the integrity of data on the underlying storage device, has gained the ability to perform forward error correction. This allows for the recovery of data from a device where "several consecutive corrupted blocks" exist. The first consumer for this appears to be Android, which uses dm-verity already.
- As usual, there is a long list of improvements to the perf events subsystem; see this merge commit for a detailed summary.
- Mandatory file locking is now optional
at configuration time. This is a first step toward the removal
(sometime in the distant future) of this unloved and little-used
feature.
- The copy_file_range() system
call has been merged. It allows for the quick copying of a
portion of a file, with the operation possibly optimized by the
underlying filesystem.
The support code for copy_file_range() has also enabled an
easy implementation of the NFSv4.2 CLONE operation. (A brief
user-space sketch of the new call appears after this list.)
- The User-Mode Linux port now supports the seccomp() system
call.
- The SOCK_DESTROY operation,
allowing a system administrator to shut down an open network
connection, is now supported.
- The "clsact" network queueing discipline module has been added; see this
commit changelog for details and usage information.
- The "version 2" control-group interface is now considered official and
non-experimental; it can be mounted with the cgroup2
filesystem type. Not all controllers support this interface yet,
though. See Documentation/cgroup-v2.txt for
details on the new interface.
- New hardware support includes:
- Cryptographic:
Rockchip cryptographic engines and
Intel C3xxx, C3xxxvf, C62x, and C62xvf cryptographic
accelerators.
- Miscellaneous:
HiSilicon MBIGEN interrupt controllers,
Technologic TS-4800 interrupt controllers, and
Cirrus Logic CS3308 audio analog-to-digital converters.
- Networking:
Netronome NFP4000/NFP6000 VF interfaces,
Analog Devices ADF7242 SPI 802.15.4 wireless controllers,
Freescale data-path acceleration architecture frame manager devices,
IBM VNIC virtual interfaces, and
STMicroelectronics ST95HF NFC transceivers.
- Pin control: Qualcomm MSM8996 pin controllers, Marvell PXA27x pin controllers, Broadcom NSP GPIO controllers, and Allwinner H3 pin controllers.
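As promised in the copy_file_range() item above, here is a minimal user-space sketch of the new call. It is not from the article; the file names are placeholders and, since glibc provides no wrapper at this point, it invokes the raw system call, assuming kernel headers new enough to define __NR_copy_file_range:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
	int in = open("src.dat", O_RDONLY);
	int out = open("dst.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
	ssize_t n;

	if (in < 0 || out < 0) {
		perror("open");
		return 1;
	}
	/* Copy up to 64KB; NULL offsets mean the files' own offsets are
	   used and updated. The kernel may accelerate the copy (with
	   reflinks or server-side copy) if the filesystem supports it. */
	n = syscall(__NR_copy_file_range, in, NULL, out, NULL,
		    (size_t)65536, 0U);
	if (n < 0) {
		perror("copy_file_range");
		return 1;
	}
	printf("copied %zd bytes\n", n);
	close(in);
	close(out);
	return 0;
}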
Changes visible to kernel developers include:
- The follow_link() method in struct inode_operations
has been replaced with:
const char *(*get_link)(struct dentry *dentry, struct inode *inode,
                        struct delayed_call *done);
It differs from follow_link() (which was described in this article) by separating the dentry and inode arguments and, most importantly, being callable in the RCU-walk mode. In that case, dentry will be null, and get_link() is not allowed to block.
Also added in the same patch set was a "poor man's closures" mechanism, represented by struct delayed_call:
struct delayed_call {
	void (*fn)(void *);
	void *arg;
};

See include/linux/delayed_call.h for the (tiny) full interface. In this case, get_link() should set done->fn to its inode destructor function — probably the one that was previously made available as the (now removed) put_link() inode_operations method. (A sketch of such a method appears after this list.)
- There is a new memory-barrier primitive:
void smp_cond_acquire(condition);
It will spin until condition evaluates to a non-zero value, then insert a read barrier. (A usage sketch appears after this list.)
- There is a new stall detector for workqueues; if any workqueue fails
to make progress for 30 seconds, the kernel will output a bunch of
information that should help in debugging the problem.
- There is a new helper function:
void *memdup_user_nul(const void __user *src, size_t len);
It will copy len bytes from user space, starting at src, allocating memory for the result and adding a null-terminating byte. Over 50 call sites have already shown up in the kernel. (A sketch of a typical call site appears after this list.)
- The configfs virtual filesystem now supports binary attributes; see
the documentation changes at the beginning of this
commit for details.
- Changes to the networking core mean that NAPI network drivers get busy
polling for free, without the need to add explicit support.
- Patches moving toward the removal of protocol-specific checksumming from networking drivers (described in this article) have been merged. See this merge commit for more information.
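As promised above, here is a hedged sketch of what a get_link() method using the delayed_call mechanism might look like. It is kernel-context code and does not come from any real filesystem; myfs_build_target() is a hypothetical helper, while set_delayed_call() is the real helper from include/linux/delayed_call.h:

/* Hypothetical destructor registered via delayed_call. */
static void myfs_link_free(void *p)
{
	kfree(p);
}

static const char *myfs_get_link(struct dentry *dentry, struct inode *inode,
				 struct delayed_call *done)
{
	char *target;

	if (!dentry)	/* RCU-walk mode: allocating below could block */
		return ERR_PTR(-ECHILD);

	target = kmalloc(PATH_MAX, GFP_KERNEL);
	if (!target)
		return ERR_PTR(-ENOMEM);
	myfs_build_target(inode, target, PATH_MAX);	/* hypothetical */

	/* Ask the VFS to call myfs_link_free(target) when it is done. */
	set_delayed_call(done, myfs_link_free, target);
	return target;
}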
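The smp_cond_acquire() item also lends itself to a short illustration. The following is an assumed usage sketch, not code from the kernel: one CPU publishes data and sets a flag with release semantics; another spins on the flag and, thanks to the acquire semantics, can then safely read the data:

static int shared_data;
static int data_ready;

static void producer(void)
{
	shared_data = 42;
	smp_store_release(&data_ready, 1);	/* publish */
}

static void consumer(void)
{
	/* Spin until the flag is seen, then insert the read barrier. */
	smp_cond_acquire(READ_ONCE(data_ready));
	WARN_ON(shared_data != 42);		/* safe to read now */
}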
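Finally, a typical memdup_user_nul() call site might look like the following sketch; the write handler and its surrounding driver are hypothetical:

static ssize_t myfs_write(struct file *file, const char __user *buf,
			  size_t count, loff_t *ppos)
{
	char *kbuf;

	/* Copy the user buffer into a null-terminated kernel string. */
	kbuf = memdup_user_nul(buf, count);
	if (IS_ERR(kbuf))
		return PTR_ERR(kbuf);

	pr_info("got command: %s\n", kbuf);	/* parse the string here */

	kfree(kbuf);
	return count;
}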
The 4.5 merge window will probably stay open until January 24, so there is time for a lot more changes to find their way into the mainline. As usual, LWN will track those changes and summarize them in the coming weeks; stay tuned.
Fixing asynchronous I/O, again
The process of adding asynchronous I/O (AIO) support to the kernel began with the 2.5.23 development kernel in June 2002. Sometimes it seems that the bulk of the time since then has been taken up by complaints about AIO in the kernel. That said, AIO meets a specific need and has users who depend on it. A current attempt to improve the AIO subsystem has brought out some of those old complaints along with some old ideas for improving the situation.

Linux AIO does suffer from a number of ailments. The subsystem is quite complex and requires explicit code in any I/O target for it to be supported. The API is not considered to be one of our best and is not exposed by the GNU C library; indeed, the POSIX AIO support in glibc is implemented in user space and doesn't use the kernel's AIO subsystem at all. For files, only direct I/O is supported; despite various attempts over the years, buffered I/O is not supported. Even direct I/O can block in some settings. Few operations beyond basic reads and writes are supported, and those that are (fsync(), for example) are incomplete at best. Many have wished for a better AIO subsystem over the years, but what we have now still looks a lot like what was merged in 2002.
Benjamin LaHaise, the original implementer of the kernel AIO subsystem, has recently returned to this area with this patch set. The core change here is to short out much of the kernel code dedicated to the tracking, restarting, and cancellation of AIO requests; instead, the AIO subsystem simply fires off a kernel thread to perform the requested operation. This approach is conceptually simpler; it also has the potential to perform better and, in many cases, makes cancellation more reliable.
With that core in place, Benjamin's patch set adds a number of new operations. It starts with fsync(), which, in current kernels, only works if the operation's target supports it explicitly. A quick grep shows that, in the 4.4 kernel, there is not a single aio_fsync() method defined, so asynchronous fsync() does not work at all. With AIO based on kernel threads, it is a simple matter to just call the regular fsync() method and instantly have working asynchronous fsync() for any I/O target supporting AIO in general (though, as Dave Chinner pointed out, Benjamin's current implementation does not yet solve the whole problem).
In theory, fsync() is supported by AIO now, even if it doesn't actually work. A number of other things are not. Benjamin's patch set addresses some of those gaps by adding new operations, including openat() (opens are usually blocking operations), renameat(), unlinkat(), and poll(). Finally, it adds an option to request reading pages from a file into the page cache (readahead) with the intent that later attempts to access those pages will not block.
For the most part, adding these features is easy once the thread mechanism is in place; there is no longer any need to track partially completed operations or perform restarts. The attempts to add buffered I/O support to AIO in the past were pulled down by their own complexity; adding that support with this mechanism (not done in the current patch set) would not require much more than an internal read() or write() call. The one exception is the openat() support, which requires the addition of proper credential handling to the kernel thread.
The end result would seem to be a significant improvement to the kernel's AIO subsystem, but Linus Torvalds still didn't like it. He is happy with the desired result and with much of the implementation, but he would like to see the focus be on the targeted capabilities rather than improving an AIO subsystem that, in his mind, is not really fixable. As he put it:
In other words, why is the interface not simply: "do arbitrary system call X with arguments A, B, C, D asynchronously using a kernel thread".
That's something that a lot of people might use. In fact, if they can avoid the nasty AIO interface, maybe they'll even use it for things like read() and write().
Linus suggested that the thread-based implementation in Benjamin's patch set could be adapted to this sort of use, but that the interface needs to change.
Thread-based asynchronous system calls are not a new idea, of course; the concept has come around a number of times in the past under names like fibrils, threadlets, syslets, and acall. Linus even once posted an asynchronous system call patch of his own as these discussions were happening. There are some challenges to making asynchronous system calls work properly; there would have to be, for example, a whitelist of the system calls that can be safely run in this mode. As Andy Lutomirski pointed out, "exit is bad". Linus also noted that many system calls and structures as presented by glibc differ considerably from what the kernel provides; it would be difficult to provide an asynchronous system call API that could preserve the interface as seen by programs now.
Those challenges are real, but they may not prevent developers from having another look at the old ideas. But, as Benjamin was quick to point out, none of those approaches ever got to the point where they were ready to be merged. He seemed to think that another attempt now might run into the same sorts of complexity issues; it is not hard to conclude that he would really rather continue with the approach he has taken thus far.
This kind of extension to the AIO API seems unlikely, though, to make it into the mainline until somebody shows that the more general asynchronous-system-call approach simply isn't workable. The advantages of the latter are significant enough — and dislike for AIO strong enough — to create a lot of pressure in that direction. Once the dust settles, we may finally see the merging of a feature that developers have been pondering for years.
The present and future of formatted kernel documentation
The kernel source tree comes with a substantial amount of documentation, believe it or not. Much of that can be found in the Documentation tree as a large set of rather haphazardly organized plain-text files. But there is also quite a bit of documentation embedded within the source code itself that can be extracted and presented in a number of formats. There has been an effort afoot for the better part of a year to improve the capabilities of the kernel's formatted-documentation subsystem; it's a good time for a look at the current state of affairs and where things might go.

Anybody who has spent much time digging around in the kernel source will have run across the specially formatted comments used there to document functions, structures, and more. These "kerneldoc comments" tend to look like this:
/**
 * list_add - add a new entry
 * @new: new entry to be added
 * @head: list head to add it after
 *
 * Insert a new entry after the specified head.
 * This is good for implementing stacks.
 */
This comment describes the list_add() function and its two parameters (new and head). It is introduced by the "/**" marker and follows a number of rules; see Documentation/kernel-doc-nano-HOWTO.txt for details. Normal practices suggest that these special comments should be provided for all functions meant to be used outside of the defining code (all functions that are exported to modules, for example); some subsystems also use kerneldoc comments for internal documentation.
The documentation subsystem is able to extract these comments and render them into documents in a number of formats, including plain text, man pages, HTML, and PDF files. This can be done in a kernel source tree with a command like "make mandocs" or "make pdfdocs". There is also a copy of the formatted documentation on kernel.org; the end result for the comment above can be found on this page, for example. The results are not going to win any prizes for beautiful design, but many developers find them helpful.
Inside kernel-doc
The process of creating formatted documents starts with one of a number of "template files," found in the Documentation/DocBook directory. These files (there are a few dozen of them) are marked up in the DocBook format; they also contain a set of specially formatted (non-DocBook) lines marking the places where documentation from the source should be stuffed into the template. Thus, for example, kernel-api.tmpl contains a line that reads:
!Iinclude/linux/list.h
The !I directive asks for the documentation for all functions that are not exported to modules. It is used rather than !E (which grabs documentation for exported functions) because the functions, being defined in a header file, do not appear in an EXPORT_SYMBOL() directive.
Turning a template file into one or more formatted documents is a lengthy process that starts with a utility called docproc, found in the scripts directory. This program (written in C) reads the template file, finds the special directives, and, for each of those directives, it does the following:
- A pass through the named source file is made; each of the
EXPORT_SYMBOL() directives found therein is parsed and the
named function is added to the list of exported symbols.
- A call is made to scripts/kernel-doc (a 2,700-line Perl
script) to locate all of the functions, structures, and more that are
defined in the source file. kernel-doc tries to parse the C
code well enough to recognize the definitions of interest; in the process,
it attempts to deal with some of the kernel's macro trickery without
actually running the source through the C preprocessor. It will
output a list of the names it found.
- docproc calls kernel-doc again, causing it to parse the source file a second time; this time, though, the output is the actual documentation for the functions of interest, with some minimal DocBook formatting added.
The formatted output is placed into the template file in the indicated spot. If the target format is HTML, the kernel-doc-xml-ref script is run to generate cross-reference links. This feature, only added in 4.3, can only generate links within one template file; cross-template links are not supported.
The final step is to run the documentation-formatting tool to actually create the files in the format of interest. Most of the time, the xmlto tool is used for this purpose, though there are some provisions in the makefile for using other tools.
In other words, this toolchain looks just like what one might expect from a documentation system written by kernel developers. It gets the basic job done, but it is not particularly pretty or easy to use. It is somewhat brittle, making it easy for developers to break the documentation build without knowing it. Numerous developers have said that they have given up on trying to actually get formatted output from it; depending on one's distribution, getting all of the pieces in place is not always easy. And a lot of potentially desirable features, like cross-file links, indexing, or formatting within the in-source comments, are not present.
Formatted comments
The latter issue — adding formatting to the kerneldoc comments — has been the subject of some work in recent times. Daniel Vetter has a long-term goal of putting much more useful graphics-subsystem information into those comments, but has found the lack of formatting to be an impediment once one gets beyond documenting function prototypes. To fix that, Intel funded some work that, among other things, produced a patch set allowing markup in the comments. Nobody really wants to see XML markup in C source, though, so the patch took a different approach, allowing markup to be done using the Markdown language. Using Markdown allowed a fair amount of documentation to be moved to the source from the template file, shedding a bunch of ugly XML markup on the way.
This work has not yet been merged into the mainline. Daniel has his own hypothesis as to why.
Your editor (who happens to be the kernel documentation maintainer, incidentally), has a different hypothesis. Perhaps this work remains outside because: (1) it is a significant change affecting all kernel developers that shouldn't be rushed; (2) it used pandoc, requiring, on your editor's Fedora test box, the installation of 70 Haskell dependencies to run; (3) it had unresolved problems stemming from disagreements between pandoc and xmlto regarding things like XML entity escaping; and (4) a certain natural reluctance to add another step to the kernel documentation house of cards. All of these concerns led to a discussion at the 2015 Kernel Summit and a lack of enthusiasm for quick merging of this change.
All that notwithstanding, there is no doubt that there is interest in adding formatting to the kernel's documentation comments. Your editor thinks that there might be a better way to do so, perhaps involving the removal of xmlto (and DocBook) entirely in favor of a Markdown-only solution or a system like Sphinx. Unfortunately, your editor has proved to be thoroughly unable to find the time to actually demonstrate that such an approach might work, and nobody else seems ready to jump in and do it for him. Meanwhile, the Markdown patches have been reworked to use AsciiDoc (which can be thought of as a rough superset of Markdown) instead. That change gets rid of the Haskell dependency (replacing it with a Python dependency) and improves some formatting features at the cost of slowing the documentation build considerably. Even if it is arguably not the best solution, it is out there and working now.
As a result, these patches will probably be pulled into the documentation tree (and, thus, into linux-next) in the next few weeks, with an eye toward merging in 4.6 if all looks well. It has been said many times that a subsystem maintainer's first job is to say "no" to changes. Sometimes, though, the right thing is to say "yes," even if said maintainer thinks that a better solution might be possible. A good-enough solution that exists now should not be held up overly long in the hopes that vague ideas for something else might turn into real, working code.
Patches and updates
Kernel trees
Architecture-specific
Core kernel code
Development tools
Device drivers
Device driver infrastructure
Filesystems and block I/O
Memory management
Networking
Security-related
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Steps toward a unified automotive Linux platform
The Automotive Grade Linux (AGL) workgroup unveiled its new Unified Code Base (UCB) demo platform on January 4, timed for the annual Consumer Electronics Show (CES) in Las Vegas. The UCB release is the first tangible product to emerge from AGL since the restructuring that followed the shutdown of Tizen IVI (the "in-vehicle infotainment" device profile). Although the released UCB code focuses on demonstration applications, that does not tell the whole story. Under the hood, it incorporates resurrected work that originated in Tizen, several new contributions, and components from the GENIVI Alliance. In fact, the development of the UCB release has been marked by cooperation between AGL and GENIVI—two multi-company coalitions that some feared would be reluctant to share code.
To recap, Intel wound down its involvement in Tizen in early 2015. For the phone, smart-TV, and other device profiles, Samsung remained in the driver's seat, and has continued to make Tizen software and product releases. But Samsung played no part in the vehicle entertainment market, so Tizen IVI was essentially orphaned as Intel redirected its engineering resources elsewhere. Since Tizen IVI had been tagged to serve as the reference distribution for AGL, a new plan had to be formulated.
For its part, GENIVI had long maintained a Yocto-based development platform called GENIVI Baseline. Add to that the real-world experience of companies working on Linux-based IVI systems—many of which chose Yocto as their development system—and selecting Yocto to serve as the new underpinnings of AGL's work was the obvious choice.
Nevertheless, AGL's decision to build a Yocto-based distribution would not, in and of itself, eliminate the possibility of friction, incompatibilities, or duplication of effort with GENIVI. A number of companies belong to both AGL and GENIVI and, thus, would prefer for the projects to complement each other rather than compete head-on. Consequently, in June 2015, right around the Automotive Linux Summit, several GENIVI and AGL project members started actively pursuing a joint strategy that would enable the projects to build compatible Yocto-based releases—sharing components where possible, and cleanly separating AGL- or GENIVI-specific components elsewhere.
Demonstrations
The UCB release shown at CES is the result of this new shared-strategy effort. Disk images were released for the Renesas Porter SBC (an automotive-centric development board) and for the QEMU emulator. Functionally, the CES demo highlighted a basic navigation app, a media-playback app, a car-status dashboard app, and a heating/ventilation/air-conditioning (HVAC) control app. The dashboard app displays only simulated vehicle-status data, rather than reading from the vehicle's data bus and, similarly, the navigation app shows how the display tracks vehicle progress to an address, but it does so by updating the display to reflect phony position changes. Both of those choices are natural for a booth demonstration where there is no moving vehicle, of course.
The HVAC app was a bit more interesting, as the board was connected to a fan and motorized actuators (like those that would control the vents in a real HVAC system). Likewise, the media player was connected to a set of real Media Oriented Systems Transport (MOST) audio components, so it, too, demonstrates real functionality and not software simulation alone. The AGL web site hosts a video walk-through recorded at CES (where AGL, notably, attended as part of the GENIVI booth) for those who would like to see the setup in action.
For those hoping to run the UCB code at home, there is less to get excited about. The Renesas Porter board is not terribly expensive (US $360 at present), but few are likely to have one on hand already. The QEMU images offer more hope, although one is limited to simulated events. In my own tests, I also found the QEMU images more than a little picky, especially where GPUs are concerned. The Weston-based display manager requires OpenGL, and I could not get it to cooperate with the QEMU OpenGL support available for any of my video cards.
In the long run, though, one expects that such limitations will change; the disk images were put together for CES, rather than being a general-purpose release. Far more interesting than demo apps are what lies under the surface. Recall that there was initially concern that AGL's distribution efforts would have the decidedly unwanted effect of splintering the automotive-Linux movement. That concern was all the more real when AGL was tethered to Tizen IVI and GENIVI was focused on Yocto. The combined effort of both AGL and GENIVI to develop mutually compatible Yocto strategies is a welcome development indeed.
The automotive-specific parts of the UCB release are built out of three layers: meta-ivi-common, meta-agl, and a platform-specific layer (either meta-agl-bsp for QEMU or meta-agl-renesas for the Porter SBC). The meta-agl layer contains the code unique to AGL; at the moment that includes the demo apps and several common libraries (oFono, GUPnP, rtl-sdr, gpsd, etc.).
The meta-ivi-common layer is what is meant to function as the tier that unifies the AGL, GENIVI, and other automotive Linux codebases. In the CES release, the highlights are its inclusion of Automotive Message Broker (AMB) and the Wayland IVI extension. AMB was originally a Tizen IVI project; it provides a message-passing multiplexer for vehicle status reports, sensors, and other lower-level traffic. The Wayland IVI extension is a GENIVI project; it is used to mediate access to the Weston display shell between multiple running applications (for example, the navigation screen and the output from a rear-view camera).
For now, that appears to be the extent of the automotive packages applied on top of the vanilla Yocto recipes. It may be interesting to note the lack of some other Tizen IVI and GENIVI packages, such as the multi-point audio manager or the AF_BUS in-kernel D-Bus implementation. AF_BUS may be dropped in anticipation of kdbus being accepted to the mainline kernel; it is harder to speculate whether the audio manager will make a return.
The road ahead
While the inclusion of Tizen IVI and GENIVI components is a promising sign for meta-ivi-common, the combined-layer strategy is still in its early days. For its part, GENIVI has not yet reworked its own Yocto layers to separate out components in the same manner as AGL. GENIVI, being a significantly more mature effort with a larger package set, would understandably want to undertake such an effort at a measured pace—especially given GENIVI's concern with producing specifications and compliance-testing tools for its member companies. On the other hand, quite a few of the standard packages found in meta-agl (e.g., oFono) are already included in GENIVI's current Yocto baseline, but with different releases—an AGL audit in October noted that often the GENIVI code uses the more recent version.
So arriving at a shared, common base layer will clearly take some work. Sufficient interest seems to be there among both AGL and GENIVI project members, which makes one hopeful for the future. And, in the long term, AGL wants to develop Linux-based systems for vehicles beyond the IVI head unit itself—starting with instrument clusters, but also including engine-control units (ECUs) and other embedded systems that feature no UI. Not duplicating effort with GENIVI for the IVI use case is therefore paramount.
While providing the first glimpse at meta-ivi-common is perhaps the most important aspect of the UCB release, it is not the only feature worthy of note. The release also includes several Yocto layers maintained by other projects, such as meta-qt5 and meta-crosswalk. Together, those layers provide support for Qt 5 applications and for HTML5 web apps running on the Crosswalk web runtime (which is used in the other, non-IVI Tizen device profiles).
It also includes an open-source driver for the MOST transceivers made by Microchip Technology. The code was donated by Microchip. There are I2C and USB MOST interfaces available for purchase, although they are not common in the consumer market—MOST is used largely by automakers and Tier-1 suppliers, with a fiber-optic physical network. Still, if the driver makes its way into the mainline kernel at some point, one might see more MOST adoption.
Looking forward, the UCB demo represents phase 1 of AGL's rebooted push to produce an automotive Linux distribution, plus progress on several points planned for phase 2. But matters will really get interesting once the project reaches phase 3, which is slated to include an SDK and, hopefully, will deliver a distribution that interested parties can run on more than a handful of specific demonstration boards.
Brief items
Distribution quotes of the week
If you want people to do maintenance work without getting to do anything creative, interesting, or exciting, you had better have some method for paying them, because that's a job, not something people do for the love of it.
New BlackArch Linux ISOs (2016.01.10) released
BlackArch Linux is an Arch-based distribution for pentesters and security researchers. The January release "includes more than 1337 tools and comes with lots of improvements. The armv6h and armv7h repositories are filled with about 1200 tools."
Distribution News
Debian GNU/Linux
Debian Installer Stretch Alpha 5 release
The Debian Installer team has announced the fifth alpha release of the installer for Debian 9 "Stretch". With this release, the i386 architecture now requires 686-class processors.
Newsletters and articles of interest
Distribution newsletters
- DistroWatch Weekly, Issue 643 (January 11)
- openSUSE Tumbleweed – Review of the week 2016/1 (January 9)
- Ubuntu Kernel Team newsletter (January 12)
- Ubuntu Weekly Newsletter, Issue 449 (January 10)
The Best Linux Distros of 2016 (Linux.com)
Swapnil Bhartiya presents his picks for best distributions for 2016, on Linux.com. The list includes Best Comeback Distro: openSUSE, Most Customizable Distro: Arch Linux, Best-Looking Distro: elementary OS, Best Newcomer: Solus, Best Cloud OS: Chrome OS, Best Laptop OS: Ubuntu MATE, Best Distro for Old Hardware: Lubuntu, Best Distro for IoT: Snappy Ubuntu Core, Best Distro for Desktops: Linux Mint Cinnamon, Best Distro for Games: Steam OS, Best Distro for Privacy: Tails, Best Distro for Multimedia Production: Ubuntu Studio, Best Enterprise Distro: SLE/RHEL, Best Server OS: Debian/CentOS, Best Mobile OS: Plasma Mobile, and Best Distro for ARM Devices: Arch Linux ARM.
Page editor: Rebecca Sobol
Development
Testing PAM modules and applications in the Matrix
A new tool, called pam_wrapper, was developed by the article authors; it makes it easy either to test an application that uses pluggable authentication modules (PAM) to authenticate a user or to develop test cases to make sure that a PAM module under development is working correctly. It is a tool that enables developers to create unit tests for their PAM-using code in a simple manner.
PAM is a layer of abstraction on top of Unix authentication. It is written so that applications don't have to worry about the underlying authentication scheme, which is implemented in a module. If you're not familiar with PAM you can learn more here.
A "PAM conversation" is part of the process of doing authentication using PAM. It is essentially a question and answer game between the user and the system that is being used to authenticate the user. Normally, it just asks for a username and password, but it could also ask the user for the username then ten questions about Star Wars before actually asking for the password and authenticating the user.
Pam_wrapper is a component of the cwrap project, which provides a set of tools that make testing easier. Due to its origin in the Samba project, cwrap is especially targeted at client/server testing. Pam_wrapper is a preloadable library similar to the other cwrap components.
About pam_wrapper
The authors are working on different software projects like Samba, sssd, and libssh. Samba and sssd provide PAM modules and, until now, there were no tests available for authentication using those modules. There was no easy way to achieve that without a fully configured environment, so tests were done by people who run the modules in production or by dedicated QA teams.

The libssh project runs tests against the SSH daemon from the OpenSSH project. This was only possible in a special environment with root privileges. With pam_wrapper and the PAM module it provides, you can now run the OpenSSH daemon as a normal user performing the PAM conversation to test interactive logins. This means pam_wrapper is useful both for writing tests for PAM modules and for handling PAM conversations when testing other software.
Testing either PAM modules or PAM applications does not require root privileges when using pam_wrapper. You can also set up a dummy user account database to test against.
Testing applications
In theory, testing PAM applications shouldn't require too much instrumentation. A PAM service file allows the administrator to specify a full path, which can point to a PAM module under test; both the PAM application itself and the module can usually run unprivileged. The problem is with the location that the PAM service files are loaded from — the directory (typically /etc/pam.d) is hardcoded into libpam.so at configure time, and there is no way to override it at runtime. The pam_wrapper library allows the developer to specify an alternate directory with PAM service files, which can point to different service configurations or include test modules. This also removes the requirement to run tests as root, because the test configurations can be stored under the UID running the test.
Pam_wrapper is a preloadable library. Preloading is a feature of the dynamic linker that loads the user-specified libraries before all others. Note that if you try to preload a library for binaries that have the suid or sgid bit set (see the chmod(1) man page), the user-specified preloaded libraries are ignored. The pam_wrapper library wraps all functions of libpam.so and allows you to define your own service directory for each test:
LD_PRELOAD=libpam_wrapper.so PAM_WRAPPER=1 \
    PAM_WRAPPER_SERVICE_DIR=/path/to/servicedir ./myapplication
This command would run myapplication and tell libpam.so to read service files from the directory /path/to/servicedir instead of /etc/pam.d. The PAM_WRAPPER environment variable must be set to enable the library, which should restrict the ability to use it for attacks of any sort.
A service directory normally contains one file for the service that the test is being run against. For example, if you want to authenticate using sshd, your service file name would be sshd. In the file you need to specify which of the management groups the subsequent module is to be associated with. Valid entries are account, auth, password, and session.
The management groups handle different phases of the authentication process. The auth group modules manage authentication (i.e. if the user is who they claim to be), while the account group verifies that the user is permitted to do the action they are trying to do; it normally runs after authentication. The password group is used for password changes and the session group sets up the user environment — it can mount user-private directories, for example. They are described in the pam.d(5) man page.
Testing an application with pam_matrix
Another issue developers face when developing tests for PAM applications is that there must be some database that the tests authenticate against. A very simple test could use the pam_permit or pam_deny modules that either allow or deny all requests, but that doesn't provide tests that are like real deployments. Therefore, the pam_wrapper project added a simple PAM module called pam_matrix.so that will authenticate against a simple text database.
Let's assume you want to run tests against a server that requires PAM to authenticate users. This application uses PAM service file myapp. Normally, you would need a real user in the system with a password set — but this might not be possible in environments like Continuous Integration (CI) systems or on build hosts. Pam_wrapper and the pam_matrix module allow you to authenticate users via PAM without requiring an account on the local machine.
For that you need to create a service file that looks like this:
auth		required	pam_matrix.so passdb=/tmp/passdb
account		required	pam_matrix.so passdb=/tmp/passdb
password	required	pam_matrix.so passdb=/tmp/passdb
session		required	pam_matrix.so passdb=/tmp/passdb
Save this file as myapp and place it in a directory. Later, this directory will be referenced in the PAM_WRAPPER_SERVICE_DIR variable. The passdb option defines a file that contains users with a plain-text password for a specified service. The syntax of the file is:
username:password:allowed_service
An example for that is:
bob:secret:myapp
As an alternative to using the passdb PAM module option, it's possible to specify the database location by setting the PAM_MATRIX_PASSWD environment variable.
Testing a module with libpamtest and pam_wrapper helper modules
Writing tests for PAM applications or modules can be a tedious task. Each test would have to implement some way of passing data like passwords to the PAM modules executing the test (probably via a conversation function), run the PAM conversation, and collect output from the module or application under test. To simplify writing these tests, we added a library called libpamtest to the pam_wrapper project. This library allows the test developer to avoid code duplication and boilerplate code, and focus on writing tests instead. The libpamtest library comes with fully documented C and Python APIs.
Each libpamtest-driven test is defined by one or more instances of the structure pam_testcase that describes what kind of test is supposed to run (authentication, password change, ...) and what the expected error code is, so that both positive and negative tests are supported. The array of pam_testcase structures is then passed to a function called run_pamtest() that executes them with the help of a default conversation function provided by libpamtest. If the test requires a custom conversation function, another test driver called run_pamtest_conv() is also available that allows developers to supply their own conversation function.
The default conversation function provided by libpamtest allows the programmer to supply conversation input (typically a password) and also a string array that would capture any output that the conversation emits during the PAM transaction. As an example, the following test calls the PAM change password function, changes the password, and then verifies the new password by authenticating using the new password:
enum pamtest_err perr;
const char *new_authtoks[] = {
	"secret",	/* login with old password first */
	"new_secret",	/* provide a new password */
	"new_secret",	/* verify the new password */
	"new_secret",	/* login with the new password */
	NULL,
};
struct pamtest_conv_data conv_data = {
	.in_echo_off = new_authtoks,
};
struct pam_testcase tests[] = {
	/* pam function to execute and expected return code */
	pam_test(PAMTEST_CHAUTHTOK, PAM_SUCCESS),
	pam_test(PAMTEST_AUTHENTICATE, PAM_SUCCESS),
};

perr = run_pamtest("matrix",	/* PAM service */
		   "trinity",	/* user logging in */
		   &conv_data,	/* conversation data */
		   tests);	/* array of tests */
As you can see, the test is considerably shorter than a hand-written one would be. In addition, the test developer doesn't have to handle the conversation, or open and close the PAM handle. Everything is done behind the scenes.
If one of the PAM transaction steps failed (for example if the passwords didn't match the database), the perr return variable would indicate a test failure with value PAMTEST_ERR_CASE. The developer could then fetch the failed case using the pamtest_failed_case() function and examine the test case further.
In addition to the standard PAM actions like AUTHENTICATE or CHAUTHTOK, libpamtest also supports several custom actions that might be useful in tests. One is PAMTEST_GETENVLIST, which dumps the full PAM module environment into the test case's output data field. Another is PAMTEST_KEEPHANDLE, which prevents the PAM handle from being closed — the test could go and perform custom operations on the handle before closing it by calling pam_end().
Module stacking
Another aspect that is normally quite hard to test is module stacking. That is, testing that your module is able to read a password that is provided by another module that was executed earlier in the stack. This is a quite common setup, especially for PAM modules that handle authenticating remote users. Since local users should take precedence, the password would be read by pam_unix first and passed down the stack if no local user could be authenticated. Conversely, your module might pass an authentication token on to the PAM stack for other modules (such as Gnome Keyring's PAM module) that come later in the stack.
Normally, handling these stack items is only allowed from the module context, not application context. Because the test runs in the application context, we had to develop a way to pass data between the two. So, in order to test the stacking, two simple modules called pam_set_items.so and pam_get_items.so were added.
The purpose of pam_set_items.so is to read environment variables with names corresponding to internal PAM module items and to put the data from the environment variables onto the stack. The pam_get_items.so module works in the opposite direction, reading the PAM module items and putting them into the environment for the application to read. Suppose you wanted to test that the pam_unix.so module is able to read a password from the stack and later pass it on. The PAM service file for such a test would look like this:
auth	required	/absolute/path/to/pam_set_items.so
auth	required	pam_unix.so
auth	required	/absolute/path/to/pam_get_items.so
The test itself would first set the auth token into the process environment with putenv(), run the test, and then make sure the token was put into the PAM environment by the module by calling pam_getenv(). It's very convenient to use libpamtest's PAMTEST_GETENVLIST test case to read the PAM environment:
enum pamtest_err perr;
const char *new_authtoks[] = {
	"secret",	/* password */
	NULL,
};
struct pamtest_conv_data conv_data = {
	.in_echo_off = new_authtoks,
};
struct pam_testcase tests[] = {
	pam_test(PAMTEST_AUTHENTICATE, PAM_SUCCESS),
	pam_test(PAMTEST_GETENVLIST, PAM_SUCCESS),
};

setenv("PAM_AUTHTOK", "secret", 1);
perr = run_pamtest("matrix", "trinity", &conv_data, tests);

/*
 * tests[1].case_out.envlist now contains a list of key-value strings;
 * find PAM_AUTHTOK to see what the authtok is.
 */
Finally, because it is often inconvenient to write tests in a low-level programming language like C, we also developed Python bindings for libpamtest. Using the Python bindings, an authentication test might look like this:
def test_auth(self):
    neo_password = "secret"
    tc = pypamtest.TestCase(pypamtest.PAMTEST_AUTHENTICATE)
    res = pypamtest.run_pamtest("neo", "matrix_py", [tc], [neo_password])
Of course libpamtest can be used with or without pam_wrapper's preloading and custom PAM service location.
Where to go from here?
We hope that this tool is useful for those developers who have struggled with testing their PAM modules and applications. The authors are looking forward to more projects that implement tests for PAM modules. We are also looking forward to feedback on the current API and the usability of pam_wrapper.
At the moment, only Linux-PAM and OpenPAM (FreeBSD) are tested and supported by pam_wrapper. The code is maintained in Git on the Samba Git server. If you want to discuss pam_wrapper you can do that on the samba-technical mailing list. For discussions, you can also join #cwrap on irc.freenode.net.
Brief items
Quotes of the week
Certainly, portability would be just as interesting, being able to build certain components on top of a GNU system with GNU libc.
At times, it seems like this is done by design, to make it difficult for "fragmentation" and competition. Basically, making it difficult to exercise the freedom to modify the software and share your modifications with others.
PostgreSQL 9.5 released
PostgreSQL 9.5 has been released with lots of new features for the database management system, including UPSERT, row-level security, and several "big data" features. We previewed some of these features back in July and August. "A most-requested feature by application developers for several years, 'UPSERT' is shorthand for 'INSERT, ON CONFLICT UPDATE', allowing new and updated rows to be treated the same. UPSERT simplifies web and mobile application development by enabling the database to handle conflicts between concurrent data changes. This feature also removes the last significant barrier to migrating legacy MySQL applications to PostgreSQL."
Openfire 4.0.0 released
Version 4.0.0 of the Openfire XMPP chat server has been released. There is an extensive changelog; users are also advised that many of the available plugins have been updated and will no longer work with pre-4.0 Openfire releases.
Ansible 2.0 released
Version 2.0 of the Ansible configuration management system has been released. "This is by far one of the most ambitious Ansible releases to date, and it reflects an enormous amount of work by the community, which continues to amaze me. Approximately 300 users have contributed code to what has been known as 'v2' for some time, and 500 users have contributed code to modules since the last major Ansible release." New features include playbook-level exception handling, better error diagnostics, a new set of OpenStack modules, and more. See the changelog for more (terse) details.
GNU Health 3.0 released
Version 3.0 of the GNU Health electronic medical record (EMR) system has been released. Among the added features are support for flexible patient name formats, improved reporting, updated unit tests, and the addition of modules for ophthalmology and updated World Health Organization procedural codes.
openHAB 1.8 is available
Version 1.8 of the openHAB home-automation system has been released, along with the first beta for the upcoming 2.0 release. New in 1.8 are bindings for the RWE automation protocol popular in Germany and the Local Control Network (LCN) protocol popular with professional audio-video installers. In addition, the old Google Calendar plugin has been replaced with a general-purpose CalDAV plugin.
Ardour 4.6 released
Version 4.6 of the Ardour audio editor is available. "4.6 includes some notable new features - deep support for the Presonus FaderPort control surface, Track/Bus duplication, a new Plugin sidebar for the Mixer window - as well as the usual dozens of fixes and improvements to all aspects of the application, particularly automation editing." The full list of enhancements is quite long; see the announcement for details.
Newsletters and articles
Development newsletters from the past week
- What's cooking in git.git (January 11)
- What's cooking in git.git (January 13)
- Git Rev News (January 13)
- LLVM Weekly (January 11)
- Perl Weekly (January 11)
- PostgreSQL Weekly News (January 10)
- Python Weekly (January 7)
- Ruby Weekly (January 7)
- This Week in Rust (January 11)
- Tahoe-LAFS Weekly News (January 12)
- Wikimedia Tech News (January 11)
Akonadi – still alive and rocking
At his blog, Daniel Vrátil provides an extensive update on the status of Akonadi, the KDE project's personal information management (PIM) data service. He focuses on the changes made during the port to KDE Frameworks 5, starting with the switch from a text-based to a binary protocol. "This means we spent almost zero time on serialization and we are able to transmit large chunks of data between the server and the applications very, very efficiently." The ripple effects include changes to the database operations and, eventually, to the public API. Finally, he addresses the disappearance of the KJots note-taking application. "What we did not realize back then was that we will effectively prevent people from accessing their notes, since we don’t have any other app for that! I apologize for that to all our users, and to restore the balance in the Force I decided to bring KJots back. Not as a part of the main KDE PIM suite but as a standalone app."
Page editor: Nathan Willis
Announcements
Brief items
Mozilla shutting down Persona
Mozilla has announced that it will be shutting down the persona.org authentication service in November. It has been two years since Persona was "transitioned to community ownership"; now the other shoe has dropped. "Due to low, declining usage, we are reallocating the project’s dedicated, ongoing resources and will shut down the persona.org services that we run. Persona.org and related domains will be taken offline on November 30th, 2016." There is a set of "shutdown guidelines" to help sites still using Persona to transition to something else. (LWN looked at Persona in 2013).
Qt open source licensing changed
The Qt Company has announced changes to the open source licensing and product structure of the Qt cross-platform application development framework. "New versions of Qt will be licensed under a commercial license, GPLv2, GPLv3, and LGPLv3, but no longer under LGPLv2.1. The updated open source licenses better ensure end user freedom when using open source licensed versions of Qt. LGPLv3 explicitly forbids the distribution of closed embedded devices. Distributing software under these terms includes a patent grant to all receivers of the software. Commercial Qt licensing removes these requirements and includes professional technical support from The Qt Company."
FSF: Fill out our survey.
The Free Software Foundation is seeking feedback, suggestions, and visions for the future of the FSF. They are conducting a survey to gather input from the community.
Articles of interest
Top 10 open source legal developments in 2015 (Opensource.com)
Mark Radcliffe writes about important legal developments from 2015, including the first ruling on GPLv3 (in Germany): "In this case, the user cured its breach within the necessary period, but refused to sign a 'cease and desist' declaration which was sought by the plaintiff to ensure that the defendant would have an incentive not to breach the terms of the GPLv3 again. The court ruled that the reinstatement provision in Section 8 did not eliminate the plaintiff's right to a preliminary injunction to prevent further infringements, particularly if the defendant had refused to sign the plaintiff's cease-and-desist declaration."
New Books
EFF: The Boy Who Could Change the World
The Electronic Frontier Foundation introduces the book The Boy Who Could Change the World: The Writings of Aaron Swartz.
Calls for Presentations
LSF/MM 2016: Call for Proposals
The annual Linux Storage, Filesystem and Memory Management Summit for 2016 will be held April 18-19 in Raleigh, NC. There is a call for agenda proposals that are suitable for cross-track discussion as well as technical subjects for the breakout sessions. The deadline for proposing agenda topics is February 29.

CFP Deadlines: January 14, 2016 to March 14, 2016
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location
---|---|---|---
January 15 | March 14-17 | Open Networking Summit | Santa Clara, CA, USA
January 15 | March 10-12 | Studencki Festiwal Informatyczny (Students' Computer Science Festival) | Cracow, Poland
January 16 | April 1 | DevOps Italia | Bologna, Italy
January 18 | March 18-20 | FOSSASIA 2016 Singapore | Singapore, Singapore
January 19 | May 17-21 | PGCon - PostgreSQL Conference for Users and Developers | Ottawa, Canada
January 22 | May 2-5 | FOSS4G North America | Raleigh, NC, USA
January 22 | January 22-23 | XenProject - Cloud Innovators Forum | Pasadena, CA, USA
January 24 | March 14-18 | CeBIT 2016 Open Source Forum | Hannover, Germany
January 24 | March 11-13 | PyCon SK 2016 | Bratislava, Slovakia
January 29 | April 20-21 | Vault 2016 | Raleigh, NC, USA
February 1 | April 25-29 | OpenStack Summit | Austin, TX, USA
February 1 | June 22-24 | USENIX Annual Technical Conference | Denver, CO, USA
February 1 | April 4-8 | OpenFabrics Alliance Workshop | Monterey, CA, USA
February 2 | March 29-31 | Collaboration Summit | Lake Tahoe, CA, USA
February 5 | April 4-6 | Embedded Linux Conference | San Diego, CA, USA
February 5 | April 4-6 | OpenIoT Summit | San Diego, CA, USA
February 6 | February 12-14 | Linux Vacation / Eastern Europe Winter Edition 2016 | Minsk, Belarus
February 8 | April 7-8 | SRECon16 | Santa Clara, CA, USA
February 10 | April 23-24 | LinuxFest Northwest | Bellingham, WA, USA
February 12 | May 9-13 | ApacheCon North America | Vancouver, Canada
February 15 | March 11-13 | Zimowisko Linuksowe TLUG | Puck, Poland
February 23 | April 9-10 | OSS Weekend | Bratislava, Slovakia
February 28 | April 6 | PostgreSQL and PostGIS, Session #8 | Lyon, France
February 28 | May 10-12 | Samba eXPerience 2016 | Berlin, Germany
February 28 | April 18-19 | Linux Storage, Filesystem & Memory Management Summit | Raleigh, NC, USA
February 28 | June 21-22 | Deutsche OpenStack Tage | Köln, Deutschland
February 28 | June 24-25 | Hong Kong Open Source Conference 2016 | Hong Kong, Hong Kong
March 1 | April 23 | DevCrowd 2016 | Szczecin, Poland
March 6 | July 17-24 | EuroPython 2016 | Bilbao, Spain
March 9 | June 1-2 | Apache MesosCon | Denver, CO, USA
March 10 | May 14-15 | Open Source Conference Albania | Tirana, Albania
March 12 | April 26 | Open Source Day 2016 | Warsaw, Poland
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
SCALE 14X announcements
On Friday, January 22, the Bad Voltage team will deliver a live show and, the following morning, Mark Shuttleworth will give a keynote at SCALE in Pasadena, CA.

Events: January 14, 2016 to March 14, 2016
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location
---|---|---
January 16 | Bangalore Linux Kernel Meetup | Bangalore, India
January 20-22 | O'Reilly Design Conference 2016 | San Francisco, CA, USA
January 21-22 | Ubuntu Summit | Pasadena, CA, USA
January 21-24 | SCALE 14x - Southern California Linux Expo | Pasadena, CA, USA
January 22-23 | XenProject - Cloud Innovators Forum | Pasadena, CA, USA
January 25 | Richard Stallman - "A Free Digital Society" | Stockholm, Sweden
January 30-31 | Free and Open Source Developers Meeting | Brussels, Belgium
February 1 | MINIXCon 2016 | Amsterdam, Netherlands
February 1 | Sysadmin Miniconf | Geelong, Australia
February 1-5 | linux.conf.au | Geelong, Australia
February 5-7 | DevConf.cz 2016 | Brno, Czech Republic
February 10 | The Block Chain Conference | San Francisco, CA, USA
February 10-12 | netdev 1.1 | Seville, Spain
February 12-14 | Linux Vacation / Eastern Europe Winter Edition 2016 | Minsk, Belarus
February 24-25 | AGL Member's Meeting | Tokyo, Japan
February 27 | Open Source Days | Copenhagen, Denmark
March 1 | Icinga Camp Berlin | Berlin, Germany
March 1-6 | Internet Freedom Festival | Valencia, Spain
March 8-10 | Fluent 2016 | San Francisco, CA, USA
March 9-11 | 18th German Perl Workshop | Nürnberg, Germany
March 10-12 | Studencki Festiwal Informatyczny (Students' Computer Science Festival) | Cracow, Poland
March 11-13 | PyCon SK 2016 | Bratislava, Slovakia
March 11-13 | Zimowisko Linuksowe TLUG | Puck, Poland
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol