LWN.net Weekly Edition for December 1, 2016
Apache and the JSON license
The JSON license is a slightly modified variant of the MIT license, but that variation has led it to be rejected as a free-software or open-source license by several organizations. The change is a simple—rather innocuous at some level—addition of one line: "The Software shall be used for Good, not Evil." Until recently, code using the JSON license was acceptable for Apache projects, but that line and the ambiguity it engenders were enough for Apache to put it on the list of disallowed licenses.
At the end of October, Ted Dunning brought up the license on the Apache legal-discuss mailing list. He suggested that classifying the JSON license as acceptable (i.e. on the list of Category A licenses) was an "erroneous decision". That decision was made, he said, "apparently based on a determination that the no-evil clause was 'clearly a joke'". He pointed to a thread from 2008 where a "lazy consensus" formed that the "not evil" condition did not preclude Apache projects from using the license.
But Dunning pointed out that some of his customers' legal teams are not getting the "joke". As a license term, "good not evil" leaves quite a bit to be desired:
Dunning suggested that the license be moved to Category X, which contains licenses that cannot be used by Apache projects. Multiple replies in the thread made it clear there is no real support for the license and plenty of interest in seeing it be banned. In fact, Apache's vice president of legal affairs, Jim Jagielski, decided to ban the license on November 3. He explained his reasoning in a post two days earlier:
But, simply removing the license from the approved list doesn't magically fix the projects that depend on code using it. Dunning had a list of around a dozen Apache projects that may depend on JSON-licensed code; there may well be others. He also pointed to a Debian web page listing alternative JSON implementations. But work needs to be done to switch projects to acceptable alternatives—and that will take time.
Apache projects cannot make releases using code that is covered by a Category X license, so any that depend on JSON-licensed code need to fix that. Alan Gates wondered if there could be some kind of grace period for projects to come into compliance. He noted that the Apache Hive data warehousing project is in the middle of trying to get out a maintenance release, which would be blocked by the change; others may be similarly affected. Furthermore:
He and Jagielski had spoken about the matter and the latter had suggested perhaps adding a "grandfather clause" that would allow projects that already use JSON-licensed code to continue to make releases for some period of time. Gates proposed six months as that time frame. In general, that idea was popular, but there were some wrinkles.
Dunning would like to see a much shorter deadline for resolving the license problem. He has done some work to make it easier for projects to switch to a properly licensed alternative, but there is still a lot of testing that needs to be done to ensure everything works correctly. Gates asked if a six-month period would really matter given that the license has been present in Apache products for years, but Dunning is concerned that Apache projects are losing users over it:
There are other considerations, though. Andrew Wang said that the Apache Hadoop framework for big-data processing relies on third-party libraries that use JSON-licensed code: "We can't simply swap it out." In addition, enterprise software is expected to receive only bug fixes for multiple years, he said, so switching libraries will be problematic from that perspective:
There was a fair amount of discussion of how various projects could proceed to remove the dependency on JSON-licensed code, but no clear consensus emerged on how disruptive the change would be. Enough of a consensus on having a grace period of some length did emerge, however, to the point that Jagielski issued a statement that prohibited projects from adding JSON license dependencies, but allowed others some time to make the change:
It is, in some ways, surprising that it has taken this long for Apache to tackle this particular license problem. Other organizations have banned the license for years and Apache is rather notoriously picky about licenses. The "determination" that it was a joke clause back in 2008 seems a bit strange, in truth. But "there has been no real 'outcry' over our usage of it, especially from end-users and other consumers of our projects which use it", Jagielski said, which may explain why it hadn't been addressed until now.
It is likely that few would admit to using the JSON-licensed code for "evil" (however that is defined), but that isn't really the crux of the matter. Legal departments are understandably leery of how others (and courts, in particular) might interpret the clause. It is quite ambiguous, and corporate legal teams go to great lengths to avoid that kind of ambiguity when they can. Given that Apache projects are used at lots of large companies, it is perhaps surprising that the outcry has not been louder.
Several in the thread wondered about getting the license's author, Douglas Crockford, to change it. That was deemed unlikely by Jagielski and Sam Ruby, who have both discussed it with him multiple times. Crockford has given at least one license exception in the past ("I give permission for IBM, its customers, partners, and minions, to use JSLint for evil."), though no one suggested pursuing that path for Apache projects.
But letting the issue linger certainly had a cost for the projects that depended on that code. It would, it seems, have been far better to grasp the nettle some time ago. In the end, though, by mid-2017 the problem should be resolved, hopefully with minimal disruption.
Linux on the Mac — state of the union
The MacBook Pro introduction in October caused unusually negative reactions among professional users due to the realization that Apple no longer caters equally to casual and professional customers as it had in the past [YouTube video]. Instead, the company appears to be following an iOS-focused, margin-driven strategy that essentially relegates professionals to a fringe group. This has led well-known developers such as Salvatore Sanfilippo (of the Redis project) to consider a move back to Linux. Perhaps that makes this a good moment to look at the current state of Mac hardware support in the kernel. While Macs are x86 systems, they contain various custom chips and undocumented quirks that the community has to painstakingly reverse-engineer.
GPU switching
Apple is the only remaining vendor to build a multiplexer into hybrid graphics laptops, which have both high-end and low-power GPUs. The multiplexer allows the panel to be switched between the GPUs and the unused GPU to be powered off. All other manufacturers use a "muxless" solution, whereby the discrete GPU is headless and copies rendered data over PCIe into the integrated GPU's framebuffer. "Muxed" solutions, such as the one used by Apple, offer superior power saving and latency, but are more difficult to implement.
The multiplexer built into pre-Retina MacBook Pros is a custom Lattice XP2 FPGA, which is documented to some extent in patent US 8,687,007 B2 [PDF]. Retinas moved to a different display connector (eDP instead of LVDS) to accommodate the higher pixel clock and this forced Apple to come up with a redesign which consists of two chips, a Renesas H8S/2113 controller and a separate off-the-shelf eDP multiplexer from NXP [PDF], TI [PDF], or Pericom [PDF].
A driver for the controller was written by Seth Forshee in 2012, initially only to control backlight brightness. Andreas Heider subsequently added switching control. Matthew Garrett and Bernhard Frömel contributed code to handle register access on the H8S/2113. (It has the same register layout as its predecessor Lattice XP2, but the registers are accessed via a mailbox rather than directly.)
The panel resolution stored in the video BIOS is notoriously bogus on Macs, so it needs to be probed instead by temporarily switching the DDC lines between GPUs. Attempts to implement this were made by Forshee, Garrett, and Dave Airlie but not pursued into mainline. Development therefore stagnated for about three years. I started another effort in 2015 and the resulting patches that finally enable GPU switching on pre-Retinas were merged into 4.6. A byproduct was documentation on vga_switcheroo and Apple's multiplexer.
The original "unibody" MacBook Pro, introduced in 2008-2009 with dual NVIDIA GPUs, required several extra changes that were figured out by Pierre Moreau. There is one remaining glitch related to hardware acceleration of the console framebuffer, which can be worked around by loading nouveau with the nofbaccel option. This machine is no longer supported by Apple as of macOS Sierra. Notably, macOS was never capable of switching at runtime on this model or of using both GPUs simultaneously, so Linux is squeezing more out of the hardware than Apple ever did.
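As a sketch of the workaround (assuming nouveau is built as a module, the common distribution configuration):

```shell
# Work around the console-framebuffer glitch by disabling nouveau's
# accelerated fbcon: either pass nouveau.nofbaccel=1 on the kernel
# command line, or set the module parameter when loading manually:
modprobe nouveau nofbaccel=1
```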
However, a lot remains to be done: so far, the discrete GPU has to be turned on and off manually by the user. I have begun a set of patches to add runtime power-management support, which will handle this automatically, but it turned out that a rework of vga_switcheroo audio handling is necessary first, based on Rafael Wysocki's functional-dependencies series, which is queued for 4.10. Another unresolved issue is a short flicker during switching if the vertical blanking (VBLANK) intervals of the GPUs happen not to be in sync. Apple's patent talks about the gmux controller lengthening or shortening VBLANK intervals to achieve a seamless switch, but in reality this is not performed by the chip and needs to be done in software instead. Further down on the to-do list is switching while X11 (or Wayland) is running; currently, switching is only possible on the framebuffer console with no clients connected to the DRM drivers. Airlie has stated that improving this situation is a low priority for him.
While GPU switching on pre-Retinas works, it doesn't yet work on Retinas. Once again, there is no valid panel resolution stored in the video BIOS, so both GPUs need to probe it. However, unlike on pre-Retinas, the AUX channel (which is the DDC equivalent for DisplayPort) is not switchable between GPUs without also switching over the main link. The active GPU therefore has to either cache the panel data or proxy the inactive GPU's access to the panel. Additionally, both GPUs need to link-train their eDP outputs: the DisplayPort specification has a special provision for closed, embedded connections that allows outputs to be set up with a pre-calibrated, known-good drive current and pre-emphasis level. A solution would thus be to have the inactive GPU set up its output with pre-calibrated values determined by the active GPU.
I implemented a few prototypes of this in 2015, which were tested with partial success by Bruno Bierbaumer. Unfortunately Bierbaumer's MacBook Pro suffered an accident (unrelated to the patches) and development has since stagnated. Generally, it is hard to bring up features like this without having the hardware in front of you. Sending patches and dmesg output back and forth only gets one so far. (My own machine is a pre-Retina.) For the time being, Retina users should consider using Bierbaumer's gpu-switch application, which allows selecting the active GPU for the next reboot. The inactive GPU may then be powered down via vga_switcheroo to conserve energy.
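The vga_switcheroo interface mentioned here can also be driven by hand through debugfs; a minimal sketch, assuming root privileges, a mounted debugfs, and a kernel with vga_switcheroo enabled:

```shell
# Show which GPU is active ('+' marks the active one; Pwr/Off is the
# power state of each GPU)
cat /sys/kernel/debug/vgaswitcheroo/switch

# Power down the currently inactive GPU to conserve energy
echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
```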
Retinas introduced in 2013 and onward have an additional requirement for GPU switching: either the bootloader or the kernel's EFI stub needs to identify itself as macOS to the firmware, otherwise the integrated GPU is powered down and thus hidden from the operating system. The rationale is that Apple only supports macOS and Windows on its hardware, but never bothered to enable GPU switching on Windows. If the operating system does not identify as macOS, the EFI firmware assumes that it is dealing with Windows and disables various features of the hardware.
This identification scheme is similar in spirit to the ACPI _OSI method but happens before the EFI ExitBootServices() call (i.e. much earlier than the kernel's ACPI subsystem is initialized). It is not necessary on pre-2013 models, which assume Windows when booting in the legacy BIOS mode and macOS otherwise. Users need to be aware that if they expose the integrated GPU on 2013+ models, loading the i915 driver results in an interrupt storm that can be avoided by disabling an ACPI general-purpose event (GPE) via sysfs.
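Masking such a GPE can be done at runtime through sysfs; the event number below is purely illustrative, as the real one varies by model and can be identified by its runaway count under /sys/firmware/acpi/interrupts/:

```shell
# Disable a misfiring ACPI general-purpose event; gpe17 is a
# hypothetical example -- check /sys/firmware/acpi/interrupts/ for
# the entry with an exploding count on your machine
echo disable > /sys/firmware/acpi/interrupts/gpe17
```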
Thunderbolt
Thunderbolt controllers comprise a PCIe switch whose fabric is managed in either of two ways: by a firmware component called Intel Connection Manager (ICM) or natively by the operating system. In the former case, PCI tunnels to newly attached devices are configured in System Management Mode (SMM) behind the operating system's back and the devices appear below ACPI PCI hotplug slots. This is what most vendors do. Apple took the other approach and ships two drivers: an EFI driver that configures devices already attached on boot, and a macOS driver that assumes control after the ExitBootServices() call. When booting Windows, Apple powers the controller off on older machines or reconfigures it at runtime to be controlled by ICM on 2015+ models.
In principle, Apple's approach of foregoing a firmware blob is desirable from a free-software perspective. However Apple's drivers are closed source and Intel hasn't made the Thunderbolt specification public. Andreas Noever took on the Herculean task of reverse-engineering the macOS driver and writing a basic Linux driver. The resulting patches went into 3.17 and initially supported two chips: Cactus Ridge 4C and Falcon Ridge 4C. This year, we have been able to broaden support to further chips:
- Light Ridge 4C (Macs introduced 2011-2012): supported since 4.7
- Eagle Ridge 2C (MacBook Air introduced 2011): unsupported, try this patch and report back
- Cactus Ridge 4C (Macs introduced 2012-2013): supported since 3.17
- Falcon Ridge 4C (Macs introduced 2013-2015): supported since 3.17
- Falcon Ridge 2C (MacBook Air introduced 2015): supported since 4.8
- Alpine Ridge 4C (MacBook Pro introduced 2016): unsupported, try this patch and report back
Thunderbolt controllers consume about 2W even when idle. Apple provides a nonstandard ACPI-based mechanism to power the controller down when nothing is plugged in and I have implemented patches this year to make use of it on Linux. A first preparatory series is in 4.9 to avoid gratuitously waking the controller before and after system sleep. A second preparatory series is queued up for 4.10 to add runtime power management for PCIe Hotplug ports. A third series containing the actual runtime power management for the Thunderbolt controller is slated for 4.11. The patches improve battery life noticeably: idle power consumption on my MacBook Pro drops from 12.2W to 10.5W when powering down Thunderbolt (with the discrete GPU, and AirPort already disabled). macOS achieves 7W as it supports power management on more devices, such as Firewire.
Another upcoming feature is EFI device properties: Thunderbolt controllers possess a 64-bit unique ID, which allows telling them apart when connected together. The ID is stored in a device ROM that the vendor is supposed to burn at the factory. Curiously, Apple skipped that step on Thunderbolt 1 chips and left the device ROM blank with an ID of 0x1000000000000. So how does the controller get a unique ID? It turns out that the EFI driver generates a device ROM with an ID based on the Mac's serial number and communicates it to the macOS driver as a device property.
For Linux 4.10 I submitted patches to retrieve these properties and use the device ROM supplied by EFI in the Thunderbolt driver. The kernel needs to be booted by the EFI stub for this to work. The properties contain a lot more than just the device ROM (e.g. they convey which PCI tunnels were established by the Thunderbolt EFI driver and how the graphics EFI drivers configured the GPUs). See this sample that was generated with the kernel command line option dump_apple_properties. Apple seems to have been using this proprietary protocol for EFI device properties ever since they moved to x86 in 2006, so it took ten years for Linux to catch up. Nevertheless, having that data available now puts us in a much better position to support Macs optimally.
Current work focuses on coping with surprise removal and fixing system sleep-related bugs in the PCIe hotplug driver. Many Thunderbolt features are still needed, such as support for daisy-chaining and establishing DisplayPort-over-Thunderbolt tunnels.
Networking is another area that needs work; specifically, a driver for the Mac hardware is missing. Thunderbolt 3 is marketed as having a total bandwidth of 40 Gbit/s, which in reality is capped at 32 Gbit/s by the 4x PCIe 3.0 interface of currently available controllers. Still, this promises to rival 25G Ethernet and InfiniBand at a lower price point. The MacBook Pro introduced in October has four Thunderbolt 3 ports, which would enable interesting applications like a portable compute cluster with up to five fully-meshed nodes. macOS introduced Ethernet-over-Thunderbolt tunneling in 2013 and Intel ported it to its closed-source Windows driver a year later.
Apparently due to demand from vendors such as Dell, Intel also developed a Linux driver whose source code was surprisingly made public this year and is now at its ninth iteration. However the released code mostly just contains the plumbing between the kernel's networking subsystem and the firmware-based ICM. The real action happens in firmware, which remains closed source. And the dependency on ICM means that the patches only work on non-Macs.
Even so, Intel's driver duplicates a portion of Noever's driver. When asked to eliminate the duplicate code and move the remainder into the existing Thunderbolt driver, Intel responded in a somewhat tight-lipped manner that it "does not maintain, develop, and publish Thunderbolt software code running on Apple hardware", which sounds like it came straight out of the legal department to avoid stepping on the toes of its key account in Cupertino. Obviously, having some source code is better than no source code, but Greg Kroah-Hartman has been reluctant to merge the additional driver and has pushed for more review.
Firmware quirks
The EFI firmware on Macs contains a network stack to facilitate downloading macOS recovery images from Apple. A particularly egregious bug is present in the EFI driver for Broadcom 4331 wireless chips built into various 2011 and 2012 Macs as it fails to disable the card upon the ExitBootServices() call. As a result, the card causes an interrupt storm and corrupts memory with DMA transfers of received packets. In principle, this can be used for remote code execution over the air. Garrett discovered the issue in 2012 and sought to fix it with a GRUB quirk, but this only addressed memory corruption and not the interrupt storm, so users continued to see messages such as "irq 17: nobody cared". For Linux 4.7, I submitted an early quirk to reset the wireless card that finally fixes the issue for good. It has also been picked up by all supported stable kernels.
MacBook Pros introduced in 2015 suffer from a similarly annoying issue caused by an unused PCIe root port that Apple forgot to disable in the firmware. The root port is invisible to macOS because it is not enumerated in the ACPI tables, but Linux discovers and initializes it, thereby breaking suspend and power off on these machines. Chen Yu has posted a workaround patch which is not yet in mainline but already included in distributions such as Ubuntu.
SPI input devices and NVMe
Before 2015, mobile Macs used USB to connect the keyboard and trackpad. Newer models equipped with a Force Touch trackpad moved to a custom controller that connects to the southbridge with SPI. On the first model to do so, the MacBook Pro 13" (Early 2015), the controller alternatively supports USB and is switchable between the two interfaces by way of ACPI methods. In USB mode, this model is supported by mainline kernels. All following machines, notably the MacBook 12", leave the USB pins unconnected and are SPI only. Federico Lorenzi has begun reverse-engineering the controller's protocol and developing an experimental out-of-tree driver.
The MacBook 12" also has a custom NVMe controller, which is known to not come out of system sleep, apparently due to missing vendor-specific commands.
In conclusion
Users wishing to try Linux as a dual-boot option alongside macOS may want to consider ZFS, as it allows cross-mounting between the two operating systems in a stable manner. Other Linux filesystems, such as ext4 or Btrfs, are unfortunately not well supported on macOS. Going the other way, HFS Plus on Linux does not support journaling, and FileVault2 support is experimental. Kernel developers should be aware, however, that loading the ZFS modules disables lockdep due to the CDDL taint.
Bringup on Macs is a challenge, but on the bright side we are making huge leaps with every new release. Supporting new hardware generally takes about two years, anything older can be expected to work decently. Battery life is not yet on par with macOS, Thunderbolt lacks many features, and GPU switching only works on pre-Retinas. Despite these limitations, Mac hardware support is significantly ahead on Linux compared to other free operating systems. To quote Apple's classic commercial [YouTube video]: "here's to the crazy ones" who are bringing up Linux on the Mac.
Security
Django debates user tracking
In recent years, privacy issues have become a growing concern among free-software projects and users. As more and more software tasks become web-based, surveillance and tracking of users are also on the rise. While some software uses advertising as a source of revenue, with the side effect of monitoring users, the Django community recently had an interesting debate surrounding a proposal to add user tracking—actually developer tracking—to the popular Python web framework.
Tracking for funding
A novel aspect of this debate is that the initiative comes from concerns of the Django Software Foundation (DSF) about funding. The proposal suggests that "relying on the free labor of volunteers is ineffective, unfair, and risky" and states that "the future of Django depends on our ability to fund its development". In fact, the DSF recently hired an engineer to help oversee Django's development, which has been quite successful in helping the project make timely releases with fewer bugs. Various fundraising efforts have resulted in major new Django features, but it is difficult to attract sponsors without some hard data on the usage of Django.
The proposed feature tries to count the number of "unique developers" and gather some metrics of their environments by using Google Analytics (GA) in Django. The actual proposal (DEP 8) is done as a pull request, which is part of the Django Enhancement Proposal (DEP) process that is similar in spirit to the Python Enhancement Proposal (PEP) process. DEP 8 was brought forward by longtime Django developer Jacob Kaplan-Moss.

The rationale is that "if we had clear data on the extent of Django's usage, it would be much easier to approach organizations for funding". The proposal is essentially about adding code in Django to send a certain set of metrics when "developer" commands are run. The system would be "opt-out": enabled by default unless turned off, although the developer would be warned the first time the phone-home system is used. The proposal notes that an opt-in system "severely undercounts" and is therefore not considered "substantially better than a community survey" that the DSF is already doing.
Information gathered
The reporting is specifically designed to run only in a developer's environment and not in production. The metrics identified are, at the time of writing:

- an event category (the developer commands: startproject, startapp, runserver)
- the HTTP User-Agent string identifying the Django, Python, and OS versions
- a user-specific unique identifier (a UUID generated on first run)
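To make the mechanics concrete, here is a hedged sketch of what such a hit could look like using GA's public Measurement Protocol; the tracking ID, the category name, and the exact parameter choices are my assumptions, not code from the actual proposal:

```python
# Hypothetical sketch of the proposed phone-home hit, expressed as a
# Google Analytics Measurement Protocol "event" payload. The tracking
# ID is a placeholder and the parameter names are assumptions.
import uuid
from urllib.parse import urlencode

def build_ga_event(tracking_id, command, client_id=None):
    params = {
        "v": "1",                               # protocol version
        "tid": tracking_id,                     # GA property ID (placeholder)
        "cid": client_id or str(uuid.uuid4()),  # per-user UUID from first run
        "t": "event",                           # hit type
        "ec": "developer-commands",             # event category (assumed name)
        "ea": command,                          # e.g. "startproject"
        "aip": "1",                             # ask GA to anonymize sender IP
    }
    return urlencode(params)

# The payload would be POSTed to https://www.google-analytics.com/collect
# with a User-Agent string carrying the Django, Python, and OS versions.
print(build_ga_event("UA-XXXXX-Y", "startproject"))
```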
The proposal mentions the use of the GA aip flag which, according to GA documentation, makes "the IP address of the sender 'anonymized'". It is not quite clear how that is done at Google and, given that it is a proprietary platform, there is no way to verify that claim. The proposal says it means that "we can't see, and Google Analytics doesn't store, your actual IP". But that is not actually what Google does: GA stores IP addresses; the documentation just says they are anonymized, without explaining how.
GA is presented as a trade-off, since "Google's track record indicates that they don't value privacy" as highly as the DSF does. The alternative, deploying its own analytics software, was presented as making the sustainability problems worse. According to the proposal, Google "can't track Django users. [...] The only thing Google could do would be to lie about anonymizing IP addresses, and attempt to match users based on their IPs".
The truth is that we don't actually know what Google means when it "anonymizes" data: Jannis Leidel, a Django team member, commented that "Google has previously been subjected to secret US court orders and was required to collaborate in mass surveillance conducted by US intelligence services" that limit even Google's capacity to ensure its users' anonymity. Leidel also argued that the legal framework of the US may not apply elsewhere in the world: "for example the strict German (and by extension EU) privacy laws would exclude the automatic opt-in as a lawful option".
Furthermore, the proposal claims that "if we discovered Google was lying about this, we'd obviously stop using them immediately", but it is unclear how that would help once the software was already deployed. There are also concerns that an implementation could block normal operation, especially in countries (like China) where Google itself may be blocked. Finally, some expressed concerns that the information could constitute a security problem, since it would unduly expose the version number of Django that is running.
In other projects
Django is certainly not the first project to consider implementing analytics to get more information about its users. The proposal is largely inspired by a similar system implemented by the OS X Homebrew package manager, which has its own opt-out analytics.
Other projects embed GA code directly in their web pages. This is apparently the option chosen by Oscar, a Django-based ecommerce solution, but the DSF saw it as less useful because it would count Django administrators rather than developers. Wagtail, a Django-based content-management system, was incorrectly identified as using GA directly as well. It actually uses referrer information to identify installed domains through its version-update checks, with opt-out. Wagtail didn't use GA because the project wanted only minimal data and was worried about users' reactions.
NPM, the JavaScript package manager, also considered similar tracking extensions. Laurie Voss, the co-founder of NPM, said it decided to completely avoid phoning home, because "users would absolutely hate it". But NPM users are constantly downloading packages to rebuild applications from scratch, so it has more complete usage metrics, which are aggregated and available via a public API. NPM users seem to find this a "reasonable utility/privacy trade". Some NPM packages do phone home and have seen "very mixed" feedback from users, Voss said.
Eric Holscher, co-founder of Read the Docs, said the project is considering using Sentry for centralized reporting, which is a different idea, but interesting considering Sentry is fully open source. So even though it is a commercial service (as opposed to the closed-source Google Analytics), it may be possible to verify any anonymity claims.
Debian's response
Since Django is shipped with Debian, one concern was the reaction of the distribution to the change. Indeed, "major distros' positions would be very important for public reception" of the feature, another developer stated.
One of the current maintainers of Django in Debian, Raphaël Hertzog, explicitly stated from the start that such a system would "likely be disabled by default in Debian". There were two short discussions on Debian mailing lists where the overall consensus seemed to be that any opt-out tracking code was undesirable in Debian, especially if it was aimed at Google servers.
I have done some research to see what, exactly, was acceptable as a phone-home system in the Debian community. My research has revealed ten distinct bug reports against packages that would unexpectedly connect to the network, most of which were not directly about collecting statistics but more often about checking for new versions. In most cases I found, the feature was disabled. In the case of version checks, it seems right for Debian to disable the feature, because the package cannot upgrade itself: that task is delegated to the package manager. One of those issues was the infamous "OK Google" voice activation binary blob controversy that was previously reported here and has since then been fixed (although other issues remain in Chromium).
I have also found that there is no clearly defined policy in Debian regarding tracking software. There does, however, seem to be a strong consensus in Debian that any tracking is unacceptable. This is, for example, an extract of a policy that was drafted (but never formally adopted) by Ian Jackson, a longtime Debian developer:
In other words, opt-in only, period. Jackson explained that "when we originally wrote the core of the policy documents, the DFSG [Debian Free Software Guidelines], the SC [Social Contract], and so on, no-one would have considered this behaviour acceptable", which explains why no explicit formal policy has been adopted yet in the Debian project.
One of the concerns with opt-out systems (or even prompts that default to opt-in) was well explained back then by Debian developer Bas Wijnen:
One could argue that Debian has its own tracking systems. For example, by default, Debian will "phone home" through the APT update system (though it only reports the packages requested). However, this is currently not automated by default, although there are plans to do so soon. Furthermore, Debian members do not consider APT as tracking, because it needs to connect to the network to accomplish its primary function. Since there are multiple distributed mirrors (which the user gets to choose when installing), the risk of surveillance and tracking is also greatly reduced.
A better parallel could be drawn with Debian's popcon system, which actually tracks Debian installations, including package lists. But as Barry Warsaw pointed out in that discussion, "popcon is 'opt-in' and [...] the overwhelming majority in Debian is in favour of it in contrast to 'opt-out'". It should be noted that popcon, while opt-in, defaults to "yes" if users click through the install process. [Update: As pointed out in the comments, popcon actually defaults to "no" in Debian.] There are around 200,000 submissions at this time, which are tracked with machine-specific unique identifiers that are submitted daily. Ubuntu, which also uses the popcon software, gets around 2.8 million daily submissions, while Canonical estimates there are 40 million desktop users of Ubuntu. This would mean there are about an order of magnitude more installations than what is reported by popcon.

Policy aside, Warsaw explained that "Debian has a reputation for taking privacy issues very serious and likes to keep it".
Next steps
There are obviously disagreements within the Django project about how to handle this problem. It looks like the phone-home system may end up being implemented as a proxy system "which would allow us to strip IP addresses instead of relying on Google to anonymize them, or to anonymize them ourselves", another Django developer, Aymeric Augustin, said. Augustin also stated that the feature wouldn't "land before Django drops support for Python 2", which is currently estimated to be around 2020. It is unclear, then, how the proposal would resolve the funding issues, considering how long it would take to deploy the change and then collect the information so that it can be used to spur the funding efforts.
It also seems the system may explicitly prompt the user, with an opt-out default, instead of just splashing a warning or privacy agreement without a prompt. As Shai Berger, another Django contributor, stated, "you do not get [those] kind of numbers in community surveys". Berger also made the argument that "we trust the community to give back without being forced to do so"; furthermore:
Other options include gathering metrics in pip or PyPI, as proposed by Donald Stufft. Leidel also suggested that the system could ask the user to opt in only after the commands have been run a few times.
It is encouraging to see a community discuss such issues without overheating; it shows great maturity on the part of the Django project. Every free-software project may be confronted with funding and sustainability issues, and Django seems to be trying to address them in a transparent way. The project is willing to engage with the whole spectrum of the community, from the top leaders to downstream distributors, including individual developers. This practice should serve as a model, if not of how to do funding or tracking, at least of how to discuss those issues productively.
Everyone seems to agree that the point is not to surveil users, but to improve the software. As Debian developer Lars Wirzenius commented: "it's a very sad situation if free software projects have to compromise on privacy to get funded".
Hopefully, Django will be able to improve its funding without
compromising its principles.
Brief items
Security quotes of the week
That was the message on San Francisco Muni station computer screens across the city, giving passengers free rides all day on Saturday.
Inside sources say the system has been hacked for days.
Mission Improbable: Hardening Android for Security And Privacy (Tor blog)
The Tor blog has a post about the refresh of its Tor-enabled Android phone prototype, which is now in a workable state though it still has some rough edges. There is also a worrisome trend that the post highlights: "It is unfortunate that Google seems to see locking down Android as the only solution to the fragmentation and resulting insecurity of the Android platform. We believe that more transparent development and release processes, along with deals for longer device firmware support from SoC vendors, would go a long way to ensuring that it is easier for good OEM players to stay up to date. Simply moving more components to Google Play, even though it will keep those components up to date, does not solve the systemic problem that there are still no OEM incentives to update the base system. Users of old AOSP base systems will always be vulnerable to library, daemon, and operating system issues. Simply giving them slightly more up to date apps is a bandaid that both reduces freedom and does not solve the root security problems. Moreover, as more components and apps are moved to closed source versions, Google is reducing its ability to resist the demand that backdoors be introduced. It is much harder to backdoor an open source component (especially with reproducible builds and binary transparency) than a closed source one."
New vulnerabilities
bzip2: denial of service
Package(s): bzip2
CVE #(s): CVE-2016-3189
Created: November 28, 2016
Updated: January 5, 2017

Description: From the Mageia advisory:

A use-after-free flaw was found in bzip2recover, leading to a null pointer dereference, or a write to a closed file descriptor. An attacker could use this flaw by sending a specially crafted bzip2 file to recover and force the program to crash.
dovecot22: information disclosure
Package(s): dovecot22
CVE #(s): CVE-2016-4983
Created: November 23, 2016
Updated: November 30, 2016

Description: From the openSUSE bug report:

Red Hat found a race condition between certificate creation and chmod of the keyfile in dovecot. This can lead to the contents of the file being exposed between the time the file is created and the chmod command runs. I would suggest setting umask 077 first.
drupal: multiple vulnerabilities
Package(s): drupal
CVE #(s): CVE-2016-9449 CVE-2016-9450 CVE-2016-9452
Created: November 21, 2016
Updated: November 30, 2016

Description: From the Arch Linux advisory:

- CVE-2016-9449 (information disclosure): Drupal provides a mechanism to alter database SELECT queries before they are executed. Contributed and custom modules may use this mechanism to restrict access to certain entities by implementing hook_query_alter() or hook_query_TAG_alter() in order to add additional conditions. Queries can be distinguished by means of query tags. As the documentation on EntityFieldQuery::addTag() suggests, access-tags on entity queries normally follow the form ENTITY_TYPE_access (e.g. node_access). However, the taxonomy module's access query tag predated this system and used term_access as the query tag instead of taxonomy_term_access. As a result, before this security release modules wishing to restrict access to taxonomy terms may have implemented an unsupported tag, or needed to look for both tags (term_access and taxonomy_term_access) in order to be compatible with queries generated both by Drupal core as well as those generated by contributed modules like Entity Reference. Otherwise information on taxonomy terms might have been disclosed to unprivileged users.
- CVE-2016-9450 (content spoofing): The user password reset form does not specify a proper cache context, which can lead to cache poisoning and unwanted content on the page.
- CVE-2016-9452 (denial of service): A specially crafted URL can cause a denial of service via the transliterate mechanism.
drupal7: URL injection
Package(s): drupal7
CVE #(s): CVE-2016-9451
Created: November 21, 2016
Updated: November 30, 2016

Description: From the Drupal advisory:

Confirmation forms allow external URLs to be injected (Moderately critical - Drupal 7). Under certain circumstances, malicious users could construct a URL to a confirmation form that would trick users into being redirected to a 3rd party website after interacting with the form, thereby exposing the users to potential social engineering attacks.
firefox: multiple vulnerabilities
Package(s): firefox
CVE #(s): CVE-2016-5289 CVE-2016-5292 CVE-2016-9063 CVE-2016-9067 CVE-2016-9068 CVE-2016-9070 CVE-2016-9071 CVE-2016-9073 CVE-2016-9075 CVE-2016-9076 CVE-2016-9077
Created: November 17, 2016
Updated: November 30, 2016

Description: From the Arch Linux advisory:

- CVE-2016-5289 (arbitrary code execution): Mozilla developers and community members Christian Holler, Andrew McCreight, Dan Minor, Tyson Smith, Jon Coppeard, Jan-Ivar Bruaroey, Jesse Ruderman, and Markus Stange reported memory safety bugs present in Firefox 49. Some of these bugs showed evidence of memory corruption and we presume that with enough effort that some of these could be exploited to run arbitrary code.
- CVE-2016-5292 (arbitrary code execution): During URL parsing, a maliciously crafted URL can cause a potentially exploitable crash.
- CVE-2016-9063 (arbitrary code execution): An integer overflow during the parsing of XML using the Expat library.
- CVE-2016-9067 (arbitrary code execution): Two heap-use-after-free errors during DOM operations in nsINode::ReplaceOrInsertBefore resulting in potentially exploitable crashes.
- CVE-2016-9068 (arbitrary code execution): A heap-use-after-free in nsRefreshDriver during web animations when working with timelines resulting in a potentially exploitable crash.
- CVE-2016-9070 (same-origin policy bypass): A maliciously crafted page loaded to the sidebar through a bookmark can reference a privileged chrome window and engage in limited JavaScript operations violating cross-origin protections.
- CVE-2016-9071 (information disclosure): Content Security Policy combined with HTTP to HTTPS redirection can be used by malicious server to verify whether a known site is within a user's browser history.
- CVE-2016-9073 (sandbox escape): WebExtensions can bypass security checks to load privileged URLs and potentially escape the WebExtension sandbox.
- CVE-2016-9075 (privilege escalation): An issue where WebExtensions can use the mozAddonManager API to elevate privilege due to privileged pages being allowed in the permissions list. This allows a malicious extension to then install additional extensions without explicit user permission.
- CVE-2016-9076 (content spoofing): An issue where a <select> dropdown menu can be used to cover location bar content, resulting in potential spoofing attacks. This attack requires e10s to be enabled in order to function.
- CVE-2016-9077 (information disclosure): Canvas allows the use of the feDisplacementMap filter on images loaded cross-origin. The rendering by the filter is variable depending on the input pixel, allowing for timing attacks when the images are loaded from third party locations.
firefox: timing side channel
Package(s): firefox-esr
CVE #(s): CVE-2016-9074
Created: November 17, 2016
Updated: December 23, 2016

Description: From the Mozilla security advisory:

CVE-2016-9074: Insufficient timing side-channel resistance in divSpoiler
gnuchess: code execution
Package(s): gnuchess
CVE #(s): CVE-2015-8972
Created: November 18, 2016
Updated: November 30, 2016

Description: From the Mageia advisory:

gnuchess before 6.2.4 is vulnerable to a stack buffer overflow related to user move input, where 160 characters of input can crash gnuchess (CVE-2015-8972).
graphicsmagick: denial of service
Package(s): GraphicsMagick
CVE #(s): CVE-2016-8862
Created: November 18, 2016
Updated: November 30, 2016

Description: From the openSUSE advisory:

CVE-2016-8862: A memory allocation failure in AcquireMagickMemory could lead to denial of service.
gst-plugins-bad: code execution
Package(s): gst-plugins-bad1.0
CVE #(s): CVE-2016-9445 CVE-2016-9446
Created: November 18, 2016
Updated: January 3, 2017

Description: From the Debian advisory:

Chris Evans discovered that the GStreamer plugin to decode VMware screen capture files allowed the execution of arbitrary code.
gst-plugins-good: code execution
Package(s): gst-plugins-good0.10, gst-plugins-good1.0
CVE #(s): CVE-2016-9634 CVE-2016-9635 CVE-2016-9636
Created: November 23, 2016
Updated: December 6, 2016

Description: From the Ubuntu advisory:

Chris Evans discovered that GStreamer Good Plugins did not correctly handle malformed FLC movie files. If a user were tricked into opening a crafted FLC movie file with a GStreamer application, an attacker could cause a denial of service via application crash, or execute arbitrary code with the privileges of the user invoking the program.
hdf5: multiple vulnerabilities
Package(s): hdf5
CVE #(s): CVE-2016-4330 CVE-2016-4331 CVE-2016-4332 CVE-2016-4333
Created: November 30, 2016
Updated: January 2, 2017

Description: From the CVE entries:

In the HDF5 1.8.16 library's failure to check if the number of dimensions for an array read from the file is within the bounds of the space allocated for it, a heap-based buffer overflow will occur, potentially leading to arbitrary code execution. (CVE-2016-4330)

When decoding data out of a dataset encoded with the H5Z_NBIT decoding, the HDF5 1.8.16 library will fail to ensure that the precision is within the bounds of the size leading to arbitrary code execution. (CVE-2016-4331)

The library's failure to check if certain message types support a particular flag, the HDF5 1.8.16 library will cast the structure to an alternative structure and then assign to fields that aren't supported by the message type and the library will write outside the bounds of the heap buffer. This can lead to code execution under the context of the library. (CVE-2016-4332)

The HDF5 1.8.16 library allocating space for the array using a value from the file has an impact within the loop for initializing said array allowing a value within the file to modify the loop's terminator. Due to this, an aggressor can cause the loop's index to point outside the bounds of the array when initializing it. (CVE-2016-4333)
icu: code execution
Package(s): icu
CVE #(s): CVE-2014-9911
Created: November 28, 2016
Updated: November 30, 2016

Description: From the Debian advisory:

Michele Spagnuolo discovered a buffer overflow vulnerability which might allow remote attackers to cause a denial of service or possibly execute arbitrary code via crafted text.
icu: code execution
Package(s): icu
CVE #(s): CVE-2016-7415
Created: November 25, 2016
Updated: November 30, 2016

Description: From the Fedora advisory:

CVE-2016-7415 icu: Stack based buffer overflow in locid.cpp
ipsilon: information leak/denial of service
Package(s): ipsilon
CVE #(s): CVE-2016-8638
Created: November 21, 2016
Updated: December 15, 2016

Description: From the Red Hat advisory:

A vulnerability was found in ipsilon in the SAML2 provider's handling of sessions. An attacker able to hit the logout URL could determine what service providers other users are logged in to and terminate their sessions.
jenkins-remoting: code execution
Package(s): jenkins-remoting
CVE #(s): CVE-2016-9299
Created: November 30, 2016
Updated: December 2, 2016

Description: From the Mageia advisory:

An unauthenticated remote code execution vulnerability allowed attackers to transfer a serialized Java object to the Jenkins CLI, making Jenkins connect to an attacker-controlled LDAP server, which in turn can send a serialized payload leading to code execution, bypassing existing protection mechanisms.
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2016-8630
Created: November 17, 2016
Updated: November 30, 2016

Description: From the Fedora advisory:

CVE-2016-8630 kernel: kvm: x86: NULL pointer dereference during instruction decode
kernel: denial of service
Package(s): kernel
CVE #(s): CVE-2016-8645
Created: November 21, 2016
Updated: November 30, 2016

Description: From the Red Hat bugzilla:

It was discovered that the Linux kernel since, at least, v4.0 till v4.9-rc1 can hit BUG() statement in tcp_collapse() function after making a number of certain syscalls leading to a possible system crash.
kernel: code execution
Package(s): kernel
CVE #(s): CVE-2016-8633
Created: November 28, 2016
Updated: November 30, 2016

Description: From the Mageia advisory:

A buffer overflow vulnerability due to a lack of input filtering of incoming fragmented datagrams was found in the IP-over-1394 driver [firewire-net] in a fragment handling code in the Linux kernel. A maliciously formed fragment with a respectively large datagram offset would cause a memcpy() past the datagram buffer, which would cause a system panic or possible arbitrary code execution. The flaw requires [firewire-net] module to be loaded and is remotely exploitable from connected firewire devices, but not over a local network.
kvm: denial of service
Package(s): kvm
CVE #(s): CVE-2016-8667
Created: November 25, 2016
Updated: November 30, 2016

Description: From the SUSE bugzilla entry:

QEMU built with the JAZZ RC4030 chipset emulation support is vulnerable to a divide by zero issue. It could occur while computing its periodic timer's next tick value. A privileged guest user could use this flaw to crash the Qemu process instance on the host resulting in DoS.
libgc: code execution
Package(s): libgc
CVE #(s): CVE-2016-9427
Created: November 25, 2016
Updated: February 16, 2017

Description: From the Debian-LTS advisory:

libgc is vulnerable to integer overflows in multiple places. In some cases, when asked to allocate a huge quantity of memory, instead of failing the request, it will return a pointer to a small amount of memory possibly tricking the application into a buffer overwrite.
libsoap-lite-perl: XML expansion
Package(s): libsoap-lite-perl
CVE #(s): CVE-2015-8978
Created: November 28, 2016
Updated: November 30, 2016

Description: From the CVE entry:

In Soap Lite (aka the SOAP::Lite extension for Perl) 1.14 and earlier, an example attack consists of defining 10 or more XML entities, each defined as consisting of 10 of the previous entity, with the document consisting of a single instance of the largest entity, which expands to one billion copies of the first entity. The amount of computer memory used for handling an external SOAP call would likely exceed that available to the process parsing the XML.
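The exponential growth behind this classic "billion laughs" entity-expansion attack is easy to quantify. This sketch only computes the sizes involved; it does not build an actual XML payload:

```python
# Each of 10 entity definitions expands to 10 copies of the previous
# one, so a single use of the largest entity expands to 10**9 copies
# of the innermost string.
levels = 10
copies_per_level = 10
expansions = copies_per_level ** (levels - 1)
print(expansions)  # -> 1000000000

# Even with a tiny 3-byte base string, the expanded document needs
# roughly 3 GB of memory, while the payload itself is well under 1 KB.
expanded_bytes = expansions * 3
print(expanded_bytes)  # -> 3000000000
```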
libtiff: multiple vulnerabilities
Package(s): libtiff
CVE #(s): CVE-2016-9273 CVE-2016-9297
Created: November 18, 2016
Updated: November 30, 2016

Description: From the Mageia advisory:

A read outside of array in tiffsplit (or other utilities using TIFFNumberOfStrips()) (CVE-2016-9273). A potential read outside buffer in _TIFFPrintField() (CVE-2016-9297). Multiple uint32 overflows in writeBufferToSeparateStrips(), writeBufferToContigTiles() and writeBufferToSeparateTiles() that could cause heap buffer overflows (CVE number not assigned yet).
libtiff: multiple vulnerabilities
Package(s): libtiff
CVE #(s): CVE-2015-7313 CVE-2016-3625 CVE-2016-9448 CVE-2016-9453 CVE-2016-9533 CVE-2016-9534 CVE-2016-9535 CVE-2016-9536 CVE-2016-9537 CVE-2016-9538 CVE-2016-9539 CVE-2016-9540
Created: November 28, 2016
Updated: February 1, 2017

Description: From the CVE entries:

tif_read.c in the tiff2bw tool in LibTIFF 4.0.6 and earlier allows remote attackers to cause a denial of service (out-of-bounds read) via a crafted TIFF image. (CVE-2016-3625)

tif_pixarlog.c in libtiff 4.0.6 has out-of-bounds write vulnerabilities in heap allocated buffers. Reported as MSVR 35094, aka "PixarLog horizontalDifference heap-buffer-overflow." (CVE-2016-9533)

tif_write.c in libtiff 4.0.6 has an issue in the error code path of TIFFFlushData1() that didn't reset the tif_rawcc and tif_rawcp members. Reported as MSVR 35095, aka "TIFFFlushData1 heap-buffer-overflow." (CVE-2016-9534)

tif_predict.h and tif_predict.c in libtiff 4.0.6 have assertions that can lead to assertion failures in debug mode, or buffer overflows in release mode, when dealing with unusual tile size like YCbCr with subsampling. Reported as MSVR 35105, aka "Predictor heap-buffer-overflow." (CVE-2016-9535)

tools/tiff2pdf.c in libtiff 4.0.6 has out-of-bounds write vulnerabilities in heap allocated buffers in t2p_process_jpeg_strip(). Reported as MSVR 35098, aka "t2p_process_jpeg_strip heap-buffer-overflow." (CVE-2016-9536)

tools/tiffcrop.c in libtiff 4.0.6 has out-of-bounds write vulnerabilities in buffers. Reported as MSVR 35093, MSVR 35096, and MSVR 35097. (CVE-2016-9537)

tools/tiffcrop.c in libtiff 4.0.6 reads an undefined buffer in readContigStripsIntoBuffer() because of a uint16 integer overflow. Reported as MSVR 35100. (CVE-2016-9538)

tools/tiffcrop.c in libtiff 4.0.6 has an out-of-bounds read in readContigTilesIntoBuffer(). Reported as MSVR 35092. (CVE-2016-9539)

tools/tiffcp.c in libtiff 4.0.6 has an out-of-bounds write on tiled images with odd tile width versus image width. Reported as MSVR 35103, aka "cpStripToTile heap-buffer-overflow." (CVE-2016-9540)

From the Arch Linux advisory:

- CVE-2015-7313 (denial of service): A denial of service flaw was found in the way libtiff parsed certain tiff files. An attacker could use this flaw to create a specially crafted TIFF file that would cause an application using libtiff to exhaust all available memory on the system.
- CVE-2016-9448 (denial of service): A null pointer dereference vulnerability in TIFFFetchNormalTag() occurs when values of tags with TIFF_SETGET_C16_ASCII / TIFF_SETGET_C32_ASCII access are 0-byte arrays leading to denial of service.
- CVE-2016-9453 (arbitrary code execution): An out-of-bounds write vulnerability has been discovered caused by a memcpy call without proper bounds checks. A malicious tiff file handled by tiff2pdf will cause an illegal write to a potentially attacker controlled target address.
lxc: directory traversal
Package(s): lxc
CVE #(s): CVE-2016-8649
Created: November 25, 2016
Updated: December 19, 2016

Description: From the Ubuntu advisory:

Roman Fiedler discovered a directory traversal flaw in lxc-attach. An attacker with access to an LXC container could exploit this flaw to access files outside of the container.
mcabber: roster push attack
Package(s): mcabber
CVE #(s): CVE-2016-9928
Created: November 28, 2016
Updated: January 2, 2017

Description: From the Debian LTS advisory:

It was discovered that there was a "roster push attack" in mcabber, a console-based Jabber (XMPP) client.
moodle: multiple vulnerabilities
Package(s): moodle
CVE #(s): CVE-2016-9186 CVE-2016-9187 CVE-2016-9188
Created: November 21, 2016
Updated: November 30, 2016

Description: From the CVE entries:

Unrestricted file upload vulnerability in the "legacy course files" and "file manager" modules in Moodle 3.1.2 allows remote authenticated users to execute arbitrary code by uploading a file with an executable extension, and then accessing it via unspecified vectors. (CVE-2016-9186)

Unrestricted file upload vulnerability in the double extension support in the "image" module in Moodle 3.1.2 allows remote authenticated users to execute arbitrary code by uploading a file with an executable extension, and then accessing it via unspecified vectors. (CVE-2016-9187)

Cross-site scripting (XSS) vulnerabilities in Moodle CMS on or before 3.1.2 allow remote attackers to inject arbitrary web script or HTML via the s_additionalhtmlhead, s_additionalhtmltopofbody, and s_additionalhtmlfooter parameters. (CVE-2016-9188)
mozilla: code execution
Package(s): firefox
CVE #(s): CVE-2016-9069
Created: November 21, 2016
Updated: November 30, 2016

Description: From the Ubuntu advisory:

Two use-after-free bugs were discovered during DOM operations in some circumstances. If a user were tricked in to opening a specially crafted website, an attacker could potentially exploit these to cause a denial of service via application crash, or execute arbitrary code. (CVE-2016-9067, CVE-2016-9069)
mujs: multiple vulnerabilities
Package(s): mujs
CVE #(s): CVE-2016-7504 CVE-2016-7505 CVE-2016-7506 CVE-2016-9017 CVE-2016-9108 CVE-2016-9109 CVE-2016-9294
Created: November 25, 2016
Updated: December 2, 2016

Description: From the Red Hat bugzilla entry for CVE-2016-7504, CVE-2016-7505, CVE-2016-7506, CVE-2016-9017, CVE-2016-9108, and CVE-2016-9109:

Mujs received multiple CVEs for security issues.

CVE-2016-9108: Integer overflow and crash parsing regex in mujs
http://seclists.org/oss-sec/2016/q4/275

CVE-2016-9109: Incomplete fix for CVE-2016-7563
http://seclists.org/oss-sec/2016/q4/276

CVE-2016-7506: OOB read vulnerability in Sp_replace_regexp function
http://bugs.ghostscript.com/show_bug.cgi?id=697141

CVE-2016-7505: Buffer overflow in divby function
http://bugs.ghostscript.com/show_bug.cgi?id=697140

CVE-2016-7504: Use-after-free in Rp_toString function
http://bugs.ghostscript.com/show_bug.cgi?id=697142

CVE-2016-9017: OOB read in jsC_dumpfunction function
http://bugs.ghostscript.com/show_bug.cgi?id=697171

From the Red Hat bugzilla entry for CVE-2016-9294:

MuJS before 5008105780c0b0182ea6eda83ad5598f225be3ee allows context-dependent attackers to conduct "denial of service (application crash)" attacks by using the "malformed labeled break/continue in JavaScript" approach, related to a "NULL pointer dereference" issue affecting the jscompile.c component.
ntp: multiple vulnerabilities
Package(s): ntp
CVE #(s): CVE-2016-9311 CVE-2016-9310 CVE-2016-7427 CVE-2016-7428 CVE-2016-9312 CVE-2016-7431 CVE-2016-7434 CVE-2016-7429 CVE-2016-7426 CVE-2016-7433
Created: November 23, 2016
Updated: February 7, 2017

Description: From the Slackware advisory:
otrs: code execution
Package(s): otrs
CVE #(s): CVE-2016-9139
Created: November 18, 2016
Updated: January 17, 2017

Description: From the openSUSE advisory:

CVE-2016-9139: execution of JavaScript in OTRS context by opening malicious attachment
p7zip: denial of service
Package(s): p7zip
CVE #(s): CVE-2016-9296
Created: November 30, 2016
Updated: December 5, 2016

Description: From the CVE entry:

A null pointer dereference bug affects the 16.02 and many old versions of p7zip. A lack of null pointer check for the variable folders.PackPositions in function CInArchive::ReadAndDecodePackedStreams in CPP/7zip/Archive/7z/7zIn.cpp, as used in the 7z.so library and in 7z applications, will cause a crash and a denial of service when decoding malformed 7z files.
perl-DBD-MySQL: out of bounds read
Package(s): perl-DBD-MySQL
CVE #(s): CVE-2016-1249
Created: November 25, 2016
Updated: November 30, 2016

Description: From the Fedora advisory:

CVE-2016-1249 perl-DBD-MySQL: Out-of-bounds read when using server-side prepared statement support
php: code execution
Package(s): php
CVE #(s): CVE-2016-9138
Created: November 21, 2016
Updated: December 21, 2016

Description: From the Arch Linux advisory:

- CVE-2016-9138 (arbitrary code execution): An use after free vulnerability was found in unserialize() via DateInterval::__wakeup(), leading to arbitrary code execution.
python-tornado: XSRF protection bypass
Package(s): python-tornado
CVE #(s):
Created: November 28, 2016
Updated: December 13, 2016

Description: From the Fedora advisory:

Update to 4.4.2:

Security fixes:
* A difference in cookie parsing between Tornado and web browsers (especially when combined with Google Analytics) could allow an attacker to set arbitrary cookies and bypass XSRF protection. The cookie parser has been rewritten to fix this attack.

Backwards-compatibility notes:
* Cookies containing certain special characters (in particular semicolon and square brackets) are now parsed differently.
* If the cookie header contains a combination of valid and invalid cookies, the valid ones will be returned (older versions of Tornado would reject the entire header for a single invalid cookie).
qemu: denial of service

Package(s): qemu
CVE #(s): CVE-2016-7907
Created: November 21, 2016
Updated: November 30, 2016
Description: From the CVE entry:

The imx_fec_do_tx function in hw/net/imx_fec.c in QEMU (aka Quick Emulator) does not properly limit the buffer descriptor count when transmitting packets, which allows local guest OS administrators to cause a denial of service (infinite loop and QEMU process crash) via vectors involving a buffer descriptor with a length of 0 and crafted values in bd.flags.
sniffit: privilege escalation

Package(s): sniffit
CVE #(s): CVE-2014-5439
Created: November 21, 2016
Updated: November 30, 2016
Description: From the Debian LTS advisory:

It was discovered that there was a buffer overflow in the packet sniffer and monitoring tool "sniffit" which allowed a specially-crafted configuration file to provide a root shell.
teeworlds: code execution

Package(s): teeworlds
CVE #(s): CVE-2016-9400
Created: November 30, 2016
Updated: November 30, 2016
Description: From the Red Hat bugzilla:

A security vulnerability was found in teeworlds: there are possible attacker-controlled memory writes and possibly arbitrary code execution on the client, abusable by any server the client joins.
testdisk: code execution

Package(s): testdisk
CVE #(s):
Created: November 23, 2016
Updated: November 30, 2016
Description: From the Gentoo advisory:

A buffer overflow can be triggered within TestDisk when a malicious disk image is being recovered. A remote attacker could coerce the victim to run TestDisk against their malicious image. This may be leveraged by an attacker to crash TestDisk and gain control of program execution.
tiff: buffer overflow

Package(s): tiff
CVE #(s): CVE-2016-9532
Created: November 23, 2016
Updated: November 30, 2016
Description: From the Debian LTS advisory:

Heap buffer overflow via writeBufferToSeparateStrips().
tomcat: two vulnerabilities

Package(s): tomcat6
CVE #(s): CVE-2016-6816 CVE-2016-8735
Created: November 25, 2016
Updated: December 19, 2016
Description: From the Arch Linux advisory:

- CVE-2016-6816 (information disclosure): The code that parsed the HTTP request line permitted invalid characters. This could be exploited, in conjunction with a proxy that also permitted the invalid characters but with a different interpretation, to inject data into the HTTP response. By manipulating the HTTP response, the attacker could poison a web cache, perform an XSS attack, and/or obtain sensitive information from requests other than their own.
- CVE-2016-8735 (arbitrary code execution): The JmxRemoteLifecycleListener was not updated to take account of Oracle's fix for CVE-2016-3427. Therefore, Tomcat installations using this listener remained vulnerable to a similar remote code execution vulnerability. A remote attacker is able to execute arbitrary code and disclose sensitive information.
vagrant: NFS export insertion

Package(s): vagrant
CVE #(s):
Created: November 30, 2016
Updated: November 30, 2016
Description: From the Red Hat bugzilla:

vagrant has a tempfile race that can allow an unprivileged local user to insert arbitrary NFS exports.
vim: code execution

Package(s): vim
CVE #(s): CVE-2016-1248
Created: November 23, 2016
Updated: January 11, 2017
Description: From the Debian advisory:

Florian Larysch and Bram Moolenaar discovered that vim, an enhanced vi editor, does not properly validate values for the 'filetype', 'syntax' and 'keymap' options, which may result in the execution of arbitrary code if a file with a specially crafted modeline is opened.
w3m: multiple vulnerabilities

Package(s): w3m
CVE #(s): CVE-2016-9422 CVE-2016-9423 CVE-2016-9424 CVE-2016-9425 CVE-2016-9426 CVE-2016-9428 CVE-2016-9429 CVE-2016-9430 CVE-2016-9431 CVE-2016-9432 CVE-2016-9433 CVE-2016-9434 CVE-2016-9435 CVE-2016-9436 CVE-2016-9437 CVE-2016-9438 CVE-2016-9439 CVE-2016-9440 CVE-2016-9441 CVE-2016-9442
Created: November 21, 2016
Updated: December 14, 2016
Description: From the Arch Linux advisory:

- CVE-2016-9422 (arbitrary code execution): A problem has been discovered when rowspan and colspan are not at least 1. If either one of them is zero and the other is larger than 1, HTT_X and HTT_Y attributes are not set correctly, resulting in a wrong calculation of maxcol or maxrow (not including colspan/rowspan). This leads to a potentially exploitable buffer overflow.
- CVE-2016-9423 (arbitrary code execution): A stack overflow vulnerability has been discovered in deleteFrameSet() on specially crafted input like a malformed HTML tag.
- CVE-2016-9424 (arbitrary code execution): A heap out-of-bounds write has been discovered due to a negative array index for selectnumber and textareanumber.
- CVE-2016-9425 (arbitrary code execution): A heap buffer overflow vulnerability has been discovered in addMultirowsForm() due to an invalid array access resulting in a write to lineBuf[-1].
- CVE-2016-9426 (arbitrary code execution): A heap corruption vulnerability has been discovered due to an integer overflow in renderTable() leading to an unexpected write outside the tabwidth array boundaries.
- CVE-2016-9428 (arbitrary code execution): A heap buffer overflow vulnerability has been discovered in addMultirowsForm() due to an invalid array access resulting in a write to lineBuf[-1].
- CVE-2016-9429 (arbitrary code execution): An out-of-bounds write vulnerability has been discovered in formUpdateBuffer() due to invalid length and position checks.
- CVE-2016-9430 (denial of service): A problem has been discovered where malformed input field type properties lead to an application crash.
- CVE-2016-9431 (arbitrary code execution): A stack overflow vulnerability has been discovered in deleteFrameSet() on specially crafted input like a malformed HTML tag.
- CVE-2016-9432 (arbitrary code execution): A vulnerability has been discovered in formUpdateBuffer() due to insufficient bounds validation, leading to a negative-sized bcopy() call getting converted to an unexpectedly large value.
- CVE-2016-9433 (denial of service): An out-of-bounds read access has been discovered in the iso2022 parsing while calculating the WC_CCS_INDEX, leading to an application crash resulting in denial of service.
- CVE-2016-9434 (arbitrary code execution): An out-of-bounds write vulnerability has been discovered while handling form_int fields. An incorrect form_int fid is not properly checked and leads to an out-of-bounds write in forms[form_id]->next.
- CVE-2016-9435 (arbitrary code execution): Multiple issues have been discovered related to uninitialized values for <i> and <dd> HTML elements. A missing PUSH_ENV(HTML_DL) call leads to a conditional jump or move depending on an uninitialized value, resulting in a stack overflow vulnerability.
- CVE-2016-9436 (arbitrary code execution): Multiple issues have been discovered related to uninitialized values for <i> and <dd> HTML elements. A missing null string termination for the tagname variable in parsetagx.c leads to an out-of-bounds access.
- CVE-2016-9437 (arbitrary code execution): An out-of-bounds write access has been discovered when using invalid button element type properties like '<button type=radio>'.
- CVE-2016-9438 (denial of service): A null pointer dereference problem has been discovered while processing the input_alt tag, leading to an application crash.
- CVE-2016-9439 (denial of service): An infinite recursion problem has been discovered when processing nested table and textarea elements, leading to an application crash.
- CVE-2016-9440 (denial of service): A null pointer dereference problem has been discovered in the formUpdateBuffer() function, leading to a segmentation fault resulting in an application crash.
- CVE-2016-9441 (denial of service): A null pointer dereference problem has been discovered in the do_refill() function, triggered by a malformed table_alt tag, leading to a segmentation fault resulting in an application crash.
- CVE-2016-9442 (denial of service): A potential heap buffer corruption vulnerability has been discovered due to Strgrow. Note that w3m's allocator (boehmgc) reserves more space than the required size due to bucketing, so the heap shouldn't be corrupted in practice.
wireshark: multiple vulnerabilities

Package(s): wireshark
CVE #(s): CVE-2016-9373 CVE-2016-9374 CVE-2016-9375 CVE-2016-9376
Created: November 18, 2016
Updated: November 30, 2016
Description: From the Mageia advisory:

The wireshark package has been updated to version 2.0.8, which fixes several security issues where a malformed packet trace could cause it to crash or go into an infinite loop, and fixes several other bugs as well.
wireshark: denial of service

Package(s): wireshark
CVE #(s): CVE-2016-9372
Created: November 28, 2016
Updated: November 30, 2016
Description: From the CVE entry:

In Wireshark 2.2.0 to 2.2.1, the Profinet I/O dissector could loop excessively, triggered by network traffic or a capture file. This was addressed in plugins/profinet/packet-pn-rtc-one.c by rejecting input with too many I/O objects.
xen: multiple vulnerabilities

Package(s): xen
CVE #(s): CVE-2016-9379 CVE-2016-9380 CVE-2016-9381 CVE-2016-9382 CVE-2016-9383 CVE-2016-9386
Created: November 25, 2016
Updated: December 5, 2016
Description: From the Debian-LTS advisory:

CVE-2016-9379, CVE-2016-9380 (XSA-198): pygrub, the boot loader emulator, fails to quote (or sanity check) its results when reporting them to its caller. A malicious guest administrator can obtain the contents of sensitive host files.
CVE-2016-9381 (XSA-197): The compiler can emit optimizations in qemu which can lead to double-fetch vulnerabilities. Malicious guest administrators can exploit this vulnerability to take over the qemu process, elevating their privilege to that of the qemu process.
CVE-2016-9382 (XSA-192): LDTR, just like TR, is purely a protected-mode facility. Hence, even when switching to a VM86-mode task, LDTR loading needs to follow protected-mode semantics. A malicious unprivileged guest process can crash the guest or escalate its privilege to that of the guest operating system.
CVE-2016-9383 (XSA-195): When Xen needs to emulate some instruction, to efficiently handle the emulation, the memory address and register operand are recalculated internally to Xen. In this process, the high bits of an intermediate expression were discarded, leading to both the memory location and the register operand being wrong. A malicious guest can modify arbitrary memory.
CVE-2016-9386 (XSA-191): The Xen x86 emulator erroneously failed to consider the unusability of segments when performing memory accesses. An unprivileged guest user program may be able to elevate its privilege to that of the guest operating system.
Page editor: Jake Edge
Kernel development
Brief items
Kernel release status
The current development kernel is 4.9-rc7, released on November 27. Linus said that things are shaping up and it is possible, but perhaps not likely, that the final 4.9 release will happen on December 4. "I basically reserve the right to make up my mind next weekend."
4.9-rc6 was released on November 20.
The latest 4.9 regressions list, posted on November 20, shows ten open issues.
Stable updates: the last two weeks have seen the release of 4.8.9 and 4.4.33 (November 19), 4.8.10 and 4.4.34 (November 21), and 4.8.11 and 4.4.35 (November 26). As if that weren't enough, 4.8.12 and 4.4.36 are in the review process with an expected release date of December 2.
Quotes of the week
Kernel development news
The end of modversions?
The 4.9-rc1 kernel prepatch, released on October 15, introduced a large set of new features — and, inevitably, a smaller set of new regressions. One of those problems, a module-related bootstrap failure, remains unfixed in the mainline even after the 4.9-rc7 release. A fix to the problem has been written and is known to work, but it may never be merged if, as seems reasonably likely, the community chooses a simpler option.
The problem of module compatibility
Loading modules into the kernel is a tricky business. Among other things, the module must precisely match the kernel into which it is being loaded in any of a number of ways. If a function prototype differs between the module and the kernel, bad things are sure to happen when that function is called. The same holds for data-structure layouts, configuration options, and even the version of the compiler used to build the various pieces. The obvious way to be sure that everything matches is to build the kernel and all loadable modules together; that is, indeed, how it is done most of the time. But there are users who want to be able to build the kernel and its modules separately.
One obvious use case for separately built modules is code that is not in the mainline, and, thus, cannot be built with the rest. There are also cases where users want to build and run a new kernel without necessarily rebuilding the modules that they use. Supporting these users while trying to protect the kernel against the loading of incompatible modules has led to the addition of a couple of layers of infrastructure.
The first of those is the "vermagic" string compiled into the kernel and into every loadable module. The system on which this article is being written features the following vermagic string:
4.8.6-2-default SMP preempt mod_unload modversions
In the simplest configuration, the module loader will simply check to ensure that a module and the kernel have the exact same vermagic string. That ensures that the module was built for the same kernel version and that major options like SMP support were configured in the same way. If the test fails, the module will not be loaded.
That test, however, will thwart users who want to use the same binary module in multiple versions of the kernel. Even users who have a module built for a distribution kernel will run into trouble when the distributor ships an update; the version number will increment to something like 4.8.6-3 and the test will fail, even though the new kernel only adds a few fixes and is almost certainly compatible with the old module. Supporting those users requires a more nuanced compatibility test.
The "modversions" configuration option is meant to be that test. When enabled, modversions changes both the compilation process and the module loader. When the kernel is built, a checksum is calculated from the prototype of every exported function; those checksums are stored in a special section of the binary. When modules are built, those same checksums are calculated for every exported function that the module calls; the result is built into the module binary. At module-load time, the kernel will drop the first part of the vermagic string (the kernel version number) before comparing it, meaning that modules can now be loaded into versions other than the one they were built for. But the loader will also compare the checksums for all kernel symbols used by the module; should one of those checksums fail to match, the module will not be loaded. This test will, thus, catch major changes in the functions used by modules, but it still cannot catch more subtle changes.
Recent changes and modversions
Back in February, Al Viro posted a set of changes to the symbol-export mechanism; these changes were designed to, among other things, allow the placement of EXPORT_SYMBOL() directives in assembly code for functions defined there. These changes, merged into the mainline for 4.9-rc1, improved symbol exports in a number of ways, but there was one little problem: the generation of checksums for symbols exported from assembly code does not work properly with binutils 2.27. In particular, those checksums (which were set to zero anyway) would be dropped entirely; the module loader would then complain about the missing checksums and refuse to load the module. As a result, systems with that version of binutils and with modversions enabled will fail to boot if they require a module that uses symbols defined in assembly code.
One fix, developed by Nick Piggin, is to create a special include file containing prototypes for functions exported from assembly code; the build process can read that file to generate the necessary modversions checksums. That ensures that the checksums are not only present, but also that they correspond to the symbols and can be meaningfully checked. This fix was merged for 4.9-rc6, but it failed to actually fix the problem because it did not finish the job. Functions defined in assembly code are, by their nature, architecture-specific, so the include file containing the prototypes must be created for each architecture. Those files were not actually created for any architecture beyond PowerPC so, as of 4.9-rc7, users of other architectures (i.e. most of us) can still run into the problem. Adam Borowski has posted a patch adding this file for the x86 architecture, but it has not been merged as of this writing.
And, indeed, it may never be merged, because it seems that most of the use cases for modversions no longer exist. Some distributors (notably Debian) make use of it but, since they take pains to not change APIs in supported kernels, all they really gain is the ability to avoid the kernel-version check (though Debian also counts on modversions to allow internal API changes to be made without changing the kernel version). As Linus Torvalds noted, the feature was once useful for developers who were tired of tracking down problems that were caused by stale kernel modules. In 2016, where the kernel version can contain the actual Git revision that was built and where the time required to build a full set of modules is short, modversions is no longer as useful as it once was. And, Piggin noted, modversions uses a fair amount of complicated machinery for a mediocre result:
By "quite limited," he is referring to the fact that many changes will elude the modversions check. In particular, changes to a structure passed to a function will not be caught. Piggin suggested that a better result could be obtained if the whole mechanism were removed and replaced by a simple, manually maintained version number attached to each exported symbol. Whenever a developer made an incompatible change, they would be expected to increment the version number; modules using the affected interface would then fail to load until they were rebuilt.
The version-number suggestion did not get far; the chances of those numbers actually being maintained in a useful manner are quite small. But the idea of removing modversions was better received. Torvalds agreed that the whole thing "may just be too painful to bother with" and that the number of users is quite small — an idea reinforced by the fact that few testers complained about this issue. So, rather than apply the fix, Torvalds chose instead to mark modversions as "broken" (essentially disabling the feature altogether). That change was merged just prior to the 4.9-rc7 release.
It seems, though, that not everybody is ready to see modversions go away quite yet; in particular, Debian, which is planning on using 4.9 for the upcoming "stretch" release, would like to have modversions available. So, after 4.9-rc7 was released, Torvalds committed another change re-enabling modversions, but with a difference. Rather than refuse to load a module when a checksum is missing, the loader will log a complaint and continue. That should suffice to get modversions working again on all systems without requiring the addition of architecture-specific include files. His real goal is clear, though: "Some day I really do want to remove MODVERSIONS entirely. Sadly, today does not appear to be that day."
When that day does come, Piggin has a patch removing modversions altogether and replacing it with a simple option for distributors to supply their own ABI version string to be used instead of vermagic. Getting rid of modversions removes about 7,700 lines of code (much of which is generated by lex and bison) and simplifies the module-loading logic. It seems like a relatively easy sell — if distributors agree that they can do without modversions in the future.
statx() v3
Some developments just take a long time to truly come to fruition. That has proved to be the case for the proposed statx() system call — at least, the "long time" part has, even if we may still be waiting for "fruition". By most accounts, though, this extension to the stat() system call would appear to be getting closer to being ready. Recent patches show the current state of statx() and where the remaining sticking points are.

The stat() system call, which returns metadata about a file, has a long history, having made its debut in the Version 1 Unix release in 1971. It has changed little in the following 45 years, even though the rest of the operating system has changed around it. Thus, it's unsurprising that stat() tends to fall short of current requirements. It is unable to represent much of the information relevant to files now, including generation and version numbers, file creation time, encryption status, whether they are stored on a remote server, and so on. It gives the caller no choice about which information to obtain, possibly forcing expensive operations to obtain data that the application does not need. The timestamp fields have year-2038 problems. And so on.
David Howells has been sporadically working on replacing stat() since 2010; his version 3 patch (counting since he restarted the effort earlier this year) came out on November 23. While the proposed statx() system call looks much the same as it did when we looked at it in May, there have been a few changes.
The prototype for statx() is still:
int statx(int dfd, const char *filename, unsigned atflag, unsigned mask,
struct statx *buffer);
Normally, dfd is a file descriptor identifying a directory, and filename is the name of the file of interest; that file is expected to be found relative to the given directory. If filename is passed as NULL, then dfd is interpreted as referring directly to the file being queried. Thus, statx() supersedes the functionality of both stat() and fstat().
The atflag argument modifies the behavior of the system call. It handles a couple of flags that already exist in current kernels: AT_SYMLINK_NOFOLLOW to return information about a symbolic link rather than following it, and AT_NO_AUTOMOUNT to prevent the automounting of remote filesystems. A set of new flags just for statx() controls the synchronization of data with remote servers, allowing applications to adjust the balance between I/O activity and accurate results. AT_STATX_FORCE_SYNC will force a synchronization with a remote server, even if the local kernel thinks its information is current, while AT_STATX_DONT_SYNC inhibits queries to the remote server, yielding fast results that may be out-of-date or entirely unavailable.
The atflag parameter, thus, controls what statx() will do to obtain the data; mask, instead, controls which data is obtained. The available flags here allow the application to request file permissions, type, number of links, ownership, timestamps, and more. The special value STATX_BASIC_STATS returns everything stat() would, while STATX_ALL returns everything available. Reducing the amount of information requested might reduce the amount of I/O required to execute the system call, but some reviewers worry that developers will just use STATX_ALL to avoid the need to think about it.
The final argument, buffer, contains a structure to be filled with the relevant information; in this version of the patch this structure looks like:
struct statx {
__u32 stx_mask; /* What results were written [uncond] */
__u32 stx_blksize; /* Preferred general I/O size [uncond] */
__u64 stx_attributes; /* Flags conveying information about the file [uncond] */
__u32 stx_nlink; /* Number of hard links */
__u32 stx_uid; /* User ID of owner */
__u32 stx_gid; /* Group ID of owner */
__u16 stx_mode; /* File mode */
__u16 __spare0[1];
__u64 stx_ino; /* Inode number */
__u64 stx_size; /* File size */
__u64 stx_blocks; /* Number of 512-byte blocks allocated */
__u64 __spare1[1];
struct statx_timestamp stx_atime; /* Last access time */
struct statx_timestamp stx_btime; /* File creation time */
struct statx_timestamp stx_ctime; /* Last attribute change time */
struct statx_timestamp stx_mtime; /* Last data modification time */
__u32 stx_rdev_major; /* Device ID of special file [if bdev/cdev] */
__u32 stx_rdev_minor;
__u32 stx_dev_major; /* ID of device containing file [uncond] */
__u32 stx_dev_minor;
__u64 __spare2[14]; /* Spare space for future expansion */
};
Here, stx_mask indicates which fields are actually valid; it will be the intersection of the information requested by the application and what the filesystem is able to provide. stx_attributes contains flags describing the state of the file; they indicate whether the file is compressed, encrypted, immutable, append-only, not to be included in backups, or an automount point.
The timestamp fields contain this structure:
struct statx_timestamp {
__s64 tv_sec;
__s32 tv_nsec;
__s32 __reserved;
};
The __reserved field was added in the version 3 patch as the result of one of the strongest points of disagreement in recent discussions about statx(). Dave Chinner suggested that, at some point in the future, nanosecond resolution may no longer be adequate; he said that the interface should be able to handle femtosecond timestamps. He was mostly alone on that point; other participants, such as Alan Cox, said that the speed of light will ensure that we never need timestamps below nanosecond resolution. Chinner insisted, though, so Howells added the __reserved field with the idea that it can be pressed into service should the need arise in the future.
Chinner had a number of other objections about the interface, some of which have not yet been addressed. These include the definition of the STATX_ATTR_ flags, which shadow a set of existing flags used with the FS_IOC_GETFLAGS and FS_IOC_SETFLAGS ioctl() calls. Reusing the flags allows a micro-optimization of the statx() code but, Chinner says, it perpetuates some interface mistakes made in the past. Ted Ts'o offered similar advice when reviewing a 2015 version of the patch set, but version 3 retains the same flag definitions.
The largest of Chinner's objections, though, may well be the absence of a comprehensive set of tests for statx(). This code, he said, should not go in until those tests are provided:
This position has been echoed by others (Michael Kerrisk, for example) recently. The kernel does have a long history of merging new system calls that do not work as advertised, with corresponding pain resulting later on. Howells will likely end up providing such tests, but not yet:
The rate of change of the patch set does seem to be slowing so, perhaps, its final form is beginning to come into focus. The history of this work suggests that it would not be wise to predict its merging in the near future, though. The stat() system call has been with us for a long time; it's reasonable to expect that statx() will last for just as long. A bit of extra "bikeshedding" to get the interface right seems understandable in that context.
Patches and updates
Kernel trees
Architecture-specific
Build system
Core kernel code
Device drivers
Device driver infrastructure
Documentation
Filesystems and block I/O
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Jonathan Corbet
Distributions
Funding Qubes OS
Qubes OS describes itself as "a reasonably secure operating system". At its core, it uses the Xen hypervisor to separate applications into isolated "qubes" that cannot interfere with each other. While Qubes OS has pushed the boundaries in desktop security, the company behind it, Invisible Things Lab (ITL), has not been as successful in achieving financial security. As a result, Qubes OS is taking a new direction that, its developers hope, will prove to be more lucrative.
As described in this news posting, the original funding model — beyond what ITL brought in with consulting — was a variant of the open-core approach. While Qubes OS is free software, the company tried to sell support for running Windows applications under AppVM as a proprietary product. Doing so required selling binary-only versions of GPLv2-licensed code. Companies wanting to sell proprietary licenses to free code often require either copyright assignment or the right to relicense the code from their contributors. ITL chose the latter option as can be seen on the Qubes OS License page, which says:
Contributors who didn't look closely might have been surprised to learn about the redefinition of Signed-off-by, especially since the page in question links to the kernel's SubmittingPatches document, which has no such provision. In any case, outside contributions do not appear to be a significant source of code for Qubes OS; a quick look at the Qubes core-admin repository shows that at least 90% of the commits there come from ITL employees. Qubes OS is, thus far, a single-company development project.
The AppVM-based business evidently failed to bring in enough revenue to justify its existence, though, so ITL de-emphasized it a little while back. The company credits the Open Technology Fund for supporting Qubes OS work for the last two years; there is no word on whether that funding will continue into the future. Even if it does, it seems clear that this funding, while welcome, is not enough to sustain or grow Qubes OS at the level its creators would like.
Thus the new model: a "commercial edition" of Qubes OS that will meet corporate needs in ways that, it would appear, are still being worked out:
ITL insists that Qubes OS itself will remain an open-source project; it will just be adding some proprietary bits around the edges. Much of this, the posting says, may take the form of "custom Salt configurations" and, perhaps, some additional applications. So users of Qubes OS (of which there are evidently about 20,000) need not worry about it going away, especially if the commercialization effort is successful.
What the company will not do, despite requests from some users, is offer complete systems with Qubes OS installed. That is a hard business and, in any case, there does not appear to be any available hardware out there that meets the company's standards for trustworthiness. It might be interesting to see whether there is a market out there for a complete system that has a higher-than-usual probability of staying under the owner's control, but that would almost certainly require a larger organization and budget than is available at this time.
The Linux distribution market is a hard place to play. Qubes OS does not emphasize its Linux roots, but that is the market it is operating in anyway. Many companies have tried to make a go of it, but few of them are still in business now. Like the Linux kernel itself, a distribution tends to be infrastructure that successful companies use to build some other sort of offering on, rather than a product in its own right.
ITL is now trying to create such an offering in the form of its corporate integration modules. With luck, the company will find success in that area without needing to let the free version of Qubes OS languish. ITL may also want to consider trying harder to build a community of contributors to the project, rather than trying to carry the entire burden on its own. There is certainly space for more secure operating systems; with stable funding and enough developers, perhaps ITL can continue working on one.
Brief items
Distribution quote of the week
> Note that QT is one of those that uses dlopen()/dlsym() when
> calling openssl functions (for license reasons).
No comment I could make about this would be acceptable in polite company. Or in impolite company. Or even during a sailor-class-cursing competition.
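For readers unfamiliar with the trick the quote describes, here is a minimal sketch of the dlopen()/dlsym() pattern in C. It uses libm and cos() as stand-ins for libssl and its functions (the mechanics are the same); the helper name is illustrative, not anything Qt actually ships:

```c
#include <dlfcn.h>
#include <stdio.h>

/* Resolve a symbol at runtime rather than linking against the library at
 * build time -- the approach the quote attributes to Qt's OpenSSL use.
 * libm/cos() stand in for libssl here so the sketch runs anywhere glibc
 * does; call_via_dlsym() is an invented name for illustration only. */
double call_via_dlsym(const char *lib, const char *sym, double arg)
{
    void *handle = dlopen(lib, RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return -1.0;
    }
    /* The object-to-function-pointer cast is the usual POSIX idiom. */
    double (*fn)(double) = (double (*)(double)) dlsym(handle, sym);
    double result = fn ? fn(arg) : -1.0;
    dlclose(handle);
    return result;
}
```

Because the library is never named at link time, no derived-work relationship is created by the binary itself, which is the license argument the quote alludes to.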
Fedora 25 released
The Fedora 25 release is now available: "The Fedora Project is pleased to announce the immediate availability of Fedora 25, the next big step in our journey into the containerized, modular future!" See the announcement and the release notes for details on the many changes in this release.
Distribution News
Debian GNU/Linux
Bits from the Stable Release Managers
The Debian Stable Release Managers are responsible for updates to the stable release (and old-stable while supported by the Security Team). These bits contain a reminder of what happens after the "stretch" release and how bugs in stable can be addressed. "In order to help improve our processes and provide earlier QA checks for uploads to stable, since our last d-d-a mail we've augmented our tools that generate the proposed-updates overview pages, to add support for binary debdiffs, piuparts results and Lintian checks (for both source and binary packages)."
BSP in Cambridge, UK, 27th-29th January 2017
There will be a Bug Squashing Party on January 27-29 in Cambridge, UK.
Fedora
openSUSE
Advanced discontinuation notice for openSUSE 13.2
SUSE support of openSUSE 13.2 will be ending around the middle of January.
Newsletters and articles of interest
Distribution newsletters
- Debian Misc. Developer News (November 25)
- Debian Project News (November 28)
- DistroWatch Weekly, Issue 688 (November 21)
- DistroWatch Weekly, Issue 689 (November 28)
- Lunar Linux weekly news (November 18)
- Lunar Linux weekly news (November 25)
- openSUSE news (November 23)
- openSUSE Tumbleweed – Review of the Week (November 18)
- openSUSE Tumbleweed – Review of the Week (November 25)
- Ubuntu Kernel Team newsletter (November 15)
- Ubuntu Kernel Team newsletter (November 29)
- Ubuntu Weekly Newsletter, Issue 488 (November 21)
- Ubuntu Weekly Newsletter, Issue 489 (November 28)
What’s new in Fedora 25 Workstation (Fedora Magazine)
Fedora Magazine has a brief overview of the changes to be found in the workstation version of the Fedora 25 release. "Wayland now replaces the old X11 display server by default. Its goal is to provide a smoother, richer experience when navigating Fedora Workstation. Like all software, there may still be some bugs. You can still choose the old X11 server if required."
FreeBSD quarterly report
The FreeBSD project has released its quarterly report for the third quarter of 2016. "Though 11.0-RELEASE was not finalized until after the period covered in this report, we can still have some anticipatory excitement for the features that will be coming in 12.0. The possibilities are tantalizing: a base system with no GPL components, arm64 as a Tier-1 architecture, capsicum protection for common utilities, and the CloudABI for custom software are just a few."
AV Linux Update: Good but Not Better (LinuxInsider)
LinuxInsider reviews AV Linux, a specialty distribution for audio/graphics/video enthusiasts. "This version ships with a custom RT kernel and JACK Audio Connection Kit. Its toolkit has Linux software developers in mind. It provides a strong development suite, and the leading audio/video/graphics applications either are included or available from the Debian or KXStudio software repositories."
Page editor: Rebecca Sobol
Development
The Emacs dumper dispute
The Emacs editor is, at its core, a C program, but much of the editor's functionality is actually implemented in its special "Elisp" dialect of Lisp. Starting the editor requires loading a great deal of Elisp code and initializing its state, a process that can take a long time. To avoid making users wait for this process, Emacs has long used a scheme whereby the Elisp code is loaded once and a memory image is written to disk; starting Emacs becomes a matter of reading the memory image back in, which is a much faster process. Supporting this "dumping" functionality (also known as "unexec") has never been easy; beyond the technical challenges, it now appears that it may lead to a significant split within the Emacs community.
As covered here in January, the Emacs dumping (and "undumping") mechanism has long depended on some low-level hooks in the GNU C Library's memory allocation subsystem. The Glibc developers would like to modernize and improve this code, improving the library overall but removing the hooks that Emacs depends upon. At the end of the January discussions, the Emacs developers had decided to move to a workaround implementation that allowed the dumper to continue to work in the absence of Glibc support.
"Unstable" is the sort of behavior that users of text editors normally go well out of their way to avoid; it's also the sort of thing that could give vi a definitive advantage in the interminable editor wars. So something clearly needs to be done to make the Emacs dumping facility more stable and, preferably, more maintainable going forward. What that "something" would be is unclear, and the posting of a possible solution appears to have simply muddied the waters further.
That solution comes in the form of the "portable dumper" patch from Daniel Colascione. This patch is not small; it adds over 4,500 lines of code to Emacs and it is not yet complete. Rather than try to capture the state of the C library's memory-allocation subsystem, it simply marshals and saves the set of Elisp objects known to the editor. The file format is designed for performance and, in some settings at least, Emacs can start by simply mapping the file into memory and initializing a set of pointers.
Colascione describes the result this way:
It also, he says, matches the startup performance of the current "unexec" system to within 100ms, and he has not yet had the time to collect a bunch of low-hanging optimization fruit. In other words, it seems like an interesting solution to the problem, but a patch of this size is always going to generate some discussion.
Some of that discussion focused on how this dumper works when address-space layout randomization (ASLR) is in use. Current Emacs binaries must disable ASLR entirely, thus losing the security benefits that ASLR is meant to provide. The new dumper does not require disabling ASLR, but it does contain an optimization that can be applied if the dump file can be successfully mapped at a specific address: most of the data therein can be used directly from the mapped image, without the need to allocate storage for and copy it. That should speed the startup process considerably, at the cost of always mapping the dump image at the same location.
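The map-at-a-fixed-address optimization described above can be sketched in C. Everything here (the map_dump() name, the fallback policy) is an illustrative invention, not Emacs's actual code; the point is the trade-off: if the file lands at the address it was dumped for, its embedded pointers are valid as-is, while mapping anywhere else (the ASLR-friendly path) would require a relocation pass:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a dump file, preferring a specific address (a hypothetical sketch
 * of the optimization, not Emacs's real loader).  A NULL "preferred"
 * means "anywhere", i.e. the ASLR-compatible mode. */
void *map_dump(const char *path, void *preferred, size_t *len_out)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return NULL;
    }

    /* First attempt: map at the address the file was dumped for, so the
     * pointers stored in the image can be used without fixups. */
    void *p = mmap(preferred, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED || (preferred != NULL && p != preferred)) {
        if (p != MAP_FAILED)
            munmap(p, st.st_size);
        /* Fallback: let the kernel pick a (randomized) address; a real
         * loader would then have to relocate every embedded pointer. */
        p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    }
    close(fd);

    if (p == MAP_FAILED)
        return NULL;
    if (len_out)
        *len_out = st.st_size;
    return p;
}
```

The security objection in the thread follows directly from this sketch: the fast path only works if the image always lands at the same address, which is exactly the predictability ASLR exists to remove.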
Paul Eggert worried about the potential security implications of losing ASLR protection for the bulk of the editor's data. Colascione responded that, since no part of the data image is marked executable, there is little risk of attackers running code from there. But, as Eggert pointed out, that view overlooks an important detail: that memory is full of Elisp bytecode that is executed in the editor itself, and which can do just about anything an attacker might want. So, if this approach is adopted, the fixed-location mapping might have to be turned off, at least by default.
There is, however, a bigger disagreement involving co-maintainer Eli Zaretskii, who described this work as "a wrong direction". His objection, in short, is that this patch adds a lot of low-level complexity, implemented in C, that will be a maintenance burden going forward. That is, he said, a threat to the future of the project:
It makes sense to put thought into the maintainability of the code base and how it can be evolved to attract more developers. It is not entirely clear, though, that C programmers are actually a dying breed — or that the long-term supply of Elisp developers is more certain. In any case, the Emacs community needs to fix the startup problem; those who oppose the portable dumper solution presumably have something else in mind.
Zaretskii's preferred solution would be to make the Elisp loader faster, to the point that it can be used to read Elisp code directly at startup time. That is a solution that others might like to see as well, but it has one significant shortcoming: no code toward that goal exists, and there are no signs that anybody is working in that area. Colascione's solution, instead, does exist and has an interested developer behind it. In almost any development project, working code and ongoing maintenance carry a lot of weight.
Zaretskii feels strongly enough about this issue that he has threatened to resign as co-maintainer if the portable dumper is adopted. He appears to be nearly alone in this stance, though. Colascione has said repeatedly that he sees no other way to get the required performance. Richard Stallman is guardedly favorable to this solution, noting that it will be far easier to maintain than the current unexec code. John Wiegley, the other Emacs co-maintainer, also favors going with the portable dumper code.
The wind thus appears to be blowing in the direction of adopting the portable dumper patch. Nobody seems to want to see Zaretskii relinquish the co-maintainer role (a role he only accepted last July), so, if the portable dumper is merged, the community can only hope that he will change his mind. Any large development project will occasionally make decisions that are opposed by some of its developers, even when those developers are maintainers. But the venerable Emacs editor will still be there, and will still have no end of other problems to solve.
Brief items
Development quotes of the week
The result of this is that software, particularly at the surface, is almost entirely there to satisfy its contributors. Fortunately for the majority of us that means that early users of the software get to shape it which might mean that it meets our needs too. The better software is written by people who actually spend time working out what their users need and writing for that, rather than for themselves.
Long ago when a friend told me "When you're telling someone else where you plan to throw the Frisbee, don't say anything more specific than 'Watch this!'" That's 'grep -P' in a nutshell.
If it turns out I don't have one, then so be it. In this case the code itself isn't the goal, it exists as a vehicle for writing these articles.
Cinnamon 3.2 released
Clement Lefebvre has announced the release of Cinnamon 3.2. This version has Qt 5.7+ support, support for libinput touchpads as well as synaptics, and many more changes across the stack.
Elektra 0.8.19 released
Elektra 0.8.19 has been released. "Elektra solves a non-trivial issue: how to abstract configuration in a way that software can be integrated and reconfiguration can be automated." This version features more tutorials and getting started guides, new Ruby bindings, and a cleanup of core.
Git 2.11 released
The Git project has announced the release of Git 2.11.0. This version prints longer abbreviated SHA-1 names, has better tools for dealing with ambiguous short SHA-1s, is faster at accessing delta chains, and includes other performance enhancements and much more. The release notes contain more details.
GNU Octave 4.2.0 Released
The Octave developers have announced the release of GNU Octave 4.2.0. GNU Octave is a high-level interpreted language for numerical computations. "Octave 4.2 is a major new release with many new features, better compatibility with Matlab, and many new and improved functions."
GnuPG 2.1.16 released
GnuPG 2.1.16 has been released with many new features and fixes, including a new algorithm for selecting the best ranked public key when using a mail address with -r, -R, or --locate-key, new options, changes to the trust on first use (TOFU) implementation, and more.
Newsletters and articles
Development newsletters
- Emacs News (November 21)
- Emacs News (November 28)
- These Weeks in Firefox (November 23)
- What's cooking in git.git (November 21)
- What's cooking in git.git (November 23)
- What's cooking in git.git (November 28)
- GNU Toolchain Update (November)
- This week in GTK+ (November 21)
- This week in GTK+ (November 28)
- OCaml Weekly News (November 22)
- OCaml Weekly News (November 29)
- OpenStack Developer Mailing List Digest (November 21)
- Perl Weekly (November 21)
- Perl Weekly (November 28)
- PostgreSQL Weekly News (November 20)
- PostgreSQL Weekly News (November 27)
- Python Weekly (November 18)
- Python Weekly (November 25)
- Ruby Weekly (November 17)
- Ruby Weekly (November 24)
- This Week in Rust (November 22)
- This Week in Rust (November 29)
- Wikimedia Tech News (November 21)
- Wikimedia Tech News (November 28)
Page editor: Rebecca Sobol
Announcements
Brief items
LinuxCon + CloudOpen + ContainerCon Become The Linux Foundation Open Source Summit for 2017
The Linux Foundation has announced that it is consolidating three conferences under one name going forward. LinuxCon, CloudOpen, and ContainerCon join together under the "Linux Foundation Open Source Summit" name. For 2017, that encompasses three events: OSS Japan in Tokyo May 31-June 2, OSS North America in Los Angeles September 11-13, and OSS Europe in Prague October 23-25. "The Linux Foundation Open Source Summit in North America and Europe will also contain a brand new event, Community Leadership Conference. Attendees will have access to sessions across all events in a single venue, enabling them to collaborate and share information across a wide range of open source topics and areas of technology. They can take advantage of not only unparalleled educational opportunities, but also an expo hall, networking activities, hackathons, additional co-located events and The Linux Foundation’s diversity initiatives, including free childcare, nursing rooms, non-binary restrooms and a diversity luncheon."
Articles of interest
Time is running out for NTP (InfoWorld)
InfoWorld looks at the underfunded NTP project. "NTP is more than 30 years old—it may be the oldest codebase running on the internet. Despite some hiccups, it continues to work well. But the project’s future is uncertain because the number of volunteer contributors has shrunk, and there’s too much work for one person—principal maintainer Harlan Stenn—to handle. When there is limited support, the project has to pick and choose what tasks it can afford to complete, which slows down maintenance and stifles innovation."
The UK is about to wield unprecedented surveillance powers (The Verge)
The Verge looks at legislation in the UK that would allow police and intelligence agencies to legally spy on its own people. "The legislation in question is called the Investigatory Powers Bill. It’s been cleared by politicians and awaits only the formality of royal assent before it becomes law. The bill will legalize the UK’s global surveillance program, which scoops up communications data from around the world, but it will also introduce new domestic powers, including a government database that stores the web history of every citizen in the country. UK spies will be empowered to hack individuals, internet infrastructure, and even whole towns — if the government deems it necessary."
Welte: Ten years anniversary of Openmoko
Harald Welte looks back at the Openmoko phone with a ten-year perspective (and an almost unreadable low-contrast web page). "So yes, the smartphone world is much more restricted, locked-down and proprietary than it was back in the Openmoko days. If we had been more successful then, that world might be quite different today. It was a lost opportunity to make the world embrace more freedom in terms of software and hardware."
New Books
Invent Your Own Computer Games with Python--new from No Starch Press
No Starch Press has released "Invent Your Own Computer Games with Python" by Al Sweigart.
Calls for Presentations
FOSDEM 2017 - Distributions Devroom - CFP Extended
The Distributions Devroom at FOSDEM will take place February 4 in Brussels, Belgium. The call for participation deadline has been extended until December 2.
CFP Deadlines: December 1, 2016 to January 30, 2017
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
| Deadline | Event Dates | Event | Location |
|---|---|---|---|
| December 1 | April 3-6 | ‹Programming› 2017 | Brussels, Belgium |
| December 10 | February 21-23 | Embedded Linux Conference | Portland, OR, USA |
| December 10 | February 21-23 | OpenIoT Summit | Portland, OR, USA |
| December 31 | March 2-3 | PGConf India 2017 | Bengaluru, India |
| December 31 | April 3-7 | DjangoCon Europe | Florence, Italy |
| January 1 | April 17-20 | Dockercon | Austin, TX, USA |
| January 3 | May 17-21 | PyCon US | Portland, OR, USA |
| January 6 | July 16-23 | CoderCruise | New Orleans et al., USA/Caribbean |
| January 8 | March 11-12 | Chemnitzer Linux-Tage | Chemnitz, Germany |
| January 11 | February 15-16 | Prague PostgreSQL Developer Day 2017 | Prague, Czech Republic |
| January 13 | May 22-24 | Container Camp AU | Sydney, Australia |
| January 14 | March 22-23 | Vault | Cambridge, MA, USA |
| January 20 | March 17-19 | FOSS Asia | Singapore, Singapore |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
Events: December 1, 2016 to January 30, 2017
The following event listing is taken from the LWN.net Calendar.
| Date(s) | Event | Location |
|---|---|---|
| November 29-December 2 | Open Source Monitoring Conference | Nürnberg, Germany |
| December 3 | NoSlidesConf | Bologna, Italy |
| December 3 | London Perl Workshop | London, England |
| December 6 | CHAR(16) | New York, NY, USA |
| December 10 | Mini Debian Conference Japan 2016 | Tokyo, Japan |
| December 10-11 | SciPy India | Bombay, India |
| December 27-30 | Chaos Communication Congress | Hamburg, Germany |
| January 16 | Linux.Conf.Au 2017 Sysadmin Miniconf | Hobart, Tas, Australia |
| January 16-17 | LCA Kernel Miniconf | Hobart, Australia |
| January 16-20 | linux.conf.au 2017 | Hobart, Australia |
| January 18-19 | WikiToLearnConf India | Jaipur, Rajasthan, India |
| January 27-29 | DevConf.cz 2017 | Brno, Czech Republic |
If your event does not appear here, please tell us about it.
Page editor: Rebecca Sobol
