
LWN.net Weekly Edition for August 23, 2018

Welcome to the LWN.net Weekly Edition for August 23, 2018

This edition contains the following feature content:

  • Redis modules and the Commons Clause: adding a "no selling" restriction to open-source licenses takes some Redis modules out of the open-source world.
  • The sidechannel LSM: a proposed security module to help decide when expensive side-channel mitigations are needed on context switches.
  • Batch processing of network packets: some low-hanging fruit in the core network stack yields impressive performance gains for 4.19.
  • The first half of the 4.19 merge window: what has been merged so far in this development cycle.
  • 3D printing with Atelier: a report from Akademy on KDE's open-source 3D-printer host software.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

Redis modules and the Commons Clause

By Jake Edge
August 22, 2018

The "Commons Clause", which is a condition that can be added to an open-source license, has been around for a few months, but its adoption by Redis Labs has some parts of the community in something of an uproar. At its core, using the clause is meant to ensure that those who are "selling" Redis modules (or simply selling access to them in the cloud) are prohibited from doing so—at least without a separate, presumably costly, license from Redis Labs. The clause effectively tries to implement a "no commercial use" restriction, though it is a bit more complicated than that. No commercial use licenses are not new—the "open core" business model is a more recent cousin, for example—but they have generally run aground on a simple question: "what is commercial use?"

Redis is a popular in-memory database cache that is often used by web applications. Various pieces of it are licensed differently; the "Redis core" is under the BSD license, some modules are under either Apache v2.0 or MIT, and a handful of modules that Redis Labs created are under Apache v2.0, now with Commons Clause attached. Cloud services (e.g. Amazon AWS, Microsoft Azure, Google Compute Engine, and other smaller players) provide Redis and its modules to their customers and, naturally, charge for doing so. The "charge" part is what the adoption of the clause is trying to stamp out—at least without paying Redis Labs.

The clause itself is admirably brief, just three paragraphs that are meant to be tacked on as an additional restriction to a permissive license, such as the Apache License 2.0. It overrides the license text to prohibit selling the software and defines what it means by "sell":

"Sell" means practicing any or all of the rights granted to you under the License to provide to third parties, for a fee or other consideration (including without limitation fees for hosting or consulting/ support services related to the Software), a product or service whose value derives, entirely or substantially, from the functionality of the Software.

One can immediately see some "wiggle room" that will have to be evaluated by lawyers (and, eventually, judges) to define various pieces of that sentence. "Value derives", "entirely or substantially", and even "from the functionality" are all open to interpretation. The Redis Labs announcement tries to make it clear what is being targeted:

However, today's cloud providers have repeatedly violated this ethos by taking advantage of successful open source projects and repackaging them into competitive, proprietary service offerings. Cloud providers contribute very little (if anything) to those open source projects. Instead, they use their monopolistic nature to derive hundreds of millions of dollars in revenues from them. Already, this behavior has damaged open source communities and put some of the companies that support them out of business.

Redis is an example of this paradigm. Today, most cloud providers offer Redis as a managed service over their infrastructure and enjoy huge income from software that was not developed by them. Redis' permissive BSD open source license allows them to do so legally, but this must be changed. Redis Labs is leading and financing the development of open source Redis and deserves to enjoy the fruits of these efforts. Consequently, we decided to add Commons Clause to certain components of open source Redis. Cloud providers will no longer be able to use these components as part of their Redis-as-a-Service offerings, but all other users will be unaffected by this change.

That provides some of the reasoning behind the move, but it may make others who are outside of the target zone leery of using the Redis modules that are now covered by the clause. The "this must be changed" wording about the BSD license may also make some worry about the license for the Redis core (which remains under the BSD without the addition of the clause) down the road. There is a contributor license agreement [PDF] for at least some contributions to the project, which might allow relicensing if Redis Labs—or some company that buys it—decides that is in its interest. It should be noted that the agreement allows Redis Labs to make money on any contributions made under it, which is the norm for such things but might be seen as a tad hypocritical. The Redis Labs page clearly disclaims the possibility of changing the license, though that assurance may not be ironclad:

The Redis core is, and always will remain, an open source BSD license. Certain modules, however, are now licensed as "Apache 2.0 modified with Commons Clause." These modules can be freely used in any application, but selling a product whose value derives, entirely or substantially, from their functionality is prohibited. In simple words: if your product is an application that uses such a module to perform select functions, you can use it freely and there are no restrictions on selling your product. However, if what you sell is basically the functionality of the module packaged as a cloud service or on-prem[ises] software, Commons Clause does not allow it.

So it is not quite a "no commercial use" clause, at least as interpreted by Redis Labs, but that brings problems of its own and may provide ways for the cloud providers to evade the clause entirely. As both the Commons Clause and Redis Labs pages clearly note, adding the clause to an open-source license does not result in something that falls under the Open Source Definition. That means that the Redis modules in question are no longer open source, thus Linux distributions and others may not be willing or able to distribute them any longer. The issue has already been raised for Fedora; Debian is looking into it as well and others will likely follow. That alone shows some of the collateral damage that can occur when licenses are changed this way.

Redis Labs is not alone in using the clause for licenses; other projects are adopting it as well. Neo4j Enterprise has added the clause to the AGPLv3 and Dgraph has switched from AGPLv3 to Apache v2.0 with the Commons Clause, which it called a move to a "liberal license". The clause is addressing a real problem, but the cure could be worse than the disease.

Permissively licensed code (e.g. BSD or Apache v2.0) is subject to the "abuse" that is being claimed—in fact, that is much of the point of those licenses. Permissive licensing means that the code can be changed and distributed without making any of those changes public. But copyleft-style licensing wouldn't necessarily help the problem that Redis Labs is complaining about. Large (or small) cloud providers probably do not make substantive changes to the Redis modules in question—if they do, it seems likely they would be unfazed by having to release the changes if they were required to. The AGPL was meant to help with this "as a service" loophole in the GPL, but it is not meant to stop people from running the software any way they want—quite the opposite.

And that is really the crux of the matter. Being a part of the open-source world means accepting some things, including that code you release under those terms might be used in ways you don't like. It may also be used in ways that make money for someone else. It is part and parcel of what open source is all about.

There was a time when licenses with no commercial use clauses were relatively common. Before the turn of the century or thereabouts, lots of software was distributed under those terms (e.g. Linux 0.01, the Majordomo mailing-list manager). That was an accepted practice, mostly because people weren't really paying that much attention to the terms under which all of this free (as in beer) software was being made available. That changed along the way; as it did, the perils of no commercial use clauses became more apparent.

Redis Labs has tried to clarify what it means by selling the modules and others have tried to do so with their licenses as well (e.g. the NonCommercial interpretation from Creative Commons). But a restriction of that sort, with all of its various gray areas, rarely actually hits the target sought. It is the smaller cloud providers that will be affected by this move more than Amazon, Google, or Microsoft will be. It will also split off distributions and users that are not willing to get involved with non-open-source software. Restricting the use cases for a piece of software just makes it harder to actually use that software because no one truly knows which uses are blessed and which aren't.

One of the reasons that Redis is open source is presumably for attracting a community of users, developers, and others who will help broaden the reach of the project. Redis Labs appears to want to have its cake and eat it too. Perhaps this move will give the company some time to find a way to appease its investors, but it is not a community move—and the community has noticed. A look at threads on Hacker News or Reddit will show that many are not pleased with this change. Not surprisingly, longtime free-software advocate Bradley M. Kuhn has also criticized the clause.

The clause was written by Heather Meeker, who has been involved in multiple open-source disputes (on both sides) along the way. It is being pushed by FOSSA, which is a company that provides license-compliance tools. The problem of financially supporting open-source development is real, and addressing it is what FOSSA and the Commons Clause are trying to do, but doing so with a clause that restricts the scope of open-source licenses is a non-starter.

Buried in the text of the FAQ at the Commons Clause site may be a clue to what the real goal is. In two places, it mentions conversations that it is hoping will start:

The Commons Clause was intended, in practice, to have virtually no effect other than force a conversation with only the most predatory of use cases against your OSS community.

[...]

The Commons Clause was drafted by a group of developers behind many of the world's most popular open source projects who feel pressure from rapidly-developing projects and ecosystems. Honestly, we're not entirely sure what the best long-term solution is. However, we need to start a conversation on what we can do to meet the financial needs of commercial open source projects and the communities behind them.

These conversations may truly be the goal, though it may be more difficult to have that first conversation with those that have been labeled as "predatory". The latter conversation is welcome, though it is hard to see licensing as much of a tool to use in pursuit of that goal. Smaller open-source projects (or even critical infrastructure projects like OpenSSL prior to Heartbleed) often struggle to make ends meet, which is a shame. Finding better ways to fund open-source development (for projects small and large, company-backed or not) would be fabulous; changing licenses in a way that violates one of the core tenets of open-source software seems like the wrong way to go about it.

Comments (33 posted)

The sidechannel LSM

By Jake Edge
August 21, 2018

Side-channel attacks are a reasonably well-known technique to exfiltrate information across security boundaries. Until relatively recently, concerns about these types of attacks were mostly confined to cryptographic operations, where the target was to extract secrets by observing some side channel. But with the advent of Spectre, speculative execution provides a new way to exploit side channels. A new Linux Security Module (LSM) is meant to help determine where a side channel might provide secrets to an attacker, so that a speculative-execution barrier operation can be performed.

In current kernels, a context switch from one process to another often necessitates a flush of the translation lookaside buffer (TLB) contents, which is done in switch_mm_irqs_off(). For x86, after the Spectre v2 mitigations, that function calls indirect_branch_prediction_barrier() when switching away from a process that is not allowed to core dump (i.e. does not have SUID_DUMP_USER set). The barrier (which is known as IBPB) is an expensive operation, so it is only done for "sensitive" processes that have turned off core dumps (e.g. GPG). Core dumps of a process can contain secrets of various sorts, such as keys or passwords.

However, there may be other sensitive processes that do not turn off core dumps but are still susceptible to this side channel, so a patch set from Casey Schaufler would allow LSMs to offer an opinion on whether the IBPB should be done. It adds a new LSM hook (task_safe_sidechannel()) that will return zero if there are no known side-channel worries or -EACCES if the LSM considers the context switch to be potentially sensitive. The patch set provides an LSM to check some security attributes of tasks and also adds checking to the SELinux and Smack LSMs so that they can report whether the security attributes they maintain indicate a potential side-channel concern.
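
To make the shape of the proposed interface a bit more concrete, here is a minimal sketch (it is not code from Schaufler's patches) of what a module implementing the new hook might look like, using an effective-UID comparison like the one the sidechannel LSM offers as one of its options. The hook name comes from the patch description; everything else, including the "example_" prefix, is illustrative and would only build with the patch set applied.

    #include <linux/cred.h>
    #include <linux/errno.h>
    #include <linux/lsm_hooks.h>
    #include <linux/sched.h>

    /*
     * Report a potential side channel whenever the task being switched
     * to has a different effective UID than the current task; otherwise
     * declare the switch safe.
     */
    static int example_task_safe_sidechannel(struct task_struct *p)
    {
            if (!uid_eq(current_euid(), task_euid(p)))
                    return -EACCES;         /* potentially sensitive switch */
            return 0;                       /* no known side-channel concern */
    }

    /* Registration would happen with security_add_hooks() at LSM init time. */
    static struct security_hook_list example_hooks[] __lsm_ro_after_init = {
            LSM_HOOK_INIT(task_safe_sidechannel, example_task_safe_sidechannel),
    };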

The SELinux and Smack changes add an entry for the new hook. Each looks at the current task and the task to be switched to and renders a verdict on the side-channel safety of the switch. The SELinux hook considers the switch to be safe against side channels if the current task has FILE__READ access to the new task. For Smack, it is similar: "Smack considers its private task data safe if the current task has read access to the passed task."

The bulk of the patch set, though, is the new "sidechannel" LSM. It is enabled with the SECURITY_SIDECHANNEL kernel configuration option, but requires other options in order to actually do any checking. One of them assumes that all task switches are subject to side channels (SECURITY_SIDECHANNEL_ALWAYS), so it simply always returns -EACCES. The other three enable various checks:

  • SECURITY_SIDECHANNEL_UIDS: checks if the tasks have different effective UIDs and reports side-channel susceptibility if so; this could have a high performance impact since most context switches are between tasks with different effective UIDs.
  • SECURITY_SIDECHANNEL_CAPABILITIES: checks if the tasks have different sets of capabilities, which may mean the new task would be subject to side-channel attacks.
  • SECURITY_SIDECHANNEL_NAMESPACES: checks if the tasks live in different user, PID, or control-group namespaces and returns -EACCES if so.

Given that a distribution will have to enable the LSM to make it available to its users, it would seem that some kind of runtime or load-time configuration of the different levels might be useful. As it stands, the LSM looks like it will only be used by those who build their own kernels.

The comments on the patch set have been relatively light. Jann Horn has made several suggestions, most of which Schaufler has adopted; the patch set is now up to v3. One comment that has not been addressed in the patch set is Horn's request that the security checks look at the previous non-kernel task when switching away from the kernel. He went into more detail in a posting on v2 of the patch set:

That means that an attacker who can e.g. get a CPU to first switch from an attacker task to a softirqd (e.g. for network packet processing or whatever), then switch from the softirqd to a root-owned victim task would be able to bypass the check, right? That doesn't sound like a very complicated attack...

I very much dislike the idea of adding a mitigation with a known bypass technique to the kernel.

The test in switch_mm_irqs_off() to decide whether to do the IBPB looks at the task structure; if it is a kernel thread (and thus does not have an mm pointer to a process address space), the rest of the checks are short-circuited. Schaufler didn't change that, though he did "touch" it by adding the new LSM hook call, so Horn's complaint is really about the existing test. Horn suggested keeping a copy of the metadata for the most recent non-kernel task in order to do that test, but Schaufler has not made that change; his argument was that those who are concerned about that kind of attack should probably simply enable the "always" option.

Schaufler was also concerned with finding a good mechanism to save the task metadata. Horn offered some suggestions, but noted that the obvious way to do so might not be favored in a hot path like context switching: "The obvious solution would be to take a refcounted reference on the old task's objective creds, but you probably want to avoid the resulting cache line bouncing..."

It certainly seems reasonable for the LSMs to get involved in the decision on whether a process might be susceptible to a side-channel attack from another process. The current "dumpable" test is a simple one, but likely ignores many sensitive processes. But context switching is an important function of the kernel and one that should be done as quickly as possible. Adding complexity there may not be particularly welcome, but there have been no complaints so far. Speculative execution is done as a performance optimization but clearly we are having to give some of that improvement back to work around the shortcomings of its implementation in some CPUs.

Comments (4 posted)

Batch processing of network packets

By Jonathan Corbet
August 21, 2018
It has been understood for years that kernel performance can be improved by doing things in batches. Whether the task is freeing memory pages, initializing data structures, or performing I/O, things go faster if the work is done on many objects at once; many kernel subsystems have been reworked to take advantage of the efficiency of batching. It turns out, though, that there was a piece of relatively low-hanging fruit at the core of the kernel's network stack. The 4.19 kernel will feature some work increasing the batching of packet processing, resulting in some impressive performance improvements.

Once upon a time, network interfaces would interrupt the processor every time a packet was received. That may have worked well with the kind of network interfaces we had in the 1990s, but an interface that worked that way now would be generating many thousands of interrupts per second. That, in turn, would swamp the CPU and prevent any work from getting done. The response to this problem in network circles was the adoption of an API called "NAPI" (for "new API") during the long 2.5 development series.

Old-timers on the net — like your editor — used to have their computers beep at them every time an email arrived. Most of us stopped doing that long ago; the beeps were nonstop, and things reached a point where we simply knew there would be email waiting anytime we got over our dread and opened a mail client. NAPI follows a similar approach: rather than poke the processor when packets arrive, the interface just lets them accumulate. The kernel will then poll the interface occasionally, secure in the knowledge that there will always be packets waiting to be processed. Those packets are then processed in a batch, with the batch size limited by the "weight" assigned to the interface.

At this level, we can see that batching of packet processing was added some fifteen years ago. But that is where the batching stops; when the NAPI poll happens, the device driver will pass each packet into the network stack with a call to netif_receive_skb(). From that point on, each packet is handled independently, with no further batching. In retrospect, with all of the effort that has gone into streamlining packet processing, one might wonder why that old API was never revisited, but that is how things often go in the real world.

Eventually, though, somebody usually notices an issue like that; in this case, that somebody was Edward Cree, who put together a patch set changing how low-level packet reception works. The first step was to supplement netif_receive_skb() with a batched version that reads, in its entirety:

    void netif_receive_skb_list(struct list_head *head)
    {
	struct sk_buff *skb, *next;

	list_for_each_entry_safe(skb, next, head, list)
		netif_receive_skb(skb);
    }

Now, rather than calling netif_receive_skb() for every incoming packet, a driver can make a list out of a batch of packets and pass them upward with a single call. Not much has changed at this point, but even this tweak improves performance by quite a bit, as it turns out.
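
As a rough illustration of what that looks like from the driver side (this is a sketch, not code from any real driver), a NAPI poll routine could accumulate its completed packets on a local list and hand them over in one call; the example_pull_next_packet() helper is hypothetical, standing in for however the driver extracts the next packet from its receive ring:

    #include <linux/list.h>
    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* Hypothetical helper: pull the next completed packet, if any, off
     * the device's receive ring. */
    static struct sk_buff *example_pull_next_packet(struct napi_struct *napi);

    static int example_napi_poll(struct napi_struct *napi, int budget)
    {
            LIST_HEAD(rx_list);
            struct sk_buff *skb;
            int done = 0;

            /* Accumulate up to "budget" packets on a local list... */
            while (done < budget &&
                   (skb = example_pull_next_packet(napi)) != NULL) {
                    list_add_tail(&skb->list, &rx_list);
                    done++;
            }

            /* ...then hand the whole batch to the stack in one call. */
            if (!list_empty(&rx_list))
                    netif_receive_skb_list(&rx_list);

            if (done < budget)
                    napi_complete_done(napi, done);

            return done;
    }

Previously, each iteration of that loop would have called netif_receive_skb() directly; with the batched entry point, the network stack is entered just once per poll.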

The rest of the patch series is occupied with pushing the batching further up the network stack, so that packets can be passed in lists as far as possible. That gets a little trickier at the higher levels, since some packets have to be handled in fundamentally different ways. For example, some may have been allocated from the system's memory reserves (part of a mechanism to avoid deadlocks on network block devices); those require special handling. When such situations are encountered, the list of packets must be split into smaller lists, but the batching is preserved as far as possible.

The benchmark results (included in this merge commit) are interesting. In one test case, using a single receive queue, a kernel with these patches (and a suitably patched driver) showed a 4% improvement in packet-processing speed. That would certainly justify the addition of this bit of infrastructure, but it turns out that this number is the worst case that Cree could find. In general, just adding and using netif_receive_skb_list() improves performance by 10%, and the performance improvement with the entire patch series centers around 25%. One test showed a 35% speed improvement. In an era where developers have sweated mightily for much smaller gains, this is an impressive performance improvement.

One might well wonder why even the simplest batching shown above can improve things by so much. It mostly comes down to cache behavior. As Cree notes in the patch introduction, the processor's instruction cache is not large enough to hold the entire packet-processing pipeline. A device driver will warm the cache with its own code, but then the processing of a single packet pushes that code out of cache, and the driver must start cold with the next one. Just eliminating that bit of cache contention by putting the packets into a list before handing them to the network stack thus improves things considerably; creating the same sort of cache efficiency through the network stack improves things even more.

Networking also uses a lot of indirect function calls. These calls were never cheap, but the addition of retpolines for Spectre mitigation has made things worse. Batching replaces a bunch of per-packet indirect calls with single per-list calls, reducing that overhead.

There is a problem that often comes with throughput-oriented optimizations, and which can often be seen with batching: an increase in latencies. In the networking case, though, that cost was already paid years ago when NAPI was added. The new batching works on bunches of packets that have already been accumulated at the NAPI poll time and doesn't really add any further delays. So it's an almost free improvement from that point of view.

This code has been merged for the 4.19 kernel, so it will be generally available when the release happens. As of this writing, only the Solarflare network interfaces use the new netif_receive_skb_list() API. The necessary changes at the driver level are quite small, though, so it would be surprising if other drivers were not updated in the relatively near future, possibly even before the 4.19 release. This particular fruit is hanging too low to go unpicked for long.

Comments (34 posted)

The first half of the 4.19 merge window

By Jonathan Corbet
August 17, 2018
As of this writing, Linus Torvalds has pulled just over 7,600 non-merge changesets into the mainline repository for the 4.19 development cycle. 4.19 thus seems to be off to a faster-than-usual start, perhaps because the one-week delay in the opening of the merge window gave subsystem maintainers a bit more time to get ready. There is, as usual, a lot of interesting new code finding its way into the kernel, along with the usual stream of fixes and cleanups.

Core kernel

  • The scheduler's load-tracking subsystem has been enhanced with an improved awareness of the amount of time taken by realtime processes, deadline processes, and interrupt handling; this information is used to select more appropriate operating frequencies for the system's processors.
  • The "jprobes" tracing mechanism has been removed from the kernel; it has long been superseded by the ftrace infrastructure. Those who are curious about what jprobes did can find a description in this 2005 article.
  • The asynchronous I/O polling interface has been added again, after having been reverted out of 4.18. The internal implementation has changed into a more Linus-friendly form, so this feature should actually make it into the release this time around.

Architecture-specific

  • Support for Intel's "cache pseudo locking" feature has been added. With this feature, a portion of a processor's memory cache can be populated with data of interest, then locked against further changes. The result is consistent low-latency read access to the locked memory range. See this commit for documentation on this feature.
  • 32-bit x86 systems finally have kernel page-table isolation support.
  • A large set of mitigations for the recently disclosed L1TF vulnerability has been merged.
  • The arm64 architecture has gained support for restartable sequences and the "stackleak" GCC plugin.

Filesystems and block layer

  • The XFS filesystem has removed the barrier and nobarrier mount options. Those options have not actually done anything for years; hopefully everybody has removed them from their fstab files by now.
  • The block I/O latency controller has been added; it allows administrators to provide minimum I/O latency guarantees to specific control groups.
  • The asynchronous bsg (SCSI generic) interface has been removed due to persistent and unfixable design issues.

Hardware support

  • Audio: Realtek RT5682 codecs, Everest ES7241 codecs, Amlogic AXG sound cards, and Qualcomm WCD9335 codecs.
  • Clock: Renesas R9A06G032 clock controllers, Maxim 9485 programmable clock generators, Meson AXG audio clock controllers, Actions Semi S700 SoC clock controllers, and Qualcomm SDM845 display clock controllers.
  • Graphics: Ilitek ILI9881C-based panels, Ilitek ILI9341 display panels, and Qualcomm SDM845 display processing units.
  • Hardware monitoring: Mellanox fan controllers, Maxim MAX34451 voltage/current monitors, and Nuvoton NPCM750 PWM and fan controllers.
  • Media: Dongwoon DW9807 lens voice coils, Asahi Kasei Microdevices AK7375 lens voice coils, and Socionext MN88443x demodulators.
  • Network: Vitesse VSC7385/7388/7395/7398 switches, Realtek SMI Ethernet switches, and Theobroma Systems UCAN interfaces.
  • Pin control: Intel Ice Lake pin controllers, NXP IMX8MQ pin controllers, and Synaptics as370 pin controllers.
  • Miscellaneous: NVIDIA Tegra NAND flash controllers, Socionext UniPhier SPI controllers, Qualcomm last-level cache controllers, Qualcomm RPMh regulators, Hisilicon SEC crypto block cipher accelerators, Mediatek MT7621 GPIO controllers, and MediaTek CMDQ mailbox controllers.

Networking

  • The time-based packet transmission patch set has been merged. This feature allows a program to schedule data for transmission at some future time.
  • The CAKE queuing discipline, which works to overcome bufferbloat and other problems associated with home network links, has been merged.
  • The new "skbprio" queuing discipline can schedule packets according to an internal priority field. This feature is naturally undocumented; in the commit adding it the author says: "Skbprio was conceived as a solution for denial-of-service defenses that need to route packets with different priorities as a means to overcome DoS attacks".
  • Devices that can offload the receive side processing of TLS-encrypted connections are now supported.

Security-related

  • There is now a kernel configuration option that can be used to make the system fully initialize the entropy pool from the hardware random-number generator at boot time. This should allow for better early-boot random-number generation at the cost of placing a bit of trust in the CPU manufacturer's hardware.

Internal kernel changes

  • The simple wait queue API has been changed by renaming a number of functions to reflect the fact that it only implements exclusive waits. So prepare_to_swait() becomes prepare_to_swait_exclusive(), swake_up() becomes swake_up_one(), and so on; a brief sketch of the renamed calls appears after this list.
  • There is a new initiative to translate kernel documentation into Italian, with an initial set of translations merged for 4.19.
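
For those with out-of-tree code to update, here is a hedged sketch of the rename in practice; the example_ names are made up and locking is omitted for brevity, but the calls shown are the 4.19 spellings of their pre-4.19 equivalents (noted in the comments).

    #include <linux/sched.h>
    #include <linux/swait.h>

    static DECLARE_SWAIT_QUEUE_HEAD(example_wq);
    static bool example_ready;

    static void example_wait(void)
    {
            DECLARE_SWAITQUEUE(wait);

            /* Formerly prepare_to_swait() */
            prepare_to_swait_exclusive(&example_wq, &wait, TASK_INTERRUPTIBLE);
            if (!example_ready)
                    schedule();
            finish_swait(&example_wq, &wait);
    }

    static void example_wake(void)
    {
            example_ready = true;
            swake_up_one(&example_wq);      /* formerly swake_up() */
    }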

If the usual schedule holds, the 4.19 merge window can be expected to remain open until August 26. There are still quite a few trees to be pulled, so one can expect a number of interesting changes will still find their way into this merge window. The final 4.19 release can be expected in mid-October.

Comments (2 posted)

3D printing with Atelier

August 20, 2018

This article was contributed by Marta Rybczyńska


Akademy

During this year's Akademy conference, Lays Rodrigues introduced Atelier, a cross-platform, open-source system that allows users to control their 3D printers. As she stated in her talk abstract, it is "a project with a goal to make the 3D printing world a better place". Akademy is the KDE community's annual conference. This year it took place in Vienna and the program included a number of hardware-related talks as part of the conference portion held during the weekend of August 11 and 12.

When you get a 3D printer, she began, the first interface you can access is the set of menus on the printer's own screen; see, for example, the screen shown in Rodrigues's slides [PDF]. Those menus can be used to perform basic operations and to check how the printing is going, but there are better ways to control the device, Rodrigues explained.

Most of the technology related to 3D printing is open source. It starts with G-Code files that describe the movements and actions of the printer using a kind of programming language; examples include where to move the head, at what speed, and what temperature to use. Another important part of the ecosystem is the firmware running in the printer itself, most of which is open source too. There are printing host solutions, but "the most popular is not open source", she said. This referred to Repetier-Host from the RepRap project, which started the 3D-printing movement. Repetier-Host started as an open-source system, but became closed source in 2014.

The goal set by Rodrigues and her team was to fill the gap of missing open-source 3D printer host software. Their work consists of two modules: AtCore is the core library and Atelier is the user interface. Both of them are open source and can be downloaded, compiled, and tested right now.

The AtCore library provides an abstraction for serial communication with, and control of, the printer. It is a generic layer that is independent of the user interface; AtCore can thus work with any interface, "including QML", she added. AtCore uses pure C++ with Qt for performance reasons. Rodrigues gave memory usage when printing as an example: Atelier requires 200MB of memory, while other, similar programs may require 2GB. AtCore supports most open-source 3D-printer firmware, using a plugin architecture to handle the differences between firmware implementations. At one point, Rodrigues showed the list of supported printer firmware, which corresponds to the list of supported printer models.

The second part of the team's work is the "test client": Atelier. However, it is a full 3D host system, not just a test program. It uses the KDE libraries in addition to Qt — and the AtCore library, of course. Rodrigues ran a demonstration of a number of Atelier features. The configuration she used included a laptop running Atelier and a small embedded system with the printer firmware. The demo included all stages of the printing process.

Working with a 3D printer starts with connecting to the printer itself. Rodrigues highlighted that Atelier is the first printer host that can connect to multiple printers at the same time. Atelier includes a preview mode that displays the object to be printed in 3D, so the design can be examined in detail. Rodrigues said that this view requires more work, without listing the improvements her team plans to make. This feature is based on Qt 3D, she said in response to an audience question. Monitoring the printing process is the second important feature; there are, for example, profiles for different materials. Temperature control is essential "to not burn your house", Rodrigues explained: a badly controlled 3D printer may cause damage because the temperatures involved are very high. Atelier shows graphs of the main parameters over time.

Today, the basic printer controls are done. If a user has custom firmware, "we can support it too", because the host software is open source. She suggested that the team could support other printers as well, if it had access to the hardware.

While Atelier can already be used successfully to control a 3D printer, it still requires some tweaks, Rodrigues said. The team wanted to launch it officially at Akademy; now she hopes to do so later this year, without giving a specific date. The Atelier team is currently in contact with companies in Brazil that do not want to pay a license fee on each 3D printer they ship. Their feedback is that they want to control multiple printers from the same host and, in addition, to do so remotely. That is the work her team will focus on next.

The project is two years old. The team started it to develop an open-source solution for 3D printing; Atelier currently works on Linux, Mac OS X, and Windows. The Windows port was possible thanks to help from users, Rodrigues added; Atelier adapts its look to the platform it is running on. Most people use it on Windows, Rodrigues said. "And I can't force them" to change systems, she added. For Linux hosts, Atelier is distributed as an AppImage to allow easy installation. Source code is available from the KDE Git repositories and from the GitHub mirrors of Atelier and AtCore, for those who prefer to compile on their own.

The project currently has more than 100 binary downloads, she concluded, mostly on Windows, followed by the AppImage, with OS X last. The AtCore and Atelier source-code repositories each have several hundred commits. The team working on Atelier is currently Rodrigues and three other developers.

A long session of questions followed in the nearly full room. The first question was about how many printers are supported. Rodrigues explained that most 3D printers have open-source firmware; in practice, that means they are all supported. Printers with proprietary, closed-source firmware do exist, but they are rare — and those are currently not supported. She added that they could be supported if the vendors donated a printer or paid the team to add the support. To clarify, she said that most common printers "you buy in China" are open source and will work with Atelier.

The next person was curious about the camera mode that she enabled for a moment during the demo. Rodrigues explained that its intended use is to watch the printer remotely as it prints, allowing you to be sure that everything is working correctly. She also explained that the industry does not care about a desktop version of the printer host software; instead, it wants to drive printers remotely from small, embedded systems.

The last question was about competition for Atelier. Rodrigues explained that the main competing program used to be open source, but is closed source now. She said that she is "not making too much fuss" about Atelier right now. However, her team has industry contacts and hopes to see Atelier used in industry in Brazil.

Comments (4 posted)

Page editor: Jonathan Corbet

Brief items

Security

The Problems and Promise of WebAssembly (Project Zero)

Over at Google's Project Zero blog, Natalie Silvanovich looks at some of the bugs the project has found in WebAssembly, which is a binary format to run code in the browser for web applications. She also looks to the future: "There are two emerging features of WebAssembly that are likely to have a security impact. One is threading. Currently, WebAssembly only supports concurrency via JavaScript workers, but this is likely to change. Since JavaScript is designed assuming that this is the only concurrency model, WebAssembly threading has the potential to require a lot of code to be thread safe that did not previously need to be, and this could lead to security problems. WebAssembly GC [garbage collection] is another potential feature of WebAssembly that could lead to security problems. Currently, some uses of WebAssembly have performance problems due to the lack of higher-level memory management in WebAssembly. For example, it is difficult to implement a performant Java Virtual Machine in WebAssembly. If WebAssembly GC is implemented, it will increase the number of applications that WebAssembly can be used for, but it will also make it more likely that vulnerabilities related to memory management will occur in both WebAssembly engines and applications written in WebAssembly."

Comments (11 posted)

Security quote of the week

Some people enter the technology industry to build newer, more exciting kinds of technology as quickly as possible. My keynote will savage these people and will burn important professional bridges, likely forcing me to join a monastery or another penance-focused organization. In my keynote, I will explain why the proliferation of ubiquitous technology is good in the same sense that ubiquitous Venus weather would be good, i.e., not good at all. Using case studies involving machine learning and other hastily-executed figments of Silicon Valley's imagination, I will explain why computer security (and larger notions of ethical computing) are difficult to achieve if developers insist on literally not questioning anything that they do since even brief introspection would reduce the frequency of git commits. At some point, my microphone will be cut off, possibly by hotel management, but possibly by myself, because microphones are technology and we need to reclaim the stark purity that emerges from amplifying our voices using rams' horns and sheets of papyrus rolled into cone shapes. I will explain why papyrus cones are not vulnerable to buffer overflow attacks, and then I will conclude by observing that my new start-up papyr.us is looking for talented full-stack developers who are comfortable executing computational tasks on an abacus or several nearby sticks.
James Mickens in the abstract for a keynote at the 27th USENIX Security Symposium (Bruce Schneier recommends watching the video of the talk that is available with the abstract.)

Comments (none posted)

Kernel development

Kernel release status

The 4.19 merge window is still open. The process of merging changes into the mainline continues, with 10,650 changesets added as of this writing.

Stable updates: it was a busy week for stable kernels; the 4.18, 4.17, 4.14, 4.9, and 4.4 series each received four updates over the course of the week, and 3.18.119 was released as well (see the list of kernel releases below).

One cause for all of these updates was the need to fix residual problems with the L1TF mitigations. There were complaints about a lack of testing, but the real problem, according to Linus Torvalds, is that "because this was all done under embargo, we didn't get the kind of test robot coverage we usually get".

Comments (none posted)

Quotes of the week

Imagining the worst scenario with the most stupid user doesn't prove anything. We have sensible users who want to achieve security. Our job is to design processes that help them with that goal.
James Bottomley

So this merge window has been horrible.
Linus Torvalds

Comments (none posted)

Distributions

Debian: 25 years and counting

The Debian project is celebrating the 25th anniversary of its founding by Ian Murdock on August 16, 1993. The "Bits from Debian" blog had this to say: "Today, the Debian project is a large and thriving organization with countless self-organized teams comprised of volunteers. While it often looks chaotic from the outside, the project is sustained by its two main organizational documents: the Debian Social Contract, which provides a vision of improving society, and the Debian Free Software Guidelines, which provide an indication of what software is considered usable. They are supplemented by the project's Constitution which lays down the project structure, and the Code of Conduct, which sets the tone for interactions within the project. Every day over the last 25 years, people have sent bug reports and patches, uploaded packages, updated translations, created artwork, organized events about Debian, updated the website, taught others how to use Debian, and created hundreds of derivatives." Happy birthday to the project from all of us here at LWN.

Comments (6 posted)

Flatpak 1.0 released

The 1.0 release of the Flatpak application distribution system is out. There are a number of performance improvements, the ability to mark applications as being at end-of-life, up-front confirmation of requested permissions, and more. "Apps can now request access to the host SSH agent to securely access remote servers or Git repositories."

Comments (16 posted)

Distribution quotes of the week

This issue was fixed fairly quickly, because you really don't want to be in a situation where drunk people would have financial motivation to pour liquids on high voltage equipment in crowded rooms. This is not a problem one has to face in most data centers.
Jussi Pakkanen

Flexibility sits on a scale between fragile and robust. The trick is working out how many footguns are appropriate to leave around our distribution.
Stuart Prescott

I understand and accept that the sponsorship of this kids football team is unusual. I will not be voting for every such request. [...]

I consider this one decision an opportunity to see if we get any traction or interest from putting our logo prominently in an unusual place. And I think that's a worthwhile idea - after all we're not going to attract new blood with totally new ideas to the Project if we only advertise ourselves in the usual places.

Richard Brown: openSUSE decides to recruit from younger ranks

Comments (none posted)

Development

Vetter: Why no 2D Userspace API in DRM?

On his blog, Daniel Vetter answers an often-asked question about why the direct rendering manager (DRM) does not have a 2D API (and won't in the future): "3D has it easy: There’s OpenGL and Vulkan and DirectX that require a certain feature set. And huge market forces that make sure if you use these features like a game would, rendering is fast. Aside: This means the 2D engine in a browser actually needs to work like a 3D action game, or the GPU will crawl. The [impedance] mismatch compared to traditional 2D rendering designs is huge. On the 2D side there’s no such thing: Every blitter engine is its own bespoke thing, with its own features, limitations and performance characteristics. There’s also no standard benchmarks that would drive common performance characteristics - today blitters are [needed] mostly in small systems, with very specific use cases. Anything big enough to run more generic workloads will have a 3D rendering block anyway. These systems still have blitters, but mostly just to help move data in and out of VRAM for the 3D engine to consume."

Comments (24 posted)

Development quotes of the week

Believe it or don't, no license, no matter how clever, will allow you to be financially successful if you blow the business execution.
Josh Berkus

It didn't take long to find many sections of code from similar blog posts. Almost all of the blog posts either wrote a disclaimer or should have written one. They all solved one small piece of a problem, but took many liberties in their solution to make it simpler to read. It's understandable. Most readers appreciate brevity when learning a concept.

The code from these blog posts had spread through the codebase like a disease, scattering issues here and there without any rhyme or reason. And there wasn't any obvious cure other than to read everything manually and fix issues as I went along. Without unit tests or automated deployments, this took almost a year. I'm almost certain the cost of fixing the code exceeded the margin on revenue due to writing it in the first place.

Stephen Mann (Thanks to Paul Wise.)

Comments (1 posted)

Page editor: Jake Edge

Announcements

Newsletters

Distributions and system administration

Development

Meeting minutes

Calls for Presentations

CFP Deadlines: August 23, 2018 to October 22, 2018

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline       Event dates        Event                        Location
August 27      November 5-7       MesosCon 2018                New York, NY, USA
August 31      October 10-12      PyCon ZA 2018                Johannesburg, South Africa
September 9    November 13-15     Linux Plumbers Conference    Vancouver, BC, Canada
September 9    October 25         Real-Time Summit             Edinburgh, UK
September 10   November 9-13      PyCon Canada 2018            Toronto, Canada
September 10   October 25         Tracing Summit 2018          Edinburgh, UK
September 15   November 8         Open Source Camp             Nürnberg, Germany
September 17   November 3-4       OpenFest 2018                Sofia, Bulgaria
September 20   September 21-23    Orconf                       Gdansk, Poland
September 30   December 7         PGDay Down Under 2018        Melbourne, Australia
October 1      October 5-6        Open Source Days             Copenhagen, Denmark
October 5      November 12        Ceph Day Berlin              Berlin, Germany
October 7      October 20-21      OSS Víkend                   Košice, Slovakia

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Two microconferences added to the Linux Plumbers Conference program

The Linux Plumbers Conference (LPC) has announced two microconferences that have been added. The BPF microconference: "This year's BPF Microconference event focuses on the core BPF infrastructure as well as its subsystems, therefore topics proposed for this year's event include improving verifier scalability, next steps on BPF type format, dynamic tracing without on the fly compilation, string and loop support, reuse of host JITs for offloads, LRU heuristics and timers, syscall interception, microkernels, and many more."

And the Containers microconference: "There will also no doubt be some discussions around performance to make up for the overhead caused by the recent Spectre and Meltdown set of mitigations that in some cases have had a significant impact on container runtimes. This year's edition will be combined with what was formerly the Checkpoint-Restart microconference. Expect continued discussion about integration of CRIU with the container runtimes, addressing performance issues of checkpoint and restart and possible optimizations, as well as (in)stability of rarely used kernel ABIs. Another hot new topic would be time namespacing and its usage for container snapshotting and migration."

LPC will be held in Vancouver, British Columbia, Canada from Tuesday, November 13 through Thursday, November 15.

Comments (none posted)

Events: August 23, 2018 to October 22, 2018

The following event listing is taken from the LWN.net Calendar.

Date(s)                  Event                                          Location
August 23-26             LVEE 2018                                      Minsk, Belarus
August 24-28             PyCon AU 2018                                  Sydney, Australia
August 25-26             Linux Developers Conference Brazil             Campinas, Brazil
August 25-26             Nextcloud Conference                           Berlin, Germany
August 25-26             Free and Open Source Software Conference       Sankt Augustin, Germany
August 25                FOSScon                                        Philadelphia, PA, USA
August 25-31             Linux Bier Wanderung                           Jedovnice, Moravia, Czech Republic
August 27-September 2    FOSS4G Dar es Salaam 2018                      Dar es Salaam, Tanzania
August 27-28             Linux Security Summit                          Vancouver, Canada
August 28-September 1    EuroSciPy 2018                                 Trento, Italy
August 29-31             Open Source Summit                             Vancouver, Canada
September 3-4            Research Software Engineers' Conference 2018   Birmingham, UK
September 5-8            Radare conference 2018                         Barcelona, Spain
September 5-6            DPDK Userspace                                 Dublin, Ireland
September 6-9            Libre Application Summit                       Denver, CO, USA
September 7-8            Swiss Perl Workshop                            Bern, Switzerland
September 11-14          Alpine Linux Persistence and Storage Summit    Lizumerhuette, Austria
September 12-13          Devopsdays Berlin 2018                         Berlin, Germany
September 17-21          Linaro Connect                                 Vancouver, Canada
September 17-21          GNU Radio Conference 2018                      Henderson, NV, USA
September 20-23          EuroBSDcon 2018                                Bucharest, Romania
September 21-23          Orconf                                         Gdansk, Poland
September 21-23          Video Dev Days 2018                            Paris, France
September 24-27          ApacheCon North America                        Montreal, Canada
September 24-25          Embedded Recipes 2018                          Paris, France
September 26-28          LibreOffice Conference                         Tirana, Albania
September 26             Open Source Backup Conference 2018             Cologne, Germany
September 26-28          Kernel Recipes 2018                            Paris, France
September 26-28          X.org Developer Conference                     A Coruña, Spain
September 28-30          All Systems Go! 2018                           Berlin, Germany
October 5-6              Open Source Days                               Copenhagen, Denmark
October 6-7              Linux Days 2018                                Prague, Czech Republic
October 9                PostgresConf South Africa 2018                 Johannesburg, South Africa
October 10-12            PyCon ZA 2018                                  Johannesburg, South Africa
October 12-13            Ohio LinuxFest 2018                            Columbus, OH, USA
October 15-18            Tcl/Tk Conference                              Houston, TX, USA
October 18-19            Osmocom Conference 2018                        Berlin, Germany
October 20-21            OSS Víkend                                     Košice, Slovakia
October 21-23            All Things Open                                Raleigh, NC, USA

If your event does not appear here, please tell us about it.

Security updates

Alert summary August 16, 2018 to August 22, 2018

Dist. ID Release Package Date
CentOS CESA-2018:2439 C7 mariadb 2018-08-21
CentOS CESA-2018:2526 C6 mutt 2018-08-21
CentOS CESA-2018:2526 C7 mutt 2018-08-21
CentOS CESA-2018:2462 C7 qemu-kvm 2018-08-21
Debian DLA-1461-1 LTS clamav 2018-08-20
Debian DLA-1470-1 LTS confuse 2018-08-19
Debian DLA-1468-1 LTS fuse 2018-08-15
Debian DSA-4273-1 stable intel-microcode 2018-08-16
Debian DSA-4278-1 stable jetty9 2018-08-19
Debian DLA-1471-1 LTS kamailio 2018-08-19
Debian DSA-4279-1 stable kernel 2018-08-20
Debian DSA-4275-1 stable keystone 2018-08-16
Debian DLA-1472-1 LTS libcgroup 2018-08-20
Debian DLA-1469-1 LTS libxcursor 2018-08-18
Debian DSA-4277-1 stable mutt 2018-08-17
Debian DLA-1474-1 LTS openssh 2018-08-21
Debian DSA-4280-1 stable openssh 2018-08-22
Debian DLA-1473-1 LTS otrs2 2018-08-21
Debian DSA-4276-1 stable php-horde-image 2018-08-17
Debian DSA-4274-1 stable xen 2018-08-16
Fedora FEDORA-2018-775d96b54b F27 blktrace 2018-08-19
Fedora FEDORA-2018-c75a37ae9b F28 blktrace 2018-08-19
Fedora FEDORA-2018-28f30efaf6 F28 cri-o 2018-08-15
Fedora FEDORA-2018-160b3d2f6c F27 docker-latest 2018-08-19
Fedora FEDORA-2018-6740c38cf4 F28 gdm 2018-08-16
Fedora FEDORA-2018-202c536f70 F28 gifsicle 2018-08-22
Fedora FEDORA-2018-1c80fea1cd F27 kernel-headers 2018-08-16
Fedora FEDORA-2018-f8cba144ae F28 kernel-headers 2018-08-16
Fedora FEDORA-2018-ca483ae3e0 F27 libgit2 2018-08-19
Fedora FEDORA-2018-bc22d6c7bc F28 libldb 2018-08-20
Fedora FEDORA-2018-be770f97a6 F28 lighttpd 2018-08-22
Fedora FEDORA-2018-d8f5aea89d F27 postgresql 2018-08-16
Fedora FEDORA-2018-9829c6ddcf F27 quazip 2018-08-22
Fedora FEDORA-2018-2818fc5308 F27 rsyslog 2018-08-16
Fedora FEDORA-2018-8e4d871867 F27 samba 2018-08-22
Fedora FEDORA-2018-bc22d6c7bc F28 samba 2018-08-20
Fedora FEDORA-2018-f4f75985b8 F28 soundtouch 2018-08-20
Fedora FEDORA-2018-976ce10858 F28 units 2018-08-16
Fedora FEDORA-2018-41dfadd21a F28 wpa_supplicant 2018-08-16
Fedora FEDORA-2018-aa46eb30be F27 yubico-piv-tool 2018-08-19
Fedora FEDORA-2018-15da5380b5 F28 yubico-piv-tool 2018-08-19
Mageia MGASA-2018-0343 6 chromium-browser-stable 2018-08-18
Mageia MGASA-2018-0349 6 flash-player-plugin 2018-08-19
Mageia MGASA-2018-0338 6 iceape 2018-08-15
Mageia MGASA-2018-0345 6 kernel 2018-08-19
Mageia MGASA-2018-0341 6 kernel-linus 2018-08-15
Mageia MGASA-2018-0347 6 kernel-linus 2018-08-19
Mageia MGASA-2018-0340 6 kernel-tmb 2018-08-15
Mageia MGASA-2018-0346 6 kernel-tmb 2018-08-19
Mageia MGASA-2018-0339 6 libtomcrypt 2018-08-15
Mageia MGASA-2018-0344 6 microcode 2018-08-19
Mageia MGASA-2018-0342 5, 6 openslp 2018-08-18
Mageia MGASA-2018-0348 6 wpa_supplicant 2018-08-19
openSUSE openSUSE-SU-2018:2439-1 42.3 GraphicsMagick 2018-08-19
openSUSE openSUSE-SU-2018:2399-1 15.0 42.3 Security 2018-08-17
openSUSE openSUSE-SU-2018:2433-1 15.0 apache2 2018-08-19
openSUSE openSUSE-SU-2018:2397-1 42.3 apache2 2018-08-17
openSUSE openSUSE-SU-2018:2343-1 15.0 42.3 aubio 2018-08-16
openSUSE openSUSE-SU-2018:2406-1 42.3 clamav 2018-08-17
openSUSE openSUSE-SU-2018:2431-1 15.0 curl 2018-08-19
openSUSE openSUSE-SU-2018:2407-1 15.0 kernel 2018-08-17
openSUSE openSUSE-SU-2018:2404-1 42.3 kernel 2018-08-17
openSUSE openSUSE-SU-2018:2376-1 42.3 libheimdal 2018-08-16
openSUSE openSUSE-SU-2018:2373-1 15.0 nemo-extensions 2018-08-16
openSUSE openSUSE-SU-2018:2438-1 15.0 perl-Archive-Zip 2018-08-19
openSUSE openSUSE-SU-2018:2405-1 15.0 42.3 php7 2018-08-17
openSUSE openSUSE-SU-2018:2375-1 15.0 python-Django1 2018-08-16
openSUSE openSUSE-SU-2018:2402-1 15.0 qemu 2018-08-17
openSUSE openSUSE-SU-2018:2400-1 15.0 samba 2018-08-17
openSUSE openSUSE-SU-2018:2396-1 42.3 samba 2018-08-17
openSUSE openSUSE-SU-2018:2436-1 15.0 xen 2018-08-19
openSUSE openSUSE-SU-2018:2434-1 42.3 xen 2018-08-19
Oracle ELSA-2018-4200 OL6 kernel 2018-08-17
Oracle ELSA-2018-4200 OL7 kernel 2018-08-17
Oracle ELSA-2018-2439 OL7 mariadb 2018-08-16
Oracle ELSA-2018-2439 OL7 mariadb 2018-08-17
Oracle ELSA-2018-2526 OL6 mutt 2018-08-20
Oracle ELSA-2018-2526 OL7 mutt 2018-08-20
Oracle ELSA-2018-2462 OL7 qemu-kvm 2018-08-16
Red Hat RHSA-2018:2482-01 EL7 docker 2018-08-16
Red Hat RHSA-2018:2435-01 EL6 flash-plugin 2018-08-15
Red Hat RHSA-2018:2439-01 EL7 mariadb 2018-08-16
Red Hat RHSA-2018:2526-01 EL6 EL7 mutt 2018-08-20
Red Hat RHSA-2018:2533-01 OSP13.0 openstack-keystone 2018-08-21
Red Hat RHSA-2018:2462-01 EL7 qemu-kvm 2018-08-16
Red Hat RHSA-2018:2511-01 RHSC rh-postgresql95-postgresql 2018-08-20
Scientific Linux SLSA-2018:2439-1 SL7 mariadb 2018-08-16
Scientific Linux SLSA-2018:2526-1 SL6 SL7 mutt 2018-08-21
Scientific Linux SLSA-2018:2462-1 SL7 qemu-kvm 2018-08-16
Slackware SSA:2018-233-01 libX11 2018-08-21
Slackware SSA:2018-229-01 ntp 2018-08-17
Slackware SSA:2018-229-02 samba 2018-08-17
SUSE SUSE-SU-2018:2390-1 SLE11 GraphicsMagick 2018-08-16
SUSE SUSE-SU-2018:2465-1 SLE11 ImageMagick 2018-08-21
SUSE SUSE-SU-2018:2475-1 SLE15 ImageMagick 2018-08-22
SUSE SUSE-SU-2018:2336-1 SLE12 apache2 2018-08-16
SUSE SUSE-SU-2018:2424-1 SLE15 apache2 2018-08-18
SUSE SUSE-SU-2018:2423-1 SLE15 curl 2018-08-18
SUSE SUSE-SU-2018:2470-1 SLE11 gtk2 2018-08-21
SUSE SUSE-SU-2018:2344-1 OS7 SLE12 kernel 2018-08-16
SUSE SUSE-SU-2018:2332-1 SLE11 kernel 2018-08-15
SUSE SUSE-SU-2018:2366-1 SLE11 kernel 2018-08-16
SUSE SUSE-SU-2018:2384-1 SLE12 kernel 2018-08-16
SUSE SUSE-SU-2018:2362-1 SLE12 kernel 2018-08-16
SUSE SUSE-SU-2018:2374-1 SLE12 kernel 2018-08-16
SUSE SUSE-SU-2018:2380-1 SLE15 kernel 2018-08-16
SUSE SUSE-SU-2018:2381-1 SLE15 kernel 2018-08-16
SUSE SUSE-SU-2018:2450-1 SLE15 kernel 2018-08-20
SUSE SUSE-SU-2018:2426-1 SLE12 SLE15 kernel-livepatch-tools 2018-08-18
SUSE SUSE-SU-2018:2394-1 SLE12 kgraft 2018-08-16
SUSE SUSE-SU-2018:2468-1 SLE12 libcgroup 2018-08-21
SUSE SUSE-SU-2018:2452-1 libgcrypt 2018-08-20
SUSE SUSE-SU-2018:2469-1 SLE15 libgit2 2018-08-21
SUSE SUSE-SU-2018:2403-1 SLE11 mutt 2018-08-17
SUSE SUSE-SU-2018:2411-1 SLE11 mysql 2018-08-17
SUSE SUSE-SU-2018:2449-1 openssl 2018-08-20
SUSE SUSE-SU-2018:2447-1 perl 2018-08-20
SUSE SUSE-SU-2018:2388-1 SLE11 perl-Archive-Zip 2018-08-16
SUSE SUSE-SU-2018:2385-1 SLE12 perl-Archive-Zip 2018-08-16
SUSE SUSE-SU-2018:2386-1 SLE15 perl-Archive-Zip 2018-08-16
SUSE SUSE-SU-2018:2333-1 SLE12 php7 2018-08-16
SUSE SUSE-SU-2018:2337-1 SLE15 php7 2018-08-16
SUSE SUSE-SU-2018:2451-1 procps 2018-08-20
SUSE SUSE-SU-2018:2408-1 SLE11 python 2018-08-17
SUSE SUSE-SU-2018:2340-1 SLE15 qemu 2018-08-16
SUSE SUSE-SU-2018:2453-1 rsyslog 2018-08-20
SUSE SUSE-SU-2018:2339-1 OS7 SLE12 samba 2018-08-16
SUSE SUSE-SU-2018:2448-1 shadow 2018-08-20
SUSE SUSE-SU-2018:2331-1 OS7 SLE12 ucode-intel 2018-08-15
SUSE SUSE-SU-2018:2335-1 SLE11 ucode-intel 2018-08-16
SUSE SUSE-SU-2018:2338-1 SLE15 ucode-intel 2018-08-16
SUSE SUSE-SU-2018:2412-1 SLE11 wireshark 2018-08-17
SUSE SUSE-SU-2018:2410-1 OS7 SLE12 xen 2018-08-17
SUSE SUSE-SU-2018:2401-1 SLE12 xen 2018-08-17
SUSE SUSE-SU-2018:2409-1 SLE15 xen 2018-08-17
Ubuntu USN-3746-1 18.04 apt 2018-08-20
Ubuntu USN-3748-1 18.04 base-files 2018-08-21
Ubuntu USN-3733-2 12.04 gnupg 2018-08-15
Ubuntu USN-3741-3 14.04 kernel 2018-08-17
Ubuntu USN-3742-3 12.04 linux-lts-trusty 2018-08-20
Ubuntu USN-3747-1 18.04 openjdk-lts 2018-08-20
Ubuntu USN-3744-1 14.04 16.04 18.04 postgresql-10, postgresql-9.3, postgresql-9.5 2018-08-16
Ubuntu USN-3658-3 12.04 procps 2018-08-16
Ubuntu USN-3743-1 16.04 18.04 webkit2gtk 2018-08-16
Ubuntu USN-3745-1 14.04 16.04 18.04 wpa 2018-08-20
Full Story (comments: none)

Kernel patches of interest

Kernel releases

Greg KH Linux 4.18.4 Aug 22
Greg KH Linux 4.18.3 Aug 18
Greg KH Linux 4.18.2 Aug 18
Greg KH Linux 4.18.1 Aug 16
Greg KH Linux 4.17.18 Aug 22
Greg KH Linux 4.17.17 Aug 18
Greg KH Linux 4.17.16 Aug 18
Greg KH Linux 4.17.15 Aug 16
Greg KH Linux 4.14.66 Aug 22
Greg KH Linux 4.14.65 Aug 18
Greg KH Linux 4.14.64 Aug 18
Greg KH Linux 4.14.63 Aug 16
Greg KH Linux 4.9.123 Aug 22
Greg KH Linux 4.9.122 Aug 18
Greg KH Linux 4.9.121 Aug 18
Greg KH Linux 4.9.120 Aug 16
Greg KH Linux 4.4.151 Aug 22
Greg KH Linux 4.4.150 Aug 18
Greg KH Linux 4.4.149 Aug 18
Greg KH Linux 4.4.148 Aug 16
Daniel Wagner 4.4.148-rt165 Aug 16
Greg KH Linux 3.18.119 Aug 18

Architecture-specific

Torsten Duwe arm64 live patching Aug 17

Core kernel

Development tools

Arnaldo Carvalho de Melo ANNOUNCE: pahole v1.12 (BTF edition) Aug 16

Device drivers

Device-driver infrastructure

David Howells tpm: Provide a TPM access library Aug 21

Documentation

Filesystems and block layer

Gabriel Krisman Bertazi Ext4 Encoding and Case-insensitive support Aug 15
Daniel Rosenberg f2fs: checkpoint disabling Aug 20
Dave Chinner xfs: feature flag rework Aug 20

Networking

Petar Penkov Introduce eBPF flow dissector Aug 16

Security-related

Page editor: Rebecca Sobol


Copyright © 2018, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds