
LWN.net Weekly Edition for July 30, 2020

Welcome to the LWN.net Weekly Edition for July 30, 2020

This edition contains the following feature content:

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

The archaeology of GNOME accessibility

By Jonathan Corbet
July 23, 2020

GUADEC
There are many people in the world who cannot make full use of their computers without some sort of accessibility support. Developers, though, have a tendency not to think about accessibility issues themselves; they don't (usually) need those features and cannot normally even see them. In a talk at the 2020 GUADEC virtual conference, Emmanuele Bassi discussed the need for accessibility features, their history in GNOME, and his effort to rethink how GNOME supports assistive technology.

He began by defining "accessibility" as usability by people with disabilities; this usability is often provided through the use of assistive technology of some sort. When one thinks about who benefits from accessibility, it is natural to picture people like Stephen Hawking, who clearly needed a lot of assistive technology. But that is not what the most common consumers of assistive technology look like; instead, they look like his parents, who are active people in their late 60s. They are computer-literate, but they are getting older and need more affordances than they once did.

The point, he said, is that we all will benefit from accessibility at some point in our lives. He is 39 years old and has already had to make some compromises in response to aging. None of us are getting any younger, he said, so we are all the next set of consumers for accessible software. We owe it to ourselves — and to everybody else — to make accessibility work.

The GNOME project first got accessibility support during the development period between the GTK 1.3 and 2.0 releases — 18 years or so ago. This work was done by the Sun accessibility team as part of the company's effort to ship GNOME 2 on its workstations. This implementation was built around three separate components:

  • The ATK toolkit, which implements a set of abstract interfaces to accessibility functionality.
  • AT-SPI, which is the interface by which applications communicate with assistive technologies.
  • GAIL, the implementation of the ATK interfaces for GTK.

Bassi made it clear that he was not impressed with how this design has worked out. The charitable way of explaining how it came to be, he said, was that Sun already had this sort of API for Java, so it made sense to do the GNOME work along similar lines. In reality, though, nobody knew why the Java implementation worked the way it did, but it was easier to copy it than to change it.

To complicate things further, GNOME in those days was based on the CORBA object-request broker. For the young members of the audience fortunate enough not to know about CORBA, he suggested imagining an enterprise version of any modern interprocess communication mechanism — then make it much more corporate. It had design features like a centralized authority to assign names for all users. Once upon a time, he said, all of GNOME worked that way; that is the source of the words "object model" in the GNOME name.

Much of the accessibility implementation is maintained outside of the GTK source tree, which brings problems of its own. The end result is that GNOME's accessibility support never worked all that well. But it lets managers check the "accessibility" box, which is all many of them need. Unfortunately, accessibility is not a box that can be checked and forgotten about; it is a process that must be constantly kept up with. But the GNOME project ended up mostly forgetting about it.

In the intervening years the world has changed. CORBA has been replaced by D-Bus, for example. Patience for out-of-tree modules is mostly gone. The move to Wayland is creating problems for existing assistive technology, as is the sandboxing that is increasingly being used for GNOME applications.

AT-SPI has been ported to D-Bus, he said, but the architecture of the accessibility subsystem as a whole is the same. It remains in the X11 world, where every application expects to have access to the entire system. This is a design that dates back to the days when applications were installed by the system administrator and could (hopefully) be trusted; they certainly were not acquired from random places on the Internet.

The world has changed, he said, so accessibility support in GNOME needs to change with it. The system is "stuck" and needs a redesign. But this is hard because, unlike the situation with other desktop features, it is not possible to ask users of assistive technology to contribute. To a great extent, they simply cannot perceive what is not available to them, so it's hard to even ask them to report regressions.

The first thing that needs to happen is to consolidate the various pieces, many of which have been untouched for years. Some new functionality has been added, mostly to match new features provided by browsers, but as a whole GNOME accessibility support just doesn't really work. The abstraction layer doesn't really abstract anything, so changes typically have to be made in many places. The toolkit needs to be simplified; as things stand now, application developers expect GTK to take care of everything, but that is not the case. There is also a need for funding; this work is not trivial and it's not reasonable to expect it to be done by volunteers.

This year, he said, the GNOME Foundation (where he is employed) has directed him to work on this problem for the upcoming GTK 4 release; he has been doing that for the last six months. There has been a lot of "archaeology" involved to understand it all. He has been working to pull a lot of abstracted stuff back into GTK; developers will not have to go to the ATK reference manual anymore, he said, because it won't exist. He has also been working on portability; accessibility support in GTK doesn't currently work on obscure systems like Windows.

There is still a need for some abstraction, he said, but it should be based on concepts rather than code. GNOME is increasingly using cascading style sheets for its drawing API, so it is natural to look at what the web is doing in the accessibility area. There is a W3C standard called WAI-ARIA that defines many of the abstract concepts needed for accessibility:

  • An element is an accessible part of the user interface.
  • Each element has a role describing what it does: a checkbox, a slider, etc.
  • Attributes describe things the element has: properties, visibility, labels, state, relationships with other elements, etc.

Bassi is working on an accessibility implementation around these concepts. He described it briefly, but this is the point where he was told that he was running out of time. The resulting increase in the pace of the presentation made the information less ... accessible ...

Application developers, he said, will need to follow the WAI-ARIA rules to ensure the accessibility of their systems. That means using the provided accessible elements rather than rolling their own, for example. The semantics of widgets should not be changed. Any widgets that are accessible with a mouse pointer should also be accessible from the keyboard. Widgets need an accessible label. And so on.

He is doing some work with gtk-builder-tool to ensure that accessibility information is added to interfaces where it needs to be. There is also a new assert mechanism that can verify that accessibility information is changed when elements of the interface are changed, hopefully preventing accessibility regressions.

The current state is that the new accessibility API is nearly finalized; only a few small details remain. The GTK widgets are being ported to this new API. He is writing a new test suite for this API, and documentation is in progress. He is also working on implementing the AT-SPI backend to talk to assistive systems — necessary since, without it, the rest of this work won't actually do anything.

Bassi concluded with some suggestions for anybody who would like to help with this effort. He would like to see people writing more tests, and documentation as well. Ports to Windows and macOS need to be done. Accessibility for sandboxed applications needs to be improved; they need access to information and assistive technology but should not have access to the entire desktop. And, naturally, there is always a need for funding to push this effort forward.

Comments (12 posted)

Mycroft: an open-source voice assistant

By John Coggeshall
July 24, 2020

Mycroft is a free and open-source software project aimed at providing voice-assistant technology, licensed under the Apache 2.0 license. It is an interesting alternative to closed-source commercial offerings such as Amazon Alexa, Google Home, or Apple Siri. Use of voice assistants has become common among consumers, but the privacy concerns surrounding them are far-reaching. There have been multiple instances of law enforcement's interest in the data these devices produce for use against their owners. Mycroft claims to offer a privacy-respecting, open-source alternative, giving users a choice on how much of their personal data is shared and with whom.

The Mycroft project is backed by the Mycroft AI company. The company was originally funded by a successful one-million-dollar crowdfunding campaign involving over 1,500 supporters. In recent years, it has developed two consumer-focused "smart speaker" devices: the Mark 1 and Mark 2. Both devices were funded through successful Kickstarter campaigns, with the most recent Mark 2 raising $394,572 against a $50,000 goal.

In the press, the company has indicated its intention to focus on the enterprise market for its commercial offerings, while keeping the project free for individual users and developers. On the subject of developers, contributors are expected to sign a contributor license agreement (CLA) to participate in the project. The actual CLA was unavailable at the time of publication, but the project says the agreement grants it a license to the contributed code, while the developer retains ownership of the contribution.

Voice-assistant technology is complicated, with many different components that must come together to form a usable product. There is far too much to cover in a single article, so this is the first in a series on the project; it provides a high-level overview of the project's components.

Mycroft is broken down into multiple modules: core, wake word, speech to text, intent parsing, and text to speech. Except for core, the functionality provided by each module supports a variety of implementations, with the Mycroft project providing its own implementation(s) for each. Estimating the number of contributors to the project as a whole is a challenge, as each sub-project attracts its own set of contributors. Looking at core, GitHub reports that the project has had 128 releases from its 139 contributors — the latest release occurring at the end of May 2020.

The modular architecture of the project allows for maximum flexibility to the end user, who can choose to change the text-to-speech provider (for example) if the default doesn't meet their needs. In a way similar to commercial products like Alexa, Mycroft core exposes an API for the development of "skills". These skills are plugins that can integrate with third-party services to do everything from playing music to turning on light bulbs. Anyone can write a skill and release it for others to use, with a reasonable collection of skills provided in the Mycroft Marketplace.

To best understand how Mycroft works, it may be easiest to simply walk through the various actions of a single command: "Hey Mycroft, tell me about the weather"

At the heart is the Mycroft core, which provides the fundamental mechanisms that power the entire architecture. This includes audio output services, a message bus, and the skills-integration API. It works with three modules to fulfill the request: wake word, speech to text, and the intent parser.

Wake words

The first step in the processing of any command by a voice assistant is detection of the "wake word": the word or phrase that triggers Mycroft to start recording audio to process as a command. This detection happens on the local device. Mycroft offers two options for wake-word detection: PocketSphinx, and the project's own Precise implementation (the default).

The differences between PocketSphinx and Precise have to do with how the wake word itself is detected. PocketSphinx is based on English speech and takes a speech-to-text (STT) approach to identifying the wake word. Precise, on the other hand, is a neural network implementation that is trained to recognize the sound of the wake word rather than a specific word (making it suitable for more than English). The Mycroft project provides pre-trained Precise models for "Hey Mycroft", "Christopher", "Hey Ezra", and "Hey Jarvis" as wake words. Creating your own wake word is certainly possible, but requires extensive audio samples in order to train the model.

Speech to text

Once the wake word has activated Mycroft, the microphone records the audio that follows until it detects that the user has stopped talking. This audio is then processed by the speech-to-text module, transcribing it into text to be processed. This aspect of voice assistants is by far the most contentious from a privacy perspective, as these recordings can (and often do) include private data. Since the audio is typically sent to a third-party server to be processed, it (and the corresponding transcription) is ripe for abuse, compromise, or other uses the user never intended.

Unfortunately, Mycroft does not currently provide an ideal option in this regard. In fact, Mycroft itself does not provide a speech-to-text solution at all. Instead, by default, Mycroft uses Google's speech-to-text services proxied through Mycroft servers. Per the documentation on this module, Mycroft acts as a proxy for Google in part to provide a layer of privacy to end users:

In order to provide an additional layer of privacy for our users, we proxy all STT requests through Mycroft's servers. This prevents Google's service from profiling Mycroft users or connecting voice recordings to their identities. Only the voice recording is sent to Google, no other identifying information is included in the request. Therefore Google's STT service does not know if an individual person is making thousands of requests, or if thousands of people are making a small number of requests each.

While anonymizing requests to Google by withholding the user's IP address is a step in the right direction, it is far from a perfect solution — the audio of a user's voice alone could be enough to make an identification. According to the project, there simply is not a reliable open STT alternative on par with the closed-source offerings — a situation that will hopefully not last for long. As it turns out, privacy is not the only reason Mycroft proxies these requests — Mycroft is working with Mozilla's DeepSpeech project to build an open-source alternative to Google's STT service. Users who opt in can contribute their audio to help train the DeepSpeech model. To the project's credit, this is clearly spelled out for the user during device registration:

The data we collect from those that opt in to our open dataset is not sold to anyone. [...] Mycroft's voices and services can only improve with your help. By joining our open dataset, you agree to allow Mycroft AI to collect data related to your interactions with devices running Mycroft's voice assistant software. We pledge to use this contribution in a responsible way. [...] Your data will also be made available to other researchers in the voice AI space with values that align with our own, like Mozilla Common Voice. As part of their agreement with Mycroft AI to access this data, they will be required to honor your request to remove any trace of your contributions if you decide to opt out.

If a user agrees, their audio data is added to Mycroft's open dataset and is sent to the DeepSpeech project, in addition to sending it to the selected STT service. According to Mycroft AI founder Joshua Montgomery in a Reddit AMA: "[...] we only use data from people who explicitly opt-in, this represents about 15% of our customers." Since Mycroft is being used by both individual users (for free) and paying enterprise customers, it is unclear if that 15% opt-in figure cited by Montgomery refers only to actual customers or the entire user base.

It is noteworthy that, when Montgomery was explicitly asked in the same AMA: "[...] do you have a 'warrant canary' for any 'National Security letters/warrants' you may have received instructing you to turn over private user data to a 3-letter agency under the PATRIOT Act?", he declined to respond. Since Mycroft AI is a US-based company, this may be of importance to privacy-conscious users.

While the project doesn't feel that DeepSpeech is ready, using DeepSpeech is supported. DeepSpeech isn't the only alternative to Google's STT available, either. There is support for many options, starting with Kaldi, which is open source, along with a handful of proprietary providers: GoVivace, Houndify, IBM Cloud, Microsoft Azure, and Wit.ai.

Intent parser

Once audio has been transcribed into text, it must then be processed for intent. That is to say, Mycroft now has to process the command you gave it and figure out what you want it to do. This can be complicated in its own right, as even with a perfect transcription humans have many different ways of expressing an intent: "Hey Mycroft, tell me about the weather", "Hey Mycroft, what's the weather?", "Hey Mycroft, what is it like outside?"

To address intents, Mycroft has two separate projects. The default, Adapt, provides a more programmatic approach to intent parsing, while Padatious is a neural-network-based parser. The intention of the Mycroft project is to eventually replace Adapt with Padatious, which is reported to be under active development. It is noteworthy, however, that the project has had only six commits this year.

While both Adapt and Padatious accomplish the same goal, they do so in very different ways. Adapt (implemented in Python) is designed to run on embedded systems with limited resources, meaning it can run directly on the local device. It is a keyword-based intent parser, so its capabilities are somewhat limited — though it can be useful to implement Mycroft skills that have a small number of commands (in Mycroft terms, "Utterances"). Padatious, on the other hand, takes an AI-based approach to intent parsing where the entire sentence is examined to ascertain the intent. In this approach, a neural network is trained on whole phrases and the generated model is used to parse the intent. This involves more up-front work training the model for a skill, but, according to the documentation, the training of Padatious models "require a relatively small amount of data." Like Adapt, Padatious is implemented in Python.
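The keyword-matching idea behind Adapt can be illustrated with a toy sketch in C (this is not Adapt's actual API, which is in Python and considerably richer; all names here are invented): an intent matches when every one of its required keywords appears in the transcribed utterance.

```c
#include <string.h>

/* A toy keyword-based intent matcher: each intent names a set of
 * keywords, all of which must appear in the utterance for the
 * intent to match. */
struct intent {
    const char *name;
    const char *keywords[4];    /* NULL-terminated list */
};

/* Return the name of the first intent whose keywords all appear in
 * the utterance, or NULL if nothing matches. */
const char *match_intent(const struct intent *intents, size_t n,
                         const char *utterance)
{
    for (size_t i = 0; i < n; i++) {
        int matched = 1;
        for (const char *const *kw = intents[i].keywords; *kw; kw++) {
            if (!strstr(utterance, *kw))
                matched = 0;
        }
        if (matched)
            return intents[i].name;
    }
    return NULL;
}
```

A real parser must also cope with word order, synonyms, and extracting parameters ("utterance slots") from the text, which is where the neural-network approach of Padatious comes in.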

Once an intent is parsed, regardless of the technology used, it is then mapped (hopefully) to a skill that knows how to execute the intended action — such as fetching and responding with the latest weather for the region.

Text to speech

The text-to-speech module provided by Mycroft is the final key swappable implementation of the Mycroft project. This module, as its name implies, takes text and converts it into audio to be played. Generally this would be in response to a command generated by the intent parser but, unlike other commercial offerings (e.g. Amazon Alexa), it can also be used to "push" audio to the voice assistant without it being strictly in response to a wake-word command. This can be useful when integrating Mycroft with a project like Home Assistant, as a way to provide audio notifications.

Mycroft offers multiple options for TTS. Specifically, Mycroft has the Mimic and Mimic2 projects. The default Mimic is a TTS engine based on Flite that runs directly on the device to synthesize a voice from text by concatenating sounds together to form words. This produces a less-than-natural voice, but in testing it wasn't bad. Mimic2, on the other hand, is a cloud-based TTS engine based on Tacotron that uses a neural network to produce a much higher-quality synthesized voice. Mycroft does support using Google's TTS offering, though it is not enabled by default. Using Google's TTS service results in a voice synthesis very similar to that found in Google Home devices.

More to come

Mycroft's expansive portfolio of projects is impressive, but perhaps even more impressive was how easy it was to get started. The company has developed two consumer-focused devices (one on pre-order) that are built on the project, but also provides the ability to build your own Raspberry Pi or Linux-based equivalent. In our next installment, we will take a closer look into the project in action to see how well it stacks up to its proprietary competition.

Comments (15 posted)

Lockless algorithms for mere mortals

By Jonathan Corbet
July 28, 2020
Time, as some have said, is nature's way of keeping everything from happening at once. In today's highly concurrent computers, though, time turns out not to be enough to keep events in order; that task falls to an extensive set of locking primitives and, below those, the formalized view of memory known as the Linux kernel memory model. It takes a special kind of mind to really understand the memory model, though; kernel developers lacking that particular superpower are likely to make mistakes when working in areas where the memory model comes into play. Working at that level is increasingly necessary for performance purposes, though; a recent conversation points out ways in which the kernel could make that kind of work easier for ordinary kernel developers.

Concurrency comes into play when multiple threads of execution are accessing the same data at the same time. Even in a simple world, keeping everything coherent in a situation like this can be a challenging task. The kernel prevents the wrong things from happening at the same time with the use of spinlocks, mutexes, and other locking primitives that can control concurrency. Locks at this level can be thought of as being similar to traffic lights in cities: they prevent accidents as long as they are properly observed, but at the cost of stopping a lot of traffic. Time spent waiting for locks hurts; even the time bouncing lock data between memory caches can wreck scalability, so developers often look for ways to avoid locking.

Lockless list linking

Consider a highly simplified example: inserting an element into a singly-linked list. One possible solution would be to use a mutex to protect the entire list; any thread traversing the list must first acquire this mutex. If the thread inserting an element acquires this lock, it knows that no other thread will be traversing the list at the same time, so changes will be safe. But if this list is heavily used, this locking will quickly become expensive.
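As a sketch of the mutex-protected approach (using POSIX threads rather than kernel primitives, with invented names), insertion might look like this; any traversal code would have to take the same lock:

```c
#include <pthread.h>

struct node {
    int value;
    struct node *next;
};

static struct node *list_head;
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

/* Insert new_node after prev (or at the head of the list if prev is
 * NULL); the mutex excludes all other readers and writers for the
 * duration of the update. */
void list_insert_locked(struct node *prev, struct node *new_node)
{
    pthread_mutex_lock(&list_lock);
    if (prev) {
        new_node->next = prev->next;
        prev->next = new_node;
    } else {
        new_node->next = list_head;
        list_head = new_node;
    }
    pthread_mutex_unlock(&list_lock);
}
```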

So one might consider a lockless alternative. If the list initially looks like this:

[linked list]

One could start by linking the outgoing pointer from the new element to the existing element that will soon follow it in the list:

[linked list]

At this point, the list still looks the same to any other thread that is traversing it. To complete the operation, the code will redirect the pointer from the preceding element in the list to the new element:

[linked list]

Now everybody sees the new list; no locking was required, and the view of the list was always consistent and correct. Or so one would hope.

The problem is that modern hardware makes things harder in the name of performance. The order in which a series of operations is executed may not be the order in which those operations are visible to other threads in the system. So, for example, it might well be that other threads in the system see the assignment of the two pointers above in the opposite order, with the result that, from their point of view, there is a window of time during which the list looks like this:

[broken linked list]

The outgoing pointer in the new element will contain whatever was there before the assignment happened, leading to a list that heads off into the weeds. There are few certainties in the world, but one can be reasonably confident in saying that little good will come from this situation.

A more complex view of memory

Another way to think about it is that locking provides a sort of Newtonian-physics view of the world; in a given situation, one always knows what's going to happen. At lower levels, life starts to resemble quantum physics, where surprising things can happen and few people can convincingly claim to understand it all. One can normally function quite well in this world without understanding quantum physics, but there are situations where it is good to have an expert around.

The Linux kernel has a few experts of this type; they have put a great deal of work over the years into the creation of the kernel's memory model, which describes how the kernel views memory and how to safely perform operations where concurrency may come into play. The result is the infamous memory-barriers.txt documentation file and a whole raft of supporting materials. From this documentation, one can learn a couple of useful things for the list-insertion example given above:

  • Code traversing the list should read the next-item pointers with an "acquire" operation, which is a special sort of barrier. It guarantees that any operation that happens after the barrier actually appears afterward elsewhere in the system. In this case, it would ensure that, if a traversal reads a pointer to a given element, the assignment of that element's "next" pointer will already be visible.
  • Code that sets the pointer should do so with a "release" operation, which ensures that any other operations done before the release operation are visible before that operation. That will ensure that a new element's "next" pointer is seen correctly globally before the pointer to the element itself.
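Expressed in C11 atomics (the kernel's own smp_load_acquire() and smp_store_release() primitives play the same roles), the two recipes above might look like the following sketch; the names are illustrative:

```c
#include <stdatomic.h>
#include <stddef.h>

struct node {
    int value;
    struct node *_Atomic next;
};

/* Writer: step 1 needs no ordering because the new element is not yet
 * reachable; step 2 publishes it with a release store, guaranteeing
 * that step 1 is globally visible first. */
void list_insert_release(struct node *prev, struct node *new_node)
{
    atomic_store_explicit(&new_node->next,
                          atomic_load_explicit(&prev->next, memory_order_relaxed),
                          memory_order_relaxed);                    /* step 1 */
    atomic_store_explicit(&prev->next, new_node,
                          memory_order_release);                    /* step 2 */
}

/* Reader: traverse with acquire loads, so that if a pointer to an
 * element is seen, the element's own next pointer is seen too. */
int list_sum_acquire(struct node *_Atomic *headp)
{
    int sum = 0;
    for (struct node *n = atomic_load_explicit(headp, memory_order_acquire);
         n != NULL;
         n = atomic_load_explicit(&n->next, memory_order_acquire))
        sum += n->value;
    return sum;
}
```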

This example is complicated enough to explain, but it is as simple as these things get; most cases are rather more complex. To make things worse, optimizing compilers can create surprises of their own in pursuit of higher performance. The kernel's memory model strives to address this threat as well.

The problem with the memory model

Recently, Eric Biggers posted a patch fixing a perceived problem in the direct I/O code where a concurrent data-access situation lacked the appropriate barriers. There was some discussion about whether a bug actually existed or not; the problem according to Biggers is that this sort of concurrent access is deemed "undefined behavior", meaning that the compiler is granted license to pursue any evil agenda that might strike its fancy. The real dispute, though, was over the fix.

Dave Chinner, who is generally acknowledged as being a moderately competent kernel developer, complained that the resulting code was not something that could be readily understood:

I'm talking from self interest here: I need to be able to understand and debug this code, and if I struggle to understand what the memory ordering relationship is and have to work it out from first principles every time I have to look at the code, then *that is bad code*.

He was pointed to the memory-model documentation, but that did little to improve his view of the situation:

The majority of the _good_ programmers I know run away screaming from this stuff. As was said many, many years ago - understanding memory-barriers.txt is an -extremely high bar- to set as a basic requirement for being a kernel developer.

This documentation, he said, is aimed at people who spend their time thinking about memory-ordering issues. Everybody else is going to struggle with it, starting with the basic terminology used; even those who manage to understand it are likely to forget again once they go back to the problems they are actually trying to solve. Kernel developers would be better served, Chinner said, with a set of simple recipes showing how to safely code specific lockless patterns.

Biggers responded by posting a documented recipe for the "initialize once" pattern that was the source of the original problem in the direct-I/O subsystem. This pattern comes about when the initialization of a data structure is deferred until the structure is actually used, perhaps because that use may never actually happen. The initialization should be done exactly once; two racing threads should not both try to carry it out. The document provided several recipes of increasing complexity intended to match different performance needs.
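As a rough illustration of the pattern (a C11 sketch with invented names, not Biggers's recipe or the actual direct-I/O code), an initialize-once helper can combine an acquire load on the fast path with a compare-and-swap to resolve a race between two initializers:

```c
#include <stdatomic.h>
#include <stdlib.h>

struct state {
    int configured;
};

static struct state *_Atomic global_state;

/* Return the shared state, initializing it on first use.  If two
 * threads race here, one allocation wins and the loser's copy is
 * freed; the acquire/release ordering ensures that the winner's
 * initialization is visible before the pointer itself is. */
struct state *get_state(void)
{
    struct state *s = atomic_load_explicit(&global_state,
                                           memory_order_acquire);
    if (s)
        return s;

    struct state *fresh = malloc(sizeof(*fresh));
    if (!fresh)
        return NULL;
    fresh->configured = 1;

    struct state *expected = NULL;
    if (atomic_compare_exchange_strong_explicit(&global_state, &expected,
                                                fresh,
                                                memory_order_acq_rel,
                                                memory_order_acquire))
        return fresh;

    /* Another thread beat us to it; use its copy. */
    free(fresh);
    return expected;
}
```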

While the attempt to provide useful recipes was welcomed, it became clear that a number of people felt that the effort had missed the mark somewhat. Darrick Wong, for example, pointed out that language like:

Specifically, if all initialized memory is transitively reachable from the pointer itself, then there is no control dependency so the data dependency barrier provided by READ_ONCE() is sufficient.

is not immediately clear to a lot of developers. Alan Stern's attempt to clarify it read like this:

Specifically, if the only way to reach the initialized memory involves dereferencing the pointer itself then READ_ONCE() is sufficient. This is because there will be an address dependency between reading the pointer and accessing the memory, which will ensure proper ordering. But if some of the initialized memory is reachable some other way (for example, if it is global or static data) then there need not be an address dependency, merely a control dependency (checking whether the pointer is non-NULL). Control dependencies do not always ensure ordering -- certainly not for reads, and depending on the compiler, possibly not for some writes -- and therefore a load-acquire is necessary.

This was seen as driving home the point that started the whole discussion: most developers do not think of memory this way and would really rather not have to. They simply do not think in this kind of language. As long as lockless algorithms require this sort of understanding, they will be underused and many of the implementations that do show up are likely to be buggy in subtle ways.

An alternative, as suggested by Matthew Wilcox, is to define an API for this sort of on-the-fly initialization and hide the details behind it. Developers who understand memory models and enjoy thinking about them can concern themselves with optimizing the implementation, while the rest of the kernel-development community can simply use it with the knowledge that it works as intended. There followed a discussion with several different ideas about how this API should actually look, but nothing emerged that looks like it could find its way into wider use.

This particular discussion has faded away, but the underlying problem remains. Successful software development at all levels depends on the management of complexity, but that is not yet really happening at the memory-model level. Sooner or later, somebody will come along with the right skills to both understand the Linux kernel memory model and to hide it behind a set of APIs that other developers can safely use without having to understand that model. Until then, writing lockless code will continue to be a challenging task for many developers — and the people who have to review their code.

Comments (108 posted)

A look at Dart

By John Coggeshall
July 29, 2020

Dart is a BSD-licensed programming language from Google with a mature open-source community supporting the project. It works with multiple architectures, is capable of producing native machine-code binaries, and can also produce JavaScript versions of its applications. Dart version 1.0 was released in 2013, with the most recent version, 2.8, released on June 3 (2.9 is currently in public beta). Among the open-source projects using Dart is the cross-device user-interface (UI) toolkit Flutter. We recently covered the Canonical investment in Flutter to help drive more applications to the Linux desktop, and Dart is central to that story.

Dart's syntax is a mix of concepts from multiple well-established languages including JavaScript, PHP, and C++. Further, Dart is a strongly-typed, object-oriented language, with primitive types that are implemented as classes. While Dart does have quirks, it is likely that a programmer familiar with the aforementioned languages will find getting started with Dart to be relatively easy. Included in the language are useful constructs like Lists (arrays), Sets (unordered collections), and Maps (key/value pairs).

Beyond the language constructs, the Dart core libraries provide additional support for features like asynchronous programming, HTML manipulation, and converters to work with UTF-8 and JSON.

Dart compilation targets

To learn about Dart's compilation features, we need something to compile; for purposes of an example, it was easy enough to write a small Dart application that calculates and displays the Fibonacci sequence recursively:

    /* Application entry point, just like C */
    void main() {
        int totalTerms = 45;
        String output = "";

        for(int i = 1; i <= totalTerms; i++) {
            output += fibonacci(i).toString() + ", ";
        }

        print(output + "...");
    }

    /* A compact syntax for the recursive fibonacci algorithm */
    int fibonacci(int n) => n <= 2 ? 1 : fibonacci(n - 2) + fibonacci(n - 1);

In Dart, like in C, the main() function is the entry point into the application. It is worth noting, however, the compact JavaScript-inspired syntax of the fibonacci() function. The function uses Dart arrow syntax combined with a conditional expression to implement the algorithm. For reference, below is a more verbose version of the same fibonacci() function:

    int fibonacci(int n) {
        if(n <= 2) {
            return 1;
        }

        return fibonacci(n - 2) + fibonacci(n - 1);
    }

The language provides many options for executing a Dart program. The most straightforward is to run the program from the command line with the dart command, which executes it on the Dart virtual machine (VM). Like other languages, Dart's VM implements just-in-time (JIT) compilation of Dart code into native machine code. Here's the output of the preceding example:

    $ dart fibonacci.dart
    1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610,
    987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368,
    75025, 121393, 196418, 317811, 514229, 832040, 1346269,
    2178309, 3524578, 5702887, 9227465, 14930352, 24157817,
    39088169, 63245986, 102334155, 165580141, 267914296,
    433494437, 701408733, 1134903170, ...

As an alternative to the VM, Dart can also be compiled into native machine code using the dart2native command. While it can accurately be described as compiling to native machine code, there are some differences when comparing Dart to other compiled languages, such as C. Because of the nature of the Dart language, not all code can necessarily be compiled to native machine code. There are multiple reasons for this; one example being that a Dart application may not be able to determine the type of a given variable in certain code blocks until runtime. In that case, those code blocks cannot take advantage of compilation and must be interpreted by the VM as bytecode. Thus, even compiled Dart applications still require a runtime in order to address those issues.

By default, dart2native will create a standalone binary of the application with the runtime bundled in. Alternatively, dart2native can create what is known as an ahead-of-time (AOT) binary. These binaries do not include an embedded runtime and cannot be executed directly. They can, however, be executed if a runtime environment is provided externally — such as by using the dartaotruntime command. Alternatively, AOT binaries can be used in Dart-powered cloud services where a runtime is provided by the service.

Finally, Dart can also "compile" to JavaScript (known as transpiling) using the dart2js command. The documentation indicates that the dart2js command is best suited for a production build, since the entire application is transpiled into a single optimized JavaScript file.

For development purposes, the documentation recommends the dartdevc command. Unlike dart2js, the dartdevc command supports incremental transpilation and produces a modular version of the application with multiple separate JavaScript files. This allows the developer to transpile only the changes in the application during development, saving considerable time otherwise wasted waiting to transpile the entire code base. This tool is often used in conjunction with a development web server that keeps the transpiled JavaScript up-to-date in real time as the underlying Dart code is edited. For developing browser-based Dart applications, Dart provides a development web server available through the webdev command, which automatically rebuilds the application's JavaScript using dartdevc as it is being developed.

Getting into Dart

To try Dart coding out yourself, there are a few options. Since Dart can be transpiled to JavaScript, the easiest approach is to simply use the project's provided browser-based runtime. For local installation of the SDK, Dart's project website provides instructions — including an APT package repository for Linux, Windows installation via Chocolatey, and macOS installation via Homebrew.

Beyond the Dart core libraries that ship with the language, there is an open-source community also providing libraries. The most prominent collection is a repository of packages called pub.dev that can be used in application development. To manage these libraries Dart uses a YAML-based dependencies file called pubspec.yaml stored in the project root directory. Below is an example pubspec.yaml for a project, where we define a dependency on the basic_utils package found on pub.dev:

    name: LWN-Example
    dependencies:
      basic_utils: 2.5.5

To download and install all of the dependencies for the application, Dart provides the pub command. All that is needed to resolve and install the dependencies on a system is to run pub get from the root directory of the project.

By default, these dependencies are assumed to come from the Dart pub.dev repository. For dependencies that are not available in that repository, Dart supports Git dependencies that allow them to be taken from Git repositories. Like the PHP Composer package tool, pub get produces a matching pubspec.lock file in the project's root the first time it is executed that captures the exact version of a dependency that the project is using. Subsequent calls to pub get will default to using this file over pubspec.yaml, ensuring the version of the library being used is the correct one for the project. To upgrade a project's dependencies to their latest versions, use the pub upgrade command.

Below is a brief demonstration of the use of packages (as well as more of the language itself). In this example, we construct a simple Dart command-line application to determine whether the provided string is an email address. This takes advantage of the basic_utils package mentioned earlier that includes, among other things, the EmailUtils.isEmail() method:

    import 'package:basic_utils/basic_utils.dart';

    void main(List<String> arguments)
    {
        if (EmailUtils.isEmail(arguments[0])) {
            print("It's an email!");
        } else {
            print("Not an email.");
        }
    }

As shown, we import the basic_utils package using the import statement. The package: prefix of that statement indicates the import is coming from an outside package, rather than from somewhere in the project itself. The statement imports all of the classes defined in the basic_utils package, including the EmailUtils class we use to do the email address validation.

In this example, we have specified an argument to our main() entry point function. This argument is a List (array) of strings (String type) with the name arguments. When this parameter is provided in the declaration of the main() function, Dart will automatically populate it with the command-line arguments given to the application when executed. Since a List is Dart's version of a numerically-indexed array, arguments[0] references the first command-line argument provided (note that in other languages, index 0 is typically the name of the command executed):

    $ dart package-demo.dart someone@example.com
    It's an email!
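For comparison, a minimal C helper (invented here for illustration) has to skip over argv[0] to find the first user-supplied argument:

```c
#include <assert.h>
#include <stddef.h>

/* In C, argv[0] is the command name, so the first user-supplied
   argument is argv[1]; in Dart's main(List<String> arguments),
   arguments[0] is already the first user-supplied argument. */
static const char *first_user_arg(int argc, char **argv)
{
	return argc > 1 ? argv[1] : NULL;
}
```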

Wrapping up

There is a lot to like about Dart as a language to write more traditional "applications" in — it is what the language was designed to do. Developers can write applications in any language, and they do, but each language was designed with a particular niche in mind. Dart describes itself on its project page as "a client-optimized language for fast apps on any platform", and specifically as a "language specialized around the needs of user interface creation". Dart stands apart in that sense, as not many languages can claim they were designed with that specific combination of needs in mind.

It is safe to say that we have barely scratched the surface of the capabilities of Dart in this article. Here, we have given a glimpse of the language, with the hope of helping decide if it is worth further examination. The tour of the language and tutorials found in the Dart documentation will be helpful for getting started.

Comments (22 posted)

TLS gets a boost from Arduino for IoT devices

By John Coggeshall
July 28, 2020

Arduino devices are a favorite among do-it-yourself (DIY) enthusiasts to create, among other things, Internet of Things (IoT) devices. We have previously covered the Espressif ESP8266 family of devices that can be programmed using the Arduino SDK, but the Arduino project itself also provides WiFi-enabled devices such as the Arduino MKR WiFi 1010 board. Recently, the Arduino Security Team published a post on the security shortcomings of IoT devices and how the Arduino project is working to make improvements. We will take the opportunity to share some interesting things from that post, and also look at the overall state of TLS support in the Arduino and Espressif SDK projects.

When it comes to making a secure IoT device, an important consideration is the TLS implementation. At minimum, TLS can prevent eavesdropping on the communications but, properly implemented, it can also address a number of other security concerns (such as man-in-the-middle attacks). Moreover, certificate-based authentication for IoT endpoints is a considerably better approach than usernames and passwords. In certificate-based authentication, a client presents a certificate that can be cryptographically verified as to the client's identity, rather than relying on a username and password to do the same. These certificates are issued by trusted and cryptographically verifiable authorities, so they are considerably more difficult to compromise than a simple username and password. Still, according to the team: "As of today, a lot of embedded devices still do not properly implement the full TLS stack". As an example, it pointed out that "a lot of off-brand boards use code that does not actually validate the server's certificate, making them an easy target for server impersonation and man-in-the-middle attacks."

The reason for this is often simply a lack of resources available on the device — some devices offer only 32KB of RAM, and many TLS implementations require more memory than that to function. Moreover, validating server certificates requires storing a potentially large number of trusted root certificates. Storing all of the data for Mozilla-trusted certificate authorities takes up over 170KB on a system that potentially has only 1MB of available total flash memory. A general lack of education regarding the importance of security in this space unfortunately also plays a role. After all, TLS isn't the most straightforward subject to begin with, and having to implement it on a resource-limited platform does not make getting it right any easier.

The Arduino project appears to take these issues seriously, and is backing that up with some concrete improvements in its offerings. For Arduino boards with WiFi capabilities, that means providing a hardware-based cryptographic solution. These chips (the ATECC508A and ATECC608A by Microchip) provide services like certificate storage, encryption, and verification for TLS implementations — without consuming the limited resources of the device running the firmware. Essentially, these hardware-cryptographic chips implement everything needed to handle the asymmetric cryptography used by protocols like TLS. Using these hardware solutions requires the necessary software, and for Arduino that meant providing a lightweight TLS implementation that is easy for developers to use. To address this, the Arduino project has built on the work of the MIT-licensed BearSSL project, written by Thomas Pornin.

BearSSL implements RFC 5246 — TLS version 1.2. According to the BearSSL project's guiding rules, the project "tries to find a reasonable trade-off between several partly conflicting goals", one of which is support for CPU-challenged platforms. In short, it provides an implementation of TLS well-suited for an IoT device. BearSSL appears to be tightly controlled by Pornin. On the project's "How to Contribute" page, Pornin states that patches are welcome, but should be emailed directly to him for consideration. Pornin says he "will rewrite any patch suggestion" and the "resulting code uses the MIT license, listing me (and only me) as the author." According to Pornin, any contributions that are accepted will only be credited on the BearSSL site itself. The code can be retrieved from Pornin's Git repository described here. The latest BearSSL release, v0.6, came out in August 2018 and was described as "beta-quality" software.

Despite the lack of both a community and recent releases, the Arduino project selected BearSSL "as a starting point" for its TLS library. This implementation, called the ArduinoBearSSL library, bundles the last release of BearSSL and augments it with the ArduinoECCX08 library to take advantage of the hardware-provided cryptographic tools (when available). This gives Arduino developers a reasonable library for implementing hardware-accelerated TLS correctly, with the same overall simplicity they expect from Arduino development.

It is worth noting that Arduino isn't the only embedded-device project to make use of BearSSL. The Arduino core for ESP8266 by Espressif also implements BearSSL for its WiFi client. Like Arduino, the ESP8266 implementation bundles BearSSL v0.6. According to the official Espressif forum for the ESP8266, however: "There isn't any hardware accelerated crypto support" available for the ESP8266. This makes TLS more complicated on ESP8266 devices, as the entire implementation must be done in software, which can make getting it right difficult; for example, software-based TLS can lead to exceedingly slow connection times.

The newer offering from Espressif is the ESP32 chip, which provides a considerable boost to power and overall functionality when compared to the ESP8266. One of those improvements is a version of the module (the ESP32-WROOM-32SE [PDF]) that includes the same hardware-accelerated cryptographic chip as can be found on Arduino boards (Microchip's ATECC608A). While BearSSL was ported to Espressif's Arduino ESP8266 library, it has not been ported for use in Espressif's Arduino core for ESP32. Instead, the ESP32 TLS support is provided by a port of Mbed TLS to the device as part of the WiFiClientSecure library. Unlike ArduinoBearSSL, this library does not appear to be designed to take advantage of the hardware acceleration available in some ESP32 models at this time. Since the point of the Arduino core for ESP32 is to enable Arduino libraries in development, it stands to reason that developers can take advantage of ArduinoBearSSL — and the hardware support it provides.

All in all, it is good to see projects like Arduino and chip manufacturers like Espressif, both of which tend to cater to the DIY community, take security seriously. These sorts of improvements are important for more than the DIY community, however, as the family of Espressif chips is also used in a wide variety of off-the-shelf consumer devices. In the world of IoT, having the ability to do something as simple as store certificates in a way that doesn't eat up precious resources makes it easier for developers to avoid cutting corners in their projects. Hopefully, making the technology more accessible will help improve IoT security overall.

Comments (12 posted)

Open-source CNCing

By John Coggeshall
July 29, 2020

Last year Sienci Labs finished its Kickstarter campaign for the open-source LongMill Benchtop CNC Router — the company's second successful campaign for an open-source CNC machine. CNC routers allow users to mill things (like parts) from raw materials (like a block of aluminum) based on a 3D model. The LongMill is a significant improvement over the original sold-out Mill One and makes professional-quality machining based entirely on open-source technology a reality. As an owner of a LongMill, I will walk through the various open-source technologies that make this tool a cornerstone of my home workshop.

Hardware

The Sienci Labs LongMill is an impressive feat of engineering, using a combination of off-the-shelf hardware components alongside a plethora of 3D-printed parts. The machine, once assembled, is designed to be mounted to a board. This board, called a spoilboard, is a sacrificial surface that the machine can "accidentally" cut into or otherwise damage; it is designed to be replaced occasionally. In most circumstances, the spoilboard is the top of a table for the machine, and Sienci provides documentation on several different table builds done by the community. For builders short on space, the machine can be mounted on a wall.

The complete 3D plans for the machine are available for download, including a full bill of materials of all of the parts needed. The project also provides instructions to assemble the machine and how best to 3D print relevant components. The machine is controlled by the LongBoard CNC Controller, and Sienci Labs provides full schematics [23MB ZIP] of that as well. All mentioned materials are licensed under a Creative Commons BY-SA 4.0 license.

In addition to the open-source design of the machine itself, an open-source-minded community has formed around the project. The company's Facebook user group has 1,600 members, and the company hosts an active community forum that discusses everything from tips to machine support. Community members contribute, among other things, various modifications to improve the original design or to add new features such as a laser engraver.

Software

When it comes to a fully functional CNC, the hardware behind it is only part of the equation. There is an entire software stack needed to go from a model to a physical thing. To explain this, I thought it best to have an example to refer to. For this example, I borrowed this design for a Tux drink coaster, scaled up its size, and used it to produce this quick example of the LongMill in action:

[Milled Tux]

There are a lot of different software options, both open source and commercial, that can be used to mill something like this. In this case, I am starting with an STL file and I need to translate that data into actions for the mill to take to carve the image. Looking at the entire process generically, the steps would be: design something, build tool paths (movements of the CNC machine), and then send tool paths to the machine. I already have the design, so I will start with generating tool paths.

The process of converting a design into a series of movements taken by a machine is called computer-aided machining (CAM). Depending on the work being done, there are many different types of CAM software available to take some sort of model and convert it into one (or more) tool paths. In the open-source world there are many choices — for example there is a list of open-source options that was compiled on Reddit. What CAM software is used depends greatly on the experience of the user and the needs of the project. In this case, I used CAMLab developed and hosted by Sienci Labs; CAMLab is based on the MIT-licensed Kiri:Moto project. CAMLab is meant to be a beginner-level tool that is easy to use; it operates directly on STL files. In other projects, such as when I am using the CNC to create my own prototype PCB circuit boards, open-source projects like FlatCAM (MIT-licensed) are a much better tool for the job. Unlike CAMLab, FlatCAM works from Gerber files generated by electronics-design tools like KiCad (GPLv3) — instead of STL files.

The job is the same regardless of the CAM software used: take the model provided, combine it with information about the material being cut along with the tool that will be used to cut it, and generate a tool path. What follows is a screenshot from CAMLab, showing the tool path generated by a 2mm cutting tool on our Tux model. In this screenshot, the tool path is indicated by the black lines, while non-cutting moves are indicated in blue:

[Tool path screenshot of Tux]

Observant readers may note that the generated tool path appears to ignore portions of the model — specifically the narrow lines in Tux's face. This is not an error: this model required a second pass using a smaller 0.8mm tool to fill in the details that the larger tool couldn't manage. That tool path would be generated and run separately from the first, with a physical change of tools in between. Ultimately, three separate tool paths had to be generated and executed to complete the carve: two for the actual relief of Tux (a 2.0mm and a 0.8mm end mill), and a third to cut the circle from the wood stock using an 8.0mm end mill.

Ultimately, the tool path generated by the CAM software is saved to a file to be transmitted to the CNC machine using a language known as G-code. Different CNC machine controllers implement different dialects of G-code, depending on the software of the controller. In the case of LongMill, the LongBoard CNC Controller uses an off-the-shelf Arduino Uno as its processing unit, running the open-source G-code interpreter Grbl (GPLv3). The G-code is sent to the Arduino Uno on the controller via a serial connection.

G-code is a line-based, interpreted language (G-code reference). Below is a small G-code example that sets the current position of the tool in the CNC machine as the origin, moves up 10mm, over on the Y axis 10mm, then returns to origin:

    G10 L20 P0 X0 Y0 Z0 ; Set current location as origin
    G0 Z10          ; Move the head up 10mm
    G0 Y10          ; Move the head on the Y-axis 10mm
    G0 X0 Y0        ; Move back to origin for X/Y
    G0 Z0           ; Lower head back to origin

A look at Grbl internals would tell you that it can only buffer, at best, sixteen G-code instructions at a time. In contrast, tool paths generated by CAM software can be hundreds of thousands of lines of G-code. Thus, for sending the G-code file generated by a CAM tool, we need a utility that can buffer the G-code instructions sent to the CNC controller. Again, there are many different open-source tools available for a job like this, but I use a project called Universal Gcode Sender (UGS — GPLv3 licensed). I selected UGS because it offers features specifically for Grbl and overall works well. In my setup, UGS runs on a Raspberry Pi 4 connected to the LongMill's controller, where G-code files are then opened and transmitted to the CNC to do a job — in this case, carve out our Tux model.

While G-code is typically generated by CAM software, it is not uncommon for it also to be written by hand in a variety of contexts. Some CAM software offers the ability to inject custom G-code at various points in the tool path generation (for example), while almost all G-code sending platforms like UGS provide a serial console for manual G-code input — if not an entire G-code macro system. In typical usage, users often write small G-code scripts (or one-liners) by hand to do specific tasks unique to their device, situation, or preferences. For example, it is often faster to move the tool on a CNC to the desired location on the bed by a line of G-code than it would be to use the provided movement controls in an application like UGS.
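For example, a single rapid-move line (the coordinates here are arbitrary) positions the tool directly:

```
G0 X100 Y50 ; rapid move to X=100mm, Y=50mm in the current work coordinates
```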

Wrapping up

Working with a CNC is the subject of multitudes of books and web sites, and the number of open-source projects available in this space is probably just as large — we've only scratched the surface here. There are many more aspects of this space to cover, such as open-source modeling programs like OpenSCAD, that may make interesting subjects in the future. What we hope to have made clear is that CNC technology from start to finish is now available to the world in a reasonably accessible form — using nothing but gratis open-source projects. For open-source enthusiasts (or professionals), it is most welcome to see another historically proprietary space develop FOSS alternatives.

Comments (15 posted)

Page editor: Jonathan Corbet

Brief items

Security

A long list of GRUB2 secure-boot holes

Several vulnerabilities have been disclosed in the GRUB2 bootloader; they enable the circumvention of the UEFI secure boot mechanism and the persistent installation of hostile software. Fixing the problem is not just a matter of getting a new GRUB2 installation, unfortunately. "It is important to note that updating the exploitable binaries does not in fact mitigate the CVE, since an attacker could bring an old, exploitable, signed copy of a grub binary onto a system with whatever kernel they wished to load. In order to mitigate, the UEFI Revocation List (dbx) must be updated on a system. Once the UEFI Revocation List is updated on a system, it will no longer boot binaries that pre-date these fixes. This includes old install media."

Full Story (comments: 19)

Kernel development

Kernel release status

The current development kernel is 5.8-rc7, released on July 26. Linus is unsure about whether things are slowing down enough or not. "But it *might* mean that an rc8 is called for. It's not like rc7 is *big* big. We've had bigger rc7's. Both 5.3 and 5.5 had bigger rc7's, but only 5.3 ended up with an rc8. Put another way: it could still go either way. We'll see how this upcoming week goes."

Stable updates: 5.7.11, 5.4.54, 4.19.135, and 4.14.190 were released on July 29.

Comments (none posted)

Brauner: The Seccomp Notifier – New Frontiers in Unprivileged Container Development

Christian Brauner has posted a novella-length description of the seccomp notifier mechanism and the problems it is meant to solve. "So from the section above it should be clear that seccomp provides a few desirable properties that make it a natural candidate to look at to help solve our mknod(2) and mount(2) problem. Since seccomp intercepts syscalls early in the syscall path it already gives us a hook into the syscall path of a given task. What is missing though is a way to bring another task such as the LXD container manager into the picture. Somehow we need to modify seccomp in a way that makes it possible for a container manager to not just be informed when a task inside the container performs a syscall it wants to be informed about but also how can to make it possible to block the task until the container manager instructs the kernel to allow it to proceed."

Comments (8 posted)

Development

Bison 3.7 released

Version 3.7 of the Bison parser generator is out. The biggest new feature would appear to be the generation of "counterexamples" for conflicts — examples of strings that could be parsed in multiple ways. There is also better support for reproducible builds, documentation links in warnings, and more.

Full Story (comments: none)

digiKam 7.0.0 released

Version 7.0.0 of the digiKam photo editing and management application is out. This release adds support for a number of new raw formats, support for Apple's HEIF format, and a new mosaic plugin. The headline feature, though, appears to be completely reworked face detection: "The new code, based on recent Deep Neural Network features from the OpenCV library, uses neuronal networks with pre-learned data models dedicated for the Face Management. No learning stage is required to perform face detection and recognition. We have saved coding time, run-time speed, and a improved the success rate which reaches 97% of true positives. Another advantage is that it is able to detect non-human faces, such as those of dogs."

Comments (none posted)

Firefox 79.0

Firefox 79.0 has been released. This version has improved accessibility for people using screen readers. See the release notes for more details.

Comments (10 posted)

Git v2.28.0

Version 2.28.0 of the git version control system has been released. "It is smaller than the releases in our recent past, mostly due to the development cycle was near the shorter end of the spectrum (our cycles last 8-12 weeks and this was a rare 8-week cycle)."

See this GitHub Blog post for details on the new features in this release.

Full Story (comments: 11)

GNU nano 5.0 released

Version 5.0 of the GNU nano text editor is out; it contains a number of improvements to the editing experience. "With --indicator (or -q or 'set indicator') nano will show a kind of scrollbar on the righthand side of the screen to indicate where in the buffer the viewport is located and how much it covers."

Full Story (comments: none)

PHP 8 alpha 3 released

The PHP project has released PHP 8 Alpha 3, the final alpha release according to the 8.0 release schedule. Feature freeze for the 8.0 release is scheduled for August 4, making this release the last one before features for the latest version of PHP are finalized. PHP 8.0 is scheduled to be released for general availability on November 26.

Comments (none posted)

Development quote of the week

The point of open source is not to ritualistically compile our stuff from source. It’s the awareness that technology is not magic: that there is a trail of breadcrumbs any of us could follow to liberate our digital lives in case of a potential hostage situation. Should we so desire, open source empowers us to create and run our own essential tools and services.
Andrew "bunnie" Huang (Thanks to Paul Wise)

Comments (none posted)

Miscellaneous

Historical programming-language groups disappearing from Google

As Alex McDonald notes in this support request, Google has recently banned the old Usenet groups comp.lang.forth and comp.lang.lisp from the Google Groups system. "Of specific concern is the archive. These are some of the oldest groups on Usenet, and the depth & breadth of the historical material that has just disappeared from the internet, on two seminal programming languages, is huge and highly damaging. These are the history and collective memories of two communities that are being expunged, and it's not great, since there is no other comprehensive archive after Google's purchase of Dejanews around 20 years ago." Perhaps Google can be convinced to restore the content, but it also seems that some of this material could benefit from a more stable archive.

Comments (42 posted)

Page editor: Jake Edge

Announcements

Newsletters

Distributions and system administration

Development

Meeting minutes

Calls for Presentations

LPC 2020 Networking and BPF Track CFP (Reminder)

This is a reminder of the call for proposals for the four-day networking track at the Linux Plumbers Conference; the deadline is August 2, and LPC will take place August 24-28. "Any kind of advanced networking and/or bpf related topic will be considered."

Full Story (comments: none)

CFP Deadlines: July 30, 2020 to September 28, 2020

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline      Event dates     Event                                              Location
July 31       October 20-23   [Canceled] PostgreSQL Conference Europe            Berlin, Germany
July 31       October 29-30   [Virtual] Linux Security Summit Europe             Virtual
August 14     October 2-5     PyCon India 2020                                   Virtual
August 28     October 8-9     PyConZA 2020                                       Online
September 1   September 14    GNU Radio Conference                               Virtual
September 13  October 13-15   Lustre Administrator and Developer Workshop 2020   Online
September 15  October 24-25   [Cancelled] T-Dose 2020                            Geldrop (Eindhoven), Netherlands
September 18  October 10-11   Arch Linux Conf 2020 Online                        Online
September 23  November 7-8    RustFest Global                                    Online

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: July 30, 2020 to September 28, 2020

The following event listing is taken from the LWN.net Calendar.

Date(s)            Event                                     Location
July 25-August 2   Hackers On Planet Earth                   Online
August 6-9         [Canceled] miniDebConf Montreal 2020      Montreal, Canada
August 7-9         Nest With Fedora                          Online
August 13-21       Netdev 0x14                               Virtual
August 20          [Virtual] RustConf                        Online
August 23-29       DebConf20                                 Online
August 25-27       Linux Plumbers Conference                 Virtual
August 26-28       [Canceled] FOSS4G Calgary                 Calgary, Canada
August 28          Linux Kernel Maintainer Summit            Virtual
September 4-11     Akademy 2020                              Virtual, Online
September 9-10     State of the Source Summit                Online
September 13-18    The C++ Conference 2020                   Online
September 14       GNU Radio Conference                      Virtual
September 16-18    X.Org Developer's Conference 2020         Online
September 18       [Postponed to 2021] PGDay Austria 2020    Vienna, Austria
September 22-24    Linaro Virtual Connect                    Online

If your event does not appear here, please tell us about it.

Security updates

Alert summary July 23, 2020 to July 29, 2020

Dist. ID Release Package Date
Debian DLA-2295-1 LTS curl 2020-07-28
Debian DLA-2290-1 LTS e2fsprogs 2020-07-26
Debian DLA-2291-1 LTS ffmpeg 2020-07-27
Debian DLA-2297-1 LTS firefox-esr 2020-07-29
Debian DLA-2296-1 LTS luajit 2020-07-28
Debian DLA-2292-1 LTS milkytracker 2020-07-27
Debian DLA-2289-1 LTS mupdf 2020-07-26
Debian DSA-4734-1 stable openjdk-11 2020-07-26
Debian DLA-2287-1 LTS poppler 2020-07-23
Debian DLA-2288-1 LTS qemu 2020-07-26
Debian DSA-4733-1 stable qemu 2020-07-24
Debian DLA-2294-1 LTS salt 2020-07-28
Debian DLA-2286-1 LTS tomcat8 2020-07-22
Fedora FEDORA-2020-54e4356732 F31 bashtop 2020-07-25
Fedora FEDORA-2020-7dddce530c F31 cacti 2020-07-23
Fedora FEDORA-2020-8a15713da2 F32 cacti 2020-07-23
Fedora FEDORA-2020-7dddce530c F31 cacti-spine 2020-07-23
Fedora FEDORA-2020-8a15713da2 F32 cacti-spine 2020-07-23
Fedora FEDORA-2020-dd0c20d985 F31 clamav 2020-07-28
Fedora FEDORA-2020-508df53719 F31 java-1.8.0-openjdk 2020-07-28
Fedora FEDORA-2020-e418151dc3 F32 java-1.8.0-openjdk 2020-07-23
Fedora FEDORA-2020-93cc9c3ef2 F31 java-11-openjdk 2020-07-28
Fedora FEDORA-2020-5d0b4a2b5b F32 java-11-openjdk 2020-07-24
Fedora FEDORA-2020-5b60029fe2 F31 mbedtls 2020-07-23
Fedora FEDORA-2020-fa74e15364 F32 mbedtls 2020-07-23
Fedora FEDORA-2020-dfb11916cc F32 mingw-python3 2020-07-23
Fedora FEDORA-2020-cfbed9c9ff F32 mod_authnz_pam 2020-07-24
Fedora FEDORA-2020-ebbf149f3b F32 podofo 2020-07-24
Fedora FEDORA-2020-e9251de272 F32 python27 2020-07-24
Fedora FEDORA-2020-198fdb12a1 F31 singularity 2020-07-23
Fedora FEDORA-2020-716d38e751 F32 singularity 2020-07-23
Fedora FEDORA-2020-76cf2b0f0a F31 xen 2020-07-23
Gentoo 202007-34 ant 2020-07-27
Gentoo 202007-25 arpwatch 2020-07-27
Gentoo 202007-37 awstats 2020-07-27
Gentoo 202007-03 cacti 2020-07-27
Gentoo 202007-08 chromium 2020-07-27
Gentoo 202007-56 claws-mail 2020-07-28
Gentoo 202007-16 curl 2020-07-27
Gentoo 202007-46 dbus 2020-07-27
Gentoo 202007-36 djvu 2020-07-27
Gentoo 202007-53 dropbear 2020-07-28
Gentoo 202007-58 ffmpeg 2020-07-28
Gentoo 202007-51 filezilla 2020-07-27
Gentoo 202007-10 firefox 2020-07-27
Gentoo 202007-44 freexl 2020-07-27
Gentoo 202007-20 fuseiso 2020-07-27
Gentoo 202007-04 fwupd 2020-07-27
Gentoo 202007-50 glib-networking 2020-07-27
Gentoo 202007-27 haml 2020-07-27
Gentoo 202007-06 hylafaxplus 2020-07-27
Gentoo 202007-31 icinga 2020-07-27
Gentoo 202007-17 jhead 2020-07-27
Gentoo 202007-42 lha 2020-07-27
Gentoo 202007-55 libetpan 2020-07-28
Gentoo 202007-05 libexif 2020-07-27
Gentoo 202007-21 libreswan 2020-07-27
Gentoo 202007-52 mujs 2020-07-28
Gentoo 202007-57 mutt 2020-07-28
Gentoo 202007-01 netqmail 2020-07-27
Gentoo 202007-49 nss 2020-07-27
Gentoo 202007-45 ntfs3g 2020-07-27
Gentoo 202007-12 ntp 2020-07-27
Gentoo 202007-48 ocaml 2020-07-27
Gentoo 202007-47 okular 2020-07-27
Gentoo 202007-33 ossec-hids 2020-07-27
Gentoo 202007-38 qtgui 2020-07-27
Gentoo 202007-18 qtnetwork 2020-07-27
Gentoo 202007-28 re2c 2020-07-27
Gentoo 202007-35 reportlab 2020-07-27
Gentoo 202007-54 rsync 2020-07-28
Gentoo 202007-15 samba 2020-07-27
Gentoo 202007-32 sarg 2020-07-27
Gentoo 202007-26 sqlite 2020-07-27
Gentoo 202007-09 thunderbird 2020-07-27
Gentoo 202007-07 transmission 2020-07-27
Gentoo 202007-43 tre 2020-07-27
Gentoo 202007-24 twisted 2020-07-27
Gentoo 202007-11 webkit-gtk 2020-07-27
Gentoo 202007-13 wireshark 2020-07-27
Gentoo 202007-02 xen 2020-07-27
openSUSE openSUSE-SU-2020:1056-1 15.2 LibVNCServer 2020-07-24
openSUSE openSUSE-SU-2020:1105-1 15.2 SUSE Manager Client Tools 2020-07-28
openSUSE openSUSE-SU-2020:1106-1 cacti, cacti-spine 2020-07-28
openSUSE openSUSE-SU-2020:1060-1 15.1 15.2 cacti, cacti-spine 2020-07-26
openSUSE openSUSE-SU-2020:1061-1 chromium 2020-07-26
openSUSE openSUSE-SU-2020:1049-1 15.1 cni-plugins 2020-07-23
openSUSE openSUSE-SU-2020:1050-1 15.2 cni-plugins 2020-07-24
openSUSE openSUSE-SU-2020:1042-1 15.1 firefox 2020-07-23
openSUSE openSUSE-SU-2020:1034-1 15.2 firefox 2020-07-23
openSUSE openSUSE-SU-2020:1090-1 15.1 freerdp 2020-07-27
openSUSE openSUSE-SU-2020:1087-1 15.1 go1.13 2020-07-26
openSUSE openSUSE-SU-2020:1095-1 15.2 go1.13 2020-07-27
openSUSE openSUSE-SU-2020:1062-1 15.2 kernel 2020-07-26
openSUSE openSUSE-SU-2020:1085-1 15.1 knot 2020-07-26
openSUSE openSUSE-SU-2020:1086-1 15.2 knot 2020-07-26
openSUSE openSUSE-SU-2020:1088-1 15.1 libraw 2020-07-26
openSUSE openSUSE-SU-2020:1089-1 15.1 perl-YAML-LibYAML 2020-07-26
openSUSE openSUSE-SU-2020:1093-1 15.2 perl-YAML-LibYAML 2020-07-27
openSUSE openSUSE-SU-2020:1108-1 15.2 qemu 2020-07-28
openSUSE openSUSE-SU-2020:1035-1 15.1 redis 2020-07-23
openSUSE openSUSE-SU-2020:1035-1 SPHfSLE12 redis 2020-07-23
openSUSE openSUSE-SU-2020:1074-1 15.1 salt 2020-07-26
openSUSE openSUSE-SU-2020:1037-1 15.1 singularity 2020-07-23
openSUSE openSUSE-SU-2020:1051-1 15.1 tomcat 2020-07-24
openSUSE openSUSE-SU-2020:1102-1 15.1 tomcat 2020-07-28
openSUSE openSUSE-SU-2020:1063-1 15.2 tomcat 2020-07-26
openSUSE openSUSE-SU-2020:1071-1 15.1 vino 2020-07-26
openSUSE openSUSE-SU-2020:1064-1 15.1 webkit2gtk3 2020-07-26
openSUSE openSUSE-SU-2020:1043-1 BP xmlgraphics-batik 2020-07-23
Oracle ELSA-2020-3014 OL8 dbus 2020-07-23
Oracle ELSA-2020-3038 OL8 thunderbird 2020-07-23
Red Hat RHSA-2020:3199-01 OSP16.1 openstack-tripleo-heat-templates 2020-07-29
Red Hat RHSA-2020:3176-01 EL8 postgresql-jdbc 2020-07-28
Red Hat RHSA-2020:3185-01 EL8 python-pillow 2020-07-28
Red Hat RHSA-2020:3118-01 EL7 samba 2020-07-23
Red Hat RHSA-2020:3119-01 EL8 samba 2020-07-23
Slackware SSA:2020-209-01 mozilla 2020-07-27
SUSE SUSE-SU-2020:2032-1 SLE15 freerdp 2020-07-23
SUSE SUSE-SU-2020:2068-1 SLE15 freerdp 2020-07-29
SUSE SUSE-SU-2020:2008-1 SLE12 java-11-openjdk 2020-07-22
SUSE SUSE-SU-2020:2027-1 SLE15 kernel 2020-07-23
SUSE SUSE-SU-2020:2067-1 SLE15 ldb 2020-07-29
SUSE SUSE-SU-2020:2028-1 SLE12 libraw 2020-07-23
SUSE SUSE-SU-2020:2029-1 SLE15 libraw 2020-07-23
SUSE SUSE-SU-2020:2048-1 OS7 OS8 OS9 SLE12 SES5 mailman 2020-07-24
SUSE SUSE-SU-2020:2025-1 SLE15 perl-YAML-LibYAML 2020-07-23
SUSE SUSE-SU-2020:2055-1 SES5 python-Django 2020-07-27
SUSE SUSE-SU-2020:2057-1 SES5 python-Pillow 2020-07-28
SUSE SUSE-SU-2020:2015-1 SLE15 qemu 2020-07-23
SUSE SUSE-SU-2020:2053-1 SLE12 rubygem-excon 2020-07-27
SUSE SUSE-SU-2020:2060-1 OS6 rubygem-puma 2020-07-28
SUSE SUSE-SU-2020:2041-1 SLE15 rust, rust-cbindgen 2020-07-24
SUSE SUSE-SU-2020:2066-1 OS8 OS9 SLE12 SES5 samba 2020-07-29
SUSE SUSE-SU-2020:14437-1 SLE11 samba 2020-07-23
SUSE SUSE-SU-2020:2036-1 SLE12 samba 2020-07-24
SUSE SUSE-SU-2020:2065-1 SLE15 samba 2020-07-29
SUSE SUSE-SU-2020:2037-1 OS9 SLE12 tomcat 2020-07-24
SUSE SUSE-SU-2020:2045-1 SLE15 tomcat 2020-07-24
SUSE SUSE-SU-2020:2046-1 SLE15 tomcat 2020-07-24
SUSE SUSE-SU-2020:2047-1 SLE15 tomcat 2020-07-24
SUSE SUSE-SU-2020:2009-1 SLE15 vino 2020-07-22
SUSE SUSE-SU-2020:2069-1 OS7 OS8 OS9 SLE12 SES5 webkit2gtk3 2020-07-29
Ubuntu USN-4435-2 12.04 14.04 clamav 2020-07-27
Ubuntu USN-4435-1 16.04 18.04 20.04 clamav 2020-07-27
Ubuntu USN-4431-1 16.04 18.04 20.04 ffmpeg 2020-07-22
Ubuntu USN-4436-1 16.04 18.04 librsvg 2020-07-27
Ubuntu USN-4437-1 20.04 libslirp 2020-07-27
Ubuntu USN-4434-1 16.04 18.04 20.04 libvncserver 2020-07-23
Ubuntu USN-4439-1 18.04 linux-gke-5.0, linux-oem-osp1 2020-07-27
Ubuntu USN-4440-1 18.04 linux-hwe, linux-azure-5.3, linux-gcp-5.3, linux-gke-5.3, linux-oracle-5.3 2020-07-27
Ubuntu USN-4441-1 16.04 18.04 20.04 mysql-5.7, mysql-8.0 2020-07-28
Ubuntu USN-4433-1 18.04 20.04 openjdk-lts 2020-07-23
Ubuntu USN-4430-2 20.04 pillow 2020-07-23
Ubuntu USN-4438-1 20.04 sqlite3 2020-07-27
Ubuntu USN-4442-1 14.04 sympa 2020-07-28
Full Story (comments: none)

Kernel patches of interest

Kernel releases

Linus Torvalds Linux 5.8-rc7 Jul 26
Greg Kroah-Hartman Linux 5.7.11 Jul 29
Greg Kroah-Hartman Linux 5.4.54 Jul 29
Greg Kroah-Hartman Linux 4.19.135 Jul 29
Greg Kroah-Hartman Linux 4.14.190 Jul 29
Clark Williams 4.9.231-rt149 Jul 26
Daniel Wagner v4.4.231-rt202 Jul 24

Architecture-specific

Core kernel

Development tools

Device drivers

Yongqiang Niu add drm support for MT8183 Jul 23
Hsin-Hsiung Wang Add Support for MediaTek PMIC MT6359 Jul 23
周琰杰 (Zhou Yanjie) Add USB PHY support for new Ingenic SoCs. Jul 23
Anup Patel Dedicated CLINT timer driver Jul 23
Vinod Koul Add LT9611 DSI to HDMI bridge Jul 23
Kurt Kanzenbach Hirschmann Hellcreek DSA driver Jul 23
Frank Lee Allwinner A100 Initial support Jul 24
Ioana Ciornei net: phy: add Lynx PCS MDIO module Jul 24
Michael Walle Add support for Kontron sl28cpld Jul 26
Konrad Dybcio SDM630/36/60 driver enablement Jul 26
alexandru.tachici@analog.com hwmon: pmbus: adm1266: add support Jul 27
Ramuthevar,Vadivel MuruganX phy: Add USB PHY support on Intel LGM SoC Jul 27
Laurent Pinchart drm: mxsfb: Add i.MX7 support Jul 27
Kevin Tang Add Unisoc's drm kms module Jul 28
Sivaprakash Murugesan Add PCIe support for IPQ8074 Jul 29

Device-driver infrastructure

Dan Murphy Multicolor Framework v32 Jul 22
Kent Gibson gpio: cdev: add uAPI V2 Jul 25
Claire Chang Restricted DMA Jul 28

Filesystems and block layer

Memory management

Networking

Martin KaFai Lau BPF TCP header options Jul 22

Security-related

Virtualization and containers

Miscellaneous

Page editor: Rebecca Sobol


Copyright © 2020, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds