
LWN.net Weekly Edition for March 21, 2019

Welcome to the LWN.net Weekly Edition for March 21, 2019

This edition contains the following feature content:

  • Defining "sustainable" for an open-source project: Bradley Kuhn's SCALE 17x talk on what sustainability should mean for free software.
  • Federated blogging with WriteFreely: a look at a simple, federation-capable blog-hosting system.
  • 5.1 Merge window part 2: the conclusion of the merge window for the 5.1 kernel.
  • The creation of the io.latency block I/O controller: how Facebook built a controller to protect latency-sensitive workloads.
  • Layers and abstractions: Kyle Anderson's SCALE 17x talk on adding, and squashing, software layers.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.


Defining "sustainable" for an open-source project

By Jake Edge
March 19, 2019

SCALE

Bradley Kuhn of the Software Freedom Conservancy (SFC) first heard the term "sustainability" being applied to free and open-source software (FOSS) four or five years ago in the wake of Heartbleed. He wondered what the term meant in that context, so he looked into it some. He came to SCALE 17x in Pasadena, CA to give his thoughts on the topic in a talk entitled "If Open Source Isn't Sustainable, Maybe Software Freedom Is?".

After wondering what was meant by "sustainability", Kuhn looked up definitions of it, first in Google, then in Wikipedia, which is freely licensed, unlike Google's dictionary. Both definitions agreed that "sustainability" is maintaining a balance between resource usage and future needs, particularly with respect to the environment—all of which sounded good to him. So his first thoughts about FOSS sustainability were that people were working on software to help the environment be more sustainable, perhaps by writing forest-management software or software to assist activist organizations. But that is not what was meant.

Money

There is no clear definition of "FOSS sustainability", he said. The closest he could get to one, though, is that maintainers of FOSS don't get paid enough money and/or are not paid often enough. So FOSS sustainability is about changing that situation, which he is in favor of; much of his career has been spent figuring out how to ensure that people get paid to write FOSS. The idea that the sustainability folks have is that we should be putting money into FOSS projects the way venture capitalists (VCs) put money into software startups. That kind of money infusion would fuel the rapid growth of these projects. He is not a fan of that model for building software, but it is the standard way that the industry funds new software.


He tends to think of FOSS projects more like restaurants than like VC-funded startups. FOSS projects are often a labor of love, much like many restaurants. Most restaurants do not make incredible amounts of money; they are generally considered successful if they can continue to pay all of the employees and cover the other expenses without much left over. There are counterexamples, however, including chains like McDonald's that make bundles of money. Just as with restaurant food, he doesn't want "McDonald's software"; he would much prefer the "local restaurant version of software" that is crafted for its community.

After Heartbleed, a large number of companies "freaked out" about open source, but it was different than the usual freak out. Historically, companies got concerned when they realized that there was open-source code in their infrastructure and products. This time it was concern that the "crazy free-software hackers" working on the code that went into their products were not being supervised or controlled. That led to the companies thinking that maybe they needed more control over these FOSS projects, so that, hopefully, situations like Heartbleed did not recur.

He noted that he would be somewhat critical of companies and the organizations they form in this part of the talk. He said that along the way he has been accused of being a communist and of hating capitalism. He does not hate capitalism, but believes that a good culture has a counterbalance to "unbridled capitalism". We learned through the early days of the industrial revolution that allowing free rein to capitalism had some terrible effects on society, so constraints and watchdogs were added into the mix. Unfortunately, in his view, pushing back against unbridled capitalism has been lost in the US; but it has also been lost in the FOSS world, he said.

Watershed

Heartbleed was a watershed moment for companies and FOSS; lots of money suddenly became available to be pushed at FOSS projects due to Heartbleed. That process had already started before Heartbleed for other reasons, but the security incident really accelerated the process. Companies wanted to channel money to FOSS projects as a means of controlling them, Kuhn said; the companies couldn't simply hire all of the developers, which is the traditional way to gain control of a project. Much of this money has been funneled through the trade associations (e.g. the Linux Foundation and the Eclipse Foundation) that have been created by companies with an interest in FOSS.

Even after all the "oodles of money" that were aimed at these projects, it still did not make FOSS sustainable. There are still meetups and an annual conference devoted to the topic of open-source sustainability. "The problem has not been solved", he said. He asked: is more corporate money what we need? Is this really a money problem? He is not completely convinced that it is a money problem.

He gave one example to illustrate how much money he is talking about. In 2013, he attended the OpenStack party at OSCON. It featured food, cakes and cupcakes with logos from OpenStack members, and a giant open bar where you could get artisanal mojitos, margaritas, and other drinks. While he did admit to getting in the cupcake line more than once, the party was "really over the top" and not what he had come to expect from parties in the open-source world. In addition, there was a giant table with hundreds of small Fiji water bottles, which certainly failed on the environmentally sustainable metric. The party obviously had an enormous price tag.

That led him to look into the Form 990 for the OpenStack Foundation. All non-profit organizations in the US have to file a Form 990 with the tax authorities. The most recent he could find for OpenStack was 2016 [PDF], which showed it had spent nearly $30 million for the year. In fact, OpenStack had a loss of $6 million in 2016, which is roughly six times the SFC annual budget.

He dug further into the form, noting that there were fairly large outlays to several other companies and organizations, including law firms, web-site development companies, and others. Around $250K was sent to another trade association, the Linux Foundation, for "community development"; that money, too, did not go toward getting code written. The list of the highest salaries showed a number of executives in the organization, with salaries ranging from $200K to $400K, who were not part of developing the code according to several OpenStack developers that Kuhn talked to.

As noted on its 990, the OpenStack Foundation is a trade association, which is a 501(c)(6) organization in the US; it is something that is formed by a group of companies to promote common business interests. A classic example of this would be the Pasadena Chamber of Commerce, which gathers up money from various businesses in the city and uses that to promote the city, provide maps for tourists, and the like; it is "totally reasonable activity" that helps local businesses.

These 501(c)(6) organizations have become the most popular model for non-profit work in the FOSS world; there are many examples of these organizations in our community. Trade associations exist to promote the interests of the for-profit companies that are the members of the association; those interests may or may not align with the interests of the community or the general public.

If you contrast that with the 501(c)(3) designation that is used for charitable non-profits, the difference is striking; 501(c)(3) organizations are set up to promote the public good, not the interests of a smaller group, but to "do things that help everybody". He noted that he is biased, because he works for a charitable organization, but he does like that when he gets up every morning he goes to work to "help everyone equally": individuals, companies, hobbyists, the employed and unemployed, commercial concerns, non-commercial entities, and so on.

Historically, organizations in the FOSS world have been charities; the first, the Free Software Foundation, was started that way back in 1985. "That was the way you formed organizations in our community for a very long time", he said, "until companies said 'let's put money together to influence this stuff'". In the 1990s and 2000s, a lot of free software was written sustainably.

But the free software in those days was not written in the VC style, with growth graphs that "look like hockey sticks". The projects grew slowly but surely; Linux is a great example of this, he said. Historically, Linux was a slow-moving project with a relatively small number of participants, but that has all changed. Linux also used to be a counter-culture project, but is no longer in his view; beyond that, the project is shunning counter-culture influences, which is troubling to him.

Kuhn feels that there is "kind of a 'slash and burn' attitude toward open source" these days. Companies have come to open source, but want things to work the way they are used to things working: with startups, rapid growth, and so on. An interesting thing about slash-and-burn agriculture is that it works great in the short term: the burned vegetation makes great fertilizer for next year's crop. But for long-term sustainability, it is terrible. This is a new problem for FOSS that was not really present before. Thinking in terms of this quarter's numbers and how to accelerate growth are, arguably, not even sensible for for-profit companies, but they are likely not at all right for most FOSS projects.

Slow, steady growth

As a kind of counterexample, he pointed to the phpMyAdmin project, which is a PHP program for doing MySQL administration via the web. It is a long-running project that was started in late 1998 and joined SFC in 2013. One of the things that SFC does for its member projects is to try to help them become sustainable. It does that by helping the projects raise money and use it in a sustainable, not slash-and-burn, fashion.

The phpMyAdmin project raised $16K in 2013 and spent none of it on development. The next year it raised almost double that, but only spent $3K on development. That continued for the next few years; income kept going up, as did the money spent on developers, but the money spent was always a fairly small fraction of what was brought in. By 2016, the project was funding three developers for part-time work for a total of $21K.

He noted that the contractors doing the work were in places where they didn't need a tremendous amount of money to live comfortably, so they were willing to work for substantially less than normal US wages, in part because they got to do something they loved to do. But Kuhn did note that they were each making more than his wife does at a domestic violence shelter, though her position requires a Master's degree. Her boss supervises 40 people and has been working in the field for 40 years but, at $60K, makes less than nearly any software developer he knows.

He is a bit disturbed by this notion that salaries have to be at the high levels expected by US developers, which seems to permeate the FOSS sustainability effort. He said that he is often accused of wanting developers to starve, but that is not true at all: he wants people to get reasonable pay for reasonable work, to have health care, be able to live a comfortable middle-class life, and so on. But if being sustainable as a project means paying salaries at Silicon Valley levels, it simply will not work—it is not something we should bring back to FOSS, he said. We should look at what people need to live comfortably, while working on something they enjoy.

PhpMyAdmin is doing an "amazing job" building that kind of project, Kuhn said. It is not a "jet-setting project" with a high-profile yearly conference; there is an annual developer meeting that has around 20 attendees. It is written in a language (PHP) that is relatively unpopular, doesn't have wildly over-the-top parties, and doesn't pay "giant salaries" ($500K, say) that some people are getting to work on open source. He thinks it is great if people can get those kinds of salaries, but the idea that we should strive to pay those kinds of salaries is highly problematic.

However, phpMyAdmin is a good example of doing sustainable FOSS. This is what SFC has been trying to do for its projects, though he again noted that he is biased since that is where he works. Instead of the accelerating growth pattern favored by much of the software industry today, phpMyAdmin has had slow, steady growth that is modeled after many free software projects that came before it.

OpenStack and phpMyAdmin are simply the examples that he used; there are others that fall elsewhere on the continuum of different funding and spending models. The point he is trying to make is that a sustainable project may not necessarily follow the VC-style path, with huge salaries and hockey-stick graphs; that may well not be the right path for a lot of projects. Many long-term FOSS projects have found a way to be sustainable without going down that road.

FOSS projects as small towns

He finished his talk with an extended metaphor based on the movie "It's a Wonderful Life". Kuhn would like to be known as "the George Bailey of free software" after he is gone, he said with a grin. Bailey is the protagonist of the movie who spends his whole life in a small town keeping the local bank afloat so that townspeople can buy and build their own homes, rather than rent them from the villainous Mr. Potter. Kuhn warned that his talk contained spoilers for the film, which was released in 1946; he noted that he is "a little bit obsessed" with the movie.

Potter owns nearly everything in the town, except Bailey's bank; in the FOSS world, we have our Mr. Potters, Kuhn said. These are "corporate tyrants" that exist within our world, but there are far more people like Sam Wainwright, one of Bailey's childhood friends. Wainwright is kind of a jerk, but a successful one who is, at least ostensibly, still a friend of Bailey's. At the end of the film, after the townspeople raise the funds necessary to keep the bank solvent, Wainwright offers an advance to cover the loss. At that point, it actually isn't needed and, as a loan, comes with strings attached. Wainwright is well-meaning and Kuhn thinks there are a lot of well-meaning people in FOSS who are pushing money into open source, "but there's strings attached".

He would like to see a diverse world of how free software is developed. He works for a charity and raises money from "townspeople" who like the work that the projects are doing—it may not be much money, but can be enough to sustain a project like phpMyAdmin. He would like to see more of that being done in the FOSS world. If all of the money to develop FOSS comes from large for-profit companies, the software will have a tendency to only focus on the computing needs of those companies.

It is a paradox that more and more FOSS is being created, but that it is getting harder to avoid proprietary software in our lives. Kuhn and his SFC colleague Karen Sandler gave a keynote at FOSDEM this year on just that topic. The problem is that much of the FOSS that is being created is in very specific domains, solving problems that companies have.

As a community, we need to consider ways to prioritize the needs of the general public, not necessarily the needs of big business, he said. While it isn't a perfect analogy, most FOSS projects are kind of like small towns. They are a small community of people who are working together for the most part. Small towns, like FOSS projects, have various problems but, given the option, he would choose a small town over a corporate campus any day. Small towns need George Baileys, however, so he would like to see lots more of those in FOSS—and fewer Sam Wainwrights and Mr. Potters.

The slides of the talk are available, as is a YouTube video.

[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Pasadena for SCALE.]


Federated blogging with WriteFreely

By Jonathan Corbet
March 15, 2019
Your editor has never been a prolific blogger; a hard day in the LWN salt mines tends to reduce the desire to write more material for the net in the scarce free time that remains. But, still, sometimes the desire to post something that is not on-topic for LWN arises. Google+ has served as the outlet for such impulses in recent years, but Google has, in its wisdom, decided to discontinue that service. That leaves a bereft editor searching for alternatives for those times when the world simply has to hear his political opinions or yet another air-travel complaint, preferably one that won't vanish at the whim of some corporation. Recently, a simple blog-hosting system called WriteFreely came to light; it offers a platform that just might serve as a substitute for centralized offerings.

WriteFreely is written in Go and released under the Affero General Public License, version 3; WriteFreely version 0.8.1 was released at the beginning of February. The project is clearly relatively young: a look at the project's public Git repository shows a total of 275 non-merge commits from nine developers. Only two of those developers exceeded ten commits, though, and one is responsible for 241 of them (and 99% of the code). For the security-conscious, numbers like that are a bit of a red flag; it seems likely that few eyeballs have passed over this body of code.
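Numbers of that sort are easy to reproduce; assuming a clone of the repository, something like the following prints per-author, non-merge commit counts:

    git shortlog -sn --no-merges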

That one author, Matt Baer, is the founder of write.as, a commercial blogging site built with the WriteFreely code. Would-be contributors are expected to sign an expansive contributor license agreement that, seemingly, grants both the owning company (called "A Bunch Tell") and any other recipient the right to distribute the code under any license.

Installation and setup

WriteFreely is not generally packaged by distributors, so users must obtain a copy directly from the project's site. A Linux x86-64 binary is available, but your editor, naturally, preferred to build from source. After all, a local build is more secure, even if one hasn't actually looked at the code being built, right? Any such notions were quickly abandoned, though, after typing "make"; the build process immediately goes off and downloads several dozen packages from a whole range of GitHub repositories. There is no way to get a real handle on what it is pulling in, or to verify that any of those packages are what they claim to be. This is discouraging, given the experience (and more experience) of how this kind of build process can go bad. Kids these days definitely have different ideas about how software should be built and distributed. [Update: the situation is not as bad as portrayed here; see the comments for more information.]

There is a configuration file that controls how WriteFreely works; it is not particularly complex, but there is a menu-driven mode to generate it interactively anyway. The biggest decision that needs to be made is whether it will host a single blog, or whether it will operate in a multi-user mode where users can create accounts that will each be able to host multiple blogs. Once that's done, WriteFreely can either run standalone with its own built-in HTTP server (which claims to be able to handle TLS when configured with the certificate and key files) or behind a proxy server.
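For illustration, the decisions described above end up as settings along these lines; this is a sketch only, with key names recalled from the project's documentation at the time, which may differ across versions:

    [server]
    port = 8080

    [app]
    site_name         = Example Blog Host
    host              = https://blog.example.com
    single_user       = false   ; multi-user mode: visitors can hold accounts
    open_registration = true
    federation        = true
    max_blogs         = 3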

The standalone mode is fine for trying things out, but using a "real" web server as a proxy is probably the way to go when hosting something exposed to the Internet. Among other things, that makes TLS support with a certificate from Let's Encrypt easy. (As an aside, it is impressive just how easy Let's Encrypt has made things; there really is no excuse for a site that throws certificate warnings or lacks TLS support entirely anymore.) WriteFreely has achieved its goal of making it possible to set up a new blogging site with a minimum of effort.
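For those taking the proxy route, a minimal nginx configuration along these lines would do the job; the host name, port, and certificate paths are placeholders, and the backend port must match whatever WriteFreely was configured to listen on:

    server {
        listen 443 ssl;
        server_name blog.example.com;

        # Certificate files as produced by Let's Encrypt's certbot
        ssl_certificate     /etc/letsencrypt/live/blog.example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/blog.example.com/privkey.pem;

        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }
    }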

The result can be seen over here thanks to an idle domain your editor has been hanging onto for a while. At least, it can be seen there for a while; no promises about that site's permanence are made at this point, and future readers may be disappointed. (Update: the site has since been taken down).

Writing freely

When WriteFreely is configured in the multi-user mode, the top-level page it serves to unauthenticated users provides a little form for account creation and an extensive advertisement (including an embedded video) for the software itself. There is no way to change that front page within the system itself. It does not take much searching, though, to find the template files for the built-in page and tweak them. It would sure be nice if the templating language (and, in particular, the specific resources available to templates in WriteFreely) were actually documented, but one can't have everything.

For a logged-in user, though, the view changes to a blank page with the word "Write" and a few grayed-out icons at the top. The intent is to provide a distraction-free writing environment, and it would appear to succeed; there is little distraction to be found in a blank page. One is expected to enter one's prose, then hit the arrow up top to post the result, either to the blog or to a drafts folder. Text is formatted using Markdown, unless one would rather use HTML; WriteFreely simply accepts either and tries to do the right thing with them. There is probably more in the way of interesting features but, as of this writing, the writer's guide is a little on the sparse side.

Posted text is formatted cleanly, without a lot of extra markup — it's meant to be read. There is little control over the appearance provided to writers beyond the ability to choose between a serif and a sans-serif font. There is a mechanism by which a relatively advanced user can provide custom CSS declarations for a given blog and some minimal documentation on the classes in use. For the most part, it seems that one is not meant to mess around much with the appearance of the site.

There is no support for hosting images in the WriteFreely system; the write.as guide suggests that, to put an image in a post, one should "first upload it somewhere on the web and get its URL, then use markdown or HTML in your post". That is likely to work better for some users than others; naturally, write.as comes with a commercial option that includes image hosting.

In the multi-user mode, the first account created has administrative privileges; subsequent users do not. The administration screen can view the current users and their information, but makes no provision for changing anything. There is no way, for example, to silence or remove an account that is making abusive posts, no way to moderate posts, and no way to take down a problematic post. Some of those things could certainly be done by typing SQL at the underlying database (SQLite and MySQL are supported; no PostgreSQL, alas), but that's no way to run a site. The administrative side of WriteFreely will need some enhancements before it can be used to host accounts from untrusted users.

There is, though, a flag that controls whether new accounts can be created or not. If account creation is disabled, the administrator can send out invitations (in the form of special URLs) to enable specific people to create accounts anyway.

Users can create multiple blogs under their account, up to an administrator-controlled limit. That feature can be used to allow authors to segregate different types of posts. Thus, for example, readers who are only interested in your editor's complaints about the weather can be spared the indignity of reading his political opinions. Until one gets to climate, at least. Users can export their data at any time in a handful of different formats; there is no provision for importing data from elsewhere, though.

Syndication, federation, and export

WriteFreely automatically creates an RSS feed for each blog. There does not appear to be a way to get a feed for the site as a whole, which could be a nice feature for some types of installations. It also claims support for the ActivityPub protocol so that, for example, blog posts can be followed by Mastodon users. Mastodon appears to be the intended means by which others can comment on blog posts; there is no comment-posting feature built into WriteFreely itself. Your editor, not being a Mastodon user, has not had a chance to play with this aspect of the system, but it could prove to be an important piece if the vision of moving away from centralized platforms ever comes to fruition.

The world is full of blogging systems. Many of them are hosted by companies that try to make money with ads, abuse their users' data, or may turn off the whole thing if the CEO has a bad day — or all of those things. Compared to those business models, the simple flat-fee structure used by write.as comes as a breath of fresh air. Other blogging systems are free software, but many suffer from a high level of complexity. WriteFreely tries to address these problems by providing a blogging platform that is simple to set up and simple to use while providing most of the features that somebody focused on blogging might want.

Will your editor maintain a WriteFreely site as the new outlet for his rare non-Linux writings? That remains to be seen. But WriteFreely does have most of the features that would really be needed to implement such a system. The current lack of code review and seemingly uncontrolled build system are the source of legitimate worries; hopefully those will be addressed as more developers discover the project. Meanwhile, it provides a way to keep a blog under one's own control and tie into other federated systems with a minimum of administrative fuss; that is hard to complain about.


5.1 Merge window part 2

By Jonathan Corbet
March 17, 2019
By the time that 5.1-rc1 was released and the 5.1 merge window ended, 11,241 non-merge changesets had been pulled into the mainline repository. Of those, just over 5,000 were pulled since the first 5.1 merge-window summary. It often happens that the biggest changes are pulled early, with the emphasis shifting to fixes by the end of the merge window; this time, though, some of the most significant features were saved for last.

Some of the noteworthy changes pulled in the second half of the 5.1 merge window are:

Core kernel

  • The live patching mechanism has a new "atomic replace" feature; it allows a single cumulative patch to replace a whole set of predecessor patches. It is useful in cases where an older patch needs to be reverted or superseded; one use case is described in this article. Some more information can be found in this commit.
  • The io_uring API has been added, providing a new (and hopefully better) way of doing high-performance asynchronous I/O.
  • If the CONFIG_PRINTK_CALLER configuration option is set, messages printed by the kernel will include a new field identifying the thread or CPU ID of the code doing the printing. It is primarily meant to ease the task of reuniting streams of messages that may be interrupted by messages printed elsewhere in the system.
  • It is now possible to use nonvolatile memory as if it were ordinary RAM. This work is described in this article from January; see also this changelog for more information and some important caveats.
  • Opening a process's /proc directory now yields a file descriptor that can be used to refer to the process going forward; as described in this article, the primary purpose is to prevent the delivery of signals to the wrong process should the target exit and be replaced (at the same ID) by an unrelated process. The new pidfd_send_signal() system call (described in this commit) can be used with these file descriptors.
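As a brief sketch of how those last pieces fit together (the syscall number shown is the x86-64 value, there was no C-library wrapper at the time, and the target PID is a placeholder):

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #ifndef __NR_pidfd_send_signal
    #define __NR_pidfd_send_signal 424  /* x86-64 */
    #endif

    int main(void)
    {
        /* Opening the /proc directory pins down the target's identity;
           the descriptor stays valid even if the PID is later reused. */
        int pidfd = open("/proc/1234", O_DIRECTORY | O_CLOEXEC);
        if (pidfd < 0) {
            perror("open");
            return 1;
        }

        /* Signal via the pidfd; if the original process has exited,
           this fails instead of signaling an unrelated new process. */
        if (syscall(__NR_pidfd_send_signal, pidfd, SIGTERM, NULL, 0) < 0)
            perror("pidfd_send_signal");
        close(pidfd);
        return 0;
    }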

Filesystems and block layer

  • The "exofs" filesystem, meant to run on top of object storage devices, has been removed, along with SCSI-protocol support for those devices in general.
  • The new dm-mod.create= command-line parameter can be used to create device-mapper volumes at boot time without the need for an initramfs. See Documentation/device-mapper/dm-init.txt for more information; a sketch of the syntax appears after this list.
  • The F2FS filesystem has a new mode bit (F2FS_NOCOW_FL) that disables copy-on-write behavior for the affected file.
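As an example of the boot-time device-mapper syntax mentioned above, a linear volume spanning two devices might be assembled like this; the line is loosely based on the example in dm-init.txt, and the device numbers and sector counts are purely illustrative:

    dm-mod.create="lroot,,,rw, 0 4096 linear 98:16 0, 4096 4096 linear 98:32 0" root=/dev/dm-0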

Hardware support

  • Clock: ZXW Crystal SD3078 realtime clocks, Cadence realtime clocks, Amlogic Meson realtime clocks, MicroCrystal RV-3028 realtime clocks, Abracon AB-RTCMC-32.768kHz-EOZ9 I2C realtime clocks, Epson rx8571 realtime clocks, NXP i.MX8MM CCM clock controllers, and Actions Semi OWL S500 clocks.
  • GPIO and pin control: TQ-Systems QTMX86 GPIO controllers, Gateworks PLD GPIO expanders, AMD Fusion Controller Hub GPIO controllers, and NXP IMX8QM and IMX8MM pin controllers.
  • Graphics: Toppoly TPG110 panels, ARM Komeda display processors, Sitronix ST7701 panels, and Kingdisplay kd097d04 panels. It's also worth noting that the Nouveau driver now has support for heterogeneous memory management, allowing better sharing of RAM between the CPU and the GPU.
  • Input: Maltron L90 keyboards, ViewSonic/Signotec PD1011 signature pads, Sitronix ST1633 touchscreen controllers, and Qualcomm MSM vibrators.
  • Media: Melexis MLX90640 thermal cameras, Omnivision ov8856 sensors, and NXP i.MX7 camera sensor interfaces.
  • Miscellaneous: STMicroelectronics STMPE analog-to-digital converters, STMicroelectronics STPMIC1 power-management ICs, Toshiba Mobile TC6393XB I/O controllers, Mellanox hardware watchdog timers, ChromeOS Wilco embedded controllers, Xilinx ZynqMP IPI mailboxes, and NXP Layerscape qDMA engines.

Security

  • The goal of stacking security modules has been discussed since 2004 (and probably before). This work is finally coming to a conclusion, and many of the necessary low-level changes have been merged for 5.1. There is a new lsm= command-line parameter that controls which modules are loaded, and in which order; an example appears after this list.
  • The new "SafeSetID" security module has been added; it places limits on user and group ID transitions. For any given user (or group) ID, a change (via executing a setuid program, for example) would only be allowed if this module agrees. ChromeOS is currently using it to implement its security policies; see Documentation/admin-guide/LSM/SafeSetID.rst for more information.
  • The audit subsystem has gained support for namespaced file capabilities.
  • The structleak GCC plugin has been extended to initialize all variables passed by reference on the stack. See this commit for details.
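As an example of the lsm= parameter mentioned above, a boot command line could request a specific set and ordering of modules like this; which modules are actually available depends on the kernel configuration, so the names here are illustrative:

    lsm=loadpin,safesetid,integrity,selinux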

Internal kernel changes

  • The work to convert all fault() handlers to return the special vm_fault_t type has been completed, so now that type has been changed to be incompatible with the previous int type. That will cause compilation failures on any out-of-tree modules that have not been updated; a sketch of the needed change appears after this list.
  • A new "generic radix tree" data structure has been added for simple uses. There is no separate documentation for it, but this commit contains kerneldoc comments describing how it works.
  • The flexible array data structure has been removed; its (few) users have been converted to use generic radix trees instead.
  • The ever-larger file_operations structure has gained a new iopoll() method; it is used by the io_uring mechanism.
  • The handling of masks in the DMA-mapping layer has changed somewhat. Previous kernels required drivers to find a mask that the kernel was willing to accept; now, the mask provided by drivers describes only the device's capabilities, and the kernel worries about higher-level limitations. That should allow the simplification of a lot of driver initialization code. This commit describes the change.
  • The internal handling of filesystem mounts has changed considerably in preparation for the addition of the new mount API. The new system calls have still not been added, though, and seem likely to wait for another development cycle. See this documentation commit for a description of the new internal API.
  • The GCC compiler can use indirect jumps for switch statements; those can end up using retpolines on x86 systems. The resulting slowdown is evidently inspiring some developers to recode switch statements as long if-then-else sequences. In 5.1, the compiler's case-values-threshold will be raised to 20 for builds using retpolines — meaning that GCC will not create indirect jumps for statements with fewer than 20 branches — addressing the performance issue without the need for code changes that might well slow things down on non-retpoline systems.
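For the vm_fault_t conversion mentioned at the top of this list, the fix in out-of-tree code is a simple signature change; a minimal sketch (not any particular driver's code):

    #include <linux/mm.h>

    /* Before 5.1, fault handlers returned int; that now fails to compile. */
    static vm_fault_t my_fault(struct vm_fault *vmf)
    {
            /* ... map or allocate the page here ... */
            return VM_FAULT_SIGBUS;     /* or VM_FAULT_NOPAGE, etc. */
    }

    static const struct vm_operations_struct my_vm_ops = {
            .fault = my_fault,
    };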

Unless something perturbs the usual schedule, the final 5.1 release can be expected at the beginning of May.


The creation of the io.latency block I/O controller

March 14, 2019

This article was contributed by Josef Bacik

Sharing a disk between users in Linux is awful. Different applications have different I/O patterns, they have different latency requirements, and they are never consistent. Throttling can help ensure that users get their fair share of the available bandwidth but, since most I/O is in the writeback path, it's often too late to throttle without putting pressure elsewhere on the system. Disks are all different as well. You have spinning rust, solid-state devices (SSDs), awful SSDs, and barely usable SSDs. Each class of device has its own performance characteristics and, even in a single class, they'll perform differently based on the workload. Trying to address all of these issues with a single I/O controller was tricky, but we at Facebook think that we have come up with a reasonable solution.

Historically, the kernel has had two I/O controllers for control groups. The first, io.max, allows setting hard limits on the bandwidth used or I/O operations per second (IOPS), per device. The second, io.cfq.weight, was provided by the CFQ I/O scheduler. As Facebook has worked on things like pressure-stall information and the version-2 control-group interface, it became apparent that neither of those controllers solved our problem. Generally, we have a main workload that runs, and then we have periodic system utilities that run in the background. Chef runs a few times an hour, updates any settings on the system, and installs packages. The fbpkg tool downloads new versions of the application that is running on the system three or four times per day.

The io.max controller allowed us to clamp down on those system utilities, but made them run unbearably slowly all of the time. Ratcheting up on the throttling just made them impact the main workload too much, so it wasn't a great solution. The CFQ io.cfq.weight controller was a non-starter, as CFQ did not work with the multi-queue block layer, not to mention that just using CFQ in general caused so many problems with latencies that we had turned it off years ago in favor of the deadline scheduler.

Jens Axboe's writeback-throttling work introduced a new way of monitoring and curtailing workloads. It works by measuring the latencies of reads from a disk and, if they exceed a configured threshold, it clamps down on the number of writes that are allowed to go to the disk. This sits above the I/O scheduler, which is important because we have a finite number of requests we can have outstanding for any single device. This number is controlled by the /sys/block/<device>/queue/nr_requests setting. We call this the "queue depth" of the device. The writeback-throttling infrastructure worked by lowering the queue depth before allocating a request for incoming write operations, allowing the available requests to be used by reads and throttling the writes as necessary.

This solution addressed a problem wherein fbpkg would pull down multi-gigabyte packages to update the running application. Since the application updates tended to be pushed all at once, we would see global latency spikes as the sudden wave of writes impacted the already running application.

Enter a new I/O controller

Writeback throttling isn't control-group aware and only really cares about evening out read and write latencies per disk. However, it has a lot of good ideas, all of which I blatantly stole for the new controller, which I call io.latency. This controller has to work on both spinning rust and high-end NVMe SSDs, so it needed to have a low overhead. My goal was to add no locking in the fast path, a goal I mostly accomplished. Initially we really wanted both workload protection and proportional control. We have use cases where we want to protect the main workload at all costs, but other use cases where we want to stack multiple workloads and have them play together nicely. Eventually we had to drop that plan, go for workload protection only, and come up with another solution for proportional control.

With io.latency, one sets a latency threshold for a group. If this threshold is exceeded for a given time period (250ms normally), then the controller will throttle any peers that have a higher latency-threshold setting. The throttling mechanism is the same as writeback throttling: the controller simply clamps down on the queue depth for that particular control group. This throttling only applies to peers in the control-group hierarchy. If, for example, a parent group b holds two children, fast and slow, while a third group, unrelated, lives elsewhere in the tree, then when fast misses its latency threshold, only slow would be throttled; unrelated would be unaffected.
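For reference, the controller is configured by writing to the io.latency file in a cgroup-v2 hierarchy. The sketch below sets up the hierarchy just described; the device numbers are placeholders and the target values are assumed to be in microseconds — see Documentation/admin-guide/cgroup-v2.rst for the precise format:

    # Create the two peer groups under a common parent "b"
    mkdir -p /sys/fs/cgroup/b/fast /sys/fs/cgroup/b/slow

    # Enable the io controller down the tree
    echo "+io" > /sys/fs/cgroup/cgroup.subtree_control
    echo "+io" > /sys/fs/cgroup/b/cgroup.subtree_control

    # A tight latency target for the protected workload...
    echo "8:0 target=10000" > /sys/fs/cgroup/b/fast/io.latency
    # ...and a looser one for the background workload
    echo "8:0 target=50000" > /sys/fs/cgroup/b/slow/io.latency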

The way I accomplish this without locking is to have a cookie that is set in both the parent and its children. If, for example, fast misses its target, it decrements the cookie in its parent group (b). The next time slow submits an I/O request, the controller checks the cookie in b against slow's copy of the cookie. If the value has gone down, slow decreases its queue depth. If the value has gone up then slow would increase its queue depth.
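A loose user-space rendering of that handshake — not the kernel's actual code, which uses its own atomic primitives and scaling steps — might look like:

    #include <stdatomic.h>

    struct iolat_group {
        atomic_long cookie;        /* parent: adjusted when a window closes */
        long last_cookie;          /* child: cached copy of the parent cookie */
        unsigned int queue_depth;  /* current clamp for this group */
    };

    /* Called lock-free in a child group's submission path. */
    static void scale_check(struct iolat_group *child,
                            struct iolat_group *parent)
    {
        long cur = atomic_load(&parent->cookie);  /* one atomic read, no lock */

        if (cur < child->last_cookie)
            child->queue_depth /= 2;   /* a peer missed its target: clamp down */
        else if (cur > child->last_cookie)
            child->queue_depth += 1;   /* things improved: open back up */
        child->last_cookie = cur;
    }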

In the normal I/O path, io.latency adds two atomic operations: one to read the parent cookie and one to acquire a slot in the queue. In the completion path, we only have one atomic operation (to release the queue slot) in the normal case, along with a per-CPU operation to account for the time the I/O took. In the slow case, which occurs every window sample time (that's the 250ms time period mentioned above) we have to acquire a lock in the parent to add up all of the I/O statistics and check if our latencies have missed the threshold.

Part of io.latency is accounting for the I/O time. Since we care about total latency suffered by the application, we count from the time that each operation is submitted to the time it is completed. This time is kept in a per-CPU structure that is summed up every window period. We found in testing that taking the average latency was good for rotating drives, but for SSDs it wasn't responsive enough. Thus, for SSDs, we have a percentile calculation in place; if the 90th-percentile latencies surpass the threshold setting, then it's time for a less-important peer group to be throttled.

The final part of io.latency is a timer that fires once each second. Since the controller was built to be mostly lockless, it's driven by the I/O being done. However, if the main workload throttles a slower workload into oblivion and then ceases I/O, there is no longer an I/O-driven mechanism to unclamp the slow group. The periodic timer takes care of this by firing when there's I/O occurring from any source and verifying that the aggrieved group is still doing I/O; otherwise it unclamps everybody so they can go on about their work.

Everything worked perfectly, right?

Unfortunately, the kernel is a large system of interconnected parts, and many of these parts don't like the fact that, suddenly, submit_bio() can take much longer to return if the caller is being throttled. We kept running into a variety of different priority inversions that ate up a lot of our time when testing this whole system in production.

Our test scenario was an overloaded web server with a slow memory leak that was started under the slow control group. Generally, what happens is that the fast workload will start being driven into memory reclaim and needing to do swap I/O for whatever pages it can get. Pages are attached to their owning control group, which means any I/O performed using those pages is done within the owner's limits. Our high-priority workload was swapping pages owned by a low-priority group, which meant that it was being incorrectly throttled.

This was easy enough to solve: just add a REQ_SWAP flag to the I/O operation and make it so the I/O controller simply let those operations through with no throttling. A similar thing had to be done for REQ_META as well, since we could get blocked up on a metadata I/O that the slow group had submitted. However, now the slow group was causing a lot of I/O pressure, but not in a way that caused it to be throttled, since all REQ_SWAP I/O is now free. The bad workload was only allocating memory — and never doing I/O — so there was no way to throttle it until it buried the main workload. Once the memory pressure starts to build, the workload's latencies really go through the roof because, for the most part, the main workload is memory intensive, not I/O intensive.
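Conceptually, the exemption is just a flag check on each I/O operation; a sketch (the helper name is hypothetical, not the kernel's literal code, though REQ_SWAP and REQ_META are real bio flags from include/linux/blk_types.h):

    #include <linux/blk_types.h>

    /* Swap and metadata I/O bypass the throttler entirely. */
    static bool bio_exempt_from_throttle(struct bio *bio)
    {
            return (bio->bi_opf & (REQ_SWAP | REQ_META)) != 0;
    }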

Another set of infrastructure had to be added to solve this problem. We knew that we were doing a lot of I/O on behalf of a badly behaving control group; we just needed a way to tell the memory-management layer that this group was behaving poorly. To solve this problem I added a congestion counter to the block control-group structure that can be set if a control group is getting a lot of I/O done for free without being throttled. Since we know which control group was responsible for the pages being submitted, we can tag that group as congested, and the memory-management layer will know it needs to start throttling things.

The next problem we were having was with the mmap_sem semaphore. In our workload, there is some monitoring code that does the equivalent of ps; it reads /proc/<pid>/cmdline which, in turn, takes mmap_sem. The other thing that takes mmap_sem is the page-fault handler. If tasks performing page faults are being throttled, thus holding mmap_sem, and our main workload tries to read a throttled task's /proc/<pid>/cmdline file, the main workload will be blocked waiting for the throttled I/O to complete. This meant we had to find a way to do the harsh throttling outside of the path of any possible kernel locking that would cause problems. The blkcg_maybe_throttle_current() infrastructure was added to handle this problem. It notes an artificial delay for the current task; then, as the task returns to user space, where we know no kernel locks are held, it pauses for the given delay so that the throttling still takes effect.

With all of these things in place we had a working system.

Results

Previously, when we would run this memory leak test with no I/O controller in place, the box would be driven into swap and thrash for many minutes until either the out-of-memory killer brought everything down or our automated health checker noticed something was wrong and rebooted the box. It takes a while for our boxes to come back up, be integrated back into the cluster, and become ready to accept traffic, so on average there were about 45-50 minutes of downtime for a box with this reproducer.

With the full configuration in place and oomd monitoring everybody, we'd drop about 10% of our requests per second; then the memory hog would become so throttled that oomd would see it and kill it. This is a 10% drop on an overloaded web server; in normal traffic you'd likely see less or no impact on the overall performance.

Future work

The io.latency controller, along with all of our other control-group work and oomd, currently runs in production on all of our web servers, all of our build servers, and on the messenger servers. It has been stable for a year and has drastically reduced the number of unexpected reboots across those tiers. The next step is to build a proportional I/O controller, to be called io.weight. It's currently in development; production testing will start soon and it will likely be posted upstream in the next few months. Thankfully, the various priority inversions that were found with io.latency have all been fixed, which makes adding new I/O controllers much more straightforward.


Layers and abstractions

By Jake Edge
March 20, 2019

SCALE

In software, we tend to build abstraction layers. But, at times, those layers get in the way, so we squash them. In a talk at SCALE 17x in Pasadena, CA, Kyle Anderson surveyed some of the layers that we have built and squashed along the way. He also looked at some of the layers that are being created today with an eye toward where, how, and why they might get squashed moving forward.

When he thinks about layers, he thinks about abstractions and the separation of concerns principle. Those two are "kind of the same to me", he said. To set the stage, he put up some quotes about abstraction from computer scientists, which can be seen in the YouTube video of the talk. He also mentioned Rich Hickey's "Simple Made Easy" talk, which Anderson said was "kind of the opposite" of his talk, so he encouraged attendees to watch it as a counterpoint.

Squashing layers

The first of the layer squashes he wanted to talk about was DevOps. He said he started with that because it is more or less non-controversial; since 2009, the software industry has agreed that DevOps is a good thing. DevOps came about as a reaction to the traditional thinking that the responsibilities for a deployed software stack should be broken up into a bunch of layers, each of which had its own person or group responsible for it. So deployment, build/release, QA, code and unit tests, task decomposition, UI wireframes, and features were each owned by their own entity (e.g. product manager for features, developers for the code and unit tests, operations for deployment).

That all looks nice on the slide, he said, but whenever he shows something like that, he would like the audience to ask: "what if we got rid of some of the layers?" That's what happened with DevOps: the developers became responsible for most of that stack (other than features and UI wireframes). Developers are responsible for breaking things up into tasks, writing code and tests, doing QA, making sure it all builds, and deploying the result into production.

What are the tradeoffs involved with squashing those layers? He works at Yelp, which uses the DevOps model. The company has found that DevOps leads to increased performance, at least in terms of the number of deployments per day. Deployments are faster, as are rollbacks, which gives a faster "time to recovery". It is somewhat harder to hire for, however, as some developers are not interested in handling production and deployment. There is also more inter-team communication, which adds overhead. Overall, though, Yelp, like much of the rest of the industry, has found DevOps to be a beneficial change.

The next layers he described were those for filesystems. In a traditional Linux filesystem stack, there are neat layers starting with the block device at the bottom, RAID and volume-management above that, dm-crypt for encryption, an ext4 or other traditional filesystem layer, and finally the files themselves available at a mount point. As instructed, the audience said: "what if we got rid of some of the layers?"

If you did so, that would look a lot like ZFS, which is a non-traditional filesystem that has squashed those layers into the filesystem itself. ZFS handles RAID, encryption, pools for volume management, and so on. There clearly is no separation of concerns for ZFS. Is that a good thing? It does lead to better error detection and recovery because ZFS is concerned with the block layer all the way up to the files themselves; it has checksums at each level. ZFS can also do deduplication more easily and take faster snapshots.

Sympathetic abstraction

To explain what is going on with ZFS, Anderson introduced the idea of a "sympathetic abstraction". It can be thought of as an "intentional leakiness" where layers are sympathetic to those above and below. For example, ZFS is sympathetic to whether the block device it runs on is spinning rust or an SSD. ZFS is also sympathetic to errors that can occur on the raw block device, while ext4 and other traditional filesystems are not: they leave that kind of error handling to some other layer below.

These kinds of sympathetic abstractions happen all the time, he said. For example, SSH sets the TCP_NODELAY option (to disable Nagle's algorithm) for interactive sessions. But SSH should not care what transport it is running on; it should be able to run on top of UDP tunneled inside DNS packets. The Apache Cassandra database wants to know if it is running on spinning rust or SSD; shouldn't that be the concern of layers below it?
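SSH's tweak is a one-line reach across the layer boundary; a minimal sketch in C:

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    /* Disable Nagle's algorithm on an interactive connection so that
       single keystrokes go out immediately instead of being coalesced. */
    static int set_interactive(int sockfd)
    {
        int one = 1;
        return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY,
                          &one, sizeof(one));
    }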

He introduced "Kyle's Abstraction Law", which he has come up with, though it is "not canon", he cautioned. It says that over time, layers tend toward becoming more sympathetic (or merged) in the pursuit of performance. We start off with "beautiful layers", each with a single purpose; over time, we realize the picture is not quite so clear cut, so we squash the layers some.

Next up was HTTP and HTTPS. In order to talk about HTTPS in terms of layers, he wanted to start with the OSI seven-layer model for networking. It came out in 1984 and when he learned about it, it made a lot of sense; it was a "great way to think about how we build our networks".

So he showed how HTTPS "fit" into the OSI model. On his laptop, both the physical and link layers (layers 1 and 2) are 802.11, which already shows some layer squashing. The network and transport layers (3 and 4) are IP and TCP, but then things (further) break down. Layer 5 is the session layer, but HTTP is "stateless", so that doesn't seem to apply. HTTPS runs atop SSL/TLS, but where does that encryption go? As layer 6, since there is no other place to put it? That puts HTTP in layer 7.

That all looks a bit weird, he said. But it is because the IETF, which oversees the internet protocols, does not intend them to conform to the OSI model. It turns out that there are several layer models out there, with differing numbers of layers. So he thought the OSI model was the territory, but it was really just one of the maps of it.

What if we got rid of some of the layers? We have, he said, and it is called QUIC. It sits atop UDP, rather than TCP, and handles everything from there, including encryption, multiplexing, error correction, and so on. There is no separation of concerns at that level, but you can't argue with the performance. It also means that you can't use tcpdump to look at the traffic and you can't simply swap out the crypto implementation with something else—you have to have faith in the QUIC crypto. As he noted, if you are watching a video on YouTube or checking Gmail on your phone, you are almost certainly using it now.

Counterexample

Services and microservices were up next. If you were to start writing Twitter today, you might start with a Ruby on Rails application with a clean separation between the model, view, and controller. Add a MySQL database for storage and that might be some early version of Twitter.

In a change from the usual pattern, he asked with a chuckle: "what if we added some more layers?" His slide showed a bunch of microservices, each with their own storage, and an API gateway that routed various requests to them. This is a "service-oriented architecture" that is what Twitter has today.

Each of those microservices may talk with the others, leading to a complicated "death star" call graph, but it is useful for scaling, both in terms of the components and in terms of development. Each of the services can be scaled up or down depending on need and each can be developed and deployed by independent teams. It also has a cost, of course, but he suggested that attendees keep this counterexample in mind through the rest of the talk.

Kubernetes

He then moved on to Kubernetes. Ten years ago, a deployment of an application in the Amazon cloud had just a few layers. There was the bare metal layer that Amazon controlled and one could pick a virtual machine (VM) size to run on top of that. That VM would run the operating system (Ubuntu, say) and the application (perhaps in a Java JAR file), which would both be built into an Amazon Machine Image (AMI) file. That was four layers.

"So what if we added a bunch more layers to this?" For Kubernetes, the stack starts the same if it is deployed in the Amazon cloud; there is the bare metal and infrastructure-as-a-service (IaaS) VM. On top of that is still the operating system, but after that is where things diverge. For Kubernetes, there will be a kubelet layer and, on top of that, a pod layer. Then there are one or more containers running in the pod and, finally, the Java JAR running inside one of the containers.

"That's a lot of layers", he said. Are we just "architecture astronauts" that want to build something with lots of layers, with a clean separation of concerns? It is hard to argue against the momentum that Kubernetes has, but he does wonder: what if we got rid of some the layers?

Amazon and other cloud providers offer many different-sized VMs as a way for their users to scale their applications, but the Kubernetes world scales in a different way. So he thinks the future is to eliminate the VMs and simply run Kubernetes on the bare metal. That would squash the lowest two layers.

You could also squash all of the layers above by moving to a unikernel model, but he does not think that is necessarily the way forward. Unikernels remove many of the benefits that we get from containers, including speed of deployment and ease of iteration. In addition, unikernels are more difficult to work with, since there is no real OS that one can SSH into, for example.

Another path would be to move to a "serverless" model, though he prefers the term "function as a service" (FaaS). In that model, the hardware layer would encompass the bare metal (and VM if needed) and the next layer would provide the Amazon Kubernetes pieces (OS, kubelet, and pod). Above that would be the container layer; for Amazon's Lambda FaaS offering, that would be Firecracker, but for Kubernetes it would be Docker or a similar container runtime. Layer 4 would be the application code.

Application focus

This allows developers to focus on their code, rather than having to keep in mind all of the substrate that makes their code run. In the unikernel world, all of that infrastructure is essentially part of the application, so developers have to be aware of it. But, isn't this new model right back to where we started? Prior to Kubernetes, there were four layers and the developer really mostly needed to be concerned with their application. He believes that there are three main reasons why this architecture based on FaaS and Kubernetes is going to be beneficial for the industry.

The first is that lack of concern for the rest of the stack. Ten years ago, developers would need to build an AMI, which is "slow to bake", to iterate on their code. Troubleshooting required making an SSH connection into the instance, and changing the amount of CPU available required picking a different VM size at the cloud provider. These days, Docker and other containerization tools have matured so that it is easy and fast to iterate on code changes. Scaling is handled at a higher level, as well, so changes do not require an entirely new VM.

The second reason is that routing and dispatch can be handled better with sympathetic abstractions. In the traditional cloud deployment, an Amazon auto scaling group would be responsible for ensuring that N copies of a service (e.g. Apache) are running. There is a different entity, the elastic load balancer, that is responsible for routing requests to these different service instances. In the "new world", the request routing and compute dispatch are combined.

Because the API gateway is sympathetic to the kinds of services being run, it can scale up or down as needed. The old way calculated the need for more service instances from CPU usage, but is not cognizant of the differences between, say, Java and Go applications. Kubernetes allows that recognition because it has squashed the routing and dispatch layers together. That should provide a lot better performance and many features that are hard or impossible using the old mechanisms, Anderson said.

The third reason is that Kubernetes operators provide a way to do compute scheduling in a sympathetic manner. While an AWS auto scaling group can bring up a new instance of MySQL, it has no notion that the new instance needs to join the MySQL cluster and start replication. Users need to add tooling to make sure that the instance does not get added to the load balancer before it is actually ready to serve queries. Due to the separation of concerns, the auto scaling group only cares about keeping N copies running; the rest is someone else's responsibility. It can be made to work, but it is difficult to do.

But by using an operator, Kubernetes can be aware of the needs of new MySQL instances. The operator is concerned with the health of the MySQL cluster; it is sympathetic to the workload. At this point, operators for specific applications are not generally production-ready, but they are getting there. Until the advent of operators, there was no layer that could handle "this application-specific knowhow", he said. It is the first time in the software industry that there has been a place to specify a "really tight, application-specific control loop".
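
To ground the idea, here is a minimal, hypothetical Go sketch of such an application-specific control loop; it is not taken from any real MySQL operator, and the Cluster and Instance types are invented, but it shows the observe-compare-act shape that operators give a home to:

    // A hypothetical, self-contained control loop in the style of an
    // operator. The types and field names are invented for illustration.
    package main

    import (
        "fmt"
        "time"
    )

    type Instance struct {
        Name        string
        Replicating bool
        Ready       bool
    }

    type Cluster struct {
        Desired   int
        Instances []*Instance
    }

    func (c *Cluster) reconcile() {
        // Scale up toward the desired count, as an auto scaling group would.
        for len(c.Instances) < c.Desired {
            c.Instances = append(c.Instances,
                &Instance{Name: fmt.Sprintf("mysql-%d", len(c.Instances))})
        }
        // Unlike an auto scaling group, also walk each instance through the
        // application-specific steps before exposing it to traffic.
        for _, inst := range c.Instances {
            if !inst.Replicating {
                inst.Replicating = true // join the cluster, start replication
                continue                // not yet safe to serve queries
            }
            inst.Ready = true // only now add it to the service endpoints
        }
    }

    func main() {
        c := &Cluster{Desired: 3}
        for i := 0; i < 2; i++ {
            c.reconcile()
            time.Sleep(10 * time.Millisecond)
        }
        for _, inst := range c.Instances {
            fmt.Printf("%s replicating=%v ready=%v\n",
                inst.Name, inst.Replicating, inst.Ready)
        }
    }

A real operator would watch the Kubernetes API for changes rather than polling on a timer, but the core loop is the same.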

Anderson concluded his talk by noting that he believes sympathetic layers are a good thing, in general. QUIC and ZFS provide two good examples of that; sympathetic layers provide better performance, which comes at a price, but he believes the cost is justified in those cases. On the other hand, separating out layers, such as running Docker containers on Kubernetes versus AMIs on VMs, can also provide more capabilities.

Layers are not good or bad, per se, but depend on how they are used, what they cost, and so on. It is up to us as engineers to look at layers (or their lack) and to ask what would happen if they were merged (or added). He also thinks that Kubernetes operators and FaaS are "re-slicing where the layers are", which is to the good. Merging dispatch and routing brings a lot of new capabilities, but means that traditional load balancers can no longer be used.

He reiterated that attendees should watch "Simple Made Easy" before making up their minds. It is a great talk that will help guide thinking about when it makes sense to "'complect' things together or split them apart". He suggested that attendees consider both talks and then come to their own conclusions.

[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Pasadena for SCALE.]

Comments (17 posted)

Page editor: Jonathan Corbet

Brief items

Security

Security quotes of the week

The first-of-its-kind system will be designed by an Oregon-based firm called Galois, a longtime government contractor with experience in designing secure and verifiable systems. The system will use fully open source voting software, instead of the closed, proprietary software currently used in the vast majority of voting machines, which no one outside of voting machine testing labs can examine. More importantly, it will be built on secure open source hardware, made from secure designs and techniques developed over the last year as part of a special program at DARPA. The voting system will also be designed to create fully verifiable and transparent results so that voters don't have to blindly trust that the machines and election officials delivered correct results.

But DARPA and Galois won't be asking people to blindly trust that their voting systems are secure—as voting machine vendors currently do. Instead they'll be publishing source code for the software online and bring prototypes of the systems to the Def Con Voting Village this summer and next, so that hackers and researchers will be able to freely examine the systems themselves and conduct penetration tests to gauge their security. They'll also be working with a number of university teams over the next year to have them examine the systems in formal test environments.

Kim Zetter at Motherboard

Switzerland is about to have a national election with electronic voting, overseen by Swiss Post; e-voting is a terrible idea and the general consensus among security experts who don't work for e-voting vendors is that it shouldn't be attempted, but if you put out an RFP for magic beans, someone will always show up to sell you magic beans, whether or not magic beans exist.
Cory Doctorow

There is a rising tide of security breaches. There is an even faster rising tide of hysteria over the ostensible reason for these breaches, namely the deficient state of our information infrastructure. Yet the world is doing remarkably well overall, and has not suffered any of the oft-threatened giant digital catastrophes. This continuing general progress of society suggests that cyber security is not very important. Adaptations to cyberspace of techniques that worked to protect the traditional physical world have been the main means of mitigating the problems that occurred. This "chewing gum and baling wire" approach is likely to continue to be the basic method of handling problems that arise, and to provide adequate levels of security.
Andrew Odlyzko [PDF] in an abstract for his "Cybersecurity is not very important" paper

Comments (10 posted)

Kernel development

Kernel release status

The current development kernel is 5.1-rc1, released on March 17. Linus said: "A somewhat recent development is how the tools/testing/ updates have been quite noticeable lately. That's not new to the 5.1 merge window, it's been going on for a while, but it's maybe just worth a mention that we have more new selftest changes than we have architecture updates, for example. The documentation subdirectory is also quite noticeable."

Stable updates: 5.0.2, 4.20.16, 4.19.29, 4.14.106, and 4.9.163 were released on March 14; 5.0.3, 4.20.17, 4.19.30, 4.14.107, and 4.9.164 followed on March 19. The 4.20.x line ends with 4.20.17, so users should be looking at moving to 5.0.

Comments (none posted)

Quote of the week

Rule #51 of kernel maintenance: when somebody makes it clear that they know the code better than you did, stop arguing and just apply the damn patch.
Linus Torvalds

Comments (none posted)

Distributions

Debian project leader candidates emerge

When Leaderless Debian was written, it seemed entirely plausible that there would still be no candidates for the project leader office even after the extended nomination deadline passed. It is now clear that there will be no need to extend the deadline further, since three candidates (Joerg Jaspert, Jonathan Carter, and Sam Hartman) have stepped forward. It seems likely that the wider discussion on the role of the Debian project leader will continue but, in the meantime, the office will not sit empty.

Update: nominations from Martin Michlmayr and Simon Richter also came in before the deadline, so this year's election will be a five-way race.

Comments (9 posted)

KNOPPIX 8.5.0 released

Remember the KNOPPIX distribution? KNOPPIX 8.5.0 has been released. It includes a 4.20 kernel, several desktop environments, the ADRIANE audio desktop, UEFI secure boot support, and more.

Comments (14 posted)

Solus 4 "Fortitude" released

Version 4 of the Solus distribution has been released. "We are proud to announce the immediate availability of Solus 4 Fortitude, a new major release of the Solus operating system. This release delivers a brand new Budgie experience, updated sets of default applications and theming, and hardware enablement." LWN reviewed Solus in 2016.

Comments (none posted)

Distribution quotes of the week

With all the good and bad things on our radar, Debian is more relevant than ever. The world needs a fully free system with stable releases and security updates that puts its users first, that's commercially friendly but at the same time doesn't have any hidden corporate agendas. Debian is unique and beautiful and important, and we shouldn't allow that message to get lost in the noise that exists out there.
Jonathan Carter

Debian plays a very special and important role in the FOSS ecosystem. We are respected and our contributions are appreciated. Debian contributors tend to be leaders in the FOSS space. We pride ourselves not only on packaging software from upstream but on maintaining good relationships. This often results in us getting involved upstream and taking on leadership roles there. You can also look at current and past board members of the Open Source Initiative (OSI) and again you'll see many Debian people.

While Debian people play important roles everywhere, they often don't represent the Debian project. We need to learn to develop and speak as a single voice. Overall, I believe we, as a project, need to be more vocal and take a more active role in influencing the FOSS ecosystem. Debian has an incredible reputation but we don't use our clout for important change.

Martin Michlmayr

I think that the project has grown to adulthood, and that we don't need the DPL to tell us what to do. It's important to realize that, other than having a larger floor to advertise your ideas and possibly recruit people to help you, the DPL role doesn't bring any super-powers that help with implementing them. Also, given that many people in Debian are of the "talk is cheap, show me the code" mindset, it's probably better, if you really have super-cool ideas for Debian, that you don't run for DPL and instead work on your ideas and advertise them when there's something to show and get others to join you to maintain PPAs.
Lucas Nussbaum

One area where I think we can improve is to remind teams within Debian of their power especially when dealing with upstreams. Debian matters. It's great if we have opinions on how the Linux community should work. It's great if we constructively pursue those opinions with upstreams. Sometimes I think we get too busy simply packaging to actually influence the broader world.
Sam Hartman

Comments (none posted)

Development

Firefox 66 released

Mozilla has released Firefox 66.0; the release notes contain the details. New in this release: Firefox now prevents websites from automatically playing sound, and the release also brings an improved search experience, smoother scrolling, better performance and user experience for extensions, and more.

Comments (31 posted)

GNOME 3.32 released

The GNOME project has released GNOME 3.32, which is code named "Taipei". "This release brings a refreshed visual style, new icons, the demise of the 'application menu' and a new on-screen keyboard, among other things. Improvements to core GNOME applications include a shell extension for desktop icons, improved automation and reader mode in GNOME Web, an 'Application Permissions' panel, and many more." In addition, there is an experimental option for fractional scaling, improvements to GNOME Software, and more. See the release notes for more information.

Full Story (comments: 15)

LLVM 8.0.0 released

Version 8.0.0 of the LLVM compiler suite is out. "It's the result of the LLVM community's work over the past six months, including: speculative load hardening, concurrent compilation in the ORC JIT API, no longer experimental WebAssembly target, a Clang option to initialize automatic variables, improved pre-compiled header support in clang-cl, the /Zc:dllexportInlines- flag, RISC-V support in lld." For details one can see separate release notes for LLVM, Clang, Extra Clang Tools, lld, and libc++.

Full Story (comments: 9)

Haller: WireGuard in NetworkManager

Thomas Haller writes about the WireGuard integration in NetworkManager 1.16. "NetworkManager provides a de facto standard API for configuring networking on the host. This allows different tools to integrate and interoperate — from cli, tui, GUI, to cockpit. All these different components may now make use of the API also for configuring WireGuard. One advantage for the end user is that a GUI for WireGuard is now within reach." (See this article for more information on WireGuard.)

Comments (2 posted)

Python 3.5.7 and 3.4.10 released

Python versions 3.5.7 and 3.4.10 have been released. Both are in "security fixes only" mode and are source-only releases. Python 3.4.10 is the final release in the 3.4 series; the branch has been retired, so "no further changes to 3.4 will be accepted, and no new releases will be made."

Comments (none posted)

Development quotes of the week

Ho ho ho, let's write libinput. No, of course I'm not serious, because no-one in their right mind would utter "ho ho ho" without a sufficient backdrop of reindeers to keep them sane. So what this post is instead is me writing a nonworking fake libinput in Python, for the sole purpose of explaining roughly how libinput's architecture looks like. It'll be to the libinput what a Duplo car is to a Maserati. Four wheels and something to entertain the kids with but the queue outside the nightclub won't be impressed.
Peter Hutterer (Thanks to Paul Wise)

We do not sell computers, Kodi boxes, Kodi sticks, carrot sticks or french fries. Actually, we don't recommend specific hardware, and we're certainly not interested in selling hardware. That's the manufacturer's job.

The only thing we're interested in is writing software, keeping Kodi in tip-top shape, and advising you about how to better use Kodi. We are not associated with any hardware companies, particular brand or site selling the so-called "Kodi boxes" or "Kodi sticks". There is no such thing. So, for the last time, we do not sell hardware.

Cris Silva

Comments (none posted)

Miscellaneous

SUSE completes its management transition

Here's a SUSE press release hyping its transition to being "the largest independent open-source company". "As it has for more than 25 years, SUSE remains committed to an open source development and business model and to actively participating in communities and projects to bring open source innovation to the enterprise as high-quality, reliable and usable solutions. This truly open, open source model refers to the flexibility and freedom of choice provided to customers and partners to create best-of-breed solutions that combine SUSE technologies with other products and technologies in their IT landscape through open standards and at different levels in their architecture, without forcing a locked-in stack."

Comments (10 posted)

Page editor: Jake Edge

Announcements

Newsletters

Distributions and system administration

Development

Meeting minutes

Calls for Presentations

Linux Plumbers Conference 2019 Call for Refereed-Track Proposals

The Call for Refereed-Track talk proposals for LPC is open until May 22. LPC takes place September 9-11 in Lisbon, Portugal in conjunction with the invitation-only Linux Kernel Maintainer Summit. "Refereed track presentations are 45 minutes in length (which includes time for questions and discussion) and should focus on a specific aspect of the "plumbing" in the Linux system. Examples of Linux plumbing include core kernel subsystems, core libraries, windowing systems, management tools, device support, container run-times, media creation/playback, and so on. The best presentations are not about finished work, but rather problems, proposals, or proof-of-concept solutions that require face-to-face discussions and debate."

Full Story (comments: none)

CFP Deadlines: March 21, 2019 to May 20, 2019

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline   Event Dates                Event                                                  Location
March 24   July 17–July 19            Automotive Linux Summit                                Tokyo, Japan
March 24   July 17–July 19            Open Source Summit                                     Tokyo, Japan
March 25   May 3–May 4                PyDays Vienna 2019                                     Vienna, Austria
April 1    June 3–June 4              PyCon Israel 2019                                      Ramat Gan, Israel
April 2    August 21–August 23        Open Source Summit North America                       San Diego, CA, USA
April 2    August 21–August 23        Embedded Linux Conference NA                           San Diego, CA, USA
April 12   July 9–July 11             Xen Project Developer and Design Summit                Chicago, IL, USA
April 15   August 26–August 30        FOSS4G 2019                                            Bucharest, Romania
April 15   May 18–May 19              Open Source Conference Albania                         Trana, Albania
April 21   May 25–May 26              Mini-DebConf Marseille                                 Marseille, France
April 24   October 27–October 30      27th ACM Symposium on Operating Systems Principles     Huntsville, Ontario, Canada
April 25   September 21–September 23  State of the Map                                       Heidelberg, Germany
April 28   September 2–September 6    EuroSciPy 2019                                         Bilbao, Spain
May 1      August 2–August 4          Linux Developer Conference Brazil                      São Paulo, Brazil
May 1      August 2–August 3          DEVCONF.in                                             Bengaluru, India
May 1      July 6                     Tübix 2019                                             Tübingen, Germany
May 6      August 17–August 18        Conference for Open Source Coders, Users & Promoters   Taipei, Taiwan
May 9      September 6–September 9    PyColorado                                             Denver, CO, USA
May 12     July 21–July 28            DebConf 2019                                           Curitiba, Brazil
May 12     July 8–July 14             EuroPython 2019                                        Basel, Switzerland
May 13     August 23–August 28        GNOME User and Developer Conference                    Thessaloniki, Greece

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

Events: March 21, 2019 to May 20, 2019

The following event listing is taken from the LWN.net Calendar.

Date(s)              Event                                                            Location
March 19–March 21    PGConf APAC                                                      Singapore, Singapore
March 20–March 22    Netdev 0x13                                                      Prague, Czech Republic
March 21             gRPC Conf                                                        Sunnyvale, CA, USA
March 23–March 24    LibrePlanet                                                      Cambridge, MA, USA
March 23             Kubernetes Day                                                   Bengaluru, India
March 23–March 26    Linux Audio Conference                                           San Francisco, CA, USA
March 29–March 31    curl up 2019                                                     Prague, Czech Republic
April 1–April 4      ‹Programming› 2019                                               Genova, Italy
April 1–April 5      SUSECON 2019                                                     Nashville, TN, USA
April 2–April 4      Cloud Foundry Summit                                             Philadelphia, PA, USA
April 3–April 5      Open Networking Summit                                           San Jose, CA, USA
April 5–April 6      openSUSE Summit                                                  Nashville, TN, USA
April 5–April 7      Devuan Conference                                                Amsterdam, The Netherlands
April 6              Pi and More 11½                                                  Krefeld, Germany
April 7–April 10     FOSS North                                                       Gothenburg, Sweden
April 10–April 12    DjangoCon Europe                                                 Copenhagen, Denmark
April 13–April 17    ACM SIGPLAN/SIGOPS Conference on Virtual Execution Environments  Providence, RI, USA
April 13             OpenCamp Bratislava                                              Bratislava, Slovakia
April 18             Open Source 101                                                  Columbia, SC, USA
April 26–April 27    Grazer Linuxtage                                                 Graz, Austria
April 26–April 27    KiCad Conference 2019                                            Chicago, IL, USA
April 28–April 30    Check_MK Conference #5                                           Munich, Germany
April 29–May 1       Open Infrastructure Summit                                       Denver, CO, USA
April 30–May 2       Linux Storage, Filesystem & Memory Management Summit             San Juan, Puerto Rico
May 1–May 9          PyCon 2019                                                       Cleveland, OH, USA
May 2–May 4          Linuxwochen Österreich 2019 - Wien                               Wien, Austria
May 3–May 4          PyDays Vienna 2019                                               Vienna, Austria
May 4–May 5          Latch-Up 2019                                                    Portland, OR, USA
May 14–May 15        Open Source Data Center Conference                               Berlin, Germany
May 16               Open Source Camp | #3 Ansible                                    Berlin, Germany
May 17–May 18        BSDCan - The BSD Conference                                      Ottawa, Canada
May 18–May 19        Linuxwochen Linz 2019                                            Linz, Austria
May 18–May 19        Open Source Conference Albania                                   Trana, Albania
May 19–May 20        Cephalocon Barcelona 2019                                        Barcelona, Spain

If your event does not appear here, please tell us about it.

Security updates

Alert summary March 14, 2019 to March 20, 2019

Dist. ID Release Package Date
Arch Linux ASA-201903-8 chromium 2019-03-13
Arch Linux ASA-201903-9 libelf 2019-03-20
Arch Linux ASA-201903-10 wordpress 2019-03-20
CentOS CESA-2019:0597 C7 cloud-init 2019-03-19
CentOS CESA-2019:0482 C7 cockpit 2019-03-19
CentOS CESA-2019:0512 C7 kernel 2019-03-19
CentOS CESA-2019:0483 C7 openssl 2019-03-19
CentOS CESA-2019:0485 C7 tomcat 2019-03-19
Debian DLA-1716-1 LTS ikiwiki 2019-03-18
Debian DLA-1719-1 LTS libjpeg-turbo 2019-03-18
Debian DLA-1720-1 LTS liblivemedia 2019-03-18
Debian DSA-4408-1 stable liblivemedia 2019-03-17
Debian DLA-1713-1 LTS libsdl1.2 2019-03-13
Debian DLA-1714-1 LTS libsdl2 2019-03-13
Debian DLA-1715-1 LTS linux-4.9 2019-03-15
Debian DSA-4409-1 stable neutron 2019-03-18
Debian DLA-1721-1 LTS otrs2 2019-03-19
Debian DLA-1717-1 LTS rdflib 2019-03-18
Debian DLA-1718-1 LTS sqlalchemy 2019-03-18
Fedora FEDORA-2019-bf531902c8 F29 SDL 2019-03-19
Fedora FEDORA-2019-74a285d0ad F29 advancecomp 2019-03-16
Fedora FEDORA-2019-0a381a82de F28 firefox 2019-03-13
Fedora FEDORA-2019-3ecff65275 F29 kubernetes 2019-03-15
Fedora FEDORA-2019-216ba46b12 F28 mingw-poppler 2019-03-15
Fedora FEDORA-2019-7085420900 F29 mingw-poppler 2019-03-15
Fedora FEDORA-2019-efa799fd16 F28 php 2019-03-15
Fedora FEDORA-2019-f187a4df7a F29 php 2019-03-15
Gentoo 201903-13 bind 2019-03-13
Gentoo 201903-09 glibc 2019-03-13
Gentoo 201903-15 ntp 2019-03-18
Gentoo 201903-16 openssh 2019-03-20
Gentoo 201903-10 openssl 2019-03-13
Gentoo 201903-14 oracle-jdk-bin 2019-03-13
Gentoo 201903-12 webkit-gtk 2019-03-13
Gentoo 201903-11 xrootd 2019-03-13
Mageia MGASA-2019-0109 6 apache 2019-03-14
Mageia MGASA-2019-0111 6 gnome-keyring 2019-03-14
Mageia MGASA-2019-0108 6 gnupg2 2019-03-14
Mageia MGASA-2019-0112 6 hiawatha 2019-03-14
Mageia MGASA-2019-0113 6 ikiwiki 2019-03-15
Mageia MGASA-2019-0107 6 kernel 2019-03-13
Mageia MGASA-2019-0110 6 rsyslog 2019-03-14
openSUSE openSUSE-SU-2019:0343-1 42.3 chromium 2019-03-17
openSUSE openSUSE-SU-2019:0345-1 15.0 file 2019-03-18
openSUSE openSUSE-SU-2019:0325-1 15.0 freerdp 2019-03-14
openSUSE openSUSE-SU-2019:0346-1 15.0 java-1_8_0-openjdk 2019-03-18
openSUSE openSUSE-SU-2019:0328-1 libcomps 2019-03-15
openSUSE openSUSE-SU-2019:0327-1 15.0 mariadb 2019-03-14
openSUSE openSUSE-SU-2019:0329-1 obs-service-tar_scm 2019-03-15
openSUSE openSUSE-SU-2019:0326-1 15.0 obs-service-tar_scm 2019-03-14
openSUSE openSUSE-SU-2019:0348-1 42.3 ovmf 2019-03-19
openSUSE openSUSE-SU-2019:0344-1 15.0 sssd 2019-03-18
Oracle ELSA-2019-0483 OL7 openssl 2019-03-13
Red Hat RHSA-2019:0590-01 OSP14.0 ansible 2019-03-18
Red Hat RHSA-2019:0597-01 EL7 cloud-init 2019-03-18
Red Hat RHSA-2019:0512-01 EL7 kernel 2019-03-13
Red Hat RHSA-2019:0514-01 EL7 kernel-rt 2019-03-13
Red Hat RHSA-2019:0566-01 OSP13.0 openstack-ceilometer 2019-03-14
Red Hat RHSA-2019:0580-01 OSP14.0 openstack-ceilometer 2019-03-18
Red Hat RHSA-2019:0567-01 OSP13.0 openstack-octavia 2019-03-14
Red Hat RHSA-2019:0593-01 OSP14.0 openstack-octavia 2019-03-18
Red Hat RHSA-2019:0485-01 EL7 tomcat 2019-03-13
Scientific Linux SLSA-2019:0597-1 SL7 cloud-init 2019-03-19
Scientific Linux SLSA-2019:0482-1 SL7 cockpit 2019-03-13
Scientific Linux SLSA-2019:0512-1 SL7 kernel 2019-03-15
Scientific Linux SLSA-2019:0483-1 SL7 openssl 2019-03-13
Scientific Linux SLSA-2019:0485-1 SL7 tomcat 2019-03-13
Slackware SSA:2019-077-01 libssh2 2019-03-18
SUSE SUSE-SU-2019:0628-1 OS8 galera-3, mariadb, mariadb-connector-c 2019-03-18
SUSE SUSE-SU-2019:0651-1 SLE15 go1.11 2019-03-19
SUSE SUSE-SU-2019:13978-1 SLE11 java-1_7_1-ibm 2019-03-14
SUSE SUSE-SU-2019:0617-1 OS7 SLE12 java-1_8_0-ibm 2019-03-15
SUSE SUSE-SU-2019:13979-1 SLE11 kernel 2019-03-15
SUSE SUSE-SU-2019:0639-1 SLE15 ldb 2019-03-19
SUSE SUSE-SU-2019:0642-1 SLE12 lftp 2019-03-19
SUSE SUSE-SU-2019:0643-1 SLE15 lftp 2019-03-19
SUSE SUSE-SU-2019:0655-1 OS7 SLE12 libssh2_org 2019-03-20
SUSE SUSE-SU-2019:13982-1 SLE11 libssh2_org 2019-03-19
SUSE SUSE-SU-2019:0609-1 SLE12 mariadb 2019-03-14
SUSE SUSE-SU-2019:0636-1 SLE12 nodejs10 2019-03-19
SUSE SUSE-SU-2019:0627-1 SLE15 nodejs10 2019-03-18
SUSE SUSE-SU-2019:0635-1 SLE15 nodejs8 2019-03-19
SUSE SUSE-SU-2019:13981-1 SLE11 openwsman 2019-03-18
SUSE SUSE-SU-2019:0656-1 SLE12 openwsman 2019-03-20
SUSE SUSE-SU-2019:0654-1 SLE15 openwsman 2019-03-20
SUSE SUSE-SU-2019:0619-1 SLE15 wireshark 2019-03-15
SUSE SUSE-SU-2019:0629-1 SLE15 yast2-rmt 2019-03-18
Ubuntu USN-3911-1 16.04 18.04 18.10 file 2019-03-18
Ubuntu USN-3909-1 16.04 18.04 18.10 libvirt 2019-03-14
Ubuntu USN-3910-1 16.04 linux, linux-aws, linux-kvm, linux-raspi2, linux-snapdragon 2019-03-15
Ubuntu USN-3908-2 12.04 linux-lts-trusty 2019-03-13
Ubuntu USN-3910-2 14.04 linux-lts-xenial, linux-aws 2019-03-15
Ubuntu USN-3906-2 12.04 tiff 2019-03-18
Full Story (comments: none)

Kernel patches of interest

Kernel releases

Linus Torvalds Linux 5.1-rc1 Mar 17
Greg KH Linux 5.0.3 Mar 19
Sebastian Andrzej Siewior v5.0.3-rt1 Mar 20
Greg KH Linux 5.0.2 Mar 14
Greg KH Linux 4.20.17 Mar 19
Greg KH Linux 4.20.16 Mar 14
Greg KH Linux 4.19.30 Mar 19
Greg KH Linux 4.19.29 Mar 14
Greg KH Linux 4.14.107 Mar 19
Greg KH Linux 4.14.106 Mar 14
Greg KH Linux 4.9.164 Mar 19
Greg KH Linux 4.9.163 Mar 14

Architecture-specific

Vincenzo Frascino arm64 relaxed ABI Mar 18
Chang S. Bae x86: Enable FSGSBASE instructions Mar 15
Jarkko Sakkinen Intel SGX1 support Mar 20

Build system

Tri Vo gcov: add Clang support Mar 17
Peter Zijlstra objtool: UACCESS validation v4 Mar 18

Core kernel

Roman Gushchin freezer for cgroup v2 Mar 16
Suren Baghdasaryan psi: pressure stall monitors v6 Mar 19

Device drivers

yongqiang.niu@mediatek.com add drm support for MT8183 Mar 14
Sergio Paracuellos MT7621 PCIe PHY Mar 14
Maxime Ripard media: Allwinner A10 CSI support Mar 14
Laurent Pinchart R-Car DU display writeback support Mar 18
Bartosz Golaszewski mfd: add support for max77650 PMIC Mar 18
Srinath Mannam Stingray USB PHY driver support Mar 19
Dragan Cvetic misc: xilinx sd-fec driver Mar 19
sonal.santan@xilinx.com Xilinx PCIe accelerator driver Mar 19

Device-driver infrastructure

Filesystems and block layer

Memory management

Networking

Willem de Bruijn bpf tc tunneling Mar 20

Security-related

Richard Guy Briggs audit: implement container identifier Mar 15

Virtualization and containers

Eric Auger SMMUv3 Nested Stage Setup Mar 15
Dave Martin KVM: arm64: SVE guest support Mar 19

Miscellaneous

Laura Garcia nftlb 0.4 release Mar 18
Stephen Hemminger iproute2 5.0.0 Mar 19

Page editor: Rebecca Sobol


Copyright © 2019, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds