
LWN.net Weekly Edition for March 9, 2017

Why companies don't do GPL enforcement

March 8, 2017

This article was contributed by Luis Villa

As LWN's reports on Linux kernel code contribution have shown for years, corporations pay for lots and lots of GPL code. So why isn't there much visible corporate-sponsored GPL-enforcement work? Looking at the questions businesspeople ask when deciding whether to enforce can help us understand why public GPL enforcement rarely makes sense for businesses, and why quiet threats of enforcement are often effective — and probably a lot more common than you may realize.

What question doesn't get asked?

In his recent talk at FOSDEM, Richard Fontana spoke of using enforcement to encourage "collaboration" by "making a level playing field". While I strongly agree that's a valid reason to do enforcement, I've never seen anyone in a corporate context ask if they should enforce for that reason. That benefit is too abstract, and the costs very specific and real. Businesses typically need much more concrete reasons to make legal threats, especially public ones.

Do they have code I want?

The first question typically asked in community enforcement is "does the license violator have code I want?".

For most individuals who are trying to pursue a GPL violation, the answer is yes — usually because the potential enforcer has other, related code or hardware that would be improved by freeing the defendant's code. This is the intuition that drives the most common type of GPL enforcement — against Linux kernel modules, which can enable many people to use hardware in new and interesting ways.

But for large companies, violators usually don't have interesting code. For a healthy business, a code dump from a hostile party may be actively uninteresting: it will require maintenance, may be poorly written, and probably won't be aligned with its business needs. Imagine if Red Hat had done some of the early WiFi-related enforcement work, for example — what would it have done with that code? It wouldn't have helped Red Hat win its primary target market of enterprise servers, and it would have cost engineering time to maintain. So even though Red Hat was in a strong position to enforce the GPL there (and presumably in many Internet-of-things infringements since then), enforcement would not have made much sense.

Do they have cash, or customers, I want?

Of course, not everyone wants code. Sometimes they want money, or to shut down a competitor. Again, most GPL-contributing companies don't go this route, for a couple of reasons. First, many GPL violators tend to be small companies that can't afford proper compliance. Suing small, poor companies isn't a great plan to make lots of money. In the hardware space, they are also often in China, making it yet more difficult to sue and collect.

Second, many large GPL-contributing companies these days tend not to be threatened by competitors using their code. As just a few examples from the top contributor list, Red Hat knows that its primary value is in support and partnerships; Google in advertising; Intel and AMD in hardware. These companies don't view code as their actual primary business, so suing to shut down a competitor who relies on the same code rarely makes sense.

One key exception to both of these patterns is Oracle. It is willing to enforce at scale against small companies, and it views licensing as its primary business. So it is no surprise that Oracle enforces its licenses (GPL and otherwise) against Java (and MySQL) users.

Have I tried other routes and failed?

As suggested by the previous two questions, businesspeople will often decide that they simply don't care enough to enforce the GPL. But when they do care, and want code, cash, or to scare competition, they know public threats and lawsuits are expensive and uncertain. So before making public enforcement threats they'll almost always try other private routes to get what they want. Some of those options include the following:

If they want code, they can often offer business partnerships or simply try to buy the code they need. GPL enforcement can come into play here, since private threats of enforcement can be used to improve the terms of the deal. Either buying or partnering moves a lot faster — and is more likely to be a "win-win" situation — than a lawsuit.

If they want revenue, they often don't need a lawsuit: mere implied threats of enforcement can often turn a violator into a paying customer when there is an alternate licensing model available. These threats (subtle or not) are the essence of the AGPL "dual license" business model, and other software vendors also use variations on this in the GPL space.

If a business that distributes GPL code wants to defend itself against competitors, there are often options that are quicker and more reliable than litigation, even under the GPL. For example, a software author can sometimes make the code more difficult to use while still complying with the GPL. Another, unfortunate, option is to make the parts of the code that are most marketable, or most susceptible to competition, proprietary.

Because these options are often effective, and don't have the costs of public enforcement, they happen much more regularly than many readers of LWN might suspect, and certainly much more regularly than other forms of enforcement.

What will it cost me? (Hint: not just money.)

If other options aren't right, and a businessperson still wants to enforce the license, they have to start thinking about the costs of enforcement.

The immediate costs are obvious: most of the big GPL copyright holders tend to play to win when they sue other companies, which means eight-digit legal fees for a single trial are common. And even cases against defendants with fewer resources can take years to resolve; years during which your executives and engineers may well be tied up in depositions and other trial-related distractions. (The later BusyBox cases took around three years to resolve, and the VMware case recently entered its sixth year.)

The costs can be non-financial as well: suing licensees will often make customers nervous, and can lead them to start looking for other vendors. Fontana noted in his talk that these fears can crop up even for vendors with a long-established reputation for being reasonable about licensing, like Red Hat. (With this in mind, it shouldn't be surprising that companies that already have a bad reputation for customer relationships are often the ones that do enforcement.)

Will I actually win?

Let's say a GPL-owning company has answered the previous questions in the right way: it's comfortable that its target has money or market share that it wants, other options aren't available, and suing won't bother its customers.

That still leaves a critical question: if it sues, will it win? Remember that, because the target likely has market share and money, it is going to fight tenaciously. Its options include challenging ownership of the copyright in question, attacking the scope of the GPL (where that is an issue), and even challenging the enforceability of the license in general in cases where the authorship is complex. While we generally assume these points are settled, they are rarely tested in court, so any high-stakes GPL litigation will have to deal with them. This is a particularly risky proposition for any business that writes a lot of GPL code: if it enters into litigation and loses, it may lose not just that case, but an entire portion of its business model.

If the odds of winning are not great, this takes us back to square one: is all the money, time, hassle, and customer risk worth it? Are there other options we can try? With all those factors in play, it is no surprise that public corporate enforcement happens rarely.

Comments (11 posted)

An update to GitHub's terms of service

March 8, 2017

This article was contributed by Antoine Beaupré

On February 28th, GitHub published a brand new version of its Terms of Service (ToS). While the first draft announced earlier in February didn't generate much reaction, the new ToS raised concerns that they may break at least the spirit, if not the letter, of certain free-software licenses. Digging in further reveals that the situation is probably not as dire as some had feared.

The first person to raise the alarm was probably Thorsten Glaser, a Debian developer, who stated that the "new GitHub Terms of Service require removing many Open Source works from it". His concerns are mainly about section D of the document, in particular section D.4 which states:

You grant us and our legal successors the right to store and display your Content and make incidental copies as necessary to render the Website and provide the Service.

Section D.5 then goes on to say:

[...] You grant each User of GitHub a nonexclusive, worldwide license to access your Content through the GitHub Service, and to use, display and perform your Content, and to reproduce your Content solely on GitHub as permitted through GitHub's functionality

ToS versus GPL

The concern here is that the ToS bypass the normal provisions of licenses like the GPL. Copyleft licenses are based on copyright law, which forbids users from doing anything with the content unless they comply with the license; the license, among other things, imposes "share alike" requirements. By granting GitHub and its users rights to reproduce content without explicitly respecting the original license, the ToS may allow users to bypass the copyleft nature of the license. Indeed, as Joey Hess, author of git-annex, explained:

The new TOS is potentially very bad for copylefted Free Software. It potentially neuters it entirely, so GPL licensed software hosted on Github has an implicit BSD-like license

Hess has since removed all his content (mostly mirrors) from GitHub.

Others disagree. In a well-reasoned blog post, Debian developer Jonathan McDowell explained the rationale behind the changes:

My reading of the GitHub changes is that they are driven by a desire to ensure that GitHub are legally covered for the things they need to do with your code in order to run their service.

This seems like a fair point to make: GitHub needs to protect its own rights to operate the service. McDowell then goes on to do a detailed rebuttal of the arguments made by Glaser, arguing specifically that section D.5 "does not grant [...] additional rights to reproduce outside of GitHub".

However, specific problems arise when we consider that GitHub is a private corporation that users have no control over. The "Services" defined in the ToS explicitly "refers to the applications, software, products, and services provided by GitHub". The term "Services" is therefore not limited to the current set of services. This loophole may actually give GitHub the right to bypass certain provisions of licenses used on GitHub. As Hess detailed in a later blog post:

If Github tomorrow starts providing say, an App Store service, that necessarily involves distribution of software to others, and they put my software in it, would that be allowed by this or not?

If that hypothetical Github App Store doesn't sell apps, but licenses access to them for money, would that be allowed under this license that they want to my software?

However, when asked on IRC, Bradley M. Kuhn of the Software Freedom Conservancy explained that "ultimately, failure to comply with a copyleft license is a copyright infringement" and that the ToS do outline a process to deal with such infringement. Some lawyers have also publicly expressed their disagreement with Glaser's assessment, with Richard Fontana from Red Hat saying that the analysis is "basically wrong". It all comes down to the intent of the ToS, as Kuhn (who is not a lawyer) explained:

any license can be abused or misused for an intent other than its original intent. It's why it matters to get every little detail right, and I hope Github will do that.

He went even further and said that "we should assume the ambiguity in their ToS as it stands is favorable to Free Software".

The ToS have been in effect since February 28th; users "can accept them by clicking the broadcast announcement on your dashboard or by continuing to use GitHub". The immediacy of the change is one of the reasons why some people are rushing to remove content from GitHub: there are concerns that continuing to use the service may be interpreted as consent to bypass those licenses. Hess even hosted a separate copy of the ToS [PDF] so that people could read the document without implicitly consenting to it. It is, however, unclear how a user could remove their content from the GitHub servers without actually agreeing to the new ToS.

CLAs

When I read the first draft, I initially thought there would be concerns about the mandatory Contributor License Agreement (CLA) in section D.5 of the draft:

[...] unless there is a Contributor License Agreement to the contrary, whenever you make a contribution to a repository containing notice of a license, you license your contribution under the same terms, and agree that you have the right to license your contribution under those terms.

I was concerned this would establish the controversial practice of forcing CLAs on every GitHub user. I managed to find a post from a lawyer, Kyle E. Mitchell, who commented on the draft and, specifically, on the CLA. He outlined wording and definition problems in that section of the draft. In particular, he noted that "contributor license agreement is not a legal term of art, but an industry term" and "is a bit fuzzy". This was clarified in the final draft, in section D.6, by removing the CLA term and by explicitly mentioning the widely accepted norm for licenses: "inbound=outbound". So it seems that section D.6 is not really a problem: contributors do not necessarily need to assign copyright ownership (as some CLAs require) when they make a contribution, unless a repository-specific CLA says otherwise.

An interesting concern he raised, however, was with how GitHub conducted the drafting process. A blog post announced the change on February 7th with a link to a form to provide feedback until the 21st, with a publishing deadline of February 28th. This gave little time for lawyers and developers to review the document and comment on it. Users then had to basically accept whatever came out of the process as-is.

Unlike every software project hosted on GitHub, the ToS document is not part of a Git repository people can propose changes to or even collaboratively discuss. While Mitchell acknowledges that "GitHub are within their rights to update their terms, within very broad limits, more or less however they like, whenever they like", he sets higher standards for GitHub than for other corporations, considering the community it serves and the spirit it represents. He described the process as:

[...] consistent with the value of CYA, which is real, but not with the output-improving virtues of open process, which is also real, and a great deal more pleasant.

Mitchell also explained that, because of its position, GitHub can have a major impact on the free-software world.

And as the current forum of preference for a great many developers, the knock-on effects of their decisions throw big weight. While GitHub have the wheel—and they’ve certainly earned it for now—they can do real damage.

In particular, there have been some concerns that the ToS change may be an attempt to further the already diminishing adoption of the GPL for free-software projects; on GitHub, the GPL has been surpassed by the MIT license. But Kuhn believes that attitudes at GitHub have begun changing:

GitHub historically had an anti-copyleft culture, which was created in large part by their former and now ousted CEO, Preston-Warner. However, recently, I've seen people at GitHub truly reach out to me and others in the copyleft community to learn more and open their minds. I thus have a hard time believing that there was some anti-copyleft conspiracy in this ToS change.

GitHub response

However, it seems that GitHub has actually been proactive in reaching out to the free software community. Kuhn noted that GitHub contacted the Conservancy to get its advice on the ToS changes. While he still thinks GitHub should fix the ambiguities quickly, he also noted that those issues "impact pretty much any non-trivial Open Source and Free Software license", not just copylefted material. When reached for comments, a GitHub spokesperson said:

While we are confident that these Terms serve the best needs of the community, we take our users' feedback very seriously and we are looking closely at ways to address their concerns.

Regardless, free-software enthusiasts have other concerns than the new ToS if they wish to use GitHub. First and foremost, most of the software running GitHub is proprietary, including the JavaScript served to your web browser. GitHub also created a centralized service out of a decentralized tool (Git). It has become the largest code hosting service in the world after only a few years and may well have become a single point of failure for free software collaboration in a way we have never seen before. Outages and policy changes at GitHub can have a major impact on not only the free-software world, but also the larger computing world that relies on its services for daily operation.

There are now free-software alternatives to GitHub. GitLab.com, for example, does not seem to have similar licensing issues in its ToS and GitLab itself is free software, although based on the controversial open core business model. The GitLab hosting service still needs to get better than its grade of "C" in the GNU Ethical Repository Criteria Evaluations (and it is being worked on); other services like GitHub and SourceForge score an "F".

In the end, all this controversy might have been avoided if GitHub was generally more open about the ToS development process and gave more time for feedback and reviews by the community. Terms of service are notorious for being confusing and something of a legal gray area, especially for end users who generally click through without reading them. We should probably applaud the efforts made by GitHub to make its own ToS document more readable and hope that, with time, it will address the community's concerns.

Comments (22 posted)

Page editor: Jonathan Corbet

Security

A new process for CVE assignment

By Jake Edge
March 8, 2017

In early February, the Common Vulnerabilities and Exposures (CVE) assignment team posted to the oss-security mailing list about some changes to the process of getting an ID for an open-source project. CVE IDs (or just "CVEs") are the standard identifiers used for security vulnerabilities; the system has been run by the MITRE Corporation since its inception in 1999. Open-source projects have been getting their CVE IDs by way of the oss-security mailing list for a while now, but that is changing. A new web-based system has been created, but there are still a few wrinkles to iron out in the process.

The basic idea is for CVE requesters to use the new web form, which provides a way to submit a vulnerability description that can more quickly be added to the CVE database. That will help avoid the common, but completely useless ** RESERVED ** entry in the database for an already public vulnerability. As the posting put it:

To more efficiently assign and publish CVE IDs and to enable automation and data sharing within CVE operations, MITRE is changing the way it accepts CVE ID requests on the oss-security mailing list. Starting today, please direct CVE ID requests to this web form <https://cveform.mitre.org/>. Through this form, you can request a new CVE ID, update a CVE ID that was already assigned, and submit questions or feedback to the CVE Team.

[...] When you enter a vulnerability description on the web form, the CVE and description will typically be available on the NVD and CVE web sites at the same time or shortly after we email the CVE ID to you.

But Simon McVittie was concerned that the web form is not particularly well-suited for open-source projects. It is geared toward products from known vendors, rather than projects that are distributed by multiple "vendors":

For open source it seems impractical: for instance, I'm a maintainer of both D-Bus and ikiwiki, neither of which has any particular allegiance to any larger legal entity than the individual maintainers.

Once released, D-Bus is later packaged by Debian, Red Hat, etc., and ikiwiki is packaged in at least Debian and Fedora; but they are not the people issuing releases and do not have any special authority over the upstream project, so there's no particular reason why the upstream maintainers should say that any particular OS distribution is "our vendor".

Or do you expect the upstream maintainers of open source software that is packaged by at least one major distribution to choose one of those distributions arbitrarily, and claim they are the vendor for the purposes of your web form? If so, please make that more clear.

McVittie's complaint was echoed by others, but another concern was raised: oss-security is seen as the place to go for "a reasonably comprehensive and timely list of vulnerabilities for specific products", as Reid Priedhorsky put it. Kurt H. Maier concurred, saying that he would prefer "an alternate solution in place before the CVE system disappears behind an inscrutable web form". The worry is that the mailing list will no longer carry all of the useful information that it currently has. Beyond that, Debian security team member Moritz Muehlenhoff is worried that adding friction to the process of getting a CVE will result in fewer CVEs being requested:

Having CVEs assigned is of lesser importance, this was never primarily why we posted security vulnerabilities here. Obtaining CVE IDs caused little overhead on our side, but if that changes (and the announced changes sound like that), then there will simply be less CVE coverage I'm afraid.

Problems with the CVE system have been apparent for some time. Almost exactly a year ago, LWN looked at the problem. At the time, the Distributed Weakness Filing (DWF) project had just been announced by Kurt Seifried. DWF is meant to assist projects in getting CVE IDs, without needing to be affiliated with some larger product or vendor. In response to some complaints that MITRE's interests do not align well with the open-source world, Adam Caudill pointed out that DWF should neatly help solve many of the problems in the existing system:

Once it's completely up and running, DWF should address these issues. Researchers and organizations can easily become CNAs [CVE Numbering Authorities] under DWF, with assigned CVE blocks. For OSS, the process of getting a CVE (including pre-publication) should be much simpler than it has been, especially in recent years. It's not quite there yet, but Kurt [Seifried] and team have put a lot of effort into laying the groundwork for a much better solution than the ad-hoc "send an email and hope" process that we've become accustomed to.

The old system was far from perfect, as is the interim MITRE web form - hopefully with the help of the community, DWF will be able to provide a better process for all involved. For OSS, DWF is the solution we need to be focused on, and helping it to evolve to suit the needs of everyone.

The CVE assignment team responded to the concerns that were brought up. It is clear that the team expects DWF to step up reasonably soon to manage CVEs and CNAs for open-source projects; the message pointed to some DWF documentation that describes how that will work. MITRE is also amenable to automatically posting the CVE assignments that it makes as a result of the web-form submissions to the oss-security list, which should help those using the list to track vulnerabilities. The team had envisioned that reporters would resend the assignment information to oss-security (and provided an example of that happening), but believes it can automate the process.

There may still be places where improvements are needed, though. There have been occurrences where CVE requests posted to the list were not given a CVE, but were still made public on the list. The web-based process could end up obscuring those reports, as Guido Berhoerster pointed out:

One significant advantage of monitoring this list was that requests were immediately visible and there are sometimes significant delays between a CVE request and the response from MITRE. Or in some cases requests were rejected with a rationale or did not receive a response at all -- with the web form such cases will now just disappear in a black hole.

There was fairly widespread support for automatically posting the CVE assignments (or even the raw form data from public vulnerabilities) to oss-security or another list. Alexander Peslyak (better known as "Solar Designer") suggested that MITRE should implement that, and various others in the thread agreed. Evidently, there are still some internal debates going on at MITRE about how to make that happen. The assignment team posted an update on that in mid-February. There is a concern that information about non-public vulnerabilities would need to be weeded out, so for now the status quo remains:

In general, there's a common case (the requester only provides a basic technical outline of the vulnerability and the commit URL) where implementation is easy. There are several corner cases where implementation is hard. The simple solution is to always ask the requester to make their own (correctly threaded) oss-security post that contains any or all of the response from MITRE. Until we have a better understanding of why that simple solution is incorrect, we are continuing to go with that simple solution.

The main benefit to the new assignment mechanism is in the reduction or even elimination of "RESERVED" entries for already public vulnerabilities that have been assigned CVEs. These are quite common today, so reducing those and replacing them with real information about the flaw is certainly to be welcomed. Once DWF comes fully on-line, it will likely make things even easier. But it appears that open-source security folks are not willing to let go of their oss-security forum and the vulnerability information it currently contains. It may take some tedious resending to make it all happen, but it seems like that will still be done.

Comments (2 posted)

Brief items

Security quotes of the week

Seriously your phone is like eleven billion times easier to infect than your TV is and you carry it everywhere. If the CIA want to spy on you, they'll do it via your phone. If you're paranoid enough to take the battery out of your phone before certain conversations, don't have those conversations in front of a TV with a microphone in it. But, uh, it's actually worse than that.

These days audio hardware usually consists of a very generic codec containing a bunch of digital→analogue converters, some analogue→digital converters and a bunch of io pins that can basically be wired up in arbitrary ways. Hardcoding the roles of these pins makes board layout more annoying and some people want more inputs than outputs and some people vice versa, so it's not uncommon for it to be possible to reconfigure an input as an output or vice versa. From software.

Anyone who's ever plugged a microphone into a speaker jack probably knows where I'm going with this. An attacker can "turn off" your TV, reconfigure the internal speaker output as an input and listen to you on your "microphoneless" TV. Have a nice day, and stop telling people that putting glue in their laptop microphone is any use unless you're telling them to disconnect the internal speakers as well.

Matthew Garrett

[...] there is a simple solution to the entire "smart TV as bug" category of concerns — don't buy those TVs, and if you have one, don't connect it to the Internet directly.

Don't associate it with your Wi-Fi network — don't plug it into your Ethernet.

Lauren Weinstein

In this paper, we demonstrate fine-grained software-based side-channel attacks from a malicious SGX [Software Guard Extensions] enclave targeting co-located enclaves. Our attack is the first malware running on real SGX hardware, abusing SGX protection features to conceal itself. Furthermore, we demonstrate our attack both in a native environment and across multiple Docker containers. We perform a Prime+Probe cache side-channel attack on a co-located SGX enclave running an up-to-date RSA implementation that uses a constant-time multiplication primitive. The attack works although in SGX enclaves there are no timers, no large pages, no physical addresses, and no shared memory. In a semi-synchronous attack, we extract 96% of an RSA private key from a single trace. We extract the full RSA private key in an automated attack from 11 traces within 5 minutes.
Michael Schwarz, Samuel Weiser, Daniel Gruss, Clémentine Maurice, and Stefan Mangard

Comments (11 posted)

How Threat Modeling Helps Discover Security Vulnerabilities (Red Hat Security Blog)

Over at the Red Hat Security Blog, Hooman Broujerdi looks at threat modeling as a tool to help create more secure software. "Threat modeling is a systematic approach for developing resilient software. It identifies the security objective of the software, threats to it, and vulnerabilities in the application being developed. It will also provide insight into an attacker's perspective by looking into some of the entry and exit points that attackers are looking for in order to exploit the software. [...] Although threat modeling appears to have proven useful for eliminating security vulnerabilities, it seems to have added a challenge to the overall process due to the gap between security engineers and software developers. Because security engineers are usually not involved in the design and development of the software, it often becomes a time consuming effort to embark on brainstorming sessions with other engineers to understand the specific behavior, and define all system components of the software specifically as the application gets complex. [...] While it is important to model threats to a software application in the project life cycle, it is particularly important to threat model legacy software because there's a high chance that the software was originally developed without threat models and security in mind. This is a real challenge as legacy software tends to lack detailed documentation. This, specifically, is the case with open source projects where a lot of people contribute, adding notes and documents, but they may not be organized; consequently making threat modeling a difficult task."

Comments (none posted)

Security updates

Alert summary March 2, 2017 to March 8, 2017

Dist. ID Release Package Date
Arch Linux ASA-201703-1 curl 2017-03-03
CentOS CESA-2017:0388 C7 ipa 2017-03-03
CentOS CESA-2017:0386 C7 kernel 2017-03-06
CentOS CESA-2017:0396 C7 qemu-kvm 2017-03-03
Debian DLA-848-1 LTS freetype 2017-03-07
Debian DSA-3799-1 stable imagemagick 2017-03-01
Debian DSA-3800-1 stable libquicktime 2017-03-02
Debian DLA-846-1 LTS libzip-ruby 2017-03-06
Debian DLA-836-2 LTS munin 2017-03-03
Debian DSA-3794-2 stable munin 2017-03-02
Debian DSA-3794-3 stable munin 2017-03-03
Debian DLA-845-1 LTS qemu 2017-03-01
Debian DSA-3801-1 stable ruby-zip 2017-03-04
Debian DLA-847-1 LTS texlive-base 2017-03-08
Debian DSA-3803-1 stable texlive-base 2017-03-08
Debian DSA-3802-1 stable zabbix 2017-03-05
Fedora FEDORA-2017-d0c9bf9508 F24 bind99 2017-03-05
Fedora FEDORA-2017-96b7f4f53e F25 bind99 2017-03-05
Fedora FEDORA-2017-a513be0939 F24 cacti 2017-03-08
Fedora FEDORA-2017-8b0737b093 F25 cacti 2017-03-07
Fedora FEDORA-2017-b9ffa8b00f F25 canl-c 2017-03-07
Fedora FEDORA-2017-d62c8f91e4 F25 cxf 2017-03-02
Fedora FEDORA-2017-cc7249b821 F24 drupal7-metatag 2017-03-08
Fedora FEDORA-2017-c87bbae385 F25 drupal7-metatag 2017-03-08
Fedora FEDORA-2017-98f85533f0 F25 freeipa 2017-03-08
Fedora FEDORA-2017-a9e6a5c249 F24 gtk-vnc 2017-03-05
Fedora FEDORA-2016-93679a91df F24 jenkins 2017-03-05
Fedora FEDORA-2016-93679a91df F24 jenkins-remoting 2017-03-05
Fedora FEDORA-2017-53338ece0c F25 kdelibs 2017-03-05
Fedora FEDORA-2017-ad67543fc5 F24 kernel 2017-03-03
Fedora FEDORA-2017-d875ae8299 F25 kernel 2017-03-03
Fedora FEDORA-2017-f9ab92fa6c F25 kf5-kio 2017-03-05
Fedora FEDORA-2017-d068b54614 F24 libICE 2017-03-05
Fedora FEDORA-2017-c02eb668a7 F25 libICE 2017-03-03
Fedora FEDORA-2017-bcb1999e65 F24 libXdmcp 2017-03-05
Fedora FEDORA-2017-9a9328c159 F25 libXdmcp 2017-03-03
Fedora FEDORA-2017-5a6ed9d326 F25 libcacard 2017-03-03
Fedora FEDORA-2017-404f1a29fc F24 mingw-gtk-vnc 2017-03-08
Fedora FEDORA-2017-c3739273e5 F25 mingw-gtk-vnc 2017-03-08
Fedora FEDORA-2017-9a819664a6 F25 mupdf 2017-03-07
Fedora FEDORA-2017-fa4e441e03 F24 netpbm 2017-03-02
Fedora FEDORA-2017-f9f3a78148 F24 suricata 2017-03-08
Fedora FEDORA-2017-f3aac83a8f F25 suricata 2017-03-08
Fedora FEDORA-2017-e9171a0c00 F24 vim 2017-03-03
Fedora FEDORA-2017-8494d0142c F25 vim 2017-03-02
Fedora FEDORA-2017-1607a3a78e F24 xen 2017-03-08
Fedora FEDORA-2017-05e32fe278 F24 xrdp 2017-03-03
Mageia MGASA-2017-0070 5 ming 2017-03-03
Mageia MGASA-2017-0071 5 quagga 2017-03-03
Mageia MGASA-2017-0072 5 util-linux 2017-03-03
Mageia MGASA-2017-0069 5 webkit2 2017-03-02
openSUSE openSUSE-SU-2017:0587-1 42.1 42.2 ImageMagick 2017-03-02
openSUSE openSUSE-SU-2017:0620-1 42.1 42.2 bind 2017-03-07
openSUSE openSUSE-SU-2017:0621-1 42.1 42.2 munin 2017-03-07
openSUSE openSUSE-SU-2017:0618-1 42.2 mysql-community-server 2017-03-07
openSUSE openSUSE-SU-2017:0598-1 42.1 42.2 php5 2017-03-03
openSUSE openSUSE-SU-2017:0588-1 42.2 php7 2017-03-02
openSUSE openSUSE-SU-2017:0590-1 42.1 util-linux 2017-03-02
openSUSE openSUSE-SU-2017:0589-1 42.2 util-linux 2017-03-02
Oracle ELSA-2017-0388 OL7 ipa 2017-03-02
Oracle ELSA-2017-0386 OL7 kernel 2017-03-02
Oracle ELSA-2017-0386-1 OL7 kernel 2017-03-03
Oracle ELSA-2017-0454 OL5 kvm 2017-03-07
Oracle ELSA-2017-0396 OL7 qemu-kvm 2017-03-02
Red Hat RHSA-2017:0448-01 OSCP ansible and openshift-ansible 2017-03-06
Red Hat RHSA-2017:0388-01 EL7 ipa 2017-03-02
Red Hat RHSA-2017:0462-01 EL6 EL7 java-1.8.0-ibm 2017-03-08
Red Hat RHSA-2017:0365-01 EL6.2 kernel 2017-03-01
Red Hat RHSA-2017:0366-01 EL6.5 kernel 2017-03-01
Red Hat RHSA-2017:0386-01 EL7 kernel 2017-03-02
Red Hat RHSA-2017:0403-01 EL7.1 kernel 2017-03-02
Red Hat RHSA-2017:0387-01 EL7 kernel-rt 2017-03-02
Red Hat RHSA-2017:0402-01 MRG/EL6 kernel-rt 2017-03-02
Red Hat RHSA-2017:0454-01 EL5 kvm 2017-03-07
Red Hat RHSA-2017:0361-01 OSP8.0 openstack-puppet-modules 2017-03-01
Red Hat RHSA-2017:0359-01 OSP9.0 openstack-puppet-modules 2017-03-01
Red Hat RHSA-2017:0435-01 OSP9.0 python-oslo-middleware 2017-03-02
Red Hat RHSA-2017:0396-01 EL7 qemu-kvm 2017-03-02
Red Hat RHSA-2017:0444-02 Atomic Host 7 rpm-ostree and rpm-ostree-client 2017-03-03
Scientific Linux SLSA-2017:0388-1 SL7 ipa 2017-03-02
Scientific Linux SLSA-2017:0386-1 SL7 kernel 2017-03-02
Scientific Linux SLSA-2017:0454-1 SL5 kvm 2017-03-07
Scientific Linux SLSA-2017:0396-1 SL7 qemu-kvm 2017-03-02
Slackware SSA:2017-066-01 firefox 2017-03-07
Slackware SSA:2017-066-02 thunderbird 2017-03-07
SUSE SUSE-SU-2017:0625-1 SLE12 qemu 2017-03-07
Ubuntu USN-3216-1 12.04 14.04 16.04 16.10 firefox 2017-03-08
Ubuntu USN-3222-1 12.04 14.04 16.04 16.10 imagemagick 2017-03-08
Ubuntu USN-3219-1 14.04 kernel 2017-03-07
Ubuntu USN-3220-1 16.04 linux, linux-gke, linux-raspi2, linux-snapdragon 2017-03-07
Ubuntu USN-3221-1 16.10 linux, linux-raspi2 2017-03-07
Ubuntu USN-3218-1 12.04 linux, linux-ti-omap4 2017-03-07
Ubuntu USN-3221-2 16.04 linux-hwe 2017-03-07
Ubuntu USN-3219-2 12.04 linux-lts-trusty 2017-03-07
Ubuntu USN-3220-2 14.04 linux-lts-xenial 2017-03-07
Ubuntu USN-3215-1 14.04 munin 2017-03-02
Ubuntu USN-3215-2 14.04 munin 2017-03-03
Ubuntu USN-3217-1 12.04 14.04 16.04 16.10 network-manager-applet 2017-03-07
Ubuntu USN-3211-2 16.04 16.10 php7 2017-03-02
Ubuntu USN-3214-1 12.04 14.04 w3m 2017-03-02
Full Story (comments: none)

Page editor: Jake Edge

Kernel development

Brief items

Kernel release status

The current development kernel is 4.11-rc1, released on March 5. Linus said: "This looks like a fairly regular release. It's on the smallish side, but mainly just compared to 4.9 and 4.10 - so it's not really _unusually_ small (in recent kernels, 4.1, 4.3, 4.5, 4.7 and now 4.11 all had about the same number of commits in the merge window)." There were 10,960 non-merge commits pulled in the end, so it's definitely not unusually small.

Stable updates: none have been released in the last week.

Comments (none posted)

Quotes of the week

Just because a patch doesn't solve world hunger isn't really a good reason to reject it.
Daniel Vetter

But they are memory barriers! They are -supposed- to look weird!
Paul McKenney

Comments (none posted)

The end of the 4.11 merge window

By Jonathan Corbet
March 7, 2017
By the time Linus Torvalds released 4.11-rc1 and closed the merge window for this development cycle, 10,960 non-merge commits had been pulled into the mainline repository. Just over 800 of those were pulled after the writing of last week's summary. Thus, there is a relatively small set of patches to cover here, but a couple of the more significant changes were saved for last.

  • The long-awaited statx() system call has been merged. This new version of stat(), which has been in the works since 2010, adds a number of new features and efficiency improvements; a brief usage sketch appears after this list. See the commit changelog for details.

  • New hardware support includes: Renesas R-Car Gen3 thermal sensors, ZTE zx2967 SoC thermal sensors, and QLogic QEDF 25/40/100Gb FCoE initiators

  • As predicted, the large sched.h refactoring work has been merged. In theory, all kernel code that needs to be fixed in response to these changes has indeed been fixed, but there may still be a few loose ends here and there.
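
The statx() call takes a directory file descriptor, a path, flags, and a mask of requested fields; it fills in a struct statx whose stx_mask member reports which fields the kernel actually provided. The sketch below is not taken from the patch set itself; it assumes a C library new enough to provide a statx() wrapper and the STATX_* definitions (older systems would need a raw syscall() instead), and the example path and choice of fields are arbitrary.

/*
 * A minimal sketch of querying a file with statx(); assumes a C library
 * that provides the statx() wrapper and STATX_* constants.
 */
#define _GNU_SOURCE
#include <fcntl.h>		/* AT_FDCWD, AT_SYMLINK_NOFOLLOW */
#include <stdio.h>
#include <sys/stat.h>		/* statx(), struct statx, STATX_* */

int main(int argc, char **argv)
{
	struct statx stx;
	const char *path = argc > 1 ? argv[1] : "/etc/hostname";

	/* Request only the size and birth time; the mask lets the kernel
	   skip gathering fields the caller does not care about. */
	if (statx(AT_FDCWD, path, AT_SYMLINK_NOFOLLOW,
		  STATX_SIZE | STATX_BTIME, &stx) != 0) {
		perror("statx");
		return 1;
	}

	printf("%s: %llu bytes\n", path, (unsigned long long)stx.stx_size);
	if (stx.stx_mask & STATX_BTIME)	/* birth time is not always available */
		printf("born at %lld (seconds since the epoch)\n",
		       (long long)stx.stx_btime.tv_sec);
	return 0;
}

Note that callers are expected to check stx_mask on return, since filesystems are free to omit fields (the birth time in particular) that they do not store.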

The 4.11 kernel has now entered the stabilization period. If things go according to the normal schedule, the final 4.11 release can be expected on April 16 or 23.

Comments (none posted)

March 6 Kernel Podcast

Jon Masters's kernel podcast for March 6 is out. "In this week’s kernel podcast: Linus Torvalds announces Linux 4.11-rc1, rants about folks not correctly leveraging linux-next, the remainder of this cycle’s merge window pulls, and announcements concerning end of life for some features."

Comments (none posted)

Kernel development news

Per-task CPU-frequency control

By Jonathan Corbet
March 8, 2017
The kernel's power-management code attempts to run each processor on the system at a level that minimizes power consumption while ensuring that sufficient CPU time is available for the currently running tasks. CPU frequency management has, over the last few years, become more closely tied to the scheduler, since that is where the information about the current workload resides. The scheduler, however, does not know which processes are most important to the user. Various attempts to fill in that information have been made over time, with none making it into the mainline; the latest version takes a different approach.

The core idea behind workload-sensitive power management is that the user (or, more likely, some sort of policy daemon working on the user's behalf) may want to influence how decisions are made depending on which processes are running. For processes that the user would like to see run quickly — those currently running in the foreground on a handset, for example — it may be desirable to run the CPU at a higher rate than is strictly necessary to get the expected amount of work done. On the other hand, if only a low-priority background task is running, it may be best to put an upper limit on how fast the CPU runs, even if that task has a lot of work to do. At the moment, however, the power-management code cannot distinguish those types of process from each other, so the same frequency-scaling policies apply to all of them.

Recent attempts to solve this problem have taken the form of a control-group controller called SchedTune. This controller allowed a "boost" value to be applied to processes in a specific control group. Those processes would be made to appear to require more CPU time than they actually needed, causing the CPU-frequency governor to pick a higher frequency than it otherwise would have. This approach worked, but one might argue that distorting the apparent load to influence frequency selection lacked elegance.

At the end of February, Patrick Bellasi posted a new patch set that takes a different approach. The separate SchedTune controller is no more; instead, CPU-frequency policy has been moved into the core CPU controller, where it can be found alongside the other scheduling parameters for any given control group.

The "boost" value and the load-distorting algorithm it used are gone. In their place are two new control knobs, called capacity_min and capacity_max. They place bounds on the CPU frequency choices that can be made when any process in the group is running. The capacity_min value describes the slowest allowable CPU speed; by default, it is set to zero, meaning that even the slowest CPU frequency is acceptable. The maximum allowable frequency is set by capacity_max; the default value here is 1024, allowing the CPU to go to its maximum speed. An important process can thus be guaranteed a certain minimum CPU performance by setting capacity_min to an appropriate value, while low-priority tasks can be prevented from pushing the CPU frequency too high with capacity_max.
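
As a rough illustration, here is how a policy daemon might set these limits from user space. This is a sketch based on the posted patch set only: the attribute file names (cpu.capacity_min and cpu.capacity_max under a CPU-controller mount at /sys/fs/cgroup/cpu), the group names, and the values used are assumptions made for illustration, not part of any merged interface.

/*
 * A hypothetical sketch, assuming the posted patch set is applied and the
 * CPU controller is mounted at /sys/fs/cgroup/cpu; the attribute names,
 * group names, and values below are illustrative assumptions.
 */
#include <stdio.h>

static int set_capacity(const char *group, const char *knob, int value)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), "/sys/fs/cgroup/cpu/%s/cpu.%s",
		 group, knob);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%d\n", value);
	return fclose(f);
}

int main(void)
{
	/* Cap background work at one quarter of full capacity while
	   guaranteeing foreground tasks at least half of it. */
	if (set_capacity("background", "capacity_max", 256) ||
	    set_capacity("foreground", "capacity_min", 512))
		perror("writing capacity knob");
	return 0;
}

With settings like these, a process in the "foreground" group could count on at least half of the CPU's capacity, while one in the "background" group could never, on its own, push the frequency past one quarter of the maximum.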

At any given time, there may be multiple runnable processes, and they may not all have the same capacity_min and capacity_max parameters. Changing the CPU's operating parameters is a relatively expensive operation, so it does not make sense to change the operating frequency every time a new process is given access to the CPU. One could also argue that, when a process with relatively high CPU-power requirements is waiting, the other processes should be run at just as high a power level to avoid delaying that process excessively.

The end result is that the scheduler needs to pick a set of parameters that is suitable for all of the processes that are currently runnable. To meet that requirement, the controller will apply the maximum value of both parameters. That ensures that the process(es) with the highest values will actually get those values, and no process will run at a lower CPU frequency than it is entitled to. Implementing this policy requires adding two red-black trees to each control group to track the processes with the highest capacity_min and capacity_max values.

When multiple levels of control groups are in use, subgroups are only allowed to tighten the constraints set in their parent groups. So capacity_min in a subgroup cannot go below that value in the parent, while capacity_max cannot exceed the parent's value.

In previous patch sets, this feature has been focused on the SCHED_OTHER (normal) scheduling class. With this patch set, though, it has also been extended to the realtime and deadline scheduling classes. In current kernels, those classes are run at the maximum speed the processor supports. With this change, realtime and deadline scheduling can be used in a more power-friendly mode. Needless to say, tuning of these parameters with such workloads will need to be done carefully to avoid configuring a system that cannot meet its realtime requirements.

As of this writing, there have been no comments on the new patch set. That, perhaps, is one of the hazards of posting core-kernel patches during the merge window. One might guess that this version offers relatively little to complain about, but experience suggests that one might easily guess incorrectly when it comes to scheduler patches. Once the scheduler developers have a chance to look at this code, we'll have a better idea of whether it's likely to get into the mainline in its current form.

Comments (none posted)

RCU and the mid-boot dead zone

March 7, 2017

This article was contributed by Paul McKenney

When discussing RCU with mainstream formal-verification researchers, there often comes a time when they ask for RCU's specification. There is of course a specification of a sort, which was first published here, here, and here; it is currently maintained in the Linux-kernel source tree. However, these “specifications” are empirical in nature: As hardware, other parts of the kernel, and workloads change, RCU's specification also changes. This is not what mainstream formal-verification researchers want to hear, so I usually tell them stories of how I learned about various aspects of the RCU specification. This article tells one of those stories.

But first, let's review RCU's grace-period guarantee. This guarantee requires that RCU's synchronous grace-period primitives wait for any pre-existing RCU read-side critical sections. For example, consider the following two in-kernel tasks:

int x, y, r1, r2;

/* Updater: the grace period separates the store to x from the store to y. */
void task0(void)
{
	WRITE_ONCE(x, 1);
	synchronize_rcu();
	WRITE_ONCE(y, 1);
}

/* Reader: this critical section cannot completely overlap task0()'s grace period. */
void task1(void)
{
	rcu_read_lock();
	r1 = READ_ONCE(x);
	r2 = READ_ONCE(y);
	rcu_read_unlock();
}

Suppose that task1()'s load from x returns zero. This means that some part of task1()'s RCU read-side critical section (delimited by rcu_read_lock() and rcu_read_unlock()) executed prior to task0()'s store to x, which in turn means that this critical section started before task0()'s RCU grace period. RCU therefore guarantees that the rest of task1()'s critical section completes before that grace period ends, which in turn means that the read from y will return zero. Similarly, if task1()'s read from y returns one, part of task1()'s RCU read-side critical section has executed after task0()'s RCU grace period. RCU therefore guarantees that the entirety of task1()'s critical section executes after the start of the grace period, which in turn means that the read from x will return one.

In short, RCU read-side critical sections are not permitted to completely overlap RCU grace periods.

During early boot, it is trivially easy to provide this guarantee because there is only one task and preemption is disabled: the mere fact that synchronize_rcu() has been called means that all pre-existing readers must have completed. Therefore, RCU's grace-period primitives can be no-ops during early boot. But early boot ends as soon as the kernel starts spawning kthreads.

At run time, RCU's grace-period guarantee is provided by the run-time RCU machinery, which by that time has been fully initialized. But the run-time RCU machinery cannot operate correctly until after all of RCU's kthreads have been spawned and initialized, which clearly cannot happen until some time after the kernel starts spawning kthreads.

Let's call the time period between early boot and run time the mid-boot dead zone. This dead zone starts when the kernel spawns the first kthread, and ends once all of RCU's kthreads have been spawned and are ready. As noted here, RCU's synchronous grace periods might well deadlock during the mid-boot dead zone.

Hoping that nobody calls for a synchronous grace period during the mid-boot phase worked well for some years. However, I made the mistake of accidentally causing synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(), and synchronize_sched_expedited() to operate correctly during the mid-boot dead zone. The ACPI developers noticed that these primitives worked, and promptly took full advantage of my lapse, perhaps completely unintentionally. Because I didn't make these functions log a warning if used during the dead zone, these developers had absolutely no hint that they were skating on thin ice. Had they built with CONFIG_SMP=n or booted with the rcu_normal kernel-boot parameter, RCU would have complained bitterly. However, CONFIG_SMP=n is used primarily for deep embedded systems, and rcu_normal is used primarily on realtime systems, so it is not all that surprising that they didn't test those configurations.

However, the ACPI developers did notice once v4.9 came out, because that was the release in which I switched synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(), and synchronize_sched_expedited() to workqueues. This change eliminated some ugly interactions with POSIX signals; however, it also re-introduced the mid-boot dead zone, which had the minor downside of complete and utter failure for the ACPI developers.

Quick quiz: But wouldn't this mid-boot dead zone end when workqueues are initialized, which happens much earlier than the spawning of RCU's kthreads?
Answer

Although this could be fixed in ACPI, it is easy to imagine a use case that really needs a real RCU grace period. It is therefore preferable to get RCU's mid-boot dead zone out of ACPI's way. If nothing else, eliminating RCU's mid-boot dead zone should save me considerable time explaining that dead zone to future Linux-kernel developers. As usual, this was easier said than done.

My first thought was to spawn RCU's kthreads much earlier in the boot process, thus narrowing the mid-boot dead zone, so that the ACPI use fell outside of that zone. However, RCU creates different numbers and types of kthreads under different kernel configurations, which complicates the task of creating all these kthreads at one point in the code. This approach therefore did not make it past the design phase, although it did consume at least its share of paper and ink.

My second thought was to introduce kthreads into RCU's expedited grace-period primitives, given that the expedited code can be driven by a single kthread. Once this is in place, non-expedited synchronous grace periods can be forced to use the expedited code path during the dead zone, which would allow full functionality. This is much simpler than the first approach, and resulted in this reasonably simple patch. Borislav Petkov tested this patch and found that it fixed the problem, which was another plus.

However, this patch had the disadvantage of turning RCU into a special kernel subsystem that creates its kthreads before any other kernel subsystem. This might work fine for a while, but Murphy says that it is only a matter of time before some other kernel subsystem also needs to be the first to spawn its kthreads. In addition, there is still a dead zone, albeit a very short one. But if kthread creation itself ever needed to invoke synchronous RCU grace periods, this approach would be completely broken. It would be much better if RCU grace periods simply worked throughout the entire boot process.

My third thought was to make expedited RCU grace periods go back to their 4.8 behavior, so that the requesting task drives the expedited grace period. In order to avoid the ugliness involving POSIX signals, expedited grace periods would switch back to workqueues as soon as RCU's kthreads had been spawned. This assumes that in-kernel tasks never send each other POSIX signals during the mid-boot dead zone, which seems a safe assumption for the moment, and which can be worked around if needed. In addition, it results in a reasonably small patch.

The great strength of this approach is that there is no longer a mid-boot dead zone: synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(), synchronize_sched_expedited(), synchronize_rcu(), synchronize_rcu_bh(), and synchronize_sched() may be invoked throughout the entire boot process. This in turn simplifies RCU's specification, at a price of only about seventy lines of code added to the kernel, and without the addition of any kthreads. In addition, RCU can continue to spawn its kthreads at early_initcall() time, so that RCU need not be the special first subsystem to create kthreads. Finally, the switch to normal run-time operation can happen at core_initcall() time: there is no need to switch to run-time mode immediately after RCU's kthreads have been spawned.

It is still early days for this patch, but current results are quite encouraging.

This experience resulted in several lessons (re)learned:

  1. Maintaining uniform semantics across the Linux kernel's boot-time and run-time code can be quite challenging, but greatly improves ease-of-use.
  2. If you don't make it warn, it won't be considered illegal.
  3. If you didn't make it warn, but then make it no longer work, you will likely have unhappy users.

Last, but by no means least, RCU's specification is empirical, and this is the story of how I learned about yet another new-to-me aspect of that specification.

Acknowledgments

I owe thanks to Lv Zheng, Borislav Petkov, Stan Kain, Ivan (AKA waffolz@hotmail.com), Emanuel Castelo, Bruno Pesavento, Frederic Bezies, and Rafael J. Wysocki for reporting, reviewing, testing, and otherwise keeping me honest. I also owe thanks to Jim Wasko for his support of this effort.

Quick Quiz answer

Quick Quiz: But wouldn't this mid-boot dead zone end when workqueues are initialized, which happens much earlier than the spawning of RCU's kthreads?

Answer: In theory, yes. In practice, the kernel might have been booted with rcu_normal, which would cause the expedited grace periods to use the non-expedited code path. So in this case, the mid-boot dead zone for synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(), and synchronize_sched_expedited() is exactly the same as that for synchronize_rcu(), synchronize_rcu_bh(), and synchronize_sched(), which ends after RCU's kthreads have been spawned.

Back to Quick Quiz 1.

Comments (2 posted)

Patches and updates

Kernel trees

Linus Torvalds Linux 4.11-rc1 Mar 05
Sebastian Andrzej Siewior v4.9.13-rt12 Mar 08

Architecture-specific

Core kernel code

Development tools

Device drivers

Device driver infrastructure

Jarkko Sakkinen in-kernel resource manager Mar 03
Shashank Sharma HDMI 2.0: Scrambling in DRM layer Mar 03
Lina Iyer CPU PM domains Mar 03

Filesystems and block I/O

Memory management

Michal Hocko kvmalloc Mar 06
Michal Hocko scope GFP_NOFS api Mar 06
Kirill A. Shutemov 5-level paging Mar 06
Christoph Lameter Slab Fragmentation Reduction V16 Mar 07

Virtualization and containers

Page editor: Jonathan Corbet

Distributions

Making distributions Just Work on ARM servers

By Jonathan Corbet
March 8, 2017

Linaro Connect
The personal computer market has benefited from a set of reasonably well established standards from nearly its beginning. Those standards have encouraged a high level of interoperability; they have also made it relatively easy to build Linux distributions that will run on a wide range of machines. The embedded world, which is where ARM processors are most often found, has not been so fortunate. As ARM moves into the server market, though, interest in system standards is on the rise. At the 2017 Linaro Connect event in Budapest, Jon Masters of Red Hat and Dong Wei of ARM described the work that has been done to bring order to the ARM server market.

Masters started by noting that the server and embedded markets are, in many ways, stark opposites of each other. In the embedded area, products are developed under tight deadlines, and the operating-system software is typically modified to suit the hardware. In the server market, instead, the operating system that will be installed on next year's systems was made last year. In this realm, he said, changing the hardware is seen as being easier than changing the software. Thus, there is a strong desire for a set of standards that will allow vendors to make servers that can run last year's enterprise distribution releases.

To that end, two documents have been written. The first of these, the Server Base System Architecture (SBSA) [PDF], describes the minimal features that a compliant system must provide. These include architectural specifications (a minimum of the ARM v8.0 architecture, for example) and the system-on-chip (SoC) features (interrupts, clocks, watchdogs, etc.) that must be provided. The other document, the Server Base Boot Requirements (SBBR) [PDF], provides the requirements to be able to boot a compliant distribution. It represents, Masters said, the "ACPI takeover of the world". Along with ACPI, it mandates the UEFI firmware standard and the set of expected boot protocols.

The result is a minimal specification for a set of standard hardware that can boot into a standard bootloader and discover the hardware that is available. There is, at this point, a lot that is not covered. The operation of caches is not discussed, for example, and there is no standard memory map. Emerging technologies are necessarily not a part of this document.

At this point, SUSE developer Alexander Graf noted that the goals behind these standards are good, but there is a missing piece: a way to inject drivers into the system during the installation process. New servers will certainly include hardware that was unknown at the time that a given distribution release was made, so there needs to be a way to add drivers at a later date. Masters agreed that this was a problem that should be worked on, but questioned whether it is appropriate to address it in the SBSA standard; Grant Likely added that it might be better handled at the UEFI level.

Another question had to do with how the standard had been defined; it was acknowledged that, so far, it has been a "who you know" process. Red Hat wanted to have something in place and there wasn't much time to get it done, so things just went forward with as much input from distributors and hardware vendors as could be arranged. There is a desire to open up and formalize the process in the near future, though. Masters noted that many of the core x86 standards were created in a closed manner early on; that is just how things have to be done at that stage, he suggested. An effort will be made to make the process more democratic going forward.

Dong Wei then took over, noting that, while ACPI started as an x86-dominated standard, most of the contributions to ACPI are coming from the ARM community now. The ACPI process has been opened up as this happened, and the actual specifications can now be found online and downloaded without even a click-through license step.

Work is being done on a set of compliance tests, he said. The SBSA test suite checks for the necessary system components and looks at the integration of PCIe peripherals. There have been a lot of "very creative" PCIe implementations shipped, he said, requiring thorough testing. The SBBR test suite tests the firmware interfaces provided by the subject system. The UEFI tests are based on the UEFI self-certification tests (SCT), while the ACPI tests use the Firmware Test Suite from Canonical. The intent is to provide all of these tests under open-source licenses. Unfortunately, at the moment, the UEFI SCT is only available to members. Its development has been moved to a private GitHub repository as a preparatory step toward opening it up entirely; a unified test-suite release is coming together in that repository.

An audience member asked why these tests are being done in the ARM community, rather than existing as a set of cross-architecture tests. Users want their systems to look the same, after all, regardless of their underlying architecture. The answer was that these are low-level tests that are aimed at just that goal, but that getting there requires this layer of architecture-specific test suites.

There was some discussion of a certification process for hardware. Users want their distributions to just work, and a certification sticker would give them confidence that a given product would, indeed, work properly. Masters answered that the tests being discussed here are an input to the certification process, rather than the certification itself. Once the hardware passes the compliance tests, distributors will start doing their own certification testing. There is, though, interest in adding distribution boot testing to the test suites as a way of improving the testing overall.

The final question had to do with cross-platform drivers. There is an interpreted executable format known as EFI Byte Code (EBC); drivers compiled to that format can run on multiple architectures. There is an EBC interpreter for ARM in the Tianocore UEFI reference implementation now. Shipping EBC drivers would allow vendors to support multiple architectures with a single binary, simplifying their lives.

It was noted, though, that this benefit vanishes if x86 drivers continue to be shipped as native executables. Adding ARM drivers requires supporting a second binary format; whether it's EBC or ARM native doesn't make a whole lot of difference. Only if a third major architecture arises will there be value in using EBC. There are also problems with EBC compilers being proprietary, a lack of support for the existing x86 EBC interpreter, and the fact that Microsoft, the only company doing secure-boot signing for drivers, will not sign EBC drivers.

Graf asked whether drivers could, instead, be shipped as a multiple-architecture binary. Progress is being made in this direction, and EFI supports multiple binary formats. Regardless of what form the solution might take, it was asserted that this is a critical blocker for the ARM server market. Without a way to simplify the driver-support problem, it is going to be hard to get card vendors to support ARM servers in general. As the session wound down, this situation was described as the highest-priority problem in need of a solution for distributors wanting to support ARM servers.

[Thanks to Linaro and the Linux Foundation for funding your editor's travel to Connect.]

Comments (6 posted)

Brief items

Distribution quote of the week

I don't think anybody expects Rawhide to miraculously turn into "rolling stable" - the first step is to get it to "rolling usable" :-)
Owen Taylor

Comments (none posted)

Tails 2.11 is out

Tails 2.11 has been released. This version adds notifications that Tails 3.0 will not work on 32-bit processors and that I2P will be dropped in Tails 2.12. "Maintaining software like I2P well-integrated in Tails takes time and effort and our team is too busy with other priorities. Unfortunately, we failed to find a developer outside of our team to maintain I2P in Tails. As a consequence, the last version of I2P being shipped in Tails is 0.9.25, which is nearly one year old now at this moment."

Comments (none posted)

Distribution News

Debian GNU/Linux

Debian Project Leader Elections 2017: Call for nominations

Nominations are open until March 11 for the 2017 Debian Project Leader election. "Prospective leaders should be familiar with the constitution, but just to review: there's a one week period when interested developers can nominate themselves and announce their platform, followed by a three week period intended for campaigning, followed by two weeks for the election itself."

Full Story (comments: none)

Announcing the BSP in Paris

There will be a Bug Squashing Party in Paris, France May 13-14. "We attempt to create an event which is a bit more broader than "just" a BSP, we are looking forward to have a more diverse event for people with different skillsets they already bring, e.g. design and graphics, testing of the upcoming release etc." Register by the end of March.

Full Story (comments: none)

Newsletters and articles of interest

Distribution newsletters

Comments (none posted)

Francis: The story of Firefox OS

Ben Francis has posted a detailed history of the Firefox OS project. "For me it was never about Firefox OS being the third mobile platform. It was always about pushing the limits of web technologies to make the web a more competitive platform for app development. I think we certainly achieved that, and I would argue our work contributed considerably to the trends we now see around Progressive Web Apps. I still believe the web will win in the end. "

Comments (15 posted)

Page editor: Rebecca Sobol

Development

Giving Upspin a spin

March 8, 2017

This article was contributed by Nur Hussein

In late February 2017, Google announced the Upspin project on its security blog. It is an open-source project from Google researchers that was released under the 3-clause BSD license. Upspin is a set of protocols and services that allow users to share files with a degree of privacy and security. Upspin is not a filesystem; instead, it provides a global namespace to securely address and retrieve files across networks, along with access control to limit who can read and write those files. Users' identities are defined by keys; access is granted based on the ownership of a file and, for other users, as specified by the owner. You can find the project's web page at upspin.io.

The objectives of the project as described in the overview document are universal naming, data sharing, and security. With Upspin, users can share their data securely; every shared object is identified by a naming mechanism that globally specifies both the object and the user it belongs to. End-to-end encryption ensures that the data remains private and cannot be seen without the user's consent, even by the service provider hosting it.

The targeted use case for Upspin is personal, not corporate, use, so the focus is on individual sharing while maintaining user privacy. A typical example is friends and family sharing photos among themselves on the internet, but with privacy settings that keep the pictures from being viewed by unauthorized users. The motivation is that most users upload data to web sites and apps such as Facebook, Twitter, and Instagram; it is hard to extract and share this data with other applications and services, or to apply access-control mechanisms to the data once it has been uploaded. Upspin was devised as a way to name, store, and share data without resorting to uploading it onto a multitude of internet silos, while ensuring that no one (not even the service provider) can view or tamper with the data without authorization from the user.

Structure

Upspin is implemented as a set of servers and a client written in Go. The code itself is open source and hosted on GitHub; that repository mirrors one on Google's Git infrastructure. The code is a reference implementation, and the developers "expect and hope that many other implementations will arise", as stated in the overview document.

An Upspin object identifier is composed of a unique user ID in the form of an email address, and a path to the object like a filesystem path. For example:

    alice@domain.com/dir/datafile
The validated email-address prefix guarantees a globally unique identifier for every user; the string that follows is the virtual path name of the object in question. Files are organized in a directory tree starting from a "/" root directory, much like a Unix filesystem. Given an object identifier, Upspin will read or write a file to the network store if the user has permission to do so.
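
As a small illustrative sketch (this is not the real upspin.io API; the function and names here are made up), splitting such a name into its user and path components in Go might look like:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // splitName separates an Upspin-style name into its user part (an
    // email address) and the path within that user's tree.
    func splitName(name string) (user, path string) {
    	if i := strings.Index(name, "/"); i >= 0 {
    		return name[:i], name[i:]
    	}
    	return name, "/" // a bare user name refers to that user's root
    }

    func main() {
    	user, path := splitName("alice@domain.com/dir/datafile")
    	fmt.Println(user) // alice@domain.com
    	fmt.Println(path) // /dir/datafile
    }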

Upspin comprises the following components:

  • A key server that holds user identification keys and a pointer to a directory server.
  • One or more directory servers that store each user's directory structure along with pointers to the storage servers where the files can be found.
  • One or more storage servers.
  • Optional caching servers.

While the directory structure is maintained on the directory server, it does not map to the actual layout of the physical filesystem the files are stored on. Rather, the leaf nodes of the directory tree are pointers to objects stored on the storage servers. Those objects, in turn, are referenced not by their filenames but by a SHA-256 hash of their contents. Decoupling the directory description from the physical storage of the files creates flexibility for implementing any number of caching and data-distribution techniques.

When given a universal identifier such as alice@domain.com/dir/datafile, an Upspin client will send a request to its configured key server. The key server will look up the user alice@domain.com in its records and return the location of the directory server that describes the user's files. The client will then query that directory server, which knows where all of Alice's files are. The directory server will evaluate the path name /dir/datafile to locate the file datafile and return the storage server it is kept on. The client then retrieves the file from that storage server.

[Upspin file lookup]
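
To make the flow concrete, here is a minimal sketch in Go using hypothetical KeyServer, DirServer, and StoreServer interfaces; the real upspin.io client packages are structured differently, but the three-step resolution is the same:

    // Package lookup sketches the three-step resolution described above.
    package lookup

    // KeyServer maps a user name to that user's directory server.
    type KeyServer interface {
    	DirServerFor(user string) (DirServer, error)
    }

    // DirServer maps a path in the user's tree to the storage server and
    // the content reference (a SHA-256 hash) that holds the data.
    type DirServer interface {
    	Lookup(path string) (store StoreServer, ref string, err error)
    }

    // StoreServer returns the (still encrypted) bytes for a reference.
    type StoreServer interface {
    	Get(ref string) ([]byte, error)
    }

    // Fetch walks key server -> directory server -> storage server;
    // decryption happens on the client after the bytes are retrieved.
    func Fetch(keys KeyServer, user, path string) ([]byte, error) {
    	dir, err := keys.DirServerFor(user) // e.g. "alice@domain.com"
    	if err != nil {
    		return nil, err
    	}
    	store, ref, err := dir.Lookup(path) // e.g. "/dir/datafile"
    	if err != nil {
    		return nil, err
    	}
    	return store.Get(ref)
    }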

The concept is reminiscent of other efforts in this problem space, such as AFS, IPFS, Tahoe-LAFS, and Plan 9's namespaces. On the Upspin mailing list, Brian Van Klaveren asked if AFS was an influence on Upspin. Developer Andrew Gerrand replied that the team was aware of AFS, but it was not a direct influence. The Plan 9 influences are perhaps the most prominent in the design of Upspin, as Dave Presotto and Rob Pike were involved in its development. The use of namespaces, and the construction of a service from a set of cooperating servers, closely resembles Plan 9's design. Pike said that Upspin is unique in its combination of universality, security, and the ability to share with "fine-grained" security options.

The global namespace key server is run by Google. This server keeps the public keys of all Upspin users; each is half of a public-private key pair generated using the P-256 elliptic curve. The server authenticates a user by way of that public key; a valid user will be able to use their private key to verify their identity and create a session. It is possible to run your own key server and form a private Upspin installation, but the design goal is for global participation in the Upspin ecosystem to prevent fragmentation of the global namespace. As developer Eduardo Pinheiro explained:

That could fragment the namespace if not done carefully. One would need to know the authoritative key server for a user and then reconcile/reject discrepancies. But it's something to consider. We're just not there yet as it increases complexity.

One frequently asked question is whether the key server is always going to be a single point of failure; Pike replied:

While I share your concern about a single point of failure, it is important to the project that there be only one space of user names. A distributed user name set introduces technical difficulties but more important the possibility of name conflicts, which would be fatal to the project.

In time it may become important to build a distributed, federated key server that shares the load, but it should always form a single name space and guarantee its correctness. That project is for the future, though.

The idea is that every user in the global Upspin ecosystem should be universally identifiable and addressable, which is currently implemented with the central key server run by Google at key.upspin.io.

Access control

Access control, security, and privacy are enforced by Upspin via end-to-end encryption. All files stored are encrypted and decrypted on the client side, so user file contents cannot be read by hosting providers. By default, all of a user's files are encrypted and only accessible by the owner.

To share files, a user can place a simple, signed text file named "Access" with an access-control list in the directory that contains the files the user wants to share. The Access file contains a list of users (identified by their Upspin email) they wish to share the contents of the directory with; it specifies whether those users can read or write to the directories or files contained in the directory. File permissions are either read-only or read/write, while directories have options to have their contents listed ("listability") or to allow objects (including subdirectories) to be added or deleted from them. A directory that is not listable does not prevent users from accessing a file in that directory if they know its name; a user with read or read/write permissions to a file can still access it, even if they cannot list the rest of the files in the hierarchy.

The permissions apply to all files in the directory containing the Access file and all of its subdirectories unless there is an Access file deeper in the hierarchy that overrides them. A user may also define user groups in a special directory called Group, which needs to reside at the top-level directory. Groups are then defined in named files in the Group directory. An Access file may either contain users or groups.
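
As a purely illustrative example (the user names and group name are invented, and the exact syntax should be checked against the Upspin access-control documentation), an Access file granting read access to one user and read/write access to a "family" group might look something like:

    read: bob@example.com
    read, write: family

with the group itself defined in a file named family under the owner's Group directory, containing a list of members such as:

    bob@example.com, carol@example.com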

By default, all stored files are encrypted with a random symmetric key using the AES-256 cipher, and that random key is itself encrypted with the file owner's public key. When a file is shared with other users, the random key is re-encrypted with the public key of each user the file is shared with. This is handled by the client, which will request those users' public keys from the key server. Integrity checks are done using the SHA-256 algorithm.
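
The general pattern, sometimes called envelope encryption, is easy to sketch in Go with the standard library; this is a generic illustration of the idea described above, not Upspin's actual packing format:

    // Package envelope shows per-file encryption with a random key that
    // would then be wrapped with each authorized reader's public key.
    package envelope

    import (
    	"crypto/aes"
    	"crypto/cipher"
    	"crypto/rand"
    )

    // seal encrypts plaintext with a freshly generated 256-bit key and
    // returns the ciphertext plus the raw key for later wrapping.
    func seal(plaintext []byte) (ciphertext, key []byte, err error) {
    	key = make([]byte, 32) // AES-256
    	if _, err = rand.Read(key); err != nil {
    		return nil, nil, err
    	}
    	block, err := aes.NewCipher(key)
    	if err != nil {
    		return nil, nil, err
    	}
    	gcm, err := cipher.NewGCM(block)
    	if err != nil {
    		return nil, nil, err
    	}
    	nonce := make([]byte, gcm.NonceSize())
    	if _, err = rand.Read(nonce); err != nil {
    		return nil, nil, err
    	}
    	// Prepend the nonce so the reader can decrypt later.
    	ciphertext = gcm.Seal(nonce, nonce, plaintext, nil)
    	return ciphertext, key, nil
    }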

This access-control model is simple, and it resembles the access-control lists of Windows shared folders. However, because it depends on special files and directories named with common words like Access and Group, there is a chance of name collisions with a user's personal files, as Brian Swetland pointed out on the Upspin mailing list. Pike cited simplicity as the reason in his reply:

Regarding the access control files, all those issues were thought about but we decided to keep it simple. Our target is regular users and funny characters are not friendly to regular users. Still, we are certainly aware of the possibilities for collisions. We also worry about the parochial nature of English words here.

If the key server, directory server, or storage server were compromised, an attacker would be able to observe metadata retrieval, but not the actual data, thanks to the end-to-end encryption. Other security scenarios and encryption information can be found in the Upspin security document.

Trying it out

Both the Upspin clients and servers are available as an open source download. Go version 1.8 or later is needed for them to work. Go's go get mechanism can be used to fetch and install upspin:

    $ go get -u upspin.io/...

Once it is installed, there is a sign-up script to run; it expects to be given the user's email address and the location of the directory and storage servers the user wants to use. The script will then generate a private and public key pair, which are used for encryption and signing of the user's files. The default encryption algorithm uses the P-256 elliptic curve, but other options may arise as time goes on. The private key is kept locally, while the public key is sent to key.upspin.io for registration. The user then gets an email with a confirmation link, which finishes the two-step sign-up process.
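
For the curious, generating such a P-256 key pair takes only a few lines with Go's standard library; this is merely an illustration of the kind of key involved, not the project's actual sign-up code:

    // Package keys illustrates creating a P-256 (ECDSA) key pair.
    package keys

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    )

    func generate() (*ecdsa.PrivateKey, error) {
    	// The private key stays on the client; the public half is
    	// what gets registered with the key server.
    	return ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    }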

The sign-up script will also create a configuration file, stored in $HOME/upspin/config. It is an easy-to-read, YAML-formatted text file containing various options for Upspin. The configuration knobs allow the user to set the location of the three servers (key, directory, and storage), plus there are settings controlling encryption. A user can disable encryption (but leave integrity-check signing on), enable both encryption and integrity checking (the default), or disable both.
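
As a rough illustration, the file contains a handful of simple key: value entries along the following lines (the key names and values here are assumptions based on the reference implementation's documentation rather than a copy of a real installation):

    username: alice@domain.com
    keyserver: key.upspin.io
    dirserver: dir.example.com
    storeserver: store.example.com
    packing: ee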

Setting up the server side is also relatively simple. Upspin needs to run on a host that accepts connections on port 443 and has a domain name to which the administrator can add DNS records. A set-up script will generate a pair of keys on the server; the keys belong to a "server user", which will also register itself with the key server. The set-up script generates a signature using the private key, which needs to be added as a DNS TXT record for the domain to prove to the key server that the administrator does indeed have authorization to add users for that domain. Once that is done, the server administrator can add authorized users (called Writers) to the domain. The users are identified by the email addresses they used to sign up on the key server.

The server need not (and should not) be run as root; a dedicated upspin user should be created to run it. The upspinserver binary listens on port 443 (with the capability needed to do so granted via setcap) and obtains TLS certificates from Let's Encrypt.
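
For example, granting the binary the right to bind to port 443 without root privileges can be done with something like the following (the installation path is hypothetical):

    $ sudo setcap cap_net_bind_service=+ep /path/to/upspinserver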

Adding, retrieving, and deleting files from an Upspin account can be done via commands to the upspin client, or via a FUSE interface that mounts the installation as a filesystem.
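
By way of example (the subcommand names shown are assumptions based on the reference client rather than verified against it), listing a directory and fetching a file from the command line looks something like:

    $ upspin ls alice@domain.com/
    $ upspin get alice@domain.com/dir/datafile > datafile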

Conclusion

Upspin is currently experimental software. As such, it is rather rough around the edges. When I tried to set up my own installation with the client running on my workstation in Malaysia and the servers on Amazon Web Services on the west coast of the US, RPC timeouts occurred when storing files larger than a few megabytes using the Upspin command-line interface. The FUSE-based filesystem that the installation comes with also did not work for me, as it ended up hanging whenever the Upspin filesystem was mounted. An strace of the operation indicated that it was waiting for a read operation, most likely from the network, which would be in line with the RPC timeouts experienced using the Upspin commands.

In addition to needing a host on which to try out Upspin, setting up a server also requires a domain and some server administration. Casual users will not want to invest the time and resources to do this, so at this stage Upspin is still very much for developers and advanced users.

The Upspin project is trying to address a problem (sharing files) that already has a few solutions. After all, a universal naming mechanism for objects already exists in the form of the W3C's Uniform Resource Identifiers (URIs). One can imagine an Upspin-like service built entirely with existing protocols such as HTTP, using any of the existing key-server technologies. We shall have to see where Upspin fits in as time goes on.

It is still unclear how Upspin will ultimately be delivered to a critical mass of users; the project is in its early stages and the design issues are still being worked out. It is likely that it will catch on first among cloud providers and special-purpose wide-area networks. If that happens, the biggest selling point might be the simple access-control list mechanism and the automatic encryption that Upspin provides.

Comments (9 posted)

Brief items

Development quotes of the week

If you're big into the Internet of Things, and you've got everything from Hue light bulbs to Samsung refrigerators to smart door locks and thermostats, I hope you paid attention to that last section—it's the reason why sometimes your Wi-Fi sucks and your devices drop off the network even though you've got four bars everywhere in the house. If your smart TV is streaming 4K Netflix and your kid is watching YouTube and your spouse is playing DOTA, there may not be enough bandwidth left for the thermostat in the living room to get a packet in edgewise, and adding more RF signal strength isn't going to fix the problem.
Jim Salter

Autoconf and Meson walk into a bar…

…and by the time Meson has finished its second drink, Autoconf realizes it’s missing its wallet.

Ernestas Kulik (Thanks to Paul Wise)

Comments (7 posted)

Firefox 52.0

Firefox 52.0 has been released. This version features support for WebAssembly, adds user warnings for non-secure HTTP pages with logins, implements the Strict Secure Cookies specification which forbids insecure HTTP sites from setting cookies with the "secure" attribute, and enhances Sync to allow users to send and open tabs from one device to another. See the release notes for more information.

Comments (137 posted)

GnuPG 2.1.19 released

GnuPG 2.1.19 has been released. This version will print a warning if Tor mode is requested but the Tor daemon is not running; there is also a new status code, DECRYPTION_KEY, to print the actual private key used for decryption. gpgv has new options --log-file and --debug. scd now supports multiple card readers, the option --debug-disable-ticker has been removed, and detection of card insertion and removal has been improved. See the announcement for more details.

Full Story (comments: none)

Samba 4.6.0 Available for Download

Samba 4.6 has been released with many new features and changes. New features include Kerberos client encryption types, a new option for owner inheritance, multi-process Netlogon support, new options for controlling TCP ports used for RPC services, and more.

Full Story (comments: 3)

systemd 233

Systemd 233 has been released. Changes include modifications to the "hybrid" control-group mode to improve compatibility with "legacy" cgroups-v1 setups, the ability to select the default control-group setup mode at boot time via a set of kernel command-line parameters, the installation of D-Bus policy files into /usr rather than /etc, a requirement that all Python scripts shipped with systemd use Python 3, and much more.

Full Story (comments: 3)

Newsletters and articles

Page editor: Rebecca Sobol

Announcements

Brief items

FSF: Three devices from Vikings GmbH now FSF-certified to respect your freedom

The Free Software Foundation has awarded Respects Your Freedom (RYF) certification to three devices from Vikings GmbH: the Vikings D16 Mainboard, the Vikings X200 libre-friendly laptop, and the Vikings USB stereo sound adapter. "These are their first products to be awarded RYF certification. The Vikings D16 Mainboard is the first server or workstation mainboard certified by the FSF."

Comments (none posted)

VMware Becomes Linux Foundation Gold Member

The Linux Foundation has announced that VMware has become a Gold member. "VMware has been involved in open source for years, by contributing to existing open source projects as well as open sourcing some of the company’s own code. This includes significant participation in and contributions to Linux Foundation projects such as Open Network Automation Platform (ONAP), Cloud Foundry and Open vSwitch, as well as other open source projects including OpenStack."

Full Story (comments: 2)

Articles of interest

Free Software Supporter Issue 107, March 2017

This month the Free Software Foundation's newsletter covers LibrePlanet, response to Tim Berners-Lee's defeatist post about DRM in Web standards, FSF Job Opportunity: Campaigns Manager, RMS on the road, what's a cryptovalentine?, a battle rages for the future of the Web, Replicant 6.0 development updates, and several other topics.

Full Story (comments: none)

DRM in HTML5 is a victory for the open Web, not a defeat (Ars Technica)

Ars Technica argues that Encrypted Media Extensions (EME), a framework that will allow the delivery of DRM-protected media through the browser, will be good for the web. "Moreover, a case could be made that EME will make it easier for content distributors to experiment with—and perhaps eventually switch to—DRM-free distribution. Under the current model, whether it be DRM-capable browser plugins or DRM-capable apps, a content distributor such as Netflix has no reason to experiment with unprotected content. Users of the site's services are already using a DRM-capable platform, and they're unlikely to even notice if one or two videos (for example, one of the Netflix-produced broadcasts like House of Cards or the forthcoming Arrested Development episodes) are unprotected. It wouldn't make a difference to them."

The Free Software Foundation has a different take on EME. "We have been fighting EME since 2013, and we will not back off because the W3C presents weak guidance as a fig leaf for DRM-using companies to hide their disrespect for users' rights. Companies can impose DRM without the W3C; but we should make them do it on their own, so it is seen for what it is—a subversion of the Web's principles—rather than normalize it or give it endorsement."

Comments (35 posted)

FSFE: What happened in Munich

The Free Software Foundation Europe has put out a release providing its view of the decision in Munich to possibly back away from its free-software-based infrastructure. "Since this decision was reached, the majority of media have reported that a final call was made to halt LiMux and switch back to Microsoft software. This is, however, not an accurate representation of the outcome of the city council meeting. We studied the available documentation and our impression is that the last word has not been spoken."

Full Story (comments: 13)

SCALE 15x Wraps

The Southern California Linux Expo (SCALE) team wraps up 15X. "Recapping, SCALE 15x was something of a mosaic. There were sessions and side conversations about virtual machines, microcontrollers, boot loaders, management tools … each a small, shiny, interesting bit. In later sessions and on the expo floor, you meet people who are assembling those shiny bits into a much bigger picture. DIY hardware boards are transformed into a clustered, auto-scaling container platform, for example."

Full Story (comments: none)

New Books

Arduino Playground -- new from No Starch Press

No Starch Press has released "Arduino Playground" by Warren Andrews.

Full Story (comments: none)

Calls for Presentations

PyCon ZA 2017 - Call for Speakers

PyCon ZA will take place October 5-6 in Cape Town, South Africa. The call for participation includes talks, tutorials, demos, open spaces, and sprints.

Full Story (comments: none)

CFP Deadlines: March 9, 2017 to May 8, 2017

The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.

Deadline | Event dates | Event | Location
March 12 | April 26 | foss-north | Gothenburg, Sweden
March 15 | May 13-May 14 | Open Source Conference Albania 2017 | Tirana, Albania
March 18 | June 19-June 20 | LinuxCon + ContainerCon + CloudOpen China | Beijing, China
March 20 | May 4-May 6 | Linuxwochen Wien 2017 | Wien, Austria
March 27 | July 10-July 16 | SciPy 2017 | Austin, TX, USA
March 28 | October 23-October 24 | All Things Open | Raleigh, NC, USA
March 31 | June 26-June 28 | Deutsche Openstack Tage 2017 | München, Germany
April 1 | April 22 | 16. Augsburger Linux-Infotag 2017 | Augsburg, Germany
April 2 | August 18-August 20 | State of the Map | Aizuwakamatsu, Fukushima, Japan
April 10 | August 13-August 18 | DjangoCon US | Spokane, WA, USA
April 10 | July 22-July 27 | Akademy 2017 | Almería, Spain
April 14 | June 30 | Swiss PGDay | Rapperswil, Switzerland
April 16 | July 9-July 16 | EuroPython 2017 | Rimini, Italy
April 18 | October 2-October 4 | O'Reilly Velocity Conference | New York, NY, USA
April 20 | April 28-April 29 | Grazer Linuxtage 2017 | Graz, Austria
April 20 | May 17 | Python Language Summit | Portland, OR, USA
April 23 | July 28-August 2 | GNOME Users And Developers European Conference 2017 | Manchester, UK
April 28 | September 21-September 22 | International Workshop on OpenMP | Stony Brook, NY, USA
April 30 | September 21-September 24 | EuroBSDcon 2017 | Paris, France
May 1 | May 13-May 14 | Linuxwochen Linz | Linz, Austria
May 1 | October 5 | Open Hardware Summit 2017 | Denver, CO, USA
May 2 | October 18-October 20 | O'Reilly Velocity Conference | London, UK
May 5 | June 5-June 7 | coreboot Denver2017 | Denver, CO, USA
May 6 | September 13-September 15 | Linux Plumbers Conference 2017 | Los Angeles, CA, USA
May 6 | September 11-September 14 | Open Source Summit NA 2017 | Los Angeles, CA, USA
May 7 | August 3-August 8 | PyCon Australia 2017 | Melbourne, Australia

If the CFP deadline for your event does not appear here, please tell us about it.

Upcoming Events

EuroPython 2017: Official dates available

EuroPython will take place July 9-16 in Rimini, Italy. "Conference tickets will allow attending Beginners’ Day, keynotes, talks, trainings, poster sessions, interactive sessions, panels and sprints."

Full Story (comments: none)

Events: March 9, 2017 to May 8, 2017

The following event listing is taken from the LWN.net Calendar.

Date(s) | Event | Location
March 6-March 10 | Linaro Connect | Budapest, Hungary
March 10-March 12 | conf.kde.in 2017 | Guwahati, Assam, India
March 11-March 12 | Chemnitzer Linux-Tage | Chemnitz, Germany
March 16-March 17 | IoT Summit | Santa Clara, CA, USA
March 17-March 19 | MiniDebConf Curitiba 2017 | Curitiba, Brazil
March 17-March 19 | FOSS Asia | Singapore, Singapore
March 18 | Open Source Days Copenhagen | Copenhagen, Denmark
March 18-March 19 | curl up - curl meeting 2017 | Nuremberg, Germany
March 20-March 21 | Linux Storage, Filesystem & Memory Management Summit | Cambridge, MA, USA
March 22-March 23 | Vault | Cambridge, MA, USA
March 25-March 26 | LibrePlanet 2017 | Cambridge, MA, USA
March 28-March 31 | PGConf US 2017 | Jersey City, NJ, USA
April 3-April 7 | DjangoCon Europe | Florence, Italy
April 3-April 4 | Power Management and Scheduling in the Linux Kernel Summit | Pisa, Italy
April 3-April 6 | ‹Programming› 2017 | Brussels, Belgium
April 3-April 6 | Open Networking Summit | Santa Clara, CA, USA
April 5-April 6 | Dataworks Summit | Munich, Germany
April 6-April 8 | Netdev 2.1 | Montreal, Canada
April 10-April 13 | IXPUG Annual Spring Conference 2017 | Cambridge, UK
April 17-April 20 | Dockercon | Austin, TX, USA
April 21 | Osmocom Conference 2017 | Berlin, Germany
April 22 | 16. Augsburger Linux-Infotag 2017 | Augsburg, Germany
April 26 | foss-north | Gothenburg, Sweden
April 28-April 29 | Grazer Linuxtage 2017 | Graz, Austria
April 28-April 30 | Penguicon | Southfield, MI, USA
May 2-May 4 | 3rd Check_MK Conference | Munich, Germany
May 2-May 4 | samba eXPerience 2017 | Goettingen, Germany
May 2-May 4 | Red Hat Summit 2017 | Boston, MA, USA
May 4-May 5 | Lund LinuxCon | Lund, Sweden
May 4-May 6 | Linuxwochen Wien 2017 | Wien, Austria
May 6-May 7 | LinuxFest Northwest | Bellingham, WA, USA
May 6-May 7 | Community Leadership Summit 2017 | Austin, TX, USA
May 6-May 7 | Debian/Ubuntu Community Conference - Italy | Vicenza, Italy

If your event does not appear here, please tell us about it.

Page editor: Rebecca Sobol


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds