LWN.net Weekly Edition for February 21, 2019
Welcome to the LWN.net Weekly Edition for February 21, 2019
This edition contains the following feature content:
- Patent exhaustion and open source: a FOSDEM talk on an interesting intersection between free software and software patent law.
- The case of the supersized shebang: a kernel regression resulting from a series of errors.
- Per-vector software-interrupt masking: a proposal for improving software-interrupt latency.
- Some challenges for GNOME online accounts: centralized access to web-based services is not as easy as it seems.
- Producing an application for both desktop and mobile: how the Subsurface project supports several different systems with the same code base.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
Patent exhaustion and open source
When patents and free software crop up together, the usual question is about patent licensing. Patent exhaustion — the principle that patent rights don't reach past the first sale of a product — is much less frequently discussed. At FOSDEM 2019, US lawyer Van Lindberg argued that several US court decisions related to exhaustion, most of them recent but some less so, could come together to have surprising beneficial effects for free software. He was clear that the argument applied only in the US but, since court systems tend to look to each other for consistency's sake, and because Lindberg is an engaging speaker, the talk was of great interest even in Brussels.
![Van Lindberg](https://static.lwn.net/images/2019/fosdem-lindberg-sm.jpg)
A patent is a limited legal monopoly granted to protect an invention, giving the holder the right to exclude others from using, making, selling, and importing the invention (including things that embody the invention) for a fixed period of time. Much has been said and written over the years about the extension of patents to cover ideas that are expressed in software, but software patents are definitely with us at the moment.
There are, however, a number of limitations on the rights that a patent grants. One of these is patent exhaustion, which protects the ability of those lawfully in possession of goods embodying patents to use, sell, or import those goods without interference from the patent holder. Exhaustion prevents the patent holder from profiting more than once from the sale of any particular item; in Lindberg's words, as soon as the patent holder puts something "into the stream of commerce", the patent rights are exhausted. If Alice holds a patent for an invention embodied in a widget, and she sells a widget to Bob, then Bob is protected against accusations of patent infringement because he acquired the widget from the patent holder. If Bob sells his widget to Carol, she is similarly protected; not because she has licensed the patent from Alice, but because Alice's patent interest in that widget was exhausted by that first sale to Bob.
Naturally, over the years enthusiastic patent holders have tried a number of tricks to do end-runs around patent exhaustion, but often the courts have knocked them back. Lindberg outlined the limiting principles provided by five such cases that clarify the extent of patent exhaustion and underlie his surprising argument:
- Exhaustion results from any authorized transfer: Lexmark sells printers, said Lindberg, and the company really wants you to buy its expensive ink cartridges. So Lexmark sold ink cartridges embodying patented ideas on the condition that empty cartridges must be returned to them. Someone else acquired empty cartridges, refilled and sold them, and got sued for patent infringement. In Impression v. Lexmark, the US Supreme Court said that wasn't going to fly. There might be a contractual issue between Lexmark and the original purchasers of the cartridges, but the patents were exhausted by the sale of the cartridges, and patent law could not be used to pursue the reseller.
- Exhaustion applies to both system and method patents: There is more than one kind of patent. Among those types are systems patents, each of which covers a novel "product, device, or apparatus", which is to say, a tangible item. There are also method patents, each of which covers a novel "series of acts or steps", which is to say, a way of doing things. In Quanta v. LG it was argued that a method patent couldn't be sold embodied in an item in the same way that a systems patent could, and so the patent could not be exhausted by the sale of an item. The US Supreme Court said that "a patented method may not be sold in the same way as an article or device, but methods nonetheless may be 'embodied' in a product, the sale of which exhausts patent rights".
- Exhaustion applies even when there is an express reservation of patent rights: Some people, said Lindberg, have tried to exclude patent rights from the sale. There was a Xerox PARC license that was essentially the BSD license with added terms that limited the license only to copyright, excluding any license of patent rights unless the rights-holder explicitly added their name. But in Impression v. Lexmark, the US Supreme Court said that "this court accordingly has long held that, even when a patentee sells an item under an express restriction, the patentee does not retain patent rights in that product".
- Exhaustion applies to any authorized transfer, not just sales: Others have tried to argue that if they give something away, instead of selling it, then patent rights are not exhausted. In LifeScan Scotland v. Shasta Techs, though, the US Court of Appeals for the Federal Circuit was particularly unimpressed by this argument, saying that "a patentee cannot evade patent exhaustion principles by choosing to give the article away rather than charging a particular price for it". Lindberg noted that this is directly applicable to free and open-source software (FOSS). As he amusingly summarized it, if you choose to transfer your widget for zero dollars, you can't come back later and complain you didn't charge enough and should therefore still have patent rights to enforce. So even if we didn't have Jacobsen v. Katzer telling us that there is an economic benefit associated with having people use your free software, the zero-cost nature of free software wouldn't prevent the doctrine of patent exhaustion from applying.
- Exhaustion applies to foreign sales: Also from Impression v. Lexmark, an authorized sale outside the US, just as one within the US, exhausts all rights under the Patent Act.
- Exhaustion prevents patent assertions against authorized recipients of FOSS: In Cascades Computer Innovation v. Samsung Electronics, the dispute arose because Cascades had licensed some patents to Google for use in certain ways inside the Dalvik virtual machine with a condition that Google could only use the license for Google products. One such product was the Android Open Source Project (AOSP). Samsung took the AOSP code, compiled it, and distributed it on its phones. Cascades sued, but the US District Court for the Northern District of Illinois, according to Lindberg, said "no", because once you've put something into the stream of commerce, your patent is exhausted. The AOSP came from a producer with a patent license; the fact that Google chose not to charge money for its product doesn't preclude exhaustion.
- Code written by non-patent-licensees can still exhaust the patent if it is distributed by a licensee: Lindberg was quite coy about the fifth case, Intel v. ULSI, when he introduced it earlier in the talk. Coming back to it, he described it as the kicker, the one with the twist, even though coming from 1993 it's by far the oldest of the judgments. In this case, HP was given a license by Intel to be a foundry for certain computer chips: to manufacture and sell them to third parties. Another company, ULSI, designed its own, similar chip, and asked HP to manufacture it. HP did so, at which point Intel sued ULSI for infringing Intel's patents, as ULSI had obtained no license from Intel. The US Court of Appeals for the Federal Circuit held that because HP had manufactured the chips, and because at the time it did so it held a license to the patents, no infringement had occurred; any sales of ULSI chips were lawful and thus exhausted those patents. Code is a good, just like any other product, as Lindberg confirmed in response to a later question, so the passage of code through the hands of a patent licensee effectively "sanitizes" the code with respect to those patents, exhausting them in the process.
Having established his list of principles for patent exhaustion, Lindberg described a hypothetical scenario. Suppose that Alice made a chip that embodied a patent owned by Bob. Alice had no license to do it, she just created this chip, ran off millions of copies, and they got embedded in phones; that sort of thing, said Lindberg, happens all the time. Carol realizes this chip is embedded in phones which can be bought from Bob's Phone Shop. She buys a lot of these phones, cracks them open, extracts Alice's chips, and makes a nice business out of reselling them. Bob sues Carol for violating his patents, saying that he didn't give Alice a license for the chips that she made and Carol bought. Carol, however, argues that she got the phones from Bob, and using the principles above, wins. It turns out, said Lindberg, that the analysis is no different if it's a piece of software than if it's a chip in a phone.
We in the free software world have repositories, distributions, and mirrors; copies of source code are hosted by companies willy-nilly. Suppose that some company had mirrored a copy of a Linux distribution, with its thousands of constituent programs, each of which might embody one or more patents. Then that same company, because it is an authorized licensee for such of those patents as the company itself either held or had a right to use (by virtue of being in one or more patent pools or cross-licensing arrangements), would have exhausted those patent rights with respect to that software. Lindberg did add a caveat, however: courts frequently try to avoid surprising outcomes, therefore a court might follow the argument but decide not to allow it anyway.
At this point, Lindberg reminded attendees that Microsoft bought GitHub. After a short pause, the entire room, with a large proportion of lawyers in the audience, giggled, a sound that can only be described as chilling, then applauded. He then went further and proposed an N-way merge across copies of code bases sanitized by different distributors with respect to their different patent portfolios, to create code bases that are exhausted with respect to all patents that all those various distributors are authorized to use.
This was a fairly difficult talk to follow, and it's not an argument I've heard before. But the audience reception was fairly friendly; there were a couple of detailed legal questions about the implications of other judgments, but Lindberg didn't seem to feel they were fatal. I hope to hear this argument a lot more in the future because, if it works, it bodes well indeed for controlling software patents.
The original talk can be seen and heard here.
[We would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Brussels for FOSDEM.]
The case of the supersized shebang
Regressions are an unavoidable side effect of software development; the kernel is no different in that regard. The 5.0 kernel introduced a change in the handling of the "#!" (or "shebang") lines used to indicate which interpreter should handle an executable text file. The problem has been duly fixed, but the incident shows how easy it can be to introduce unexpected problems and highlights some areas where the kernel's development process does not work as well as we might like.

By longstanding Unix convention, an attempt to execute a file that does not have a recognized binary format will result in that file being passed to an interpreter. By default, the interpreter is a shell, which will interpret the file as a shell script. If, however, the file starts with the characters "#!", the remainder of the first line will be treated as the name of the interpreter to use (and possibly arguments to be passed to that interpreter). This mechanism allows programs written in almost any interpreted language to be executed directly; the user need never know which interpreter is actually doing the work behind the scenes.
[Update: as noted in the comments, the above behavior is the result of both kernel and user-space code; in particular, the default to a shell is implemented within current shells and C libraries.]
The array used to hold the shebang line is defined to be 128 bytes in length. That naturally leads to the question of what happens if the line exceeds that length. In current kernels, the line will simply be truncated to fit the buffer, after which execution proceeds as normal. Or, at least, as normal as can be expected given that part of the shebang line is now missing. Recently, Oleg Nesterov decided that this behavior is wrong; it could cause misinterpreted arguments or, should the truncated line happen to be the valid name of an interpreter executable in its own right, run the wrong interpreter entirely. He put together a patch (merged for 5.0-rc1) changing that behavior: rather than truncating an overlong shebang line, the kernel would fail the execution attempt entirely, causing user space to fall back to the default shell.
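The behavior change is easy to model. The following is a toy Python sketch, not the kernel's actual C code: the 128-byte BINPRM_BUF_SIZE limit is real, but the parsing logic is simplified and the paths are made up. It contrasts the old truncate-and-continue behavior with the 5.0-rc1 reject-on-overflow behavior:

```python
# Toy model of the kernel's shebang handling (load_script() in
# fs/binfmt_script.c). BINPRM_BUF_SIZE really is 128 bytes in the kernel;
# everything else here is a simplified stand-in with hypothetical paths.

BINPRM_BUF_SIZE = 128

def parse_shebang(first_line, reject_overlong=False):
    """Return (interpreter, args), or None if the kernel would refuse."""
    buf = first_line[:BINPRM_BUF_SIZE]       # only this much is ever read
    if not buf.startswith("#!"):
        return None
    if reject_overlong and "\n" not in buf:  # 5.0-rc1: no newline fit in
        return None                          # the buffer, so fail the exec
    line = buf[2:].split("\n")[0].strip()    # pre-5.0: silently truncated
    if not line:
        return None
    parts = line.split(None, 1)
    return parts[0], (parts[1] if len(parts) > 1 else "")

# A NixOS-style shebang far longer than the buffer (hypothetical path):
long_line = ("#! /nix/store/abcdef-perl-5.28.1/bin/perl "
             + "-I/nix/store/site_perl " * 20 + "\n")

old_result = parse_shebang(long_line)                        # truncated args
new_result = parse_shebang(long_line, reject_overlong=True)  # refused
```

With the old behavior, the interpreter path survives truncation but its arguments are silently cut short; with the new behavior, the whole attempt fails and user space falls back to its default.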
Trouble for NixOS
The NixOS distribution, it seems, takes an unusual approach to the management of scripts. As noted in a problem report posted by Samuel Dionne-Riel on February 13, NixOS scripts can have shebang lines like:
#! /nix/store/mbwav8kz8b3y471wjsybgzw84mrh4js9-perl-5.28.1/bin/perl -I/nix/store/x6yyav38jgr924nkna62q3pkp0dgmzlx-perl5.28.1-File-Slurp-9999.25/lib/perl5/site_perl -I/nix/store/ha8v67sl8dac92r9z07vzr4gv1y9nwqz-perl5.28.1-Net-DBus-1.1.0/lib/perl5/site_perl -I/nix/store/dcrkvnjmwh69ljsvpbdjjdnqgwx90a9d-perl5.28.1-XML-Parser-2.44/lib/perl5/site_perl -I/nix/store/rmji88k2zz7h4zg97385bygcydrf2q8h-perl5.28.1-XML-Twig-3.52/lib/perl5/site_perl
This line (all one line in the script itself, however it may wrap on the screen) exceeds the maximum length by a fair amount, triggering the new code. The end result is that the Perl interpreter is not invoked as expected and the attempt to execute the file fails. User-space code reacts by passing the script to a shell, which rather messily fails to do the right thing with it. In other words, a change intended to prevent scripts from being passed to the wrong interpreter caused the system to start passing scripts to the wrong interpreter. The NixOS developers, rightly, saw this change as a regression; something that used to work no longer does with the 5.0 kernel.
One might well wonder just how things worked before, since a truncated version of that shebang line is still wrong. It turns out that the Perl interpreter is able to detect this truncation; it rereads the first line itself and sets its arguments properly. As long as the interpreter itself is the correct one, things will work as expected. As of 5.0-rc1, though, the correct interpreter would no longer be invoked, and things went downhill from there.
The kernel project's policy on this kind of change is clear, but Linus Torvalds reiterated it in this case anyway:
Yes, maybe it never *should* have worked. And yes, it's sad that people apparently had cases that depended on this odd behavior, but there we are.
The change has since been reverted, so NixOS will be able to run 5.0 kernels. There is work being done to achieve the original goal (preventing the kernel from possibly running the wrong interpreter) while not breaking existing users; that is proving harder than one might expect and will almost certainly have to wait for 5.1.
Regressions in stable kernels
Had that been the end of the story, it would have been just another case of a regression introduced during the merge window, then corrected during the stabilization period. But, as it happens, this change found its way into the 4.20.8, 4.19.21, 4.14.99, and 4.9.156 stable kernel updates, despite the fact that neither the author nor the maintainer who merged it (Andrew Morton) had marked it for stable backporting. Morton complained, noting that he had concluded that the patch should not be backported, but that backport had happened anyway.
Not that long ago, the lack of an explicit tag would prevent a patch from being backported to the stable releases, but the situation has changed somewhat in recent years.

Along with many of the other changes in that set of especially large stable kernel updates, Nesterov's patch had been automatically selected for backporting by Sasha Levin's machine-learning system. Greg Kroah-Hartman suggested that concerned developers and users should have noticed this patch and complained before it was shipped: "This came in through Sasha's tools, which give people a week or so to say 'hey, this isn't a stable patch!' and it seems everyone ignored that". The implication is that, had people been paying attention, this regression would not have found its way into the stable updates.
The patch in question was flagged for backporting as part of a set of 304 selected for 4.20 on January 28. It then found its way into the 4.20.8 review notification on February 11. That stable-release cycle gave developers and users a mere 352 patches to look over, but perhaps some understanding can be extended to those who didn't quite manage to evaluate the whole set in time. In truth, of course, there is little chance that anybody can truly look at that patch volume (multiplied by several major releases receiving stable updates at the same time) and pick out the bad patch. So some developers, such as Michal Hocko, have said (again) that the process of moving patches into stable releases should be slower, perhaps waiting until those patches have appeared in a major release from Torvalds. That is especially true, he said, of the "nice-to-have" patches that don't address problems users are complaining about.
Levin does not think that will help. As a general rule, that might even be true, but it happens not to be in this case: the NixOS developers discovered the problem on January 8, and filed a report in the kernel bugzilla on February 2. The commit causing the problem had been identified (through bisection) on February 3. Shipping the regression in the stable updates had nothing to do with its discovery and reversion, in other words — the problem had already been identified well before the stable kernels shipped it.
Even so, Levin remains adamant that the process of automatically selecting patches for backporting is the right thing to do.
This is undoubtedly an issue that will arise again; there are a great many fixes going into the kernel, and users of stable kernels (almost all of us) benefit from getting those fixes. But there are clearly some things that can be improved here. There was no test for this particular regression because it had never occurred to anybody that things could break in that way; we now know better, but no tests have been added yet. A kernel bugzilla instance that doesn't prevent a known-bad patch from getting into a stable release is clearly not doing its job; the kernel community as a whole lacks a convincing story on how bugs should be reported and tracked. The kernel development process works well in many ways, but that does not mean that it is without some glaring problems.
Per-vector software-interrupt masking
Software interrupts (or "softirqs") are one of the oldest deferred-execution mechanisms in the kernel, and that age shows at times. Some developers have been occasionally heard to mutter about removing them, but softirqs are too deeply embedded into how the kernel works to be easily ripped out; most developers just leave them alone. So the recent per-vector softirq masking patch set from Frederic Weisbecker is noteworthy as an exception to that rule. Weisbecker is not getting rid of softirqs, but he is trying to reduce their impact and improve their latency.

Hardware interrupts are the means by which the hardware can gain a CPU's attention to signal the completion of an I/O operation or some other situation of interest. When an interrupt is raised, the currently running code is (usually) preempted and an interrupt handler within the kernel is executed. A cardinal rule for interrupt handlers is that they must execute quickly, since they interfere with the other work the CPU is meant to be doing. That usually implies that an interrupt handler will do little more than acknowledge the interrupt to the hardware and set aside enough information to allow the real processing work to be done in a lower-priority mode.
The kernel offers a number of deferred-execution mechanisms through which that work can eventually be done. In current kernels, the most commonly used of those is workqueues, which can be used to queue a function call to be run in kernel-thread context at some later time. Another is tasklets, which execute at a higher priority than workqueues; adding new tasklet users tends to be mildly discouraged for reasons we'll get to. Other kernel subsystems might use timers or dedicated kernel threads to get their deferred work done.
Softirqs
Then, there are softirqs which, as their name would suggest, are a software construct; they are patterned after hardware interrupts, but hardware interrupts are enabled while software interrupts execute. Softirqs have assigned numbers ("vectors"); "raising" a particular softirq will cause the handler function for the indicated vector to be called at a convenient time in the near future. That "convenient time" is usually either at the end of hardware-interrupt processing or when a processor that has disabled softirq processing re-enables it. Softirqs thus run outside of the CPU scheduler as a relatively high-priority activity.
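The mechanism can be pictured as a per-CPU bitmask of pending vectors. The sketch below is a single-threaded Python toy, not the kernel implementation; the vector names mirror the kernel's, but the mechanics are heavily simplified. It shows how raising a softirq merely sets a bit that is acted upon later, lower-numbered vectors first:

```python
# Toy, single-CPU model of the softirq machinery: "raising" a vector sets
# a bit in a pending mask, and the handlers run later, in vector order.

HI_SOFTIRQ, TIMER_SOFTIRQ, NET_TX_SOFTIRQ, NET_RX_SOFTIRQ = range(4)

pending = 0          # one bit per vector, like the kernel's per-CPU mask
log = []

def raise_softirq(nr):
    """Mark vector nr as needing service; nothing runs yet."""
    global pending
    pending |= 1 << nr

def do_softirq(handlers):
    """Run every pending handler, lowest-numbered vector first."""
    global pending
    while pending:
        nr = (pending & -pending).bit_length() - 1   # lowest set bit
        pending &= ~(1 << nr)
        handlers[nr]()

handlers = {
    HI_SOFTIRQ:     lambda: log.append("hi"),
    TIMER_SOFTIRQ:  lambda: log.append("timer"),
    NET_TX_SOFTIRQ: lambda: log.append("net_tx"),
    NET_RX_SOFTIRQ: lambda: log.append("net_rx"),
}

raise_softirq(NET_RX_SOFTIRQ)   # e.g. from a network interrupt handler
raise_softirq(HI_SOFTIRQ)
do_softirq(handlers)            # the "convenient time in the near future"
```

Note that the high-priority vector runs before the network-receive vector regardless of the order in which they were raised; priority comes from the vector number.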
In the 5.0-rc kernel, there are ten softirq vectors defined:
- HI_SOFTIRQ and TASKLET_SOFTIRQ are both for the execution of tasklets; this is part of why tasklets are discouraged. High-priority tasklets are run ahead of any other softirqs, while normal-priority tasklets are run in the middle of the pack.
- TIMER_SOFTIRQ is for the handling of timer events. HRTIMER_SOFTIRQ is also defined; it was once used for high-resolution timers, but that has not been the case since this change was made for the 4.2 release.
- NET_TX_SOFTIRQ and NET_RX_SOFTIRQ are used for network transmit and receive processing, respectively.
- BLOCK_SOFTIRQ handles block I/O completion events; this functionality was moved to softirq mode for the 2.6.16 kernel in 2006.
- IRQ_POLL_SOFTIRQ is used by the irq_poll mechanism, which was generalized from the block interrupt-polling mechanism for the 4.5 release in 2015. Its predecessor, BLOCK_IOPOLL_SOFTIRQ, was added for the 2.6.32 release in 2009; no softirq vectors have been added since then.
- SCHED_SOFTIRQ is used by the scheduler to perform load-balancing and other scheduling tasks.
- RCU_SOFTIRQ performs read-copy-update processing. There was an attempt made by the late Shaohua Li in 2011 to move this processing to a kernel thread, but performance regressions forced that change to be reverted shortly thereafter.
Thomas Gleixner once summarized the software-interrupt mechanism as "a conglomerate of mostly unrelated jobs, which run in the context of a randomly chosen victim w/o the ability to put any control on them".

For historical reasons that long predate Linux, software interrupts also sometimes go by the name "bottom halves" — they are the half of interrupt processing that is done outside of hardware interrupt mode. For this reason, one will often see the term "BH" used to refer to software interrupts.
Since software interrupts execute at a high priority, they can create high levels of latency in the system if they are not carefully managed. As little work as possible is done in softirq mode, but certain kinds of system workloads (high network traffic, for example) can still cause softirq processing to adversely impact the system as a whole. The kernel will actually kick softirq handling out to a set of ksoftirqd kernel threads if it starts taking too much time, but there can be performance costs even if the total CPU time used by softirq processing is relatively low.
Softirq concurrency
Part of the problem, especially for latency-sensitive workloads, results from the fact that softirqs are another source of concurrency in the system that must be controlled. Any work that might try to access data concurrently with a softirq handler must use some sort of mutual exclusion mechanism and, since softirqs are essentially interrupts, special care must be taken to avoid deadlocks. If, for example, a kernel function acquires a spinlock, but is then interrupted by a softirq that tries to take the same lock, that softirq handler will wait forever — the sort of situation that latency-sensitive users tend to get especially irritable over.
To avoid such problems, the kernel provides a number of ways to prevent softirq handlers from running for a period of time. For example, a call to spin_lock_bh() will acquire the indicated spinlock and also disable softirq processing for as long as the lock is held, preventing the deadlock scenario described above. Any subsystem that uses software interrupts must take care to ensure that they are disabled in places where unwanted concurrency could occur.
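The idea behind spin_lock_bh() can be illustrated with a deliberately simplified, single-CPU Python model. The function names echo the kernel's local_bh_disable() and local_bh_enable(), but the deferral logic here is invented for illustration: while the "lock" is held with softirqs disabled, a raised softirq is deferred rather than run, so it cannot deadlock against the lock holder:

```python
# Toy, single-CPU model of why spin_lock_bh() pairs lock acquisition with
# softirq disabling: a softirq raised in the critical section is deferred
# instead of running (and spinning forever on the same lock).

bh_disable_count = 0     # local_bh_disable() calls nest, as in the kernel
deferred = []
ran = []

def local_bh_disable():
    global bh_disable_count
    bh_disable_count += 1

def local_bh_enable():
    global bh_disable_count
    bh_disable_count -= 1
    if bh_disable_count == 0:        # outermost enable: catch up on softirqs
        while deferred:
            deferred.pop(0)()

def raise_softirq(handler):
    if bh_disable_count > 0:
        deferred.append(handler)     # cannot run now; BHs are masked
    else:
        handler()

local_bh_disable()                   # spin_lock_bh(): disable BHs, take lock
raise_softirq(lambda: ran.append("softirq"))
in_critical_section = list(ran)      # handler has not run inside the lock
local_bh_enable()                    # spin_unlock_bh(): drop lock, enable BHs
```

The deferred handler runs only once the outermost critical section ends, which is exactly the property the kernel needs to avoid the deadlock described above.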
Linux software interrupts have an interesting problem — interesting because it is seemingly obvious but has been there since the beginning. The softirq vectors described above are all independent of each other, and their handlers are unlikely to interfere with each other. Network transmit processing should not be bothered if the block softirq handler runs concurrently, for example. So code that must protect against concurrent access from a softirq handler need only disable the one handler that it might race with, but functions like spin_lock_bh() disable all softirq handling. That can cause unrelated handlers to be delayed needlessly, once again leading to bad temper in the low-latency camp.
Per-vector masking
Weisbecker's answer to this is to allow individual softirq vectors to be disabled while the others remain enabled. The first attempt, posted in October 2018, changed the prototypes of functions like spin_lock_bh(), local_bh_disable(), and rcu_read_lock_bh() to contain a mask of the vectors to disable. There was just one little problem: there are a lot of callers to those functions in the kernel. So the bottom line for that patch set was:
945 files changed, 13857 insertions(+), 9767 deletions(-)
The kernel community has gotten good at merging large, invasive patch sets, but that one still pushed the limits a bit. That is especially true given that almost all call sites still disabled all vectors; doing anything else requires careful auditing of every change. The second time around, Weisbecker decided to take an easier approach and define new functions, leaving the old ones unchanged. So this patch set introduces functions like:
    unsigned int spin_lock_bh_mask(spinlock_t *lock, unsigned int mask);
    unsigned int local_bh_disable_mask(unsigned int mask);
    /* ... */
After the call, only the softirq vectors indicated by the given mask will have been disabled; the rest can still be run if they were enabled before the call. The return value of these functions is the previous set of masked softirqs; it is needed when re-enabling softirqs to their previous state.
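The save-and-restore semantics can be sketched in a few lines of Python. This is a single-CPU toy whose names follow the patch set, not the kernel implementation; the vector constants are hypothetical stand-ins:

```python
# Toy model of per-vector softirq masking: disabling takes a mask of
# vectors and returns the previously-disabled set, which the caller must
# pass back when re-enabling so that nested sections restore correctly.

NET_RX, NET_TX, BLOCK, TIMER = 1 << 0, 1 << 1, 1 << 2, 1 << 3

disabled = 0    # which vectors are currently masked on this "CPU"

def local_bh_disable_mask(mask):
    global disabled
    prev = disabled
    disabled |= mask
    return prev              # caller saves this for the matching re-enable

def local_bh_enable_mask(prev):
    global disabled
    disabled = prev          # restore exactly the prior state

def softirq_runnable(vec):
    return not (disabled & vec)

prev = local_bh_disable_mask(NET_RX)      # mask only network receive
nested = local_bh_disable_mask(BLOCK)     # nested section masks block too
blocked_during_nested = not softirq_runnable(BLOCK)
local_bh_enable_mask(nested)              # inner restore: BLOCK runs again
block_runnable_after = softirq_runnable(BLOCK)
net_rx_blocked = not softirq_runnable(NET_RX)   # outer mask still in force
local_bh_enable_mask(prev)                # outer restore: all enabled again
```

The key design point is that the returned previous mask lets nested critical sections restore exactly the state they found, rather than unconditionally re-enabling every vector.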
This patch set is rather less intrusive:
36 files changed, 690 insertions(+), 281 deletions(-)
That is true even though it goes beyond the core changes to, for example, add support to the lockdep locking checker to ensure that the use of the vector masks is consistent. One thing that has not yet been done is to allow one softirq handler to preempt another; that's on the list for future work.
No performance numbers have been provided, so it is not possible to know for sure that this work has achieved its goal of providing better latencies for specific softirq handlers. Still, networking maintainer David Miller indicated his approval, saying: "I really like this stuff, nice work". Linus Torvalds had some low-level comments that will need to be addressed in the next iteration of the patch set. Some other important reviewers have yet to weigh in, so it would be too soon to say that this work is nearly ready. But, in the absence of a complete removal of softirqs, there is clear value in not disabling them needlessly, so this change seems likely to vector itself into the mainline sooner or later.
Some challenges for GNOME online accounts
The cynical among us might be tempted to think that an announcement from the GNOME project about the removal of a feature — a relatively unused feature at that — would be an unremarkable event. In practice, though, Debarshi Ray's announcement that the GNOME Online Accounts (GOA) subsystem would no longer support the "documents" access point touched off a lengthy discussion within the project itself. The resulting discussion revealed a few significant problems with GOA and, indeed, with the concept of online-account management in any sort of open-source umbrella project like GNOME.

GOA is meant to provide a single sign-on system integrating GNOME applications with web-based services. Any application that, for example, wants to access files stored in Google Drive would ordinarily have to ask the user for credentials and log into Drive separately, which gets tiresome for users running a lot of applications. By routing this access through GOA, the GNOME developers hope to simplify the process of using those services. GOA includes a number of different "integration points" for different types of services, including files, email, calendars, contacts, and more.
The "documents" point was used by the Documents application, which is meant to help users manage their documents. It has suffered, though, from a lack of both users and developers and lacks basic features; Michael Catanzaro described it as "basically just 'bad evince'". That certainly restricts its prospects for success; as Ray put it: "it doesn't stand any chance of adoption unless it can open files like /usr/bin/evince". Documents has duly been removed from the core set of GNOME applications. Since it was the only core application using the "documents" integration point, that point is now being removed.
The initial concerns had to do with the prospect of stranding any other applications that might have been using the GOA documents integration point; Bastien Nocera, among others, asked about that possibility, though no such applications have been named.
These concerns are amplified by discussions within the Fedora project about dropping the Evolution email client and deleting the corresponding email integration point from GOA. That seems likely to break other email clients, including Geary, which is just now gaining GOA support. The GNOME project itself is not currently considering removing email support (though Catanzaro did say that it makes no sense in the absence of a core GNOME email application), but even if that removal is confined to Fedora, that would have a significant impact. It would make it much harder for developers to be able to count on the existence of GOA support for their particular application.
Given such concerns, one might wonder why the GNOME developers are considering removing support from GOA. There are a couple of significant forces at play here, one of which is that keeping GOA support working requires constant effort, and few developers are stepping up to do that work. Indeed, as Ray explained, almost nobody is working on it, and that is driving the desire to reduce its scope:
This is why, from my point of view, it's better to have a simpler, more straightforward GOA, because then we can invest whatever little resources we have to keep our SDKs alive.
This problem is compounded, Ray said, by the fact that a lot of application developers are uninterested in "betting the farm" on GOA in the first place. Many key applications have continued to carry their own web-service integration features.
As it happens, it seems that developers for many of those projects may have been right in not feeling entirely welcome to use GOA at all. Early in the discussion, Allan Day described the question of whether applications that are not considered to be part of the GNOME core should be using GOA at all as "a rather big gray area". By the time he posted some "clarifications" on the subject on February 11, the line taken was rather harder:
There are a couple of reasons behind the desire to restrict access to GOA, but the strongest of those would appear to be that the GNOME project must obtain API keys from service providers to provide its integration features. Those keys come with a long and rapidly changing set of requirements about how they can be used; a failure to follow the rules can cause the keys to be revoked, breaking all users. This happened in 2016, when an evolution-data-server bug caused usage limits to be exceeded. If GNOME is to avoid problems like that in the future, it needs to keep a handle on how its keys are used.
This has led developers to say that non-GNOME-core applications should be shipped with their own API keys. That, of course, breaks the single-sign-on functionality that was the motivation behind the whole thing. One other minor problem with all of this, as Catanzaro pointed out, is that GNOME is open-source software, so its API keys are not exactly secret. He asked:
Nobody had any sort of convincing answer to that question.
Finally, GOA has one other problem that needs to be worked out: in a world where the project is encouraging application developers to ship their wares in the Flatpak format, should those applications have access to GOA? After all, one of the key features in Flatpak is its ability to sandbox applications; giving those applications access to the keys to web-service accounts would rather defeat the purpose. Solving that issue, it seems, is going to require some significant rethinking of how GOA works.
Day's clarifications concluded by saying "We realise that this does not provide complete clarity around GNOME Online Accounts". The
project is promising to work on a number of issues, including an actual
definition of what constitutes a "GNOME application", a design for a GOA
that handles sandboxed applications properly, and a new design for GOA in
general. One can imagine that this is not a discussion that will reach a
conclusion anytime soon.
Producing an application for both desktop and mobile
These days applications are generally moving away from the desktop and toward the mobile space. But taking a multi-platform desktop application and adding two mobile platforms into the mix is difficult to do, as Dirk Hohndel described in his linux.conf.au 2019 talk. Hohndel maintains the Subsurface dive log application, which has added mobile support over the past few years; he wanted to explain the process that the project went through to support all of those platforms. As the subtitle of the talk, "Developing for multiple platforms without losing your mind", indicates, it is a hard problem to solve sanely.
Hohndel noted that he rarely has slides for his talks but that he makes an exception for Subsurface talks. He has spoken about Subsurface at LCA several times along the way and uses photos from his most recent dives in the slides. He is "like the grandparent who invites you for dinner and then forces you to look at their vacation pictures"; this year's photos were from dives in Fiji the previous week.
Linus Torvalds (who attended the talk) started the Subsurface project in 2011 when he was unable to work on the kernel for a few weeks due to the kernel.org break-in. A year or so later, Hohndel took over as the maintainer. He chronicled the project in LCA talks from 2013 through 2015, but only returned to the conference in 2019. Things have changed quite a bit in that interval.
There is a generational change among divers (along with many of the rest of us) regarding the use of laptops; younger divers do not want to bring a laptop along on their trips. Instead, they want to use their phones. Even a tablet is too big to bring along for many. That means they are looking for a mobile app. Most new users of Subsurface come in via the mobile app and only later start using the desktop version because it makes some operations (e.g. editing) much easier.
Adding mobile
![Dirk Hohndel [Dirk Hohndel]](https://static.lwn.net/images/2019/lca-hohndel-sm.jpg)
So when it became apparent that a mobile version of Subsurface was needed, the first idea was to take the desktop application and put it on the mobile device. That is a "really bad plan", Hohndel said. A mouse is a high-precision pointer, and touch is quite coarse, so there is a mismatch there. In addition, when you touch a user interface (UI) element, you are covering it with your finger, which is something about touch interfaces that drives him crazy. Using the same UI on the desktop and on a mobile device simply will not work, he said; Subsurface tried that and failed.
So he looked at Android and iOS, since those are the only two mobile platforms that matter. The two have different development philosophies and recommended languages (Objective C, or maybe Swift, for iOS and Java for Android). But Subsurface has a lot of native C/C++ code for various tasks (e.g. talking to dive computers or the cloud, managing the different kinds of data that it handles); the project does not want to redevelop all of that twice. Furthermore, Hohndel "speak[s] neither Java nor Objective C", which would make it difficult for him to continue maintaining the code.
In order to talk to most recent dive computers, Subsurface must be able to talk Bluetooth Low Energy (BLE). It is good that many newer dive computers use BLE, since that is really the only way to communicate with iOS, he said. Older computers and even some of the newer ones use serial over USB to communicate, which "kinda sorta sometimes works" on Android, but not for iOS. Much of that is a work in progress for Subsurface; the mobile apps have been out for three years, but they are not all that good yet, he said.
Subsurface is based on the Qt toolkit, which has a UI markup language: Qt Modeling Language (QML). It is the obvious route for a Qt-based desktop application to move to the mobile space. When you first look at QML, he said, it looks fairly straightforward; you describe the UI elements that should appear on the screen and what kind of actions should happen when those elements are clicked and so on. But, when you try to actually use it, it turns out to be "really hard".
QML has no widgets or UI libraries; it simply has boxes that can be filled in various ways, text fields for input, and support for mouse and touch input. "It is as bare bones as you can get", he said. Beyond that, it requires you to wrap your mind around the difference between declarative and procedural programming. Once again, that sounds simple but a declarative program describes the outcomes rather than the process—and the process magically happens with callbacks and signals behind the scenes. It was "an insane learning curve" for him and the other developers.
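The declarative model Hohndel describes can be mimicked in a few lines of plain Python: handlers are declared up front, stating the desired outcome, and the framework's signal machinery invokes them behind the scenes when state changes. This is a hand-rolled sketch of the concept, not actual QML or Qt code:

```python
# Minimal sketch of the declarative, signal-driven model QML uses:
# you declare what should happen when a property changes, and the
# framework does the wiring. Purely illustrative, not Qt machinery.

class Property:
    def __init__(self, value):
        self._value = value
        self._handlers = []

    def on_changed(self, handler):
        # Declarative side: state the outcome once, up front.
        self._handlers.append(handler)

    def set(self, value):
        # Procedural side: the "magic" callbacks run behind the scenes.
        self._value = value
        for handler in self._handlers:
            handler(value)

label_text = []
depth = Property(0)
# A binding: the label follows the depth, no matter who sets it.
depth.on_changed(lambda v: label_text.append(f"Depth: {v} m"))

depth.set(18)
depth.set(31)
assert label_text == ["Depth: 18 m", "Depth: 31 m"]
```

The mental shift is that the code near the top never says *when* the label updates; that is exactly the inversion of control that Hohndel found made for an "insane learning curve".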
Kirigami
Along the way, Subsurface developers encountered Kirigami, which is a part of the KDE project. It is a UI framework that has more structure, adds widgets, and provides other more complex UI elements. Most crucially, though, the KDE community provided a great deal of help with using it. As he has mentioned several times in his talks along the way, the Qt and KDE/Plasma communities have been amazingly helpful. One Plasma developer, who happens to also be a diver, reached out to the project and was able to put together a rough overall framework for the mobile app in about a month.
To this day, when Subsurface has questions about how to make something work, it can get answers from KDE developers fairly quickly. Distressingly often, however, those answers are about "some magic thing you just have to know". Part of that may be because Subsurface is the first project outside of KDE to use Kirigami, so the only other users are quite familiar with it and how it should work. The documentation leaves something to be desired, he said, and Googling "Kirigami" will mostly teach you about a particular style of paper cutting and folding.
Hohndel put up an example of a button in Kirigami and noted that it is "very verbose". It is not immediately obvious that you are looking at the code for a button. It is also difficult to encapsulate the Kirigami code so that parts and pieces can be reused elsewhere. He noted that the code he was showing might not be particularly good code, but it does work.
The example also showed how QML interfaces with the existing Subsurface C++ code, which is simply to make function calls. That is both the major strength and the biggest weakness of Kirigami (and QML, which underlies it). There are three different ways to interface to C++ from QML, but the other two ways are fraught with oddities in object lifetimes and such. So calling functions is the way to go in his mind, but doing things that way makes for "horrible code". When the QML folks show up, usually in response to some call for help, they always say "you're doing it wrong", but it is the only way that works reliably for Subsurface.
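The function-call style of bridging that Subsurface settled on can be illustrated abstractly: the UI layer never holds references to backend objects, it just calls a flat set of exported functions and passes around plain values. A sketch in Python (the real interface is QML calling into registered C++ code; all names here are hypothetical):

```python
# Sketch of the "just call functions" bridge: the UI layer calls
# exported functions and gets back plain values or opaque handles,
# never live backend objects. Hypothetical names, for illustration.

_dive_log = []  # backend state lives entirely on the "native" side

def add_dive(location, depth_m):
    # Exported to the UI layer; returns an opaque integer handle
    # rather than an object whose lifetime the UI would have to manage.
    _dive_log.append({"location": location, "depth_m": depth_m})
    return len(_dive_log) - 1

def dive_summary(handle):
    d = _dive_log[handle]
    return f"{d['location']}, {d['depth_m']} m"

# The UI side only ever sees function calls and plain values, which
# sidesteps the object-lifetime oddities of the other bridging styles.
h = add_dive("Fiji", 31)
assert dive_summary(h) == "Fiji, 31 m"
```

The trade-off Hohndel describes follows directly: the resulting code is verbose and "horrible", but there are no cross-language object lifetimes to get wrong.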
He showed some screen shots of the mobile app on Android that looked reasonable from a functional standpoint but perhaps lacked some visual appeal. In part, that is because Kirigami "is its own thing"; it does not follow the Android or iOS UI design philosophy, so the app does not look like other apps geared for those platforms. Because Kirigami has one look and feel, only one manual is needed for the app, but all of the reviews on the iOS and Android stores complain about it. "Everyone thinks that we suck", Hohndel said.
Kirigami apps can, however, run natively on the desktop rather than in a simulator, which makes debugging much easier. It is relatively easy to take an existing desktop application and add mobile support using Kirigami but, in the end, you may not be entirely happy with the UI that you get. Kirigami also lacks a UI design tool, which is the biggest missing feature in his mind.
Another problem is that it is difficult to find experienced QML developers. In response to his query about QML development knowledge, one attendee raised their hand, so Hohndel joked that Torvalds should tackle that person at the end of the talk so that they couldn't leave the room. There is roughly four orders of magnitude difference between the numbers of iOS or Android developers and those with QML experience. On the flip side, though, he tried to make a Java app that called into the Subsurface native code and "it's a nightmare". There are around a million lines of code that the project already has; making that work with Objective C and Java was not really in the cards. It is a matter of which battle you want to fight, he said.
Connecting to the Subsurface C++ code is rather tricky. The documentation is lacking, there are all sorts of magic rules that you need to know, and there are object creation heartaches that need to be worked through. Hohndel noted that he had been complaining a lot, "I'm German by birth, it is the national pastime", but once those problems are all resolved, and a pull request comes in making things work, "it is beautiful". Once it is all hooked up correctly, the bugs can all be fixed in one place. There is no need to fix them five different times, once for each desktop (Linux, Windows, macOS) and mobile platform (iOS, Android).
The mobile app now has access to all of the infrastructure that the desktop application has built up over the years. That infrastructure understands the data that Subsurface uses and how it all works together. It is "really wonderful" that complex features from the desktop can easily be added to the mobile app (e.g. filtering) because most of the work has already been done. Even with all the "bitching and moaning" he did about the complications and his complaints about the UI not being quite what he wanted, taking this path did make it "really really easy to make progress" on a mobile app.
Packaging frustration
Over the years, he has given talks on Linux packaging, which he finds to be "really frustrating". But packaging for iOS and Android is "so much more frustrating that I have thought about stopping this several times", Hohndel said. Packaging for the mobile platforms is geared toward using the IDEs and packaging tools that go with the platforms; Subsurface is built using GCC or Clang with scripts and CMake, "all kinds of things that no one expects you to do". It is worse on iOS than Android, he said; you essentially cannot package an iOS app without using Xcode.
Mobile app packaging is also poorly documented and there is a "ton of magic" required. Beyond that, when it doesn't work, it is nearly impossible to figure out what went wrong. There is no real way to debug the install process. Google and Apple regularly break things as well. As far as he knows, Subsurface is the only Kirigami app with a version in both the iOS and Android stores, which is part of why he has struggled with it so much—no one else is doing it. He would like to see more projects do so but, as someone in the audience pointed out, he doesn't really make it sound like much fun.
There is also the question of putting GPL-licensed code onto the iOS store, which has been deemed impossible by the FSF. Subsurface is under GPLv2 but the project has decided that it owns the copyrights and doesn't care that Apple distributing the app from its store might violate the license. The libraries Subsurface uses are all open source and available and the scripts and tools are packaged up in various ways (e.g. container images) such that people can easily build the packages themselves if they want. "So we're pretty sure that we actually are sort of, mostly, kind of compliant with the licenses", though he is not a lawyer, he cautioned. For projects that have members with strong views on open-source compliance, that could be an issue to keep in mind. Subsurface has declared the license to both Apple and Google and has not been challenged on it.
In summary
The Subsurface project knew exactly what it wanted, which was to reuse as much of its existing code as possible, though it never found a perfect way to get there. In the end, the developers ended up with something fairly close; the apps, however, are not as beautiful and intuitive as he would like.
There is an "incredibly hard tradeoff" between the sunk costs of a particular path and the cost of change. Subsurface came down on the side of change back in the days of the switch from GTK+ to Qt; Hohndel has been tempted to simply switch to starting over with a Java app, but hasn't ever quite gotten there. The ease of being able to reuse all of the existing program infrastructure in the mobile app is what keeps the project headed down this path. When that all works, the benefits outweigh the downsides even though he is not entirely happy with the result.
For Subsurface, it is "cross-platform development squared"; not only does it support five operating systems, it also supports two very different hardware platform types (desktop and mobile). That is "incredibly hard" to do. He did some searching to see if he could find another project that does what Subsurface does; he could find no other project that supports all of those environments from a single code base with a relatively small UI layer to handle the differences between desktop and mobile platforms. That means Subsurface is really unusual, but he thinks that will change; more projects will head down this path as mobile apps become ever more important.
There are definitely frustrations, but there are also huge successes. Using the app on the phone to download from the dive computer while sitting on the dive boat, pushing the data up to the cloud, then being able to retrieve it later on a laptop is quite an accomplishment. "I look forward to that day", he deadpanned to laughter. In truth it usually works today. He recommended that projects and developers considering this path should understand the tradeoffs: you can make amazing things happen but it takes a fair amount of effort to get there.
In the Q&A session, Hohndel said that he did not think Flutter was the solution for Subsurface. QML is clearly designed to do what Subsurface needs; it is the extension of Qt into the mobile space. The fact that he has been unable to get it to do what the project needs is a different problem. Another question was what he might recommend to a new project just starting out. If you want a cross-desktop application, Qt is probably what you want as there is little else in the open-source realm that has good support for Windows, macOS, and Linux.
Another attendee said that Subsurface is better than any other dive log application that he has used, which, as Torvalds pointed out, is a pretty low bar. Hohndel said that the proprietary dive computer applications are "a disaster, they are so bad". Some of them are starting to get better, but they are always tied to a specific vendor's dive computers. There are also two single-OS proprietary applications (one for macOS, one for Windows) that handle multiple dive computers, but their phone apps are not well integrated and do not share code with the desktop. As far as he can tell, no one else is doing what Subsurface does.
Interested readers can view a WebM-format video or a YouTube video of the talk.
[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Christchurch for linux.conf.au.]
Brief items
Security
Security quotes of the week
Honestly, cryptocurrencies are useless. They're only used by speculators looking for quick riches, people who don't like government-backed currencies, and criminals who want a black-market way to exchange money.
Kernel development
Kernel release status
The current development kernel is 5.0-rc7, released on February 17. Linus said: "Nothing particularly odd stands out, and everything is pretty small. Just the way I like it."
Stable updates: 4.20.9, 4.19.22, 4.14.100, and 4.9.157 were released on February 15, followed shortly afterward by 4.20.10, 4.19.23, 4.14.101, and 4.9.158, which contained a fix for the oversized-shebang regression. The large 4.20.11, 4.19.24, 4.14.102, 4.9.159, 4.4.175, and 3.18.135 updates came out on February 20.
Distributions
Debian 9.8 released
The Debian project has announced the eighth update of Debian 9 "stretch". As a stable point release, this version mainly adds bugfixes for security issues and other serious problems. Click below for a list of changes.
Ubuntu 18.04.2 LTS released
The Ubuntu team has announced the release of Ubuntu 18.04.2 LTS for its Desktop, Server, and Cloud products, as well as other flavors of Ubuntu with long-term support; support periods vary for the different flavors. "Like previous LTS series, 18.04.2 includes hardware enablement stacks for use on newer hardware. This support is offered on all architectures and is installed by default when using one of the desktop images." Ubuntu Server installs the GA kernel; however, the HWE kernel may be selected from the installer bootloader.
Distribution quotes of the week
Although the average listener might not notice the difference with this distribution, audiophiles will. The clarity and playback of digital music on Audiophile Linux far exceeded that on both Elementary OS and Ubuntu Linux. So if that appeals to you, I highly recommend giving Audiophile Linux a spin.
Development
digiKam 6.0.0 released
The digiKam team has announced the release of digiKam 6.0.0. New features include full support of video files management working as photos; an integration of all import/export web-service tools in LightTable, Image editor, and Showfoto; raw file decoding engine supporting new cameras; similarity data is now stored in a separate file; simplified web-service authentication using OAuth protocol; and more.
Geary 0.13.0 released
Version 0.13.0 of the Geary graphical email client is out. "This is a major new release, featuring a number of new features — including a new user interface for creating and managing email accounts, integration with GNOME Online Accounts (which also provides OAuth login support for some services), improvements in displaying conversations, composing new messages, interacting with other email apps, reporting problems as they occur, and number of important bug fixes, server compatibility fixes, and security fixes."
Yaghmour: gitgeist: a git-based social network proof of concept
On his blog, Karim Yaghmour writes about an experimental social network that he and a colleague cobbled together using Git. While it is simply a proof of concept at this point, he is looking for feedback and, perhaps, collaborators to take it further. "It turns out that git has practically everything that's needed to act both as storage and protocol for a social network. Not only that, but it's very well-known within and used, deployed and maintained in the circles I navigate, it scales very well (see github), it's used for critical infrastructure (see kernel.org), it provides history, it's distributed by nature, etc. It's got *almost* everything, but not quite everything needed. So what's missing from git? A few basic things that it turns out aren't very hard to take care of: ability to 'follow', getting followee notifications, 'commenting' and an interface for viewing feeds. And instead of writing a whole online treatise of how this could be done, I asked my colleague Francois-Denis Gonthier to implement a proof of concept of this that we called 'gitgeist' and just published on github [https://github.com/opersys/gitgeist-poc]."
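The "follow" and "notification" primitives Yaghmour describes map naturally onto ordinary git operations: following someone amounts to cloning their repository, and a notification is just a new commit seen on the next fetch. A sketch of that mapping using plain git commands via subprocess (this illustrates the concept only, not gitgeist's actual code; it assumes a `git` binary is on the PATH):

```python
# Sketch of gitgeist's core idea: "posting" is committing to your own
# repository, "following" is cloning it, and a "notification" is a new
# commit seen on fetch. Illustrative only; not gitgeist's implementation.
import os
import subprocess
import tempfile

def git(repo, *args):
    """Run a git command inside the given repository directory."""
    return subprocess.run(["git", "-C", repo, *args], check=True,
                          capture_output=True, text=True).stdout

base = tempfile.mkdtemp()

# Alice "posts" by committing to her own repository.
alice = os.path.join(base, "alice")
os.makedirs(alice)
git(alice, "init")
git(alice, "config", "user.email", "alice@example.com")
git(alice, "config", "user.name", "Alice")
with open(os.path.join(alice, "post.txt"), "w") as f:
    f.write("first post\n")
git(alice, "add", "post.txt")
git(alice, "commit", "-m", "post: hello world")

# Bob "follows" Alice by cloning her repository.
bob = os.path.join(base, "bob")
subprocess.run(["git", "clone", "-q", alice, bob], check=True)

# A later post by Alice...
with open(os.path.join(alice, "post.txt"), "a") as f:
    f.write("second post\n")
git(alice, "commit", "-am", "post: more news")

# ...shows up for Bob as a freshly fetched commit: the "notification".
git(bob, "fetch", "origin")
latest = git(bob, "log", "-1", "--format=%s", "FETCH_HEAD").strip()
assert latest == "post: more news"
```

As the quote notes, what git does not provide out of the box is the layer on top: subscription management, comment threading, and a feed viewer, which is what gitgeist adds.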
PostgreSQL 11.2, 10.7, 9.6.12, 9.5.16, and 9.4.21 released
The PostgreSQL project has put out updated releases for all supported versions. "This release changes the behavior in how PostgreSQL interfaces with 'fsync()' and includes fixes for partitioning and over 70 other bugs that were reported over the past three months." The fsync() issue was covered here in April 2018.
Development quotes of the week
Page editor: Jake Edge
Announcements
Newsletters
Distributions and system administration
- DistroWatch Weekly (February 18)
- Lunar Linux Weekly News (February 15)
- openSUSE Tumbleweed Review of the Week (February 15)
- Reproducible Builds Weekly Report (February 19)
- Tails Report (January)
- Ubuntu Weekly Newsletter (February 16)
Development
- Emacs News (February 18)
- GCC 8.3 Status Report (February 15)
- What's cooking in git.git (February 13)
- LLVM Weekly (February 18)
- LXC/LXD/LXCFS Weekly Status (February 18)
- OCaml Weekly News (February 19)
- OpenStack Technical Committee Status Update (February 19)
- Perl Weekly (February 18)
- Weekly changes in and around Perl 6 (February 18)
- PostgreSQL Weekly News (February 17)
- Python Weekly Newsletter (February 14)
- Ruby Weekly News (February 14)
- This Week in Rust (February 19)
Meeting minutes
- Fedora Council minutes (February 20)
- Fedora FESCO meeting minutes (February 18)
- openSUSE board meeting minutes (January 8)
Calls for Presentations
Linux IPsec workshop 2019
There will be a Linux IPsec workshop on March 18-20 in Prague, Czech Republic. "The workshop is invitation based and limited to ca. 20 - 25 IPsec developers from user and kernel space. We almost reached the limit, but still have a few spare places. If you think you can contribute with a discussion topic or presentation, please send me a mail with your proposal."
ATO: CFP "Office Hours" Announced
All Things Open will take place October 13-15 in Raleigh, NC. The call for papers is open until March 15. "We'll host two official CFP office hours sessions this year, and we encourage anyone interested in submitting a talk that might have questions to join us. We do these every year with the help of amazing volunteers and each will be chock-full of great information for prospective speakers. Have questions? Join us February 26 and March 12." This announcement also covers two meetups in March and Open Source 101 in April.
CFP Deadlines: February 21, 2019 to April 22, 2019
The following listing of CFP deadlines is taken from the LWN.net CFP Calendar.
Deadline | Event Dates | Event | Location |
---|---|---|---|
February 22 | June 24 - June 26 | KubeCon + CloudNativeCon + Open Source Summit | Shanghai, China |
February 28 | May 16 | Open Source Camp #3 Ansible | Berlin, Germany |
February 28 | June 4 - June 6 | sambaXP 2019 | Goettingen, Germany |
March 4 | June 14 - June 15 | Hong Kong Open Source Conference 2019 | Hong Kong, Hong Kong |
March 10 | April 6 | Pi and More 11½ | Krefeld, Germany |
March 17 | June 4 - June 5 | UK OpenMP Users' Conference | Edinburgh, UK |
March 19 | October 13 - October 15 | All Things Open | Raleigh, NC, USA |
March 24 | July 17 - July 19 | Automotive Linux Summit | Tokyo, Japan |
March 24 | July 17 - July 19 | Open Source Summit | Tokyo, Japan |
March 25 | May 3 - May 4 | PyDays Vienna 2019 | Vienna, Austria |
April 1 | June 3 - June 4 | PyCon Israel 2019 | Ramat Gan, Israel |
April 2 | August 21 - August 23 | Open Source Summit North America | San Diego, CA, USA |
April 2 | August 21 - August 23 | Embedded Linux Conference NA | San Diego, CA, USA |
April 12 | July 9 - July 11 | Xen Project Developer and Design Summit | Chicago, IL, USA |
April 15 | August 26 - August 30 | FOSS4G 2019 | Bucharest, Romania |
April 15 | May 18 - May 19 | Open Source Conference Albania | Tirana, Albania |
April 21 | May 25 - May 26 | Mini-DebConf Marseille | Marseille, France |
If the CFP deadline for your event does not appear here, please tell us about it.
Upcoming Events
SCALE17X: openSUSE Summit & UbuCon
SCALE will take place March 7-10 in Pasadena, CA. On March 8 openSUSE will host a summit, and UbuCon takes place March 7-8.
Netdev 0x13 schedule released
Netdev will take place March 20-22 in Prague, Czech Republic. The schedule is available.
Events: February 21, 2019 to April 22, 2019
The following event listing is taken from the LWN.net Calendar.
Date(s) | Event | Location |
---|---|---|
February 25 - February 26 | Vault Linux Storage and Filesystems Conference | Boston, MA, USA |
February 26 - February 28 | 16th USENIX Symposium on Networked Systems Design and Implementation | Boston, MA, USA |
March 5 - March 6 | Automotive Grade Linux Member Meeting | Tokyo, Japan |
March 7 - March 10 | SCALE 17x | Pasadena, CA, USA |
March 7 - March 8 | Hyperledger Bootcamp | Hong Kong |
March 9 | DPDK Summit Bangalore 2019 | Bangalore, India |
March 10 | OpenEmbedded Summit | Pasadena, CA, USA |
March 12 - March 14 | Open Source Leadership Summit | Half Moon Bay, CA, USA |
March 14 | Icinga Camp Berlin | Berlin, Germany |
March 14 | pgDay Israel 2019 | Tel Aviv, Israel |
March 14 - March 17 | FOSSASIA | Singapore, Singapore |
March 19 - March 21 | PGConf APAC | Singapore, Singapore |
March 20 | Open Source Roundtable at Game Developers Conference | San Francisco, CA, USA |
March 20 - March 22 | Netdev 0x13 | Prague, Czech Republic |
March 21 | gRPC Conf | Sunnyvale, CA, USA |
March 23 | Kubernetes Day | Bengaluru, India |
March 23 - March 24 | LibrePlanet | Cambridge, MA, USA |
March 23 - March 26 | Linux Audio Conference | San Francisco, CA, USA |
March 29 - March 31 | curl up 2019 | Prague, Czech Republic |
April 1 - April 4 | ‹Programming› 2019 | Genova, Italy |
April 1 - April 5 | SUSECON 2019 | Nashville, TN, USA |
April 2 - April 4 | Cloud Foundry Summit | Philadelphia, PA, USA |
April 3 - April 5 | Open Networking Summit | San Jose, CA, USA |
April 5 - April 7 | Devuan Conference | Amsterdam, The Netherlands |
April 5 - April 6 | openSUSE Summit | Nashville, TN, USA |
April 6 | Pi and More 11½ | Krefeld, Germany |
April 7 - April 10 | FOSS North | Gothenburg, Sweden |
April 10 - April 12 | DjangoCon Europe | Copenhagen, Denmark |
April 13 | OpenCamp Bratislava | Bratislava, Slovakia |
April 13 - April 17 | ACM SIGPLAN/SIGOPS Conference on Virtual Execution Environments | Providence, RI, USA |
April 18 | Open Source 101 | Columbia, SC, USA |
If your event does not appear here, please tell us about it.
Security updates
Alert summary February 14, 2019 to February 20, 2019
Dist. | ID | Release | Package | Date |
---|---|---|---|---|
Arch Linux | ASA-201902-19 | | cairo | 2019-02-18 |
Arch Linux | ASA-201902-16 | | firefox | 2019-02-18 |
Arch Linux | ASA-201902-20 | | flatpak | 2019-02-18 |
Arch Linux | ASA-201902-18 | | hiawatha | 2019-02-18 |
Arch Linux | ASA-201902-17 | | webkit2gtk | 2019-02-18 |
Debian | DSA-4396-1 | stable | ansible | 2019-02-19 |
Debian | DSA-4395-1 | stable | chromium | 2019-02-18 |
Debian | DLA-1685-1 | LTS | drupal7 | 2019-02-20 |
Debian | DLA-1677-1 | LTS | firefox-esr | 2019-02-15 |
Debian | DSA-4391-1 | stable | firefox-esr | 2019-02-14 |
Debian | DLA-1681-1 | LTS | gsoap | 2019-02-18 |
Debian | DSA-4388-2 | stable | mosquitto | 2019-02-17 |
Debian | DLA-1679-1 | LTS | php5 | 2019-02-16 |
Debian | DLA-1675-1 | LTS | python-gnupg | 2019-02-14 |
Debian | DLA-1683-1 | LTS | rdesktop | 2019-02-19 |
Debian | DSA-4394-1 | stable | rdesktop | 2019-02-18 |
Debian | DLA-1660-2 | LTS | rssh | 2019-02-19 |
Debian | DLA-1684-1 | LTS | systemd | 2019-02-19 |
Debian | DSA-4393-1 | stable | systemd | 2019-02-18 |
Debian | DLA-1678-1 | LTS | thunderbird | 2019-02-16 |
Debian | DSA-4392-1 | stable | thunderbird | 2019-02-16 |
Debian | DLA-1680-1 | LTS | tiff | 2019-02-18 |
Debian | DLA-1676-1 | LTS | unbound | 2019-02-14 |
Debian | DLA-1682-1 | LTS | uriparser | 2019-02-18 |
Fedora | FEDORA-2019-e0f5a82082 | F29 | botan2 | 2019-02-20 |
Fedora | FEDORA-2019-df57551f6d | F29 | bouncycastle | 2019-02-19 |
Fedora | FEDORA-2019-6a2e72916a | F29 | ceph | 2019-02-20 |
Fedora | FEDORA-2019-df2e68aa6b | F29 | docker | 2019-02-15 |
Fedora | FEDORA-2019-df57551f6d | F29 | eclipse-jgit | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | eclipse-linuxtools | 2019-02-19 |
Fedora | FEDORA-2019-44a9d99647 | F29 | elfutils | 2019-02-18 |
Fedora | FEDORA-2019-3b8d06c61e | F29 | firefox | 2019-02-20 |
Fedora | FEDORA-2019-82acb29c1b | F28 | ghostscript | 2019-02-18 |
Fedora | FEDORA-2019-710afd062a | F28 | gsi-openssh | 2019-02-18 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-annotations | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-bom | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-core | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-databind | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-dataformat-xml | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-dataformats-binary | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-dataformats-text | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-datatype-jdk8 | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-datatype-joda | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-datatypes-collections | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-jaxrs-providers | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-module-jsonSchema | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-modules-base | 2019-02-19 |
Fedora | FEDORA-2019-df57551f6d | F29 | jackson-parent | 2019-02-19 |
Fedora | FEDORA-2019-164946aa7f | F29 | kernel | 2019-02-16 |
Fedora | FEDORA-2019-3da64f3e61 | F28 | kernel-headers | 2019-02-16 |
Fedora | FEDORA-2019-164946aa7f | F29 | kernel-headers | 2019-02-16 |
Fedora | FEDORA-2019-3da64f3e61 | F28 | kernel-tools | 2019-02-16 |
Fedora | FEDORA-2019-164946aa7f | F29 | kernel-tools | 2019-02-16 |
Fedora | FEDORA-2019-8e683d3810 | F28 | kf5-kauth | 2019-02-18 |
Fedora | FEDORA-2019-19b1d53695 | F29 | kf5-kauth | 2019-02-18 |
Fedora | FEDORA-2019-e2f47b40a3 | F29 | libexif | 2019-02-15 |
Fedora | FEDORA-2019-6cc827b7a1 | F29 | mingw-podofo | 2019-02-18 |
Fedora | FEDORA-2019-b0bd3c604a | F29 | mingw-poppler | 2019-02-18 |
Fedora | FEDORA-2019-829524f28f | F28 | moby-engine | 2019-02-19 |
Fedora | FEDORA-2019-352d4b9cd8 | F29 | moby-engine | 2019-02-19 |
Fedora | FEDORA-2019-8cbe2a05cd | F28 | mosquitto | 2019-02-18 |
Fedora | FEDORA-2019-6cc827b7a1 | F29 | podofo | 2019-02-18 |
Fedora | FEDORA-2019-095c760511 | F29 | python-markdown2 | 2019-02-18 |
Fedora | FEDORA-2019-3f19f13ecd | F29 | runc | 2019-02-15 |
Fedora | FEDORA-2019-1f81367ac3 | F28 | subversion | 2019-02-19 |
Mageia | MGASA-2019-0081 | 6 | avahi | 2019-02-14 |
Mageia | MGASA-2019-0077 | 6 | dom4j | 2019-02-14 |
Mageia | MGASA-2019-0089 | 6 | firefox | 2019-02-17 |
Mageia | MGASA-2019-0090 | 6 | flash-player-plugin | 2019-02-17 |
Mageia | MGASA-2019-0080 | 6 | gvfs | 2019-02-14 |
Mageia | MGASA-2019-0083 | 6 | kauth | 2019-02-14 |
Mageia | MGASA-2019-0085 | 6 | libwmf | 2019-02-14 |
Mageia | MGASA-2019-0079 | 6 | logback | 2019-02-14 |
Mageia | MGASA-2019-0087 | 6 | lxc | 2019-02-17 |
Mageia | MGASA-2019-0078 | 6 | mad | 2019-02-14 |
Mageia | MGASA-2019-0084 | 6 | python | 2019-02-14 |
Mageia | MGASA-2019-0086 | 6 | python-django | 2019-02-14 |
Mageia | MGASA-2019-0082 | 6 | radvd | 2019-02-14 |
Mageia | MGASA-2019-0088 | 6 | thunderbird | 2019-02-17 |
openSUSE | openSUSE-SU-2019:0215-1 | 15.0 | GraphicsMagick | 2019-02-19 |
openSUSE | openSUSE-SU-2019:0214-1 | 42.3 | GraphicsMagick | 2019-02-19 |
openSUSE | openSUSE-SU-2019:0196-1 | 15.0 | LibVNCServer | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0200-1 | 42.3 | LibVNCServer | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0197-1 | 15.0 | avahi | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0206-1 | | chromium | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0216-1 | | chromium | 2019-02-19 |
openSUSE | openSUSE-SU-2019:0204-1 | 15.0 | chromium | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0205-1 | 42.3 | chromium | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0174-1 | 15.0 | curl | 2019-02-14 |
openSUSE | openSUSE-SU-2019:0173-1 | 42.3 | curl | 2019-02-14 |
openSUSE | openSUSE-SU-2019:0189-1 | 15.0 | docker | 2019-02-16 |
openSUSE | openSUSE-SU-2019:0201-1 | 42.3 | docker-runc | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0202-1 | 42.3 | firefox | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0166-1 | 15.0 | haproxy | 2019-02-13 |
openSUSE | openSUSE-SU-2019:0203-1 | 15.0 | kernel | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0199-1 | 42.3 | libu2f-host | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0175-1 | 15.0 | lua53 | 2019-02-14 |
openSUSE | openSUSE-SU-2019:0183-1 | 15.0 | mozilla-nss | 2019-02-14 |
openSUSE | openSUSE-SU-2019:0195-1 | 15.0 42.3 | nginx | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0207-1 | 42.3 | php7 | 2019-02-19 |
openSUSE | openSUSE-SU-2019:0194-1 | 15.0 42.3 | phpMyAdmin | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0198-1 | 15.0 | pspp, spread-sheet-widget | 2019-02-18 |
openSUSE | openSUSE-SU-2019:0212-1 | 42.3 | pspp, spread-sheet-widget | 2019-02-19 |
openSUSE | openSUSE-SU-2019:0184-1 | 15.0 | python | 2019-02-14 |
openSUSE | openSUSE-SU-2019:0169-1 | | python-slixmpp | 2019-02-13 |
openSUSE | openSUSE-SU-2019:0185-1 | 15.0 | rmt-server | 2019-02-14 |
openSUSE | openSUSE-SU-2019:0170-1 | | runc | 2019-02-13 |
openSUSE | openSUSE-SU-2019:0208-1 | 15.0 | runc | 2019-02-19 |
openSUSE | openSUSE-SU-2019:0167-1 | 15.0 | spice | 2019-02-13 |
openSUSE | openSUSE-SU-2019:0176-1 | 42.3 | spice | 2019-02-14 |
openSUSE | openSUSE-SU-2019:0182-1 | 42.3 | thunderbird | 2019-02-14 |
openSUSE | openSUSE-SU-2019:0171-1 | | uriparser | 2019-02-13 |
openSUSE | openSUSE-SU-2019:0165-1 | 15.0 | uriparser | 2019-02-13 |
Oracle | ELSA-2019-0373 | OL6 | firefox | 2019-02-19 |
Oracle | ELSA-2019-0374 | OL7 | firefox | 2019-02-19 |
Oracle | ELSA-2019-0375 | OL7 | flatpak | 2019-02-19 |
Oracle | ELSA-2019-0368 | OL7 | systemd | 2019-02-19 |
Red Hat | RHSA-2019:0373-01 | EL6 | firefox | 2019-02-19 |
Red Hat | RHSA-2019:0348-01 | EL6 | flash-plugin | 2019-02-13 |
Red Hat | RHSA-2019:0361-01 | EL7 | rhvm-appliance | 2019-02-18 |
Slackware | SSA:2019-045-01 | | mozilla | 2019-02-14 |
Slackware | SSA:2019-044-01 | | mozilla | 2019-02-13 |
SUSE | SUSE-SU-2019:0387-1 | SLE15 | build | 2019-02-14 |
SUSE | SUSE-SU-2019:0392-1 | | couchdb | 2019-02-14 |
SUSE | SUSE-SU-2019:0385-1 | OS6 SLE12 | docker-runc | 2019-02-13 |
SUSE | SUSE-SU-2019:0362-1 | SLE15 | docker-runc | 2019-02-13 |
SUSE | SUSE-SU-2019:0414-1 | SLE15 | dovecot23 | 2019-02-15 |
SUSE | SUSE-SU-2019:0438-1 | SLE15 | gvfs | 2019-02-19 |
SUSE | SUSE-SU-2019:0439-1 | OS7 SLE12 | kernel | 2019-02-19 |
SUSE | SUSE-SU-2019:0422-1 | SLE12 | kernel-firmware | 2019-02-18 |
SUSE | SUSE-SU-2019:0427-1 | SLE12 | kernel-firmware | 2019-02-19 |
SUSE | SUSE-SU-2019:13962-1 | SLE11 | kvm | 2019-02-15 |
SUSE | SUSE-SU-2019:0447-1 | SLE15 | libqt5-qtbase | 2019-02-20 |
SUSE | SUSE-SU-2019:0395-1 | OS7 SLE12 | nodejs6 | 2019-02-14 |
SUSE | SUSE-SU-2019:13961-1 | SLE11 | php53 | 2019-02-14 |
SUSE | SUSE-SU-2019:0393-1 | SLE12 | podofo | 2019-02-14 |
SUSE | SUSE-SU-2019:0391-1 | OS8 | python-PyKMIP | 2019-02-14 |
SUSE | SUSE-SU-2019:0419-1 | SLE12 | python-numpy | 2019-02-18 |
SUSE | SUSE-SU-2019:0448-1 | SLE12 | python-numpy | 2019-02-20 |
SUSE | SUSE-SU-2019:0418-1 | SLE15 | python-numpy | 2019-02-16 |
SUSE | SUSE-SU-2019:0435-1 | SLE12 | qemu | 2019-02-19 |
SUSE | SUSE-SU-2019:0423-1 | SLE15 | qemu | 2019-02-18 |
SUSE | SUSE-SU-2019:0394-1 | OS7 | rubygem-loofah | 2019-02-14 |
SUSE | SUSE-SU-2019:0428-1 | OS7 SLE12 | systemd | 2019-02-19 |
SUSE | SUSE-SU-2019:0424-1 | SLE12 | systemd | 2019-02-18 |
SUSE | SUSE-SU-2019:0425-1 | SLE12 | systemd | 2019-02-18 |
SUSE | SUSE-SU-2019:0426-1 | SLE15 | systemd | 2019-02-18 |
SUSE | SUSE-SU-2019:0390-1 | OS7 SLE12 | util-linux | 2019-02-14 |
SUSE | SUSE-SU-2019:0416-1 | | velum | 2019-02-15 |
Ubuntu | USN-3892-1 | 18.04 18.10 | gdm3 | 2019-02-20 |
Ubuntu | USN-3850-2 | 12.04 | nss | 2019-02-18 |
Ubuntu | USN-3891-1 | 16.04 18.04 18.10 | systemd | 2019-02-18 |
Kernel patches of interest
Kernel releases
Architecture-specific
Core kernel
Development tools
Device drivers
Device-driver infrastructure
Filesystems and block layer
Memory management
Networking
Security-related
Virtualization and containers
Miscellaneous
Page editor: Rebecca Sobol