
Leading items

Welcome to the LWN.net Weekly Edition for April 2, 2020

This edition contains the following feature content:

  • Three candidates vying to be DPL: a look at this year's Debian project leader election.
  • Avoiding retpolines with static calls: reducing the performance cost of the Spectre mitigations.
  • Per-system-call kernel-stack offset randomization: a new kernel-hardening technique.
  • Some 5.6 kernel development statistics: where the code in 5.6 came from and who supported its development.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.


Three candidates vying to be DPL

By Jake Edge
April 1, 2020

The annual Debian project leader (DPL) election is well underway at this point; voting begins in early April and the outcome will be known after the polls close on April 18. Outgoing DPL Sam Hartman posted a lengthy "non-platform" in the run-up to the election, which detailed the highs and lows of his term, perhaps providing something of a roadmap, complete with pitfalls, for potential candidates—Hartman is not running again this time. When the nomination period completed, three people put their hats into the ring: Jonathan Carter, Sruthi Chandran, and Brian Gupta. Their platforms have been posted and there have been several threads on the debian-vote mailing list with questions for the candidates; it seems like a good time to look in on the race.

After the call for nominations went out, Carter posted a self-nomination that was rather longer than the usual message of that type. Hartman was concerned that his "not running" message might have influenced the length of Carter's "thoughtful message"—and set too high a bar for others who might want to nominate themselves. Carter assured Hartman that his posting (and its length) had no bearing on the self-nomination, however.

Carter's message is also referenced from his platform, not surprisingly. He sets down his reasons for running for DPL, including finding ways to highlight the many positive aspects of the project:

As project members, we love to point out all the things we don't like about Debian, and why shouldn't we? We are passionate and we have every right to. And I do believe that the vast majority of Debian Developers have good intentions when they point out problems, because they do want to see those problems solved. On the flip side, I don't think we do enough to sing Debian's praises within the project. It's almost as if people are afraid to do so. It's my opinion that we almost have a responsibility to toot our own horns when it comes to the Debian project. It's like we lack a certain type of confidence in the project and [the] world can pick up on that. I feel that we need to do more to celebrate the project's successes and showcase the incredible amount of good work and progress within the project.

Debian Foundations

Chandran had a more traditional self-nomination, while Gupta revealed the sole plank of his platform in his self-nomination message:

I am running for DPL with a singular goal. The creation of Debian US and EU Foundations. I largely view my candidacy as a referendum on this goal and its details. During the campaigning period, I will share the details as part of my platform, and I will update my platform to incorporate feedback. If it's clear there isn't a rough consensus to move forward, I will likely withdraw my candidacy. (Perhaps for reworking and another day.)

The platforms page has links for each candidate's platform. Each candidate is given the opportunity to add "rebuttals" of their opponents' platforms once they have been revealed. Chandran did not choose to do so, but both Gupta and Carter did. In general, the rebuttals are meant to show differences between the candidates or possible problems with a platform in a polite and formalized way.

Gupta's quest for the creation of foundation organizations for the project garnered a generally favorable response, though some wondered why it required being DPL to advance that cause. As described in his platform, Gupta wants to free the DPL from their purely administrative duties: "Debian Project Leaders should have more time to lead rather than be buried in the set of administrative tasks they currently face". The existing trusted organizations (TOs), mainly Software in the Public Interest (SPI), are falling short in various ways in his view; Debian-focused foundations would better serve the project. He also explained why having two foundations, under different legal frameworks, would provide redundancy in the face of problems in a particular jurisdiction and allow local financial transactions within the two dominant regions for the project.

Hartman agreed that establishing foundations made sense for the project, though for somewhat different reasons. Once his DPL term ends, he would be willing to sponsor a general resolution (GR) as a referendum on the foundations idea; he thinks Gupta would be a good choice to lead that effort, so the GR would "include text delegating making it happen" to Gupta. "But I can't figure out why you'd need or want to be DPL to do that", Hartman said.

Gupta replied that his idea would need the explicit support of the DPL, so that would be easiest if he were the DPL. He also sees the DPL election as a somewhat lighter-weight alternative to a GR; those can be kind of contentious within Debian these days. He does not think a GR is required "as my current plan doesn't require any changes to the constitution". But the election provides a mechanism for project members to express their preferences:

If I am elected DPL, that would likely be a clear sign the project supports my proposal. If I was ranked below "None of the above", that's another clear message. Finally, if most people ranked me above "None of the above", even if I wasn't first choice, I'd assume that as a signal of support for the proposal and would try to work with the elected DPL to implement the proposal.

Mixing the DPL election with another issue was questioned by several in the thread and, in particular, with Gupta's interpretation of what a vote for "none of the above" (NOTA) should "mean" with respect to the foundations idea. Enrico Zini put it this way:

[...] I would very likely vote for you above NOTA for a pure DPL election, and I would very likely vote in favour of a GR option to create a Debian Foundation. I would however rank you below NOTA if you insisted in conflating the two, as I cannot endorse what I see as a misuse of our voting system. You would however incorrectly interpret my vote as a vote against the idea of a Debian Foundation, underestimating support for something you care about.

In his rebuttal to Gupta's platform, Carter raised another point: a change of this sort is likely to take more than a single term to accomplish. The questions that will need to be answered along the way could be hard to resolve, and he is concerned that the process could easily stretch past a single term:

Simple questions like "Who will staff these organisations?", "Will they earn a salary?", etc could be really difficult to answer in the context of Debian. All of these could have good answers, but I think it will take us some time to figure out exactly what we want, what we need, and how to match that up to what's possible in a way that allows the project to thrive. [...]

Starting one or more foundations could also have far-reaching consequences that we haven't begun to consider yet. What if Brian is elected, puts in a year of work to get things going, then things don't work out, and then he decides not to run for DPL again. Will it be up to the rest of us to dismantle and fix everything? Or will we be stuck with yet another Debian mess that we'll have to live with for years to come?

Gupta is US-based, so he plans to start with a foundation for the US if elected, but he acknowledged that even that might well stretch beyond a year:

I certainly don’t think it will be possible to create both Foundations in one term, and it may not be possible to even finish creating the US Foundation in one DPL term, but a lot of progress can be made. [...]

I commit that if I am elected DPL, that I will run for a second term, and finish the creation of the US Foundation if it hasn’t already been completed, whether or not I am re-elected as DPL. In my first term, I will also begin working with European developers to create the European Foundation but have no expectation of completing that during the first-term.

Diversity

Chandran, like Gupta in some ways, is a "single-issue" candidate; she would like to increase the diversity within the Debian project, particularly along gender lines.

[...] How many times did we have a non-male candidate for DPL? My primary goal of contesting for DPL is to bring the diversity issues to the mainstream.

I am aware that Debian is doing things to increase diversity within Debian, but as we can see, it is not sufficient. A good amount of money is also spent, but are we getting quality diverse contributors? We need to find answers. We need to find out better and effective ways.

One effective way I see to encourage diverse people to contribute is to have more visibility for diversity already within the community. I would encourage more women (both cis and trans), trans men, and genderqueer people who are already part of the project to be more [visible] instead of staying hidden in some part of the project (like I was doing until recently).

As she noted, there were not a lot of specific plans in her platform, which is something that Carter and Gupta mentioned in their rebuttals. Beyond that, Chandran is leading the team bringing DebConf to India in 2022; that will keep her quite busy over the next two years, as both of the other candidates pointed out.

Hector Oron asked the candidates about the Debian Outreach team; in particular, he wondered whether there were flaws in the current outreach efforts (mostly based around Outreachy and Google Summer of Code) and what could be done about them. Part of the thread veered away from candidate answers to some extent, but covered some important ground.

There was some general questioning of the value that Debian was getting from its paid outreach efforts. Martin Michlmayr noted that Debian really might not be providing a path for interns once they complete their internship:

So we pay people to work on Debian for a few months? And then? Then they get the opportunity to work on Debian for free!

Compare that to someone working on Outreachy for the Linux kernel where a full-time, paid job from Intel, IBM, etc will likely await them afterwards.

So Outreachy might help some people get involved in Debian, but do we have a compelling "career path" for them to stay involved afterwards?

Ulrike Uhlig, who was a Debian Outreachy intern in 2015, pointed out that there is a feedback element missing from the outreach efforts. It could be added as part of the outreach evaluation:

This process could cover:
  • Did their mentor introduce them to Debian processes, mailinglists, other Debian Developers, teams, tools?
  • Do they feel they are now independent with regards to Debian work?
  • Do they want to continue contributing to Debian? If no, what would they need, what are they missing?
  • What can the Debian Outreach do better in the next rounds?
and much more.. Happy to help working out such a process with the current Outreach coordinators in Debian.

Having such a feedback process could ensure that the money Debian spends on Outreachy is well used.

Chandran agreed that it would be useful to come up with such a process, as did Outreach team member Pranav Jain. Both said they would be happy to work with Uhlig on that effort.

In answer to Oron's original question, Carter said that there was simply not enough being done in the outreach department. For now, it would seem that outside organizations are better placed to handle the management of internships for Debian, but that it does not have to be that way:

I agree that we *might* be able to come up with some more efficient programs that have greater impact for the same amount of money, but then we need Debian contributors who will do all the work and co-ordination to make that happen. Few people seem to have the time and energy for that at the moment.

As for how I would address that, I know some Debianites hate the answer of "more discussion", but I think it's what's needed. We need more answers, more ideas, more people to step up and do work, frame the exact problems that we intend to solve and then use our collective skills to hone [in] on those.

Chandran also thought that finding the right balance in terms of diversity spending is important:

[...] I am running for DPL primarily with "Diversity" in focus. So if I become DPL, I would definitely take it on personally to analyze Outreachy/diversity budget and efficiency. Even when I advocate for diversity, doing things and spending money in the name of diversity with no returns is not something I support. Right now I do not have a perfect picture about the current scenario, but this would be one of the priorities as DPL.

Priorities

Carter's platform is the one with the most concrete ideas and plans, as he is not focused on one particular issue. He listed a half-dozen separate "community improvements" that he would like to help foster, including better onboarding for new contributors, increasing the number of local Debian groups, and promoting mentorship within the project so that those efforts are as highly regarded as, say, being a Debian Developer (DD) is. He also presented two plans under the heading of "improve reporting": more detailed and frequent financial reports, along with more "bite-sized updates" from the DPL that would get included into the usual monthly DPL report.

The item that drew the most attention, though, was the first community improvement Carter listed: "Initiate a public discussion on our membership nomenclature". The different types of project members have confusing names, he said; in particular, "Debian Developer" is used in a different way than terms like "Android developer". Other names, like "Debian Maintainer and non-uploading DD", are unwieldy and unhelpful to outsiders. He would like to see the project grapple with that, come up with new terms that make more sense, and have a GR to resolve the issue.

In his rebuttal, Gupta said that he was not really in favor of changing the DD name. In addition, Sean Whitton felt that, since the nomenclature issue was the most concrete part of Carter's platform, it might give the impression that it was the only thing that he intended to pursue for sure as DPL. Whitton asked Carter to better describe his priorities, which Carter did at some length, starting with:

By the end of the term, I would like to have a shared sense of 'business as usual' within the project. I'd like our contributors and project members to have a sense of belonging, and that they can focus on their work and improve Debian's technical excellence without having to spend too much time on unproductive drama. I know that sounds incredibly broad, and at the same time somewhat vague, but I believe it's what the project needs right now.

He also noted that another of his bullet points was about increasing online interaction possibilities, including perhaps having a regular online-only DebConf offset by six months from the usual in-person variety. He said that regardless of whether he is elected, he would like to pursue some of that:

An idea that I had after I finished my platform, that I've been enjoying thinking of, is to start a team for a MiniDebConf Online. The situation with COVID-19 means that many conferences over for at least the next two months will be cancelled, so maybe we could put all our online tools (including tools like storm.debian.net) to the test and see if we could actually pull off having an online MiniDebConf. I think it will help make a lot of people feel better giving them a bit of a social lift with all the physical distancing we have to practice, and at the same time we can improve Debian, and find weak spots in our tooling that we can improve.

There were, of course, other questions, ideas, and threads on debian-vote; interested readers (and voters) should poke into all of that further. One thing missing from the discussions is any real mention of the project harassment issue raised by Hartman in his non-platform; it would seem that the candidates are ignoring that particular elephant in the room, at least for now. This election cycle seems somewhat different than many, with two candidates focused on singular goals for the project and one who is, seemingly, looking to calm the waters within the distribution some. We will know fairly soon how the project feels about those options.


Avoiding retpolines with static calls

By Jonathan Corbet
March 26, 2020
January 2018 was a sad time in the kernel community. The Meltdown and Spectre vulnerabilities had finally been disclosed, and the required workarounds hurt kernel performance in a number of ways. One of those workarounds — retpolines — continues to cause pain, with developers going out of their way to avoid indirect calls, since they must now be implemented with retpolines. In some cases, though, there may be a way to avoid retpolines and regain much of the lost performance; after a long gestation period, the "static calls" mechanism may finally be nearing the point where it can be merged upstream.

Indirect calls happen when the address of a function to be called is not known at compile time; instead, that address is stored in a pointer variable and used at run time. These indirect calls, as it turns out, are readily exploited by speculative-execution attacks. Retpolines defeat these attacks by turning an indirect call into a rather more complex (and expensive) code sequence that cannot be executed speculatively.

Retpolines solved the problem, but they also slow down the kernel, so developers have been keenly interested in finding ways to avoid them. A number of approaches have been tried, a few of which were covered here in late 2018. While some of those techniques have been merged, static calls have remained outside of the mainline. They have recently returned in the form of this patch set posted by Peter Zijlstra; it contains the work of others as well, in particular Josh Poimboeuf, who posted the original static-call implementation.

An indirect call works from a location in writable memory where the destination of the jump can be found. Changing the destination of the call is a matter of storing a new address in that location. Static calls, instead, use a location in executable memory containing a jump instruction that points to the target function. Actually executing a static call requires "calling" to this special location, which will immediately jump to the real target. The static-call location is, in other words, a classic code trampoline. Since both jumps are direct — the target address is found directly in the executable code itself — no retpolines are needed and execution is fast.

Static calls must be declared before they can be used; there are two macros that can do that:

    #include <linux/static_call.h>

    DEFINE_STATIC_CALL(name, target);
    DECLARE_STATIC_CALL(name, target);

DEFINE_STATIC_CALL() creates a new static call with the given name that initially points at the function target(). DECLARE_STATIC_CALL(), instead, declares the existence of a static call that is defined elsewhere; in that case, target() is only used for type checking the calls.

Actually calling a static call is done with:

    static_call(name)(args...);

where name is the name used to define the call. This will cause a jump through the trampoline to the target function; if that function returns a value, static_call() will also return that value.

The target of a static call can be changed with:

    static_call_update(name, target2);

where target2() is the new target for the static call. Changing the target of a static call requires patching the code of the running kernel, which is an expensive operation. That implies that static calls are only appropriate for settings where the target will change rarely.

One such setting can be found in the patch set: tracepoints. Activating a tracepoint itself requires code patching. Once that is done, the kernel responds to a hit on a tracepoint by iterating through a linked list of callback functions that have been attached there. In almost every case, though, there will only be one such function. This patch in the series optimizes that case by using a static call for the single-function case. Since the intent behind tracepoints is to minimize their overhead to the greatest extent possible, use of static calls makes sense there.

This patch set also contains a further optimization not found in the original. Jumping through the trampoline is much faster than using a retpoline, but it is still one more jump than is strictly necessary. So this patch causes static calls to store the target address directly into the call site(s), eliminating the need for the trampoline entirely. Doing so may require changing multiple call sites, but most static calls are unlikely to have many of those. It also requires support in the objtool tool to locate those call sites during the kernel build process.

The end result of this work appears to be a significant reduction in the cost of the Spectre mitigations when using tracepoints — a slowdown of just over 4% drops to about 1.6%. It has been through a number of revisions, as well as some improvements to the underlying text-patching code, and appears to be about ready. Chances are that static calls will go upstream in the near future.


Per-system-call kernel-stack offset randomization

By Jonathan Corbet
March 27, 2020
In recent years, the kernel has (finally) upped its game when it comes to hardening. It is rather harder to compromise a running kernel than it used to be. But "rather harder" is relative: attackers still manage to find ways to exploit kernel bugs. One piece of information that can be helpful to attackers is the location of the kernel stack; this patch set from Kees Cook and Elena Reshetova may soon make that information harder to come by and nearly useless in any case.

The kernel stack will always be an attractive target. It typically contains no end of useful information that can be used, for example, to find the location of other kernel data structures. If it can be written to, it can be used for return-oriented programming attacks. Many exploits seen in the wild (Cook mentioned this video4linux exploit as an example) depend on locating the kernel stack as part of the sequence of steps to take over a running system.

In current kernels, the kernel stack is allocated from the vmalloc() area at process creation time. Among other things, this approach makes the location of any given process's kernel stack hard to guess, since it depends on the state of the memory allocator at the time of its creation. Once the stack has been allocated, though, its location remains fixed for as long as the process runs. So if an attacker can figure out where the kernel stack for a target process is, that information can be used for as long as that process lives.

As it turns out, there are a number of ways for an attacker to do that. Despite extensive cleanup work, there are still numerous kernel messages that will expose addresses of data structures, including the stack, in the kernel log. There are also attacks using ptrace() and cache timing that can be used to locate the stack. So the protection offered by an uncertain stack location is not as strong as one might like it to be.

Cook and Reshetova's patch set (which is inspired by the PaX RANDKSTACK feature, though the implementation is different) addresses this problem by changing a process's kernel stack offset every time that process makes a system call. Specifically, it modifies the system-call entry code so that the following sequence of events happens:

  • The pt_regs structure, containing the state of the processor registers, is pushed onto the base of the stack, just like it is done in current kernels.
  • A call to alloca() is made with a random value. This has the effect of "allocating" a random amount of memory on the stack, which is really just a matter of moving the stack pointer down by that amount.
  • The system call proceeds with its stack pointer in the now randomized location.

In other words, the kernel stack itself doesn't move, but the actual stack contents shift around and are located differently for every system call. That makes any attack that depends on placing data at a specific location in the stack likely to fail; even if the attacker succeeds in figuring out where the stack is to be found, they won't know exactly where any given system call will place its data on that stack.

Pushing the pt_regs structure before applying the randomization is important. The ptrace() attack mentioned above can be used to locate this structure (and thus the kernel stack); if it were located after the offset is applied, such attacks would thus reveal the offset.

Currently, the randomization amount is obtained by reading some low-order bits from the CPU's time-stamp counter. Cook notes that other, more robust sources of entropy can be added in the future, but he doesn't think that needs to be figured out before the current patches can be considered. There are currently five bits of entropy applied to the stack offset on 64-bit systems, and six bits on 32-bit systems. That is not a huge amount of entropy, but it is enough that any attack that depends on precise kernel-stack locations will probably fail — and generate a kernel oops — on the first few tries. More entropy can be added, at the cost of wasting more stack space.

With this feature in use, Cook measured the overhead as being about 0.9% on a no-op system call; it would clearly be less on any system call that does real work. But for people who don't want to pay even that cost, there is a static label to turn the randomization off.

The end result is a relatively simple mechanism to further harden the kernel against attack. Cook noted that it's not perfect, adding that "most things can't be given the existing kernel design trade-offs". If other developers agree, per-system-call stack offset randomization is likely to find its way into the mainline kernel's arsenal of hardening techniques.


Some 5.6 kernel development statistics

By Jonathan Corbet
March 30, 2020
When the 5.6 kernel was released on March 29, 12,665 non-merge changesets had been accepted from 1,712 developers, making this a fairly typical development cycle in a number of ways. As per longstanding LWN tradition, what follows is a look at where those changesets came from and who supported the work that created them. This may have been an ordinary cycle, but there are still a couple of differences worth noting.

As Linus Torvalds pointed out in the release announcement, the current coronavirus pandemic does not appear to have seriously affected kernel development — so far. One should not, though, lose track of the fact that the 5.6 merge window closed in early February, well before the impact of this disaster was broadly felt outside of China. Most of the work merged for 5.6 was done even earlier, of course. Given the delays involved in getting work into the mainline, the full effect may not be felt until the 5.8 cycle.

It goes without saying that we hope those effects are minimal, and that the people in our community (and beyond) come through this experience as well as possible.

Of the developers working on 5.6, 214 were first-time contributors. Many projects would be delighted to have that many new contributors in a nine-week period, but that is low for the kernel — the lowest since 3.11, which featured 203 first-time contributors and was released in September 2013. This dip does not appear to be part of a long-term trend:

[First-time contributors chart]

It is possible that this drop is partly due to the current pandemic; a surprising number of first-time contributors show up late in the development cycle with bug fixes.

The most active developers contributing to 5.6 were:

Most active 5.6 developers

By changesets
  Takashi Iwai           406   3.2%
  Chris Wilson           306   2.4%
  Sean Christopherson    143   1.1%
  Jérôme Pouiller        125   1.0%
  Eric Biggers           122   1.0%
  Arnd Bergmann          114   0.9%
  Zheng Bin              110   0.9%
  Geert Uytterhoeven     103   0.8%
  Tony Lindgren          103   0.8%
  Masahiro Yamada         94   0.7%
  Colin Ian King          92   0.7%
  Ben Skeggs              91   0.7%
  Ville Syrjälä           90   0.7%
  Andy Shevchenko         88   0.7%
  Russell King            88   0.7%
  Alex Deucher            86   0.7%
  Krzysztof Kozlowski     82   0.6%
  Thomas Zimmermann       80   0.6%
  Jens Axboe              77   0.6%
  Jani Nikula             74   0.6%

By changed lines
  Kalle Valo           48483   7.2%
  Arnd Bergmann        29415   4.3%
  Jason A. Donenfeld   18664   2.8%
  Ben Skeggs           13471   2.0%
  Greg Kroah-Hartman   11931   1.8%
  Chris Wilson         10615   1.6%
  Srinivas Kandagatla   8739   1.3%
  Alex Maftei           8581   1.3%
  Maxime Ripard         7521   1.1%
  Peter Ujfalusi        6970   1.0%
  Tony Lindgren         6320   0.9%
  Helen Koike           5789   0.9%
  Takashi Iwai          5622   0.8%
  Shuming Fan           5604   0.8%
  Michal Kalderon       5445   0.8%
  Sricharan R           5065   0.7%
  Andrii Nakryiko       4857   0.7%
  Roman Li              4852   0.7%
  Thierry Reding        4845   0.7%
  Sunil Goutham         4762   0.7%

This time around, the developer with the most commits is Takashi Iwai, who did a bunch of cleanup and API-migration work in the sound subsystem. Chris Wilson worked exclusively on the i915 graphics driver, Sean Christopherson has, seemingly, been rewriting the KVM hypervisor from the ground up, Jérôme Pouiller worked on the wfx wireless network interface driver in the staging tree, and Eric Biggers contributed a lot of work to the filesystem and crypto subsystems.

Kalle Valo made it to the top of the "lines changed" column with just five commits; the one adding the ath11k network driver was large. Arnd Bergmann, among many other things, removed a set of obsolete ISDN drivers and more-or-less completed the task of readying the kernel for the year 2038. Jason Donenfeld added the WireGuard VPN subsystem, Ben Skeggs worked extensively on the nouveau graphics driver, and Greg Kroah-Hartman deleted the unloved octeon driver from the staging tree.

The credits for testing and reviewing patches look like this:

Test and review credits in 5.6

Tested-by
  Keerthy                     61   7.6%
  Andrew Bowers               47   5.9%
  Aaron Brown                 36   4.5%
  Peter Ujfalusi              21   2.6%
  Tero Kristo                 20   2.5%
  Stephan Gerhold             20   2.5%
  John Garry                  20   2.5%
  Brian Masney                18   2.2%
  Alexei Starovoitov          17   2.1%
  Steven Rostedt              15   1.9%
  Arnaldo Carvalho de Melo    15   1.9%

Reviewed-by
  Rob Herring                140   2.8%
  Alex Deucher                99   2.0%
  David Sterba                88   1.8%
  Andrew Lunn                 87   1.7%
  Florian Fainelli            83   1.7%
  Tvrtko Ursulin              82   1.6%
  Linus Walleij               78   1.6%
  Chris Wilson                78   1.6%
  Tony Cheng                  74   1.5%
  Laurent Pinchart            70   1.4%
  Andy Shevchenko             69   1.4%

Of the patches going into 5.6, 669 (5.3% of the total) carried Tested-by tags, a decline from 5.5. Reviewed-by tags, instead, appeared in 4,183 patches, 33% of the total.

There were 877 patches added for 5.6 that included Reported-by tags to credit the reporting of a bug; the most active reporters were:

Reporting credits in 5.6

  Hulk Robot          178  18.6%
  Syzbot               99  10.4%
  kernel test robot    58   6.1%
  Dan Carpenter        23   2.4%
  Randy Dunlap         20   2.1%
  Stephen Rothwell     15   1.6%
  Linus Torvalds        7   0.7%
  Marek Szyprowski      7   0.7%
  Christoph Paasch      6   0.6%
  Naresh Kamboju        6   0.6%
  Dmitry Osipenko       5   0.5%
  Ravi Bangoria         5   0.5%
  Michael Ellerman      5   0.5%
  Jann Horn             5   0.5%
  Erhard Furtner        5   0.5%
  Qian Cai              5   0.5%

We continue to see an increasing number of bug reports coming from automated testing systems; such reports now make up just over a third of the total.

The work on the 5.6 kernel was supported by 207 employers that we were able to identify, a significant decline from 5.5 (which had support from 231 employers). The most active employers were:

Most active 5.6 employers

By changesets
  Intel                  1694  13.4%
  (Unknown)               904   7.1%
  AMD                     781   6.2%
  (None)                  778   6.1%
  SUSE                    713   5.6%
  Red Hat                 702   5.5%
  Google                  558   4.4%
  Linaro                  503   4.0%
  Huawei Technologies     483   3.8%
  Facebook                298   2.4%
  Mellanox                252   2.0%
  Renesas Electronics     247   2.0%
  IBM                     232   1.8%
  Arm                     231   1.8%
  Code Aurora Forum       222   1.8%
  (Consultant)            216   1.7%
  Texas Instruments       213   1.7%
  NXP Semiconductors      210   1.7%
  Oracle                  147   1.2%
  Broadcom                143   1.1%

By lines changed
  Intel                       78083  11.5%
  Code Aurora Forum           68538  10.1%
  Linaro                      59492   8.8%
  AMD                         44979   6.6%
  Red Hat                     40553   6.0%
  (Unknown)                   28591   4.2%
  (None)                      27387   4.0%
  (Consultant)                23271   3.4%
  Google                      20038   3.0%
  SUSE                        19274   2.8%
  Facebook                    17525   2.6%
  Texas Instruments           16561   2.4%
  Mellanox                    14977   2.2%
  Linux Foundation            12289   1.8%
  Marvell                     11678   1.7%
  Realtek                     10968   1.6%
  Collabora                    9491   1.4%
  NXP Semiconductors           8689   1.3%
  Solarflare Communications    8670   1.3%
  IBM                          8586   1.3%

We have reached the point where a full one-eighth of the patches coming into the kernel originate from within Intel. For years, Red Hat was the top contributor of changesets, but its position has been slowly falling for some time; this may be the first time that SUSE has contributed more patches than Red Hat during a development cycle. Otherwise, these numbers look about the same as they usually do.

If one looks at Signed-off-by tags applied to patches that were written by somebody else, the picture changes a bit:

Non-author signoffs in 5.6

  Developers
    David S. Miller            1162  10.1%
    Alex Deucher                748   6.5%
    Greg Kroah-Hartman          653   5.7%
    Mark Brown                  445   3.9%
    Paolo Bonzini               271   2.4%
    Kalle Valo                  239   2.1%
    Herbert Xu                  236   2.1%
    Andrew Morton               220   1.9%
    Mauro Carvalho Chehab       213   1.9%
    Alexei Starovoitov          188   1.6%

  Employers
    Red Hat                    2423  21.1%
    Linaro                     1213  10.6%
    AMD                         786   6.9%
    Intel                       763   6.7%
    Google                      746   6.5%
    Linux Foundation            701   6.1%
    Facebook                    395   3.4%
    SUSE                        390   3.4%
    (None)                      351   3.1%
    Mellanox                    296   2.6%

When a developer adds a Signed-off-by tag to somebody else's patch, it (usually) means that said developer is routing that patch toward the mainline, usually by applying it to a subsystem repository. These signoffs thus give some visibility into who the kernel's gatekeepers are. David Miller, the maintainer of the networking subsystem, has kept that top position for years. The presence of the other developers indicates that there continues to be a lot of activity in the AMD graphics, device support, and KVM subsystems, among others.
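Statistics of this kind can be approximated directly from a kernel git tree. A rough sketch follows; the v5.5..v5.6 range assumes a Linux checkout, and this simple grep tallies all signoffs rather than separating author signoffs from maintainer signoffs the way LWN's tooling does:

```shell
# Tally Signed-off-by trailers per person over a release range
# and show the ten most frequent signers.
git log --no-merges v5.5..v5.6 --format='%b' |
    grep -i '^Signed-off-by:' |
    sed -E 's/^Signed-off-by:[[:space:]]*//I' |
    sort | uniq -c | sort -rn | head -10
```

The same pipeline works against any repository whose commits carry Signed-off-by trailers; only the revision range is kernel-specific.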

In the right-hand column we see that, while the percentage of patches coming from Red Hat has dropped over the years, over 20% of the patches getting into the mainline still pass through the hands of Red Hat developers.

The first time that LWN looked at signoff statistics was for the 2.6.22 development cycle in 2007. At that time, the top gatekeepers were Andrew Morton and Linus Torvalds, neither of whom handles vast numbers of patches now; third place was held by David Miller. Four of the top-ten maintainers in 2007 are still in the top ten now. Similarly, five of the top-ten companies were in the top ten 13 years ago too (if one deems the 2007 Novell to be the same as the 2020 SUSE).

All told, the picture that emerges indicates that it's mostly business as usual in the kernel community. The flow of patches continues at a steady rate and the number of developers remains large. The makeup of the community changes — slowly — but the process of cranking out kernels continues uninterrupted.

Comments (1 posted)

Reworking StringIO concatenation in Python

By Jake Edge
April 1, 2020

Python string objects are immutable, so changing the value of a string requires that a new string object be created with the new value. That is fairly well-understood within the community, but there are some "anti-patterns" that arise; it is pretty common for new users to build up a longer string by repeatedly concatenating to the end of the "same" string. The performance penalty for doing that could be avoided by switching to a type that is geared toward incremental updates, but Python 3 has already optimized the penalty away for regular strings. A recent thread on the python-ideas mailing list explored this topic some.

Paul Sokolovsky posted his lengthy description of a fairly simple idea on March 29. The common anti-pattern of building up a string might look something like:

buf = ""
for i in range(50000):
    buf += "foo"
print(buf)

As the Python FAQ notes, though, each concatenation creates a new object, which leads to a quadratic runtime cost based on the total string length. The FAQ recommends using a list to collect up all of the string pieces, then calling the join() string method to turn the list into the final string. But Sokolovsky focused on a different mechanism in his post; the FAQ also suggests using the io.StringIO class in order to change strings in place. Using that instead of repeated concatenation might look like:

buf = io.StringIO()
for i in range(50000):
    buf.write("foo")
print(buf.getvalue())
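For comparison, the list-plus-join() idiom that the FAQ actually recommends produces the same output and is efficient on every implementation:

```python
# Collect the pieces in a list, then join them once at the end;
# each append is O(1) amortized, and join() allocates the final
# string a single time.
pieces = []
for i in range(50000):
    pieces.append("foo")
print("".join(pieces))
```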

To make it easier for existing programs to be switched from one form to the other, he suggested adding a "+=" operator for StringIO as an alias for the write() method. Adding an __iadd__() method to the StringIO class would allow the write() call to be removed in favor of using +=. The buffer initialization and getvalue() call would still be needed, but those are each typically done in only one place, while the concatenation may be done in multiple places. So a code base could fairly easily be switched from the anti-pattern to more proper Python just by creating the buffer instead of a string and getting its value where needed with getvalue(); the rest of the code could stay the same: "it will leave the rest of code intact, and not obfuscate the original content construction algorithm".

As Sokolovsky noted, his performance benchmarking shows that CPython 3 has already optimized for the anti-pattern, though. So even though it is still considered to be a bad practice, there is no real penalty for writing code of that sort in CPython 3—but only for that version of the language:

These results can be summarized as follows: of more than half-dozen Python implementations, CPython3 is the only implementation which optimizes for the dubious usage of an immutable string type as an accumulating character buffer. For all other implementations, unintended usage of str incurs overhead of about one order of magnitude, 2 order of magnitude for implementations optimized for particular usecases (this includes PyPy optimized for speed vs MicroPython/Pycopy optimized for small code size and memory usage).

The optimization, which is described by Paul Ganssle in a blog post, effectively allows CPython to treat the string as mutable in the case where there are no other references to it. In a loop like the one in the example, there is no other reference to the string object being used, so instead of creating a new object and freeing the old, it simply changes the existing object in place. CPython can detect that case because it uses reference counts on its objects for garbage collection; PyPy is not reference-counted, so it cannot use the same trick.
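The effect can be observed with a simple timing comparison. This is only an illustrative sketch; the absolute numbers are machine-dependent, but on CPython 3 the two approaches should be close, while the other implementations Sokolovsky measured would show a gap of an order of magnitude or more for the += loop:

```python
import timeit

def concat(n):
    # The "anti-pattern": repeated += on a str; CPython 3 mutates
    # the string in place when no other reference to it exists.
    buf = ""
    for _ in range(n):
        buf += "foo"
    return buf

def join(n):
    # The FAQ-recommended idiom: accumulate pieces, join once.
    return "".join("foo" for _ in range(n))

assert concat(1000) == join(1000)
print("concat:", timeit.timeit(lambda: concat(1000), number=100))
print("join:  ", timeit.timeit(lambda: join(1000), number=100))
```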

But Sokolovsky is trying to target the bad practice regardless of the (lack of a) performance impact. The practice is widespread; the optimization added to CPython is evidence that it needs addressing, he said. He suggested that other implementations can either follow the lead of CPython (if possible) or try to promote better practices: "This would require improving ergonomics of existing string buffer object, to make its usage less painful for both writing new code and refactoring existing." And, of course, he was advocating the latter.

He also noted that, since the performance problem does not really exist for CPython, it might be seen as an argument that there is nothing to fix. "This is related to a bigger [question] 'whether a life outside CPython exists', or put more formally, where's the border between Python-the-language and CPython-the-implementation." Beyond that, one could "fix" the problem by creating a new class derived from StringIO that has an __iadd__(), but that suffers from worse performance as well, which argues that the problem should be addressed in C in StringIO itself.
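Such a derived class takes only a few lines. A minimal sketch follows; the name StringBuilder is invented here for illustration:

```python
import io

class StringBuilder(io.StringIO):
    """io.StringIO with the proposed "+=" spelling of write()."""
    def __iadd__(self, s):
        self.write(s)
        return self  # in-place operators must return the object

buf = StringBuilder()
for i in range(3):
    buf += "foo"
print(buf.getvalue())  # foofoofoo
```

Because __iadd__() here is implemented in Python rather than C, each += pays a method-call overhead on every iteration, which is the performance objection Sokolovsky raised against the subclass approach.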

The overall reception to the idea was chilly, at best, perhaps partly fueled by Sokolovsky's somewhat aggressive tone in his original note and some of the followups. Andrew Barnert replied that the join() mechanism is really the better alternative:

It’s usually about 40% faster than using StringIO or relying on the string-concat optimization in CPython, it’s efficient across all implementations of Python, and it’s obvious _why_ it’s efficient. It can sometimes take more memory, but the [tradeoff] is usually worth it.

Barnert said that StringIO is meant to be a file object that resides in memory, so it is appropriate that its API does not support +=. He concluded with a third option for alternative Python implementations beyond the two that Sokolovsky presented:

Recognize that Python and CPython have been promoting str.join for this problem for decades, and most performance-critical code is already doing that, and make sure that solution is efficient, and [recognize] that poorly-written code is uncommon but does exist, and may take a bit more work to optimize than a 1-line change to optimize, but that’s acceptable—and not the responsibility of any alternate Python implementation to help with.

The problem with the join() mechanism is that it is somewhat non-intuitive, especially for those coming to Python from another language. As Barnert noted, though, it can use more memory as well. Sokolovsky attempted to measure the difference in memory use, but the technique he used was not entirely convincing. His focus would appear to be on embedded Python, such as his Pycopy Python implementation. Pycopy is descended from MicroPython, which he also worked on. For the embedded use case, StringIO may well be the better choice for building strings, at least from a memory perspective; is that enough of a reason to turn a file-like object (StringIO) into a string-like object, but only for concatenation (+=)? The consensus answer would seem to be "no".

There was some discussion of having a generalized mutable string type, though that was not at all what Sokolovsky was after; there are some good reasons why that idea has never really taken off for Python, as Christopher Barker described. "So I'd say it hasn't been done because (1) it's a lot of work and (2) it would be a bit of a pain to use, and not gain much at all."

The objections to the original idea are basically that += can be trivially implemented for a derived class of StringIO; if the performance of that is not sufficient, switching to join() would fix that problem. The existing "join() on a list of strings" idiom works well for most people and nearly all use cases; it is the preferred way to solve this problem in Python, so making another idiom more usable is muddying the water to a certain degree. As The Zen of Python puts it: "There should be one-- and preferably only one --obvious way to do it."

On the other hand, CPython is the dominant player in the ecosystem, as Steven D'Aprano pointed out; that means applications can be written to take advantage of CPython quirks. But even if all of the other Python implementations agreed on a change, it would not really be used unless CPython followed suit.

It seems to me that Paul makes a good case that, unlike the string concat optimization, just about every interpreter could add this to StringIO without difficulty or great cost. Perhaps they could even get together and agree to all do so.

But unless CPython does so too, it won't do them much good, because hardly anyone will take advantage of it. When one platform dominates 90% of the ecosystem, one can sensibly write code that depends on that platform's specific optimizations, but going the other way, not so much.

That is something for the CPython community to keep in mind. The existence of the other implementations of the language may provide opportunities to make some changes that are meant to be CPython-only (or at least not mandated for Python the language). But those changes can still get baked into the language via the back door—because most Python code runs on CPython.

In the final analysis, it is a pretty minuscule change being sought. The existence of the string concatenation optimization indicates that there is interest in helping "badly written" code to some extent, but perhaps adding += to StringIO is a bridge too far. There definitely does not seem to be any kind of groundswell of support for the idea, and there are costs beyond just the (minimal) code maintenance required, including in documentation and user education. The benefits, which some find to be dubious to begin with, are seemingly not enough to outweigh them.

Comments (38 posted)

Page editor: Jonathan Corbet


Copyright © 2020, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds