Leading items
Welcome to the LWN.net Weekly Edition for March 24, 2022
This edition contains the following feature content:
- A method for replacing Python tuple entries: a difficult discussion on a small proposed Python language addition.
- Three candidates vying for Debian project leader: how the candidates stand on a variety of topics.
- Improved response times with latency nice: a new priority value to provide control over response times.
- Driver regression testing with roadtest: a way of testing device drivers in the absence of the hardware.
- A look at some 5.17 development statistics: where the code in 5.17 came from.
This week's edition also includes these inner pages:
- Brief items: Brief news items from throughout the community.
- Announcements: Newsletters, conferences, security updates, patches, and more.
Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.
A method for replacing Python tuple entries
A recent discussion on the python-ideas mailing list gives some insight into how to—or how not to—propose a feature to be added to the language. At first blush, adding a method to Python's immutable tuple type for replacing one of its elements is not a particularly strange idea, nor one that would cause much in the way of backward-compatibility concerns. Even though there was some evidence offered that such a method might be useful, it seems pretty unlikely that the idea will go anywhere, at least in part because of the repetitive, bordering on aggressive, manner in which its benefits were argued.
On March 10, "wfdc" posted a short note proposing a replace() method for tuples that would return a new tuple with one of the original values replaced at the index specified:
    >>> t = (1, 2, 3)
    >>> t.replace(1, 9)
    (1, 9, 3)
Wfdc pointed to a Stack Overflow question about replacing a value in a tuple and said that it would be a "natural counterpart" to the _replace() method for the namedtuple type that is part of the collections module of the standard library. Namedtuples add the ability to refer to elements in a tuple by name and the _replace() method can be used to create a new namedtuple with new values for the names given as keyword arguments.
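The proposed behavior is easy to approximate today with a small helper; the sketch below (the name tuple_replace is ours, not part of the proposal) also demonstrates the namedtuple._replace() counterpart that wfdc cited:

```python
from collections import namedtuple

def tuple_replace(t, index, value):
    """Return a new tuple with t[index] replaced by value."""
    # Normalize negative indices so the slicing below handles them too.
    if index < 0:
        index += len(t)
    if not 0 <= index < len(t):
        raise IndexError("tuple index out of range")
    return t[:index] + (value,) + t[index + 1:]

t = (1, 2, 3)
print(tuple_replace(t, 1, 9))   # (1, 9, 3)

# The namedtuple counterpart: _replace() takes keyword arguments
# naming the fields to change and returns a new namedtuple.
Point = namedtuple("Point", ["x", "y"])
p = Point(1, 2)
print(p._replace(y=9))          # Point(x=1, y=9)
```

The explicit index normalization and bounds check are what separate this from a naive one-liner, a point that comes up later in the thread.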
Rob Cliffe objected to the name "replace", saying that it could be confused with the str.replace() method, which does something completely different (it replaces a given substring). In addition, since the operation can be written as a one-line function, as demonstrated in the post, "there is no strong need for a new method for it". But wfdc disagreed, saying that the API of a method for an unrelated type "is not a valid objection". They mentioned Python's "batteries included" philosophy and concluded: "If users find themselves re-implementing the same utility function over and over again across different projects, it's a good sign that such a function should be part of the standard library."
Jeremiah Vivian agreed with that assessment, suggesting that repeated re-implementations of the functionality "could be made a little more readable by making it into an actual method". Christopher Barker said that he was leery about adding methods to built-in types, but has found the need to replace tuple elements "pretty frequently, and it always feels far more awkward than it should". In addition, adding it as a method would mean that it could be optimized, since the pure-Python implementations use multiple temporary tuples or lists to build up the new tuple.
Chris Angelico questioned the use of a tuple, and thought that a namedtuple might be a better choice if replacement functionality was needed. But Barker disagreed that switching to namedtuple was necessarily the right path. He pointed out that wfdc's original one-liner implementation was buggy; "Not every one or two line function needs to be in the stdlib, but short functions that are easy to get wrong are good candidates :-)". Beyond that, namedtuples are more heavyweight than tuples and slower; "I would never recommend a namedtuple for a situation where a tuple is a fine choice other than the lack of a replace method."
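The thread does not reproduce wfdc's exact implementation, but the obvious slicing one-liner illustrates how easy such a function is to get wrong: it quietly returns nonsense for negative or out-of-range indices instead of raising an error.

```python
def replace_naive(t, i, v):
    # The obvious one-liner; it looks right at first glance.
    return t[:i] + (v,) + t[i + 1:]

print(replace_naive((1, 2, 3), 1, 9))    # (1, 9, 3) -- fine
print(replace_naive((1, 2, 3), -1, 9))   # (1, 2, 9, 1, 2, 3) -- wrong!
print(replace_naive((1, 2, 3), 5, 9))    # (1, 2, 3, 9) -- silently appends
```

With i = -1, t[:i] drops the last element but t[i + 1:] is t[0:], the whole tuple, so the result duplicates elements rather than replacing one; this is the kind of subtle bug that Barker's "easy to get wrong" comment alludes to.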
Cost
There was some disagreement about whether the Stack Overflow question really indicated much about the need for the method, and if replace() was even something that was commonly needed. Wfdc maintained that it was useful, especially when compared to the existing built-in tuple methods such as index() and count() (which are part of the common operations for sequence types; tuple is an immutable sequence). But Brendan Barnwell said that argument was unconvincing and left out an important piece:
As far as I can see, you don't seem to be providing many substantive arguments in favor of your proposal. All you are saying is that sometimes it might be useful, and that there are other existing methods that aren't super useful, so why not add this one too? [...]

Similarly, what you are not providing is any consideration of the *costs* of adding this feature relative to the benefit. These costs include implementation and maintenance, increased API surface area for users to familiarize themselves with, and cluttering the type's namespace with yet another method name (even though you seem to acknowledge that it's already cluttered with methods that aren't very useful). Against that we have the benefit of being able to effectively "assign" to a tuple element, which is not zero but also not exactly a burning need. I don't think you're going to get much traction on this suggestion unless you at least engage with this idea of cost and benefits of *your proposed feature* itself, rather than just focusing on what you perceive as the low utility of things that are already part of Python.
Paul Moore pointed out that finding significant code that would benefit from a proposed feature is the best path toward adding it to the language. He is not in favor of the idea, but said that perhaps another core developer would be interested if such code could be found:
In the past, proposals that succeeded often did so by surveying significant bodies of real-world code (for example, the standard library) and pointing out where the proposed new feature would demonstrably improve the readability or maintainability of parts of the code.
Moore, like Angelico, wondered about the use case; wfdc did not provide any real context for why their program needed to use tuples, when other solutions might be better. Moore said that he was concerned that it was an example of an XY problem, where the actual need is being obscured by the question being asked (e.g. "How can I use X to do Y?").
What's the real-world problem being solved? Why, in the context of that real-world problem, is a tuple (as opposed to, say, an immutable dataclass or namedtuple, both of which have replace methods) the demonstrably best choice, *in spite* of the fact that other choices provide the supposedly important replace functionality?
Throughout the thread, this request was repeated by various commenters, but
wfdc never truly responded to it.
Like Moore, Steven D'Aprano tried to help steer wfdc in the right direction. D'Aprano thought that a Python Enhancement Proposal (PEP) would be needed if this feature were to be seriously considered. He also pointed out the costs associated with adding any new feature to Python, some of which might not be obvious. It may seem overly burdensome to take the PEP route, but it is the normal path to follow; "Every change has to justify itself, every feature has costs". He suggested looking at both successful and unsuccessful PEPs for guidance, noting that PEP 584 ("Add Union Operators To dict"), which he co-authored, has an interesting history:
By the way, when I started writing PEP 584, it was with the intention that the PEP be rejected once and for all, so we could have a place to point people at when they wanted to know [why] they can't add dicts. Only it turned out that the case for dict "addition" (dict union) was stronger than anyone thought, and the PEP was accepted.
D'Aprano described what should be in a PEP of this sort at some length. Writing a PEP certainly makes for a somewhat daunting task, for what seems like a simple addition to the language, but that is to be expected. Changing a 30-year-old language should be somewhat difficult; it requires overcoming the headwinds against any new feature, which means that the need must be balanced against the cost in order to start convincing the core developers that the net result is positive.
Need
Wfdc took a step toward better demonstrating the need with a pair of Sourcegraph queries (here and here) that used regular expressions to search open-source Python code for the existence of code implementing the two common ways to replace a tuple entry. The results show many uses of those idioms in a wide variety of code throughout the Python ecosystem. D'Aprano appreciated the effort, even though the links did not work for him at first, but noted that there is more to it than simply posting links:
Thank you for providing some search terms, but you misunderstand the process of proposing new functionality to the builtins and stdlib. The onus is not on *us* to do the work, but for those making the proposal. If we ask for examples, we expect you to do the work of curating some good examples, not just to dump a regex in our laps and tell us to search for them ourselves.
D'Aprano suggested, again, that if wfdc wanted to continue pushing the feature, they should be looking at putting together a PEP and attracting a core developer as a sponsor. That might entail broadening the idea to apply to the abstract base class (ABC) for sequences, since simply adding the functionality to tuples is such a small change, D'Aprano said.
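The exact Sourcegraph regexes are not reproduced here, but the two idioms being searched for are the familiar ones, a round-trip through a mutable list and slice concatenation:

```python
t = (1, 2, 3)

# Idiom 1: convert to a list, mutate, convert back.
lst = list(t)
lst[1] = 9
t1 = tuple(lst)

# Idiom 2: concatenate the slices on either side of the new value.
t2 = t[:1] + (9,) + t[2:]

assert t1 == t2 == (1, 9, 3)
```

Both idioms build one or more temporary objects, which is the optimization opportunity Barker mentioned earlier in the thread.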
Arguments
The discussion degenerated somewhat, with more requests for wfdc's use case, disagreements over the name, more questions of whether namedtuple might be a better choice, if the feature can actually be implemented as a one-liner, and so on. There was no real progress toward an actual proposal, nor any core developer clamoring to sponsor one. In part, that was because wfdc apparently did not want to put in the time and effort to get there, but there was a fair amount of sniping (on both sides) throughout. This kind of exchange is not an uncommon pattern, though, as Cliffe noted:
This is a common scenario on python-list or python-ideas: Someone has an idea that they think is the greatest thing since sliced bread. They propose it, and feel hurt / rejected when they get pushback instead of everyone jumping up and down and saying how brilliant it is. Sometimes they are downright rude. I wouldn't say you've quite crossed that line, but your tone certainly comes across (to me at least) as belligerent. It won't help your cause to put people's backs up. I apologise if that's not how you meant it.
The argument style used by wfdc is somewhat combative, certainly repetitive (they linked the queries several times, for example), and many of their replies had entries like "see my answer to ..." without a link, which made it hard to determine which of their (many) replies they were referring to. Several people tried to be helpful to wfdc along the way, but there was also a fair amount of negativity coming from the Python community, as D'Aprano noted: "There is nothing wrong with using plain tuples as your data structure, and the insistence from certain people that wfdc is wrong to do so is arrogant, rude and condescending." D'Aprano continued that in another message, saying again that there was room for both sides to improve:

It is true that much of this thread has been dominated by some extremely negative, condescending replies. But you are not helping by biting the hands of those trying to help you.

You have been told the processes that you have to follow to get this change accepted into Python. Snarling and biting at us is not going to change that.
Barker shared some of those same thoughts as D'Aprano, noting that Angelico had recently "lamented that Python-ideas seemed to immediately reject any new idea out of hand", but that he was part of the chorus of negativity in the thread. Angelico did not see things that way, but Barker said:

Anyway -- despite a not-great tone, I think the conversation has been at least a little productive. The OP should have an idea what they need to do if they really want to push this forward. And persuading this list is not what needs to be done.

BTW: I'm not suggesting that it's a bad idea to be discouraging: getting something even seemingly small into Python is a lot of work, and is often not successful. Giving proposers an honest assessment of their chances is the kind thing to do.
But there is a very big difference in tone between:
"This is a small thing, and I don't think it's likely to be accepted"
and
"This is not a good idea at all"
And it feels (to me anyway) like there was quite a bit of the latter in this thread.
Without major changes in wfdc's approach, it seems highly unlikely that anything will come of the idea. Perhaps someone else can pick it up and follow a path more likely to succeed, or at least to get further along in the process. There are multiple replies in the thread that outline what steps are needed to do so—along with ample examples of how not to engage with the Python community. Looking at some of that will be helpful to anyone who is considering proposing a new feature for Python of any sort, so the discussion, however dysfunctional, surely has some value.
Three candidates vying for Debian project leader
Three candidates have thrown their hat into the ring as candidates for the 2022 Debian project leader (DPL) election. One is Jonathan Carter, who is now in his second term as DPL, while the other two are Felix Lechner and Hideki Yamane. As is the norm, the candidates self-nominated during the nomination period and are now in the campaigning phase until April 1. The vote commences April 2 and runs for two weeks; the results will be announced shortly thereafter and the new DPL term will start on April 21. The candidates have put out platforms and are fielding questions from the voters, Debian developers, thus it seems like a good time to look in on the election.
While the DPL is the titular head of the project, their powers are pretty limited by the Debian Constitution; most of the power in Debian lies collectively and individually with the developers. The DPL is, to a certain extent, an administrative role more than it is an executive one. The intent is also for the DPL to be kind of a thought leader for the project, leading discussions, possibly proposing general resolutions (GRs), and consulting with the developers on how to use project money or other assets; in addition, the DPL is a catch-all for urgent decisions or those for which there is no relevant decision-making entity in the organization.
Platforms
Lechner's platform describes his approach as trying to "elevate the happiness and well-being of all Debianites for the coming year". He would like to teach the project how to "leave aside the cynicism and the anger that take center stage in Debian sometimes". He envisions Debian as a republic, somewhat modeled on his hometown of Fremont, California, which "has a civic system of boards and commissions that means everyone gets a voice"; he believes this is part of what leads Fremont to be the "happiest community in North America". He described his agenda as well:

As your leader, I will work tirelessly to reduce conflict within the project. Toward the outside world, I will make every effort to help users and specialty communities around the world fall in love with Debian again. We should be the premier development platform for all programming ecosystems.

I furthermore hope to advance on a variety of long-term challenges for the project, such as replenishing an aging membership and dealing with the proliferation of language-specific package managers (aka the "vendoring" problem). With your help, I hope to put Debian on a good course for the next ten or so years. Let's work together!
In his platform, Carter focuses on tasks he would like to work on for his third term:
Last year I learned that it's a lot harder pushing things forward in a release year, than in a non-release year. During freeze, we're squarely focused on remaining issues to get the stable release out, and it's not the best time to have GRs or very involved discussions about project changes.
There are three organizational items he would like to address: looking at formal registration of Debian as its own organization, getting "minimal agreements" on paper with the trusted organizations (which hold assets for the project), and improving accounting with better tracking of the assets. To a certain extent, each of those goals builds on the others.

Carter also has a technical goal to try to coordinate changes in handling firmware for Debian:
The methods in which firmware is loaded and distributed has changed significantly over the last decades. Having non-updated firmware and microcode can lead to significant security risks, and many new devices store no permanent embedded copies of firmware, requiring it to be loaded from disk. This has some significant consequences for Debian. Our default free media doesn't ship with important microcode updates, and on our live media we run into problems with both firmware and non-free drivers, causing a large amount of systems to be unusable with those media. I'm not advocating to just include non-free bits on all our media, but I do think there are improvements we can make and actions we can take without compromising our core values. I'd also like to approach both FSF, OSI and LF to see if there's scope for us to work together on firmware problems. Also, we have quite a bit of funds available, we could make some funds available for the development of free firmware in the cases where it's plausible.
Yamane has been active in Debian since 2010, helping out on translating documentation into Japanese, his native language, among other activities, as his platform describes. He warned that he will need more support as DPL from Debian contributors than the other candidates would, because of his English-language skills and a lack of ability to "mediate fighting between our contributors. Be calm, stay cool, stay safe."
The platform also outlines the things Yamane wants to work on as DPL, starting with providing a better experience for both contributors and users of the distribution. Part of that would be hearing what users want, discussing it among the contributors, then trying some things: "More tries, more failures, and get some success during that. We’re in 2020s. Be Agile."
He thinks that the Debian infrastructure ("Web, Wiki, BTS [bug-tracking system], repository, etc.") may need upgrading and expansion as well, with an eye toward what he called a "moonshot": "give a most comfortable environment for developers, more stability and less vulnerabilities for admins, a reasonably fresh desktop environment for average users, more i18ned applications and documentations for non-native English people, etc." Lastly, he wants to work on knowledge transfer from the existing contributors to new ones so that current know-how will be maintained "for the next shiny decades".
Q&A
After the nominations, it is traditional for people to post questions for the candidates to the debian-vote mailing list. Charles Plessy started things off by asking about term limits for those in positions of power within Debian. He noted the change to add term limits for the technical committee and wondered what the candidates thought of that and of applying them more widely in the project.
Lechner is in favor of term limits in general, but wants to set up an "appointments committee" as one of his first orders of business to gather input from contributors and advise the DPL on who might best serve on various committees. If the pool of volunteers for delegations is substantial enough, "a future referendum could then introduce term limits for delegates". Ansgar Burchardt said that the DPL can simply replace delegates, but that there are other areas in the project where there are positions of power:

[...] maintainership over packages as an example. In case of disagreement, the bar to change maintainers is higher than for changing delegates, but the Technical Committee can do so.

Do you think that an Appointments Committee should also handle package maintainership and should we have term limits for how long people can maintain packages, in particular core packages like gcc, libc, dpkg, apt, ...?
Lechner replied that he had only talked about delegates because the limits on the powers of the DPL end there. He was not at all sure that term limits for maintainers made sense, but if Burchardt or other contributors thought so: "Please make your case with the Appointments Committee, or apply to become a member thereof. Then you can use the political weight of your office to initiate a referendum."
Carter had a lengthy response that broke down the different organizations within the project and suggested there might be a place for term limits on some of them. The limits on the members of the technical committee "seems to have worked well so far", but there are other situations where continuity in knowledge and skills is important, so "having a strict term limit might also be a bad idea". There are various mechanisms that can be used to help keep Debian and its organizations running smoothly:
I think expiry is one of the available tools we can use to make teams/delegations better. Voting is another, and tiered memberships yet another. There's probably a lot that we can explore, but I don't think this is best driven by the DPL, it needs to come from the teams and from the project members. Unfortunately, after two terms, I think any prospective DPL who thinks that they'll have time to actively drive all of this by themselves is in for some disappointment.
As might be guessed based on his platform, Yamane was also concerned about setting hard term limits: "Without succession of knowledge and skill to new people, it just causes a discontinuity."
Handling long-running legal issues
Molly de Blanc asked, in general terms, about a topic that had been raised on the Debian-contributor-only debian-private mailing list about an ongoing legal dispute that the project has been handling. Because that discussion is private, answers about it have to be fairly non-specific, which led to a somewhat tangled—perhaps heated—sub-thread where Lechner tried to answer the question. It is difficult not to guess that it is related to the Daniel Pocock mess that had already been running for a few years when we wrote about it two years ago. Naturally, Pocock himself was unable to resist making a cameo appearance with "questions" for the candidates.
In any case, De Blanc's question was: "How would you transition into taking on this particular responsibility and similar longer running issues should they arise in the future?". Carter said that there was a team working on the problem, with a Git repository "that contains a lot of evidence complete with a timeline that links to all the individual bits". That team would still be available to work on it, as would he, until a new DPL was "comfortable enough for me to move on from there". But he was optimistic that the issue in question would not really need much of a transition: "we've been making some large strides and we are likely to hit a significant milestone even before the DPL elections start, so hopefully by the time the elections are done there won't be too much work left on that."
Yamane said that his general style would be to put together a team of contributors and outside lawyers to try to deal with legal problems. Some separate infrastructure would be set up for the team to use to track the problem. The DPL's role would be to coordinate between that team and others that might need to be consulted, and to communicate with the project as much as possible, at least monthly.
Lechner began by stepping lightly around the question because of the private nature of the dispute, saying that he has been a negotiator in his day job for many years; "Compromise is my life." In a follow-up, De Blanc asked a "tangential question" about where he draws the line between individual and community-wide issues. She cited two example areas where those lines might blur:
To use an example of copyright claims: Would it be Debian's responsibility if someone raised a copyright claim against an individual for their participation in Debian? Alternatively: If a Debian contributor (maintainer, developer, etc) was being harassed due to their involvement with the project, what responsibilities do you think the project would have to them? Do you think there's a significant difference if the copyright claim (or harassment) is coming from inside the Debian community or outside?
The questions might make one wonder if De Blanc and Lechner have disagreed
on these topics in the past, perhaps even on the debian-private thread in
question.
Lechner said that he plans "to exercise very few of the broad presidential powers available to the project leader under the constitution"; instead, he would like to see that power distributed to "an open and transparent system of boards and commissions that enjoy broad community support". Richard Laager asked for some specifics about how that would work; the idea has some appeal "but I fear that the idea and the reality may be different".
Lechner outlined some of his ideas; he wants to ensure that his decisions have "some measure of democratic legitimacy". He used monetary disbursements as an example, saying that there would be a committee in charge of that, made up of contributors who represent different views within the organization; the committee's meetings would be open and those who have concerns or complaints could bring them to the committee. That mechanism would have an overall beneficial effect for the project, he said:
For contentious topics, the debate over disbursements would automatically be compartmentalized to your tiny committee without burdening the entire project. There is no need to write to d-devel (or to threaten to do so) unless some outrageous conduct deserves broader attention. Neither would there be a need for a General Resolution, or the all too popular threat of one. The moderating effect grows with the size of your committee.

The overall temperature of the project would also go down. We already do something similar with our technical teams.
Another part of Lechner's response to De Blanc, which seemingly directly referenced part of the debian-private thread, concerned the harassment part of her question:
The harassment case is easily distinguished in that (1) the victim seeks to initiate legal action instead of needing help with a defense, and (2) the project's survival is not at risk—unless the victim sues Debian as well.

For harassment originating inside Debian, the project has (or will soon have) an appropriate disciplinary process. That is the extent of Debian's responsibility.
Steve McIntyre wondered what that meant with regard to harassment victims: "Do you not feel that project should stand with and support contributors facing harassment because of their work in Debian?" Lechner danced around that question some as well, wondering whether said support was "for empathy or for financial assistance", but eventually said that as DPL he would offer whatever kinds of support "the members perceive as proper" to harassed contributors.
Perhaps surprisingly, Carter replied that voting on such things was impractical and that the question was meant to try to get a sense for what a DPL candidate would do, since the form of government Lechner envisions is not likely to cover everything:
Even if you end up setting up that army of committees (I can't imagine all the bureaucracy that will come with), you would still have to make frequent decisions unlikely covered by those committees. So, again, how would you gauge what project members perceive as proper?
Lechner disagreed that his system was overly bureaucratic, but feels that rule by decree is not right for the project: "I believe that a civic system, however simple, approximates the will of the people to a greater degree. No referendums are needed."
Meanwhile, another response to De Blanc's questions perhaps gets to the heart of the matter, but was seen as insensitive, at best, by participants in the thread: "Did the project provide assistance to you, and do you worry that the assistance might not continue if I am elected?"
Tiago Bortoletto Vaz replied, calling it "a rhetorical passive-aggressive borderline-bullying response". Gunnar Wolf more or less agreed, with both saying that it made voting for Lechner unlikely. Wolf said: "This year I think I will break my usual practice, and vote a certain DPL candidate below NotA [None of the Above] :-\". Others in that sub-thread, which went further off the rails, concurred with that assessment. Lechner was apologetic to a certain extent, saying that he was somewhat confused about how to interpret De Blanc's and McIntyre's questions, but it would seem that, at least for some, the damage has been done.
A Debian organization?
Another semi-related question for the candidates was asked by Christian Kastner: "What is your position on registering Debian as an organization?" Kastner noted that Carter had specifically mentioned doing so in his platform, so the question was mostly aimed at Lechner and Yamane. Lechner said that he thinks "Debian's governance is presently insufficient to support any kind of incorporation", but that he is in favor of changing that: "I believe Debian should stand on its own. I am ready to put Debian on a short path to incorporation." Yamane said that he currently has a positive outlook on the idea, but would want to put together a special team to outline the pros and cons of doing so.
But Bill Allombert was not sure he understood what is meant by having a separate organization. Lechner replied that his thinking about forming some kind of organization for Debian has evolved over the course of the parallel discussions on the questions asked. As he noted in one of his early replies to De Blanc: "Did Debian survive for so long in part because there was no organization to sue?" He is concerned that having a single overarching organization might lead to real problems down the road:
If the project finances lawsuits, as suggested elsewhere, we may soon have a liability problem. Newton's law also applies in conflict: Exerting force always creates a counter-force. (Many folks in Debian do not understand that basic maxim of diplomacy.) It would only be a matter of time until we have to defend ourselves.

The same thinking has kept me from pushing for lawsuits as your trademark delegate.
Kastner said that he was "less concerned with regards to malicious litigation (although that is a valid concern)", but more with the day-to-day problems of not having a single organization where contributions can go and that can hold assets (e.g. hardware, copyrights) for the project directly. "Currently, the Project has no legal standing of its own, meaning that within any legal context, there is no Project."
Allombert sees that as a feature, however; Debian "is not bound to any particular [jurisdiction], it only exists through consensus of its members". He also pointed out that any kind of organization would need to be registered in some particular country and be subject to its laws; "Debian members would be split between those that are in the [jurisdiction] of the foundation and those that are not and the former would be inevitably advantaged." At least so far, proponents of the separate-organization path have not replied to those concerns.
There are, currently, several other questions being discussed, including one on the "Bits from the DPL" reports that used to come out monthly, another on the Code of Conduct, and a third on Debian and people with disabilities. There is still time for more to be added before the voting period starts on April 2. While it is unfortunate that there seems to have been information from the private list that spilled over into the discussions, it seems that the voters are getting a pretty clear view of the candidates from those (and other) questions. We'll have to wait and see how it all comes out on April 16.
Improved response times with latency nice
CPU scheduling can be a challenging task; the scheduler must ensure that every process gets a fair share of the available CPU time while, at the same time, respecting CPU affinities, avoiding the migration of processes away from their cached memory contents, and keeping all CPUs in the system busy. Even then, users can become grumpy if specific processes do not get their CPU share quickly; from that come years of debates over desktop responsiveness, for example. The latency-nice priority proposal recently resurrected by Vincent Guittot aims to provide a new tool to help latency-sensitive applications get their CPU time more quickly.

Over the years, numerous approaches have been used to try to improve the response time of important processes. The traditional Unix "nice" value can be used to raise a process's priority, for example. That can work, but a process's niceness does not directly translate into latency; it controls how much of the available CPU time the process can consume, but not when the process can actually run. Using realtime priorities will cause the scheduler to run a process quickly, especially if realtime preemption is enabled, but a process running at that priority can also take over the system.
The latency-nice concept is a different approach that tries to address those problems; it applies to the completely fair scheduler used for most processes, so no realtime priorities are needed. It adds a second nice value which, mirroring the existing nice value, is a number between -20 and 19. The lower the number, the higher the priority, so the highest-priority latency-nice value is -20. As with traditional nice values, any process can increase its latency-nice setting, but lowering it requires the CAP_SYS_NICE capability.
The traditional nice value works by regulating how much CPU time a process may consume relative to others on the system; processes with a lower nice value get more CPU time. Changing the latency-nice value, instead, does not change the amount of CPU time a process may consume. It does, however, make a difference in when that time will be made available. Processes with lower latency-nice values are deemed to be more latency-sensitive, and thus should not have to wait as long before being able to use the CPU time that is available to them.
With that model, the implementation of latency nice is relatively straightforward. Whenever a blocked process wakes, the scheduler must decide whether to run it immediately or to put it into a run queue and make it wait for a CPU. A number of factors go into that decision now; the latency-nice mechanism adds another. If the new process has a higher latency-nice priority than the process that is running on a CPU, and that new process has available CPU time in its current slice, then the new process can preempt the running process. The new process does not get any more CPU time than before, but it has the right to obtain the CPU more quickly when it has time available.
Similarly, a process with a higher latency-nice value (and thus, a lower priority) will not preempt other running processes. It will thus tend to get its entire time allotment toward the end of the slice, once the higher-priority processes have used their time. This process, too, will get all of the time that it is entitled to, but it will not block others and will, because it does not preempt others, cause fewer context switches in general.
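As an illustration only (this is not kernel code), the wakeup decision described above can be modeled in a few lines of Python; the names and the simplified preemption rule are assumptions made for the sketch:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency_nice: int   # -20 (most latency-sensitive) .. 19
    slice_left_ns: int  # CPU time remaining in the current slice

def wakeup_preempts(waking: Task, running: Task) -> bool:
    """Toy version of the wakeup check: the waking task preempts only if
    it is more latency-sensitive (lower latency_nice) AND still has
    unused time in its current slice."""
    return (waking.latency_nice < running.latency_nice
            and waking.slice_left_ns > 0)

audio = Task("audio", latency_nice=-20, slice_left_ns=1_000_000)
batch = Task("batch", latency_nice=19, slice_left_ns=5_000_000)

print(wakeup_preempts(audio, batch))  # True: audio runs right away
print(wakeup_preempts(batch, audio))  # False: batch waits its turn
```

Note how neither outcome changes the total CPU time either task receives; only the moment at which that time is delivered differs.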
Traditional nice values are set with the nice() system call. Latency nice, instead, is controlled with sched_setattr(). A new field (latency_nice) has been added to the sched_attr structure passed to that system call, and the SCHED_FLAG_LATENCY_NICE flag is provided to indicate that a new latency-nice value is being requested. Latency nice can also be managed using the scheduler control-group controller; a new knob (latency) has been provided for that purpose.
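From user space, the request might look like the following ctypes sketch. This is a sketch only: the latency_nice field and the SCHED_FLAG_LATENCY_NICE value are assumptions drawn from the patch series rather than a stable kernel ABI, and the actual sched_setattr() call is shown in a comment rather than executed, since unpatched kernels reject the flag:

```python
import ctypes

# Assumed flag value, following the pattern of the existing
# SCHED_FLAG_* bits; not part of any released kernel ABI.
SCHED_FLAG_LATENCY_NICE = 0x80

class SchedAttr(ctypes.Structure):
    # Layout follows struct sched_attr as extended by the patch series.
    _fields_ = [
        ("size", ctypes.c_uint32),
        ("sched_policy", ctypes.c_uint32),
        ("sched_flags", ctypes.c_uint64),
        ("sched_nice", ctypes.c_int32),
        ("sched_priority", ctypes.c_uint32),
        ("sched_runtime", ctypes.c_uint64),
        ("sched_deadline", ctypes.c_uint64),
        ("sched_period", ctypes.c_uint64),
        ("sched_util_min", ctypes.c_uint32),
        ("sched_util_max", ctypes.c_uint32),
        ("latency_nice", ctypes.c_int32),
    ]

attr = SchedAttr()
attr.size = ctypes.sizeof(SchedAttr)
attr.sched_flags = SCHED_FLAG_LATENCY_NICE
attr.latency_nice = -20   # most latency-sensitive

# On a patched kernel one would then hand this to the sched_setattr()
# system call via libc's syscall(); omitted here.
print(attr.latency_nice)
```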
This patch in the series includes some benchmark results showing how latency nice works. Running the hackbench benchmark with a high latency-nice value yields better performance due to the lower number of preemptions that take place. Throwing in a cyclictest run, at a low latency-nice value, demonstrates greatly reduced latency results for that test. Overall, it would seem that the patch set works as intended.
Previously, this work had been developed by Parth Shah; the fifth revision of the patch set was posted in February 2020. The work had acquired some Reviewed-by tags by that point, but it stalled thereafter. Interestingly, it had gotten as far as adding the infrastructure to manage the latency-nice value, but had not actually implemented any new semantics in the scheduler. At that time, there were a few ideas circulating on how the system might respond to the latency-nice settings and a discussion on latency nice was held at the OSPM 2020 gathering, but seemingly no consensus emerged on the right approach.
Two years later, Guittot has dusted this work off and added the wakeup implementation described above. As of this writing, there have been few comments on this work. Improving response times for important processes has been on many developers' wishlists for a long time, though. If further testing shows that the latency-nice mechanism represents progress in that direction, then this new push may well be the one that gets this work into the mainline kernel.
Driver regression testing with roadtest
The kernel community has a number of excuses for the relative paucity of regression-test coverage in the project, some of which hold more water than others. One of the more convincing reasons is that a great deal of kernel code is hardware-specific, and nobody can ever hope to put together a testing system with even a small fraction of all the hardware that the kernel supports. A new driver-testing framework called roadtest, posted by Vincent Whitchurch, may make that excuse harder to sustain, though, at least for certain kinds of hardware.

One of the problems with hardware is its sheer variety. Consider a device as conceptually simple as a GPIO port which, at its core, drives a single electrical line to either a logical true or false value. GPIO drivers should be simple things, and many of them are, but vendors like to add their own flourishes with each new release. As a result, there are well over 150 GPIO drivers in the kernel source, many of which can drive more than one variant of a device. There is no way to build a system with all of those devices in it; most of them are part of a larger peripheral or system-on-chip, and many of them have not been commercially available for years.
Of course, each of those drivers was, at one point, tested on the hardware it drives. They would normally be expected to continue to work. But the kernel is constantly changing, and changes often affect drivers as well. Developers making those changes do their best to avoid breaking anything, but they have no way to test changes that affect most drivers; even subsystem maintainers will normally only have a subset of the devices available for testing. So there is always a possibility that regressions will slip in and go unnoticed until somebody's device stops working.
Roadtest aims to circumvent this problem by eliminating the need to actually have the hardware present to test whether a driver still works. This is done by pairing driver tests with mock devices that can run anywhere; when a developer makes a set of regression tests for a specific driver, that work includes the mocked version of the target device(s) as well. The tests are then run under a specially built User-Mode Linux kernel, with the mocked hardware filling in for the real thing.
Forcing a test author to also implement an emulated version of the device under test sounds like a high bar to overcome. The good news is that the mocked devices need not encapsulate the full complexity of the real thing; they simply need to respond well enough to verify that the driver is behaving in the expected way. Emulating the device's programming interface (without actually doing the things a real device would do) may well be sufficient.
Consider, for example, this test from the patch set, which verifies the driver for the opt3001 light sensor. Both the tests and the mocked devices are written in Python; the core part of the implementation for the mocked opt3001 device looks like this:
    class OPT3001(SMBusModel):
        def __init__(self, **kwargs: Any) -> None:
            super().__init__(regbytes=2, byteorder="big", **kwargs)
            # Reset values from datasheet
            self.regs = {
                REG_RESULT: 0x0000,
                REG_CONFIGURATION: 0xC810,
                REG_LOW_LIMIT: 0xC000,
                REG_HIGH_LIMIT: 0xBFFF,
                REG_MANUFACTURER_ID: 0x5449,
                REG_DEVICE_ID: 0x3001,
            }

        def reg_read(self, addr: int) -> int:
            val = self.regs[addr]
            if addr == REG_CONFIGURATION:
                # Always indicate that the conversion is ready.  This is good
                # enough for our current purposes.
                val |= REG_CONFIGURATION_CRF
            return val

        def reg_write(self, addr: int, val: int) -> None:
            assert addr in self.regs
            self.regs[addr] = val
The opt3001 is an SMBus device, programmable via writes to (and reads from) a set of registers. Using the SMBus emulation provided with roadtest, this mock device simply implements a handful of registers. It is hard to imagine a simpler implementation; the read side doesn't even bother to check whether a requested register number is valid, presumably on the assumption that the crash resulting from a bad read request would say "test failure" with adequate volume.
The roadtest framework will take the mock device implementation and connect it to the driver (in the User-mode Linux instance) as if it were a real device. The test itself runs as a user-space process in that instance; it tweaks some of those registers to simulate the arrival of data, then reads that data using the IIO API:
    def test_illuminance(self) -> None:
        data = [
            # Some values from datasheet, and 0
            (0b_0000_0000_0000_0000, 0),
            (0b_0000_0000_0000_0001, 0.01),
            (0b_0011_0100_0101_0110, 88.80),
            (0b_0111_1000_1001_1010, 2818.56),
        ]
        with self.driver.bind(self.dts["addr"]) as dev:
            luxfile = dev.path / "iio:device0/in_illuminance_input"
            for regval, lux in data:
                self.hw.reg_write(REG_RESULT, regval)
                self.assertEqual(read_float(luxfile), lux)
The register writes (the self.hw.reg_write() call above) go straight to the mock device. The reads, instead, are directed to the driver, which will interact with the mock device to obtain the requested data. If the driver is working properly, it will read the simulated data from the mock device and return the results that the test is expecting.
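The register values in the test's table are not arbitrary; they follow the opt3001 datasheet's encoding, in which the top four bits of the result register hold a binary exponent and the bottom twelve a mantissa, with lux = 0.01 × 2^E × M. A quick standalone sketch (independent of roadtest; the function name is ours) confirms that the expected outputs in the test match that formula:

```python
def opt3001_lux(regval: int) -> float:
    """Decode an opt3001 result register: 4-bit exponent, 12-bit mantissa."""
    exponent = (regval >> 12) & 0xF
    mantissa = regval & 0xFFF
    return round(0.01 * (1 << exponent) * mantissa, 2)

# The same register values used in test_illuminance() above
print(opt3001_lux(0b_0000_0000_0000_0000))  # 0.0
print(opt3001_lux(0b_0000_0000_0000_0001))  # 0.01
print(opt3001_lux(0b_0011_0100_0101_0110))  # 88.8
print(opt3001_lux(0b_0111_1000_1001_1010))  # 2818.56
```

The real conversion, of course, happens inside the kernel driver; the point of the test is precisely that the driver's output agrees with the datasheet.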
This is a simple test; more complex tests could verify that the driver is setting up the hardware correctly, dealing with error conditions, and so on. Even so, there would appear to be limits to a mechanism like this; it will be difficult to use it to verify that, say, a Video4Linux driver is correctly managing the buffer queue when user-mapped buffers are in use with a planar YUV color scheme. But for simpler devices, of which there are many, a system like roadtest may provide a level of assurance that kernel developers currently do not have.
A lot more information on roadtest can be found in this documentation patch, which includes a tutorial on adding a test for a new device. The patch set as a whole contains tests for a few device types; presumably that list would grow considerably if this framework were to be merged into the mainline.
There have not been a lot of comments on the system so far, so it is hard to be sure about what roadtest's prospects for merging are. Brendan Higgins was clear enough on his opinion of roadtest, though: "I love the framework; this looks very easy to use". Testing frameworks like roadtest should not bother anybody who does not choose to use them and, if they are made comprehensive enough, they can significantly increase the chances of catching regressions before they get into a released kernel. So it is hard to see a reason why roadtest wouldn't eventually become part of the mainline kernel — unless, of course, kernel developers would really rather not lose an excuse justifying the lack of regression testing for drivers.
A look at some 5.17 development statistics
At the conclusion of the 5.17 development cycle, 13,038 non-merge changesets had found their way into the mainline repository. That is a lower level of activity than was seen for 5.16 (14,190 changesets) but well above 5.15 (12,337). In other words, this was a fairly typical kernel release. That is true in terms of where the work that made up the release came from as well.

The changes in 5.17 were contributed by 1,900 developers, down from the 1,988 seen in 5.16. Of those developers, 268 made their first kernel contributions in this cycle. The most active developers this time around were:
Most active 5.17 developers
By changesets
    Christoph Hellwig           168    1.3%
    Eric Dumazet                150    1.2%
    Mauro Carvalho Chehab       142    1.1%
    Hans de Goede               139    1.1%
    Andy Shevchenko             132    1.0%
    Martin Kaiser               132    1.0%
    Christophe Jaillet          125    1.0%
    Ville Syrjälä               123    0.9%
    Thierry Reding              114    0.9%
    Sean Christopherson         109    0.8%
    Thomas Gleixner             105    0.8%
    Matthew Wilcox              101    0.8%
    Andrii Nakryiko              97    0.7%
    Nicholas Piggin              96    0.7%
    Michael Straube              92    0.7%
    David Howells                89    0.7%
    Lad Prabhakar                86    0.7%
    Dmitry Osipenko              82    0.6%
    Rob Herring                  76    0.6%
    Vladimir Oltean              74    0.6%
By changed lines
    David Howells                   26567    4.4%
    Thierry Reding                  16552    2.7%
    Christoph Hellwig               10734    1.8%
    Luiz Augusto von Dentz          10106    1.7%
    Mauro Carvalho Chehab           10010    1.7%
    Vinod Koul                       9363    1.5%
    Zong-Zhe Yang                    8135    1.3%
    Svyatoslav Ryhel                 7204    1.2%
    Horatiu Vultur                   6962    1.1%
    Hans de Goede                    6537    1.1%
    Chengchang Tang                  6255    1.0%
    AngeloGioacchino Del Regno       6198    1.0%
    Andrzej Pietrasiewicz            6035    1.0%
    Dmitry Osipenko                  6013    1.0%
    Amit Cohen                       5949    1.0%
    Daniel Bristot de Oliveira       5598    0.9%
    Jie Wang                         5553    0.9%
    Ville Syrjälä                    5451    0.9%
    Jacob Keller                     4943    0.8%
    Emmanuel Grumbach                4615    0.8%
Christoph Hellwig continues to do extensive refactoring, mostly in the block and filesystem layers; once again, this work has made him the top changeset contributor. Eric Dumazet, as always, has been busy making the networking stack work better; he also added the reference-count tracking infrastructure during this cycle. Mauro Carvalho Chehab does most of his work in the media subsystem, Hans de Goede works mostly in the graphics layer (including adding generic support for privacy screens this time around), and Andy Shevchenko contributed numerous cleanups throughout the driver subsystem.
David Howells topped out the "lines changed" column by rewriting and replacing the caching layer used by network filesystems. Thierry Reding contributed a lot of Tegra SoC hardware support, and Luiz Augusto von Dentz worked extensively on the Bluetooth host-controller interface code.
The most active testers and reviewers of patches this time around were:
Test and review credits in 5.17
Tested-by
    Daniel Wheeler               105   10.1%
    Gurucharan G                  51    4.9%
    Nishanth Menon                41    3.9%
    Michael Kelley                31    3.0%
    Konrad Jankowski              31    3.0%
    Sebastian Andrzej Siewior     21    2.0%
    Juergen Gross                 19    1.8%
    Wolfram Sang                  17    1.6%
    Bean Huo                      16    1.5%
    Tony Brelinski                14    1.3%
    Valentin Schneider            11    1.1%
    Arnaldo Carvalho de Melo      10    1.0%
    Sachin Sant                   10    1.0%
Reviewed-by
    Rob Herring                  175    2.9%
    Christoph Hellwig            138    2.3%
    Andy Shevchenko              104    1.7%
    David Sterba                  96    1.6%
    Jason Gunthorpe               91    1.5%
    Pierre-Louis Bossart          84    1.4%
    Jeff Layton                   83    1.4%
    Kai Vehmanen                  68    1.1%
    Greg Kroah-Hartman            66    1.1%
    Krzysztof Kozlowski           65    1.1%
    Ranjani Sridharan             65    1.1%
    Darrick J. Wong               62    1.0%
    Ville Syrjälä                 59    1.0%
Many of these names have appeared in these tables for a while now; perhaps the biggest change is the appearance of Andy Shevchenko, whose Reviewed-by tag appears on patches throughout the driver subsystem.
Work on 5.17 was supported by 245 employers that we were able to identify; that is, again, a typical number for recent kernels. The most active employers were:
Most active 5.17 employers
By changesets
    Intel                       1510   11.6%
    (Unknown)                    975    7.5%
    Red Hat                      894    6.9%
    ?                            878    6.7%
    (None)                       572    4.4%
    AMD                          490    3.8%
    Huawei Technologies          481    3.7%
    Linaro                       452    3.5%
    NVIDIA                       438    3.4%
    SUSE                         425    3.3%
    ?                            388    3.0%
    (Consultant)                 381    2.9%
    IBM                          316    2.4%
    Oracle                       265    2.0%
    Renesas Electronics          245    1.9%
    Arm                          217    1.7%
    Alibaba                      205    1.6%
    NXP Semiconductors           186    1.4%
    Qualcomm                     185    1.4%
    Collabora                    150    1.2%
By lines changed
    Intel                      76222   12.6%
    Red Hat                    56565    9.3%
    (Unknown)                  39769    6.6%
    NVIDIA                     32721    5.4%
    Huawei Technologies        30039    5.0%
    ?                          24971    4.1%
    (None)                     21917    3.6%
    AMD                        21133    3.5%
    Linaro                     20866    3.4%
    Qualcomm                   20116    3.3%
    (Consultant)               18397    3.0%
    SUSE                       16841    2.8%
    ?                          14405    2.4%
    Collabora                  13845    2.3%
    Realtek                    11255    1.9%
    Microchip Technology        9613    1.6%
    IBM                         8974    1.5%
    NXP Semiconductors          8039    1.3%
    SoMainline                  7789    1.3%
    Renesas Electronics         7767    1.3%
Once again, there are few surprises here.
Old bugs?
While a development series like 5.17 brings a long list of new features, it also includes fixes for older bugs. There are various ways of calculating just how old those bugs are, but one metric has the advantage of being relatively easy to calculate: how many patches in 5.17 have been backported to the stable updates for previous kernels? The 4.19 kernel was released in October 2018, for example, so any patches backported to the 4.19 stable updates can be seen as being fixes for problems that are at least that old.
It's a fairly straightforward task to look at the mainline commit ID for each commit in a stable series and see if it is a 5.17 commit or not. Indeed, this can be done for older kernels as well; the results look like this:
Stable update    From 5.17         From 5.16         From 5.15         From 5.14
                 Count     Pct     Count     Pct     Count     Pct     Count     Pct
5.16.14          2,323    99.3%
5.15.28          1,993    45.0%    2,361    53.8%
5.10.105         1,241    10.2%    1,342    10.9%    1,374    11.1%    1,864    15.1%
5.4.184            739     4.2%      869     4.9%      860     4.9%    1,120     6.4%
4.19.234           528     2.4%      599     2.7%      564     2.6%      760     3.5%
4.14.271           426     1.9%      477     2.1%      439     1.9%      565     2.1%
4.9.306            334     1.6%      368     1.7%      339     1.6%      434     2.0%
4.4.302            133     0.7%      289     1.5%      269     1.4%      329     1.7%
For each mainline/stable-update pair, the entries in the table show how many patches were backported to the stable series from that mainline release, and the percentage of all the patches in that stable series that came from that mainline release. Thus we see, for example, that essentially all of the patches backported to 5.16 came from 5.17 — an unsurprising observation. (It is not 100% because there are always a few patches that are not directly backported, or are just version tags).
Reading down the columns shows that, as time goes on, the number of bugs fixed in the older stable updates does decrease, as one would expect. But it definitely does not drop to zero; patches were still being backported from 5.17 to the 4.4 kernel (released over six years ago) right up until that kernel stopped receiving support. Reading across the columns suggests that there is nothing special about 5.17; every mainline release is fixing a steady stream of bugs that have been present for years.
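The per-commit mapping behind the table relies on the convention that each stable backport records its mainline commit ID in the changelog, in a "commit <hash> upstream" line. The extraction step can be sketched as follows (the changelog below is fabricated for illustration):

```python
import re
from typing import Optional

# Stable backports conventionally record the mainline commit in a
# "commit <hash> upstream" line in the changelog.
UPSTREAM_RE = re.compile(r"commit ([0-9a-f]{12,40}) upstream")

def upstream_id(changelog: str) -> Optional[str]:
    """Return the mainline commit ID recorded in a stable commit message."""
    m = UPSTREAM_RE.search(changelog)
    return m.group(1) if m else None

# A fabricated changelog in the usual stable-tree format
msg = """net: fix a hypothetical refcount leak

commit 0123456789abcdef0123456789abcdef01234567 upstream.

The refcount was leaked on an error path.
"""
print(upstream_id(msg))
# With the ID in hand, one can ask git which mainline release first
# contains it, e.g. with "git describe --contains --match 'v*'".
```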
Of course, there are any number of important caveats here. For example, backported patches could be fixing bugs in other backported patches, in which case the bugs are more recent than it would seem. As documented here in the past, the regression rate in stable kernels runs anywhere from about 3% to 12%, depending on how one counts. There is also the fact that not all backported patches are bug fixes; some add device IDs or improve performance, for example.
There are also fixes for hardware bugs. For example, about 30 of the just-backported 5.17 patches address the branch history injection Spectre vulnerability. It is not fair to chalk those up as fixes for bugs in the older kernels, but they are problems that needed to be fixed regardless.
These factors all suggest that the above numbers could be adjusted but are not fundamentally wrong.
Overall, 5.17 was another typical, relatively boring kernel development cycle. The kernel-creation machine continues to crank out releases on a predictable schedule. As of this writing, linux-next contains over 12,800 changesets waiting to be dumped into the mainline for 5.18, so it does not look like the process will stop anytime soon.
A postscript
Given the stability of the kernel's development process, these reports have become increasingly uninteresting over time; there is not a lot of news to be found here. Given that, your editor is, once again, questioning the value of producing them for every kernel release. Increasingly, they seem like a bunch of boilerplate with some side investigations tossed in to try to make them more interesting. Might it be more useful to discontinue this practice in favor of, say, a full-year report on the occasion of each long-term-stable release? Please feel free to let us know, via the comments or email to lwn@lwn.net, if you have an opinion on this matter.
Page editor: Jonathan Corbet