
A new policy is born...


Posted Oct 10, 2024 16:57 UTC (Thu) by jhoblitt (subscriber, #77733)
Parent article: On Rust in enterprise kernels

Does the policy of blocking changes to mainline that would be difficult to backport to a release from the previous decade only apply to features implemented in rust or is it a blanket rule?



A new policy is born...

Posted Oct 10, 2024 17:51 UTC (Thu) by jgg (subscriber, #55211) [Link] (3 responses)

There has always been a complex, ill-defined tension between backporters and the mainline. Keep in mind that most members of our community participate in backporting at some level.

In general you shouldn't be making coding changes to accommodate backporting. That is severely frowned upon, but it does occasionally happen regardless.

But there is a whole set of other grey areas where things get murky. The gcc toolchain has been kept back for a lot of different reasons, including backporting.

People tend to think carefully before renaming files or moving code around. I've seen patches pushed back on because of code motion harming backporting. For instance, if you send a patch to reorder functions to avoid forward declarations, there is a chance it will be refused as being too trivial and too harmful. It depends on the maintainer.

I would say the basic informal agreement we seem to have is that people have to do the best technical work upstream, but that with enough patches the backporting will succeed. Rust is a big, big step up in the backport requirement, and I think nobody with customers and timelines funding their projects wants to be forced to be the first one to pipe-clean this. Nobody knows what this looks like, or what is possible, and there is a real fear that the project would end up useless for the real world if backporting fails.

A new policy is born...

Posted Oct 10, 2024 18:30 UTC (Thu) by raven667 (subscriber, #5198) [Link]

This all seems reasonable: don't make hard blocking decisions today based on how things might be in a couple of years (not saying anything bad about R4L or Nova, but the future is always uncertain at some level, and plans can change). By the time Nova is ready to replace Nouveau, it can have vGPU support with all the necessary infrastructure, now that the requirement is known. By the time old hardware has aged out of use at the hosting companies offering vGPU, and old enterprise kernels have aged out of new hardware support, it should be possible to have vGPU on Nova for new systems with new enterprise kernels, and vGPU on Nouveau for old ones; everyone wins. There is probably a period where support for old enterprise kernels with vGPU on Nouveau overlaps with Nova/vGPU, which increases the support costs for the vGPU team, but by then Nouveau is probably in maintenance mode and not getting new features, hardware support, etc., so the support costs probably won't be unmanageable.

A new policy is born...

Posted Oct 12, 2024 17:26 UTC (Sat) by marcH (subscriber, #57642) [Link]

> For instance if you send a patch to reorder functions to avoid forward declarations then there is a chance that will be refused as being too trivial and too harmful. It depends on the maintainer.

Whether a patch that trivial and that harmful gets accepted really should not depend on the maintainer. Do some maintainers want to get rid of stable kernel branches?

A new policy is born...

Posted Oct 13, 2024 0:28 UTC (Sun) by marcH (subscriber, #57642) [Link]

So "upstream first" and "just run the latest" (for regressions) were not enough; now it's "vaporware first".

This is a major exaggeration to get the point across; apologies, but the idea stands.


Copyright © 2025, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds