Not-a-GPU accelerator drivers cross the line
The GPU-driver rule is the result of a "line in the sand" drawn by direct-rendering (DRM) maintainer Dave Airlie in 2010. The kernel side of most GPU drivers is a simple conduit between user space and the device; it implements something similar to a network connection. The real complexity of these drivers is in the user-space component, which uses the kernel-provided channel to control the GPU via a (usually) proprietary protocol. The DRM maintainers have long taken the position that, without a working user-space implementation, they are unable to judge, maintain, or test the kernel portion of the driver. They have held firm for over a decade now, and feel that this policy is an important part of the progress that this subsystem has made over that time.
At its core, a GPU is an accelerator that is optimized to perform certain types of processing much more quickly than even the fastest CPU can. Graphics was the first domain in which these accelerators found widespread use, but it is certainly not the last. More recently, there has been a developing market in accelerators intended to perform machine-learning tasks; one of those, the Habana Gaudi, is supported by the Linux kernel.
The merging of the Gaudi driver has raised a number of questions about how non-GPU accelerators should be handled. This driver did not go through the DRM tree and was not held to that subsystem's rules; it went into the mainline kernel while lacking the accompanying user-space piece. That was later rectified (mostly — see below), but the DRM developers were unhappy about a process that, they felt, bypassed the rules they had spent years defending. Just over one year ago, the arrival of a couple of other accelerator drivers spurred a discussion on whether those drivers should be treated like GPUs or not; no clear conclusions resulted.
The Habana driver has been the source of a few similar discussions over the last few months, with bursts in late June and early July. The problem now is an expansion of that driver's capabilities that requires using the kernel's DMA-BUF and P2PDMA subsystems to move data between devices. These subsystems were developed to work with GPU drivers and are clearly seen by some DRM developers as being part of the kernel's GPU API; drivers using them should, by this reasoning, be subject to the GPU subsystem's merging rules. Or, as Airlie phrased it in his objection to merging the Gaudi changes:
NAK for adding dma-buf or p2p support to this driver in the upstream kernel. There needs to be a hard line between "I-can't-believe-its-not-a-drm-driver" drivers which bypass our userspace requirements, and I consider this the line. This driver was merged into misc on the grounds it wasn't really a drm/gpu driver and so didn't have to accept our userspace rules.
Adding dma-buf/p2p support to this driver is showing it really fits the gpu driver model and should be under the drivers/gpu rules since what are most GPUs except accelerators.
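The dma-buf mechanism at the center of this dispute exists to share device buffers without copying: one driver exports a buffer as a handle, another imports it and maps the same backing storage. The Python sketch below models only that ownership pattern; the class and method names are invented for illustration and are not the real kernel API.

```python
# Toy model of the dma-buf sharing pattern: an exporting driver wraps a
# buffer in a shareable, refcounted handle; an importing driver maps that
# handle instead of copying the data. Names are illustrative only.

class DmaBuf:
    """Stand-in for a dma-buf: shared storage plus a reference count."""
    def __init__(self, pages):
        self.pages = pages          # underlying storage, shared, never copied
        self.refcount = 1

    def get(self):
        self.refcount += 1
        return self

    def put(self):
        self.refcount -= 1

class ExportingDevice:
    def alloc(self, size):
        return DmaBuf(bytearray(size))

class ImportingDevice:
    def __init__(self):
        self.imported = []
    def import_buf(self, buf):
        self.imported.append(buf.get())   # take a reference on the buffer
        return buf.pages                  # "map" the same storage

exporter = ExportingDevice()
importer = ImportingDevice()

buf = exporter.alloc(4096)
mapping = importer.import_buf(buf)
mapping[0] = 0xAB                 # a write through one device's mapping...
assert buf.pages[0] == 0xAB       # ...is visible to the other: zero-copy
assert buf.refcount == 2
```

P2PDMA extends the same idea to transfers between two devices' memories directly over the PCIe bus, without bouncing through system RAM.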
The interesting twist here, as acknowledged by DRM developer Daniel Vetter, is that there is, indeed, a free user-space implementation of the Gaudi driver. What is still not available is the compiler used to generate the instruction streams that actually drive this device. Without the compiler, Vetter said, the available code is "still useless if you want to actually hack on the driver stack". He elaborated further:
Can I use the hw how it's intended to be used without it? If the answer is no, then essentially what you're doing with your upstream driver is getting all the benefits of an upstream driver, while upstream gets nothing. We can't use your stack, not as-is. Sure we can use the queue, but we can't actually submit anything interesting.
Over the course of the discussions, the DRM developers have tried to make it clear that they want a working, free implementation of the user-space side. It does not have to be the code that is shipped to customers, as long as it is sufficient to understand how the driver as a whole works. To some, though, the compiler requirement stretches things a bit far. Habana developer Oded Gabbay has described the DRM subsystem's requirements this way:
I do think the dri-devel merge criteria is very extreme, and effectively drives-out many AI accelerator companies that want to contribute to the kernel but can't/won't open their software IP and patents. I think the expectation from AI startups (who are 90% of the deep learning field) to cooperate outside of company boundaries is not realistic, especially on the user-side, where the real IP of the company resides.
Cooperating outside of company boundaries is, of course, at the core of the Linux kernel development process. The DRM subsystem is not alone in making such requirements; Vetter responded by pointing out, among other things, that the kernel community will not accept a new CPU architecture without a working, free compiler.
Over the years, there has been no shortage of problems with vendors that want their hardware to work with Linux while keeping their "intellectual property" to themselves. This barrier has been overcome many times, resulting in wider and better hardware support in the kernel we all use. Getting there has required at least two things: demand from customers for free drivers and a strong position in the development community against proprietary drivers. The demand side must develop on its own (and often does), but the kernel community has worked hard to maintain and communicate a unified position on driver code; consider, for example, the position statement published in 2008. As a result, there is a consensus in the community covering a number of areas relevant to proprietary drivers; one will have to work hard to find voices in favor of exporting symbols to benefit such drivers, for example.
This ongoing series of discussions makes it clear that the kernel community has not yet reached a consensus when it comes to the requirements for drivers for accelerator devices. That creates a situation where code that is subject to one set of rules if merged via the DRM subsystem can avoid those rules by taking another path into the kernel. That, of course, will make it hard for the rules to stand at all. Concern about this prospect extends beyond the DRM community; media developer Laurent Pinchart wrote:
I can't emphasize strongly enough how much effort it took to start getting vendors on board, and the situation is still fragile at best. If we now send a message that all of this can be bypassed by merging code that ignores all rules in drivers/misc/, it would be ten years of completely wasted work.
Avoiding that outcome will require getting kernel developers and (especially) subsystem maintainers to come to some sort of agreement — always a challenging task.
In the case of the Gaudi driver, Greg Kroah-Hartman replied that he had
pulled the controversial code into his tree. In response to the subsequent
objections, he dropped that work
and promised to "write more" once time allows. Dropping the
patches for now helps to calm the situation, but it has not resolved the
underlying disagreement. At some point, the kernel community will have to
reach some sort of conclusion regarding its rules for accelerator drivers.
Failing that, we are likely to see a steady stream of not-a-GPU drivers
finding their way into the kernel — and a lot of unhappiness in their
wake.
Index entries for this article
Kernel: Device drivers/Accelerators
Posted Aug 26, 2021 16:39 UTC (Thu)
by q_q_p_p (guest, #131113)
[Link] (31 responses)
Posted Aug 26, 2021 17:00 UTC (Thu)
by farnz (subscriber, #17727)
[Link] (29 responses)
The thing that makes these things GPU-like is that a GPU is a programmable accelerator that takes in memory buffers (including those provided by DMA-BUF and P2PDMA interfaces) and outputs new memory buffers with processing applied based on the contents of one of the input memory buffers (which contains a program for the accelerator to run). These "not-a-GPU" things are programmable accelerators that take in memory buffers (including those provided by DMA-BUF and P2PDMA interfaces) and output new memory buffers with processing applied based on the contents of one of the input memory buffers (which contains a program for the accelerator to run).
A floppy drive driver would not have the programmable accelerator bit.
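The definition above can be made concrete with a toy interpreter: the device is a function from (program buffer, data buffers) to an output buffer. The two-opcode instruction set here is invented for illustration; real accelerators run vendor-specific instruction streams produced by exactly the compiler whose absence the article discusses.

```python
# Minimal sketch of the accelerator model: the device consumes input
# buffers, one of which *is the program*, and produces an output buffer.
# The instruction set is invented for illustration.

def run_accelerator(program, in_buf):
    """Interpret a toy two-operand instruction stream over in_buf."""
    out = list(in_buf)
    for op, arg in program:
        if op == "add":
            out = [x + arg for x in out]
        elif op == "mul":
            out = [x * arg for x in out]
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return out

# Without the compiler that produces `program`, the kernel driver is a
# channel whose traffic nobody outside the vendor can generate or read.
result = run_accelerator([("add", 1), ("mul", 2)], [1, 2, 3])
assert result == [4, 6, 8]
```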
Posted Aug 26, 2021 17:45 UTC (Thu)
by q_q_p_p (guest, #131113)
[Link] (28 responses)
Posted Aug 26, 2021 18:29 UTC (Thu)
by ab (subscriber, #788)
[Link]
Posted Aug 26, 2021 19:30 UTC (Thu)
by airlied (subscriber, #9104)
[Link] (9 responses)
So for you that might be something you hold true to, but in the real world, DRM (without KMS) is a subsystem that exposes accelerators to userspace. It just happens to be 3D accelerators.
We don't call these devices GPUs, we call them accelerators. Does that make it easier to understand? And yes, DSPs and FPGAs should be in the same boat; we have a bunch of pointless upstream FPGA drivers that are pretty much unmaintainable because they are tied to closed userspaces. Xilinx even submitted a drm driver in the past.
Posted Aug 26, 2021 20:03 UTC (Thu)
by brigdh (subscriber, #137272)
[Link] (8 responses)
Posted Aug 26, 2021 20:49 UTC (Thu)
by nwnk (guest, #52271)
[Link] (6 responses)
Posted Aug 26, 2021 21:14 UTC (Thu)
by brigdh (subscriber, #137272)
[Link] (5 responses)
Posted Aug 26, 2021 21:22 UTC (Thu)
by blackwood (guest, #44174)
[Link]
Posted Aug 27, 2021 20:39 UTC (Fri)
by linusw (subscriber, #40300)
[Link] (3 responses)
Posted Aug 27, 2021 20:53 UTC (Fri)
by blackwood (guest, #44174)
[Link] (2 responses)
That's all important to squeeze actual good real world performance out of your hw, but not something we care about for a driver stack. The actual compiler backend is absent for real hw. They have llvm (for cpus), cuda (we know that one is blobby) and bring-your-own-codegen (definitely going to be blobby too).
With these fairly pure math accelerators with a lot less fixed function than 3d the only somewhat interesting thing in the hw is the actual compute block and corresponding compiler. And that one is always the sticking point, all the pieces around are generally fairly easy to get after a bit of talking.
Posted Aug 28, 2021 15:07 UTC (Sat)
by linusw (subscriber, #40300)
[Link] (1 responses)
Posted Aug 30, 2021 18:26 UTC (Mon)
by blackwood (guest, #44174)
[Link]
On a quick look this seems fairly interesting. I think there's a few other NN stacks for inference engines at least which are similarly open. Nothing yet that achieved actual cross hw-vendor status, but good thing to watch for sure.
This one also seems to just redirect to the Arm Ethos-N stack, and not actually merge the backend fully into the upstream project. Only when you do that do the benefits of a real upstream project kick in.
Posted Aug 26, 2021 23:25 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Aug 27, 2021 9:07 UTC (Fri)
by farnz (subscriber, #17727)
[Link] (16 responses)
KMS is an interface for display controllers, not GPUs; GPUs are simply compute accelerators attached to the main CPU, which take a program and some input buffers, and produce output, and that's what DRM is intended to provide a good interface for. It happens, for historical reasons, that the first major use of programmable compute accelerators in Linux was accelerators attached to display controllers, but that's simply history.
This is why other compute accelerators belong with DRM; after more than a decade of getting the interfaces for compute acceleration right, and fighting the fight for maintainable kernel drivers, the DRM team (Dave Airlie et al) know what they're doing. One of the (many) things they've learnt over the years is that the driver maintainers need to be able to understand the program bytes for security reasons - if you don't know what the program bytes do, you can't determine what memory accesses the hardware is intended to attempt, and which ones are security bugs, nor is it even possible to filter out programs that are going to ask the hardware to make memory accesses the kernel does not want it to.
There's even an option for teams that think that their software secret sauce is what makes the difference; do what AMD has done, and have a complete open source stack atop the kernel driver (the Mesa stack in the case of AMD OpenCL compute drivers), and have a separate proprietary stack (the AMDGPU-PRO stack in the case of AMD OpenCL compute drivers) that includes the secret sauce.
Also, when we're trying to determine what devices should utilize DRM, "utilizes DRM" is not a good dividing line because it's circular :-)
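The security point above, that a driver maintainer must be able to understand the program bytes in order to know which memory accesses the hardware will attempt, can be sketched as a submission-time validator. The opcode encoding below is invented for illustration; real drivers do this against the device's actual instruction format.

```python
# Sketch of command validation: if the kernel can decode the program
# bytes, it can reject programs whose memory accesses fall outside the
# buffers userspace was actually granted. Encoding is illustrative only.

def validate_program(program, allowed_ranges):
    """Reject any load/store whose address is outside an allowed range."""
    for op, addr in program:
        if op in ("load", "store"):
            if not any(lo <= addr < hi for lo, hi in allowed_ranges):
                return False     # would touch memory the kernel didn't grant
    return True

allowed = [(0x1000, 0x2000)]
assert validate_program([("load", 0x1000), ("store", 0x1FFF)], allowed)
assert not validate_program([("store", 0x3000)], allowed)
# With an opaque instruction set, no such check is possible: every
# submission must be trusted blindly.
```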
Posted Aug 27, 2021 10:35 UTC (Fri)
by daniels (subscriber, #16193)
[Link] (15 responses)
Not just security reasons, but also because dma-buf and thus dma-fence have deep ties into memory management.
The analogy to things like floppy drives is a total red herring. For storage and network devices, you have well-known interfaces on both sides: industry standards like SCSI/NVMe/RDMA for hardware, and standardised kernel/userspace API through POSIX or Linux-specific development. Accelerator development is much more wild-west in that regard, so the kernel is being asked to be a dumb shovel between huge unknowns on both sides. That was much easier to handle in the past, because the driver had to bash registers and IRQs directly, so you could at least derive from that how the hardware works and thus what the implications should be. But now kernel drivers for modern accelerators just map command/event queues into userspace, so it's a total black box.
That's why there's a line in the sand: because the deep integration into all kinds of complex subsystems means that we're tying the behaviour of huge amounts of kernel behaviour into totally unknown vendor-specific black boxes. Having an open userspace - something meaningful, not just submitting an empty 'ping' command into the ring - is the only thing that lets us reason about what this actually means.
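The "dumb shovel" model described above can be sketched as a ring buffer that userspace writes into directly, with the kernel's role reduced to ringing a doorbell. This is a toy illustration, not any particular driver's design.

```python
# Toy command ring: userspace writes opaque vendor commands directly into
# a mapped ring; the kernel only sees the doorbell, never the commands.

class CommandRing:
    def __init__(self, size):
        self.slots = [None] * size
        self.head = 0      # consumed by the device
        self.tail = 0      # written by userspace

    def submit(self, opaque_cmd):
        """Userspace writes directly into the mapped ring."""
        self.slots[self.tail % len(self.slots)] = opaque_cmd
        self.tail += 1

    def doorbell(self):
        """The only step the kernel mediates: tell the device to consume."""
        consumed = []
        while self.head < self.tail:
            consumed.append(self.slots[self.head % len(self.slots)])
            self.head += 1
        return consumed

ring = CommandRing(8)
ring.submit(b"\x13\x37")   # opaque vendor bytes; the kernel never parses them
ring.submit(b"\xca\xfe")
assert len(ring.doorbell()) == 2
```

Nothing in this path gives the kernel, or a reviewer, any purchase on what the bytes mean, which is the black-box problem the comment describes.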
Posted Aug 27, 2021 17:50 UTC (Fri)
by NYKevin (subscriber, #129325)
[Link] (13 responses)
What is stopping these manufacturers from releasing product A, with FOSS drivers/userspace, getting the drivers upstreamed, and then releasing product B, with proprietary userspace, but which just so happens to have the same firmware/hardware-level interface as product A, so that you can use the same drivers at the kernel level (but if you try to use product A's userspace with product B's hardware, it doesn't work)?
Posted Aug 27, 2021 19:59 UTC (Fri)
by dezgeg (subscriber, #92243)
[Link]
Posted Aug 27, 2021 20:24 UTC (Fri)
by blackwood (guest, #44174)
[Link]
Posted Aug 27, 2021 21:43 UTC (Fri)
by Cyberax (✭ supporter ✭, #52523)
[Link] (10 responses)
That's the situation now with the Radeon drivers. There's a reasonable open source stack, and then there's AMDGPU Pro closed-source driver that uses the same kernel interface as the open source driver, but provides more optimizations and functionality.
Posted Aug 27, 2021 23:21 UTC (Fri)
by daniels (subscriber, #16193)
[Link] (9 responses)
The question though was: what if someone makes hardware that’s similar enough to existing open hardware + userspace stacks to fool the kernel, but is only usable with new proprietary userspace, presumably with new extensions as well? And honestly, it’s a good question, and would be a great problem to have. But like blackwood said, no-one has yet done it and thanks to the designs open stacks enforce, it’s not on the cards either.
Posted Aug 28, 2021 16:08 UTC (Sat)
by alyssa (guest, #130775)
[Link]
Posted Aug 29, 2021 18:37 UTC (Sun)
by marcH (subscriber, #57642)
[Link] (7 responses)
What would be the kernel maintenance problem in that case?
Posted Aug 30, 2021 15:48 UTC (Mon)
by NYKevin (subscriber, #129325)
[Link] (6 responses)
Posted Aug 30, 2021 19:41 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Posted Aug 31, 2021 4:22 UTC (Tue)
by NYKevin (subscriber, #129325)
[Link] (1 responses)
Posted Aug 31, 2021 4:30 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Aug 31, 2021 10:16 UTC (Tue)
by farnz (subscriber, #17727)
[Link] (2 responses)
If there's a regression with the proprietary userspace only, then that's what's at fault, automatically - after all, the FOSS userspace works just fine.
Posted Aug 31, 2021 14:41 UTC (Tue)
by mathstuf (subscriber, #69389)
[Link] (1 responses)
The *right* thing to do is to ask for a way to reproduce the behavior with the FOSS userspace stack as part of the regression's test suite :) . Because how else are kernel developers going to know that they broke it?
Posted Aug 31, 2021 15:27 UTC (Tue)
by farnz (subscriber, #17727)
[Link]
In this case, because the driver is split into two parts - the userspace component, and the kernelspace component. Without source for both, how do you know that the kernelspace component is causing the breakage by doing the wrong thing? It might, for example, be the userspace component expecting that the kernel doesn't enable IOMMU facilities for this device, and being caught out by the change.
Posted Sep 7, 2021 2:17 UTC (Tue)
by florianfainelli (subscriber, #61952)
[Link]
So maybe the approach should be to push back, wait for these 100 AI startups or so to get out of business/be acquired/merged and start working on what resembles a standard. The same is true with any silicon these days, the differentiation is in support, pricing, intrinsic hardware design and software should just be a commodity, because it will be down the road.
Posted Aug 27, 2021 17:01 UTC (Fri)
by dvdeug (guest, #10998)
[Link]
Posted Aug 26, 2021 17:27 UTC (Thu)
by flussence (guest, #85566)
[Link] (12 responses)
Posted Aug 26, 2021 19:01 UTC (Thu)
by airlied (subscriber, #9104)
[Link] (11 responses)
Posted Aug 27, 2021 9:08 UTC (Fri)
by shiftee (subscriber, #110711)
[Link]
Posted Aug 27, 2021 15:15 UTC (Fri)
by flussence (guest, #85566)
[Link] (9 responses)
There's plenty of weird drivers already in the kernel but all of them have the major distinction that a normal person could realistically obtain the hardware and use it under Linux without breaking any laws (cf. the PS4 R600 support patches - rejecting those was the right thing to do even though it annoyed a few people). This hardware on the other hand sounds effectively time-bomb drm'ed — it'll be obsolete before it shows up on the grey market, and it's non-functional without the secret sauce component. Aside from the technical reasons: there's very few honest uses for these responsibility laundering accelerators to begin with.
A quick skim over the EULA on their site suggests that the company keeps tight legal control over who has access to the userspace part, so once they lose interest in it it becomes a paperweight. And it's an Intel subsidiary, throwing things over the wall and discontinuing support on a timeframe of months is all but guaranteed.
Posted Aug 29, 2021 18:39 UTC (Sun)
by marcH (subscriber, #57642)
[Link] (8 responses)
Unlike... smartphones?
Posted Sep 3, 2021 6:06 UTC (Fri)
by flussence (guest, #85566)
[Link] (7 responses)
These ML chips will never get that kind of attention, because they're neither interesting nor accessible to hackers. If the company's lucky they may become a curiosity in a hardware museum someday, but they probably won't be plugged in.
Posted Sep 3, 2021 19:13 UTC (Fri)
by marcH (subscriber, #57642)
[Link] (5 responses)
... to other Intel products either.
Posted Sep 21, 2021 4:09 UTC (Tue)
by flussence (guest, #85566)
[Link] (4 responses)
And that's just the problems concerning Linux. The entire USB3/4/C/TB mess they've foisted on the world is a miserable experience no matter which OS you use, to the point where I've actively avoided buying any devices that need it.
Posted Sep 21, 2021 5:47 UTC (Tue)
by marcH (subscriber, #57642)
[Link] (3 responses)
Posted Sep 21, 2021 17:10 UTC (Tue)
by flussence (guest, #85566)
[Link] (2 responses)
It'd be irresponsible to go there right now though, what with the tulip mania apocalypse going on.
Posted Sep 22, 2021 6:37 UTC (Wed)
by zdzichu (subscriber, #17118)
[Link] (1 responses)
Posted Sep 22, 2021 8:39 UTC (Wed)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Posted Sep 7, 2021 10:42 UTC (Tue)
by immibis (subscriber, #105511)
[Link]
I have been looking at Tenstorrent, which promises to sell you a PCIe card for $750 (but it's not ready yet). Some other companies will sell you a powerful server for $X,XXX (or maybe $1X,XXX) which seems to still be in the range of (perhaps small groups of) really dedicated ML hackers. And, yes, some will only sell thousands of servers to huge cloud enterprises, but we can't assume that as a rule.
---
Not to mention that in 5-10 years, all these now-outdated accelerators made by now-defunct startups will be discarded and will have to go somewhere. At that point you may be able to pick one up on eBay for $100-$1000.
I found a set of "obsolete" Infiniband hardware that way - where "obsolete" = "only does 40Gbps when the modern hardware does 400Gbps". So it does happen. Now I can hack on Infiniband if I get the time and inclination. Because there are open source drivers for these NICs.
Posted Aug 26, 2021 19:30 UTC (Thu)
by bjacob (guest, #58566)
[Link] (5 responses)
Maybe what Linux needs is some middle-ground between fully supported in-tree drivers and "NAK"...
Posted Aug 26, 2021 19:43 UTC (Thu)
by olof (subscriber, #11729)
[Link]
As a result, there's better things to spend my time on than dealing with difficult or toxic people, and I've stopped paying attention.
Posted Aug 26, 2021 19:51 UTC (Thu)
by airlied (subscriber, #9104)
[Link] (3 responses)
Piling crap into the kernel isn't maintainable, especially crap with its own bespoke ioctl interfaces and user APIs. A bad network driver you can remove later, but a bad accelerator driver with a uAPI that userspace has started using is nightmare fuel. There are still a lot of kernel maintainers who think all drivers are like network or scsi and assume userspace isn't important to them. Really kernel maintainers need to take more responsibility for the messes they create by merging stuff.
There is no track record of meeting these companies half way working, some people believe that if we let them merge stuff it'll magically create a community with no effort or they will suddenly see the light.
My 10-15 years of experience in this is that companies will only do what they have to in order to satisfy customer/market pressures. The only reason most of these companies want upstream drivers is to get them into RHEL in the end, and I don't get why we'd want to turn upstream into a dumping ground for that.
Posted Aug 27, 2021 2:58 UTC (Fri)
by developer122 (guest, #152928)
[Link] (2 responses)
Posted Aug 27, 2021 6:57 UTC (Fri)
by pbonzini (subscriber, #60935)
[Link] (1 responses)
It doesn't, for exactly the same reasons that Dave mentioned. It is in Red Hat's interest to have (at least in the future) a common upstream subsystem instead of locking in Red Hat's customers to the SDKs of individual vendors.
Posted Aug 28, 2021 13:29 UTC (Sat)
by lacos (guest, #70616)
[Link]
That's how it should be, but at least VDO is an exception, to my understanding. (See e.g. the "kmod-kvdo" package from "rhel-7-workstation-rpms".)
https://www.redhat.com/en/blog/look-vdo-new-linux-compres...
> The code is available in the source RPMs, and upstream projects are getting established.
https://man7.org/linux/man-pages/man7/lvmvdo.7.html
> To use VDO with lvm(8), you must install the standard VDO user-space tools vdoformat(8) and the currently non-standard kernel VDO module "kvdo".
https://github.com/dm-vdo/kvdo#status
> there are a number of issues that must still be addressed to be ready for upstream
The file list of "linux-5.13.13.tar.xz" does not seem to contain relevant "uds" or "vdo" hits. Even on Fedora, the kernel module(s) seem only available from personal COPRs -- two different (competing?) ones, at that.
https://copr.fedorainfracloud.org/coprs/rhawalsh/dm-vdo/
I fully agree that RHEL *should* not include out-of-tree drivers, but at least VDO is a precedent to the contrary.
IIUC, VDO is now (C) Red Hat, and Red Hat actually open-sourced originally proprietary code, which is fantastic -- but for this discussion, the driver is still out-of-tree. (I abandoned evaluating VDO on my RHEL installs when I realized I wouldn't be able to mount the same volumes under Fedora Live DVDs; e.g. for system recovery.)
Posted Aug 26, 2021 21:54 UTC (Thu)
by marcH (subscriber, #57642)
[Link] (2 responses)
I think this should be the "line in the sand" (except "to hack" sounds vague and/or "too big" in this context).
I have the corresponding hardware and I want to make a dead simple one-line change in this driver. How easily can I run my one-line change before submitting? If I cannot run a simple one-line change in that driver then it is not really open-source. This should not require the manufacturer to expose advanced features or other intellectual property.
Discussing what is a GPU and what is not does not really matter.
Posted Aug 27, 2021 3:10 UTC (Fri)
by developer122 (guest, #152928)
[Link] (1 responses)
To maintain something you need to be able to exercise it. The open-source implementation doesn't need to be high-performance, or even very good. However, there needs to be enough there that a maintainer can come along and exercise the kernel driver and hardware in all of the ways that are important to what the device is intended to do. For GPUs, a maintainer can run a video game over Mesa (or run a pytorch ML model over ROCm).
The inevitable alternative is to have broken drivers in the tree because either they couldn't be tested to ensure continued functionality, or the provided precompiled command language or anemic open source userspace components didn't exercise it comprehensively enough to notice the break.*
Either customers _eventually_ discover the break years later when they are (forced) to move to a new kernel/version of RHEL, or there's a heated debate years later over whether users still exist for a decrepit driver and if it can finally be dropped.
*(or worse, there's not enough information available to fix it, meaning reversions to not-break-userspace)
Either way, reverts, complaining users, and calls for users of decrepit drivers are no fun for kernel maintainers to deal with.
Posted Aug 29, 2021 19:17 UTC (Sun)
by marcH (subscriber, #57642)
[Link]
That sounds more ambiguous to me than "run a one-line change".
> To maintain something you need to be able to exercise it.
Agreed, the devil is in defining "to exercise", especially in today's binary world of yes/no answers.
A number/metric can be attached to "running a one-line change" to make it completely unambiguous, for example: the ability to run a one-line change in 80% of the driver code (without any NDA).
This all sounds like a test coverage metric because it basically is, as you noted yourself:
> For GPUs, a maintainer can run [TEST!] a video game over Mesa (or run [TEST!] a pytorch ML model over ROCm). The inevitable alternative is to have broken drivers in the tree because either they couldn't be tested
I think a surprising number of kernel maintenance issues come down to a simple lack of consideration for test code. For instance, in this case, if there were a requirement to provide some minimal test code and/or instructions as part of the "line in the sand", then this entire discussion would probably not have happened and this article would not have been written.
Posted Aug 26, 2021 22:07 UTC (Thu)
by alyssa (guest, #130775)
[Link] (1 responses)
Posted Aug 27, 2021 4:04 UTC (Fri)
by marcH (subscriber, #57642)
[Link]
Spoilt with choice.
Posted Aug 27, 2021 8:33 UTC (Fri)
by pabs (subscriber, #43278)
[Link] (1 responses)
Posted Aug 28, 2021 2:17 UTC (Sat)
by josh (subscriber, #17465)
[Link]
Posted Aug 28, 2021 9:46 UTC (Sat)
by jezuch (subscriber, #52988)
[Link]
Posted Aug 28, 2021 14:23 UTC (Sat)
by moorray (guest, #54145)
[Link]
This is not a discussion between peers, he’s #2 in the project.
Posted Aug 29, 2021 7:55 UTC (Sun)
by tchernobog (guest, #73595)
[Link] (1 responses)
> I do think the dri-devel merge criteria is very extreme, and effectively drives-out many AI accelerator companies that want to contribute to the kernel but can't/won't open their software IP and patents.
Then the few remaining will have the advantage of working with Linux, which would make them succeed on the rather fundamental enterprise market. Frankly, I fail to see how this is not benefiting the community at large in the long run.
I have yet to see these startup companies contribute large pieces of the Linux kernel. Mostly it has been hacks and tweaks to work around GPL restrictions, which are more harmful than not. The bulk of the work (outside the driver itself, which is company-specific) is still being done by others.
Posted Aug 31, 2021 15:00 UTC (Tue)
by Wol (subscriber, #4433)
[Link]
If it's going in the kernel, then it *shouldn't* have got past the patent office.
We need to bang that drum loud and hard ...
Cheers,
Posted Aug 29, 2021 8:55 UTC (Sun)
by zdzichu (subscriber, #17118)
[Link] (2 responses)
Posted Aug 29, 2021 18:54 UTC (Sun)
by marcH (subscriber, #57642)
[Link] (1 responses)
> There is a tendency in these discussions to anthropomorphize large corporations. Which is dangerous, because it is grossly inaccurate to reality. A large corporation is difficult to steer from the top and impossible to steer from the bottom. ...
Search keywords: https://www.google.com/search?q=site%3Alwn.net+anthropomo...
> Besides, Habana Labs is not a startup anymore, it's a part of Intel.
Another oversimplification. I don't know about Habana specifically but in general: https://www.google.com/search?q=%22hands-off%22+merger
Posted Aug 30, 2021 14:07 UTC (Mon)
by joib (subscriber, #8541)
[Link]
Posted Aug 30, 2021 8:06 UTC (Mon)
by tamara_schmitz (guest, #141258)
[Link]
And for us, if we're over
Posted Sep 10, 2021 9:21 UTC (Fri)
by scientes (guest, #83068)
[Link]
Posted Sep 10, 2021 15:31 UTC (Fri)
by geert (subscriber, #98403)
[Link]
Thanks to LaF0rge and Thorsten Leemhuis for the reporting trail.
Essentially your "marketing" doesn't match your intent.
what does "drm" mean
https://discuss.tvm.apache.org/t/rfc-ethosn-arm-ethos-n-i...
Maybe change that to Device Resource Manager? Because it's kinda what it is these days.
Broadcom drivers is an area we need to compromise for the sake of regular users.
This seems like a bad deal where we provide kernel infrastructure and maintenance for very little return.
https://www.tomshardware.com/news/intel-iris-xe-dg1-budge...
https://copr.fedorainfracloud.org/coprs/tmm/VDO/
> all of the ways that are important to what the device is intended to do.
Wol
"Startup companies" argument
Besides, Habana Labs is not a startup anymore, it's a part of Intel.
Intel people should know better how Linux development works.
And I'm done holding back
So look out, clear the track
It's my turn!
I'm taking what's mine!
Every drop, every smidge
If I'm burning a bridge
Let it burn
But I'm crossing the line!
That's fine
I'm crossing the line.
https://lore.kernel.org/lkml/CAFCwf119s7iXk+qpwoVPnRtOGcx...