This needs to be in the kernel so that you can have multiple software packages (one of which is the kernel) accessing the same hardware.
Back to the kernel
Posted Apr 27, 2009 17:28 UTC (Mon) by mikov (subscriber, #33179)
The kernel and X also never access the hardware at the same time. Designing the whole architecture so that the kernel can display an oops message when it crashes is absurd.
The thing about handling monitor hot-plug interrupts is also a red herring. For one, you hardly need extremely low latency for that, so polling once every 200ms in user mode would probably be sufficient. Even if you wanted to do it with an interrupt, there is no need for everything else to be in the kernel - handle the interrupt in the kernel and notify user mode.
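[For illustration: the user-mode polling loop proposed here might look like the following sketch. It assumes connector status is exposed to userspace in a readable form, for instance via the /sys/class/drm/<card>-<connector>/status files that KMS-era kernels provide; on a system without that layout the loop simply sees no connectors.]

```python
import glob
import time

def parse_status(raw):
    """Normalize a raw connector status string (e.g. "connected\n")."""
    return raw.strip().lower()

def read_statuses(pattern="/sys/class/drm/card*-*/status"):
    """Read the current status of every connector matching pattern."""
    statuses = {}
    for path in glob.glob(pattern):
        try:
            with open(path) as f:
                statuses[path] = parse_status(f.read())
        except OSError:
            pass  # connector disappeared between glob() and open()
    return statuses

def watch(interval=0.2):
    """Poll every 200 ms and report hot-plug changes."""
    last = read_statuses()
    while True:
        time.sleep(interval)
        current = read_statuses()
        for path, status in current.items():
            if last.get(path) != status:
                print(f"{path}: {last.get(path)} -> {status}")
        last = current

if __name__ == "__main__":
    watch()
```

Note that this is exactly the design the replies below object to: the process wakes five times a second whether or not anything changed.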
Low-latency interactions between DMA and interrupt handling do need to be in the kernel, but graphics drivers rarely have those.
So, I am aware of some of the arguments used to justify this, and I don't find them entirely convincing. There are clearly some advantages (especially if you have multiple 3D apps), but I am surprised that everybody is accepting this as if it is the only possible and clearly superior solution.
There is the unfortunate tendency of trying to put stuff in the kernel simply because the kernel is the only common part that can be relied on to be in every Linux system. Take mode setting. If you had a user-mode daemon doing it, it would work equally well. However, will every distribution take it up? Who will maintain it, and in what repository? So the problem is at least partly political/organizational, and that really worries me. Can there be another UDEV?
Posted Apr 27, 2009 17:33 UTC (Mon) by mikov (subscriber, #33179)
Posted Apr 27, 2009 17:57 UTC (Mon) by slougi (subscriber, #58033)
If you mean the memory management parts, as some have pointed out above there are reasons for that, like OpenCL for example.
> The kernel and X also never access the hardware at the same time. Designing the whole architecture so that the kernel can display an oops message when it crashes is absurd.
It's not just about displaying oops messages. You get other benefits, like no flickering at bootup when X starts. You finally get frame buffers in the kernel that do not conflict with X. You get safe, flicker-free switching between virtual terminals, and between multiple sessions when several users are logged in on the same machine.
> The thing about handling monitor hot-plug interrupts is also a red herring. For one, you hardly need extremely low latency for that, so polling once every 200ms in user mode would probably be sufficient. Even if you wanted to do it with an interrupt, there is no need for everything else to be in the kernel - handle the interrupt in the kernel and notify user mode.
Unfortunately, there are many chips that blank screens when you poll for outputs.
Posted Apr 27, 2009 23:59 UTC (Mon) by nix (subscriber, #2304)
Bring on kernel modesetting!
Posted Apr 27, 2009 18:02 UTC (Mon) by elanthis (guest, #6227)
The memory management makes a lot of sense, too -- nobody is asking for a userspace virtual-memory subsystem, and the graphics memory space is just as complex and in need of kernel management these days as system RAM. Nothing else is going into the kernel that hasn't already been there for many years.
"Designing the whole architecture so that the kernel can display an oops message when it crashes is absurd."
That is not the point of KMS at all, only a side benefit that some people think is neat.
The point of KMS is that the kernel can reset the graphics mode at any time if need be, and those times happen often enough to be important. Take X crashing, for example, which in the current architecture often requires a power cycle to get your display working again. Or take suspend/resume, where trying to coordinate with userspace processes is really hard, and which still, after all these years of work, fails pretty damn often for quite a few people.
"For one, you hardly need extremely low latency for that, so polling once every 200ms in user mode would probably be sufficient."
Except that that blows nuts for laptops (and low-power desktops) where we're trying to reduce and/or eliminate polling entirely so the CPU can spend most of its time in a low-power state. Hotplug is most attractive to laptop users, too. If my laptop is just sitting there showing a static screen (say I'm reading a page) I don't want it waking up the CPU at all if it can be helped. Battery life is possibly the single most important thing for many people with laptops... a laptop with no charge is just a big paperweight if you don't have a power outlet nearby, which is pretty often for travelers and even students.
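[For illustration: the interrupt-driven alternative mentioned above — handle the interrupt in the kernel, notify user mode — already has a delivery channel: the kernel broadcasts hotplug events over a netlink socket (the same one udev listens on), so a process can sleep until something actually happens instead of waking on a timer. A minimal sketch, assuming a Linux system where the process is permitted to bind to the uevent multicast group:]

```python
import os
import socket

NETLINK_KOBJECT_UEVENT = 15  # protocol number for kernel uevent broadcasts

def parse_uevent(data):
    """Split a raw uevent datagram into a dict of its KEY=VALUE fields."""
    fields = {}
    for part in data.split(b"\0"):
        if b"=" in part:
            key, _, value = part.partition(b"=")
            fields[key.decode()] = value.decode()
    return fields

def listen():
    """Block on the kernel's uevent socket; zero wakeups while idle."""
    sock = socket.socket(socket.AF_NETLINK, socket.SOCK_DGRAM,
                         NETLINK_KOBJECT_UEVENT)
    sock.bind((os.getpid(), 1))  # join multicast group 1 (kernel uevents)
    while True:
        event = parse_uevent(sock.recv(4096))
        if event.get("SUBSYSTEM") == "drm":
            print("display event:", event.get("ACTION"), event.get("DEVPATH"))

if __name__ == "__main__":
    listen()
```

The recv() call sleeps in the kernel until an event arrives, so the CPU stays in its low-power state while the screen is static.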
"Take mode settings. If you had a user mode daemon doing it, it would work equally well."
Except for suspend/resume where any kind of userspace synchronization is a whore.
If you want a microkernel approach, there are plenty of OSes out there for you to comfortably use. Linux is not and never has been one of them, and things move into the kernel in the Linux world when the advantages for doing so are clear and the disadvantages can only be summed up as "some people don't like it on principle because they're afraid of code in kernel space."
Posted Apr 27, 2009 18:40 UTC (Mon) by jsbarnes (guest, #4096)
No, but that's not the issue. You can search around and look for the motivations behind putting mode setting into the kernel, but they come down to:
- panic/oops display (yes, this is important; I don't know why you disagree)
- X independence (this is pretty big, especially on special-purpose devices)
- user experience (eliminating any unnecessary mode changes is a pretty big win)
- fbdev APIs aren't suitable for most current devices
- centralizes control over hardware (no more fighting drivers)
- improves suspend/resume speed & robustness
Simply saying that "The kernel and X also never access the hardware at the same time" doesn't make it true; in fact, that very issue has been a big source of bugs in the graphics architecture for a long time now.
And your comment about polling for display changes doesn't account for power savings. Polling is a terrible thing when you're running on battery and want to keep your CPU asleep as much as possible. And hotplug interrupts are required for certain new output types (like DisplayPort) to work correctly at all.
As for uvesafb/v86d, I can't imagine why you'd want to depend on that configuration. The VESA interfaces make you totally dependent on the VBIOS implementation, which isn't actually used much by real drivers, and so is often broken or missing modes for your hardware, and limits what you can do quite a bit. Native drivers are simply much more suited to real work than anything VESA/VBIOS based.
So overall I think you're ignoring a number of real issues with the current design; in-kernel memory management and kernel-based mode setting should have happened long ago.
Posted Apr 27, 2009 20:00 UTC (Mon) by mikov (subscriber, #33179)
I am mostly concerned about the first part - the "riskiness". Adding more code to the kernel will invariably mean more bugs. Graphics code is very complex (at least the code I have worked with).
Fundamentally, putting stuff in the kernel offers only two true benefits: ability to maintain cheap shared state, and the ability to force changes to distributions in a centralized manner. (The latter is sad but true). Any other benefits are incidental.
So, if it is not terribly complex or inefficient, it makes sense to move code out of the kernel, when possible.
Of course it is absolutely impossible with some kinds of code - network and disk drivers, file systems, etc. However there are also other kinds, like USB, where user mode drivers are possible and even desirable. Graphics is somewhere between.
Posted Apr 27, 2009 20:13 UTC (Mon) by jsbarnes (guest, #4096)
Also, you may have misunderstood my "centralized" item: putting output configuration into the kernel is not about pushing things to distros in a unified way. It's about putting the code in one place that allows all applications to benefit. In an X-based mode-setting world, any app that wants full control of outputs has to be an X app. If the code is in the kernel, any app or library can make use of it, and some already have. Projects like Wayland and standalone EGL libraries are made possible by kernel-based memory management and mode setting. I mean, who wants to rewrite mode setting for every chip and every type of application out there?
You're right about graphics being an oddball though: certain bits of it belong in the kernel and certain bits in userspace. I think the line should be drawn between memory management + mode setting and command generation. The former needs to be in the kernel so that apps don't stomp on global GPU state, while the latter is very complex and generally involves mapping APIs like GL and Cairo to a specific set of GPU instructions, so it is well suited to userspace libraries.
More generally, there's a dogma out there that says "if it can be done outside the kernel, it *should* be done outside the kernel". But such a rule is far too inflexible. In many cases doing something in the kernel provides such broad benefits that, though a given subsystem could be implemented differently, doing it in the kernel really is best (for example network drivers or system checkpointing). I think kernel mode setting and memory management clearly fall into that category as well: although the systems are complex, they provide very widespread benefits, and they keep much of the hard stuff in one place where it can be fixed once and used by many.
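[For illustration: the "any app or library can make use of it" point is easy to see from userspace. Once the kernel owns output configuration, even a plain script with no X connection can ask it what is plugged in and which modes each output supports. A sketch, assuming the standard KMS sysfs layout under /sys/class/drm/<card>-<connector>/:]

```python
import os

def list_connectors(drm_root="/sys/class/drm"):
    """Return {connector: (status, [modes])} as reported by the kernel."""
    info = {}
    if not os.path.isdir(drm_root):
        return info
    for name in sorted(os.listdir(drm_root)):
        conn_dir = os.path.join(drm_root, name)
        status_file = os.path.join(conn_dir, "status")
        if not os.path.isfile(status_file):
            continue  # skip bare cardN device nodes; only connectors have status
        with open(status_file) as f:
            status = f.read().strip()
        modes = []
        modes_file = os.path.join(conn_dir, "modes")
        if os.path.isfile(modes_file):
            with open(modes_file) as f:
                modes = f.read().split()  # one mode per line, e.g. 1920x1080
        info[name] = (status, modes)
    return info

if __name__ == "__main__":
    for name, (status, modes) in list_connectors().items():
        print(f"{name}: {status}, modes: {', '.join(modes) or 'none'}")
```

No X server, no DDX driver, no root privileges required - just the one kernel interface.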
Posted Apr 27, 2009 21:59 UTC (Mon) by drag (subscriber, #31333)
The Xorg DDX driver stuff... that had direct access to memory regions and would twiddle bits on the PCI bus, configure the hardware, and things like that. You had it, at least partially, configuring the AGP ports and whatnot.
All of this was happening, more or less, outside the control of the Linux kernel, and it was up to the Linux kernel, when dealing with your console or OpenGL graphics, to do what it could to avoid stepping on your X server's toes. If the Linux kernel or X server goofed up and had some sort of conflict, then it would quite possibly hard-lock your machine by crashing the PC hardware.
It's rare, but a few times I've been using Linux and the screen has gone all rainbow bright, the lights started blinking, and the speakers started blaring static. This is not a pleasant experience.
Also, X11 is a networking protocol. So when you're running your system in the traditional X terminal mode, you're allowing a program that has direct access to your hardware to have network access.
So yeah, the X server and Xorg's drivers are technically 'userspace', and that is nice, but it's not really something that is hugely better than just shoving _everything_ into the kernel.
By moving mode setting into the kernel (which is essentially low-level hardware configuration anyway...) and memory management (which Linux is good at, and you can leverage the existing code for application virtual memory management), you can eliminate X's requirement to run as root, eliminate direct access from your X server to your hardware, reduce the complexity and code redundancy in the Xorg drivers, and still have the OpenGL stack and EXA-related stuff (except for video memory management) all happening in userspace through a unified kernel interface.
Posted Apr 28, 2009 15:17 UTC (Tue) by lkundrak (subscriber, #43452)
Excuse me? Does that mean 5 wakeups a second?
Copyright © 2013, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds