Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
Posted Feb 8, 2013 20:49 UTC (Fri) by kugel (subscriber, #70540)
Parent article: Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
Posted Feb 8, 2013 21:14 UTC (Fri)
by shemminger (subscriber, #5739)
[Link] (69 responses)
Posted Feb 8, 2013 21:54 UTC (Fri)
by kugel (subscriber, #70540)
[Link] (68 responses)
That seems illogical to me, but on the other hand GKH isn't the network maintainer.
Posted Feb 8, 2013 23:05 UTC (Fri)
by brouhaha (subscriber, #1698)
[Link] (67 responses)
Posted Feb 8, 2013 23:31 UTC (Fri)
by teknohog (guest, #70891)
[Link]
Posted Feb 9, 2013 0:08 UTC (Sat)
by BrucePerens (guest, #2510)
[Link] (54 responses)
Posted Feb 9, 2013 2:55 UTC (Sat)
by brouhaha (subscriber, #1698)
[Link] (53 responses)
Obviously the same reasoning should apply to the case where D-Bus is NOT already in the kernel.
In general, I'm in favor of moving things out of the kernel. For example, I think putting KMS and DRI in the kernel were steps in the wrong direction.
Posted Feb 9, 2013 15:54 UTC (Sat)
by raven667 (subscriber, #5198)
[Link] (2 responses)
As far as KMS and DRI, they are definitely in the right place, keep the hardware management in the kernel and all the complicated graphics stack in userspace.
Posted Feb 10, 2013 4:49 UTC (Sun)
by bronson (subscriber, #4806)
[Link] (1 response)
Posted Feb 12, 2013 10:05 UTC (Tue)
by ortalo (guest, #4654)
[Link]
BTW: Note that I evolved too and now also have sympathy for the idea of not bothering *at all* with undocumented hardware when doing kernel programming. I certainly wouldn't have admitted that 10 years ago, but GPUs nearly fit that class.
Posted Feb 9, 2013 17:03 UTC (Sat)
by quanstro (guest, #77996)
[Link] (5 responses)
the plan 9 plumber is a dynamic uni/multicast router built from a regular 9p file server. at a very high level they seem to fill similar roles. (i know there's a lot more in d-bus, but i'm going for the idea.)
with this in mind, if going to the trouble of reworking d-bus, why not think big? why not consider a file server approach?
Posted Feb 9, 2013 18:40 UTC (Sat)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
DBUS is not in the business of sending/sharing files.
Posted Feb 10, 2013 15:02 UTC (Sun)
by quanstro (guest, #77996)
[Link] (1 response)
also, there is a long tradition in plan 9 of using file servers for messaging.
whatever happened to the idea that "everything is a file"?
Posted Feb 10, 2013 17:04 UTC (Sun)
by hitmark (guest, #34609)
[Link]
Posted Feb 10, 2013 4:51 UTC (Sun)
by bronson (subscriber, #4806)
[Link] (1 response)
Posted Feb 10, 2013 14:33 UTC (Sun)
by quanstro (guest, #77996)
[Link]
i just think it solves a similar problem, and it's worth considering if some of the mechanism might be right for this problem. there are a few reasons this is never a problem on plan 9:
1. private namespaces. each user has his own set of private namespaces. users don't interfere with one another.
2. on a shared machine, the user imports the plumber from their terminal, so if they plumb a pdf, the viewer starts on the terminal.
Posted Feb 14, 2013 4:20 UTC (Thu)
by mmarq (guest, #2332)
[Link] (43 responses)
Why? Care to elaborate?
In my limited view, it seems to have stabilized the things above in the display-driver arena.
To me it seems clear that the trend in display/graphics is GPGPU... even for Intel (sooner or later)... and without wanting to offend feelings, the leader of the client side (ARM) is heavy on HSA, along with the heavyweight Samsung, and their specification even includes the GPGPU as a bus master in the boot process... something that shouldn't be strange to Intel, since the PCIe v3 spec they control has provisions for it with the new multiplexing protocol of HP origin that they included.
I don't see any low-level CPU driver or interface in userspace (if wrong, please correct me)... so why should it be any different for the GPGPU, which in HSA is to be a "processor", not a "device", and not only in name or workload??
I think D-Bus in kernel space could find many uses.
Posted Feb 14, 2013 4:51 UTC (Thu)
by brouhaha (subscriber, #1698)
[Link] (42 responses)
There would still have to be different kernel drivers for different families of graphics cards, but they would all be very small.
Posted Feb 14, 2013 5:01 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (37 responses)
It doesn't make it a good idea. Generally, everything that touches hardware directly should live in the kernel.
Posted Feb 14, 2013 5:14 UTC (Thu)
by brouhaha (subscriber, #1698)
[Link] (33 responses)
EVERYTHING could be done from kernel space. It doesn't make it a good idea. You need a better justification to support putting stuff in the kernel.
Not too many years ago I worked for a very large router company, and in several of our Linux-based product lines, we put nearly everything that touched the hardware in user space, other than (as I suggested for graphics drivers) small kernel-space drivers to reflect interrupts to user space. Our research determined that there was no significant performance benefit to having our hardware drivers in kernel space, while there were significant advantages in maintainability, error recovery, and general robustness to having that driver code be in user space.
Posted Feb 14, 2013 5:27 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (27 responses)
Userspace GPU drivers never performed well. And the ability to handle interrupts was the least of the problems - most of the real problems were in trying to choreograph the complicated dance of device handoffs between the BIOS, kernel framebuffers and userspace GPU drivers.
Besides, all this shiny new GPU infrastructure makes it possible to kill off stuff that REALLY should not be in the kernel: the VT102 emulator for kernel framebuffers.
Posted Feb 14, 2013 5:48 UTC (Thu)
by brouhaha (subscriber, #1698)
[Link] (26 responses)
If VT handoff or interaction with V4L have some particularly tricky requirements, then perhaps doing those from userspace might need some additional infrastructure.
I'm not saying that I think we should go back to exactly the same userspace graphics code we used to have. There were definitely things wrong with those. However, moving chunks of it into the kernel isn't the only way to solve the problems we had.
I never expected anyone to take my comments about this very seriously. I don't have the time or inclination to work on GPU drivers. The people that actually do the work get to make the technical decisions about it, and I wouldn't have it any other way. This is just an exercise in being an armchair quarterback.
Posted Feb 14, 2013 5:54 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Oh, and this process must be able to get _exclusive_ access to the hardware. Because it intrinsically can't be shared.
Does it start to sound familiar? No?
> I'd be perfectly happy to have neither in the kernel, but if I had to choose between one or the other, I'd definitely keep the relatively small, simple, lightweight VT102 emulator in the kernel and put the big, complicated GPU stuff in user space.
The kernel-side drivers basically manage video buffer allocation and command submission. I.e. they take buffers with command batches and send them to hardware. They don't do anything that is fundamentally complicated.
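To make that division of labor concrete, here is a minimal sketch of the buffer-allocation half, using the kernel's "dumb buffer" DRM ioctl (a real interface; the device path, dimensions and the missing cleanup are just for illustration):

    /* Minimal sketch: ask the DRM kernel driver to allocate a display
     * buffer. The kernel chooses and manages the actual memory; userspace
     * only gets back an opaque handle plus pitch/size. Assumes the DRM
     * uapi headers (shipped with libdrm) and a /dev/dri/card0 node. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>
    #include <drm/drm_mode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        struct drm_mode_create_dumb creq;
        memset(&creq, 0, sizeof(creq));
        creq.width  = 1024;
        creq.height = 768;
        creq.bpp    = 32;

        /* The kernel driver allocates the buffer and fills in the rest. */
        if (ioctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq) < 0) {
            perror("DRM_IOCTL_MODE_CREATE_DUMB");
            return 1;
        }
        printf("handle %u, pitch %u, size %llu\n",
               creq.handle, creq.pitch, (unsigned long long)creq.size);
        return 0;
    }

Command submission follows the same pattern: userspace fills a buffer with a command batch and hands the whole thing to the driver in a single ioctl.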
Posted Feb 14, 2013 6:15 UTC (Thu)
by brouhaha (subscriber, #1698)
[Link] (1 response)
Putting code in user space doesn't magically make it less robust, any more than it magically slows it down. If you want robust code, you have to design it well, regardless of whether it's in kernel space or user space.
Posted Feb 14, 2013 8:47 UTC (Thu)
by mmarq (guest, #2332)
[Link]
i suggest a tour of the specs and docs at http://hsafoundation.com/ ...
quite a list, and quite some heavyweights, no?
So what stands out against your argument, in my view... even without deep technical arguments that i don't have now (if ever)... is that it is NOT about "devices" anymore but about "processors"... yes: talk GPU, talk processor...
And none of the other contenders to this spec will be left out... as a matter of fact Intel already has big "physics" cards... Apple won't stay in the shadows either... all these things are or will be **fully programmable**.
It is a wonder there is any news about gaming on Linux... when some of its developers seem to want to keep it third class in the sector... boy! we're lucky to have game studios interested...
Posted Feb 14, 2013 8:13 UTC (Thu)
by mmarq (guest, #2332)
[Link] (22 responses)
Uff!... thank god! lol
I want the tremendous *compute power* of the GPGPU... i want the ability to do "co-designed" on-the-fly binary translation with profiling for that compute power... i want all my graphics ray-traced for illumination, with OpenCL-style physics effects (hey! why not a KDE or Gnome environment with "physics" effects and animations?... not only games??)
quite a list no!?... i think doing all that from userspace would make programming the Cell look like superball.. lol
Posted Feb 14, 2013 18:01 UTC (Thu)
by nix (subscriber, #2304)
[Link] (21 responses)
But things like physical memory management and mediation between competing users -- that's what kernels *do*. It's what they do for the realm the CPUs rule over, and it seems perfectly sensible to have them do it for the GPU's realm too (and a heck of a lot more robust than the alternative).
Posted Feb 14, 2013 19:39 UTC (Thu)
by mmarq (guest, #2332)
[Link] (16 responses)
If it traces parallels with the Android model, yes, they might adopt something like Aparapi (which translates Java into OpenCL-like code, with knowledge of the target ISAs) and push for a back-end compiler or "finalizer" in the kernel.
But this is quite a bit smaller than LLVM, and resides below LLVM... quite low level.
Posted Feb 15, 2013 18:15 UTC (Fri)
by nix (subscriber, #2304)
[Link] (15 responses)
Posted Feb 18, 2013 4:21 UTC (Mon)
by mmarq (guest, #2332)
[Link] (14 responses)
I'll try in lay terms (warning: i'm NOT a developer of this).
The HSA standard, if that is what we are following, doesn't need kernel inclusions... i'm confident... not even the runtime... only the "finalizer", which touches the metal directly, could create the need.
This back-end compiler is very ISA specific; its job is knowing the target well and optimizing for it with runtime info. It's a JIT compiler, doing things that static compilers cannot do (well), even LLVM, which sits above it.
So most probably there will be several different "finalizers" for the many "processors" HSA encompasses, including not only ARM/AMD/Imagination GPUs but also TI DSPs etc.
So it could be a DKMS feature, if Linux supports DKMS properly(?)...
As a matter of fact it doesn't even have to be a "finalizer"; the HSA runtime can take care of it, like the OpenCL runtime or the C++ AMP runtime does.
For the more performance-aware version with a "finalizer", the only features that touch hardware are DMA (direct memory access), VM stuff, TLB stuff, scheduling and IOMMU stuff... things that don't seem to indicate GPU at all, no?...
And it could be done by improving what is already there... i don't think the patches must be intrusive if it comes to that, since the hardware features listed in the spec are really very small.
As a matter of fact the basic spec reads like any other language spec, with its proper runtime... and i don't see the need for C or C++ libs in the kernel(?)... neither should this need them. Though its idiom, HSAIL, is very low level and compatible with LLVM IR in SSA form: it takes SSA, transforms it to HSAIL, and then a "finalizer" turns that into native code. If you want your program to have the same virtual memory space, meaning CPU + GPU/DSP/etc share one space for better integration overall... then the "finalizer" is the way to go...
OTOH it can also proceed directly from the IL to the target via the runtime, and a program can be built for both targets by their tools, native + HSAIL (finalizer), using exactly the same code. So even if Intel NEVER implements the hardware features, any HSA program could run on any Intel CPU *platform* (so far) **UNCHANGED**...
Only with the "finalizer" stuff can the program run quicker, in perhaps most cases (if properly coded; for less heavy or less specific stuff i suspect no difference at all)...
The idea is not only to use the same *virtual memory space*, it is to use ANY language to program it... and much better than nVidia CUDA IMHO... it goes so far as to support C/C++, AMP, OpenCL, Fortran, Java (+ scripts), Python and more; i think more than what is natively supported by LLVM...
The advantage of a "common virtual space" is that any GPU/DSP/etc memory will be managed as system memory, not as a different pool... also the GPU/DSP/etc could have its context/exception handling and interrupt handling mostly outside of any CPU control (as if it were another CPU), which is better for power management... it's a UMA heterogeneous SMP architecture, an obvious and very good evolution of the co-processor idea...
The drawback is that i don't know if it's compatible with the current DRM implementation, or if it can be patched into it, since rendering seems to be only one of the features the standard encompasses; it could be complementary... or perhaps!... Intel top management, namely its CEO, gets one of those Christmas Ghost moments and Intel joins the HSA... eh!... with no need for false promises to the ghost lol... since Intel doesn't need to license anything (i think), it only has to make its hardware features compatible -> then there will be no bickering but a patch flood... lol
Posted Feb 18, 2013 4:28 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (13 responses)
Yeah, it will immediately beat all other architectures, sure.
Posted Feb 18, 2013 5:07 UTC (Mon)
by mmarq (guest, #2332)
[Link] (12 responses)
Sorry, but you are babbling nonsense... without ever having taken a peek at anything...
Samsung alone is bigger than Intel, and it's not only IT stuff; Samsung and LG make perhaps the majority of the consumer electronics world...
Posted Feb 18, 2013 5:23 UTC (Mon)
by mmarq (guest, #2332)
[Link] (6 responses)
*Sony*, Samsung, LG... IT IS *THE* consumer electronics world...
You'd better say "i don't want Linux on those things"... a hard wish to fulfill, no doubt... and the new PlayStation 4: you also don't want to see it with a Linux OS??... i doubt they will ever license from Microsoft (or that M$ would accept)...
Well, the PS4 is going to be HSA modeled... it's not PowerPC, it's x86 AMD, and Sony joined the HSA... the academic guys grew up and made real hardware lol... and a flood of shills entered the Linux forums, or is it that the veterans went to sleep... or are pesky politics abounding... ummm!!?..
Posted Feb 19, 2013 19:25 UTC (Tue)
by khim (subscriber, #9252)
[Link] (5 responses)
Wow! Looks like SONY just can't ever produce a nice, easy-to-program console. After the Cell fiasco (theoretically so much more powerful than the XBox's CPU, but so hard to program that most PS3 game ports are inferior to the XBox360 versions) they decided to use off-the-shelf hardware, but now they still want to screw the developers in some other way? I wish them luck; they'll need it.
Posted Feb 19, 2013 22:50 UTC (Tue)
by mmarq (guest, #2332)
[Link] (4 responses)
Sony should not be about nice and easy to program... it would be nice for a change... but primarily Sony, i think, is after "PERFORMANCE" (try to match both with the why of HSA).
See!?... the reason is simple: there is much bickering on hardware/review sites... but it seems CLEAR that if you want performance, PERFORMANCE IS IN THE SOFTWARE... arguing that this CPU or that processor is better than this or that other one is only controversy to entertain morons!... it is better, but with what software???
More!... according to a leak that circulates, the Durango GPGPU that goes into the Xbox 720 is very unusual and seems a highly customized design... many still laugh and foresee the doom of the Xbox, but that is what Microsoft chose, perhaps fitting better their tools and their programming paradigms... since it's clear PERFORMANCE IS IN THE SOFTWARE, it's highly premature to judge anything by the hardware features alone...
So your "off-the-shelf" argument is simply WRONG by many accounts... and you should applaud, because it gives the software side much more leverage.
But i understand your point about "off-the-shelf"; the "traditional" world of graphics and video is just too stupid to begin with... with super-bloated driver and VM JIT layers for the GLs of the GPGPUs -> crazy!... usually those console guys **program much more to-the-GPGPU-metal** and are able to extract an **order of magnitude (10x) more performance** than in the "PC world"... Microsoft is in the same boat somehow; the Xbox and Microsoft will be all around *compute* and C++ AMP all over...
LINUX IS NOT ABOUT "COMPUTE POWER", AND THE POWERS THAT BE SEEM NOT TO WANT IT TO BE... that is why, where client-side "mass adoption" is concerned (and it has to do with graphics/video), Microsoft will keep winning against Linux for the foreseeable future, and history repeats itself... even when they f### up as they did with Windows 8...
Sounds crazy but it's true... and that is where the HSA foundation entered... the idea is exactly to "facilitate" this, to open it to general languages, and that is why it caught so much interest and is a "de facto" and "de jure" standard right now.
Posted Feb 20, 2013 17:54 UTC (Wed)
by khim (subscriber, #9252)
[Link] (3 responses)
This one phrase says it all: bollocks. If you ever actually go and compare console versions of games with the PC versions of the same, you'll see that yes, they achieve "10x more performance"... by replacing nice textures, tessellation and other niceties with a blurry POS. Hardly an achievement worth talking about. I think I'll side with viro: this is not slashdot, and since effective discussion with you is impossible I'll just use this nice LWN feature and silence you in my stream. Have a nice day.
Posted Feb 20, 2013 22:17 UTC (Wed)
by mmarq (guest, #2332)
[Link]
I can even agree with some things you say, but everything must be in its proper context, and those contexts are not fixed; they evolve tremendously...
It's not to hurt feelings or to stir controversy... but the idea i threw out about forking the Linux kernel... as exposed... is perhaps not off base.
And i'm not the right person to answer most things; i'm sure HSA has people around here who could correct erroneous ideas if they are "allowed"... perhaps in another thread...
Posted Feb 21, 2013 8:11 UTC (Thu)
by elanthis (guest, #6227)
[Link] (1 response)
There was some AMD buffoon, who clearly only knew consoles, claiming that D3D12 would likely do away with most of the API since developers wanted to target the hardware directly. This is obviously impossible in a world with multiple GPU vendors, and even things like AMD replacing the machine-code format within its own product line.
Also, we don't _want_ to do this direct hardware coding so much as we _have to_, to make sure the games we put out in 2012 look better than our competitors' games from 2008 despite using the exact same hardware. We have to push micro-optimizations and tricks to improve performance because the hardware is locked in the stone age.
Consoles can be faster than an equivalent PC because we have no real OS and can micro-optimize in ways that PC/phone developers can't. This is not a good thing, for developers or users. Users want diversity and competition, and developers want easy APIs that make development cheap and efficient. Consoles do neither.
Meanwhile, real PCs can massively outperform consoles because for all the bloat, the hardware is massively more powerful, partly because many companies can compete to make the best hardware and users can and do upgrade at will. So mmarq's argument is just silly. Hardware does bring massive performance improvements, and software trickery is about making the most of limited hardware.
Posted Feb 21, 2013 9:18 UTC (Thu)
by khim (subscriber, #9252)
[Link]
Posted Feb 18, 2013 5:51 UTC (Mon)
by dlang (guest, #313)
[Link] (2 responses)
These vendors are examples of where things are not working, not examples that we should be following.
Posted Feb 18, 2013 8:23 UTC (Mon)
by mmarq (guest, #2332)
[Link] (1 responses)
I don't know what the politics of HSA are or what they decided... but i think they have serious problems... and drivers are not the only concern...
They must have an *open OS*... for their *open standards* -> they are open, all GPL compatible (dual licensing may be used, but BSD style is also used, and not only by LLVM).
IBM is not in it (i doubt it ever will be; these are not its targets, unless it moves to GPGPUs and such for computation, which also doesn't make much sense for it)...
Apple, though based on ARM, will not go for it and open their OS, even if they join...
Microsoft the same, or worse...
The BSDs are too lacking for "devices & stuff" and are being obsoleted by Linux.
*\\It remains only Linux//*
More than drivers, i think it would be wise to see if they could build ALL of an OS, if not almost all of a distro, using their very comprehensive compiler toolchain... one which could very well be ***much more ARMv8 64-bit centric than x86***.
GCC is at a paradoxical crossroads, as reported by Phoronix; Intel ICC is too CPU centric and will remain so for the foreseeable future, even if they open-source it, and worse, i doubt it will ever target ARM...
Since most of the industry, including the leaders of the client side of computing, ARM, Samsung & CE, have set out to build a top-notch compiler toolchain designed specifically for this... one natural candidate might be Android...
But i can't see if Google is up to it; they may decide to go a little like Apple/M$ and target primarily their own stuff... it's Linux based, it's somewhat proprietary too, and they might NOT want to take on the BURDEN...
HSA is big, they can do it...
So all that remains is to fork the Linux kernel itself... I SEE NOTHING HOSTILE IN THIS... IT WOULD BE EXACTLY LIKE *ANOTHER* DISTRO, BUT ONE THAT WOULD GO MUCH DEEPER THAN USUAL, APPLYING PATCHES NOT YET APPROVED FOR, OR STILL WAITING ON, MAINLINE... ALL DISTROS DO IT MORE OR LESS...
In the end LF and Linux can also benefit from it; it would be like another branch, and it already maintains several, so another one maintained elsewhere doesn't hurt... and built with the HSA CC, and with its driver interfaces, which could be DKMS++ based and used also by many other distros... and who knows if it won't be useful in the future; who knows about GCC -> can it evolve?
I'm afraid with the politics not all is rosy, nor can anyone ask for more than the remarkable work that has been done so far by the LF, but when the HSA compiler collection is finished, they must find a really USEFUL solution, and they will want to really test it with an OS... they will have no other choice but to find that solution -> you simply don't spend a lot of time, expertise and money building something and then not use it...
I don't know the politics, but i see it as an addition... not something bad... distros: for every one that folded over the years, 2 more seem to have popped up in its place, many with many peculiarities, everyone trying to be different... what would be the danger of having another one??...
It would not be an attempt to take over... as a matter of fact i think that, exactly like any other distro, they would follow LF attentively and try to participate; only they could go very different in a lot of changes...
The GPL is not really suitable for "exclusivity, my way or the highway" kinds of approaches; whoever thought of that must be desperate or in deep delusion...
In the end i don't know the decisions... but i also don't see many other options...
Posted Feb 18, 2013 9:12 UTC (Mon)
by mmarq (guest, #2332)
[Link]
I mean it is used, and could be used extensively... but i doubt it will ever produce any political results for any particular agenda, or impediments to others... besides exactly a cave-in choice from the producer... and i doubt any "forcing" effect on any sizable receiver.
That could have worked more or less for the type of distros we had so far... and even so not all... and none in strict patch or software feature choice...
Linux has been protected from fork pressure by its inherent complexity, and has been able to dictate...
Android breached that somehow... in the future it can be much worse... and it's not only about HSA...
So i find these comments about a "my way" kind of vision, and that we should pursue this and not that, very amusing... probably no conscious kernel devs, if one at all... and the case is you should pursue what you think best, but abstain from such comments.. lol...
Posted Feb 19, 2013 19:25 UTC (Tue)
by khim (subscriber, #9252)
[Link] (1 responses)
Nope. These are companies which cough up a few millions here and there to make sure they'll be in the loop if this idea actually reaches some usable stage. It does not mean anything beyond that at this point. Remember the last "next big revolution" (called EPIC, if you forgot)? It also had impressive names behind it: Intel, HP, IBM (which wanted to use the opportunity to unify UNIX), etc. They had really awesome slides! And a really cool story. I'd say an EPIC story. In the end we got an epic fail, nothing more. I'm not saying they'll fail, but the fact remains: the number of supporters and the sizes of those supporters don't automatically equal success.
Posted Feb 19, 2013 23:41 UTC (Tue)
by mmarq (guest, #2332)
[Link]
HSA is NOT about a CPU u-arch... at best it's about a macro-arch concept, one which is highly flexible... but most of all it is about a "PROGRAMMING PARADIGM"...
So, according to the names involved, because *they use it*... HSA fits nicely with ARM, x86, MIPS and PowerPC (though Sony changed, i think it's still used in some other stuff by some) on the CPU side... TI, ARM, MIPS etc. DSPs... all the analogue stuff of ST-Ericsson and MIPS and ARM etc... and it's open to many more!...
So, catching a few millions here and there: all those guys together are many times the size of Intel, for example (5x perhaps!?)... Samsung alone is bigger than Intel... no matter how much many dogmatic, sectarian POVs want to twist it, it is the truth...
Also, no matter how hard it is for many to take... it can be debated, and it depends highly on how and what you count... but from smart/superphones to desktop, by *unit counts*, the CPU u-arch leader of the client side of computing is ARM, not x86... and nothing will prevent ARMv8 64-bit from spilling in force into the traditional mobile (laptop) side of computing, and even some of the desktop (in these earlier steps)... not even if Intel goes thermonuclear (it's too late now).
I think Google saw the light... "renderscript" smells of HSA all over somehow, staying more ARM centered and not MIPS or x86... Android will be the only real fight against Microsoft on the client side... though Apple will remain very big, as big as Intel; i think they are "too closed" to ever be the clear dominant force on the client side.
Posted Feb 14, 2013 20:11 UTC (Thu)
by mmarq (guest, #2332)
[Link] (3 responses)
I don't know why you gentlemen fear or oppose some of these kernel inclusions. It could be modular, a compile-time option, and import little additional maintenance burden... or the opposite: along with them it might also bring in additional savvy developers.
Posted Feb 15, 2013 18:18 UTC (Fri)
by nix (subscriber, #2304)
[Link] (2 responses)
Posted Feb 18, 2013 4:57 UTC (Mon)
by mmarq (guest, #2332)
[Link] (1 responses)
ummm... strange agendas!... DKMS is exactly a good idea for the overburden complaints. I don't know why kernel devs must maintain everything about drivers... lots of bickering, yet the kernel already imports firmware blobs, when there could be a proper interface (discussed/DESIGNED case by case per hardware family, with parts of DKMS changed accordingly)...
Then the more generic stuff goes into kernel maintenance... APIs don't change that often (as a matter of fact, quite infrequently for many tastes)... come on, you know it's true!... and the burden of supporting every single variant of a hardware family goes to the vendors... like half and half...
Free drivers could co-exist with proprietary ones, ***which they already do***, so nothing new and nobody gets hurt; and if properly designed, more things could be open, and free drivers could use a lot of the same stuff as the proprietary ones, without the need for a mess of half-finished things and proprietary drivers that break every 2 versions...
And if security bitching is a concern, why not direct the program caging and sandboxing to the DKMS side and really support it, instead of shoving it onto the unsuspecting user, who will never, not even in dreams, be using NSA-grade stuff...
About KMS, it makes good sense in userspace... the reason is that consoles will also want to use the possibilities, kernel interfaces directly, so it's sensible...
The other arguing: maintenance, stuff... issues... preferences... tastes... particular visions... semantics for pointless bickering.
10 years and little change in many things... oh well, progress is not painless... action/counter-reaction applies here too... (damn politics!)
Posted Feb 25, 2013 17:02 UTC (Mon)
by nix (subscriber, #2304)
[Link]
Posted Feb 14, 2013 5:51 UTC (Thu)
by dlang (guest, #313)
[Link]
Because kernel developers care about backwards and forwards compatibility; userspace developers tend not to.
As an example of where 'put it in userspace' is an ongoing disaster, look at camera 'drivers' for android devices. This is done in userspace, but there are a lot of not-so-old android devices that cannot use the camera on newer versions because the closed source userspace drivers don't work with newer versions of android.
Posted Feb 14, 2013 7:45 UTC (Thu)
by mmarq (guest, #2332)
[Link] (2 responses)
Naa!.. that was because your kernel (in kernel space) wasn't nearly as good as Linux... lol
Seriously, i think of a parallel: virtual machines... ummm... why not run the whole shebang in userspace??
Almost impossible for many tasks & functions, right??
A driver for a full-blown GPGPU is the same. Things have changed a lot from earlier models... things can change dramatically in the near future, you'll see... research like you mention is not only curious, it will be mandatory to do it again.
Posted Feb 14, 2013 15:11 UTC (Thu)
by brouhaha (subscriber, #1698)
[Link] (1 response)
No, I don't think that was the reason.
Posted Feb 14, 2013 19:55 UTC (Thu)
by mmarq (guest, #2332)
[Link]
If it was as good then as it is now, why the fuss with all this development... to make it worse?
Posted Feb 14, 2013 17:21 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
Then you are actually agreeing with the existing KMS design, since that is pretty much how it is implemented. The small amount of hardware management and IO needed to get data to/from the GPU is implemented as a kernel driver, and the bulk of the complexity is in a userspace library (libEGL/libGL). The area where there may be confusion is that the minimum viable complexity of the kernel driver may be quite a bit higher than for the hardware you were interfacing with in your example: the GPU is practically a whole other machine, with its own CPU, RAM and I/O, but also sharing with the host machine. There's no way to manage that without some level of cooperation between the GPU and the CPU kernel.
Posted Feb 14, 2013 7:28 UTC (Thu)
by mmarq (guest, #2332)
[Link] (2 responses)
That is the exokernel approach... which can happen to work very well, and faster, for many "things"... only i think Linus will set the dogs on anyone who dares to suggest that... lol...
Posted Feb 14, 2013 14:09 UTC (Thu)
by Cyberax (✭ supporter ✭, #52523)
[Link] (1 response)
But please, leave Linux alone - we really would like it to succeed, not crash and burn like all other microkernel OSes.
Posted Feb 14, 2013 19:50 UTC (Thu)
by mmarq (guest, #2332)
[Link]
The point was: since the pointless bickering tends to be black or white... almost everything on one side or the other... i was just describing the option of a micro-micro-micro kernel (exo) (choose black or white as suits your fancy lol).
Posted Feb 14, 2013 6:09 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link] (3 responses)
Modesetting. The kernel needs to be able to reprogram a mode over suspend/resume, which means the kernel needs to be able to program the mode that userspace requested; you need modesetting support in the kernel. At that point, allowing userspace to program a mode that doesn't match the kernel's expectations is dangerous. The kernel needs to be able to set modes.
There's plenty that the kernel does that can't be handled in userspace, simply because (a) we don't trust userspace, and (b) userspace doesn't run during suspend/resume. You could write an operating system that did all of that in userspace, but it'd be shit and incapable of meeting modern expectations of OS security. The only reason to push hardware management out of your kernel is because you're producing a microkernel, and Linux isn't.
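For readers unfamiliar with the resulting split: this is exactly the shape of the KMS interface, where userspace never touches display registers but asks the kernel to set a mode through libdrm. A rough sketch (the card path and first-connected-connector choice are illustrative, a real client would first create a framebuffer with drmModeAddFB() instead of passing 0, and error handling is trimmed):

    /* Rough sketch: ask the kernel, via libdrm's KMS API, to set a mode.
     * Because the kernel performed the modeset, it can restore the same
     * mode on resume without any help from userspace. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        drmModeRes *res = drmModeGetResources(fd);
        if (!res) { fprintf(stderr, "not a KMS device\n"); return 1; }

        /* Pick the first connected connector and its first listed mode. */
        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (conn && conn->connection == DRM_MODE_CONNECTED && conn->count_modes) {
                /* fb_id 0 is a placeholder; pass a real framebuffer id here. */
                drmModeSetCrtc(fd, res->crtcs[0], 0, 0, 0,
                               &conn->connector_id, 1, &conn->modes[0]);
                drmModeFreeConnector(conn);
                break;
            }
            if (conn)
                drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        return 0;
    }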
Posted Feb 14, 2013 6:25 UTC (Thu)
by brouhaha (subscriber, #1698)
[Link] (2 responses)
I'll accept your argument for modesetting for suspend/resume.
Posted Feb 14, 2013 6:33 UTC (Thu)
by mjg59 (subscriber, #23239)
[Link]
Yes! You shouldn't let any userspace application access the GPU directly, because these days we have an expectation that userspace shouldn't be able to compromise the kernel. The alternative is to have signed userspace, and that's not an acceptable option. So, unless you're producing a microkernel which has a separation between drivers and the rest of userspace (which Linux doesn't have), the correct line to draw is the one where only the kernel gets to drive hardware that can overwrite the rest of the OS.
Posted Feb 14, 2013 17:27 UTC (Thu)
by raven667 (subscriber, #5198)
[Link]
Sure it does: the kernel has no control over what is run in user space and can make no assumptions about it being the "same code" or some expected implementation; it can have zero trust that the data given to it from userspace isn't bogus, and it absolutely must check everything. This is different from a closed environment like an appliance, where one entity has control of both userspace and kernelspace; maybe restrictions could be relaxed in that case.
Posted Feb 11, 2013 17:31 UTC (Mon)
by daniel (guest, #3181)
[Link] (10 responses)
Posted Feb 11, 2013 17:56 UTC (Mon)
by hp (guest, #5220)
[Link]
Posted Feb 11, 2013 19:27 UTC (Mon)
by Cyberax (✭ supporter ✭, #52523)
[Link] (8 responses)
My personal gripe with DBUS is its poor handling of reconnections.
Posted Feb 11, 2013 20:54 UTC (Mon)
by hp (guest, #5220)
[Link] (7 responses)
I don't think the problem has anything to do with dbus-daemon; you can restart it fine.
The problem is that apps don't handle the restart... and that for them to handle it would require them to write quite a lot of complex code that would rarely be tested.
Say hypothetically that someone wrote all that code, and then religiously lobbied app developers to keep writing it in new apps, and kept testing it and fixing bugs...
Even given this hypothetical work, personally I would never be confident that, at any given point, all apps have that codepath working. So I would just reboot anyway.
The difference between dbus and other daemons here is not that dbus somehow forbids restart. It's that dbus has persistent and stateful connections (and that's core and essential to the purpose of dbus).
Restarting dbus is like saying you want to restart the X server without killing any X apps. It's the same technical challenge as that. Namely, all apps would have to track and be able to restore all the state kept by the server.
Rule of thumb with dbus: what does X protocol do? dbus usually does the same thing and has the same pros and cons.
We have looked some at a client library design that makes it easier to handle daemon restart - basically a library where you provide a cascade of callbacks ("connect to bus handler", "service is now owned handler", etc.) and those callbacks could be re-run on bus re-connect. However, it is a lot harder for app developers to understand this kind of API, and in any case, existing apps aren't doing it this way.
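To make the shape of that API concrete, here is a hypothetical sketch (every name below is invented for illustration; this is not libdbus): the app supplies a cascade of callbacks, and the library re-runs the cascade from the top on every (re)connect, so all bus state is re-created rather than restored:

    /* Hypothetical reconnect-aware client API, invented for illustration.
     * The app never owns the connection; it registers callbacks and the
     * library re-runs them after every reconnect. */
    #include <stdio.h>

    typedef struct bus_session bus_session;
    struct bus_session {
        void (*on_connect)(bus_session *s);       /* (re)connected to bus  */
        void (*on_name_acquired)(bus_session *s); /* own a well-known name */
        void (*on_disconnect)(bus_session *s);    /* daemon went away      */
    };

    static void my_connect(bus_session *s)
    {
        /* Re-request names, re-add match rules, re-export objects here;
         * this runs on the first connect and after every bus restart. */
        printf("connected: recreating all bus state\n");
    }
    static void my_name(bus_session *s) { printf("name acquired\n"); }
    static void my_drop(bus_session *s) { printf("bus gone, retrying\n"); }

    int main(void)
    {
        bus_session s = { my_connect, my_name, my_drop };
        /* A real library would loop: connect, run the cascade, dispatch
         * messages, and on EOF from the socket start over at the top. */
        s.on_connect(&s); s.on_name_acquired(&s);
        s.on_disconnect(&s);
        s.on_connect(&s); s.on_name_acquired(&s);
        return 0;
    }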
Posted Feb 11, 2013 23:04 UTC (Mon)
by daniel (guest, #3181)
[Link] (1 response)
Of course, we could always consider waiting for DBus to stop causing user space issues before welcoming it into the kernel with open arms, hearts and minds. I'm trying to avoid using the word "gaping" here...
Posted Feb 11, 2013 23:10 UTC (Mon)
by hp (guest, #5220)
[Link]
Posted Feb 12, 2013 0:04 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (4 responses)
It's not really possible with the current DBUS protocol. That's another reason why naively networked DBUS is not such a good idea. However, layering DBUS on top of something like ZeroMQ could be interesting.
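Part of ZeroMQ's appeal here is that reconnection is handled inside the library: the application's socket handle stays valid across peer restarts. A minimal sketch with the libzmq C API (the endpoint is illustrative):

    /* Minimal sketch of ZeroMQ's transparent reconnection: if the peer at
     * tcp://localhost:5555 restarts, the library quietly reconnects and
     * queued messages are delivered once it is back. */
    #include <stdio.h>
    #include <string.h>
    #include <zmq.h>

    int main(void)
    {
        void *ctx = zmq_ctx_new();
        void *sock = zmq_socket(ctx, ZMQ_PUSH);

        int linger = 0;  /* don't block at exit waiting for delivery */
        zmq_setsockopt(sock, ZMQ_LINGER, &linger, sizeof(linger));

        /* Succeeds even if nothing is listening yet; connection management
         * (including all reconnects) happens inside the library. */
        zmq_connect(sock, "tcp://localhost:5555");

        const char *msg = "hello";
        zmq_send(sock, msg, strlen(msg), 0);  /* queued until a peer appears */

        zmq_close(sock);
        zmq_ctx_destroy(ctx);
        return 0;
    }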
Posted Feb 12, 2013 1:00 UTC (Tue)
by hp (guest, #5220)
[Link] (3 responses)
dbus was widely adopted because it solved certain problems that were previously unsolved, by making different tradeoffs vs previous solutions.
Anybody can show up and say "oh that tradeoff has this downside." That's why it's called a tradeoff.
Anyone on the Internet could prove me wrong by showing the code which has the pros without the cons. That's the beauty. Everyone would jump to use a best of all worlds solution like that. Meanwhile, people are using a solution that exists.
In my view, the client libs could be designed to better support reconnection but ultimately the app has to handle the case. Neither the daemon nor the protocol are the source of the "restart problem." The problem is that handling restart in N different codebases, with none of them ever buggy, is not practical. It isn't impossible, but nobody who has actually written code, to date, has decided the cost:benefit ratio holds up and proceeded to tackle this.
Posted Feb 12, 2013 2:22 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link] (2 responses)
Yes, it works pretty well. But it does have shortcomings that could have been avoided by more careful design. You can make a "reconnectable" messaging protocol pretty easily; it's not rocket surgery - by storing the current state of the server's subscriptions in durable storage, for example, or by introducing an explicit "reconnection" phase.
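A sketch of what the client side of that could look like today: keep the match rules in the application's own table and replay them after reconnecting. dbus_bus_get_private() and dbus_bus_add_match() are real libdbus calls; the replay structure around them is the illustrative part, and the second match rule is a made-up name:

    /* Sketch of an explicit client-side "reconnection phase": the app owns
     * a durable list of subscriptions and re-establishes every one of them
     * each time it (re)connects to the bus. */
    #include <stdio.h>
    #include <dbus/dbus.h>

    static const char *matches[] = {      /* the durable subscription list */
        "type='signal',interface='org.freedesktop.DBus'",
        "type='signal',interface='org.example.Player'",   /* hypothetical */
    };

    static DBusConnection *connect_and_replay(void)
    {
        DBusError err;
        dbus_error_init(&err);

        DBusConnection *conn = dbus_bus_get_private(DBUS_BUS_SESSION, &err);
        if (!conn) { fprintf(stderr, "no bus: %s\n", err.message); return NULL; }

        /* The reconnection phase proper: replay every subscription. */
        for (unsigned i = 0; i < sizeof(matches) / sizeof(matches[0]); i++)
            dbus_bus_add_match(conn, matches[i], &err);
        return conn;
    }

    int main(void)
    {
        DBusConnection *conn = connect_and_replay();
        /* On disconnect, a real client would call connect_and_replay()
         * again instead of giving up. */
        return conn ? 0 : 1;
    }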
Posted Feb 12, 2013 10:24 UTC (Tue)
by ortalo (guest, #4654)
[Link] (1 response)
Posted Feb 12, 2013 14:01 UTC (Tue)
by Cyberax (✭ supporter ✭, #52523)
[Link]
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
Maybe we can move Firefox and LibreOffice into the kernel next. Then we can do away with user space entirely.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
Except for the console.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
I know you're being sarcastic, but I'd take the collection around udev and move it back into the kernel first.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
the plumber doesn't pass the file around, it passes a message around, usually with a pointer (path) to the file. here's a direct link: http://www.plan9.bell-labs.com/magic/man2html/4/plumber

the literal plumber however isn't even the point. the point is that plumber is an example of a file server which routes messages. there are many more "virtual" file servers than there are disk file servers. this avoids having to invent new address families (and getting permission from the kernel, libc, etc. to add them). plan 9 uses regular 9p. network transparency may be accomplished the same way as for on-disk file systems.

and if one wants to see plumb messages on the edit port, just "cat /mnt/plumb/edit". cat will block until a message arrives, display it, and repeat. i don't need a special plumbcat program.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
The only thing that the DRI and KMS code running in kernel space is doing that couldn't be done perfectly well from user space is handle interrupts. It should be entirely adequate to have a very small driver in kernel space that does nothing but allow a user-space thread to block until the graphics card requests an interrupt.
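Linux in fact has an interface with roughly this shape, UIO, where the in-kernel part is a stub and a userspace process blocks in read() until the device interrupts. A sketch (assumes a UIO driver is already bound to the device):

    /* Sketch of the "tiny kernel driver, userspace does the rest" model
     * using the Linux UIO interface: each read() on /dev/uioN blocks until
     * an interrupt arrives, then returns the interrupt count. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/uio0", O_RDONLY);
        if (fd < 0) { perror("open /dev/uio0"); return 1; }

        for (;;) {
            uint32_t count;
            /* Blocks here until the hardware raises an interrupt. */
            if (read(fd, &count, sizeof(count)) != sizeof(count))
                break;
            printf("interrupt #%u: service the device from userspace\n", count);
        }
        return 0;
    }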
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> EVERYTHING could be done from user space. [...] It doesn't make it a good idea.
Nor does it necessarily make it a bad idea.
> Generally, everything that touches hardware directly should live in the kernel.
Aside from that having been historically true, back when hardware was MUCH simpler, why should that be a particularly important distinction today? Back then we didn't have mmap() and such things, but now that we do, there's a lot less justification for throwing things into the kernel just because they happen to touch hardware registers.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
I'm not sure about VT handoff or interaction with V4L, but none of the other features that you've mentioned are especially better due to being in the kernel. There are suitable mechanisms for all of them for user-space processes. If there weren't, we'd invent some.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> Userspace GPU drivers never performed well.
That might be true, though I never personally had any significant performance issues with them. But even if it were true, I don't think there's anything inherent to the problem that prevents it from being done in userspace with performance comparable to what we get in kernel space. That's certainly what we found at the very big router company. There's nothing magic about being in kernel space that makes code run significantly faster. Certainly in user space you need to avoid doing things that involve copying memory from one process address space into kernel space and then again into another process address space. Naively designed code tends to do a lot of that, but it's quite possible, and not even that difficult, to avoid it by using modern OS facilities.
> Besides, all this shiny new GPU infrastructure makes it possible to kill off stuff that REALLY should not be in the kernel: the VT102 emulator for kernel framebuffers.
I'd be perfectly happy to have neither in the kernel, but if I had to choose between one or the other, I'd definitely keep the relatively small, simple, lightweight VT102 emulator in the kernel and put the big, complicated GPU stuff in user space.
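One concrete example of those facilities: sysfs exposes PCI BARs as files, so a suitably privileged process can mmap() device registers and drive the hardware with no copies through the kernel on the data path (the bus address below is a placeholder):

    /* Sketch: map a PCI device's BAR0 into this process and poke its
     * registers directly. The sysfs path stands in for a real bus address;
     * register offsets are device specific, and this needs privileges. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>

    int main(void)
    {
        int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
        if (fd < 0) { perror("open BAR0"); return 1; }

        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0);
        if (regs == MAP_FAILED) { perror("mmap"); return 1; }

        /* Offset 0 is just a demo read; a real driver knows the layout. */
        printf("register 0 reads as 0x%08x\n", regs[0]);
        return 0;
    }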
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
Please, go and educate yourself. GPU interaction requires an always-on privileged process that should be able to talk directly with hardware. If this process crashes you can easily get a hard system lockup.
Have you actually SEEN the GPU drivers? No? Well, that's usual.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> If this process crashes you can easily get a hard system lockup.
So? If a kernel driver crashes, you can easily get a hard system lockup. Been there, done that, got the T-shirt. I've been a device driver developer on various flavors of Unix for more than 20 years.
> Have you actually SEEN the GPU drivers?
As a matter of fact, I have looked at the GPU drivers, and they seem a lot more complicated than the VT102 emulator to me.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> Don't follow, who said anything about putting LLVM into the kernel?
You did. The 'tremendous compute power of the GPGPU' is implemented by compiling things targeted to the GPU, via LLVM (at least that's the lion's share of it). Putting that in the kernel means putting LLVM in the kernel.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
http://www.slideshare.net/hsafoundation/hsa10-whitepaper
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> Well, the PS4 is going to be HSA modeled... it's not PowerPC, it's x86 AMD, and Sony joined the HSA...
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> usually those console guys **program much more to-the-GPGPU-metal** and are able to extract an **order of magnitude (10x) more performance** than in the "PC world"
The only games which "make way more use of the hardware in consoles" are exclusives like Uncharted, where you really can throw away all these HSA discussions and code to bare metal. Most cross-platform games (and very few games are exclusives nowadays) are better on the XBox360 because its hardware is closer to a PC, and usually you don't need a 10x more powerful PC to beat the console game in quality (2x is more than enough), which implies that these "micro-optimizations and tricks" are not all that popular. They are used in some "computationally heavy" places (because otherwise the game will just not work on the console), but most of the time it's just a plain reduction in quality (number and size of textures, etc.) till you have the required FPS. And of course there is no HSA or "bare metal" access: it's OpenGL or Direct3D on consoles, too.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> Samsung, ARM, TI, Sony, AMD, Imagination, LG, STMicro, ST Ericsson etc etc etc... are academic guys without real hardware?
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> The other arguing: maintenance, stuff... issues... preferences... tastes... particular visions... semantics for pointless bickering.
So... not having to implement everything twice is 'semantics for pointless bickering'? Ooh-kay.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> Naa!.. that was because your kernel (in kernel space) wasn't nearly as good as Linux...
Linux (in kernel space) wasn't nearly as good as Linux?
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> GPUs can do DMA. Allowing userspace to submit arbitrary commands to GPUs means that userspace can do arbitrary DMA, and as such means that any application with access to your GPU can do anything it wants to.
Which is why you wouldn't let just any userspace application access the GPU directly. I don't let just any userspace application scribble on /dev/sda either.
> There's plenty that the kernel does that can't be handled in userspace, simply because (a) we don't trust userspace,
That's a bizarre argument. Having code in user space doesn't magically make it less trustworthy than the same code would be in kernel space. Whether particular code should be trusted is a matter of policy, and there are various mechanisms for policy enforcement.
> The only reason to push hardware management out of your kernel is because you're producing a microkernel,
Assumes facts not in evidence. At the very large router company, we pushed hardware management out of the kernel for many reasons, none of which had anything whatsoever to do with whether the kernel we were running was monolithic or a microkernel. I could certainly accept that *A* reason to push hardware management out of your kernel is because you're producing a microkernel. It's a big jump from *A* to *The only*.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
> Maybe we can move Firefox and LibreOffice into the kernel next.
I do not need to reboot Linux often, but when I do, the reason is usually DBus. Often because DBus has the CPU pegged at 100%, an effective DoS. Killing DBus usually leaves the system in an unusable state from which there is no obvious recovery. I say, if there is to be a world-eating message bus in the kernel, design it properly. DBus is not ready for prime time, far from it.
Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel
http://lists.freedesktop.org/archives/dbus/2005-March/002...
Slightly off topic (Kroah-Hartman: AF_BUS, D-Bus, and the Linux kernel)
I am left wondering if it involves doing surgery in space or repairing rocket engines? (Hopefully not both...)