Well, I guess it's true: you can always add another layer of indirection...
LCA: Two talks on the state of X
Posted Feb 9, 2008 17:57 UTC (Sat) by drag (subscriber, #31333)
Especially when it makes sense... like making it easier to write simpler drivers that are as
effective as the old ones (and making that code more portable), or making it possible to
support APIs other than just OpenGL, or making it easier for programmers to utilize the GPU
for things other than just making games go fast.
The thing is that the video card is no longer just useful for accelerating 3D rendering or
making movie playback smooth.
The GPU is more and more a co-processor for your computer than anything else. A GPU has a
massive amount of floating-point performance, and it usually has a fat wad of memory with an
extraordinary amount of bandwidth between it and the GPU. For certain tasks a 300 dollar video
card can blow away a cluster of x86 machines.
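
(As an illustration of that kind of GPGPU offload, here is a minimal sketch in C using the OpenCL API, which reached the mainstream shortly after this thread. The vadd kernel and the fixed problem size are placeholders, and error checking is omitted for brevity. The host copies two arrays into the card's memory, launches one work-item per element, and reads the summed result back.)

    #include <stdio.h>
    #include <CL/cl.h>

    /* Device code: one work-item adds one pair of elements. */
    static const char *src =
        "__kernel void vadd(__global const float *a,\n"
        "                   __global const float *b,\n"
        "                   __global float *c) {\n"
        "    int i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void)
    {
        enum { N = 1024 };
        float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

        /* Grab the first GPU the OpenCL runtime knows about. */
        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

        /* Copy the inputs into device memory. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                   sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

        /* Build the kernel and run one work-item per element. */
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "vadd", NULL);
        clSetKernelArg(k, 0, sizeof da, &da);
        clSetKernelArg(k, 1, sizeof db, &db);
        clSetKernelArg(k, 2, sizeof dc, &dc);
        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);

        /* Read the result back and spot-check it. */
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
        printf("c[1] = %g (expect 3)\n", c[1]);
        return 0;
    }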
Posted Feb 10, 2008 2:47 UTC (Sun) by flewellyn (subscriber, #5047)
This is true. I've read about projects to use the GPU's vector processing for general purpose computing. It's interesting, but it has me wondering when the vendors are going to realize the inefficiency of having two powerful but dissimilar processors on the machine, and fold GPU functions back into the CPU. IBM's Cell architecture may well be an example of this already.
But as for what you pointed to, I'm a little bit worried about the attempts to cover over differences between OpenGL and D3D. The only reason I'm worried is that, of course, D3D is a moving target, and one for which Microsoft has every reason NOT to cooperate with portability efforts.
Posted Feb 10, 2008 17:27 UTC (Sun) by stevenj (guest, #421)
It's interesting, but it has me wondering when the vendors are going to realize the inefficiency of having two powerful but dissimilar processors on the machine, and fold GPU functions back into the CPU. IBM's Cell architecture may well be an example of this already.
This has been going on for decades... see Sutherland's wheel of reincarnation.
Posted Feb 10, 2008 22:40 UTC (Sun) by flewellyn (subscriber, #5047)
Oh, indeed. That's why I brought it up.
Posted Feb 11, 2008 14:08 UTC (Mon) by michaeljt (subscriber, #39183)
Is that such a problem, given that mode setting is definitely not performance critical?
Posted Feb 12, 2008 19:22 UTC (Tue) by rfunk (subscriber, #4054)
It has nothing to do with being performance-critical, and everything to
do with low-level hardware interaction. The Unix philosophy is to have
the kernel do all direct hardware interaction.
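
(As an illustration, here is a minimal sketch in C of what that division of labor looks like under kernel mode setting: userspace never touches the hardware registers; it just asks the kernel, through ioctls wrapped by libdrm, which connectors are attached and what modes they support. Build with -ldrm; the device path assumes the first DRM card.)

    #include <fcntl.h>
    #include <stdio.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    /* Enumerate connectors and modes through the kernel's DRM/KMS
     * interface.  All direct hardware access happens in the kernel;
     * userspace only issues ioctls (wrapped here by libdrm). */
    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        drmModeRes *res = drmModeGetResources(fd);
        if (!res) { fprintf(stderr, "no KMS support\n"); return 1; }

        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn =
                drmModeGetConnector(fd, res->connectors[i]);
            if (!conn)
                continue;
            if (conn->connection == DRM_MODE_CONNECTED)
                for (int m = 0; m < conn->count_modes; m++)
                    printf("connector %u: %s (%dx%d)\n",
                           conn->connector_id, conn->modes[m].name,
                           conn->modes[m].hdisplay, conn->modes[m].vdisplay);
            drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        return 0;
    }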