LCA: The X-men speak
Posted Feb 12, 2013 17:02 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) In reply to: LCA: The X-men speak by daniels
Parent article: LCA: The X-men speak
Though there certainly should be a mechanism to get access to all GPU capabilities.
Posted Feb 13, 2013 9:48 UTC (Wed) by renox (guest, #23785) [Link] (2 responses)
But not the other way round. Correct me if I'm wrong, but an integrated GPU can do 'page flipping' (an important mechanism for performance), whereas a discrete GPU cannot, because its memory is separate from main memory.
Posted Feb 13, 2013 17:49 UTC (Wed) by daniels (subscriber, #16193) [Link] (1 responses)
The new mechanism Keith's talking about for partial page flipping (i.e. doing so on a per-page basis, whereas usually when we say 'page flipping' we mean doing it for the entire buffer - confusing I know) only really works at all on Intel hardware, and even then requires a great deal of co-operation from and hand-holding of all parts of the stack.
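For readers unfamiliar with the non-partial kind: below is a minimal sketch of whole-buffer page flipping through KMS/libdrm, which is what "page flipping" usually means in this thread. The fd, crtc_id and fb[] values are placeholders for an already-configured KMS setup; this is only an illustration of the ordinary flip path, not Keith's partial per-page mechanism.

/* Whole-buffer page flipping with KMS/libdrm: queue a flip to the back
 * buffer, then wait for the flip-completed event before reusing the
 * buffer that just went off-screen.  fd, crtc_id and fb[] are
 * placeholders for an already-configured KMS setup. */
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <stdint.h>
#include <stdio.h>

static void flip_done(int fd, unsigned int seq, unsigned int sec,
                      unsigned int usec, void *data)
{
    int *pending = data;
    *pending = 0;            /* the previous front buffer is now free */
}

int swap_buffers(int fd, uint32_t crtc_id, uint32_t fb[2], int *front)
{
    int pending = 1;
    drmEventContext ev = {
        .version = DRM_EVENT_CONTEXT_VERSION,
        .page_flip_handler = flip_done,
    };

    /* Ask the kernel to scan out the back buffer at the next vblank. */
    if (drmModePageFlip(fd, crtc_id, fb[!*front],
                        DRM_MODE_PAGE_FLIP_EVENT, &pending)) {
        perror("drmModePageFlip");
        return -1;
    }

    /* Block until the flip completes; a real compositor would poll()
     * the fd alongside its other event sources instead. */
    while (pending)
        drmHandleEvent(fd, &ev);

    *front = !*front;
    return 0;
}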
Posted Feb 21, 2013 10:27 UTC (Thu) by mmarq (guest, #2332) [Link]
Then it is useless for anything more than trivial use.
Posted Feb 21, 2013 7:07 UTC (Thu) by elanthis (guest, #6227) [Link] (2 responses)
Not true at all. It's very common for the discrete GPU to support a higher level of OpenGL and more extensions, which I really want to use. It's not at all uncommon on Windows to have self-written games and graphics apps crash and burn if you don't tell Optimus to use the discrete GPU for that process. If I know my game needs GL 3.3 and some extensions, and only one GPU can provide those, the stack should select that one. Likewise, the app probably knows whether it only needs basic rendering or rendering that is as fast as possible (the browser being a weird case, as different web apps might have different needs).
Part of the problem is that some heuristic and app list makes those decisions instead of the app saying during device handle creation that it would prefer the more capable GPU vs the more power efficient GPU. The app knows what it needs out of a GPU, and it should be able to at least hint to the stack about its preferences.
I know the DirectX/DXGI team has some plan here. No idea what Khronos plans to do with EGL, or if it's even thinking about the problem, but like most things Khronos does, it'll probably be some 1980s-style, horrible, error-prone API with zero debugging tools, and it'll be half a decade late to the party when it finally comes out.
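For what it's worth, here is a rough sketch of what an explicit, application-side device choice could look like with the EGL device-enumeration extensions (EGL_EXT_device_enumeration plus EGL_EXT_platform_device). Treat it as an illustration under the assumption that the driver actually exposes those extensions; the "pick the last device" rule is a stand-in for a real selection policy, not something the stack guarantees.

/* Sketch: enumerate EGL devices and pick one explicitly instead of
 * letting a heuristic or app list decide.  Assumes the driver exposes
 * EGL_EXT_device_enumeration and EGL_EXT_platform_device. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <stdio.h>

int main(void)
{
    PFNEGLQUERYDEVICESEXTPROC queryDevices =
        (PFNEGLQUERYDEVICESEXTPROC) eglGetProcAddress("eglQueryDevicesEXT");
    PFNEGLGETPLATFORMDISPLAYEXTPROC getPlatformDisplay =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC) eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!queryDevices || !getPlatformDisplay) {
        fprintf(stderr, "EGL device extensions not available\n");
        return 1;
    }

    EGLDeviceEXT devices[8];
    EGLint ndev = 0;
    if (!queryDevices(8, devices, &ndev) || ndev == 0) {
        fprintf(stderr, "no EGL devices found\n");
        return 1;
    }

    /* Placeholder policy: prefer the last enumerated device, standing in
     * for "the more capable GPU".  A real app would inspect each device
     * (extensions, driver) before choosing. */
    EGLDeviceEXT chosen = devices[ndev - 1];

    EGLDisplay dpy = getPlatformDisplay(EGL_PLATFORM_DEVICE_EXT, chosen, NULL);
    EGLint major, minor;
    if (dpy == EGL_NO_DISPLAY || !eglInitialize(dpy, &major, &minor)) {
        fprintf(stderr, "failed to initialize chosen device\n");
        return 1;
    }
    printf("EGL %d.%d on explicitly chosen device (%d enumerated)\n",
           major, minor, ndev);
    eglTerminate(dpy);
    return 0;
}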
Posted Feb 21, 2013 8:59 UTC (Thu) by khim (subscriber, #9252) [Link]
It used to be a common problem, but is it really common here and now? Reviews of "third generation" HD Graphics (Ivy Bridge) usually highlight that it's an important milestone not because it's especially fast but because it's finally bug-free enough to run most programs without crashes and artifacts. AMD's integrated graphics has always been quite good in this regard. Hint: yes. Pick: no. Most GPU-using programs are proprietary ones, which means they have the least amount of information in the whole system (the user knows what s/he bought, the OS can be upgraded, but programs are written once and then used for years).
Posted Feb 21, 2013 10:36 UTC (Thu) by dgm (subscriber, #49227) [Link]
But it's the user who knows what he wants out of the app. It's the user's preferences for that app that matter.