> GPUs mainly differ in raw speed - my integrated GPU has pretty much the same functionality as my discrete GPU.
Not true at all. It's very common for the discrete GPU to support a higher OpenGL version and more extensions, which I really want to use. It's not at all uncommon on Windows for self-written games and graphics apps to crash and burn if you don't tell Optimus to use the discrete GPU for that process. If I know my game needs GL 3.3 and some extensions, and only one GPU can provide those, the stack should select that one. Likewise, the app usually knows whether it needs only basic rendering or as-fast-as-possible rendering (the browser being a weird case, since different web apps might have different needs).
Part of the problem is that a driver heuristic and a per-app whitelist make those decisions, instead of the app saying during device handle creation that it would prefer the more capable GPU or the more power-efficient one. The app knows what it needs out of a GPU, and it should be able to at least hint its preference to the stack.
I know the DirectX/DXGI team has some plan here. No idea what Khronos plans to do with EGL, or whether it's even thinking about the problem, but like most things Khronos does, it'll probably be some 1980s-style, horribly error-prone API with zero debugging tools, arriving half a decade late to the party.