16 bits per channel is a great improvement over 8, and will allow natively handling images from digital cameras. (The best digital cameras capture up to 14 bits of dynamic range.) But even 16 bits lacks the headroom for high dynamic range (HDR) images, where the difference between the darkest and lightest areas can exceed 16 powers of two.
Is it a simple matter of changing one typedef to rebuild GEGL and GIMP with 32 bits per channel?
Alternatively, why not use floats or doubles, which would give almost unlimited dynamic range?