There are two ways the 'next gen' driver model can be ported to other operating systems:
1. The other OS can implement support for the DRI2 protocol, do kernel-level memory management that provides GEM-like objects, and then handle mode setting in some way that doesn't depend on the X DDX drivers (since those are hopefully going away in the long term; about 50% of the code in X.org drivers just deals with mode setting).
2. The other OS can target Gallium directly and write its own 'winsys' portion of the driver (or whatever they now call the bottom layer of the Gallium design stack).
The bonuses that come with the new unified driver design are better memory management and a much simpler driver model. It's also a security win, since it makes it possible to run an XFree86-style X server as a regular user.
I think FreeBSD already has some DRI2 support somewhere with its own memory management scheme, and the OpenBSD folks should be interested in letting users get hardware acceleration without compromising security.
Windows and OS X could be taken care of by porting Gallium to their native acceleration APIs, gaining compatibility with the Linux-style display stack.
All in all, this sort of thing is just a plain better design long-term.
Remember that one of the big deals with Windows Vista was implementing an in-kernel video memory management system and moving the rest of the driver back into userspace.
Also, according to ATI's proprietary driver developers, they moved to this sort of driver model internally some time ago (for Windows, and later on for Linux). And I expect Nvidia's proprietary driver did the same, even before ATI's.
In this way, open source Linux graphics is just catching up with the rest of the world, and this sort of model is going to be required to take better advantage of things like Intel's Larrabee, ATI's GPGPU stuff, OpenCL, and manageable hardware-accelerated video decoding. (Although I think Linux's open source graphics stack has wonderful potential, since it takes an open, standardized approach, whereas something like Nvidia's CUDA is very vendor- and hardware-specific.)