Storage is not the problem. It has _never_ been the problem, even back when storage was small and not necessarily that cheap. Management is the problem. These are of course problems many Linux users are not familiar with, as their Linux experience is basically limited to "I run a Web server out of my mom's basement and I put Ubuntu on my laptop 'cause I'm such a tech-rock-star woooo go me!" Folks who have to, or have had to, manage large rollouts of dozens, hundreds, or thousands of servers or workstations are the audience for a separate /usr these days, not the masses of Linux fans.
I don't want to have to upgrade 10,000 machines individually. I want to upgrade one network image, and have all 10,000 machines automatically get the update the next time they reboot and attach to the network share. (Note that this is doable by versioning the images, having new connections always get the newest image, and letting existing connections retain the image they already had.)
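One common way to get that "new connections see the new image, existing connections keep the old one" behavior is an atomic symlink flip on the server. This is only a sketch of the idea, not any particular product: the layout under `$IMAGES` and the version names are hypothetical, and `mktemp -d` stands in for what would really be an NFS-exported directory like /srv/images.

```shell
#!/bin/sh
# Hypothetical versioned-image layout; IMAGES stands in for the real
# NFS export root (e.g. /srv/images), using a temp dir so this runs anywhere.
set -e
IMAGES="$(mktemp -d)"
NEW="usr-v2"

# Build the new image tree beside the old ones; nothing clients have
# already mounted is touched.
mkdir -p "$IMAGES/$NEW"
# ... rsync/populate the new tree into "$IMAGES/$NEW" here ...

# Atomic flip: create the symlink next to the live one, then rename over it.
# Clients that already resolved "current" keep their old image; machines
# that reboot and remount resolve the new one.
ln -sfn "$NEW" "$IMAGES/current.tmp"
mv -T "$IMAGES/current.tmp" "$IMAGES/current"
```

The `mv -T` is the important part: a rename is atomic on the server's filesystem, so no client can ever observe a half-updated "current".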
And when doing that, I want to have ONE network share to update, so there are no race conditions when a client is halfway through mounting /bin, /lib, /share, and so on just as the rollout kicks in.
Finally, / itself can't just be the share point, because if it is to be read-only then /var, /etc, /home, /tmp, and so on would be screwed. The best you could do would be to put something like "/var /dev/sda1" into the network-mounted /etc/fstab, but then various other bits of machine-specific data need a lot of hacks to get working correctly. I've seen people try these hacks, like putting /etc on a separate network share and serving different images based on MAC/IP addresses known to correspond to different hardware configurations... but man, is that a lot of work, and it's very error-prone.
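To make the "/var /dev/sda1 in a network-mounted fstab" idea concrete, the shared image's /etc/fstab would look roughly like this. This is purely illustrative: the server name and export path are made up, and the point is only that the read-only network root and the writable machine-local pieces have to be stitched together by hand.

```
# /etc/fstab as served inside the shared image (illustrative):
nfsserver:/images/current   /      nfs    ro,nolock        0 0
/dev/sda1                   /var   ext4   defaults         0 2
tmpfs                       /tmp   tmpfs  mode=1777        0 0
```

And that's the easy part; it still assumes every client has a /dev/sda1, which is exactly where the per-hardware-image hacks start creeping in.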