Regarding the desktop/server argument, that's merely how I used Debian before testing existed. When testing came along, I used it on my desktop instead. My reasoning is simply that when unstable broke, it could stay broken for a long time. That still happens on occasion, but breakage of that sort doesn't typically make it into testing.
As for the '1 week delay' into 'fresh' - occasionally testing does have issues that persist for a long time, generally in libraries, and often more so on non-x86 architectures. I'd expect the delay to typically be longer than one week, since these more minor issues are only caught later, by desktop users.
In this way, developers run unstable, testing is for desktop users, fresh is for servers on the cutting edge, and stable is what you stick in a closet and forget about. However, your argument that desktop people don't run HylaFAX servers and AMANDA (for example) has merit.
The only question to me is: can there and should there be something in between 'stable' and 'testing' for server-type loads? Or does 'testing' provide a reasonable enough balance between up-to-date software and the sys admin's work load? Or does 'backports' suffice in that role?
For some of my work, backports has satisfied my needs. When I've gone looking for web hosts, though, I've often found that I wanted something supporting newer software releases. It seems that whatever is done should be tailored to how the distribution is actually being used. These are just use cases based on my experience.
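For what it's worth, backports filling that middle role amounts to one extra apt source that stays dormant unless you ask for it. A minimal sketch of the setup, with the suite name and package purely illustrative and not taken from this discussion:

```
# /etc/apt/sources.list.d/backports.list  (suite name illustrative)
deb http://deb.debian.org/debian bookworm-backports main

# Backported packages default to a low apt pin priority, so nothing is
# pulled from here unless explicitly requested, e.g.:
#
#   apt-get install -t bookworm-backports hylafax-server
```

That property is what makes it attractive for server loads: the base system stays stable, and the admin opts in to newer versions of only the few daemons that need them.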