Quite. I'm not sure it's even a visual model: I do exactly the same, and I have no visual memory at all, to the extent that I have trouble recognising myself in a mirror. I think we're using the same neural system we use to navigate physical spaces: switching to my work-mail Emacs window feels mentally similar to walking to the shops (only it takes much less time, <0.2s rather than five minutes; nonetheless, I know the route from wherever I am, and I know the surroundings). This system is evolutionarily ancient (google 'hippocampus place cells') and very efficient, so piggybacking on it seems like a good idea for user-interface metaphors.
Now, one of the principal attributes of physical spaces is that they do not spontaneously rearrange themselves as you walk through them, so we should probably avoid doing that here, too. I can't think of *anything* in the real world that could serve as a model for a spontaneously reordering list. (Windows's alt-tab lists are just as bad: watching users peck laboriously through them is painful.)