> Eventually the performance worsens as data structures and code are added to
> deliver required features.
I see it slightly differently here. The numbers do not actually show that CRFS rocks.
Instead they show that NFS sucks by not having a cache coherency protocol. In NFS, the server
does not know when a client has modified some data, and relies on clients to quickly commit
their changes to the server so that other clients can see them. As a result, all writes must
be committed very quickly (usually within a few seconds), even if no other client is
accessing the same data, at a significant performance cost. And Unix filesystem semantics
cannot be preserved: to make the whole "server doesn't know about client writes" idea work,
NFS has to change the filesystem write semantics, at times causing big annoyances for users.
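To make the staleness problem concrete, here is a toy sketch (my own model, not real NFS code) of two clients caching a file from a server that never pushes invalidations. Each client revalidates only when its attribute cache times out, so one client's committed write stays invisible to the other until the timeout expires; the timeout value is illustrative.

```python
import time  # not strictly needed; timestamps are passed in explicitly

class Server:
    """A server with no coherency protocol: it never notifies clients."""
    def __init__(self):
        self.data = "v1"
        self.mtime = 0.0

class Client:
    """Caches file data; revalidates only after attr_timeout seconds,
    mimicking NFS attribute caching (3s is an illustrative default)."""
    def __init__(self, server, attr_timeout=3.0):
        self.server = server
        self.attr_timeout = attr_timeout
        self.cached = None
        self.cached_mtime = None
        self.last_check = float("-inf")

    def read(self, now):
        # Revalidate against the server only when the cache has expired.
        if now - self.last_check >= self.attr_timeout:
            self.last_check = now
            if self.server.mtime != self.cached_mtime:
                self.cached = self.server.data
                self.cached_mtime = self.server.mtime
        return self.cached

    def write(self, data, now):
        # The server never invalidates other clients' caches, so the
        # writer must commit promptly for anyone else to ever see this.
        self.server.data = data
        self.server.mtime = now

srv = Server()
a, b = Client(srv), Client(srv)

print(b.read(0.0))   # b caches "v1"
a.write("v2", 1.0)   # a commits immediately
print(b.read(2.0))   # still "v1": b's attribute cache has not expired
print(b.read(4.0))   # "v2": visible only after the cache timeout
```

A coherent protocol (callbacks/leases) would instead invalidate b's cache at write time, letting clients cache far more aggressively without this window.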
> If you are actually interested in the performance aspects of coherent
> network file systems, there are a number of implementations which have
> existed for many years.
How about naming a couple of them here, especially ones that are not just research
prototypes? (All the top Google hits seem to be research papers, i.e., work done by
students who need a PhD, rather than by people who want to bring something to market and
commit to supporting the result.)
> There is also the newer, more vaporous pNFS effort.
That doesn't seem to do cache coherency to enable more aggressive local caching in clients;
instead it looks like an effort to allow multiple servers to serve the same piece of data to
increase data throughput. Am I right?
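My understanding of the pNFS idea, sketched as a toy (the names and the round-robin striping policy are mine, not the protocol's): a metadata server gives the client a layout mapping byte ranges to data servers, and the client then talks to the data servers directly, in parallel, for throughput.

```python
# Tiny stripe unit so the striping is visible; real systems use large units.
STRIPE = 4

# Three "data servers", each holding only its own stripe units.
data_servers = [{"id": i, "blocks": {}} for i in range(3)]

def write_file(data):
    """Metadata-server role: stripe data round-robin across data servers
    and return the layout (offset -> data-server id) handed to clients."""
    layout = []
    for off in range(0, len(data), STRIPE):
        ds = data_servers[(off // STRIPE) % len(data_servers)]
        ds["blocks"][off] = data[off:off + STRIPE]
        layout.append((off, ds["id"]))
    return layout

def read_file(layout):
    """Client role: fetch each stripe directly from its data server
    (sequentially here; the point of pNFS is doing this in parallel)."""
    return "".join(data_servers[ds_id]["blocks"][off]
                   for off, ds_id in layout)

layout = write_file("hello pNFS striping!")
print(read_file(layout))  # round-trips the original data
```

Note there is nothing about cache coherency in this picture: it multiplies bandwidth by spreading one file over several servers, which is orthogonal to letting clients cache aggressively.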