
Still far from proprietary MPI implementations

Posted Sep 16, 2010 12:58 UTC (Thu) by ejr (subscriber, #51652)
In reply to: Still far from proprietary MPI implementations by Np237
Parent article: Fast interprocess messaging

There's a definition of zero copy floating around often attributed to Don Becker: Zero copy means someone *else* makes the copy.

That is more or less what happens in message passing over any shared-memory mechanism. What you are describing is plain shared memory. It's perfectly fine to use within a single node, and I've done exactly that within MPI jobs working off large, read-only data sets, with good success. (You get transparent memory scaling of the data set when you run multiple MPI processes on one node.) But it's not so useful for implementing MPI itself.
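For illustration only, here is a minimal sketch of the classic shared-buffer transport being described; the slot name, size, and spin-wait protocol are all invented for this example, and the slot is assumed to live in a mapping shared by both processes (e.g. created with mmap(MAP_SHARED)). The two memcpy() calls are the two copies in question.

    #include <stddef.h>
    #include <string.h>

    #define SHM_SLOT_SIZE 4096

    struct shm_slot {
        volatile int full;            /* set by sender, cleared by receiver */
        size_t len;
        char data[SHM_SLOT_SIZE];     /* lives in the shared mapping */
    };

    /* Copy #1: the sender writes the message into the shared slot. */
    void shm_send(struct shm_slot *slot, const void *msg, size_t len)
    {
        while (slot->full)
            ;                         /* wait for the slot to drain */
        memcpy(slot->data, msg, len);
        slot->len = len;
        __sync_synchronize();         /* publish the data before the flag */
        slot->full = 1;
    }

    /* Copy #2: the receiver pulls the message back out of the slot. */
    size_t shm_recv(struct shm_slot *slot, void *buf)
    {
        while (!slot->full)
            ;                         /* wait for a message to arrive */
        size_t len = slot->len;
        memcpy(buf, slot->data, len);
        __sync_synchronize();
        slot->full = 0;
        return len;
    }

The shared slot is written by one processor and read by another, which is the cache-line bouncing cost mentioned below.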

The interface here would help MPI when the receiver has already posted its receive by the time the send occurs. You then have the one necessary copy rather than two. Also, this interface has the *potential* to be smart about cache invalidation by not caching the output on the sending processor! That is a serious cost otherwise; a shared buffer ends up bouncing between processors.
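The cross-memory-attach work discussed in the parent article eventually reached mainline as the process_vm_readv()/process_vm_writev() system calls (Linux 3.2, glibc 2.15). Assuming that final form, a hedged sketch of the single-copy receive path might look like the following; how the receiver learns the sender's pid and buffer address is left out and would normally go over a small control channel.

    /* Single-copy receive: the kernel copies straight from the sender's
     * address space into the receiver's already-posted buffer, with no
     * intermediate shared buffer.  Sketch only; the pid and remote
     * address are assumed to have been exchanged out of band. */
    #define _GNU_SOURCE
    #include <sys/types.h>
    #include <sys/uio.h>
    #include <stdio.h>

    ssize_t pull_message(pid_t sender_pid, void *local_buf,
                         void *remote_addr, size_t len)
    {
        struct iovec local  = { .iov_base = local_buf,   .iov_len = len };
        struct iovec remote = { .iov_base = remote_addr, .iov_len = len };

        /* One copy, performed by the kernel on the receiving side. */
        ssize_t n = process_vm_readv(sender_pid, &local, 1, &remote, 1, 0);
        if (n < 0)
            perror("process_vm_readv");
        return n;
    }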



Still far from proprietary MPI implementations

Posted Sep 16, 2010 13:20 UTC (Thu) by Np237 (guest, #69585)

Indeed, that makes the performance much less predictable. I wonder how well this behaves on real-life codes, though. At least Bull claims their MPI implementation does that, and the single-node performance is impressive.

