I hope this question isn't born out of ignorance...
The DMA idea sounds great, but I'm curious how this works when protocol layers want to add headers to the region of memory that immediately precedes my application-level data.
For example, suppose I have application-data messages that are 256 bytes long. So I request a 256-byte user-space DMA region, and it's mapped into my process's VM address range 0x10000000 - 0x100000FF. I then populate all 256 bytes of that region with application-level data.
If the TCP and IP layers are going to bolt their headers onto the beginning of the data I'm sending, won't each of those layers (1) allocate a buffer big enough for that layer's header plus the data from the layer above, and then (2) copy the higher layer's data into that new buffer? If so, I don't see how zero-copy is achieved.
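In rough C, the copy I'm worried about would look something like this (the function and its names are invented purely to illustrate the concern, not taken from any real stack):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch of what each protocol layer would do if it prepends
 * its header the naive way: allocate a fresh buffer sized for header +
 * payload, then copy the payload down from the layer above. */
static unsigned char *prepend_header(const unsigned char *payload, size_t payload_len,
                                     const unsigned char *hdr, size_t hdr_len)
{
    unsigned char *buf = malloc(hdr_len + payload_len);
    if (!buf)
        return NULL;
    memcpy(buf, hdr, hdr_len);                   /* this layer's header */
    memcpy(buf + hdr_len, payload, payload_len); /* the copy that defeats zero-copy */
    return buf;
}
```

If TCP and then IP each do this, the 256 bytes I wrote into the DMA region get copied twice before they ever reach the NIC.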
It seems to me we'd almost need the application to announce the purpose for which it intends to use the DMA region, so that the allocator can include extra space at the beginning and end for the network stack to use.
For example (not ideal, but just to clarify my point):
int dma_alloc(int domain, int type, dma_mem_t *handle, size_t size, int flags);
/* e.g.: dma_alloc(AF_INET, SOCK_STREAM, &handle, 256, 0); */
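A sketch of what such an allocator might do internally, assuming it uses the (domain, type) pair to reserve headroom in front of the payload so lower layers could write their headers in place. Every name and constant here is invented for illustration; this is not a real API:

```c
#include <stddef.h>
#include <stdlib.h>

/* Invented handle type: the application writes its payload at 'data',
 * while 'base' points at the start of the region including headroom. */
typedef struct {
    unsigned char *base;
    unsigned char *data;
    size_t         len;
} dma_mem_t;

/* Worst-case header space for an AF_INET/SOCK_STREAM message: IPv4 header
 * (20) + TCP header with options (up to 60) + Ethernet header (14). */
enum { MAX_TCP_IP_HEADROOM = 20 + 60 + 14 };

/* Sketch: allocate 'size' bytes of payload space plus enough headroom for
 * the protocol layers implied by (domain, type) to prepend headers in place. */
static int dma_alloc_sketch(int domain, int type, dma_mem_t *handle,
                            size_t size, int flags)
{
    (void)domain; (void)type; (void)flags; /* a real allocator would pick
                                              the headroom from these */
    size_t headroom = MAX_TCP_IP_HEADROOM;

    handle->base = malloc(headroom + size);
    if (!handle->base)
        return -1;
    handle->data = handle->base + headroom; /* application payload starts here */
    handle->len  = size;
    return 0;
}
```

With something like this, each layer could fill in its header just before `handle->data` rather than allocating and copying, which is the only way I can see zero-copy surviving the trip down the stack.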
Copyright © 2017, Eklektix, Inc.