Bulk memory-allocation APIs
Posted Apr 28, 2016 15:13 UTC (Thu) by jhhaller (guest, #56103)Parent article: Bulk memory-allocation APIs
DPDK uses many optimizations, mostly aimed at avoiding cache misses. Xeon processors have evolved features that minimize cache misses during I/O. The primary reason for using huge-page allocations is to reduce the probability of TLB misses. NUMA alignment is important because, with the right processor, the device can write a packet directly into the processor's cache without the data ever touching memory. Using memory that is not attached to the socket hosting the network device defeats that and causes cache misses (as does processing the packet on a CPU in the wrong socket).
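
To illustrate, here is a minimal sketch of how a DPDK application might allocate its packet-buffer pool from huge-page memory on the NUMA node closest to the NIC. The pool name, sizes, and port number are placeholders, and error handling is trimmed; this is just one way to express the socket-local allocation described above.

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_lcore.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    int main(int argc, char **argv)
    {
        /* EAL init maps the huge pages reserved for this process. */
        if (rte_eal_init(argc, argv) < 0)
            rte_exit(EXIT_FAILURE, "EAL init failed\n");

        uint16_t port = 0;                       /* assumed: first ethernet port */
        int sock = rte_eth_dev_socket_id(port);  /* NUMA node the NIC is attached to */
        if (sock < 0)
            sock = (int)rte_socket_id();         /* fall back to the caller's node */

        /* Allocate the mbuf pool from huge-page memory on that same node,
         * so device DMA and later CPU access stay socket-local. */
        struct rte_mempool *pool = rte_pktmbuf_pool_create("pkt_pool",
                8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE, sock);
        if (pool == NULL)
            rte_exit(EXIT_FAILURE, "mbuf pool allocation failed\n");

        /* ... configure the port's RX queues with this pool and pin the
         * polling lcores to the same socket ... */
        return 0;
    }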