From: Tom Herbert <email@example.com>
To: David Miller <firstname.lastname@example.org>, email@example.com
Subject: [PATCH 0/2] rps: Receive packet steering
Date: Tue, 10 Nov 2009 22:53:06 -0800
This is the third version of the patch that implements software receive-side
packet steering (RPS). RPS distributes the load of received packet processing
across multiple CPUs. This version allows per-NAPI steering maps as well as
using the HW-provided hash as an optimization.
These patches are also the basis for the per-flow steering which was previously
discussed; we are still working on a general solution for per-flow steering
that prevents out-of-order (OOO) packets.
Problem statement: Protocol processing done in the NAPI context for received
packets is serialized per device queue and becomes a bottleneck under high
packet load. This substantially limits the pps that can be achieved on a
single-queue NIC and provides no scaling with multiple cores.
This solution queues packets early in the receive path onto the backlog queues
of other CPUs. This allows protocol processing (e.g. IP and TCP) to be
performed on packets in parallel. For each device (or NAPI instance for
a multi-queue device) a mask of CPUs is set to indicate the CPUs that can
process packets for the device. A CPU is selected on a per-packet basis by
hashing the contents of the packet header (the TCP or UDP 4-tuple) and using
the result to index into the CPU mask. The IPI mechanism is used to raise
networking receive softirqs between CPUs. This effectively emulates in
software what a multi-queue NIC can provide, but is generic, requiring no
device support.
Many devices now provide a hash over the 4-tuple on a per-packet basis
(Toeplitz is popular). This patch allows drivers to set the HW-reported hash
in an skb field, and that value in turn is used to index into the RPS maps.
Using the HW-generated hash can avoid cache misses on the packet when
steering it to a remote CPU.
The CPU mask is set on a per-device basis in the sysfs variable
/sys/class/net/<device>/rps_cpus. This is a set of canonical bitmaps, one for
each NAPI instance of the device. For example:
echo "0b 0b0 0b00 0b000" > /sys/class/net/eth0/rps_cpus
would set the maps for four NAPI instances on eth0.
The first patch in this set adds the RPS functionality to the core networking.
The second patch adds support to the bnx2x driver to record the Toeplitz hash
reported by the device for received skbs.
Generally, we have found this technique increases the pps capability of a
single-queue device with good CPU utilization. Optimal settings for the CPU
mask seem to depend on the architecture and cache hierarchy. Below are some
results from running 700 instances of the netperf TCP_RR test with 1-byte
requests and responses. Results show cumulative transaction rate and system
CPU utilization.
tg3 on 8 core Intel
  Without RPS: 90K tps at 34% CPU
  With RPS:    285K tps at 70% CPU
e1000 on 8 core Intel
  Without RPS: 90K tps at 34% CPU
  With RPS:    292K tps at 66% CPU
forcedeth on 16 core AMD
  Without RPS: 117K tps at 10% CPU
  With RPS:    327K tps at 29% CPU
bnx2x on 16 core AMD
  Single queue without RPS:      139K tps at 17% CPU
  Single queue with RPS:         352K tps at 30% CPU
  Multi queue (1 queue per CPU)  204K tps at 12% CPU
We have been running a variant of this patch on production servers for a while
with good results. In some of our more networking-intensive applications we
have seen 30-50% gains in end-application performance.
Signed-off-by: Tom Herbert <firstname.lastname@example.org>