Linux and TCP offload engines
Posted Aug 25, 2005 8:00 UTC (Thu) by gdt
In reply to: Linux and TCP offload engines
Parent article: Linux and TCP offload engines
At 10Gbps the issue is not so much the speed as the networking stack consuming so much CPU that too little is left for user space to do anything useful. That's what TOEs address.
Also note that the choice isn't offload versus no offload, but how much state the offload needs and provides. For example, an offload which left the important TCP control decisions to the CPU would preserve most of the advances in the Linux kernel whilst not increasing CPU load overly (since connections rarely alter rate or state). A TSO which paced segments out at a specified rate would be extremely useful.
You are right that the stack can always fall back to the CPU when a feature which requires it is configured. But any network engineer who has used a router whose throughput drops radically after an innocent configuration change can tell you how frustrating this design choice is.
There must be a way of manually disabling the TOE, just as other offloads can be manually disabled now. That isn't just useful for security, but for fault finding, resilience and running with known bugs (eg, the TSO feature was not compliant with congestion control requirements in some kernel versions).
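For the existing offloads this already works through ethtool; presumably a TOE knob would look much the same. A sketch of the current per-feature controls (the interface name eth0 is a placeholder):

```shell
# Show which offloads the NIC/driver currently has enabled
ethtool -k eth0

# Turn TCP segmentation offload off, e.g. to rule it out while fault finding,
# then re-enable it
ethtool -K eth0 tso off
ethtool -K eth0 tso on

# Transmit/receive checksum offloads can be disabled the same way
ethtool -K eth0 tx off rx off
```

These commands need root and a real interface; the point is that each offload is an independently switchable feature rather than an all-or-nothing mode.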
What concerns me more about Linux networking software is that the developers are getting fine results with ttcp and iperf, while users who want to do large file transfers (ie, something useful as well as shunting packets about) typically see around 300Mbps. The kernel gives those users too few tools for tracking down the source of their poor performance. It's a major exercise in patch application to get simple data like the amount of CPU and I/O used by kernel subsystems, or to get TCP's view of the performance of the network and the remote host.
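One piece of TCP's view of the network is in fact exposed without patching, via the Linux-specific TCP_INFO socket option. A minimal sketch, assuming the classic 104-byte struct tcp_info layout from linux/tcp.h (the field offsets below are assumptions based on that header, not a portable interface):

```python
import socket
import struct

# Size of the 2.6-era struct tcp_info; the kernel truncates to what we ask for.
TCP_INFO_LEN = 104

def tcp_rtt_and_cwnd(sock):
    """Return the kernel's smoothed RTT (microseconds) and congestion
    window (segments) for a connected TCP socket, via TCP_INFO.

    Linux-specific: unpacks tcpi_rtt and tcpi_snd_cwnd at their assumed
    offsets (68 and 80) in struct tcp_info.
    """
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, TCP_INFO_LEN)
    rtt, = struct.unpack_from("I", info, 68)   # tcpi_rtt, usec
    cwnd, = struct.unpack_from("I", info, 80)  # tcpi_snd_cwnd, segments
    return rtt, cwnd
```

A user chasing a 300Mbps transfer could call this periodically during the transfer to see whether the sender is congestion-window limited, without any kernel patches.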