Queue configs and large buffer providers
| From: | Pavel Begunkov <asml.silence-AT-gmail.com> | |
| To: | netdev-AT-vger.kernel.org | |
| Subject: | [PATCH net-next v4 00/24][pull request] Queue configs and large buffer providers | |
| Date: | Mon, 13 Oct 2025 15:54:02 +0100 | |
| Message-ID: | <cover.1760364551.git.asml.silence@gmail.com> | |
| Cc: | Andrew Lunn <andrew-AT-lunn.ch>, Jakub Kicinski <kuba-AT-kernel.org>, davem-AT-davemloft.net, Eric Dumazet <edumazet-AT-google.com>, Paolo Abeni <pabeni-AT-redhat.com>, Simon Horman <horms-AT-kernel.org>, Donald Hunter <donald.hunter-AT-gmail.com>, Michael Chan <michael.chan-AT-broadcom.com>, Pavan Chebbi <pavan.chebbi-AT-broadcom.com>, Jesper Dangaard Brouer <hawk-AT-kernel.org>, John Fastabend <john.fastabend-AT-gmail.com>, Stanislav Fomichev <sdf-AT-fomichev.me>, Joshua Washington <joshwash-AT-google.com>, Harshitha Ramamurthy <hramamurthy-AT-google.com>, Jian Shen <shenjian15-AT-huawei.com>, Salil Mehta <salil.mehta-AT-huawei.com>, Jijie Shao <shaojijie-AT-huawei.com>, Sunil Goutham <sgoutham-AT-marvell.com>, Geetha sowjanya <gakula-AT-marvell.com>, Subbaraya Sundeep <sbhatta-AT-marvell.com>, hariprasad <hkelam-AT-marvell.com>, Bharat Bhushan <bbhushan2-AT-marvell.com>, Saeed Mahameed <saeedm-AT-nvidia.com>, Tariq Toukan <tariqt-AT-nvidia.com>, Mark Bloch <mbloch-AT-nvidia.com>, Leon Romanovsky <leon-AT-kernel.org>, Alexander Duyck <alexanderduyck-AT-fb.com>, kernel-team-AT-meta.com, Ilias Apalodimas <ilias.apalodimas-AT-linaro.org>, Joe Damato <joe-AT-dama.to>, David Wei <dw-AT-davidwei.uk>, Willem de Bruijn <willemb-AT-google.com>, Mina Almasry <almasrymina-AT-google.com>, Pavel Begunkov <asml.silence-AT-gmail.com>, Breno Leitao <leitao-AT-debian.org>, Dragos Tatulea <dtatulea-AT-nvidia.com>, linux-kernel-AT-vger.kernel.org, linux-doc-AT-vger.kernel.org, linux-rdma-AT-vger.kernel.org, Jonathan Corbet <corbet-AT-lwn.net> | |
Add support for per-queue rx buffer length configuration based on [2] and basic infrastructure for using it in memory providers like io_uring/zcrx. Note, it only includes net/ patches and leaves out zcrx, to be merged separately.

Large rx buffers can be beneficial with hw-gro enabled cards that can coalesce traffic, reducing the number of frags traversing the network stack and resulting in larger contiguous chunks of data handed to userspace. Benchmarks with zcrx [2+3] show up to ~30% improvement in CPU util. E.g. a comparison of 4K vs 32K buffers with a 200Gbit NIC, napi and userspace pinned to the same CPU:

  packets=23987040 (MB=2745098), rps=199559 (MB/s=22837)
  CPU    %usr   %nice    %sys %iowait    %irq   %soft   %idle
    0    1.53    0.00   27.78    2.72    1.31   66.45    0.22

  packets=24078368 (MB=2755550), rps=200319 (MB/s=22924)
  CPU    %usr   %nice    %sys %iowait    %irq   %soft   %idle
    0    0.69    0.00    8.26   31.65    1.83   57.00    0.57

netdev + zcrx changes:
[1] https://github.com/isilence/linux.git zcrx/large-buffers-v4

Per queue configuration series:
[2] https://lore.kernel.org/all/20250421222827.283737-1-kuba@...

Liburing example:
[3] https://github.com/isilence/liburing.git zcrx/rx-buf-len

---

The following changes since commit 3a8660878839faadb4f1a6dd72c3179c1df56787:

  Linux 6.18-rc1 (2025-10-12 13:42:36 -0700)

are available in the Git repository at:

  https://github.com/isilence/linux.git tags/net-for-6.19-queue-rx-buf-len

for you to fetch changes up to bc5737ba2a1e5586408cd0398b2db0f218ed3e89:

  net: validate driver supports passed qcfg params (2025-10-13 10:04:05 +0100)

v4:
- Update fbnic qops
- Propagate max buf len for hns3
- Use configured buf size in __bnxt_alloc_rx_netmem
- Minor stylistic changes

v3: https://lore.kernel.org/all/cover.1755499375.git.asml.sil...
- Rebased, excluded zcrx specific patches
- Set agg_size_fac to 1 on warning

v2: https://lore.kernel.org/all/cover.1754657711.git.asml.sil...
- Add MAX_PAGE_ORDER check on pp init
- Applied comments rewording
- Adjust pp.max_len based on order
- Patch up mlx5 queue callbacks after rebase
- Minor ->queue_mgmt_ops refactoring
- Rebased to account for both fill level and agg_size_fac
- Pass the provider's buf length in struct pp_memory_provider_params and apply it in __netdev_queue_config().
- Use ->supported_ring_params to validate driver support for the set qcfg parameters.

Jakub Kicinski (20):
  docs: ethtool: document that rx_buf_len must control payload lengths
  net: ethtool: report max value for rx-buf-len
  net: use zero value to restore rx_buf_len to default
  net: clarify the meaning of netdev_config members
  net: add rx_buf_len to netdev config
  eth: bnxt: read the page size from the adapter struct
  eth: bnxt: set page pool page order based on rx_page_size
  eth: bnxt: support setting size of agg buffers via ethtool
  net: move netdev_config manipulation to dedicated helpers
  net: reduce indent of struct netdev_queue_mgmt_ops members
  net: allocate per-queue config structs and pass them thru the queue API
  net: pass extack to netdev_rx_queue_restart()
  net: add queue config validation callback
  eth: bnxt: always set the queue mgmt ops
  eth: bnxt: store the rx buf size per queue
  eth: bnxt: adjust the fill level of agg queues with larger buffers
  netdev: add support for setting rx-buf-len per queue
  net: wipe the setting of deactived queues
  eth: bnxt: use queue op config validate
  eth: bnxt: support per queue configuration of rx-buf-len

Pavel Begunkov (4):
  net: page_pool: sanitise allocation order
  net: hns3: use zero to restore rx_buf_len to default
  net: let pp memory provider to specify rx buf len
  net: validate driver supports passed qcfg params

 Documentation/netlink/specs/ethtool.yaml           |   4 +
 Documentation/netlink/specs/netdev.yaml            |  15 ++
 Documentation/networking/ethtool-netlink.rst       |   7 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt.c          | 148 +++++++++++---
 drivers/net/ethernet/broadcom/bnxt/bnxt.h          |   5 +-
 .../net/ethernet/broadcom/bnxt/bnxt_ethtool.c      |   9 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.c      |   6 +-
 drivers/net/ethernet/broadcom/bnxt/bnxt_xdp.h      |   2 +-
 drivers/net/ethernet/google/gve/gve_main.c         |   9 +-
 .../ethernet/hisilicon/hns3/hns3_ethtool.c         |  10 +-
 .../marvell/octeontx2/nic/otx2_ethtool.c           |   6 +-
 .../net/ethernet/mellanox/mlx5/core/en_main.c      |  10 +-
 drivers/net/ethernet/meta/fbnic/fbnic_txrx.c       |   8 +-
 drivers/net/netdevsim/netdev.c                     |   8 +-
 include/linux/ethtool.h                            |   3 +
 include/net/netdev_queues.h                        |  88 +++++++--
 include/net/netdev_rx_queue.h                      |   3 +-
 include/net/netlink.h                              |  19 ++
 include/net/page_pool/types.h                      |   1 +
 .../uapi/linux/ethtool_netlink_generated.h         |   1 +
 include/uapi/linux/netdev.h                        |   2 +
 net/core/Makefile                                  |   1 +
 net/core/dev.c                                     |  12 +-
 net/core/dev.h                                     |  15 ++
 net/core/netdev-genl-gen.c                         |  15 ++
 net/core/netdev-genl-gen.h                         |   1 +
 net/core/netdev-genl.c                             |  92 +++++++++
 net/core/netdev_config.c                           | 183 ++++++++++++++++++
 net/core/netdev_rx_queue.c                         |  22 ++-
 net/core/page_pool.c                               |   3 +
 net/ethtool/common.c                               |   4 +-
 net/ethtool/netlink.c                              |  14 +-
 net/ethtool/rings.c                                |  14 +-
 tools/include/uapi/linux/netdev.h                  |   2 +
 34 files changed, 650 insertions(+), 92 deletions(-)
 create mode 100644 net/core/netdev_config.c

--
2.49.0
