From: Mykyta Yatsenko <mykyta.yatsenko5@gmail.com>
To: bpf@vger.kernel.org, ast@kernel.org, andrii@kernel.org, daniel@iogearbox.net, kafai@meta.com, kernel-team@meta.com, eddyz87@gmail.com, memxor@gmail.com
Subject: [PATCH bpf-next v6 0/6] bpf: Add support for sleepable tracepoint programs
Date: Wed, 25 Mar 2026 11:55:17 -0700
Message-ID: <20260325-sleepable_tracepoints-v6-0-2b182dacea13@meta.com>
Cc: Mykyta Yatsenko <yatsenko@meta.com>, Peter Zijlstra <peterz@infradead.org>, Steven Rostedt <rostedt@goodmis.org>
This series adds support for sleepable BPF programs attached to raw
tracepoints (tp_btf), classic raw tracepoints (raw_tp), and classic
tracepoints (tp). The motivation is to allow BPF programs on syscall
tracepoints to use sleepable helpers such as bpf_copy_from_user(),
enabling reliable user memory reads that can page-fault.
This series removes the sleepable-program restriction for faultable tracepoints:
Patch 1 modifies __bpf_trace_run() to support sleepable programs,
following the uprobe_prog_run() pattern: take rcu_read_lock_trace()
around sleepable programs and rcu_read_lock() around non-sleepable
ones. It also removes preempt_disable() from the faultable tracepoint
BPF callback wrapper, since migration protection and RCU locking are
now handled per-program inside __bpf_trace_run().
Patch 2 renames bpf_prog_run_array_uprobe() to
bpf_prog_run_array_sleepable() to reflect its new, more general use case.
Patch 3 adds sleepable support for classic tracepoints
(BPF_PROG_TYPE_TRACEPOINT) by introducing trace_call_bpf_faultable()
and restructuring perf_syscall_enter/exit() to run BPF programs in
faultable context before the preempt-disabled per-cpu buffer
allocation. trace_call_bpf_faultable() uses rcu_tasks_trace for
lifetime protection, following the uprobe pattern. This adds
rcu_tasks_trace overhead for all classic tracepoint BPF programs on
syscall tracepoints, not just sleepable ones.
Patch 4 allows BPF_TRACE_RAW_TP, BPF_PROG_TYPE_RAW_TRACEPOINT, and
BPF_PROG_TYPE_TRACEPOINT programs to be loaded as sleepable, with
load-time and attach-time checks to reject sleepable programs on
non-faultable tracepoints.
Patch 5 adds libbpf SEC_DEF handlers: tp_btf.s, raw_tp.s,
raw_tracepoint.s, tp.s, and tracepoint.s.
Patch 6 adds selftests covering tp_btf.s, raw_tp.s, and tp.s positive
cases using bpf_copy_from_user() on sys_enter nanosleep, plus negative
tests for non-faultable tracepoints.
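For illustration, a sleepable tp_btf program using the new ".s" section suffix might look like the sketch below. This is not code from the series: the program and variable names are made up, and the snippet assumes vmlinux.h plus the usual libbpf tracing headers; it is meant only to show the section naming and the bpf_copy_from_user() usage the tests exercise:

```c
// SPDX-License-Identifier: GPL-2.0
/* Hypothetical sketch, not taken from the patches. */
#include <vmlinux.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

struct __kernel_timespec ts;

/* The ".s" suffix requests a sleepable program, which is what makes
 * the possibly-faulting bpf_copy_from_user() call legal here. */
SEC("tp_btf.s/sys_enter")
int BPF_PROG(handle_sys_enter, struct pt_regs *regs, long id)
{
    /* __NR_nanosleep is assumed to be available from the UAPI headers. */
    if (id != __NR_nanosleep)
        return 0;

    void *uptr = (void *)PT_REGS_PARM1_SYSCALL(regs);

    /* May fault and sleep; allowed only in a sleepable program. */
    bpf_copy_from_user(&ts, sizeof(ts), uptr);
    return 0;
}
```

Non-faultable tracepoints would reject such a program at load time per patch 4.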
Signed-off-by: Mykyta Yatsenko <yatsenko@meta.com>
---
Changes in v7:
- Add a recursion check (bpf_prog_get_recursion_context()) to make sure
the private stack is safe when a sleepable program is preempted by
itself (Alexei, Kumar)
- Use combined rcu_read_lock_dont_migrate() instead of separate
rcu_read_lock()/migrate_disable() calls for non-sleepable path (Alexei)
- Link to v6: https://lore.kernel.org/bpf/20260324-sleepable_tracepoint...
Changes in v6:
- Remove the recursion check from trace_call_bpf_faultable(): sleepable
tracepoints are called from syscall enter/exit, so no recursion is
possible (Kumar)
- Refactor bpf_prog_run_array_uprobe() to support the tracepoint
use case cleanly (Kumar)
- Link to v5: https://lore.kernel.org/r/20260316-sleepable_tracepoints-...
Changes in v5:
- Addressed AI review: zero-initialize struct pt_regs in
perf_call_bpf_enter(); changed the handling of tp.s and tracepoint.s in
attach_tp() in libbpf
- Updated commit messages
- Link to v4: https://lore.kernel.org/r/20260313-sleepable_tracepoints-...
Changes in v4:
- Follow uprobe_prog_run() pattern with explicit rcu_read_lock_trace()
instead of relying on outer rcu_tasks_trace lock
- Add sleepable support for classic raw tracepoints (raw_tp.s)
- Add sleepable support for classic tracepoints (tp.s) with new
trace_call_bpf_faultable() and restructured perf_syscall_enter/exit()
- Add raw_tp.s, raw_tracepoint.s, tp.s, tracepoint.s SEC_DEF handlers
- Replace growing type enumeration in error message with generic
"program of this type cannot be sleepable"
- Use PT_REGS_PARM1_SYSCALL (non-CO-RE) in BTF test
- Add classic raw_tp and classic tracepoint sleepable tests
- Link to v3: https://lore.kernel.org/r/20260311-sleepable_tracepoints-...
Changes in v3:
- Moved faultable tracepoint check from attach time to load time in
bpf_check_attach_target(), providing a clear verifier error message
- Folded preempt_disable removal into the sleepable execution path
patch
- Used RUN_TESTS() with __failure/__msg for negative test case instead
of explicit userspace program
- Reduced series from 6 patches to 4
- Link to v2: https://lore.kernel.org/r/20260225-sleepable_tracepoints-...
Changes in v2:
- Address AI review points; reordered the patches
- Link to v1: https://lore.kernel.org/bpf/20260218-sleepable_tracepoint...
---
Mykyta Yatsenko (6):
bpf: Add sleepable support for raw tracepoint programs
bpf: Rename bpf_prog_run_array_uprobe() to bpf_prog_run_array_sleepable()
bpf: Add sleepable support for classic tracepoint programs
bpf: Verifier support for sleepable tracepoint programs
libbpf: Add section handlers for sleepable tracepoints
selftests/bpf: Add tests for sleepable tracepoint programs
include/linux/bpf.h | 24 +++-
include/linux/trace_events.h | 6 +
include/trace/bpf_probe.h | 2 -
kernel/bpf/syscall.c | 5 +
kernel/bpf/verifier.c | 13 ++-
kernel/events/core.c | 9 ++
kernel/trace/bpf_trace.c | 49 ++++++++-
kernel/trace/trace_syscalls.c | 110 ++++++++++---------
kernel/trace/trace_uprobe.c | 2 +-
tools/lib/bpf/libbpf.c | 39 +++++--
.../bpf/prog_tests/sleepable_tracepoints.c | 121 +++++++++++++++++++++
.../bpf/progs/test_sleepable_tracepoints.c | 117 ++++++++++++++++++++
.../bpf/progs/test_sleepable_tracepoints_fail.c | 18 +++
tools/testing/selftests/bpf/verifier/sleepable.c | 17 ++-
14 files changed, 459 insertions(+), 73 deletions(-)
---
base-commit: 6c8e1a9eee0fec802b542dadf768c30c2a183b3c
change-id: 20260216-sleepable_tracepoints-381ae1410550
Best regards,
--
Mykyta Yatsenko <yatsenko@meta.com>