
Bounded loops in BPF programs

By Jonathan Corbet
December 3, 2018

Linux Plumbers Conference
The BPF verifier is charged with ensuring that any given BPF program is safe for the kernel to load and run. Programs that fail to terminate are clearly unsafe, as they present an opportunity for denial-of-service attacks. In current kernels, the verifier uses a heavy-handed technique to block such programs: it disallows any program containing loops. This works, but at the cost of disallowing a wide range of useful programs; if the verifier could determine whether any given loop would terminate within a bounded time, this restriction could be lifted. John Fastabend presented a plan for doing so during the BPF microconference at the 2018 Linux Plumbers Conference.

Fastabend started by noting that the lack of loops hurts; BPF developers are doing "crazy things" to work around their absence. He is working to enable the use of simple loops that can be modeled by the verifier. There is academic work on ways to verify more complex loops, but that is a problem for later. For now, the objective is to detect simple loops and verify that they will terminate; naturally, it's important that the verifier, too, is able to terminate in a reasonable amount of time.

The key to determining the behavior of a loop is to find the induction variable that controls it. For a simple loop like:

    for (i = 0; i < MAX; i++)
        /* loop body */

the induction variable is i. If the variable can be shown to be both monotonically increasing and bounded in value, then the verifier can conclude that the loop will terminate. Once the induction variable and its bounds have been identified, the verifier can also check that any memory references using that variable remain in range.
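As an illustration (this is not the verifier's actual code), the kind of loop that passes such a check can be shown in C; MAX and sum_bounded() are names invented for the example:

```c
#include <assert.h>

#define MAX 16

/* A loop a verifier could accept: the induction variable i is
 * monotonically increasing and bounded by the constant MAX, so the
 * loop terminates and every access buf[i] is provably in range. */
static int sum_bounded(const int buf[MAX])
{
    int sum = 0;
    for (int i = 0; i < MAX; i++)
        sum += buf[i];
    return sum;
}
```

A loop whose bound came from runtime data with no known limit, by contrast, could not be verified this way.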

This kind of verification could be done relatively easily with knowledge of where the loops are. But BPF is an unstructured virtual-machine language that doesn't contain information about loops, so the verifier has to figure out where they are itself. This is done by creating a dominator tree that describes the program to be examined; it will identify the tests that control the execution of specific blocks of code. From there, it is possible to identify loops (by looking for reverse jumps to the dominator node) and the blocks of code that belong to each loop.
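To make the idea concrete, here is a much-simplified C sketch of loop detection in a control-flow graph. It flags reverse jumps using DFS ancestry rather than a real dominator computation, and the graph encoding is invented for the example:

```c
#include <assert.h>

#define NBLOCKS 4

/* Adjacency matrix for a tiny control-flow graph of basic blocks:
 * 0 -> 1, 1 -> 2, 2 -> 1 (reverse jump), 2 -> 3. */
static int edge[NBLOCKS][NBLOCKS];

static int visited[NBLOCKS], on_stack[NBLOCKS];
static int back_edges;

/* Depth-first search that counts back edges: a jump to a block that
 * is still on the DFS stack closes a loop.  (A real verifier checks
 * that the jump target dominates its source; DFS ancestry is a
 * simplification that suffices for well-structured graphs.) */
static void dfs(int b)
{
    visited[b] = on_stack[b] = 1;
    for (int t = 0; t < NBLOCKS; t++) {
        if (!edge[b][t])
            continue;
        if (on_stack[t])
            back_edges++;        /* reverse jump: loop found */
        else if (!visited[t])
            dfs(t);
    }
    on_stack[b] = 0;
}
```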

Doing so is not entirely easy. The use of a dominator tree requires that there be a single entry point into every loop; code that jumps into the middle of a loop cannot be verified. Identifying the looped-over code also requires an expensive (O(n²)) algorithm.

Fastabend outlined a few approaches to this problem, describing the first as the "by the book" method. This algorithm builds the dominator tree, then works to detect (and abort on) loops that cannot be reduced to a verifiable case. For each loop, it finds the induction variable, and verifies that variable's bounds; the execution of the loop is then simulated with the induction variable's largest and smallest values. The problem here is that the induction variable must be found with pattern matching, and the LLVM optimizer creates a wide variety of patterns that change with every release. That makes the code fragile.

The next approach is to get the compiler to help by limiting the number of loop types that it generates. That makes the pattern-matching task easier, since the patterns that identify loops will be reduced in number and less prone to change. The verifier still has to do all of the work, but it becomes quite a bit more robust.

But the best solution, he said, would be to create a new set of BPF instructions specifically to implement loops. They would mark the beginning and end of each loop, and include the test of the induction variable; the verifier would replace those instructions with actual tests and jumps. With these markers to denote the loop blocks, there would be no need for the dominator tree, since the markers themselves would make it clear which code is controlled by the loop. That would keep the verifier code minimal which, he said, is the right tradeoff in the end.

The description of the session said that the goal was "to come to a consensus on how to proceed to make progress on supporting bounded loops". That did not happen, but there was some discussion about the options, and the development community is more aware of how this complex work is proceeding. In the end, real consensus is likely to come about in the usual way: through the posting of code that shows how the idea is implemented in the real world.

The slides from this talk [PDF] and a video recording [YouTube] are available. Curious readers can also see the implementation of the first alternative as it was posted in June.

[Thanks to the Linux Foundation, LWN's travel sponsor, for supporting my travel to the event.]

Index entries for this article
Kernel: BPF/Loops
Conference: Linux Plumbers Conference/2018



Bounded loops in BPF programs

Posted Dec 3, 2018 23:16 UTC (Mon) by rvolgers (guest, #63218) [Link] (11 responses)

> But BPF is an unstructured virtual machine language that doesn't contain information about loops, so the verifier has to figure out where they are itself.

This is why WebAssembly is explicit about control flow nesting and doesn't allow arbitrary jumps. Wouldn't be surprised at all if eventually BPF gets replaced with some fancy formally-verified minimal WebAssembly JIT from Google or something.

Bounded loops in BPF programs

Posted Dec 3, 2018 23:52 UTC (Mon) by roc (subscriber, #30627) [Link] (6 responses)

One of the key steps in compiling arbitrary C code to asm.js/Webassembly is/was the Relooper algorithm to turn unstructured control flow back into structured control flow. That's probably relevant here: http://mozakai.blogspot.com/2012/05/reloop-all-blocks.html

Bounded loops in BPF programs

Posted Dec 4, 2018 0:48 UTC (Tue) by rvolgers (guest, #63218) [Link] (2 responses)

The nice thing is that with WebAssembly this work is performed by the compiler, so for the BPF use case that would mean userland instead of in the kernel.

Either way there is still the problem of doing something with those loops to make sure they complete within a certain number of iterations though.

Bounded loops in BPF programs

Posted Dec 5, 2018 8:18 UTC (Wed) by edeloget (subscriber, #88392) [Link] (1 responses)

> The nice thing is that with WebAssembly this work is performed by the compiler, so for the BPF use case that would mean userland instead of in the kernel.

That, in turn, means that if I handcraft the code I send to the kernel, it will then have no way to perform the needed verifications :) Does not sound that great :)

Bounded loops in BPF programs

Posted Dec 9, 2018 1:05 UTC (Sun) by nteon (subscriber, #53899) [Link]

I think you misunderstand things -- you cannot encode unstructured control flow in WebAssembly, it doesn't have the primitives (which is why the compiler has to turn unstructured control flow into structured). If you handcraft WebAssembly, by definition it would be using structured control flow and would be in a format that allows easy verification.

Bounded loops in BPF programs

Posted Dec 5, 2018 23:30 UTC (Wed) by wahern (subscriber, #37304) [Link] (2 responses)

The Relooper algorithm isn't much better, if at all. Per the original paper, Emscripten: an LLVM-to-JavaScript compiler:

In general though, this is not always easy or even practical – there may not be a straightforward high-level loop structure corresponding to the low-level one, if for example the original C code relied heavily on goto instructions. In practice, however, almost all real-world C and C++ code tends to be amenable to loop recreation.

Of course "almost all" code is amenable because very little code comprises the more critical, performant inner loops and state machines--the parts which often heavily rely on goto.

Arguably if JavaScript had goto there likely never would have been a need to create and rely on the Relooper, and WebAssembly may have turned out differently. Lua 5.2 added a goto construct precisely because it made code generation so much easier. I suspect that eventually much BPF code will be generated from high-level DSLs. Not everybody will be using the same toolchains and libraries, and not having a goto construct is a serious headache when machine generating code. Generating performant assembly from a task-specific, declarative DSL is often trivial; much less so if you now need to implement a relooper or integrate a relooper library.

WebAssembly was originally envisioned as a simple, pure stack-based VM where variable scoping and structured blocks naturally aligned with the overall design. A goto construct was unfathomable. But then WebAssembly got much more complex and the simple stack-based design fell away. I suspect adding a goto construct could have been possible after the original, simpler concept gave way, and I wouldn't rule out its addition in the future. Gotos (including computed gotos) are still very useful in performant C code because optimizing compilers still can't do the job--in real-world code the best switch-statement optimizations still can't come close to idiomatic use of gotos and computed gotos. Like with JIT compiling, microbenchmarks and theoretical optimizations never quite make the transition to the real-world undiminished. Performance demands may still necessitate goto.

Bounded loops in BPF programs

Posted Dec 6, 2018 0:30 UTC (Thu) by wahern (subscriber, #37304) [Link] (1 responses)

> WebAssembly was originally envisioned as a simple, pure stack-based VM where variable scoping and structured blocks naturally aligned with the overall design.

What I had in mind here was the transition from the AST-based bytecode format to the structured stack machine format. The transition shifted WASM in the direction of a model where you could contemplate adding gotos. The transition happened because implementations were *already* using modern compiler techniques, like SSA transforms. Adding a goto construct with a sophisticated verifier is much less of a leap than when your machine model is based directly on ASTs.

For more context see https://github.com/WebAssembly/design/issues/755 and https://docs.google.com/document/d/1CieRxPy3Fp62LQdtWfhym...

Bounded loops in BPF programs

Posted May 12, 2019 11:16 UTC (Sun) by ibukanov (subscriber, #3942) [Link]

One of the design goals of WebAssembly was fast verification, to minimize loading time. An explicit goto would significantly increase the complexity of the verifier.

Bounded loops in BPF programs

Posted Dec 4, 2018 12:20 UTC (Tue) by nix (subscriber, #2304) [Link] (3 responses)

That would mean redoing all the JITters and everything, and breaking userspace and everything else that's grown up around the BPF world unless you also implement a BPF->kWASM translator. Honestly, compared to adding a few instructions (that the verifier replaces, so they don't even need JIT or interpreter support) this seems like a ridiculously heavy hammer.

(And why would Google have a WASM JITter suitable for running in *kernelspace* anyway?)

Bounded loops in BPF programs

Posted Dec 4, 2018 12:31 UTC (Tue) by roc (subscriber, #30627) [Link] (2 responses)

There's a pathological pattern that might be at work here.

1) We have a framework X.
2) X is missing a small feature. Framework Y has that feature, and is overall great, but it's more complex than we need right now, and besides the cost of migrating from X to Y for just this one feature can't be justified.
3) Add feature to X.
4) Goto 1.

Duplicating effort along the path of least resistance, one feature at a time.

"duplicating effort along the path of least resistance"

Posted Dec 4, 2018 14:30 UTC (Tue) by Herve5 (subscriber, #115399) [Link]

Indeed, this was a perfect comment, that I can see applying in a dozen industrial areas other than software, just around myself... I'll retain your wording :-D

Bounded loops in BPF programs

Posted Dec 4, 2018 20:02 UTC (Tue) by simcop2387 (subscriber, #101710) [Link]

This raises the question, and it's a big one: does framework Y have things we don't want in framework X? If the answer is yes, then that's a reason we can't use it directly. And at least with WebAssembly and BPF that answer is probably yes; there's a lot of foreign-call support in WebAssembly that would have to be removed, and then other bits that would have to be reimplemented from BPF to make it work for the use cases where BPF is used in the kernel. I think in the end you'd end up with a hybrid framework Z that's a mishmash of X and Y and compatible with neither, and I'm not sure that's better.

Bounded loops in BPF programs

Posted Dec 3, 2018 23:50 UTC (Mon) by roc (subscriber, #30627) [Link] (10 responses)

Seems to me that beyond just termination, you actually need to bound the number of iterations or the total execution time somehow. A loop that executes a billion iterations is probably unsuitable in many BPF usage contexts.

Bounded loops in BPF programs

Posted Dec 4, 2018 8:32 UTC (Tue) by mjthayer (guest, #39183) [Link] (9 responses)

> Seems to me that beyond just termination, you actually need to bound the number of iterations or the total execution time somehow. A loop that executes a billion iterations is probably unsuitable in many BPF usage contexts.

In my naivety, I wonder whether it would be possible simply to bound BPF programme execution time, and make it the user's responsibility to provide something which can finish within the time.

Bounded loops in BPF programs

Posted Dec 4, 2018 8:39 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (6 responses)

It would be possible. The BPF compiler will need to add the timeout checks before each backwards jump but somehow I doubt the kernel developers want this.

BPF is rapidly moving towards being a completely generic in-kernel virtual machine.

Bounded loops in BPF programs

Posted Dec 4, 2018 8:45 UTC (Tue) by mjthayer (guest, #39183) [Link] (5 responses)

> It would be possible. The BPF compiler will need to add the timeout checks before each backwards jump but somehow I doubt the kernel developers want this.

I was more thinking of something in the execution engine.

Bounded loops in BPF programs

Posted Dec 4, 2018 9:27 UTC (Tue) by Sesse (subscriber, #53779) [Link]

There is no execution engine in modern eBPF; it's statically compiled to machine code.

Bounded loops in BPF programs

Posted Dec 4, 2018 9:33 UTC (Tue) by roc (subscriber, #30627) [Link] (3 responses)

BPF is JITted these days so there is no boundary between the compiler and execution engine.

Bounded loops in BPF programs

Posted Dec 4, 2018 10:33 UTC (Tue) by sdalley (subscriber, #18550) [Link] (2 responses)

So, there's no identifiable thread or task doing the BPF that could be watchdogged ?

Bounded loops in BPF programs

Posted Dec 4, 2018 10:48 UTC (Tue) by corbet (editor, #1) [Link] (1 responses)

There is no separate thread, no. One could always set a timer to bring an end to runaway execution, but that would be an expensive thing to do.

Bounded loops in BPF programs

Posted Dec 4, 2018 18:43 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link]

You can keep track of "remaining instructions" in a register that is decremented on every backwards jump by the number of instructions in the jump. Then just compare it with zero.

The comparison will be branch-predicted most of the time and will be fast.
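A minimal C sketch of the counter scheme described in this comment; the budget value, the per-jump cost, and the function names are all illustrative:

```c
#include <assert.h>

#define BUDGET 1000          /* total instruction budget (illustrative) */
#define LOOP_COST 3          /* cost charged at each backwards jump */

/* A "remaining instructions" counter is decremented before every
 * backwards jump and compared with zero; the program aborts when
 * the budget runs out. */
static int run_with_budget(int iterations)
{
    int remaining = BUDGET;
    int i = 0;
    while (i < iterations) {
        /* ... loop body ... */
        i++;
        remaining -= LOOP_COST;  /* inserted before the back jump */
        if (remaining <= 0)
            return -1;           /* budget exhausted: abort program */
    }
    return i;                    /* completed within budget */
}
```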

Bounded loops in BPF programs

Posted Dec 4, 2018 12:34 UTC (Tue) by roc (subscriber, #30627) [Link] (1 responses)

One problem with timers is that they make things nondeterministic. BPF programs may start failing under heavy load for example.

It might be significantly better to bound the number of BPF instructions executed by some (configurable?) constant. Then running the same program on the same input data should always either succeed or fail, which would make debugging easier.

Bounded loops in BPF programs

Posted Dec 4, 2018 13:00 UTC (Tue) by mjthayer (guest, #39183) [Link]

> It might be significantly better to bound the number of BPF instructions executed by some (configurable?) constant.

Right. I used the word "time", but I was also more thinking of instruction count or similar.

Bounded loops in BPF programs

Posted Dec 4, 2018 1:19 UTC (Tue) by ringerc (subscriber, #3071) [Link] (20 responses)

Not mentioning the halting problem in this article was quite a feat. I am amused by the "academic work" reference.

Bounded loops in BPF programs

Posted Dec 4, 2018 5:04 UTC (Tue) by marcH (subscriber, #57642) [Link]

Indeed!

It's not mentioned in the (three) slides either, just: "Lots of academic work on complex loops... fun but lets stick to basic ax+c for now."

> > In the end, real consensus is likely to come about in the usual way: through the posting of code that shows how the idea is implemented in the real world.

Scarier and scarier :-)

Halting problem

Posted Dec 4, 2018 7:20 UTC (Tue) by corbet (editor, #1) [Link] (18 responses)

Nobody is trying to solve the halting problem in any kind of general sense, so it's not really relevant here. After all, current BPF can guarantee that all programs it accepts will halt, no problem...

Halting problem

Posted Dec 4, 2018 8:26 UTC (Tue) by vbabka (subscriber, #91706) [Link] (10 responses)

Exactly. The halting problem means that there will always exist programs for which you can't prove that they will halt. That might also include programs that would actually halt (but you are not able to prove it). But when you can prove halting for some class of programs (e.g. because they are sufficiently constrained), then (modulo bugs in the prover itself :) you do have the guarantee, regardless of the general halting problem.

Halting problem

Posted Dec 4, 2018 8:52 UTC (Tue) by amacater (subscriber, #790) [Link]

Once, enigmatically
Alan Mathison Turing
Said to his colleagues
"It's now us or them.
Now, if you will
Help me hedge
Programatically
Goedel's insoluble
Entscheidungsproblem"

[If Turing machines are obsolete, when do you stop production? ]

Halting problem

Posted Dec 4, 2018 10:32 UTC (Tue) by excors (subscriber, #95769) [Link] (8 responses)

> Halting problem means that there will always exist programs where you can't prove that they will halt.

And by "programs", it specifically means programs in a system equivalent to a Turing machine, which has infinite memory and infinite time. That means it doesn't apply to any practical computer - they will have limited memory, so it's trivial to prove whether a program will halt by just enumerating all possible states. (Of course that's also a practically useless thing to prove - the BPF verifier wouldn't want to accept a program that definitely halts but might take a million years to do so.)

Halting problem

Posted Dec 4, 2018 12:35 UTC (Tue) by nix (subscriber, #2304) [Link] (6 responses)

Only a million years? We have an upper bound for all-possible-states enumeration, the busy beaver problem. I'm fairly sure a six-symbol program would exceed any likely bound for a mere million-year verification, and that's *way* below the likely size of BPF programs. (Hell, the max size of BPF programs has *always* been above the limit at which Aaronson et al have shown that finding the busy beaver becomes noncomputable, though most BPF programs are probably much smaller than that.)

Halting problem

Posted Dec 4, 2018 14:27 UTC (Tue) by excors (subscriber, #95769) [Link] (5 responses)

I think the busy beaver problem assumes infinite tape (i.e. memory). When you have finite memory, you can consider the contents of memory (including registers etc) to be the machine's state, and then it's just a finite state machine. If it has N bits of memory, then after 2^N cycles it must have either halted or found a cycle from one state back to itself (in which case it will never halt). From a theoretician's perspective, that solves the halting problem for this machine.
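The argument can be demonstrated with a toy machine in C: with only 8 bits of state there are 256 possible states, so running a deterministic step function for more than 256 steps without halting proves it never will. The function names here are invented for the example:

```c
#include <assert.h>
#include <stdint.h>

/* A toy machine whose entire state is one byte: 256 possible states.
 * If a deterministic step function runs for more than 256 steps
 * without reaching the halt state, some state must have repeated,
 * so the machine will never halt. */
static int halts(uint8_t (*step)(uint8_t), uint8_t start, uint8_t halt_state)
{
    uint8_t s = start;
    for (int n = 0; n <= 256; n++) {
        if (s == halt_state)
            return 1;            /* halted */
        s = step(s);
    }
    return 0;                    /* a state repeated: runs forever */
}

/* Two sample step functions for the demonstration. */
static uint8_t step_inc(uint8_t s)   { return (uint8_t)(s + 1); }
static uint8_t step_stuck(uint8_t s) { return s; }
```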

Halting problem

Posted Dec 4, 2018 16:31 UTC (Tue) by dskoll (subscriber, #1630) [Link] (4 responses)

Well, you know that N is so large that 2^N might as well be infinite for all practical purposes. Also, you assume that the computation is deterministic. In real-world situations where external events can change the state (a network packet arriving, for example), a program might terminate even if it repeats a state it has been in before.

Actually, it's trivial to prove that all programs on computers on Earth will eventually halt, because the Sun will eventually expand and destroy the Earth, thereby halting the program. That's the ultimate example of an external event affecting the state.

Halting problem

Posted Dec 4, 2018 17:40 UTC (Tue) by excors (subscriber, #95769) [Link] (3 responses)

Oh, sure - I think what I'm trying to get at is that there are two completely separate realms, one of theoretical computer science (which is really just computer-themed mathematics) where the halting problem and busy beavers and Turing machines are relevant, and most finite things are trivial; and one of practical reality where the difference between "1 msec" and "after the heat death of the universe" is an important distinction. Applying theoretical tools to practical problems doesn't give useful results, and that's why the halting problem isn't relevant to BPF.

Halting problem

Posted Dec 4, 2018 19:35 UTC (Tue) by dskoll (subscriber, #1630) [Link]

I wouldn't say the halting problem isn't relevant at all to BPF, but it's not the only problem with trying to make it work practically.

But yeah, mostly it's just fun to point out weird edge cases that distinguish theory from practice. :)

Halting problem

Posted Dec 4, 2018 23:53 UTC (Tue) by nix (subscriber, #2304) [Link]

Yes, except that this stuff is also tied up with complexity theory, and that *is* extremely relevant (a lot of algorithms exist that would be really nice to use in compilers except that their time or space complexity is in very much the wrong class, should we say... so one uses approximations, or a less optimal algorithm, and wishes we lived in a perfect world.)

Halting problem

Posted Dec 5, 2018 10:46 UTC (Wed) by farnz (subscriber, #17727) [Link]

It's also not relevant because the halting problem says that you can't sort all programs into "does not halt" and "does halt". If you accept a third bucket - "may or may not halt", then the problem falls away; you can sort all programs into those three buckets in theory, and then it's just a practical problem.

This is not a new insight - it's basically a conservative approximation to the halting problem (where you treat "may or may not halt" as "does not halt", and accept that you are rejecting some programs that do halt), and is a common process in compilers.

Halting problem

Posted Dec 5, 2018 14:16 UTC (Wed) by mina86 (guest, #68442) [Link]

Million years? You can do it in constant time! Just precompute all possible states of the machine and stash all that terminate in a hash set. As an added bonus, the computation can be used to replace the bogomips loop.

Halting problem

Posted Dec 4, 2018 12:25 UTC (Tue) by nix (subscriber, #2304) [Link] (6 responses)

Yeah. The problem here is not the halting problem: the problem here is always finding an approximation that lets useful stuff past but isn't too complex to be worth it. The high time- or sometimes space-complexity of many algorithms needed by compilers is a routine cause of foul curses in developers of normal ahead-of-time compilers that don't run in kernel space: the approximations needed by something that runs in the kernel seem likely to be even harder.

(Really, it's a shame the verifier and JITters have to run in kernel space: this sort of work seems like the sort of thing far better done by an upcall to something in userspace, unprivileged but only installable by root -- only that would stop BPF from being used to protect the system *from* root, and it wouldn't help with the time problem at all since the kernel would still have to block the process installing the BPF to wait for verification/JITing anyway...)

It just makes me icky seeing that people are basically writing compilers that run in kernelspace. Surely there must be a better way.

Halting problem

Posted Dec 4, 2018 14:05 UTC (Tue) by excors (subscriber, #95769) [Link]

Maybe the kernel should use KVM to launch a minimal Linux distro, and use that as the userspace to run the compiler/JIT/verifier in, so it's unprivileged and not constrained by kernel programming requirements. The distro and applications can be encoded as a blob in a signed kernel module as part of the secure boot chain, so the VM is as secure as the kernel itself and can be protected against the root user.

Halting problem

Posted Dec 4, 2018 16:18 UTC (Tue) by farnz (subscriber, #17727) [Link] (3 responses)

This actually sounds like a potential application for proof-carrying code. The user-space verifier could attach a simple-to-verify proof to the program it's about to attach (even if it's an arbitrary program with loops and other such weirdnesses) that the kernel can check.

Halting problem

Posted Dec 4, 2018 23:49 UTC (Tue) by nix (subscriber, #2304) [Link] (2 responses)

It really does. Heck the original Necula and Lee paper might as well have had 'BPF' written all over it in letters of fire ten miles high :)

Halting problem

Posted Dec 5, 2018 10:16 UTC (Wed) by farnz (subscriber, #17727) [Link] (1 responses)

It would be rather surprising if a researcher at Berkeley working on Packet Filters was completely unaware of BPF :-)

Halting problem

Posted Dec 11, 2018 17:45 UTC (Tue) by nix (subscriber, #2304) [Link]

Oh yeah, that's true. I had forgotten their affiliation. :)

Halting problem

Posted Dec 8, 2018 7:02 UTC (Sat) by marcH (subscriber, #57642) [Link]

> The problem here is not the halting problem:

Did anyone say it was?

> the problem here is always finding an approximation...

The "feat" was just to never mention once the name of what this approximates.

The misunderstanding and overreaction is even more interesting.

Bounded loops in BPF programs

Posted Dec 4, 2018 8:28 UTC (Tue) by dezgeg (subscriber, #92243) [Link] (2 responses)

The Itanium processor has dedicated instructions for loops as well: https://blogs.msdn.microsoft.com/oldnewthing/20150806-00/.... I wonder if there is some code from the Itanium backend in LLVM that could be repurposed if the approach of dedicated looping instructions were taken.

Bounded loops in BPF programs

Posted Dec 4, 2018 9:37 UTC (Tue) by roc (subscriber, #30627) [Link] (1 responses)

So does x86 (LOOP)! That's not really helpful here though because those instructions don't guarantee nothing jumps into the middle of the loop.

Bounded loops in BPF programs

Posted Dec 5, 2018 19:55 UTC (Wed) by matthias (subscriber, #94967) [Link]

If the compiler generated loop instructions for loops, there would be no need to reconstruct loops from jump instructions. There is no problem with jumps into the middle of the loop: the verifier can easily detect those and forbid them. The hard part is not forbidding illegal jumps; the hard part is detecting legal jumps and proving that they do no harm. This part would be much easier if loops were explicitly marked instead of built from general-purpose jump instructions.

Bounded loops in BPF programs

Posted Dec 4, 2018 11:50 UTC (Tue) by ibukanov (subscriber, #3942) [Link] (5 responses)

One can always replace a bounded loop by repeating the code up to the maximum allowed number of iterations. So I suppose BPF can be extended with simple bytecodes for a loop that repeats the code, say, 100 times. If one wants to stop it earlier, one just uses jumps. But proving that it cannot be subverted can be tricky.
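A sketch of the unrolling idea in C: the loop body is repeated a fixed number of times with a forward exit, so no backwards jump is needed and at most MAX_ITERS iterations can run. All names here are illustrative:

```c
#include <assert.h>

#define MAX_ITERS 4   /* maximum allowed iterations (illustrative) */

/* A bounded counting loop, written as straight-line code: each
 * unrolled step re-checks the continue condition and jumps forward
 * to the end when it fails. */
static int count_to(int limit)
{
    int i = 0;
    /* one macro expansion per allowed iteration, no backwards jump */
#define STEP do { if (i >= limit || i >= MAX_ITERS) goto done; i++; } while (0)
    STEP; STEP; STEP; STEP;
done:
#undef STEP
    return i;
}
```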

Bounded loops in BPF programs

Posted Dec 4, 2018 14:19 UTC (Tue) by dskoll (subscriber, #1630) [Link] (4 responses)

A few nested loops and once again you can execute billions of iterations...

Bounded loops in BPF programs

Posted Dec 4, 2018 22:46 UTC (Tue) by ibukanov (subscriber, #3942) [Link] (1 responses)

There is no need to support nested loops. Any such loop can be re-written as single loop.
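For example, a doubly nested loop over fixed bounds can be rewritten as a single loop whose counter is decomposed by division and modulus; this C sketch (with invented names) shows the equivalence:

```c
#include <assert.h>

#define N 3
#define M 4

/* The nested form... */
static int sum_nested(int a[N][M])
{
    int sum = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++)
            sum += a[i][j];
    return sum;
}

/* ...and an equivalent single loop: one counter k runs over all
 * N*M iterations and the two indices are recovered from it. */
static int sum_flat(int a[N][M])
{
    int sum = 0;
    for (int k = 0; k < N * M; k++)
        sum += a[k / M][k % M];
    return sum;
}
```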

Bounded loops in BPF programs

Posted Dec 5, 2018 2:24 UTC (Wed) by dskoll (subscriber, #1630) [Link]

That would work if BPF programs couldn't call subroutines, or if the verifier chased subroutines down to make sure anything called within a loop didn't contain its own loop. Sounds pretty tricky and tedious. I don't envy those doing this work.

Bounded loops in BPF programs

Posted Dec 5, 2018 14:25 UTC (Wed) by mina86 (guest, #68442) [Link] (1 responses)

This seems to be the actual issue. For practical purposes, a BPF program which terminates after an hour is just as unsafe as one which never terminates. Perhaps instead of trying to prove things about the program, it'd be better to simply limit how many instructions a program can execute and terminate it if it takes too long.

Bounded loops in BPF programs

Posted Dec 5, 2018 20:10 UTC (Wed) by matthias (subscriber, #94967) [Link]

This might be a valid catch-all solution for situations where the proof approach does not work. However, the verifier approach has many advantages. If the code can be proven to never execute more than the allowed number of instructions, there is no need to count instructions and place conditional jumps all over the code that will abort once the counter reaches zero. Apart from the performance boost, this also gives certainty that the code is never aborted due to taking too much time. If I rely on BPF programs that do not fail, I would need a tool that can prove that the code does not use too many instructions anyway. Then why not integrate this into the verifier?

Bounded loops in BPF programs

Posted Dec 5, 2018 10:51 UTC (Wed) by pctammela (guest, #126687) [Link] (3 responses)

As time goes, the BPF project is getting closer to the Lua kernel port. A port that exists since 2008.

Bounded loops in BPF programs

Posted Dec 5, 2018 10:52 UTC (Wed) by pctammela (guest, #126687) [Link]

Also incorporated in some BSDs.

Bounded loops in BPF programs

Posted Dec 6, 2018 0:53 UTC (Thu) by wahern (subscriber, #37304) [Link] (1 responses)

Interestingly, in the transition from Lua 5.1 to Lua 5.2 the authors *removed* bytecode verification. This decision was made after several holes in the verifier were discovered. Because the VM relied on the verifier these holes were easily exploitable. The decision was made that time and attention would be better spent focusing on the VM and the compiler, but particularly on writing a VM that was intrinsically safe. The interplay between the VM, the verifier, and the compiler added unnecessary complexity and opacity. (IME such a decision is not simply a matter of moving invariant checks; without a verification step you tend to design things in a way less sensitive to malicious input.)

Not sure exactly how that relates to eBPF as the parallels aren't exact. eBPF is already bytecode, but because it's destined to be JIT-compiled it's similar to Lua script in terms of the pipeline. And unlike Lua where the VM and compiler are developed completely separately (except coordinated releases), in eBPF I presume the verifier and JIT compiler are more intimately bound.

Still, the rule of thumb in security engineering is that verification should be disfavored as it duplicates and disassociates logic in a way that increases the risk for bugs. It can be difficult to keep track of the places where the JITer and the runtime depend on the verifier. Even with a safe-by-design bytecode spec, bugs happen, and they're more likely when invariants are expressed thousands of lines apart than when they're expressed a few lines apart.

Bounded loops in BPF programs

Posted Dec 6, 2018 12:30 UTC (Thu) by pctammela (guest, #126687) [Link]

> *removed* bytecode verification

Lua doesn't need a bytecode verifier if the interpreter exposed is already sandboxed by the developer, which was the real decision why the bytecode verifier was removed. Some people need it, some don't.

> Not sure exactly how that relates to eBPF as the parallels aren't exact.

Both are virtual machines running inside the operating system.

> Lua where the VM and compiler are developed completely separately

Lua has matured enough to be able to release new versions of both separately. It's just a matter of time before eBPF catches up to this pace.

This is also a design goal of Lua, allowing developers to tune/rewrite the VM without interfering with the compiler and maximizing portability of Lua scripts.

> Even with a safe-by-design bytecode spec, bugs happen, and they're more likely when invariants are expressed thousands of lines apart than when they're expressed a few lines apart.

Bugs happen because "Errare humanum est". The likelihood of bugs is intrinsically connected to software architecture, and bugs can be minimized with a good test suite, which Lua has.

Lua is extremely small; in fact the entire Lua language implementation (VM/interpreter/C API) is just ~20k lines of logical C code, with the C API dominating a huge chunk of that. Last time I checked, this was smaller than (e)BPF itself. I don't understand your point here.

Bounded loops in BPF programs

Posted Dec 6, 2018 20:29 UTC (Thu) by Jandar (subscriber, #85683) [Link]

> the execution of the loop is then simulated with the induction variable's largest and smallest values

    char x[1];
    int i;
    for (i = 0; i <= 10; ++i)
        x[i < 5 ? i : 10-i] = 0;

Checking only the largest and smallest values of the induction variable isn't enough.


Copyright © 2018, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds