
Vulnerability Research Is Cooked (sockpuppet.org)

There is a blog post on sockpuppet.org arguing that we are not prepared for the upcoming flood of high-quality, LLM-generated vulnerability reports and exploits.

Now consider the poor open source developers who, for the last 18 months, have complained about a torrent of slop vulnerability reports. I'd had mixed sympathies, but the complaints were at least empirically correct. That could change real fast. The new models find real stuff. Forget the slop; will projects be able to keep up with a steady feed of verified, reproducible, reliably-exploitable sev:hi vulnerabilities? That's what's coming down the pipe.

Everything is up in the air. The industry is sold on memory-safe software, but the shift is slow going. We've bought time with sandboxing and attack surface restriction. How well will these countermeasures hold up? A four-layer system of sandboxes, kernels, hypervisors, and IPC schemes is, to an agent, an iterated version of the same problem. Agents will generate full-chain exploits, and they will do so soon.

Meanwhile, no defense looks flimsier now than closed source code. Reversing was already mostly a speed-bump even for entry-level teams, who lift binaries into IR or decompile them all the way back to source. Agents can do this too, but they can also reason directly from assembly. If you want a problem better suited to LLMs than bug hunting, program translation is a good place to start.




Premise

Posted Mar 31, 2026 16:54 UTC (Tue) by gf2p8affineqb (subscriber, #124723) [Link]

> I think this outcome is locked in.

The post has this premise, but doesn't really provide evidence. Anthropic found bugs, but AFAIK they were human-verified. Also, talking to a guy at Anthropic is not necessarily the best way to get objective data about their capabilities.

Significant rise of reports

Posted Mar 31, 2026 17:11 UTC (Tue) by wtarreau (subscriber, #51152) [Link] (6 responses)

On the kernel security list we've seen a huge bump in reports. We were at maybe 2 or 3 per week two years ago, then reached probably 10 a week over the last year, with the increase being almost entirely AI slop, and since the beginning of the year we're at 5-10 per day depending on the day (Fridays and Tuesdays seem the worst). Now most of these reports are correct, to the point that we had to bring in more maintainers to help us.

And we're now seeing on a daily basis something that never happened before: duplicate reports, or the same bug found by two different people using (possibly slightly) different tools.

It's a bit scary (and tiring), but at least, compared to the previous era of AI slop, you feel like you're not working for nothing, because bugs get fixed. It also helps to keep in mind that these bugs are within reach of criminals, so they deserve to get fixed.

I don't know how long this pace will last. I suspect that bugs are reported faster than they are written, so we could in fact be purging a long backlog (and I hope so).

Something I'm predicting is that at least it will change the approach to security fixes:
- embargoes will probably disappear, and for good: what's the point of hiding something that others can instantly find? I have not seen one in a while and that's good.
- people will finally understand that security bugs are bugs, and that the only sane way to stay safe is to periodically update, without focusing on "CVE-xxx"
- software that used to follow the "release-then-go-back-to-cave" model will have to change to start dealing with maintenance for real, or to just stop being proposed to the world as the ultimate-tool-for-this-and-that because every piece of software becomes a target.

Overall I think we're going to see much higher software quality, ironically back around the level it had before 2000, when the net became usable by everyone to download fixes. When software had to be pressed onto CDs or written to millions of floppies, it had to survive an amazing quantity of tests that are mostly neglected nowadays, since updates are easy to distribute. But before this happens, we'll have to go through a huge mess that might last for a few years to come! Interesting times...

Significant rise of reports

Posted Mar 31, 2026 17:31 UTC (Tue) by gf2p8affineqb (subscriber, #124723) [Link] (1 responses)

How does this compare to Syzbot? I see there are 1300 open issues right now on its dashboard.

Significant rise of reports

Posted Mar 31, 2026 17:49 UTC (Tue) by hailfinger (subscriber, #76962) [Link]

The easiest way to get attention for the backlog of syzbot reports (but the absolute worst way in terms of overloading maintainers) is using them in an exploit chain. I expect that to happen pretty soon (or maybe it is already happening).
It's the old method of framing bugs as security bugs to get attention.

Significant rise of reports

Posted Mar 31, 2026 19:28 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link]

> I don't know how long this pace will last. I suspect that bugs are reported faster than they are written, so we could in fact be purging a long backlog (and I hope so).

This makes sense, and the key to making sure the bug reports primarily purge a backlog is to apply the same kind of scrutiny to code before it ever gets merged. Basically, the key is to use AI to improve code quality (both already-merged and pre-merge) rather than just spamming as much new stuff as possible. This matches up very well with the article on Andrew Morton trying to make Sashiko a required part of submissions to the memory-management subsystem.

Significant rise of reports

Posted Mar 31, 2026 20:36 UTC (Tue) by fw (subscriber, #26023) [Link] (1 responses)

My recommendation to one particular struggling security team was to triage what absolutely needs to be fixed under embargo—and work on fixes for those things only. The rest is published without an embargo and a fix. This way, everyone in the development community can chip in. Maybe even the various zero-CVE vendors can provide fixes if they want to stay true to their mission.

The previous approach, with the desire to have fixes available at the time of disclosure (with or without an embargo/grace period for distributions), does not seem to work anymore. It doesn't encourage community collaboration, and it paves the way for rather extreme forms of freeloading.

Significant rise of reports

Posted Mar 31, 2026 21:16 UTC (Tue) by wtarreau (subscriber, #51152) [Link]

> My recommendation to one particular struggling security team was to triage what absolutely needs to be fixed under embargo—and work on fixes for those things only. The rest is published without an embargo and a fix. This way, everyone in the development community can chip in. Maybe even the various zero-CVE vendors can provide fixes if they want to stay true to their mission.

It's not much different from what we're doing, and due to the high volume we can only triage now. Syzbot reports are systematically redirected to public lists; everything that doesn't represent an immediate risk of escalation but might only be used to lower some barriers (e.g. local kASLR defeats), or that's outside the threat model, gets the same fate; and the rest is often challenged a bit (e.g. unclear or too-old version, dubious claims, etc.). We ask for patches, and most of the time the reporters are interested in helping and do their final share of the work; then we forward to maintainers and try to help both sides get the issue fixed and merged ASAP. Many reporters are not familiar with the processes, and it's the same for maintainers being approached this way for the first time. It's extremely rare these days that anyone attempts to resolve issues within the list (except if the maintainer is already there), as all subsystems are now willing to participate in the resolution, so the fixes are super fast, usually landing in a number of days you can count on one hand, which would make embargoes totally pointless anyway, and even counter-productive.

> The previous approach, with the desire to have fixes available at the time of disclosure (with or without embargo/grace period for distributions) does not seem to work anymore. It doesn't encourage community collaboration, and it paves the way for rather extreme forms of freeloading.

I totally agree. The only case where it makes sense is when there's a risk of remote exploitation (e.g. RCE), but then it keeps users exposed longer, which is against the initial goal. It also doesn't take into account the different levels of exposure of different classes of users with different use cases. On the kernel, with a release every week, it's best to just merge and release. OpenSSL, for example, does one thing well: since they don't release often, they indicate upfront that a fix is going to be released a few days later, so that users can prepare to download and deploy it. In haproxy as well we're moving away from embargoes. We keep them only for critical stuff that affects common deployments with no reasonable workaround (about once a year or so), and we leave two or three days for some high-profile users to deploy a fix and for distros to prepare packages. Each time, it remains confusing for everyone involved and the risk of mistakes remains high, so the less often, the better. BTW, it's well known that embargoed issues generally need two fixes: the first one, which works on the reporter's and the developer's machines, and the second one, which fixes the regressions in the field.

Significant rise of reports

Posted Apr 10, 2026 7:24 UTC (Fri) by worik (guest, #86776) [Link]

We need to use better tools to start with.

It will take a generation to fix all the C code out there from the 1970s to the 2010s, but we have no excuse for writing more. We have better tools now.

Surely the developers themselves can run the models too

Posted Mar 31, 2026 19:01 UTC (Tue) by epa (subscriber, #39769) [Link] (14 responses)

The premise is that anyone can point a coding agent at a project and find vulnerabilities without much skill or effort. But if this is true, the original developers of the software can do it too. Before releasing a new version you could leave the LLMs running for a few hours to find any obvious vulnerabilities. If that costs too many tokens, I dare say the big AI firms will be happy to subsidize this use for major free software projects.

Surely the developers themselves can run the models too

Posted Mar 31, 2026 19:47 UTC (Tue) by hsivonen (subscriber, #91034) [Link] (3 responses)

The developers “can” in the sense that the tools are available on the market to subscribe to. It seems likely that some developers won't: they don't want to use these tools for various reasons, or smaller projects don't have the time to deal with the mechanics involved.

Attackers are going to use these tools anyway, so users are going to get hurt when the developers don’t use the tools first. Not a particularly nice situation.

Surely the developers themselves can run the models too

Posted Mar 31, 2026 20:51 UTC (Tue) by epa (subscriber, #39769) [Link] (2 responses)

These vulnerabilities already existed and the big criminal and state-sponsored attackers were already capable of finding them. The disclosure may only come when the exploit is spotted in the wild. If the AI tools can let more people find the vulnerabilities there is more chance of fixing or preventing them before they are exploited.

Surely the developers themselves can run the models too

Posted Apr 1, 2026 3:12 UTC (Wed) by notriddle (subscriber, #130608) [Link]

> These vulnerabilities already existed and the big criminal and state-sponsored attackers were already capable of finding them.

Most people aren't targeted by cartels or intelligence agencies. Everyone is targeted by credit card thieves and botnet operators.

Surely the developers themselves can run the models too

Posted Apr 1, 2026 6:13 UTC (Wed) by hsivonen (subscriber, #91034) [Link]

You say “major” above and “big projects” in another comment. And, yes, the vulnerabilities were already there.

There are small projects in memory-unsafe languages, and one should expect some substantial proportion of them (for various reasons) not to get ahead of the attackers in using these tools to find the vulnerabilities that previously were not worth “elite attention” (to use Ptacek's terminology).

Users of such smaller projects written in memory-unsafe languages are going to have a bad time.

Surely the developers themselves can run the models too

Posted Mar 31, 2026 20:59 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

We've solved that a long time ago: use CI/CD workflows and forges. So any PR gets at least some automated checks.

But Linux developers are virulently opposed to modern workflows.

Surely the developers themselves can run the models too

Posted Apr 1, 2026 3:15 UTC (Wed) by marcH (subscriber, #57642) [Link]

> But Linux developers are virulently opposed to modern workflows

That's an incorrect simplification. A significant number of Linux developers are:
- virulently opposed to _proprietary_ tools (Bitkeeper etc). Understandable.
- virulently opposed to centralized tools / SPOF. Understandable.
- resistant to spending a lot of time dumping their existing tools and learning brand new ones. Like everyone else.

This can all give the impression that they are opposed to "modern" workflows. But they are not opposed to automated checks, new tools and evolution in general. Examples:
- https://lore.kernel.org/lkml/?q=0day
- https://lwn.net/Articles/1063303/ b4
- https://lwn.net/Articles/1064830/ Sashiko.

Surely the developers themselves can run the models too

Posted Apr 1, 2026 10:01 UTC (Wed) by ballombe (subscriber, #9523) [Link]

> But Linux developers are virulently opposed to modern workflows.
Yes it is crazy how slowly they adopted GIT!

Subsidies

Posted Mar 31, 2026 21:22 UTC (Tue) by corbet (editor, #1) [Link] (6 responses)

Instead, I think that the era of subsidized LLMs is going to come to an abrupt end the moment the companies involved think that people are sufficiently dependent on their systems — or they simply run out of money, whichever comes first. At some point they will have to stop burning dollars by the billion.

Subsidies

Posted Mar 31, 2026 22:11 UTC (Tue) by Cyberax (✭ supporter ✭, #52523) [Link] (3 responses)

Inference doesn't cost a lot, and it doesn't require unobtainium-level hardware. You can run the SOTA models at full precision on 4 networked Mac Studios.

My home setup has two Radeon GPUs with 64GB of VRAM in total, and it can run models that produce adequate results at reasonable speed. Top-of-the-line inference accelerators use about 1000W of power and have ~300GB of VRAM; they can comfortably run quantized models on just one chip. The amortized cost of the complete inference system is around $30k per chip (~1 concurrent user).

If we spread this over 5 years of use, that's $500 per month. If you're using it 10% of the time, that's around $50 per month, which is in line with many AI subscriptions. And one thing we learned from the history of semiconductors is that this number will likely decrease rapidly.
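
For what it's worth, a quick sketch of the arithmetic (a back-of-envelope check in Rust; all inputs are the assumed figures above, not measured data):

    fn main() {
        let system_cost_usd = 30_000.0;   // amortized cost per chip (~1 concurrent user)
        let lifetime_months = 5.0 * 12.0; // spread over 5 years of use
        let full_time = system_cost_usd / lifetime_months; // $500/month
        let at_10_percent = full_time * 0.10;              // $50/month at 10% use
        println!("full-time: ${full_time:.0}/month, 10% duty: ${at_10_percent:.0}/month");
    }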

I don't expect inference tokens to become scarce any time soon. Companies may clamp down on the totally free use, but I don't expect it to become a luxury good.

Subsidies

Posted Mar 31, 2026 22:52 UTC (Tue) by rgmoore (✭ supporter ✭, #75) [Link] (2 responses)

Inference may be cheap, but building the models is likely to remain expensive. When the money starts getting tight and LLM companies decide it's time to start charging enough to make a profit, they are going to go after people running the models locally, too. Maybe you can get hardware for $50/user/month, but the model will be a whole other expense. Inference tokens may not be a luxury good, but the companies are going to charge as much as they think they can get away with.

Subsidies

Posted Apr 1, 2026 0:21 UTC (Wed) by karath (subscriber, #19025) [Link]

American ‘frontier’ LLMs are typically not published, so it should be possible to substantially increase usage charges. However, LLMs from China are frequently published as open-weights, which means that anyone with suitable hardware can run them with no restrictions. Those models are evaluated to be around 6 months behind the frontier and quickly catching up - my memory is that a year ago, they were considered to be about 12 months behind.

I can imagine that, once the Chinese models catch up with and even surpass the American models, their developers could close their models and charge. If they do not catch up for another 12 months, then the models they publish in around 6 months should have capabilities similar to today's frontier. I can imagine that there will be plenty of competition to serve open-weights models once they are equivalent to the frontier's capabilities of 2 to 3 months earlier.

One further possibility is that China strategically keeps publishing open weights until it forces the frontier labs into bankruptcy. That is quite plausible for OpenAI and Anthropic; Google, however, has very deep pockets and could easily survive.

Either way, I can’t see much short-term possibility for charges to end users to substantially increase. In the longer run, innovation should further bring down the cost to run the models. The cost of creating new models at any particular capability level should also decrease, however creating models at the latest frontier will remain vastly expensive.

In terms of the risks of running ‘foreign’ models, I believe that for technical topics, including coding and code-review, these risks are very low. Any attempt to bias a model towards creating hidden vulnerabilities will cause noticeable misbehaviour.

Subsidies

Posted Apr 1, 2026 1:01 UTC (Wed) by Cyberax (✭ supporter ✭, #52523) [Link]

Building new general-purpose models probably will, but fine-tuning existing models and using them to create domain-specific models is already within the realm of enthusiasts and small companies.

It's not like already existing models are going to disappear. And they are already plenty capable, as we see.

Subsidies

Posted Mar 31, 2026 22:12 UTC (Tue) by epa (subscriber, #39769) [Link]

I agree that prices of LLMs are probably going to rise but even then I would expect Google to continue providing a few hundred dollars' worth of compute resources to the kernel and other big projects. It benefits them as much as anyone if exploits can be found early rather than leaving them to be discovered by the bad guys.

Subsidies

Posted Apr 1, 2026 16:36 UTC (Wed) by karim (subscriber, #114) [Link]

It's my understanding that inference isn't that unreasonable to run on local hardware. ... Maybe we get into a business model where you subscribe to downloading trained "frontier" models for a fee and run them locally?

Memory safety

Posted Apr 1, 2026 11:42 UTC (Wed) by chris_se (subscriber, #99706) [Link] (11 responses)

> The industry is sold on memory-safe software, but the shift is slow going.

Memory safety is not a panacea. If you have a runtime memory-safety bug in your code (e.g. accessing a member of an array beyond its bounds), a memory-safe language just converts your remote code execution into a denial of service. That's definitely a huge improvement, don't get me wrong, but it's not like it completely solves the issue. And depending on the system, a DoS attack can be really bad as well.

And there are a lot of classes of bugs that a memory-safe language doesn't even address - PHP and Perl have been basically memory-safe from the start, but how many websites had injection vulnerabilities (SQL, XSS, ...) or were vulnerable to something like CSRF? TOCTOU with system resources is something a memory-safe language intrinsically can't save you from. And in release builds Rust doesn't enable integer overflow panics.
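
To illustrate the runtime side of this, a minimal Rust sketch (the panicking line is left commented out so the program runs to completion):

    fn main() {
        // Out-of-bounds access is always checked in safe Rust, so the bug
        // becomes a panic (crash/DoS) rather than memory corruption (RCE).
        let buf = [1u8, 2, 3];
        let i = 10;
        assert!(buf.get(i).is_none()); // non-panicking accessor reports the error
        // let _ = buf[i];             // uncommenting this panics at runtime

        // Integer overflow: panics in debug builds, silently wraps in release
        // builds unless overflow checks are enabled for the release profile.
        let x: u8 = 255;
        assert_eq!(x.wrapping_add(1), 0); // explicit wrapping, same in all builds
    }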

Don't get me wrong: moving towards memory safety is good, but that only addresses a subset of all vulnerabilities.

Memory safety

Posted Apr 1, 2026 12:15 UTC (Wed) by ojeda (subscriber, #143370) [Link] (9 responses)

> a memory safe language just converts your remote code execution into a denial of service

No, sometimes it is way better than a DoS: a language like Rust can prevent certain classes of bugs at compile time, e.g. UAFs and data races.
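
A minimal sketch of the compile-time case (the offending use is commented out so the snippet compiles and runs):

    #[allow(unused_variables, unused_assignments)]
    fn main() {
        let r: &String;
        {
            let s = String::from("short-lived");
            r = &s; // borrow of `s` stored in `r`, which outlives `s`
        } // `s` is dropped here
        // println!("{r}"); // uncommenting this is a use-after-free, which
        //                  // rustc rejects: error[E0597]: `s` does not live long enough
    }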

> And there are a lot of classes of bugs that a memory-safe language doesn't even address

Obviously, nobody said otherwise. But that does not mean removing memory safety issues and preventing UB in general is not worth it.

Not to mention that, depending on what languages you are comparing, certain languages can reduce the chances of logic bugs as well, e.g. one can employ a good type system to good effect.

> And in release builds Rust doesn't enable integer overflow panics.

Yes, but it is configurable, e.g. in Linux we provide a Kconfig option for users and, by default, they are enabled.

Memory safety

Posted Apr 1, 2026 12:36 UTC (Wed) by chris_se (subscriber, #99706) [Link] (8 responses)

> > a memory safe language just converts your remote code execution into a denial of service
> No, sometimes it is way better than a DoS, e.g. a language like Rust can prevent certain cases at compile time, e.g. UAFs and data races.
In certain cases, sure. But you can't check _everything_ at compile time - especially not the trickier ones, and once you're at runtime, my statement holds.

(And sure, you can write e.g. Rust code that doesn't panic, because you always use the try_ methods and always do proper error handling - but that puts the burden on the developer to write good code, which is kind of the point here, because we wouldn't need Rust if everybody could and would write excellent C code. ;-))

> > And there are a lot of classes of bugs that a memory-safe language doesn't even address
> Obviously, nobody said otherwise.
But people keep _implying_ otherwise. Read the article - it talks about the number of vulnerability reports, and the way it was written _implied_ that once we've moved to memory-safe languages we won't have this problem anymore, and that techniques such as sandboxes are a stop-gap measure until we've arrived there.

And precisely _that_ implication that I've seen many, many times is what I'm arguing about.

I'm **not** saying memory safety is superfluous, I'm just saying that I'm constantly seeing the _implication_ that it will solve all of our problems, which it won't.

> But that does not mean removing memory safety issues and preventing UB in general is not worth it.
I don't disagree with that statement, in fact I did agree in my posting beforehand.

> [...] e.g. one can employ a good type system to good effect.
I also completely agree with this - but that then puts the burden again on the programmers to write good code, and there's no enforcement by the language that they _have_ to do so. (And I don't think you could design a language that enforces this properly anyway.)

> > And in release builds Rust doesn't enable integer overflow panics.
> Yes, but it is configurable, e.g. in Linux we provide a Kconfig option for users and, by default, they are enabled.
But outside of the kernel I haven't seen this used in practice. (I'm sure you can point to some individual examples, but it's not the default.)

Memory safety

Posted Apr 1, 2026 16:17 UTC (Wed) by hsivonen (subscriber, #91034) [Link] (1 responses)

Integer overflow doesn't panic by default because overflow isn't memory unsafety. Indexing a slice with an out-of-bounds index would be, so that panics.

As for memory-safe languages not preventing all LLM-discoverable bugs: memory-safe languages don't prevent all bugs, but it's very likely that the scale and severity of LLM-discoverable bugs will be very different in memory-unsafe vs. memory-safe languages.

Memory safety

Posted Apr 2, 2026 9:54 UTC (Thu) by taladar (subscriber, #68407) [Link]

Coincidentally, Rust actually does have solutions for both of these that avoid panics, plus optional clippy lints you can enable to flag them in your code base. It can be a bit of a pain to avoid them, but it is absolutely doable in Rust today, unlike in some of the older languages mentioned.
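
For illustration, a small sketch of those panic-avoiding alternatives; the lint names in the comments are clippy's restriction lints that can flag the implicit forms:

    fn main() {
        // clippy::arithmetic_side_effects and clippy::indexing_slicing can
        // flag the implicit forms (`a + b`, `v[i]`) in a code base.
        let a: u8 = 250;
        assert_eq!(a.checked_add(10), None);       // overflow reported as None
        assert_eq!(a.saturating_add(10), u8::MAX); // clamps instead of wrapping

        let v = [1, 2, 3];
        assert_eq!(v.get(7), None); // no panic, unlike `v[7]`
    }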

Memory safety

Posted Apr 1, 2026 17:01 UTC (Wed) by wtarreau (subscriber, #51152) [Link] (4 responses)

> > > And in release builds Rust doesn't enable integer overflow panics.
> > Yes, but it is configurable, e.g. in Linux we provide a Kconfig option for users and, by default, they are enabled.
> But outside of the kernel I haven't seen this used in practice. (I'm sure you can point to some individual examples, but it's not the default.)

I'd go even further: the 8086 had the "INTO" instruction to cause an exception (int 4, IIRC) if the last operation had set the overflow flag. I don't think it saw much use; I never met it in any code. And the 80186 had the BOUND instruction to check that a register was within specified bounds and otherwise trigger an exception (5?), to avoid array overflows. That one remained present in 32-bit mode but was not used either. Neither of these instructions was carried over into the x86_64 instruction set, which tends to indicate that really nobody cares :-)

Memory safety

Posted Apr 1, 2026 19:24 UTC (Wed) by tialaramex (subscriber, #21167) [Link]

It doesn't follow, from the fact that a niche optimization targeted at that trade isn't popular, that nobody is enabling a feature which obviously trades performance for correctness. The two would only correlate if the optimization were free (it won't be). In the real case, the only difference would be for a marginal group who know they can afford to pay some reduced performance or higher cost for checking, but cannot accept the current price. I anticipate that in reality this group is extremely small.

With the simple bounds safety checks what we found was a lot of superstition and not a lot of concrete knowledge. Ten years ago it wasn't difficult to find supposed thought leaders who would assure you that bounds checks are too expensive, they probably don't catch many bugs and so shouldn't be the default. But when people started measuring instead of guessing they found oh, the checks are cheaper than we thought and catch more bugs.

So my guess is that, likewise, a survey would find loads of people who don't have overflow checks in release builds because it's not the default, and very few, if any, who have measured and know they can't afford them today, yet could afford them if the CPU vendors offered this feature.

Overflow flag

Posted Apr 2, 2026 6:28 UTC (Thu) by epa (subscriber, #39769) [Link]

Having to check for overflow after every operation, with a separate instruction, is unlikely to perform well. But integer overflow in a lot of code is “this should never happen, but if it ever does, fail safe”. I would expect the processor to set a flag saying overflow has occurred somewhere, which then stays on unless cleared. Then my program can do its calculations in a tight loop and check the overflow flag at the end (dying or throwing an exception in that unexpected situation).
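
A software rendition of that idea, as a Rust sketch (the function name is made up): accumulate a sticky flag with overflowing_add in the tight loop and test it once at the end:

    // Sticky-overflow-flag pattern: keep the hot loop branch-light and
    // check a single accumulated flag after the loop.
    fn sum_or_fail(values: &[u64]) -> Option<u64> {
        let mut acc: u64 = 0;
        let mut overflowed = false;
        for &v in values {
            let (next, o) = acc.overflowing_add(v);
            acc = next;
            overflowed |= o; // stays set once any addition overflows
        }
        if overflowed { None } else { Some(acc) } // one check at the end
    }

    fn main() {
        assert_eq!(sum_or_fail(&[1, 2, 3]), Some(6));
        assert_eq!(sum_or_fail(&[u64::MAX, 1]), None);
    }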

Arithmetic overflow

Posted Apr 2, 2026 21:44 UTC (Thu) by anton (subscriber, #25547) [Link]

Similarly, MIPS has ADD (trap on signed overflow) and ADDU (modulo arithmetic).

Alpha has ADDV (trap) and ADD (modulo); while the functionality is the same, the names indicate a shift towards modulo arithmetic.

Finally, RISC-V has only ADD, leaving out the trapping variant (or at least moving it to an extension).

So yes, there is a trend away from trapping on signed overflow. In higher-level programming languages, one tends to prefer checking this case with a conditional instruction (AMD64 still has the JO instruction) and, if that check actually triggers, falling back to wider arithmetic.
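
A sketch of that pattern in Rust (hypothetical helper): branch on the overflow check and redo the operation in wider arithmetic only when it triggers:

    // Check-then-widen: the fast path compiles down to a multiply plus an
    // overflow-flag test (JO-style); the rare slow path redoes it in 128 bits.
    fn mul_widening(a: u64, b: u64) -> u128 {
        match a.checked_mul(b) {
            Some(p) => p as u128,              // no overflow: cheap path
            None => (a as u128) * (b as u128), // overflow: wider arithmetic
        }
    }

    fn main() {
        assert_eq!(mul_widening(3, 4), 12);
        assert_eq!(mul_widening(u64::MAX, 2), (u64::MAX as u128) * 2);
    }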

Memory safety

Posted Apr 2, 2026 22:09 UTC (Thu) by khim (subscriber, #9252) [Link]

> Neither of these instructions was carried over into the x86_64 instruction set, which tends to indicate that really nobody cares :-)

Wrong. Not only were they carried over, with “improvements” (as Intel MPX), they were supported by GCC from version 5 to version 9.

Except people found out that the “hardware” implementation brings many problems and doesn't add enough of an advantage over software-based checks.

Memory safety

Posted Apr 3, 2026 21:16 UTC (Fri) by ojeda (subscriber, #143370) [Link]

> In certain cases, sure. But you can't check _everything_ at compile time - especially not the trickier ones, and once you're at runtime, my statement holds.

What do you mean by "in certain cases"?

The point I was making is that you can also rule out entire classes of errors statically. Focusing on the runtime ones undersells the value proposition.

> Read the article - it talks about the number of vulnerability reports, and then _implied_ the way it was written that once we've moved to memory-safe languages we'll not have this problem anymore, and that techniques such as sandboxes are a stop-gap measure until we've arrived there.

The way I see it is that stopping those kinds of vulnerabilities is critical, and that LLMs are discovering many, and quickly so, thus we need everything we can get. Not that we need to get rid of our layered defenses, or that there won't be floods of other vulnerabilities, or that it is the only problem.

In fact, I bet that if we actually had perfect memory-safe software everywhere (or if at least those bugs became hard enough to exploit), then we would just see more focus on other vulnerabilities, including using LLMs for that. In a way, the article is arguing that too.

But if we don't even do that, then all bets are off.

> I'm just saying that I'm constantly seeing the _implication_ that it will solve all of our problems, which it won't.

I have rarely seen that. In fact, what I actually notice more and more is people having to preempt such arguments with "this is not a silver bullet".

> I don't disagree with that statement, in fact I did agree in my posting beforehand.

Yes, but please see above, i.e. to me, focusing on OOBs and so on makes it sound like there are far fewer advantages than there actually are.

To put it another way, if Rust was just about runtime checking of bounds, then it wouldn't have succeeded like it has because it wouldn't have been worth enough.

The key is that Rust 1) statically prevents important classes of errors, 2) avoids the systematic use of UB everywhere, and 3) has enough tools to create custom abstractions to do so.

> but that then puts the burden again on the programmers to write good code, and there's no enforcement by the language that they _have_ to do so.

The point is that you can write code (i.e. APIs) that "force" the rest of the code (i.e. callers) to do the right thing. And in the case of typical programs using libraries, or something like the kernel with drivers, "the rest of the code" can easily be the vast majority of the code.
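
A minimal sketch of that pattern, with made-up names: a type whose only public constructor validates, so callers cannot produce an unchecked value:

    // `NonEmptyName` can only be built through `new()`, so every function
    // accepting it can rely on the invariant without re-checking.
    pub struct NonEmptyName(String);

    impl NonEmptyName {
        pub fn new(raw: &str) -> Option<Self> {
            let trimmed = raw.trim();
            if trimmed.is_empty() {
                None // invalid input is rejected at the only entry point
            } else {
                Some(NonEmptyName(trimmed.to_string()))
            }
        }
        pub fn as_str(&self) -> &str { &self.0 }
    }

    // Callers cannot forget the check: the types force them through `new`.
    fn greet(name: &NonEmptyName) -> String {
        format!("hello, {}", name.as_str())
    }

    fn main() {
        assert!(NonEmptyName::new("   ").is_none());
        let n = NonEmptyName::new("Ada").unwrap();
        assert_eq!(greet(&n), "hello, Ada");
    }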

In other words, there is an important difference between having the right tools available and not having them, and in practice it is easy to notice.

For instance, the same kernel developers that write C and Rust in Linux tend to try to come up with the abstractions needed to avoid most of the bugs at compile time wherever possible.

In fact, you can see it happening with C too -- it is just that there are fewer tools to express certain things there.

> But outside of the kernel I haven't seen this used in practice. (I'm sure you can point to some individual examples, but it's not the default.)

There are indeed examples out there, e.g. one can find a bunch of projects by searching for `overflow-checks` in TOML files on GitHub. But, sure, it is not the default. Perhaps it should be.

In any case, note that even if disabled, it is wraparound, not UB (unlike standard C in certain cases).

Memory safety

Posted Apr 3, 2026 14:04 UTC (Fri) by ebiederm (subscriber, #35028) [Link]

It should be mentioned that a core value of achieving memory safety (in the operating-system sense, not the programming-language sense) is the removal of the undefined behavior caused by memory stomps.

Once undefined behavior is eliminated all bugs and their impacts can be assessed by analyzing the source code.

Given hardware bugs like row-hammer, memory stomps and undefined behavior may never be completely eliminated, but it is a good direction to aim for, as it makes other tools more effective.

