
What to do in response to a kernel warning

By Jonathan Corbet
November 18, 2021
The kernel provides a number of macros internally to allow code to generate warnings when something goes wrong. It does not, however, provide a lot of guidance regarding what should happen when a warning is issued. Alexander Popov recently posted a patch series adding an option for the system's response to warnings; that series seems unlikely to be applied in anything close to its current form, but it did succeed in provoking a discussion on how warnings should be handled.

Warnings are emitted with macros like WARN() and WARN_ON_ONCE(). By default, the warning text is emitted to the kernel log and execution continues as if the warning had not happened. There is a sysctl knob (kernel/panic_on_warn) that will, instead, cause the system to panic whenever a warning is issued, but there is a lack of options for system administrators between ignoring the problem and bringing the system to a complete halt.
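
For readers unfamiliar with these macros, here is a minimal sketch of how a driver might use them; the device structure, constant, and function are invented for illustration, but WARN() and WARN_ON_ONCE() behave as shown:

```
/* Hypothetical driver code illustrating typical use of the warning macros. */
#include <linux/kernel.h>
#include <linux/bug.h>

struct frob_dev {			/* invented type for illustration */
	void __iomem *regs;
	int state;
};
#define FROB_STATE_MAX	4		/* invented constant */

static int frob_device(struct frob_dev *dev)
{
	/*
	 * "This should never happen": log a one-time warning with a
	 * backtrace and carry on rather than crashing the machine.
	 */
	if (WARN_ON_ONCE(!dev->regs))
		return -ENODEV;

	/* WARN() takes a condition plus a printf-style message. */
	WARN(dev->state > FROB_STATE_MAX,
	     "frob: unexpected state %d\n", dev->state);

	return 0;
}
```

With the kernel/panic_on_warn knob set, either of those warnings would bring the system down instead.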

Popov's patch set adds another option in the form of the kernel/pkill_on_warn knob. If set to a non-zero value, this parameter instructs the kernel to kill all threads of whatever process is running whenever a warning happens. This behavior increases the safety and security of the system over doing nothing, Popov said, while not being as disruptive as killing the system outright. It may kill processes trying to exploit the system and, in general, prevent a process from running in a context where something is known to have gone wrong.
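
The mechanism itself is small; as a rough sketch of the idea (not Popov's actual patch), the warning path would gain a check along these lines, with pkill_on_warn being the variable behind the new sysctl knob:

```
/*
 * Rough sketch of the idea behind pkill_on_warn (not the exact patch):
 * after the warning has been printed, kill all threads of whatever
 * process happens to be running.
 */
if (pkill_on_warn)
	do_group_exit(SIGKILL);
```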

There were a few objections to this option, starting with Linus Torvalds, who pointed out that the process that is running when a warning is issued may not have anything to do with the warning itself. The problem could have happened in an interrupt handler, for example, or in a number of other contexts. "Sending a signal to a random process is just voodoo programming, and as likely to cause other very odd failures as anything else", he said.

Torvalds suggested that a better approach might be to create a new /proc file that will provide information when a system-tainting event (such as a warning) happens. A user-space daemon could poll that file, read the relevant information when a warning is issued, then set about killing processes itself if that seems like the right thing to do. Marco Elver added that there is a tracepoint that could provide the relevant information with just a bit of work. Kees Cook threw together an implementation, but Popov didn't like it; that approach would allow a process to continue executing after the warning happens, he said, and by the time user space gets around to doing something about the situation, it may be too late.
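
Roughly, the daemon Torvalds describes might look like the following sketch; the /proc path used here is invented, since no such interface exists in current kernels:

```
/* Hypothetical user-space daemon waiting for warning/taint events. */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Invented path; no such file exists in current kernels. */
	int fd = open("/proc/kernel_events", O_RDONLY);
	char buf[4096];

	if (fd < 0)
		return 1;

	for (;;) {
		struct pollfd pfd = { .fd = fd, .events = POLLPRI };

		/* Wait for the kernel to signal that a new event is available. */
		if (poll(&pfd, 1, -1) <= 0)
			continue;

		lseek(fd, 0, SEEK_SET);
		ssize_t n = read(fd, buf, sizeof(buf) - 1);
		if (n <= 0)
			continue;
		buf[n] = '\0';

		/* Local policy goes here: log it, kill a process, reboot... */
		fprintf(stderr, "kernel event: %s", buf);
	}
}
```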

James Bottomley argued that all of the approaches discussed so far were incorrect. If a warning happens, he said, the kernel is no longer in a known state, and anything could happen:

What WARN means is that an unexpected condition occurred which means the kernel itself is in an unknown state. You can't recover from that by killing and restarting random stuff, you have to reinitialize to a known state (i.e. reset the system). Some of the reason we do WARN instead of BUG is that we believe the state contamination is limited and if you're careful the system can continue in a degraded state if the user wants to accept the risk.

Thus, he said, the only rational policies are to continue (accepting the risk that bad things may happen) or kill the system and start over — the options that the kernel provides now.

Popov had suggested that the ELISA project, which is working toward Linux deployments in safety-critical applications, might support the addition of pkill_on_warning. But Lukas Bulwahn, who works on the project (but who was careful to say he doesn't speak for ELISA), disagreed. The right solution, he said, is to kill the system on warnings, but also to ensure that warnings are only issued in situations where things have truly gone off the rails:

Warnings should only be raised when something is not configured as the developers expect it or the kernel is put into a state that generally is _unexpected_ and has been exposed little to the critical thought of the developer, to testing efforts and use in other systems in the wild. Warnings should not be used for something informative, which still allows the kernel to continue running in a proper way in a generally expected environment.

He added that being truly safe also requires ensuring that a call to panic() will really stop the system in all situations — something that is not as easy to demonstrate as one might think. A panic() call might hang trying to acquire a lock, for example.

Christophe Leroy said that warnings should be handled within the kernel so that the system can keep running as well as it can. Given that, he continued, "pkill_on_warning seems dangerous and unrelevant, probably more dangerous than doing nothing, especially as the WARN may trigger for a reason which has nothing to do with the running thread". Popov, however, disagreed with the idea that one can expect all warnings to be handled properly within the kernel:

There is a very strong push against adding BUG*() to the kernel source code. So there are a lot of cases when WARN*() is used for severe problems because kernel developers just don't have other options.

Indeed, his patch would, when the new option is enabled, have warnings behave in almost the same way as BUG() calls, which bring about the immediate end of the running process by default. As he noted, developers run into resistance when they try to add those calls because their effect is seen as being too severe.

It's not clear that adding an option to make warnings more severe as well is the best solution to the problem. A good outcome, in the form of some movement toward a better-defined notion of just what a warning means and what should happen when one is generated, could yet result from this discussion, though. Like many mechanisms in the kernel, the warning macros just sort of grew in place without any sort of overall design. Engaging in a bit of design now that there is a lot of experience with how developers actually use warnings might lead to a more robust kernel overall.

Index entries for this article
Kernel/Warnings



What to do in response to a kernel warning

Posted Nov 18, 2021 22:19 UTC (Thu) by NYKevin (subscriber, #129325) [Link] (3 responses)

In the past, "I can make XScreenSaver crash by doing a weird thing" has been a CVE (because if XScreenSaver crashes, the screen unlocks). If someone could find a semi-reliable way to generate a kernel warning while a specific process is executing, then similar attacks might become possible under a pkill_on_warn policy.

Hopefully, the move to Wayland will obviate that specific instance of the problem, but in the more general case, how does the kernel know that it's safe to kill the currently executing process? You might be causing a security problem instead of remedying it.

What to do in response to a kernel warning

Posted Nov 18, 2021 22:53 UTC (Thu) by a13xp0p0v (guest, #118926) [Link] (2 responses)

That is the right question.

In theory, user space should be adapted to this kernel behavior.

In practice, currently, the Linux kernel kills processes on oops. OOM killer kills processes. grsecurity also kills processes.

pkill_on_warn is simply stopping the process when the first signs of wrong behavior are detected. That complies with the Fail-Fast principle.
Bugs usually don't come alone, and a kernel warning may be followed by memory corruption or other negative effects. Real example:
https://a13xp0p0v.github.io/2020/02/15/CVE-2019-18683.html
pkill_on_warn would prevent this kernel vulnerability exploit.

It would also make kernel warning infoleaks less valuable for the attacker. Exploit examples using such infoleaks:
https://googleprojectzero.blogspot.com/2018/09/a-cache-in...
https://a13xp0p0v.github.io/2021/02/09/CVE-2021-26708.html

Anyway, this patch provoked a deep discussion.
Maybe one day, the Linux kernel will get a more consistent error handling policy.

What to do in response to a kernel warning

Posted Nov 19, 2021 9:21 UTC (Fri) by geert (subscriber, #98403) [Link]

But it doesn't Fail-Fast if it kills the wrong thread. In general, there is no guarantee the bad state that caused the warning is limited to the thread(s) killed.

What to do in response to a kernel warning

Posted Nov 20, 2021 2:38 UTC (Sat) by NYKevin (subscriber, #129325) [Link]

> In practice, currently, the Linux kernel kills processes on oops. OOM killer kills processes. grsecurity also kills processes.

Well...

1. oops is pretty darned unusual, in my experience.
2. OOM killing is both unusual and at least tries* to target processes that might possibly be responsible for the OOM condition.
3. I have no idea about grsecurity, but it's not part of Linus's tree, so I frankly don't care what it does.

* There is a difference between trying and succeeding, of course.

What to do in response to a kernel warning

Posted Nov 18, 2021 22:59 UTC (Thu) by dambacher (subscriber, #1710) [Link] (3 responses)

Maybe we should taint the affected subsystem on a WARN,
and make security-sensitive processes aware of this.
They can decide themselves whether it is safe to continue or not.
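
A coarse version of the second half already exists: a WARN() sets the global TAINT_WARN flag, which user space can read back from /proc/sys/kernel/tainted (bit 9 in current kernels). A security-sensitive process could check it with something like the sketch below, though the per-subsystem tainting suggested above has no equivalent today:

```
/* Check whether the kernel has been tainted by a warning (TAINT_WARN). */
#include <stdio.h>

int kernel_tainted_by_warning(void)
{
	FILE *f = fopen("/proc/sys/kernel/tainted", "r");
	unsigned long taint = 0;

	if (!f)
		return -1;
	if (fscanf(f, "%lu", &taint) != 1)
		taint = 0;
	fclose(f);

	return (taint & (1UL << 9)) != 0;	/* bit 9 == TAINT_WARN */
}
```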

What to do in response to a kernel warning

Posted Nov 19, 2021 2:51 UTC (Fri) by pbonzini (subscriber, #60935) [Link] (1 responses)

KVM recently added KVM_BUG_ON, which makes all subsequent ioctls return with EIO.
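
KVM_BUG_ON() is real infrastructure in the KVM code; the general pattern it implements, sketched here with simplified names rather than KVM's actual ones, is to mark the object "bugged" when an internal invariant fails and to refuse further operations on it:

```
/*
 * Sketch of the pattern behind KVM_BUG_ON(): names simplified,
 * not KVM's actual code.
 */
struct my_vm {
	bool bugged;	/* set once an internal invariant has been violated */
	/* ... */
};

#define MY_VM_BUG_ON(cond, vm) ({			\
	bool __failed = !!(cond);			\
	if (WARN_ON_ONCE(__failed))			\
		(vm)->bugged = true;			\
	__failed;					\
})

static long my_vm_ioctl(struct my_vm *vm, unsigned int cmd, unsigned long arg)
{
	if (vm->bugged)
		return -EIO;	/* refuse to touch state known to be bad */

	/* ... normal ioctl handling ... */
	return 0;
}
```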

What to do in response to a kernel warning

Posted Dec 10, 2021 20:32 UTC (Fri) by Vipketsh (guest, #134480) [Link]

Translated: they are turning a "potential issue" into "definitely an issue". Depending on the situation, that can be anything from great to downright bastard.

If you are some cloud provider with a large number of machines (google, facebook, etc.), you probably have a bunch of redundancy and load balancing in your setup: if one machine goes down for whatever reason, even if just because of a "potential issue", there are a bunch more available to take over and the load balancing makes sure it happens. Furthermore, the cloud provider probably has engineers available to take a look at the issue in short order and return the machine to operation. In such a setting, taking machines down due to a "potential issue" is very well warranted.

On the other hand if you are using the machine in question as a primary work machine, the last thing you want is for a "potential issue" to take down your system and eat hours of work with it. In such a scenario chugging along for as long as possible is the desired mode of operation.

I think google & co. have enough expertise on hand to tweak defaults to whatever makes sense for them, while individuals (using Linux as a primary work machine) very often do not. Thus the only sane thing is to keep the defaults to what makes sense for individuals: keep going for as long as possible.

What to do in response to a kernel warning

Posted Nov 19, 2021 5:43 UTC (Fri) by developer122 (guest, #152928) [Link]

The fundamental problem here is a desire for compartmentalization that simply does not exist.

It seems there are many different kinds of warn()s that can be triggered in different ways at different times. Some might happen in an interrupt handler, others might be the result of a filesystem or driver problem, and still others might be the result of something actually triggered by user space. So not only are warn()s not tagged by probable cause, but tagging them with that may be impossible.

Warn()s also occur in the shared security boundary of "all of kernel space." There is no theoretical limit to what state might be contaminated. It might be limited to only what is currently being processed. It might affect the whole subsystem. Or it could lead to corruption of anything else the kernel is doing and possibly take the whole system down. So not only are warn()s not tagged with their probable effects, but it's quite possibly impossible to do so.

Without knowing what caused a warn, and what effects it might have, it is impossible to make any decision more fine-grained than "do nothing" or "reset the system." You can't reliably kill the culprit or avoid the conditions that caused it. You can't reliably determine the affected area and pick an appropriate mitigation (flushing a buffer, killing a process, killing a subsystem, or indeed shutting down a totally-compromised system).

What the kernel devs desire is a system like minix, where the various subsystems are isolated enough that anything bad happening in one is reasonably assured to not have affected the others, and where the components are fine grained enough that one experiencing a problem can be automatically killed without too much cost. This is fundamentally impossible outside of a microkernel.

I'm not saying that minix or other microkernels are better, but I am saying that what people desire is just not an option with the design choices that linux has made.

What to do in response to a kernel warning

Posted Nov 18, 2021 23:26 UTC (Thu) by xecycle (subscriber, #140261) [Link] (2 responses)

I don't know whether journalctl -k -p warning means the same WARNING. I'm seeing warnings (and errors) on every boot saying the tsc is unstable and the clocksource is being switched to hpet; so if it does mean the same, maybe I could not boot at all if this were set to "panic on warning".

What to do in response to a kernel warning

Posted Nov 19, 2021 4:06 UTC (Fri) by a13xp0p0v (guest, #118926) [Link] (1 responses)

No, the output of WARN_ON() in the kernel log looks like this:

```
WARNING: CPU: 1 PID: 6739 at net/vmw_vsock/virtio_transport_common.c:34
...
CPU: 1 PID: 6739 Comm: racer Tainted: G W 5.10.11-200.fc33.x86_64 #1
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.13.0-2.fc32 04/01/2014
RIP: 0010:virtio_transport_send_pkt_info+0x14d/0x180 [vmw_vsock_virtio_transport_common]
...
RSP: 0018:ffffc90000d07e10 EFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff888103416ac0 RCX: ffff88811e845b80
RDX: 00000000ffffffff RSI: ffffc90000d07e58 RDI: ffff888103416ac0
RBP: 0000000000000000 R08: 00000000052008af R09: 0000000000000000
R10: 0000000000000126 R11: 0000000000000000 R12: 0000000000000008
R13: ffffc90000d07e58 R14: 0000000000000000 R15: ffff888103416ac0
FS: 00007f2f123d5640(0000) GS:ffff88817bd00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f81ffc2a000 CR3: 000000011db96004 CR4: 0000000000370ee0
Call Trace:
virtio_transport_notify_buffer_size+0x60/0x70 [vmw_vsock_virtio_transport_common]
vsock_update_buffer_size+0x5f/0x70 [vsock]
vsock_stream_setsockopt+0x128/0x270 [vsock]
...
```

What to do in response to a kernel warning

Posted Nov 19, 2021 4:25 UTC (Fri) by xecycle (subscriber, #140261) [Link]

Thanks, this way it feels safer to declare the system as broken.

What to do in response to a kernel warning

Posted Nov 20, 2021 18:44 UTC (Sat) by dullfire (guest, #111432) [Link]

I think it would be kind of nice if the kernel had an interface (maybe a character device) that was poll-able. And on oops, read(2) would return something like

struct us_oops_event {
	/* time that matches the kernel time stamp in dmesg of the event */
	ktime_t time;
	/* Which thread the oops occurred on */
	pid_t victim;
};

After which, userspace can make any decisions: maybe it knows the pid in question is a specific process, or maybe it just forwards the event+dmesg over the network (if syslog forwarding isn't normally set up).

Anyhow that seems to me like the sanest thing the kernel can do (besides panicking, which is already an option).
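
The user-space half of that idea could be as simple as a blocking-read loop; the device path below is invented and the structure layout is just the one sketched above:

```
/* Hypothetical consumer of the oops-event device described above. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

struct us_oops_event {		/* layout from the sketch above, purely illustrative */
	int64_t time;		/* kernel timestamp of the event */
	int32_t victim;		/* pid of the thread the oops occurred on */
};

int main(void)
{
	int fd = open("/dev/oops_events", O_RDONLY);	/* invented path */
	struct us_oops_event ev;

	if (fd < 0)
		return 1;

	/* read(2) blocks until the next event arrives. */
	while (read(fd, &ev, sizeof(ev)) == (ssize_t)sizeof(ev)) {
		fprintf(stderr, "oops at %lld, victim pid %d\n",
			(long long)ev.time, ev.victim);
		/* Local policy: log it, kill the victim, trigger a reboot... */
	}
	return 0;
}
```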

What to do in response to a kernel warning

Posted Nov 21, 2021 17:54 UTC (Sun) by nix (subscriber, #2304) [Link] (2 responses)

Looking at the WARN_ONs I've experienced here in the last half-decade, I had warnings on boot for about two years because of amdgpu multi-monitor problems which eventually got fixed: but while they were happening, reboot-on-warn would have been precisely the wrong thing to do because the warnings didn't actually ever cause anything to go wrong that I could tell. I had warnings on minor bcache gc problems which might cause, at most, a leak of a single bucket (4MiB) which in any case would go away upon the next gc after reboot: there's no *way* that sort of thing would ever be worthy of a reboot unless it hit almost every bucket (it didn't, it hit one out of ~90k), but a WARN_ON was used because, well, it's worthy of a warning, right? In this case the warning was likely used as a "please tell the developer" flag, probably on the grounds that anything less would just be ignored.

The only warning I ever had that was actually worthwhile was a warning from xfs which was immediately afterwards followed by a verifier failure flipping the affected fs to readonly. This was entirely correct given that it would otherwise have resulted in fs corruption -- but I'm fairly sure a reboot would not have been preferable to the readonly-flipping it already did (though in the event the affected fs happened to be the rootfs, so there was little else I could do: but that need not have been the case).

So, so far, here at least, the number of WARNs that should have resulted in a reboot is, uh, 0%.

What to do in response to a kernel warning

Posted Nov 22, 2021 16:55 UTC (Mon) by ianmcc (subscriber, #88379) [Link] (1 responses)

Same here - in my previous system I used to get regular warnings (with a dire looking log message) from the NVIDIA proprietary drivers, but it never caused any actual problem, not even any noticeable graphics glitch.

What to do in response to a kernel warning

Posted Nov 26, 2021 23:50 UTC (Fri) by flussence (guest, #85566) [Link]

Another here - my laptop's i915 driver spews messages about corrupt EDID data on every boot. It used to be at warning severity but it seems they've downgraded it.

But graphics drivers aside, I've had actual WARN_ONs recently too, one was a soft lockup timeout because an NFS mount tree went away during a system upgrade. I would rather not have a spontaneous panic/reboot there before I get a chance to ensure the reboot will actually succeed.

What to do in response to a kernel warning

Posted Nov 22, 2021 11:31 UTC (Mon) by taladar (subscriber, #68407) [Link]

This is an interesting discussion, however when I read the headline I first thought of explanations similar to the ones the Rust compiler provides, giving the system administrator some guidance on what the warning actually means. That might be a useful thing to have as well.

What to do in response to a kernel warning

Posted Nov 22, 2021 15:55 UTC (Mon) by marcH (subscriber, #57642) [Link] (5 responses)

Meanwhile, not a single CI engine seems to support warnings. It's either red or green and nothing in the middle. At best you get stderr in a different color in the logs (that no one opens when the status is green).

What to do in response to a kernel warning

Posted Nov 22, 2021 19:24 UTC (Mon) by mathstuf (subscriber, #69389) [Link]

Agreed. My request to GitLab is here if anyone wants to "vote": https://gitlab.com/gitlab-org/gitlab/-/issues/219574

What to do in response to a kernel warning

Posted Nov 23, 2021 4:44 UTC (Tue) by Fowl (subscriber, #65667) [Link]

Microsoft Azure DevOps (horrible name for what used to be "Team Foundation Server"), aka Microsoft's /other/ forge thing, does actually support warnings in its build/release pipelines.

This page has some screenshots -> https://github.com/melix-dev/azure-devops-dotnet-warnings...

What to do in response to a kernel warning

Posted Nov 24, 2021 0:16 UTC (Wed) by NYKevin (subscriber, #129325) [Link] (2 responses)

That's because the purpose of a CI system is not to give you a red/green/yellow status indicator. It is to perform a sequence of actions which (usually) culminates either in producing final (tested) artifacts, or in actually pushing those artifacts to production (depending on what your process looks like). "Green" is nothing more than an "I'm finished" indicator, and "red" means "I was not able to finish." There's no "yellow" state because the process is not allowed to "half-finish." Either it gives you the final end product/deployment (green), or it doesn't (red). Yellow would just end up being green or red but with a different UI indication.

It's also important to bear in mind that you can have more than one CI pipeline for the same underlying codebase. For example, you could have a lenient CI pipeline that "just" pushes nightlies into your staging or QA instance, and then a more stringent pipeline that pushes into prod. You can even have a darklaunch pipeline that works just like the real prod pipeline, except that it never actually pushes anything, so that you can still have advance notice that "this nightly is in staging now, but it won't make it into prod when that push happens for real, because it triggered a warning and failed the darklaunch pipeline."

The important thing to bear in mind is that it is *not enough* to just surface warnings in a UI somewhere. You need to have a systematic policy for what happens when a warning is triggered. Once such a policy exists, it can be enforced with code, regardless of what the UI looks like. But if there is no policy, you will get developers making ad-hoc judgment calls about whether a given warning is "bad enough" to stall the release for a day, or if we should just push it anyway. As it turns out, humans are pretty bad at making such decisions, especially when the PM wants to get our new feature out yesterday and the warning is in some really hairy subsystem that nobody has properly understood in many years.

What to do in response to a kernel warning

Posted Nov 24, 2021 1:07 UTC (Wed) by marcH (subscriber, #57642) [Link] (1 responses)

> "Green" is nothing more than an "I'm finished" indicator, and "red" means "I was not able to finish." There's no "yellow" state because the process is not allowed to "half-finish."

I'm sorry your CI is so limited. Ours runs tests and publishes dashboards and pass rates. I've seen others automatically track regressions and progressions.

What to do in response to a kernel warning

Posted Nov 24, 2021 1:41 UTC (Wed) by mathstuf (subscriber, #69389) [Link]

Agreed. Warnings are good for things like "it should work, but there may be something coming over the horizon". If it works, it should pass. If it fails, it should block things. However, there *is* a middle state of "things are changing, adapt before it's too late" (e.g., deprecation warnings and subsequent removals). If you want to fail on warnings, it's usually easy to do something like `-Werror` (however that ends up being spelled for the tool in question). But just blindly continuing on as if nothing is wrong on warnings is also not viable long-term.

As an example, we test git master with our stuff. Currently we make it block if it fails because git is pretty good and doesn't break things much (we've found one regression in 5 years). But if git weren't so stable, we'd *still* want to know if breakages are on their way so at least we could move out of the way of whatever the light ahead of us in the tunnel turns out to be.

What to do in response to a kernel warning

Posted Nov 23, 2021 10:38 UTC (Tue) by mm7323 (subscriber, #87386) [Link]

For the cases where a WARN() can be certainly attributed to some task, perhaps there should be a WARN_ON() variant that takes a struct task_struct, e.g. WARN_ON_TASK(). Then some policy could decide to dump, kill, or deliver a signal to that process, avoiding the risk of killing something unrelated and causing more harm. A sketch of such a variant follows.
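
As a sketch of what that might look like (no macro like this exists in the kernel today, and the policy knob is invented):

```
/*
 * Hypothetical WARN_ON_TASK() based on the suggestion above;
 * not an existing kernel macro.
 */
#define WARN_ON_TASK(condition, task) ({				\
	int __ret_warn_on = !!(condition);				\
	if (unlikely(__ret_warn_on)) {					\
		WARN(1, "unexpected condition hit by %s (pid %d)\n",	\
		     (task)->comm, task_pid_nr(task));			\
		if (kill_task_on_warn)	/* invented policy knob */	\
			send_sig(SIGKILL, (task), 1);			\
	}								\
	unlikely(__ret_warn_on);					\
})
```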


Copyright © 2021, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds