Security quotes of the week
For modern UEFI systems, the firmware that's launched from the reset vector then reprograms the CPU into a sensible mode (ie, one without all this segmentation bullshit), does things like configure the memory controller so you can actually access RAM (a process which involves using CPU cache as RAM, because programming a memory controller is sufficiently hard that you need to store more state than you can fit in registers alone, which means you need RAM, but you don't have RAM until the memory controller is working, but thankfully the CPU comes with several megabytes of RAM on its own in the form of cache, so phew). It's kind of ugly, but that's a consequence of a bunch of well-understood legacy decisions.
— Matthew Garrett

Except. This is not how modern Intel x86 boots. It's far stranger than that. Oh, yes, this is what it looks like is happening, but there's a bunch of stuff going on behind the scenes. Let's talk about boot security. The idea of any form of verified boot (such as UEFI Secure Boot) is that a signature on the next component of the boot chain is validated before that component is executed. But what verifies the first component in the boot chain? You can't simply ask the BIOS to verify itself - if an attacker can replace the BIOS, they can replace it with one that simply lies about having done so. Intel's solution to this is called Boot Guard.
But before we get to Boot Guard, we need to ensure the CPU is running in as bug-free a state as possible. So, when the CPU starts up, it examines the system flash and looks for a header that points at CPU microcode updates. Intel CPUs ship with built-in microcode, but it's frequently old and buggy and it's up to the system firmware to include a copy that's new enough that it's actually expected to work reliably. The microcode image is pulled out of flash, a signature is verified, and the new microcode starts running. This is true in both the Boot Guard and the non-Boot Guard scenarios. But for Boot Guard, before jumping to the reset vector, the microcode on the CPU reads an Authenticated Code Module (ACM) out of flash and verifies its signature against a hardcoded Intel key. If that checks out, it starts executing the ACM. Now, bear in mind that the CPU can't just verify the ACM and then execute it directly from flash - if it did, the flash could detect this, hand over a legitimate ACM for the verification, and then feed the CPU different instructions when it reads them again to execute them (a Time of Check vs Time of Use, or TOCTOU, vulnerability). So the ACM has to be copied onto the CPU before it's verified and executed, which means we need RAM, which means the CPU already needs to know how to configure its cache to be used as RAM.
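The copy-before-verify requirement the quote describes can be sketched in miniature. Everything in this snippet is a hypothetical stand-in (the flash classes, payloads, and digest check bear no resemblance to real ACM handling); it only illustrates why verifying one read of flash and then executing a second read is a TOCTOU bug:

```python
import hashlib

class MaliciousFlash:
    """Simulated flash chip that returns legitimate bytes on the first
    read (when it expects to be verified) and attacker-controlled bytes
    on every later read (when it expects to be executed)."""
    def __init__(self, legit, evil):
        self._reads = iter([legit] + [evil] * 100)
    def read(self):
        return next(self._reads)

LEGIT = b"trusted ACM code"
EVIL = b"attacker payload!"
GOOD_DIGEST = hashlib.sha256(LEGIT).hexdigest()

def vulnerable_boot(flash):
    # TOCTOU bug: verify one read of flash, then "execute" a second,
    # independent read, which the flash is free to answer differently.
    if hashlib.sha256(flash.read()).hexdigest() != GOOD_DIGEST:
        raise RuntimeError("verification failed")
    return flash.read()

def safe_boot(flash):
    # Copy the image into local "RAM" exactly once, then verify and
    # "execute" that same copy; the flash never gets a second chance.
    image = flash.read()
    if hashlib.sha256(image).hexdigest() != GOOD_DIGEST:
        raise RuntimeError("verification failed")
    return image

assert vulnerable_boot(MaliciousFlash(LEGIT, EVIL)) == EVIL   # attacker wins
assert safe_boot(MaliciousFlash(LEGIT, EVIL)) == LEGIT        # attack defeated
```

This is exactly why the ACM must land in cache-as-RAM before verification: the single copy that was checked is the single copy that runs.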
It appears that a major problem here is that collectively we are unwilling to make any substantial investment in effective defence or deterrence. The systems that we use on the Internet are overly trusting to the point of irrational credulity. For example, the public key certification system used to secure web-based transactions is repeatedly demonstrated to be entirely untrustworthy, yet that's all we trust. Personal data is continually breached and leaked, yet all we seem to want to do is increase the number and complexity of regulations rather than actually use better tools that would effectively protect users.
— Geoff Huston in a lengthy reflection on internet history—and its future
It's not just Congressdunderheads and Tiktok CEOs who treat "don't spy on under-13s" as a synonym for "don't let under-13s use this service." Every tech product designer and every general counsel at every tech company treats these two propositions as equivalent, because they are literally incapable of imagining a surveillance-free online service.— Cory Doctorow
Posted Apr 20, 2023 17:54 UTC (Thu)
by Karellen (subscriber, #67644)
[Link] (7 responses)
More from Geoff Huston's post:

What makes this scenario even more depressing is the portent of the so-called Internet of Things (IoT). [...] What do we know about the “things” that are already connected to the Internet? Some of them are not very good. In fact, some of them are just plain stupid. And this stupidity is toxic, in that their sometime-inadequate models of operation and security affect others in potentially malicious ways. [...] But what we tend to forget is that all of these devices are built on layers of other people’s software that is assembled into a product at the cheapest possible price point. It may be disconcerting to realise that the web camera you just installed has a security model that can be summarised with the phrase: “no security at all,” and it’s actually offering a view of your house to the entire Internet. [...]

The Internet of Things will continue to be a marketplace where the compromises between price and quality will continue to push us on to the side of cheap rather than secure. What’s going to stop us from further polluting our environment with a huge and diverse collection of programmed unmanaged devices with inbuilt vulnerabilities that will be all too readily exploited? What can we do to make this world of these stupid cheap toxic things less stupid and less toxic? So far, we have not found workable answers to this question.
Posted Apr 21, 2023 16:52 UTC (Fri)
by flussence (guest, #85566)
[Link]
Posted Apr 23, 2023 6:18 UTC (Sun)
by NYKevin (subscriber, #129325)
[Link] (5 responses)
Users who install custom firmware could easily spoof the MAC address, so they would be unaffected by such a regulation. The real question is whether this impinges on the freedom of the vast majority of users, who don't know how to install custom firmware, and just want their hardware to work as advertised. Is it right for a switch to decide that it knows better than the user, and refuse to connect a device to a network?
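As a minimal sketch of why MAC-based gatekeeping is so easy to evade (the vendor prefixes and the filtering policy here are entirely hypothetical), consider a switch that blocks devices by the OUI, the vendor-assigned first three octets of the MAC address. Any firmware that rewrites its address sails straight through:

```python
# Hypothetical "insecure vendor" OUI prefixes a regulator might blocklist.
BANNED_OUIS = {"aa:bb:cc"}

def switch_allows(mac: str) -> bool:
    """Return True if this (illustrative) switch would admit the device.

    The OUI is the first three octets of the MAC address, which the
    vendor sets at the factory but custom firmware can trivially change.
    """
    oui = mac.lower()[:8]
    return oui not in BANNED_OUIS

# Stock firmware advertising the banned vendor prefix is refused...
assert not switch_allows("AA:BB:CC:01:02:03")
# ...but the same hardware with a spoofed MAC is waved through.
assert switch_allows("de:ad:be:ef:00:01")
```

The check keys on a value the endpoint itself reports, so it only ever constrains devices that cooperate, which is precisely the population that didn't need constraining.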
Perhaps this is tackling the wrong end of the problem. Maybe we could instead require that the software receive security updates for at least the natural life of the hardware. But we've been down this road before with product manufacturers. They will make every excuse to limit the "official" lifespan of their devices, as can be seen from the plethora of Android phones that no longer get updates, but are still perfectly functional. Phones are arguably at the less-bad end of the spectrum, too.
I suppose this is why the average tech worker has exactly zero smart devices in their home...
Posted Apr 23, 2023 7:54 UTC (Sun)
by Wol (subscriber, #4433)
[Link] (3 responses)
The problem is when the devices you want only come in smart versions. How many tech workers have an (intentionally) dumb tv?
Our second TV broadcaster has just switched its online service over to ITVx. That's broken both our main smart tv devices. The same thing happened a few years back. I couldn't give a monkeys but my wife's well upset. And we just don't want to have to shell out loads of money when the kit is perfectly functional - the other end has simply stopped talking to it.
It won't help when the problem is the internet end, but just mandate "open source". If the company stops supporting it, the customer has the right to the source. Tied in with this new EU security law saying companies *have* to keep their internet devices secure, we might actually get somewhere.
Cheers,
Wol
Posted Apr 23, 2023 10:20 UTC (Sun)
by mpr22 (subscriber, #60784)
[Link] (1 responses)
I have only ever owned a TV for use as a display for a games console.
(The BBC does not have £150/year worth of programmes I want to watch.)
Posted Apr 23, 2023 12:51 UTC (Sun)
by Wol (subscriber, #4433)
[Link]
I got my BA(Hons) through the OU. I paid 6 years worth of licence fees as a result. I stopped paying because the TV was switched on for less time than I could have bought time in the cinema for the same money.
We have a tv licence now, my wife can't live without it, but it's the one bill I refuse to pay. And she has control of the remote - I can never be bothered even to switch it on :-)
Cheers,
Wol
Posted Apr 24, 2023 7:29 UTC (Mon)
by pabs (subscriber, #43278)
[Link]
Posted Apr 27, 2023 14:06 UTC (Thu)
by Vipketsh (guest, #134480)
[Link]
There is little behaviour you could ask for that is more obnoxious than this. Effectively, you are screwing over someone who has no idea what is going on and who has done nothing wrong. Moreover, I doubt that, in the mind of the user, the "responsible person" for the non-working situation is going to be the manufacturer of the device in question.
This is always a wonderful idea until you get the short end of the stick. A bit like communism...
> Users who install custom firmware [...]
I truly wonder how long you will be able to do this in general. The trend today is very clear: secure boot on everything with a complete ban on custom firmware.
> Maybe we could instead require that the software receive security updates for at least the natural life of the hardware.
While it would be great, who would pay for those updates? Money doesn't materialise just because someone said it must. I think the saner approach would be to make security and code review part of the certification process. That way the cost is known, one-off, and upfront.
> I suppose this is why the average tech worker has exactly zero smart devices in their home...
That is, quite frankly, the best advice I give to people.