Linux and automotive computing security

Posted Oct 11, 2012 8:16 UTC (Thu) by hickinbottoms (subscriber, #14798)
In reply to: Linux and automotive computing security by quotemstr
Parent article: Linux and automotive computing security

Being involved in this world as well, I can say that whilst testing is a considerable part of the process (the back end of the development model, if you like), the majority of the effort lies at the front end, during and before the design phase.

You can't design a safety-critical system without knowing what the safety requirements are, and they're often harder to identify than you imagine. For example, a hypothetical brake-control system might have a safety requirement that the brakes are applied within X ms of being commanded, with Y reliability, which is a fairly easy requirement to spot. Slightly harder is that it's also likely to be hazardous for the brakes to be applied when *not* commanded, so you need to spot that and engineer the requirements appropriately -- there have been aircraft losses during landing due to such failures, if my memory serves me correctly.
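A minimal sketch of how those two requirements might surface in code (all names, and the 50 ms budget, are hypothetical, not from any real system): the periodic control step refuses to actuate without a pending command, and flags a fault if the actuation deadline is missed.

    /* Hypothetical periodic brake controller: apply within the deadline
     * when commanded, never actuate when not commanded. */
    #include <stdbool.h>
    #include <stdint.h>

    #define BRAKE_DEADLINE_MS 50u          /* the "X ms" budget (assumed) */

    static bool     brakes_applied;
    static bool     cmd_pending;
    static uint32_t cmd_timestamp_ms;

    static void apply_brakes(void)     { brakes_applied = true;  }
    static void release_brakes(void)   { brakes_applied = false; }
    static void enter_safe_state(void) { /* latch fault, warn driver */ }

    /* Called from the pedal/command input path when a command arrives. */
    void on_brake_command(uint32_t now_ms)
    {
        cmd_timestamp_ms = now_ms;
        cmd_pending = true;
    }

    /* Called from the periodic control loop. */
    void brake_control_step(uint32_t now_ms)
    {
        if (!cmd_pending) {
            release_brakes();   /* guards the "apply when not commanded" hazard */
            return;
        }
        if (!brakes_applied) {
            apply_brakes();
            /* Actuation happened later than the requirement allows. */
            if (now_ms - cmd_timestamp_ms > BRAKE_DEADLINE_MS)
                enter_safe_state();
        }
    }

The point is only that both requirements, once identified, become explicit checks rather than implicit assumptions.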

It's this identification of the requirements, and the associated safety analysis process involving tools such as fault trees, event trees, FMEA/FMECA, hazard analysis/logs, SIL analysis, etc., that makes safety-critical development really hard and expensive. It is, however, critical to get this right before diving into coding and testing since, as we know, changing the requirements of a system after it's built is difficult and often leads to unexpected behaviours being implemented. The high-integrity world is littered with examples of failures caused by changed requirements, or by systems being used to fulfil requirements that were never identified.
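For flavour, here's what the quantitative end of a fault tree looks like, as a toy C calculation. All the numbers are invented purely for illustration, and per-hour rates are loosely treated as probabilities in the usual fault-tree shorthand: basic-event rates combine through AND/OR gates into a top-event frequency you can compare against the tolerable hazard rate your SIL allocation demands.

    /* Toy fault tree for a top event "uncommanded brake application":
     * (sensor fault AND valve stuck) OR erroneous controller command.
     * Assumes independent failures; all numbers are made up. */
    #include <stdio.h>

    int main(void)
    {
        double p_ctrl   = 1e-7;  /* erroneous controller command, per hour */
        double p_sensor = 1e-4;  /* pedal sensor fault, per hour */
        double p_valve  = 1e-5;  /* hydraulic valve stuck, per hour */

        double p_and = p_sensor * p_valve;              /* AND gate */
        double p_top = p_ctrl + p_and - p_ctrl * p_and; /* OR gate  */

        printf("P(top event) = %.3e per hour\n", p_top);
        return 0;
    }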

Because the resulting design of the system is heavily influenced by the requirements analysis that got you there, it's also very difficult to make a convincing safety case and retrospectively develop a safety substantiation for a system that hasn't been designed that way from the outset.

As the parent poster says, you can't stop non-trivial software from having bugs and crashing, but you can build a confident argument that such failure cannot lead to a hazardous condition with an intolerable frequency. The safety analysis process lets you make such statements with evidence.

It's always a little disappointing that at the end of the day you just end up with 'normal-looking' software that isn't somehow magical and better -- but what matters is the confidence that it's more likely to do what's expected, and that when it doesn't, it can't lead to situations you've not at least considered.



Linux and automotive computing security

Posted Oct 11, 2012 15:01 UTC (Thu) by rgmoore (✭ supporter ✭, #75)

> You can't design a safety-critical system without knowing what the safety requirements are, and they're often harder to identify than you imagine.

Yes, and in this case, it turns out that one of the things the designers failed to identify is that they couldn't necessarily trust all of the other systems on the CAN bus. It's easy to understand why somebody might make that mistake, but the major thrust of the security researchers' article is that it is a mistake. Now they need to go back to the drawing board and design a better set of specifications for their networking component so it won't let the system be subverted by malicious messages.
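A hedged sketch of the sort of receive-side check such a revised specification might demand before a node trusts a frame. Everything here (the field layout, the window size, and especially the toy MAC) is invented for illustration -- a real design would use AES-CMAC or similar, and classic CAN's 8-byte payload makes fitting a real MAC in-band a genuine design problem.

    /* Hypothetical receive-side filter: a rolling counter defeats
     * replay, and a (placeholder) truncated MAC defeats forgery. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    struct can_frame {
        uint32_t id;
        uint8_t  data[8];   /* [0..3] payload, [4] counter, [5..7] MAC */
    };

    /* Placeholder only -- stands in for a real keyed MAC. */
    static bool verify_mac(const struct can_frame *f, const uint8_t key[4])
    {
        uint8_t mac[3] = {0, 0, 0};
        for (int i = 0; i < 5; i++)
            mac[i % 3] ^= f->data[i] ^ key[i % 4];
        return memcmp(mac, &f->data[5], sizeof(mac)) == 0;
    }

    static uint8_t last_counter;    /* per-sender state in practice */

    bool accept_frame(const struct can_frame *f, const uint8_t key[4])
    {
        uint8_t step = (uint8_t)(f->data[4] - last_counter);

        if (step == 0 || step > 8)  /* replayed, stale, or far ahead */
            return false;
        if (!verify_mac(f, key))    /* forged or corrupted */
            return false;

        last_counter = f->data[4];
        return true;
    }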

