
Linux and automotive computing security

Posted Oct 11, 2012 1:44 UTC (Thu) by quotemstr (subscriber, #45331)
In reply to: Linux and automotive computing security by SLi
Parent article: Linux and automotive computing security

Thanks for the interesting explanation of the development process behind safety-critical systems. Would it be safe to say that for these systems, the majority of the actual effort is expended on writing test cases?



Linux and automotive computing security

Posted Oct 11, 2012 8:16 UTC (Thu) by hickinbottoms (subscriber, #14798) [Link]

Being involved in this world as well, I can say that whilst testing is a considerable part of the process (the back-end of the development model, if you like), the majority of the effort lies in the front-end, during and before the design phase.

You can't design a safety-critical system without knowing what the safety requirements are, and they're often harder to identify than you imagine. For example, a hypothetical brake-control system might have a safety requirement that the brakes are applied within X ms of being commanded, with Y reliability -- a fairly easy requirement to spot. Slightly harder to see is that it's also potentially hazardous for the brakes to be applied when not commanded, so you need to spot that and engineer the requirements appropriately -- there have been aircraft losses during landing from such failures, if my memory serves me correctly.
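A timing-plus-reliability requirement like the hypothetical one above can be checked mechanically. This is only a sketch: X and Y are placeholders in the comment, so the limit, pass rate, and function names below are all invented for illustration.

```python
import time

# Made-up stand-ins for the X and Y placeholders in the requirement.
MAX_RESPONSE_MS = 50        # brakes must engage within X ms of the command
REQUIRED_PASS_RATE = 0.999  # Y reliability, expressed as a pass fraction

def brake_response_time_ms(command_fn):
    """Measure how long a (simulated) brake controller takes to respond."""
    start = time.perf_counter()
    command_fn()
    return (time.perf_counter() - start) * 1000.0

def meets_requirement(samples_ms, limit_ms=MAX_RESPONSE_MS,
                      required=REQUIRED_PASS_RATE):
    """True if the fraction of samples within the limit meets the target."""
    passed = sum(1 for s in samples_ms if s <= limit_ms)
    return passed / len(samples_ms) >= required
```

In a real programme the measurements would come from instrumented hardware runs, not wall-clock timing, but the shape of the check is the same.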

It's this identification of the requirements and the associated safety analysis process -- involving tools such as fault trees, event trees, FMEA/FMECA, hazard analysis/logs, SIL analysis and so on -- that makes safety-critical development really hard and expensive. It is, however, critical to get this right before diving into coding and testing, since, as we know, changing the requirements of systems after they're built is difficult and often leads to unexpected behaviours being implemented. The high-integrity world is littered with examples of failures caused by changed requirements, or by systems being used to fulfil requirements that were never identified.
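The arithmetic behind fault trees is simple even though building a correct tree is not: AND gates multiply failure probabilities (assuming independence), OR gates combine them as one-minus-the-product-of-survivals. A minimal sketch, with entirely made-up numbers and an invented example tree:

```python
def and_gate(*probs):
    """Top event needs all inputs to fail (assumes independent failures)."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    """Top event occurs if any input fails: 1 - product of survivals."""
    q = 1.0
    for x in probs:
        q *= (1.0 - x)
    return 1.0 - q

# Hypothetical tree: braking is lost if both the sensor and its backup
# fail, OR the actuator fails. All per-demand probabilities are invented.
p_top = or_gate(and_gate(1e-3, 1e-3), 1e-6)
```

The hard part, as the comment says, is knowing which events belong in the tree at all -- the sums are the easy bit.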

Because the resulting design of the system is heavily influenced by the requirements analysis that got you there, it's also very difficult to make a convincing safety case and retrospectively develop a safety substantiation for a system that hasn't been designed that way from the outset.

As the parent poster says, you can't stop non-trivial software from having bugs and crashing, but you can build a confident argument that such failure cannot lead to a hazardous condition with an intolerable frequency. The safety analysis process lets you make such statements with evidence.

It's always a little disappointing that, at the end of the day, you just end up with 'normal-looking' software that isn't somehow magical and better -- but what's important is the confidence that it's more likely to do what's expected, and that when it doesn't, it can't lead to situations you've not at least considered.

Linux and automotive computing security

Posted Oct 11, 2012 15:01 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link]

You can't design a safety-critical system without knowing what the safety requirements are, and they're often harder to identify than you imagine.

Yes, and in this case, it turns out that one of the things the designers failed to identify is that they couldn't necessarily trust all of the other systems on the CAN. It's easy to understand why somebody might make that mistake, but the major thrust of the security researchers' article is that it is a mistake. Now they need to go back to the drawing board and design a better set of specifications for their networking component so it won't let the system be subverted by malicious messages.
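One way to stop malicious CAN messages from subverting a node is to authenticate frames, for instance by appending a truncated MAC over the message ID, a replay counter, and the payload. The sketch below is purely illustrative -- the key handling, tag length, and packing format are invented, and real automotive schemes (such as AUTOSAR's SecOC) differ in detail:

```python
import hmac
import hashlib
import struct

KEY = b"shared-secret-per-ecu"   # hypothetical pre-shared key
TAG_LEN = 4                      # CAN payloads are tiny, so tags are truncated

def sign_frame(can_id, counter, payload):
    """Append a truncated HMAC over the ID, replay counter, and payload."""
    msg = struct.pack(">IQ", can_id, counter) + payload
    tag = hmac.new(KEY, msg, hashlib.sha256).digest()[:TAG_LEN]
    return payload + tag

def verify_frame(can_id, counter, frame):
    """Recompute the tag; reject spoofed or replayed frames."""
    payload, tag = frame[:-TAG_LEN], frame[-TAG_LEN:]
    msg = struct.pack(">IQ", can_id, counter) + payload
    expected = hmac.new(KEY, msg, hashlib.sha256).digest()[:TAG_LEN]
    return hmac.compare_digest(tag, expected)
```

A frame signed for counter value 1 will fail verification at counter value 2, which is what defeats simple replay -- at the cost of keeping counters synchronised across nodes, one of the genuinely hard parts of retrofitting this onto CAN.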

Writing test cases

Posted Oct 11, 2012 11:57 UTC (Thu) by man_ls (guest, #15091) [Link]

Would it be safe to say that for these systems, the majority of the actual effort is expended on writing test cases?

I hope that, in this day and age, the effort of writing and running test cases for any non-trivial system is the majority of the coding effort! In a recent interview, Kernighan says of his classes:

I also ask them to write tests to check their code, and a test harness so the testing can be done mechanically. These are useful skills that are pretty much independent of specific languages or environments.

Given that tests should be about half the size of the system (for a big system), and that they are run repeatedly, they should take the majority of the coding effort. For critical systems this share should probably be higher.

I am speaking only about coding, but obviously it is not the only development activity. I am not surprised to learn from the poster above that analysis and design take even longer than coding.
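The "test harness so the testing can be done mechanically" that Kernighan describes can be as small as a table of cases and a loop. A minimal sketch -- the function under test and the cases are invented for illustration:

```python
def clamp(x, lo, hi):
    """Function under test: limit x to the range [lo, hi]."""
    return max(lo, min(x, hi))

TESTS = [
    # (args, expected)
    ((5, 0, 10), 5),
    ((-3, 0, 10), 0),
    ((42, 0, 10), 10),
]

def run_tests():
    """Run every case mechanically; report and count failures."""
    failures = 0
    for args, expected in TESTS:
        got = clamp(*args)
        if got != expected:
            failures += 1
            print(f"FAIL clamp{args}: got {got}, expected {expected}")
    return failures
```

Adding a case is one line in the table, which is what makes the half-the-size-of-the-system test suites mentioned above sustainable.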

Writing test cases

Posted Oct 18, 2012 18:22 UTC (Thu) by TRauMa (guest, #16483) [Link]

Then again, nobody pays for test cases unless regulations force them to. :(

Linux and automotive computing security

Posted Oct 11, 2012 14:57 UTC (Thu) by ortalo (subscriber, #4654) [Link]

Certainly. And in some cases, manual coding in a conventional language is nearly prohibited: code is generated from the specification (along with the test cases, the timing calculations, etc.). And even then, the testing effort is paramount.
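The idea of code generated from the specification can be shown with a toy example: describe a state machine as data and derive the transition function from that table instead of hand-writing it. Real tools in this space (SCADE, Simulink code generators) work from formal models and emit C; everything below -- the states, events, and names -- is invented for illustration:

```python
# Specification as data: (state, event) -> next state.
SPEC = {
    ("idle", "brake_cmd"): "braking",
    ("braking", "release_cmd"): "idle",
    ("braking", "fault"): "safe_stop",
}

def make_step(spec):
    """Generate the transition function directly from the spec table."""
    def step(state, event):
        # Unspecified transitions keep the current state -- a design
        # choice that would itself be reviewed in the safety analysis.
        return spec.get((state, event), state)
    return step

step = make_step(SPEC)
```

Because the behaviour is derived from the table, reviewing the specification is reviewing the code -- which is the point of generating it.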

Linux and automotive computing security

Posted Oct 11, 2012 15:00 UTC (Thu) by ortalo (subscriber, #4654) [Link]

The last line of the above comment disappeared mysteriously. It was:

But is that enough for security (!= safety)?


Copyright © 2017, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds