
Antipatterns in IoT security

By Jake Edge
September 13, 2017

Open Source Summit

Security for Internet of Things (IoT) devices has been something of a hot topic over the last year or more. At a session at the 2017 Open Source Summit North America in Los Angeles, Marti Bolivar presented an overview of some of the antipatterns that lead to the lack of security in these devices. He also had some specific recommendations for IoT developers on how to think about these problems and where to turn for help in making security a part of the normal development process.

A big portion of the talk was about antipatterns that he has seen—and even fallen prey to—in security engineering, he said. It was intended to help engineers develop more secure products on a schedule. It was not meant to be a detailed look at security technologies like cryptography, nor even a guide to what technical solutions to use. Instead, it targeted how to think about security with regard to developing IoT products.

Background

There are some buzzwords used in the talk that he wanted to define. An "IoT product" is a mass-produced consumer device that communicates over the network and whose primary purpose is not computation. Most of these devices will not run Linux, as they are built around microcontrollers that are too small for it. Based on that definition, WiFi thermostats, networked AC plugs, and heart-rate monitors with radios all fit. Systems like laptops and smartphones do not (they are for computation), nor do ATMs, networked voting machines, or nuclear command and control centers (not consumer devices—he hopes the latter aren't for sale at all).

[Marti Bolivar]

He borrowed his definition of "securing" from Ross Anderson in his book Security Engineering. It means building systems that "remain dependable in the face of malice, error, or mischance". Security is not a binary yes/no state of a system, it is an ongoing process throughout the lifecycle of the system.

Bolivar is an embedded software engineer who works on IoT infrastructure and reference implementations for Linaro; security is a big piece of that work. Before that, he founded and ran an embedded systems company that created open-source hardware, firmware, and software, as well as doing some consulting on the side. He noted that over time more and more of his projects included networking, but that "projects with a solid security story" did not follow that trend closely, which leaves a big gap.

There is some good news, though: security engineering is a robust field with many experts who have established techniques that can be used to help secure IoT devices. The bad news is that the IoT industry is "doing it wrong"; there aren't enough of these experts to go around. The way to win is for those who are not security experts to learn and apply their ways. Those things will need to be incorporated into the product development workflow, but that can be done slowly and iteratively, he said.

There are plenty of reasons that companies should care about IoT security, but an oft-heard argument is that these problems are not something the company has to deal with. The costs associated with securing these devices are an externality in economic terms: as with pollution, the harm from skipping them is borne by someone other than the creator. So there is resistance to spending money on security engineering at times.

But the problem is real. Major distributed denial of service attacks have stemmed from insecure IoT devices, medical devices have potentially fatal flaws, and new major IoT infrastructure (e.g. Tizen) often has many zero-day vulnerabilities to exploit. Public concern about these problems will likely result in less willingness to buy IoT products, he said. Bruce Schneier has famously called for government regulation and legislators in various places worldwide are listening. Bolivar said there is reason to believe that the externality argument does not hold much water and will hold less over time.

There are some economic concerns that might also lead companies to fund security; preventing device "cloning" is one, but a better security story can also be a product differentiator. In enterprise and business-to-business (B2B) contexts, support contracts might play a role; products that are also used internally (or might be) would also help provide an incentive to secure them. Some may simply feel that securing these devices is the "right thing to do". There may be other reasons as well, of course; whatever the reason, he would like to see companies start securing their devices—hopefully with open-source software.

Antipatterns

The most basic security antipattern is to "do nothing". That means accepting any and all risk, though. Another is to "do it yourself"; that leads to thinking the system is secure because of custom elements, such as non-peer-reviewed cryptography algorithms or implementations and security through obscurity. "Hand-rolled" security systems have not fared well over the years—developers have learned that implementing stream ciphers, for example, should not be tackled in-house. But there is still a fair amount of security by obscurity, such as "super unguessable URLs". If a product becomes successful, which is what you want, the unguessable will become all-too-guessable.
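
As an aside that was not part of the talk, the "unguessable URL" problem is easy to demonstrate. Here is a minimal Python sketch (the token format is hypothetical): tokens built from a general-purpose PRNG like the random module are completely determined by their seed, so anyone who recovers or brute-forces the seed can regenerate them, while the standard library's secrets module draws from the operating system's CSPRNG.

    import random
    import secrets

    # Hand-rolled "unguessable" token: random.Random is a Mersenne Twister,
    # so an attacker who learns (or brute-forces) the seed gets the same URL.
    def diy_token(seed):
        rng = random.Random(seed)
        return "".join(rng.choice("0123456789abcdef") for _ in range(32))

    print(diy_token(1234) == diy_token(1234))  # True: the seed determines everything

    # Better: draw the token from the OS CSPRNG via the secrets module.
    print(secrets.token_urlsafe(32))           # 32 bytes of unpredictable entropy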

"Simon says security" is the antipattern that determines the system is secure because someone important says that it is. That can stem from vague requirements documents with sweeping security claims. It can lead to security theater (e.g. no lighters on airplanes). It tends to happen when people are panicking; they want security but aren't sure how to make it happen. But that kind of "security" does not meet Anderson's definition since it is not specifically focused any particular threat.

Next up was "just add crypto"—the system is secure because it uses cryptography. The corollary seems to be that the system is even more secure because it uses even more crypto, in a cascading list of acronyms (SSL, TLS, DTLS, AES, ...). Bolivar is (perhaps obviously) not saying that crypto is bad, just that it is "not magic". It is easy to misuse, implementations have bugs, key management is tricky, and so on. Adding crypto is not the end of the line for securing a device.
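
One concrete illustration of "easy to misuse" (my example, not Bolivar's): reusing a nonce with AES-GCM, which a naive firmware implementation might do to avoid storing a counter, leaks the XOR of the plaintexts. A sketch using the pyca/cryptography package:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)         # fine once; reusing it below is the bug

    m1 = b"thermostat set to 20C"
    m2 = b"thermostat set to 35C"  # same length as m1 for the XOR demo

    c1 = aesgcm.encrypt(nonce, m1, None)
    c2 = aesgcm.encrypt(nonce, m2, None)   # nonce reuse: catastrophic with GCM

    # GCM is CTR mode underneath, so the same key and nonce repeat the
    # keystream: XOR of the ciphertexts equals XOR of the plaintexts.
    leak = bytes(a ^ b for a, b in zip(c1[:len(m1)], c2[:len(m2)]))
    assert leak == bytes(a ^ b for a, b in zip(m1, m2))

The cryptography here is entirely standard; the vulnerability comes from how it is used, which is exactly the point.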

If the system is secure because it uses so many different security technologies, it has perhaps fallen victim to the "security grab bag" antipattern. He noted a real-life remote administration system that a friend had to use: it required a VPN, then an HTML5 remote desktop server to get a remote desktop on a system, from which SSH was used to actually log into the system of interest. In some ways, the grab bag is similar to "just add crypto". It can even work to a certain extent, depending on what's in the grab bag, but it is likely to overprotect some things while not protecting others. It can often be a waste of resources because it does not focus on the most important threats.

An attempt to "aim for perfection" is another trap. Bolivar likened it to building a bomb-proof door before adding a window lock or trying to stop a determined nation-state level attacker before the basics are handled. This can occur when engineers get carried away in brainstorming sessions or if the people who sign off on security plans ignore "trivial matters" like deadlines and salaries.

Perfect systems never ship, so they are "tautologically secure". Any system that ships has issues, both known and unknown; security is no different. In the IoT world, it is important to remember that these devices are no longer your systems. Customers have physical access, but the problem starts even before then; contractors that build and ship the devices also have that access. Any attempt to reach perfection is likely to be seen as zealotry, which leaves a bad taste in people's mouths and reduces security buy-in.

"Release and forget" is also common. The thinking is that the system is secure, so nothing more needs to be done. Even if something does need to change, though, the build cannot be reproduced: some of the source code has gone missing or vendors won't support newer versions with needed fixes. The support window for the device may not have been specified and there may be no mechanism for people to report vulnerabilities. There may also be no way to update deployed devices at all. This "strategy" is unworkable; it means that vulnerabilities cannot be fixed, it alienates the security community that you would rather have on your side, and it antagonizes customers. But it does cost real money to ensure these things, so there have to be business reasons for a company to care.

The last antipattern he noted is the "kill the messenger" approach: sue anyone who says that the product is not secure. That includes making legal threats over vulnerability reports as well as lobbying for laws to prevent security research. Those efforts may chill research and reporting, but they will cause bad press that can damage your brand. They also antagonize people who can sell the vulnerabilities they find (often anonymous people who are difficult to threaten or sue).

Better patterns

Instead of adopting one or more of the above approaches, there are alternatives. To start with, devices should not connect to the network if they don't need to. Bolivar said that his management is not happy when he says that, but devices that are not connected have a much reduced attack surface. Similarly, don't collect information that is not needed; it is hard for servers or devices to give up information they never possessed to begin with.

Threat modeling is an important part of the process. Iteratively building and using threat models will result in more secure systems. It is also important to keep security planning in the normal workflow of the development process. Security bugs and features should be tracked in the same systems and factored into planning the same way as other bugs and features are. Otherwise, you can reach a point where the schedule only reflects part of the work that needs to be done, which results in long hours and slipped release dates.

The "one slide" he would like everyone to remember from his presentation (slide 31 in his slides [PDF]) reinforces the message on threat models. There are many approaches, but all involve modeling the system in question, deciding what the important problems are, and then mitigating them in priority order. It is best to start small and then iterate, Bolivar said.

For IoT, he recommended the approach laid out in Threat Modeling: Designing for Security by Adam Shostack. The book covers Microsoft's methodology, which is applicable to IoT with some tweaks. It starts out in a fairly lightweight, easy-to-understand way and describes how to evolve the process as you go. It is opinionated, which he likes; it provides advice rather than just offering a bunch of different options. In addition, the methodology is "battle tested"; Microsoft has gotten much better at security over the years, he said.

There are other books and resources, of course, including the Anderson book he mentioned at the outset. That book is a great read, he said, though not all of it is applicable to IoT. It is, however, over 1000 pages long (even the bibliography is over 100 pages, he thinks), so it is a substantial amount of reading to get started. There are also threat modeling resources from the Open Web Application Security Project (OWASP); those are focused on web applications, as the name would imply, but much of the material is also applicable to IoT. The idea of "threat trees", as described by Schneier and others, is useful but somewhat hard to get started with. There are other resources listed in the slides.

As he was wrapping up (and running out of time), Bolivar gave a brief overview of the threat modeling process described by Shostack. It is important to model the system with a data-flow diagram; the book focuses on software, but for IoT it makes sense to look at the hardware as well and to consult the device's schematics. Make concrete choices of what to protect and address those in a breadth-first way. Keep the antipatterns in mind as things to avoid. You will never reach the bottom of the list of protections, but that is expected; avoid "aim for perfection" and test to make sure the product is good enough to ship.
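
To make the "model, prioritize, mitigate" loop concrete, here is a small hypothetical sketch (not from the talk or the book) of a threat model kept as data next to the code: elements of the data-flow diagram, the STRIDE threats against them, and a simple impact-times-likelihood score used to work from the top of the list breadth-first.

    from dataclasses import dataclass

    @dataclass
    class Threat:
        element: str      # node or flow in the data-flow diagram
        stride: str       # Spoofing, Tampering, Repudiation, Info disclosure, DoS, EoP
        description: str
        impact: int       # 1 (low) .. 5 (high)
        likelihood: int   # 1 (low) .. 5 (high)
        mitigation: str = "TODO"

        @property
        def risk(self):
            return self.impact * self.likelihood

    # A few entries for a hypothetical WiFi thermostat.
    threats = [
        Threat("cloud API <-> device", "Tampering",
               "Attacker alters setpoint commands in transit", 4, 4,
               "TLS with certificate pinning"),
        Threat("firmware update path", "Elevation of privilege",
               "Unsigned image accepted over the air", 5, 3,
               "Signed images, verified by the bootloader"),
        Threat("local debug UART", "Information disclosure",
               "Credentials readable over the serial port", 3, 2),
    ]

    # Highest-risk, unmitigated items first.
    for t in sorted(threats, key=lambda t: t.risk, reverse=True):
        print(f"{t.risk:>2}  {t.element:22} {t.stride:25} {t.mitigation}")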

[I would like to thank the Linux Foundation for travel assistance to attend OSS in Los Angeles.]

Index entries for this article
Security: Internet of Things (IoT)
Conference: Open Source Summit North America/2017



Antipatterns in IoT security

Posted Sep 14, 2017 7:56 UTC (Thu) by lacos (guest, #70616) [Link] (3 responses)

> The costs associated with securing these devices are an externality in economic terms

These devices will never be secured, if it's up to the producers and the consumers only. The manufacturers are not interested in the security of their customers, so they won't pay. The customers -- consumers of these mass-produced devices -- are also either totally uninterested in their own security, or -- on a grand scale -- aren't interested in it enough to pay up.

The consequences for society will be terrible, of course. IIRC, Schneier keeps saying that rules & regulations should be imposed by the local governments on the manufacturers / distributors. My comment is that it should occur similarly to how the safety of plain electric devices is enforced. This would effectively force the consumer population to pay for their own security, which would be better (and cheaper) for society at a large scale.

I guess this is another debate to be had between the left and the right (applied to your local government). :/

Antipatterns in IoT security

Posted Sep 14, 2017 7:59 UTC (Thu) by lacos (guest, #70616) [Link] (1 responses)

Sigh, the article mentions Schneier. Sorry about commenting too quickly.

Antipatterns in IoT security

Posted Sep 14, 2017 14:44 UTC (Thu) by mbolivar (subscriber, #75534) [Link]

Thanks for the write-up! Sorry I ran out of time.

> Sigh, the article mentions Schneier. Sorry about commenting too quickly.

Indeed! If you check out my slides (link is in the article), one of the reasons I argue that the "security is just an externality" argument might hold less water as time goes on is exactly due to the potential for increased regulations as recommended by Schneier and others.

In particular, slide 10 mentions the introduction of the "Internet of Things (IoT) Cybersecurity Improvement Act" this year with the backing of four senators, Schneier, and other big names:

https://www.warner.senate.gov/public/index.cfm/2017/8/ena...

Antipatterns in IoT security

Posted Sep 14, 2017 15:49 UTC (Thu) by NightMonkey (subscriber, #23051) [Link]

I'm pretty happy that my electric devices have been made more "secure" by not blowing up and catching fire as much as they could without government intervention in the flow of capital from my wallet to the device company owners. Sensible regulations are needed to counter the effects of businesses treating our lives and property with contempt in the rush to extract profit. #SchneierWouldHaveWon ;)

IoT needs careful, thoughtful regulation, because the security aspect has been left as a begrudgingly handled afterthought by a consortium of wealthy businesses with employees that should definitely know better.

Antipatterns in IoT security

Posted Sep 18, 2017 19:00 UTC (Mon) by rriggs (guest, #11598) [Link] (1 responses)

Security for IoT devices is not a hot topic nor will it ever be. You have IoT or you have security. Pick one. Alexa is listening and recording everything. The data acquired by an IoT vendor is their property, not yours. And, as we have seen over and over again, the use of that data (at least for US consumers) is subject to the whim of the vendor.

It must be apparent to most that there is nothing that state security organizations would love more than to have active listening devices in every home, in every pocket, and on every wrist. Have some unsavory characters visiting your home? Big Brother knows. Say any politically incorrect things while playing Cards Against Humanity? Big Brother knows.

Is that "dependable in the face of malice, error, or mischance?" Depends on how malicious you thing Big Brother really is.

Someone hacking in and changing your thermostat? Hardly worth talking about given the real security issues we face with the IoT. We are centralizing the command and control of our homes into a few easily manipulated multinational companies that are completely amoral.

Antipatterns in IoT security

Posted Sep 20, 2017 21:05 UTC (Wed) by Funcan (guest, #44209) [Link]

There's enough FUD here to make your point rather, erm, pointless.

To take a single example, it is trivially provable that Alexa isn't recording everything I say - I can inspect the hardware for storage devices, and I can monitor how much data it is transmitting and when.

Mobile phones and similar devices are a little more difficult to analyse, but there's enough work being done with SDR that more and more is precisely understood.

Paranoid rantings do not actually bring the discussion forward any; concrete examples and testable hypotheses do.


Copyright © 2017, Eklektix, Inc.
This article may be redistributed under the terms of the Creative Commons CC BY-SA 4.0 license
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds