
LWN.net Weekly Edition for January 31, 2019

Welcome to the LWN.net Weekly Edition for January 31, 2019

This edition contains the following feature content:

  • An open-source artificial pancreas: Dana Lewis's keynote on OpenAPS, a do-it-yourself closed-loop insulin system.
  • Systemd as tragedy: Benno Rice on the suffering, and the lessons, behind the systemd story.
  • Design for security: Serena Chen on why good user experience and good security need each other.
  • Changing the world with better documentation: Rory Aronson's keynote on FarmBot and open documentation.
  • Snowpatch: continuous-integration testing for the kernel, built around Patchwork.

This week's edition also includes these inner pages:

  • Brief items: Brief news items from throughout the community.
  • Announcements: Newsletters, conferences, security updates, patches, and more.

Please enjoy this week's edition, and, as always, thank you for supporting LWN.net.

Comments (none posted)

An open-source artificial pancreas

By Jake Edge
January 30, 2019

LCA

Dana Lewis said that her keynote at linux.conf.au 2019 would be about her journey of learning about open source and how it could be applied in the healthcare world. She hoped it might lead some attendees to use their talents on solutions for healthcare. Her efforts and those of others in the community have led to a much better quality of life for a number of those who suffer from a chronic, time-consuming disease.

She began with a well-known joke in hacker circles ("there are 10 kinds of people in the world ..."), but she added a twist. Beyond those who know binary and those who don't, there are also another 10 kinds of people: those who can produce their own insulin and those who can't. Lewis has type 1 diabetes, so she "just" needs to add insulin to her system because her pancreas does not produce it, but it is not quite that simple. Getting diagnosed with a chronic disease is "like getting struck by lightning", she said; there is no time to prepare and you know that everything will be different from that point forward.

Ups and downs

There is more to it than just eating right, exercising, and adding insulin. Stress, excitement, and adrenaline all factor into her blood sugar levels. She was excited and a bit stressed to give her keynote, which meant that her blood sugar level was going up. It is hard to measure the proper insulin dose to counteract the experience of giving a talk to "a roomful of people really early in a different time zone". It is further complicated by the fact that things that sometimes elevate blood sugar also sometimes lower it.

If you look at the graph of a diabetes patient's blood sugar, it will go up and down a fair amount. Beyond that, "insulin is not magic"; it takes 60-90 minutes for it to start bringing down blood sugar, while eating makes it rise in as little as 15 minutes. She noted that access to insulin is not a given, though, since there are many parts of the world, especially in developing countries, where it is not available.

[Dana Lewis]

Some patients have tools that make diabetes management easier. She has a continuous glucose monitor (CGM) to measure her blood sugar every five minutes and an insulin pump that delivers a configured amount of insulin continuously. Those sound great, she said, and they are, but there is still a lot of manual work required.

A patient has to pull out the CGM and press a button to see the latest reading and the trend; then they need to look at the pump to see what it has been delivering. After that, they need to calculate what action is needed based on the readings and their current and planned activities (e.g. eating, exercising, or giving a talk). They do that "over and over and over and over again" throughout the day. These decisions are not all about insulin either, but also about whether they should eat, what kind of exercise they should do, what they should eat, and so on. It is a lot of work and hard to get right; that is why people's blood sugar graphs have lots of ups and downs.

She showed some sample graphs in part because she wanted to point out that it is actually difficult to get access to that data. The only application approved by the US Food and Drug Administration (FDA) to get the glucose readings from her device only works on Windows—and she has a Mac. Meanwhile, the CGM and pump are from different manufacturers, so they don't talk to one another, which is quite common in the medical-device world. That leaves the human in the middle to make all of the decisions and take any required actions.

Alarms

Lewis said that she is "the world's best sleeper"; she sleeps for a long time and deeply. The CGM has an alarm that is meant to wake her when her blood sugar levels are too low, but there is no way to change the volume or sound it makes. Because of "alarm fatigue" (where, over time, the brain stops registering an alarm sound it has gotten used to) and her deep sleeping, she can sleep right through the alarm. That made her afraid to go to sleep, because she might not hear the alarm and thus not stop the insulin pump from providing more insulin and lowering her blood sugar further. You can die from diabetes even if you do everything right, she said; not being able to trust that your device will wake you up when an adjustment is needed is intolerable.

She got so fed up with the problem that she started looking for other ways to create the alarm. In 2013, she and her now-husband, Scott Leibrand, found someone on Twitter who had a way to get information out of the CGM and send it to a server; he used it to monitor his son's blood sugar from afar. They were able to get the code from him and used it to send alarms to her phone when needed. The data was extracted by a Windows program running on an old laptop at her bedside and was sent to an application in the cloud that pushed the alarms out. In the first month, she got many alarms that she actually woke up for; "it was fantastic".
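
The code behind that setup was never released (as described below), but the basic flow is simple enough to sketch. The Python below is purely illustrative; the threshold, the push mechanism, and the sample readings are all made up for the example:

    # A minimal sketch of the alarm flow described above; the real DIYPS code
    # was never released, so every name and number here is a placeholder.
    LOW_THRESHOLD_MGDL = 70          # illustrative threshold, not a medical value

    def push_alarm(message):
        # Stand-in for the cloud service that pushed alerts to her phone
        # (and to the "secondary line of defense" dashboard).
        print("ALARM:", message)

    def check_reading(reading_mgdl):
        if reading_mgdl < LOW_THRESHOLD_MGDL:
            push_alarm(f"Blood sugar low: {reading_mgdl} mg/dL")

    # Simulated five-minute CGM readings, standing in for the Windows
    # uploader that scraped the device at her bedside.
    for reading in [112, 95, 81, 66]:
        check_reading(reading)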

They also created a dashboard on the web that she can easily read and make sense of when she is woken in the middle of the night. It has buttons that she can click to indicate the action that she took, which is helpful for her parents and others who are part of her "secondary line of defense". Brains are a "little weird" when your blood sugar is low, she said, so the dashboard could alert others if she didn't wake up and take the right action (i.e. press the right button) when the alarm went off. She thought the dashboard, which they jokingly called "DIYPS" (for do-it-yourself pancreas system), was the greatest thing ever.

Sleeping through the CGM alarm is a common problem for those suffering from diabetes, she said. Her new open-source awareness meant that she wanted to share her solution with others. They had learned a lot about open source and medical devices, including finding out that the FDA "frowns on distributing code that looks like a medical device", even if it's not a medical device. So they did not end up releasing the code for the dashboard/alarm.

Auto-pancreas

As they started looking around at what others were doing in the open-source diabetes world, though, they found someone who had figured out how to talk to a specific model of insulin pump, which turned out to be the one that she had. With that, they realized that the necessary tools were available to create a closed-loop artificial pancreas. Take those two dumb devices (CGM and insulin pump), add a Linux system (in their case, a Raspberry Pi), a USB radio to talk to the pump, and a battery, and you end up with an artificial pancreas system.

In the manual method, a human needs to take in data, do some calculations on it, and change settings on another device—many times a day. That is a job for a computer, she said. The system is automated, but that does not mean that a human cannot intervene to change things as needed. She showed before and after graphs; the after graph was noticeably flatter, with fewer swings—which is the goal. That was met with much applause from attendees.

The automatic pancreas is not a cure, but it can make a huge difference in people's quality of life. She was helped by multiple people along the way, so she knew the system needed to be released as open source. OpenAPS (open-source artificial pancreas system) is the project that was formed to provide information so that others can build their own devices. Lewis did not return to the FDA "problem" in her talk, but the OpenAPS FAQ notes that the FDA only regulates commercial products that are sold; OpenAPS simply provides information for those who want to build and try the system for themselves.

The DIYPS was an open-loop system, since it simply gave her the information (and recommended action), but she had to make it happen. The newer, closed-loop system took her out of that loop (though she still had the ability to change things if needed); they were able to translate the calculations she normally did into code that could run on every update from the CGM. Their highest goal was safety, so when they started documenting the algorithm at OpenAPS, one of the most important pieces was to document how the system could fail.
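
The real OpenAPS algorithm is considerably more involved, accounting for insulin already on board, meals, and sensor noise, but the idea of turning the manual calculation into code that runs on every CGM update can be sketched in a few lines of Python. Every name and number here is an invented placeholder, not the actual algorithm:

    # Toy illustration only: the real OpenAPS code performs many more checks.
    TARGET_MGDL = 100        # target blood glucose (made-up value)
    SENSITIVITY = 50         # mg/dL drop per extra unit of insulin (made up)

    def recommend_adjustment(readings):
        """Given the last few CGM readings (newest last), suggest a change in
        insulin delivery, in units/hour, relative to the normal basal rate."""
        if len(readings) < 3:
            return 0.0                      # not enough data: do nothing
        # Work from the trend, not a single possibly-anomalous point.
        recent = sorted(readings[-3:])[1]   # median of the last three readings
        excess = recent - TARGET_MGDL
        return round(excess / SENSITIVITY, 2)

    print(recommend_adjustment([140, 150, 160]))   # rising: deliver a bit more
    print(recommend_adjustment([90, 80, 70]))      # falling: deliver less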

There are numerous failure modes (batteries die, the CGM gets pulled out, the pump is disconnected, insulin runs out, and so on), but OpenAPS is designed to handle them. When the OpenAPS design is shown to new people, who inevitably have a large number of "what if" questions, they are impressed with its safety orientation. That really shouldn't be a surprise, she said, since those who are trying to solve the problem have the disease and have been working out how to deal with it since they were diagnosed. Those who have diabetes "have to figure it out or they don't stay alive".

One thing that developers outside of the project often focus on is how OpenAPS talks to the insulin pump, she said. The currently targeted pump is an older version of a particular pump brand that has been recalled due to a security vulnerability. But that vulnerability is what allows OpenAPS to talk to the pump over the radio, so the project is effectively required to use this insecure device. That is what allows the loop to be closed.

The software that OpenAPS uses has conservative defaults for various settings. There are also hardware limits on the amount of insulin that can be given, but the software starts with low levels and does not increase them without some action from the user. In addition, while it is monitoring every reading from the CGM, it does not take actions based on one outlying data point; it looks at the trends and is prepared for anomalous or missing data.

The insulin pump has a "basal" rate, which is the base amount of insulin it will dose in the absence of other input. That rate will continue even if blood sugar goes low, which is bad. OpenAPS operates by sending a temporary basal rate adjustment command that only lasts for 30 minutes. If the system is working, another adjustment will be made within that window, but if not, the pump will return to its default state.
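
That fail-safe pattern, never issuing anything other than a time-limited adjustment, is easy to illustrate. The sketch below is hypothetical; set_temp_basal() stands in for whatever command is actually sent to the pump over the radio:

    # Sketch of the fail-safe pattern described above: only ever issue
    # adjustments that expire on their own. All values are illustrative.
    MAX_RATE = 2.0        # units/hour; the pump also enforces hardware limits

    def set_temp_basal(rate, minutes):
        print(f"pump: temp basal {rate} U/h for {minutes} minutes")

    def apply_adjustment(basal_rate, adjustment):
        # Clamp to conservative limits, then issue a 30-minute temporary rate.
        rate = max(0.0, min(MAX_RATE, basal_rate + adjustment))
        set_temp_basal(rate, 30)
        # If the rig dies, loses radio contact, or runs out of battery before
        # the next cycle, the pump simply reverts to its normal basal rate.

    apply_adjustment(basal_rate=1.0, adjustment=0.4)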

She referred interested people to the OpenAPS web site, where there is a plain-language reference design. There have been many non-technical people who have come to the site over the years to try to learn how they can build an artificial pancreas for themselves. In many cases, it takes a lot of convincing to get them to believe that they can do this for themselves. The plain-language description is understandable to those who are already treating their diabetes and the idea is that OpenAPS has just moved that algorithm into a computer. There is also an open-source reference implementation (under the MIT license) available on GitHub.

Results

She has been "looping for four years" and there are now over 1000 people with DIY closed-loop systems in the world including, she said, some in the room. That means that there are over nine million hours of DIY closed-loop experience at this point—using conservative numbers for people and hours. That is significant because many of the "big fancy clinical trials" have 150,000 hours of testing before they are declared ready for use. OpenAPS has learned a lot and its users have chosen to stay with the DIY system for a long time.

"I refuse to let all this beautiful data go to waste", she said. OpenAPS has its own data commons as part of the Open Humans project. OpenAPS users can volunteer their data to be added to the anonymized data set, which she said she does not want companies to ignore when they are working on projects for diabetes. Anyone who has an interest in the data can get access to it and learn from it.

She relayed two anecdotes that she thought encapsulated many of the reasons that people are choosing to use an APS. One was a man in Finland whose young son had diabetes; they went from 4.5 manual interventions per day to 0.7/day after using OpenAPS. Each of those interventions is time consuming and burdensome for the caregivers and the child, so getting to less than one a day on average makes a huge difference. Another user had a son with diabetes who needed 420 visits to the school nurse during his 4th-grade year (e.g. before lunch or gym, high or low blood sugar events). Once he was using OpenAPS in his 6th-grade year, that dropped to five visits: three for low blood sugar after gym and two from equipment failures.

The OpenAPS "rig" can communicate with both Android and iOS devices. "You can text your pancreas ... you really can", she said, or use an assistant like "Siri" to send input to the rig. You can also use a smartwatch to enter information, which is her preferred way to do it. Just using three buttons allows her to change the amount of insulin being delivered or to enter the food that she is eating, which is much more discreet than fumbling around with her medical device. That allows her to participate in job interviews, for example, without making her diabetes the focus of the discussion if she needs to provide some input to her pancreas.

As with many open-source projects, lots of great ideas and code have come in from new people. They are often surprised when presenting their new idea or use case that Lewis and other OpenAPS developers are enthusiastic about the idea but not able to make it happen for various reasons—the suggestion that the new person "should go do it" is temporarily astonishing to them. But that is how things work in open source.

The earliest system was based on the first Raspberry Pi, but it made for a bulky system with a large battery; the radio had a short range as well. One potential user said that he needed something smaller for his wife so he started looking into an Intel Edison-based device; he found a radio that would work with it and wrote some code to make it talk to the pump. Another community member had a hardware shop and helped design a board for the rig, which is what she uses today. It is much more compact and uses less power.

Unfortunately, Intel has discontinued the Edison, so she asked anyone who had extra Edisons to contact her or put them up for sale on eBay. The project has gone back to the Raspberry Pi, which has gotten better in the interim. She noted that many of the attendees at LCA had gotten a Raspberry Pi Zero and said that she would be happy to take any that were not needed. By the end of the conference, 11 had been donated, but the organizing committee was also planning to donate any extras that were not handed out; OpenAPS should end up with a nice supply of free computers for users who need them.

There are some naysayers who proclaim that the project is not made up of medical professionals or programmers; "who do you think you are to do this?" That is ridiculous, she said. The algorithm is one that all diabetes patients learn early on: if your blood sugar is going up, add more insulin based on your settings; if it is going down, add less. The computer is just able to do it every five minutes. The people who use the system find that any negatives are outweighed by the positives, in terms of quality of life, that it brings.

The most important thing she has learned through this process is that open source is about "the willingness to try"; it is about showing up and asking questions, she said. The OpenAPS project is not made up of doctors, scientists, researchers, and engineers, but it is doing engineering, experimentation, science, and so on every day. The rest of the open-source community has helped immensely along the way as well.

This kind of effort can be applied beyond diabetes. If you know someone with cancer, asthma, cystic fibrosis, or some other rare disease that few people know anything about, start asking them what their biggest problem is; "what is the thing that most annoys you?" A cure is probably at the top of the list, but once you get past that, you may find there is something fairly simple that would help them. For her, it was simply a louder alarm so that she could sleep without worry. Once that was fixed, she (and others) started chipping away at the other problems that existed and ended up with a much bigger and more useful system than they ever envisioned at the beginning.

She closed by noting that there are another 10 types of people: those who will consider using their open-source skills to solve healthcare problems and those who will not. She hopes that her presentation will help to get more people into the first group.

A YouTube video of the keynote is available.

[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Christchurch for linux.conf.au.]

Comments (36 posted)

Systemd as tragedy

By Jonathan Corbet
January 28, 2019

LCA
Tragedy, according to Wikipedia, is "a form of drama based on human suffering that invokes an accompanying catharsis or pleasure in audiences". Benno Rice took his inspiration from that definition for his 2019 linux.conf.au talk on the story of systemd which, he said, involves no shortage of suffering. His attempt to cast that story for the pleasure of his audience resulted in a sympathetic and nuanced look at a turbulent chapter in the history of the Linux system.

Rice was also influenced by Aurynn Shaw's writing on "contempt culture". According to Shaw, people use contempt (of developers using a different programming language, for example) as a social signifier, a way of showing that they belong to the correct group. This sort of contempt certainly plays into this story, where large groups identify themselves primarily by their disdain for systemd and those who work with it. A related concept is change, or the resistance thereto. The familiar is comfortable, but it isn't necessarily good, especially if it has been around for a long time.

The roots of the tragedy

The ancestry of systemd, he said, is tied to the origin of Unix, which was "a happy accident" — a reaction to the perceived complexity of the systems that came before. It was brutally simple in all regards, including how its user space was bootstrapped. Putting up an early init man page, he called out the "housekeeping functions" that it was designed to carry out, including mounting filesystems and starting daemons. Those are two distinct tasks, but they had been lumped together into this one process.

In those days, there were few daemons to worry about; cron, update (whose job was to write out the filesystem superblocks occasionally), and the init process itself listening on a few terminals were about it. By the time that 4BSD came around, Unix had gained a proper getty daemon, network daemons like routed and telnetd, and the "superdaemon" inetd. That is where things started to get interesting, but it still worked well enough for a while.

Then the Internet happened. Using inetd worked well enough for small amounts of traffic, but then the World Wide Web became popular and it was no longer possible to get away with forking a new process for every incoming connection. Sites on the net started running databases and other systems with a great deal of stored state that could not go away between connections. All this shifted the notion of a daemon toward "services", which are a different beast. Old-style init could start services, but was pretty much useless thereafter.

Part of the problem was the conflation of services and configuration. Tasks like mounting filesystems are of the latter variety; they are generally done once at boot time and forgotten thereafter. But that approach is not sufficient for automated service management, which requires ongoing attention. Thus we saw the birth of more service-oriented systems like Upstart and systemd. This is something other operating systems figured out a while back. Windows NT had a strong service model from the beginning, he said, and Mac OS has one now in the form of launchd. Other systems had to play a catch-up game to get there.

Apple's launchd showed up in the Tiger release and replaced a whole series of event-handling daemons, including init, cron, and inetd. Systemd, Rice said, was an attempt to take a lot of good ideas from launchd. When Lennart Poettering started thinking about the problem, he first looked at Upstart, which was an event-based system that was still based around scripts, but he concluded that he could do a better job. His "Rethinking PID 1" blog post cited launchd as an example to work from. He was concerned about improving boot speed and the need for the init system to be tuned into the hardware and software changes on a running system. When the Unix init system was designed, systems were static, but the environment in which the operating system runs now is far more dynamic.

The service layer

Classic Unix-like systems are split into two major components: the kernel and user space. But kernels have become more dynamic and changeable over time, responding to the hardware on which they run. That has led to the need for a new layer, the "service layer", to sit between the kernel and user space. This layer includes components like udev and Network Manager, but systemd seeks to provide a comprehensive service layer; that is why it has pulled in functionality like udev over time. It has been quite successful, achieving wide (but not universal) adoption through much of the Linux distribution space, often creating a great deal of acrimony in the process.

There are a number of often-heard arguments against systemd; one of those is that it violates the Unix philosophy. This argument, he said, seems to be predicated on the notion that systemd is a single, monolithic binary. That would indeed be silly, but that's not how systemd is structured. It is, instead, a lot of separate binaries maintained within a single project. As "a BSD person" (he is a former FreeBSD core-team member), Rice thought this pulling-together of related concepts makes sense. The result is not the bloated, monolithic system that some people seem to see in systemd.

Another frequently heard criticism is that systemd is buggy. "It's software" so of course it's buggy, he said. The notion that systemd has to be perfect, unlike any other system, raises the bar too high. At least systemd has reasonable failure modes much of the time, he said. Then, there is the recurrent complaint usually expressed as some form of "I can't stand Lennart Poettering". Rice declined to defend Poettering's approach to community interaction, but he also said that he had to admire Poettering's willpower and determination. Not everybody could have pushed through such a change.

Systemd makes no attempt to be portable to non-Linux systems, which leads to a separate class of complaints. If systemd becomes the standard, there is a risk that non-Linux operating systems will find themselves increasingly isolated. Many people would prefer that systemd stuck to interfaces that were portable across Unix systems, but Rice had a simple response for them: "Unix is dead". Once upon a time, Unix was an exercise in extreme portability that saw some real success. But now the world is "Linux and some rounding errors" (something that, as a FreeBSD person, he finds a little painful to say), and it makes no sense to stick to classic Unix interfaces. The current situation is "a pathological monoculture", and Linux can dictate the terms that the rest of the world must live by.

Systemd has gained a lot from this situation. For example, control groups are a highly capable and interesting mechanism for process management; it would be much harder to do the job without them. They are much more powerful and granular than FreeBSD jails, he said. Developers for systems like FreeBSD can see systemd's use of these mechanisms, and its consequent non-portability, as a threat. But they can also treat it as a reason to feel just as free to pursue their own solutions to these problems.

Change and tragedy

The whole systemd battle, Rice said, comes down to a lot of disruptive change; that is where the tragedy comes in. Nerds have a complicated relationship to change; it's awesome when we are the ones creating the change, but it's untrustworthy when it comes from outside. Systemd represents that sort of externally imposed change that people find threatening. It does not help that developers like Poettering have shown little sympathy toward the people who have to deal with the change that has been imposed on them. That leads to knee-jerk reactions, but people need to step back and think about what they are doing. "Nobody needs to send Lennart Poettering death threats over a piece of software". Contempt is not cool.

Instead, it pays to think about this situation; why did systemd show up, and why is it important? What problem is it solving? One solution for people who don't like it is to create their own alternative; that is a good way to find out just how much fun that task is. Among other things, systemd shows how the next generation doesn't think about systems in the same way; they see things more in terms of APIs and containers, for example.

So what can we learn from systemd? One is that messaging transports are important. Systemd uses D-Bus heavily, which gives it a lot of flexibility. Rice is not a fan of D-Bus, but he is very much a fan of messaging systems. He has been pushing for BSD systems to develop a native message transport, preferably built into the kernel with more security than D-Bus offers. On top of that one can make a proper remote procedure call system, which is a way to make kernel and user-space components operate at the same level. In a properly designed system, a process can simply create an API request without having to worry about where that request will be handled.
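
The property he is after, a caller that does not know or care where its request is handled, can be illustrated without any particular message bus. The sketch below is not D-Bus or any real BSD interface; the handler registry and method names are invented for the example:

    # Illustrative only: callers make a request by name and never know whether
    # it is served by a local function, another process, or the kernel.
    import json

    HANDLERS = {}

    def handler(name):
        def register(fn):
            HANDLERS[name] = fn
            return fn
        return register

    @handler("network.get_address")
    def get_address(interface):
        return {"interface": interface, "address": "192.0.2.10"}  # dummy data

    def call(name, **args):
        # In a real system this would serialize the request onto a message
        # transport; here it is dispatched locally, but the caller cannot tell.
        request = json.dumps({"method": name, "args": args})
        decoded = json.loads(request)
        return HANDLERS[decoded["method"]](**decoded["args"])

    print(call("network.get_address", interface="eth0"))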

Other lessons include the importance of supporting a proper service lifecycle without having to install additional service-management systems to get there. Service automation via APIs is important; systemd has provided much of what is needed there. Support for containers is also important; they provide a useful way to encapsulate applications.

Systemd, he concluded, fills in the service layer for contemporary Linux systems; it provides a good platform for service management, but certainly does not have to be the only implementation of such a layer. It provides a number of useful features, including painless user-level units and consistent device naming; even the logging model is good, Rice said. Binary logs are not a bad thing as long as you have the tools to pull them apart. And systemd provides a new model of an application; rather than being a single binary, an application becomes a bunch of stuff encapsulated within some sort of container.
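
As an illustration of the "tools to pull them apart" point: the journal can be exported as line-delimited JSON and processed with ordinary scripts. This assumes a Linux system where systemd's journalctl is available; the fields printed are common journal fields, though not every entry carries all of them:

    # Read the last ten journal entries as JSON; assumes journalctl is present.
    import json
    import subprocess

    out = subprocess.run(
        ["journalctl", "-o", "json", "-n", "10", "--no-pager"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in out.splitlines():
        entry = json.loads(line)
        print(entry.get("_SYSTEMD_UNIT", "?"), entry.get("MESSAGE", ""))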

The world is changing around us, Rice said. We can either go with that change or try to resist it; one path is likely to be more rewarding than the other. He suggested that anybody who is critical of systemd should take some time to look more closely and try to find one thing within it that they like. Then, perhaps, the catharsis phase of the tragedy will be complete and we can move on.

A video of this talk is available on YouTube.

[Thanks to linux.conf.au and the Linux Foundation for supporting my travel to the event.]

Comments (174 posted)

Design for security

By Jake Edge
January 30, 2019

LCA

Serena Chen began her talk in the Security, Identity & Privacy miniconf at linux.conf.au 2019 with a plan to dispel a pervasive myth that "usability and security are mutually exclusive". She hoped that by the end of her talk, she could convince the audience that the opposite is true: good user experience design and good security cannot exist without each other. It makes sense, she said: a secure system must be reliable and controllable, which means it must be usable, while a usable system is less confusing and thus more secure.

Chen is a product designer who is interested in the "intersection between security and humans". She thought that most in the room would agree that "everyone deserves to be secure without having to be experts". But our current ways of designing systems are not focused on that. We need to stop expecting our users to be or become security experts.

No one cares

[Serena Chen]

It may be hard for a roomful of people interested in security to hear, she said, but "no one cares about security". In truth, they may care about it in theory, but that is not what they are thinking about or trying to accomplish at any given time. As an example, she referred to the classic "dancing pigs" quote ("Given a choice between dancing pigs and security, users will pick dancing pigs every time."), though she noted that it was written in 1999 and might need its memes updated, perhaps by substituting "cats" for "pigs".

But we expect users to actively think about their security and, when they don't, "we shame them". In the security world, there is a "pervasive culture of shame". She fully admits that she has participated, by making fun of users who post their credit card numbers on the internet or get their password scammed on IRC. Beyond that, there's recommending a completely unusable tool (with a slide of the OpenPGP home page) then "belittling them when they can't figure out how to do it", she said to laughter.

Shaming people is lazy, she said. People wanted to complete a task and we have failed to provide a secure and easy way to do so. It is okay that people don't care about security, she said, because they shouldn't have to. It is our job to build secure systems for everyone. She pointed to the classic Sandwich xkcd comic ("sudo make me a sandwich") and noted that lots of developers probably just add sudo to the front of a failing command without even thinking about it; we are all just trying to get things done.

In her job, she focuses on the end-user experience. Instead of "overwhelming people with complex technical instructions", we can make things more intuitive and friendly. That way, it will actually get used. Something that can help is "design thinking"; it is just another problem-solving tool that should be in your toolkit, she said. Using design thinking, she came up with four things she thinks should be considered the next time there is the "inevitable tug-of-war between usability and security".

Least resistance

The first is "paths of least resistance". We are used to putting up a lot of walls, Chen said, such as popping up warnings ("have you considered not doing the thing?" or "oh, you needed to open up a program to do your job, have you considered getting a new job?"). That comes from security being tacked on at the end of the development process, she said.

Instead of walls, she suggested making rivers: make the path of least resistance be the path that is the most secure. It is the "secure by default" principle; if you do nothing, you get the most secure options. It can be as simple as defaulting the options in a dialog to the one that is more secure. For a real-world example, she pointed to blenders that won't turn on unless their lids are on; "you can't screw that up".

Security actions should be a normal part of the process. It shouldn't take "extra credit" work to be secure. If, for example, a phone number is going to be needed to verify an account, ask for it right up front rather than expecting the user to go add it in some settings screen. If a user visits a site or an app with a specific task in mind, they are not likely willing to go through a whole setup process. But if they are going through a setup process, make sure that all of the pieces that need to go with that are grouped with it.

Humans are good at being economical with their physical and mental resources, she said. That means security will not be on their minds as they will be trying to accomplish something. Therefore it is paramount that the goals of the end user are aligned with the security goals of the developers. If they are not aligned, users will get around the security goals—sometimes on purpose.

The browser warnings about privacy with regard to HTTPS certificates that do not validate, perhaps because they are self-signed, are one example. She noted that the warnings have been getting better, but we all know someone who just breezes right past the warnings "without a care in the world", she said. Their current goal is not to try to figure out which sites are dodgy and which aren't, so they just say "I know how to internet" to themselves and click through.

She asked: "why doesn't friction work here?" The problem is that, when she talks about paths of least resistance, she really means paths of perceived least resistance. If clicking through ten security warnings is how she has learned to get a certain thing done, she will do it every time—and she won't even see the warnings any more. In fact, a study has shown that after two exposures to a warning, people do less visual processing when they "see" it again.

There is a massive vulnerability in what she called "shadow IT": the path that employees actually take that is directly contrary to what the IT department requires because the requirements are too onerous. An example would be password rotation policies, which are known not to work and to lead to various less-secure options (e.g. short passwords, passwords on post-it notes, cycling through slight variations of the same password).

If you keep putting up obstacle courses, she said, users will get good at running them. Another area where IT departments put up walls is around what content can be accessed, but security tools should not be used for non-security purposes. She has had to work around company firewalls so she could listen to Spotify, for example. If employees are spending all of their time on YouTube, "that is a management problem, not a security problem", she said.

When you want to build good paths, don't make the users think, she said. "Again, build rivers, not walls." Make the secure path be the easiest one. A good example of this is the BeyondCorp security model used by Google, which removed the security perimeter from the corporate network, effectively putting the whole thing onto the internet. That requires ensuring that the authentication systems are reliable and that there are good models for all of the roles within the organization including what access they require. More importantly, from her perspective, BeyondCorp had a clear focus on user experience and on making it largely invisible to its users.

Intent

If you want to align your security goals with the goals of users, you need to know what the user's intent is. Developers forget about intent all of the time; that is usually where the tension between usability and security occurs. There is a tendency to fall back on common patterns; designers say "make it easy", while security people say "lock it down". But it is not her job to make everything easy, nor is it the security developers' job to make it all locked down. The job is to make an action that the user wants to take at a specific time and place easy; everything else can be locked down.

Figuring out the user's intent is "easier said than done, of course", but it starts by understanding who the user is, where they are, what time it is, and what kinds of things they might want to do under those conditions. Is there enough information for the application to know those things? Are people sharing login information so that it is difficult to know who the actual user is? Are those things that need to be handled?

Determining what a user would be expected to do—and not do—based on their role, while using the minimum amount of personal data to do so, is the goal. Simple things like country of origin and time of day can inform the software on what that user's intent might be, she said.
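
A hypothetical sketch of that kind of coarse heuristic, using nothing more than country and time of day, might look like the following; the rules and defaults are invented purely for illustration:

    # Invented example: flag requests that fall outside a user's usual pattern
    # using only coarse signals, rather than detailed personal data.
    def looks_unusual(country, hour, usual_country="NZ", usual_hours=range(7, 23)):
        reasons = []
        if country != usual_country:
            reasons.append("new country")
        if hour not in usual_hours:
            reasons.append("unusual time of day")
        return reasons

    print(looks_unusual("NZ", 14))   # [] -> proceed silently
    print(looks_unusual("US", 3))    # two signals -> maybe ask to re-verify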

(Mis)communication

She challenged attendees to think about communication a bit differently than they usually do; it is not just a "mushy kind of human-based I/O that is a bit of a drag to deal with sometimes". In particular, she wanted them to think about miscommunication as a human security vulnerability; it is what is exploited by social engineering attacks. It is the ambiguity in communication that gives social engineers cracks to exploit.

In their current project, she asked attendees, is there something that the program is unintentionally miscommunicating? For example, the (in)famous green lock in the web browser interface (a miscommunication that has thankfully largely been fixed, she said) means that the communication is encrypted and that the domain name is attested to by the certificate authority (CA). But to the average person, it simply means that the site is "secure", because it says so right next to the lock icon. We know that is not necessarily true, though—there is a miscommunication, thus a human security vulnerability.

By way of an example, she mocked up a "pretty convincing" web site that would show the green lock. If she was bored one night and "felt like doing some crime", she could grab an unused domain name (say, "chase-help-and-support.com") for around $20 and then go to Let's Encrypt for a free certificate. With some HTML and CSS hacking, she could set up a pretty convincing phishing site that the browser would explicitly label as "secure". She didn't actually do any of that, but phishers do it all the time and it works really well. The question to ask yourself is: do your end users know what it is you are trying to communicate?

Matching mental models

In order to understand whether users are receiving your communication, you need to understand their mental model of what is going on. Matching the mental models of users and developers is the most important consideration in this process. The user's expectations are what ultimately govern whether a system is secure: if they are met, it is secure, if not, it isn't.

Not all man-in-the-middle instances are attacks; the telephone game is a series of man-in-the-middle "attacks", but it is "just a pointless children's game". In a network context, though, users expect that their communications can be read only by the intended recipients, and man-in-the-middle attacks violate those expectations.

So in order to make a system that is secure from the user's perspective, we need to figure out what their mental model is. One way to do that is to observe them interacting with the system. Designers are often doing user interviews; sitting in on those or reading their transcripts will be quite helpful in determining what users are thinking. If you don't have access to those activities, observe friends and family. By figuring out why they do the things they do, you can infer their intent.

Another approach is to influence their mental model so that it better matches what is actually happening. Whenever we make something, we teach and whenever someone uses something we make, they learn. The path of least resistance will often simply become the way to accomplish a task. Our software is already influencing the user's mental model. As an example of that, she described something that Apple products used to do; they would semi-randomly pop up a dialog asking the user to log into iTunes. People got so used to that—and just quickly typing their password to dismiss it—that it could easily be used as part of phishing scams.

We should pay attention to what our applications are teaching our users. Are they teaching users to ignore warnings and figure out the easiest way around them? Or that security is a barrier to be surmounted rather than something that helps them? The question "is it secure?" is completely meaningless without some kind of context about who and what it is securing and what it is protecting against.

In summary

As she concluded her talk, she noted that cross-pollination between design and security is rare, which is a missed opportunity. All of our jobs are about "outcomes based on specific goals", which should not rely on the stereotypical patterns of designers' considerations versus those of security developers. The key is to align the user's goals with the security goals.

She closed with a final anecdote: in her company's old building, one of the floors had a light switch that no one knew the purpose of. It had a post-it note over the switch that simply said: "No!". "Can you guess what the first thing I did was?", she said with a laugh as she showed a picture of a finger about to press it.

The first question in the lengthy Q&A was, inevitably, about the switch. She never found out what it controlled though she switched it many times. Another question had to do with security flaws that are difficult to communicate to users and, perhaps, have no fix yet, such as a firmware or processor bug. There are no simple answers there, especially if there is no recommended action that can be offered, she said. In those cases, hopefully automatic updates are taking care of things once there is a fix. Until then, it is not clear what can usefully be communicated to non-technical users.

How to get users to care about the "trust question" was another query. She acknowledged the problem but said that users often do not have time to even think about who they trust and they cannot be bothered to do so. She likened it to voting, where we would like to have people care about the issues and make informed choices, but many simply don't have the time—or take the time—to become informed.

A YouTube video of the talk is available.

[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Christchurch for linux.conf.au.]

Comments (49 posted)

Changing the world with better documentation

By Jonathan Corbet
January 24, 2019

LCA
Rory Aronson started his 2019 linux.conf.au keynote with a statement that gardening just isn't his passion; an early attempt degenerated into a weed-choked mess when he couldn't be bothered to keep it up. But he turned out to be passionate indeed about building a machine that would do the gardening for him. That led to the FarmBot project, a successful exercise in the creation of open hardware, open software, and an open business. A big part of that success, it turns out, lies in the project's documentation.

A few years after his garden went to seed, Aronson was taking an organic agriculture class when he stumbled across a piece of advanced industrial agricultural equipment. It was a tractor attachment that contained an array of cameras, one for each of a dozen or so rows of plants. The device can distinguish lettuce plants from weeds; it uses that information to automatically till the weeds under the soil. It can also selectively spray materials as needed. This was, he thought, a piece of cool technology, but he found himself wondering why there was no version of it for his backyard.

As he thought about the problem, he realized that the necessary components to create such a machine exist; it's just a matter of putting them together. With the help of tools like 3D printers and CNC milling machines, he could make a device that would plant seeds, spray water on them, take care of weeds, and more. After graduating in 2013, he started working on the project in earnest, producing a white paper describing what was to become FarmBot — a sort of 3D printer for growing a garden. He designed the device to a high level of detail and posted the result, asking if there was anybody out there who wanted to help make this machine real.

As an aside, he noted that the companies making farming equipment are not much enamored of open-source software or ideas; as a result, farmers have been at the forefront of the "right to repair" movement. His own values, he said, differ from those held by farming companies; he values impact on lives far more than money made. Distributed ownership, individual control, and the right to repair matter more than revenue. So it felt right to him to share the idea widely; after all, everybody eats.

The posting of the white paper inspired others to join the project; some co-founders came in early on to help write the firmware and the web app. He received a Shuttleworth Foundation grant that helped with the building of prototype machines; after consuming the results, they concluded that they could make a real product out of the FarmBot. So the project's crowdfunding video was created, and far more money than expected rolled in; now they had to find a way to ship real devices to customers. The first of them, Aronson said, went to his mother; others have been shipped all over the world. Even NASA bought one to investigate ideas around growing food in space.

But, he said, that wasn't what he came to Christchurch to talk about; instead, he was there to discuss documentation. If you go to a web site for a typical software project, you'll be greeted by a README.md file that, in theory, gives all the information you need to get started with that project. It is a tried-and-true documentation model, but it only goes so far, and not everything in the world is software. FarmBot is a hardware and software project, but also a user community; it has its own needs for documentation that require going beyond the README.md file.

FarmBot is an open-source hardware project, which brings its own requirements, including version management. On the FarmBot site, one can find the hardware designs for all 14 (so far) versions of the device, and see what has changed between each one. The CAD models are there, built with Onshape, which provides a GitHub-like system for hardware. Bills of materials are there, along with instructional videos and more. There's also an area for modifications and add-ons.

A key part of building a community, Aronson said, is to do this kind of documentation well; when that happens, people will start playing with the design.

One other aspect of the FarmBot community has, he thought, lessons to offer for open source in general. He pointed to the linux.conf.au code of conduct as a good example of a necessary component; he advised the audience to avoid joining any project that lacks a code of conduct. Releasing a code of conduct creates an open-source building block for the community as a whole.

That said, there are some places where the conference (and many projects) could improve. How, he asked, can a project build a safety team? There should be a documented process for that. Some events and projects document the incidents they have had to deal with, providing a useful guide to how specific problems should be handled. All of this naturally has to be done in a way that protects both the privacy of the people involved and the safety of the conduct team.

Another thing that FarmBot has been doing is open-sourcing many of the components of building the business itself and making it work. If you have a real business, he said, you will have competitors, but there is a lot of value to be had in seeing them more like collaborators. Toward that end, the company has launched a document hub for its business information. Therein, one can find helpful information on topics like how to handle sales tax or do order fulfillment. The compensation formulas for its employees are there. To the extent possible, the company has open-sourced the plans for building the company itself.

What all of this comes down to, he concluded, is giving other people power. With that power, they can create software, hardware, communities, and businesses. With FarmBot, it all started with a white paper; effective documentation can bring about real change.

A video of this talk is available on YouTube.

[Thanks to linux.conf.au and the Linux Foundation for supporting my travel to the event.]

Comments (75 posted)

Snowpatch: continuous-integration testing for the kernel

By Jonathan Corbet
January 26, 2019

LCA
Many projects use continuous-integration (CI) testing to improve the quality of the software they produce. By running a set of tests after every commit, CI systems can identify problems quickly, before they find their way into a release and bite unsuspecting users. The Linux kernel project lags many others in its use of CI testing for a number of reasons, including a fundamental mismatch with how kernel developers tend to manage their workflows. At linux.conf.au 2019, Russell Currey described a CI system called Snowpatch that, he hopes, will bridge the gap and bring better testing to the kernel development process.

There are a number of advantages to CI, Currey said. It provides immediate feedback to developers; with luck, they can fix their problems before other people have to spend any time reporting them. It can save a lot of time for reviewers. As a result, the whole code submission process speeds up, and the project is able to move more quickly as a whole.

The core idea behind a kernel CI implementation is not complicated: one just needs to merge patches from the mailing lists, then run a set of tests on the result. These tests can be as simple as checkpatch.pl, but can also include building and booting, running the kernel's self-testing code, and more. Once the tests are done, the results can be reported back to the developer.

Doing this in the kernel context proves to be harder than in projects that are hosted on sites like GitHub, though. A pull request contains all of the information needed to merge a group of changes; an email containing, say, patch 7/10 lacks that context. It is nearly impossible to tell from an email message whether a patch series has been merged, rejected, or superseded. In general, mailing lists simply do not carry the same level of metadata as contemporary project-hosting sites, and that makes the CI problem harder.

Even so, there are groups doing CI testing on the kernel now. The "big boy" of kernel CI is the 0day robot, which picks up patches from the mailing lists and runs a number of tests. It does some static-analysis testing on the x86 architecture, build testing with over 100 kernel configurations, and runs a set of tests looking for performance regressions. When tests fail, email is sent to the developer. 0day is useful, but it is proprietary to Intel, so nobody else has the ability to change it to do what they want. In the absence of failures, there is also no way for developers to tell whether the tests have been run on a given patch posting or not.

Providing better CI for the kernel requires obtaining better metadata for patches, but any proposal that requires kernel developers to change their workflow is clearly not going to get far, he said. The solution is to use Patchwork, which is already in use by a number of kernel subsystems and is designed to supplement mailing lists rather than replacing them. Patchwork is able to track the state of patches, keep a patch series together, and host test results. And, perhaps best of all for those who would like to extend its functionality, it has a JSON API that can be used to build scripts around it.
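
As a small illustration of that scripting ability, a Patchwork instance's patch list can be queried in a few lines. The URL, parameters, and fields below follow the public ozlabs instance's REST API, but endpoint details vary between Patchwork versions, so treat them as assumptions:

    # Query recent patches from a Patchwork instance; endpoint and field names
    # are assumptions based on the Patchwork 2.x REST API.
    import requests

    BASE = "https://patchwork.ozlabs.org/api"
    resp = requests.get(f"{BASE}/patches/",
                        params={"project": "linuxppc-dev", "per_page": 5})
    resp.raise_for_status()

    for patch in resp.json():
        print(patch["id"], patch["state"], patch["name"])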

Patchwork fills the bill nicely because it is already in use and accepted by many developers; adopting it will not require any workflow changes. Patchwork can host test results without having to run the tests itself; they can come from anywhere. There is also value in having the results posted on a web site; developers can learn when tests have been run (and their outcome) without the need to send out email for every patch set.

Snowpatch, thus, is built on top of Patchwork. It is written in Rust in an attempt, Currey said, to be cool. The effort began at linux.conf.au 2016 in Geelong, and is maintained in collaboration with Andrew Donnellan. The code is GPL-licensed. There is an instance running now for the linuxppc-dev mailing list.

At its core, Snowpatch grabs a patch from Patchwork, applies it to one or more repository branches, then sends the result to a remote system for testing. When the results come back, they are added to the Patchwork entry. Actually running the tests requires Jenkins for now — a limitation that Currey apologized for. But, he said, Jenkins does everything that the project needs it to do.
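
Snowpatch itself is written in Rust, but the flow Currey described can be sketched in Python. The remote name, the trigger_jenkins() helper, and the unauthenticated API calls below are placeholders rather than the project's actual code, and Patchwork endpoint details vary between versions:

    # Sketch of the Snowpatch flow: fetch a patch, apply it, hand it to CI,
    # and record the result as a Patchwork "check". Placeholders throughout.
    import subprocess
    import requests

    API = "https://patchwork.example.org/api"

    def trigger_jenkins(branch):
        # Stand-in for kicking off a Jenkins job and waiting for its verdict.
        return "success"

    def test_patch(patch_id, base_branch):
        # Fetch the patch as an mbox and apply it to a throwaway branch.
        mbox = requests.get(f"{API}/patches/{patch_id}/mbox/").text
        subprocess.run(["git", "checkout", "-B", "snowpatch-test", base_branch],
                       check=True)
        subprocess.run(["git", "am"], input=mbox, text=True, check=True)
        # Push the result somewhere the test machinery can see it, run the
        # tests, then record the outcome against the patch in Patchwork.
        subprocess.run(["git", "push", "-f", "ci-remote", "snowpatch-test"],
                       check=True)
        result = trigger_jenkins("snowpatch-test")
        requests.post(f"{API}/patches/{patch_id}/checks/",
                      json={"context": "snowpatch-build", "state": result})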

Should anybody else want to set up a Snowpatch instance, he said, there are a few basic requirements. First of all, it needs a local repository to which patches can be applied. Access to a Patchwork instance is needed to publish the results. A Jenkins server is needed to run the tests, and there needs to be a remote Git repository that is visible to the Jenkins system. Currey ended his talk with an expression of hope that more kernel subsystems will set up Snowpatch and start making use of it to improve their CI testing.

A member of the audience asked about the risk of malicious patches taking over the test machines. Currey answered that "something" needs to be in place to deal with that problem, but it hasn't been addressed yet. That something might involve having a maintainer approve test runs. That said, bad patches haven't been a problem so far. The final question had to do with dependencies between patches; Snowpatch has no real solution for that problem at this time.

A video of this talk is available on YouTube.

[Thanks to linux.conf.au and the Linux Foundation for supporting my travel to the event.]

Comments (25 posted)

Page editor: Jonathan Corbet

Inside this week's LWN.net Weekly Edition

  • Briefs: Alpine Linux 3.9; Debian 9.7; Bison 3.3; Firefox 65; Kodi 18; MythTV 30; Quotes; ...
  • Announcements: Newsletters; events; security updates; kernel patches; ...

Copyright © 2019, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds