
Design for security

By Jake Edge
January 30, 2019

LCA

Serena Chen began her talk in the Security, Identity & Privacy miniconf at linux.conf.au 2019 with a plan to dispel a pervasive myth that "usability and security are mutually exclusive". She hoped that by the end of her talk, she could convince the audience that the opposite is true: good user experience design and good security cannot exist without each other. It makes sense, she said, because a secure system must be reliable and controllable, which means it must be usable, while a usable system must be less confusing, thus it is more secure.

Chen is a product designer who is interested in the "intersection between security and humans". She thought that most in the room would agree that "everyone deserves to be secure without having to be experts". But our current ways of designing systems are not focused on that. We need to stop expecting our users to be or become security experts.

No one cares

[Serena Chen]

It may be hard for a roomful of people interested in security to hear, she said, but "no one cares about security". In truth, they may care about it in theory, but that is not what they are thinking about or trying to accomplish at any given time. As an example, she referred to the classic "dancing pigs" quote ("Given a choice between dancing pigs and security, users will pick dancing pigs every time."), though she noted that it was written in 1999 and might need its memes updated, perhaps by substituting "cats" for "pigs".

But we expect users to actively think about their security and, when they don't, "we shame them". In the security world, there is a "pervasive culture of shame". She fully admits that she has participated, by making fun of users who post their credit card numbers on the internet or get their password scammed on IRC. Beyond that, there's recommending a completely unusable tool (with a slide of the OpenPGP home page) and then "belittling them when they can't figure out how to do it", she said to laughter.

Shaming people is lazy, she said. People wanted to complete a task and we have failed to provide a secure and easy way to do so. It is okay that people don't care about security, she said, because they shouldn't have to. It is our job to build secure systems for everyone. She pointed to the classic Sandwich xkcd comic ("sudo make me a sandwich") and noted that lots of developers probably just add sudo to the front of a failing command without even thinking about it; we are all just trying to get things done.

In her job, she focuses on the end-user experience. Instead of "overwhelming people with complex technical instructions", we can make things more intuitive and friendly. That way, it will actually get used. Something that can help is "design thinking"; it is just another problem-solving tool that should be in your toolkit, she said. Using design thinking, she came up with four things she thinks should be considered the next time there is the "inevitable tug-of-war between usability and security".

Least resistance

The first is "paths of least resistance". We are used to putting up a lot of walls, Chen said, such as popping up warnings ("have you considered not doing the thing?" or "oh, you needed to open up a program to do your job, have you considered getting a new job?"). That comes from security being tacked on at the end of the development process, she said.

Instead of walls, she suggested making rivers: make the path of least resistance be the path that is the most secure. It is the "secure by default" principle; if you do nothing, you get the most secure options. It can be as simple as defaulting the options in a dialog to the one that is more secure. For a real-world example, she pointed to blenders that won't turn on unless their lids are on; "you can't screw that up".
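
To make that concrete, here is a minimal Python sketch of the "secure by default" idea; it is not from Chen's talk, and the share_file() helper and its Visibility options are invented purely for illustration. Doing nothing yields the most restrictive setting, while the riskier option has to be requested explicitly.

    # Hypothetical sketch: the safest option is the default.
    from dataclasses import dataclass
    from enum import Enum

    class Visibility(Enum):
        PRIVATE = "private"        # only the owner can see it
        TEAM = "team"              # members of the owner's team
        PUBLIC_LINK = "public"     # anyone with the link

    @dataclass
    class Share:
        path: str
        visibility: Visibility

    def share_file(path: str, visibility: Visibility = Visibility.PRIVATE) -> Share:
        """Create a share; callers who do nothing extra get the most secure choice."""
        return Share(path=path, visibility=visibility)

    # The path of least resistance is the safe one...
    default_share = share_file("report.pdf")
    assert default_share.visibility is Visibility.PRIVATE

    # ...and the less secure option must be asked for explicitly.
    public_share = share_file("flyer.pdf", visibility=Visibility.PUBLIC_LINK)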

Security actions should be a normal part of the process. It shouldn't take "extra credit" work to be secure. If, for example, a phone number is going to be needed to verify an account, ask for it right up front rather than expecting the user to go add it in some settings screen. If a user visits a site or an app with a specific task in mind, they are not likely willing to go through a whole setup process. But if they are going through a setup process, make sure that all of the pieces that need to go with that are grouped with it.

Humans are good at being economical with their physical and mental resources, she said. That means security will not be on their minds as they will be trying to accomplish something. Therefore it is paramount that the goals of the end user are aligned with the security goals of the developers. If they are not aligned, users will get around the security goals—sometimes on purpose.

The browser warnings about privacy with regard to HTTPS certificates that do not validate, perhaps because they are self-signed, are one example. She noted that the warnings have been getting better, but we all know someone who just breezes right past the warnings "without a care in the world", she said. Their current goal is not to try to figure out which sites are dodgy and which aren't, so they just say "I know how to internet" to themselves and click through.

She asked: "why doesn't friction work here?" The problem is that, when she talks about paths of least resistance, she really means paths of perceived least resistance. If clicking through ten security warnings is how she has learned to get a certain thing done, she will do it every time—and she won't even see the warnings any more. In fact, a study has shown that after two exposures to a warning, people do less visual processing when they "see" it again.

There is a massive vulnerability in what she called "shadow IT": the path that employees actually take that is directly contrary to what the IT department requires because the requirements are too onerous. An example would be password rotation policies, which are known not to work and to lead to various less-secure options (e.g. short passwords, passwords on post-it notes, cycling through slight variations of the same password).

If you keep putting up obstacle courses, she said, users will get good at running them. Another area where IT departments put up walls is around what content can be accessed, but security tools should not be used for non-security purposes. She has had to work around company firewalls so she could listen to Spotify, for example. If employees are spending all of their time on YouTube, "that is a management problem, not a security problem", she said.

When you want to build good paths, don't make the users think, she said. "Again, build rivers, not walls." Make the secure path be the easiest one. A good example of this is the BeyondCorp security model used by Google, which removed the security perimeter from the corporate network, effectively putting the whole thing onto the internet. That requires ensuring that the authentication systems are reliable and that there are good models for all of the roles within the organization including what access they require. More importantly, from her perspective, BeyondCorp had a clear focus on user experience and on making it largely invisible to its users.
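
For a rough sense of what such a model looks like in code, the following is a simplified, hypothetical sketch of a BeyondCorp-style access decision; the names and policy table are invented here and are not taken from Google's implementation. Every request is checked against the user's role and the state of their device, with no notion of a trusted internal network.

    # Hypothetical sketch of a per-request, zero-trust style access check.
    from dataclasses import dataclass

    @dataclass
    class Device:
        managed: bool          # enrolled in inventory and receiving updates
        disk_encrypted: bool

    @dataclass
    class Request:
        user: str
        roles: set
        device: Device
        resource: str

    # Which roles may reach which resources; maintained centrally.
    ACCESS_POLICY = {
        "payroll-app": {"hr"},
        "source-code": {"engineering"},
        "wiki": {"hr", "engineering", "sales"},
    }

    def allow(request: Request) -> bool:
        """Allow only healthy devices whose user holds a matching role."""
        if not (request.device.managed and request.device.disk_encrypted):
            return False
        required = ACCESS_POLICY.get(request.resource)
        if required is None:
            return False                   # unknown resources are denied
        return bool(request.roles & required)

    laptop = Device(managed=True, disk_encrypted=True)
    print(allow(Request("serena", {"engineering"}, laptop, "source-code")))  # True
    print(allow(Request("serena", {"engineering"}, laptop, "payroll-app")))  # False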

Intent

If you want to align your security goals with the goals of users, you need to know what the user's intent is. Developers forget about intent all of the time; that is usually where the tension between usability and security occurs. There is a tendency to fall back on common patterns; designers say "make it easy", while security people say "lock it down". But it is not her job to make everything easy, nor is it the security developers' job to make it all locked down. The job is to make an action that the user wants to take at a specific time and place easy; everything else can be locked down.

Figuring out the user's intent is "easier said than done, of course", but it starts by understanding who the user is, where they are, what time it is, and what kinds of things they might want to do under those conditions. Is there enough information for the application to know those things? Are people sharing login information so that it is difficult to know who the actual user is? Are those things that need to be handled?

Determining what a user would be expected to do—and not do—based on their role, while using the minimum amount of personal data to do so, is the goal. Simple things like country of origin and time of day can inform the software on what that user's intent might be, she said.
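
As a hedged illustration of that kind of inference, the short Python sketch below scores a request against a user's usual countries and active hours; the signals, weights, and threshold are invented for the example rather than taken from the talk.

    # Hypothetical sketch: flag requests that fall outside a user's usual pattern.
    from dataclasses import dataclass, field

    @dataclass
    class UserProfile:
        usual_countries: set = field(default_factory=set)
        usual_hours: set = field(default_factory=set)   # hours (0-23) the user is normally active

    def risk_score(profile: UserProfile, country: str, hour: int) -> int:
        """Score how far a request is from the user's established pattern."""
        score = 0
        if country not in profile.usual_countries:
            score += 2      # an unfamiliar country is a stronger signal
        if hour not in profile.usual_hours:
            score += 1      # an odd hour alone is only a weak signal
        return score

    def needs_step_up(profile: UserProfile, country: str, hour: int, threshold: int = 2) -> bool:
        """Ask for extra verification only when the request looks unusual."""
        return risk_score(profile, country, hour) >= threshold

    alice = UserProfile(usual_countries={"NZ"}, usual_hours=set(range(8, 19)))
    print(needs_step_up(alice, "NZ", 10))   # False: matches her normal pattern
    print(needs_step_up(alice, "RU", 3))    # True: unfamiliar country and hour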

(Mis)communication

She challenged attendees to think about communication a bit differently than they usually do; it is not just a "mushy kind of human-based I/O that is a bit of a drag to deal with sometimes". In particular, she wanted them to think about miscommunication as a human security vulnerability; it is what is exploited by social engineering attacks. It is the ambiguity in communication that gives social engineers cracks to exploit.

She asked attendees to consider whether, in their current project, there is something that the program is unintentionally miscommunicating. For example, the (in)famous green lock in the web browser interface (a bug that has thankfully largely been fixed, she said) means that the communication is encrypted and that the domain name is attested to by the certificate authority (CA). But to the average person, it simply means that the site is "secure", because it says so right next to the lock icon. We know that is not necessarily true, though—there is a miscommunication, thus a human security vulnerability.

By way of an example, she described how she could put together a "pretty convincing" web site that would show the green lock. If she were bored one night and "felt like doing some crime", she could grab an unused domain name (say, "chase-help-and-support.com") for around $20 and then go to Let's Encrypt for a free certificate. With some HTML and CSS hacking, she could set up a pretty convincing phishing site that the browser would explicitly label as "secure". She didn't actually do any of that, but phishers do it all the time and it works really well. The question to ask yourself is: do your end users know what it is you are trying to communicate?
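
A short Python sketch (again, not from the talk) shows how little that browser validation actually attests to: the certificate chain checks out and the name matches, but nothing in it says who operates the site. Only the standard library's ssl module is used.

    # Connect over TLS and print what the validated certificate attests to.
    import socket
    import ssl

    def certificate_identity(hostname: str) -> dict:
        """Return the subject and issuer of the server's validated certificate."""
        ctx = ssl.create_default_context()     # normal CA validation, as a browser would do
        with socket.create_connection((hostname, 443), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                cert = tls.getpeercert()
        # The only identity the CA vouched for is the hostname itself.
        return {"subject": cert.get("subject"), "issuer": cert.get("issuer")}

    print(certificate_identity("example.com"))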

Matching mental models

In order to understand whether users are receiving your communication, you need to understand their mental model of what is going on. Matching the mental models of users and developers is the most important consideration in this process. The user's expectations are what ultimately govern whether a system is secure: if they are met, it is secure, if not, it isn't.

Not all man-in-the-middle instances are attacks; the telephone game is a series of man-in-the-middle "attacks", but it is "just a pointless children's game". In a network context, though, users expect that their communications are able to be read only by the expected recipients and man-in-the-middle attacks violate those expectations.

So in order to make a system that is secure from the user's perspective, we need to figure out what their mental model is. One way to do that is to observe them interacting with the system. Designers are often doing user interviews; sitting in on those or reading their transcripts will be quite helpful in determining what users are thinking. If you don't have access to those activities, observe friends and family. By figuring out why they do the things they do, you can infer their intent.

Another approach is to influence their mental model so that it better matches what is actually happening. Whenever we make something, we teach and whenever someone uses something we make, they learn. The path of least resistance will often simply become the way to accomplish a task. Our software is already influencing the user's mental model. As an example of that, she described something that Apple products used to do; they would semi-randomly pop up a dialog asking the user to log into iTunes. People got so used to that—and just quickly typing their password to dismiss it—that it could easily be used as part of phishing scams.

We should pay attention to what our applications are teaching our users. Are they teaching users to ignore warnings and figure out the easiest way around them? Or that security is a barrier to be surmounted rather than something that helps them? The question "is it secure?" is completely meaningless without some kind of context about who and what it is securing and what it is protecting against.

In summary

As she concluded her talk, she noted that cross-pollination between design and security is rare, which is a missed opportunity. All of our jobs are about "outcomes based on specific goals", which should not rely on the stereotypical patterns of designers' considerations versus those of security developers. The key is to align the user's goals with the security goals.

She closed with a final anecdote: in her company's old building, one of the floors had a light switch that controlled something no one could identify. A post-it note over the switch simply said: "No!". "Can you guess what the first thing I did was?", she said with a laugh as she showed a picture of a finger about to press it.

The first question in the lengthy Q&A was, inevitably, about the switch. She never found out what it controlled though she switched it many times. Another question had to do with security flaws that are difficult to communicate to users and, perhaps, have no fix yet, such as a firmware or processor bug. There are no simple answers there, especially if there is no recommended action that can be offered, she said. In those cases, hopefully automatic updates are taking care of things once there is a fix. Until then, it is not clear what can usefully be communicated to non-technical users.

How to get users to care about the "trust question" was another query. She acknowledged the problem but said that users often do not have time to even think about who they trust and they cannot be bothered to do so. She likened it to voting, where we would like to have people care about the issues and make informed choices, but many simply don't have the time—or take the time—to become informed.

A YouTube video of the talk is available.

[I would like to thank LWN's travel sponsor, the Linux Foundation, for travel assistance to Christchurch for linux.conf.au.]

Index entries for this article
Security: Application design
Conference: linux.conf.au/2019



Design for security

Posted Jan 31, 2019 1:50 UTC (Thu) by murukesh (subscriber, #97031) [Link] (9 responses)

> It makes sense, she said, because a secure system must be reliable and controllable, which means it must be usable, while a usable system must be less confusing, thus it is more secure.

This is so incredibly naïve that I am having trouble mustering any desire to read the rest of this. Software can be reliable, controllable, and not confusing, but still annoying to use, and therefore not very usable. Or does user annoyance not factor into this definition of usability at all?

Design for security

Posted Jan 31, 2019 3:09 UTC (Thu) by nilsmeyer (guest, #122604) [Link]

That seems to be a matter of semantics. Usable in terms of good user experience, not merely functioning.

Design for security

Posted Jan 31, 2019 4:05 UTC (Thu) by Nahor (subscriber, #51583) [Link] (3 responses)

That is my thought too: it's very naive.

Security and usability may not always be mutually exclusive, but sometimes (often) they are. A big part of security is trust, and showing that you can be trusted is never the end goal, so it's necessarily a "wall". For instance to enter a house, the door, and the lock on it, and the alarm system, are all impediments to the resident's goal. Yet you can't have security without them.

Similarly, asking the user for everything that might be needed later during the setup is a scenario for nightmares. It means storing that information somewhere, which means a risk of it getting stolen or abused. And to make it worse, it might never even be needed at all because the user won't use the feature that requires it.

Design for security

Posted Jan 31, 2019 6:52 UTC (Thu) by cpitrat (subscriber, #116459) [Link] (1 responses)

It may be naive but I think the idea is to debunk the common justification from average developers to not even think about security (think IoT devices) and from security experts to not think about usability (think GPG which is aimed at end users).

Design for security

Posted Jan 31, 2019 15:43 UTC (Thu) by naptastic (guest, #60139) [Link]

I sat down with some friends--all of us professional sysadmins--a few years ago to learn GPG / PGP well enough that we could teach our friends and family members, get people using encrypted email, make the Internet a better place, yadda yadda. In 3 hours, we could not do secure email using any combination of FOSS email clients. We gave up, reaching the conclusion that developers working on GPG and email client plugins should seek other career paths.

Now we use Keybase and it just works. We use our real names, actual photographs of ourselves, proofs anywhere we possibly can, and we only "follow" each other after verifying account ownership in person. (We treat "following" the same way as signing someone's public key, and make sure they understand that before following them.) The UI isn't great, and the .deb is >100 MB, but you know what? Good security is inconvenient, and having to download a 100MB .deb every other day is still a million times better UX than GPG.

Design for security

Posted Jan 31, 2019 11:27 UTC (Thu) by NAR (subscriber, #1313) [Link]

> For instance to enter a house, the door, and the lock on it, and the alarm system, are all impediments to the resident's goal. Yet you can't have security without them

Of course, even if a house has a lock, a door and an alarm system, if the owner lets in some stranger (e.g. disguised as a person who reads the electricity/water/gas consumption from the meter) from the street, the house can be compromised. In the end the user (home owner) has to make choices - and for that the user has to have enough information. Guiding/forcing the user to not let anyone in could lead to unpaid bills and loss of electricity/water/gas.

Design for security

Posted Jan 31, 2019 17:59 UTC (Thu) by agateau (subscriber, #57569) [Link] (1 responses)

I don't think it's naïve; I understand it as: a system which aims to be secure had better be usable, otherwise users will find ways to work around the security to make their lives less complicated.

So usability is a difficult, but necessary, goal to achieve, not a de-facto feature of a secure system.

Design for security

Posted Jan 31, 2019 23:04 UTC (Thu) by rgmoore (✭ supporter ✭, #75) [Link]

> a system which aims to be secure had better be usable, otherwise users will find ways to work around the security to make their lives less complicated.

More precisely, though, this has to do with security making things more difficult, not with the underlying usability of the software. If your program is overall very easy to use but security adds complexity, people will work to defeat the security even if it's still reasonably easy to use with the security in place. If your program is overall very difficult to use but the security doesn't make it any harder, people have no reason to defeat it. The ideal approach is to make security have a negative cost: make it harder to do things the insecure way than the secure one (which could include making the insecure way impossible).

Design for security

Posted Jan 31, 2019 18:46 UTC (Thu) by tylerl (guest, #96561) [Link] (1 responses)

First, annoyance is definitely part of usability, and therefore security.

This is not naive at all. This is advanced stuff -- this is as advanced as it gets in the security world. It seems simplistic because the concept is simple, but this is one of the absolute fundamental truths of security that, if you work in this industry, you need to grok. She's pointing out the fundamental disconnect between what we often *think* is security (offering a mechanism by which safety can be maintained) and what security actually is: making the obvious path (perhaps the only path) be the safe path. Not just secure by default; secure under all non-exceptional conditions.

This is difficult. This is exceptionally difficult. This is my day job. I'm working with literally the most capable engineers in the world, possibly under the most supportive and favorable conditions that exist anywhere, and this is still hard to do at scale.

This means security UX needs to be involved in a significant way at the design stage of all nontrivial components. It means changing the way we think about responsibility and authority. It means you never force someone to make a decision for which they are not expected to fully understand the implications. It means security must explicitly support the "I just want to get my job done" use case -- you give your people (safely and seamlessly) the resources to do their job.

This is really, really hard to do. Far more than it seems just talking about it. But it's my experience that it's the only solution and approach that does not conflict with reality.

Design for security

Posted Jan 31, 2019 23:45 UTC (Thu) by Fowl (subscriber, #65667) [Link]

Bravo! Theoretical security is not enough if it never happens in practice.

Design for security

Posted Jan 31, 2019 6:33 UTC (Thu) by joncb (guest, #128491) [Link] (4 responses)

While I agree on the general thrust of this theory (it's a hobby horse of Bruce Schneier's as well), I dislike the attempts to re-frame it as "the opposite is true". Both positions model the world as a simple linear function whereas the real world is never that simple. Unfortunately trying to say that "usability and security are maximally complementary" is easy to ignore. The most maximally secure system is one that is completely unusable for anyone and anything. Any increase in usability is, by definition, reducing its security.

However that doesn't mean that security and usability are entirely mutually exclusive either. For any given level of security, you can tweak to improve its usability without compromising the security. This is entirely on point to the meat of the content that Serena Chen is presenting... it's just the elevator pitch which, IMO, gives the wrong impression.

Design for security

Posted Jan 31, 2019 6:55 UTC (Thu) by cpitrat (subscriber, #116459) [Link] (3 responses)

> The most maximally secure system is one that is completely unusable for anyone and anything.

Well, no, because users will work around it, most likely with totally insecure paths.

Design for security

Posted Jan 31, 2019 10:16 UTC (Thu) by NYKevin (subscriber, #129325) [Link]

This is the key point that almost everyone in these comments is missing. Security is not a property of the software. Security is a property of the whole system, including the human sitting at a keyboard trying to get something done.

Design for security

Posted Jan 31, 2019 13:29 UTC (Thu) by joncb (guest, #128491) [Link] (1 responses)

I think you misunderstand me. I'm not talking about a system that has been "locked down" or "secured". I mean a computer system that is functionally indistinguishable from an inert lump of semi-precious metals. There might be a keyboard or mouse but it will never have an effect. There might be a screen but it will never display anything (information or otherwise). I think you understand this because you state why this is the case most effectively. If a system can be used (i.e. has any value of usability greater than absolute zero), then there is a potential (if unreasonable) system that is more secure, because any security can be undermined by the user "working around it". Yes, this is akin to a thought experiment. No-one is going to build an inert mass of metal and call it "the world's most secure computer".

My point is that it is simple to imagine counter-examples to that initial idea that "Good experience design and good security cannot exist without each other" and I think that undermines the true value of what the talk is saying the rest of the time. I would rather people acknowledge that yes, there is a tension between usability and security, that the interplay between these two values is complex, and that thinking about one without thinking about the other is doing your users a disservice.

Design for security

Posted Jan 31, 2019 14:45 UTC (Thu) by Otus (subscriber, #67685) [Link]

> I'm not talking about a system that has been "locked down" or "secured". I mean a computer system that is functionally indistinguishable from an inert lump of semi-precious metals.

You don't need to go that far for people to bypass the computer and get their work done by e.g. sending texts from their phone. And that's not nearly maximally secure.

Design for security

Posted Jan 31, 2019 8:46 UTC (Thu) by mjthayer (guest, #39183) [Link]

> That comes from security being tacked on at the end of the development process, she said.

I find the interaction between the idea that "security must be part of a product from the start" and the "path of least resistance" idea that the author discusses rather interesting. Developers have their goals as users do, and I suspect that in many cases a highly secure product is not one of the main ones, at least when development starts. I would further expect a product designed by people for whom security was not one of the main initial goals to be more likely to meet the needs of users for whom security was not a main goal, since it would leave more developer capacity free to concentrate on what the user did want to accomplish.

Design for security

Posted Jan 31, 2019 12:00 UTC (Thu) by nilsmeyer (guest, #122604) [Link] (23 responses)

> In a network context, though, users expect that their communications are able to be read only by the expected recipients and man-in-the-middle attacks violate those expectations.

This is a corporate anti-pattern I've seen with a few customers where I had to install their CA to connect to some websites or click away the warnings. This usually happens through some appliance made by a proprietary vendor (who often have horrible track records when it comes to security), which is rarely if ever upgraded. It's completely compromising the security of the whole network, reducing security for everyone. The purpose of this is usually content blocking, which is also highly problematic. Sometimes I had a case where vital resources (documentation, test sites) were blocked; to get content unblocked you had to request it from the vendor(!).

The solution of course is to route most of your traffic through a 4G connection...

Design for security

Posted Jan 31, 2019 13:45 UTC (Thu) by edeloget (subscriber, #88392) [Link] (2 responses)

> This is a corporate anti-pattern I've seen with a few customers where I had to install their CA to connect to some websites or click away the warnings. This usually happens through some appliance made by a proprietary vendor (who often have horrible track records when it comes to security), which is rarely if ever upgraded. It's completely compromising the security of the whole network, reducing security for everyone. The purpose of this is usually content blocking, which is also highly problematic. Sometimes I had a case where vital resources (documentation, test sites) were blocked; to get content unblocked you had to request it from the vendor(!).

Yet in some countries (including France) the company is responsible for the sites you visit while you are on its premises, so that makes the whole thing a lot more complex than a corporate anti-pattern; should they disallow HTTPS, or should they let you do anything you want, risking potential legal fallout?

Design for security

Posted Feb 6, 2019 9:24 UTC (Wed) by nilsmeyer (guest, #122604) [Link]

They should change that law.

Design for security

Posted Feb 8, 2019 13:40 UTC (Fri) by benoar (guest, #52466) [Link]

No such law exists in France: this is an urban legend used to justify corporate control of employees and arbitrary filtering. Also, it helps sell "security appliances" and provides big bucks to "security" companies.

To my knowledge, there is no clear consensus on employer responsibility for personal access, but I am not aware of *any* sentence against an employer for the wrongdoing of one of his/her employees regarding unfiltered Internet access. And anyway, if the employer is recognized as an operator under Article L34-1 of the "Code des postes et des communications électroniques", then they should provide the logs, which they already collect anyway, to exonerate themselves.

Design for security

Posted Jan 31, 2019 18:33 UTC (Thu) by nim-nim (subscriber, #34454) [Link] (19 responses)

Well you do need to inspect traffic in a corporate context. Some common examples:

1. idiots that find it convenient to replicate their whole trove of internal documents on insecure remote country websites, just so they can "work" from the nearest pub. See also all the various fooleaks, Hillary Clinton mail, and so on

2. idiots bored at work that think they deserve to listen to their favorite preacher all day round, pull 4k videos from their own NAS, etc. It's not that the audio/video feeds present a danger to the company by themselves (except for the idiot productivity) but audio/video is so bulky just a few people misbehaving is sufficient to starve legitimate work traffic, as soon as you deal with worksites that host more than a handful of people.

And you can say "it's all a managerial problem". Do *you* want managers that spend their day looking over your shoulder at your screen just to check you're using company resources correctly? (No one has managed to eradicate human idiocy so far, and it's compounded on jobs where computer literacy is low)

And yes deploying corporate certificates just to perform those checks is massive overreach, blame Google and the other hipsters that made it an all or nothing option just so they could fight their home ISP for the right to stream good quality netflix.

Design for security

Posted Jan 31, 2019 22:46 UTC (Thu) by sml (guest, #75391) [Link] (16 responses)

Network inspection is the wrong approach. You'll never get 100% visibility and all those crappy middleboxes (Bluecoat et al.), corporate certificates and associated complexity are a security disaster zone.

I find endpoint enforcement - locked down workstations and centralised logging - much more effective.

Design for security

Posted Feb 1, 2019 8:32 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (15 responses)

It's hard to design locked down workstations that do not do more productivity damage than the middleboxes.

And even when you lock down workstations, are you going to remove browsers too? A lot of jobs need a browser. And the cloudy data-slurping websites have been pretty good at making all their dangerous stuff depend on a browser only.

Lastly, enforcement through logs does you little good when network hogging by the local intern made one of the people/systems that bring in money for the corporation miss a deadline. The money is already gone and wasted by the time you home in on the culprit.

Design for security

Posted Feb 1, 2019 8:39 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (14 responses)

> It's hard to design locked down workstations that do not do more productivity damage than the middleboxes.
You don't need to lock down boxes. Just install mandatory firewall rules, software inspectors and anti-intrusion software. This can be almost completely transparent to end-users.

> And even when you lock down workstations, are you going to remove browsers too?
Why would you want to remove browsers?

> Lastly, enforcement through logs does you little good when *network hogging*
Is it seriously even an issue these days?

Design for security

Posted Feb 1, 2019 9:42 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (6 responses)

>> Lastly, enforcement through logs does you little good when *network hogging*
> Is it seriously even an issue these days?

Yes.

Home access nowadays can be 1 FTTH per family, so basically one high-bandwidth link for ~ 4 people.

There's no way any sane corp will provision the same per-user bandwidth ratio on any work site with 50 people or more. The economic model works for home access because its main point is to slurp high-volume entertainment, so home users are ready to pay tens of dollars per month per user just to get access to their videos and games. The per-user budget for network @work, where users are mostly supposed to exchange mails and access some webified corp apps, is not the same.

Add to that that a byte of corporate bandwidth is more expensive, because corps want some guaranteed availability (it's expensive to pay people to wait for the network to come back up), and you have all the security systems trying to detect intrusions (cost scales with the amount of traffic to check), while home bandwidth is dirt cheap (zero security processing, best-effort availability levels, no link redundancy).

So basically, network @home and network @work are not the same thing, it's different technical compromises, and all the high-volume low-security low-availability use cases people are used @home do not translate well @work.

Design for security

Posted Feb 1, 2019 9:55 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (4 responses)

> So basically, network @home and network @work are not the same thing, it's different technical compromises, and all the high-volume low-security low-availability use cases people are used @home do not translate well @work.
No. I'm speaking about one endpoint device (an intern's laptop?) displacing all other users. As far as I'm aware all sane routers will not allow this.

Design for security

Posted Feb 1, 2019 12:06 UTC (Fri) by nim-nim (subscriber, #34454) [Link]

That depends on the complexity of your network topology. It's easy to manage on border equipment, not so much when you get closer to the backbone. And, unfortunately, humans are social beings, so you can be sure that whoever invents a new way to make the network crawl to a stop will have bragged about it to friends, who will participate in the DoSing.

Design for security

Posted Feb 13, 2019 16:41 UTC (Wed) by Wol (subscriber, #4433) [Link] (2 responses)

> No. I'm speaking about one endpoint device (an intern's laptop?) displacing all other users. As far as I'm aware all sane routers will not allow this.

And where do I buy said sane router?

My home internet regularly collapses under load. The cause clearly seems to be down to my (reasonably modern) router. And I strongly suspect that actually the cause is the link from there back to the ISP router.

There is a VERY long-standing bug in equipment called "buffer bloat" where a single end-point device *can* displace all other users, and there's probably a hell of a lot of equipment still out there that suffers from this. New versions of the linux kernel work around this, but how many devices are still sold brand-new with old kernels, or haven't been upgraded in years?

Cheers,
Wol

Design for security

Posted Feb 21, 2019 2:50 UTC (Thu) by fest3er (guest, #60379) [Link] (1 responses)

No need to buy anything. Get a late-model PC (dual-core, 1.6GHz CPU or faster, 2GiB RAM, SATA HD, and 2-4 NICs depending on how many zones you want), and install Smoothwall Express on it. I spent a good amount of time getting its QoS (Traffic Control) to work well. Since I released v3.1 in 2014, it has done a very good job preventing any traffic stream from hogging bandwidth. No matter what I do DLing or ULing, all streams share the bandwidth fairly (almost equally). I can have multiple GB downloads and uploads going with none blocking any others. Interactive response is still very good. Identified isochronous traffic is very smooth. DNS and NTP traffic are very timely. Low priority bulk traffic (such as P2P) can use any B/W left after all higher priority packets have been sent. It isn't perfect. Or complete. But it does work well. And is still designed for non-experts; they need to know some technical stuff, but most of the jargon and arcana are hidden from them.

Linux's Traffic Control is poorly documented and leads to impossible expectations. I designed a nice JS-based configuration tool that I eventually abandoned because LTC just cannot do what the documentation says. However, once I really understood what it can do and what it cannot do, I was able to 'fix' traffic control so that, for the most part, traffic flows smoothly. LTC also cannot easily control multiple interfaces; for example, a gigE NIC might be able to 'block out' a 100Mb/s NIC when they both 'send' to a 10Mb/s internet link.

I haven't addressed buffer bloat. 'ls -lstr /' through an SSH connection results in ^C being unresponsive for 5-10 seconds. But dealing with that much output doesn't happen too often.

In short, there *are* Linux-based routers that do a nice job of enforcing bandwidth sharing. And some of them are free.

Design for security

Posted Feb 22, 2019 15:22 UTC (Fri) by nix (subscriber, #2304) [Link]

> I haven't addressed buffer bloat.
These days, for wired Ethernet at least, just switching to fq_codel or CAKE on your bottleneck link with the default parameters (or default plus telling it what your ADSL encapsulation etc is) should be enough to fix that, as long as your NIC driver supports BQL, which most now do.

Design for security

Posted Feb 8, 2019 7:33 UTC (Fri) by anton (subscriber, #25547) [Link]

I work at a university with about 2000 staff and about 20000 students, and we don't have any of the restrictions discussed here, and network hogging is not a problem I am aware of (not even in those days when the students did not all have internet at home or on their phones); I think there was one episode a few years ago where a virus or something was rampant in the university network, and apparently overloaded it, but the typical reason for the rare occurrences of network outage is when some piece of hardware fails (e.g. a router dies).

So my university invests in enough bandwidth to allow pretty free internet access, but does not invest in redundant routers etc.; what's different for corporate environments?

Design for security

Posted Feb 1, 2019 9:46 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (6 responses)

>> It's hard to design locked down workstations that do not do more productivity damage than the middleboxes.
>You don't need to lock down boxes. Just install mandatory firewall rules, software inspectors and anti-intrusion software. This can be almost completely transparent to end-users.

And that's almost exactly the processing middleboxes do, except you only need to manage a handful of centralized middleboxes, instead of getting the correct conf on (tens/hundreds) of thousands of workstations.

And if you complain middleboxes are badly configured, how exactly do you expect the same processing to be configured correctly when multiplied by thousands of endpoints or more?

It's not that it is technically impossible, but a corp that skimps on correct middlebox maintenance is unlikely to be generous on endpoint maintenance.

Design for security

Posted Feb 1, 2019 20:10 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (5 responses)

> And that's almost exactly the processing middleboxes do, except you only need to manage a handful of centralized middleboxes, instead of getting the correct conf on (tens/hundreds) of thousands of workstations.
No, middleboxes only see network traffic. They can try to decrypt it, overcoming more and more counter-measures, but they are fundamentally limited.

If your users have laptops then the middleboxes won't see anything that happens outside of the company, when a laptop is used in a Starbucks (for example).

Management software can ensure that the firewalls are configured correctly and it has quite a bit more access to browsers (via group policies) and other software. Also if you have hundreds of endpoints, you probably should manage the client devices anyway.

Design for security

Posted Feb 1, 2019 21:12 UTC (Fri) by nim-nim (subscriber, #34454) [Link] (3 responses)

That's trivial to handle, just deploy a firewall that forbids everything except the corp VPN when outside the corp network

Design for security

Posted Feb 1, 2019 21:15 UTC (Fri) by Cyberax (✭ supporter ✭, #52523) [Link] (2 responses)

> That's trivial to handle, just deploy a firewall that forbids everything except the corp VPN when outside the corp network
Try it. Go on, try it. I dare you.

Hint: this won't work. Even Starbucks requires you to click through the captive portal page to get access to WiFi. Ditto for GoGoInflight and most other public access points.

Design for security

Posted Feb 2, 2019 12:10 UTC (Sat) by nim-nim (subscriber, #34454) [Link] (1 responses)

That actually works (not my domain, I haven't looked at it, but I think the desktop firewalls let the first few requests pass without filtering to let the portals show up). Or they just let Google searches be redirected, as that's the internet for a lot of users.

Design for security

Posted Feb 2, 2019 20:51 UTC (Sat) by Cyberax (✭ supporter ✭, #52523) [Link]

Which ones? The firewall on Mac OS X will not disallow outbound connections. It will simply block incoming connections (possibly with exceptions for signed binaries).

You can install additional firewall software but at this point you can just as well install a full-blown management client instead.

Design for security

Posted Feb 1, 2019 21:51 UTC (Fri) by rodgerd (guest, #58896) [Link]

They also break more security measures than they solve problems; cretins with middleboxes masquerading as security experts have done more to undermine security than most black hats could dream of.

Design for security

Posted Feb 6, 2019 9:37 UTC (Wed) by nilsmeyer (guest, #122604) [Link]

> 1. idiots that find it convenient to replicate their whole trove of internal documents on insecure remote country websites, just so they can "work" from the nearest pub. See also all the various fooleaks, Hillary Clinton mail, and so on

Did you ever ask yourself, beyond them being idiots, why people do that?

> 2. idiots bored at work that think they deserve to listen to their favorite preacher all day round, pull 4k videos from their own NAS, etc. It's not that the audio/video feeds present a danger to the company by themselves (except for the idiot productivity) but audio/video is so bulky just a few people misbehaving is sufficient to starve legitimate work traffic, as soon as you deal with worksites that host more than a handful of people.

It's possible to manage network bandwidth without completely breaking encryption.

Of course I believe that stronger, more restrictive security measures make sense where "idiots" are involved, or even just the people who know very little about computers. What annoys me is that the same applies to people in IT (developers etc.) as well, except of course for the people who administer the network, they seem to always have a workaround. I usually refuse to work at places like that since it's hard to be productive and usually indicative of a certain workplace culture.

Design for security

Posted Feb 9, 2019 20:29 UTC (Sat) by nix (subscriber, #2304) [Link]

> audio/video feeds present a danger to the company by themselves (except for the idiot productivity) but audio/video is so bulky just a few people misbehaving is sufficient to starve legitimate work traffic, as soon as you deal with worksites that host more than a handful of people.
Video, maybe (though frankly with the amount of documentation showing up as YouTube videos these days, you more or less have to provision enough capacity for that) -- but audio? Seriously? You work at places where bandwidth is so anaemic that compressed audio streams don't fit? Do they also not provide a phone network because the phones have too high bandwidth requirements?

Audio is almost the definition of a low-bandwidth application these days. Only terminal service is lower bandwidth, and audio streaming (as opposed to phone calls) is heavily buffered, so doesn't have the low-latency requirements of that.

Design for security

Posted Feb 8, 2019 11:13 UTC (Fri) by otpyrc (guest, #124901) [Link] (3 responses)

Umm, the youtube account linked to was just cancelled, and thus the video.
Is there an alternative link?

Design for security

Posted Feb 8, 2019 14:20 UTC (Fri) by jake (editor, #205) [Link] (2 responses)

> Umm, the youtube account linked to was just cancelled, and thus the video.

Hmm, I just clicked the link and it worked ... but here is a different link in case maybe there is a country-specific block or some such: http://mirror.linux.org.au/pub/linux.conf.au/2019/c3/Tues...

jake

Design for security

Posted Feb 8, 2019 15:17 UTC (Fri) by gevaerts (subscriber, #21521) [Link] (1 responses)

Earlier it said that the account had been suspended for ToS violations or something like that (I can't remember the exact wording), but it's back for me now. It also worked yesterday. It's not a geographic block.

Design for security

Posted Feb 11, 2019 9:48 UTC (Mon) by otpyrc (guest, #124901) [Link]

Back for me too, so probably just a temporary misguided content blocking:)

Design for security

Posted Feb 21, 2019 3:41 UTC (Thu) by fest3er (guest, #60379) [Link] (4 responses)

The real problem can be illustrated by a quote from the movie, Cool Hand Luke: "What we've got here is failure to communicate." After 40 years in the field, I *still* find evidence of far too many software professionals who either can't or won't clearly communicate their thoughts to other people.

It gets down to the most fundamental characteristic of inter-human communication: the fact that *all* human languages are programming languages because human language is the effort of one person to program others' neural nets to think her thoughts. Some day I'll challenge all software people to master English (or at least master their native languages). The internet is rife with examples of ambiguous human programs. Shoot, a reading of NHRA's carefully-prepared rule book will reveal lots of ambiguities; some rule is intended to prevent injury A, so the racer must do Q but the rule, as written, allows lesser X, Y, and Z actions as well. (As much as I try to write clear English, I often still fail to communicate my thoughts to others; I'm sure this missive will be no exception.)

The BeyondCorp security model was mentioned, in that it opened one's private internetwork to the universe. Something that isn't quite as bad is TLS because it bypasses the private internetwork's perimeter firewall. Once the encrypted link is established, the firewall cannot block malware or theft. IMO, the time of SSL/TLS is long past and it should be largely retired; the recent push to shove TLS everywhere is misguided at best. The proper solution is host-to-gateway and gateway-to-gateway opportunistic encryption. Prevent deliberate, casual or incidental observation of private/confidential data, but allow perimeter firewalls to block trojans, viruses, ransomware, phishing, data theft, and other 'crimes'. (People who are terrified that someone might see what they're doing on an internetwork probably shouldn't be using the net in the first place. IMO.) VPNs must be allowed, but all network traffic other than the VPN should be forced through the business/SOHO/personal firewall so that it can be filtered and inspected for malware. Most people don't use VPNs because most software engineers and programmers make it too hard to use.

Basically, I generally agree with Miss Chen. Usability and security must go hand-in-hand. Engineers, programmers and coders need to learn to think like ordinary people; they need to crawl out of their virtual universes and learn to communicate with non-technical people. And non-technical people need to learn some of the tech terminology so they can more readily learn how to best use the tech they own. Many smart phones have multi-GiB RAM because people don't know or don't care that they have 500 apps open. When I finish using an app on my phone, I close it; I think the most apps I ever had open at one time was 8. When I need the net, I turn WiFi or cell data on. When done, I turn them off.

Design for security

Posted Feb 22, 2019 15:16 UTC (Fri) by nix (subscriber, #2304) [Link]

> The proper solution is host-to-gateway and gateway-to-gateway opportunistic encryption. Prevent deliberate, casual or incidental observation of private/confidential data, but allow perimeter firewalls to block trojans, viruses, ransomware, phishing, data theft, and other 'crimes'. (People who are terrified that someone might see what they're doing on an internetwork probably shouldn't be using the net in the first place. IMO.)
And what do you do when you work, as I used to, for a megacorp with a centralized, out-of-touch, uncontactable security department that listens mostly to expensive consultants and flatly refuses to allow things that entire national divisions need to do their jobs? Just sit there all day and do no work?

People bypass security when it impedes them. Otherwise, they mostly ignore it. You can't get rid of that by saying "but nothing should be encrypted except via gateways operated by the Right People": if the Right People are idiots, this will not help, and it makes the gateways into a huge target that attackers will naturally penetrate, and now they have everything so you are paying the cost of security for nothing.

> Many smart phones have multi-GiB RAM because people don't know or don't care that they have 500 apps open. When I finish using an app on my phone, I close it; I think the most apps I ever had open at one time was 8.
This seems utterly bizarre to me. I just leave things open and let the phone transparently close things in the background when memory is short. Usually the only visible effect of this is having to go back in via the home screen, and a slight delay on switching back as the app reloads its state. Why do you care if the app is being persisted in normal use, any more than you care which pages are in the page cache in normal use? Why on earth would you try to kick things out, unless you like making your own life harder?

Design for security

Posted Feb 22, 2019 19:36 UTC (Fri) by mpr22 (subscriber, #60784) [Link]

One does not have to believe oneself to be a Designated Undesirable to prefer that one's internet traffic be end-to-end encrypted rather than per-hop encrypted.

My ISP's gateway, my bank's ISP's gateway, and the six other gateways between my ISP's gateway and my bank's ISP's gateway have no business being able to know my access credentials for my bank's online services.

Design for security

Posted Feb 23, 2019 17:34 UTC (Sat) by jezuch (subscriber, #52988) [Link] (1 responses)

In addition to what others said...

How do you guarantee that the gateways do not cheat and are not sending data unencrypted? How do you verify this? It's like today's email: how do I know that my mail provider is not broadcasting my poorly written and utterly embarrassing (but really sweet to the addressee) love letters to other mail exchanges as plain text? (An obvious response to an obvious retort: mail encryption is rubbish and my girlfriend refuses to use it.)

BTW, you may be confused that when you "close" an app it is in fact closed. More likely it is just removed from the list of recent apps, unless you're going to the settings > applications > particular app > stop process > yes, I really want to kill it and I know it may break the app. It gets tiresome really quick.

Design for security

Posted Feb 23, 2019 23:59 UTC (Sat) by excors (subscriber, #95769) [Link]

> BTW, you may be confused that when you "close" an app it is in fact closed. More likely it is just removed from the list of recent apps, unless you're going to the settings > applications > particular app > stop process > yes, I really want to kill it and I know it may break the app. It gets tiresome really quick.

The "Recents" screen shows activities and tasks (which I think correspond to certain Java objects), not apps or processes. When the user closes something from that list, I think it does destroy the activity if it's still alive, per https://developer.android.com/guide/components/activities..., so it's doing more than just hiding it from the list. But it still probably won't terminate the process if that was its last activity - it just increases the likelihood of it being chosen by the Low Memory Killer when someone else wants the memory.

Conversely, an activity can remain in the Recents list after its process has been terminated by the LMK. A new process can be started and told to resume that activity. So there's little correlation between process lifetime and activity visibility.

