The conference portion of linux.conf.au opened on Wednesday morning with a
keynote by Bruce Schneier. LCA is a sold-out event; in fact, there are
rather more attendees than can fit into the hall where the keynotes are
held. Thus the room was packed, with the second-class citizens - those
with yellow badges who put off registration until late - watching a remote
feed in a separate room. Those folks may have had a more distant
experience, but it was almost certainly a cooler one too.
Bruce's key point is that we need to rethink how we try to achieve
security, though it took a while to explain just why that is. Security, he
says, has two components:
- The feeling of security: that which helps us to sleep well
- The reality of security: whether we are, in fact, secure
These two aspects of the problem are entirely separate from each other, but
they both have to be addressed if our security goals are to be achieved.
Security is always a set of tradeoffs which we are all making every day.
As an example, consider that, in all likelihood, nobody in the audience was
wearing a bulletproof vest. It's not that the vests do not work; instead,
nobody feels that the cost of wearing a bulletproof vest is justified
given the risk. On a bigger scale, the answer to the question of how to
prevent more 9/11-like attacks is clear: ban all aircraft. In fact, that
was done in the US for a few days after those attacks, but, in the longer
term, that is not a tradeoff that people are willing to make.
So the fundamental question for any security tradeoff is: is it worth it?
As it happens, we are quite bad at making that decision. We tend to
respond to feelings rather than reality. Spectacular risks drive us more
than everyday risks. We fear the strange over the familiar and the
personified (think Osama bin Laden) over the anonymous. Involuntary risks
are seen as being bigger than those entered into voluntarily. In the end,
evolution has equipped us quite well for making tradeoffs in the small
communities we lived in many, many thousands of years ago. We are less
well equipped for the world we live in now.
Since we respond to feelings more than reality, there are strong economic
incentives for solutions which address feelings. The result is snake-oil
products and security theater.
Sometimes people notice that they are being
sold bad security (later Bruce mentioned a US survey which indicated that
the Transportation Security Administration is now less trusted than the
taxation agency), but, all too often, they don't. They have a poor understanding of
the risks and the costs involved, and there are plenty of people with
strong interests in confusing the issue.
The security market is a lemons
market, one where buyers and sellers have asymmetric access to
information. Economic research shows that, in such markets, the bad
products tend to drive the good ones out of the market. There is no easy
way to evaluate the work which has gone into the creation of a truly secure
product, so buyers respond to other, less reliable signals: price, sales
claims, or the pronouncements of the Gartner Group. These signals are sloppy and
prone to manipulation. When security is handed off to outside agencies -
governments, say - the problem gets even worse.
In the business world, information eventually brings some order to a lemons
market. As businesses learn about what really works, access to information
evens out - though there is always a problem with very rare, high-cost
events where information is not available. In the individual world,
though, it is much harder, because fear plays a much bigger role.
The fact of the matter is that fear is wired deeply into how we work - it
is a result of a very old part of our brain. As humans, we have the
ability to override our fears when reason indicates that we should, but it
is a hard thing to do. The default state is that fear rules. So this is
Bruce's core point: the feelings matter. All that security theater out
there is not entirely stupid; any security solution must address the fears
that people feel. We must address both aspects of security.
The problem is where the feeling of security and the reality of security
diverge from each other. If only feelings are addressed, security has not
really been achieved. If only the reality of security is addressed, people
feel insecure and may make bad decisions. Either way, the full problem has
not been solved. Addressing this all-too-common problem is hard, though;
Bruce knows of no better way than the spreading of good information.
Your editor's perspective follows - nothing from this point on was said
during the talk. It seems that he has a point here. Consider some common
situations in the free software world:
- A large number of security updates from a distributor may be an
indication that the reality of security is being achieved: problems
are being found and fixed before they are exploited. But all those
updates can undermine the feeling of security. The seemingly endless
stream of Wireshark updates is a case in point; most of these problems
are found through proactive auditing by the developers and have never
been exploited by the Bad Guys. But the feeling of insecurity
associated with Wireshark can be strong. This feeling can push users
toward other software which, while not having that long history of
security updates, is actually less secure.
- A system running SELinux may, in fact, be highly secure. But many
administrators still turn it off. SELinux does not make them
feel secure because they do not understand it, and they fear
(rightly or wrongly) that it will interfere with the proper operation
of the system. But, by turning it off, they undoubtedly expose
themselves to a number of attacks which SELinux would block.
We should hear Bruce's point and think a bit more about how we can ensure
that free software creates the feeling of security - but a feeling which is
backed up by real security. It's a hard problem, one which lacks technical
solutions. But we'll find ourselves less secure than we would otherwise be
if we do not address that side of the issue.