The readers of LWN do not need to be reminded that the software industry
as a whole has a big problem with computer security. One proposal aims to
redress this state of affairs: the concept of legislation designed to
create financial liability for the vendors of buggy software. This idea is
applauded by many, such as Bruce Schneier, author of the famous book
Applied Cryptography. But despite the support of notable authors, software
liability laws are themselves a dangerous liability to the software
industry.
It is easy to sympathize with concerns about the potential impact of
software liability laws on developers of free and open-source
software. Many of these developers work on a volunteer basis, and
holding them financially liable for the code they write and release freely
could have a chilling effect on the development of free software. Of
course, liability laws might be written to exclude programs given away for
free, or they might concern themselves with vendors and leave individual
developers out of the picture.
Unfortunately, the dangers of software liability laws don't subside when
individual developers are granted immunity. One of our community's most
prominent projects, the Linux kernel, was never intended to grow beyond
the 386 but is now found running everything from stock markets to
supercomputers and military gear. This ubiquity brings demand for services,
support, and a single throat to choke, which is the bread and butter of Red
Hat and other businesses. When a vendor is selling free software, and we
make the vendor financially liable for bugs in the code it is selling but
did not write, we risk significant disruption to our cherished development
models.
Further complications arise when we imagine possible liability
lawsuits. In the event of a security breach, directing blame and assigning
liability can be problematic. Picture a system that runs Oracle on top of
Red Hat Enterprise Linux, and imagine that the Oracle database is breached
due to a bug in glibc. Does the buck stop with Oracle, Red Hat, or both?
What if Novell provided the operating system, but the glibc developer who
introduced the bug responsible for the breach is paid by Red Hat? An
attorney might decide to sue all three parties, especially if it is unclear
which component was vulnerable.
Consider also that virtually all software developers attach disclaimers
of warranty to their products. These disclaimers are nearly ubiquitous in
free software licenses, and are even found attached to some public domain
declarations. For software liability laws to have teeth, these disclaimers
must be nullified. But when dealing with software designed to address a
broad range of users, one must carefully select use cases in which default
warranties apply. There is a big difference between a database full of blog
postings, a database full of credit card numbers and a database full of top
secret government intelligence.
We must also recognize the differences in the types of failures under
which warranty is considered appropriate. Ford Police Interceptors had a
reputation for exploding when they
were rear-ended. Ford also suffered a blow
to its reputation, along with tire manufacturer Firestone, when tires
on Ford Explorer vehicles were found to spontaneously fail. In both of
these cases, the loss of human life was not the result of a willful human
actor but was caused instead by spontaneous failure under expected
operating conditions. By sharp contrast, software security breaches
generally don't endanger life or limb, and successful exploits are not
accidents but are rather the result of willful attack.
The difference between accidental and intentional failure is an
important one. Because the laws of physics and the nature of accidents do
not change, we can expect auto manufacturers to build reliable gas tanks
and tires. But in computer security, attackers discover new techniques each
and every year. The equation for software is always on the move.
At this point, advocates of software liability laws still hoping to sell
their wares need to choose their words carefully, and so they plead for a
standard based on best practice. But who defines best practice in an
industry that is changing so fast? The pioneers of the Internet didn't
predict many of the problems we're facing today, yet few would call them
negligent. Real "best practice" is a moving target that is carried by the
tides of the times, and in the world of technology, the waves are a mile
tall and move thousands of miles per hour.
These and other questions must be addressed if software liability laws
are to succeed. Unfortunately, legislators are notoriously bad at
understanding and regulating technology. Observers of SCO v. IBM
surely agree that court cases are long, complicated and costly. Those with
faith in any branch of government to appropriately legislate technology
should reexamine the Digital
Millennium Copyright Act, a law that continues to have a chilling
effect on free software development, and Universal
v. Reimerdes, the case in which 2600 Magazine's publication of DeCSS
was suppressed.
Security is, of course, a problem, and the case can be made that
someone must be held liable. We prosecute the criminals who breach
computer security, but if we're going to put the burden on anyone else, we
should choose the companies that leak personal information to these
criminals when their security fails. In some ways, these companies might be
held liable today, but we would do well to consider tightening down the
screws. By increasing the burden on these data aggregators, the demand for
secure software will increase. This gives the best solutions that engineers
produce a market advantage, and financially rewards security-conscious
vendors. This approach to liability also addresses the need for best
practices and defense in depth when implementing and maintaining networks
and databases. By concentrating liability in this way, we eliminate the
complications that result from playing the blame game with a group of
software vendors. Whose security was breached is a much easier question to
litigate than how it was done and how it might have been stopped.
As Schneier has pointed out,
companies tend to convert variable cost liabilities into fixed cost
insurance plans. Insurers have a financial incentive to excel at evaluating
risk, and it isn't inconceivable that they might view the use of open code
that their experts can review as a reason to offer lower premiums. Furthermore,
putting liability on data aggregators allows those organizations to make
choices about how much insurance they are willing to buy. A technologically
sound small business might adopt best practices and spend less on
insurance, or it might decide to forgo insurance entirely. But if
insurance were expensive and the danger of a security breach was still
unacceptable, they might reconsider the practice of permanently storing
large amounts of customer data, something that their customers tend to
consider an invasion of their privacy anyway.
Software code is quite complex, but we can write all kinds of new and
useful software because it is intangible and cheap to produce. Placing
liability on software vendors threatens to dramatically change this
landscape. We can expect to see reduced participation, hampered innovation,
and skyrocketing costs. We should carefully consider whether perfect
security is a goal or an expectation, and educate users on the need for
compartmentalization, defense in depth, patching, and best practices in
their networks. If we approach the issue in this way, we can improve
security overall with minimal risk to the efficiency of the software
industry.