By Jake Edge
July 23, 2008
At its core, the internet is a set of agreements: not just on protocols,
but also on practices among carriers. Much of the internet's explosive
growth, in both participants and services, can be attributed to these
agreements. When a new technology like deep packet inspection (DPI) comes
along to threaten these long-standing practices, it should be cause for
concern.
Internet packets are constructed much like postal mail. There is an
envelope with addressing information contained in the packet header and a
message which is contained in the
data payload portion of the packet. Internet carriers are supposed to make
their best effort to deliver a packet based on the information in its
header. DPI violates that compact by looking inside the data portion while
the packet is en route to its destination and making forwarding or
filtering decisions based on what it finds there.
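To make the distinction concrete, here is a minimal sketch of that split between envelope and message for an IPv4 packet. The packet bytes are a made-up example, not real traffic, and this is nothing like a real forwarding engine; the point is simply that delivery needs only the header fields, never the payload:

```python
# Illustrative only: split a raw IPv4 packet into its "envelope"
# (the header, per the RFC 791 layout) and its "message" (the payload).

def split_packet(packet: bytes):
    """Return (header_fields, payload) for a raw IPv4 packet."""
    version = packet[0] >> 4                 # should be 4
    ihl = (packet[0] & 0x0F) * 4             # header length in bytes
    ttl, proto = packet[8], packet[9]
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    header = {"version": version, "ttl": ttl, "proto": proto,
              "src": src, "dst": dst}
    return header, packet[ihl:]              # everything past the header

# A toy 20-byte header (no options) plus a payload, for demonstration.
hdr = bytes([0x45, 0, 0, 0,   0, 0, 0, 0,
             64, 6, 0, 0,     10, 0, 0, 1,   192, 0, 2, 7])
header, payload = split_packet(hdr + b"hello")
print(header["dst"], payload)   # forwarding needs only header["dst"]
```

A carrier honoring the traditional compact looks no further than `header`; DPI is precisely the act of also reading `payload`.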
There are some potentially valid uses for DPI—network performance
monitoring and law enforcement surveillance, perhaps even with a warrant,
are two—but the potential for abuse is large. Because network
processing has gotten to the point where devices can do more than just
observe and record, packets are being modified and generated on the fly in
a technique known as deep packet processing (DPP).
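What a DPP middlebox does can be sketched in a few lines. This toy rewriter (the HTML and the ad markup are entirely hypothetical) shows the essential move: not just inspecting the data portion, but altering it in flight without either endpoint's knowledge:

```python
# A toy "deep packet processing" middlebox, purely illustrative:
# it inspects the payload and, when it looks like a web page,
# rewrites it on the fly to inject advertising.

AD = b'<div class="injected-ad">buy stuff</div>'   # hypothetical ad markup

def dpp_rewrite(payload: bytes) -> bytes:
    """Modify a packet's data portion in flight, invisibly to both ends."""
    if b"</body>" in payload:                      # crude "is this HTML?" test
        payload = payload.replace(b"</body>", AD + b"</body>")
    return payload

original = b"<html><body><p>hello</p></body></html>"
modified = dpp_rewrite(original)
print(modified != original)   # True: neither endpoint asked for this change
```

Non-matching traffic passes through untouched, which is exactly what makes the practice hard for ordinary users to detect.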
Various examples of DPI and DPP—generally lumped together as
DPI—have been in the news over the last year. Comcast used DPI
to try to throttle
BitTorrent traffic, while Phorm and NebuAd have used
it to rewrite
web pages to deliver
advertising to unsuspecting users. The DPI problem has gotten enough
attention that even
various governments have started showing interest.
David Reed, who designed the User Datagram Protocol (UDP), the
connectionless analog to the Transmission Control Protocol (TCP), recently
testified to the US Congress about DPI. In his testimony
[PDF] he outlines numerous technical issues, but the biggest is that DPI
may break the fundamental model of internet communication:
This is the real risk: [a] service or technology unnecessary to the correct
functioning of
the Internet is introduced at a place where it cannot function correctly
because it does [not]
know the endpoints' intent, yet it operates invisibly and violates rules of
behavior that
the end-users and end-point businesses depend [on] to work in a specific way.
We have seen this behavior from internet companies in other guises
as well. Verisign and various ISPs have tried redirecting failed DNS
queries to pages they control (and generally fill with ads). Once again,
that breaks many applications; it functions more or less correctly for web
browsing, but other applications depend on receiving proper errors when
querying for nonexistent domains.
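A toy model makes the breakage concrete. The domain names and addresses below are made up, and real resolution goes through the DNS rather than a dictionary, but the logic is the same as in, say, a mail server that rejects messages from domains that do not resolve:

```python
# Illustrative only: why rewriting NXDOMAIN responses breaks applications
# that depend on receiving proper errors for nonexistent domains.

REAL_DNS = {"example.com": "192.0.2.10"}     # hypothetical zone data

def honest_resolve(name):
    """Return the address, or None for a nonexistent domain (NXDOMAIN)."""
    return REAL_DNS.get(name)

def redirecting_resolve(name):
    """An ISP resolver that rewrites failures into an ad-server address."""
    return REAL_DNS.get(name, "203.0.113.99")    # ad page instead of an error

def domain_exists(name, resolve):
    """The check an application (e.g. a spam filter) relies on."""
    return resolve(name) is not None

print(domain_exists("no-such-host.invalid", honest_resolve))       # False
print(domain_exists("no-such-host.invalid", redirecting_resolve))  # True
```

Under the redirecting resolver, every domain appears to exist, so the application's error-handling path can never fire; web browsers merely show an ad page, but anything that treats resolution failure as a signal silently misbehaves.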
Because many
ISPs hold a near-monopoly on high-speed access in a particular geographical
area, they can hold their customers hostage with little concern that
competition will come along to force a change. It is this abuse of their
monopoly position that tends to interest regulators. In addition, most of
their customers are unlikely to notice these "enhancements", which makes
them easier to get away with, at least until more technically savvy users
recognize and raise the issue.
Using encrypted communications (HTTPS for web browsing, for example) is one
defense against DPI. There is some cost associated with encryption, of
course, but it
is one that is likely to be borne if internet carriers persist in these
shenanigans. Another option might be Obfuscated TCP, which is a
technique to do backwards-compatible encryption at the packet level.
Because it doesn't require all hosts to support it at once—it is
negotiated between the endpoints when the connection is
established—it could incrementally be added into the arsenal of tools
to thwart DPI.
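Why encryption thwarts DPI can be shown with a toy example. The XOR stream cipher below is only a stand-in for real TLS and is not remotely secure; a real deployment would use HTTPS or Obfuscated TCP. But it demonstrates the key point: once the data portion is ciphertext under a key negotiated by the endpoints, a middlebox's signature matcher has nothing to match:

```python
import os

# Illustrative only: a DPI box matches signatures in the data portion;
# encrypting the payload (here with a toy XOR cipher standing in for TLS)
# leaves it nothing recognizable to match on.

def xor_stream(key: bytes, data: bytes) -> bytes:
    """NOT cryptographically secure; a stand-in for a real cipher."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def dpi_match(payload: bytes, signature: bytes) -> bool:
    """What a DPI box does: scan the data portion for a known pattern."""
    return signature in payload

request = b"GET /torrent/announce?info_hash=... HTTP/1.1\r\n"
key = os.urandom(16)     # in TLS, a secret negotiated between the endpoints

print(dpi_match(request, b"announce"))   # True: cleartext gets throttled
ciphertext = xor_stream(key, request)    # same bytes on the wire, encrypted
print(dpi_match(ciphertext, b"announce"))  # False (with overwhelming odds)
```

The carrier can still see the header, and so can still deliver the packet; only its ability to discriminate on content is gone, which is exactly the traditional division of labor.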
DPI uses techniques that have generally been
attributed to the "cracking" community. Things like
man-in-the-middle attacks and IP address spoofing are difficult-to-solve
security problems for many applications. When the "legitimate" middlemen
start manipulating packets using these means for their own benefit, they
come very
close to—or cross—the line into illegality.
This is a battle about control; our freedoms to communicate and innovate on
the internet are at stake. A phone system that randomly inserted
advertising into calls, or a postal system that returned letters as
undeliverable because it disliked their contents, would not be considered a
functioning system. The internet should be held to the same standard.
Comments (8 posted)
Fortify Software, a vendor of security scanning solutions, has put out
a
press release saying that open source software poses security risks for
businesses, partly as a result of the lack of use of security scanning
solutions. There is an associated report available for those who
register. "The survey, sponsored by Fortify Software and completed
by leading application security consultant Larry Suto, examined 11 of the
most common Java open source packages. In order to evaluate the security
expertise offered to users and to measure the secure development processes
in place in OSS communities, Fortify interacted with open source
maintainers and examined documented open source security practices."
The whole thing may be self-serving, but there is also a real point:
anybody contemplating putting software into a security-relevant setting
should look at how the project handles security issues.
Comments (17 posted)