By Nathan Willis
June 26, 2013
Back in February, Mozilla implemented
a new cookie policy for Firefox, designed to better protect users'
privacy—particularly with respect to third-party user tracking.
That policy was to automatically reject cookies that originate from
domains that the user had not visited (thus catching advertisers and
other nefarious entities trying to sneak a cookie into place). The
Electronic Frontier Foundation (EFF) lauded
the move as "standing up for its users in spite of powerful
interests," but also observed that cross-site tracking cookies
were low-hanging fruit compared to all of the other methods used to
track user behavior.
But as it turns out, nipping third-party cookies in the bud is not
as simple as Mozilla had hoped. The policy change was released in
Firefox nightly builds, but Mozilla's Brendan Eich pointed
out in May that the domain-based approach caused too many false
positives and too many false negatives. In other words,
organizations that deliver valid and desirable cookies from a
different domain name—such as a content distribution network (CDN)—were being unfairly blocked by the policy, while sites that a user
might visit only once—perhaps even accidentally—could set
tracking cookies indefinitely. Consequently, the patch was held back
from the Firefox beta channel. Eich commented that the project was
looking for a more granular solution, and would post updates within
six weeks about how it would proceed.
Six weeks later, Eich has come back with a description
of the plan. Seeing how the naïve has-the-site-been-visited policy
produced false positives and false negatives, he said, Mozilla
concluded that an exception-management system was required. But the
system could not rely solely on the user to manage
exceptions—Eich noted that Apple's Safari browser has a similar
visited-based blocking system that advises users to switch off the
blocking functionality the first time they encounter a false
positive. This leaves a blacklist/whitelist approach as the
"only credible alternative," with a centralized service
to moderate the lists' contents.
Perhaps coincidentally, on June 19 (the same day Eich posted his
blog entry), Stanford University's Center for Internet and Society (CIS)
unveiled just such a centralized cookie listing system, the Cookie Clearinghouse (CCH). Mozilla
has committed to using the CCH for its Firefox cookie-blocking policy,
and will be working with the CCH Advisory Board (a group that also
includes a representative from Opera, and is evidently still sending
out invitations to other prospective members). Eich likens the CCH exception
mechanism to Firefox's anti-phishing
features, which also use a whitelist and blacklist maintained
remotely and periodically downloaded by the browser.
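To make the comparison concrete, here is a minimal sketch (in Python, and not Mozilla's actual implementation) of that periodically-downloaded-list pattern: the browser refreshes a local copy of remotely maintained allow and block lists when the copy goes stale, then consults it before accepting a third-party cookie. The URL and the JSON layout are hypothetical placeholders.

    # A minimal sketch (not Mozilla's implementation) of the periodically
    # downloaded list pattern: refresh a local copy of remotely maintained
    # allow/block lists when it goes stale, then consult that copy before
    # accepting a third-party cookie.  The URL and JSON layout are
    # hypothetical placeholders.

    import json
    import time
    import urllib.request

    LIST_URL = "https://lists.example.org/cch.json"  # hypothetical endpoint
    REFRESH_SECONDS = 24 * 60 * 60                   # re-download once a day

    _cache = {"fetched_at": 0.0, "allow": set(), "block": set()}

    def refresh_lists():
        """Download the latest lists and replace the local copy."""
        with urllib.request.urlopen(LIST_URL) as response:
            data = json.load(response)
        _cache["allow"] = set(data.get("allow", []))
        _cache["block"] = set(data.get("block", []))
        _cache["fetched_at"] = time.time()

    def cookie_allowed(domain, visited_by_user):
        """Decide whether to accept a third-party cookie from `domain`."""
        if time.time() - _cache["fetched_at"] > REFRESH_SECONDS:
            refresh_lists()
        if domain in _cache["block"]:
            return False         # centrally block-listed: always reject
        if domain in _cache["allow"]:
            return True          # centrally allow-listed: always accept
        return visited_by_user   # otherwise fall back to the visited test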
Codex cookius
As the CCH site describes it, the system will publish blacklists
(or "block lists") and whitelists (a.k.a. "allow lists") based on
"objective, predictable criteria"—although it is
still in the process of developing those criteria.
There are already four presumptions made about how browsers will
use the lists. The first two more or less describe the naïve
"visited-based"
approach already tried by Safari and Firefox: if a user has visited a
site, allow the site to set cookies; do not set cookies originating
from sites the user has not visited. Presumption three is that if a
site attempts to save a Digital Advertising Alliance "opt out cookie," that
opt-out cookie (but not others) will be set. Presumption four is
that cookies should be set when the user consents to them. The site
also notes that CCH is contemplating adding a fifth presumption to
address sites honoring the Do Not Track (DNT) preference.
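Taken together, the presumptions amount to a short decision procedure. The Python sketch below is purely illustrative; the CCH has published no reference logic, and the Cookie structure and its fields are invented for the example.

    # A rough sketch of the four presumptions as a decision function, purely
    # for illustration: the CCH has not published reference logic, and the
    # Cookie structure and its fields here are invented for the example.

    from dataclasses import dataclass

    @dataclass
    class Cookie:
        origin: str              # domain attempting to set the cookie
        is_daa_opt_out: bool     # Digital Advertising Alliance opt-out cookie?

    def should_set_cookie(cookie, visited_domains, user_consented):
        # Presumption three: a DAA opt-out cookie is allowed through.
        if cookie.is_daa_opt_out:
            return True
        # Presumption four: cookies the user explicitly consented to are set.
        if user_consented:
            return True
        # Presumptions one and two: allow cookies from sites the user has
        # visited, reject cookies from sites the user has never visited.
        return cookie.origin in visited_domains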
Obviously these presumptions are pretty simple on their own; the
real work will be in deciding how to handle the myriad sites that fall
into the false-match scenarios already encountered in the wild. To
that end, CCH reports that it is in the initial phase of drawing up
acceptance and blocking criteria, which will be driven by the Advisory
Board members. The project will then hammer out a file format for the
lists, and develop a process that sites and users can use to challenge
and counter-challenge a listing. Subsequently, the lists will be
published and CCH will oversee them and handle the challenge process.
The site's FAQ page
says the project hopes to complete drawing up the plans by Fall 2013
(presumably Stanford's Fall, that is).
The details, then, are pretty scarce at the moment. The Stanford
Law School blog posted
a news item with a bit more detail, noting that the idea for the CCH
grew out of the CIS team's previous experience working on DNT.
Indeed, that initial cookie-blocking patch to Firefox, currently
disabled, was written by a CIS student affiliate.
Still, this is an initiative to watch. Unlike DNT, which places
the onus for cooperation solely on the site owners (who clearly have
reasons to ignore the preference), CCH enables the browser vendor to
make the decision about setting the cookie whether the site likes it
or not. The EFF is right to point out that cookies are not required
to track users between sites, but there is no debating the fact that
user-tracking via cookies is widespread. The EFF's Panopticlick project illustrates
other means of uniquely identifying a particular browser, but it is
not clear that anyone (much less advertisers in particular) uses
similar tactics.
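For a sense of what those other means look like, here is a toy fingerprinting sketch in the spirit of Panopticlick: attributes the browser exposes anyway are combined and hashed into a single identifier, no cookie required. The attribute values are invented, and real fingerprinting scripts gather far more signals.

    # A toy illustration of the fingerprinting idea behind Panopticlick:
    # combine attributes the browser reveals anyway (user agent, screen
    # size, time zone, fonts, plugins) into one identifier, no cookie
    # required.  The attribute values below are made up for the example.

    import hashlib

    def browser_fingerprint(attributes):
        """Hash a dict of browser-exposed attributes into one identifier."""
        canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
        return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

    print(browser_fingerprint({
        "user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:22.0) Firefox/22.0",
        "screen": "1920x1080x24",
        "timezone": "UTC-7",
        "fonts": "DejaVu Sans,Liberation Serif,Cantarell",
        "plugins": "Shockwave Flash 11.2",
    }))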
To play devil's advocate, one might argue that widespread adoption
of CCH would actually push advertisers and other user-tracking vendors
to adopt more surreptitious means of surveillance, arms-race style.
The counter-argument is that ad-blocking software—which is also
purported to undermine online advertising—has been widely
available for years, yet online advertising has shown little sign of
disappearing. Then again, if Firefox or other browsers adopt CCH
blacklists by default, they could instantly nix cookies on millions of
machines—causing a bigger impact than user-installed ad
blockers.
The other challenges in making CCH a plausible reality include the
overhead of maintaining the lists themselves. The anti-phishing
blacklists (as well as whitelists like the browser's root
Certificate Authority store) certainly change, but the sheer
number and variety of legitimate sites that employ user-tracking
surely dwarf the set of malware sites. Anti-spam blacklists, which
might be a more apt comparison in terms of scale, have a decidedly
mixed record.
It is also possible that the "input" sought from interested parties
will complicate the system needlessly. For example, the third
presumption of CCH centers around the DAA opt-out cookie, but there
are many other advertising groups and data-collection alliances out
there—a quick search for "opt out cookie" will turn up an
interesting sample—all of whom, no doubt, will have opinions
about the merits of their own opt-out systems. And that does
not even begin to consider the potential complexities of the planned
challenge system—including the possibility of lawsuits from
blacklisted parties who feel their bottom line has been unjustly harmed.
Privacy-conscious web users will no doubt benefit from any policy
that restricts cookie setting, at least in the short term. But the
arms-race analogy is apt; every entity with something to gain by
tracking users through the web browser is already looking for the next
way to do so more effectively. Browser vendors can indeed stomp out
irritating behavior (pop-up ads spring to mind), but they cannot
prevent site owners from trying something different tomorrow.