A white paper on comparative browser security

December 14, 2011

This article was contributed by Nathan Willis

A paper released in early December compared the security designs of recent versions of Microsoft Internet Explorer, Mozilla Firefox, and Google Chrome, and concluded that Google Chrome was the "most secured against attack" — and Firefox the least. But Google sponsored the paper (by Denver-based security firm Accuvant), a fact that many in the trade press immediately latched onto as evidence that its contents were untrustworthy. It is always wise to take such reports with a heap of salt, but Google's funding alone does not mean that there is no interesting information in the report. Still, many of the headlines in recent days have glossed over some important details in the paper and its conclusions.

A careful reading of the paper shows it to be not a quantitative analysis of the various browsers' vulnerabilities (or lack thereof) to real-world attacks, but more of a feature-by-feature review of their respective security architectures. In other words, when the paper's conclusion calls Chrome the most secured, instead of the most secure, the distinction is important. The paper's premise is that the browser with the most "modern" security features is the best prepared to repel likely attacks, and it examines the three browsers against a list of specific features, namely sandboxing, just-in-time (JIT) compiler hardening, protection against malicious add-ons (plug-ins, extensions, and themes), and various low-level exploit-prevention measures (such as address space randomization).

The browsers scored equally well on the low-level exploit prevention measures, but Chrome's sandbox, add-on security, and JIT hardening were deemed "industry standards" while the other browsers' were not. Interestingly enough, the paper also includes sections on URL blacklisting and a look at browsers' vulnerability-report and patch statistics over a two-and-a-half-year period — statistics which the authors take pains to insist should not be used to draw conclusions.

Approaches, blacklists, and statistics

The paper, dated December 6, 2011, is entitled Browser Security Comparison: A Quantitative Approach. A summary is posted on the Accuvant blog, and includes a link to a separate page on which the full, 140-page PDF is available, along with a ZIP archive of the raw data and supporting tools.

The paper begins by making a case for the approach used — comparing the security design of the browsers tested — and follows up with an overview of the browsers' architectures. For the security feature comparison, the paper considers Google Chrome versions 12 (12.0.742.122) and 13 (13.0.782.218), Internet Explorer 9 (9.0.8112.16421), and Firefox 5 (5.0.1), all of which were examined in July 2011 on Microsoft Windows 7 (32-bit).

Next is a survey of security vulnerability statistics, collected and collated between January 2009 and June 2011 (covering versions of Firefox from 2.0 to 5.0, versions of IE from IE6 to IE9, and all stable releases of Chrome). The paper makes four arguments that such statistics are unreliable. First, that vendor advisories do not correspond one-to-one with vulnerabilities (multiple vulnerabilities may be rolled into a single advisory, and some vulnerabilities go unreported entirely). Second, that timeline information gleaned from advisory and patch publication dates does not accurately reflect when a vulnerability is caught and/or fixed (for reasons ranging from bug duplication to vulnerabilities that Microsoft discovers internally and never publishes). Third, that there are no generally-agreed-upon criteria for classifying the severity of vulnerabilities. Finally, that the varying development models of the browser vendors make correlating vulnerability data across vendors difficult if not impossible (examples include patches to Windows that affect IE, and idiosyncrasies in the bug trackers used by Firefox and Chrome).

Nevertheless, the authors follow up by reporting statistics for update frequencies, public vulnerability reports, vulnerabilities sorted by severity, and the average time between a vulnerability report and a published fix. The section makes several comments dissuading readers from inferring browser quality from the numbers, such as "none of these pieces of information can be used to draw a security related conclusion" and "any conclusion drawn from the data is speculation and the data does not aid in discovering which browser is most secure." However, each of these comments comes immediately after a set of conclusions spelled out by the authors — such as Chrome being the most frequently updated browser, and Firefox having the most "critical" vulnerabilities. It is a puzzling approach: writing a conclusion, then immediately disavowing it. But since the authors deem the whole topic unreliable anyway, perhaps the point is moot.

The next section is a look at URL blacklist services, namely Microsoft's URL Reporting Service (URS) and Google's Safe Browsing List (SBL). The authors harvested active malware URLs from four web security sites, and queried both services. Over an eight-day stretch, they sampled a total of 47,682 URLs. Out of the 24,686 malware URLs which were still live when requested, URS and SBL each managed to block a scant 10%, with the remainder successfully slipping by.

Clearly, neither of the blacklist services performed well, but the data in this section of the paper is presented in a confusing manner. For example, in the pie chart which purports to show the portion of malware URLs blocked by the blacklist services (a graph reproduced in several news reports about the paper), the "unmatched URLs" pie-piece that takes up roughly 75% of the circle is labeled with the number from the total row of the chart. The pie-pieces showing URS and SBL's respective numbers of blocked URLs are also separate from each other, which implies that they had no URL-matches in common — a highly unlikely, albeit not impossible, event. Essentially, the pie pieces seem to come from two or three separate pies.

Security features

The next section defines the security features examined in each browser; the approach taken to assess the quality of each feature varied. First are the low-level exploit-prevention measures. This list includes Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP), Stack Cookies (a buffer-overflow protection technique), and Structured Exception Handling (SEH) protections (techniques to prevent hijacked exception handlers from executing hidden malware payloads). The authors examined all of the binaries loaded by the three browsers (including the .EXEs and .DLLs of the browser itself as well as all of the Windows system .DLLs the browser calls) and checked each for ASLR, DEP, Stack Cookie, and SEH compatibility.
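
As a rough illustration of what such a check involves (this is not the authors' tooling; the third-party pefile module used below is simply one convenient way to do it), a short script can read each binary's PE header and test the DllCharacteristics bits that signal ASLR and DEP opt-in. Stack-cookie use is not visible in the header and needs deeper inspection:

    import sys
    import pefile   # third-party module for parsing Windows PE binaries

    # Flag values defined by the PE/COFF specification
    DYNAMIC_BASE = 0x0040   # image may be relocated at load time (ASLR)
    NX_COMPAT    = 0x0100   # image is compatible with DEP (non-executable data)
    NO_SEH       = 0x0400   # image uses no structured exception handlers at all

    def mitigation_flags(path):
        pe = pefile.PE(path, fast_load=True)
        flags = pe.OPTIONAL_HEADER.DllCharacteristics
        return {
            "ASLR (dynamic base)": bool(flags & DYNAMIC_BASE),
            "DEP (NX compatible)": bool(flags & NX_COMPAT),
            "No SEH": bool(flags & NO_SEH),
        }

    for path in sys.argv[1:]:
        print(path)
        for name, enabled in mitigation_flags(path).items():
            print("    %-22s %s" % (name, "yes" if enabled else "no"))

Pointed at a browser's .EXE and every .DLL in its address space, a survey along these lines yields the sort of per-binary compatibility table that fills the paper's second appendix.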

Sandboxing receives the most attention. On Windows, Chrome and IE both take advantage of the OS's sandboxing functionality to limit each process's access to the filesystem, the network, Windows Registry data, other processes and threads, and various other system resources. Chrome and IE are both multi-process, providing separate processes for the UI, the rendering engine, and most individual tabs. Chrome and Firefox also run plug-ins in separate processes (though IE does not). Firefox, though, uses a single process for everything else and does not take advantage of Windows's sandboxing features. It therefore receives the default "Medium" level security token from Windows.
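
The "Medium" token refers to Windows integrity levels, which cap what a process may modify regardless of the user's own permissions. Purely as an illustration (this ctypes sketch is mine, not part of the paper's test suite), a process can ask Windows for the integrity level of its own token; an un-sandboxed browser process normally reports Medium, while a sandboxed Chrome renderer is dropped to Low or below:

    import ctypes
    from ctypes import wintypes

    advapi32 = ctypes.windll.advapi32
    kernel32 = ctypes.windll.kernel32
    kernel32.GetCurrentProcess.restype = wintypes.HANDLE
    advapi32.OpenProcessToken.argtypes = [wintypes.HANDLE, wintypes.DWORD,
                                          ctypes.POINTER(wintypes.HANDLE)]
    advapi32.GetSidSubAuthorityCount.restype = ctypes.POINTER(ctypes.c_ubyte)
    advapi32.GetSidSubAuthority.restype = ctypes.POINTER(wintypes.DWORD)

    TOKEN_QUERY = 0x0008
    TOKEN_INTEGRITY_LEVEL = 25   # TOKEN_INFORMATION_CLASS: TokenIntegrityLevel
    RID_NAMES = {0x0000: "Untrusted", 0x1000: "Low", 0x2000: "Medium",
                 0x3000: "High", 0x4000: "System"}

    def current_integrity_level():
        token = wintypes.HANDLE()
        if not advapi32.OpenProcessToken(kernel32.GetCurrentProcess(),
                                         TOKEN_QUERY, ctypes.byref(token)):
            raise ctypes.WinError()
        size = wintypes.DWORD()
        # First call only learns how big the TOKEN_MANDATORY_LABEL buffer must be
        advapi32.GetTokenInformation(token, TOKEN_INTEGRITY_LEVEL, None, 0,
                                     ctypes.byref(size))
        buf = ctypes.create_string_buffer(size.value)
        if not advapi32.GetTokenInformation(token, TOKEN_INTEGRITY_LEVEL, buf,
                                            size, ctypes.byref(size)):
            raise ctypes.WinError()
        # The buffer begins with a pointer to the mandatory-label SID; the SID's
        # last sub-authority is the integrity RID (0x2000 = Medium, 0x1000 = Low)
        sid = ctypes.cast(buf, ctypes.POINTER(ctypes.c_void_p)).contents
        count = advapi32.GetSidSubAuthorityCount(sid)[0]
        rid = advapi32.GetSidSubAuthority(sid, count - 1)[0]
        return RID_NAMES.get(rid, hex(rid))

    print("This process runs at integrity level:", current_integrity_level())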

That distinction is responsible for the bulk of the paper's criticism of Firefox; the analysis section examines, in turn, each of the system components that is accessible to an un-sandboxed Firefox process but walled off from a sandboxed Chrome process. The authors used the sandbox testing tool from the Chrome project to perform the tests on each browser. Chrome does not hit every bullet point, however; it allows access to some system parameters and "Windows Hooks" on the authors' checklist. Nor does Firefox miss on every point; rather, it receives mixed marks on many of the checklist items. IE falls somewhere in between.
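
For a flavor of what those checklist items test (the authors used the Chrome project's tool; this stand-alone sketch, with arbitrarily chosen targets, only mimics the idea), a probe run from inside the process of interest simply attempts restricted operations and records which ones succeed:

    # Attempt a few operations that a well-confined renderer process should not
    # be able to perform, and report the outcome of each. The specific targets
    # here are arbitrary examples, not the paper's checklist items.
    import socket
    import winreg

    def attempt(name, action):
        try:
            action()
            print("ALLOWED  %s" % name)
        except OSError as err:
            print("BLOCKED  %s (%s)" % (name, err))

    attempt("read another user's registry hive file",
            lambda: open(r"C:\Users\Default\NTUSER.DAT", "rb").read(1))
    attempt("open HKLM\\SOFTWARE for writing",
            lambda: winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, "SOFTWARE",
                                   0, winreg.KEY_WRITE))
    attempt("open an outbound TCP connection",
            lambda: socket.create_connection(("example.org", 80), timeout=5))

Inside a properly sandboxed renderer, every such attempt should fail no matter who is logged in; an un-sandboxed process is limited only by the ordinary permissions of the user running it, which is precisely the distinction the paper holds against Firefox.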

JIT hardening is the next topic examined. The authors enumerate eight techniques for securing the JIT engine against malware: codebase alignment randomization, instruction alignment randomization, constant blinding, constant folding, memory page protection, resource constraints, executable memory allocation randomization, and memory guard pages. On this topic, the authors examined the source code, disassembled binaries, and ran test scripts against the JIT engines to check for each technique. IE received the most positive marks, with complete implementations of all the techniques except for additional randomization and guard pages, for which it was scored "technique was not necessary."
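
Most of these techniques exist to keep an attacker from using the JIT compiler to place predictable, executable byte sequences in memory (the basis of "JIT spraying" attacks). Constant blinding is the easiest to illustrate; the following sketch is a schematic of the idea in Python, not any browser's actual JIT code:

    # Schematic illustration of constant blinding: rather than embedding an
    # attacker-supplied constant directly in generated code, where its byte
    # pattern could double as useful machine instructions, the emitter stores
    # it XORed with a fresh random cookie and emits an extra XOR to recover
    # the value at run time.
    import random

    def emit_blinded_constant(value, emit):
        """Emit pseudo-instructions that load `value` without embedding it as-is."""
        cookie = random.getrandbits(32)
        emit(("load_imm", value ^ cookie))   # only the blinded form lands in memory
        emit(("xor_imm", cookie))            # run-time XOR restores the real value

    # Example: "emit" into a list standing in for the JIT's code buffer
    code_buffer = []
    emit_blinded_constant(0x41414141, code_buffer.append)
    print(code_buffer)   # the attacker-chosen 0x41414141 does not appear verbatim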

Chrome scored in the middle of the pack, without implementations for three of the eight techniques (codebase alignment randomization, instruction alignment randomization, and memory page protection). On the guard pages technique, though, Chrome received a check-mark with a footnote noting that the feature was implemented in Chrome 14 — which was not the version reported earlier as having been tested. Firefox did not receive any check-marks in this section, with the authors observing succinctly "Firefox does not implement any JIT hardening techniques."

The final section of the paper addresses the security measures protecting each browser against malicious add-ons. The authors identify a list of 19 possible security measures, including whether add-ons are subject to many of the sandboxing protections measured for the browsers themselves. The list also includes user-facing techniques, such as displaying pre-install warnings, allowing automatic updates, and providing a user-controlled permission set for each add-on. The authors examined each browser with a mix of manual inspection (for user-visible techniques such as installation warnings) and repetition of the earlier sandbox tests.

Here the results are surprising considering what has come before: all of the browsers scored virtually the same, with mediocre add-on security. Chrome picked up one more checkbox than Firefox for its add-on permission model and a "partially-functioning" mark for its incomplete sandboxing. Both browsers received failing marks on eleven of the other criteria, including many sandboxing techniques that Chrome passed when the browser itself was examined. IE, as always, scored in the middle, but it, too, failed to enforce for add-ons many of the sandboxing rules that it enforced for its own browser processes. Nevertheless, in the paper's Executive Summary section, Chrome is given an "industry standard" check-mark, IE an "implemented" dot, and Firefox an "unimplemented or ineffective" X.

Two appendices follow; the first is an exploration of Chrome Frame, a plug-in for IE that uses Chrome as the page rendering and JavaScript engine. The authors examine how Chrome Frame operates and assess its potential security impact, concluding that it increases the attack surface of IE just like any other browser add-on. The second appendix is a lengthy (22-page) table of the low-level exploit-prevention measure test results for the browsers. Detailed test results for the other features examined are not included, although they are included in the downloadable data archive at the Accuvant site.

Is the perspective of the paper slanted?

Skeptics and Mozilla fans have every right to doubt the results of any Google-funded "research" that shows Chrome superior to other browsers — just as they should doubt any other vendor-funded research. After all, such research could be designed from the start to ensure a victory for Chrome, by examining only those features where Chrome outscores the competition. In that case, there is no need to fudge any numbers; the victor emerges naturally. Such a set-up was alleged by several Slashdot commenters (and hinted at by the story submitter) in the site's December 10 discussion of the paper.

Certainly the sandbox analysis could have been chosen to showcase one of Chrome's flagship features, but I would not conclude the same thing about the JIT hardening or add-on analysis sections, which did not show Chrome in nearly as favorable a light. On the other hand, I simply do not buy the paper's premise that running a checklist examination of the browsers results in what the authors call "a more accurate window into the vulnerabilities of each browser." Under the "Methodology Delta" section, the authors say:

Accuvant LABS' analysis is based on the premise that all software of sufficient complexity and an evolving code base will always have vulnerabilities. Anti-exploitation technology can reduce or eliminate the severity of a single vulnerability or an entire class of exploits. Thus, the software with the best anti-exploitation technologies is likely to be the most resistant to attack and is the most crucial consideration in browser security.

Perhaps that is a defensible position in theory, but what the paper examines is essentially the existence of these anti-exploitation features in the code base — it is hardly a "quantitative" approach as the title suggests. After all, the paper spends several pages asserting that real-world quantitative data on vulnerability reporting and patching can be "misleading" and "misappropriated." One could argue that a bug in the sandboxing code could single-handedly undermine a dozen of the check-marks that Chrome or IE received for implementing the features examined. A test performed in the lab may or may not catch such a bug, while real-world vulnerability reports — or attacks — are more likely to.

Regardless of how one feels about the approach taken by the paper, it is worth a look because it measures application security differently than the bulk of other analyses do. We can all agree that vulnerability statistics are often open to interpretation, so relying on them to measure the security of different applications is suspect — but many similarly targeted white papers do so. Accuvant has made an effort to analyze the security of these browsers in a different way, which is useful in its own right.

What a browser-maker might learn

Of course, weaknesses in the paper do not mean that Firefox should not consider a sandbox and multi-process design on all of its desktop platforms. It would clearly be more secure if it migrated to a model that included both, and if it implemented JIT hardening techniques, but those are hardly overnight changes.

At least the paper provides a survey of the attack surface addressed by Windows sandboxing and JIT hardening, which is valuable — both to browser vendors and to other developers. It is also interesting to note how many Windows system libraries each of the browsers touches, how ineffective URL blacklists are in practice, and how and where the security provided by the main browser breaks down when an add-on is installed. Skeptics may turn up their noses at Google's financing of the work, or at the methodology employed, but a detailed discussion of application security always makes for valuable reading.

Mozilla's Johnathan Nightingale told InformationWeek that the organization regards sandboxing as just one tool among many used to reduce security threats, "from platform-level features like address space randomization to internal systems like our layout frame poisoning system." He added that the browser-maker emphasizes security in the development process as well as in the code itself, highlighting code reviews, testing and analysis, and rapid responses to security issues.

As for the specifics touched on in the paper's comparison to Chrome's security architecture, Mozilla has been exploring a multi-process design for some time — but primarily out of an interest in speeding up Firefox's responsiveness. That work appears to have been back-burnered in favor of a set of smaller changes, including optimizations to the Places database and garbage collector. There are also Bugzilla issues tracking JIT hardening work, which does not involve substantial architectural changes to Firefox.

The paper is a puzzling affair — parts of it contradict other parts, the URL blacklisting discussion is a tangent, and the conclusion seems to weigh some of the tests significantly more heavily than others. But whatever else it may show, the public reaction to the paper since its release indicates that many Firefox users are interested in seeing the project push forward in these unaddressed areas.


