By Jonathan Corbet
April 13, 2011
Companies operating in the handset market have different approaches to
almost everything, but they do agree on one thing: they have seen the
security problems which plague desktop systems and they want no part of
them. There is little consistency in how that higher level of
security is to be achieved, though. Some companies opt for heavy-handed central
control of all software which can be installed on the device. Android uses
sandboxing and a set of capabilities enforced by the Dalvik virtual
machine. MeeGo's approach has been based on traditional Linux access
control paired with the Smack mandatory access control module. But much
has changed in the MeeGo world, and it appears that security will be
changing too.
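Part of Smack's appeal is its simplicity: policy is expressed as short text labels attached to processes and objects, with a file's label stored in the security.SMACK64 extended attribute. A minimal sketch of file labeling follows; the "UntrustedApp" label is made up for illustration, and actually writing the attribute requires CAP_MAC_ADMIN on a Smack-enabled kernel, so the sketch reports failure rather than raising:

```python
# Sketch: how Smack attaches policy to a file. The label lives in the
# security.SMACK64 extended attribute; setting it needs CAP_MAC_ADMIN
# on a Smack-enabled kernel. "UntrustedApp" is an illustrative label,
# not one from the MeeGo discussion.
import os
import tempfile

def try_smack_label(path, label):
    """Attempt to set a Smack label on path; return True on success."""
    try:
        os.setxattr(path, "security.SMACK64", label.encode())
        return True
    except OSError as e:
        # Expected on kernels without Smack or without privilege.
        print(f"cannot label {path}: {e} (Smack not enabled?)")
        return False

fd, path = tempfile.mkstemp()
os.close(fd)
if try_smack_label(path, "UntrustedApp"):
    print(os.getxattr(path, "security.SMACK64"))
os.unlink(path)
```

On a system without Smack (or without the needed privilege) the setxattr() call simply fails with EPERM or a similar error, which the sketch prints instead of crashing.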
In early March, the project sent out a notice
regarding a number of architectural changes made after Nokia's change
of heart. With regard to security, the announcement said:
In the long-term, we will re-evaluate the direction we are taking
with MeeGo security with a new focus on *End-User Privacy*. While
we do not intend to immediately remove the security enabling
technologies we have been including in MeeGo, all security
technologies will be re-examined with this new focus in mind.
It appears that at least some of this reexamination has been done; the
results were discussed in this message from
Ryan Ware which focused mainly on the problem of untrusted third-party
applications.
The MeeGo project, it seems, is reconsidering its decision to use the Smack
access control module; a switch to SELinux may be in the works. SELinux
would mainly be charged with keeping the trusted part of the system in
line. All untrusted code would be sandboxed into its own container; each
container gets a disposable, private filesystem in the form of a Btrfs
snapshot. Through an unspecified mechanism (presumably the mandatory
access control module), these untrusted containers could be given limited
access to user data, device resources, etc.
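The snapshot-per-container idea can be sketched with the btrfs command-line tool; the subvolume paths and the run_sandboxed() helper below are purely illustrative, since nothing in the MeeGo discussion specifies an exact layout, and running it for real requires root and a btrfs filesystem:

```python
# Sketch of the disposable-filesystem lifecycle: each untrusted
# application gets a private, writable btrfs snapshot of a pristine
# base subvolume, and the whole thing is thrown away on exit.
# Paths are illustrative, not MeeGo's actual layout; requires root
# and a btrfs mount to run for real.
import subprocess

def run_sandboxed(base, snap):
    """Snapshot base, (notionally) run the app inside it, then discard it."""
    # At launch: give the sandbox a private copy of the base subvolume.
    subprocess.run(["btrfs", "subvolume", "snapshot", base, snap],
                   check=True)
    try:
        pass  # launch the application with its root bound to snap
    finally:
        # At exit: delete the snapshot, discarding any tampering.
        subprocess.run(["btrfs", "subvolume", "delete", snap],
                       check=True)
```

Because btrfs snapshots are copy-on-write, creating and destroying one per application run is cheap; whatever the untrusted code wrote vanishes with the snapshot.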
It probably surprised nobody that Casey Schaufler, the author of Smack, was
not sold on the value of a change to SELinux. Such a change would, he said, add a great deal of complexity to the
system without providing any real security:
SELinux as provided in distributions today does not, for all its
trappings, complexity and assurances, enforce any security
policy. SELinux is capable of enforcing a security policy, but no
general purpose system available today provides anything more than
a general description of the experienced behavior of a small subset
of the system supplied applications.
The people who built SELinux fell into a trap that has been
claiming security developers since computers became
programmable. The availability of granularity must not be assumed
to imply that everything should be broken down into as fine a
granularity as possible. The original Flask papers talk about a
small number of well defined domains. Once the code was implemented
however the granularity gremlins swarmed in and now the reference
policy exceeds 900,000 lines. And it enforces nothing.
Ryan's response was that the existing
SELinux reference policy is not relevant because MeeGo does not plan to use
it:
At this point I want nothing to do with the Reference Policy. I
would much prefer to focus on a limited set of functionality around
privacy controls. I know that means it won't necessarily exhibit
"expected" SELinux behavior. Given the relatively limited range of
verticals we are trying to support, I believe we will be able to
get away with that.
What this means is that he is talking about creating a new SELinux policy
from the beginning. The success of such an endeavor is, to put it gently,
not entirely assured. The current reference policy has
taken many years
and a great deal of pain to reach its present state of utility; there are
very few examples of viable alternative policies out there. Undoubtedly
other policies are possible, and they need not necessarily be as complex as
the reference policy, but anybody setting out on such a project should be
under no illusions that it will be easily accomplished.
The motivation for the switch to SELinux is unclear; Ryan suggests that
manufacturers have been asking for it. He also said that manufacturers
would be able to adjust the policy for their specific needs, a statement
that Casey was not entirely ready to accept:
There are very few integrators, even in the military and
intelligence arenas, who feel sufficiently confident with their
SELinux policy skills to do any tweaking that isn't directly
targeted at disabling the SELinux controls.
Ryan acknowledged that little difficulty, but he seems determined to press
on in this direction.
The end goal of all this work is said to be preventing the exposure of
end-user data. That will not be an easy goal to achieve either, though.
Once an application gets access to a user's data, even the firmest SELinux
policy is going to have a hard time preventing the disclosure of that data
if the application is coded to do so; Ryan has acknowledged this fact. Any Android user who
pays attention
knows that even trivial applications tend to ask for combinations of
privileges (address book access and network access, for example) which
amount to giving away the store. Preventing information leakage through a
channel like that - while allowing the application to run as intended - is
not straightforward.
So it may be that the "put untrusted applications in a sandbox and limit
what they can see" model is as good as it's going to get. As Casey pointed out, applications are, for better or
worse, part of the security structure on these devices. If an application
has access to resources with security implications, the application must
implement any associated security policy. That's a discouraging conclusion
for anybody who wants to install arbitrary applications from untrusted
sources.