From: <casey.schaufler-xNZwKgViW5gAvxtiuMwx3w-AT-public.gmane.org>
To: <rene.mayrhofer-KHkXbKk7S56uQSZpjJ3gJA-AT-public.gmane.org>
Subject: Re: Arbitrary 3rd Party Code
Date: Mon, 11 Apr 2011 19:53:08 +0000
> From: ext René Mayrhofer [rene.mayrhofer-KHkXbKk7S56uQSZpjJ3gJA@public.gmane.org]
> Sent: Friday, April 08, 2011 12:23 PM
> To: Schaufler Casey (Nokia-SD/SiliconValley)
> Cc: pgupta-Z7xSafZYcfOMe3Hu20U6GA@public.gmane.org; ware-VuQAYsv1563Yd54FQh9/CA@public.gmane.org;
> Subject: Re: [Meego-security-discussion] Arbitrary 3rd Party Code
> On Friday, 08 April 2011, at 19:53:14, the following was written:
> > > Imagine for example a game application that should be able to set
> > > reminders for <doing something or other that is part of the game> in
> > > the
> > > user calendar.
> > You have two applications, a game and a calendar.
> > The game pushes information to the calendar.
> Yes. The calendar would/should be the built-in device calendar (as a stand-in for arbitrary
> other standard applications shipped with each MeeGo release).
> > > As another (unrelated) function of the game, it needs to
> > > communicate with a central server to exchange <high scores etc.>.
> > The game has bidirectional communication with the score-server.
> > > Upon
> > > installation time, the user might grant the application access to the
> > > calendar to post and maybe read and modify/delete existing entries as
> > > well as allowing it to connect to this server.
> > The installer instructs the computer to allow the game and the
> > score-server to communicate.
> > The installer instructs the computer to allow the game and the
> > calendar to cooperatively manage game related calendar entries.
> Yes. Again, I am thinking about communicating with the built-in PIM type applications
> that manage access to private/personal/sensitive user data.
> > Sounds simple.
> Not necessarily ;-)
> > The calendar entry maintenance demonstrates how a simple requirement
> > can lead to excesses of architecture.
> > One approach that the game and the calendar can use to accomplish
> > the updates is for the game to blindly push calendar update requests
> > to the calendar.
> Agreed, problem solved in the simple case, but not when you extend the scenario to e.g.
> moving existing (game-related) calendar entries, removing them, or checking the built-in
> task list if the user has already finished a game-related task.
> > Another approach is for the game and the calendar to use
> > bidirectional communications to negotiate the updates, allowing
> > for the game to modify its expectations in the face of schedule
> > conflicts.
> > Finally, the game could be granted access to the data that the
> > calendar uses.
> > These three scenarios are all rational in certain contexts.
> > Each has its own set of security, performance and usability issues.
> > The security implementation for each can be very different.
> Fully agreed. However, looking at many of the current Android applications
> (I am referring to Android because I know its security architecture much better
> than e.g. the iPhone security measures, but the same concepts apply to other app
> markets), I see many examples that are related to this simplified use case:
> applications accessing the full contacts database just because they offer to send
> some application-related snippet via email or SMS; applications requiring full
> telephony access because they let the user trigger a phone call to a phone number
> found in the (typically social network type) application data; applications requesting
> full network communication privileges just because they want to display ads or check
> for updates of some application-related data from the developer server; or
> applications requesting su shell privileges just to set one kernel variable. That is,
> applications require access to some local or remote data set, but only use a
> (completely legitimate) subset of the available entries. With current architectures,
> I don't see a standard way to enforce this "subset of some data resource" access
> in a way that end-users installing the application would be able to understand.
> [That is one of the main reasons why I strive for the simplest possible architecture:
> if end-users don't understand it, it's (mostly) worthless.]
You are describing application object management (e.g. calendar entries, plugins)
and OS capability (e.g. CAP_SYS_TIME, CAP_SYS_ADMIN) management. This all
has to happen in application space, and there are exactly two options. One is
to restrict the applications to those that comply with some criteria. Both iPhone
and Android use this model, with Apple and Google taking different approaches
to enforcing adherence to the criteria. The other approach is to leave the
behavior of application space to the developers of applications. This is the classic
Linux/Unix model and leaves application-space security strictly up to the
distributor. Red Hat, for example, has chosen to use SELinux as a mechanism
to argue that the applications conform to a policy with regard to each other.
Application object management takes everyone by surprise. Once an application
starts providing access control services the application becomes a security
enforcing component of the system. Back in the Orange Book days we wrote
entire security policy models for print queue management. I seriously doubt that
most of the people reading this would anticipate the issues with PostScript.
If your calendar application is accepting requests to make changes to the
calendar data, you can describe the policy it is enforcing. If the game is
allowed to modify the data without the intervention of the calendar, it has to
be included in the policy for calendar objects. SELinux attempts to provide
an OS-based structure for doing this. Because applications are rarely
written with data domains in mind, you end up with large, complicated policies.
As far as end users go, we really need to change our perception
of who the "end user" is. On a cell phone the person with the handset
in her purse does not typically know or care about the security model
of the operating system. The application writer and the service provider
do care. This is one reason for the success of Android, where the
emphasis of the platform is to make the development and deployment
of new applications easy by providing all system resources as services.
> The problem with assigning security contexts to applications is that it does not support
> limiting access to subsets of the data managed by one application.
> > > However, that does not mean that the application should be allowed to
> > > send calendar data to the server. If data from different sources (such
> > > as the calendar) was tagged appropriately and was not allowed to be
> > > sent
> > > over network connections, we could solve a significant amount of
> > > privacy leaks.
> > Oh my. I haven't seen anyone advocate information labels since the
> > Mitre Compartmented Mode Workstation specification in 1987. It can
> > be done and it has been done, it just doesn't turn out to work the
> > way you want it to.
> Casey, I highly respect and value your opinion on security architectures.
> However, this is the second time in two days that you have posted a rather condescending
> remark about somebody's suggestion/question.
> For the record, I did not suggest information labels as such,
I understand. You did suggest a path that goes past the same dragon lairs.
> and I do not particularly want to go the path of SELinux MLS.
> Can we get back to purely technical discussions, please?
Sorry if I offended. None was intended.
> > > Will it be possible to protect against rogue
> > > applications that read private data in one context and then apply
> > > encryption/steganography/whatever to get them into another context
> > > without this being detected? No.
> > Yes.
> How (In the mobile applications context)?
Sorry, my response should have been "Yes, I agree with your conclusion."
> > > The question is therefore more a compromise: given limited resources
> > > and
> > > a finite-length security policy, against how many "standard" threats
> > > can
> > > we protect? By solving 90% of those cases where Android applications
> > > currently violate the "intended"/"expected" behavior, we would already
> > > have made a large improvement.
> > I still say that your computer should not be asked to second guess
> > the intention or expectation of the user except in cases where the
> > entire software stack is under the control of a single entity that
> > is willing and able to take responsibility for the behavior.
> Did you come across the more recent papers on usability of security methods
> in the mobile domain, e.g. the authentication protocols usability study done by
> Nokia research Helsinki [Usability Analysis of Secure Pairing Methods. USEC 07],
> a more recent one in the same area [On the Usability of Secure Association of
> Wireless Devices Based On Distance Bounding] or others that followed these
> (I am not going to reference my own papers here because I do not consider the
> studies we did statistically significant for a broad population)? I can recommend
> them as a good read, even if they are specific to authentication protocols and
> don't cover the whole of usable security in mobile devices.
Authentication is an important component of security and secure communications
but all it provides is assurance that the message came from a particular
source. It says nothing about the appropriateness of the content of the message.
> A few years ago, I would have agreed that a "single entity" (the user/owner of
> the device, and not any co-operation with potentially conflicting interests) should
> be in full control over what may or may not happen. However, I have changed
> my opinion based on these (non-representative, but still clearly alarming) and
> other studies -- most end-users are simply not capable of making informed
> decisions about security policies; heck, I myself am not able to decide if I
> want to install an Android application based on its set of required capabilities.
> And neither do I think that end-users should be asked to make these decisions.
> It's not part of the job they want to get done, but gets in the way of the task they
> intend to perform. It is therefore completely understandable that most users
> choose to ignore security policies as long as "the system works". We need to
> decrease the burden placed on users when developing new security measures,
> not increase it. Doing otherwise means fighting a losing battle, like the one still
> fought by advising users to choose unique, strong passwords for every account
> and to change them regularly.
This has not changed since I started working in security in the days when
dinosaurs roamed the earth and megabytes were only found on disk drives.
We released a Unix variant that we charged $5000 extra for because it had
an unprivileged root (using POSIX capabilities), and every customer's first
question was "How do I become Real Root?".
> There are multiple potential approaches to tackle this issue besides information
> labels, e.g. informal tagging of content (which is similar to information labels, but
> where each application can define its own tags), the whole range of techniques
> from data leak prevention (that is, trying to detect _only potentially sensitive_ data
> before allowing it into another context instead of tagging/labeling _all_ data elements),
> and probably many others I'm not currently thinking of. The scenario I brought up
> was intended to act as a simple threat scenario against which we can measure
> technical suggestions, not as an implementation description.
If informal methods are sufficient, then the problems of information labels
are reasonably easy to deal with. The problem comes from trying to ensure
that the mechanism is not circumventable.
I proposed a mechanism for content based access control last year, only
to discover that Eric Paris had beaten me to it with fanotify.
But again, you need the applications to buy into it.
> best regards,