Discovering a Java Application's Security Requirements (O'ReillyNet)
Java security manager policy files are powerful and flexible, but rather grueling and error-prone to write by hand. In this article Mark Petrovic employs a novel approach: a development-time SecurityManager that logs your application's calls and builds a suitable policy file.
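The technique is simple enough to sketch. The class below is not the article's profiler -- the class name and output format are invented -- but it shows the core idea: a SecurityManager that never denies anything and instead logs every permission it is asked to check, so the log can later be distilled into grant entries.

```java
import java.security.Permission;

// Minimal "log everything, deny nothing" development-time security manager.
// Install it with:  java -Djava.security.manager=ProfilingSecurityManager MyApp
public class ProfilingSecurityManager extends SecurityManager {
    @Override
    public void checkPermission(Permission perm) {
        // A real policy would have to grant this; record it and allow the call.
        System.err.println("grant candidate: " + perm);
    }

    @Override
    public void checkPermission(Permission perm, Object context) {
        checkPermission(perm);
    }
}
```

Deduplicating and sorting that output gets you most of the way to a first-draft policy file.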
Posted Jan 4, 2007 22:46 UTC (Thu)
by bluefoxicy (guest, #25366)
[Link] (6 responses)
"Novel approach"? Since the dawn of time (ok, some time in 2001 or so) grsecurity has implemented access control policies at the operating system level, with a learning mode that works in much the same way. This does the same thing, but at the application level.
Posted Jan 4, 2007 22:57 UTC (Thu)
by nix (subscriber, #2304)
[Link]
Likewise AppArmor (and, these days, SELinux).
This is definitely not new stuff. (Important, just not new.)
Posted Jan 5, 2007 3:52 UTC (Fri)
by dwheeler (guest, #1216)
[Link]
There's an SELinux module that does the same. This is a really old idea. Nevertheless, it may still be a good thing.
Posted Jan 5, 2007 5:58 UTC (Fri)
by dang (guest, #310)
[Link] (3 responses)
You are missing two things.
First, this is not about how your OS protects itself from code run in userspace, but about how your application can lock itself down. This matters more when your app needs to load third-party modules or when you are dealing with something where the sandbox matters (e.g., an applet in a browser). But regardless, you are missing an unsubtle distinction here.
Second, while the example code deals largely with file access permissions, the security manager also deals with things like X.509 certs, access to the clipboard for cut/paste, and whether you can serialize objects -- tons and tons of things beyond file access call checkPermission(), many of which neither grsecurity nor SELinux knows anything about.
So this is actually worth a read and it might well make it easier for a Java developer to write a better, safer app.
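To make that second point concrete, here is a rough probe (hypothetical, not from the article) exercising a few of the non-file permissions that are routed through SecurityManager.checkPermission() and would therefore show up in a generated policy:

```java
import java.awt.AWTPermission;
import java.io.SerializablePermission;
import java.net.SocketPermission;
import java.security.Permission;

// Hypothetical probe: a few of the many non-file permissions that flow
// through SecurityManager.checkPermission().
public class PermissionProbe {
    public static void main(String[] args) {
        Permission[] probes = {
            new AWTPermission("accessClipboard"),              // cut/paste
            new SerializablePermission("enableSubstitution"),  // serialization hooks
            new RuntimePermission("createClassLoader"),
            new SocketPermission("example.org:80", "connect"),
        };
        SecurityManager sm = System.getSecurityManager();
        for (Permission p : probes) {
            if (sm == null) {
                System.out.println("no security manager installed: " + p);
            } else {
                try {
                    sm.checkPermission(p);
                    System.out.println("granted: " + p);
                } catch (SecurityException e) {
                    System.out.println("denied:  " + p);
                }
            }
        }
    }
}
```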
Posted Jan 5, 2007 7:12 UTC (Fri)
by bronson (subscriber, #4806)
[Link] (2 responses)
Your "unsubtle distinction" is, I think, exceedingly subtle. What's the theoretical difference between discovering the surface of an application's security requirements by running it instrumented under grsecurity versus running it instrumented under a JVM? I think you'll find that there's hardly any conceptual difference at all.
The point is, it's not a "novel approach." It's an ages-old approach, just applied to the Java environment. Everything you said about the clipboard, etc., are just different attributes discovered using the exact same technique.
Posted Jan 5, 2007 10:25 UTC (Fri)
by epa (subscriber, #39769)
[Link] (1 responses)
The 'novel approach', according to the blurb, is replacing the default SecurityManager with a stub that logs the calls being made. Using that you can profile your application to see the access it needs.
Posted Jan 9, 2007 2:05 UTC (Tue)
by sholden (guest, #7881)
[Link]
So it's the patent-application definition of "novel" rather than the English definition.
Posted Jan 4, 2007 22:53 UTC (Thu)
by smoogen (subscriber, #97)
[Link] (7 responses)
Does this mean that if the application is doing something wrong, then you are writing a policy that allows it to do something wrong? Especially if you don't really understand how policies are formatted?
Posted Jan 4, 2007 23:23 UTC (Thu)
by jannic (subscriber, #5821)
[Link] (5 responses)
The article explicitly states: "Each of these rules should be examined carefully to understand what it does and to confirm that it is consistent with our application's goals." - which probably could use some more emphasis. Using such a tool without understanding the results is indeed dangerous. But in this case, it's still better than disabling the security manager completely.
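For readers who have never looked at one, the snippet below writes out an invented example of the kind of grant block such a generator emits -- the codeBase, paths, and host are placeholders, not from the article -- and every line in it is the sort of thing a reviewer should be able to justify:

```java
import java.io.FileWriter;
import java.io.IOException;

// Writes a made-up example of a generated policy so the grant syntax is
// visible; the paths and host are placeholders.
public class WriteExamplePolicy {
    public static void main(String[] args) throws IOException {
        String policy =
            "grant codeBase \"file:/opt/myapp/lib/-\" {\n" +
            "  permission java.io.FilePermission \"/opt/myapp/data/-\", \"read,write\";\n" +
            "  permission java.util.PropertyPermission \"user.home\", \"read\";\n" +
            "  permission java.net.SocketPermission \"db.example.com:5432\", \"connect,resolve\";\n" +
            "};\n";
        try (FileWriter out = new FileWriter("example.policy")) {
            out.write(policy);
        }
        // After review, the application runs against it with:
        //   java -Djava.security.manager -Djava.security.policy=example.policy MyApp
    }
}
```

A rule you cannot explain is exactly the kind of thing this comment is warning about.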
Posted Jan 4, 2007 23:56 UTC (Thu)
by drag (guest, #31333)
[Link] (4 responses)
Plus it would probably be more advantageous to developers if they could spend more time auditing existing rules that are created mostly automatically rather than spend all their time wrestling with creating them from scratch.
Posted Jan 5, 2007 11:33 UTC (Fri)
by hummassa (subscriber, #307)
[Link] (3 responses)
I'm sorry, I disagree. The everything-is-forbidden-except-where-permitted-explicitly policy is _really_ stronger than the everything-is-permitted-let-me-see-if-I-have-to-forbid-something one. IOW, writing the rules from scratch brings security to another level and should be the _rule_, not the _exception_. But that's MHO.
Posted Jan 5, 2007 16:53 UTC (Fri)
by iabervon (subscriber, #722)
[Link] (2 responses)
Well, if you run your application under controlled conditions in development, generate the rules, and then deploy it in the wild with the rules you generated before, it's really: anything that happens under normal conditions is permitted; everything else is forbidden. That's somewhat better, anyway, since it means that the rules aren't based on what it does when given malformed input or on untested code paths.
Posted Jan 6, 2007 10:16 UTC (Sat)
by hummassa (subscriber, #307)
[Link] (1 responses)
> That's somewhat better, anyway, since it means that the rules aren't
> based on what it does when given malformed input or on untested code
> paths.
That is exactly the problem... The rules _should_ be based on what you want the application to do (forbid expressly everything else), not on what it actually does (possibly the wrong things) when given malformed input or untested code paths. If you do your rules from scratch, you open only your tested code paths. If you use a generator, you _may_ be opening untested code paths unknowingly -- and even with a thorough review of the generated rules, things like this may be _impossible_ (or at least impracticable) to detect.
Posted Jan 7, 2007 11:35 UTC (Sun)
by drag (guest, #31333)
[Link]
Ya.. Whitelists are generally going to be more reliable than blacklists.
Posted Jan 5, 2007 0:43 UTC (Fri)
by bluefoxicy (guest, #25366)
[Link]
Use application test cases. Stress test the application, and build your policy during that. If your policy fails, you'll expose not a policy bug but, rather, a bug in the test cases (i.e. a feature isn't sufficiently tested, as something the application can do wasn't done during the testing). Don't you know security tools make software development and debugging a heck of a lot easier? ;)
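One way to wire that up -- a sketch only, assuming the hypothetical ProfilingSecurityManager from the first example is on the test classpath -- is to install the logging manager before the suite runs:

```java
// Hypothetical base class for the test suite: install the logging manager
// (sketched earlier) before any test exercises the application, so the
// whole test run contributes permission records to the draft policy.
public abstract class PolicyProfilingTestBase {
    static {
        if (System.getSecurityManager() == null) {
            System.setSecurityManager(new ProfilingSecurityManager());
        }
    }
}
```

Any permission the tests never exercise then surfaces later as a SecurityException at runtime, which is the test-coverage signal described above.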