
Discovering a Java Application's Security Requirements (O'ReillyNet)

Posted Jan 4, 2007 23:23 UTC (Thu) by jannic (subscriber, #5821)
In reply to: Discovering a Java Application's Security Requirements (O'ReillyNet) by smoogen
Parent article: Discovering a Java Application's Security Requirements (O'ReillyNet)

The article explicitly states: "Each of these rules should be examined carefully to understand what it does and to confirm that it is consistent with our application's goals." - which probably could use some more emphasis. Using such a tool without understanding the results is indeed dangerous. But in this case, it's still better than disabling the security manager completely.



Discovering a Java Application's Security Requirements (O'ReillyNet)

Posted Jan 4, 2007 23:56 UTC (Thu) by drag (guest, #31333) (4 responses)

Plus it would probably be more advantageous to developers if they could spend more time auditing existing rules that are created mostly automatically, rather than spending all their time wrestling with creating them from scratch.

I'm sorry I disagree.

Posted Jan 5, 2007 11:33 UTC (Fri) by hummassa (subscriber, #307) (3 responses)

The everything-is-forbidden-except-where-explicitly-permitted policy is
_really_ stronger than the
everything-is-permitted-let-me-see-if-I-have-to-forbid-something one.

IOW, writing the rules from scratch brings security to another level and
should be the _rule_, not the _exception_.
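
For illustration, a deny-by-default policy is just a list of explicit
grants; a minimal sketch (the codeBase, paths, and host below are
placeholders, not taken from the article) might look like:

    grant codeBase "file:/opt/myapp/app.jar" {
        // Only the directories and hosts the application is known to need.
        permission java.io.FilePermission "/var/myapp/data/-", "read,write";
        permission java.net.SocketPermission "db.example.com:5432", "connect";
    };

Run with -Djava.security.manager -Djava.security.policy==app.policy (the
double '=' replaces the default policy rather than adding to it), and
anything not granted above is simply denied.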

But that's MHO.

I'm sorry I disagree.

Posted Jan 5, 2007 16:53 UTC (Fri) by iabervon (subscriber, #722) (2 responses)

Well, if you run your application under controlled conditions in development, generate the rules, and then deploy it in the wilds with the rules you generated before, it's really: anything that happens under normal conditions is permitted; everything else is forbidden. That's somewhat better, anyway, since it means that the rules aren't based on what it does when given malformed input or on untested code paths.
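
(A sketch of that profiling step, not the article's actual tool: run the
test suite under a SecurityManager that records every permission the
application requests instead of denying it, then turn the log into grant
entries for review. The entry point is hypothetical.)

    import java.security.Permission;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    // Logs each distinct permission requested during a controlled test run,
    // so the output can be reviewed and turned into policy grants.
    // Nothing is denied while this manager is installed.
    public class RecordingSecurityManager extends SecurityManager {
        private final Set<String> seen = ConcurrentHashMap.newKeySet();

        @Override
        public void checkPermission(Permission perm) {
            if (seen.add(perm.toString())) {
                System.err.println("requested: " + perm);
            }
        }

        @Override
        public void checkPermission(Permission perm, Object context) {
            checkPermission(perm);
        }

        public static void main(String[] args) {
            System.setSecurityManager(new RecordingSecurityManager());
            // com.example.MyApp.main(args);  // hypothetical application entry point
        }
    }

Of course, whatever it prints reflects only the code paths the test run
actually exercised.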

That is exactly the problem...

Posted Jan 6, 2007 10:16 UTC (Sat) by hummassa (subscriber, #307) (1 response)

> That's somewhat better, anyway, since it means that the rules aren't
> based on what it does when given malformed input or on untested code
> paths.

The rules _should_ be based on what you want to do (expressly forbid
things) when given malformed input or untested code paths. If you write
your rules from scratch, you open only your tested code paths. If you use a
generator, you _may_ be opening untested code paths unknowingly -- and
even with a thorough review of the generated rules, things like this may
be _impossible_ (or at least impracticable) to detect.
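
For example, a hypothetical generated entry like the following looks
plausible in review, yet it grants recursive read access to the whole home
directory just because the profiled run happened to read one file there:

    grant {
        permission java.io.FilePermission "${user.home}${/}-", "read";
    };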

That is exactly the problem...

Posted Jan 7, 2007 11:35 UTC (Sun) by drag (guest, #31333)

Ya... Whitelists are generally going to be more reliable than blacklists.

