
Partial disclosure

By Jake Edge
October 8, 2008

We are increasingly seeing disclosures of security vulnerabilities that don't actually disclose much, except that the researcher has found something. Unfortunately, we have also seen lots of evidence that once the presence of a flaw is known, it doesn't take very long for folks to figure out what the vulnerability is. Of course, we don't have any data on how long it takes those with malicious intent to find the flaws, but clearly the "white hats" find them quickly. So what or who, exactly, are those practicing "partial disclosure" protecting?

Partial disclosure is clearly a part of the "security circus" that Linus Torvalds recently castigated, as it serves to increase the notoriety of security researchers, without necessarily doing anything to help protect users. Several recent examples come to mind of researchers who have found real flaws, but for various reasons don't want to disclose the details. Instead they "tease" the world by talking around what they found, trying—and generally failing—to leave out enough information so that others can't immediately follow in their footsteps.

Dan Kaminsky's DNS flaw was an interesting example in that Kaminsky only disclosed the vulnerability to affected software vendors, allowing them several months to produce patches. He then wanted to give administrators time to apply those patches, so he delayed disclosing the flaw for another month or so. He also had an admittedly selfish reason for delaying disclosure: he wanted to announce it at the Black Hat security conference.

Because the fix that was deployed was source port randomization, it didn't take very long for other security researchers to work the vulnerability out from the patches alone. Attackers may have worked it out even more quickly. Meanwhile, because there were no details available, developers of other, smaller DNS servers—not privy to the initial disclosure—were unable to determine whether their code was vulnerable. It is commendable that Kaminsky worked with the vendors to fix the problem, but there were clearly holes in his disclosure methods.
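
To make the nature of that fix concrete, a stub resolver hardens itself against off-path spoofing by randomizing both the 16-bit DNS transaction ID and the UDP source port of every query. The following rough Python sketch is an illustration only; it is not code from any of the patched servers, and the server address is a placeholder:

    import random
    import socket
    import struct

    def build_query(name, qid):
        # Minimal DNS query: 12-byte header (ID, flags with RD set, one
        # question), then the name as length-prefixed labels, QTYPE=A,
        # QCLASS=IN.
        header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
        qname = b"".join(bytes([len(label)]) + label.encode("ascii")
                         for label in name.rstrip(".").split(".")) + b"\x00"
        return header + qname + struct.pack(">HH", 1, 1)

    def send_randomized_query(name, server="192.0.2.53"):
        # Randomize both the transaction ID and the UDP source port, so an
        # off-path spoofer must guess roughly 32 bits rather than 16.  A
        # real resolver would also retry if the chosen port is already in use.
        rng = random.SystemRandom()
        qid = rng.randrange(1 << 16)
        sport = rng.randrange(1024, 1 << 16)
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", sport))
        sock.sendto(build_query(name, qid), (server, 53))
        return sock, qid

With only the transaction ID randomized, a spoofer has to guess about 16 bits per forged response; adding source port randomization pushes that to roughly 32 bits, which is what the coordinated patches did.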

A worse case can be seen with the recent spate of reports about "clickjacking". It started with a report of a canceled talk at the OWASP AppSec conference. The name is clearly suggestive of where the vulnerability might be, and the description of the canceled talk gave enough information that others were able to duplicate it. This led one of the original researchers to release the vulnerability information.

So, in the interim, there was enough information floating around to find and exploit the flaws, and now that the vulnerability information has been released, there are still no fixes available for many of them. It is hard to see what delaying the disclosure did for anyone—researchers or users—here. It did generate lots of press, though, partly because of the name, as Bruce Schneier pointed out pre-disclosure:

"Clickjacking" is a stunningly sexy name, but the vulnerability is really just a variant of cross-site scripting. We don't know how bad it really is, because the details are still being withheld. But the name alone is causing dread.

Yet another recent example is the denial of service reported for nearly any TCP device. Like clickjacking, it is being described in scary ways—which may well be justified:

Robert and I talk a lot, and I asked him if he'd be willing to DoS us, and he flatly said, "Unfortunately, it may affect other devices between here and there so it's not really a good idea." Got an idea of what we're talking about now? This appears not to be a single bug, but in fact at least five, and maybe as many as 30 different potential problems. They just haven't dug far enough into it to really know how bad it can get. The results range from complete shutdown of the vulnerable machine, to dropping legitimate traffic.

There may well be enough information in the description of what the researchers found—and, in particular, how they found it—for an enterprising attacker to find it for themselves. In the meantime, the rest of us are left in the dark. Security researchers are clearly under no obligation to disclose their research in any particular way, but it would seem that either releasing all the details at once, or keeping them completely secret, would be better than these partial disclosures.



Partial disclosure

Posted Oct 9, 2008 9:37 UTC (Thu) by hppnq (guest, #14462) [Link]

it would seem that either releasing all the details at once, or keeping them completely secret, would be better than these partial disclosures.

I don't agree. If the aim is to be secure, IMHO it is better to simply apply normal safety measures and normal risk analysis. For most serious shops, this would include reading all kinds of relevant disclosures.

Partial disclosure

Posted Oct 9, 2008 14:41 UTC (Thu) by tshow (subscriber, #6411) [Link]

Wasn't the new TCP DOS attack something to do with forged syncookies? I seem to recall one of the articles at the time suggesting this.

Partial disclosure

Posted Oct 9, 2008 23:30 UTC (Thu) by tialaramex (subscriber, #21167) [Link]

Not forged, but rather, it's the same trick used against you.

One part of the attack that has been partly revealed uses the same trick as SYN cookies. SYN cookies work by hiding some information in fields that TCP/IP promises will be faithfully returned by the other party. (Part of?) the new attack relies on the same trick, but this time used by the client rather than the server. The client sends SYN packets but, instead of storing the connection structures locally, hides the vital data in the SYN packet itself; when that data is returned, as the TCP specification requires, it can be used to create a valid ACK.

Thus the attacker need store no state to create a connection; their only overhead is sending two packets, while the victim system must store an entire connection structure (and, for a lot of trivial services, either fork a process or create a thread...) for every such connection. This quickly exhausts the resources of the victim, delivering a DOS.
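
In rough Python terms, the stateless bookkeeping looks something like the sketch below. This is an illustration only: the function names and the HMAC construction are illustrative assumptions, and it deliberately stops at deriving and checking the cookie rather than building or sending any packets.

    import hashlib
    import hmac

    # Demo-only secret; a real tool would generate a fresh random key.
    SECRET = b"demo-only-secret"

    def stateless_isn(src_ip, src_port, dst_ip, dst_port):
        # Derive a 32-bit initial sequence number from the connection
        # 4-tuple.  TCP obliges the peer to echo ISN+1 in the SYN-ACK's
        # acknowledgment field, so everything needed later can be
        # recomputed from the reply instead of kept in a table.
        msg = ("%s:%d>%s:%d" % (src_ip, src_port, dst_ip, dst_port)).encode()
        digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
        return int.from_bytes(digest[:4], "big")

    def synack_matches(src_ip, src_port, dst_ip, dst_port, ack_number):
        # A SYN-ACK belongs to one of "our" handshakes iff its ack field
        # equals ISN+1 (mod 2**32); if so, the final ACK (seq = ISN+1,
        # ack = peer_isn+1) can be emitted with no state ever stored.
        expected = (stateless_isn(src_ip, src_port, dst_ip, dst_port) + 1) % (1 << 32)
        return ack_number == expected

Any keyed hash of the 4-tuple would do; the only real constraint is that the value has to fit in the 32-bit sequence number field.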

Note that /unlike a SYN flood/ the attacker must reveal real addresses which they own in order to carry out such a "full connection" attack reliably. So it is really only useful if you can sacrifice the machines or addresses used in the attack (e.g. it's a server you previously compromised in another more sophisticated attack). Also it means that victims can use ordinary stateless firewall technology available on every platform to block the main effect of the attack once they identify the source.

In a sense this "attack" wasn't revealed because it isn't one attack but a collection of strategies for making a victim do disproportionate work on behalf of an unauthenticated connection. Traditionally it was expected that an attacker must expend as many resources on a TCP DOS as the victim, and thus can only win by having more resources (e.g. by having amassed a zombie network of compromised PCs). This suite of attacks apparently tips the balance in the attacker's favour, reducing the storage, CPU and network bandwidth overhead of maintaining an effective DOS.

Partial disclosure

Posted Oct 10, 2008 12:02 UTC (Fri) by copsewood (subscriber, #199) [Link]

"Because of the addition of source port randomization as the fix, it didn't take very long for other security researchers to come up with the vulnerability."

I'm not convinced Kaminsky's partial disclosure was a bad thing for maintainers of minority DNS products who were not pre-informed in the initial secret disclosure limited to the main DNS vendors. Once the details of the coordinated fix were released, the class of attack and the fixes needed became obvious, if not the specific vector. Knowledge of the class of attack should have made it possible to improve randomisation of source ports in minority DNS products. It's not as if Dan Bernstein hadn't foreseen this class of attack years before in connection with the design of DJBDNS anyway; that product was no more vulnerable before this disclosure than the mainstream patched DNS products were afterwards. It may not have taken very long for the specific attack to be rediscovered, but it should have been long enough to fix minority DNS products against the entire class of attack concerned, to the extent that DNS (as opposed to DNSSEC) as a protocol, and the products implementing it, can be fixed at all.

If Kaminsky had spilled the beans in public all at once, this would have given less time to patch DNS products against the entire class of attack. More likely, maintainers would have had to come up with faster and narrower fixes, which would have proved less durable than patches addressing the entire class.

When to release what information about a devastating new specific attack is a hard problem, but I don't think Kaminsky did such a bad job compared to how some researchers might have chosen to handle the same knowledge. This is based on the objective of making it possible to keep the existing Net going somehow, rather than the objective of punishing every admin for not knowing everything that all security researchers taken together know at any given time. Some people commenting on this kind of issue seem to presume the latter objective to be reasonable, but I don't.

Copyright © 2008, Eklektix, Inc.
Comments and public postings are copyrighted by their creators.
Linux is a registered trademark of Linus Torvalds