
Security

What to do about DNS?

April 11, 2007

This article was contributed by Jake Edge.

The Domain Name System (DNS) has been in the news a bit recently, mostly because of a ham-handed attempt by the US Department of Homeland Security (DHS) to control the master signing key for the DNS Security Extensions (DNSSEC) root zone. While the impact of that is still being debated, it certainly does not help alleviate the fears that other countries have regarding US control of the Internet. Meanwhile, the DHS is pushing adoption of DNSSEC, which further fans the flames, even while there are serious questions about the protocol and what, if any, real problems it solves. On another front, Bugtraq readers will have noticed a call to action regarding DNS issues from security researcher Gadi Evron. All of this seems like a good reason to take a look at DNS and DNSSEC and to try to shed some light on the state of Internet name lookups.

DNS is one of the most commonly used services on the Internet; every time one puts 'lwn.net' into a browser, it is used to turn that name into an IP address. In a naive implementation, the browser causes the machine to talk to one of the 13 root servers (k.root-servers.net for example), requesting information about a nameserver for 'net'; it will get a response listing the 13 servers that handle requests for the 'net' top-level domain (D.GTLD-SERVERS.NET for example). As part of the answer, it also receives the IP address for D.GTLD-SERVERS.NET (otherwise it would have to query for that IP address, which could lead to an infinite loop) and it uses that address to query for a nameserver for 'lwn.net'. The response is a set of hosts, and their IP addresses, that are the nameservers for the 'lwn.net' domain; these in turn can be queried to get the IP address of the host of interest. After all that, the browser can connect to the IP address on port 80 and commence with the HTTP request.
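
To make the process concrete, here is a rough Python sketch of such a naive iterative resolver, written against the dnspython library; the hard-coded root server address is assumed to be k.root-servers.net, and error handling, retries, and CNAME chasing are all omitted:

    import dns.message
    import dns.query
    import dns.rdatatype

    def iterate(qname, root_ip="193.0.14.129"):    # assumed address of k.root-servers.net
        server = root_ip
        while True:
            response = dns.query.udp(dns.message.make_query(qname, "A"), server, timeout=5)
            if response.answer:                     # the server gave us the A records we wanted
                return [rrset.to_text() for rrset in response.answer]
            # Otherwise this is a referral: pick a glue address from the
            # additional section and ask the next server down the tree.
            glue = [rdata for rrset in response.additional
                          for rdata in rrset if rdata.rdtype == dns.rdatatype.A]
            if not glue:
                raise RuntimeError("referral without glue; a separate lookup would be needed")
            server = glue[0].address

    print(iterate("lwn.net"))

In practice, of course, nobody writes this loop by hand; the system resolver hands the work off to a caching nameserver, as described next.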

In most cases, all of that traffic does not get generated each time a hostname needs to be resolved, because caches store the information on intermediate hosts. Hosts are typically configured to talk to a caching nameserver when they make DNS requests. The caching nameservers store name-to-IP mappings for as long as the time-to-live (TTL) value allows. A TTL value is the amount of time, in seconds, for which the returned information is valid; it is chosen by the domain owner as a tradeoff between quick responses to changes and reduced DNS traffic; typical values range from two hours to two days. When a caching nameserver finds a mapping in its cache with time still left in the TTL, it can just provide that information to a requester without making any queries upstream.
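
The caching behavior itself is simple enough to sketch. The following toy Python cache (the address and TTL value are made up for the example) keeps an entry only until its TTL runs out, which is essentially all a caching nameserver does with each record set it has seen:

    import time

    class DnsCache:
        def __init__(self):
            self._entries = {}                     # name -> (address, expiration time)

        def put(self, name, address, ttl):
            self._entries[name] = (address, time.time() + ttl)

        def get(self, name):
            entry = self._entries.get(name)
            if entry is None:
                return None
            address, expires_at = entry
            if time.time() >= expires_at:          # TTL has elapsed; the entry is stale
                del self._entries[name]
                return None
            return address

    cache = DnsCache()
    cache.put("lwn.net", "192.0.2.10", ttl=7200)   # hypothetical address, two-hour TTL
    print(cache.get("lwn.net"))                    # served from cache, no upstream query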

DNS has worked, by and large, for a long time, but it is not without its problems. Anyone who can intercept DNS queries and/or reply in a way that looks like it came from the queried server can control the name resolution process, providing a number of opportunities for phishing and other kinds of malfeasance. Because the information is typically cached, one redirection with an enormous TTL can have a large impact in what is known as a DNS cache poisoning attack. A poisoned cache sufficiently high in a hierarchy of caching DNS servers can affect large swaths of the Internet as the redirection can trickle down to each of the nameservers below it.

It is against this backdrop of cache poisoning and exploitable flaws in some DNS implementations (Wikipedia has some good examples) that calls to implement DNSSEC have increased. By using public key cryptography to sign responses, DNSSEC removes the possibility of spoofing the nameserver for a domain through a forged DNS reply. DNSSEC replies are signed using the private key of the domain and can then be verified using the corresponding public key. If the response does not verify, it does not contain valid information for that domain and should be discarded. At first blush, this seems like a good thing that will eliminate some existing problems; as with many things, though, the devil is in the details.
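
What a validating resolver does with those signatures can be sketched with dnspython as well. The zone name and server address below are hypothetical, and the DNSKEY set is simply trusted rather than verified up the chain, which a real validator would of course have to do:

    import dns.dnssec
    import dns.message
    import dns.name
    import dns.query
    import dns.rdataclass
    import dns.rdatatype

    zone = dns.name.from_text("example.org.")          # hypothetical signed zone
    host = dns.name.from_text("www.example.org.")
    server = "192.0.2.53"                              # hypothetical authoritative server

    # Fetch the zone's public keys and the signed A records (with their RRSIG).
    keys = dns.query.udp(dns.message.make_query(zone, dns.rdatatype.DNSKEY,
                                                want_dnssec=True), server)
    answer = dns.query.udp(dns.message.make_query(host, dns.rdatatype.A,
                                                  want_dnssec=True), server)

    dnskeys = keys.find_rrset(keys.answer, zone,
                              dns.rdataclass.IN, dns.rdatatype.DNSKEY)
    a_rrset = answer.find_rrset(answer.answer, host,
                                dns.rdataclass.IN, dns.rdatatype.A)
    rrsig = answer.find_rrset(answer.answer, host,
                              dns.rdataclass.IN, dns.rdatatype.RRSIG,
                              dns.rdatatype.A)         # the signature covering the A records

    try:
        dns.dnssec.validate(a_rrset, rrsig, {zone: dnskeys})
        print("signature verifies; the answer can be used")
    except dns.dnssec.ValidationFailure:
        print("bad signature; the response should be discarded")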

In order to verify any signed responses, one must obtain the public key from a trusted source; invalid public keys just lead to the same forgery issues that are present in the current system. The public keys will have to be signed in a hierarchy that corresponds to the domain name hierarchy, with the top-level master signing key at the top of the heap. Its public portion will be distributed with DNSSEC-enabled software and its private part will sign the keys for the root zone. The root zone keys will sign the keys for the top-level domains, which will in turn sign keys for each of the domains below them. By verifying each step before caching the information, nameservers can ensure they have correct DNS mappings.

There are some inherent problems in DNSSEC, and perhaps the highest-profile issue is the exposure of all of the zone data. Because DNSSEC is tasked with providing an authoritative 'not found' message for hosts without an entry, it enables enumeration of all hosts in a zone. The 'not found' messages need to be signed, but it is deemed important not to have the private keys online (in case of a security breach); the response also cannot just be a single generic signed 'not found' message, because it could be replayed, in effect knocking a valid host out of the DNS. The solution involves ranges of invalid hostnames, each with its own signed 'not found' message. Through a series of queries, an attacker can collect all of the 'not found' ranges, which leaves the existing hostnames obvious in the gaps. This is very different from the current DNS, where one can only ask for hosts by name and essentially get a yes or no answer.
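
The enumeration itself requires nothing more sophisticated than asking for names that do not exist and reading the signed ranges out of the replies. A rough dnspython sketch, against a hypothetical NSEC-signed zone, looks like the following; a real zone-walking tool would simply keep asking for names just past each range until it wrapped around to the start of the zone:

    import dns.message
    import dns.query
    import dns.rdatatype

    server = "192.0.2.53"                        # hypothetical server for an NSEC-signed zone
    query = dns.message.make_query("does-not-exist.example.org.",
                                   dns.rdatatype.A, want_dnssec=True)
    response = dns.query.udp(query, server)

    for rrset in response.authority:
        if rrset.rdtype == dns.rdatatype.NSEC:
            for nsec in rrset:
                # The record says: nothing exists between these two real names.
                print("gap between", rrset.name, "and", nsec.next)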

This information leakage was at first considered to be a non-issue by the IETF group working on DNSSEC. They have since been convinced that this problem would prohibit adoption in some jurisdictions and would severely limit some of the more interesting uses for DNS after it becomes secured. The latest proposals provide for a 'not found' message that contains a canned signed portion along with a cryptographic hash of the hostname requested and recipients would need to verify both the signature and that the hash corresponds to the request that they made before accepting the response.
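
A rough sketch of that hashed check, along the lines of the NSEC3 proposal, appears below. The salt and iteration count are made-up values, and standard base32 stands in for the base32hex alphabet the actual records use; the point is only that the resolver hashes the name it asked about and makes sure the signed 'not found' response really covers that hash:

    import base64
    import hashlib

    def hashed_owner(name, salt=b"\xab\xcd", iterations=10):
        # Canonical wire form of the name: length-prefixed, lowercase labels,
        # terminated by the root label.
        wire = b"".join(bytes([len(label)]) + label.encode("ascii")
                        for label in name.lower().rstrip(".").split(".")) + b"\x00"
        digest = hashlib.sha1(wire + salt).digest()
        for _ in range(iterations):                # extra iterations slow down dictionary attacks
            digest = hashlib.sha1(digest + salt).digest()
        return base64.b32encode(digest).decode("ascii")

    # The resolver would compare this value against the hashed ranges in the reply.
    print(hashed_owner("nosuchhost.example.org."))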

There are also legitimate questions about why DNS needs to be secured. Even if you are certain you know the right address to use for a particular domain, you are not guaranteed that a connection made to that IP actually gets to your intended destination. In order to ensure that, you must have another layer of encryption such as HTTPS or ssh using verified keys. It also does not really help against the vast majority of phishing scams as it does not assist users in recognizing that 'thisistotallynotpaypal.com' is not in any way the same as 'paypal.com' even though they end the same way.

There are some interesting applications for secured services like DNSSEC, but critics argue that those applications should be implemented separately from DNS. There is no need to risk breaking the currently working DNS system by adding additional complexity for little or no gain. If putting DKIM keys into a nameserver-like structure is desirable, and many would argue that it is, create a new system, perhaps based on DNS/DNSSEC, that implements it. In the meantime, they contend, we should leave DNS alone.

Given these questions and a bit of concern whenever any government - but particularly the US government - tries to muscle in on Internet governance, it should come as no surprise that there is a bit of an uproar regarding the DHS key control attempt. It is not completely clear why the DHS believes it must control the master signing key; the theories range from the bland through the clueless to the nefarious. It is possible that DHS believes it is the only entity that can be trusted with the keys, a position which tends to cause muttering about US arrogance. Another possibility is that DHS does not really understand what the keys are and what can be done with them. The paranoid are concerned that the keys might be used to set up a parallel set of root servers that remake the Internet into something more in line with the Bush administration's vision of what the Internet should look like. By co-opting or otherwise manipulating Internet routing, the DHS, some fear, could stage a complete takeover via this alternate, sanitized hierarchy. No matter what the reason, it certainly stirs up people who feel that Internet governance should be handled by international organizations and not by the US government.

The problems that Gadi Evron brought to the attention of Bugtraq readers are independent of the DNS vs. DNSSEC debate, as neither addresses the issues that he is trying to solve. A great deal of Internet malware - botnets, spyware, viruses, phishing sites, and so on - relies on name resolution in order to do its work. Such operations typically use nameservers and IP mappings with very short TTL values, which allows them to be highly mobile, rapidly changing nameservers and IP addresses as they get detected and shut down in the whack-a-mole game that gets played continuously on the Internet. The white hats simply cannot move fast enough, even when they do not run up against slow-moving or hostile ISP administrators.
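
The short TTLs are at least easy to spot. A minimal dnspython check along the following lines could be scripted into monitoring tools; the threshold and domain are arbitrary examples, and a short TTL is only a weak signal on its own, since plenty of legitimate sites use them too:

    import dns.resolver

    def suspiciously_short_ttl(domain, threshold=300):
        # dnspython 2.x; older versions spell this dns.resolver.query()
        answer = dns.resolver.resolve(domain, "A")
        return answer.rrset.ttl < threshold, answer.rrset.ttl

    short, ttl = suspiciously_short_ttl("example.com")
    print("TTL is", ttl, "seconds;", "suspiciously short" if short else "nothing unusual")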

The easiest place to handle this kind of domain is with its registrar, which can completely shut it down by pointing its nameserver records at nonexistent hosts. This ability to essentially remove a domain from existence can be abused (as GoDaddy proved with seclists.org earlier this year), and there need to be strict policies and procedures in place to govern how that power is used. In addition, there are so-called black hat registrars that do not care about, and perhaps encourage, malicious behavior from some of their registrants. Evron was reporting on a message he sent to the registrar operations mailing list highlighting the problem and looking for solutions. His message to Bugtraq reported on the progress and asked for further ideas.

DNS is a critical piece of Internet infrastructure and anything that impacts it will be felt by a lot of people; anything that breaks it will break the net. All of the services that we use rely, at least to a limited extent, on DNS and any serious outage would make the Internet completely unusable. Because of that, a conservative approach is required. Threats can come from both criminals and governments (though some would claim that is redundant) and we need to protect the net from both. Perhaps DNSSEC tips things too far one way and another approach is needed. It will be interesting to see how it plays out.


New vulnerabilities

ipsec-tools: denial of service

Package(s): ipsec-tools    CVE #(s): CVE-2007-1841
Created: April 10, 2007    Updated: August 28, 2007
Description: A flaw was discovered in the IPSec key exchange server "racoon". Remote attackers could send a specially crafted packet and disrupt established IPSec tunnels, leading to a denial of service.
Alerts:
Fedora FEDORA-2007-665 ipsec-tools 2007-08-27
Debian DSA-1299-1 ipsec-tools 2007-06-07
Red Hat RHSA-2007:0342-01 ipsec-tools 2007-05-17
Gentoo 200705-09 ipsec-tools 2007-05-08
SuSE SUSE-SR:2007:008 ipsec-tools, inkscape, rarpd, ImageMagick/GraphicsMagick, mod_perl, dovecot 2007-04-27
Mandriva MDKSA-2007:084 ipsec-tools 2007-04-16
Ubuntu USN-450-1 ipsec-tools 2007-04-09


man-db: buffer overflow

Package(s): man-db    CVE #(s): CVE-2006-4250
Created: April 6, 2007    Updated: April 11, 2007
Description: A buffer overflow has been discovered in the man command that could allow an attacker to execute code as the man user by providing specially crafted arguments to the -H flag. This is likely to be an issue only on machines with the man and mandb programs installed setuid.
Alerts:
Debian DSA-1278-1 man-db 2007-04-06


Page editor: Jonathan Corbet