OATH: yesterday, today, and tomorrow
The Initiative For Open Authentication (OATH) is a security-vendor-based collaboration bent on developing a standardized "strong authentication" infrastructure using open standards. Not to be confused with the cross-site web authorization scheme OAuth, OATH has a broad set of security models it hopes to cover with a unified suite of protocols and APIs — collecting hardware-based, public-key infrastructure (PKI)-based, and one-time password (OTP)-based authentication into one framework.
The end goal is a noble one: a common framework that can use any of the three authentication systems on the client side, so that it can be used just as easily to connect a user to a cell phone network (which uses hardware-based authentication keyed off of the phone's SIM card), a corporate VPN (which uses PKI authentication via X.509 certificates), or a web application (which uses an OTP protocol to authenticate the user without transmitting a traditional password). Password-based logins are inherently vulnerable, OATH argues, and the hardware-token systems sold by vendors have no established standard. Thus, why not replace both, in a way that allows vendors to reuse some high-level APIs and software developers to build authentication-agnostic middleware?
OATH's rallying cry throughout its documentation is strong authentication but, interestingly enough, it does not offer a definition of that term. The group does not seem to mean "strong" in the sense of true multi-factor authentication (such as requiring both a hardware token and a password); rather it seems to encompass password-less authentication schemes built around either trusted hardware tokens or challenge-response protocols. Existing PKI systems appear to pass OATH's standards both for security and standardization.
The consortium has a white paper [PDF] on its web site that elaborates on how an organization might deploy different OATH-based systems. Overall, the architecture starts with a client user having "strong authentication" credentials of one form or another (smart card, SIM module, or software-based PKI certificate). The service that the user wishes to connect to could be a VPN, a corporate WiFi or GSM network, or a web application. In any case, the company setting up the service would use one of OATH's strong authentication algorithms for sign-in. The type of service determines the connection over which the authentication step is performed: VPNs would use IPSec, for example, while WiFi networks would use Extensible Authentication Protocol (EAP), and web applications would use SSL/TLS.
In addition to the already-existing network layers like TLS and IPSec, the examples in the white paper tend to rely on existing open source infrastructure to validate user accounts on the server side, such as RADIUS and LDAP. The puzzle pieces that do not yet exist are the standardized credentials, standardized OTP protocols, and application connectors required to hook the OATH authentication interface into network services — bits like Apache modules, PHP and Perl libraries, and VPN code.
From theory to practice
Since 2004, OATH has focused its energies on developing the missing pieces in this roadmap [PDF], and has attempted to do so in the open, building on open and royalty-free specifications. The first result of this work was HOTP, the Hash-based Message Authentication Code (HMAC) OTP algorithm, published in 2005 as informational RFC 4226. It was followed in 2008 by TOTP, the time-based OTP algorithm, which is still working its way through the IETF as a draft.
TOTP extends HOTP by replacing the latter's moving event counter with a time-based value. Essentially, HOTP is a cryptographic function of a shared (symmetric) key and an integer event counter, which the connecting client must keep in sync with the remote server in order to authenticate successfully. TOTP removes the need for client and server to stay in sync on the event counter by using a Unix timestamp instead; the algorithm allows the server to choose how far off an incoming timestamp may be and still be deemed acceptable, in order to tolerate clock drift.
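Both algorithms are small enough to sketch in a few lines of Python. The following is a minimal illustration of the idea (not a vetted implementation), assuming the RFC 4226 defaults of HMAC-SHA-1 and six output digits:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HOTP: HMAC-SHA-1 over the 8-byte big-endian event counter,
    # then the "dynamic truncation" step from RFC 4226.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # TOTP is HOTP with the event counter replaced by a time-step counter.
    return hotp(secret, int(time.time()) // step, digits)
```

The drift allowance mentioned above simply amounts to the server also accepting the totp() values computed for one or two adjacent time steps.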
In September of 2010, the most recent release was unveiled. Named OATH Challenge-Response Algorithms (OCRA), the new algorithm extends TOTP still further. First, it allows for the replacement of the counter or timestamp value with any arbitrary input parameter. The IETF draft describes the input parameter as "a structure that contains the concatenation of the various input data values" that the parties agree upon, and enumerates several acceptable values: event counters as used in HOTP, time signatures as used in TOTP, hashed PIN or password values, session identifiers, and general challenge/response questions (and their answers). The input parameter also incorporates a header indicating which data values are employed, as well as the cryptographic hash function to be used.
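Conceptually, OCRA is still an HMAC over a single byte string; what changes is how that string is assembled from the agreed-upon inputs. The sketch below only loosely mirrors the draft's encoding (the real specification fixes field lengths and padding, which are glossed over here); ocra_like_response() and its keyword parameters are invented names for this example:

```python
import hashlib
import hmac

def ocra_like_response(secret: bytes, suite: str, *, counter: bytes = b"",
                       question: bytes = b"", pin_hash: bytes = b"",
                       session: bytes = b"", timestamp: bytes = b"") -> bytes:
    # The suite string acts as the header, naming which data values are
    # present and which hash function to use; the chosen values follow it.
    data = (suite.encode("ascii") + b"\x00" + counter + question +
            pin_hash + session + timestamp)
    return hmac.new(secret, data, hashlib.sha1).digest()
```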
The second change is the addition of more verification modes. HOTP and TOTP have exactly one: a client attempts to connect, and the server authenticates the client by sending it a challenge and checking that the response is valid. But OCRA also allows the client to authenticate the identity of the server, so that both parties can be sure they know who they are talking to. This "mutual challenge-response" variation of the algorithm doesn't add anything new; it just allows the client to issue a challenge of its own. Thus the mutual authentication boils down to two separate challenge-response computations, one client-to-server and one server-to-client; in other words, the challenge issued by the client is not connected to the challenge issued by the server.
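Using the hypothetical ocra_like_response() helper from the sketch above, a mutual exchange might look like the following; note that the two answers share nothing but the key:

```python
import os

secret = b"0123456789abcdef0123"      # shared key, provisioned out of band
suite = "OCRA-1:HOTP-SHA1-6:QN08"     # an example suite string

client_challenge = os.urandom(8)      # the client challenges the server...
server_challenge = os.urandom(8)      # ...and the server challenges the client

# The server answers the client's challenge to prove its identity, and the
# client answers the server's challenge; neither answer depends on the other.
server_answer = ocra_like_response(secret, suite, question=client_challenge)
client_answer = ocra_like_response(secret, suite, question=server_challenge)
```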
Finally, OCRA also features a "plain signature mode," in which the server sends the client a "challenge" whose data payload the client need only sign and return. This mode does not depend on a shared secret key; any client can sign and return the response, so it is only useful for tracking purposes. But it can be combined with a regular challenge-response authentication, creating a "signature with server authentication mode."
Thus far, the HOTP/TOTP/OCRA work has been OATH's most visible development, but it is not the only product released. In 2009, OATH published the OATH Token Identifier Specification, which specifies a formatted alphanumeric identifier that can be used as a unique global identifier by all OATH-compliant products. The format breaks down hardware tokens, software tokens, and "embedded tokens" into separate classes. So far, the posted specification only covers hardware tokens, and consists of a 2-character manufacturer prefix, 2-character token type, and 8-character "manufacturer unique identifier."
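Parsing such an identifier is just fixed-width string slicing. A small sketch, with a made-up identifier for illustration (real prefixes are assigned to manufacturers):

```python
def split_token_id(token_id: str) -> dict:
    # Hardware token identifiers are twelve characters: a 2-character
    # manufacturer prefix, a 2-character token type, and an 8-character
    # manufacturer unique identifier.
    if len(token_id) != 12:
        raise ValueError("hardware token identifiers are 12 characters long")
    return {"manufacturer": token_id[:2],
            "token_type": token_id[2:4],
            "unique_id": token_id[4:12]}

print(split_token_id("UBHE12345678"))   # a made-up example identifier
```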
In 2010, OATH contributed to the IETF's Portable Symmetric Key Container (PSKC) draft specification, which defines an XML-based format for transferring symmetric encryption keys, and Dynamic Symmetric Key Provisioning Protocol (DSKPP), which describes a client-server method of initializing and installing symmetric keys. OATH also launched a certification program for vendors wishing to have their products certified for HOTP, TOTP, and OCRA compliance.
Criticism
Shortly after HOTP's launch, the SHA-1 hash function was discovered to be considerably less collision-resistant than previously thought, and the bar was lowered yet again in 2009. HOTP's HMAC codes used SHA-1, which led to some concern about the security of HOTP itself. However, HOTP does not rely on SHA-1's collision resistance, but rather on its strength as a one-way function: finding a collision would entail finding two inputs that produce the same hash value, but doing so would not enable an attacker to hijack or replay a valid session; to do that, the attacker would need to invert the hashed response and recover the shared secret. Still, TOTP and OCRA allow for other hash functions in addition to SHA-1.
A more fundamental challenge to OATH's relevance is the notion that "industry wide" standards for OTP are pointless. This was the charge leveled in 2005 by RSA's Burt Kaliski, who argued that while PKI systems depend on "one-to-many" security, OTP is always "one-to-one;" i.e., although many vendors may need to verify a digital signature, the challenge-response algorithm between a server and client does not get stronger or more reliable because other vendors implement the same algorithm.
Kaliski further criticized HOTP's inflexibility, both in its hash algorithm and its use of an event counter, which may have contributed to the improvements in those areas in TOTP and OCRA. Still, his initial complaint stands: the OATH OTP algorithms "standardize" something that does not benefit much (if at all) from standardization. One could make the same argument about the Token Identifier Specification: the specification does not make a strong case that a standardized ID string format makes life substantially easier than a randomly-generated string; the strength of the hardware token authentication system comes from the secure installation of the secret key, not the ID number printed on the back. Kaliski's criticism is at least true from the consumer's point of view; the user is not made more secure through standardization. Cryptographic token manufacturers, on the other hand, might stand to benefit from an industry-wide standard.
HOTP, TOTP, and OCRA are all very simple, but (judging from project data at Freshmeat and Ohloh) fewer than a dozen open source products implement any of them, which puts them about on par with S/KEY and other, older OTP standards. That is hardly widespread adoption for a consortium with more than fifty paying members. OATH does not even provide reference implementations; it only publishes specifications. Then again, the OTP component is just one piece of the overall puzzle OATH has set out to define and standardize; still to come are protocol handlers, validation handlers, credential storage and auditing, and more. Perhaps the full architecture will look more enticing to open source developers.
Posted Dec 16, 2010 20:20 UTC (Thu)
by iabervon (subscriber, #722)
There's also the aspect that testing interoperability is a good way of catching implementation mistakes; the vendor may have worked out a secure method, but have bugs in both the token and server that make the implementation not follow the intended method. If there's another implementation of each, chances are that someone will notice if either doesn't work. (I could imagine a buggy system truncating at a hash input block size instead of padding and accidentally discarding all but a few bits of shared secret and using the same code on both sides and never noticing that there are only a few passwords to try at any given time.)
Posted Dec 16, 2010 20:43 UTC (Thu)
by Comet (subscriber, #11646)
See the Google Authenticator:
http://code.google.com/p/google-authenticator/
which has an Apache License 2.0.
