Transport-level encryption with Tcpcrypt
It has been said that the US National Security Agency (NSA) blocked the implementation of encryption in the TCP/IP protocol for the original ARPANET, because it wanted to be able to listen in on the traffic that crossed that early precursor to the internet. Since that time, we have been relegated to always sending clear-text packets via TCP/IP. Higher-level application protocols (ssh, HTTPS, etc.) have enabled encryption for some traffic, but the vast majority of internet communication is still in the clear. The Tcpcrypt project is an attempt to change that, transparently, so that two conforming nodes can encrypt the data portion of every packet they exchange.
One of the key benefits that Tcpcrypt offers is transparency. That means that if both endpoints of a connection support it, the connection will be encrypted, but if one doesn't support Tcpcrypt, the other will gracefully fall back to standard clear-text TCP/IP. No applications are required to change, and no "new" protocols are required (beyond Tcpcrypt itself, of course) as applications will send and receive data just as they do today. But there is an additional benefit available for those applications that are willing to change: strong authentication.
Tcpcrypt has the concept of a "session ID" that is generated on both sides as part of the key exchange. This ID can be used in conjunction with a shared secret, like a password, to authenticate both ends of the communication. Because the client and server can exchange cryptographic hash values derived from the shared secret and session ID, they can be assured that each is talking over an encrypted channel to an endpoint that has the key (password). A "man in the middle" would not have access to the password and therefore can't spoof the exchange.
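To make that concrete, here is a minimal sketch in Python, not taken from the Tcpcrypt specification, of how two endpoints that already share a password could prove knowledge of it once both know the session ID. The session ID is hard-coded below because the way it is read out of a connection depends on the particular implementation.

    import hmac
    import hashlib

    def auth_tag(password: bytes, session_id: bytes, role: bytes) -> bytes:
        # The proof is bound to the shared secret *and* to this particular
        # encrypted session; the role string keeps the two directions
        # distinct, so one side's tag cannot simply be echoed back to it.
        return hmac.new(password, role + session_id, hashlib.sha256).digest()

    # Both ends derive the same session ID during the Tcpcrypt key exchange;
    # a fixed value stands in for it here.
    session_id = bytes.fromhex("00112233445566778899aabbccddeeff")
    password = b"correct horse battery staple"

    client_proof = auth_tag(password, session_id, b"client")

    # The server recomputes the expected value and compares in constant time.
    assert hmac.compare_digest(client_proof,
                               auth_tag(password, session_id, b"client"))

Because the tag is bound to the session ID, a man in the middle who sets up two separate Tcpcrypt sessions ends up with two different session IDs, so a tag captured on one leg will not verify on the other.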
Even without any application changes for stronger authentication, Tcpcrypt would defend against passive man-in-the-middle attacks, like eavesdropping. An active attacker could still spoof responses claiming that Tcpcrypt is not supported, even when the other endpoint does support it, or could simply relay the encrypted traffic. That would still be better than the usual situation today, where a passive attacker can gather an enormous amount of clear-text traffic, especially from unencrypted or weakly encrypted wireless networks.
There is an Internet Engineering Task Force (IETF) draft available that describes how Tcpcrypt works by using two new TCP options. Those two options, CRYPT and MAC, will not be recognized by endpoints without Tcpcrypt support, and are therefore harmless. The CRYPT option is used to negotiate the use of Tcpcrypt and to exchange encryption keys, while the MAC option carries a hash value that can be used to verify the integrity of the packet data.
In addition to the IETF draft, the project has produced a paper, The case for ubiquitous transport-level encryption [PDF], that was presented at the 2010 USENIX Security conference. It gives a somewhat higher-level look at how Tcpcrypt integrates with TCP/IP, while providing a lot more information on the cryptographic and authentication algorithms. The slides [PDF] from the presentation are also instructive.
One of the basic premises that underlies Tcpcrypt is that computers have gotten "fast enough" to handle encrypting all internet traffic. Doing so at the transport level, rather than in application protocols (e.g. ssh), can make it transparent to applications. In addition, Tcpcrypt can work through NAT devices, which is something that another lower-layer encryption protocol, IPSec, cannot handle.
Because Tcpcrypt keys are short-lived, non-persistent public/private key pairs, it does not require the public key infrastructure (PKI) that other solutions, like HTTPS, need. That means that endpoints can communicate without getting certificates signed by centralized authorities. Of course the existing PKI certificates will work just fine on top of Tcpcrypt.
While computers may be "fast enough" to handle encryption on every packet, there is still the problem of asymmetry. Servers typically handle much more traffic than clients, so Tcpcrypt is designed to put the most expensive parts of the key negotiation and encryption onto the client side. The claim is that Tcpcrypt can achieve speeds of up to 25 times those of HTTPS (i.e. SSL/TLS). One wonders whether mobile devices are "fast enough", but if that is a problem at all, it probably won't be one for much longer.
Overall, Tcpcrypt is an intriguing idea. It certainly isn't a panacea for all of today's network ills, but that is no surprise. Unlike other proposals, Tcpcrypt can be incrementally deployed without requiring that we, somehow, restart the internet. Since it won't break existing devices, it can be developed and tested within the framework of the existing net. If for no other reason, that should give Tcpcrypt a leg up on other potential solutions.
Index entries for this article
Security: Encryption/Network
Posted Aug 26, 2010 3:33 UTC (Thu)
by djao (guest, #4263)
[Link] (19 responses)
I'm glad to see that tcpcrypt adopts the sensible policy of encrypting by default, even when authentication is not available. Many other protocols, such as SSL, refuse to work without authentication, or at best provide only second-class support for this use case, and this horrible design decision is in my opinion (yes, I am a cryptographer) the single worst mistake ever made in the history of network security. Firefox, in particular, is one of the worst offenders: an unauthenticated encrypted connection is met with scarier warnings and error messages than radioactive waste, whereas a totally unencrypted (and also unauthenticated) connection is allowed through with no warnings whatsoever! The Firefox example is a dramatic illustration of the absurdity that arises when programmers without cryptography expertise try to write security software. It would be supremely satisfying to see this vexing problem fixed at a lower level through ubiquitous deployment of tcpcrypt.
(Before anyone chimes in with the usual advice to "send a patch": it's been tried already, and it didn't work. For better or for worse, the Firefox developers are convinced that they are right, and nothing will sway them from that view.)
Posted Aug 26, 2010 4:35 UTC (Thu)
by blitzkrieg3 (guest, #57873)
[Link] (9 responses)
Posted Aug 26, 2010 4:51 UTC (Thu)
by djao (guest, #4263)
[Link] (5 responses)
Your web banking example is a bad one. Firefox already allows unencrypted unauthenticated connections without any scary warnings, even if the user is doing web banking. Currently, an attacker can already launch a phishing attack using no SSL/TLS at all. Unless the user notices that the lock icon is absent, the attack will work often enough to be useful.
I propose allowing encrypted unauthenticated connections, with no warnings, and no lock icon. This does not make attacks any easier than they are now. Every attack that an attacker can perform using encrypted unauthenticated connections can also be performed using unencrypted unauthenticated connections.
Posted Aug 26, 2010 14:39 UTC (Thu)
by foom (subscriber, #14868)
[Link] (4 responses)
Yes it does. Let's say you have https://mybank.com bookmarked or memorized. You go to that url expecting it to be secure. That has always been the case up till now [modulo the questionable trustworthiness of the 5000 multinational certification authorities your browser trusts].
With your proposal, I would have to check on every connection to see if there's a "lock" icon for that site, because https now just means "please encrypt" not "please authenticate". That is definitely a loosening of security, and will make MiTM attacks possible where they were not before. Nobody is gonna go for that...
For your proposal to actually work, you need to do the opposite: transparently *upgrade* http:// to be anonymously-encrypted when possible. That's a great idea. But you've gotta leave https:// alone.
Posted Aug 26, 2010 14:50 UTC (Thu)
by djao (guest, #4263)
[Link] (2 responses)
Posted Aug 26, 2010 16:42 UTC (Thu)
by gmaxwell (guest, #30048)
[Link]
The authentication-state caching thing you suggest is one possible solution, but it's somewhat fragile and still leaves a window of exposure. See https://secure.wikimedia.org/wikipedia/en/wiki/Strict_Tra... for more information on the initiatives in this area.
Posted Aug 26, 2010 18:22 UTC (Thu)
by Simetrical (guest, #53439)
[Link]
This is basically exactly what STS (formerly known as ForceTLS) does. For added effectiveness, browsers will likely precache STS headers for many major sites. Once this is done, there will be no reason to present scary warnings anymore for self-signed certs. Firefox has just implemented this: it's RESOLVED FIXED as of two days ago, so I guess it will be in the next Firefox 4 beta.
I still agree that the current behavior is way over the top. As this paper observes, a certificate error these days is a guarantee that the site is not malicious, since only a complete idiot of an attacker would try pulling off an attack via a page that raised a giant error . . .
Posted Aug 27, 2010 10:12 UTC (Fri)
by epa (subscriber, #39769)
[Link]
95% of users have no idea what https means or even that there is a difference between http and https, so it's not a big issue, but I take your point that any change shouldn't downgrade the security currently offered by https bookmarks.
Posted Aug 29, 2010 16:37 UTC (Sun)
by Tet (guest, #5433)
[Link] (2 responses)
Utter nonsense. Yes, that's supposed to be how it works, but in the real world, it's simply not true. The obvious example is 3-D Secure, as used by banks here in the UK (and elsewhere?). You go to a web site, fill your cart and go to the checkout. You enter your credit card details, because the lock icon is present and you're confident that it's not a scam site. Then you're sent to a secondary authentication page. The lock icon is still showing everything's good. But what you're seeing is actually an iframe pointing to an entirely different site (in my case, hopefully it's LloydsTSB ClickSafe). But the padlock icon has nothing to do with this site, and thus offers no guarantees that the connection is secure, or that the page is being served by an entity that you trust.
As a technically adept user, I can (and do) explicitly check the certificate of the iframe. But I'm in a very, very small minority. It's trivially easy for a phishing site to show a valid padlock throughout the entire transaction.
Posted Aug 29, 2010 17:52 UTC (Sun)
by foom (subscriber, #14868)
[Link]
The padlock is shown on the outermost site. It is up to that site to ensure the security of its own website against XSS, against hacking of its servers, and against using insecure content inappropriately. It's their responsibility, not yours, to make sure they use secure iframes not insecure ones. And your browser checks the certificate to make sure that it *actually* belongs to the site that your bank trusted. So no, you don't need to verify every iframe individually.
Okay, so it's not literally true that "the only other party to view your communications was the web site", it's the web site and other web sites that the web site trusts.
> It's trivially easy for a phishing site to show a valid padlock throughout the entire transaction.
Of course, but that has nothing to do with the rest of your complaint. In the case of a phishing site, "the web site" that the user is visiting, and which is protected, is the phishing site.
Posted Sep 2, 2010 8:47 UTC (Thu)
by eduperez (guest, #11232)
[Link]
Posted Aug 26, 2010 15:39 UTC (Thu)
by zooko (guest, #2589)
[Link] (7 responses)
Now hold on there, mister. The PKI paradigm in which public keys are supposed to be vetted by a centralized trusted third party can be blamed squarely on the cryptographers who invented public key encryption in the first place.
The reason Mozilla and every other user-facing app has this stupid design is a direct consequence of them trusting in cryptographers to give them good advice about secure distributed systems design.
(Now granted, we all should have known at the start that cryptographers are the wrong people to go to for secure distributed systems design.)
Anyway, I can't hold silent while you reverse the history and say that application hackers like the Netscape engineers are the ones to blame when they should have listened to cryptographers. That's backward! They did listen to cryptographers, and that's how we got here!
Since then a lot of distributed systems hackers (myself included) have pushed alternative models instead of the PKI model, and more recently (*after* we distributed systems hackers made significant progress) cryptographers like Prof. Boneh have started working on it too.
Posted Aug 26, 2010 15:49 UTC (Thu)
by Trelane (subscriber, #56877)
[Link] (4 responses)
Posted Sep 4, 2010 4:52 UTC (Sat)
by zooko (guest, #2589)
[Link] (3 responses)
ftp://cag.lcs.mit.edu/pub/dm/papers/mazieres:thesis.ps.gz
ssh's model which Peter Gutmann calls the "baby-duck" model or "key continuity"
Web of Trust by Phil Zimmermann
The FreeS/WAN project by John Gilmore, Hugh Daniel et al., known as "Opportunistic Encryption".
The Capability Access Control model:
original:
http://www.cs.washington.edu/homes/levy/capabook/Chapter3...
modern synthesis:
http://erights.org/talks/thesis/index.html
Zooko's Triangle and Pet Names:
http://www.skyhunter.com/marcs/petnames/IntroPetNames.html
ZRTP:
http://en.wikipedia.org/wiki/ZRTP
Tahoe-LAFS:
(Those last three are self-citations.)
The overall theme here is that the good ideas about robust decentralized security came originally from systems researchers and hackers, not from cryptographers. Cryptographers traditionally focused on elegant mathematical models and (with almost no explicit justification) they settled on the globe-spanning, centralized, hierarchical security model that we all know and love today as "PKI".
Posted Sep 6, 2010 3:20 UTC (Mon)
by zooko (guest, #2589)
[Link] (2 responses)
Carl Ellison. Establishing Identity Without Certification Authorities. In Proc. Sixth USENIX Security Symposium, pages 67-76, Berkeley, 1996. USENIX.
Again, this is a fellow who is basically a systems researcher, not a cryptographer as such (he has no publications in crypto theory to my knowledge), and he was publishing good ideas along these lines back in '96.
Oh, and of course Ron Rivest was doing a very similar thing in '96: http://people.csail.mit.edu/rivest/sdsi10.html
So there's the first example I can come up with of a bona fide cryptographer giving us something more robust and decentralized than the PKI model.
Posted Sep 6, 2010 3:26 UTC (Mon)
by zooko (guest, #2589)
[Link] (1 responses)
Matt Blaze, Joan Feigenbaum, and Jack Lacy. Decentralized trust management. In Proceedings 1996 IEEE Symposium on Security and Privacy, page (to appear), May 1996.
Also real cryptographers.
But I should emphasize that while SDSI and to a lesser extent PolicyMaker were influential, these were exceptions to the centralized hierarchical PKI model that dominated cryptography, and they were too late. By 1996 the damage had already been done when Netscape engineers baked the PKI model into their socket encryption protocol, SSL.
Posted Sep 6, 2010 3:27 UTC (Mon)
by zooko (guest, #2589)
[Link]
Okay I'm definitely going to stop following-up to myself now. :-)
Posted Aug 26, 2010 15:56 UTC (Thu)
by djao (guest, #4263)
[Link]
But Firefox was developed well after the rise of SSH, and the particular design decision that I am criticizing, namely the decision to change the warning dialogs for unauthenticated encrypted connections from one mild warning to three consecutive very big scary warnings, was made in Firefox 2.0, released in 2006. The Netscape engineers had valid excuses for their mistakes. The Firefox engineers do not.
Posted Aug 29, 2010 1:54 UTC (Sun)
by zooko (guest, #2589)
[Link]
Posted Aug 26, 2010 17:58 UTC (Thu)
by emk (subscriber, #1128)
[Link]
As it turns out, though, tcpcrypt does two very useful things:
1) tcpcrypt provides extremely low-overhead session key creation, more than an order of magnitude faster than SSL.
2) tcpcrypt provides non-secret, unique session identifiers that allow you to build authentication primitives at the application level. Among other things, you can use RSA to sign the session ID (turning tcpcrypt into a much faster SSL replacement) or perform mutual authentication using a client-only password and a corresponding hash on the server.
Essentially, tcpcrypt separates encryption from authentication, and provides the building blocks to do both well.
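As a rough illustration of the first suggestion in point 2, and not something taken from the tcpcrypt code, the sketch below has a server sign the session ID with an RSA key whose public half the client already trusts; it uses the third-party Python "cryptography" package and hard-codes a session ID, since how the ID is obtained from the socket is implementation-specific.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    # Stand-in for a key pair whose public half was distributed out of band.
    server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Stand-in for the non-secret session ID both ends computed.
    session_id = bytes.fromhex("00112233445566778899aabbccddeeff")

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Server: sign the session ID and send the signature to the client.
    signature = server_key.sign(session_id, pss, hashes.SHA256())

    # Client: verify with the known public key; this raises InvalidSignature
    # if the signer is not the expected server.
    server_key.public_key().verify(signature, session_id, pss, hashes.SHA256())

Since the session ID is unique to this connection and known to both sides, a valid signature over it shows that the holder of the private key is a live endpoint of this particular encrypted session rather than a replayed transcript.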
Even though tcpcrypt looks like junk at first glance, it's a _very_ slick protocol. But the authors really need to revamp their home page so that they stop getting written off as cranks who don't understand why "fail open" is a bad security policy.
Posted Aug 26, 2010 3:58 UTC (Thu)
by zooko (guest, #2589)
[Link] (7 responses)
Citation needed! I've never heard that story, and I've read much of what has been written on the history of modern encryption.
According to Paul Lambert and Howard Weiss [1], NSA actually sponsored development of encryption for ARPANET.
http://www.toad.com/gnu/netcrypt.html
N.B. NSA certainly tried to prevent widespread encryption several times during the 1990's, so I wouldn't be surprised if they did block encryption in TCP/IP, but unless you have some evidence that they did, or at least you can say who told you this rumor, then why publish it?
Posted Aug 26, 2010 6:13 UTC (Thu)
by djao (guest, #4263)
[Link] (6 responses)
Posted Aug 26, 2010 13:37 UTC (Thu)
by csigler (subscriber, #1224)
[Link] (4 responses)
You should have ended your comment with the above sentence. I'm not saying the NSA doesn't want to read all traffic on the Internet. However, you have _no_ proof for your claim. It is based on sheer speculation and approaches tinfoil hat-worthiness.
Clemmitt
Posted Aug 26, 2010 14:23 UTC (Thu)
by djao (guest, #4263)
[Link] (2 responses)
What I am claiming is that their actions of 1970 are largely irrelevant. The NSA unquestionably did block encryption software in the 1990s, and in doing so heavily damaged the development of the internet. Against that backdrop, the events of the 1970s, whether positive or negative, have no more significance than a rounding error. If the sentence you point out from the article is indeed an error, it is an extremely minor one. The major thrust of the claim (that the NSA held back public use of encryption software) is correct, even if the timeline is off.
Posted Aug 26, 2010 16:31 UTC (Thu)
by zooko (guest, #2589)
[Link] (1 responses)
If it doesn't matter whether the rumor is true or false, then one can omit it just as well as include it.
One reason that I object to printing unsubstantiated rumors is that it reduces the credibility and impact of truths. NSA did indeed block distribution of crypto in the 1990's through means both legal (export regs, Clipper chip) and shady (pressuring Netscape and Cisco to cripple security products), and in fact they were still doing it as recently as 2007 when they pressured Sun (without any legal justification that I can see) to omit the crypto accelerator from the GPL'ed source code of the UltraSparc T2.
These things matter to me! I don't want people to be complacent or ignorant of a powerful, shadowy, ill-regulated organization interfering with freedom of speech, freedom of commerce, and democracy! Accusing them of things without evidence only serves to inure people to their real offences.
Posted Aug 26, 2010 16:45 UTC (Thu)
by jake (editor, #205)
[Link]
I read it (somewhere) within the last week or so ... as I was writing the article I did some Googling and even poking through my (voluminous) browser history to see if I could find it. The fact that I didn't should probably have made me shy away from saying it.
"It has been said ..." was basically a cop-out ... my apologies ...
If I do come up with a reference I'll post it here.
jake
Posted Aug 26, 2010 16:03 UTC (Thu)
by gmaxwell (guest, #30048)
[Link]
From the horses (formerly top secret) mouth: "NSA hunted diligently for a way to stop cryptography from going public."
Though I've never seen any disclosure specific to the arpanet, it would be a logical consequence of the chilling effect of their academic suppression in other areas even absent direct intervention.
But really... would you have done differently in their shoes?
Posted Aug 26, 2010 19:32 UTC (Thu)
by smoogen (subscriber, #97)
[Link]
Applying Occam's razor without enough facts gets you wrong conclusions. There were many many differences between 1969 ARPAnet and 1994 Internet.
1) ARPAnet was a Cold War research unit where designing new things to help the military was paramount. The NSA at that time was quite aware that designing in security first versus later was important for future military networks. The institutes that were going to connect to ARPAnet were limited and controlled, so putting in encryption would have been easier to secure. The Internet, on the other hand, was completely different, already spanning into .su and other places.
2) The politics were completely different in the 1970s and the 1990s. In the 1970s, ARPAnet was going to be connecting systems and learning to deal with network failures in the war with the Soviet Union that was expected any day now. In that environment, security was more important than control. In the 1990s the war was over and the US had won, so control was more important than security.
The simple fact is that encryption is very expensive hardware-wise, and when your research computers are at best on a partial T1, adding in DES or some other encryption would make it too much for anyone to want to use. Back in the late 1980s we had encryption in our Kerberos systems, but most people turned it off because it sucked the living bejezus out of the CPU when you were trying to do a telnet.
Posted Aug 26, 2010 4:29 UTC (Thu)
by njs (subscriber, #40338)
[Link]
Posted Aug 26, 2010 9:04 UTC (Thu)
by richdawe (subscriber, #33805)
[Link] (11 responses)
I'm guessing that FTP-over-tcpcrypt doesn't really work in the presence of NAT devices. How would the NAT device decrypt the data, rewrite FTP commands, and re-encrypt?
Posted Aug 26, 2010 9:15 UTC (Thu)
by ilmari (guest, #14175)
[Link] (7 responses)
Posted Aug 26, 2010 13:24 UTC (Thu)
by djao (guest, #4263)
[Link] (6 responses)
But even when using an authenticated connection, tcpcrypt works with NAT. It accomplishes this feat by not encrypting or authenticating the port numbers. This design allows for some attacks (such as traffic analysis on port numbers), but the tradeoff seems to be worth it. The USENIX paper discusses this issue in some detail.
Posted Aug 26, 2010 13:55 UTC (Thu)
by ilmari (guest, #14175)
[Link] (5 responses)
Posted Aug 26, 2010 14:32 UTC (Thu)
by djao (guest, #4263)
[Link] (4 responses)
Besides the conference paper, the protocol has been implemented on Windows/Mac/Linux, and the implementation itself is publicly available on the web site under the GPL. The implementation demonstrably works over NAT, and it works with FTP and IRC and DCC and all those other problem cases that you cite. I suggest taking a look at the working implementation rather than arguing about whether or not the software works.
Posted Aug 26, 2010 15:31 UTC (Thu)
by bboissin (subscriber, #29506)
[Link]
Some NAT routers modify the TCP *payload* to let active FTP connections work (same for DCC, I guess). So if the content is encrypted, there's no way for the router to modify the payload.
Posted Aug 26, 2010 15:50 UTC (Thu)
by imitev (guest, #60045)
[Link] (2 responses)
Now in the case of NAT, if you want - say - to match RELATED packets with iptables, you need the ftp conntrack helper which will *read* the payload/data for ftp connections (control channel) so that what would be a NEW connection to the data channel on the random port supplied by the ftp server is actually matched by the connection tracking logic, and labelled as RELATED.
I may be wrong though, it's been a long time I dealt with that stuff.
Posted Aug 26, 2010 16:02 UTC (Thu)
by djao (guest, #4263)
[Link]
Posted Aug 26, 2010 17:05 UTC (Thu)
by imitev (guest, #60045)
[Link]
Posted Aug 26, 2010 9:44 UTC (Thu)
by jengelh (guest, #33263)
[Link]
Posted Aug 26, 2010 15:01 UTC (Thu)
by sync (guest, #39669)
[Link] (1 responses)
Posted Aug 26, 2010 19:08 UTC (Thu)
by RobSeace (subscriber, #4435)
[Link]
Posted Aug 26, 2010 9:45 UTC (Thu)
by jengelh (guest, #33263)
[Link]
Eh? Did they ever hear of udpencap? Port 4500, anyone?
Posted Aug 26, 2010 13:18 UTC (Thu)
by Fowl (subscriber, #65667)
[Link] (4 responses)
I'm wondering how far off actual widespread implementation is - it should be relatively easy to get this into the (Linux) kernel, harder to get it on by default; but I'm left wondering if it'll ever make it into OSX or NT in our lifetimes...
Anyway, in-depth articles on topics like this are why I hand over my hard-earned student dollars to subscribe *hint* *hint*. =)
Posted Aug 26, 2010 13:50 UTC (Thu)
by Fowl (subscriber, #65667)
[Link]
This is very exciting for me, however! Hopefully, in the not-too-distant future, this will make it into mainline and large chunks of internet traffic will start to become opaque with almost no one noticing! =)
So I suppose what I was really asking for is more of a political (urgh) view of the situation - what sort of chances this has for general consumption, etc.
Posted Aug 26, 2010 13:52 UTC (Thu)
by jackb (guest, #41909)
[Link] (2 responses)
Posted Aug 26, 2010 14:44 UTC (Thu)
by djao (guest, #4263)
[Link]
Posted Aug 26, 2010 17:33 UTC (Thu)
by imitev (guest, #60045)
[Link]
Posted Aug 26, 2010 18:18 UTC (Thu)
by sflintham (guest, #47422)
[Link]
Admittedly it is a bit pointless right now, but I would quite like to try using this for a while and see if I encounter any breakage or noticeable overhead. I haven't been an early adopter for ages, it might be my turn to suffer for the greater good. :-)
Posted Aug 27, 2010 12:48 UTC (Fri)
by intgr (subscriber, #39733)
[Link]
I have created an AUR package of tcpcrypt for Arch Linux users: http://aur.archlinux.org/packages.php?ID=40308 This is the user-space version, so it's quite foolproof (be careful with it on headless servers, though), and it's surprising how easy it is to deploy.
It is important to emphasize that the tcpcrypt effort is being led by skilled and experienced cryptographers. The USENIX conference where the paper was published is highly regarded, and one of the authors (Boneh) is arguably the top cryptographer of this generation. In short, tcpcrypt is not your average snake oil -- it is a serious proposal, worthy of consideration.
I never suggested that an unauthenticated connection should have a lock icon. In fact, the reverse is true -- they should not have a lock icon. The lock icon should remain reserved for situations where the user needs to sleep easy.
You make a valid point. However, this problem can be easily, almost trivially, fixed. The browser can (and should) be programmed to give a huge scary warning if a web site that offered authentication in the past has changed so that it no longer offers it.
If you see the little locked icon in your address bar, you should sleep easy in the knowledge that the only other party to view your communications was the web site
I do not, did not, and never have faulted the original Netscape engineers. They did the best job that they could at the time. In particular, Netscape predates SSH, and SSH was the first program to demonstrate that crypto is much easier to deploy without a PKI. (SSH is also, incidentally, the only crypto protocol in history successful enough to have driven its corresponding unencrypted alternative to extinction.) I also fully agree that cryptographers are largely at fault for the colossal PKI mistake.
The NSA is far too secretive for anyone to procure a reliable citation for such allegations. However, given that we all agree the NSA did block encryption from being deployed on the internet during the 1990s, during a period which was at least as crucial to the formation of today's internet as the old ARPANET, what difference does it make? There's no logical reason why the NSA's position on this issue would change between 1970 and 1990. Occam's Razor certainly implies that the NSA opposed encryption software deployment consistently throughout this entire timeframe, and they had ample ability and opportunity to take action behind the scenes, out of public view.
> a reliable citation for such allegations.
To be clear, I'm not claiming that the NSA actually did block encryption software in the 1970s. I have no proof of that.
> heard about it, then that might help us learn something.
> change between 1970 and 1990.
FTP with Tcpcrypt vs. NAT
Tcpcrypt supports authentication, it's just not mandatory. This is not the same thing as no authentication. As I wrote in my first comment, I believe mandatory authentication is one of the biggest mistaken design decisions of all time.
Not authenticating or encrypting port numbers still doesn't help when you need to inspect and rewrite port numbers and IP addresses in the payload of the packet, as with FTP (PORT and EPRT commands), IRC (DCC connections) and a whole bunch of other protocols (see the various conntrack modules in netfilter).
I thought my statement was pretty clear, but it seems that I'll have to clarify. Tcpcrypt does not encrypt or authenticate the portions of the TCP header in the IP payload that encode port numbers and IP addresses. Neither does the (optional) authentication portion of tcpcrypt rely on port numbers or IP addresses, which in any case are easily forged. Present-day encryption protocols such as SSL and SSH already work fine over NAT -- there's no inherent reason why NAT is incompatible with strong authentication or encryption.
In the case of passive FTP, you usually connect to port 21 (the control channel), and if you up/down something (or even do an ls?), the server replies (in the payload) with "ok, connect to port 12345"; port 12345 is random and is for the data channel.
Having the headers in clear text doesn't help here; you really need to be able to read what's in the payload.
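To make that concrete, here is a small Python sketch, not tied to any particular NAT implementation, that pulls the data-channel address out of the kind of reply a passive-mode FTP server sends; all of this information sits in the payload that tcpcrypt would encrypt.

    import re

    # A typical FTP reply to a PASV command; everything after the status
    # code travels inside the TCP payload.
    reply = "227 Entering Passive Mode (192,168,1,10,195,80).\r\n"

    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    h1, h2, h3, h4, p1, p2 = map(int, m.groups())

    host = f"{h1}.{h2}.{h3}.{h4}"
    port = p1 * 256 + p2   # 195 * 256 + 80 = 50000

    # A NAT device (or the kernel's ftp conntrack helper) has to read, and
    # for the address possibly rewrite, these values; it cannot do that if
    # the payload is encrypted end to end.
    print(host, port)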
Right, I didn't think about that. Thanks for the explanation.
Anyway, in both cases you still need to read what's in the payload.
You have to use a sane protocol.
You can, but it's unbelievably difficult. To start with, opportunistic encryption is (to my knowledge) not yet a standard part of IPsec. It's a nonstandard extension offered by some implementations, not all of which are compatible. Also, installation and configuration of IPsec is much more intrusive and time-consuming than tcpcrypt. But the biggest problem is that the DNS TXT key needs to go in the reverse DNS zone file. I don't know a single residential ISP that allows customers to add something to their reverse DNS, and even among business ISPs this kind of thing is very rare. So, in practice, opportunistic encryption via IPsec is available only to a very few privileged users, which is not enough to support large-scale deployment or make any measurable difference in the percentage of internet traffic that undergoes encryption.
As a general rule, if you need to configure something, then say goodbye to mass usage. If you need to understand DNS TXT records and IPsec (and its weird implementation interoperability issues), then say goodbye not only to mass usage, but also to a good number of sysadmins.
Packaged version?
Just 'makepkg -i' as usual and '/etc/rc.d/tcpcryptd start'. Enjoy! :)