The overengineering and non-deployment of SSL/TLS
I have written before about how overzealous design of cryptographic protocols often results in their non-use. Protocol engineers are trained to be thorough and complete. They rankle at leaving in vulnerabilities, even against the most extreme threats. But the perfect is often the enemy of the good. None of the various protocols to encrypt E-mail have ever reached even a modicum of success in the public space. It's a very rare VoIP call (other than Skype) that is encrypted.
The two most successful encryption protocols in the public space are SSL/TLS (which provide the HTTPS system among other things) and Skype. At a level below that are some of the VPN applications and SSH.
TLS (the successor to SSL) is very widely deployed but still very rarely used. Only the tiniest fraction of web sessions are encrypted. Many sites don't support it at all. Some will accept HTTPS but immediately push you back to HTTP. In most cases, sites will have you log in via HTTPS so your password is secure, and then send you back to unencrypted HTTP, where anybody on the wireless network can watch all your traffic. It's a rare site that lets you conduct your entire series of web interactions encrypted. This site fails in that regard. More common is the use of TLS for POP3 and IMAP sessions, because it's easy: there is only one TCP session, and the set of users who access the server is small and controlled. The same is true with VPNs -- one session, and typically the users are all required by their employer to use the VPN, so it gets deployed. IPSec code exists in many systems, but is rarely used in stranger-to-stranger communications (or even friend-to-friend) due to the nightmares of key management.
TLS's complexity makes sense for "sessions" but has problems when you use it for transactions, such as web hits. Transactions want to be short. They consist of a request, a response, and perhaps an ACK. Adding extra back-and-forths to negotiate encryption can double or triple the network cost of a transaction.
Skype became a huge success at encrypting because it is done with a ZUI (zero user interface) -- the user is not even aware of the crypto. It just happens. SSH takes an approach that is deliberately vulnerable to man-in-the-middle attacks on the first session in order to reduce the UI, and it has almost completely replaced unencrypted telnet among the command-line crowd.
I write about this because Google is finally doing an experiment to let people have their whole Gmail session encrypted with HTTPS. This is great news. But hidden in the great news is the fact that Google is evaluating the "cost" of doing this. There may also be some backlash if Google does this on web search, as it means that ordinary sites will stop seeing the search query in the "Referer" field until they too switch to HTTPS and Google sends traffic to them over HTTPS. (That's because, for security reasons, the HTTPS design says that if I made a query over an encrypted connection, I don't want that query to be repeated in the clear when I follow a link to a non-encrypted site.) Many sites do a lot of log analysis to see what search terms are bringing in traffic, and may object when that goes away. Several things have stood in the way of deploying the encrypted web:
Cost of certificates
First of all, to use TLS without annoyance, you had to buy, and keep buying, a certificate from a seller on the trusted list of all the major browsers. In the old days these cost hundreds of dollars per year, which large sites had no problem with but small sites balked at. Even today, the hassle of getting a certificate and maintaining it scares away many sites. In the ZUI approach, we need a way for sites to get, free and with very little UI, a low-security certificate good only for securing basic traffic.
One simple approach would be a certificate server which allowed any site to request a certificate and prove it owned the domain in question by putting a response to a challenge in a URL on that domain, on a web server on a random port below 1024. The web server, when it is being installed, could conduct this exchange without the installing user having any involvement, and gain a low-security certificate proving that, at least back then, the requester had control of the domain or IP. (There are some DNS cache poisoning gotchas to worry about here.) With such a certificate, the yellow lock would be shown as barely locked. We might call this a "Light Certificate." The StartSSL system offers free SSL certs, but not automatically, and users will get an error in Internet Explorer.
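To make the idea concrete, here is a rough sketch (in Python) of the installer side of such an exchange. The certificate server, its endpoints and the challenge format are all inventions for illustration, and for simplicity the challenge is published at a fixed path rather than on a random low port.

```python
# Sketch of the installer side of an automated "Light Certificate" request.
# The CA endpoints, JSON fields and challenge path are hypothetical, invented
# for illustration; the challenge is published at a fixed path for simplicity
# rather than on a random low port.
import json
import urllib.request

CA = "https://lightca.example"   # hypothetical certificate server
DOMAIN = "www.example.org"       # the domain the installer is setting up
WEBROOT = "/var/www"             # where the installer is placing the site

def post_json(url, payload):
    """POST a JSON body and return the decoded JSON response."""
    req = urllib.request.Request(
        url, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

def request_light_certificate(csr_pem):
    # 1. Ask the CA for a challenge tied to this domain and CSR.
    challenge = post_json(f"{CA}/new-challenge",
                          {"domain": DOMAIN, "csr": csr_pem})

    # 2. Publish the challenge response at a URL on the domain, so the CA
    #    can verify that whoever asked really controls the web server.
    with open(f"{WEBROOT}/.well-known/light-cert/{challenge['token']}", "w") as f:
        f.write(challenge["response"])

    # 3. Tell the CA to fetch that URL; if it matches, it returns a
    #    low-security certificate good only for basic traffic encryption.
    issued = post_json(f"{CA}/issue",
                       {"domain": DOMAIN, "token": challenge["token"]})
    return issued["certificate_pem"]
```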
One could even imagine a certificate which only certifies a given domain on a given IP. You would need to (automatically) get another certificate if you ever changed IPs. If you want to do complex stuff, such as round-robin DNS or Dynamic DNS, get a more general certificate. However, this would not solve the problem of local IPs behind NAT that can't be certified by remote CAs. It does mitigate the issue that somebody who takes over your machine temporarily could then get a cert to pretend to be it on the general internet, which is a problem with all low-authentication CA approaches.
On top of this, cheaper identity-verified certificates would have been worthwhile for those sites willing to spend a little cash to get a stronger lock icon.
UI Cost of certificates
Another downside of running secured traffic is all the warnings that come up in many browsers when you transition from a secured page to an unsecured one, or when a page contains secured text but includes unsecured images or other embedded items. These warnings have become the "alarm that constantly goes off" and are thus routinely ignored. We might want to skip these warnings when moving from a light certificate to unencrypted, or make them less intrusive: a little animation as the security level changes, a change of colour in a toolbar background, or something similar. Again, if the alarm is going to go off all the time, it is not doing you any good.
Client authentication is another can of worms. For privacy reasons, clients must not identify themselves to every server that asks. But it is useful to be able to do so when you want to, and it is more secure and more invisible to use public key encryption and digital signatures when you need them than to use userids and passwords as we do today.
In addition, users have shown that it takes a lot of prodding for them to get and install client-side certificates. The more they can do with user-generated certificates, or free certificates that can be requested by programs with minimal UI for the user, the better. Earlier, I wrote about the idea of authenticated actions as an alternative to logins and single sign-on.
Physical cost of the encryption
In the early days -- SSL was developed 15 years ago -- people thought seriously about the CPU cost of doing the ordinary symmetric encryption on the actual data. To do it properly, it was felt back then, required buying a card for your server to do the encryption, a serious cost. Thanks to 15 years of Moore's law, you would think this would no longer be an issue, but you will still hear some talk about it.
There is also resistance to the cost of the public key encryption (typically RSA) done in the initial handshake for TLS. RSA is much more expensive than ordinary encryption. But it is done only once, at the start of a session, and should not be standing in anybody's way. RSA uses large keys and large certificates, however, and people with bandwidth concerns (mostly for their users) have reason to object to it. To take a tiny transaction, such as the fetching of the lightweight Google home page (3kb in size) and make it involve tens of kilobytes is something one can still express some concern about, even today.
There is an answer to that, in elliptic curve cryptography, which is able to use much smaller keys and certificates. However, for a variety of reasons, some of them silly, it is less popular for these applications.
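For a rough sense of the size difference, here is a small sketch using the Python cryptography package to compare the DER-encoded public keys of a 2048-bit RSA key and a P-256 elliptic curve key; the RSA key alone is roughly three times the size, before signatures and certificate chains widen the gap.

```python
# Size comparison using the Python "cryptography" package: the DER-encoded
# public key for RSA-2048 versus a P-256 elliptic curve key.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec, rsa

def spki_len(private_key):
    """Length of the DER-encoded SubjectPublicKeyInfo for a key's public half."""
    return len(private_key.public_key().public_bytes(
        serialization.Encoding.DER,
        serialization.PublicFormat.SubjectPublicKeyInfo))

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ec_key = ec.generate_private_key(ec.SECP256R1())

print("RSA-2048 public key:", spki_len(rsa_key), "bytes")   # roughly 290
print("P-256 EC public key:", spki_len(ec_key), "bytes")    # roughly 90
```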
Cost of the handshakes
The biggest cost, one that Moore's law doesn't help at all, is the handshake to start a TLS based web fetch.
With regular HTTP, you send your unencrypted web URL request out, and the server answers back with the web page you asked for. Aside from the TCP SYN/ACK setup round trip there is just the basic request, response (and close).
With TLS, the full version adds a double round-trip handshake; only after that can the request and response flow, costing yet another round trip. While Moore's law can make the calculations at either end faster, and more bandwidth can reduce the cost of the large certificates and keys being bandied about, little can reduce the round-trip times of handshakes to servers that are far away. Everything is going to start a bit more slowly.
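As a back-of-the-envelope illustration, assuming a 100 ms round trip to a distant server and ignoring DNS:

```python
# Back-of-the-envelope latency for a single small fetch. Round-trip counts
# follow the description above: TCP setup plus request/response for plain
# HTTP, with the classic full TLS handshake adding two more round trips
# before any application data flows. The 100 ms RTT is an assumption.
RTT_MS = 100

http_round_trips = 1 + 1        # SYN/SYN-ACK, then GET/response
https_round_trips = 1 + 2 + 1   # TCP setup, TLS handshake (x2), then GET/response

print("HTTP :", http_round_trips * RTT_MS, "ms to first response byte")
print("HTTPS:", https_round_trips * RTT_MS, "ms to first response byte")
```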
No good fallbacks
Another curse was the use of two protocols, HTTP and HTTPS. You can write a URL that points to one or the other, but there is no way to say, "Use HTTPS, but if that doesn't work, fall back to HTTP." Right now the most common pattern is that you go to a site with HTTP, the site notices your browser supports HTTPS (they almost all do these days), and it redirects you to the HTTPS site for login. This means the initial request is sent unencrypted, and everything is even slower. Even though many sites have reliably supported HTTPS for over a decade, it is very rare to see an external link use an HTTPS URL. It was drummed into us in the early days that the browser might not support HTTPS, and so such a URL might break things. Even today, when a web tool that can't handle HTTPS is the thing that should be considered broken, you will not find many external site links using it.
We could define extra attributes to put on links that say, "It's OK to fall back," but of course older tools would not understand these attributes. We could have an attribute that says, "Really try HTTPS on this link if you are able," and that might work a bit better. Another useful attribute might say, "There is no need to warn the user that this link goes to an unsecured web site; we know what we're doing." Leave the warnings for times when something has actually gone wrong, rather than for something planned. Attempting to enforce policy with warnings rarely seems to work.
There has been some talk of using DNS to handle both fallbacks and fall-"ups" to encourage encryption. For example, one could put an extra field in the DNS record for a domain (say, an SRV record) to say, "If you're getting a request to go to port 80 for HTTP here, use HTTPS instead." A browser that was aware of this would know to go secure, and there would be no need to change the target links. This also allows the introduction of newer, more efficient protocols without having to change any of the web's HTML (a huge plus); they can then be adopted as browsers and servers start supporting them. Security designers tend to disdain the "fall-up" concept, where at any moment you can't be sure whether your traffic will be encrypted or in the clear. Fear of this has resulted in most traffic being in the clear, rather than most traffic being encrypted.
With proper deployment of more security in DNS, it's also possible to put public keys and lists of supported protocols into DNS records, so that browsers can encrypt their very first request using that key. This way even the first request can be free of the extra handshakes.
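A sketch of how a browser might check for such hints, using the Python dnspython library; the record names and the idea of carrying a public key in a TXT record are hypothetical conventions for illustration only.

```python
# Sketch of a browser checking DNS for "fall-up" hints before its first
# request, using dnspython. The _https._tcp SRV name and the _light-key TXT
# record carrying a public key are hypothetical conventions, not a deployed
# standard.
import dns.resolver

def https_hint(domain):
    try:
        srv = dns.resolver.resolve(f"_https._tcp.{domain}", "SRV")
        txt = dns.resolver.resolve(f"_light-key.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None   # nothing published; fall back to plain HTTP
    record = srv[0]
    pubkey = b"".join(txt[0].strings).decode()
    return {"host": str(record.target).rstrip("."),
            "port": record.port,
            "pubkey": pubkey}

print(https_hint("example.org") or "no hint; stay on HTTP")
```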
Alternatives
It is possible to design a transaction protocol (i.e. for web fetches) which is not nearly as costly in round trips. In particular, if a client is able to remember the public key of the web sites it visits, there can be a non-handshake protocol, where the request consists of "Here, encrypted with your public key X, is my request, my generated symmetric key and optionally my own key and certificate," and the response is "Here, encrypted with the symmetric key you offered, is the response," or "No, we don't accept that key or method any more; here's what we want to use," which requires another round trip and offers the full potential of the advanced protocols -- but only once per user.
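Here is a minimal sketch of the client side of such a request, assuming the client has a cached copy of the site's RSA public key from an earlier visit; the wire framing and the key cache itself are hypothetical.

```python
# Client side of the "no handshake" transaction sketched above, assuming the
# client has cached the site's RSA public key (PEM) from an earlier visit.
# The wire framing and the key cache are hypothetical.
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def build_request(cached_server_pubkey_pem: bytes, http_request: bytes):
    server_key = serialization.load_pem_public_key(cached_server_pubkey_pem)

    # "Here, encrypted with your public key X, is my generated symmetric key..."
    session_key = AESGCM.generate_key(bit_length=128)
    wrapped_key = server_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None))

    # "...and here is my request, encrypted with that symmetric key."
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, http_request, None)

    # The session key is kept so the reply, encrypted with it, can be read.
    return wrapped_key + nonce + ciphertext, session_key
```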
It has a problem with compromised keys, but then so do all the other systems, and for ordinary site encryption, which is 99% of your web hits, it would be fine. Sites could elect to opt out, and say, "Don't use this; we want a full TLS handshake for any transaction to domains matching this pattern," to ensure that the most complex systems are used for financial transactions or the logins associated with them.
Of course, switching to a system like this would not be easy. SSL and TLS took time to get adopted, and they had a big adoption advantage -- there was no other way to do a secure web session, and big commercial sites were demanding something. Now that they have something that they can use for bank logins, there is much less pressure to do it better. And for ordinary web sites, just not much pressure at all to make their traffic secure.
There was an effort afoot to put an encryption handshake into the TCP handshake, but it died on the vine because most IETF people felt it was a layering violation.
There would in fact be a tremendous efficiency gain if web transactions could be made vastly simpler, bypassing even the usual rules of TCP. An ideal web transaction protocol might be done with a mixture of UDP and a kludged TCP. In such a system, the initial request would go out as UDP (encrypted using keys learned from DNS), effectively acting as the SYN. The response could come back, and if it all fit in one packet, that would be it, and an ACK/CLOSE would be sent in response. If it did not fit, the response would be the SYN/ACK and the rest would follow like regular TCP. That would commonly be the case for requests known to fetch more components, such as images, keep-alive style. The short transactions, however, would be just a few packets.
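A toy client for the single-datagram case might look like this; the port number and wire format are made up, the encryption is omitted, and the fall-back to a TCP-style exchange for larger replies is left out.

```python
# Toy client for the single-datagram case: one UDP packet out carrying the
# request (acting as the SYN), one packet back if the reply fits, then an
# ACK/CLOSE. The port and wire format are hypothetical; encryption and the
# TCP-style fall-back for larger replies are omitted.
import socket

def udp_fetch(host, request, port=8442, timeout=2.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    try:
        sock.sendto(request, (host, port))    # request doubles as the SYN
        reply, addr = sock.recvfrom(65535)    # the whole response, if it fits
        sock.sendto(b"ACK/CLOSE", addr)       # acknowledge and close
        return reply
    except socket.timeout:
        return None   # a real client would retry, or fall back to TCP here
    finally:
        sock.close()
```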
To get even more efficient, one could imagine a system where you don't yet know the DNS records of the host you're going to talk to. In this case you would send the web request to the DNS server and ask it to forward it on (via UDP) to the web server, which would then respond directly to you (telling you its authenticated DNS information for future reference). Now that would be efficient, though in the modern NAT world it would be a problem requiring proxies. You also need a way to stop the DNS server from snooping on the initial request, which is difficult, as you don't yet know anything about the target site or its keys. Identity-based certificates could provide a cool solution to this, though they need trusted third parties.
Comments
Wes Felter
Fri, 2009-06-26 16:46
Free SSL and BTNS
Free "domain validated" SSL certs are already available. http://www.startssl.com/ It would be cool if they had an API so that a Web server could grab a cert automatically during installation. (A phisher's paradise to be sure, but they can already write scripts to automate the signup process anyway.)
Have you done any reading about BTNS? It appears to be the IETF's "official", non-layer-violating solution. I wonder if it can be used in an ObsTCP-like way that adds no round trips. An apparent problem with both ObsTCP and BTNS is that they have to be implemented in the network stack (i.e. Windows Eight) rather than in userspace (i.e. Firefox 4.0).
brad
Fri, 2009-06-26 21:45
StartSSL is a good start
But of course it is not trusted by MSIE or Outlook. And the sad truth is, nobody wants all the IE users to get a warning dialog, even if you can tell them how to install the cert and not get the warning in future. Worse, nobody is going to use an HTTPS URL to point to you if they know that IE users will barf. You could write a detector so the web page looks at the user-agent and offers HTTPS URLs to non-IE users and HTTP to IE users, but that's quite a kludge.
I am glad to see BTNS, as I have said from the start that IPsec was doomed to small deployment as long as it involved a lot of work to set up keys.
But layer-religion aside, I think there are (at least) two types of reliable (i.e. not just best-effort) connections we want to have on the net: sessions and transactions. The web is a mix of those, but starts off as transactions. You don't have to make yourself vulnerable to MITM to fix this either, if we can make a way for the web server to generate a cert for itself when it installs. (However, the CA that offers these certs is of course a tempting target.)
A lot of this would be better if we had secure DNS, something we've been mighty slow to deploy too. Skype provides a remarkable example: they deployed a secure system and millions were using it within months. It is not subject to MITM, but it is subject to corruption of Skype itself, as the Chinese incident showed. Frankly, I would rather have that than nothing, which is part of the BTNS philosophy.
Robert
Sun, 2009-09-27 15:05
StartCom now support
Good news: StartCom is now supported by Microsoft! Microsoft just included StartCom's root certificate in their trusted store so it will be supported without warnings on all newer Microsoft products: http://www.sslshopper.com/article-microsoft-adds-support-for-startcom-certificates.html
Bram Cohen
Mon, 2009-06-29 22:15
malfeatures
The no-referer thing is an outright malfeature that should be handled at the HTML layer (is there even a way to completely ban referers at the HTML layer? I haven't heard of one). Lack of transparency -- both in requiring that HTTPS be a separate port from HTTP and in giving security warnings if there's no signature on the certificate (even though there are no warnings for plaintext) -- has also been a massive issue from day 1, although the crypto community hasn't exactly helped (one of my first posts to Perry's list was complaining about requiring separate ports, and he mailed me back and said I was wrong and asked if I still wanted to post it, presumably to save me embarrassment or something).
The multiple round trips are also a major issue, not so much because of bandwidth as because of latency - latency times aren't going down, because they're based on the speed of light, and multiple round trips are noticeable by humans.
One thing which would be very welcome would be an extension to HTTP which did opportunistic encryption with minimal overhead, both in round trips and in computation. It would even be fine if it didn't encrypt the first request, since that's rarely sensitive and there are simple hacks to force a reload if it is anyway. MITM explicitly should not be part of the threat model, because that just results in the protocol not getting used.
brad
Tue, 2009-06-30 13:38
First request
Actually, it's pretty easy to not be so vulnerable to MITM in a transparent way if you have a free, automated CA, or another authenticated channel such as secure DNS. In this case you might still be MITM'd but only by somebody who has broken the automated CA or other system.
Secured DNS makes the most sense, since our current approach always involves doing DNS on a site before connecting to it. (Those who want to connect via raw IP could not use this.) The DNS record should be signed and contain the public key of the web site, allowing you to open an encrypted connection with your very first packet, with no need for extra round trips.
An alternate method is identity-based encryption. In this case, you encrypt your first packet using the identity-based key derived from the domain name. The site includes its certified public key in the response, and the rest of the response is encrypted using that. The identity-certifying company could look at this initial packet, and spooks could look at it if they know the key of the identity CA, but they could not look at the response, nor MITM you.
The main issue with identity certs is that there is a strong incentive for the spooks to want to get the key of the identity CA, especially if more than the initial requests are encrypted with it. You can make this a bit harder in a few ways. For example, one might make a formula so that the inner portion of the domain is the name of the identity CA. So instead of www.foo.com you have idcaname.foo.com as the domain of the web site, and the tools know to use that identity CA. That way there can be so many identity CAs that it's hard for spooks to track them all, and you can choose one you think is secure, or one in another country not under the legal authority of the spooks you are afraid of. You need to figure out a good way to regularly update and publish the public keys of all the identity CAs, though.
Bram Cohen
Tue, 2009-06-30 17:52
Putting keys and security
Putting keys and security policies into DNS would be a good way to go. The CA system we have now just makes people not use encryption.
brad
Tue, 2009-06-30 18:07
DNS and security
Indeed, though we have also suffered from poor deployment of secured DNS, and DNS has all these vulnerabilities, such as cache poisoning, so you need to secure it first.
There is also, for the most efficient possible system, a desire to not even do a separate DNS fetch. While most web sessions today are long (full of images), I once described the ideal form of a web fetch transaction, which would not even use TCP.
If you did not know the IP of the web server, you would not do a DNS lookup on it, wait for the response, and then send the request to the server. Instead, you would send your web fetch request to the DNS server via UDP (just as you send your DNS request to it). It would look up the address, but instead of just responding to you with it, it would forward your web request directly to the web server (again by UDP), with an ACK going back to the DNS server. The web server would then send the answer directly to you, including its DNS information, again via UDP, and you would send an ACK to it. From then on you would have the DNS information for direct communication the next time.
As you can see, this is vastly more efficient than today's fetch, which involves a DNS query, DNS response, SYN, SYN/ACK, ACK, HTTP GET, HTTP response packets (with ACKs) and FIN. Lots and lots of round trips.
Of course, if you use this method you are letting the DNS server see your request URL on your first request, but not on further ones. However, if you know a key, or can use an identity-based key, you can hide your request from the DNS server. The web server would use a key provided in your request to encrypt the response to you. You might know a key but not the DNS value, because DNS values expire quickly, especially when sites are doing round robin. In that case, the final DNS server is generally located with the web server, so it is easy for it to forward on your request.
Bram Cohen
Tue, 2009-07-07 08:41
The issue with multiple
The issue with multiple round trips isn't so much the fact of the round trips as it is the latency introduced, which is created by speed-of-light issues. Since the initial DNS lookup comes from something very close by geographically, it doesn't really add much to latency. The bigger problem is the second round trip needed for the SYN cookie. Dropping that would improve real-world latency quite a bit, although I'm not sure what a good way to do that is. There's also the general suckage of TCP-based congestion control, which at the moment is partly a product of the gamma being low for broadband. But the bigger problem is that it's based on dropped packets instead of increased one-way delays, so it always fills up the queue, hitting the problem that routers have queues which are way too big, so latency goes through the roof when TCP is transmitting at capacity.
brad
Tue, 2009-07-07 12:17
Latency
Yes Bram, I write about the latency as the big issue in the original article.
DNS does not come from something close by, unless it's cached. The DNS server for 4brad.com is in fact the same as the web server, and this is quite common. If it's your first hit (or the first hit by your cache) on the domain, then packets come all the way here and back, which is why it would be super efficient for the DNS server to be able to just forward or even answer the query.
In theory, a DNS request could involve first asking the root (who is fairly local) where "com" is -- but effectively that's almost always in the cache. You must then ask com where 4brad.com is -- that also is likely to be one of the nearby mirrors, but won't be in the cache if it's your first hit on the site since the TTL expired. Finally you must ask 4brad.com's DNS server where ideas.4brad.com is (on most sites it is www.foo.com you will be asking about) which is a round trip all the way to the target.
For big sites like www.google.com it's almost surely in the shared cache. For small sites it quite often isn't. When sites have set a very short TTL in order to do round robin DNS, you are doing this dance much more frequently.
Now it turns out that today's web transaction usually consists of fetching a page and a bunch of embedded images, css and scripts on the first hit, and this is almost always done in a single keep-alive session. Because of this, the burdens of a TCP socket (or even a TLS channel) are relatively much smaller. The "simple web transaction" where you send out a GET and get back a modest sized HTML page almost never happens on the first hit, though it is frequent on later hits and particularly on ajax hits.
Later hits, of course, can use encryption keys negotiated on the first hit if you do it right.
Anonymous
Thu, 2009-07-16 03:47
costs of handshake
I like that you considered costs of the handshake and Moore's Law. That is wise thinking. I only wish I saw this point being made in the context of video streaming. There's Adobe's (Macromedia's) campaign with rtsp. That's a fairly simple handshake that has been documented. I'd guess many are aware of it. But have you looked at Ooyala? It's even more annoying. All in the interests of "analytics" ($$). There is actually a POST made after the user issues a GET, unbeknownst to the user, unless he watches the HTTP transaction.
Anonymous
Sun, 2009-08-23 03:02
You are right but...
You are right, the tools suck. But the protocols themselves (certs, SSL, etc.) are 1) pretty damn efficient for what they do and 2) pretty solid. Don't point fingers at the tech, but rather at the implementation.
Elliptic curve cryptography is not well developed, which is why it's not a good alternative. A better answer to the round-trip problem might be to use session caching with user-configured/directed re-keying appropriate for use. No matter what, though, such gains in efficiency must be implemented first on servers - which have the economic problem of low distribution numbers and therefore short budgets.
This is the real problem that needs to be solved: developing secure software is expensive, time consuming, and increases your liability as a software provider. Even just researching crypto technologies can be a liability. Find a way to reduce secure software development costs, and you'll find yourself inundated with excellent tools and protocols...
brad
Sun, 2009-08-23 11:30
The implementation is the tech
And the quality of the implementation is what affects the deployment. It is pointless to design a wonderful system that does not get deployed because it is not designed for the qualities users are looking for. Users want security, but most of them demand convenience, ease of use and ease of learning, or they will never install it.
I don't agree; I don't think it's a matter of cost at all -- other than that with more money you could hire protocol designers who know how to build tools users will actually deploy.
Anonymous
Thu, 2009-09-17 00:08
As a user I need just
As a user I need just encryption, no authentication. I don't care whether another site impersonates the service I want to use, but I care about encrypting the traffic. TLS and SSL would be a lot simpler if we could use them without certificates at all. We can have encryption without authentication and in 99% of cases that's what we users need.