The situation is additionally confused by the fact that the version numbers do not give a good clue to how different the protocols were. Specifically:
SSLv2 was the first widely deployed version of SSL, but as this post indicates, had a number of issues.
SSLv3 is a more or less completely new protocol.
TLS 1.0 is much like SSLv3 but with some small revisions made during the IETF standardization process.
TLS 1.1 is a really minor revision to TLS 1.0 to address some issues with the way block ciphers were used.
TLS 1.2 is a moderately sized revision to TLS 1.1 to adjust to advances in cryptography, specifically adding support for newer hashes in response to weaknesses in MD5 and SHA-1 and adding support for AEAD cipher suites such as AES-GCM.
TLS 1.3 is mostly a new protocol though it reuses some pieces of TLS 1.2 and before.
Each of these protocols has been designed so that you could automatically negotiate versions, thus allowing for clients and servers to independently upgrade without loss of connectivity.
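To make the "automatically negotiate versions" point concrete, here is a minimal sketch from the client's point of view using Python's standard ssl module (example.com is just a placeholder host): the client offers everything its library enables and simply reports whatever version the handshake settled on.

    import socket
    import ssl

    ctx = ssl.create_default_context()  # offers every TLS version the library enables

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            # The handshake settles on the highest version both ends support.
            print(tls.version())  # e.g. "TLSv1.3"
            print(tls.cipher())   # the negotiated cipher suite

Neither side needs to know in advance what the other supports, which is exactly what lets clients and servers upgrade independently.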
TLS 1.0 introduced modularity via the concept of "extensions". It's anything but a minor evolution of the protocol.
One of the many things it brought is session tickets, enabling server-side session resumption without requiring servers to keep synced-up state. Another is Server Name Indication, enabling servers to use more than one certificate.
FWIW, these aren't actually in TLS 1.0.
Extensions (including SNI) came in a later spec, introduced in RFC 3546 (https://www.rfc-editor.org/rfc/rfc3546). Session tickets are in RFC 4507.
What TLS 1.0 did was to leave the door open for extensions by allowing the ClientHello to be longer than what was specified. See https://www.rfc-editor.org/rfc/rfc2246.html#section-7.4.1.2 (scroll to "Forward Compatibility Note")
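For illustration, here is a toy parse of a ClientHello body (my own Python sketch, not taken from any spec or library) showing how that note plays out: you read the fields TLS 1.0 defines and tolerate whatever bytes trail them, which is exactly where RFC 3546 later put extensions.

    import struct

    def parse_client_hello(body: bytes):
        """body = the ClientHello handshake body, record/handshake headers already stripped."""
        off = 0
        client_version = body[off:off + 2]; off += 2
        random = body[off:off + 32]; off += 32

        sid_len = body[off]; off += 1
        session_id = body[off:off + sid_len]; off += sid_len

        (cs_len,) = struct.unpack(">H", body[off:off + 2]); off += 2
        cipher_suites = body[off:off + cs_len]; off += cs_len

        cm_len = body[off]; off += 1
        compression_methods = body[off:off + cm_len]; off += cm_len

        # The forward-compatibility note: anything past this point is ignored
        # rather than treated as an error. RFC 3546 later defined this trailing
        # area as the extensions block.
        trailing = body[off:]
        return client_version, cipher_suites, trailing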
> Each of these protocols has been designed so that you could automatically negotiate versions, thus allowing for clients and servers to independently upgrade without loss of connectivity.
And ensuring decades of various downgrade attacks
The downgrade attacks on TLS really only arise from client behaviour where, on failing to negotiate one version, the client retries a new connection without offering it.
This was necessary to bypass various broken server side implementations, and broken middleboxes, but wasn’t necessarily a flaw in TLS itself.
But learning from how this issue held back 1.2 deployment, TLS 1.3 goes out of its way to look very similar on the wire to 1.2.
This isn't really accurate historically. TLS has both ciphersuite and version negotiation. Logjam (2015) [1] was a downgrade attack on the former that's now fixed, but is an extension of an attack that was first noticed way back in 1996 [2]. Similar problems occurred with the FREAK attack, though that was actually a client vulnerability. TLS 1.3 goes out of its way to fix all of this using a better negotiation mechanism, and by reducing agility.
[1] https://en.wikipedia.org/wiki/Logjam_(computer_security) [2] https://www.usenix.org/legacy/publications/library/proceedin...
Moreover, there's not really much in the way of choices here. If you don't have this kind of automatic version negotiation then it's essentially impossible to deploy a new version.
Well, you can, but that would require a higher level of political skill than normally exists for such things. What would have to happen is that almost everyone would have to agree on the new version and then implement it. Once implementation was sufficiently widespread, then you have a switchover day.
The big risk with such an approach is that you could implement something, then the politics could fail and you would end up with nothing.
The big downside of negotiation is that no one ever has to commit to anything so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding which has created a standard which has so many options it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.
This seems like a truly unreasonable level of political skill for nearly any setting. We're talking about changing every endpoint in the Internet, including those which can no longer be upgraded. I struggle to think of any entity or set of entities which could plausibly do that.
Moreover, even in the best case scenario this means that you don't get the benefits of deployment for years if not decades. Even 7 years out, TLS 1.3 is well below 100% deployment. To take a specific example here: we want to deploy PQ ciphers ASAP to prevent harvest-and-decrypt attacks. Why should this wait for 100% deployment?
> The big downside of negotiation is that no one ever has to commit to anything so everything is possible. In the case of TLS, that seems to have led to endless bikeshedding which has created a standard which has so many options it is hardly a standard anymore. The only part that has to be truly standard is the negotiation scheme.
I don't think this is really that accurate, especially on the Web. The actual widely in use options are fairly narrow.
TLS is used in a lot of different settings, so it's unsurprising that there are a lot of options to cover those settings. TLS 1.3 did manage to reduce those quite a bit, however.
> This seems like a truly unreasonable level of political skill for nearly any setting. We're talking about changing every endpoint in the Internet, including those which can no longer be upgraded. I struggle to think of any entity or set of entities which could plausibly do that.
Case in point: IPv6 adoption. There's no interoperability or negotiation between it and IPv4 (at least, not in any way that matters), which has led to the mess we're in today.
Many servers and clients support both IPv4 and IPv6. So, in a sense, there's a "negotiation" happening between client and server.
That’s not negotiating: I can’t connect to a server over v4 and have it tell me to switch to v6, or vice versa. That’s just supporting 2 completely different protocols.
Right. The closest thing we have to IPv6 "negotiation" is the Happy Eyeballs algorithm[0], which is literally just "connect to both at the same time and pick the one that connects first". The name serves to legitimise it and make it sound fancy but it's basically just brute force + a bit of caching.
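For the curious, the whole idea fits in a few lines. This is a rough Python sketch of the brute-force version (none of RFC 8305's staggered start or caching; host and port are placeholders):

    import socket
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def connect_first(host: str, port: int, timeout: float = 5.0) -> socket.socket:
        """Try every resolved address (v6 and v4) concurrently; keep the winner."""
        addrs = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)

        def attempt(info):
            family, socktype, proto, _, sockaddr = info
            s = socket.socket(family, socktype, proto)
            s.settimeout(timeout)
            s.connect(sockaddr)
            return s

        winner = None
        with ThreadPoolExecutor(max_workers=len(addrs)) as pool:
            for fut in as_completed([pool.submit(attempt, a) for a in addrs]):
                try:
                    conn = fut.result()
                except OSError:
                    continue
                if winner is None:
                    winner = conn   # first successful connection wins
                else:
                    conn.close()    # late finishers get closed
        if winner is None:
            raise OSError(f"could not connect to {host}:{port}")
        return winner

(Recent Python versions bake a real implementation of this into asyncio via the happy_eyeballs_delay argument, staggered start included, if I recall correctly.)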
That’s a great theory but in practice such a “flag day” almost never happens. The last time the internet went through such a change was January 1, 1983, when the ARPANET switched from NCP to the newly designed TCP/IP. People want to do something similar on February 1, 2030, to remove IPv4 and switch totally to IPv6, but I give it a 50/50 chance of success, and IPv6 is already about 30 years old. See https://ipv4flagday.net/
You don't have to have everyone switch over on the same day as with your example. Once it is decreed that implementations are widespread enough, then everyone can switch over to the introduced thing gradually. The "flag day" is when it is decreed that implementations no longer have to support some previously widely used method. Support for that method would then gradually disappear unless there was some associated cryptographic emergency that could not be dealt with without changing the standard.
Well, this is basically what we do, except that we try to negotiate to the highest version during the period before the flag day. This is far more practical for three reasons:
1. You actually get benefit during the transition period because you get to use the new version.
2. You get to test the new version at scale, which often reveals issues, as it did with TLS 1.3. It also makes it much easier to measure deployment because you can see what is actually negotiated.
3. Generally, implementations are very risk averse and so aren't willing to disable older versions until there is basically universal deployment, so it takes the pressure off of this decision.
> The big risk with such an approach is that you could implement something, then the politics could fail and you would end up with nothing.
They learned the lesson of IPv6 here.
> that seems to have led to endless bikeshedding which has created a standard which has so many options it is hardly a standard anymore
Part of the motivation of TLS 1.3 was to mitigate that. It removed a lot of options for negotiating the ciphersuite.
You could deploy a new version, you'd just have older clients unable to connect to servers implementing the newer versions.
It wouldn't have been insane to rename https to httpt or something after TLS 1.2 and screw backwards compatibility (yes I realize the 's' stands for secure, not 'ssl', but httpt would have still worked as "HTTP with TLS")
> It wouldn't have been insane to rename https to httpt or something after TLS 1.2 and screw backwards compatibility
That would have been at least a little bit insane, since then web links would be embedding the protocol version number. As a result, we'd need to keep old versions of TLS around indefinitely to make sure old URLs still work.
I wish we could go the other way - and make http:// implicitly use TLS when TLS is available. Having http://.../x and https://.../x be able to resolve to different resources was a huge mistake.
Regarding your last paragraph: Isn’t that pretty much solved thanks to HSTS preload? A non-technical author of a small recipe blog might not know how to set it up, but a bank ought to have staff (and auditors) who take care of stuff like that.
It doesn't solve the problem of a client having to treat https:// and http:// URLs with the same string after the :// as distinct resources.
Are there any real world online resources where, modulo redirect, a different resource is presented on the HTTP and the HTTPS protocols? Or alternatively, on ports 80 and 443?
There used to be, though it's less true now. However, the reason to treat them distinctly (as different origins, technically) is that HTTPS provides integrity whereas HTTP does not. So, consider the case where the client enters an HTTP URL and is redirected, just as you say above. If the attacker injects their own JS and it is cached in an origin that is just `example.com`, then they control the user's experience of the site, even if later the user securely goes to the site with HTTPS.
Thank you. That really is a novel attack that I didn't think of.
> As a result, we'd need to keep old versions of TLS around indefinitely to make sure old URLs still work
Wouldn't we be able to just redirect https->httpt like http requests do right now?
Sure it'd be a tiny bit more overhead for servers, but no different than what we already experienced moving away from unencrypted http
You’re thinking about it from the perspective of a site operator. Yes, individual websites could do that. But not all websites would use such a redirect.
But think about it from the perspective of a web browser or curl. You can’t rely on all web servers having such a redirect for their URLs. Web browsers would need to support old versions of TLS to make old URLs work. They’d need to support old versions of TLS indefinitely so as to not break old URLs.
Using an old version of TLS isn’t like using an old version of the C compiler. Old versions of TLS have well-documented problems with security implications. That’s why we made new versions. Maintaining lots of versions of TLS multiplies the security surface area for bugs, and makes you vulnerable to downgrade attacks.
Like, you're right that some, perhaps many, sites would continue using https, just like in the current situation, many sites continue supporting http (instead of just setting up a redirect)
No site needs to do this though, and I can't recall seeing a site with sensitive user info that supports http in recent years. And in the current situation, many sites still support old SSL/TLS versions (like SSLv2). A protocol name upgrade would give you more certainty that you're connecting over a secure connection, and perhaps a better indication if you've accidentally used a less-secure connection than intended.
I mean actually your exact argument could be made about http vs https, that http+SSL should have become the default (without changing the protocol name of http://), and that by changing the protocol name we made it so that some websites still accept http. I guess in practice there's a slight difference since http->https involved a default port change and SSLv2 -> TLS did not, so in the former case the name change was important to let clients know to use a different default port; but ignoring that, the same argument could be made, and I would have disagreed with it there too.
Specifying the protocol... in the protocol portion of the URL... can be useful for users.
This has a number of negative downstream effects.
First, recall that links are very often inter-site, so the consequence would be that even when a server upgraded to TLS 1.2, clients would still try to connect with TLS 1.1 because they were using the wrong kind of link. This would delay deployment. By contrast, today when the server upgrades, new clients upgrade as well.
Second, in the Web security model, the Origin of a resource (e.g., the context in which the JS runs) is based on scheme/host/port. So httpt would be a different origin from HTTPS. Consider what happens if the incoming link is https and internal links are httpt: now different pages are different origins for the same site.
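A tiny illustration of that origin split (just urllib; httpt is the hypothetical scheme from the comment above, not a real one):

    from urllib.parse import urlsplit

    def origin(url: str):
        # The web origin is the (scheme, host, port) triple.
        parts = urlsplit(url)
        default_ports = {"http": 80, "https": 443, "httpt": 443}
        return (parts.scheme, parts.hostname, parts.port or default_ports.get(parts.scheme))

    print(origin("https://example.com/page"))  # ('https', 'example.com', 443)
    print(origin("httpt://example.com/page"))  # ('httpt', 'example.com', 443), a different origin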
These considerations are so important that when QUIC was developed, the IETF decided that QUIC would also be an https URL (it helps that IETF QUIC's cryptographic handshake is TLS 1.3).
TLS is one of the best success stories of widely applied security with great UX. It would be nowhere as successful with that attitude.
Yes it would absolutely have been insane.
You mean like the way we use h2:// everywhere now? Oh wait, we don't.
Depends on what you mean by "this kind" because you want a way to detect attacker-forced downgrades and that used to be missing.
If a protocol is widely used wrongly, I consider it a flaw in the protocol. But overall, SSL standardization has gone decently well. I always bring it up as a good example to contrast with XMPP as a bad example.
Well, my only real point is that it’s not the version negotiation in TLS that’s broken. It’s the workaround for intolerance of newer versions that had downgrade attacks.
Fortunately that’s all behind us now, and transitioning from 1.2 to 1.3 is going much smoother than 1.0 to 1.2 went.
One of the big differences was in attitude. The TLS 1.3 anti-downgrade feature was not compatible with some popular middlebox products. Google told people too bad, either your vendor fixes it (most shipped free bug fixes for this issue, presumably "encouraged" by the resulting customer anger) or you can't run Chrome once this temporary fudge goes away in a year's time.
Previously (in earlier protocol versions) nobody stood up to the crap middleboxes even though it's bad for all normal users.
The service providers were the worst offenders here because they wanted to be the man in the middle, able to look at the data and “add value” to their networks somehow. Moving to TLS 1.3 took a lot of that away from them and it was only Google’s market power that could break them.
A similar thing has been happening with email sender auth, with Gmail and other big providers enforcing things.
Any chance that can be used to undo lots of the ossification that made QUIC a UDP-based hack rather than its own layer 4 protocol?
Basically none.
First, the success rate of any new IP-based protocol through most devices is incredibly low, especially now that NAT is so common.
Second, part of why QUIC runs over UDP is because the operating system generally won't let applications send raw IP datagrams.
Even running over UDP, QUIC has nontrivial failure rates and the browsers have to fall back to TLS over TCP.
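On the second point, the OS restriction is easy to see for yourself (Python, run as an unprivileged user):

    import socket

    # Creating a raw IP socket (what a from-scratch layer-4 protocol would need)
    # normally requires elevated privileges, so this raises PermissionError for
    # an ordinary user; one reason QUIC rides on top of UDP instead.
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_RAW)
    except PermissionError as err:
        print("no raw sockets for you:", err)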
It's probably too hard to get NATs to agree on a new L4 protocol.
> I always bring it up as a good example to contrast with XMPP as a bad example.
Could you expand a bit here? Do you just mean how extensions to the protocol are handled, etc., or the overall process and involved parties?
XMPP is too loose. Easiest comparison is security alone. XMPP auth and encryption are complicated, and they're optional for each of c2s, s2c, s2s (setting aside e2e). Clients and servers will quietly do the wrong thing if not configured exactly right. Email has similar problems, so bad that entire companies exist just to help set up stuff like DMARC, but that's a simpler app than instant messaging. The rest of the XMPP feature set is also super loose. Clients and servers never agree on what extensions to implement, even for very basic things like chat rooms. I really tried to like it before giving up.
Edit: https://wiki.xmpp.org/web/Securing_XMPP
SSL is appropriately strict. Auth and encryption, both c2s and s2c, go together. They were a bit lax on upgrades in the past, but as another comment said, Google just said you fix your stuff or else Chrome will show a very scary banner on your website. Yes you can skip it or force special things like auth without encryption, but it's impossible to do by accident.
thanks for taking the time to respond!
Man in the middle interfering with TLS handshakes?
The handshake is unencrypted so you can modify the messages to make it look like the server only supports broken ciphers. Then the man in the middle can read all of the encrypted data because it was badly encrypted.
A surprising number of servers still support broken ciphers due to legacy uses or incompetence.
Yes, this is a seriously difficult problem with only partial solutions.
The basic math of any kind of negotiation is that you need the minimum set of cryptographic parameters supported by both sides to be secure enough to resist downgrade. This is too small a space to support a complete accounting of the situation, but roughly:
- In pre-TLS 1.3 versions of TLS, the Finished message was intended to provide secure negotiation as long as the weakest joint key exchange was secure, even if the weakest joint record protection algorithm was insecure, because the Finished provides integrity for the handshake outside of the record layer.
- In TLS 1.3, the negotiation messages are also signed by the server, which is intended to protect negotiation as long as the weakest joint signature algorithm is secure. This is (I believe) the best you can do with a client and server which have never talked to each other, because if the signature algorithm is insecure, the attacker can just impersonate the server directly.
- TLS 1.3 also includes a mechanism intended to prevent against TLS 1.3 -> TLS 1.2 downgrade as long as the TLS 1.2 cipher suite involves server signing (as a practical matter, this means ECDHE). Briefly, the idea is to use a sentinel value in the random nonces, which are signed even in TLS 1.2 (https://www.rfc-editor.org/rfc/rfc8446#section-4.1.3).
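To make that last mechanism concrete, here is roughly what the client-side check looks like (a sketch only; the sentinel constants are the ones RFC 8446 specifies, and real stacks do this inside the handshake rather than as a standalone function):

    # A TLS 1.3-capable server that negotiates an older version overwrites the
    # last 8 bytes of ServerHello.random with one of these sentinels, and a
    # TLS 1.3 client that sees them aborts with illegal_parameter.
    DOWNGRADE_TLS12 = bytes.fromhex("444F574E47524401")           # "DOWNGRD" + 0x01
    DOWNGRADE_TLS11_OR_BELOW = bytes.fromhex("444F574E47524400")  # "DOWNGRD" + 0x00

    def check_downgrade(server_random: bytes, negotiated_version: str) -> None:
        """For a client that offered TLS 1.3 but ended up negotiating less."""
        tail = server_random[-8:]
        if negotiated_version == "TLSv1.2" and tail == DOWNGRADE_TLS12:
            raise ValueError("illegal_parameter: TLS 1.3 to 1.2 downgrade detected")
        if negotiated_version in ("TLSv1.1", "TLSv1.0") and tail == DOWNGRADE_TLS11_OR_BELOW:
            raise ValueError("illegal_parameter: downgrade below TLS 1.2 detected")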
No: while the handshake is unencrypted, it is authenticated. An attacker can’t modify it.
What an attacker can do is block handshakes with parameters they don’t like. Some clients would retry a new handshake with an older TLS version, because they’d take the silence to mean that the server has broken negotiation.
well, unless both client and server have sufficiently weak crypto enabled that an attacker can break it during the handshake.
Then you can MITM, force both sides to use the weak crypto, which can be broken, and you're in the middle. Also not really so relevant today.
You could encrypt the handshake that you received with the server's certificate and send it back. Then if it doesn't match what the server thought it sent, it aborts the handshake. As long as the server's cert isn't broken this would detect a munged handshake, and if the server's cert is broken you have no root of trust to start the connection in the first place.
How do you agree on a protocol to encrypt the message that agrees on the protocol?
This is the message that returns a list of supported ciphers and key exchange protocols. There’s no data in this first packet.
Alice: I’d like to connect.
Bob: Sure, here is a list of protocols we could use:
You modify Bob’s message so that Bob only suggests insecure protocols.
You might be proposing that Alice asks Trent for Bob’s public key … But that’s not how TLS works.
Bob's list of supported protocols is an input into the (authenticated) final handshake message, and that authentication failing will prevent the connection from being considered successfully established.
If the "negotiated" cipher suite is weak enough to allow real-time impersonation of Bob, though, pre-1.3 versions are still vulnerable; that's another reason not to keep insecure cipher suites around in a TLS config.
The fine man in the middle could still intercept that.
It also enabled cipher strength "step up". Back during the '90s and early 2000s (I'm not sure when it stopped, tbh), the US government restricted the export of strong cryptography, with certain exceptions (e.g. for financial services).
If you fell under one of those exceptions, you could get a special certificate for your website (from, e.g. Verisign) that allowed the webserver to "step up" the encryption negotiation with the browser to stronger algorithms and/or key lengths.
They still should have just called it TLS v4.0 instead of v1.0.
I'm halfway convinced that they have made subsequent versions v1.1, v1.2, and v1.3 in an outrageously stubborn refusal to admit that they were objectively incorrect to reset the version number.
As I noted below, there was real discussion around the version number for TLS 1.3. I don't recall any such discussion for 1.1 and 1.2.
Well, at least they were not just versioned by year number. ;)
It would still be better than changing the name for no reason and resetting the version number.