Very interesting; this seems to have the potential to improve email transit security quite a bit. It is encouraging to see Google, Microsoft, Yahoo and Comcast among the authors, which suggests this has a good chance of being applied to a large portion of internet email.
Of course, email-at-rest security remains an issue, but that's another story.
> encouraging to see Google, Microsoft, Yahoo and Comcast
For all intents and purposes this is most of the English speaking internet. If you swap Yahoo for Verizon and Apple you'd be pretty close.
So, yeah: encouraging for getting this standard applied; less so if you don't think it's good for the internet to offload an outsize share of control to 5-10 corporations.
Only in the sense that larger companies can afford to fund their employees to work on IETF standards groups, and are thus more likely to end up contributing, editing and managing IETF groups.
The funding is not large, though: no dues, no grants, just the time that people spend and airfare/hotels up to three times a year -- usually two of those being inside the US. And it's possible to have quite a lot of impact simply by volunteering your time in a group in which you have some expertise.
If you are not willing to do anything significant about it, then at least push people not to trade simplicity and ease of use for blockchain in the short term, to give the technology a chance to diverge into numerous protocols and subsystems.
I think you underestimate how much email Yahoo (still) handles. Maybe add Apple to the list, but Verizon is nowhere near the volume / user count of Yahoo.
Can't we please make this reasonably generic rather than SMTP-specific?
There are a lot of protocols (say XMPP, IRC or IMAP) around that have opportunistic encryption with STARTTLS-like semantics. Surely, they could all benefit from a similar solution.
SMTP needs this so desperately that it's worth making a customized solution - if there is a chance that making the standard generic would slow its adoption or development, it's probably best to do it this way. None of the others underpin, necessarily, a global communication infrastructure that powers the world of business.
There are a lot more protocols that would benefit from the more generic and much simpler solution of "define a standard port where the service listens over TLS".
Excellent! Strict transport security for SMTP has been sorely needed. Once it's widely deployed, it will finally ensure that virtually all email communication occurs over TLS.
TLS is already widely supported among large ISPs, but until now it has been far too easy to fall back to plaintext. There was no way for a receiving ISP to ask all senders to use TLS and never fall back, which meant fallback still happened occasionally. That's why Google's "Safer Email" transparency report shows many senders at 99% TLS, rather than 100%:
As receivers deploy strict TLS policies, and as senders add support, I expect to see these numbers reach 100% and stay there. I look forward to having strict TLS support in place in our email systems. Nice job making this happen, folks who contributed!
We are working to switch Haraka to use TLS by default in all systems. A big part of that is adding support for letsencrypt, so that we can trivially create a secure outbound certificate for every mail server. This stuff is important, and we need to make email secure.
It would be nice to also consider completely dropping support for non-TLS SMTP.
Unlike HTTP where connecting to abandoned non-TLS websites can be useful, e-mail is only useful if someone is reading it, and if they are reading it they can take action to enable TLS.
Well, we missed out on mandatory encryption in HTTP/2, despite that requiring the webserver to turn it on, so for some reason we still have a lobby for unsecured communications.
Except browsers won't implement non-TLS HTTP2. So for all the use cases that matter, it's fine. Another good example of implementation trumping "standards".
Read the draft dude. STS is designed to interoperate with DANE.
Also, if you deploy your STS info via DNS, you'll need DNSSEC validation to ensure that what you read in DNS is actually what the zone holder put there. Without DNSSEC, how does a sending SMTP daemon trust the STS policy of the recipient zone? STS policies are stored in DNS TXT records, or their own new RR.
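Whatever record the policy lands in, a sender has to parse it before it can honor it. As a rough sketch, the field names below are illustrative (the draft defines its own exact syntax); this only shows the semicolon-delimited key=value shape familiar from SPF/DKIM/DMARC-style TXT records:

```python
def parse_sts_txt(txt):
    """Parse a semicolon-delimited key=value policy string.

    Field names like 'v' and 'mode' are illustrative here, not the
    exact draft syntax; the general shape is shared with SPF, DKIM
    and DMARC records.
    """
    fields = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            fields[key.strip()] = value.strip()
    return fields

# Example: parse_sts_txt("v=STSv1; mode=enforce")
# yields {"v": "STSv1", "mode": "enforce"}
```

Without DNSSEC, of course, nothing stops an on-path attacker from tampering with the record before it is parsed, which is the point being made above.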
Please don't be rude. STS doesn't preclude DANE, but it doesn't require it either. Meanwhile, this use case was the last remaining one motivating DANE at all.
The draft is very clear about DANE being preferred from both a security and deployment perspective (it doesn't require getting certificates for all hosted domains).
Sure, if DNSSEC fails, then SMTP-STS is better than nothing.
I'm pretty sure the big providers could have deployed DNSSEC relatively easily, if they had just wanted to.
I think other DNS-based email security features such as DKIM and DMARC motivate DNSSEC as well.
Why? They solve different problems. DNSSEC/DANE moves the TLS CA role from various corporations into the hands of the DNS hierarchy. STS just says to use TLS (with this cert, but you can only know the cert is OK to begin with because of a CA, or DANE).
Because the only remaining reason to deploy DNSSEC is to exploit DANE to allow SMTP MTAs to force TLS. All the other uses of DNSSEC are already DOA.
It's a little tricky to explain why this is the case without getting into a lot of gritty detail. A shorthand answer is that browser vendors have, pretty much as a group, decided not to adopt DANE (the DNSSEC-based alternative to the X.509 CA system), and DANE is the only reason anyone cares about DNS security on an Internet where everything is going to be encrypted (usually with TLS) by default.
No, SSHFP is pretty silly as a motivating use case for DNSSEC. There's nothing it does that you can't do (better!) with some other system. It makes sense if you already have a secure DNS. But that raises the question of why you would need a secure DNS.
You don't need a secure DNS for the web.
You don't, with STS deployed, need a secure DNS for email.
Is publishing SSH key fingerprints so important that we should do a forklift upgrade of the DNS? No, of course not.
STS information is stored in DNS records. Why would you not want to secure those?
Also, I think you're underestimating the growth of DNSSEC deployments. I've been watching DNSSEC growth for about 2 years and it is steadily moving up and to the right.
It has taken two decades to get to this point. The protocol has been substantially revised four times, and, after each of those four revisions, DNSSEC proponents said "now, we've got it right, and it's ready for universal deployment".
It is nowhere near ready for universal deployment today, and, indeed, virtually nobody relies on it, unlike TLS.
DNSSEC ain't pretty. I'll agree with you there, but that doesn't mean it isn't the best thing going for us in terms of securing the DNS. Like it or not it's important to be able to trust DNS responses.
I guess I don't have some expectation that deployment should take place quickly. Or that the first go at a protocol is going to always get it right. Just because a journey is difficult doesn't mean the journey isn't worth taking.
DNSSEC and TLS are unrelated. They're trying to solve different problems.
It absolutely is a legitimate use case, and would IMO improve security for the type of developers who do use SSH but don't know what a fingerprint is. It would also make it easier to create secure connections for non-developers, e.g. an SFTP/SCP-based file sync app.
I don't have good numbers for signing of zones. Although we do know that these numbers are increasing. Cloudflare recently turned on DNSSEC for all of their clients.
> It is a selling point in that it simplifies the protocol, making it easier to implement.
In the current protocol the only difference between offline and online signing is in how authoritative servers are implemented. There the differences aren't all that significant (relative to building an authoritative server as a whole) -- the same data needs signing, etc -- so please explain how your protocol makes implementation simpler.
> Obviously it would not be the only difference.
Can you please expand on that? You've said you've been thinking of a DNSSEC2 proposal for a while so you must have more to say.
I forgot to mention that one of the known problems of DNSSEC is the DNS zone information disclosure problem. Online signing would eliminate that problem and reduce complexity, since the protocol would no longer have to be designed around records that make enumerating a zone's contents possible. Of course, DNSSEC2 would probably only allow ECC signing too (I suggest both NIST curves and Curve25519 as options).
Online signing and white lies are already in DNSSEC. What you've described so far doesn't seem to be different to the status quo. Even deprecation of older cryptographic signatures is being worked through at the moment.
You have said you have been thinking of a DNSSEC2 proposal for a while, that your protocol differs in multiple ways and that it is easier to implement. Given this you should be able to concretely articulate how your protocol differs and where its advantages are gained. Thus far you have not shown the ability to do this.
Give me a solid anchor and a lever that is long enough and I can move the earth.
The problem with all modern crypto technologies is:
1) the CPU burned on crypto is a nice lever for DoS (and not all our mail requires crypto; seriously, 60% of it is spam);
2) they do not solve the Two Generals problem.
Yes, two auth factors/channels seem nice. The problem is that they stay on the same plane: it is two inbound channels.
You know, the last time US engineers thought they were being smart, kids learned to use a $1 Cap'n Crunch whistle to phone for free all around the world.
Actually, I don't mind; I'm broke and will happily take whatever free stuff this kind of engineering offers me.
Interesting to see the list of contributors from the various companies: Google, Inc.; Yahoo!, Inc.; Comcast, Inc.; Microsoft, Inc.; LinkedIn; and 1&1 Mail & Media Development & Technology GmbH. One of my goals is to contribute to an RFC.
Is there a TL;DR version of how adding SSL will make it a whole lot better? Sorry to be dumb on this subject, but from my understanding of how email (in particular SMTP) works, the whole model seems busted to begin with.
Are we still basically sending plaintext, unsigned messages using something akin to the pony express? Does it matter that the pony express carriers communicate securely so no bad guys can snoop the carrier's bag of messages? At least in the old days you could put a wax seal on the letter to know it was legit and not tampered with. With email, the entire system is flawed from the get-go.
SMTP goes over port 25 between major senders (there are other ports but forget that for now).
Since it uses a single port, you can't distinguish between encrypted and plaintext communication until you know each end supports encryption. A dated philosophy, but people don't upgrade their email servers as often as they do their web browsers, so it made sense at the time.
Since the receiving server advertises "yes, I can do STARTTLS" in plaintext, it's easy for a man in the middle to intercept, say "no encryption here", and have the mail go through anyway.
Even if the receiving end says all mail must arrive over TLS, the man in the middle can currently circumvent that by receiving in plaintext and forwarding onwards via TLS.
This is an RFC to try and prevent that happening.
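For illustration, here is roughly what an opportunistic client looks like using Python's smtplib, showing exactly where a stripped STARTTLS advertisement causes a silent downgrade. This is a sketch, not a production MTA:

```python
import smtplib

def server_offers_starttls(ehlo_extensions):
    """True if STARTTLS appeared in the (unauthenticated) EHLO reply."""
    return "starttls" in {e.lower() for e in ehlo_extensions}

def send_opportunistic(host, mail_from, rcpt_to, body):
    # Connect in plaintext on port 25, as MTA-to-MTA SMTP does.
    server = smtplib.SMTP(host, 25)
    server.ehlo()
    if server.has_extn("starttls"):
        server.starttls()   # upgrade the connection to TLS
        server.ehlo()       # re-EHLO over the encrypted channel
    # If STARTTLS was absent from the EHLO reply -- whether because the
    # server lacks it or because a MITM stripped the advertisement --
    # we fall through here and the message goes out in plaintext.
    server.sendmail(mail_from, rcpt_to, body)
    server.quit()
```

The whole downgrade hinges on that one `if`: the decision to encrypt is made from an advertisement received over an unauthenticated plaintext channel.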
This stuff is hard, and email nerds (via MAAWG and various other places) have been working on it for years. We don't want to break your current email service, and bringing things up to speed without breaking a ton of eggs is difficult; email has been hurting for a long time because we spent too long stopping spam instead of thinking about these problems. Sorry!
I don't think the use of a single port is really at the heart of the problem. Even if SMTP with TLS ran over port 26 (say), you wouldn't know if a timeout on port 26 meant the server wasn't listening on port 26 or a MITM had just chosen to drop your packets.
Discovering whether someone supports Protocol++ when the fallback to Protocol is insecure is a hard problem.
Agreed; the idea that the carrier should be trusted with, or for that matter responsible for, the security of a message is dated, and a largely meaningless security measure. As long as the message can be read by anyone other than the sender and intended recipient, it should be assumed insecure.
Why do we need to add support for protocols individually for this? Can't we just introduce a TLS extension and an API for detecting these "always use TLS" requests in OpenSSL etc.?
At least in email's case, the most common pattern is for SMTP connections to begin in plaintext and then be upgraded to TLS. This involves the client sending a command to the server inquiring about the server's protocol extensions. The server advertises STARTTLS [1], and then the client invokes the command to upgrade to it. These interactions are all SMTP-specific, and so the SMTP client and server both need to be aware of it. (The client should presumably keep trying to upgrade to TLS, and not send the message plaintext; and a server one day might reject connections that haven't been upgraded.)
For protocols where the common pattern is to establish a connection using TLS from the beginning, it would be easier to do something like what you're describing.
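For contrast with the upgrade dance above, an implicit-TLS connection is encrypted from the first byte, so there is no plaintext negotiation phase for a MITM to tamper with. A minimal sketch using Python's smtplib (port 465 is the conventional SMTPS port; the hostname is whatever your service uses):

```python
import smtplib
import ssl

def connect_smtps(host, port=465):
    # Implicit TLS: the handshake happens before any SMTP commands,
    # so there is no STARTTLS advertisement to strip.
    ctx = ssl.create_default_context()  # verifies the server cert
    return smtplib.SMTP_SSL(host, port, context=ctx)
```

The remaining problem, as noted elsewhere in this thread, is discovery: a sender still can't tell a server that doesn't listen on the TLS port apart from one whose TLS port is being blocked.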
You upgrade to TLS later. But only if the server says it can do that.
A MITM attack can easily say that the receiver can't do TLS and thus can see everything that passes.
This RFC specifies a way to prevent this, assuming clients abide by the requirements. It will be a long and slow roll out to get this done. But it is worth it.
This seems like the slowest possible way to make no useful change. Literally every SMTP-capable software package on earth supports SMTPS on a dedicated port, which requires no changes to the protocol. Why not just formalize and mandate this, instead of clinging to a braindead design that requires SMTP to munge the transport level? STARTTLS is a hack. It is not something to be preserved. If the goal is 100% TLS coverage, just mandate that (since everyone already supports it) and move on.
This kind of bureaucratic busywork is really, really frustrating. SMTP is nice and simple and does not benefit from this proposal except in the most abstract checklist-compliance sense.
If you required TLS on all SMTP, you would in fact end up having to fail a large number of messages.
Even worse, of the domains that support STARTTLS, a sizable number either don't present certificates that chain to a widely trusted root, or don't present certificates that actually match their MX. Worse still, because many domains' MXs don't match the domain itself, even if the certificate is trusted for the MX, it may not be trusted for the domain.[1]
So I think unfortunately we're not anywhere near a world where we could actually just drop email on the floor if TLS-with-a-valid-cert isn't present ("valid" not being clearly defined here, of course). I do think we're slowly moving in that direction[2].
It's certainly true that retrofitting security makes the whole thing more complicated, of course. That's a strong argument for ensuring that any retrofitting we do is itself forward-compatible with what we want to do in 30 years. Or for inventing time machines.
>Even worse, of the domains that support STARTTLS, a sizable number either don't present certificates that chain to a widely trusted root, or don't present certificates that actually match their MX. Worse still, because many domains' MXs don't match the domain itself, even if the certificate is trusted for the MX, it may not be trusted for the domain.
This is madness, though. TLS is transport-layer security. It's not Kerberos, and it was never intended to be. The cert presented by the MX should be valid for the MX. Trust is an illusion, but to the extent that you decide to trust anything in the CA system, you trust it for the MX only, and use SPF etc. to determine whether the MX is the correct one.
STARTTLS is a bad idea and should go away entirely. It forces an SMTP server to care about the transport layer, and that's entirely incorrect. I have plenty of SMTP servers that communicate on my networks without ever touching TCP/IP -- forcing them to support STARTTLS specifically is moving backwards.
This RFC is strongly dependent on the internet looking pretty much exactly like it looks today, and enforcing that mode of operation indefinitely. It's short-sighted and harmful to the entire email world.
But I think the authentication problem is in fact the hard problem. Assuming we got rid of STARTTLS (the actual verb) and just always did TLS (say, on some other port), how do you propose to solve it?
Fortunately, we have an extensible protocol that already supports service advertising and negotiation. There's no reason we can't have an AUTH module that works both ways (both the client and server mutually authenticate, independent of the transport-layer encryption).
I don't see a problem here. Just try TLS first always. If it fails and you haven't seen a STS flag for the server, then try plaintext. This should work with any protocol. There's not a whole lot of gain from doing this, except SSL client libs could provide an API for it rather than having to munge every higher protocol to support some kind of STS mechanism.
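The "try TLS first, only fall back when no STS policy is cached" logic can be sketched in a few lines. The connection functions are injected here purely so the policy decision is visible on its own; names and signatures are made up for illustration:

```python
def connect_with_sts(host, sts_hosts, try_tls, try_plain):
    """Try TLS first; fall back to plaintext only when no STS policy
    has been seen for this host.

    sts_hosts is the set of hosts with a cached STS policy; try_tls
    and try_plain stand in for real socket-opening functions.
    """
    try:
        return try_tls(host)
    except OSError:
        if host in sts_hosts:
            raise  # cached STS policy: never downgrade to plaintext
        return try_plain(host)
```

Note that without the cached-policy check, this scheme is exactly the insecure opportunistic fallback the draft is trying to eliminate: an attacker who can make TLS fail gets plaintext.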
"SMTP STS relies on the certificate authority (CA) system and a trust-on-first-use (TOFU) approach to avoid interception. The TOFU model allows a degree of security similar to that of HPKP [RFC7469], reducing the complexity but without the guarantees on first use offered by DNSSEC."
Also, can be used with DANE for additional security.
If, like me, you believe the only real-world adversary for large-scale mail transport between places like Google and Yahoo is state-level actors and dragnet surveillance, then deploying DANE reduces your security, by tying you to trust anchors controlled by world governments.
DANE is generally a bad idea, and it's great that STS is explicitly proposed as an alternative to it, not an application of it.
The CA system already uses the DNS as a trust anchor, and will continue to do so as long as DV certs are the standard. DANE likely would not have enabled any form of attack that isn't already possible.
What DANE would have done is end the madness that allows me to hack your wordpress blog for a day and then go get myself a certificate for your domain, for free, that doesn't expire for three years. A certificate whose issuance you will never be able to detect, and which you cannot easily revoke if you do.
And STS isn't an 'alternative' to DANE. The two are completely orthogonal.
Arguments like this presuppose that the TLS security system is exactly as it was in 1999. But, of course, it isn't: major sites all pin certificates now, so that if you're well-equipped enough to subvert a CA and you use it to try to subvert a major site, there's a very good chance that literally all you'll accomplish is getting that entire CA blacklisted.
STS and DANE aren't orthogonal. DNSSEC/DANE offers such marginal value to HTTPS that the browsers that once offered pilot support for it have now eliminated that code; it is off the roadmap for the web. The remaining motivating use case for DNSSEC was SMTP security; DANE was a way to ensure that MTAs used secure transports and avoided downgrade attacks.
But STS does the same thing without requiring the forklift replacement of DNS that DNSSEC requires.
> Arguments like this presuppose that the TLS security system is exactly as it was in 1999
For the majority, it is. The % of sites that deploy all of the latest HTTPS bells and whistles, even just looking at the minority of sites that actually enable TLS, is in the low single digits. Only 8% of sites surveyed by Qualys SSL Labs... sites that are presumably run by people who care about security, unlike my bank, even bother to enable HSTS.
> major sites all pin certificates now
My bank doesn't. Enabling HPKP and HSTS is just risky from a commercial point-of-view. People aren't good at key management. If you botch it, you make your site inaccessible and you lose customers.
People screw it up all the time. I've done it. I was speaking to someone just today who was over-zealous with 'includeSubdomains' and made their product blog inaccessible (it was on a blog subdomain, which had an invalid TLS setup because wordpress.com don't support TLS on custom domains). It happens.
> if you're well-equipped enough to subvert a CA
Strawman; I never held this up as a threat. In any case, it's a bad defence of PKI. Being able to choose from N CAs doesn't help you. You only have to subvert one CA to own the world, just like DANE, where an attacker only has to break your registrar, or the domain registry, and you're toast. The 'world governments' you fear have the capability and then some. Irrelevant.
The threat I explicitly mentioned is that if I own your DNS, I can get myself a cert for your domain. This has the same threat model as DANE (if "owning your DNS" involves me compromising your DNS server and getting your DNSSEC keys, or guessing your domain registrar password), except DANE doesn't depend on horrible, ineffective, and privacy-invasive hacks like OCSP... or allow me to simply keep certificates for domains I no longer own. DNS caching effectively gives you the equivalent of OCSP stapling for free.
> major sites all pin certificates now
TLSA records have all the configurability and capability of HPKP and more, and can be applied to any protocol/endpoint. They even cross the aisle to work with the CA system if you so desire, unlike HSTS+HPKP, which has become hostile to anything self-signed.
> STS and DANE aren't orthogonal
They are. DANE relates to key-pinning via TLSA records, not HSTS.
> forklift replacement of DNS that DNSSEC requires.
DNSSEC is backward compatible. You're talking twaddle. Enabling it these days, if you control your own DNS server, takes seconds. With EC digital signatures, it's even sensible. Setting up HPKP+HSTS+OCSP stapling is far more fiddly... and stuck in the RSA stone age.
TLSA records do not have that capability, and, more importantly, the roots of trust for TLSA records are organizations controlled by governments. The "Five Eyes" partnership can replace signatures for any DNSSEC domain in .COM, .NET, .ORG, .EDU, .UK, .AU, and .IO.
It seems crazy to me that, after years of hyperventilating about the implications of the Snowden disclosures, anyone could take DNSSEC seriously. But people do!
At this point, I'm just recapitulating things I've already written, so:
Your arguments do not make sense as a coherent whole. No single one may be wrong in itself, but the problems described are either not inherent to secure DNS, or are much worse in every other proposal (including keeping today's system).
1. Your main argument is that the NSA and its cohorts have control over a handful of the many top-level domains available. But the same control that would allow them to take over domains and generate valid DNS signatures for them also allows them to generate valid certificates in every proposed global PKI system, including today's. There is literally zero difference in attack space here.
2. Several of the current CA institutions are under government control. Any one can generate a valid certificate for any domain. It can be done without a trace of evidence. The same active attack against a DNSSEC signed TLD would by necessity be much more visible.
3. Given that DV PKI is what TLS relies on, any adversary that can modify DNS packets in-flight can also create a valid TLS certificate today. We know these attacks are taking place from the Snowden documents.
4. An attacker can choose which CA to attack, and pick the one that is easiest to fool, does not participate in Certificate Transparency etc. There are literally hundreds to choose from. You can mount an attack in advance. Stuxnet suggests this is routine.
5. There are no alternatives that solve what secure DNS does. Key pinning and HSTS are important, but offer a trust-on-first-use model that can at best be complementary to a PKI. It is also important to note that HSTS shares the same deployment problems DNSSEC does: one mistake and your web server and domain are inaccessible. That is the reason none of the banks offer either HSTS or DANE. At least my bank signs their domain, so their step should be small.
The smoke-and-mirrors argument that DNSSEC gives governments control over "their" top-level domains, when in fact it makes the scope of their control much smaller and more well defined, is what you would expect from someone who wants to maintain the status quo as long as possible.
1. That example is not very useful. It's hard to blacklist .com, but it's even harder to blacklist Verisign. (Which by the way runs all of .com, and they have full power to delegate any domain and any certificate to anybody. This is not a theoretical attack.)
2/3. If you mean the FAQ you wrote yourself, it's misleading at this very point.
4. Pinning is useful, but it isn't a PKI. It's equally useful no matter how you issue certificates.
5. Domain name delegation is one of the Internet's weakest points today. Cryptographic assurance of domain ownership would be very useful for a number of reasons.
Forged DNSSEC replies are inherently more public than forged TLS certificates, anybody can log the results and publish them.
And by doing so, the world would have evidence that somebody in the trust chain for that TLD has been lying. For TLS, that could be any CA in the world, which means that the number of single points of failure for services on DNSSEC is way lower. With DNSSEC you can at least choose who has the capability of forging results; with TLS alone, that's all of the CAs.
And why couldn't one combine the approaches anyway, using DNSSEC+DANE with certificate pinning? How would that possibly reduce security vs using standard DNS?
I agree that DNSSEC is a flawed technology and needs to be replaced, but I thought it could provide additional security (without doing harm) to STS against, for example, a malicious provider who could otherwise strip or modify the STS-related records in DNS traffic.
Importantly, just as with HSTS, once you successfully retrieve a counterparty's STS data, it's no longer straightforward for an ISP to strip that data from the DNS; it's cached, like a cookie. Which means as well that everyone who deploys STS will potentially join a collaborative anti-surveillance surveillance network (also one of the neat things about TLS certificate pinning).
> When deployed alone (i.e. without a DANE record, and using Web PKI for certificate verification), SMTP STS offers the following disadvantages compared to DANE:
>
> o Infrastructure: DANE may be easier for some providers to deploy. In particular, for providers who already support DNSSEC, SMTP STS would additionally require they obtain a CA-signed x509 certificate for the recipient domain.
>
> o Security: DANE offers an advantage against policy-lookup DoS attacks; that is, while a DNSSEC-signed NX response to a DANE lookup authoritatively indicates the lack of a DANE record, such an option to authenticate policy non-existence does not exist when looking up a policy over plain DNS.
do the same for the fucking DNS protocol, which is clear-text; it's the weapon of choice for all the surveillance/censorship freaks, and nobody cares about it. And by 'nobody' I am not referring to actual random Joe users, but to you, the mastermind developers who create stuff nobody cares about, hosted on .io domains just because it's cool and looks like some start-up that's about to be the next big thing.
Please fix the DNS crap and you will be remembered in the history of those who gave a damn about totally broken stuff.
you do realize that not even 0.001% of people use that, right?
DNS has to be encrypted by default for everyone. As in client -> recursor. Without any 3rd party software.
Then you must ask the OS vendors to include it, not startup developers.
And in any case, encrypting DNS won't solve the surveillance issue, since the HTTPS handshake sends the domain in clear-text, so that the server knows which certificate to send (https://en.wikipedia.org/wiki/Server_Name_Indication)
The client would still send the domain, since it doesn't know whether the server has only one certificate (the domain is sent in the first message, the ClientHello).
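A small illustration with Python's ssl module: the hostname passed as server_hostname is what populates the SNI field of the ClientHello, which goes out before any encryption is negotiated, so it is visible to an on-path observer even if DNS itself were encrypted. The connection helper is a sketch; host and port are placeholders:

```python
import socket
import ssl

# Modern ssl modules always support SNI.
assert ssl.HAS_SNI

def open_tls(host, port=443, timeout=10):
    ctx = ssl.create_default_context()
    raw = socket.create_connection((host, port), timeout=timeout)
    # server_hostname fills in the (cleartext) SNI extension, so the
    # requested domain crosses the wire unencrypted in the handshake.
    return ctx.wrap_socket(raw, server_hostname=host)
```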
Unlike DNSSEC (which does not solve the problem the root comment complains about), DNSCurve/DNSCrypt doesn't require universal deployment to function. The 0.001% of people who use DNSCurve get most of the benefits of DNSCurve, despite being in a tiny minority.
The fact that DNSSEC is not universally deployed, yet secures important infrastructure in big organizations, would suggest your argument is false. In practice it is a bit more complex as DNSSEC is globally deployed and widely available, and that's a large part of the reason you can trust it. Any global initiative takes a full ten years to deploy as we've repeatedly seen.
What is the most important resource secured today by DNSSEC?
I can name thousands of critically important resources that do not use DNSSEC. For instance: any credit card transaction you make on the Internet will not, at any step in the process, involve DNSSEC. The same is true of any stock order at any retail brokerage, or any FIX connection between an exchange and a broker/dealer.
I've worked with banks and financial institutions for quite a while, and I would never pretend to be that sure about one bank, let alone all of them.
You picked a bad example. I can assure you that the clearing following that very card transaction would involve DNSSEC at least at one point, at least at one bank.