HTTP/2.0 — Bad protocol, bad politics (acm.org)
219 points by ibotty on Jan 7, 2015 | 207 comments



"HTTP/2.0 is not a technical masterpiece. It has layering violations, inconsistencies, needless complexity, bad compromises, misses a lot of ripe opportunities, etc."

I wish the article had spent more time talking about these things rather than rambling about "politics".

"HTTP/2.0 could have done away with cookies, replacing them instead with a client controlled session identifier."

That would have destroyed any hope of adoption by content providers and probably browsers.

"HTTP/2.0 will require a lot more computing power than HTTP/1.1 and thus cause increased CO2 pollution adding to climate change."

Citation? That said, I'm not particularly shocked that web standards aren't judged on the power-grid impact of whatever computing devices may end up using them, or on those power grids' choice of energy sources.

"The proponents of HTTP/2.0 are also trying to use it as a lever for the "SSL anywhere" agenda, despite the fact that many HTTP applications have no need for, no desire for, or may even be legally banned from using encryption."

In the same paragraph, the author complains that HTTP/2.0 has no concern for privacy, and then that its designers attempted to force encryption on everybody.

"There are even people who are legally barred from having privacy of communication: children, prisoners, financial traders, CIA analysts and so on."

This is so close to "think of the children" that I don't even know how to respond. The listed groups may have restrictions placed on them in certain settings that ensure their communications are monitored. But this doesn't prevent HTTP/2.0 with TLS from existing: there are a variety of other avenues by which their respective higher-ups can monitor the connections of those under their control.


"HTTP/2.0 could have done away with cookies, replacing them instead with a client controlled session identifier."

In fact there's no need for this to be tied to HTTP/2.0 at all. Alternate systems could be designed without regard to HTTP/1.x or HTTP/2.y; they just have to agree on some headers to use and when to set them.

Making these kinds of changes as part of a new version of HTTP would just be bloat on an already bloated spec; it is actually a good thing that the spec writers did not touch this!


> there's no need for this to be tied in with HTTP/2.0 at all.

Not only that, tying cookies to HTTP/2.0 would be a layering violation! The cookie spec is a separate spec that uses HTTP headers, and it also explicitly says that cookies can be entirely ignored by the user-agent.


This is not equivalent to cookies.

Cookies allow you to specify arbitrary data, not just a session ID, so you can set preferences without having to hit a database, or even serve a completely static page and use JavaScript to read the settings.

Cookies as they are now already allow you to control persistence. You can edit or delete them however you wish.


Except that deleting/not allowing cookies will make most of the things we use every day unusable. Including this website.


Anyone have a link where I can read about these 'client controlled session identifiers'? I can't picture an approach to web app sessions which isn't essentially equivalent to a cookie (in terms of privacy, and ability of the client to control persistence).


It's really very simple:

Instead of all the servers dumping cookies on you, you send a session-id to them, for instance 127 random bits.

In front of those you send a zero bit if you are fine with the server tracking you, and you save the random number so that you send the same one every time you talk to that server. This works just like a cookie.

If you feel like you want a new session, you can pick a new number and send that instead, and the server will treat that as a new (or just different!) session.

If instead you send a one bit in front of the 127 random bits, you tell the server that this "session" ends once you consider it over, and that you do not want them to track you.

Of course this can be abused, but not nearly as much as cookies are abused today.

But it has the very important property that all requests carry a single fixed-size field to replace all the cookies we drag across the net these days.
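
A minimal sketch of the scheme described above, assuming the 1 + 127 bit layout and a hypothetical fixed-size header to carry it (none of these names come from any spec):

  import secrets

  def new_session_id(allow_tracking: bool) -> int:
      # 1 consent bit + 127 random bits, as described above:
      # leading 0 = "you may keep tracking this session",
      # leading 1 = "forget it once I consider it over".
      random_bits = secrets.randbits(127)
      consent_bit = 0 if allow_tracking else 1
      return (consent_bit << 127) | random_bits

  # The client stores one value per server and resends it on every request,
  # e.g. in a hypothetical fixed-size header; picking a new random number
  # starts a new session as far as the server is concerned.
  sid = new_session_id(allow_tracking=True)
  print({"Session-Id": f"{sid:032x}"})  # 128 bits -> 32 hex chars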


It also forces the session to be tracked on the server. Many apps use cookies to keep the session in the browser. Maybe deprecating the document.cookie API, so that people move to local storage, is a good first step?

More critique of cookies: https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies


Seems like anything a server could do via cookies, they could also do via your client-generated session id.

Servers that currently drop lots of individual cookies on you would just start dropping that data into the server-side session data. In either case they can tie your client to the same persistent information. Same for situations where javascript sets or gets cookie values - these could all be achieved via ajax, storing server-side against your session id.

If anything, it possibly reduces my options as a client. In situations where a site previously dropped lots of cookies on me, my only options now are to completely close my session or persist all data, whereas before there were situations where I could maintain, say, my logged-in user session while removing the "is_a_jerk=true" cookie.


Well, yeah, for a basic implementation it's rather easy to track like that. What about a system where the unique id returned is based on a random value hashed with the domain name of the window? Then third-party trackers would get a different id from you on each site, but your sessions would be stable on the site itself.
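
A rough sketch of that idea, assuming a per-browser secret and HMAC-SHA256 as the hash (the names are made up; "domain" here is the top-level window's domain, so embedded third parties see a different id on each site):

  import hashlib, hmac, secrets

  # Sketch of a per-site session id derivation, not any real spec.
  # Long-lived secret kept by the browser, never sent anywhere.
  browser_secret = secrets.token_bytes(32)

  def session_id_for(window_domain: str) -> str:
      # Same site -> same id every visit; two different sites can't correlate them.
      digest = hmac.new(browser_secret, window_domain.encode(), hashlib.sha256)
      return digest.hexdigest()[:32]  # truncate to a fixed-size field

  print(session_id_for("example.com"))    # stable id for example.com
  print(session_id_for("othersite.net"))  # unrelated id for another site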


True, it would make it difficult to do third-party tracking across multiple sites (unless there was server-side information connecting your accounts, like an email address). And that's leaving aside browser fingerprinting.

That could also just be achieved by disallowing third-party cookies, though - it feels on the cusp of being a browser implementation problem (just stop allowing third-party cookies).


This is the link to MinimaLT (it runs on top of UDP instead of TCP). As of today I haven't seen an implementation, and since MinimaLT is only a paper there are some ambiguous parts, but overall it looks like a really well-defined spec. It tackles at least the cookie issue, it supports tunneling, and it deals with DoS. The other real benefit is that it is RPC-based, which means the protocol is easy to extend (think about NFS and SSH alternatives) and maybe it could be the foundation of a new HTTP. AFAIK it doesn't deal with the CA certs / trust issue.

http://www.ethos-os.org/~solworth/minimalt-20131031.pdf


(Without being pedantic) are you aware of MinimaLT? It is really interesting technology (UDP).


I was in the middle of writing a response with a description of a client-provided UUID, with the scope and persistence specified by the client, but then I realized that I was basically describing a cookie. My version might have been better because it would more easily allow the user to control how the session ID is disseminated, but that would have been a feature of the HTTP client–not the protocol itself.


PHK discusses a lot of these points here: http://www.infoq.com/presentations/HTTP-Performance (especially his views on encryption)

(sorry for the repost, this wasn't posted when I posted the other one)


This is probably a better link: https://www.varnish-cache.org/docs/trunk/phk/http20.html There are a lot of fairly concrete issues in there, concrete suggestions for improvements, and concrete suggestions for better problems to focus on (it's hard to overestimate how important that is for proper engineering!), and it's in text instead of a 40-minute video.


I disagree; the link is also interesting, but as I said further down, the video holds an interesting discussion of his views on encryption, especially end-to-end, something the parent explicitly discussed.


So as a problem, he lists server push. What other proposals are there to get rid of stuff like spritesheets?


Just saw it. Basically he's an advocate of the "only bad guys want privacy everywhere" position.

I still have not seen a single proposal from him that is not "forget about crypto", or "let everything be tracked".

He doesn't even seem to understand how basic key exchange works, and all his arguments boil down to "I think they can break it anyway, so we should stop using it".

IMHO, forget about him. He has no proposal, and his understanding of security and crypto is downright dangerous.


You obviously haven't spent much time looking either, have you?

You could look at section 15 here for instance: http://phk.freebsd.dk/words/httpbis.html

With respect to key exchange, maybe the problem is that I do understand, and therefore know that there is a difference between privacy and secrecy?

Some places we want privacy, some places we want secrecy.

Mandating privacy everywhere makes it almost mandatory for police states to trojan the privacy infrastructure (i.e. CAs -- yes, they already did), and therefore we have neither privacy nor secrecy anymore.

PS: Dangerous for who ?

PPS: Google "Operation Orchestra" if you don't understand the previous question.


Privacy isn't a binary. Neither is secrecy. It's a function of relationships.

So the question isn't whether children have privacy, but against whom are they able to be private, in what regard? While I'm generally ok with parents knowing where their kids are, I'm not as ok with a random creep in the neighborhood knowing.

As for criminals and prisoners, it's far more granular than that. If I get caught shoplifting, does that suddenly mean all my medical records are up for grabs? If I am a prisoner, am I allowed to refuse to see someone who comes to visit me?


Police states would try to trojan the CA infrastructure anyway; it's too tempting a target.

Inspecting proxies work well enough with HTTP/1.1; there's no real need to improve them, and even if we do, it should be done via a proxy mechanism rather than by sending more things in plaintext than we do today.

Anyway, HTTPS routers can (and should) terminate SSL, and in that case they can even read the Host from SNI.


Re the environmental impact: a large contingent of internet-connected servers are web servers transacting with the HTTP protocol (be it spiders or producers or whatever). If it takes more computing resources to provide HTTP/2.0, then it would in fact require more servers, and thus increase energy consumption.

But given that large datacenter operators are already considering how to build greener/smarter operations to reduce their impact (and costs), it's clearly something to be mindful of.


> That would have destroyed any hope of adoption by content providers and probably browsers.

Bullshit, browser vendors have always led the way on web tech. Remember XMLHttpRequest (Microsoft)? How about Javascript or SSL, both invented by Netscape? A proper session layer is inevitable for HTTP. Websites are no longer stateless, making HTTP an ill-suited protocol. We should just accept it and move on.


I believe that browser vendors will push technology that benefits them. That's definitely not in dispute. But you need to convince them that HTTP/2.0 is beneficial to them, and I don't think that will be easy if content providers aren't serving content via HTTP/2.0.


Most of the enhancements I'm aware of in HTTP/2 are centered around SPDY, along with persistent connections. The upconvert mechanism for SPDY and WebSocket connections over HTTP/1.x is interesting. I kind of wish this had worked hand in hand with DNS, allowing DNSSEC publishing of public encryption keys for use with HTTP, with HTTP then negotiating against that key.

I also wish there were a layer between full encryption and plaintext: signed (but not encrypted) content, for things intended to be available over both http and https without the extra overhead.


What you want re public keys in DNSSEC-secured DNS is called DANE. Browsers are slow to implement it, though.
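
For the curious, a DANE association is published as a TLSA record under _port._proto.hostname. A rough sketch of checking one with dnspython (assuming its TLSA rdata exposes usage/selector/mtype/cert fields); a real client would also need to validate the DNSSEC chain:

  import hashlib, ssl
  import dns.resolver  # pip install dnspython

  host = "example.com"
  answers = dns.resolver.resolve(f"_443._tcp.{host}", "TLSA")

  # Fetch the certificate the server actually presents (PEM -> DER).
  der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, 443)))

  for tlsa in answers:
      # usage 3 / selector 0 / mtype 1 = SHA-256 of the full end-entity cert
      if (tlsa.usage, tlsa.selector, tlsa.mtype) == (3, 0, 1):
          print("match:", hashlib.sha256(der).digest() == tlsa.cert)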


If you need to have a stateful connection, why not just use WebSockets? In many other cases (e.g. asset fetching) HTTP/HTTP2 are fine.


> That would have destroyed any hope of adoption by content providers and probably browsers.

why adoption by browsers?


I'm less sure browsers wouldn't adopt, but if content providers are slow to adopt due to massive changes in how they handle sessions / tracking / everything else they do with cookies, browsers are not incentivized to adopt.

I suspect it would end up looking similar to IPv6: Unless ISPs are providing the majority of users with IPv6, software developers aren't well incentivized to support IPv6 in their software. Similarly to browsers, they'd lose very little for supporting the new standard now, but the gains are low while adoption by the other group is low.


I don't see how this should stop us from fixing legitimate problems. As far as I know, IPv6 adoption is not a software problem at this point.


You are not fixing legitimate problems if the new spec never gets adopted by anyone.


That's not really my position to represent though. As an engineer you have to be firm if the issue is worthwhile. Just because the tech industry is largely a vertical oligopoly doesn't mean we should support it. Technology would be a better place if the parties opposing security, robustness, privacy etc. had to take an active stand against them rather than being able to hide behind vague design decisions. I do also think there are things that aren't worthwhile to fight for, though.


The new spec wouldn't get adopted if it didn't offer anything better. See: IPv6


"won't". It hasn't been adopted by even a significant portion of the internet yet, and based on these discussions I'm fearful it will turn out exactly like IPv6.


> The same browsers, ironically, treat self-signed certificates as if they were mortally dangerous, despite the fact that they offer secrecy at trivial cost. (Secrecy means that only you and the other party can decode what is being communicated. Privacy is secrecy with an identified or authenticated other party.)

I'm frustrated to read this myth being propagated. We should know better.

In the presence of only passive network attackers, sure, self-signed certs buy you something. But we know that the Internet is chock-full of powerful active attackers. It's not just NSA/GCHQ, but any ISP, including Comcast, Gogo, Starbucks, and a random network set up by a wardriver that your phone happened to auto-connect to. A self-signed cert buys you nothing unless you trust every party in the middle not to alter your traffic [1].

If you can't know whom you're talking to, the fact that your communications are private to you and that other party is useless.

I totally agree that the CA system has its flaws -- maybe you'll say that it's no better in practice than using self-signed certs, and you might be right -- but my point is that unauthenticated encryption is not useful as a widespread practice on the web.

Browser vendors got this one right.

[1] Unless you pin the cert, I suppose, and then the only opportunity to MITM you is your first connection to the server. But then either you can never change the cert, which is a non-option, or otherwise users will occasionally have to click through a scary warning like what ssh gives. Users will just click yes, and indeed that's the right thing to do in 99% of cases, but now your encryption scheme is worthless. Also, securing first connections is useful.


Yes, pinning the cert (i.e. TOFU - Trust On First Use) is exactly the right way to treat self-signed certificates, and under that model they offer real security. The idea that you can't do anything with self-signed certs and that nothing makes them okay is a much more troublesome untruth, IMO.

Rejecting self-signed certs and only allowing users to use the broken CA PKI model is the wrong choice. Browsers didn't get it right. The CA model is broken, is actually being used to decrypt people's traffic, and though your browser might pin a couple of big sites, it won't protect the rest very well by default. It's a bad hack and we should fix the underlying issue with the PKI. I believe moxie was right: a combination of perspectives + TOFU is the way to do this.

Something else that works like this, that we all rely on, and that generally seems more secure than most other things we use: SSH.
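
For reference, the SSH-style TOFU check is only a few lines; a sketch with an in-memory pin store (a real client would persist it, and ssl.get_server_certificate here just fetches the leaf cert without any CA validation):

  import hashlib, ssl

  pins = {}  # host -> hex fingerprint; persist this in real life

  def check_tofu(host, port=443):
      pem = ssl.get_server_certificate((host, port))
      fp = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()
      if host not in pins:
          pins[host] = fp      # first use: trust and remember
          return True
      return pins[host] == fp  # later uses: must match the pinned cert

  print(check_tofu("example.com"))  # pins on first contact
  print(check_tofu("example.com"))  # False only if the cert (or a MITM) changed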


>TOFU - Trust On First Use

The scenarios where this works are pretty limited - pretty much only a server you set up yourself. Jane User has no idea if the first use of ecommercesite.com is actually safe. You generally do, because you have out-of-band access to that server to see the key or key fingerprint. Even that can be thwarted by a clever MITM attack.

>Things that also work like this that we all rely on and generally seems more secure than most other things we use: SSH.

Yeah, that's generally for system administrative access, not general public access for the web. For that kind of access, you need higher safeguards, thus the CA system we have today.


I'm not defending the CA model; there are lots of known problems. But using a CA-signed cert before pinning it is way better than blindly trusting whatever cert you get on your first connection, which is the OP's suggestion wrt SSCs.

> Things that also work like this that we all rely on and generally seems more secure than most other things we use: SSH.

When was the last time you verified a certificate's fingerprint out of band when connecting to a server for the first time? Maybe you're the kind of person who scrupulously does this, but in my experience even paranoid computer types don't, to say nothing of regular people.

> I believe moxie was right, a combination of perspectives + TOFU is the way to do this.

+1, it's hard to go wrong listening to Moxie.


Self-signed certs are about making NSA & friends work for it, rather than giving them a free ride.

Today they can grep plaintext as they want. With SSCs they would have to pinpoint what communication they really need to see.

That would be a major and totally free improvement in privacy for everybody.

As for why the browsers so consistently treat SSCs as Ebola: I'm pretty sure NSA made that happen -- they would be stupid not to do so.


Given that one can get a CA-signed certificate for free, and that in any case the cost of a cert is dwarfed by the cost of running a nontrivial website, I think you overestimate the number of sites which would be "secured" in this manner and the gains to be had.

But the important point is what we'd lose: If a UA accepts a SSC for an arbitrary website, then the NSA can actively MITM a website that uses a CA cert -- the browser will never see that CA cert, so it doesn't know better.

The only way around this would be to accept SSCs but treat them as no different from plain HTTP in the UI. But now websites have no incentive to use the certs in the first place, further limiting the benefits to be had.

Really, just buy a certificate. It's not that hard.

> That would be a major and totally free improvement in privacy for everybody.

Tangentially, I feel like you're trying to have it both ways in arguing for more encryption on the web and also against mandatory encryption on the grounds of energy efficiency. SSCs wouldn't be "totally free" under your model -- we'd be spending carbon on it. I might argue that if one is going to pollute to encrypt their data, it would be unethically wasteful to use a SSC, which comes at exactly the same environmental cost as a CA cert but offers much weaker guarantees.


> The only way around this would be to accept SSCs but treat them as no different from plain HTTP in the UI. But now websites have no incentive to use the certs in the first place, further limiting the benefits to be had.

That, or make HTTP/2's non-secure mode always be encrypted. Which is what people like myself would like. Opportunistic encryption at zero hassle and with zero security problems.


If it were a foregone conclusion that HTTP/2 would have a non-secure mode, I'd agree, yes. But in practice HTTP/2 isn't going to have a non-secure mode in most browsers.

Maybe you're arguing that it would be better to have an insecure but encrypted mode in all browsers? Could be, but I don't think so. As it is, if a site wants the benefits of HTTP/2, they have to establish actually secure communication (inasmuch as CAs provide that). It seems good to me to use the performance benefits of HTTP/2 as a carrot to accomplish real encryption everywhere. If the costs of getting a cert were too high, then maybe this would leave many websites stuck on HTTP/1, which would be worse than having those sites use an encrypted-but-insecure mode in HTTP/2, but I don't think that will be the case.


>Maybe you're arguing that it would be better to have an insecure but encrypted mode in all browsers?

Pretty sure this is what we're doing today with SMTPS. We just wrap SMTP in TLS and call it a day. It's dangerous and allows for MITM attacks. I think there was a paper recently about how this is already being abused.

I just don't see where people who believe in self-signed certs as a solution to all our encryption woes are coming from. Historically and technically, the approach has shown itself to be a security nightmare for most use cases. I think people like this are more political than practical and think they can do non-trivial things without regulation, authorities, etc. Sorry, but that's just not how this world works.


Of course you are right re environmental pollution with either certificate type. PHK is also speaking about caching reverse proxies: why should you encrypt everything when you are inside your own private network?


If there are already so many ways to fingerprint via cookies, JavaScript, Flash, etc. that it probably doesn't matter, then why do cookies matter? That the EU parliament decided to pass some weird law is not much of an argument. They're not exactly known for their technological proficiency. And it did nothing for privacy. It only annoys people[1].

Sure, we could start with cookies. However, that would break a lot of the web with no immediate benefit.

On SSL everywhere (not "anywhere"), how much does it really cost to negotiate SSL/TLS with every single smartphone in the area? Supposedly, not much[2]. I run https websites on an Atom server.

Frankly, that was rather unconvincing. Although it does seem likely that the entire process was driven by the IETF trying to stay politically relevant in the face of SPDY.

[1] https://github.com/r4vi/block-the-eu-cookie-shit-list

[2] https://istlsfastyet.com/#cpu-latency


For what it's worth, the cookie law in the EU is far more general than cookies, and had a totally different origin. Originally, the law was meant to require explicit consent when installing something on an electronic device, to combat malware. However, some clever politicians later realized that setting a cookie also "installs" something on an electronic device, and thus what became known as the cookie law was passed.

If a webserver were to set some secure session identifier, the same laws would still apply -- just as installing software without explicit consent is covered by the same law.


Not if you are using the session identifier for its intended purpose; see the exceptions at: http://ec.europa.eu/ipg/basics/legal/cookies/index_en.htm#se...


We all know how broadly "intended usage" can be defined. Google has a cookie that is required for logging in to Gmail (clearly intended usage), but is reusing that exact same cookie for tracking purposes.

Until there is actual legal precedent from people suing businesses that abuse these abilities, I have no idea how to interpret these laws other than "they are very broad and vague".


I don't think the laws are vague so much as the way we use cookies today is. With different mechanisms for different purposes it would (could) be much more transparent to end users how things work. It would be more like the "save password" feature in various browsers. Of course, since all the major browser vendors also make money from ads, this isn't really in their interest.


Cookies are a historical band-aid. The implementation is a mess, cross-domain cookies require all sorts of hacks, and there's nothing you can do with a cookie that you can't do with a session identifier.

Also, SSL gets way more complicated when you are using a CDN.


You could say that most of the modern web is a historical band-aid. Here we are, 20+ years later, building "apps" on top of a document delivery system.


Remembering the later BBS days and their own GUI protocols for applications, I don't consider web applications really any worse... Realistically today, you can use WebSockets (or a safer abstraction, shoe/sock, socket.io, SignalR or otherwise) to act as an application channel, with static resources grabbed via HTTP(S).

In such a way, your application channel can maintain all the state it needs in that websocket... no cookies needed. The downside of the web today is the lack of a standard display interface/size... you have to work from a phone all the way to a 1080p or larger desktop or big screen display.

This ability to use the web and enhance things is exactly why the web is as pervasive as it is... if it weren't for such open, widespread capability we'd all be stuck with a natural monopoly (Windows) everywhere.


It's worked pretty well though, just sayin'.


Yeah, with AngularJS and REST I can finally approach the level of productivity that Visual Basic 4.0 had 20 years ago.


It works, yes, but with great effort. Think of all the development time spent grafting applications on top of what is fundamentally a document delivery system.

It's hack upon hack, kluge upon kluge.

Example: Do you think HTML/CSS/JS is the right way to develop a user interface for an application? I don't.


It's a damned sight better than Tk, wxWidgets, Swing, SWT, AWT, MFC, or any of a bajillion other things we've tried on the desktop.


Yeah, you're right, all those things are terrible, too. Can't we do better than what we have?


No, it hasn't. The web is a terrible, hacked-together mess. Things break often and catastrophically. HTTP 2.0 is layering on more hacks instead of trimming the fat.


The article is basically a rant. I was hoping the author would go more into layer violation issues.

Most of the interesting stuff in HTTP/2.0 comes from the better multiplexing of requests over a single TCP connection. It feels like we would have been better off removing multiplexing from HTTP altogether and adopting SCTP instead of TCP for the lower transport. Or maybe he had other things in mind.

> There are even people who are legally barred from having privacy of communication: children, prisoners, financial traders, CIA analysts and so on.

This argument is quite weak; SSL can easily be MITMed if you control the host: generate custom certs and make all the traffic go through your regulated proxy.


In the early days of SPDY there was an article comparing SPDY to HTTP/1.1 over SCTP. The short version was that, other than the lack of header compression, it got nearly all the wins of SPDY, except:

1) Unless tunneled over UDP (which has its own problems), it failed to work with NATs and stateful firewalls.

2) HTTP(s) only environments (e.g. some big corporations) would not work with it; SPDY will look enough like HTTPS to fool most of these.

3) Lack of Windows and OS X support for SCTP (without installing a 3rd party driver) means tunneling over UDP.


Yes, changing from TCP is about as difficult as changing from IPv4 - it would take years, and it would take a) catastrophic emergencies like IPv4's address exhaustion or b) completely passive, low-effort transitions/dual stack over years. Note that TCP processing is built into the hardware of all modern network ASICs from your PC to backbone routers and firewalls.

Unfortunate, but true.


What are the problems with SCTP over UDP, except the obvious extra (eight byte) overhead?


UDP is just not as reliable over NAT (mainly due to crappy NAT implementations). SCTP tries really hard to keep it working (including heartbeats on idle connections), but implementing the 50% of SCTP that HTTP benefits from on top of TCP will work exactly as well as HTTPS, whereas SCTP over UDP runs into lots of tiny issues because nobody tested it on their $5 NAT before putting it in a wifi router or a DSL/cable modem.


> SSL can easily be MITMed if you control the host

How is it MITM if you control the host? If you don't trust the host then you are hosed, period -- there is no protocol that will save you.


You and the parent are not in disagreement. The point was merely that individuals who are subordinate (children, employees) can be monitored with or without TLS, and the original article's objection was irrelevant (and to my eyes, very strange).


No, this is actually a very important point for me.

If HTTP/2.0 had done this right, there wouldn't be a need for your employer to trojan your CA list so they can check for "inappropriate content" going either way through their firewall. (Under various laws they may be legally mandated to do so: flight controllers, financial traders, etc.)

But because HTTP/2.0 was more about $BIGSITEs' unfettered access to their users, no mechanism was provided for such legally mandated M-I-T-M, and therefore the CA system will be trojaned by even more people, resulting in even less security for these users.

Likewise, pushing a lot of traffic which doesn't really need it onto SSL/TLS will only force NSA and others to trojan the CA-system even harder, otherwise they cannot do the job the law says they should do.

As I've said earlier: Just mindlessly slapping encryption on traffic will not solve political problems but is likely to make them much worse.

See for instance various "Law Enforcement" types calling for laws to ban encryption, or England's existing law that basically allows them to jail you until you decrypt whatever they want you to decrypt (never mind whether you actually can or not...)

Edit to add:

The point about $BIGSITE is that everybody hates it when hotels, ISPs and phone companies modify the content to insert ads etc. Rightfully so.

Any proxy which does not faithfully pass content should be required to get the client's consent for its actions.

But since such proxies are legal, and in some places legally mandated (smut filters in libraries & schools, filters in jails, parental controls at home, "compliance gateways" at companies), trying to make them impossible with the protocol just means that the protocol will be broken.


I think the big point that a lot of people are raising, however, is that a "proxy which does not faithfully pass content" shouldn't be legal in the first place. Users don't want network operators snooping on them if they can avoid it. The fact that such behaviors are formally legal now does not mean that we (i.e. internet users as a whole) should just lie on our backs and let $BIGSITE have its way with us.

In fact, the uses of proxies that you seem to be pointing out as examples of why ubiquitous encryption is a "bad thing" - such as those in schools, homes, workplaces, etc. to block "objectionable content" - would probably be better handled by blocking IP addresses or domain names, rather than trying to break into encrypted HTTP sessions, would it not? Last I checked, TLS does not prevent the ability to detect when a user agent attempts to access a particular host (whether by IP address or domain name), thus allowing $BIGSITE to close off access to blacklisted or non-whitelisted hosts without needing to know the exact data being exchanged.

Honestly, and with all due respect, the idea that ubiquitous use of TLS would in any way, shape, or form stifle $BIGSITE's ability to monitor and block attempts to access "objectionable" sites seems absurd when there are plenty of more effective ways to do such things that don't involve a total compromise of privacy or secrecy.


You are welcome to that opinion, take it up with your lawmakers, vote based on it, or run for office yourself to make it reality.

Just don't think you will make such proxies disappear with technical means -- in particular not where they are mandated by law.

I fully agree with you when we're talking about people trying to make money by modifying 3rd party traffic.

But I leave it to the relevant legislatures (and their electorates!) to decide with respect to libraries, schools, prisons, financial traders, spies, police offices and so on.

With respect to HTTP/2 there were two choices:

1) Try to use the protocol to force a particular political agenda through.

2) Make the protocol such that people behind manipulating proxies have notice that this is so, and leave the question of which proxies should be there to the political systems.

Implementing policy with protocols or standardisation has never worked and it won't work this time either.

At the end of the day: IETF has no army, NSA is part of one.


Fair enough. My point, however, was that an always-encrypted internet protocol would not prevent the legitimate uses of these proxies - namely, the monitoring and blocking of attempts to access particular hosts on a network. Any competent network administrator could implement filters on particular network addresses and hostnames without needing to decrypt anything in the application and transport layers, since they're operating beneath those layers in the first place (perhaps from the router/firewall, even; I'm willing to bet that such monitoring/blocking would be relatively trivial to implement as a pf ruleset, using tables for blacklisted or whitelisted hosts), or are otherwise able to view information beyond what's encrypted (such as DNS lookups). That seems to be much easier and more surefire than trying to break into TLS sessions.


I'm not 100% certain I understand your position, so I'll attempt to restate it in my own words. Please correct me if I'm wrong. The TLS requirement in HTTP 2.0 is objectionable on the grounds that it makes filtering/monitoring difficult or impossible. This is a problem because in some places the filtering/monitoring is legally mandated and in some others merely legal. It would also aid these intermediaries if the opening stanza of the HTTP request were sent unencrypted.

To that, I disagree. On a practical level, many sites are already encrypted with TLS, especially the $BIGSITEs, so intermediaries that want to MITM their subordinates already must compromise their hosts. No browser is likely to ship an update that would regress on the privacy guarantees of traffic on the wire in this way.

On an ideological level I believe that it is better to err on the side of making information available to those who want it rather than empowering those who wish to censor and monitor access of information. I would also add that the RFCs issued by the IETF have historically been ideologically aligned with free and unimpeded access to information, and have more frequently treated censorship and monitoring as attacks to defend against rather than use cases to fulfill.


> If HTTP/2.0 had done this right

Is "right" in your opinion keeping SSL/TLS optional as it is today? Don't we already have the same concerns (legal obligation to MITM HTTPS connections)?

> no mechanism was provided for such legally mandated M-I-T-M

Maybe this is what you meant by doing it "right". How would this look? I'm having trouble imagining how such a mechanism could be securely built into HTTP.


It's certainly not a trivial thing to design, but the "GET https://..." proposal floating around solves a lot of the relatively benign cases (company/school smut-proxies etc.)

My point is that HTTP/2 didn't even try, because the political agenda for a lot of people was more or less "death to all client side proxies".

They're entitled to that opinion of course, but given that laws in various countries say the exact opposite, the only thing they achieve by not making space for it in the security model is that the security model will be broken.


> But because HTTP/2.0 was more about $BIGSITEs unfettered access to their users

Can you explain the link from required SSL to this point? I'm not seeing it...don't they already have unfettered access?


I'll just leave this here: http://www.w3.org/Protocols/HTTP-NG/http-ng-status.html

In 1995, the process began for HTTP to be replaced by a ground up redesign. The HTTP-NG project went on for several years and failed. I have zero confidence that a ground up protocol that completely replaces major features of the existing protocol used by millions of sites and would require substantial application level changes (e.g. switching from cookies to some other mechanism) would a) get through a standards committee in 10 years and b) get implemented and deployed in a reasonable fashion.

We're far into 'worse is better' territory now. Technical masterpieces are the enemy of the good. It's unlikely HTTP is going to be replaced with a radical redesign any more than TCP/IP is going to be replaced.

Reading PHK's writings, his big problem with HTTP/2 seems to be that it is not friendly to HTTP routers. So, a consortium of people just approved a protocol that does not address the needs of his major passion, HTTP routers, and a major design change is desired to support that use case.

I think the only way HTTP is going to be changed in that way is if it is disrupted by some totally new paradigm, that comes from a new application platform/ecosystem, and not as an evolution of the Web. For example, perhaps some kind of Tor/FreeNet style system.


Yes, clearly nothing has changed since 1995 at all, obviously nobody has gotten any wiser or anything.

My big problem with HTTP/2 is that it's crap that doesn't solve any of the big problems.


You say that, but as a small site operator, my experience is that I add the characters "SPDY" to my nginx config, and clients are happy because stuff loads faster.

Why did nothing happen since the HTTP/1.1 spec? Everyone sat around until Google decided to move stuff forward.


I really don't understand this bit:

> Local governments have no desire to spend resources negotiating SSL/TLS with every single smartphone in their area when things explode, rivers flood, or people are poisoned.

I remember some concerns about performance of TLS five to ten years ago, but these days is anybody really worried about that? I remember seeing some benchmarks (some from Google when they were making HTTPS default, as well as other people) that it hardly adds a percent of extra CPU or memory usage or something like that.

Also, these days HTTPS certificates can be had for similar prices to domains, and hopefully later this year the Let's Encrypt project should mean free high quality certificates are easily available.

With that in mind, forcing HTTPS is pretty much going to be only a good thing.


On my load balancers it is more like 5-10%. Not terrible, but not trivial either. Also, it multiplies throughout the environment. If you are dealing with PII or financials, everything needs to be encrypted on the wire.

Load balancer decrypts, looks at headers, decides what to do, re-encrypts, down to the app tier, decrypt, respond encrypted, etc. I'm not saying that is a bad thing, but that's why some people get cranky.

Somewhat unrelated: compressed headers in HTTP 2.0 make sense if you only think about the browser, since they save 'repeated' information. The problem is the LB has to decrypt them every time anyway, so someone still has to do the work; it just isn't on the wire. Server push on the other hand could be awesome for performance (pre-cache the resources for the next page in a flow) but also has the potential for abuse.


Why not just run unencrypted behind the load balancer? Unless they're on wholly different networks, it shouldn't be needed. SSL termination makes sense at a load balancer, reverse proxy, etc.



PCI.


> If you are dealing with PII or financials, everything needs to be encrypted on the wire.

Well I hope you'd be doing that even without HTTP/2...


Even if not the CPU cost, I don't think small-town USA IT wants to deal with incompatibilities between clients when SSL is enforced (i.e. no SNI on pre-GB Android phones/WinXP, only some [weak] ciphers available in IE, etc.), not necessarily the individual compute increase.

To address what you've said, increasing the energy consumption of every internet connected device in the world will probably have noticeable effects on the aggregate (more power for the server, more power to cool the server, etc.).


“small-town, USA IT” has to outsource as much as possible anyway because their budgets have been cut annually for a long time. Spending an extra $100/year on an SSL certificate is the least of their worries if they're not simply using outside services for as much as possible.

> increasing the energy consumption of every internet connected device in the world will probably have noticeable effects on the aggregate

How would this compare to the energy used by e.g. a single wasteful banner ad campaign? Do you really think that it makes sense to be concerned with the part which is increasingly executed by optimized hardware?


The point is that such applications have no use for privacy, so why should it be forced? (NB: I am not asking this question, just explaining the point)


"No use" is not correct. For instance, an attacker could MITM a critical info service and provide malicious instructions.


Twenty-six years later, [...] the HTTP protocol is still the same.

Not true at all. Early HTTP (which became known as HTTP/0.9) was very primitive and very different from what is used today. It was five or six years until HTTP/1.0 emerged, with a format similar to what we have today.


Thank you, I was going to point it out myself. Early HTTP was literally just this:

  GET /somepath

That's it. Nothing more (well, that and a CRLF), nothing less. The response was equally barren: just pure HTML. Existent page? HTML. Non-existent page? HTML error message. Plaintext file? HTML. (The text is wrapped in a <plaintext> tag.) Anything else? Probably HTML, though you could also deliver binary files this way (good luck reliably distinguishing HTML and binary without a MIME type)!

I actually like HTTP/0.9. If you're stuck in some weird programming language without an HTTP/1.1 client (HTTP/1.0 is useless because it lacks Host:, while HTTP/0.9 actually does support shared hosts, just use a fully-qualified URI) you can just open a TCP port to a web server and send a GET request the old fashioned way.
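
For anyone who wants to try it, the whole exchange fits in a few lines of socket code; a sketch (many modern servers will answer an HTTP/0.9-style request with an HTTP/1.x error or just close the connection, so treat this as a historical curiosity):

  import socket

  # The entire HTTP/0.9 request: method, path, CRLF. No headers, no version.
  with socket.create_connection(("example.com", 80)) as sock:
      sock.sendall(b"GET /\r\n")
      response = b""
      while chunk := sock.recv(4096):  # server closes the connection when done
          response += chunk

  print(response.decode(errors="replace"))  # just the body: no status line, no headers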


After learning how public key infrastructure really works, I've become quite disillusioned by the security it seems to provide. After all, what use is a certification authority, if basically any authoritarian state and intelligence service can get on that list? In some countries the distinction between government officials, spies and organized crime is already extremely blurry...


After all, what use is a certification authority, if basically any authoritarian state and intelligence service can get on that list?

Who says what an authoritarian state is? The reason there are a lot of CAs and some of them are governments is that having rules and policies for what it takes to be a CA is way better than having some random neckbeard at a browser maker deciding he read something in the newspaper yesterday about your country he didn't like, so you can't be a CA.

If you wanted to, you could build a browser that's a fork of Firefox/Chrome, and just doesn't show the padlock when a CA that you believe is under the thumb of an authoritarian state is the signer. However you would then have to exclude all American and British CA's, which would then exclude most SSLd sites, thus making your fork not much different to just deciding to never show any padlock at all and assert that everything is insecure so fuck it, let's (not) go shopping.

OK ..... back here in reality, real browser makers understand that there are more adversaries than governments, and actually SSL was designed to make online shopping safer, not be a tool of revolution. Judged by the "make shopping safer" standard it does a pretty great job. Judged by the "fight repressive regime and save the world" standard, it still does a surprisingly good job - the NSA doesn't seem to like it much at all - but it's unrealistic to expect an internet protocol designed in the mid 90s to do that.


That's a specific facet of the CA layout we've adopted for HTTPS; PKI as a whole does not require it. You need to trust something, but that something is often your own organization's CA (many VPNs) or the keys of people you've met (GPG) or the servers you've deployed (SSH, though SSH now supports use of CA keys).


Yes, the CA model we have today is deeply problematic. But there are organizations such as Google that are trying to improve the current situation. See e.g. https://queue.acm.org/detail.cfm?id=2668154 about their Certificate Transparency project.


Of course Google is only trying to improve the situation as long as it doesn't hurt their bottom line or government relations.


In what situation would this be otherwise? "I'm going to do something that takes time and money and ultimately hurts me" - Masochists of America, unite under one CA?

Even if Jesus Christ managed a certificate authority, someone would complain. Everyone - even G-d - has a conflict of interest.


Good points. And I'm quite sure that Google's efforts in strong crypto/security have irritated lots of people in the US government.


I wouldn't be surprised if there's actually a fight inside Google over this.


There is. I have received emails that say so outright.


That's exactly what I'm saying, you can't expect one entity to solve this.

There are clearly some good people working on worthwhile things at Google. My concern is that a lot of those things don't end up being pushed by Google. Not only because it might hurt them, but because of non-obvious outside influence.

We shouldn't forget that many things we accuse the NSA for like lack of accountability, overzealous collection of data, the undermining of privacy etc. are all things we can expect from a corporation.


We also expect over half of our society's GDP from corporations - the balance comes from government and non-profits.

My point was that everyone has self interest, if one that's influential and has resources comes up with a proposal that is reasonably transparent and beneficial, it seems self-destructive to reject it out of distrust.


I don't necessarily think it's a bad proposal. It seems rather good, actually. Google is still one of the most tracking-heavy entities on the Internet, and they can't really argue with NSLs even if they wanted to.

I just think the US government is the best organization in the world at asserting pressure, and that Google, even if they really wanted to (which isn't clear), isn't going to end up with an agenda hugely contradictory to the US government's wishes. The US has a long history of using industry for geopolitical goals, and the tech industry isn't any different.

If we do end up with a system that is in line with people's fundamental rights, I'll be the first one to commend them for it though.


And...? Does that mean Certificate Transparency is a bad project?


You can remove their CA certificates from your browser/OS, and if you click the lock icon in your browser you can check which CA signed the certificate of the website you're on.

You're right to say that the PKI doesn't work if you just want to trust any site that shows a padlock in the address bar, but it's useful if you do a little work.


> You're right to say that the PKI doesn't work if you just want to trust any site that shows a padlock in the address bar, but it's useful if you do a little work.

No it isn't. It still suffers from not respecting name constraints. You can't set up trust for only a list of domains. If I run my own CA for some list of domains, there is no way I can prevent my CA from being able to sign for google.com. Instead people use wildcard certs so they can be delegated responsibility for a subdomain.
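
For what it's worth, X.509 does define a name constraints extension for exactly this; the practical problem is how unevenly clients honor it. A sketch of attaching one to a throwaway self-signed CA with the Python cryptography package (all names here are made up):

  import datetime
  from cryptography import x509
  from cryptography.hazmat.primitives import hashes
  from cryptography.hazmat.primitives.asymmetric import rsa
  from cryptography.x509.oid import NameOID

  key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
  name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Dept CA")])
  now = datetime.datetime.utcnow()

  ca_cert = (
      x509.CertificateBuilder()
      .subject_name(name)
      .issuer_name(name)  # self-signed, throwaway CA for the sketch
      .public_key(key.public_key())
      .serial_number(x509.random_serial_number())
      .not_valid_before(now)
      .not_valid_after(now + datetime.timedelta(days=365))
      .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
      # The point: this CA should only be trusted for example.com and subdomains.
      .add_extension(
          x509.NameConstraints(permitted_subtrees=[x509.DNSName("example.com")],
                               excluded_subtrees=None),
          critical=True)
      .sign(key, hashes.SHA256())
  )
  print(ca_cert.extensions.get_extension_for_class(x509.NameConstraints))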


You can get a certificate for a fixed list of domains.

If name constraints were implemented more widely, that'd be great. But someone has to write the code, debug it, ship it, etc, and then you have to wait until lots of people have upgraded, etc, and ultimately wildcard certs work well enough.


> If name constraints were implemented more widely, that'd be great. But someone has to write the code, debug it, ship it

Without name constraints I assert the system is inherently broken. You cannot limit trust other than yes/no.

> ultimately wildcard certs work well enough.

Well enough is arguable. The problem is that your attack surface grows with each machine rather than having a private key per machine.


I can't modify the CA lists of my users...


Ship your own browser.


In [most|all] countries the distinction between government officials, [corporations|financial institutions], spies and organized crime is already extremely blurry...

FTFY.


Interesting that one of the primary developers of FreeBSD does not understand one of the two major use cases for SSL, namely identity assurance. News sites and local governments don't care about privacy when transmitting news or emergency information, true, but citizens should be concerned about making sure that information is coming from who they think it's coming from.

I'm no fan of HTTP/2, but this article does not effectively argue against it. Too many bare assertions without any meat to them. And when you fail to mention a major purpose of a protocol (SSL) you dismiss as useless, you lose a lot of credibility.


SSL does not provide identity assurance (= authentication); the CA cabal does. SSL just does the necessary math for you.

CAs are trojaned; that's documented over and over by bogus certs in the wild, so in practice you have no authentication when it comes down to it.

Authentication is probably the hardest thing for us, as citizens to get, because all the intelligence agencies of the world will attempt to trojan it.

Secrecy on the other hand, we can have that trivially with self-signed certs, but for some reason browsers treat those as if they were carriers of Ebola.


That's an argument against showing scary warnings on self-signed certs, not against SSL. It would be nice if there was a httpr:// scheme that would be like HTTPS but without certificate checking.
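
There's no httpr:// scheme, but the behavior being asked for (encryption without certificate checking) is roughly what you get from a TLS context with verification switched off; a sketch with Python's ssl module, confidential against passive sniffing but trivially MITM-able by anyone active:

  import socket, ssl

  ctx = ssl.create_default_context()
  ctx.check_hostname = False       # accept any name...
  ctx.verify_mode = ssl.CERT_NONE  # ...and any certificate, self-signed included

  host = "example.com"
  with socket.create_connection((host, 443)) as raw:
      with ctx.wrap_socket(raw, server_hostname=host) as tls:
          tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
          print(tls.recv(4096).decode(errors="replace"))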


I'm not arguing against SSL.

I'm arguing against making SSL mandatory, because that will force NSA to break it so they can do their work, and then we will have nothing to protect our privacy.

More encryption is not a solution to a political problem: http://queue.acm.org/detail.cfm?id=2508864


... Seriously? The NSA is just sitting around, unable to commit resources to breaking TLS because it's not widespread enough? But HTTP2 is suddenly gonna make the NSA say "alright, we'll break it now. We didn't want to break the most widely deployed crypto protocol, but now we just have to make huge crypto breakthroughs, sigh".

It's bizarre to think if the NSA could break TLS they're holding back.


> I'm arguing against making SSL mandatory, because that will force NSA to break it so they can do their work, and then we will have nothing to protect our privacy.

That line of reasoning sounds bizarre to me. It sounds like "don't add a lock to your door, because that will force the criminals to break the lock, and then your door will be unlocked".


A bad analogy is like a wet screwdriver.

Try this one, it's better, but not perfect:

Imagine what would happen if some cheap invention turned all buildings into impenetrable fortresses unless you had a key for the lock.

Now police cannot execute a valid judge-sanctioned search warrant.

How long do you think lawmakers would take to react?


You were talking about the NSA breaking SSL, not lawmakers forbidding it.

If the problem is with the analogy, without analogies this time:

> I'm arguing against making SSL mandatory, because that will force NSA to break it so they can do their work, and then we will have nothing to protect our privacy.

Without SSL, our privacy is unprotected, since eavesdroppers can read our traffic. Now add SSL, and eavesdroppers cannot read the traffic. Then NSA breaks it, and eavesdroppers can read our traffic again - we've just circled back to the beginning. We will have nothing to protect our privacy, but we already had nothing to protect our privacy before we added SSL; and in the meantime before the NSA breaks it, we had privacy.

And it assumes that the NSA will be able to break it, and that the NSA is the only attacker which matters.


NSA does what they do because lawmakers told them to and gave them the money -- you cannot separate these two sides of the problem.

There are many ways to break SSL, the easiest, cheapest and most in tune with the present progression towards police-states is to legislate key-escrow.

Google "al gore clipper chip" if you don't think that is a real risk.


The more important sites have user credentials that should be secret, and therefore have to use SSL anyway, so NSA will do their best to break it anyway.


I would have found your comment rather more useful had you led with the technical point rather than an attack on the author.


I don't understand. I did not attack the author, and describing the purpose of SSL is a technical point.


Nitpick - It's HTTP/2 not HTTP/2.0

We've all learned from the failure of SNI and IPv6 to gain widespread adoption (thank you, Windows XP and Android 2.2). HTTP/2 has been designed with the absolute priority of graceful backward compatibility. This creates limits and barriers on what you can do. Transparent and graceful backward compatibility will be essential for adoption.

I agree, HTTP/2 is Better - not perfect. But better is still better.


Not that I believe SNI and IPv6 are failures, but HTTP/2 faces exactly the same failure case as IPv6 (the lack of adoption due to HTTP/1.x being 'good enough').


> Not that I believe SNI and IPv6 are failures, but HTTP/2 faces exactly the same failure case as IPv6 (the lack of adoption due to HTTP/1.x being 'good enough').

HTTP/2 isn't really like IPv6 in that fewer people need to act to adopt it -- if the browser vendors do (which they are already) and the content providers do (which some of the biggest are already), then it's used. It's specifically designed to be compatible with existing intermediate layers (particularly when used with TLS on https connections) so that as long as the endpoints opt in, no one else needs to get involved -- and one of the biggest content providers is also a browser vendor, who is also one of the biggest HTTP/2 proponents...

IPv6 requires support at more levels (client/server/ISP infrastructure software & routers, ISPs actually deciding to use it when their hardware/software supports it, application software at both the client and server ends, etc.), which makes adoption more complex.


> > Not that I believe SNI and IPv6 are failures, but HTTP/2 faces exactly the same failure case as IPv6 (the lack of adoption due to HTTP/1.x being 'good enough').

> HTTP/2 isn't really like IPv6 in that fewer people need to act to adopt it -- if the browser vendors do (which they are already) and the content providers do (which some of the biggest are already), then its used.

I, for one, welcome the HTTP/1.x+2 future of 5-10 years from now. (Obligatory http://xkcd.com/927/ )


> The proponents of HTTP/2.0 are also trying to use it as a lever for the "SSL anywhere" agenda, despite the fact that many HTTP applications have no need for, no desire for, or may even be legally banned from using encryption.

What is the basis of this claim? ISTR that SPDY and the first drafts of HTTP/2 were TLS-only, and that some later drafts had provisions which either required or recommended TLS on public connections but supported unencrypted TCP for internal networks, but the current version seems to support TLS and unencrypted TCP equally.


Unencrypted HTTP/2 is a fake concession that isn't usable in the real world.


The only respect in which that appears to be true is that no major browser vendor has yet committed to supporting HTTP/2 other than over TLS-encrypted connections.

But given that both some HTTP/2-supporting browsers and much of the server-side software supporting HTTP/2 are open source, and given that all the logic will be implemented and the only change will be allowing it on unencrypted TCP connections, it'll probably be fairly straightforward for anyone who cares enough to put together a proof of concept of the value of unencrypted HTTP/2.

OTOH, the main gain of HTTP/2 seems to be on secure connections, so I'm not sure why one would want unencrypted HTTP/2 over unencrypted HTTP/1.1, and given that no browser seems to have short-term plans to stop supporting HTTP/1.1, there's probably no real use case.

But the protocol supports unencrypted use just fine.


IIRC, the main issue with non-TLS HTTP/2 is broken web proxies. This is why Google deployed SPDY over https only, and also why they didn't use SCTP as the basis, but instead reinvented about 50% of it on top of TCP.

Google didn't want something that would break even a tiny percentage of existing installs.


As I keep having to mention: The omission of the use of SRV records is maddening, and the reasons given don’t make any sense.

https://news.ycombinator.com/item?id=8550133

https://news.ycombinator.com/item?id=8404788


I so much wish they would adopt SRV records. I've used them many times for load balancing internal HA web services and love the freedom they give you to specify failover tiers and push higher loads toward beefier servers.

Honestly, SRV records cover about 90% of the usage I've seen people deploy ZooKeeper or etcd for. I'd love to see them become the standard way of doing such things.
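
For those who haven't used them, a minimal sketch of what client-side SRV selection could look like (hypothetical zone, dnspython assumed; real RFC 2782 weight handling is a bit more involved):

    import random
    import dns.resolver  # dnspython >= 2.0 for resolve()

    def pick_http_endpoint(domain):
        # _http._tcp is the conventional SRV label for an HTTP service
        records = dns.resolver.resolve("_http._tcp." + domain, "SRV")
        # lowest priority value = primary tier; higher values are failover tiers
        best = min(rr.priority for rr in records)
        tier = [rr for rr in records if rr.priority == best]
        # within a tier, weight pushes more traffic toward beefier servers
        # (simplified: RFC 2782 treats weight 0 specially)
        weights = [rr.weight or 1 for rr in tier]
        chosen = random.choices(tier, weights=weights, k=1)[0]
        return str(chosen.target).rstrip("."), chosen.port

    host, port = pick_http_endpoint("example.com")  # hypothetical zone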


From the responses given to your linked comments, it seems like there was a technically valid reason not to use them: performance. You just kept pushing that it wasn't a good enough reason. Are you just going to keep posting and asking until people decide you're right and they're wrong?


Take a better look at the responses. There's no inherent reason for the performance degradation (it's mainly due to non-compliance in BIND), and without SRV we are stuck with the much slower HTTP redirects.


Yes. Also, CDNs and having your servers in the Cloud™ for automatic failover would no longer be necessary. So of course every company providing these services is against it. These companies are also, of course, the “stakeholders” interested in the development of HTTP/2.


There actually is no performance problem. The closest anyone has come to substantiating that claim is to say, in essence, “If there is a buggy resolver somewhere which completely ignores SRV queries, the client would have to wait for a timeout.” But this is not the normal, or even commonly occurring, case! SRV does not have such problems, or at least not badly enough for people to avoid using it for other protocols such as autodiscovery or Minecraft.


(I'm responding inline, and I'm not looking at all the aforementioned posts/comments, so forgive me if I'm missing something here.)

It sounds like you're arguing that SRV records are no slower than A records, which, on its face, seems reasonable. A DNS request is a DNS request, and aside from a response being too big for UDP and having to switch to TCP, you should get nearly identical performance.

The part that looks like a real performance issue to me is potentially having to double the minimum number of queries needed to serve a website. We couldn't possibly switch directly to SRV records; there would have to be an overlap period in which browsers use both SRV and A records for backwards compatibility.

If we stick with that invariant, then the first-load cost of a page not using SRV records doubles in the worst case: websites that only have an A record. Now we're looking for an SRV record, not getting it, and falling back to an A record. So all of the normal websites that don't care about SRV records and will never use them pay a performance penalty. A marginal one, sure, but it's there.

So, overall, their claim seems valid, even if low in severity.
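
To make the cost concrete, here is a sketch of the transition-era lookup a browser might perform (hypothetical, dnspython assumed); a site that publishes no SRV record pays one wasted query before the usual A lookup:

    import dns.resolver  # dnspython >= 2.0

    def resolve_site(domain):
        queries = 0
        try:
            queries += 1
            srv = dns.resolver.resolve("_http._tcp." + domain, "SRV")
            targets = [(str(rr.target).rstrip("."), rr.port) for rr in srv]
        except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
            # no SRV record published: fall back to the plain A lookup,
            # which is the extra round-trip described above
            queries += 1
            a = dns.resolver.resolve(domain, "A")
            targets = [(rr.address, 80) for rr in a]
        return targets, queries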

I'd love to hear from someone who has the data, but I can count on one hand the number of times a loss of IP connectivity has happened where I wished I had SRV records for load balancing. It's usually bad DNS records, or slow/bad DNS propagation, or web servers behind my load balancers going down, or a ton of other things. Your point is still totally valid about being able to more easily load balance across multiple providers, datacenters, what have you... but I'm not convinced it's as much of a problem as you make it out to be.


If you really are in the position where an additional DNS request will kill you (unlike the overwhelming majority), there is an easy solution: make sure all the servers pointed to by the SRV records are in the same domain (or at least that the domain is served by the same DNS server). Then the A (and AAAA) records should be present in the “ADDITIONAL” section of the DNS response. No further DNS queries are then necessary to get this data.
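
A quick way to check whether your resolver path preserves that optimization (hypothetical name, dnspython assumed; recursive resolvers vary in whether they pass additional data through):

    import dns.resolver

    answer = dns.resolver.resolve("_http._tcp.example.com", "SRV")
    for rrset in answer.response.additional:
        # if the SRV targets live in the same zone and the server is well
        # behaved, their A/AAAA records show up here -- no second query needed
        print(rrset)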

> […] or web servers behind my load balancers went down […]

I’m getting the impression that you think that even having a load balancer is a natural state of affairs, but it should not be. Getting more performance should be as easy as spinning up an additional server and editing the DNS data; done. Your attitude reminds me of the Unix-haters handbook, describing people who have grown up with Unix and are irreversibly damaged by it: “They regard the writing of shell scripts as a natural act.” (quoted from memory).


You make a valid point, but that relies on systems asking for all records on an RR. If a DNS server decides to send along A records with a SRV request, it's likely those A records won't be cached[1], and thus you'd have to make a second query.

As far as load balancers, sure, it'd be great if nobody needed to do anything other than spread requests across a pool of heterogeneous machines... but there's plenty more to be had by using a load balancer in front of web servers, namely intelligent routing. Things you just couldn't possibly figure out, as the browser, from looking at SRV records.

Besides that, I appreciate your thorough and clearly-informed thoughts on my attitude and state of mind when it comes to engineering systems. It definitely elevated this discussion to new heights, to be sure.

1. http://tools.ietf.org/rfcmarkup?doc=2181


> You make a valid point, but that relies on systems asking for all records on an RR.

Um, no? If a DNS client asks a DNS server for an SRV record, and the DNS server has the A (and AAAA) records for the domain names contained within that SRV record, the DNS server will send those A (and AAAA) records along in the reply in the “ADDITIONAL” section; i.e. not in the “ANSWER” section as a reply to the actual SRV query, but still contained within the same DNS response. So the tiny performance issue for this minor case can be solved for those who need to solve it.

> […] a load balancer [can also be used for] intelligent routing.

Well, yes, SRV records can’t be all things to all people. This is, however, nothing that will affect, I’d guess, at least 90% of those using load balancers today. Those needing this extra functionality can perfectly well keep their load balancers or (to call them what they actually would be) HTTP routers.

These are, however, both minor quibbles (the first of them even has a solution) and should not affect the decision to specify SRV usage in HTTP/2.

(Also, being overly ironic does not help discourse, either.)


That's making the assumption that they will send back the A/AAAA records. Empirically, you might be right, but it's a recommendation, not a requirement, in RFC 2782. (Not sure if there's an RFC that supersedes that particular point.)

So, either you hope the server responds with the A/AAAA records in the additional section, or you have to query for all records on an RR, or further still, do multiple queries. What happens when your SRV records point to CNAMEs? Do most DNS servers that support sending back the A/AAAA records in the ADDITIONAL section also support resolving the CNAMEs before populating the additional section?

There are a few other things, too, like having to make interesting tradeoffs on TTLs: if your TTLs are low enough to support using DNS as near-real-time configuration of which web servers to use, what happens when DNS itself breaks? There's some operational pain there, to be sure.

This is all to say: there's clearly a lot of angles to something as simple as using SRV records in lieu of A/AAAA/CNAME records, and we're here, right now, talking about this, all because of the rushed design of HTTP/2.0, which is a protocol unto itself. It's not surprising that a standard that went through so quickly managed to not include something, like SRV records, which have been in a weird state of existence since their inception. To think it would be so simple, so easy, seems incredibly overoptimistic.


If you have this problem, you are in control of what DNS server you use, and can make sure it sends the appropriate records. We are talking about a “problem” which affects very few people, and those it affects have the budget to make sure this is the case.

Also, there is, by now, a lot of operational experience with both MX records and SRV records, and they are well understood. They are not the wild unknown you make them out to be.


The argument about computing power and CO2 pollution is misguided. HTTP/2 no longer requires encryption, so the TLS/non-TLS trade-offs remain the same as before (and their compute impact is mitigated by hardware AES support, etc.). The other relevant changes (SPDY-style framing, header compression, push) reduce the number of context switches and network round-trips required, the time devices spend in high-power mode, and the time users spend waiting. That results in a reduction, not an increase, in total power consumption.

Taking server CPU utilization numbers as an indicator of total power consumption is pretty misguided in this context, and my understanding is that even those are optimized (and will continue to be optimized) to the point where TLS and SPDY have negligible overhead (or, in the case of SPDY, may even result in lower CPU usage).


Show me the mainstream browsers that will use HTTP/2 without SSL/TLS ?

The difference between you and me may be that I have spent a lot of time measuring computers' power usage doing all sorts of things. You seem to be mostly guessing ?


I don't have Kill-a-Watts or rack PDU data for fleets of webservers, unfortunately. What I do have is CPU performance data from running with and without SSL gateways and SPDY in production, and all I can say is that the server's CPU utilization is not significantly impacted by them. I also have client-side data that shows substantial load speed improvements when using SPDY. That should result in a C-state profile improvement on the CPU, but I'll need to collect more data to confirm.


phk (Poul-Henning Kamp) is the lead developer of Varnish, in case people are not familiar with him.


His technical prowess is only matched by his privacy-champion credentials; see e.g. his Operation ORCHESTRA talk https://www.youtube.com/watch?v=fwcl17Q0bpk


> Local governments have no desire to spend resources negotiating SSL/TLS with every single smartphone in their area when things explode, rivers flood, or people are poisoned.

That's one horrible argument, though. The cost of a text-based protocol over TCP on Ethernet greatly outweighs the cost of the encryption process.

Yes, of course encrypting things will increase computational requirements yet the cost is negligible in comparison to the problem being solved (stopping the trade of personal data).

It's hard for me to associate a privacy champion with these statements.


I can also recommend this one, which - strangely - turns into a privacy/security discussion for the second part of the talk.

http://www.infoq.com/presentations/HTTP-Performance


Indeed, so keep in mind that he has a vested interest in being able to use the FreeBSD feature to efficiently copy a file from a disk into a network interface.

Having to break responses into protocol frames, or having to run every byte through encryption, foils that optimization. It doesn't mean that his optimization is more important than TCP connection sharing and ubiquitous confidentiality, integrity and authenticity.

(Edit: typo)


Isn't this the guy that basically refuses to implement SSL/TLS in varnish[1]? The more cynical part of me wonders if that has anything to do with his rejection of HTTPS everywhere.

[1] https://www.varnish-cache.org/docs/trunk/phk/ssl.html


Show me a quality implementation of SSL/TLS I can use ?

All the ones I've looked at are shitty source code.


Why hold yourself to a much higher standard than your customers do? Even though OpenSSL is shitty, everyone still uses it.


That's actually a very good question, thank you for asking it.

Because I think that we can do much better than the pile of IT-shit we have produced until now.

I am actively trying to do that, through my own code, through the articles that I write and through the discussions I engage in.

Not everybody has the luxury of doing that -- the kids have to be fed and the mortgage has to be paid and a job is a job -- but those of us who can have an obligation to try to make the world a better place, by raising the quality of IT.


What about LibreSSL?

-- Freely licensed

-- Has been ported to FreeBSD

-- Is certainly less of a tangled mess than OpenSSL


It is certainly a step in the right direction; my main worry is that the OpenBSD crew has a tendency to -- SQUIRREL!! -- and forget their old toys when they spot a new shiny one.


They do? I personally haven't seen much of that, though I'm admittedly relatively new to the BSD world, so perhaps I just haven't been watching/using the BSDs long enough to notice that sort of tendency in OpenBSD relative to the others.

I know they've been making a lot of switches lately in terms of what ships with the base OS, but pretty much all of them (that I know of) have been more akin to forgetting someone else's old toys and playing with their own shiny new ones. OpenSSH and PF in particular seem to still be alive and well, and they've been around for quite a long time.


"HTTP/2.0 could have done away with cookies, replacing them instead with a client controlled session identifier."

It doesn't make much sense to get rid of cookies alone, not when there are multiple ways of storing stuff in a user's browser, let alone for fingerprinting - http://samy.pl/evercookie/

Getting rid of cookies doesn't really help with privacy at this point and just wait until IPv6 becomes more widespread. Speaking of which, that EU requirement is totally stupid.

The author also makes the mistake of thinking that we need privacy protections only from the NSA or other global threats. That's not true, we also need privacy protections against local threats, such as your friendly local Internet provider, that can snoop in on your traffic and even inject their own content into the web pages served. I've seen this practice several times, especially on open wifi networks. TLS/SSL isn't relevant only for authentication security, but also for ensuring that the content you receive is the content that you asked for. It's also useful for preventing middle-men from seeing your traffic, such as your friendly network admin at the company you're working for.

For example, if I open this web page with plain HTTP, a middle-man can see that I'm reading a rant on HTTP/2.0, instead of seeing just a connection to queue.acm.org. From this it can immediately build a useful profile of me, because only somebody with software engineering skills would know about HTTP, let alone read a rant on the IETF's handling of version 2.0. It could also inject content, such as ads or a piece of Javascript that tracks my movement, or whatever. So what's that line about "many HTTP applications have no need for [SSL]" doing in a rant lamenting privacy?

HTTP/2.0 probably has flaws, but this article is a rant about privacy and I feel that it gets it wrong, as requiring encrypted connections is the thing I personally like about HTTP/2.0 and SPDY. Having TLS/SSL everywhere would also make it more costly for the likes of the NSA to do mass surveillance of users' traffic, so it would have benefits against global threats as well.


Actually, getting rid of cookies would fit almost all HTTP requests into a single packet, so there are tangible technical benefits, even without the privacy benefits.
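
As a back-of-the-envelope illustration (the header values are invented; ~1460 bytes is a 1500-byte Ethernet MTU minus IPv4 and TCP headers):

    request = (
        "GET /some/article.html HTTP/1.1\r\n"
        "Host: queue.acm.org\r\n"
        "User-Agent: ExampleBrowser/1.0\r\n"   # hypothetical client
        "Accept: text/html\r\n"
        "Session-Id: 5d41402abc4b2a76b9719d911017c592\r\n"  # client-chosen, ~32 bytes
        "\r\n"
    )
    print(len(request.encode("ascii")), "bytes -- fits easily in one ~1460-byte segment")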

You seem confused about cryptography.

Against NSA we only need secrecy, privacy is not required.

Likewise integrity does not require secrecy, but authentication (which doesn't require secrecy either).
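
To illustrate the distinction, here is a minimal sketch (invented key and message): a message can travel completely in the clear and still be tamper-evident, for instance with a MAC over the plaintext.

    import hmac, hashlib

    key = b"key-shared-by-sender-and-receiver"     # hypothetical pre-shared key
    message = b"GET /public/flood-warning HTTP/1.1"  # sent unencrypted

    # sender attaches a tag; anyone can read the message, nobody can alter it
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()

    # receiver recomputes the tag over what arrived and compares
    ok = hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).hexdigest())
    print(ok)  # True -- integrity and authentication, no secrecy involved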

You don't think that anybody can figure out what you are doing when you open a TCP connection to queue.acm.org right after they posted a new article, even if that connection is encrypted ? Really ? How stupid do you think NSA is ?

Have you never heard of meta-data collection ?

And if you like your encrypted connections so much, you should review the certs built into your browser: that's who you trust.

I'll argue that's not materially better than unencrypted HTTP.

(See also Operation Orchestra, I don't think you perceive the scale of what NSA is doing)


The line on secrecy vs privacy doesn't make sense. Actually help me out, because I'm not a native English speaker - if you're implying that the NSA should be able to snoop in on my traffic without a warrant, as long as it keeps it secret, then I beg to differ.

Visiting an article doesn't happen only right after it was posted. And sure, the NSA can figure out ways to track you, but their cost will be higher. Just like with fancy door locks and alarm systems, making it harder for thieves to break in means the probability of it happening drops. Imperfect solutions are still way better than no protection at all (common fallacy no. 1).

All such rants also ignore that local threats are much more immediate and relevant than the NSA (common fallacy no. 2).

On trusting the certificate authorities built into my browser: of course, but then again this is a client-side issue, not one that can be fixed by HTTP/2.0, and we do have certificate pinning and even alternatives available as browser add-ons. Against the NSA nothing is perfect, of course, unless you're doing client-side PGP encryption on a machine not connected to the Internet. But then again, that's unrelated to the topic of HTTP/2.0.


With unencrypted HTTP, NSA can just grab the packets on the fiber and search for any keyword they want.

With a self-signed cert they would have to do a Man In The Middle attack on you to see your traffic.

They don't have the capacity (or ability! many of their fiber taps are passive) to do that to all the traffic all the time.

The problem with making a CA-blessed cert a requirement for all or even most of the traffic, is that it forces the NSA to break SSL/TLS or CAs definitively, otherwise they cannot do their job.

Fundamentally this is a political problem, just slapping encryption on traffic will not solve it.

But it can shift the economy of the situation -- though you should think carefully about which way you shift it.


> The problem with making a CA-blessed cert a requirement for all or even most of the traffic, is that it forces the NSA to break SSL/TLS or CAs definitively, otherwise they cannot do their job.

Isn't the whole point of pervasive authenticated encryption to prevent the NSA from "doing their job" (at least the spying part of it)?

> But it can shift the economy of the situation -- though you should think carefully about which way you shift it.

It shifts more than the economy of the situation. It also forces a shift from passive attacks to active attacks, which are easier to detect and harder to justify. Forcing the attacker to justify their acts has a political effect.


> Forcing the attacker to justify their acts has a political effect.

Which essentially means that terrorists and pedophiles are going to use encryption to harm the kids, and browsers will have to obey a wonderful new kids-protecting law and add a backdoor.


Pervasive authenticated encryption will not prevent the NSA from doing their job, as long as lawmakers think they should do their job.

Instead you will see key-escrow laws or even bans on encryption.

You cannot solve the political problem by applying encryption.


Laws against encryption will never be international. In the US it happened before [1], so there is indeed precedent. But such laws prevent a country from being competitive in the international marketplace, therefore many countries will not agree to it, just as they aren't agreeing with IP laws. And yes, I also believe that this trend on having national firewalls for censoring content will also not last for long, for the same reason.

What I love about technology is that it cannot be stopped with lawmaking.

[1] http://en.wikipedia.org/wiki/Export_of_cryptography_from_the...


The historical evidence that technology cannot be stopped by lawmakers is very weak, if it even exists in the first place.

Very few lawmakers have really tried, and few technologies have been worth it in the first place.

The relevant question is probably whether technology can be delayed by lawmaking, and for how long.

There is no doubt however that policies can be changed, most places it just takes elections but a few places may need a revolution.

Thinking this is a problem you can solve by rolling out SSL or TLS is incredibly naive.


> Instead you will see key-escrow laws or even bans on encryption.

That's a defeatist attitude: "we can't win, so let's not even try".

It's not guaranteed that pervasive authenticated encryption will lead to key-escrow laws or bans on encryption. In fact, the more common authenticated encryption is, the harder it is to pass laws against it.

As an example, consider how common encrypted wireless networks are nowadays. A blanket ban on encryption would be opposed by many of the wireless network owners. And that's only one use of encryption.

Key escrow has the extra problem of being both costly and very complex to implement correctly.


No, I'm not saying "let's not even try", I'm saying "let's try with the right tools for the job: voting ballots."

In the meantime we should not cripple our protocols, hoping that the NSA will go "aahh shucks!" and close shop, when the law clearly tells them to "Collect everything."


"HTTP/2.0 will be SSL/TLS only"

Yes! Finally 99% of users won't be hacked by a default initial plaintext connection! We finally have safe(r) browsing.

", in at least three out of four of the major browsers,"

You had ONE JOB!

Jokes aside, privacy wasn't a consideration in this protocol. Mandatory encryption is really useful for security, but privacy is virtually unaffected. And the cookie thing isn't even needed; every browser today could implement a "click here to block cookies from all requests originating from this website" button.

We need the option to remove encryption. But it should be the opposite of what we currently do, which is to default to plaintext unless you type an extra magic letter into the address (which no user ever understands, and is still potentially insecure). We should be secure by default, but allow non-secure connections if you type an extra letter. Proxies could be handled this way by allowing content providers to explicitly mark content (or domains) as plaintext-accessible.

The problem I fear is that as everyone adopts HTTP/2 and HTTP/1.1 becomes obsolete (not syntactically but as a strict protocol), it may no longer be possible to write a quick-and-dirty HTTP implementation. Before, I could use a telnet client on a router to test a website; now the router may need an encryption library, a binary protocol parser, and decompression and multiplexing routines just to get a line of text back.
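
For comparison, the whole quick-and-dirty HTTP/1.1 test amounts to roughly this (hypothetical host; a raw-socket stand-in for the telnet session):

    import socket

    host = "example.com"   # hypothetical target
    request = (
        "GET / HTTP/1.1\r\n"
        "Host: " + host + "\r\n"
        "Connection: close\r\n"
        "\r\n"
    ).encode("ascii")

    with socket.create_connection((host, 80)) as sock:
        sock.sendall(request)
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk

    # the first line is all you need for a sanity check, e.g. "HTTP/1.1 200 OK"
    print(response.split(b"\r\n", 1)[0].decode("ascii", "replace"))

In practice, the HTTP/2 version of this needs TLS negotiation, binary framing and HPACK decoding before that status line is even readable.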


HTTPS can also be used to protect you from malware [1] [2] and stop censorship [3]. If anything, news sites should be among the first to adopt strong HTTPS connections since many people visit them and the news also needs to not be censored.

[1] https://citizenlab.org/2014/08/cat-video-and-the-death-of-cl...

[2] http://www.ap.org/Content/AP-In-The-News/2014/AP-Seattle-Tim...

[3] http://ben.balter.com/2015/01/06/https-all-the-things/

As for the performance side, SPDY is probably not perfect, but it seems to generally improve over current HTTP, even over a secure connection. But even if it didn't, HTTPS seems to add negligible overhead, and compared to the security it gives I think it's well worth it.

https://www.httpvshttps.com/


A very well written rant.

HTTP was supposed to have opportunistic encryption, as per RFC 7258 (Pervasive Monitoring Is an Attack, https://news.ycombinator.com/item?id=7963228), but it looks like the corporate overlords don't really understand why it is at all a problem for independent one-man projects to acquire and update certificates every year, for every little site.

As per a recent conversation with Ilya Grigorik over at nginxconf, Google's answer to the cost and/or maintenance issues of https --- just use CloudFlare! Because letting one single party do MITM for the entire internet is so sane and secure, right?


What exactly is the difference between cookies and session identifiers? There's no law requiring you to send kilobytes of cookies (news.ycombinator.com gets by with a 22-byte cookie). Of course the way HTTP cookies handle ambient authority is rather imperfect, but that can be solved within the system.


The difference is who makes the decisions: a session ID is controlled by the client, cookies by the server.

Facebook, Twitter, etc. track you all over the internet with their cookies, even if you don't have an account with them, whenever a site puts up one of their icons for you to press "like".

With client-controlled session identifiers, users would get to choose if they wanted that.

The reason YC gets by with 22 bytes is probably that they're not trying to turn the details of your life into their product.
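
A minimal sketch of what the client side could look like (the header name and the in-memory store are hypothetical, not part of any spec):

    import secrets

    # one identifier per origin, generated and kept entirely by the client
    sessions = {}

    def session_header(origin):
        # the client mints its own random token; the server never dictates it
        token = sessions.setdefault(origin, secrets.token_hex(16))  # 128 random bits
        return {"Session-Id": token}   # hypothetical header name

    def forget(origin):
        # opting out of tracking is just deleting the entry
        sessions.pop(origin, None)

Persistence, scope and deletion then become client-side decisions instead of whatever Set-Cookie dictates.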


> The so-called "multimedia business," which amounts to about 30% of all traffic on the net, expresses no desire to be forced to spend resources on pointless encryption.

I thought that "pointless encryption" was basically the definition of DRM? And the largest video site, traffic-wise (YouTube) is already encrypted.


I thought Netflix was the largest video site, traffic-wise?


Seems like it depends on who you ask. I imagine something closer to the truth is that Netflix are bigger in the US but YouTube are bigger globally.


> the IETF can now claim relevance and victory by conceding practically every principle ever held dear in return for the privilege of rubber-stamping Google's initiative.

What principles does he claim the IETF is conceding here?


> One remarkable property of this name is that the abbreviation "WWW" has twice as many syllables and takes longer to pronounce.

World Wide Web

Dou Ble U Dou Ble U Dou Ble U

I count three times as many, is this an accent thing?


Maybe in Danish it's only 6 syllables? Because the English language completely agrees with you.


Most people I know pronounce it dub-dub-dub so I started doing that as well. Maybe a London thing?


I think it's an accent thing. When I say WWW at normal speed, it sounds more like dubble-u dubble-u dubble-u. American, east coast USA.


Unless you say "duh blew duh blew duh blew", you're still using 9 syllables rather than 6.


I'm not arguing that it's nine syllables, just saying that it's most likely an accent / speed of speech thing as far as the author's six-vs-four thing goes.


I say "triple double u".


Couldn't agree with most of this more. HTTP/2.0 seems to me to be an entirely pointless set of unwanted complications and agendas disguised as technical improvements.


"Has everybody in IETF forgotten CNN's exponential traffic graph from 14 years ago?"

Any ideas?


I guess it's a reference to september 11th 2001.


Seriously though, 14 years in internet time is about 1400 years. Whatever was on that graph is probably not relevant today.


Upvoted, not because I think it is a particularly good article, but we seem to have a pretty good discussion based on it.


I have to go shopping and cook dinner, but I'll be back in a couple of hours.

Poul-Henning


I don't care about the IETF or HTTP/2.

I'll just keep using HTTP/1.1. It works on my computer.


When an article about technical problems with a protocol starts whining about how the protocol will increase CO2 pollution, I know it's BS. WTH was the ACM thinking by wasting our time with this crap?


This is the sort of self-important technorant that I've come to despise in tech news. It is another example of a blogger pandering to readers' absurd addiction to outrage. HTTP/2.0 is not an outrage. It is imperfect, just as HTTP/1.1 is imperfect and ill-suited to today's rich web applications, which were not envisioned at its inception.

[edit] It's a bit ironic that this story was delivered to many of us (via Hacker News) over SPDY--HTTP/2.0's dominant source of inspiration.


Considering phk works as a developer for a very popular HTTP based application (Varnish) and contributed to competing specs for HTTP2 he's hardly a pandering blogger. He's an expert annoyed that the improvements in standards in his field are absurdly slow and perfectly entitled to voice his ire.


He is no doubt perfectly entitled to rant. I'm also fairly confident that phk has contributed to more important projects vital to the tech ecosystem than I ever will. I simply argue that there are more constructive and productive ways to go about pointing out a protocol's flaws.

Phk has had issues with the process for quite some time, and I feel that his embitterment about the process has jaded his view of the protocol. Phk on SPDY/HTTP/2.0 http://lists.w3.org/Archives/Public/ietf-http-wg/2014AprJun/...


"I simply argue that there are more constructive and productive ways to go about pointing out a protocol's flaws."

Which he has done: https://www.varnish-cache.org/docs/trunk/phk/http20.html

It's a pet peeve of mine when people just fling this sort of accusation about as if every word-count limited column isn't any good unless it's 20 times longer and basically includes half of Wikipedia transitively. It's a column in a trade magazine. There isn't a place for a detailed technical discussion there, so complaining that there isn't one is complaining about something that can't be fixed.

Besides, cards on the table, I think he's basically correct, cynicism and all here. Sometimes the right answer is to just say no, and failure to say no is not good thing when that is what is called for.


"Which he has done: https://www.varnish-cache.org/docs/trunk/phk/http20.html"

You are highlighting my point. The arguments made here are far more pragmatic.

My qualms are with how his points are made in the original blog post, not with what points he is making; many of which are quite valid.

With that said, the totality of his argument allows the perfect to be the enemy of the good. Which in my opinion is an invariably flawed position.


BTW - that article is two and a half years old, and there's been a fair amount of work on HTTP/2 since then.


Oh, it's this guy again? He already stirs up a bunch of outrage (and zero useful contribution) every year like clockwork by ranting about build systems. Is he "entitled" to a rant? Sure. But he should surely have realised by now that nothing positive will come of it, and there are better ways to actually improve things.


His posts to the IETF HTTP WG mailing list:

https://www.w3.org/Search/Mail/Public/search?hdr-1-name=from...

Results : 889

He's not just some guy who stirs up a bunch of outrage (and zero useful contribution) every year, regardless of what you may think of his arguments.


While the technical validity/integrity of HTTP/2.0 is certainly up for debate, there's a bigger issue here, and phk certainly isn't the first to talk about it: the rush by the IETF to basically polish SPDY into HTTP/2.0.

To me, that's the most fascinating, and entirely relevant (read: not self-serving or self-important) piece of what the author is talking about.


SPDY was proven to be working in the wild, and others such as Twitter and Facebook had adopted it.

PHK wanted to do the sort of ground-up work that would have taken ten years rather than three.


But his point, at least the way it comes across to me, is that the IETF simply jumped on the bandwagon of SPDY and fast tracked turning it into HTTP/2.0. Precisely the thing that you wouldn't expect a standards committee to do, because developing solid standards is hard.

Instead of spending 10 hypothetical years of their own time, doing the ground up work, they spent a small portion on HTTP/2.0. That portion is, in fact, smaller than the time spent on SPDY overall. So, how much standards-ing style work did they do? How much actual forethought was given besides making SPDY acceptable enough for a draft?

That's my takeaway.


HTTP/2 is not SPDY. They're not compatible.

The IETF mailing list archives are open if you'd like to see what 3 years of standardsing looks like. You can also go to the chair's blog (mnot.net) to read the evolution. Or Roy Fielding's presentations on Waka which were one of the early (2002) catalysts among the IETF-orbiters that lead to HTTP/2. http://gbiv.com/protocols/waka/200211_fielding_apachecon.ppt

Ultimately Waka was already a decade late, Google had SPDY, which had some similarities and was already deployed, and people didn't want to wait another decade to build something no one might adopt. There were aspects of SPDY that needed changing to make it a true protocol standard rather than just a shared library, and a few features were changed out of rough consensus.

That's how successful standards processes actually work - codifying what's already working, implemented in the field. Implementing a standard no one uses? See Atompub. See XHTML2.


IETF did a call for proposals, Google had a protocol that was up and running with others using it, MS put in a similar proposal, and another partially formed one came along too.

SPDY was the only really viable option for the IETF to choose - it was running at scale and there was knowledge out there about deploying it and its performance.

Although SPDY was the prototype, HTTP/2 isn't SPDY anymore; it has evolved and moved on, taking some of the concepts from SPDY and introducing its own.

Given how long it took HTTP/1.1 to get ratified, we suck at ten-year standardisation processes.


SPDY was chosen because nobody got any serious notice or time to come up with any alternatives.

If the IETF wanted a fresh look at HTTP, they would not have set such a short deadline for submissions.

It was evident from the start that this was about gold-plating & rubber-stamping SPDY, and people saw that and said so, already back then.

HTTP/2 isn't compatible with SPDY any more, but there is no significant difference between them; only a few deck-chairs were arranged differently in HTTP/2.

The fact that people spent ages chewing the cud on a "clarification" of HTTP/1.1 has no impact on how long it would take to define an HTTP/2 protocol.

Quite the contrary, HTTP/2 was a chance to jettison many of the horrors and mistakes that made the HTTP1.1bis effort so maddening.

You can see some of my thinking about what HTTP/2 should have been doing here:

http://phk.freebsd.dk/words/httpbis.html


Well, with the right kind of advocacy, couldn't the community push for httpbis and push back on HTTP/2? True, Google controls its servers and Chrome, but not Firefox and the rest of the world. Not everybody is going to drink the Kool-Aid.

That would be pushing HTTP/2 to irrelevancy the same way HTML5 (for good or for worse) pushed back on XHTML. Sad way to go but if need be ...


Yeah, I've heard PHK give this rant in person, under the guise of HTTP/2 and performance.

The really annoying thing about his rant is the number of assertions he makes without backing them up with data.



