title: On HTTPS and Hard Questions
url: https://timkadlec.com/remembers/2018-08-14-https-and-hard-questions/
Eric Meyer was recently in Uganda, where he experienced first-hand a very undesirable side effect of HTTPS. The area he was in was served by satellite internet access, and experienced significant latency (a floor of 506 milliseconds) and packet loss (between 50-80% was typical). In addition, there is a cap on the data that an account can use in any given month. Go over the cap, and you either pay overages or lose data access entirely until the next billing cycle.
To counter this, the school he was visiting had set up its own local caching server. But, as he explains, this approach falls apart when HTTPS gets involved.
A local caching server, meant to speed up commonly-requested sites and reduce bandwidth usage, is a “man in the middle”. HTTPS, which by design prevents man-in-the-middle attacks, utterly breaks local caching servers. So I kept waiting and waiting for remote resources, eating into that month’s data cap with every request.
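To make the mechanics concrete: a caching proxy can only reuse a response if it can see what's being requested. Here's a minimal, purely illustrative sketch (not the school's actual setup; the function and names are hypothetical) of the choice a forward caching proxy faces when it receives plain HTTP versus HTTPS traffic.

```python
# Illustrative sketch only: the decision a forward caching proxy has to make.
# Plain HTTP exposes the URL, so repeat requests can be served locally.
# HTTPS arrives as a CONNECT request followed by opaque TLS ciphertext,
# so the proxy can only tunnel it -- there is nothing it can cache.

def handle_request(request_line: str, cache: dict) -> str:
    method, target, _version = request_line.split(" ", 2)

    if method == "CONNECT":
        # HTTPS: the client asks for an opaque tunnel to host:443.
        # The proxy never sees the URL, headers, or body, so it can't
        # reuse anything for the next person who asks for the same page.
        return f"tunnel encrypted bytes blindly to {target}"

    # Plain HTTP: the full URL is visible, so a cached copy can be served
    # without touching the slow, capped uplink at all.
    if target in cache:
        return f"serve {target} from the local cache"
    cache[target] = "fetched once over the satellite link"
    return f"fetch {target} upstream, then cache it"


if __name__ == "__main__":
    cache: dict = {}
    print(handle_request("GET http://example.com/notes.html HTTP/1.1", cache))
    print(handle_request("GET http://example.com/notes.html HTTP/1.1", cache))
    print(handle_request("CONNECT example.com:443 HTTP/1.1", cache))
```

For plain HTTP the second request never leaves the building; for HTTPS, every request has to cross the satellite link again, eating into the data cap each time.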
Eric acknowledged that HTTPS is a good idea (I agree) but also pointed out that these implications can’t be ignored.
Beyond deploying service workers and hoping those struggling to bridge the digital divide make it across, I don’t really have a solution here. I think HTTPS is probably a net positive overall, and I don’t know what we could have done better. All I know is that I saw, first-hand, the negative externality that was pushed onto people far, far away from our data centers and our thoughts.
Every technology creates winners and losers. HTTPS is no exception.
Many of the responses to the post were…predictable. Some folks read this as an “anti-HTTPS” post. As Brad recently pointed out, we need to get better at talking about technology “…without people assuming you’re calling that technology and the people who create/use it garbage.”
Eric’s post is exactly the kind of reasoned, critical thinking that our industry could benefit from seeing a bit more of. HTTPS is great. It’s essential. I’m very happy that we’ve reached a point where more requests are now made over HTTPS than HTTP. It took a lot of work to get there. A lot of advocacy, and a focus on making HTTPS easier and cheaper to implement.
But the side effects experienced by folks like those in that school in Uganda are still unsolved. Noting this isn’t blaming the problem on HTTPS or saying HTTPS is bad; it’s admitting we have a problem that still needs solving.
I was thinking about this issue myself recently. I live in a small town and our mobile data connectivity is a bit spotty, to say the least. I use T-Mobile, which is normally excellent. In my little town, however, that’s not the case. Recently, it seems T-Mobile has partnered with someone local to provide better and faster data connections. But it’s all roaming. T-Mobile doesn’t charge for that, but it does cap your mobile data usage: exceed 2GB in a given month and you’re cut off. In the few months since that better coverage became available, it’s a cap I’ve exceeded more than a few times.
So I’ve been taking a few steps to help. One of those was to turn Chrome’s Data Saver (their proxy service) back on. It does a good job of cutting down data usage where it can, but it’s useless for any HTTPS site, for the same reason that school’s local caching server is useless: to do what it needs to do, it has to act as a man-in-the-middle. So while Data Saver is extremely effective when it works, it works less and less now.
It’s far from the end of the world for me, but that’s not the case for everyone. There are many folks who rely on proxy browsers and services to access the web affordably, and for them, the fact that the shift to HTTPS has made those tools less effective can have real consequences.
This isn’t an entirely new conversation (as my nemesis, Jason Grigsby, recounted on Twitter). I can personally remember bringing this up to folks involved with Chrome at conferences, online AMAs, and basically whenever else I had the opportunity. The answers always acknowledged the difficulty and importance of the problem while admitting that what to do about it was a bit unclear.
Whether or not the topic was overlooked is up for debate (there has been work done by the IETF towards solving this), and I suppose it depends entirely on which discussions you were or were not involved in over the past few years. The filter bubble effect is real and works both ways. But the reality is that while we’ve made tremendous progress in the past few years getting HTTPS widely adopted, we haven’t done nearly as good a job of ensuring that folks have an affordable and simple alternative to the tools they’ve used in the past to access the web.
Should we have moved ahead with HTTPS everywhere before having a production-ready solution to ensure folks could still have affordable access? I honestly don’t know. Is a secure site you can’t access better than an insecure one you can? That’s an impossibly difficult question to answer, and if you posed it to any group of people, I’m sure a heated discussion would ensue.
Many of us, too, are likely the wrong people to answer that. I know I’m not the right person to pose the question to. I can afford to access the web, and I don’t have the same significant privacy concerns that many around the world and down the street do. Having the discussion is essential, but ensuring it happens with the right people is even more so.
Then there’s the question this raises about how we approach building our sites and applications today.
Troy Hunt had one of the most reasoned responses to Eric’s post that I’ve seen. He pointed out that it’s critical that we move forward with HTTPS, but that this is also an essential problem to solve. He also, rightfully, pointed out the root issue: performance.
If you’re concerned about audiences in low-bandwidth locations, focus on website optimisation first. The average page load is going on 3MB, follow @meyerweb’s lead and get rid of 90% of that if you want to make a real difference to everyone right now 😎
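To put rough numbers on that, here’s a back-of-envelope calculation (illustrative figures only) using the ~3MB average page weight Troy cites and a monthly data cap like the 2GB one I described above.

```python
# Back-of-envelope only: the ~3 MB average page weight Troy cites,
# measured against a 2 GB monthly data cap like the one described earlier.
cap_bytes = 2 * 1024**3            # 2 GB monthly cap
average_page = 3 * 1024**2         # ~3 MB average page weight
trimmed_page = average_page // 10  # the same page after cutting 90% of it

print(f"Page loads per month at ~3 MB each:   {cap_bytes // average_page}")  # ~682
print(f"Page loads per month at ~0.3 MB each: {cap_bytes // trimmed_page}")  # ~6826
```

Cutting 90% of the page weight turns a cap that allows a few hundred page loads a month into one that allows several thousand, which is why performance is the lever we can pull right now.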
I refer back to Paul Lewis’s unattractive pillars so often I should be paying him some sort of monthly stipend, but this is such a clear example of how tightly linked security, accessibility, and performance are, and of how much each one matters.
The folks using these local caching servers and proxy services are doing so because we’ve built a web that is too heavy and expensive for them to use otherwise. These tools, therefore, are essential. But using them poses serious privacy and security risks. They’re intentionally conducting a man-in-the-middle attack, which sounds terribly scary because it is.
To protect folks from these kinds of risks, we’ve made a move to increase the security of the web by doing everything we can to get everything running over HTTPS. It’s undeniably a vital move to make. However, this combination of poor performance and good security now ends up making the web inaccessible to many. The three pillars—security, accessibility and performance—can’t be considered in isolation. All three play a role and must be built up in concert with each other.
Like pretty much everyone in this discussion has acknowledged, this isn’t an easy issue to solve. Counting on improved infrastructure to resolve these performance issues is a bit optimistic in my opinion, at least if we expect it to happen anytime soon. Even improving the overall performance of the web, which sounds like the easiest solution, is harder than it first appears. Cultural changes are slow, and there are structural problems that further complicate the issue.
Those aren’t excuses, mind you. Each of us can and should be doing our part to make our sites as performant and bloat-free as possible. But they are an acknowledgment that there are deeply rooted issues here that need to be addressed.
There are a lot of questions this conversation has raised, and far fewer answers. This always makes me uncomfortable. I write a lot of posts that never get published because ending with unsolved questions is never particularly satisfying.
But I suspect that may be what we need—more open discussion and questioning. More thinking out loud. More acknowledgment that not everything we do is straightforward, that there’s much more nuance than may first appear. More critical thinking about the way we build the web. Because the problems may be hard and the answers uncertain, but the consequences are real.