
title: Plaintext HTTP in a Modern World
url: https://jcs.org/2021/01/06/plaintext
hash_url: 9e5d68c745

On the modern web, everything must be encrypted. Unencrypted websites are treated as relics of the past with browsers declaring them toxic waste not to be touched (or even looked at) and search engines de-prioritizing their content.

While this push for security is good for protecting modern communication, there is a whole web full of information and services that don’t need to be secured, and those trying to access them from older, vintage computers or even modern embedded devices are increasingly being left behind.

Note: This article is mostly directed at those serving personal websites, like this one, with no expectation of privacy or security by most readers of the content. If you are running a commercial website, collecting personal information from users, or transmitting sensitive data that users would expect to be done privately, disregard everything here and don’t bother offering your website over plaintext.

HTTP Upgrading

Though it’s less common these days, users may still type in your website’s URL manually rather than clicking on a link that already includes the https scheme. (Imagine a user hearing your website mentioned on a podcast and having to type it into their browser.)

For a URL entered with an http scheme or, more commonly, with no scheme specified at all, the browser will default to loading your website over plaintext HTTP unless your domain is in the browser’s HSTS preload list or the user is running a plugin like HTTPS Everywhere. For this reason, even if your website is only served over HTTPS, it’s still necessary to configure your server to respond to plaintext HTTP requests with a 301 or 302 redirect to the HTTPS version of the URL.
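
As a minimal sketch, that redirect in Nginx (assuming a separate server block handles the HTTPS side) can be as small as:

server {
    server_name jcs.org;
    listen *:80;

    # Send every plaintext request to the HTTPS version of the same URL
    return 301 https://$host$request_uri;
}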

If your server is properly configured to send a Strict-Transport-Security header, once the user’s browser loads your website’s HTTPS version, the browser will cache that information for days or months and future attempts to load your site will default to the HTTPS scheme instead of HTTP even if the user manually types in an http:// URL.
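
The header itself is a single line; in Nginx it can be set in the HTTPS server block like this, where the max-age of roughly six months is just an example value:

add_header Strict-Transport-Security "max-age=15768000" always;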

Avoid Forced Upgrading by Default

This forced redirection is a major cause of websites becoming inaccessible on vintage computers. Your server responds to the HTTP request with a 301 or 302 status and no content, and either a) the browser follows the redirection and tries to negotiate an SSL connection, but your server doesn’t offer legacy SSL versions or old ciphers so the negotiation fails, or b) the browser just doesn’t support SSL/TLS at all and fails to follow the redirection.

A real-life example of this is that I recently purchased a PowerBook G4 and updated it to Mac OS X 10.5.8 from 2009. It has a 1.5GHz processor and 1.25GB of RAM, and can connect to my modern WiFi network and use most of my USB peripherals. It includes a Mail client that can talk to my IMAP and SMTP servers, and a Safari web browser that can render fairly modern CSS layouts. However, it’s unable to view any content at all on Wikipedia simply because it can’t negotiate TLS 1.2 with the ciphers Wikipedia requires. Why is a decade-old computer too old to view encyclopedia articles?

A solution to this problem is for websites to continue offering their full content over plaintext HTTP in addition to HTTPS. If you’re using Nginx, instead of creating two server blocks with the listen *:80 version redirecting to the listen *:443 ssl version, use a single server block with multiple listen lines, like so:

server {
    server_name jcs.org;

    # Serve the same content over both plaintext HTTP and HTTPS
    listen *:80;
    listen *:443 ssl http2;

    ssl_certificate ...;
    ssl_certificate_key ...;
    ...

    # Only offer modern TLS; see below
    ssl_protocols TLSv1.2;
}
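
With a combined block like that, both schemes serve content directly instead of redirecting. A quick sanity check is to compare the status lines of each response, which should both come back as 200:

curl -sI http://jcs.org/ | head -n 1
curl -sI https://jcs.org/ | head -n 1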

While it may seem counter to the point of this article, I recommend not serving legacy protocols like SSLv3 and their old ciphers to try to help older browsers. These protocols and ciphers are insecure and broken, and I feel it’s better to make it clear to the user that they’re connecting to a website in cleartext than to offer a false sense of security by having the browser indicate a “secure connection” over an old, broken protocol. There is also the risk, even if it’s no longer very practical, of a downgrade attack tricking a modern browser into negotiating an old, broken cipher that your server still offers.

Even if you do offer legacy protocols and ciphers, older browsers may still fail to connect because your TLS certificate is signed by a certificate authority whose root certificate they don’t trust.

Continue Upgrading Modern Browsers

Now that your entire website is being offered to legacy browsers over HTTP, modern browsers can still be directed to connect over HTTPS for added privacy by responding to the Upgrade-Insecure-Requests header. When making a plaintext HTTP request, this header is only sent by modern browsers that support CSP, so it’s a reasonable indicator that the client is modern and robust enough to negotiate a TLS 1.2 connection if redirected to your site’s HTTPS version.

For Nginx, this can be done inside a server block by defining a per-request variable that captures whether the request was made over plaintext HTTP and whether it included an Upgrade-Insecure-Requests: 1 header:

server {
    ...
    # $https is empty for plaintext requests ("on" for HTTPS), and
    # $http_upgrade_insecure_requests is "1" when the browser sent an
    # Upgrade-Insecure-Requests: 1 header
    set $need_http_upgrade "$https$http_upgrade_insecure_requests";

    location / {
        if ($need_http_upgrade = "1") {
            add_header Vary Upgrade-Insecure-Requests;
            return 301 https://$host$request_uri;
        }

        ...
    }
}

This location block will respond to any request and, for those made over plaintext HTTP (where $https is blank) that included an Upgrade-Insecure-Requests: 1 header, respond with a 301 redirect to the HTTPS version. The Vary header is sent so that any caching proxies in the middle won’t serve the cached redirect to clients that didn’t send the Upgrade-Insecure-Requests header.
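
Put together, a modern browser’s first plaintext request and the server’s response would look something like this (unrelated headers trimmed):

GET / HTTP/1.1
Host: jcs.org
Upgrade-Insecure-Requests: 1

HTTP/1.1 301 Moved Permanently
Location: https://jcs.org/
Vary: Upgrade-Insecure-Requests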

Content Concessions

With legacy browsers now able to access your site, you may want to make some changes to your HTML and CSS to allow your site to render with some degree of readability. I don’t recommend giving it the full IE6 treatment catering to the lowest common denominator, but at least make the main text of your site readable.

Obviously avoid JavaScript unless it’s used for progressive enhancement, and be aware that many older browsers raise error dialogs at the mere presence of modern JavaScript they can’t parse, even if it’s never executed.

Modern CSS and complex layouts can also be a problem even for browsers just a few years old, so it’s probably best to use them sparingly. For any <a> or <img> tags that are local to your site, use relative links to avoid specifying a particular scheme.

<a href="/posts/blah">
  <img src="/images/...">
</a>

If you have to specify an absolute URL to another site that is also available over both HTTP and HTTPS, you can specify it without a scheme or colon and the browser will use the same http: or https: that the document is being viewed over:

<a href="//other.example.com/">My other site</a>

A Rant About Gemini

Tangentially related, Gemini is a modern document transfer protocol that aims to fit between the ancient Gopher protocol and the too-modern HTTP web. Its document markup language is based on Markdown, so it’s lightweight and simple to parse without complex HTML/CSS parsers.

It sounds like the perfect thing to bring modern content to vintage computers, except that its protocol requires all content to be transferred over TLS 1.2 or higher which makes it nearly impossible to access from a vintage computer or even a modern embedded system with limited CPU power.

This requirement seems poorly thought out, especially considering the Gemini protocol doesn’t even support forms (other than a single text box on a search form) so there’s no chance of users submitting private data, and there’s no mechanism for client-server sessions so clients can’t be authenticated, meaning everything served pretty much has to be public anyway.

Its protocol author argues that TLS is just a simple dependency, no different than a TCP server module, so it should be trivial to implement in any client or server. But if your computer’s CPU is so slow that a modern TLS negotiation takes long enough to be unusable, or its platform has no TLS 1.2 library available, it’s difficult to write a client without depending on an external system [2].

In my opinion, the protocol should recommend that servers offer both plaintext and TLS-encrypted versions, and that clients prefer TLS but fall back to plaintext if needed. Clients for modern operating systems can continue enforcing a TLS requirement so their users don’t feel any less secure.

Perhaps just sending actual Markdown text over plaintext HTTP to clients that ask for it can be the new, old web.

Accept: text/markdown,text/plain;q=0.9,*/*;q=0.1
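
As a hypothetical example of that negotiation, a client could request the Markdown version of this very article like so, assuming the server honors the Accept header:

curl -H "Accept: text/markdown, text/plain;q=0.9" http://jcs.org/2021/01/06/plaintext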

Please don’t contact me to “well ackchyually” me and explain MITM attacks and how your terrible ISP inserts ads into your unencrypted web pages and how you were able to make a Gemini client out of a whistle and some shoelaces. If you don’t want to make your website content available to stupid old computers, then don’t.