The small web is beautiful


Summary: I believe that small websites are compelling aesthetically, but are also important to help us resist selling our souls to large tech companies. In this essay I present a vision for the “small web” as well as the small software and architectures that power it. Also, a bonus rant about microservices.

Go to: Software | Web | Server-side | Static sites | Dependencies | Analytics | Microservices

About fifteen years ago, I read E. F. Schumacher’s Small is Beautiful and, despite not being interested in economics, I was moved by its message. Perhaps even more, I loved the terse poetry of the book’s title – it resonated with my frugal upbringing and my own aesthetic.

I think it’s time for a version of that book about technology, with a chapter on web development: The Small Web is Beautiful: A Study of Web Development as if People Mattered. Until someone writes that, this essay will have to do.

There are two aspects of this: first, small teams and companies. I’m not going to talk much about that here, but Basecamp and many others have. What I’m going to focus on in this essay is small websites and architectures.

I’m not the first to talk about the “small web”, but, somewhat surprisingly, only a few people have discussed it using that term. Here are the main web pages I can find that do:

Why aim small in this era of fast computers with plenty of RAM? A number of reasons, but the ones that are most important to me are:

So let’s dive in. I want to cover a bunch of different angles, each with its own subheading.

Small software

If we’re going to talk about a small web, we need to start with small software.

As a teen, I learned to program using x86 assembly and Forth – perhaps odd choices, but my dad was heavily into Forth, and I loved how the language was so simple I could write my own bootstrapped compiler.

In terms of career, I started as an embedded programmer – not as in “embedded Linux” but as in microcontrollers where 16KB of RAM was generous. My current laptop has 16GB of RAM, and that’s not a lot by today’s standards. We were building IP-networked products with one millionth the amount of RAM. Those kinds of micros are as cheap as chips (ahem), and still widely used for small electronic devices, sensors, internet-of-things products, and so on.

You have to think about every byte, compile with size optimizations enabled, and reuse buffers. It’s a very different thing from modern web development, where a JavaScript app compiles “down” to a 1MB bundle, or a single Python object header is 16 bytes before you’ve even got any data, or a Go hello-world binary is 2MB even before you’ve added any real code.

How do you create small programs? I think the main thing is that you have to care about size, and most of us don’t think we have time for that. Apart from embedded development, there’s an entire programming subculture called the demoscene that cares about this. They have competitions for the smallest 4KB demos: who can pack the most graphical punch into 4096 bytes of executable. That’s smaller than many favicons! (Elevated and cdak are two of the highest-rated 4K demos.) Many demosceners go on to become game developers.

It’s not just about executable size … when you’re developing your next command line tool, if you use Go or Rust or even C, your program will be much faster, smaller, and use less memory than a Python or Java equivalent. And easier to install. If you don’t understand why, please do learn. (It’s out of scope for this essay, but to summarize: Go, Rust, and C compile to ready-to-execute machine code, don’t carry around a virtual machine, and don’t have memory overhead for objects like integers.)

But why not apply some of the same principles to web development? In the web world, I think the main trick is to be careful what dependencies you include, and also what dependencies they pull in. In short, know node_modules – or maybe better, no node_modules. More about this below.

Niklaus Wirth of Pascal fame wrote a famous paper in 1995 called A Plea for Lean Software [PDF]. His take is that “a primary cause for the complexity is that software vendors uncritically adopt almost any feature that users want”, and “when a system’s power is measured by the number of its features, quantity becomes more important than quality”. He goes on to describe Oberon, a computer language (which reminds me of Go in several ways) and an operating system that he believes helps solve the complexity problem. Definitely wirth a read!

I’ve been mulling over this for a number of years – back in 2008 I wrote a sarcastic dig at how bloated Adobe Reader had become: Thank you, Adobe Reader 9! It was a 33MB download and required 220MB of hard drive space even in 2008 (it’s now a 150MB download, and I don’t know how much hard drive space it requires, because I don’t install it these days).

But instead of just complaining, how do we actually solve this problem? Concretely, I think we need to start doing the following:

Small websites

I’m glad there’s a growing number of people interested in small websites.

A few months ago there was a sequence of posts on Hacker News about various “clubs” you could add your small website to: the 1MB Club (comments), 512KB Club (comments), 250KB Club (comments), and even the 10KB Club (comments). I think those are a fun indicator of renewed interest in minimalism, but I will say that raw size isn’t the whole story – a 2KB site with no real content isn’t much good, and a page with 512KB of very slow JavaScript is worse than a snappy site with 4MB of well-chosen images.

Some of my favourite small websites are:

Hacker News: I personally like the minimalist, almost brutalist design, but I love its lightness even more. I just downloaded the home page, and loading all resources transfers only 21KB (61KB uncompressed). Even pages with huge comment threads only transfer about 100KB of compressed data, and load quickly. Reddit has become such a bloated mess in comparison. Hacker News, never change!

Lobsters: a similar news-and-voting site, with slightly more “modern” styling. It uses some JavaScript and profile icons, but it’s still clean and fast, and the total transfer size for the homepage is only 102KB. You just don’t need megabytes to make a good website.

Sourcehut: I like the concept behind Drew DeVault’s business, but I love how small and anti-fluff the website is. He has set up a mini-site called the Software Forge Performance Index that tracks size and browser performance of the prominent source code websites – Sourcehut is far and away the lightest and fastest. Even his homepage is only 81KB, including several screenshot thumbnails.

SQLite: not only is SQLite a small, powerful SQL database engine, but its website is also fantastically small and content-rich. Even their 7000-word page about testing is only 70KB. How do they do this? It’s not magic: focus on high-quality textual content, minimal CSS, no JavaScript, and very few images (a small logo and some SVGs).

LWN: I’m a little biased, because I’ve written articles for them, but they’re an excellent website for Linux and programming news. Extremely high-quality technical content (and a high bar for authors). They’re definitely niche, and have a “we focus on quality content, not updating our CSS every year” kind of look – they’ve been putting out great content for 23 years! Their homepage only downloads 44KB (90KB uncompressed).

Dan Luu’s blog: this is one of the more hardcore examples. His inline CSS is only about 200 bytes (the pages are basically unstyled), and his HTML source doesn’t use any linefeed characters. A fun stunt – although then he goes on to load 20KB of Google Analytics JavaScript…

As a friend pointed out, those websites have something of an “anti-aesthetic aesthetic”. I confess to not minding that at all, but on the other hand, small doesn’t have to mean ugly. More and more personal blogs and websites have adopted a small web approach but are more typographically appealing:

There are many, many more. Programmer Sijmen Mulder created a nice list of text-only websites – not quite the same thing as small, but it definitely overlaps!

However, it’s not just about raw size, but about an “ethos of small”. It’s caring about the users of your site: that your pages download fast, are easy to read, have interesting content, and don’t load scads of JavaScript for Google or Facebook’s trackers. Building a website from scratch is not everyone’s cup of tea, but for those of us who do it, maybe we can promote templates and tools that produce small sites and encourage quality over quantity.

For this website, I lovingly crafted each byte of HTML and CSS by hand, like a hipster creating a craft beer. Seriously though, if your focus is good content, it’s not hard to create a simple template from scratch with just a few lines of HTML and CSS. It will be small and fast, and it’ll be yours.
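
If you’ve never done it, here’s roughly the shape of a from-scratch page – a simplified sketch, not this site’s actual source:

    <!DOCTYPE html>
    <html lang="en">
    <head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <title>My small site</title>
    <style>
    body { max-width: 40em; margin: 0 auto; padding: 0 1em;
           font: 1.1em/1.5 Georgia, serif; }
    </style>
    </head>
    <body>
    <h1>Hello, small web</h1>
    <p>Content is the point; everything else is optional.</p>
    </body>
    </html>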

Loading this essay transfers about 23KB (56KB uncompressed), including the favicon and analytics script. It’s small, fast, and readable on desktop or mobile. I don’t think it’s too bad looking, but I’m primarily aiming for a minimalist design focussed on the content.

In addition to making sure your HTML and CSS are small, be sure to compress your images properly. Two basic things here: don’t upload ultra-high resolution images straight from your camera, and use a reasonable amount of JPEG compression for photos (and PNG for screenshots or vector art). Even for large images, you can usually use 75% or 80% compression and still have an image without JPEG noise. For example, the large 1920x775 image on the top of my side business’s homepage is only 300KB.
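
If you want to automate the resizing and recompression, it only takes a few lines with an image library. Here’s a sketch using the sharp package for Node (the file names and sizes are just examples):

    // Resize a camera original down to display size and recompress it.
    // Requires: npm install sharp
    const sharp = require("sharp");

    sharp("camera-original.jpg")
        .resize({ width: 1920 })   // shrink to display resolution
        .jpeg({ quality: 80 })     // ~80% quality: small, no visible artifacts
        .toFile("hero-web.jpg")
        .then(function () { console.log("done"); });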

Speaking of hero images, you don’t need big irrelevant images at the top of your blog posts. They just add hundreds of kilobytes (even megabytes) to your page weight, and don’t provide value. And please don’t scatter your article with animated GIFs: if there’s something animated on the screen, I can hardly concentrate enough to read the text – and I’m not the only one. Include relevant, non-stock images that provide value equal to their weight in bytes. Bare text is okay, too, like a magazine article.

IndieWeb.org is a great resource here, though they use the term “indie” rather than “small”. This movement looks more organic than the Small Technology Foundation (which has even been critiqued as “digital green-washing”), and their wiki has a lot more real content. IndieWeb also promotes local Homebrew Website Clubs and IndieWebCamp meetups.

Emphasize server-side, not JavaScript

JavaScript is a mixed blessing for the web, and more often than not a bane for small websites: it adds to the download size and time, it can be a performance killer, it’s bad for accessibility, and if you don’t hold it right, it’s bad for search engines. Plus, if your website is content-heavy, it probably isn’t adding much.

Don’t get me wrong: JavaScript is sometimes unavoidable, and is great where it’s great. If you’re developing a browser-based application like Gmail or Google Maps, you should almost certainly be using JavaScript. But for your next blog, brochure website, or project documentation site, please consider plain HTML and CSS.

If your site – like a lot of sites – is somewhere in between and contains some light interaction, consider using JavaScript only for the parts of the page that need it. There’s no need to overhaul your whole site using React and Redux just to add a form. Letting your server generate HTML is still an effective way to create fast websites.
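
Server-generated HTML doesn’t require much machinery, either. Here’s a minimal sketch using nothing but Node’s standard library – the page content is hypothetical, but the shape is the whole idea:

    // A tiny server that renders HTML on the server -- no framework,
    // and no client-side JavaScript required.
    const http = require("http");

    http.createServer(function (req, res) {
        const html = "<!DOCTYPE html><html><body>" +
            "<h1>Latest articles</h1>" +
            "<p>Rendered on the server at " + new Date().toISOString() + "</p>" +
            "</body></html>";
        res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
        res.end(html);
    }).listen(8080);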

Stack Overflow is a case in point. From day one, they’ve made performance a feature by rendering their pages on the server, and by measuring and reducing render time. I’m sure the Stack Overflow code has changed quite a lot since the Jeff Atwood days – it now makes a ton of extra requests for advertising purposes – but the content still loads fast.

Hacker News (there’s that site again) is a server-side classic. With only one tiny JavaScript file for voting, the HTML generated on the server does the rest. And apparently it still runs on a single machine.

Around fifteen years ago there was this great idea called progressive enhancement. The idea was to serve usable HTML content to everyone, but users with JavaScript enabled or fast internet connections would get an enhanced version with a more streamlined user interface. In fact, Hacker News itself uses progressive enhancement: even in 2021, you can still turn off JavaScript and use the voting buttons. It’s a bit clunkier because voting now requires a page reload, but it works fine.
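
The pattern is simple: start with a plain form or link that works on its own, and let a few lines of script upgrade it. A sketch (the form id and endpoint are invented for illustration):

    // The HTML is a normal form that works without JavaScript, e.g.:
    //   <form id="vote" method="POST" action="/vote"> ... </form>
    // Without JavaScript it submits and reloads the page as usual; with
    // JavaScript we intercept the submit and vote in the background.
    document.getElementById("vote").addEventListener("submit", function (e) {
        e.preventDefault();
        fetch(e.target.action, { method: "POST", body: new FormData(e.target) });
    });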

Is progressive enhancement still relevant in 2021? Arguably not, though some die-hards still turn JavaScript off, or at least enable it only for sites they trust. However, I think it’s the mentality that’s most important: it shows the developer cares about performance, size, and alternative users. If Hacker News voting didn’t work without JavaScript, I don’t think that would be a big problem – but it shows a certain kind of nerdish care that it does work. Plus, the JavaScript they do have is only 2KB (5KB uncompressed).

Compare that to the 8MB (14MB uncompressed) that the Reddit homepage loads. And that’s across 201 requests – I kid you not! – most of it JavaScript to power all the ads and tracking. Lovely…

You don’t need a “framework” to develop this way, of course, but there are some tools that make this style of server-side development easier. Turbolinks from the Basecamp folks was an early one, and it’s now been superseded by Turbo, which is apparently used to power their email service Hey. I haven’t used these personally, but the ideas are clever (and surprisingly old-skool): use standard links and form submissions, serve plain HTML, but speed it up with WebSockets and JavaScript if available. Just today, in fact, someone posted a new article on Hacker News which claims “The Future of Web Software Is HTML-over-WebSockets”. If Hey is anything to go by, this technique is fast!

On the other hand, sometimes you can reduce overall complexity by using JavaScript for the whole page if you’re going to need it anyway. For example, the registry pages on my wedding registry website are rendered on the client (they actually use Elm, which compiles to JavaScript). I do need the interactivity of JavaScript (it’s more “single page application” than mere content), but I don’t need server-side rendering or good SEO for these pages. The homepage is a simple server-rendered template, but the registry pages are fully client-rendered.

Static sites and site generators

Another thing there’s been renewed interest in recently is static websites (these used to be called just “websites”). You upload some static HTML (and CSS and JavaScript) to a static file server, and that’s it.

Improving on that, there are many “static site generators” available. These are tools that generate a static site from simple templates, so that you don’t have to copy your site’s header and footer into every HTML file by hand. When you add an article or make a change, run the script to re-generate. If you’re hosting a simple site or blog or even a news site, this is a great way to go. It’s content, after all, not an interactive application.
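
The core of a static site generator is almost trivially small. Here’s a sketch in Node (the directory names are hypothetical) that wraps every page in a shared header and footer:

    // build.js -- regenerate the site: wrap every page in the shared
    // header and footer, and write the results to the output directory.
    const fs = require("fs");
    const path = require("path");

    const header = fs.readFileSync("templates/header.html", "utf8");
    const footer = fs.readFileSync("templates/footer.html", "utf8");

    fs.mkdirSync("public", { recursive: true });
    for (const name of fs.readdirSync("pages")) {
        const body = fs.readFileSync(path.join("pages", name), "utf8");
        fs.writeFileSync(path.join("public", name), header + body + footer);
    }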

I use GitHub Pages for this site, just because it’s a free host that supports SSL and automatically builds your site with the Jekyll static site generator whenever you push a change. It makes it easy to share a standard header and the same CSS across all pages, and you can define multiple templates or “layouts” if you want. Because most people only view one or two articles on my site, I include my CSS inline. With HTTP/2 this doesn’t make much difference, but Lighthouse showed around 200ms with inline CSS versus 300ms with external CSS.

Here’s an example of what a simple Jekyll page looks like (the start of this essay, in fact):

    ---
    layout: default
    title: "The small web is beautiful"
    permalink: /writings/the-small-web-is-beautiful/
    description: A vision for the "small web", small software, and ...
    ---
    Markdown text here.

I’ve also used Hugo, which is a really fast static site generator written in Go – it generates even large sites with thousands of pages in a few seconds. And there are many other options available.

Fewer dependencies

There’s nothing that blows up the size of your software (or JavaScript bundle) like third party dependencies. I always find a web project’s node_modules directory hard to look at – just the sheer volume of stuff in there makes me sad.

Different languages seem to have different “dependency cultures”. JavaScript, of course, is notorious for an “if it can be a library, it should be” attitude, resulting in the left-pad disaster as well as other minuscule libraries like the 3-line isarray. There are also big, heavy packages like Moment.js, which takes 160KB even when minified. There are ways to shrink it down if you don’t need all locales, but it’s not the default, so most people don’t (you’re probably better off choosing a more modular approach like date-fns).
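
In fact, for basic formatting you may not need a date library at all – modern browsers and Node ship locale-aware formatting in the built-in Intl API. A quick sketch:

    // Format a date without any library -- Intl is built in, locale data included.
    const fmt = new Intl.DateTimeFormat("en-GB", { dateStyle: "long" });
    console.log(fmt.format(new Date(2021, 2, 28)));  // prints "28 March 2021"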

Go now has good dependency management with the recent modules tooling, but it also has a culture of “use the standard library if you can”. Russ Cox wrote an excellent essay about the downsides of not being careful with your dependencies: Our Software Dependency Problem. Go co-creator Rob Pike even made this one of his Go proverbs: “A little copying is better than a little dependency.” You can probably guess by now, but I like this minimalist approach: apart from reducing the number of points of failure, it makes programs smaller.

Python, Ruby, Java, and C# seem to be somewhere in between: people use a fair number of dependencies, but from what I’ve seen there’s more care taken, and it doesn’t get as out of hand as node_modules. Admittedly the comparison is a little unfair, as Python (and those other languages) have standard libraries with much more in them than JavaScript’s.

The website YouMightNotNeedjQuery.com shows that many of the tasks you might think you need a library for are actually quite simple to do in plain JavaScript. For example, in one of my projects I use a function like the following to make an API request with plain old XMLHttpRequest:

    // Make a POST request with a JSON body, then pass the HTTP status
    // and the parsed JSON response to the callback.
    function postJson(url, data, callback) {
        var xhr = new XMLHttpRequest();
        xhr.onreadystatechange = function () {
            if (xhr.readyState === xhr.DONE) {
                callback(xhr.status, JSON.parse(xhr.responseText));
            }
        };
        xhr.open("POST", url, true);
        xhr.setRequestHeader("Content-Type", "application/json");
        xhr.send(JSON.stringify(data));
    }
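
In browsers from the last few years you can write the same helper with fetch, still with no library:

    function postJson(url, data, callback) {
        fetch(url, {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify(data),
        }).then(function (resp) {
            return resp.json().then(function (body) {
                callback(resp.status, body);
            });
        });
    }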

The moral of the story: think twice before adding dependencies. You’ll keep your websites and programs smaller and more reliable, and you’ll thank Russ Cox later.

Small analytics

Most website owners want some form of analytics to see how many visitors are coming to their site, and from where. The go-to tool is Google Analytics: it’s easy to set up and the UI is pretty comprehensive. But there’s a cost: it adds a significant amount of weight to your page (19KB of JavaScript, 46KB uncompressed), and it sends a lot of user data for Google to collect.

Once again, there’s been renewed interest in smaller, more privacy-friendly analytics systems in recent times. Just this morning I read a provocative article that was highly-voted on Hacker News called “Google Analytics: Stop feeding the beast”.

Last year I wrote two articles for LWN on the subject, so I won’t say too much more here:

For this website I use GoatCounter, which is available as a low-cost hosted service (free for non-commercial use) or as a self-hosted tool. I really like what Martin is doing here, and how small and simple the tool is: no bells and whistles, just the basic traffic numbers that most people want.
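
Integration is as light as the tool itself. At the time of writing it’s a single small script tag along these lines (“MYSITE” is a placeholder for your site code):

    <script data-goatcounter="https://MYSITE.goatcounter.com/count"
            async src="//gc.zgo.at/count.js"></script>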

Small architectures (not microservices)

Small websites are great for users, but small architectures are great for developers. A small, simple codebase is easy to maintain, and will have fewer bugs than a large, sprawling system with lots of interaction points.

I contend that the “microservices everywhere” buzz is a big problem. Microservices may be used successfully at Google and Amazon, but most companies don’t need to build that way. They introduce complexity in the code, API definitions, networking, deployment, server infrastructure, monitoring, database transactions – just about every aspect of a system is made more complex. Why is that?

It’s been said before: microservices solve a people problem, not a technical one. And beware of Conway’s Law – your architecture will mimic your company structure. Or the reverse: you’ll have to hire and reorganize so that your company structure matches the architecture microservices require – lots of engineers on lots of small teams, with each team managing a couple of microservices.

That doesn’t mean microservices are always the wrong choice: they may be necessary in huge engineering organizations. However, if you’re working at such a company, you’ve probably already been using microservices for years. If you’re not “Google size”, you should think twice before copying their development practices.

What’s the alternative? The term “monolith” has a bad rap, but I agree with David at Basecamp that monoliths can be majestic. Basecamp is a large, monolithic application, and they run it with just a dozen programmers. David is quick to point out that “the Majestic Monolith doesn’t pretend to provide a failsafe architectural road to glory”. You still have to think, design, and write good code.

Thankfully, people are pushing back against the cargo-culting. Just do a search for “why not microservices” and you’ll find lots of good articles on the subject. One of the recent ones I’ve read is from Tailscale: Modules, monoliths, and microservices.

So what’s my advice?

Okay, so this became more of an anti-microservices rant than I was planning, but so be it.

In terms of counter-examples, Stack Overflow once again comes to mind. They’re one of the web’s busiest sites, but they have a relatively simple, two-tier architecture that they’ve scaled vertically – in other words, big servers with lots of RAM, rather than hundreds of small servers. They have 9 web servers and 4 very chunky SQL servers, with a few additional servers for their tag engine, Redis, Elasticsearch, and HAProxy. This architecture helps them get great performance and the ability to develop with a small team.

My own side business, GiftyWeddings.com, only gets a small amount of traffic, so it’s nothing like Stack Overflow, but it uses a Go HTTP server with SQLite on one of the smallest EC2 instances available, t2.micro. It costs about $8 per month, and I only have one tiny piece of infrastructure to maintain. I deploy using Ansible – itself another good example of simple architecture, a tool that boils down to “just use ssh”.
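
The server is written in Go, but the one-process, one-file-database shape fits in a few lines of almost anything. Here’s a hedged sketch in Node using the better-sqlite3 package (the table, columns, and routes are invented for the example):

    // One process, one file database: that's the whole "architecture".
    const http = require("http");
    const Database = require("better-sqlite3");  // npm install better-sqlite3

    const db = new Database("registry.db");

    http.createServer(function (req, res) {
        // e.g. GET /registry/abc123
        const slug = req.url.split("/")[2] || "";
        const row = db.prepare("SELECT title FROM registries WHERE slug = ?").get(slug);
        res.writeHead(row ? 200 : 404, { "Content-Type": "text/html; charset=utf-8" });
        res.end("<h1>" + (row ? row.title : "Not found") + "</h1>");
    }).listen(8080);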

Speaking of SQLite, there’s a growing number of developers who advocate using SQLite to run their websites. SQLite’s “when to use SQLite” page says “any site that gets fewer than 100K hits/day should work fine with SQLite. The 100K hits/day figure is a conservative estimate, not a hard upper bound. SQLite has been demonstrated to work with 10 times that amount of traffic.” Here are some other SQLite success stories:

Summing up

Companies will do what companies do, and continue to make flashy-looking, bloated websites that “convert” well. Maybe you can have an influence at work, and come home to your better half and say “honey, I shrunk the web”. Or maybe you’ll just focus on the small web for your personal projects. (Disclaimer: I mostly do the latter – as part of my day job, I work on Juju, which is not a small system by most measures.)

Either way, I believe the “small web” is a compelling term and a compelling aesthetic. Not necessarily in the visual sense, but in the sense that you built it yourself, you understand all of it, and you run it on a single server or static file host.

There are thousands of excellent examples of small websites, and hundreds of ways to create simple architectures – this essay touches on only a few of the ones I’m passionate about. I’d love to hear your own ideas and stories! Comment over at Lobsters or Hacker News or programming Reddit.