
title: Building an offline page for theguardian.com
url: https://www.theguardian.com/info/developer-blog/2015/nov/04/building-an-offline-page-for-theguardiancom
hash_url: 6123a07505

You’re on a train to work and you open up the Guardian app on your phone. A tunnel surrounds you, but the app still works in very much the same way as it usually would—despite your lack of internet connection, you still get the full experience, only the content shown will be stale. If you tried the same for a website, however, it wouldn’t load at all:

Chrome for Android’s offline page. Illustration: Oliver Ash

Chrome eases the pain of being offline with its hidden game (press space bar on desktop, tap the dinosaur on mobile). But we can do better.

Service workers allow website authors to intercept all network requests to their websites, which means we can provide rich offline experiences, just like native apps. At the Guardian, we recently released a custom offline experience of our own. When users are offline they will see a Guardian branded page with a simple offline message and, for fun, a crossword to play while they wait for a connection. This blog post is about how we built it, but first, here’s how you can try it out for yourself.

Try it out

You must be running a browser that supports the Service Worker and fetch APIs. At the time of writing only Chrome (mobile and desktop) supports both of these APIs, but support is coming to Firefox very soon (it is currently in the nightly build), and all browsers except Safari have shown enthusiasm. Furthermore, service workers can only be registered for websites served over HTTPS, which theguardian.com has started to move towards. Thus, we can only offer the offline experience on the HTTPS sections of the website. For the time being, we have chosen the developer blog as our testing ground. So, if you’re reading this on the developer blog section of our website, you’re in luck.

Once you’ve visited a page on our developer blog in a supported browser, you’re all set. Disconnect your device from the internet and refresh. If you are unable to try it out for yourself, take a look at this demo video.

How it works

We can instruct browsers to register our service worker as soon as the user arrives on the page with some simple JavaScript. Support for service workers is currently sparse, so we need to use feature detection to avoid any errors.

if (navigator.serviceWorker) {
    navigator.serviceWorker.register('/service-worker.js');
}
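
register() returns a promise, so, as a minimal sketch (the logging below is our own illustration rather than part of the original script), we could report whether registration succeeded:

if (navigator.serviceWorker) {
    navigator.serviceWorker.register('/service-worker.js')
        .then(function (registration) {
            // By default the scope is the directory containing the script
            console.log('Service worker registered with scope:', registration.scope);
        })
        .catch(function (error) {
            // Registration fails if the script cannot be fetched or parsed
            console.log('Service worker registration failed:', error);
        });
}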

As part of the service worker’s install event, we can use the new cache API to cache the various moving parts of our website, such as HTML, CSS, and JavaScript:

var staticCacheName = 'static';
var version = 1;

function updateCache() {
    return caches.open(staticCacheName + version)
        .then(function (cache) {
            return cache.addAll([
                '/offline-page.html',
                '/assets/css/main.css',
                '/assets/js/main.js'
            ]);
        });
}

self.addEventListener('install', function (event) {
    event.waitUntil(updateCache());
});
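
The version variable above implies a simple cache-versioning scheme: bumping it makes the next install populate a fresh cache. A common companion step, sketched here as an assumption rather than something shown in the post itself, is to delete stale caches in the activate event:

self.addEventListener('activate', function (event) {
    event.waitUntil(
        caches.keys().then(function (keys) {
            return Promise.all(
                keys
                    // Only touch our own static caches, and keep the current version
                    .filter(function (key) {
                        return key.indexOf(staticCacheName) === 0 &&
                            key !== staticCacheName + version;
                    })
                    .map(function (key) { return caches.delete(key); })
            );
        })
    );
});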

Once installation has completed, the service worker can listen for the fetch event, giving us full control over all future network requests made by the website.

self.addEventListener('fetch', function (event) {
    event.respondWith(fetch(event.request));
});

To give you some idea of the flexibility we have here, we could construct our own response programmatically:

self.addEventListener('fetch', function (event) {
    var response = new Response('<h1>Hello, World!</h1>',
        { headers: { 'Content-Type': 'text/html' } });
    event.respondWith(response);
});

Or, we could respond with something from the cache if we can find a match for the given request, falling back to the network:

self.addEventListener('fetch', function (event) {
    event.respondWith(
        caches.match(event.request)
            .then(function (response) {
                return response || fetch(event.request);
            })
    );
});

So how do we use all of this to provide an offline experience?

Firstly, the HTML and resources needed for the offline page are cached by the service worker upon installation. Included in this cache is the React application we have developed for our crossword pages. Thereafter, we intercept all network requests made to web pages on theguardian.com, including requests for subresources on those pages. The logic for handling these requests goes something like this:

  1. If we detect the incoming request is a navigation to one of our HTML pages, we always want to serve the most up-to-date content, so we attempt to make the request over the network to the server.
    1. When we get a response from the server, we can respond with that directly.
    2. If the network request throws an error (i.e. failed because the user is offline), we catch this and instead respond with the cached HTML for the offline page.
  2. Else, if we detect the request is anything other than HTML, we will look up the request in the cache.
    1. If a cached match is found, we can respond with that directly.
    2. Else, we will attempt to make the request over the network to the server.

The resulting code, which uses the new cache API (as part of the Service Worker API) and fetch (for making network requests), is as follows:

// Does the request's Accept header indicate that the client expects HTML?
var doesRequestAcceptHtml = function (request) {
    return request.headers.get('Accept')
        .split(',')
        .some(function (type) { return type === 'text/html'; });
};

self.addEventListener('fetch', function (event) {
    var request = event.request;
    if (doesRequestAcceptHtml(request)) {
        // HTML pages fallback to offline page
        event.respondWith(
            fetch(request)
                .catch(function () {
                    return caches.match('/offline-page.html');
                })
        );
    } else {
        // Default fetch behaviour
        // Cache first for all other requests
        event.respondWith(
            caches.match(request)
                .then(function (response) {
                    return response || fetch(request);
                })
        );
    }
});

That’s it! All the code for theguardian.com is open source on GitHub, so you can view the full version of our service worker script there, or in production at https://www.theguardian.com/service-worker.js.

We have good reasons to be excited about these new browser technologies, because they can be used to give websites the same rich offline experiences we have in native apps today. In the future when theguardian.com has completed migration to HTTPS, the offline page will increase in significance and we can make further improvements to the offline experience. Imagine opening theguardian.com on your internet-less commute to work to find content personalised for you, downloaded and cached by the browser ahead of your visit. There is no friction involved in the installation step—unlike native apps which require users to have app store accounts for installation, all that’s needed on the web is to visit the website in question. Service workers can also help improve website load times, as the shell of a website can be cached reliably, just like in native apps.

If you’re interested in learning more about service workers and what’s possible, Matt Gaunt, who is a Developer Advocate for Chrome, has written an introduction to Service Worker which goes into more detail.