title: The interesting ideas in Datasette
url: https://simonwillison.net/2018/Oct/4/datasette-ideas/
hash_url: c55f3b229f95e1195003697beba95fe4
<p><a href="https://github.com/simonw/datasette">Datasette</a> (<a href="https://simonwillison.net/tags/datasette/">previously</a>) is my open source tool for exploring and publishing structured data. There are a lot of ideas embedded in Datasette. I realized that I haven’t put many of them into writing.</p>
<p>
<a href="#Publishing_readonly_data">Publishing read-only data</a><br/>
<a href="#Bundling_the_data_with_the_code">Bundling the data with the code</a><br/>
<a href="#SQLite_as_the_underlying_data_engine">SQLite as the underlying data engine</a><br/>
<a href="#Farfuture_cache_expiration">Far-future cache expiration</a><br/>
<a href="#Publishing_as_a_core_feature">Publishing as a core feature</a><br/>
<a href="#License_and_source_metadata">License and source metadata</a><br/>
<a href="#Facet_everything">Facet everything</a><br/>
<a href="#Respect_for_CSV">Respect for CSV</a><br/>
<a href="#SQL_as_an_API_language">SQL as an API language</a><br/>
<a href="#Optimistic_query_execution_with_time_limits">Optimistic query execution with time limits</a><br/>
<a href="#Keyset_pagination">Keyset pagination</a><br/>
<a href="#Interactive_demos_based_on_the_unit_tests">Interactive demos based on the unit tests</a><br/>
<a href="#Documentation_unit_tests">Documentation unit tests</a></p>
<h3 id="Publishing_readonly_data">Publishing read-only data</h3>
<p>Datasette provides a read-only API to your data. It makes no attempt to deal with writes. Avoiding writes entirely is fundamental to a plethora of interesting properties, many of which are expanded on further below. In brief:</p>
<ul>
<li>Hosting web applications with no read/write persistence requirements is incredibly cheap in 2018—often free (both <a href="https://zeit.co/now">ZEIT Now</a> and <a href="https://www.heroku.com/">Heroku</a> have generous free tiers). This is a big deal: even having to pay a few dollars a month is enough to disincentivise sharing data, since now you have to figure out who will pay and ensure the payments don’t expire in the future.</li>
<li>Being read-only makes it trivial to scale: just add more instances, each with their own copy of the data. All of the hard problems in scaling web applications that relate to writable data stores can be skipped entirely.</li>
<li>Since the database file is opened using SQLite’s <a href="https://www.sqlite.org/uri.html#uriimmutable">immutable mode</a>, we can accept arbitrary SQL queries with no risk of them corrupting the data (see the sketch after this list).</li>
</ul>
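<p>Here is a minimal sketch of what immutable-mode opening looks like, using Python’s standard <code>sqlite3</code> module rather than Datasette’s own code (the filename and table name are placeholders):</p>
<pre><code># Open a SQLite file in immutable mode via a URI connection string.
import sqlite3

conn = sqlite3.connect("file:mydatabase.db?immutable=1", uri=True)

# Reads work as normal:
print(conn.execute("select sqlite_version()").fetchone())

# Any write raises a harmless error instead of touching the file:
try:
    conn.execute("update mytable set name = 'x'")
except sqlite3.OperationalError as err:
    print("write rejected:", err)
</code></pre>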
<p>Any time your data changes, you need to publish a brand new copy of the whole database. With the right hosting this is easy: deploy a brand new copy of your data and application in parallel to your existing live deployment, then switch over incoming HTTP traffic to your API at the load balancer level. Heroku and Zeit Now both support this strategy out of the box.</p>
<h3 id="Bundling_the_data_with_the_code">Bundling the data with the code</h3>
<p>Since the data is read-only and is encapsulated in a single binary SQLite database file, we can bundle the data as part of the app. This means we can trivially create and publish Docker images that provide both the data and the API and UI for accessing it. We can also publish to any hosting provider that will allow us to run a Python application, without also needing to provision a mutable database.</p>
<p>The <a href="https://datasette.readthedocs.io/en/stable/publish.html#datasette-package">datasette package</a> command takes one or more SQLite databases and bundles them together with the Datasette application in a single Docker image, ready to be deployed anywhere that can run Docker containers.</p>
<h3 id="SQLite_as_the_underlying_data_engine">SQLite as the underlying data engine</h3>
<p>Datasette encourages people to use <a href="https://www.sqlite.org/">SQLite</a> as a standard format for publishing data.</p>
<p>Relational databases are great: once you know how to use them, you can represent any data you can imagine using a carefully designed schema.</p>
<p>What about data that’s too unstructured to fit a relational schema? SQLite includes excellent <a href="https://www.sqlite.org/json1.html">support for JSON data</a>—so if you can’t shape your data to fit a table schema you can instead store it as text blobs of JSON—and use SQLite’s JSON functions to filter by or extract specific fields.</p>
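<p>As a quick illustration (assuming your SQLite build includes the JSON1 extension, as the ones bundled with Python generally do), storing and querying JSON looks like this (the table and data are made up):</p>
<pre><code>import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table events (id integer primary key, data text)")
conn.execute(
    "insert into events (data) values (?)",
    ('{"type": "signup", "user": {"name": "Cleo"}}',),
)
# Filter on and extract specific fields inside the JSON blobs:
row = conn.execute(
    "select json_extract(data, '$.user.name') from events "
    "where json_extract(data, '$.type') = 'signup'"
).fetchone()
print(row)  # ('Cleo',)
</code></pre>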
<p>What about binary data? Even that’s covered: SQLite will happily store binary blobs. My <a href="https://github.com/simonw/datasette-render-images">datasette-render-images plugin</a> (<a href="https://datasette-render-images-demo.datasette.io/favicons-6a27915/favicons">live demo here</a>) is one example of a tool that works with binary image data stored in SQLite blobs.</p>
<p>What if my data is too big? Datasette is not a “big data” tool, but if your definition of big data is something that won’t fit in RAM, that threshold is growing all the time (2TB of RAM on a single AWS instance <a href="https://aws.amazon.com/about-aws/whats-new/2016/05/now-available-x1-instances-the-largest-amazon-ec2-memory-optimized-instance-with-2-tb-of-memory/">now costs less than $4/hour</a>).</p>
<p>I’ve personally had great results from multiple-GB SQLite databases and Datasette. The theoretical maximum size of a single SQLite database is <a href="https://www.sqlite.org/limits.html">around 140TB</a>.</p>
<p>SQLite also has built-in support for <a href="https://www.sqlite.org/fts5.html">surprisingly good full-text search</a>, and thanks to being extensible via modules has excellent geospatial functionality in the form of the <a href="https://www.gaia-gis.it/fossil/libspatialite/index">SpatiaLite extension</a>. Datasette benefits enormously from this wider ecosystem.</p>
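<p>A small sketch of that full-text search support, using the FTS5 extension (assuming your SQLite build includes it; the data is made up):</p>
<pre><code>import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create virtual table docs using fts5(title, body)")
conn.execute(
    "insert into docs (title, body) values (?, ?)",
    ("Datasette", "a tool for exploring and publishing structured data"),
)
# MATCH queries use the full-text index rather than scanning the table:
for row in conn.execute("select title from docs where docs match 'publishing'"):
    print(row)  # ('Datasette',)
</code></pre>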
<p>The reason most developers avoid SQLite for production web applications is that it doesn’t deal brilliantly with large volumes of concurrent writes. Since Datasette is read-only we can entirely ignore this limitation.</p>
<h3 id="Farfuture_cache_expiration">Far-future cache expiration</h3>
<p>Since the data in a Datasette instance never changes, why not cache calls to it forever?</p>
<p>Datasette sends a far-future HTTP cache expiry header with every API response. This means that browsers will only ever fetch data the first time a specific URL is accessed, and if you host Datasette behind a CDN such as <a href="https://www.fastly.com/">Fastly</a> or <a href="https://www.cloudflare.com/">Cloudflare</a> each unique API call will hit Datasette just once and then be cached essentially forever by the CDN.</p>
<p>This means it’s safe to deploy a JavaScript app using an inexpensively hosted Datasette-backed API to the front page of even a high-traffic site—the CDN will easily take the load.</p>
<p>Zeit added Cloudflare to every deployment (even their free tier) <a href="https://zeit.co/blog/now-cdn">back in July</a>, so if you are hosted there you get this CDN benefit for free.</p>
<p>What if you re-publish an updated copy of your data? Datasette has that covered too. You may have noticed that every Datasette database gets a hashed suffix automatically when it is deployed:</p>
<p><a href="https://fivethirtyeight.datasettes.com/fivethirtyeight-c9e67c4">https://fivethirtyeight.datasettes.com/fivethirtyeight-c9e67c4</a></p>
<p>This suffix is based on the SHA256 hash of the entire database file contents—so any change to the data will result in new URLs. If you query a previous suffix Datasette will notice and redirect you to the new one.</p>
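<p>Here is a sketch of how such a suffix can be derived (the hash is SHA-256 of the file contents, as described above; truncating the digest to the seven hex characters seen in the example URL is my assumption):</p>
<pre><code>import hashlib

def db_suffix(path, length=7):
    # Hash the entire database file and keep a short prefix.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()[:length]

# e.g. "fivethirtyeight-" + db_suffix("fivethirtyeight.db")
</code></pre>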
<p>If you know you’ll be changing your data, you can build your application against the non-suffixed URL. This will not be cached and will always 302 redirect to the correct version (and these redirects are extremely fast).</p>
<p><a href="https://fivethirtyeight.datasettes.com/fivethirtyeight/alcohol-consumption%2Fdrinks.json">https://fivethirtyeight.datasettes.com/fivethirtyeight/alcohol-consumption%2Fdrinks.json</a></p>
<p>The redirect sends an HTTP/2 push header such that if you are running behind a CDN that understands push (<a href="https://blog.cloudflare.com/announcing-support-for-http-2-server-push-2/">such as Cloudflare</a>) your browser won’t have to make two requests to follow the redirect. You can use the Chrome DevTools to see this in action:</p>
<p><img alt="Chrome DevTools showing a redirect initiated by an HTTP/2 push" src="https://static.simonwillison.net/static/2018/http2-push.png"/></p>
<p>And finally, if you need to opt out of HTTP caching for some reason you can disable it on a per-request basis by including <code>?_ttl=0</code> <a href="https://datasette.readthedocs.io/en/stable/json_api.html#special-json-arguments">in the URL query string</a>—for example, if you want to return a random member of the Avengers it doesn’t make sense to cache the response (the SQL here is <code>select * from [avengers/avengers] order by random() limit 1</code>):</p>
<p><a href="https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+%5Bavengers%2Favengers%5D+order+by+random()+limit+1&amp;_ttl=0">https://fivethirtyeight.datasettes.com/fivethirtyeight?sql=select+*+from+[avengers%2Favengers]+order+by+random()+limit+1&amp;_ttl=0</a></p>
<h3 id="Publishing_as_a_core_feature">Publishing as a core feature</h3>
<p>Datasette aims to reduce the friction for publishing interesting data online as much as possible.</p>
<p>To this end, Datasette includes <a href="https://datasette.readthedocs.io/en/stable/publish.html">a “publish” subcommand</a>:</p>
<pre><code># deploy to Heroku
datasette publish heroku mydatabase.db
# Or deploy to Zeit Now
datasette publish now mydatabase.db
</code></pre>
<p>These commands take one or more SQLite databases, upload them to a hosting provider, configure a Datasette instance to serve them and return the public URL of the newly deployed application.</p>
<p>Out of the box, Datasette can publish to either Heroku or to Zeit Now. The <a href="https://datasette.readthedocs.io/en/stable/plugins.html#publish-subcommand-publish">publish_subcommand plugin hook</a> means other providers can be supported by writing plugins.</p>
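<p>A sketch of such a plugin, based on the documented hook shape (the hook receives the <code>publish</code> click command group and registers a subcommand; the “mycloud” provider and its body are hypothetical):</p>
<pre><code>from datasette import hookimpl
import click

@hookimpl
def publish_subcommand(publish):
    @publish.command()
    @click.argument("files", type=click.Path(exists=True), nargs=-1)
    def mycloud(files):
        "Deploy the given SQLite databases to the imaginary MyCloud host."
        for path in files:
            click.echo(f"would upload {path} here")
</code></pre>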
<h3 id="License_and_source_metadata">License and source metadata</h3>
<p>Datasette believes that data should be accompanied by source information and a license, whenever possible. The <a href="https://datasette.readthedocs.io/en/stable/metadata.html">metadata.json file</a> that can be bundled with your data supports these. You can also provide source and license information when you run <code>datasette publish</code>:</p>
<pre><code>datasette publish now fivethirtyeight.db \
    --source="FiveThirtyEight" \
    --source_url="https://github.com/fivethirtyeight/data" \
    --license="CC BY 4.0" \
    --license_url="https://creativecommons.org/licenses/by/4.0/"
</code></pre>
<p>When you use these options Datasette will create the corresponding <code>metadata.json</code> file for you as part of the deployment.</p>
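<p>For illustration, the generated file would contain keys along these lines (a sketch based on the documented metadata format):</p>
<pre><code>{
    "source": "FiveThirtyEight",
    "source_url": "https://github.com/fivethirtyeight/data",
    "license": "CC BY 4.0",
    "license_url": "https://creativecommons.org/licenses/by/4.0/"
}
</code></pre>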
<h3 id="Facet_everything">Facet everything</h3>
<p>I really love faceted search: it’s the first tool I turn to whenever I want to start understanding a collection of data. I’ve built faceted search engines on top of Solr, Elasticsearch and PostgreSQL, and many of my favourite tools (like Splunk and Datadog) have it as a core feature.</p>
<p>Datasette automatically attempts to calculate facets against every table. You can read <a href="https://simonwillison.net/2018/May/20/datasette-facets/">more about the Datasette Facets feature here</a>—as a huge faceted search fan it’s one of my all-time favourite features of the project. Now I can add SQLite to the list of technologies I’ve used to build faceted search!</p>
<h3 id="Respect_for_CSV">Respect for CSV</h3>
<p>CSV is by far the most common format for sharing and publishing data online. Almost every useful data tool has the ability to export to it, and it remains the lingua franca of spreadsheet import and export.</p>
<p>It has many flaws: it can’t easily represent nested data structures, escaping rules for values containing commas are inconsistently implemented, and it doesn’t have a standard way of representing character encoding.</p>
<p>Datasette aims to promote SQLite as a much better default format for publishing data. I would much rather download a .db file full of pre-structured data than download a .csv and then have to re-structure it as a separate piece of work.</p>
<p>But interacting well with the enormous CSV ecosystem is essential. Datasette has <a href="https://datasette.readthedocs.io/en/stable/csv_export.html">deep CSV export functionality</a>: any data you can see, you can export—including the results of arbitrary SQL queries. If your query can be paginated Datasette can stream down every page in a single CSV file for you.</p>
<p>Datasette’s sister tool <a href="https://github.com/simonw/csvs-to-sqlite">csvs-to-sqlite</a> handles the other side of the equation: importing data from CSV into SQLite tables. And the <a href="https://simonwillison.net/2018/Jan/17/datasette-publish/">Datasette Publish web application</a> allows users to upload their CSVs and have them deployed directly to their own fresh Datasette instance—no command line required.</p>
<h3 id="SQL_as_an_API_language">SQL as an API language</h3>
<p>A lot of people these days are excited about <a href="https://graphql.org/">GraphQL</a>, because it allows API clients to request exactly the data they need, including traversing into related objects in a single query.</p>
<p>Guess what? SQL has been able to do that since the 1970s!</p>
<p>There are a number of reasons most APIs don’t allow people to pass them arbitrary SQL queries:</p>
<ul>
<li>Security: we don’t want people messing up our data</li>
<li>Performance: what if someone sends an accidental (or deliberate) expensive query that exhausts our resources?</li>
<li>Hiding implementation details: if people write SQL against our API we can never change the structure of our database tables</li>
</ul>
<p>Datasette has answers to all three.</p>
<p>On security: the data is read-only, using SQLite’s immutable mode. You can’t damage it with a query—INSERT and UPDATE statements will simply throw harmless errors.</p>
<p>On performance: SQLite has a mechanism for canceling queries that take longer than a certain threshold. Datasette sets this to one second by default, though you can <a href="https://datasette.readthedocs.io/en/stable/config.html#sql-time-limit-ms">alter that configuration</a> if you need to (I often bump it up to ten seconds when exploring multi-GB data on my laptop).</p>
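<p>Here is a sketch of that cancellation mechanism using a SQLite progress handler (this mirrors the idea described above; it is not Datasette’s actual implementation):</p>
<pre><code>import sqlite3
import time

def execute_with_time_limit(conn, sql, ms):
    deadline = time.monotonic() + ms / 1000
    # The handler runs every 1000 virtual machine instructions; returning
    # a truthy value aborts the query with an OperationalError.
    conn.set_progress_handler(lambda: time.monotonic() > deadline, 1000)
    try:
        return conn.execute(sql).fetchall()
    finally:
        conn.set_progress_handler(None, 1000)

conn = sqlite3.connect(":memory:")
try:  # a deliberately endless query, interrupted after 10ms
    execute_with_time_limit(
        conn,
        "with recursive c(x) as (select 1 union all select x + 1 from c) "
        "select max(x) from c",
        10,
    )
except sqlite3.OperationalError:
    print("query interrupted")
</code></pre>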
<p>On hidden implementation details: since we are publishing static data rather than maintaining an evolving API, we can mostly ignore this issue. If you are really worried about it you can take advantage of <a href="https://datasette.readthedocs.io/en/stable/sql_queries.html#canned-queries">canned queries</a> and <a href="https://datasette.readthedocs.io/en/stable/sql_queries.html#views">SQL view definitions</a> to expose a carefully selected forward-compatible view into your data.</p>
<h3 id="Optimistic_query_execution_with_time_limits">Optimistic query execution with time limits</h3>
<p>I mentioned Datasette’s SQL time limits above. These aren’t just there to avoid malicious queries: the idea of “optimistic SQL evaluation” is baked into some of Datasette’s core features.</p>
<p>Consider <a href="https://datasette.readthedocs.io/en/stable/facets.html#suggested-facets">suggested facets</a>—where Datasette inspects any table you view and tries to suggest columns that are worth faceting against.</p>
<p>The way this works is that Datasette loops over <em>every</em> column in the table and runs a query to see if there are fewer than 20 unique values for that column. On a large table this could take a prohibitive amount of time, so Datasette sets an aggressive timeout on those queries: <a href="https://datasette.readthedocs.io/en/stable/config.html#facet-suggest-time-limit-ms">just 50ms</a>. If the query fails to run in that time it is silently dropped and the column is not listed as a suggested facet.</p>
<p>Datasette’s JSON API provides a mechanism for JavaScript applications to use that same pattern. If you add <code>?_timelimit=20</code> to any Datasette API call, the underlying query will only get 20ms to run. If it goes over you’ll get a very fast error response from the API. This means you can design your own features that attempt to optimistically run expensive queries without damaging the performance of your app.</p>
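<p>A sketch of using that from a client (the URL follows the pattern of the demos above; the exact shape of the error response is an assumption):</p>
<pre><code>import json
import urllib.error
import urllib.request

url = (
    "https://fivethirtyeight.datasettes.com/fivethirtyeight.json"
    "?sql=select+count(*)+from+%5Bavengers%2Favengers%5D&_timelimit=20"
)
try:
    with urllib.request.urlopen(url) as response:
        print(json.load(response))
except urllib.error.HTTPError as err:
    # The query was given 20ms; if it could not finish, this error
    # comes back very quickly and the app can fall back gracefully.
    print("gave up quickly:", err.code)
</code></pre>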
<h3 id="Keyset_pagination">Keyset pagination</h3>
<p>SQL pagination using OFFSET/LIMIT has a fatal flaw: if you request page number 300 at 20 per page, the underlying SQL engine needs to calculate and sort all 6,000 preceding rows before it can return the 20 you requested.</p>
<p>This does not scale at all well.</p>
<p><a href="https://use-the-index-luke.com/sql/partial-results/fetch-next-page">Keyset pagination</a> (often known by other names, including cursor-based pagination) is a far more efficient way to paginate through data. It works against ordered data. Each page is returned with a token representing the last record you saw; when you request the next page, the engine merely has to filter for records that are greater than that tokenized value and scan through the next 20 of them.</p>
<p>(Actually, it scans through 21. By requesting one more record than you intend to display you can detect whether another page of results exists—if you ask for 21 but get back 20 or fewer you know you are on the last page.)</p>
<p>Datasette’s table view includes a sophisticated implementation of keyset pagination.</p>
<p>Datasette defaults to sorting by primary key (or SQLite rowid). This is perfect for efficient pagination: running a select against the primary key column for values greater than X is one of the fastest range scan queries any database can support. This allows users to paginate as deep as they like without paying the offset/limit performance penalty.</p>
<p>This is also how the “export all rows as CSV” option works: when you select that option, Datasette opens a stream to your browser and internally starts keyset pagination over the entire table. This keeps resource usage in check even while streaming back millions of rows.</p>
<p>Here’s where Datasette gets fancy: it handles keyset pagination for any other sort order as well. If you sort by any column and click “next” you’ll be requesting the next set of rows after the last value you saw. And this even works for columns containing duplicate values: if you sort by such a column, Datasette actually sorts by that column combined with the primary key. The “next” pagination token it generates encodes both the sorted value and the primary key, allowing it to correctly serve you the next page when you click the link.</p>
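<p>A self-contained sketch of that compound keyset approach (table and column names are made up; the row-value comparison requires SQLite 3.15 or later):</p>
<pre><code>import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table items (id integer primary key, score integer)")
conn.executemany(
    "insert into items (score) values (?)", [(n % 5,) for n in range(100)]
)

def fetch_page(conn, last_score=None, last_id=None, size=20):
    # Ask for one extra row so we can tell whether another page exists.
    if last_id is None:
        rows = conn.execute(
            "select id, score from items order by score, id limit ?",
            (size + 1,),
        ).fetchall()
    else:
        rows = conn.execute(
            "select id, score from items where (score, id) > (?, ?) "
            "order by score, id limit ?",
            (last_score, last_id, size + 1),
        ).fetchall()
    return rows[:size], len(rows) > size

page, has_next = fetch_page(conn)
while has_next:  # walk the whole table, 20 rows at a time
    last_id, last_score = page[-1]
    page, has_next = fetch_page(conn, last_score, last_id)
</code></pre>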
<p>Try clicking “next” <a href="https://latest.datasette.io/fixtures-dd88475/sortable?_sort_desc=sortable">on this page</a> to see keyset pagination against a sorted column in action.</p>
<h3 id="Interactive_demos_based_on_the_unit_tests">Interactive demos based on the unit tests</h3>
<p>I love interactive demos. I decided it would be useful if every single release of Datasette had a permanent interactive demo illustrating its features.</p>
<p>Thanks to Zeit Now, this was pretty easy to set up. I’ve actually taken it a step further: every successful push to master on GitHub is also deployed to a permanent URL.</p>
<p>The database that is used for this demo is the exact same database that is created by Datasette’s <a href="https://github.com/simonw/datasette/blob/master/tests/fixtures.py">unit test fixtures</a>. The unit tests are already designed to exercise every feature, so reusing them for a live demo makes a lot of sense.</p>
<p>You can view this test database on your own machine by checking out the full Datasette repository from GitHub and running the following:</p>
<pre><code>python tests/fixtures.py fixtures.db metadata.json
datasette fixtures.db -m metadata.json
</code></pre>
<p>Here’s <a href="https://github.com/simonw/datasette/blob/96af802352e49e35751e295e9846aa39c5e22311/.travis.yml#L23-L42">the code in the Datasette Travis CI configuration</a> that deploys a live demo for every commit and every released tag.</p>
<h3 id="Documentation_unit_tests">Documentation unit tests</h3>
<p>I wrote about the <a href="https://simonwillison.net/2018/Jul/28/documentation-unit-tests/">Documentation unit tests</a> pattern back in July.</p>
<p>Datasette’s unit tests <a href="https://github.com/simonw/datasette/blob/master/tests/test_docs.py">include some assertions</a> that ensure that every plugin hook, configuration setting and underlying view class is mentioned in the documentation. A commit or pull request that adds or modifies these without also updating the documentation (or at least ensuring there is a corresponding heading in the docs) will fail its tests.</p>
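<p>A simplified sketch of the pattern, in the spirit of those tests rather than the actual code (the docs path and hook list are illustrative; the real tests introspect the registered hooks):</p>
<pre><code>from pathlib import Path

import pytest

HOOKS = ["publish_subcommand", "render_cell"]  # illustrative subset

@pytest.mark.parametrize("hook", HOOKS)
def test_plugin_hook_is_documented(hook):
    docs = Path("docs/plugins.rst").read_text()
    assert hook in docs, f"{hook} is not mentioned in the plugin docs"
</code></pre>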
<p>Datasette’s <a href="http://datasette.readthedocs.io/">documentation</a> is in pretty good shape now, and <a href="https://datasette.readthedocs.io/en/stable/changelog.html">the changelog</a> provides a detailed overview of new features that I’ve added to the project. I presented Datasette at the PyBay conference in August and I’ve published <a href="https://static.simonwillison.net/static/2018/pybay-datasette/">my annotated slides</a> from that talk. I was <a href="https://changelog.com/podcast/296#t=00:54:45">interviewed about Datasette</a> for the Changelog podcast in May and <a href="https://simonwillison.net/2018/May/9/changelog/">my notes from that conversation</a> include some of my favourite demos.</p>
<p>Datasette now has an official Twitter account—you can follow <a href="https://twitter.com/datasetteproj">@datasetteproj</a> there for updates about the project.</p>