A place to cache linked articles (think custom and personal wayback machine)
  1. title: The Irony of Writing About Digital Preservation
  2. url: http://www.theatlantic.com/technology/archive/2015/11/the-irony-of-writing-about-digital-preservation/416184/
  3. hash_url: b74e15baff22ce27ad96166f56bdff36
  4. <section id="article-section-1"><p>Recently, Adrienne LaFrance <a href="http://www.theatlantic.com/technology/archive/2015/10/raiders-of-the-lost-web/409210/">wrote</a> in <em>The Atlantic</em> about the digital death and rebirth of a story that was a Pulitzer Prize finalist in 2008. Because “The Crossing,” a 34-part series originally published by the <em>Rocky Mountain News</em>, was born digital, it was not as easily archived as print stories, and its journey from obscurity to resurrection was moving.</p><p>I loved LaFrance’s story. It was masterfully written, and it touched on most of the issues that digital preservationists grapple with every day. Coincidentally, the story was published the same week as a <a href="http://nrj.sagepub.com/content/current">special issue of <em>Newspaper Research Journal</em></a> called “Capturing and Preserving the ‘First Draft of History’ in the Digital Environment,” which is a collection of scholarly papers (including my own) about preserving digital news.</p><p>Which led me to wonder: In 20 years, will anyone be able to read LaFrance’s story?</p><p>There is no guarantee that we will be able to read today’s news on tomorrow’s computers. I’ve been studying news preservation for the past two years, and I can confidently say that most media companies use a preservation strategy that resembles Swiss cheese.</p></section><section id="article-section-2"><p><a href="http://nrj.sagepub.com/content/36/3/299.full.pdf+html">My contribution</a> to the <em>NRJ</em> special issue centers on news apps, the interactive databases like <em>ProPublica</em>’s <a href="https://projects.propublica.org/surgeons/">“Surgeon Scorecard”</a> that allow readers to read a story, search for themselves or their community in the data, and then figure out exactly how the story affects their own lives. When a data journalist calls something a “news app,” it doesn’t mean the thing you download from the App Store. <em>ProPublica</em>’s Scott Klein <a href="http://knightlab.northwestern.edu/2014/03/18/preserving-interactive-news-projects-with-newseum-opennews-and-pop-up-archive/">explains</a>: “Inside newsrooms, these interactive databases are sometimes called ‘news applications’—but don’t be confused. They’re interactive databases published on the web, not something you buy on your smartphone. Think Dollars for Docs, not Flipboard or Zite.”</p>
  5. <p>News apps aren’t being preserved because they are software, and <a href="http://www.softpres.org/">software preservation</a> is a specialized, idiosyncratic pursuit that requires more money and more specialized labor than is available at media organizations today. But, you might argue, it ought to be easy to preserve stories that are not software, right? A story like LaFrance’s, which is composed of text and images and a few hyperlinks to outside sources, ought to be simpler to save?</p><p>You’d think so. But not necessarily.</p><p>To understand why, we need to look at the back-end technology of the newsroom. In developer-speak, the front end is the nice-looking part of the technology that is open to customers and the world; the back end is the factory where the sausage is made.</p><p>You probably know the basics of the back end: When you click a link or type a URL into your web browser, a <a href="https://en.wikipedia.org/wiki/Web_server">web server</a> delivers a page to your browser. At a media organization, the web server assembles a page for you that consists of different digital assets: text, images, captions, headlines, code, videos, or ads. These assets reside in a <a href="https://en.wikipedia.org/wiki/Content_management_system">content management system</a> (CMS) that organizes the thousands or millions of pieces of content that the media company generates.</p><p>It’s rarely just one CMS, however. Newsrooms rely on a blend of new and legacy systems. In a newsroom that produces a print edition, there is always an additional software system—like K4 or CCI or Hermes—that manages page layouts and sends those pages to digital printers. Let’s call this the print CMS. This is different than the web CMS, which could be a system like WordPress. The BBC <a href="https://vimeo.com/119949161">uses</a> at least two web CMSs. (Here’s a diagram of the newest one, <a href="http://www.bbc.co.uk/blogs/internet/entries/89de2d90-d020-47d0-857e-03ee4f7b2beb">Vivo</a>.)</p></section><section id="article-section-3"><p>Invisible processes seamlessly transmit text, images, headlines, and other content from one system to the other. Most news organizations don’t have in-house librarians any more, so archiving is largely done automatically. Large organizations like LexisNexis or EBSCO (<em>The Atlantic</em>’s archiver) will hoover up a digital feed from the news organization, store the information in a database, and then license packages of such databases to libraries. The digital feed might include the text of each story, the author’s name, the title of the story, any associated images, and some meta-information that describes the placement of the story or its licensing rights. In some cases, the feed also includes PDF images of each page of the newspaper or magazine.</p>
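To make the shape of such a feed concrete, here is a minimal sketch of the kind of flattened record a CMS export might hand to an archiver. The function and field names (`build_feed_record`, `placement`, `rights`) are invented for illustration; real feeds from vendors like EBSCO or LexisNexis use their own schemas and delivery formats.

```python
# Hypothetical sketch: flatten a CMS story object into one archivable record.
# Field names are illustrative only, not any vendor's actual schema.
import json
from datetime import date

def build_feed_record(story):
    """Serialize a CMS story into a single record for an archival feed."""
    return {
        "title": story["title"],
        "author": story["author"],
        "published": story["published"].isoformat(),
        "body_text": story["body_text"],       # plain story text, not the rendered page
        "images": story.get("images", []),     # URLs or file references for associated images
        "placement": story.get("placement"),   # e.g. section or page in the print edition
        "rights": story.get("rights"),         # licensing restrictions, if any
    }

if __name__ == "__main__":
    record = build_feed_record({
        "title": "Raiders of the Lost Web",
        "author": "Adrienne LaFrance",
        "published": date(2015, 10, 14),
        "body_text": "...",
        "rights": "web-only",
    })
    print(json.dumps(record, indent=2))
```

The point is only that the archive receives whatever this export step chooses to include; anything the CMS holds but the exporter never serializes simply never reaches the library.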
<p>To try to determine if LaFrance’s story was included in the archival feed, I ran a search on October 16, 2015, for all articles from <em>The Atlantic</em> in the EBSCO database (using my university-library subscription) from January 1, 2014, to December 31, 2015. There were 488 results.</p><p>I ran the same search on Google on the same date for stories that show a publication date on TheAtlantic.com from January 1, 2014, to December 31, 2015. There were 20,200 results.</p><p>Were there really 19,712 more stories published on TheAtlantic.com than in <em>The Atlantic</em> magazine? I’m not sure. Some of the Google hits could be duplicates, bringing the total number of articles published down below 20,200. Or, there could be something I don’t know about how many articles are included in my library’s subscription to EBSCO’s collection of works in <em>The Atlantic</em>. There could also be additional technical and licensing issues that I’m not aware of—archiving is an immensely complex practice. The 20,200 number does not include <em>Atlantic</em> writers’ posts to Facebook, Twitter, Instagram, Pinterest, Reddit, or any other social platforms where the journalists may have interacted with readers or posted comments related to their stories. If we want to count social posts as journalistic content, we need to revise our estimate dramatically upward. (Social posts are also <a href="https://www.washingtonpost.com/lifestyle/style/library-of-congress-has-archive-of-tweets-but-no-plan-for-its-public-display/2013/01/03/e4db1c24-55d4-11e2-bf3e-76c0a789346f_story.html">surprisingly difficult</a> to meaningfully preserve in libraries, by the way.)</p><aside class="pullquote">The challenges of maintaining digital archives are as much social and institutional as technological.</aside><p>In all of my library searching, I couldn’t find LaFrance’s article on “The Crossing.” In fact, searching more than 400 databases and publishers via EBSCO, and the 700 million sources contained therein, I only found nine articles by Adrienne LaFrance. Which is strange, because looking at <a href="http://www.theatlantic.com/author/adrienne-lafrance">LaFrance’s author page</a> on TheAtlantic.com reveals pages upon pages of search results.</p><p>To understand what’s happening, we need to return to the back end and think about the systems in which story text resides. LaFrance’s story appeared on TheAtlantic.com, which runs on a web CMS called <a href="http://www.theatlantic.com/product/archive/2015/01/building-our-new-photo-channel/384397/">Ollie</a>. Ollie, which replaced three older CMSes, was <a href="https://www.youtube.com/watch?v=RWLQTCUpyWw">custom-built</a> using a popular open-source software framework called <a href="https://www.djangoproject.com/">Django</a>. The print edition of <em>The Atlantic</em> is managed through a workflow system called <a href="http://www.vjoon.com/">K4</a>, which (unlike Django) works well with the Adobe software programs that are used to create layouts. From a media-tech perspective, this is state-of-the-art engineering.
I don’t know how or where the EBSCO feed taps into this configuration. Probably, what happens is something like this:</p><figure><picture><img alt="" src="https://cdn.theatlantic.com/assets/media/img/posts/2015/11/Meredith_Diagram/133cff4d7.jpg"/></picture><figcaption class="credit">Meredith Broussard</figcaption></figure><p>I’m reminded of the time I used a sink in a friend’s new pool house, which he built himself. “Don’t run too much water when you’re washing things,” my friend told me. “It looks like a real sink, but I didn’t hook it up to the sewer system, so the water just runs out onto the ground.” I was flummoxed. How could that be? Was he even allowed to do that? In that moment, I realized that plumbing, like software, is a complex system built by humans. Humans make mistakes and make idiosyncratic design decisions. So it is surprising, but not improbable, to realize that the complex multidimensional software systems that serve us web content might not be sending content to libraries in the ways that we expect.</p></section><section id="article-section-4"><p>When I started my research into news preservation, I thought there would be an easy technological solution. There isn’t. Every media company in the world grapples with the issue of digital archiving. Large legacy organizations, like <em>The Atlantic</em> or <em>The New York Times</em> or the BBC, do a better job than smaller companies, but nobody has a solution. From a software perspective, it is a legitimately difficult problem: unsolved, but probably not unsolvable. “The challenges of maintaining digital archives over long periods of time are as much social and institutional as technological,” reads a 2003 NSF and Library of Congress <a href="http://www.digitalpreservation.gov/multimedia/documents/about_time2003.pdf">report</a>. “Even the most ideal technological solutions will require management and support from institutions that in time go through changes in direction, purpose, management, and funding.”</p><p>Newsrooms need to manage workflow and content for print, audio, visuals, video, and code. Most software is built for companies that do only one of those things at a time; newsrooms do them all simultaneously. Every time a new technology is introduced, a newsroom needs a new content-management or workflow system to handle it. Ensuring interoperability between these systems and archival systems requires engineering, ingenuity, and regular attention.</p><p>The scale is different for newsrooms, too. Facebook only has to manage 11 years’ worth of data, all of which is digital and all of which is structured exactly the way it needs to be structured.
A legacy media company might have to deal with more than a hundred years’ worth of data, only some of which is digital, all of which is <a href="http://educopia.org/publications/gdnpr">potentially important to scholars</a>, all of which has different licensing restrictions and preservation needs and is <a href="http://blogs.loc.gov/digitalpreservation/2014/07/preserving-born-digital-news-at-digital-preservation-2014/">ambiguously structured</a>. Remember when <a href="http://www.ojr.org/050922mcadams/">Macromedia Flash was the new hot thing</a> in journalism? Most of those elaborate Flash projects have <a href="http://flashjournalism.com/examples/case_studies.htm">disappeared</a> now. They’re probably archived on <a href="https://en.wikipedia.org/wiki/Jaz_drive">Jaz drives</a> in a storage room somewhere, next to boxes of color slides and piles of floppy disks and other outdated media. Future historians will likely lament this loss.</p><aside class="pullquote">The Internet Archive will allow you to find a needle in a haystack, but only if you already know approximately where the needle is.</aside><p>The web only shows recent history. “Not one publication has a complete archive of its website,” my colleagues Kathleen Hansen and Nora Paul write in their <em>NRJ</em> article, <a href="http://nrj.sagepub.com/content/36/3/290.full.pdf+html">“Newspaper Archives Reveal Major Gaps in Digital Age.”</a> “Most can go back no earlier than 2008 … In every case, informants talked about the chaos of switching CMSes or servers, of shifting organizational homes for the website, of staffing changes and many other elements that have had an impact on the integrity of the website over time.”</p></section><section id="article-section-5"><p>The quantity and variety of information we now produce have outpaced our ability to preserve it for the future. Librarians are the only ones who are making sure that our collective memory is preserved. And they, along with small teams of digital historians elsewhere, are still trying to understand the scope of myriad challenges involved in modern preservation. If today’s born-digital news stories are not automatically put into library storehouses, these stories are unlikely to survive in an accessible way.</p><p>So: The articles we see today on TheAtlantic.com are stored in a CMS that is ambiguously hooked up to my library’s archival feed.
For the purposes of scholarly research (which is performed through library databases, not through Google), it appears that some subset of articles from TheAtlantic.com are not being preserved. Which means that in 20 years, media scholars may not be able to read Adrienne LaFrance’s article about a story that disappeared and was resurrected, because LaFrance’s article may have disappeared.</p><p>Some savvy readers may wonder: What about <a href="https://archive.org/about/">the Internet Archive</a>? Doesn’t the Wayback Machine preserve web pages, and won’t LaFrance’s story be preserved that way? The simple answer is yes. LaFrance’s article was crawled by the Internet Archive’s Wayback Machine, and you can go and look at it there. The folks at the Internet Archive are thoughtful digital preservationists, and I am grateful every day for their work preserving our collective digital memory.</p><p>If I know exactly what web page I am looking for, the Internet Archive is very helpful. I know that LaFrance’s story ran on the front page of TheAtlantic.com on October 14, 2015, and so I can go to the Wayback Machine and <a href="https://web.archive.org/web/*/http://theatlantic.com">look at the snapshot</a> taken closest to that date, which is October 15, and I can see LaFrance’s story “Raiders of the Lost Web” and I can click on it.</p><p>But if I don’t know exactly the web page that I want and exactly the day that the information appeared, I won’t be able to find the information in the Internet Archive. Library databases are indexed so that they are searchable, meaning that the databases include lots of information about the information that they contain. The Wayback Machine is <a href="http://www.xml.com/pub/a/ws/2002/01/18/brewster.html">technologically quite sophisticated</a>—it preserves images and code too, for example—but it is not <a href="https://blog.archive.org/2015/10/21/grant-to-develop-the-next-generation-wayback-machine/">yet</a> indexed so as to be easily searchable. The Internet Archive will allow you to find a needle in a haystack, but only if you already know approximately where the needle is.</p><p>I’m pretty sure that in five years, when I want to re-read LaFrance’s article, I won’t remember the exact date on which it was published. I’m also reasonably sure that in five years my browser bookmark to the story will be broken because of linkrot: <em>The Atlantic</em> will have redesigned its website and the story’s URL will be different. My 2020 web-searching self will probably look on <em>The Atlantic</em>’s website and fail to find the article because the CMS will have changed, and the search parameters will be set up differently, and I will not be able to find so much as a title for the article in the library databases. Which means I will give up in frustration and rant to anyone who will listen about how disorganized the online world is and how we are losing digital history almost as soon as we make it. This is a shame. Because it’s a really good article, and it deserves to endure.</p><p>There is a solution, of course. I could just print the article and keep it in my filing cabinet. But that would be a step backward, not forward.</p></section>
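Since this page is itself a cached copy, it is worth noting that the date-targeted Wayback Machine lookup described above can be scripted against the Internet Archive's public availability endpoint (https://archive.org/wayback/available). The sketch below assumes you already know the exact URL you want, which is precisely the limitation the article describes; `closest_snapshot` is an illustrative helper, not part of any official client.

```python
# Minimal sketch: ask the Internet Archive for the snapshot closest to a date.
# Uses only the standard library; the availability endpoint returns JSON.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def closest_snapshot(url, timestamp):
    """Return the Wayback Machine snapshot closest to `timestamp` (YYYYMMDD), or None."""
    query = urlencode({"url": url, "timestamp": timestamp})
    with urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")

if __name__ == "__main__":
    snap = closest_snapshot(
        "http://www.theatlantic.com/technology/archive/2015/10/raiders-of-the-lost-web/409210/",
        "20151015",  # the snapshot date mentioned in the article
    )
    if snap:
        print(snap["timestamp"], snap["url"])
    else:
        print("No snapshot found for that URL")
```

Finding an article whose URL you no longer remember is a different problem, and it is the one that indexed library databases are meant to solve.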