
title: The Future of News Is Not An Article
url: http://nytlabs.com/blog/2015/10/20/particles/
hash_url: 2170b5071a


In May of this year, Facebook announced Facebook Instant Articles, its foray into innovating the Facebook user experience around news reading. A month later, Apple introduced its own take with the Apple News app, which allows “stories to be specially formatted to look and feel like articles taken from publishers’ websites while still living inside Apple’s app”. There has been plenty of discussion about what these moves mean for the future of platforms and their relationship with publishers. But platform discussions aside, let’s examine a fundamental assumption being made here: both Facebook and Apple, which arguably have a huge amount of power to shape what the future of news looks like, have chosen to focus on a future that takes the shape of an article. The form and structure of how news is distributed have not been questioned, even though that form was largely developed in response to the constraints of print (and early web) media.

Rather than look to large tech platforms to propose the future of news, perhaps there is a great opportunity for news organizations themselves to rethink those assumptions. After all, it is publishers who have the most to gain from innovation around their core products. So what might news look like if we start to rethink the way we conceive of articles?

Letting go of old constraints

News has historically been represented (and read) as a series of articles that report on events as they occur, because that was the only way news could be published. The constraints of print media meant that a newspaper was published, at most, twice a day and that once an article was published, it was unalterable. While news organizations have adapted to new media through the creative use of interactivity, video, and audio, even the most innovative formats are still conceived of as dispatches: items that get published once and don’t evolve or accumulate knowledge over time. Any sense of temporality is still closely tied to the rhythms of print.

Creating news for the current and future media landscape means considering the time scales of our reporting in much more innovative ways. Information should accumulate upon itself; documents should have ways of reacting to new reporting or information; and we should recognize that our users consume news at many different cadences, not simply as a daily update.

So what does news that is accumulative look like and what is technically required to realize those possibilities? First, let us posit that we’re not talking about transforming news reporting into pure reference material, like a news-based Wikipedia, but rather that this is about leveraging the depth of knowledge from a rich body of reporting to extend and deepen news experiences.

In order to leverage the knowledge that is inside every article published, we need to first encode it in a way that makes it searchable and extractable. This means identifying and annotating the potentially reusable pieces of information within an article as it is being written – bits that we in The New York Times R&D Lab have been calling Particles. This concept builds on ideas that have been discussed under the rubric of the Semantic Web for quite a while, but have not seen universal adoption because of the labor costs involved in doing so. At the Lab, we have been working on approaches to this kind of annotation and tagging that would greatly reduce the burden of work required. Our Editor project, for example, looks at how some forms of granular metadata could be created through collaborative systems that rely heavily on machine learning but allow for editorial input. And more generally, this speaks to an approach where we create systems that piggyback on top of the existing newsroom workflow rather than completely reinventing it, applying computational techniques to augment journalists’ processes.
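
To make the idea concrete, here is a minimal sketch of what an encoded Particle might look like. The schema, field names, and particle kinds below are hypothetical illustrations for this essay, not the Lab’s actual Editor data model.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Particle:
    """One reusable, annotated piece of information from an article."""
    particle_id: str
    kind: str                       # e.g. "quote", "event", "fact"
    text: str                       # the annotated span itself
    entities: List[str] = field(default_factory=list)  # people, places, organizations
    topics: List[str] = field(default_factory=list)    # editorial subject tags
    source_url: str = ""            # the article this was annotated in
    published: Optional[date] = None

# A quote annotated at writing time, rather than recovered later by parsing.
quote = Particle(
    particle_id="q-0001",
    kind="quote",
    text="<the quoted remarks>",
    entities=["Donald Trump"],
    topics=["immigration"],
    source_url="https://example.com/2015/an-article",
    published=date(2015, 10, 20),
)
```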

Once we begin to capture and encode the knowledge contained within articles, it can be used in all sorts of ways to transform the news reading experience:

1. Giving articles new superpowers

Once we begin to have a substrate of structured news elements, we can give the traditional article new superpowers. At the moment, if a journalist or editor wants to refer back to previous reporting on a topic in order to give context to an article, she has to do quite a bit of manual work to find the article that contained the information and then link to it. That hyperlink isn’t an ideal affordance, either, as it requires the reader to leave the article and read a second one in order to get the background information.

But if Particles were treated as their own first-class elements that were encoded, tagged, and embeddable, contextual information would be easy for a journalist to find. All kinds of newsroom tools could be built to allow journalists to leverage the rich body of previous reporting to make their jobs easier and more efficient. Furthermore, that information could be embedded inline in ways that allow an article to become a dynamic framework for deeper reading and understanding, one that can expand and contract in response to a reader’s interest. An article could contain not only its top-level narrative, but also a number of entry points into deeper background, context or analysis.
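
As a rough sketch of such a newsroom tool, the snippet below matches a draft’s entities and topics against an annotated archive. The record shape, the archive contents, and the matching rule are all assumptions made for illustration; a real system would sit on top of a search index over everything the newsroom has published.

```python
# Hypothetical archived particles; stand-ins for a real annotated corpus.
archive = [
    {"text": "<a background paragraph>",
     "entities": ["Donald Trump"],
     "topics": ["immigration"],
     "url": "https://example.com/2015/earlier-coverage"},
]

def find_context(draft_entities, draft_topics):
    """Return archived particles sharing an entity and a topic with the draft."""
    return [
        p for p in archive
        if set(p["entities"]) & set(draft_entities)
        and set(p["topics"]) & set(draft_topics)
    ]

# Matches could be embedded as expandable background inside the new article,
# rather than as hyperlinks that send the reader away mid-story.
matches = find_context(["Donald Trump"], ["immigration"])
```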

2. Summarization and synthesis

But Particles become much more powerful when we think of possibilities across articles: a corpus of structured information is far more useful than an archive of articles. One of the impacts of treating articles as singular monoliths is that it’s very hard to combine knowledge or information from more than one article after it’s been published. Doing any kind of synthesis, getting answers to questions that cut across time, getting a sense of aggregate knowledge around a topic — all of these acts still depend on a human being reading through multiple articles and doing that work manually.

For example, if I wanted to see all of the times Donald Trump has been quoted about immigration, there is no way to extract that information, much less create compelling reading experiences around it. But if every quote we publish was tagged and attributed, that would be a much more approachable task. Or if we identified every discrete event in an ongoing story as it was reported, then generating a dynamically updating timeline of that larger narrative would be similarly trivial. Much of the synthesis that is essential to better understanding news events is this kind of contextual, longitudinal knowledge that is currently very difficult to access. This gap is one reason that there is a market for sites that specialize in “explainer” journalism. But every news organization publishes the information that is needed to create such longitudinal knowledge. It is simply a matter of encoding that information in ways that make it accessible, reusable, and remixable after the fact.
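
The sketch below shows the Trump-on-immigration question becoming a filter over tagged particles, and the timeline becoming a sort. The records and field names are hypothetical; the point is that synthesis turns into a query instead of a manual read-through of the archive.

```python
from datetime import date

# Hypothetical tagged particles drawn from several different articles.
particles = [
    {"kind": "quote", "speaker": "Donald Trump", "topics": ["immigration"],
     "date": date(2015, 8, 16), "text": "<quoted remarks>"},
    {"kind": "event", "topics": ["immigration"],
     "date": date(2015, 9, 3), "text": "<a reported development>"},
]

# "Every time Donald Trump has been quoted about immigration" as a filter.
trump_on_immigration = [
    p for p in particles
    if p["kind"] == "quote"
    and p.get("speaker") == "Donald Trump"
    and "immigration" in p["topics"]
]

# A dynamically updating timeline is just the tagged events, sorted by date.
timeline = sorted(
    (p for p in particles if p["kind"] == "event"),
    key=lambda p: p["date"],
)
```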

3. Adaptive content

Finally, the recent proliferation of new devices and platforms for media consumption creates new pressures for news organizations to programmatically identify the pieces of information within an article. Consider every new platform and product to which news organizations currently publish their content, and how each of those outputs requires a different format and presentation. For example, a New York Times food article may be published as a medium-to-long-form piece on the website, as a headline with bullet points on the NYT Now app, as a one-sentence story on the Apple Watch, and as a standalone recipe on Cooking, not to mention the various formats required for platforms like Facebook or Pinterest or Twitter. Identifying and tagging Particles within a piece as it is being created may allow for a more streamlined workflow and a lighter editorial load, where the information within that piece can be stored in one place, but manifested in many different ways for different endpoints.
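
A minimal sketch of that store-once, manifest-everywhere workflow might look like the following; the story fields and endpoint names are invented here for illustration, not drawn from any actual publishing system.

```python
# One structured story, stored in a single place.
story = {
    "headline": "<the headline>",
    "summary": "<one sentence suitable for a watch face>",
    "bullets": ["<key point one>", "<key point two>"],
    "body": "<the medium-to-long-form text published on the website>",
}

def render(story, endpoint):
    """Manifest the same underlying information differently per endpoint."""
    if endpoint == "web":       # full article on the website
        return story["headline"] + "\n\n" + story["body"]
    if endpoint == "now_app":   # headline with bullet points
        return story["headline"] + "\n" + "\n".join("- " + b for b in story["bullets"])
    if endpoint == "watch":     # one-sentence story
        return story["summary"]
    raise ValueError("unknown endpoint: " + endpoint)

print(render(story, "watch"))
```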

Ephemeral and evergreen

The biggest underlying shift in conceiving of the future of news as something more than a stream of articles is in the implied distinction between ephemeral content and evergreen content. There has always been a mixture of the two types in news reporting: an article will contain a narrative about the event that is currently occurring, but it will also contain more evergreen information such as background context, key players, etc. But the reliance on the article as the atomic unit of news means that all of that information has essentially been treated as ephemeral. A news organization publishes hundreds of articles a day, then starts all over the next day, recreating any redundant content each time. This approach is deeply shaped by the constraints of print media and seems unnecessary and strange when looked at from a natively digital perspective. Can you imagine if, every time something new happened in Syria, Wikipedia published a new Syria page, and in order to understand the bigger picture, you had to manually sift through hundreds of pages with overlapping information? The idea seems absurd in that context, and yet it is essentially what news publishers do every day.

The Particles approach suggests that we need to identify the evergreen, reusable pieces of information at the time of creation, so that they can be reused in new contexts. It means that news organizations are not just creating the “first draft of history”, but are synthesizing the second draft at the same time, becoming a resource for knowledge and civic understanding in new and powerful ways.