Alexis Lloyd, the creative director of R&D at The New York Times, has published an article entitled ‘The Future of News is Not an Article’, in which she presents a new model of news reporting and distribution.
She suggests that the structure of news reporting has been shaped by centuries of print media. News today is generally arranged along a linear timescale, written as a series of unconnected articles about individual events and published separately across a variety of platforms. But in a digital age, is there a technical solution that might fundamentally change the way a body of news accumulates and is surfaced?
Creating news for the current and future media landscape means considering the time scales of our reporting in much more innovative ways. Information should accumulate upon itself; documents should have ways of reacting to new reporting or information; and we should consider the consumption behavior of our users as one that takes place at all cadences, not simply as a daily update.
In the NY Times R&D Lab, it seems, the buzzwords are evolution and accumulation, but Lloyd is quick to distance her vision from a Wikipedia-style accumulation of facts. Instead, her team is exploring ways to expand and finesse the metadata of an NY Times article, so that published articles can build into a complex, networked and explorable body of work.
In order to leverage the knowledge that is inside every article published, we need to first encode it in a way that makes it searchable and extractable. This means identifying and annotating the potentially reusable pieces of information within an article as it is being written – bits that we in The New York Times R&D Lab have been calling Particles. This concept builds on ideas that have been discussed under the rubric of the Semantic Web for quite a while, but have not seen universal adoption because of the labor costs involved in doing so. At the Lab, we have been working on approaches to this kind of annotation and tagging that would greatly reduce the burden of work required.
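To make the Particles idea concrete, here is a minimal sketch of how a reusable piece of information might be represented as structured metadata attached to an article. The schema below (the `Particle` and `Article` classes, their field names, and the sample content) is entirely hypothetical and illustrative, not the NYT R&D Lab's actual format.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class Particle:
    """A reusable, annotated piece of information identified within an article."""
    text: str                      # the span of article text being annotated
    kind: str                      # e.g. "person", "place", "date", "statistic"
    tags: list = field(default_factory=list)

@dataclass
class Article:
    headline: str
    body: str
    particles: list = field(default_factory=list)

# Hypothetical example: annotating an article as it is written.
article = Article(
    headline="Ceasefire Talks Resume",
    body="Negotiators met in Geneva on Tuesday to restart discussions.",
    particles=[
        Particle("Geneva", kind="place", tags=["location"]),
        Particle("Tuesday", kind="date", tags=["time"]),
    ],
)

# Serialising the annotations to plain dictionaries makes the encoded
# knowledge searchable and extractable independently of the prose.
metadata = asdict(article)
```

Encoded this way, every annotated span can be queried or reused in new contexts without re-parsing the article text, which is the property Lloyd's Particles concept is after.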
Lloyd points to the NY Times Editor project, which introduces a robot element into a complex method of ‘fine-grained annotation’ (tagging) of an article as it is being written by a journalist. The automated element seeks to speed up the work dramatically, but the writer maintains control ‘to augment, edit and correct those processes with their knowledge’.
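The division of labour described above can be sketched in a few lines: an automated pass proposes annotations, and the writer then corrects them. The function names and the crude capitalised-word heuristic below are my own illustrative assumptions, not the Editor project's actual pipeline, which would use far more capable entity recognition.

```python
import re

def auto_annotate(text):
    """Naive automated pass: propose capitalised words as candidate entities.
    A real system would use trained NER models; this is only illustrative."""
    return [m.group() for m in re.finditer(r"\b[A-Z][a-z]+\b", text)]

def review(candidates, corrections):
    """Writer-in-the-loop step: the journalist drops wrong tags and adds
    missed ones, retaining final control over the annotations."""
    accepted = [c for c in candidates if c not in corrections.get("drop", [])]
    accepted += corrections.get("add", [])
    return accepted

text = "Talks between Syria and Turkey resumed in Geneva."
proposed = auto_annotate(text)   # machine proposes: Talks, Syria, Turkey, Geneva
final = review(proposed, {"drop": ["Talks"], "add": ["ceasefire"]})
```

The point of the design is that automation handles the tedious bulk of the tagging, while the journalist's judgement is applied only where the machine is wrong or incomplete.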
For Lloyd, this layering of news and data is the first step in an extended process of reimagining the future of news, one that includes enhanced search tools, structured information and endlessly adaptive content. In reality there is perhaps a disconnect between the day-to-day work of gathering and processing global news and the ‘high concept’ thinking contained in her article, but her writing nevertheless offers a tantalising glimpse into a connected future of news.
Can you imagine if, every time something new happened in Syria, Wikipedia published a new Syria page, and in order to understand the bigger picture, you had to manually sift through hundreds of pages with overlapping information? The idea seems absurd in that context and yet, it is essentially what news publishers do every day. The Particles approach suggests that we need to identify the evergreen, reusable pieces of information at the time of creation, so that they can be reused in new contexts. It means that news organizations are not just creating the “first draft of history”, but are synthesizing the second draft at the same time, becoming a resource for knowledge and civic understanding in new and powerful ways.