18 November 2019

#newsHack report: Storytelling with voice

It is amazing to see how quickly the media and tech worlds have moved since our last voice hack.

Not only is a previous News Labs prototype "Skippy" now a feature of BBC News on Alexa, but the adoption of smart speakers and voice assistants has exploded.

Nine teams gathered at Level39 in London’s Canary Wharf for a two-day NewsHack on the theme of storytelling with voice. They explored new ideas that could be enabled by the latest innovations in this space.

The interest in this NewsHack was the strongest we’ve seen all year. The ideas pitched lived up to the high expectations, creating an unenviable task for the judging panel as they decided on winners in three categories.

Teams travelled from as far afield as Norway and Sweden, with a mix of journalists, designers, developers and academic researchers all collaborating to rapidly build demos.

Teams discuss new ideas for storytelling with voice.

Teams work to build their demos before the pitching begins.

Nicky Birch, a lead development producer at BBC R&D; Tom Hewitson, founder of voice games studio labworks.io; and Miles Bernie, head of BBC News Labs, made up the judging panel.

We’ve outlined all of the pitches below, starting with the three winners followed by the six other teams in the order they presented their ideas at the end of the second day.

Most captivating audience experience: NRK

The judges agreed unanimously that the team from the Norwegian public broadcaster NRK won this category for their focus on the user journey and the editorial thought that went into their prototype.

The team’s idea examined converting radio audiences from passive to active listeners by allowing them to interact with the news and then interpreting the user’s responses. The team identified the challenges involved in making a user aware that a level of interaction is expected, and designed ways to avoid long periods of passive listening.

Avoiding yes/no questions within stories, and instead asking listeners "What do you think?", allowed the team to move beyond a simple passive-or-active model of story consumption. The tool groups the arguments users make on particular news stories and topics, and could potentially convert these user-generated answers into further content.

The team acknowledged that there was significant editorial work involved in the prototype but suggested that cutting up existing news content and then forming new narratives for the tool could allow it to scale.

Most useful editorial tool: The David Hackingboroughs

This team, with members from BBC Voice+AI, identified the challenge BBC News faces in engaging certain audience segments as the problem they would like to solve through voice interaction.

By allowing users who consume other BBC offerings, in particular factual and entertainment brands, to ask questions while watching or listening to those shows, they hope to increase engagement with the news. This could create an enjoyable experience for consumers that also adds to their knowledge of topical issues that affect them.

The demo allowed a user to interrupt the show "Seven Worlds, One Planet" and ask questions about climate change, which were then answered using content from outside the show. An example question, asking how much ice a user's carbon footprint melts, was answered by showing a map on screen.

The team set up their demo on stage.

The David Hackingboroughs prepare to face the judges.

The judges said the demo offered a compelling vision of how we will interact with television sets in the future, and that it won this category because of the "grown-up" production tools used to communicate the idea in the demo. This meant they could already see how it could potentially be used with audiences.

In reply to the question of what they would do with an unlimited budget, the team said they would build a synthetic David Attenborough to answer viewers' questions while they watched the show.

“Surprise us!” category winner: My Beeb - Edinburgh University

This team’s prototype focused on personalisation without bias amplification. By gently curating voice-based content in a way that is comparable to "shuffle" on a music playlist, the team hopes to break audiences out of their filter bubbles without turning them off relevant news altogether.

The options to "skip" or "tell me more", based on story headlines, would be offered as buttons on a smartphone and in a hands-free voice interface. Users would be clustered based on their consumption habits, and the content they are served would then gradually shift toward that of a different cluster over time. In this way a user is gently broken out of their habits without compromising their experience of the app.
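A minimal sketch of how this gradual cluster-shifting curation might work. The topic axes, cluster centroids and blend schedule below are all illustrative assumptions, not the team's implementation; the idea is simply to rank stories by a score that slides from the user's home cluster toward a target cluster:

```python
# Illustrative sketch: gently blend a user's feed toward a target cluster.
# Topic vectors, centroids and the blend schedule are hypothetical.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def score(story_vec, home_centroid, target_centroid, alpha):
    """Blend relevance to the user's home cluster with a target cluster.

    alpha in [0, 1]: 0 = pure home-cluster feed, 1 = fully shifted.
    """
    return (1 - alpha) * dot(story_vec, home_centroid) + alpha * dot(story_vec, target_centroid)

# Topic axes: [politics, science, sport]
home = [0.8, 0.1, 0.1]      # the user's current consumption habits
target = [0.2, 0.7, 0.1]    # the cluster we gently move them toward

stories = {
    "election result": [0.9, 0.0, 0.1],
    "climate report": [0.2, 0.8, 0.0],
}

for alpha in (0.0, 0.5, 1.0):
    ranked = sorted(stories, key=lambda s: score(stories[s], home, target, alpha), reverse=True)
    print(alpha, ranked)
# At alpha=1.0 the science-heavy story outranks the politics story.
```

In practice the centroids would come from clustering real consumption data, and alpha would rise slowly enough that the feed never feels abruptly different.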

In response to a question from the judges, the team said they would aim to automatically tag the granular blocks of news stories using named entity recognition, and use text-to-speech technology to convert them into audio content. These blocks could then be served to users, meaning they would still hear some relevant content from stories the curation algorithm would otherwise have skipped entirely.
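As a rough illustration of tagging story blocks by the entities they mention, here is a toy gazetteer lookup standing in for a real named entity recogniser (the entity list and labels are made up for the example; a production system would use a trained NER model):

```python
# Toy sketch: tag granular story blocks by the named entities they mention.
# A real system would use an NER model; this gazetteer lookup is a stand-in.

GAZETTEER = {
    "Boris Johnson": "PERSON",
    "BBC": "ORG",
    "Norway": "LOCATION",
}

def tag_block(text):
    """Return the entities found in one block of a story."""
    return {name: label for name, label in GAZETTEER.items() if name in text}

blocks = [
    "Boris Johnson spoke in Norway today.",
    "The BBC reported on the summit.",
]

for block in blocks:
    print(tag_block(block))
# First block tags: {'Boris Johnson': 'PERSON', 'Norway': 'LOCATION'}
```

The resulting tags are what would let the curation algorithm pull out just the blocks of an otherwise-skipped story that mention entities the user cares about, before passing them to text-to-speech.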

The judges said this idea was an interesting spin on a familiar concept.

A room full of ideas

Swedish public radio (Sveriges Radio) proposed a user feedback loop for editors through an interruptible story format that becomes richer with the help of listeners' questions. Crowdsourced storytelling through voice interaction was demonstrated with two examples: surfacing the most popular stories based on user requests, and journalists creating answers to frequently asked questions. The journalists could then send notifications to listeners' smart devices, letting them know the question they asked now had an answer to listen to.

Attendees listen to one of the talks.

Over 70 people attended the event which included talks from industry experts.

Team Two-Legged Dog showed how news and music could be combined to match the mood of the listener. The team described how content could be tagged with its editorial tone and the user could be monitored for feedback, for example inferring stress from their voice. This would lead to a relevant play queue, built specifically for them.

Story Explorer, a demo built by BBC News Labs and Content Production Workflows (CPW) engineers, showed how rich interactions for users can be built by semantically tagging the events and elements that make up a story. The team asked questions of their version of "Little Red Riding Hood" over Alexa.

Building a semantic representation of Little Red Riding Hood

Team Story Explorer.

The creative agency Joi Polloi showed how a user’s voice can be used to search for and surface relevant content that already exists online from a specific news organisation. Their prototype “Oh Wow” returned the latest videos from Channel 4 News on YouTube related to Boris Johnson, based on the intent of the user’s voice search on a smartphone. The team said this showed how content that already meets editorial standards and matches the zeitgeist can be used to make search results from voice more relevant.

Team Carol Nate with members from BBC CPW and City, University of London, showed how a Deep personalised News Explainer (DeePex) could be used to engage younger audiences who feel news content assumes too much prior knowledge. The tool would fill the user’s knowledge gap with conversational explainers that relate directly to the individual’s existing level of understanding of a topic.

The 3 Amigos from BBC News Labs focused on the human relationship with smart speakers, and the finding that 41% of users say it feels like talking to another person. The team used voice recognition and face motion detection services to interpret a user's reactions to news stories and give them more or less information based on their level of interest in a headline.


Love data and code?

We'd like to hear from you.