I’m Jeremy Hodges, currently a developer on the BBC’s Digital Media Graduate Scheme. I’ve been working on various newsroom tools at the BBC, the latest of which is the new Window on the Newsroom Ingestor.
What is WoN?
Window on the Newsroom (WoN) is an umbrella name covering a multitude of products which News Labs has developed to aid journalists in finding content for broadcast and online. WoN integrates with the BBC’s existing infrastructure, whilst using newer technologies to improve workflows. Ingestor is the newest of those tools and will be used by journalists to import content into the WoN infrastructure. Ingestor allows users to choose clips to be imported from BBC News’ video management system, called Jupiter, and machine transcribed using BBC R&D’s Kaldi speech-to-text technology.
What is the current problem?
Currently, to save on resources, WoN only automatically ingests BBC-generated video which is no more than 20 minutes long. This excludes longer interviews and programmes, which are the most time-consuming to manually transcribe. What happens when we want to transcribe and “paper edit” a video which is longer than 20 minutes? Now, we can use Ingestor.
The BBC has a lot of old systems and therefore uses legacy technologies and formats, which presents problems when adopting newer technologies. The first struggle in this project was getting content out of Jupiter. Jupiter’s API only returns XML, which is awkward to consume in modern web applications, where JSON is the norm.
How did we solve it?
This meant an intermediary API needed to be developed to sit between the Jupiter APIs and Ingestor. The API was built in NodeJS: it fetches the XML from Jupiter, cleans it up, and returns JSON for use in the application.
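As a rough illustration of the clean-up step, here is a minimal sketch in NodeJS. It assumes the XML has already been parsed by a library such as xml2js, which wraps every value in an array and puts attributes under a `$` key; the function below flattens that into plain JSON. The field names (`clip`, `title`, `duration`) are invented for illustration and do not reflect the real Jupiter schema.

```javascript
// Hypothetical sketch: flatten xml2js-style parser output (values wrapped in
// one-element arrays, XML attributes under "$") into clean JSON for the UI.
function cleanNode(node) {
  // Leaf text values come back as one-element arrays of strings
  if (Array.isArray(node)) {
    return node.length === 1 ? cleanNode(node[0]) : node.map(cleanNode);
  }
  if (typeof node !== 'object' || node === null) return node;

  const out = {};
  for (const [key, value] of Object.entries(node)) {
    if (key === '$') {
      Object.assign(out, value); // hoist XML attributes to plain fields
    } else {
      out[key] = cleanNode(value);
    }
  }
  return out;
}

// Example input: what a parser like xml2js might emit for one (made-up) clip
const parsed = {
  clip: [{
    $: { id: 'abc123' },
    title: ['Interview, 25 minutes'],
    duration: ['1500'],
  }],
};

console.log(JSON.stringify(cleanNode(parsed)));
// → {"clip":{"id":"abc123","title":"Interview, 25 minutes","duration":"1500"}}
```

The recursive walk means the same function handles arbitrarily nested elements, which keeps the endpoint code free of schema-specific unwrapping.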
The application UI is built on the AngularJS framework, which has allowed quick prototyping. We also made use of the Angular Material library to provide a usable user experience built from reusable components.
What does it do?
Ingestor is built around the idea of a search engine, allowing the search of the BBC’s vast content stores. These are then presented in an easily understandable form to the user.
How does it work?
Let’s break down the components of the product. Firstly, the site has a huge focus on searching and filtering; the search bar at the top of every page reinforces this.
Each search result clearly presents its status to the user, with the most important metadata (e.g. time and site) shown at a glance. Transcribing an item is then as simple as clicking ‘TRANSCRIBE’.
When a user requests a transcription, the system calculates how long this will take, then prompts the user to confirm whether to continue. This allows journalists to easily manage their time around transcription.
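A minimal sketch of what that estimate-and-confirm step might look like, assuming the calculation is a simple multiple of the clip's duration. The processing factor and function names are invented for illustration; the real system's formula isn't described in this post.

```javascript
// Hypothetical sketch: estimate transcription time from clip duration.
// Assumed: speech-to-text runs at about half real-time speed, i.e. a clip
// takes roughly twice its own length to transcribe.
const PROCESSING_FACTOR = 2; // assumed seconds of processing per second of video

function estimateTranscriptionSeconds(clipDurationSeconds) {
  return Math.ceil(clipDurationSeconds * PROCESSING_FACTOR);
}

function confirmationPrompt(estimateSeconds) {
  const mins = Math.ceil(estimateSeconds / 60);
  return `Transcription will take roughly ${mins} minute${mins === 1 ? '' : 's'}. Continue?`;
}

// A 25-minute (1500 s) clip would prompt with an estimate of about 50 minutes
console.log(confirmationPrompt(estimateTranscriptionSeconds(1500)));
```

Surfacing the estimate before the job starts is what lets journalists plan their time around the transcription rather than waiting on it blind.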
An item that has been transcribed can be easily identified and its transcription viewed from the main page.
An item that has an error is flagged to the user with an error message.
The idea behind the tool is to give journalists a quick overview: they can easily search the video archives and see all results at a glance, with the most important metadata visible. The application also lets journalists preview each item, with a popup modal providing video and key-frame previews.
A transcription can then be seen, tweaked or edited by the users.
This is still a prototype, but it has become a tool used by and designed for journalists thanks to input and feedback from the newsroom. So a huge thank you to all those who used and tested the product during its development. The application will now continue to be improved based on user feedback, before (I hope) being released to a full live environment.