Graphical Storytelling

Reaching new audiences with short comics about important health stories


Can we automatically generate news comics from text?

Younger audiences are interested in stories with great public service value — such as issues around mental health — but might be less interested in the text-heavy format through which many of these stories are delivered online.

In 2019, News Labs explored how a new format could be developed to convey such stories in a visually appealing way, without significantly increasing journalists' workloads.

The Graphical Storytelling (GST) project generates short news comics based on text provided by a journalist, to tell health stories in an accessible and appealing way.

Examples of comic panels generated by our first prototype: a visualisation of a percentage statistic, a quotation with a speech bubble and attribution, and an impactful way of showing differences in numbers.

User research on a hand-crafted prototype of these stories drew overwhelmingly positive responses from the target younger demographic. Other teams have successfully used similar formats to tell stories on Instagram and Snapchat.

There were significant technical and editorial challenges to overcome. Crafting these stories need not be a burdensome task: how might we lean on automation to identify features in text and choose the most appropriate graphical treatments?

Phase One: Making it happen

In June 2019 we began the first six-week phase of the project, to determine whether the proposed tool would be technically feasible. We studied health stories to identify common structural elements and entities, to better understand the visual elements we would need to assemble. We built prototypes of entity detection and image lookup services, and a renderer with templates for recurring panel types such as quotations, statistics, and graphical symbols. We joined these together with a simple web app that let a user input some text, then review and tweak the comic panels it generated.
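The renderer described above can be pictured as a small set of templates keyed by panel type. The sketch below is a hypothetical stand-in (the names `Panel` and `render` are ours, not the project's) that emits plain-text placeholders instead of graphics:

```python
from dataclasses import dataclass, field

@dataclass
class Panel:
    """One comic panel; kind is "quote", "statistic" or "symbol"."""
    kind: str
    text: str
    meta: dict = field(default_factory=dict)

def render(panel: Panel) -> str:
    """Apply the template matching the panel type (text stand-in for graphics)."""
    if panel.kind == "quote":
        return f'"{panel.text}" - {panel.meta.get("attribution", "unknown")}'
    if panel.kind == "statistic":
        return f'{panel.meta.get("value", "?")}: {panel.text}'
    # Fall back to a symbolic image looked up for the panel text.
    return f'[{panel.meta.get("symbol", "image")}] {panel.text}'
```

A real renderer would emit an image per template; the web app then let the journalist review and tweak each panel before publishing.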

The first prototype let us explore how journalists might put together a comic from a text story. The comics produced at this stage were very simple, but were sufficient to establish the feasibility of a more sophisticated method of assembling images. Exploring this prototype, we found that journalists spent a lot of time tweaking the appearance of individual panels. For our second phase, it became clear we needed to do more of the heavy lifting and automatically choose the best panels.

Phase Two: Getting smarter

In August 2019, we began our second six-week phase, this time focusing on increasing the level of automation in generating comics. We built a natural language processing pipeline to identify the features in the text entered by a journalist and generate a best-fit image.

For example, if a sentence contained numerical data, a simple data visualisation was generated. If it contained a quote, it was styled as a speech bubble with an attribution to the speaker.
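As a rough illustration, the feature detection step above can be approximated with simple heuristics. This is only a sketch with a hypothetical `classify_sentence` function; the actual pipeline used proper NLP rather than regular expressions:

```python
import re

def classify_sentence(sentence: str) -> str:
    """Pick a panel type for a sentence: quote, statistic or fallback symbol."""
    # Quoted speech (straight or curly quotes) becomes a speech-bubble panel.
    if re.search(r'["\u201c][^"\u201d]+["\u201d]', sentence):
        return "quote"
    # Any numerical data becomes a simple data visualisation.
    if re.search(r'\d', sentence):
        return "statistic"
    # Everything else falls back to a symbolic image.
    return "symbol"
```

Mapping each sentence to a panel type this way is what lets the tool propose a best-fit image before the journalist reviews it.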

Meanwhile, we gained a better understanding of the gaps that automation is unlikely to fill. For example, if a story described a novel treatment for an obscure condition, there may be no relevant graphical representation available. Ultimately, we want to close this gap as much as possible. Part of the process also involved working with UX colleagues to build an interface that better helps journalists conceive of the whole story as a visually coherent comic.

Phase Three: Going global

The GST experience demonstrated that it could make things simpler for English-speaking journalists. Instead of having to go through storyboarding, graphics commissioning and development stages, they could select the most relevant options from the menu cards within seconds.

With the BBC broadcasting in more than 40 languages, could we provide the same facility to non-English teams? These teams often work in languages for which machine learning language tools are less mature, and they typically have fewer resources for conventional data visualisation tooling.

As part of the GoURMET project, which explores machine translation under our multilingual solutions workstream, we deployed a subset of the machine translation models we had developed to act as an intermediary.

This meant a journalist writing in Hausa, Turkish or Serbian could have their text rendered through the tool as it is intended to be displayed, while its English version (machine translated, and available to be checked and tweaked by a human) drives the automatic generation of the graphics.
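The intermediary flow can be sketched as follows, with a stubbed `translate` standing in for the deployed GoURMET models (all names here are hypothetical, not the project's own API):

```python
def translate(text: str, src: str, dst: str = "en") -> str:
    # Stand-in for the deployed machine translation models:
    # returns a tagged placeholder instead of a real translation.
    return f"[{src}->{dst}] {text}"

def build_panel(source_text: str, src_lang: str) -> dict:
    english = translate(source_text, src_lang)
    # Simplified stand-in for the English feature-detection pipeline.
    kind = "quote" if '"' in english else "symbol"
    return {
        "display_text": source_text,   # shown to readers in the source language
        "analysis_text": english,      # machine translation, checkable by a human
        "kind": kind,                  # drives which panel template is used
    }
```

The key design point is the split between `display_text` and `analysis_text`: readers see the original language, while the English side feeds the existing automation unchanged.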


  • Multilingual GST tool forms part of the Multilingual Journalism suite shortlisted for the News Innovation Award in 2021
  • There are multiple strands to explore further, such as:
      • How this work might interact with automatic image and graphics generation
      • How the process might deliver a more continuous visual experience
      • How it might track the BBC's changing visual style more closely
      • How it might work for more challenging alphabets and languages


Love data and code?

We'd like to hear from you.