In November this year, News Labs collaborated with BBC Connected Studio and BBC World Service to hold a #newsHACK event in New Delhi, India. Software engineers Remi Oduyemi, James Dooley and Eimi Okuno share what they learnt and what they built during the event.
The celebration of Diwali the previous week left smouldering firework remains and caused pollution levels to rise to “Severe”. Persevering still, people — with and without masks — gathered at the Google Campus, located just outside of Delhi, to tackle the issue of fake news.
Fake news is internationally topical, but in India and developing countries, the recent explosion of cheap data means that many people don’t yet have the digital literacy to separate real stories from rumour. This has resulted in a series of violent deaths — many of which have come on the heels of fear-mongering videos shared in small private chat groups on the messaging platform WhatsApp.
At the hack event, professionals and students gathered for two days to spark insights and strategies against fake news. The first day consisted of presentations and engineering exercises to help participants think of solutions in three categories:
- Challenge fake news in private/encrypted messaging
- Fact-check news from source through to social platforms
- Check combined images, videos and text for indicators of fake news

Judging the entries were:
- Dr. Santanu Chakrabarti, Head of Audience Research at BBC World Service;
- Irene Jay Liu, head of Google News Lab in the Asia-Pacific region;
- Pratik Sinha, founder of Alt News.
Each shared insights into the problem before hacking kicked off. We give a brief summary of some of the biggest takeaways from their presentations, before we share what we built in response to the challenge.
The fake news ecosystem
While most research has focused on the origins of fake news, Dr Chakrabarti has been analysing the behaviour of those who read it and pass it on to their social networks.
Unsurprisingly, people are less likely to scrutinise content shared by someone they trust, or content that aligns with their own beliefs. Generally, people tend to trust family members or close friends to verify whether a story is true — so they don’t have to.
There is also a preference for news content shared as images and video. Sinha said that 80% of fake news in India is in the form of images or videos with overlaid text. Video manipulation tends to be in the form of clipping or dubbing, rather than deep fakes.
Dr Chakrabarti and Mr Sinha both shared a few red flags that might indicate a piece of content is fake:
- Do you feel like you’re drowning in data?
- Is there a heightened sense of urgency?
- Is the image dominant with very little text?
- …Or is there just too much text?
As well as these tell-tale signs, a characteristic of viral fake news messages is that the same body of text will be shared multiple times, often with little variation. This can be used to detect patterns of fake news on social media before they go viral.
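One simple way to spot this repetition is to compare messages by their overlapping word n-grams ("shingles"). The sketch below is a minimal illustration of that idea, not the method any team actually used; the threshold and messages are made up for the example.

```python
import re
from itertools import combinations

def shingles(text, n=3):
    """Lowercase word n-grams ('shingles') of a message."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Jaccard similarity between two shingle sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def near_duplicate_groups(messages, threshold=0.6):
    """Pair up messages whose shingle overlap exceeds the threshold."""
    sets = [shingles(m) for m in messages]
    return [(i, j) for i, j in combinations(range(len(messages)), 2)
            if jaccard(sets[i], sets[j]) >= threshold]

msgs = [
    "Forward this now! Kidnappers spotted near the market, share with everyone",
    "Forward this now! Kidnappers spotted near the market share with all your groups",
    "Weather will be sunny in Delhi tomorrow",
]
print(near_duplicate_groups(msgs))  # → [(0, 1)]
```

The first two messages differ only in their final words, so they cluster together; at scale, a burst of such near-duplicates across many chats is a signal worth investigating.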
So what solutions might work?
Ms Jay Liu shared some insights from the Trusted Media Summit, “What is News Now?” — a conference and hack event held to spark conversations about creative reporting, public trust in news and the spread of misinformation.
At the Trusted Media Summit, the winning prototype was an image verification tool for mobiles, designed to tackle fake news spread primarily as images with text. Importantly, the designers of this prototype decided to take a mobile-first approach. In Asia, it’s all about the smartphone — although often Western hack prototypes are built for laptops, with an assumption that everyone owns one.
When it comes to fully automated verification tools, Mr Sinha said that there are pros and cons. While automated fact-checking can speed up the verification process, it could also lead to more misinformation being spread. Automated fact-checking tools can only indicate whether a story is likely to be true or false. Without the nuance and context provided by human fact-checkers, there is a danger that people will treat these indications as conclusive verdicts rather than tentative suggestions.
Mr Sinha recommended focusing on the most widespread method of fake news dissemination: images and videos embedded with text.
With this insight, teams of engineers and journalists collaborated to develop prototypes.
After many hours of negotiation between editorial and engineering over each idea and feature, and of feverish work, time for development was up. Each team brought its prototype forward to be judged by the panellists.
So what did we build?
Loop: A prototype for debunking local fake news
This joint team from the BBC’s London-based Audience Engagement and Monitoring teams built a prototype mobile app that shows debunked fake news based on the user’s location.
The list of debunked articles is sourced from existing fact-checkers and hoax busters in India.
The team wanted to encourage users to take the initiative and fact-check. They believed Loop could achieve this by providing relevant local content and keeping the experience simple, quick and easy.
Predicting viral posts on Twitter
This team from Audience Engagement, the BBC’s Delhi bureau and the Indian Institutes of Technology (IITs) aimed to assist human fact-checkers in discovering fake news before it goes viral.
They used a number of metrics to help predict a viral post, including:
- Entity extraction
- Tone of voice
Given the right combination of these metrics, and a set threshold for the growth and volume of a particular piece of news, the team could flag likely fake news stories and send them to human fact-checkers for validation.
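The thresholding step might look something like the sketch below. The growth and volume thresholds here are invented for illustration, and the real prototype combined more signals; the point is that the output is a queue for humans, not an automated verdict.

```python
def flag_for_review(share_counts, growth_threshold=2.0, volume_threshold=500):
    """
    share_counts: share totals for one story at successive time intervals.

    Flags the story only when both its latest share volume and its
    interval-on-interval growth exceed the set thresholds -- flagged
    stories go to human fact-checkers rather than being labelled fake
    automatically.
    """
    if len(share_counts) < 2 or share_counts[-1] < volume_threshold:
        return False
    prev, latest = share_counts[-2], share_counts[-1]
    growth = latest / prev if prev else float("inf")
    return growth >= growth_threshold

# A story jumping from 300 to 900 shares clears both bars and is flagged;
# a high-volume story growing slowly is not.
print(flag_for_review([100, 300, 900]))  # → True
print(flag_for_review([400, 600]))       # → False
```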
Holding fake news sources to account
This team from BBC News Labs, Scroll.in and an IIT aimed to hold fake news sources accountable for spreading disinformation by tracking and displaying each fake news story on a timeline. They hoped to encourage news sources to fact-check stories they write while also informing news consumers of consistently untrustworthy news sources.
The team focused on verifying news stories in the form of images with text and used a combination of:
- Google search results to find instances of the story in the media
- a bank of trustworthy vs untrustworthy news sources
- sentiment analysis to determine whether the news story was likely to be trustworthy
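A rough sense of how such signals might be combined is sketched below. The domain lists and weighting are placeholders, not the team's actual source bank or scoring, and the result is a heuristic score rather than a verdict.

```python
# Hypothetical source bank -- placeholder entries, not the team's real list.
TRUSTED = {"bbc.com", "altnews.in"}
UNTRUSTED = {"hoax-news.example"}

def trust_score(domains_carrying_story, sentiment):
    """
    domains_carrying_story: domains returned by a web search for the story.
    sentiment: polarity in [-1, 1]; strongly negative, alarmist text
    lowers the score.

    Returns a rough score in [0, 1] -- a hint for readers and
    fact-checkers, not a conclusive judgement.
    """
    if not domains_carrying_story:
        return 0.0
    trusted_hits = sum(d in TRUSTED for d in domains_carrying_story)
    untrusted_hits = sum(d in UNTRUSTED for d in domains_carrying_story)
    source_score = (trusted_hits - untrusted_hits) / len(domains_carrying_story)
    score = max(0.0, min(1.0, 0.5 + 0.5 * source_score))
    if sentiment < -0.5:  # alarmist framing is a red flag
        score *= 0.7
    return round(score, 2)

print(trust_score(["bbc.com", "altnews.in"], 0.0))   # → 1.0
print(trust_score(["hoax-news.example"], -0.8))      # → 0.0
```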
Judge Judy: a bot for your group chats
This joint team from News Labs and journalists from Delhi’s Monitoring teams built a Telegram bot that protects your family and friend group chats from fake news. Inviting Judge Judy into your group aims to bridge the gap between fact-checking organisations and news consumers, alerting members when unverified links are forwarded or shared in the chat.
By integrating with fact-checking organisations, the bot can flag shared content as untrustworthy/fake news at both an article and a news-source level. It then warns the members of the group, and tells off the person who’s continually sharing fake news. Hopefully this can change user behaviour, and reinforce thinking before sharing.
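The core of such a bot is the link check itself. The sketch below shows just that piece in plain Python, leaving out the Telegram Bot API wiring the real prototype would need; the flagged-domain list is a placeholder for the source bank a fact-checking organisation would supply.

```python
import re
from urllib.parse import urlparse

# Placeholder source bank -- the prototype drew on Indian fact-checking
# organisations; these domains are invented for illustration.
FLAGGED_DOMAINS = {"hoax-news.example", "rumour-mill.example"}

URL_RE = re.compile(r"https?://\S+")

def check_message(text):
    """Return a warning for each link in the message whose domain is in
    the flagged-source bank; an empty list means no alert is sent."""
    warnings = []
    for url in URL_RE.findall(text):
        domain = urlparse(url).netloc.lower()
        if domain in FLAGGED_DOMAINS:
            warnings.append(f"{domain} has repeatedly been flagged for fake news")
    return warnings

print(check_message("see https://hoax-news.example/story"))
# → ['hoax-news.example has repeatedly been flagged for fake news']
print(check_message("read https://www.bbc.com/news"))  # → []
```

In the real bot, each warning would be posted back into the group chat so every member sees it before deciding whether to forward the link on.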
News Labs is exploring whether it can work to develop any of the prototypes further. Keep an eye on our blog for updates!