Tackling the fake news problem

Letter blocks that spell out 'fact' and 'fake'

Meet the Imperial researchers who are trying to understand and solve the problem of fake news in the digital age.

“Fake news is a human activity, so humans should be involved” – Dr Julio Amador

The term ‘fake news’ regularly hits the headlines these days. Whether it concerns political events or information shared on social media platforms, it seems to be getting harder and harder to know which sources of information can be trusted.

For researchers at Imperial, however, understanding how misinformation is generated and spread online also presents an interesting research challenge, one that may one day help to solve the fake news problem.

Finding a definition

Perhaps surprisingly, the first challenge is actually finding a definition of fake news. Fact-checking a statement is easy if it presents very specific indicators, but human language is rarely this straightforward. Any statement needs to be considered in context, with its intentions taken into account – propaganda campaigns and sarcastic jokes are good examples where context and intention are key to understanding the message.

Dr Julio Amador

Dr Julio Amador, a research fellow at Imperial College Business School, and his colleagues have published several studies looking at how misinformation spreads on Twitter. In two independent studies, the researchers looked at tweets posted during the 2016 Brexit referendum and the US presidential election. In a recent interview, Dr Amador highlighted two important social factors that are often associated with the spread of messages containing misinformation: previous exposure to misinformation and social polarisation.

If a person has previously been exposed to similar information, they are more likely to assume that the information provided is correct and to pass it on. Polarisation between social media users also facilitates the spread of misinformation: if information comes from a source with similar political and social attitudes, it is assumed to be more reliable. Both of these factors ultimately contribute to the creation of online filter bubbles, in which people listen only to those who hold the same beliefs.

You can find the full audio interview with Dr Julio Amador below, where we covered topics such as government propaganda campaigns, social media platform responsibility and tips for detecting misleading tweets.

Classifying misinformation

Professor Michael Bronstein, Chair in Machine Learning and Pattern Recognition in Imperial’s Department of Computing, has also been involved in research to better understand how misinformation spreads online. He has previously developed a type of machine learning algorithm, called geometric deep learning (GDL), which can be used to find patterns in networks of data, just like the ones formed by the sharing of messages and media online.

Previous research has shown that misleading information tends to have a distinctive pattern of spread online. Using GDL, Professor Bronstein and his team are building a platform that can detect and classify misinformation using those patterns. Looking at how information spreads, rather than at its content (i.e. its linguistic and semantic aspects), allows the researchers to sidestep the difficulty of precisely defining fake news and to focus on the characteristic ways in which misinformation is amplified. Furthermore, because the algorithm does not rely on the language itself, it can be applied more broadly to other forms of communication, such as audio and video content.
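The article does not spell out how such a spread-based classifier works, but the core idea can be sketched with a toy example: instead of reading the text of a story, look at the shape of its share cascade. The minimal Python below is purely illustrative and is not Fabula.AI's method; the features (cascade size and depth) and the hand-picked threshold are hypothetical stand-ins for patterns the real system would learn from labelled data with geometric deep learning.

```python
# Toy sketch: flag a story by the shape of its share cascade, not its text.
# All features and thresholds here are hypothetical, for illustration only.
from collections import defaultdict

def cascade_features(edges):
    """Compute simple structural features of a share cascade.

    edges -- list of (parent, child) reshare pairs; the original post is node 0.
    """
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    # Breadth-first traversal from the original post to measure depth.
    depth = {0: 0}
    frontier = [0]
    while frontier:
        nxt = []
        for node in frontier:
            for c in children[node]:
                depth[c] = depth[node] + 1
                nxt.append(c)
        frontier = nxt

    size = len(depth)
    max_depth = max(depth.values())
    mean_depth = sum(depth.values()) / size  # crude proxy for how chain-like the cascade is
    return {"size": size, "max_depth": max_depth, "mean_depth": mean_depth}

def looks_suspicious(edges, depth_threshold=5):
    """Hypothetical rule of thumb: deep, chain-like cascades get flagged.

    A real system would learn this decision boundary from labelled
    cascades rather than use a fixed threshold.
    """
    return cascade_features(edges)["max_depth"] >= depth_threshold

# A long resharing chain (0 -> 1 -> 2 -> ...) versus a shallow one-hop burst.
chain = [(i, i + 1) for i in range(6)]
burst = [(0, i) for i in range(1, 7)]
print(looks_suspicious(chain))  # True: deep, person-to-person chain
print(looks_suspicious(burst))  # False: shallow broadcast from one account
```

The design point of the sketch matches the paragraph above: nothing in the classifier looks at words, so the same machinery would in principle work for a cascade of audio or video shares.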

By comparing new items to previously detected pieces of misinformation, the researchers can also gain insight into why certain information is classed as false and a deeper understanding of what makes it into the misinformation bubble. Professor Bronstein and his colleagues have also created a startup company, called Fabula.AI, dedicated to the further development and wider distribution of their misinformation-detection algorithm.

In the following audio interview with Professor Bronstein, we discussed his startup Fabula.AI, the different approaches used to detect fake news and the challenges ahead in tackling the fake news problem with automated tools.

Neither Dr Amador nor Professor Bronstein thinks that artificial intelligence or machine learning algorithms are the sole solution to today’s fake news problem. While AI-based detection tools can help to automate the identification of misinformation, both researchers stress the importance of human decisions in classifying which information should be treated as fake and which as true. As Dr Amador puts it: “Fake news is a human activity, so humans should be involved”.

As for who should be responsible for managing the spread of misinformation online, Dr Amador thinks that social media platforms should perhaps be more proactive, while Professor Bronstein suggests that governments should focus on providing better education in media and online literacy. Both scientists stress that any active government involvement should proceed with extreme caution, given the fine line between regulation and censorship.

Reporter

Bernadeta Dadonaite
Centre for Languages, Culture and Communication


Contact details

Email: press.office@imperial.ac.uk
