August 29, 2024
by Natalie Schalk
Researchers at Coburg University of Applied Sciences have developed a tool that uses AI to detect and flag distorted, one-sided information in online news. “Bias” is the term used to describe such a distorted, one-sided representation of information in the media.
These are not always deliberately propagandistic texts; unconsciously one-sided formulations also count. The topic matters because media bias influences our purchasing behavior just as much as election results. “Today, everyone informs themselves through their own channels,” says Prof. Dr. Jochen L. Leidner. “Information is scattered, and people are not trained to question where content comes from and who has an interest in them believing this or that.” The computational linguist holds a research professorship at Coburg University of Applied Sciences, was previously Research Director at the global news agency Reuters in London, and has brought his interest in the media with him to Upper Franconia: he teaches, for example, the interdisciplinary course “Media Manipulation, Propaganda and Fake News” at Coburg University of Applied Sciences and as a guest at the University of Sheffield.
Facts instead of fake news
Leidner is a founding member of the new research institute “Coburg University Research Center for Responsible Artificial Intelligence” (CRAI) at Coburg University of Applied Sciences. Together with his research assistant Tim Menzner, he has developed a system that can recognize and differentiate between 27 types of media bias. This “BiasScanner” uses machine learning: with the help of a large language model based on a neural network, it can identify, for example, personal attacks on minorities, commercial bias, in which a text passage serves economic interests, or “causal misunderstanding bias,” which occurs when a cause-and-effect relationship between two variables is assumed without sufficient evidence. The widespread “ad hominem bias” is also recognized. Here the focus is not on the content of an argument but on the person making it: their character, motives or other characteristics are attacked. For example: “Demanding better climate policy from others while jetting off on vacation yourself: sheer hypocrisy!”
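To make the classification step more concrete, the following minimal sketch shows how a single sentence might be checked against such a taxonomy with a large language model. All names, the prompt wording, the three example categories, and the call_llm backend are illustrative assumptions; the article does not disclose BiasScanner’s actual prompts or model.

```python
# Minimal sketch (not the actual BiasScanner code): classify one sentence
# against a fixed taxonomy of bias types using a large language model.
import json

# Three of the 27 categories named in the article; the rest are omitted here.
BIAS_TYPES = [
    "ad hominem bias",
    "commercial bias",
    "causal misunderstanding bias",
]

PROMPT_TEMPLATE = (
    "You are a media bias classifier. Given one sentence from a news "
    "article, decide which of the following bias types apply, possibly "
    "none: {types}. "
    'Answer as JSON: {{"types": [], "strength": 0.0, "reason": ""}}.\n'
    "Sentence: {sentence}"
)

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM backend a real system would use."""
    raise NotImplementedError

def classify_sentence(sentence: str) -> dict:
    """Ask the model for bias types, a 0-1 strength, and an explanation."""
    prompt = PROMPT_TEMPLATE.format(types=", ".join(BIAS_TYPES),
                                    sentence=sentence)
    return json.loads(call_llm(prompt))
```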
A unique AI application for text analysis
Online media often mix facts and opinions, and given the mass of information it is impossible for people to check everything critically. There have already been various attempts at technical solutions: some analyze the tendency of an entire text in comparison with other texts, others infer a leaning from user comments. “It looks similar at first glance,” says Leidner, “but it’s something completely different.” The Coburg scientists’ BiasScanner instead applies its text analysis to each sentence individually and checks, in a strictly neutral way, which of the 27 bias subcategories the AI tool has been trained on apply to that sentence, and whether any of them occur at all. The system highlights biased passages in the text in color and generates an explanation for each automatic decision. At the same time, the strength of the bias is displayed on a scale.
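As a rough illustration of that per-sentence workflow, the sketch below reuses the hypothetical classify_sentence from the previous snippet: it splits an article into sentences, classifies each one, and attaches a highlight color according to the reported bias strength. The splitter, the color thresholds, and the output fields are assumptions for illustration, not BiasScanner’s actual format.

```python
# Sketch of the sentence-by-sentence workflow described above, reusing the
# hypothetical classify_sentence from the previous snippet.
import re

def split_sentences(text: str) -> list[str]:
    # Naive splitter for illustration; a production system would use a
    # proper NLP sentence segmenter.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def strength_to_color(strength: float) -> str:
    # Map the 0-1 bias strength onto a simple highlight scale
    # (thresholds invented for this sketch).
    if strength >= 0.66:
        return "red"
    if strength >= 0.33:
        return "orange"
    return "yellow"

def scan_article(text: str) -> list[dict]:
    findings = []
    for sentence in split_sentences(text):
        verdict = classify_sentence(sentence)  # one LLM call per sentence
        if verdict["types"]:  # keep only sentences where bias was detected
            findings.append({
                "sentence": sentence,
                "types": verdict["types"],
                "strength": verdict["strength"],
                "reason": verdict["reason"],
                "highlight": strength_to_color(verdict["strength"]),
            })
    return findings
```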
Looking for partners: How the BiasScanner is progressing
The software supports several languages and can be tried out in a demo version at biasscanner.org. The aim is for as many people as possible to use it to recognize and better understand manipulative statements on the Internet. A first browser add-on is now also available, and the Coburg research group is looking for support to drive the project forward and develop it further: funding, donations and research collaborations are needed. The public can also help improve the quality of the results. Leidner explains: “We have built in a data donation function. If someone has interesting examples where the BiasScanner has not yet detected a manipulation, they can donate such articles for research at the touch of a button.”