The right tool for the (misinformation) job

Conceived as an 'open' guide, it identifies the most effective tools depending on the context.

Folco Panizza | researcher in prosocial behaviour and decision-making, Scuola IMT Alti Studi Lucca

Artificial intelligence-generated images, malicious bots, swarms of fake accounts orchestrated by foreign agencies: it is no surprise to anyone that the Internet has become a chaos of disinformation. But in a year when almost half of the world's population is expected to vote, we cannot simply stand by while the Internet and social media are overrun with fake content.

There are ways to curb the phenomenon. In an article just published in the journal Nature Human Behaviour, an international group of researchers, including myself, identified and analysed the effectiveness of a number of possible interventions, a sort of toolbox, and also provided a simple guide to their use. The intuition is precisely that there is no one-size-fits-all solution to the problem of misinformation. If misinformation varies in format and content, so should the intervention proposed against it. The difficult part, obviously, is identifying the right 'tool' for the specific circumstances.

By reviewing over 80 scientific publications on the effectiveness of various types of interventions against disinformation, we compiled a guide comprising nine types - from debunking to media literacy training - each accompanied by indications of the target audience and the purpose for which it should be used, some examples, and a quantitative summary of its effectiveness. The toolbox, available online, is a work in progress and, unlike standard academic publications, will be updated in the coming months and years.

In addition to being up-to-date to keep pace with the ever-changing landscape of misinformation, the evidence gathered comes from research conducted on six continents, thus providing useful insights into the varying effectiveness of interventions in different contexts: what works in one country or situation may not work in another or in different circumstances. For example, evidence suggests that preventive measures to stimulate users' logical reasoning may be effective in groups with high literacy rates, but work less well in social groups where abstract thinking is not as common. In such communities, 'old-school' refutation of false information is preferred: so-called debunking.

Other interventions seem to have more far-reaching effectiveness, such as nudging users to resist distraction and focus on the accuracy of the content they are sharing. However, to give another example of how situation-specific these tools are, this one seems to have limited impact in the long run, as users may gradually become desensitised. Remember, for example, how content warnings about COVID-19 and vaccines in Instagram or Facebook posts quickly faded into the background and went almost unnoticed after only a few appearances in our feeds?

The ultimate goal of the project is to make all this information accessible to people and decision-makers alike. Indeed, the very idea of the toolbox was born to overcome the barriers inherent in scientific publications, which are often considered technical and inaccessible. With an open-access format and the use of comprehensible language, the toolbox's in-depth and detailed information could potentially provide the impetus for designing new policies and driving reforms on how to make social platforms more liveable.
