The invasion of the 'bad bots'

What they are, what we know about them and how they act in the online manipulation market.

[Image: a malicious bot. Credit: Freepik]

They are used to spread disinformation, sow discord, and even defraud people. These are the so-called 'bad bots': the fake accounts that, on the internet and on social networks, make it possible to buy tens of thousands of fake views, to spread fake news on a massive scale, and even to sell out a star's concert tickets in a matter of moments. We discussed the more and less well-known workings of this market, in which we are all actors, and its potential impact on societies and democracies with Jon Roozenbeek, researcher at the Social Decision-Making Laboratory of the University of Cambridge and recently a visiting professor at the IMT School.

How many of the 'people' we interact with online do not actually exist?

This is a difficult question to answer, because our detection methods are so poor. But we have suspicions. There is a study conducted every year by a research group called Barracuda to measure how much internet traffic is made up of bots, good and bad. Good bots are news aggregators, chatbots, and so on. Bad bots are those that publish disinformation, steal personal data, buy up tickets for events en masse, and so on. According to the most recent study, published last year, malicious bots accounted for 24 per cent of all internet traffic: one in four. That does not mean that 24 per cent of all social media content is created by bots; it means that a quarter of all internet traffic comes from malicious bots.

Although this number seems to decrease year by year, that is a misleading impression: it only means that we are becoming less and less able to recognise what is and what is not a bot online, a problem partly due to the rise of artificial intelligence. Today's bots seem much more human than before. This makes it even more difficult to estimate the extent of the problem, but we are fairly sure that it is substantial, and also that our detection tools are, unfortunately, rather poor.

What are the categories of 'bad' bots?

There are various ways in which a bot can be harmful; not all of them are illegal, but they are more akin to black-economy behaviour. One of the main ones is simply making money, for example by buying tickets for events. Say Taylor Swift does a concert in Italy: the tickets sell out in a minute. It's not because there are people queuing up, or not just because of that. Sometimes it is, for very popular artists. But often there are automated scripts that buy the tickets and then resell them, on eBay or elsewhere.

This happens not only with event tickets but also with popular hardware, such as the PlayStation 5, which suffered badly from this problem at launch: bots bought up the stock, creating far more demand than supply. You can make a lot of money by being among the first to buy and then reselling at a higher price online.

And is there no way to detect this kind of ticket-buying bot?

There is, but it doesn't work well. It is really difficult.
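To make the difficulty concrete, here is a minimal sketch of the kind of burst heuristic a platform might try: flag any client that completes too many purchases in a short window. Everything here is an illustrative assumption, not a real platform's rules; the data, field names, and thresholds are invented.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical purchase records: (timestamp, client_fingerprint, ticket_count).
purchases = [
    (datetime(2025, 3, 1, 10, 0, 0), "fp_a", 4),
    (datetime(2025, 3, 1, 10, 0, 1), "fp_a", 4),
    (datetime(2025, 3, 1, 10, 0, 2), "fp_a", 4),
    (datetime(2025, 3, 1, 10, 0, 5), "fp_b", 2),
]

WINDOW = timedelta(seconds=10)   # how far back we look
MAX_BUYS_IN_WINDOW = 2           # more than this from one client looks automated

def flag_suspicious(purchases):
    """Flag fingerprints that complete too many purchases in a short window."""
    by_fingerprint = defaultdict(list)
    for ts, fingerprint, _count in purchases:
        by_fingerprint[fingerprint].append(ts)

    flagged = set()
    for fingerprint, timestamps in by_fingerprint.items():
        timestamps.sort()
        start = 0
        # Slide a window over the sorted timestamps.
        for end in range(len(timestamps)):
            while timestamps[end] - timestamps[start] > WINDOW:
                start += 1
            if end - start + 1 > MAX_BUYS_IN_WINDOW:
                flagged.add(fingerprint)
                break
    return flagged

print(flag_suspicious(purchases))  # {'fp_a'}
```

The sketch also shows why such detection works poorly in practice, as Roozenbeek notes: a scalping operation simply rotates fingerprints, IP addresses, and payment details across many accounts, so each individual client stays under any threshold.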

What are other areas in which bots are used? 

There is the cryptocurrency scam, more or less the modern version of the email scam. This brings us closer to social media territory, where someone programs bots on X to say 'you should buy this cryptocurrency'. Real people think 'look, a lot of people are buying this cryptocurrency, maybe I should too'. In reality, they are simply bots. So this is one way to scam people.
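One simple signal of this kind of shilling is many accounts posting near-identical messages within minutes of each other. Below is a minimal sketch of that idea, assuming hypothetical post data; the normalization rule and thresholds are invented for illustration, and real coordinated campaigns are much harder to catch than this.

```python
from collections import defaultdict

# Hypothetical posts: (account, timestamp_seconds, text).
posts = [
    ("user_01", 100, "You should buy $COIN now!!"),
    ("user_02", 104, "you should BUY $coin now"),
    ("user_03", 109, "You should buy $COIN now."),
    ("user_99", 500, "Had a nice walk today"),
]

def normalize(text):
    """Crude normalization: lowercase and keep only alphanumerics."""
    return "".join(ch for ch in text.lower() if ch.isalnum())

MIN_ACCOUNTS = 3   # distinct accounts posting the same message
MAX_SPREAD = 60    # within this many seconds

def coordinated_groups(posts):
    """Return groups of accounts posting the same message in a tight burst."""
    groups = defaultdict(list)
    for account, ts, text in posts:
        groups[normalize(text)].append((account, ts))

    flagged = []
    for _text, items in groups.items():
        accounts = {account for account, _ in items}
        times = [ts for _, ts in items]
        if len(accounts) >= MIN_ACCOUNTS and max(times) - min(times) <= MAX_SPREAD:
            flagged.append(sorted(accounts))
    return flagged

print(coordinated_groups(posts))  # [['user_01', 'user_02', 'user_03']]
```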

There are also political influence operations, in which activity is generated on social media around a politician. The politician appears to be very popular, but in reality it is just bot activity. Related to this is another phenomenon: the purchase of online accounts to spread disinformation. Accounts are worth more if they are 'older' and have more activity, so a lot of bot activity takes place not for any specific purpose but simply to give an account legitimacy. For example, on Reddit you see many bot accounts posting fake AI-generated content, not because anyone really cares about the content, but to earn the account more positive ratings, more 'karma', and make it look legitimate, so that it can then be sold or used for other purposes.
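The economics of this 'account aging' can be captured in a toy scoring rule: a buyer values an account roughly in proportion to its age and accumulated activity. The formula and weights below are invented purely to illustrate why karma farming pays; they do not come from the research.

```python
# Toy valuation: age and "karma" both raise an account's apparent legitimacy.
def account_value_score(age_days: int, karma: int) -> float:
    """Higher score = more legitimate-looking, hence more valuable to a buyer."""
    return 0.5 * (age_days / 365) + 0.5 * min(karma, 10_000) / 10_000

print(account_value_score(age_days=30, karma=50))      # fresh account: ~0.04
print(account_value_score(age_days=1460, karma=8000))  # aged, active: ~2.4
```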

And this happens on all social media?

Indiscriminately, everywhere, although there are variations in supply and demand. For really obscure channels the market works differently than for Facebook, X, Reddit or TikTok, for example.

Jon Roozenbeek, researcher at the University of Cambridge, author of The Psychology of Misinformation (Cambridge University Press).

Who is behind this 'industry'?

It is really a whole industry in which you can buy fake assets. And this industry is not illegal at all, just a bit questionable: in almost no country would you be arrested for running such a website. It is a grey market, not a black market. What they sell is, for example, account verification. Say you want to open a Facebook account: you need an email address and a phone number. If you are in Italy and want to register an account in Austria, you need an Austrian phone number, otherwise the registration won't work: the platforms have security measures that prevent it. But you can buy this service: you can go to a website and buy a Facebook account in Austria almost for free; it costs one euro cent, no more. This sector is widely used by all the actors I just mentioned, such as scammers and political operatives, to buy large numbers of accounts, which together become networks of bots programmed for various purposes.

Do we know where those offering these services are based?

Most are in Russia, some in other countries such as China and Belarus. It is not clear how they work, whether they are interconnected, whether they are state-run, whether they are secret-service operations. It is possible; we don't know. We can look at their websites, and the reason we think they are Russian is that the English version of the site is grammatically incorrect, whereas the Russian version is perfectly correct. You can also pay through Russian payment service providers, in roubles, and so on. So we think they are Russian, but they might not be: they might operate somewhere else and simply target the Russian market. That is a possibility, although my suspicion is that they are indeed Russian.

Are there any estimates of how many people could be employed in this sector?

The scale of the phenomenon is one of the things we are trying to understand. We are talking about thousands, tens of thousands. We have identified 17 suppliers, but there could be more. It is possible that we have not found them all, but it is also possible that some of them are not actually distinct suppliers: several look like clones of each other, listing exactly the same prices every day, which suggests they use the same data.
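The clone-detection logic Roozenbeek describes can be sketched very simply: if two supplier sites publish identical price lists day after day, they probably share a back end. The sketch below assumes hypothetical scraped price lists; supplier names and prices are invented, and in practice one would compare across many days rather than a single snapshot.

```python
from collections import defaultdict

# Hypothetical daily price lists scraped from supplier sites (euros per account).
daily_prices = {
    "supplier_a": {"facebook_at": 0.01, "reddit_aged": 4.50, "x_verified": 1.20},
    "supplier_b": {"facebook_at": 0.01, "reddit_aged": 4.50, "x_verified": 1.20},
    "supplier_c": {"facebook_at": 0.02, "reddit_aged": 3.90, "x_verified": 1.10},
}

def clone_clusters(daily_prices):
    """Group suppliers whose full price lists are identical on a given day."""
    clusters = defaultdict(list)
    for supplier, prices in daily_prices.items():
        # A hashable signature of the whole price list.
        signature = tuple(sorted(prices.items()))
        clusters[signature].append(supplier)
    # Only clusters with more than one supplier are clone candidates.
    return [sorted(names) for names in clusters.values() if len(names) > 1]

print(clone_clusters(daily_prices))  # [['supplier_a', 'supplier_b']]
```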

My initial idea was that this was a very well-known phenomenon, but you have just said that it is actually difficult to study and to get a precise idea of how it works, who is behind it, and how extensive it is. Why is data on this phenomenon so hard to come by: for technical reasons, or for some other reason?

The problem of bots is fairly well studied. There are studies going back several years that try to estimate the percentage of bots on Twitter, for example, or the use of bots in disinformation campaigns. These things are fairly well known. What is not well studied is the market that sells them. It is not entirely unknown, but for the moment there are no major studies on it.

Do you think it is useful or necessary to make the public aware of the existence and functioning of this phenomenon?

Well, I approach it as a regulatory issue. Obviously, if people are interested, all the better. But I'm not so sure that public education helps, because it is really hard to tell whether you are talking to a bot online, or reading things produced by a bot. I think you have to look at it mainly from a cybersecurity perspective. Looking at it from a market perspective, it makes sense to ask whether we should regulate this market, and if so, how. As a society, do we decide that this market is acceptable, or that we would rather disincentivise it as much as possible? But that is not for me to decide; it is up to regulators and legislators to express an opinion. And to express an opinion, we obviously have to understand it first.

Chiara Palmerini
