By Kai-Cheng Yang and Filippo Menczer
External researchers do not have access to the same data as Twitter, such as IP addresses and phone numbers. This hinders the public’s ability to identify inauthentic accounts. But even Twitter acknowledges that the actual number of inauthentic accounts could be higher than it has estimated, because detection is challenging.
Inauthentic accounts evolve and develop new tactics to evade detection. For example, some fake accounts use AI-generated faces as their profile pictures. These faces can be indistinguishable from real ones, even to humans. Identifying such accounts is hard and requires new technologies.
Another difficulty is posed by coordinated accounts that appear to be normal individually but act so similarly to each other that they are almost certainly controlled by a single entity. Yet they are like needles in the haystack of hundreds of millions of daily tweets.
Finally, inauthentic accounts can evade detection by techniques like swapping handles or automatically posting and deleting large volumes of content.
The distinction between inauthentic and genuine accounts is increasingly blurry. Accounts can be hacked, bought or rented, and some users “donate” their credentials to organizations that post on their behalf. As a result, so-called “cyborg” accounts are controlled by both algorithms and humans. Similarly, spammers sometimes post legitimate content to obscure their activity.
We have observed a broad spectrum of behaviors mixing the characteristics of bots and people. Estimating the prevalence of inauthentic accounts requires applying a simplistic binary classification to every account: authentic or inauthentic. No matter where the line is drawn, mistakes are inevitable.
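The point can be made concrete with a toy calculation. The sketch below uses made-up bot scores and arbitrary cutoff values, not real Twitter data or any detection tool's actual output; it only shows how an estimated prevalence swings depending on where the binary line is drawn along a continuous spectrum of behavior.

```python
import numpy as np

# Illustrative only: hypothetical "bot scores" between 0 (human-like)
# and 1 (bot-like) for a sample of accounts. In practice such scores
# would come from a detection tool; these are simulated placeholders.
rng = np.random.default_rng(42)
scores = rng.beta(2, 5, size=10_000)  # skewed toward human-like behavior

# The estimated prevalence of "inauthentic" accounts depends entirely
# on where the binary cutoff is placed on this continuous spectrum.
for threshold in (0.3, 0.5, 0.7):
    prevalence = (scores >= threshold).mean()
    print(f"threshold {threshold:.1f} -> estimated prevalence {prevalence:.1%}")
```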
Missing the big picture
The recent debate's focus on estimating the number of Twitter bots oversimplifies the issue and misses the point: what matters is quantifying the harm of online abuse and manipulation by inauthentic accounts.
Through BotAmp, a new tool from the Botometer family that anyone with a Twitter account can use, we have found that the presence of automated activity is not evenly distributed. For instance, the discussion about cryptocurrencies tends to show more bot activity than the discussion about cats. Therefore, whether the overall prevalence is 5% or 20% makes little difference to individual users; their experiences with these accounts depend on whom they follow and the topics they care about.
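To illustrate the kind of topic-level comparison BotAmp makes possible, here is a rough sketch. The scores below are invented placeholders, not BotAmp's actual code or data; in practice, each sampled account would be scored by a tool such as BotometerLite before the two streams are compared.

```python
import statistics

# Hypothetical per-account bot scores for tweets matched by two queries.
# These numbers are made up for illustration only.
crypto_scores = [0.81, 0.74, 0.62, 0.90, 0.55, 0.78]  # "cryptocurrency" stream
cat_scores = [0.12, 0.20, 0.08, 0.33, 0.15, 0.10]     # "cats" stream

def summarize(label, scores):
    """Report the median bot score for a sampled topic stream."""
    print(f"{label}: median bot score {statistics.median(scores):.2f} "
          f"across {len(scores)} sampled accounts")

summarize("cryptocurrency tweets", crypto_scores)
summarize("cat tweets", cat_scores)
```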
Recent evidence suggests that inauthentic accounts might not be the only culprits responsible for the spread of misinformation, hate speech, polarization and radicalization. These issues typically involve many human users. For instance, our analysis shows that misinformation about COVID-19 was disseminated overtly on both Twitter and Facebook by verified, high-profile accounts.
Even if it were possible to precisely estimate the prevalence of inauthentic accounts, this would do little to solve these problems. A meaningful first step would be to acknowledge the complex nature of these issues. This will help social media platforms and policy makers develop effective responses.
Kai-Cheng Yang is a doctoral student in informatics at Indiana University. Filippo Menczer is professor of informatics and computer science at Indiana University.
This commentary was originally published by The Conversation as "How many bots are on Twitter? The question is difficult to answer and misses the point."