Snopes Tips: How to Spot Social Media Bots

Social media bots are fake accounts that use automated or semi-automated programming to infiltrate platforms and shape human behavior online.

Using artificial intelligence to both mimic and manipulate human behavior, bots are wreaking havoc on various social media platforms. They include Twitter, where Elon Musk has called for more transparency in a high-profile data dispute, and Facebook, where bot posts have influenced people’s perception of politics (this type of conduct is also called coordinated inauthentic behavior).

Why are social media bots used?

Groups or individuals create social media bots for several purposes. Companies can sell the fake accounts to other users for money, and political groups can use them to share content aimed at polarizing and trolling viewers.

Broadly, there are two types of bots: automated bots, like those that automatically retweet a specific hashtag each time it’s posted, and semi-automated bots, which are usually fake accounts run by humans. Both types can be used to spread hate speech and propaganda, influence public opinion, and sell goods or services.

Some bots are programmed to increase engagement or follower counts, while others are intended to stir up insidious talk and incite harmful actions. Either way, social media bots can be a serious problem, especially since social media users are often unable to tell them apart from accounts run by real people.

A peer-reviewed study from Stony Brook University published in 2021 analyzed more than 3 million tweets written by 3,000 bot accounts and compared the language of those tweets with that of 3,000 genuine accounts. When the researchers looked at the bot accounts individually, they appeared to be operated by humans. But, when the researchers analyzed the accounts as a whole, they realized that the accounts were apparently clones of each other.

In recent years, the use of bots has increased, and cybersecurity experts have tried to sound the alarm about the threat they pose to our digital ecosystem. The European Commission launched its Action Plan against Disinformation in 2018 specifically to address social media bots as a technique to “disseminate and amplify divisive content and debates on social media” and spread misinformation and disinformation. Similarly, the US Department of Homeland Security (DHS) has launched efforts to combat misinformation on social media, including guidance for identifying bot accounts.

Why identify bots?

Identifying social media bots is important not only to prevent the spread of false information; routinely removing bot followers from your profile can also improve your account’s ranking on a platform.

With few or no bot followers, your profile content is more likely to appear at the top of feeds, earning you likes, retweets, shares, or comments (i.e., “engagement”), depending on each site’s algorithms.

In other words, while removing bot accounts from your follower lists may reduce your total follower count, doing so helps ensure that those who do follow you are human and interact with your content in a meaningful way.

What are common bot behaviors?

The DHS Office of Cyber and Infrastructure Analysis has described common methods by which social media bots influence or interact with humans online – also known as “attacks” – including:

  • Click or like farming. This is when bots inflate the notoriety or popularity of a website by liking or reposting content. These types of bots also let people buy fake likes and followers to boost their own accounts.
  • Hashtag hijacking. This method hijacks a popular hashtag to focus an attack on the audience following that hashtag.
  • Repost storms. This happens when a parent social media bot account launches an attack, and a group of bots then instantly reposts that attacking post.
  • Sleeper bots. These are bots that sit idle and then wake up to publish a series of posts or retweets over a short period of time.
  • Trend-jacking. This method uses trending topics to focus an attack on a targeted audience.

If you encounter any of the patterns mentioned above, we recommend that you report the suspicious activity to the administrators of the hosting social media platform.

What do bot accounts look like?

Spotting a bot can be a tedious task.

You can try a bot detection tool like Botometer, which describes itself as a “machine learning algorithm trained to calculate a score where low scores indicate likely human accounts and high scores indicate likely bot accounts.” But there are limits to these types of services; some things only a human eye can catch.

Here are some questions the Snopes Engagement team considers when triaging our Instagram, Twitter, and Facebook follower lists and removing bot accounts:

  • Does the account have a profile picture? Sometimes a bot account will not have one. Spotting an account without a profile photo is often our first indication that the account may be inauthentic.
  • Does the account use a generic username? Bot accounts often have a generic username that was probably created as part of an automated system. These usernames often combine common names and end with a string of numbers.
  • How many followers does the account have? Bot accounts tend to have lower follower counts than genuine accounts, often only a few dozen.
  • What are the account’s privacy settings? Some bot accounts use high privacy settings, allowing limited or no public access to their profile.
  • How many posts have they shared? On Instagram, for example, bot accounts often have only a few grid images.
  • What is the quality of the shared content? On Twitter, for example, bot accounts can be programmed to automatically share posts with a certain hashtag. If an account appears to share only one type of content, we’re suspicious.
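A checklist like this can also be sketched as a simple scoring heuristic. The sketch below is an illustration only: the field names, the username regex, and the thresholds are assumptions for the example, not a real platform API or the Snopes team's actual process.

```python
import re

def bot_suspicion_score(account):
    """Score an account against the checklist above. Each heuristic adds
    one point; higher totals mean more bot-like. Field names and
    thresholds are illustrative assumptions, not a real platform API.
    """
    score = 0
    if not account.get("has_profile_picture", True):
        score += 1                                    # no profile photo
    # Generic username: letters followed by a long run of digits
    if re.fullmatch(r"[A-Za-z]+\d{4,}", account.get("username", "")):
        score += 1
    if account.get("followers", 0) < 50:              # only a few dozen followers
        score += 1
    if account.get("is_private", False):              # locked-down profile
        score += 1
    if account.get("post_count", 0) < 5:              # only a few posts
        score += 1
    return score
```

No single signal is conclusive; an account that trips several of these checks at once is what warrants a closer look.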

This guide only scratches the surface of bot behavior. For more information on bot identification and coordinated inauthentic behavior, see Snopes’ Media Literacy Collection.

This page is part of an ongoing effort by the Snopes Newsroom to teach the public the ins and outs of online fact-checking and, therefore, build people’s media skills. Misinformation is everyone’s problem. The more we can all get involved, the better we can fight it. Have a question about how we do what we do? Let us know.


Sources

“A collection of tips for fighting online misinformation like a pro.” Snopes.Com, https://www.snopes.com/collections/international-fact-checking/. Accessed July 23, 2022.

Botometer by OSoMe. https://botometer.iuni.iu.edu. Accessed July 23, 2022.

DHS Coordination Efforts to Combat Social Media Misinformation | Office of the Inspector General. https://www.oig.dhs.gov/node/6297. Accessed July 23, 2022.

“In Reversal, Twitter plans to comply with Musk’s data requests.” Washington Post. www.washingtonpost.com, https://www.washingtonpost.com/technology/2022/06/08/elon-musk-twitter-bot-data/. Accessed July 23, 2022.

Martini, Franziska, et al. “Bot, or Not? Comparing Three Methods for Detecting Social Bots in Five Political Discourses.” Big Data & Society, vol. 8, no. 2, July 2021. DOI.org (Crossref), https://doi.org/10.1177/20539517211033566.

“Snopes Tips: How to Detect Coordinated Inauthentic Behavior.” Snopes.Com, https://www.snopes.com/articles/385721/coordinated-inauthentic-behavior-2/. Accessed July 23, 2022.

“Snopestionary: Misinformation vs. Disinformation.” Snopes.Com, https://www.snopes.com/articles/386830/misinformation-vs-disinformation/. Accessed July 23, 2022.

“Snopestionary: What is ‘Coordinated Inauthentic Behavior’?” Snopes.Com, https://www.snopes.com/articles/366947/coordinated-inauthentic-behavior/. Accessed July 23, 2022.

Social media “bots” tried to influence the US election. Germany could be next. https://www.science.org/content/article/social-media-bots-tried-influence-us-election-germany-may-be-next. Accessed July 23, 2022.

“Study Suggests New Strategy for Detecting Social Bots |.” SBU News, November 30, 2021, https://news.stonybrook.edu/homespotlight/study-suggests-new-strategy-to-detect-social-bots/.

Uyheng, Joshua, and Kathleen M. Carley. “Bots and Online Hate During the COVID-19 Pandemic: Case Studies from the United States and the Philippines.” Journal of Computational Social Sciences, vol. 3, no. 2, 2020, p. 445–68. PubMed Central, https://doi.org/10.1007/s42001-020-00087-4.
