Research and Innovation

Miami University professor directs study focusing on recent increases in fake accounts and bots on Twitter

Philippe Giabbanelli’s study focused on the popular issue of bots on Twitter and whether or not their presence had any effect on recent political events

Philippe Giabbanelli and Benton Hall, home of the Department of Computer Science and Software Engineering

By Gabby Benedict, CEC Student Reporter

Recently, Elon Musk backed out of his deal to buy Twitter, drawing more attention to the long-standing problem of fake accounts and bots on the social networking site. Bots are software applications that run automated tasks over the Internet, often with the intent of mimicking human activity.

Philippe Giabbanelli, associate professor of Computer Science and Software Engineering at Miami University, recently directed a study focusing on these bots and whether or not they had effects on political events in the United States. The study, “(Re)shaping online narratives: when bots promote the message of President Trump during his first impeachment,” was published in PeerJ Computer Science in April 2022.

Giabbanelli’s research uses models from machine learning and simulation to examine human behaviors. This study originated as a course project in his machine learning class, with one of his former students, Michael Galgoczy ’21, serving as a lead author.

“It is an exciting challenge to do research within a class and bring it up to the level where it can be published in a Q1 journal. The class needs to teach the foundations of a certain field, but students also need advanced knowledge so they can reflect state-of-the-art practices on their specific projects. It's the classic role of a teacher-scholar to navigate such challenges in guiding students to accomplish quality work,” Giabbanelli said.

The project examined the role of bots in former president Donald Trump’s first impeachment by asking three questions: were bots involved in the impeachment debate; did the bots target one political affiliation more than the other; and which sources did the bots use to support their arguments?

Giabbanelli and his team, which included Galgoczy, Lakehead University's Atharva Phatak and Vijay Mago, and Furman University's Danielle Vinson, collected over 13 million tweets on six key dates between Oct. 6, 2019, and Jan. 21, 2020. They used machine learning to evaluate whether each tweet originated from a bot, and assessed each tweet's sentiment via BERT, a transformer-based machine learning technique for natural language processing.
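To give a feel for what automated bot detection involves, here is a minimal, purely illustrative sketch of a rule-based bot scorer. This is not the machine-learning classifier used in the study; the feature names, weights, and thresholds below are all hypothetical, chosen only to show how account-level signals might be combined into a bot-likelihood score.

```python
# Illustrative sketch only: a toy rule-based bot scorer, NOT the
# classifier used in the study. All features and thresholds are
# hypothetical.

def bot_score(account):
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    # Bots often post at superhuman rates.
    if account["tweets_per_day"] > 100:
        score += 0.4
    # Default profile images are common among throwaway accounts.
    if account["default_profile_image"]:
        score += 0.3
    # Following far more accounts than follow back is a weak signal.
    if account["following"] > 10 * max(account["followers"], 1):
        score += 0.3
    return score

def is_likely_bot(account, threshold=0.5):
    return bot_score(account) >= threshold

suspicious = {"tweets_per_day": 400, "default_profile_image": True,
              "followers": 3, "following": 2000}
typical = {"tweets_per_day": 5, "default_profile_image": False,
           "followers": 300, "following": 280}

print(is_likely_bot(suspicious))  # True
print(is_likely_bot(typical))     # False
```

Real systems replace these hand-set rules with models trained on labeled accounts, and, as Giabbanelli notes later in the article, increasingly look at coordinated behavior across groups of accounts rather than scoring accounts one at a time.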

The team’s first finding was that bots played a significant role in contributing to the overall negative tone of the impeachment debate. They found that bots targeted Democrats more than Republicans, and that the sources bots provided were almost twice as likely to come from the right as from the left, most of them extreme right-leaning sources.

Based on these findings, the team concluded that bots were deliberately used to promote a misleading version of events. This points to an intentional strategy and further confirms that computational propaganda on Twitter helped shape recent political events in the United States.

“It’s important to be clear that social bots are not all malicious entities that operate secretly and must be removed,” Giabbanelli said. “There are social bots that operate officially and serve a useful purpose, such as aggregate news feeds.”

The team's research, however, focused on finding undisclosed social bots with malicious intents. These bots illustrate a classic cycle in cybersecurity: developers quickly create smarter bots, so security experts are constantly challenged to keep up with rapidly evolving technology, according to Giabbanelli.

“As bots strive to resemble normal users of Twitter, we now focus our detection efforts on coordinated, yet inauthentic, behaviors to find groups of malicious accounts instead of individual bots. It's a difficult effort, as recent studies have shown that even humans mostly failed at detecting recent bots, with a 24% success rate,” Giabbanelli said.

“Lately, bots have been found to be promoting conspiracy theories about COVID-19 and have actively opposed vaccination,” said Giabbanelli, hence “a lack of proper defenses against bots in the future could affect our political life, public health, and other aspects that are necessary for the strength of our institutions.”