YouTube algorithm pushed voter fraud allegations to Trump supporters, report says

For years, researchers have suggested that the algorithms feeding users content are not the cause of online echo chambers, which are more likely driven by users actively seeking out content that matches their beliefs. This week, researchers at New York University’s Center for Social Media and Politics released the results of a YouTube experiment conducted just as allegations of voter fraud were being raised in the fall of 2020. They say their findings provide an important caveat to that earlier research, offering evidence that in 2020, YouTube’s algorithm was responsible for “disproportionately” recommending voter fraud content to users who were more “skeptical of the legitimacy of the election to begin with.”

Study co-author James Bisbee, a political scientist at Vanderbilt University, told The Verge that while participants were recommended a relatively low number of election denial videos (a maximum of 12 out of the hundreds of videos participants clicked), the algorithm served roughly three times as many of them to people predisposed to believe the conspiracy as to people who were not. “The more sensitive you are to these types of stories about the election…the more content about that story would be recommended to you,” Bisbee said.

YouTube spokeswoman Elena Hernandez told Ars that the Bisbee team’s report “does not accurately represent how our systems work.” Hernandez said that “YouTube does not allow or recommend videos that make false claims that widespread fraud, error, or trouble occurred in the 2020 U.S. Presidential Election” and that the most viewed and recommended election-related videos and channels on YouTube come from authoritative sources, such as news channels.

Bisbee’s team directly states in their report that they did not attempt to solve the riddle of how YouTube’s recommendation system works:

“Without access to YouTube’s trade secret algorithm, we cannot say with certainty that the recommendation system infers a user’s appetite for voter fraud content using their past watch histories, demographics, or a combination of both. For the purposes of our contribution, we treat the algorithm as the black box that it is, and we simply ask whether it will disproportionately recommend voter fraud content to users who are more skeptical of the legitimacy of the election.”

To conduct the experiment, Bisbee’s team recruited hundreds of YouTube users and recreated the recommendation experience by asking each participant to complete the study while logged into their own YouTube account. As participants clicked through recommendations, the researchers recorded any recommended content flagged as supporting, refuting, or neutrally reporting Trump’s voter fraud allegations. After they finished watching videos, participants filled out a lengthy survey sharing their views on the 2020 election.
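To make the logic concrete, the sketch below shows, in purely illustrative terms, how recommendations labeled this way might be tallied per participant and compared across survey groups. It is not the researchers’ analysis code; the record format, field names, and the fraud_recommendation_rates helper are assumptions made for illustration only.

    # Illustrative sketch only; this is not the researchers' analysis code.
    # It assumes each recorded recommendation has already been labeled as
    # "supporting", "refuting", or "neutral" toward the voter fraud claims,
    # and that each participant's survey answers include a skepticism flag.
    from collections import defaultdict
    from statistics import mean

    def fraud_recommendation_rates(records):
        """records: dicts like {"participant": "p01", "skeptical": True, "label": "supporting"}.
        Returns the mean number of 'supporting' recommendations per participant,
        split by whether the participant was skeptical of the election outcome."""
        counts = defaultdict(int)
        skeptical_by_participant = {}
        for r in records:
            skeptical_by_participant[r["participant"]] = r["skeptical"]
            if r["label"] == "supporting":
                counts[r["participant"]] += 1
        groups = {True: [], False: []}
        for pid, is_skeptical in skeptical_by_participant.items():
            groups[is_skeptical].append(counts[pid])
        return {group: (mean(vals) if vals else 0.0) for group, vals in groups.items()}

Comparing the two group averages returned by a helper like this is one simple way to express the “three times as many” style of comparison the study reports, though the researchers’ actual methodology is described in their paper.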

Bisbee told Ars that “the purpose of our study was not to measure, describe, or reverse-engineer the inner workings of the YouTube algorithm, but rather to describe a systematic difference in the content it recommended to users who were more or less concerned about election fraud.” The study’s sole aim was to analyze the content served to users to test whether online recommendation systems contribute to a “polarized information environment.”

“We can show this model without reverse-engineering the black box algorithm they use,” Bisbee told Ars. “We just watched what real people were shown.”

YouTube recommendation system test

Bisbee’s team noted that because YouTube’s algorithm relies on watch histories and subscriptions, recommended content that aligns with user interests is, in most cases, a positive experience. But given the fraught circumstances following the 2020 election, the researchers hypothesized that the recommendation system would naturally feed more voter fraud content to users who were already skeptical of Joe Biden’s victory.

To test the hypothesis, the researchers “carefully monitored the behavior of real YouTube users while on the platform.” Participants logged into their accounts and downloaded a browser extension that captured data about the videos recommended to them. They then clicked through 20 recommendations, following a path specified by the researchers, such as only clicking the second recommended video from the top. Each participant started from a randomly assigned “seed” video (political or non-political) to ensure that the first video they watched did not reflect prior preferences the algorithm might already have recognized.
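In pseudocode terms, such a rule-following traversal could be simulated along the lines of the sketch below. This is a hedged illustration rather than the study’s actual browser extension: the get_recommendations helper, the 0-based rank_to_click parameter, and the record format are all hypothetical.

    # Illustrative sketch, not the study's browser extension.
    # get_recommendations is a hypothetical callable that returns the ordered
    # list of video IDs recommended alongside the currently watched video.
    def follow_traversal_rule(seed_video_id, rank_to_click, get_recommendations, steps=20):
        """Simulate a participant who always clicks the recommendation at position
        rank_to_click (0-based), starting from a seed video, and record every
        recommendation list shown along the way."""
        observed = []
        current = seed_video_id
        for _ in range(steps):
            recs = get_recommendations(current)
            observed.append({"watched": current, "recommended": list(recs)})
            if len(recs) <= rank_to_click:
                break  # not enough recommendations to continue the specified path
            current = recs[rank_to_click]  # e.g. always the second from the top
        return observed

Fixing the click rule in advance is what lets the researchers attribute differences in the recommendations observed to the algorithm rather than to each participant’s in-session choices.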

There were many limitations to this study, which the researchers described in detail. Perhaps most importantly, the participants were not representative of typical YouTube users: the majority were young, college-educated Democrats who watched YouTube on devices running Windows. The researchers suggest that the recommended content might have differed if more participants had been conservative or Republican-leaning, and therefore presumably more likely to believe in voter fraud.

There was also a complication caused by YouTube removing voter fraud videos from the platform in December 2020: the researchers lost access to some of the videos recommended to participants, which could no longer be assessed, though they described the number as insignificant.

Bisbee’s team characterized the report’s main finding as preliminary evidence of a behavioral pattern in YouTube’s algorithm, not a true measure of how widely misinformation spread on YouTube in 2020.
