Islamist extremists bypass police over Facebook content – POLITICO

Photos of beheadings, extremist propaganda and violent hate speech linked to ISIS and the Taliban circulated for months in Facebook groups over the past year, despite claims from the social media giant that it had intensified its efforts to remove such content.

Posts – some tagged as ‘insightful’ and ‘engaging’ via new Facebook tools designed to promote community interaction – championed violence by Islamic extremists in Iraq and Afghanistan, including videos of suicide bombings and calls to attack rivals in the region and in the West, according to a review of social media activity between April and December. At least one of the groups had more than 100,000 members.

In several Facebook groups, competing Sunni and Shiite militias trolled each other by posting pornographic and other obscene photos to rival groups in the hope that Facebook would suppress these communities.

In others, ISIS supporters openly shared links to websites hosting troves of terrorist propaganda, while pro-Taliban Facebook users posted regular updates on how the group took control of Afghanistan over much of 2021, according to POLITICO’s analysis.

During this period, Facebook said it had invested heavily in artificial intelligence tools to automatically remove extremist content and hate speech in more than 50 languages. Since the start of 2021, the company told POLITICO, it had added more Pashto and Dari speakers – the main languages spoken in Afghanistan – but it declined to provide figures for the staff increases.

Yet the dozens of Islamic State and Taliban posts still on the platform show that these efforts have failed to stop extremists from exploiting it. Internal documents, made public three months ago by Facebook whistleblower Frances Haugen, show company researchers had warned that Facebook failed to consistently protect its users in some of the world’s most volatile countries, including Syria, Afghanistan and Iraq.

“It’s too easy for me to find this stuff online,” said Mustafa Ayad, executive director for Africa, the Middle East and Asia at the Institute for Strategic Dialogue, a think tank that tracks extremism online, which uncovered the extremist groups on Facebook and shared its findings with POLITICO. “What happens in real life happens in the Facebook world.”

Many countries in the Middle East and Central Asia are torn by sectarian violence, and Islamist extremists have turned to Facebook as a weapon to promote their hate agenda and rally supporters to their cause. Hundreds of such groups, ranging in size from a few hundred members to tens of thousands of users, have mushroomed on the platform – in Arabic, Pashto and Dari – over the past 18 months.

When POLITICO reported open Facebook groups promoting Islamic extremist content to Meta, Facebook’s parent company, it removed them, including a pro-Taliban group that had been formed in the spring and had 107,000 members.

Yet within hours of that removal, a separate Islamic State support group reappeared on Facebook and again began posting messages and images in support of the banned extremist organization, in direct violation of Facebook’s terms of service. These groups were eventually removed after they, too, were reported.

“We recognize that our app isn’t always perfect, which is why we’re looking at a range of options to address these challenges,” Ben Walters, a spokesperson for Meta, said in a statement.

An unresolved problem

Much of the Islamic extremist content targeting these war-torn countries has been written in local languages – an issue researchers also flagged in internal documents made public by Haugen, who submitted them as disclosures to the Securities and Exchange Commission and provided them to the United States Congress. POLITICO and a media consortium reviewed the documents.

At the end of 2020, for example, Facebook engineers found that only 6 percent of Arabic-language hate speech was detected on Instagram, Meta’s photo-sharing service, before it was posted online. That compares with a 40 percent takedown rate for similar material on Facebook.

In Afghanistan, where around 5 million people log on to the platform each month, the company had few local-language speakers to moderate content, according to a separate internal document dated December 17, 2020. Because of this lack of local staff, less than 1 percent of hate speech was removed.

“There is a huge gap in the hate speech reporting process in local languages in terms of both accuracy and completeness of translation across the entire reporting process,” the Facebook researchers concluded.

Yet a year after those findings, pro-Taliban content still appears regularly on the platform.

In the now-deleted open Facebook group of around 107,000 members reviewed by POLITICO, dozens of graphic videos and photos, with posts written in local languages, had been uploaded through much of 2021 in support of the Islamist group, which remains officially banned from the platform because of its international designation as a terrorist organization.

These included footage of Taliban fighters attacking forces loyal to the now-ousted Afghan government, while other pro-Taliban users praised such violence in comments that escaped moderation.

“There is clearly a problem here,” said Adam Hadley, director of Tech Against Terrorism, a nonprofit that works with smaller social networks, though not Facebook, to fight the rise of extremist content online.

He added that he was not surprised the social network had difficulty detecting extremist content, because its automated content filters were not sophisticated enough to flag hate speech in Arabic, Pashto or Dari.

“When it comes to non-English-language content, there simply aren’t enough machine-learning algorithm resources focused on combating this,” he added.

Battle between cyber-armies

Much of the recent Facebook group activity has centered on digital battles between rival Sunni and Shia militias in Iraq – a country that continues to suffer widespread sectarian violence, which has migrated onto the world’s largest social network.

That follows separate internal Facebook documents from late 2020, which raised concerns that so-called “cyber armies” of rival Sunni and Shia groups were using the platform in Iraq to attack each other online.

In several Facebook groups over at least the past 90 days, these battles have unfolded almost in real time, as Iranian-backed extremists and Islamic State supporters pelted each other with sexually explicit images and other graphic content, according to Ayad, the researcher at the Institute for Strategic Dialogue.

In one group, which included activists on both sides of the fight, Iraqi Shiite militants goaded ISIS rivals with photos of scantily clad women and sectarian slurs, while in the same Facebook group, ISIS supporters posted derogatory memes attacking local rivals.

“It’s basically trolling,” Ayad said. “It riles up group members and may get a moderator to take notice, but the groups themselves are often not removed. This is what happens when there is a lack of content moderation.”
