Combating hate speech on social networks


Disinformation, hatred and emotional exploitation have long been the preferred tools of instigators, demagogues and cults. Recently leaked Facebook documents indicate that social media has become an important channel for these tools.

The Facebook Papers, reviewed by more than a dozen news agencies, point to a systemic problem at both the political and technical levels. Frances Haugen, the whistleblower who leaked the documents and testified before the US Congress and the UK Parliament, says: “The company (Facebook) was aware of the spread of disinformation and hateful content around the world, from Vietnam and Myanmar to India and the United States.”

This digital spread of disinformation and hate is also a reality in Pakistan. As of January 2021, Pakistan had 46 million social media users, equivalent to 20.6% of its total population.

With such a tempting and growing market, Facebook appears to be laser-focused on growth and profits rather than on regulating user-generated content, the real source of its revenue. A multi-country study by the Next Billion Network reveals that Facebook has rejected “a large number of legitimate complaints, including death threats, in Pakistan, Myanmar and India.”

An audit written by a Facebook employee in June 2020 pointed to a “massive gap” in the coverage of “countries at risk” and noted that Facebook’s algorithms could not analyze the local languages of Pakistan, Myanmar and Ethiopia.

Facebook’s community standards have not even been translated into Urdu, Pakistan’s national language. As a result, its rules on disinformation and hateful content can hardly be applied to what is disseminated on social media in Pakistan.

Like many global trends, major social media networks have changed the way people communicate and converse, shifting how they perceive and think. For the average user, credible news is not what mainstream news organizations report, but updates from a friend or someone in their network. Whether it is a successful product, business, service or a deceptively crafted piece of propaganda on social media, many people accept it without critical review.

In view of the above, it is now both essential and possible to connect the dots as to why fake news and hate speech spread so quickly. Researchers, including those at the Center for Information Technology and Society (CITS), and studies indexed by the National Center for Biotechnology Information (NCBI), conclude that a mix of trolls, bots and ordinary users combines to accelerate the spread of bogus and hateful content.

Trolls are individuals who maintain social media accounts for one purpose: to post content that incites people, insults others, including public figures, and promotes damaging and inflammatory ideas. They are the “masters of disaster” of social platforms when it comes to spreading hatred and fake news.

Bots are software programs designed to simulate human behavior on social platforms and to interact repeatedly with other social media users. CITS estimates that there were around 190 million bots on Facebook, Twitter and Instagram in 2018.

Ordinary users are those who use Facebook and other social media frequently. Together with trolls and bots, they become carriers of false news and hate messages. Many everyday users believe that what they read or watch repeatedly must be true, and so they pass it on to others in their networks.

This pattern holds true in Pakistan for political, social, ethnic and other controversial issues. The Pakistani non-governmental organization Bytes for All has been monitoring hate speech on social media since September 2019 and has recorded numerous such incidents.

Over the years, Facebook, Twitter and other big tech companies have taken steps to “strengthen security and suppress viral hate speech and disinformation.” These include expanding lists of derogatory terms in local languages, removing fake accounts and bots, and improving their “deep learning algorithms.”
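To make the nature of that work concrete, the sketch below illustrates, in highly simplified form, two of the measures described above: checking posts against a per-language list of derogatory terms, and flagging accounts that post at bot-like rates. The term lists, thresholds and field names are illustrative assumptions, not Facebook’s actual rules or code.

from dataclasses import dataclass

# Hypothetical per-language term lists; real moderation lists are far larger
# and curated by native speakers of each local language.
DEROGATORY_TERMS = {
    "ur": {"example_slur_1", "example_slur_2"},   # Urdu placeholder entries
    "en": {"example_slur_3"},
}

@dataclass
class Account:
    posts_last_24h: int
    account_age_days: int

def contains_derogatory_term(text: str, language: str) -> bool:
    # Flag a post if any listed term appears in it (simple word match).
    tokens = text.lower().split()
    return any(term in tokens for term in DEROGATORY_TERMS.get(language, set()))

def looks_like_bot(account: Account, max_daily_posts: int = 200) -> bool:
    # Heuristic (assumed threshold): very new accounts posting at
    # superhuman rates are treated as suspect.
    return account.posts_last_24h > max_daily_posts and account.account_age_days < 30

# Example usage with made-up data
post = "example_slur_3 directed at a public figure"
print(contains_derogatory_term(post, "en"))                              # True
print(looks_like_bot(Account(posts_last_24h=500, account_age_days=3)))   # True

Even this toy version shows why the audit’s “massive gap” matters: without a curated Urdu term list, the first check simply has nothing to match against.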

Regardless of the technical adjustments made by Facebook and other platforms, no excuse or explanation can make up for human lives lost to bogus or hateful content.

As I suggested earlier, governments, especially the US government, need to be proactive in exerting more control over social media. Strong action by the United States, the birthplace of Facebook and other social media and tech giants, will have a global impact.

In this region, governments such as those of Pakistan and India have been working on new rules for the social media giants. Some of these require the companies to maintain an office in Pakistan and to have a resident employee in India to cooperate with local authorities.

Facebook must also listen to the “recommendations” made by its own staff to protect minorities and vulnerable groups from hatred around the world. “Content quality” and “content accuracy” should be among the key criteria of the algorithms used to promote or recommend content, as they are on search engines, rather than the number of likes, dislikes, shares or comments.
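As a rough illustration of that suggestion, the sketch below ranks content primarily on quality and accuracy signals rather than on raw engagement counts. The signal names, scales and weights are assumptions chosen for clarity; they do not describe any platform’s real ranking formula.

def rank_score(quality: float, accuracy: float,
               likes: int, shares: int, comments: int) -> float:
    # quality and accuracy are assumed to be model- or reviewer-assigned
    # scores in [0, 1]; they dominate the final score.
    content_score = 0.6 * quality + 0.4 * accuracy
    # Engagement is dampened so that virality alone cannot dominate:
    # the ratio saturates toward 1 as engagement grows.
    engagement = likes + 2 * shares + comments
    engagement_score = engagement / (engagement + 1000)
    return 0.8 * content_score + 0.2 * engagement_score

# A well-sourced post with modest engagement outranks a viral, low-accuracy one.
print(rank_score(quality=0.9, accuracy=0.95, likes=50, shares=10, comments=5))        # ~0.75
print(rank_score(quality=0.2, accuracy=0.1, likes=50000, shares=20000, comments=8000)) # ~0.33

The design choice is the one the article argues for: engagement still counts, but only as a minor factor that cannot outweigh quality and accuracy.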

“If we foster an environment where everyone, regardless of religion, ethnicity or gender, can participate, we believe we can create harmony. This will put us on the path to sustainable development,” said Haroon Baloch, senior program director and digital rights researcher at the Pakistani organization Bytes for All.

This is good advice. It should be heeded and built into the algorithms used to combat hate speech on social media.

This will help make Pakistan and the world more collaborative and harmonious.

The writer is an entrepreneur, civic leader and thought leader based in Washington DC.


