Big Tech has committed to doing a better job of combating violence against women on the internet. History suggests it will fall short.

Seven years after Gamergate, Facebook, Google, Twitter and TikTok collectively pledged to end what an open letter called a “pandemic of online abuse against women and girls.” The pledge, made at the UN Women Generation Equality Forum in Paris, followed the publication of the letter by the World Wide Web Foundation, signed by more than 200 influential women, including politicians and celebrities.

“This is a positive and necessary step that these companies have taken and which should now be a stepping stone for these companies to tackle online abuse against women as a top priority,” Azmina Dhrodia, the WWWF’s senior policy advisor on gender and data rights, told Forbes. The commitment follows 14 months of virtual workshops sponsored by the WWWF, in which representatives from Facebook, Google, Twitter and TikTok participated. All four emerged with plans to give users better controls over who can view, share and comment on their content, as well as more effective ways to report abuse through new, more understandable tools.

However, none of the four giants has shared a timeline for implementation. “It’s a balance between being really ambitious about it, making sure it’s their top priority, while being realistic,” says Dhrodia.

Brianna Wu isn’t holding her breath. “If you don’t commit to measuring results, I don’t think you’re serious,” the video game developer told Forbes. Wu was a major target of Gamergate, the notorious organized online harassment campaign against women in the video game industry that brought gender-based abuse into public view. In 2014, anonymous users posted personal information about Wu, including her address, on an 8chan message board. Multiple death and rape threats, along with other harassment, including gruesome images sent by email, forced Wu to flee her home.

Now Wu demands more than words. “When I read a report that says Facebook is committed to helping women file more harassment reports, that sounds good,” she tells Forbes. “But are they allocating more budget for people to respond to it? Do they commit to measuring whether harassment is increasing or decreasing?”

Facebook, which posted on Wednesday about its commitment to advancing its Women’s Safety Hub but made no mention of the WWWF commitment, directed Forbes to its Community Standards Enforcement Report in response to a question about these measures. The report contains a section on bullying and harassment, with statistics on the content it has taken action against in such cases, but says nothing specifically about harassment of women, who are targeted more often than men. Indeed, the WWWF found that 38% of women and girls worldwide have experienced online abuse, and that rates are even higher among women from marginalized groups, particularly Black and LGBTQ communities.

Representatives from Twitter and TikTok stressed that safety is a priority, but did not respond to questions about how new protocols developed during the workshops would measurably reduce abuse against women on their platforms. Google also declined to discuss quantifiable results and instead directed Forbes to a Medium post from Jigsaw, the Google unit that tracks and analyzes emerging threats. The post recounts Google’s participation in the WWWF commitment and describes its own digital security program for women, which “aims to keep women safe around the world, by providing them with the skills to protect themselves from violence, both online and offline.”

“We always hope that the people facing the problem are not the ones charged with the solution,” says Elisa Lees Munoz, executive director of the International Women’s Media Foundation, which tracks online attacks against women journalists. She says the innovations described by the WWWF still place too much of the burden on victims of internet violence. “We have spoken with reporters who spent countless hours taking screenshots, trying to get in touch with the platform, reporting their attackers, trying to find them. These women have work to do.”

Past examples of uneven follow-through by the companies involved in the commitment do little to allay the fears of those who think this time will be no different. In 2011, Facebook responded to a petition to remove pages advocating violence against women by taking them off the site. It did not, however, condemn the systemic misogyny and gender-based violence on its platform. “Our experience has taught us that openness requires that people be free from harm, but also free to offend,” the social media company told the Toronto Star.

Two years later, Facebook responded to pressure from the activist group Everyday Sexism by describing in its public safety forum the steps it would take to tackle abuse on its platform. The plan included an expert-assisted review of its community standards, improved training for content moderators, accountability for creators of offensive or threatening content, better communication with civil society groups, and support for research by organizations like the Anti-Defamation League. “We have to do better – and we will,” Facebook promised eight years ago.

Likewise, in 2014, Twitter executives proclaimed they would take action after Zelda Williams, daughter of comedian Robin Williams, became the target of a brutal harassment campaign on the platform following her father’s death. “We will not tolerate abuse of this nature on Twitter … we are in the process of assessing how we can further improve our policies to better handle tragic situations like this,” wrote Del Harvey, vice president of Trust & Safety, in a statement the company’s Policy account tweeted that Thursday.

“It’s almost like a cycle,” says social media researcher Chloe Nurik of the platforms’ promises to tackle gender-based violence on the internet. “People bring attention to this, and the sites take initiatives that are useful but only for a limited time, then the problems persist, then there is media exposure, and then the site takes another sort of limited measure,” notes the doctoral candidate at the Annenberg School for Communication at the University of Pennsylvania. “This is one of my main concerns about the commitment – it just fits into this larger narrative of weak responses that go unverified over time.”

The financial incentive for social media giants to let inflammatory content thrive on their sites is also a problem, according to Nurik: studies have examined how Gamergate generated enough revenue from site traffic to pay Reddit’s bills for a month.

Conversely, creating and integrating new protections into every aspect of a social media platform will cost the companies millions, says Kat Lo, a researcher and consultant specializing in online harassment and content moderation. Lo is optimistic about some of the prototype solutions presented at the WWWF workshops, particularly one called Gateway, a kind of digital panic button that would let users alert platforms when an attack is in progress. She is more skeptical about whether any of these prototypes will see the light of day. “To what degree have the social media companies committed to building any of these prototypes?” Lo asks, posing the question many are thinking. “And how will they be held accountable for them?”

