Kate has suffered years of abuse on social media. She says it’s time for platforms to do something

Kate doesn’t feel comfortable having her real name in this article – and that in itself is revealing.

Since her teens, the 23-year-old from Queensland has suffered so much online harassment and abuse that she has become wary of what she might face if she spoke out about it.

On digital platforms such as Facebook, Instagram, Tinder and Snapchat, Kate has received unwanted and unsolicited nude photos and videos, persistent direct messages, and verbal abuse from male users.

“Physical distance doesn’t protect you online,” she says.

Kate is not alone. A 2020 Plan International survey showed that in Australia, 65% of girls and young women said they had been harassed or abused online.

And it’s not just the harassment Kate experiences that makes her uncomfortable.

She says there is a “constant accumulation” of “misogynistic [or] violent” content on social platforms, including in the comment sections of other people’s posts.

“You scroll through your news feed and everything you see is degrading,” she says.

“There is a harm in constantly having to see horrible content.”

Kate wants social media platforms to do more to protect their users.

Alice Marwick, associate professor at the Center for Information, Technology and Public Life at the University of North Carolina, says platforms have “enormous… social responsibility” to do so.

Alice Marwick says online platforms need to do more to protect their users. (Supplied: Jon Gardiner/University of North Carolina)

Dr Marwick says it is up to the companies that create a space for users to come together and communicate online to “understand the harms and impacts of their functionality” and to better manage them.

And she says platforms not only need to respond better when abuse occurs, they also need to act to prevent it from happening in the first place.

What are platforms doing to protect users?

For years, platforms have come under pressure from many different quarters to do more to tackle abuse perpetrated through their networks.

Tech companies, with a few exceptions, are generally not legally responsible for the content and behavior of their users.

But Dr Marwick, who specializes in the social implications of social media technologies, says platforms still have a “moral and ethical responsibility” to their users.

Rachel Haas, vice president of member safety for the dating app Bumble, agrees.

In what it says is an industry first, her organization partnered this year with global gender-based violence support service Chayn to offer online support to Bumble users who experience sexual assault or relationship abuse through connections made on the app.

Ms Haas says the partnership means users who experience harm will have access to therapists and, “in very sensitive cases,” up to six therapy sessions.

Bumble has started offering online trauma support, says vice president of member safety Rachel Haas. (Supplied)

A few months in, the impact of the partnership on Bumble users is unclear. But it shows a social platform taking action to meet the needs of victims and survivors of abuse.

Dr Marwick says Twitter, where user harassment has long been a major issue, has also introduced new safety features, including better control over who can reply to your tweets, and the ability to mute and block some content more easily.

Kara Hinesley, director of public policy at Twitter Australia and New Zealand, says 65% of the abusive content Twitter takes action on today is identified “by proactively using technology instead of relying solely on reports from Twitter users”.

Ms Hinesley also points to new tools such as Safety Mode, which automatically blocks accounts for seven days for using potentially harmful language or sending repetitive and unsolicited replies or mentions. It is currently being tested.

The platform still has “a long way to go,” says Dr Marwick, but it has made significant progress, especially compared to others.

“A lot of these spaces just don’t have robust mechanisms in place.”

Challenges in identifying online abuse

It can be difficult to understand what constitutes online abuse.

Twitter’s Kara Hinesley says most of the abusive content on the platform is identified using technology. (Supplied)

When Kate was inundated with daily late-night messages and voice recordings from a man on Facebook several years ago, she was faced with a dilemma.

Kate argues that social platforms don’t make it easy.

She says some online behavior that made her uncomfortable, such as a man viewing and sharing photos of her exercising from one of her social media accounts, felt “really disgusting” to her, but she thought “technically [the man] had done nothing that would violate the platform’s rules”.

Unlike explicit harassment, the photo sharing and late-night messages were not obviously harmful.

This is part of what can make harassment and abuse difficult to identify. Harmful interactions can sometimes feel like healthy communication between friends or loved ones. This is why many behaviors that make women feel unsafe can go unnoticed.

Abusive speech can also be difficult to identify, in part because it has become so normalized, says Dr Marwick.

“Misogynistic speech is so much a part of everyday life that women just learn to accept it in order to be on the platform,” she says.

Eradicating online abuse presents additional challenges.

Dr Marwick acknowledges that the huge scale of content that social media platforms face makes monitoring difficult.

“They just aren’t human-scale platforms at this point,” she says. “Plus, they cover hundreds of languages and cultures.”

Protecting users from harm and abuse is therefore “complicated and expensive”.

But that doesn’t mean platforms can throw up their hands.

“Platforms need to be willing to invest resources in tackling harassment in many different languages and many different spaces, and understand how these things differ from country to country, culture to culture,” says Dr Marwick.

The funds exist for many companies. “These are incredibly profitable companies. They are not companies that are barely scraping by,” says Dr Marwick.

“More money needs to be spent on human content moderation, and in particular human content moderation in non-English contexts,” she says.

Solutions needed “from the ground up”

Moderating existing content is not enough to target online abuse, says Dr Marwick.

She would like platforms to incorporate ways to proactively mitigate harassment in their design.

“We need to build ways of dealing with harassment into the platform from the ground up, rather than trying to implement it after the fact,” she says.

She says any new platform being built today should consider harassment as it is designed.

Kate hopes that this will be the case, and that more digital platforms will begin to engage with women’s safety experts and users from diverse backgrounds.

“You can’t imagine how these platforms are going to be harmful in different ways without talking to the people who are affected by them,” she says.

Dr Rosalie Gillett is a postdoctoral fellow at Queensland University of Technology and an ABC Top 5 Humanities Fellow for 2021.
