The Guardian’s Perspective on Social Media Regulation: Necessary but Risky | Editorial
At the end of another difficult week, Mark Zuckerberg took refuge in the utopian environs of his company’s new growth vehicle – the “metaverse”. Surrounded by avatars of jovial colleagues, 3D street art and brightly coloured flying fish, the Facebook CEO played tour guide in a short promotional video released on Thursday, showcasing the company’s plans for a future virtual reality experience. Coinciding with the announcement that Facebook is changing its corporate name to Meta, the saccharine video and the portentous rebranding quickly spread across all platforms.
The hostile reception should not have been a surprise. In the real world, Facebook has become the prime exhibit for the negative and polarising impacts of social media on politics and society. Following the publication of the leaked Facebook papers – which reveal how the company prioritised profit over mitigating the social damage it knew certain of its online tools were causing – its reputation is at an all-time low. According to parliamentary testimony by the former employee turned whistleblower Frances Haugen, Mr Zuckerberg and his small circle of trusted advisers ignored ethical red flags raised by the company’s own “integrity teams”. There has been a culpable reluctance to act on evidence that key engagement mechanisms promote extreme content and disinformation, and fuel discord around the world. After hearing Ms Haugen earlier this week, MPs questioned Facebook’s global head of safety, Antigone Davis, pointing to research suggesting the company’s Instagram app is harming the mental health of one in three teenage girls. Representatives from Twitter, Google and TikTok were interviewed during the same session.
Change is almost certainly coming – in particular, an end to the era of big tech self-regulation, in which private platforms such as Facebook and Twitter have failed to put their own houses in order. The desire to detoxify social networks is justified and understandable. But devising a coherent system of external regulation is fraught with pitfalls and dilemmas. The government’s planned online safety bill – still in the early stages of its parliamentary career – would institute the most ambitious regulation of the web in any liberal democracy. As it stands, it would also create significant risks of its own.
The bill envisions an expanded Ofcom as the regulator of major social media platforms, with the power to impose fines of up to 10% of global turnover on companies that fail to comply with its code of conduct. Services judged to present a risk of significant harm to citizens could be blocked in the UK. The culture secretary of the day would be able to define and modify the strategic priorities imposed on Ofcom. That is a tremendous amount of power and discretion to hand to a minister and a watchdog run by unelected officials. The lack of clarity in the bill’s definition of “lawful but harmful” online content compounds the problem, creating what one expert has called a “muddy” intermediate area of interpretation. What criteria determine when the unpleasant shades into the unacceptable? In an age of polarisation, the potential for contentious agendas to be pursued aggressively at the expense of free speech is evident.
Following the murder of Sir David Amess, Sir Keir Starmer urged the government to speed up the bill in order to “clean up the cesspit” of online extremism. But a regulatory system that would give the current culture secretary, Nadine Dorries, and a future Ofcom chair (Paul Dacre?) broad and loosely defined powers is not the right solution. Self-regulation by the social media giants is not working. What replaces it, however, must be carefully considered and its categories clearly defined. Facebook’s failures do not justify a new era of top-down censorship.