The Guardian’s Perspective on Social Media Regulation: Necessary but Risky | Editorial

At the end of another difficult week, Mark Zuckerberg has taken refuge in the utopian environment of his new growth vehicle – the “metaverse”. Surrounded by avatars of jovial coworkers, 3D street art and brightly colored flying fish, the Facebook CEO played tour guide in a short promotional video released Thursday, showcasing the company’s planned virtual reality experience. Coinciding with the announcement that Facebook is changing its corporate name to Meta, the saccharine video and the ominous rebranding were quickly mocked across all platforms.

The hostile reception shouldn’t have been a surprise. In the real world, Facebook has become the emblem of the negative and polarizing impacts of social media on politics and society. Following the publication of leaked internal documents – which reveal how the company prioritized profit over mitigating the social damage it knew certain online tools were causing – its reputation is at an all-time low. As the parliamentary testimony of former employee-turned-whistleblower Frances Haugen made clear, Mr. Zuckerberg and his small circle of trusted advisers ignored ethical red flags raised by the company’s “integrity teams”. There has been a culpable reluctance to act on the evidence that key engagement mechanisms promote extreme content and disinformation, and fuel discord around the world. After hearing Ms Haugen earlier this week, MPs questioned Facebook’s global head of safety, Antigone Davis, highlighting research suggesting the company’s Instagram app damages the mental health of one in three teenage girls. Representatives from Twitter, Google and TikTok were interviewed during the same session.

Change is almost certainly coming – in particular, the end of the era of big tech self-regulation, in which private platforms like Facebook and Twitter have failed to keep their own houses in order. The desire to detoxify social networks is justified and understandable. But designing a coherent system of external regulation runs into difficulties and dilemmas. The government’s online safety bill – still in the early stages of its parliamentary journey – would institute the most ambitious web regulation of any liberal democracy. As it stands, it would also create significant risks of its own.

The bill envisions an expanded Ofcom as regulator of major social media platforms, with the power to impose fines of up to 10% of global turnover on companies that fail to comply with its code of conduct. Services considered to present a risk of significant harm to citizens could be blocked in the UK. The culture secretary of the day would be able to define and modify the strategic priorities enforced by Ofcom. That is a tremendous amount of power and discretion to hand to a minister and a watchdog run by unelected officials. The lack of clarity in the bill’s definition of “lawful but harmful” online content compounds the problem, creating what one expert has called a “muddy, intermediate” zone of interpretation. What criteria determine when the unpleasant turns into the unacceptable? In an age of polarization, the possibility of aggressively pursuing contentious agendas at the expense of free speech is evident.

Following the murder of Sir David Amess, Sir Keir Starmer demanded that the government speed up the bill to “clean up the sump” of online extremism. A regulatory system that would give the current culture secretary, Nadine Dorries, and a future Ofcom chair (Paul Dacre?) broad and loosely defined powers is not the right solution. The self-regulation of social media giants is not working. But what replaces it must be carefully considered and its categories clearly defined. Facebook’s failures do not justify a new era of top-down censorship.
