
On Facebook: extremism needs moderation


Politicians all around the world are worried by Facebook’s power. They have good reason to be. In 14 years this one company has become a new mass medium, dwarfing all previous corporations in its reach and power. The US-based Pew Research Center reckons that 1.8 billion people use Facebook as their leading source of news. Even when the company makes no effort to influence which stories people see (an activity that is central to its business), it can influence democracies profoundly. In one experiment, simply allowing users to click a button telling their friends they had voted was enough to raise turnout significantly. This power is exercised almost entirely asymmetrically: democratic governments have very little power to influence Facebook’s policies, or even to know what those policies are.

The same dynamic applies to other social media, especially Twitter, and to the Google empire as well. The algorithms deployed by YouTube (which is owned by Google’s parent, Alphabet) have a capacity to drive radicalisation and to normalise extremist views. All these companies make their money by keeping viewers’ attention so that it can be sold on to advertisers – and this attention is best caught, and kept, by increasingly sensational content. The process by which this happens is entirely automatic and algorithmic, controlled by programs of such complexity that not even their developers can understand how they work in detail: they can only measure how effective they are. Few people think about, or attempt to measure, their effects on society as a whole rather than just on the balance sheet. And the companies that use them have no incentive to do so.

It is one of the tasks of a democratic society to provide such incentives. The curbs on hate speech and on libel that apply in the offline world must apply in the online world as well. This is not controversial in principle, but it is extremely difficult in practice. The sheer volume of material uploaded to YouTube or Facebook means no human could possibly consider all of it. Any system of control must rely on the companies responding promptly to complaints of abuse. This does not at present happen. Their systems of moderation are opaque, confusing and entirely inadequate. That must change.

One effort to change this is the private member’s bill introduced today, with considerable cross-party support, by the Labour MP Lucy Powell. This would make the moderators of online groups – and not just the giant, distant companies that own them – responsible for their content. It would also ban private groups of more than 500 members: their existence would have to be public, as would the number, if not the names, of their members.

More thought needs to be given to these well-intentioned proposals. It is difficult to define an online forum: do the comments on YouTube videos count? Do mailing lists? Newspaper comment sections? WhatsApp groups for MPs? All serve as venues for discussion within communities. Much harm can be done in secret groups, it’s true. But society would be impossible without the right to keep secrets and without privacy. Despite these flaws, Ms Powell’s bill is aimed at an urgent and serious problem. The law must be able to hold both companies and individuals responsible when crimes are committed online. The rules governing political campaigning and financing must apply online as well. Much greater scrutiny of the internet giants is needed, and their response should be much greater transparency.
