Regulating Big Tech – the issue of our time?

Sometimes it takes a personal tragedy to focus attention on an issue which many people know is important but think is distant and rather boring. This is what has happened after the suicide of 14-year-old Molly Russell. Her father blamed her death on content linked to self-harm and suicide which Molly had viewed on Instagram. The sad but significant result of Molly’s suicide has been to bring home to many people the growing debate about whether to regulate Big Tech more tightly – and if so, how.

Everyone knows the power of Big Tech is now immense. Facebook alone has 2.32 billion users worldwide – almost a third of the world’s population. Having bought two of its rivals, Instagram and WhatsApp, Facebook together with Google directly influences over 70% of internet traffic. Facebook, Amazon, Apple, Netflix and Google are together known as the FAANGs.

The issue is not just potential harm to social media users, especially young ones like Molly Russell. There is growing evidence that social media causes users mental distress, damages their sense of self-worth and even produces symptoms similar to addiction. ‘Fake news’ is another problem: the dissemination on the internet of propaganda which may not only unfairly influence elections and referenda (the US and French Presidential elections, and Brexit) but foment terrorism and violence. Unauthorised data sharing and invasions of privacy by the FAANGs are other vital issues.

As awareness of these problems has grown, so have calls in the UK for regulation of Big Tech. The Culture Department, the DCMS, is due within the coming weeks to publish proposals to force social media companies to remove harmful and offensive content within a set time limit, in order to protect young people online. Leaks suggest companies like Facebook and Twitter would have to sign up to a code of practice and face fines from a regulator if they did not take down harmful posts quickly enough. They also face a new duty to reveal how many complaints they receive about online bullying, self-harm, trolling and other threats.

On 14 February the Commons DCMS Select Committee recommended that self-regulation be ended and a compulsory code of ethics, overseen by an independent regulator, be brought in to monitor online platforms like Facebook. The Committee’s final report on “Disinformation and ‘fake news’”, published after an 18-month inquiry, said regulation was needed to clamp down on harmful or illegal online content. It called for the new regulator to be funded by a levy on tech companies operating in the UK and said that it should have statutory powers to launch legal action and levy “hefty fines” against companies in breach of the proposed ethics code.

It looks as though a tipping point has been passed. The question in early 2019 is no longer SHOULD Big Tech be more tightly regulated – but HOW? To help provide informed answers to these complex legal and policy issues, a timely debate will take place on Thursday, 28 February, organised by the Media Society and hosted at the offices of Simons Muirhead & Burton LLP, the leading West End law firm. The panel will include speakers representing Big Tech and the broadcasters, as well as expert observers of the FAANGs, and will be chaired by former BBC executive Phil Harding.

Will the government’s social media proposals go far enough? Will they be workable? Who should the new regulator be? What powers should it have? And what about data privacy and harvesting? How should these be regulated? These are just some of the key questions to be debated on Thursday night.

By Trevor Barnes