Big Tech and Accountability: Why regulation for technology platforms is inevitable

21st March 2019

It was great to be invited back onto the BBC this week to talk about the options for regulating ‘Big Tech’ and how governments might oblige the likes of Facebook, Twitter and Google to prevent the proliferation of hate speech on their platforms. As a legal advisor to technology companies, my usual role is to advise them on what they need to do to comply with their legal obligations and, where their products involve letting users vent their opinions online, to reduce their exposure to liability for things those users say.

The question put to me was: what could legislators do to make clients like mine do more? How might governments change the law to oblige technology platforms to take steps to stop their users from spreading hateful views online, rather than merely removing those views once they are eventually brought to a platform’s attention?

While this round of debate was prompted by the tragic events in Christchurch, New Zealand, the debate itself isn’t new; it held the headlines repeatedly throughout 2018. Yet despite repeated public calls for action, there has been a remarkable lack of action from regulators worldwide. Providers of online communication platforms remain bound only to take reactive steps to remove unlawful content once it is reported to them; they are under no binding obligation to actively seek it out.

Why is that, you might ask? What is it about ‘Big Tech’ that makes regulators so hesitant to act, despite endless declarations from politicians that ‘something must be done’ to make Big Tech do more?

Well, in fairness to legislators in both the EU and the US, while their desire to regulate might be real, there are significant legal obstacles that stand in the way of the kind of regulation that keeps being called for. On this side of the Atlantic, a significant body of European law, chiefly the E-Commerce Directive, places online platforms such as Facebook outside the definition of ‘publishers’ and thereby shelters them from most liability for unlawful material that users may post. That same body of law specifically prohibits EU member states from introducing national legislation that would compel Facebook, and similar technology providers, to take more proactive steps to police their services. In the US there is a no less significant barrier to action in the form of the First Amendment’s constitutional guarantee of free speech.

A number of commentators have pointed out that tech providers haven’t always been shy about proactively policing their platforms for unlawful content. YouTube famously has algorithms in place (its Content ID system) that hunt down and restrict unauthorised use of copyrighted music recordings. Why, they ask, couldn’t something similar be done for hateful speech? Well, in fairness to technology providers, hunting down the latest vile expression of a hate-filled view is a rather more difficult task than matching an unauthorised sample against a known recording. Any such system would carry a high financial cost to put in place, and would inevitably produce a public-relations gaffe when an over-enthusiastic algorithm mistakenly classifies a school play as a white supremacist rally.

Either way, the growing public anger about the seeming lack of accountability of technology providers is gaining traction in major markets around the world, with moves to increase regulation of the tech sector gradually advancing in Europe and forming a key part of candidates’ messages in the US Democratic Party’s presidential primary campaign. Some form of regulation seems inevitable in the near future.