Attempts to rein in the internet will take a step closer later this month when the UK government gives its official response to its Online Harms White Paper published in April 2019. The government's move - one of the first by the new Conservative administration - comes after it received over 2,000 responses from, amongst others, the voluntary sector, think tanks, and leading technology companies under the umbrella of the Internet Association. Since then, some of the more draconian proposals appear to have been tempered and there are tentative indications that the government may have accepted freedom of speech concerns. However, there is so far little sign that the response will address some of the more fundamental issues with the proposals identified by commentators.
Aspiring to make the UK the world's safest place to be online, the proposals are aimed at a multitude of social ills: from widely understood scourges, such as terrorist content and child sexual exploitation material, to less clearly defined phenomena, such as cyber-bullying, trolling and intimidation. The government is committed to tackling such problems by imposing a duty of care on organisations facilitating the sharing of user-generated content online, encompassing tech giants, social media companies, public discussion forums and even retailers inviting online product reviews.
Such entities would be obliged to take reasonable steps to keep their users safe and prevent others from coming to harm as a direct consequence of activity on their services. The duty would be underpinned by legally enforceable codes issued - initially at least - by the communications regulator, Ofcom. The proposals include senior management liability and substantial financial penalties for breaches linked to annual turnover or the volume of illegal material online. Overseas companies would be obliged to appoint a UK-based director to facilitate enforcement action, with a levy on tech companies under discussion to fund the additional costs of regulation.
Foreshadowed by the prime minister in a speech to the UN General Assembly in September 2019, and confirmed in the Queen's Speech last December, the UK proposals are part of a world-wide shift towards greater regulation of the online sphere. High profile tragedies such as the Christchurch mosque shootings in 2019, online threats to national security and democracy, and a perceived lack of accountability by dominant tech giants are driving this trend, generating a disparate patchwork of national strategies to counter online harms.
These include Australia’s legislation against sharing abhorrent violent material, India’s proposals to impose liability on social media companies and internet service providers (ISPs) for hosting unlawful content, and Singapore’s protection from online falsehoods law which prohibits the online dissemination of lies which would diminish confidence in the government. In Europe, too, plans are underway to unveil an EU-wide Digital Services Act imposing mandatory “take down” notices for illegal content and disinformation, enforced by a new regulator armed with investigatory, corrective and fining powers. The EU proposals have led to intensive lobbying in Brussels by Big Tech companies but the need for greater regulation is now all but accepted.
Controversial though some of these overseas measures have been, they arguably represent an assertion of governmental control over the online space, a space which had been de facto ceded to the platform providers themselves. They also oblige those providers to assume an expensive oversight role over uploads to their platforms, one which they may be unwilling or ill-equipped to take on.
When first published, the UK proposals were both eye-catching in their approach and eye-watering in the punitive measures they contained, including the power to order ISPs to block websites and apps from use in the UK in the worst cases. The government’s imminent response is believed to drop such extreme measures. The White Paper also caused alarm at the potential threat to freedom of speech, with fears that anxious platform providers would overzealously filter content to avoid fines for causing vaguely-defined online harms. In his UN speech about emerging technologies last September, however, Boris Johnson implicitly acknowledged this concern, highlighting the need to find the right balance between freedom and control, to avoid censorship and repression, and to uphold freedom of opinion and expression.
It remains to be seen how the UK government will address other fundamental issues raised about its approach to online harms. For example, commentators have suggested that, when much of what we view on social media is dictated by unseen algorithms monitoring our behaviour, an alternative approach to tackling online harms might be to clarify and regulate the use of such algorithms by Big Tech organisations.
The precise definition of some of the harms within the scope of the proposals remains problematic. What constitutes “disinformation”, and what counts as “excessive” screen time, both harms within the government’s sights? Regulating the amount of an individual’s screen time risks seeming both arbitrary and Orwellian. For some critics, the government’s enthusiasm for algorithmic methods of tackling online harms, though well-intentioned, has drawbacks: imposing prohibitive costs on smaller start-up tech companies, and promoting an unrealistic expectation of security such that, when online harms nevertheless occur, the media will demand even stricter controls. Arguably, that leads to a vicious cycle of ever greater protective surveillance which ultimately undermines the freedoms and way of life the measures were designed to protect.
Notwithstanding such misgivings, with the government’s parliamentary majority and the backing of the Opposition, the draft legislation is certain to pass into law, marking a further step towards the end of the era of online self-regulation.
- Michael Drury and Julian Hayes are partners at London-based BCL Solicitors LLP