Fixing this will take the buy-in of all the players: digital companies, big and small, and governments must commit to regulating the right way, Tawfik Jelassi writes.
Less than a decade ago, social media was heralded as a potent new force for the democratisation of knowledge and information.
It catalysed great shifts in power during the Arab Spring, handing a microphone to the voiceless and enabling a revolution in connectivity.
But in the mid-2010s, a fundamental change took place: the quest for revenues kicked off an algorithmic arms race.
The goal was to hold people’s attention for longer and serve them precision-targeted advertising, all underpinned by reams and reams of personal data.
It quickly became clear that emotions – and especially negative ones like anger and outrage – lead to increased sharing and engagement, which was lucrative.
This drive to provoke emotion has, step by step, come at the expense of the truth.
Systemic deficiencies are easily exploited by malign actors
A 2018 MIT study, which analysed 11 years of Twitter data, found that “falsehood diffused significantly farther, faster, deeper and more broadly than truth in all categories of information”.
Global indignation about these developments has gradually led platforms to boost their ‘safety’ mechanisms – with some notable exceptions – but these remain inadequate, and there are massive disparities in the distribution of moderation resources between regions and languages.
Content in many languages seems to be left almost entirely unchecked.
All of this has been exacerbated by actors all too willing to exploit these deficiencies for their own ends.
An Oxford Internet Institute study from 2021 found that what Facebook calls "co-ordinated inauthentic behaviour" had more than doubled since 2017, with dozens of countries using computational propaganda and fake followers to manipulate public opinion and garner voters’ support.
Recent investigations have shown how a handful of people are able to destabilise dozens of elections by spreading disinformation at lightning speed and on a huge scale, enabled by global armies of fake profiles.
Harmful content online can destabilise democratic processes
Between now and the end of 2024, more than 90 new elections will take place worldwide, and over two billion people will be called on to cast their votes.
We cannot allow these democratic processes to be swayed by dealers in disinformation.
The regulation of online platforms could be a powerful tool for preventing this – but we must get it right.
At least 55 countries have issued or are currently considering national legislation to address the spread of harmful content online.
Some of these efforts go in the right direction. But some risk infringing human rights, particularly the right to freedom of expression and opinion.
UNESCO Director-General Audrey Azoulay has argued forcefully that, since online information disruption is a global problem, it can only be tackled in a globally coordinated manner.
She stresses that a global approach led by UNESCO is essential to ensure that international human rights standards are upheld in future legislation.
It's not freedom of expression if it infringes on your right to information
Since September 2022, UNESCO, the UN agency for communication and information, has been leading extensive global consultations with governments, private companies, experts from regulatory bodies, civil society, academia, the technical community and international organisations.
Our goal is to get all these players to agree to a common set of written guidelines – a new blueprint for the regulation of social media platforms.
These guidelines will put human rights, currently compromised by the way the digital ecosystem is operating, at the heart of regulation.
Some worry that any attempt to regulate social media platforms implies curbing our right to freedom of expression.
They are forgetting that a basic element of the principle of freedom of expression, as defined by Article 19 of the International Covenant on Civil and Political Rights (ICCPR), is the right to seek and receive information.
This right is directly infringed by the spread of false information, hate speech and conspiracy theories.
Each time a social media algorithm actively promotes misleading or hateful content to increase engagement, a user’s right to seek and receive information is breached.
Protecting this integral view of freedom of expression must be a part of any regulation of the digital space.
We have to regulate the right way
In February 2023, at UNESCO’s "Internet for Trust" conference, more than 4,000 representatives came together for three days of intense debate and discussion.
We observed a broad consensus that it is time for digital platforms to become subject to new rules requiring greater transparency and accountability, and that any measures to regulate them must be firmly anchored in human rights.
At the end of this conference, media and online safety regulators from all over the world issued a statement voicing their support for UNESCO’s ambition to “impose obligations of due diligence and transparency on digital platforms, especially in terms of online content management for user safety, under the supervision of a regulation system”.
Now what we need is the buy-in of all the players. Digital companies, big and small, and governments must commit to regulating the right way.
And civil society and the media must continue their efforts to scrutinise regulatory initiatives and uphold human rights in this sphere.
Only together can we forge a new social contract for the digital age and reclaim the internet as our global public square.
Tawfik Jelassi is the Assistant Director-General for Communication and Information at UNESCO.
At Euronews, we believe all views matter. Contact us at firstname.lastname@example.org to send pitches or submissions and be part of the conversation.