While we strive to develop new laws to govern AI itself, it is imperative that we train judges, prosecutors and public servants so that they can protect us from AI-related harms right now, Tawfik Jelassi writes.
The moment is finally ripe, and we are all on the same page — it’s time for human-rights-based, ethical regulation of artificial intelligence.
The disruptions caused by generative AI have forced countries and companies to sit up and take notice.
This year, policymakers around the world have suddenly become acutely aware of the harm unregulated AI use can cause. What’s more, the private sector, hesitant about regulation in the past, is now in favour of government oversight in the deployment of AI systems.
The EU and North American countries are moving ahead in defining principles for regulation and proposing laws to address some of the most pressing challenges.
This will take time
UNESCO’s Recommendation on the Ethics of Artificial Intelligence, adopted by UNESCO’s 193 member states in late 2021, serves as a comprehensive and solid blueprint for building regulatory regimes upon universally accepted values and principles.
Governments worldwide are exploring regulatory measures to manage the risks of AI use.
This year, UNESCO will assist 50 countries in designing ethical AI policies. If these new policies are to be implemented effectively, the ground must be thoroughly prepared.
First, UNESCO will assess the resilience of existing national laws, and then gauge the technical capacities of public servants and institutions.
Chile is advancing in this process and, in June 2023, unveiled a new AI strategy developed hand-in-hand with UNESCO. But building robust regulations and institutions takes time.
Judges can take action in the immediate term
So, how can we protect the person being falsely identified by facial recognition technology deployed by the police or the artist who feels their intellectual property has been stolen by generative AI developers?
Well, judges and prosecutors have a vital role to play; they can set powerful legal precedents in their judgements on individual cases, enabling a country to leap forward in a particular area of the law.
Recent legal cases have shown that the judiciary can draw upon international human rights law, constitutional safeguards and data protection laws to safeguard against discriminatory and biased AI systems.
The Brazilian Institute for Consumer Protection challenged the use of smart billboards that claim to anticipate the emotion, age, and gender of metro riders to provide them with “better adverts”. The courts fined the metro line for violating the privacy of the riders.
The Italian Data Protection Authority fined a food delivery application for using discriminatory automated decision-making systems against its delivery persons and failing to meet the appropriate standards of transparency and privacy.
And in a landmark judgment, the European Court of Human Rights ruled that a state’s use of facial recognition technology to locate and arrest a peaceful protestor violated the individual’s rights to privacy and freedom of expression.
If judges are to play this vital role effectively, we must build their knowledge and understanding of how AI systems work, and how international human rights law can be applied to AI.
UNESCO has so far trained over 5,400 judicial actors from over 140 countries on the benefits, challenges and risks of AI systems to support them in their role as defenders of human rights and fundamental freedoms.
Public servants: Governing AI day to day, as well as developing regulation
Governments are increasingly using AI systems to improve public service delivery, including the distribution of social welfare. However, biases in algorithms can have life-or-death consequences.
In Australia, the "Robodebt" system, which was used to detect fraud in welfare payments to beneficiaries, wrongly denied welfare support to thousands of individuals due to a poorly designed algorithm, driving some into serious debt.
A similar system in the Netherlands to fight tax fraud incorrectly labelled individuals as fraudulent.
There is an urgent need for AI upskilling in public bodies, in order to ensure the responsible use of artificial intelligence in government and to guide its governance.
Last year, UNESCO launched the definitive framework on skills and competencies that public servants need to govern digital technologies. We are now supporting countries in their digital transformation journey through our capacity-building programme.
While we strive to develop new laws to govern AI, which is a long-term project, it is imperative that we train judges, prosecutors and public servants, so that they can protect us from AI-related harms right now.
Tawfik Jelassi is the Assistant Director-General for Communications and Information, at UNESCO.