Euroviews. Regression to self-regulation means the EU AI Act will fail to plug the harms

Members of the European Parliament check their smartphones in Strasbourg, January 2023. Copyright: AP Photo/Euronews
By Dr Kris Shrishak
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Back in April 2021, when Brussels proposed the first cross-sector AI regulation in the world, it claimed to protect fundamental rights and promote innovation. In 2023, fundamental rights may not remain fundamental to this regulation, Dr Kris Shrishak writes.


A coalition of the French, German and Italian governments has proposed that companies self-regulate AI systems like GPT that can be used in various applications. This proposal follows their opposition, on 9 November, to regulating such AI systems in the AI Act.

This push for self-regulation in the EU should not be seen in isolation. It follows a series of small steps by the legislators that make this new proposal disappointing but not surprising.

Evidence from the online advertising industry’s neglect of data protection and from various Facebook whistleblowers, among others, shows that the self-regulation approach in the tech industry has contributed to significant harms. 

And yet the EU's flagship tech regulation, the AI Act, has been grounded in self-assessment ever since the European Commission proposed it in 2021.

A step further, then a step back?

Under that proposal, companies could self-assess whether they fulfil the requirements for high-risk AI systems. They could voluntarily provide information and manage risks. Even when there are serious problems with their high-risk AI systems, they would have to inform regulators only under a narrow set of conditions, and they could easily evade responsibility.

One would have hoped that the European Parliament and the Council of the EU would recognise the issues arising from self-assessments. Instead, they have taken it one step further.

They have created provisions that allow companies to decide for themselves whether their AI systems are high-risk. In other words, because the AI Act only regulates high-risk AI systems, companies can decide whether to be regulated at all.

President of the European Commission Ursula von der Leyen looks at the invention 'Do you Speak Robot?' at the AI Xperience Center at the VUB in Brussels, February 2020. AP Photo/Stephanie Lecocq

In addition to self-assessments, the European Commission’s 2021 proposal had another gaping hole: it only considered AI systems with an “intended purpose”.

Already in 2021, evidence of harms from AI systems without pre-defined purposes, such as GPT-2 and GPT-3, was accumulating. Yet the European Commission failed to address them in its proposal.

Then, in November 2022, ChatGPT, built on top of GPT-3, was released, and its harms were widely reported in popular media.

The European Parliament laid down rules for such AI systems in its position in June of this year. These rules were further modified by the Spanish Presidency of the Council in October and November.

It looked like the legislators had found a deal on regulating these AI systems, until France, Germany and Italy objected.

A free ride for the rule breakers

The governments of these countries have now proposed “mandatory self-regulation through codes of conduct” without any sanction for violations. 

How is a rule mandatory if there is no enforcement and no sanction? And why would any company follow such rules?

Rule breakers will have a free ride while rule followers will bear the costs. This will encourage the deployment of harmful and poorly tested AI systems in the EU.

It might even promote “innovation” in getting around the rules, as seen in the Volkswagen emissions scandal. Competition between rule breakers will be the only competition in this market.

The new proposal will allow the AI industry to continue its current practices and remain unaccountable. Harms to fundamental rights will continue to propagate and the AI Act will fail to plug the harms.
Activists wearing masks of German Chancellor Olaf Scholz and French President Emmanuel Macron pose with shovels in front of EU headquarters in Brussels, February 2022. AP Photo/Virginia Mayo

In October, Neil Clarke of Clarkesworld Magazine said it clearly when speaking to the Federal Trade Commission: “Regulation of this [AI] industry is needed sooner than later, and each moment they are allowed to continue their current practices only causes more harm. Their actions to date demonstrate that they cannot be trusted to do it themselves.”

The EU's regression to self-regulation is the exact opposite of what is required of a regulatory superpower. 


Back in April 2021, when the European Commission proposed the first cross-sector AI regulation in the world, it claimed to protect fundamental rights and promote innovation. 

By the end of 2023, fundamental rights may not remain fundamental to this regulation.

Dr Kris Shrishak is a Senior Fellow at the Irish Council for Civil Liberties, Ireland’s oldest independent human rights monitoring organisation.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.
