Member states and MEPs strike deal on EU AI Act after drawn-out, intense talks

The original version of the AI Act had to be extensively amended to regulate the foundation models that power chatbots like OpenAI's ChatGPT.
Copyright LIONEL BONAVENTURE/AFP or licensors
By Jorge Liboreiro

Member states and the European Parliament have reached a preliminary deal on the AI Act, the world's first attempt to regulate the fast-evolving technology in a comprehensive, ethics-based manner.


The agreement was struck at the political level on Friday night after talks that occupied the entire day, following an unsuccessful marathon session that ran from Wednesday into Thursday afternoon. In total, the push took more than 35 hours.

The breakthrough came amid aggressive lobbying from tech companies of all sizes, stark warnings from civil society and intense media scrutiny as the legislation from Brussels could very well influence similar regulatory efforts across the world.

"Historic! The EU becomes the very first continent to set clear rules for the use of AI," said Thierry Breton, the European Commissioner for the internal market who took part in the debate. "The AI Act is much more than a rulebook – it's a launchpad for EU startups and researchers to lead the global AI race."

The negotiations were a hard-fought back-and-forth between governments and lawmakers over a string of highly complex and technical issues.

Having sealed on Thursday a tentative compromise to rein in the foundation models that power chatbots like OpenAI's ChatGPT, Friday's talks had a strong focus on the use of real-time biometrics, including facial recognition, in public spaces.

At the core of the debate was the question of whether state authorities should be allowed to deploy AI-powered biometric systems that can identify and categorise people based on sensitive characteristics such as gender, race, ethnicity, religion and political affiliation, as well as systems for emotion recognition and predictive policing.

In their joint mandate, MEPs said these practices were "intrusive and discriminatory" and therefore should be prohibited across EU territory. Member states, though, had quite a different opinion and argued exceptions for law enforcement were necessary to track down criminals and thwart threats against national security. France approved earlier this year legislation to enable the use of biometric surveillance during the 2024 Paris Olympics and Paralympics, a first for an EU country.

The clash between national security and fundamental rights absorbed most of the energy on Friday. Spain, the current holder of the Council's rotating presidency, had the hard task of representing the 27 member states and keeping a united front.

Talks were interrupted by a protracted recess where lawmakers discussed among themselves the demands made by the Spanish presidency. Meanwhile, scholars and activists took to social media to urge MEPs to resist the exemptions for law enforcement.

In the end, the Parliament relented and signed up to a series of "strict conditions" that will allow authorities to use real-time biometrics to search for victims of abduction, trafficking and sexual exploitation; prevent terrorist threats; and locate people suspected of having committed serious crimes, such as terrorism, murder, rape and armed robbery.

By contrast, biometric categorisation based on sensitive characteristics, social scoring, predictive policing, the exploitation of vulnerabilities, and emotion recognition in the workplace and educational institutions will be prohibited.

"We fought hard for the ban on biometric identification," said Brando Benifei, a socialist MEP who served as co-rapporteur. "It was important to bring all these pieces together."

"All these things are a very strong regime that say what kind of uses of AI we don't want in our Union and in our market," said Dragoș Tudorache, the other co-rapporteur. 

Given the complexity of the issue at hand, the compromise that emerged from the drawn-out talks is likely to require further fine-tuning in the coming days. Carme Artigas, Spain's secretary of state for digitalisation and AI, insisted that "every article" had been agreed upon, without leaving any "loopholes" to circumvent respect for human rights.

Once the legal text, which covers hundreds of pages of articles and annexes, is rewritten, it will be sent to the European Parliament for a new vote in the hemicycle, followed by a green light from the countries in the Council.

"We're hopeful that all (member states) will confirm" the compromise, Artigas said, when asked about the concerns raised by Germany and France.

The final votes are expected to take place in early 2024. The law will then have a gradual implementation period before it becomes fully applicable.

An ever-evolving technology

First presented in April 2021, the AI Act is a ground-breaking attempt to ensure the most radically transformative technology of the 21st century is developed in a human-centric, ethically responsible manner that prevents and contains its most harmful consequences.


The Act is essentially a product safety regulation that imposes a staggered set of rules that companies need to follow before offering their services to consumers anywhere across the bloc's single market.

The law proposes a pyramid-like structure that splits AI-powered products into four main categories according to the potential risk they pose to the safety of citizens and their fundamental rights: minimal, limited, high and unacceptable.

Those that fall under the minimal risk category will be freed from additional rules, while those labelled as limited risk will have to follow basic transparency obligations.

The systems considered high risk will be subject to stringent rules that will apply before they enter the EU market and throughout their lifetime, including substantial updates. This group will encompass applications that have a direct and potentially life-changing impact on private citizens, such as CV-sorting software for job interviews, robot-assisted surgery and exam-scoring programmes in universities.

High-risk AI products will have to undergo a conformity assessment, be registered in an EU database, sign a declaration of conformity and carry the CE marking – all before they get to consumers. Once they become available, they will be under the oversight of national authorities, aided by a dedicated AI office inside the European Commission.


AI systems with an unacceptable risk for society, including social-scoring to control citizens and applications that exploit socio-economic vulnerabilities, will be outright banned across all EU territory.

Under the preliminary deal, companies that violate the rules will face multi-million-euro fines of up to €35 million or 7% of global turnover.

Although this risk-based approach was well received back in 2021, it came under extraordinary pressure in late 2022, when OpenAI launched ChatGPT and unleashed a global furore over chatbots. ChatGPT was soon followed by Google's Bard, Microsoft's Bing Chat and, most recently, Amazon's Q.

Chatbots are powered by foundation models, which are trained with vast troves of data, such as text, images, music, speech and code, to fulfil a wide and fluid set of tasks that can change over time, rather than having a specific, unmodifiable purpose.

The Commission's original proposal did not include any provisions for foundation models, forcing lawmakers to add an entirely new article with an extensive list of obligations to ensure these systems respect fundamental rights, are energy efficient and comply with transparency requirements by disclosing that their content is AI-generated.


This push from Parliament was met with scepticism from member states, who tend to prefer a soft-touch approach to law-making. Germany, France and Italy, the bloc's three biggest economies, came forward with a counter-proposal that favoured "mandatory self-regulation through codes of conduct" for foundation models. The move sparked an angry reaction from lawmakers and threatened to derail the legislative process.

But the daunting prospect of thrusting the landmark law into limbo in the lead-up to next year's European elections acted as a motivation to bridge the gaps and strike a preliminary deal, which now foresees transparency requirements for chatbots to ensure users are aware they are interacting with an AI-powered system.

"The AI Act is a global first," said Ursula von der Leyen, the president of the European Commission. "A unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses. A commitment we took in our political guidelines – and we delivered."

This article has been updated with more information about the political deal.
