EU Policy. EU countries approve technical details of AI Act

AI rules regulate the use of foundation models including ChatGPT. Copyright Peter Morgan/Copyright 2023 The AP. All rights reserved.
By Cynthia Kroet

EU countries today agreed on the technical details of the AI Act, the world's first attempt to regulate the technology according to a risk-based approach, following a political agreement in December. It now needs a sign-off from EU lawmakers before the rules enter into force.


Whether or not a deal would be reached today remained uncertain until the very end. 

France in particular has been sceptical about regulating so-called foundation models such as ChatGPT, opposing any binding obligations for providers of such models. It also had reservations about transparency requirements and the protection of trade secrets, but in today's meeting of EU ambassadors the text was unanimously approved.

Chatbots

The European Commission’s risk-based approach to AI was generally positively received in 2021, when the rulebook was first presented, but came under pressure in late 2022, when OpenAI launched ChatGPT and sparked a global debate about chatbots. 

Because the EU executive's original plan included no provisions for foundation models, the European Parliament added a new article with an extensive list of obligations to ensure these systems respect fundamental rights.

In response, Germany, France and Italy came forward with a counter-proposal that favoured "mandatory self-regulation through codes of conduct" for foundation models.

Following today’s approval, the European Parliament will most likely vote in its Internal Market and Civil Liberties committees in mid-February, and in plenary in March or April. After that, the act is expected to enter into force later this year, with an implementation period of up to 36 months. The requirements for AI models, however, will start to apply after just one year.

The law divides AI systems into four main categories according to the potential risk they pose to society.

The systems that are considered high risk will be subject to stringent rules that will apply before they enter the EU market. Once available, they will be under the oversight of national authorities, supported by the AI office inside the European Commission. 

Those that fall under the minimal risk category will be freed from additional rules, while those labelled as limited risk will have to follow basic transparency obligations.

