AI models don’t comply with the EU’s AI Act, according to a Stanford study

Most AI models such as OpenAI’s GPT-4 don't comply with the EU’s AI Act, study finds
By Imane El Atillah

A recent Stanford study showed that most AI models, including Google’s PaLM 2 and OpenAI’s GPT-4, don’t comply with the EU’s AI Act.


Leading tech companies’ artificial intelligence (AI) models don’t comply with the requirements of the EU’s upcoming AI Act, a new study has found.

The EU has been actively working on establishing comprehensive regulations to govern AI technologies and has been developing the AI Act for the last two years.

The Act recently underwent a vote in the European Parliament, garnering overwhelming support with 499 votes in favour, 28 against, and 93 abstentions.

This legislation is set to impose explicit obligations on foundation model providers like OpenAI and Google, in an effort to regulate the use of AI and limit the dangers of the new technology.

However, since the democratisation of AI systems, legislators have been scrambling to catch up with the technology’s rapid development, and the AI Act has resurfaced with a far more pressing need to regulate.

A study conducted by researchers at Stanford University’s Center for Research on Foundation Models (CRFM) focused on the European Parliament's version of the Act.

Among the 22 requirements directed at foundation model providers, the researchers selected 12 requirements that could be assessed using publicly available information.

These requirements were grouped into four categories: data resources, compute resources, the model itself, and deployment practices.

To evaluate compliance, the researchers devised a 5-point rubric for each of the 12 requirements. Their assessment involved examining 10 major model providers, including OpenAI, Google, Meta, and Stability AI, and assigning scores ranging from 0 to 4 based on adherence to the outlined requirements.

The study revealed wide variation in compliance levels, with some providers scoring below 25 per cent of the available points. It also found a widespread lack of transparency among model providers.
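To make the numbers concrete, here is a minimal sketch in Python of how per-requirement scores could roll up into a percentage. The aggregation method (a simple sum over a 48-point maximum, i.e. 12 requirements at 4 points each) and the example scores are assumptions for illustration only; the article does not detail the study’s exact formula.

# Minimal sketch: aggregating 0-4 rubric scores across 12 requirements
# into a compliance percentage. The aggregation method and example
# values are hypothetical, not taken from the Stanford study.

MAX_SCORE = 4          # rubric runs from 0 (non-compliant) to 4 (fully compliant)
NUM_REQUIREMENTS = 12  # requirements assessable from public information

def compliance_percentage(scores):
    """Sum the per-requirement scores and divide by the 48-point maximum."""
    assert len(scores) == NUM_REQUIREMENTS
    assert all(0 <= s <= MAX_SCORE for s in scores)
    return 100 * sum(scores) / (NUM_REQUIREMENTS * MAX_SCORE)

# A provider scoring mostly 0s and 1s lands below the 25 per cent mark:
example_scores = [1, 0, 2, 1, 0, 1, 0, 2, 1, 0, 1, 2]  # sums to 11 of 48
print(f"{compliance_percentage(example_scores):.0f}%")  # prints 23%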

Several areas of non-compliance were identified, including the failure to disclose the status of copyrighted training data, which could play a significant role in shaping new copyright laws tailored to AI-generated content.

Moreover, most providers do not disclose the energy usage and emissions involved in model training, nor the methodologies they use to mitigate potential risks, both of which are important parts of the AI Act.

The research further noted disparities between open and closed AI model providers, with open releases, such as Meta’s LLaMA, offering more comprehensive disclosure of resources than restricted or closed releases, such as OpenAI’s GPT-4.

Challenges of complying with the AI Act within the EU

None of the foundation models studied attained a perfect score, indicating that none fully complies with the requirements outlined in the current draft of the AI Act, according to the study.

While the study notes “ample room for improvement” for providers to align themselves more closely with the requirements, the high-level obligations established in the AI Act may pose a challenge for many companies.

In a recent development, executives from 150 prominent companies such as Siemens, Renault, and Heineken have expressed their concerns regarding the tight regulations in an open letter addressed to the European Commission, the parliament, and member states.

The letter states, “In our assessment, the draft legislation would jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing,” as reported by the Financial Times.

The executives further assert that the proposed rules will place heavy regulations on foundation models, which will, in turn, burden companies involved in the development and implementation of AI systems.


Consequently, they warn that these limitations may prompt companies to consider leaving the EU and investors to withdraw their support for AI development in Europe, which could leave the EU trailing the United States in the AI development race.

The study further suggests there is an urgent need for enhanced collaboration between policymakers and model providers in the EU to effectively address these gaps and challenges, and to find common ground to ensure the appropriate implementation and effectiveness of the AI Act.

