'Potentially disastrous' for innovation: Tech sector reacts to the EU AI Act saying it goes too far

The EU has agreed to provisional AI rules, but the tech sector says they stifle competition
By Pascale Davies

The EU AI Act still isn’t set in stone and the European elections could shake things up. For now, the tech industry fears the act may stifle competition.


Agreed after a marathon 36-hour negotiation this month, the EU AI Act is being hailed as historic, but reactions from the continent’s tech sector, rights groups and politicians are mixed.

The European Union agreed to the world’s first set of provisional rules to regulate artificial intelligence (AI) late last Friday but the details of the legislative text are still being figured out before they are set in stone.

The rules categorise AI applications into four risk levels and impose the strictest rules on high-risk and prohibited AI.

One of the sticking points in the negotiations was how foundation models, the technology that underpins OpenAI's ChatGPT, would be regulated.

'Never a good idea'

France and Germany warned against over-regulation as they wanted to protect their champion AI start-ups.

French President Emmanuel Macron delivers his speech during the 18th edition of the French conference on the maritime economy in Nantes, western France, Tuesday, Nov. 28, 2023. Damien Meyer, Pool via AP

“We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea,” French President Emmanuel Macron said on Monday (11 December).

"When I look at France, it is probably the first country in terms of artificial intelligence in continental Europe. We are neck and neck with the British. They will not have this regulation on foundational models. But above all, we are all very far behind the Chinese and the Americans," he added, referring to the French AI start-up Mistral.

The EU plans to regulate foundation models by requiring developers to provide documentation covering training methods and data. Users will also have the right to lodge complaints, and the models will be prohibited from discriminating.

Companies that fail to comply with the rules face fines of up to €35 million or 7 per cent of global revenue. Some say this goes too far.

'Potentially disastrous consequences'

The Computer & Communications Industry Association said the text largely departs from the "sensible risk-based approach" proposed by the Commission, which prioritised innovation above overly prescriptive regulation.

The organisation said that the act imposed "stringent obligations" on developers of cutting-edge technologies that underpin many downstream systems and is therefore likely to hinder innovation in Europe. This could lead to an exodus of AI talent, it warned.

"Regrettably speed seems to have prevailed over quality, with potentially disastrous consequences for the European economy. The negative impact could be felt far beyond the AI sector alone," said Daniel Friedlaender, senior vice president and head of CCIA Europe.

'Doesn’t support Europe’s champions'

France Digitale, an independent organisation that represents European start-ups and investors, said that AI systems in the high-risk category will have to obtain a CE mark, a long and costly process that could hurt start-ups.


But the group welcomed the fact that start-ups operating in high-risk sectors can contest the designation by demonstrating that their AI is not high-risk and should be recategorised.

As for generative AI and foundation models, France Digitale said the regulation is "very strict" and could also hurt companies as they will have to disclose their private business models, which other companies could then copy.

"We called for not regulating the technology as such, but regulating the uses of the technology. The solution adopted by Europe today amounts to regulating mathematics, which doesn't make much sense," the group said.

France Digitale also warned that the Commission can add further criteria through delegated acts, which can be risky for start-ups that "need visibility and predictability to develop their business models".

"We can't change the rules of the game at any time," the group said.


‘Embracing’ copyright rules

Most AI models are trained on material that is found online, which has prompted a series of copyright lawsuits by artists and the companies that represent them against the AI firms.

The act has strict copyright rules, which include having to respect the EU’s current copyright law. Companies must also make public a summary of the content they use for training general-purpose AI models.


This transparency requirement, and the obligation to adhere to the EU’s current copyright rules, have been welcomed by the European Authors’ Societies (GESAC), which represents 32 European author societies and more than one million authors.

"A robust implementation allowing rightsholders to properly exercise their rights under EU law is crucial to ensure that the principles agreed upon have a real impact in practice," said Véronique Desbrosse, the association’s general manager.

"Authors’ societies look forward to embracing this new market and generating value for creators and businesses alike while contributing both to innovation and creation in Europe".


Cybersecurity and facial recognition

The EU AI Act places strict restrictions on facial recognition technology and other behavioural-analysis systems, with exceptions for law enforcement.

The restrictions on technology being used for facial recognition have been welcomed as have data protection rules.

While the act contains no specific data-protection provisions, it is designed to work alongside the GDPR, the EU’s data protection regulation.

Facial recognition technology will have strict rules. Canva

However, cybersecurity industry executive ​​Valmiki Mukherjee told Euronews Next the act could face similar challenges to those of GDPR.

"Applying the act to general-purpose AI systems without restricting their use by labelling them all as high-risk could be challenging," he said.


"There’s also a potential issue of creating one large international surveillance system to prevent surveillance-based AI. It’s unclear how this will work with cybersecurity standards that are still being developed".

'Future-proofing a powerful technology'

The original draft text is still being finalised, a process that some commentators say could continue until January 2024 or even beyond. There is also another time pressure: the European Parliament elections in June, which could shake things up on the items that still need to be agreed.

"There doesn’t appear to be sufficient time now before the Parliament elections to get the AI Liability Directive through the legislative process, so that will have to be picked up by the new Parliament and the new Commission that it will appoint," said Benjamin Docquir, head of IT and data at the international law firm Osborne Clarke.

The new EU Parliament may also have to decide on legislation on AI in the workplace.

Another matter still to be decided is the regulation of open-source AI software, whose computer code can be freely copied and reused, allowing anyone to build their own chatbot.


OpenAI and Google have warned that open-source software can be dangerous as the technology can be used to spread disinformation.

As AI technology is developing quickly and the EU AI Act is unlikely to be enforced by EU members for another two years, the regulation could already be outdated by the time it takes effect, despite efforts to make it flexible.

"As to what may change in the AI Act, the legislators have endeavoured to make the AI Act flexible, but the emergence of generative AI has demonstrated the difficulty of future-proofing in relation to such a powerful technology," Docquir said.

