Euroviews. What can the EU learn from China's generative AI regulation before it adopts its AI Act?

A Chinese paramilitary policeman stands on duty in front of the European flag outside the office of the EU delegation to China in Beijing, October 2011. Copyright AP Photo/Euronews
By Dr Kris Shrishak, Senior Fellow, Irish Council for Civil Liberties
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

If the EU is prudent, it could choose to adopt two specific requirements from China’s draft law on generative AI: prohibit the use of copyright-protected content and personal data for training AI models without consent, Dr Kris Shrishak writes.


Companies are in a rush to roll out new versions of generative AI. They are also embedding these AI systems into various products. 

Google recently announced that it will use generative AI to serve search results, mimicking Microsoft's approach. Other companies are banning their employees from using ChatGPT.

As companies grapple with promoting and banning generative AI, the risks from these systems are mounting. 

These generative AI systems are being used as "experiments" in the world despite their flaws being known. What should regulators and lawmakers do about this?

The EU tries to address the risks

Competition and data protection regulators can use, and are using, the tools at their disposal to address AI risks.

Lina Khan, chair of the US competition and consumer protection regulator, the Federal Trade Commission (FTC), has written about how existing powers can be applied. At the same time, the UK competition authority has launched an investigation into generative AI.

The European Data Protection Board has launched a task force to foster the exchange of information on data protection enforcement related to ChatGPT. The Italian Data Protection Authority acted and forced OpenAI to make limited data protection improvements. 

European Commission President Ursula von der Leyen speaks at a plenary session in the European Parliament in Brussels, 29 March 2023. AP Photo/Virginia Mayo

These disparate attempts have only amplified calls for the regulation of AI systems.

On 11 May 2023, lawmakers in the European Parliament voted on a draft AI regulation, also known as the AI Act, which attempts to regulate generative AI systems like ChatGPT by setting minimum requirements. 

Until now, lawmakers in the EU had either ignored or only superficially attempted to address the risks of generative AI. This was despite some of these risks being known already in 2021.

Meanwhile, China is developing its own regulation

However, the EU's attempt is not the first in the world. China proposed a regulation tailored to generative AI in April 2023.

The EU and China have taken contrasting approaches to regulating generative AI. The EU's approach might superficially appear to be tough, but it is China that is placing stringent requirements on generative AI development.

It might come as a surprise that two requirements for generative AI development included in China’s draft law could strengthen protections for people: stronger copyright and data protection.

A doctor examines a magnetic resonance image of a human brain during the world's first competition in neuroimaging between AI and human experts, Beijing, June 2018. AP Photo/Mark Schiefelbein

Lawsuits against Stability AI have raised the question of whether copyright-protected content can be used to develop generative AI without consent. 

China’s draft law has an answer for this. It prohibits the use of data that infringes intellectual property rights to develop generative AI.

China may not have a strong track record in enforcing intellectual property, but this seems to be changing. This change is evident in its draft law. 

The EU, on the other hand, fails to take a firm stand on the issue and only requires developers to provide “a sufficiently detailed summary of the use of training data protected under copyright law.” Just as in the case of Stability AI, the rights holders will have to sue.

You should be the one in charge of your personal data

The General Data Protection Regulation (GDPR) is the EU's flagship regulation, and when it comes to personal data processing by generative AI, the AI Act does not place additional requirements. 


While OpenAI and Google gobble up personal data in their development of generative AI, people in the EU will have to wait for the data protection regulators and the courts to decide the legality of this.

Robin Li, CEO of search giant Baidu, talks about AI during Baidu Create 2018 held in Beijing, July 2018. AP Photo/Ng Han Guan

China, on the other hand, will only allow personal data to be used for generative AI development with consent. 

This means Baidu should not scrape the personal data of individuals in China from the internet for its next generative AI system. What about people in the EU and the rest of the world?

Perhaps the most disappointing part of the GDPR has been its lack of enforcement. 

You might imagine that the EU would have learnt from its mistake and made its AI law easier to enforce. Alas, that is not to be.


What about truth?

Irresponsible deployment of generative AI should have prompted lawmakers to require third-party assessment of these technologies before they are deployed to the world at large. 

Instead, the EU continues to rely on self-assessments by the developers of these systems. This is concerning and could create another enforcement debacle, as with the GDPR.

A man wearing headphones walks toward EU flags as he is reflected in the marble of the European Commission building in Brussels, August 2020. AP Photo/Virginia Mayo

If the EU is prudent, it could choose to adopt two specific requirements from China’s draft law on generative AI: prohibit the use of copyright-protected content and personal data for training AI models without consent.

Text-generative AI systems like ChatGPT are sometimes termed “bullshit generators”: they can generate and spread misinformation, and they have no notion of truth. 

If the developers were to embed truth in them, whose truth would it be?


If China has its way, then these AI systems will generate outputs where the oppression of Uyghurs in Xinjiang is described as “combating separatism, extremism, and terrorism.” 

The EU draft law, at this point, has nothing to say about truth.

Dr Kris Shrishak is a Senior Fellow at the Irish Council for Civil Liberties, Ireland’s oldest independent human rights monitoring organisation.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.

