Euroviews. How should we regulate generative AI, and what will happen if we fail?

The ChatGPT app is displayed on a mobile phone, May 2023 (AP Photo/Euronews)
By Rohit Kapoor, Vice Chairman and CEO, EXL
The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists, Rohit Kapoor writes.

Generative AI is experiencing rapid growth and expansion. 

There’s no question as to whether this technology will change the world — all that remains to be seen is how long it will take for the transformative impact to be realised and how exactly it will manifest in each industry and niche. 

Whether it’s fully automated and targeted consumer marketing, medical reports generated and summarised for doctors, or chatbots with distinct personality types being tested by Instagram, generative AI is driving a revolution in just about every sector.

The potential benefits of these advancements are monumental. Quantifying the hype, a recent report by Bloomberg Intelligence predicted an explosion in generative AI market growth, from $40 billion (€36.5bn) in 2022 to $1.3 trillion (€1.18tn) over the next ten years. 

But in all the excitement to come, it’s absolutely critical that policy-makers and corporations alike do not lose sight of the risks of this technology.

These large language models, or LLMs, present dangers that not only undermine the usefulness of the information they produce but could also cause harm in entirely unintended ways, from bias to blurring the lines between real and artificial to loss of control.

Who's responsible?

The responsibility for taking the reins on regulation falls naturally to governments and regulatory bodies, but it should also extend beyond them. The business community must self-govern and contribute to principles that can become regulations while policy-makers deliberate.

Those developing and running generative AI should follow two core principles as soon as possible, in order to foster responsible use and mitigate negative impacts. 

EU lawmakers vote on the AI Act at the European Parliament in Strasbourg, 14 June 2023 (AP Photo/Jean-Francois Badias)

First, large language models should only be applied to closed data sets to ensure safety and confidentiality. 

Second, all development and adoption of use cases leveraging generative AI should have the mandatory oversight of professionals to ensure “humans in the loop”.

These principles are essential for maintaining accountability, transparency, and fairness in the use of generative AI technologies.
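
To make the second principle concrete, here is a minimal sketch in Python of one hypothetical way a "humans in the loop" rule could be enforced in software: nothing an AI drafts is released until a named reviewer has signed off. The Draft, approve and publish names are illustrative assumptions, not an existing system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Draft:
    """An AI-generated output awaiting human review (hypothetical model)."""
    content: str
    approved_by: str | None = None
    approved_at: datetime | None = None

def approve(draft: Draft, reviewer: str) -> Draft:
    """Record which human reviewer signed off on the draft, and when."""
    draft.approved_by = reviewer
    draft.approved_at = datetime.now(timezone.utc)
    return draft

def publish(draft: Draft) -> str:
    """Refuse to release any output that no human has approved."""
    if draft.approved_by is None:
        raise PermissionError("Human-in-the-loop check failed: no reviewer sign-off.")
    return draft.content

# Usage: the medical-report example from above, gated on a doctor's approval.
report = Draft(content="Summary of the patient's lab results ...")
approve(report, reviewer="dr_jones")
print(publish(report))  # raises PermissionError if approve() was never called
```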

From there, three main areas will need attention from a regulatory perspective.

Maintaining our grip on what’s real

The capability of generative AI to mimic reality is already quite astounding, and it is improving all the time. 

So far this year, the internet has been awash with startling images like the Pope in a puffer jacket or the Mona Lisa as she would look in real life. 

And chatbots are being deployed in unexpected realms like dating apps — where the introduction of the technology is reportedly intended to reduce “small talk”.

The wider public should feel no guilt in enjoying these creative outputs, but industry players and policy-makers must be alert to the dangers of this mimicry. 

Amongst them are identity theft and reputational damage. 

Distinguishing between AI-generated content and content genuinely created by humans is a significant challenge, and regulation should consider the consequences and surveillance aspects of it.

A person uses the dating app Tinder, July 2015 (AP Photo/Tsering Topgyal)

Clear guidelines are needed to determine the responsibility of platforms and content creators to label AI-generated content. 

Robust verification systems like watermarking or digital signatures would support this authentication process.
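
As a concrete illustration of the digital-signature idea, the sketch below uses the Ed25519 scheme from the widely used Python cryptography library to sign a provenance label and verify it later. The workflow is deliberately simplified and assumed for illustration; production provenance standards, such as C2PA content credentials, involve considerably more machinery.

```python
# Requires: pip install cryptography
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The publisher generates a key pair once; the public key is shared openly.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A provenance label attached to a piece of content (illustrative).
label = b"image-1234: generated by model X, 2023-05-01"

# The publisher signs the label before distribution.
signature = private_key.sign(label)

# Anyone with the public key can check that the label was not tampered with.
try:
    public_key.verify(signature, label)
    print("Signature valid: the AI-content label is authentic.")
except InvalidSignature:
    print("Signature invalid: the label or content was altered.")
```

In practice, the signing key would sit with the platform or model provider, and verification could be built into browsers and social platforms so that labels on AI-generated content are checked automatically.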

Tackling imperfections that lead to bias

Policy-makers must set about regulating the monitoring and validation of imperfections in the data, algorithms and processes used in generative AI. 

Bias is a major factor. Training data can be biased or inadequate, resulting in bias in the AI itself. 

For example, this might cause a company chatbot to deprioritise complaints from customers of a certain demographic, or a search engine to return biased answers to queries. Biases in algorithms can then perpetuate those unfair outcomes and discrimination.

Customers look at packages of pasta on sale in a supermarket in Milan, June 2023 (AP Photo/Luca Bruno)

Regulations need to force the issue of transparency and push for clear documentation of processes. This would help ensure that processes can be explained and that accountability is upheld. 

At the same time, it would enable scrutiny of generative AI systems, including safeguarding of intellectual property (IP) and data privacy — which, in a world where data is the new currency, is crucially important.

On top of this, regulating the documentation involved would help address AI "hallucinations", instances where an AI gives a response that is not justified by the data used to train it.

Preventing the tech from becoming autonomous and uncontrollable

An area for special caution is the potential for an iterative process of AI creating subsequent generations of AI, eventually leading to AI that is misdirected or that compounds errors. 

The progression from first-generation to second- and third-generation AI is expected to occur rapidly. 

A fundamental requirement is the self-declaration of AI models, whereby each model openly acknowledges its AI nature. This is of utmost importance. 

Visitors at various booths during the three-day 7th AI Expo in Tokyo, May 2023 (Richard A. Brooks/AFP)

However, enabling and regulating this self-declaration poses a significant practical challenge. One approach could involve mandating that hardware and software companies implement hardcoded restrictions, allowing only a certain threshold of AI functionality. 

Advanced functionality above such a threshold could be subject to system inspections, audits, testing for compliance with safety standards, and restrictions on the scale and security of deployment. Regulators should define and enforce these restrictions to mitigate risks.
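
What such a threshold mechanism might look like in code is an open question; the sketch below is one simplified, hypothetical interpretation in Python, in which deployment is refused unless a model self-declares as AI and, above a regulator-defined capability score, carries proof of an audit. Every name and field here is an assumption made for illustration.

```python
from dataclasses import dataclass

CAPABILITY_THRESHOLD = 70  # hypothetical regulator-defined capability score

@dataclass
class ModelManifest:
    """Hypothetical self-declaration shipped with every AI model."""
    name: str
    declares_as_ai: bool                  # the model openly acknowledges its AI nature
    capability_score: int                 # stand-in for a standardised capability measure
    audit_certificate: str | None = None  # issued after inspection and compliance testing

def may_deploy(manifest: ModelManifest) -> bool:
    """Allow deployment only with self-declaration, plus an audit above the threshold."""
    if not manifest.declares_as_ai:
        return False  # no deployment at all without self-declaration
    if manifest.capability_score > CAPABILITY_THRESHOLD:
        return manifest.audit_certificate is not None
    return True

# Usage: low-capability models pass; high-capability models need an audit certificate.
assert may_deploy(ModelManifest("chat-small", True, 40))
assert not may_deploy(ModelManifest("frontier-xl", True, 95))
assert may_deploy(ModelManifest("frontier-xl", True, 95, audit_certificate="AUD-001"))
```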

We should be acting quickly and together

The world-changing potential of generative AI demands a coordinated response. 

If each country and jurisdiction develops its own rules, the adoption of the technology — which has the potential for enormous good in business, medicine, science and more — could be crippled. 

Regulation must encourage collaboration and research between all the major players, from experts in the field to policy-makers and ethicists. 

With a coordinated approach, the risks can be sensibly mitigated, and the full benefits of generative AI realised, unlocking its huge potential.

Rohit Kapoor is the Vice Chairman and CEO of EXL, a data analytics and digital operations and solutions company.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.
