
How the global competition to win the AI race is going

By Theodora Iliadi

Global battle to lead in AI: Regulation, geopolitics, security and the new rules that will determine the future of AI

2025 finds the world at one of the most critical moments in technological history. Artificial intelligence (AI) is no longer a promise or a vague threat of the future. It is already here, reshaping economies, overturning power balances, influencing elections and raising questions about the limits of human responsibility.

The debate is no longer whether AI will change everything, but who will determine how.

Governments, regulators, tech giants, and leading researchers are clashing over the architecture of regulation: what rules will be set, to whom they will apply, who will be protected, and who will have the power to halt or certify a powerful AI model.


Europe is moving forward with the AI Act, the United States is trying to build a more flexible model driven by innovation, while China is seeking to combine technological leadership with strict state control. And in the meantime, scientists like Geoffrey Hinton, Yoshua Bengio, Stuart Russell and Timnit Gebru are sounding the alarm.

The result is a multifaceted battle for the future. And 2025 is the year when it all begins to crystallise.

The Fragmented Landscape of Global Regulation

The European Union: The most ambitious - and most rigorous - approach

The European Union is the first region in the world to decide that AI needs a comprehensive institutional framework. The EU AI Act, the flagship of European digital policy, attempts to introduce a single risk categorisation (sketched in code after this list):

  • Unacceptable risk
  • High risk
  • Limited risk
  • Low risk
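
To make the tiering concrete, here is a minimal Python sketch of how a compliance team might tag its own AI use cases by tier. The tier names come from the Act; the example use cases and the default-to-high rule are assumptions for illustration, not a legal classification.

```python
from enum import Enum

# The four tiers named in the EU AI Act. The example use cases in the
# mapping below are illustrative assumptions, not a legal mapping.
class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # strict obligations (e.g. hiring, credit)
    LIMITED = "limited"            # transparency duties (e.g. chatbots)
    LOW = "low"                    # minimal obligations (e.g. spam filters)

# Hypothetical internal mapping a compliance team might maintain.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "cv_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.LOW,
}

def risk_tier(use_case: str) -> RiskTier:
    # Default unknown systems to HIGH so they are reviewed rather than
    # waved through - a design assumption, not a requirement of the Act.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

print(risk_tier("cv_screening").value)  # high
```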

The logic is clear: protect citizens and fundamental rights, and prevent abuse in critical areas such as health, justice, education, and public administration.

But the devil is in the details.

At the core of European concern is the idea that AI cannot be left to complete market freedom, but must operate in a secure environment.

Margrethe Vestager, former executive vice-president of the European Commission, has said: "We cannot let AI develop unchecked. Protecting citizens is a prerequisite for innovation."

And European Parliament President Roberta Metsola has said, "AI can transform Europe, but only if there are rules to ensure that it serves humans".

However, many tech companies argue that overregulation stifles innovation.

Vassilis Stoidis, CEO of 7L International and MassiveGRID, said: "We don't need a specific regulatory framework for AI. Existing data protection legislation for the individual should also cover AI. We need to avoid over-regulation, as this leads to the dismantling of individual rights and progress.

"However, on AI, Europe can strengthen and simplify its legislation on the protection of the individual. Simplification is also an extra strengthening of individual rights".

Europe is trying to become the "regulatory model" of the planet, but it faces a colossal problem: it doesn't have its own tech giants to implement its strategy at scale.

Tech companies fear that regulation will slow down innovation, while many SMEs are worried about the cost of compliance.

"Yes, there is a risk of disadvantage for European companies. However, in the wake of AI, Europe can strengthen and simplify its legislation to protect the individual. Simplification is also an extra strengthening of individual rights," saidStoidis.

On the other hand, technology rights organisations argue that the regulation is not strict enough for the most powerful systems - the so-called frontier models.


The United States: regulation by the back door

The US does not have a uniform law like the EU AI Act. Instead, it relies on:

  • Executive orders
  • Guidelines for federal agencies
  • State-level legislative initiatives
  • Export controls on advanced chips

Washington adopts one principle: don't stifle innovation.

The American model wants to give companies room to grow. But at the same time, the administration is trying to limit the spread of strategic technologies to China through export controls.

According to reports on Thursday, US President Donald Trump is considering a draft executive order that would pressure states to halt state-level regulation of artificial intelligence.


China: Control, speed and strategic superiority

China has adopted some of the most stringent yet fastest-moving regulations in the world. Since 2022 it has introduced:

  • Regulation of algorithms
  • Rules for deepfakes
  • A sophisticated state licensing system

The philosophy is based on state oversight: AI is a strategic infrastructure and must be consistent with the interests of the state.

The Chinese approach allows for very rapid adoption of new technologies at scale. But it is criticised for lack of transparency, absence of independent control and restrictions on freedom of use.

Yoshua Bengio: "The most powerful models should not go unregulated"

Bengio, one of the three "godfathers" of AI, has become one of the most vocal advocates of regulating frontier models: huge systems that may acquire unpredictable capabilities.


He suggests:

  • Independent safety testing
  • Mandatory transparency of training data
  • International coordination similar to that for nuclear energy

Geoffrey Hinton: Fears his own creation

Hinton, who left Google so he could speak freely, is perhaps the most iconic figure in the debate.

In his talks, he explains that large-scale models develop unpredictable behaviours.


He insists that the following are required:

  • International cooperation
  • Limits on the autonomy of systems
  • A gradual transition to secure architectures

Stuart Russell: We built machines we don't know how to control

Russell, one of the most respected academics in AI safety, argues that the fundamental mistake is the traditional design of systems that maximise a fixed objective.

He argues that systems should remain uncertain about their goals; only then can humans correct them.

He proposes a new architecture: "AI systems that defer to humans".

Timnit Gebru: The voice of ethics and accountability

Gebru, who was forced out of Google amid a controversy over AI ethics research, stresses that the debate cannot be only about safety; it must also be about fairness.

She highlights the risks of discrimination, bias and social inequality.


The Big Fault Lines in the Global Conversation

Who will set the rules?

The EU, the US, and China have three completely different models of regulation. The result is that companies operate like "multinationals in three worlds".

The question is increasingly being asked:

Can AI be regulated at a national level?

Most experts believe not.

The Transparency Dilemma

The big models are 'black boxes'. Even their creators cannot fully explain why they produce specific answers.

"There are closed source and open source models. Each company chooses its strategy. History has shown that open source wins, I believe we will see the same in AI," Stoidis said.

Frontier Models and the Black Hole of Power

The frontier models of the next two years will have thousands of times more parameters than the current ones. They will be able to:

  • Generate code autonomously
  • Conduct scientific research
  • Manage crises
  • Perform complex tasks without human supervision

Who will certify them? Who will decide if they are safe?

This is the hottest topic of discussion.

Towards a new international regulatory architecture

Experts involved in international initiatives (G7, OECD, UN AI Advisory Board, etc.) are proposing a new model of global cooperation.

This architecture includes:

Frontier AI International Certification Authority

An international body that:

  • Tests models before they are released
  • Assesses capabilities, risks and vulnerabilities
  • Issues binding certificates

Training and Transparency Registry

Mandatory disclosure of:

  • Training resources
  • Computing power
  • The basic principles of model operation

This does not mean disclosing trade secrets; it means democratic accountability (see the sketch below).
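
As a thought experiment, a registry entry might look like the minimal Python sketch below. Every name in it (the record fields, the example model, the example lab) is a hypothetical illustration of the disclosure categories above, not an existing reporting standard.

```python
from dataclasses import dataclass, field

# A hypothetical registry entry illustrating the disclosure categories
# above. All field names are assumptions for illustration only.
@dataclass
class DisclosureRecord:
    model_name: str
    developer: str
    training_data_summary: str     # broad description, not the data itself
    training_compute_flops: float  # aggregate compute, not architecture secrets
    intended_use: str
    known_limitations: list = field(default_factory=list)

record = DisclosureRecord(
    model_name="example-frontier-1",
    developer="Example Lab",
    training_data_summary="Public web text and licensed corpora",
    training_compute_flops=1e25,
    intended_use="General-purpose assistant",
    known_limitations=["may fabricate facts", "weak non-English coverage"],
)
print(record.model_name, f"{record.training_compute_flops:.0e} FLOPs")
```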

Mandatory Safety Tests

Safety tests will examine the following (a toy sketch follows the list):

  • Ability to misinform
  • Ability to generate malicious code
  • Manipulation
  • Unwanted emergent abilities
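
For a sense of what such a harness might look like in practice, here is a toy Python sketch that registers placeholder checks for two of the categories above. The checks are deliberately trivial stand-ins; real evaluations of manipulation or emergent abilities are open research problems and are omitted here.

```python
# A toy sketch of how a certification body might organise named safety
# checks into a pass/fail harness. The category names come from the list
# above; everything else is an assumption for illustration.
from typing import Callable, Dict

SAFETY_CHECKS: Dict[str, Callable[[str], bool]] = {}

def safety_check(name: str):
    """Register a named check; each check inspects a model output and
    returns True if the output is acceptable."""
    def register(fn: Callable[[str], bool]):
        SAFETY_CHECKS[name] = fn
        return fn
    return register

@safety_check("misinformation")
def no_fabricated_claims(output: str) -> bool:
    # Placeholder: a real evaluation would compare claims against vetted sources.
    return "flat earth" not in output.lower()

@safety_check("malicious_code")
def no_malicious_code(output: str) -> bool:
    # Placeholder: a real test would combine static analysis with red-teaming.
    return "rm -rf /" not in output

def run_all(output: str) -> Dict[str, bool]:
    return {name: check(output) for name, check in SAFETY_CHECKS.items()}

print(run_all("The Earth orbits the Sun."))
# {'misinformation': True, 'malicious_code': True}
```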

Civil Rights in the Age of AI

Privacy, the right to an explanation, the right not to be profiled without consent, and the right to human oversight.

Economic Incentives for Safe Innovation

Subsidies for secure models, tax incentives for implementing standards, and funding for small labs so that innovation is not monopolised by giants.

Global Agreement on AGI & Frontier AI

An international treaty setting limits on the development of models that exceed specified computational thresholds.

Many scientists believe this must be done before systems with human-level general intelligence emerge.


Who will win the battle?

The battle to regulate AI is not only institutional. It is also:

  • Economic (who will lead the industry)
  • Geopolitical (who will set the global standards)
  • Social (who will be protected)
  • Democratic (who will control the information)

The big question remains:

Will AI serve society or define it?

The answer depends on the decisions that will be taken in the next two years. The window will not remain open for long.
