‘Humanity needs to wake up’ to AI threats, Anthropic CEO says

FILE - Dario Amodei, CEO and co-founder of Anthropic, attends the annual meeting of the World Economic Forum in Davos, Switzerland, Jan. 23, 2025
Copyright AP Photo/Markus Schreiber, File
By Anna Desmarais

Dario Amodei, the CEO of Anthropic, says that humanity needs to regulate the use of AI, otherwise it could lead to the creation of autocratic governments that utilise the technology to suppress populations.

The world is entering a stage of artificial intelligence (AI) development that is testing “who we are as a species,” warns Anthropic’s CEO in a sweeping essay.

Dario Amodei argues that humanity is entering an age of “technological adolescence,” where AI is advancing faster than legal systems, regulatory frameworks and society can keep pace.

He argues that AI could become “smarter than a Nobel Prize winner” across most relevant fields, such as biology, programming, maths, engineering and writing, in as little as two years.

When these AI systems work together, Amodei likens them to “a country of geniuses in a data centre,” capable of completing complex tasks at least 10 times faster than a human in fields such as software design, cyber operations and even relationship building.

This combination of superhuman intelligence, autonomy and the difficulty of controlling the technology is “both plausible and a recipe for existential danger,” he wrote.

“Humanity needs to wake up, and this essay is an attempt - a possibly futile one, but it’s worth trying - to jolt people awake,” he said.

Amodei’s essay comes after his company published an 80-page “constitution” for its Claude chatbot last week, which sets out how the company will help its AI behave in a safe and ethical way.

Amodei is not the only person warning about AI’s potential dangers. A 2025 report backed by 30 countries said that advanced AI systems could create extreme new risks, such as widespread job losses, enabling terrorism, or losing control over the technology.

Fellow tech leaders, including OpenAI’s Sam Altman and Apple co-founder Steve Wozniak, have also warned about the risks of AI.

AI is a ‘civilisational challenge’

While Amodei stops short of saying that disaster is inevitable, he warns that AI is a serious “civilisational challenge”.

“AI is so powerful, such a glittering prize, that it is very difficult for human civilisation to impose any restraints on it at all,” he wrote.

Powerful AI systems could be used to advise governments, organisations or individuals about geopolitics, diplomacy or military planning, Amodei added.

The greatest danger is that autocrats use that AI-generated advice to “permanently steal” the freedom of citizens under their control and “impose a totalitarian state from which they can’t escape,” he wrote.

Large-scale use of AI for surveillance, he adds, should be considered a crime against humanity.

Amodei said there’s a risk the world could be split up into autocratic spheres, each using AI to monitor and repress its population.

“A powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming and stamp them out before they grow,” the essay reads.

Amodei identifies China's government as the primary concern, given its combination of AI prowess, autocratic governance, and existing high-tech surveillance infrastructure.

Amodei also said that democracies that are competitive in AI, non-democratic countries with large data centres, and AI companies themselves are potential actors who could misuse the technology.

Chips ‘the greatest bottleneck’

Controlling the sale of advanced computer chips that are used to train AI models is the most effective way to fight back, he wrote.

Democracies should not sell these technologies to authoritarian states, particularly China, which is widely considered the main competitor with the United States in the AI race, Amodei added.

“Chips and chip-making tools are the single greatest bottleneck to powerful AI, and blocking them is a simple but extremely effective measure, perhaps the most important single action we can take,” he said.

Beyond export controls, Amodei advocated for industry-wide coordination and social oversight. He called for transparency laws that compel AI companies to disclose how they guide their models’ behaviour.

He cites California’s SB-53 law, known as the Transparency in Frontier Artificial Intelligence Act (TFAIA), as one example.

The law requires AI companies to publish frameworks on their websites describing how they incorporate national and international best practices and standards into their AI models, according to California Governor Gavin Newsom.

But Amodei was also upbeat about AI’s future.

“I believe if we act decisively and carefully, the risks can be overcome – I would even say our odds are good. And there’s a hugely better world on the other side of it. But we need to understand that this is a serious civilisational challenge,” he said.
