
AI 'less regulated than sandwiches' as tech firms race toward superintelligence, study says

The 2025 Winter AI Safety Index report, released by the non-profit organisation the Future of Life Institute (FLI), evaluated eight major AI firms. Copyright: Canva
By Pascale Davies

Eight leading AI companies, including OpenAI, Meta, Anthropic, and DeepSeek, do not have credible plans to prevent catastrophic AI risks, a new study shows.

The world’s largest artificial intelligence (AI) companies are failing to meet their own safety commitments, according to a new assessment that warns these failures come with “catastrophic” risks.

The report comes as AI companies face lawsuits and allegations that their chatbots cause psychological harm, including by acting as a “suicide coach,” as well as reports of AI-assisted cyberattacks.

The 2025 Winter AI Safety Index report, released by the non-profit organisation the Future of Life Institute (FLI), evaluated eight major AI firms, including US companies Anthropic, OpenAI, Google DeepMind, xAI, and Meta, and the Chinese firms DeepSeek, Alibaba Cloud, and Z.ai.

It found a lack of credible strategies for preventing catastrophic misuse or loss of control of AI tools as companies race toward artificial general intelligence (AGI) and superintelligence, a form of AI that surpasses human intellect.

Independent analysts who studied the report found that no company had produced a testable plan for maintaining human control over highly capable AI systems.

Stuart Russell, a computer science professor at the University of California, Berkeley, said that AI companies claim they can build superhuman AI, but none have demonstrated how to prevent loss of human control over such systems.

"I'm looking for proof that they can reduce the annual risk of control loss to one in a hundred million, in line with nuclear reactor requirements," Russell wrote. "Instead, they admit the risk could be one in ten, one in five, even one in three, and they can neither justify nor improve those numbers."

How did the companies rank?

The study measured the companies across six critical areas: risk assessment, current harms, safety frameworks, existential safety, governance and accountability, and information sharing.

While it noted progress in some categories, the independent panel of experts found that implementation remains inconsistent and often lacks the depth required by emerging global standards.

Anthropic, OpenAI, and Google DeepMind were praised for relatively strong transparency, public safety frameworks, and ongoing investments in technical safety research. Yet they still had weaknesses.

Anthropic was faulted for discontinuing human uplift trials and for shifting towards training on user interactions by default, a decision experts say weakens privacy protections.

OpenAI faced criticism for ambiguous safety thresholds, lobbying against state-level AI safety legislation, and insufficient independent oversight.

Google DeepMind has improved its safety framework, the report found, but still relies on external evaluators who are financially compensated by the company, undermining their independence.

“All three top companies suffered from current harms due to recent scandals – psychological harm, child suicides, Anthropic’s massive hacking attack – [and] all three have room for improvement,” Max Tegmark, FLI’s president and a professor at the Massachusetts Institute of Technology (MIT), told Euronews Next.

The remaining five companies showed uneven but notable progress, according to the report. However, it warned there was still room for improvement.

For example, xAI published its first structured safety framework, though reviewers warned it was narrow and lacked clear mitigation triggers.

Z.ai was the only company to allow uncensored publication of its external safety evaluations, but reviewers recommended that it publish its full safety framework and governance structure, with clear risk areas, mitigations, and decision-making processes.

Eight leading AI companies, including OpenAI, Meta, Anthropic, and DeepSeek, do not have credible plans to prevent catastrophic AI risks, a new study shows. Future of Life Institute

Meta introduced a new frontier safety framework with outcome-based thresholds, but reviewers said the company should clarify its methodologies and share more robust internal and external evaluation processes.

DeepSeek was credited for internal advocacy by employees but still lacks basic safety documentation.

Alibaba Cloud was found to have contributed to binding national standards on watermarking requirements, but the report said it could strengthen model robustness and trustworthiness by raising performance on truthfulness, fairness, and safety benchmarks.

Euronews Next contacted the companies for their responses to the report but did not receive replies by the time of publication.

‘Less regulated than sandwiches’

“I hope we get beyond companies scaling [up based] on their reputation,” Tegmark said.

“[On] the question to companies on their plans to control AGI, none had a plan,” he added.

Meanwhile, tech companies such as Meta are using superintelligence as a buzzword to hype up their latest AI models. This year, Meta named its large language model (LLM) division Meta Superintelligence Labs.

Tegmark said there is a big shift in discussions around AGI and superintelligence. While technologists once described it as a real-world possibility in the next 100 years, they now say it could be in the next several years.

“AI is also less regulated than sandwiches [in the United States], and there is continued lobbying against binding safety standards in government,” he said.

But Tegmark noted that, on the other hand, there is an unprecedented backlash against the pursuit of AGI and superintelligence without adequate controls.

In October, thousands of public figures, including AI and technology leaders, called for AI firms to slow down their pursuit of superintelligence.

The petition, organised by FLI, garnered signatures from across the political spectrum, including Steve Bannon (formerly US President Donald Trump's chief strategist), Susan Rice (former US National Security Advisor under President Barack Obama), religious leaders, and many other former politicians, as well as prominent computer scientists.

“What do these people have in common? They agreed on a statement. I think [it is] extremely significant that Trump’s deep MAGA base to faith leaders, those on the left and labour movements agree on something,” said Tegmark.

“Superintelligence would make every single worker unable to make a living, as all the jobs are taken by robots. People would be dependent on handouts from the government on the right, seen as a handout and on the left, it would be seen as a 1984 government,” he said. “I think what’s happening is people [are] coming to a head.”
