
Extremists could use AI to make bioweapons capable of sparking future pandemics, tech experts warn

A member of the UME (Emergency Army Unit) wears a protective suit at a facility in Madrid, Spain, on March 31, 2020. Copyright: Manu Fernandez/AP Photo
By Gabriela Galvin

Existing and emerging AI tools could technically be used to create a bioweapon capable of starting a pandemic, experts said.

Earlier this year, some of the world’s top technology and counter-terrorism experts met to work through a hypothetical scenario: a global pandemic sparked by a novel enterovirus strain intentionally created by an extremist group using artificial intelligence (AI).

The scenario may sound like the plot of a science fiction novel, but it is entirely plausible in the coming years, according to the group of 14 experts who met to discuss AI safeguards for the life sciences.

The experts found the fictional pandemic – which envisioned 850 million cases and 60 million deaths worldwide – “deeply concerning and worthy of near-term action to prevent,” according to a report on the group’s discussions.

AI is already revolutionising the medical field, promising to speed the development of new drugs and vaccines. But experts have also raised concerns that an AI-powered bioweapon could wreak havoc on humanity.

The expert group, which was convened by the Nuclear Threat Initiative and the Munich Security Conference in February, warned that AI’s rapid evolution is “eroding barriers to bioweapons development by malicious actors”.

These threats are not far off in the future, the group found. It would be technically possible to use existing and emerging AI-driven biological tools to create new pathogens with pandemic-level risks, the experts said.

What’s more, current security measures are woefully unprepared to tackle these threats, the group said.

The experts called for greater cooperation between global leaders to assess and respond to AI-driven biological threats.

They also said efforts to manage AI risks should be balanced against the potential benefits of these technologies to “avoid placing undue constraints” on scientific innovation.
