
AI Seoul Summit: World leaders agree to launch network of safety institutes

People pass by screens announcing the upcoming AI Seoul Summit in Seoul, South Korea. Copyright AP Photo/Ahn Young-joon
By Anna Desmarais

The agreement came during a virtual session of the AI Safety Summit hosted jointly by South Korea and the UK.


Ten countries and the European Union will develop more artificial intelligence (AI) safety institutes to align research on machine learning standards and testing.

The international network was agreed during the AI Safety Summit in Seoul, South Korea, where world leaders met virtually.

It will bring together scientists from publicly backed institutions, like the UK’s AI Safety Institute, to share information about AI models’ risks, capabilities and limitations.

The group of institutions will also monitor “specific AI safety incidents” when they occur.

“AI is a hugely exciting technology…but to get the upside, we must ensure it’s safe,” UK prime minister Rishi Sunak said in a press release.

“That’s why I’m delighted we have got an agreement today for a network of AI Safety Institutes.”

Which countries are behind the new safety institutes?

Signatories to this new AI Safety Institute network include the EU, France, Germany, Italy, the UK, the United States, Singapore, Japan, South Korea, Australia and Canada.

The UK claims to have created the world’s first AI Safety Institute last November with an initial investment of £100 million (€117.4 million).

Since then, other countries like the United States, Japan and Singapore have launched their own.

The mission of the UK’s AI Safety Institute is to “minimise surprise to the UK and humanity from rapid and unexpected advances in AI,” a November 2023 press release from the UK government reads.

The EU, having now passed the EU AI Act, is getting ready to launch its AI office. The European Commission previously told Euronews it would hire the new office’s head once the law had been fully approved.

Ursula von der Leyen, president of the European Commission, said at last year’s AI Safety Summit that the AI office would and should have a “global vocation” so that it could “cooperate with similar entities around the world”.

Leaders at the conference also signed up to the wider Seoul Declaration, which affirms the importance of “enhanced international cooperation” in developing human-centric, trustworthy AI.

The first day of the AI Safety Summit this week saw 16 of the world’s biggest tech companies, including OpenAI, Mistral, Amazon, Anthropic, Google, Meta, Microsoft and IBM, agree to a set of safety commitments.

The commitments include setting thresholds for when the risks of AI become too high and being transparent about those thresholds. A statement from the UK government, which co-hosted the event, called the agreement a “historic first”.

France will host the next summit on safe AI use.
