Representatives from 28 countries, including the US and China, as well as the EU, signed a pact that aims to tackle the risks of so-called frontier AI models.
International governments signed a “world-first” agreement on artificial intelligence (AI) at a global summit in the United Kingdom to combat the "catastrophic" risks the technology could present.
Tech experts, global leaders and representatives from across 27 countries and the European Union are attending the UK’s AI Safety Summit, which runs from Wednesday until Thursday at Bletchley Park, once home to Second World War codebreakers.
The UK announced it would invest in an AI supercomputer, while the Tesla and X boss Elon Musk said on the sidelines of the event that AI is "one of the biggest threats to humanity".
However, many in the tech community signed an open letter calling for a spectrum of approaches, from open source to open science, and for scientists, tech leaders and governments to work together.
Here are the key takeaways from the event.
The AI agreement
The Bletchley Declaration on AI safety is a statement signed by representatives of 28 countries, including the US and China, as well as the EU. It aims to tackle the risks of so-called frontier AI models - the large language models developed by companies such as OpenAI.
The UK government called it a “world-first” agreement between the signatories, which aims to identify the “AI safety risks of shared concern” and build “respective risk-based policies across countries”.
It warns that frontier AI - the most sophisticated form of the technology, used in generative models such as ChatGPT - has the "potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models".
The UK’s Secretary of State for Science, Innovation and Technology, Michelle Donelan, said the agreement was a “landmark achievement” and that it “lays the foundations for today’s discussions”.
However, experts argue the agreement does not go far enough.
"Bringing major powers together to endorse ethical principles can be viewed as a success, but the undertaking of producing concrete policies and accountability mechanisms must follow swiftly," Paul Teather, CEO of AI-enabled research firm AMPLYFI, told Euronews Next.
"Vague terminology leaves room for misinterpretation while relying solely on voluntary cooperation is insufficient toward sparking globally recognised best practices around AI".
More AI summits
The UK government also announced that there would be future AI safety summits.
South Korea will host another “mini virtual” summit on AI in the next six months, and France will host the next in-person AI summit next year.
Who said what?
Billionaire tech entrepreneur Elon Musk kept quiet during the talks themselves but warned about the risks of AI on the sidelines of the summit.
"We’re not stronger or faster than other creatures, but we are more intelligent. And here we are, for the first time really in human history, with something that’s going to be far more intelligent than us.”
Musk, who co-founded the ChatGPT developer OpenAI and has launched a new venture called xAI, said there should be a “referee” for tech companies but that regulation should be implemented with caution.
“I think what we’re aiming for here is... first, to establish that there should be a referee function, I think there should.
"And then, you know, be cautious in how regulations are applied, so you don’t go charging in with regulations that inhibit the positive side of AI."
Musk will speak with British Prime Minister Rishi Sunak later on Thursday on his platform X, formerly Twitter.
Ursula von der Leyen
European Commission chief Ursula von der Leyen warned that AI comes with both risks and opportunities, noting how quantum physics gave rise to nuclear energy but also to societal dangers such as the atomic bomb.
"We are entering a completely different era. We are now at the dawn of an era where machines can act intelligently. My wish for the next five years is that we learn from the past, and act fast!" she said.
Von der Leyen called for a system of objective scientific checks and balances, supported by an independent scientific community, and for AI safety standards that are accepted worldwide.
She said the EU's AI Act is in the final stages of the legislative process. She also said the potential of a European AI Office is being discussed which could "deal with the most advanced AI models, with responsibility for oversight" and would cooperate with similar entities around the world.
US Vice President Kamala Harris said that action was needed now to address “the full spectrum” of AI risks and not just “existential” fears about threats of cyber attacks or the development of bioweapons.
“There are additional threats that also demand our action, threats that are currently causing harm and, to many people, also feel existential,” she said at the US embassy in London.
King Charles III
Britain’s King Charles III sent a video message in which he compared the development of AI to the significance of splitting the atom and harnessing fire.
He said AI was “one of the greatest technological leaps in the history of human endeavour” and said it could help “hasten our journey towards net zero and realise a new era of potentially limitless clean green energy”.
But he warned: “We must work together on combatting its significant risks too”.
Backlash from the tech community
Meta's president of global affairs Nick Clegg said there was "moral panic" over new technologies, indicating government regulations could face backlash from tech companies.
“New technologies always lead to hype,” Clegg said. “They often lead to excessive zeal amongst the advocates and excessive pessimism amongst the critics.
“I remember the 80s. There was this moral panic about video games. There were moral panics about radio, the bicycle, the internet.”
Mark Surman, president and executive director of the Mozilla Foundation, the organisation behind the Firefox browser, also raised concerns that the summit was a world-stage platform for private companies to push their interests.
Mozilla published an open letter on Thursday, signed by academics, politicians and employees of private companies, including Meta, as well as Nobel Peace Prize laureate Maria Ressa.
"We have seen time and again that increasing public access and scrutiny makes technology safer, not more dangerous. The idea that tight and proprietary control of foundational AI models is the only path to protecting us from society-scale harm is naive at best, dangerous at worst," Surman said in comments to Euronews Next.
"We’re asking policymakers to invest in a range of approaches - from open source to open science - in the race to AI safety. Open, responsible and transparent approaches are critical to keep us safe and secure in the AI era," he added.
A new AI supercomputer
The United Kingdom announced it will invest £225 million (€257 million) in a new AI supercomputer, called Isambard-AI after the 19th-century British engineer Isambard Kingdom Brunel.
It will be built at the University of Bristol, in south-west England, and the UK government said it would be 10 times faster than the country’s current quickest machine.
The government hopes Isambard-AI, alongside another recently announced UK supercomputer called Dawn, will help achieve breakthroughs in fusion energy, healthcare and climate modelling.
Both machines are due to be up and running next summer.
The UK’s ambitions
It is no secret that Sunak wants the UK to be a leader in AI, but it is unclear how the technology will be regulated there, and other countries are already setting their own AI rules. There is stiff competition from the US, China and the EU.
President Joe Biden said “America will lead the way during this period of technological change” after signing an AI executive order on October 30. Meanwhile, the EU is also trying to set its own set of AI guidelines.
However, unlike the EU, the UK has said it does not plan to adopt new legislation to regulate AI, and would instead make existing UK regulators responsible for AI in their sectors.
China too has been pushing through its own rules governing generative AI.
The country’s vice minister of technology Wu Zhaohui said at the summit China would contribute to an “international mechanism [on AI], broadening participation, and a governance framework based on wide consensus delivering benefits to the people, and building a community with a shared future for mankind."
Despite the competition, Will Cavendish, a former Google DeepMind and UK government adviser who is now global digital lead at the engineering firm Arup, argues that the UK can have a place on the world's AI stage.
"The UK already has the third largest AI sector in the world, so it is reasonable for the UK to aspire to be one of the global leaders in this exciting area," he told Euronews Next.
But he also said countries need to come together "to carve out a future for all".