Six months ago experts called for a pause to AI experiments. Where are we now?

Do machines and AI pose an existential risk to humanity? - Copyright Canva
By Luke Hurst

Tech leaders signed an open letter calling for a pause to AI experiments, but have their warnings been heeded?


Does artificial intelligence (AI) pose a potentially existential risk to humanity? That’s the view of the Future of Life Institute (FLI), which published an open letter six months ago, calling for an “immediate pause” to large AI experiments.

The letter came amid a flurry of public interest in generative AI, as apps like ChatGPT and Midjourney showcased how the technology is edging ever closer to replicating human ability in writing and art.

For the letter's signatories - who included X, Tesla and SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, and author Yuval Noah Harari - the apparently sudden rise of generative AI necessitated the call for a pause. Companies such as OpenAI, the maker of ChatGPT, and Google were asked to consider the “profound risks to society and humanity” their technology might be creating.

It’s fair to say the major players did not press the pause button.

Instead, more companies have joined the generative AI race with their own large language models (LLMs), with Meta releasing Llama 2 and Anthropic showcasing its ChatGPT rival Claude 2.

Whether or not the warnings were heeded by the tech giants, the FLI’s letter did mark something of a milestone in what is shaping up to be remembered as the year of AI.

Mark Brakel, the institute’s director of policy, says the FLI did not expect the response the letter received: widespread press coverage and a renewed urgency within governments to work out what to do about the rapid progress of AI.

The letter was cited in a US Senate hearing and prompted a formal reply from the European Parliament.

Brakel tells Euronews Next that the upcoming global summit on AI safety at Bletchley Park in the UK will be a good opportunity for governments to step up where companies refuse to put on the brakes.

He says that while the buzzword so far has been “generative AI”, it may soon become “agentic AI” - AI that can make decisions and take action autonomously.

“I think that's maybe the trend line that we see - and we also see how OpenAI has almost scraped the entire Internet of text. We're starting to see scraping of videos and podcasts and Spotify as alternative sources of data, video and voice,” he says.

Edging closer to disaster?

Brakel points out that while “we’ve become a little bit like 'the letter organisation'", the FLI was founded in 2014 and has done work on three major areas of civilisational-level risk: AI, biotechnology, and nuclear weapons.

It has a particularly striking video on its website, an expensively produced fictional account of a global AI catastrophe in the year 2032. Amid tensions between Taiwan and China, military reliance on AI for decision-making leads to all-out nuclear war, and the video ends with the planet lit up by nuclear weapons.

Brakel believes we have gotten closer to that sort of scenario.

“Integration of AI in military command and control is still progressing, especially with the major powers. However, I also see a greater desire from states to regulate, especially when you look at autonomy in conventional weapons systems,” he said.

The next year also looks promising for the regulation of autonomy in systems like drones, submarines and tanks, he says.

“I’m hoping that will also allow the major powers to reach agreements in avoiding accidents in nuclear command and control, but that’s one level more sensitive than conventional weapons”.

Regulation incoming

While major AI companies haven’t pressed pause on their experimentation, their leaders have openly acknowledged the profound risks AI and automation pose to humanity.


OpenAI’s CEO Sam Altman called on US politicians earlier this year to establish government regulation of AI, revealing his “worst fears are that we… the technology industry, cause significant harm to the world”. That could happen in “a lot of different ways”, he added. He called for a US or global agency that would license the most powerful AI systems.

Europe, however, may prove to be the leader in AI regulation, with the European Union’s landmark AI Act in the works.

The final details are still being worked out between the union’s institutions, after the European Parliament overwhelmingly backed the act with 499 votes in favour, 28 against, and 93 abstentions.

Under the act, AI systems will be sorted into tiers based on risk level, with the riskiest types prohibited and limited-risk systems requiring certain levels of transparency and oversight.

“We're generally quite happy with the Act,” says Brakel. “One thing that we've argued for from the very beginning, when the act was first proposed by the Commission, is that it needs to regulate GPT-based systems. At the time we were talking GPT-3 rather than 4 [OpenAI’s transformer models], but the principle remains the same and we face a lot of big tech lobbying against that.


“The narrative is the same in the US as it is in the EU that only the users of AI systems, the deployers, know what context it's being deployed in”.

He gives the example of a hospital using a chatbot for patient contact. “You're just going to buy the chatbot from OpenAI, you're not going to build it yourself. And if there's then a mistake that you're held liable for because you've given medical advice you shouldn't have, then clearly you need to understand what kind of product you've bought. And some of that liability really ought to be shared.”

While Europe awaits the final formulation of the EU’s AI Act, the global AI safety summit on November 1 should be the next event to offer insight into how leaders around the world will approach AI regulation in the near future.
