Could AI lead us to extinction? This activist group believes so

By Sophia Khatsenkova

It's the most recent in a series of alarms raised by experts in the artificial intelligence field.


Could artificial intelligence (AI) destroy humanity?

One international activist collective, PauseAI, believes so, and it has been lobbying the European Union to halt the roll-out of ever more powerful artificial intelligence systems.

In the past few months, new tools such as ChatGPT have raised fears that AI will cost millions of people their jobs or fuel disinformation, especially during elections.

But according to the nearly 300 members of the activist collective PauseAI, these powerful systems could very soon outsmart and even manipulate humans. 

And if people try to interfere or shut them down, these technologies could resist.

Launched in May, PauseAI is the brainchild of software engineer Joep Meindertsma.

He has stepped back from his role as the director of a technology company to focus solely on PauseAI.

"I feel like there's a real chance that in the next couple of months, someone will invent a superintelligent machine. That will be the end of humanity as we know it. That was for me enough reason to say, 'Oh, let's, let's stop focusing on my database thing, start doing this policy movement, and give it everything I've got because I'm quite scared," he told Euronews. 

A call for an international summit on AI

In concrete terms, what scares Meindertsma most is the prospect of AI systems being able to discover zero-day vulnerabilities.

Zero-day vulnerabilities are security flaws in software and systems that are unknown to their developers, leaving them open to exploitation before any fix exists.

If companies and governments eventually give AI systems more autonomy and connect them to vital infrastructure such as power grids or even weapons, he argues, a superintelligent system could shut that infrastructure down and cause chaos.

That’s why the founder of the PauseAI group is also asking the EU to spearhead an international summit on the subject.

"The number one thing is to call for one government to start, step up, organise a summit, and implement a moratorium, a pause on the development of these dangerous systems," he explained.

"That needs to happen on an international level because if you do it on a national level, there will be a lot of arguments against doing so. Nations compete just like companies compete. We can't just ask nicely for companies to stop their AI development because they have a lot of competitive pressure. That's why we need to do this summit."

More than 350 tech experts sound the alarm

PauseAI isn’t the only group ringing alarm bells on this issue. In June, UK Prime Minister Rishi Sunak became one of the first world leaders to acknowledge the potential “existential” threat of developing a “superintelligent” AI without appropriate safeguards.

Recently, more than 350 executives, researchers, and engineers working in AI signed an open letter warning that the technology could one day destroy humanity.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. 

But some argue that AI is still too underdeveloped to pose an existential threat and that short-term problems such as biased and incorrect responses are the biggest issues right now.


Meredith Whittaker, president of the messaging app Signal, mocked the statement on Twitter as tech leaders overpromising their products.

Sam Altman, CEO of OpenAI, the company that developed ChatGPT, said that while artificial intelligence can be beneficial to humans, "regulating AI is essential."
