
Why is ‘godfather of AI’ Geoffrey Hinton worried? These are the 4 dangers he fears about the tech

Why is artificial intelligence unleashing such fear - and what can we do about it?
Copyright Canva
By Euronews with AP

Here are some of the biggest concerns being voiced about the future of AI - and humanity.


Prominent experts in artificial intelligence (AI) have been sounding the alarm over the pace and magnitude of recent advances in the field, warning they pose nothing less than a threat to mankind.

These include Geoffrey Hinton, an award-winning computer scientist known as the “godfather of AI,” who quit his job at Google last month to share his concerns about the unchecked development of new AI tools.

“I have suddenly switched my views on whether these things are going to be more intelligent than us," Hinton, 75, said in an interview with MIT Technology Review this week.

"I think they’re very close to it now and they will be much more intelligent than us in the future... How do we survive that?”

Hinton is not alone in his concerns. In February, even Sam Altman, the CEO of ChatGPT’s developer OpenAI, said the world may not be “that far away from potentially scary” AI tools - and that regulation would be critical but would take time to figure out.

Shortly after the Microsoft-backed start-up released its latest AI model called GPT-4 in March, more than 1,000 researchers and technologists signed a letter calling for a six-month pause on AI development because, they said, it poses “profound risks to society and humanity”.

Here's a look at the biggest concerns voiced by Hinton and other experts.

1. AI may already be smarter than us

Our human brains can solve equations, drive cars and keep track of Netflix series thanks to their native talent for organising and storing information and reasoning out solutions to thorny problems. 

The roughly 86 billion neurons packed into our skulls - and, more important, the 100 trillion connections those neurons forge among themselves - make that possible.

By contrast, the technology underlying ChatGPT features between 500 billion and a trillion connections, Hinton said. While that would seem to put it at a major disadvantage relative to us, Hinton notes that GPT-4, the latest AI model from OpenAI, knows “hundreds of times more” than any single human. Maybe, he suggests, it has a "much better learning algorithm” than we do, making it more efficient at cognitive tasks.

Researchers have long noted that artificial neural networks take much more time to absorb and apply new knowledge than people do since training them requires tremendous amounts of both energy and data.

That's no longer the case, Hinton argues, noting that systems like GPT-4 can learn new things very quickly once properly trained by researchers. That's not unlike the way a trained professional physicist can wrap her brain around new experimental findings much more quickly than a typical high school science student could.

That leads Hinton to the conclusion that AI systems might already be outsmarting us: not only can they learn things faster - they can also share copies of their knowledge with each other almost instantly.

“It’s a completely different form of intelligence,” he told MIT Technology Review. “A new and better form of intelligence”.

2. AI can 'supercharge' the spread of misinformation

What would smarter-than-human AI systems do? One unnerving possibility is that malicious individuals, groups or nation-states might simply co-opt them to further their own ends.

Dozens of fake news websites have already spread across the web in multiple languages, some publishing hundreds of AI-generated articles a day, according to a new report from NewsGuard, which rates the credibility of websites and tracks online misinformation.

Hinton is particularly concerned that AI tools could be trained to sway elections and even wage wars.

Election misinformation spread via AI chatbots, for instance, could be the future version of election misinformation spread via Facebook and other social media platforms.


And that might just be the beginning.

“Don’t think for a moment that Putin wouldn’t make hyper-intelligent robots with the goal of killing Ukrainians,” Hinton said in the article. “He wouldn’t hesitate”.

3. Will AI make us redundant?

OpenAI estimates that 80 per cent of workers in the United States could see their jobs impacted by AI, and a Goldman Sachs report says the technology could put 300 million full-time jobs at risk worldwide.

Humanity's survival is threatened when "smart things can outsmart us,” according to Hinton.

“It may keep us around for a while to keep the power stations running,” Hinton told MIT Technology Review's EmTech Digital conference from his home via video on Wednesday. “But after that, maybe not”.


“These things will have learned from us, by reading all the novels that ever were and everything Machiavelli ever wrote, how to manipulate people,” Hinton said. “Even if they can’t directly pull levers, they can certainly get us to pull levers”.

4. We don’t quite know how to stop it

“I wish I had a nice simple solution I could push, but I don’t,” Hinton added. “I’m not sure there is a solution”.

Governments are, however, paying close attention to the rise of AI. The White House has called in the CEOs of Google, Microsoft and ChatGPT-maker OpenAI to meet on Thursday with Vice President Kamala Harris in what's being described by officials as a frank discussion on how to mitigate both the near-term and long-term risks of their technology.

European lawmakers are also accelerating negotiations to pass sweeping new AI rules, and the UK’s competition regulator plans to examine the impact of AI on consumers, businesses and the economy and whether new controls are needed on technologies such as ChatGPT.

What’s not clear is how anyone would stop a power like Russia from using AI technology to dominate its neighbours or its own citizens.


Hinton suggests that a global agreement similar to the 1997 Chemical Weapons Convention might be a good first step toward establishing international rules against weaponised AI.

It's worth noting, though, that the chemical weapons compact did not stop what investigators found were likely Syrian attacks using chlorine gas and the nerve agent sarin against civilians in 2017 and 2018, during the nation's bloody civil war.

