Scientists scoff at the idea that AI poses an existential threat, but military applications of the technology are cause for concern.
"Terminator: Dark Fate," the sixth installment in the long-running science-fiction franchise, opens Friday and posits a world in which a self-aware computer builds an army of killer robots it then uses in an attempt to wipe humanity off the face of the Earth.
It's the same vision that filmmaker James Cameron dreamed up for the first "Terminator" movie in 1984, well before the advent of autonomous drones and advanced machine learning made the premise seem a little less science fiction.
In that 35-year span, a variety of technological advancements in AI and robotics have brought elements of "Terminator" closer to reality. Artificial intelligence experts are confident, however, that the kind of independent AI and humanoid robots of the movie franchise are still far off.
But they also offer a warning: the advances people have made in AI and military technology could create their own kind of "Judgment Day."
"AI is a powerful technology, but it's a tool, not unlike a pencil," Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, told NBC News. "How it's used is in the hands of people."
AI may not yet boast self-awareness, but it already rivals and in some cases surpasses human intelligence across a range of applications, including reading CT scans, spotting shoplifters and helping self-driving cars navigate crowded cities. Developers have not yet made artificially intelligent machines that look like Arnold Schwarzenegger, but they can at least make one sound exactly like podcast host Joe Rogan, to the point where it can fool human listeners.
The world's fastest computer — currently the IBM Summit supercomputer at Oak Ridge National Laboratory in Tennessee — can handle 200,000 trillion calculations per second. But scientists haven't cracked the code for machine learning that comes close to the basic human common sense needed for nuanced problem-solving.
Panicking at this stage of the technology's development, Etzioni said, "is like being worried about overpopulation on Mars before we have even gotten a person on Mars."
When might we feel the need to push the panic button? Estimates vary wildly. Some experts interviewed by NBC News predict the singularity — roughly defined as the time when an artificial intelligence will surpass human intelligence and be able to evolve autonomously — will arrive as soon as 15 years from now. Others say it will be closer to a century.
One point that everyone agrees on, however, is a computer will eventually surpass its creators. And when it does, it's not clear it will be possible to program enough safeguards for humans to remain the apex programmers.
"Humans are not the apex species because we're bigger, faster, stronger. It's because we're smarter," said R.P. Eddy, CEO of Ergo, a New York City-based technology consulting firm, and a former member of the White House National Security Council. "Because we're smarter, we have complete dominion over the world. And now we're talking about the creation of something else that's smarter than us."
That continuing uncertainty is apparently what drew Cameron back to the sci-fi franchise he launched all those decades ago.
"Over the years I have continued to consult with people working at the forefront of the artificial intelligence world," Cameron said in a statement. "They all believe there will be an A.I. equal to or greater than a human mind. They also say it's not going to turn into Skynet (the killer A.I. in the first few 'Terminator' movies), but how do we know that?"
Cameron compared the drive to expand the capabilities of AI to the race to split the atom in the 1930s and 1940s, which led ultimately to the development of nuclear weapons.
"The first manifestation of nuclear power on our planet was the destruction of two cities and hundreds of thousands of people. So the idea that it can't happen now is not the case," Cameron said.
In "Terminator: Dark Fate," a super-intelligence from the future sends yet another robotic assassin, a "Rev-9" (Gabriel Luna), to kill a young woman who is destined to become a key figure in the human resistance (Natalia Reyes), who is in turn protected by a time-traveling super-soldier (Mackenzie Davis), veteran Terminator killer Sarah Connor (Linda Hamilton) and a virtually obsolete Terminator model (Arnold Schwarzenegger).
In reality, a robot would make an awful assassin.
"When you see a lot of the successes in A.I., they tend to be successes in analyzing past experience: what it's seen in the past and using that to extrapolate into the future," said Reid Simmons, professor of computer science at Carnegie Mellon University. "But if something comes along that it has never seen before, it doesn't know how to handle it, and it doesn't know it doesn't know how to handle it."
"Computer systems are woefully inadequate in terms of common sense reasoning," Simmons said.
It doesn't, however, take the development of a super-intelligence to reach the potential for some of that "Dark Fate" prognostication to play out.
Mary Wareham, advocacy director for the arms division of Human Rights Watch, would like to see some of that common sense applied toward a global treaty banning the use of AI on the battlefield.
Robot sentries are already lined up along the South Korean side of the Demilitarized Zone, programmed to turn their built-in machine guns on North Korean soldiers if they are detected attempting to cross the border. For now, those bots require a human operator to approve a request to open fire.
In August, DARPA announced a successful test of an autonomous drone swarm that has military applications.
"It's that moment where we cross the threshold to outsourcing killing to machines," said Mary Wareham, organizer of the Campaign to Stop Killer Robots, which lobbied the United Nations on the issue last week.
"We're crossing a moral line."
For effect, Wareham's campaign often includes a still from "The Terminator" in its PowerPoint presentations.
The movie franchise has done no public relations favors to advocates who see the positive value that a super-intelligence could bring. Etzioni posits such a computer could calculate solutions to climate change, the over-saturation of plastic in oceans, and potential asteroid strikes. He adds that safety protocols could be programmed into place — a sort of off switch in case of emergency.
Eddy, however, worries that there are no universal standards or ethics being applied as nations and corporations vie with "guys in their basements" to develop what could be the most powerful technology in human history — and possibly the last.
"One thing the 'Terminator' movies have done is created this idea of a killer artificial intelligence as outlandish," said Eddy, who co-wrote a book on the subject, "Warnings: Finding Cassandras to Stop Catastrophes." "Because we look at these James Cameron big blockbuster movies and we think that this only exists in the realm of fantasy and outlandish fiction.
"That is actually a disservice to the conversation about consequence."
"Because if risk is equal to likelihood times consequence, then even if the likelihood is small, the potential consequence is the loss of all life on Earth. So the risk is worth talking about."