Artificial Intelligence: genuine concern, or fearmongering?


Dozens of prominent scientists have put their names to an open letter warning the public about the danger of Artificial Intelligence (AI). They are, specifically, worried about potential developments in autonomous weapons, made possible by the progress of robotics and AI.

Among those endorsing the letter are Stephen Hawking, Noam Chomsky and Elon Musk of Tesla and Space X fame.

What does the letter say?

In the letter, the signatories claim “the deployment of such systems is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”


They highlight a long list of potential drawbacks to consider, should such weapons become reality.

For example, the signatories argue that autonomous weapons would be relatively cheap to mass-produce, which could lower the threshold for going to war, since fewer (if any) human lives would be put at risk.

Removing the ‘human factor’ from combat is acknowledged as one of the few advantages of autonomous weapons; however, the letter counters:
“There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”

Mass-production could also mean the weapons would easily end up on the black market, or in the hands of terrorists wishing to destabilise nations or warlords “wishing to perpetrate ethnic cleansing, etc.”

They write: “the key question for humanity today is whether to start a global AI arms race or to prevent it from starting.”

Their response: “A military AI arms race would not be beneficial for humanity.”

Acknowledging the role of science

Despite their objections to developing autonomous weapons, those endorsing the letter do not reject AI’s “future societal benefits.”

“In summary,” they conclude, “we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”


However, it seems not all robotics scientists are on board. In a November 2014 blog post, Australian roboticist Rodney Brooks, founder of Rethink Robotics, was adamant that AI is a tool, not a threat.

“I think it is a mistake to be worrying about us developing malevolent AI anytime in the next few hundred years,” he wrote. “I think the worry stems from a fundamental error in not distinguishing the difference between the very real recent advances in a particular aspect of AI, and the enormity and complexity of building sentient volitional intelligence.”

He concluded: “Worrying about AI that will be intentionally evil to us is pure fear-mongering. And an immense waste of time. Let’s get on with inventing better and smarter AI.”