The UN is holding its first-ever convention on killer robots – or lethal autonomous weapons systems.
Although fully self-determining attack machines have yet to be manufactured, many believe technology is moving so fast that they are just around the corner.
Some want a total ban, while others are calling for a moratorium until the systems are able to significantly reduce civilian casualties.
The 1997 Nobel Peace Prize Laureate, Jody Williams, explained how they would work: “If robotics were allowed to fully develop and they were autonomous – killer robots as I like to call them – they would be able to be programmed, set free and make the decisions about when, where, who and how to attack.”
The concept of machines making decisions on life and death makes many feel uneasy.
Noel Sharkey, Professor of Artificial Intelligence at the University of Sheffield and a co-founder of the Campaign to Stop Killer Robots, said: “There’s nothing in artificial intelligence or robotics that could discriminate between a combatant and a civilian. It would be impossible to tell the difference between a little girl pointing an ice cream at a robot or someone pointing a rifle.”
A killer robot is defined as a fully autonomous weapon that can detect, select and attack targets without any human intervention.
The issue of killer robots has major implications for existing international human rights law.