Experts discuss the ethical and legal dilemmas thrown up by rapid advances in battlefield technology such as AI, machine learning and robotics.
For Ukraine, in the months since Russia launched its all-out invasion, the drone has come to embody the proverb that necessity is the mother of invention, showing how ingenuity and innovation can create obstacles for even the fiercest opponents.
Taking this concept one step further, Kyiv recently launched BRAVE 1, a tech hub dedicated to breaking new ground in the technology of warfare, to ensure the country is better prepared for any future threat to its territory.
Advances in artificial intelligence, machine learning, and robotics all allow scientists and engineers to reimagine how battles may be fought in the near future.
But experts say many of the new technologies have created fresh ethical issues and moral dimensions to deal with, some of which are already being faced on the battlefield.
Euronews discussed these issues with two leading academics in the field: Cesar Pintado of Spain’s International Campus for Security and Defence (CISDE); and Anna Nadibaidze, a PhD fellow at the Centre for War Studies and Department of Political Science and Public Management at the University of Southern Denmark.
So how has the relationship between soldier and cyber-weapon evolved?
“There are constantly missions that are aborted due to ethical, legal, and technical issues. Or simply due to the evolution of the combat itself,” said Professor Pintado.
“With a human operator, there is always the possibility in theory that they might exercise compassion, empathy, and human judgement. A system that is trained on data and pre-programmed to do something, doesn’t have that option.”
Modern wars are supposed to be fought according to internationally agreed laws and conventions. One concern is that automated weapons systems may not be attuned finely enough to the legal definitions that apply in a combat scenario.
Professor Pintado also pointed to situations in which a cyber-weapon would lack a human's ability to weigh potential collateral consequences.
“What if that model of tank is also being used by allied forces, or if they are momentarily next to allied troops or near areas such as a temple or a school?”
There is also a concern that governments and individuals may approach this new moral military dimension in the wrong way.
“This idea that so-called killer robots gain their own consciousness and appear on the battlefield, that's a futuristic thing, it's from science fiction and movies. That's not really what the debate should be about,” said Anna Nadibaidze.
Her view is that alongside investment in technology, governments should also be considering the regulatory and legal framework of battlefields of the future.
“There is an urgent need to formulate legally binding rules and address these challenges because current international regulations and international humanitarian law aren't sufficient to address them,” she warned.