Sending machines to war instead of people is already commonplace, but the general public is not quite on board yet, a study led by the Estonian Military Academy shows.
This article is published in collaboration with Research in Estonia.
There is a lot of talk about self-driving cars and whether they are safe enough to exist alongside humans on the streets – but what about military robots?
Defence systems are usually purchased with government funds, so the public should have the right to have their say; yet civilians are rarely included in military decisions. A European Union study, led by the Estonian Military Academy, aimed to change that.
Estonian, Austrian and Norwegian researchers looked at what people had to say about unmanned ground vehicles – in other words, self-driving weapons systems.
Always controlled by a human
“Most civilians would only agree with the use of self-driving military vehicles if they were controlled by a human at all times,” Wolfgang Wagner, a social psychology professor at the University of Tartu, concluded in the study.
He explained that the technology was still new and developing. Self-driving vehicles, for instance, are not yet ready to tackle difficult terrain and are vulnerable to glitches.
At the same time, governments all over the world are pushing for the technology. Russia, for instance, has declared it is developing a wide range of unmanned ground vehicle platforms.
Western countries are not lagging behind, with many military robots already in use in Europe. Australia’s prime minister, Scott Morrison, promised to send unmanned aerial and ground systems to Ukraine, among other support.
All this has created a need to understand how artificial intelligence-based military systems comply with the law. Someone needs to be accountable for the decisions a machine makes, after all – you cannot lock up a machine if it makes the wrong decision.
“In both the public and the academic discourse on technology, the development of intelligent systems is often portrayed as something inevitable and out of control,” Wagner said.
This thinking partially derives from how robots have been portrayed as killer machines in science fiction movies.
Accordingly, people who took part in the series of studies led by the Estonian Military Academy generally said they preferred defence systems to be controlled by a human at all times, for example via remote control from a distance.
Ethical and legal questions
Similar concerns emerged from the legal perspective. Janar Pekarev, a researcher at the Estonian Military Academy, pointed out that a weapon system that could select and engage targets without human intervention poses a serious challenge under international humanitarian law.
The fundamental ethical question is whether people can delegate life-and-death decisions and accountability to artificial agents, the researchers concluded. Wouldn’t it be against basic human rights to classify a human being as a mere military target?
Pekarev concluded that even if autonomous weapon systems were able to follow the principles of the international law of armed conflict, some violations would still most likely occur. If that happens, who would be held legally responsible?
Camilla Guldahl Cooper, an associate professor in operational law at the Norwegian Defence University College who has studied the law of armed conflict extensively, is a little more hopeful.
“Unmanned systems can be applied in war in a lawful manner,” she told Research in Estonia. “It requires a lot of awareness.” As part of the EU study, she concluded that there need to be limits on what a machine is allowed to do in a war situation. If there is a risk of civilian casualties, for instance, the machine would have to stop or a human would have to step in, depending on how it is programmed. In short, protecting civilians needs to be programmed into the machine, and a human would have to take responsibility for it.
Cooper believes the trust issue will be overcome once people understand unmanned military systems better. It is simply a matter of clearly stating who is responsible for what.
For this, people must be able to trace back how the robots made the decisions they did.
“If you create a black box where you don’t know what is going on, then it’s not lawful,” Cooper said. Being able to control the machine is in everyone’s interest. Once that control is gone, the machine can turn against anyone, including your own people.
“Nobody wants an uncontrollable weapon,” she said.
Protecting human lives
In any case, current artificial intelligence systems include human control on many levels: in giving them capabilities, equipping and deploying them, turning them on, assigning their targets and supplying their ammunition. “It’s just another system with a new capability,” Cooper noted.
The rules of war are ancient, she pointed out – a long-standing part of our culture. Some of them appear in the Bible and the Quran, such as the prohibitions on attacking children or mistreating prisoners. The EU and NATO countries try exceptionally hard to follow these rules, she said.
Technological development has always been part of warfare. We simply do not yet know everything about these new systems, which makes them understandably frightening, but they could also protect human lives in the middle of the chaos.
First, though, we need to explore what these robots can do for us, because “once you see the potential, that’s where you start seeing the limitations,” as Cooper pointed out. “But we won’t be able to see the limitations before seeing the potential.”