As technology continues to advance in the 21st century, and as Artificial Intelligence (AI) is increasingly used in armed conflicts, International Humanitarian Law (IHL) must determine how the rules of war apply to the use of AI-based Autonomous Weapon Systems (AWS).
Before going any further, a few definitions are needed. AI refers to machines built with algorithms that allow them to gather information, analyse it, and make decisions without human assistance. In essence, AI attempts to mimic human intelligence through machine learning and deep learning, both of which are complex processes.
There are many ways to describe AWS, and there is not yet an international consensus on a definition. In simple terms, AWS are machines that, once activated, can use AI to operate without further human supervision: they can collect and analyse data, select targets, and use force, whether lethal or not.
Land mines and missile defence systems can also perform certain functions on their own, but they should not be called AWS: they require very specific and precise trigger conditions to operate, they do not select their targets entirely independently, and they are not based on AI. Nor should machines such as drones that use AI only to collect and transmit information, but cannot use weapons on their own, be called AWS.
AWS are known by many different names around the world, such as “killer robots,” “fully autonomous weapons,” “lethal autonomous weapon systems,” “lethal autonomous robotics,” and so on. It could be argued that simple, consistent terminology is easier for people around the world to understand and helps build momentum for international advocacy. It is hoped that academics and other stakeholders can reach agreement on this.
The problems with using AWS in armed conflict
The deployment of AWS is deeply worrying, not least because there is a real chance that algorithm-based machines will make wrong decisions. Weighing the many contextual details of a situation is something a human may do better than a machine.
For example, suppose a drone carrying explosives is sent into a conflict zone to destroy an enemy military installation next to a civilian building. If a bomb is dropped on the military installation, there is a chance that civilians will die. If AI is running the drone, the question is how it will decide whether or not to drop the bomb. Can the AI algorithm be designed to strike a delicate balance between the principles of proportionality and military necessity in relation to the anticipated military advantage? Will there be a defined level of risk that the AWS is permitted to accept? Or will there be a mathematical threshold, based on simple consequentialist moral reasoning, that determines when the incidental loss of life is too much?
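To make the worry concrete, the following sketch (in Python, with entirely hypothetical names, scores, and thresholds, none drawn from any real system) illustrates what such a simple consequentialist threshold might look like. The point is how much context a single inequality necessarily ignores.

```python
# Hypothetical, deliberately simplified sketch of a threshold-based
# "proportionality check". All names and numbers are invented for
# illustration; no real targeting system is being described.

from dataclasses import dataclass


@dataclass
class StrikeAssessment:
    expected_military_advantage: float    # hypothetical score in [0, 1]
    estimated_civilian_casualties: float  # hypothetical model estimate
    estimate_confidence: float            # the system's trust in its own estimate


def naive_proportionality_check(a: StrikeAssessment,
                                harm_threshold: float = 0.5) -> bool:
    """Return True if the strike 'passes' a purely numerical test.

    Proportionality is reduced to a single inequality: expected harm weighed
    against a scored 'advantage'. It cannot capture doubt, context, or the
    qualitative judgement IHL actually requires.
    """
    expected_harm = a.estimated_civilian_casualties * a.estimate_confidence
    return expected_harm <= harm_threshold * a.expected_military_advantage


# The machine "approves" the strike even though civilians may be harmed,
# simply because the numbers fall on one side of the threshold.
assessment = StrikeAssessment(expected_military_advantage=0.9,
                              estimated_civilian_casualties=0.4,
                              estimate_confidence=0.8)
print(naive_proportionality_check(assessment))  # True
```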
Moreover, how well can an AWS even distinguish between civilians and combatants? In some situations, combatants may dress as civilians. Children growing up in war zones are sometimes seen playing with toy guns. In such situations, a machine might fail to respect the principle of distinction.
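A similarly hedged sketch, again with invented labels and scores, shows why a fixed classification threshold is a poor proxy for the principle of distinction: ambiguous cases such as a fighter in civilian clothing or a child holding a toy gun can fall on the wrong side of the line in either direction.

```python
# Hypothetical sketch of threshold-based target classification. The scores,
# labels, and threshold are invented; they only illustrate the failure modes
# discussed above.

def classify(combatant_score: float, threshold: float = 0.7) -> str:
    """Map a model's 'combatant likelihood' score to a decision label."""
    return "combatant" if combatant_score >= threshold else "civilian"


observations = {
    "fighter dressed as a civilian": 0.55,  # under-triggers: a combatant is missed
    "child playing with a toy gun": 0.78,   # over-triggers: a protected person is engaged
}

for description, score in observations.items():
    print(f"{description}: classified as {classify(score)}")
```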
An article on Just Security observed that “the mere existence of guidance, processes, and technologies to avoid harming civilians does not, by itself, solve the tragedy of collateral damage. They have to be used honestly […]”. It seems unlikely that there could be a formula for good faith.
Ultimately, if an AWS makes a “mistake” and there was no way for a human to stop the attack, the machine itself cannot be held criminally responsible under International Criminal Law.
These are just a few of the many important problems with AWS that remain unresolved.
The inevitability of AWS deployment
In its June 2019 report on AI, the International Committee of the Red Cross (ICRC) stated that it is “not opposed to new technologies of warfare per se”, but that such technologies “must be used, and must be capable of being used, in compliance with existing rules” of IHL, and that it is “essential to keep human control over tasks and human judgement in decisions that may have serious consequences for people’s lives in armed conflict.”
When it comes to AWS specifically, however, the ICRC has voiced serious concerns. In March 2019, the UN Secretary-General went further, declaring that AWS are “politically unacceptable, morally repugnant, and should be banned by international law.”
Despite these concerns and this opposition, the case for AI and AWS is also being made. Proponents argue that AWS could make attacks more precise, reduce harm to civilians, improve efficiency, and save money. In September 2019, the head of the US Joint Artificial Intelligence Center said that AI would give the US an advantage on the battlefield and that the technology would start being put to use in war zones. The militaries of other countries, including Israel, Russia, and China, are also advancing their AI and AWS capabilities.
In its 2019 report on AI, the ICRC noted that “military applications of new and emerging technologies are not inevitable,” but rather that “States choose” to use them in this way. It seems likely that, in the near future, technologically advanced military powers will continue to develop these capabilities at an accelerating pace and begin testing the use of AI and AWS in conflict situations. It is unlikely that the deployment of AWS can be stopped.
The need to regulate the use of AWS as soon as possible
Advanced militaries around the world are rapidly building up their AI and AWS capabilities, yet there are no international safeguards to constrain this. Meanwhile, research into how IHL could be updated and new rules introduced to address the problems posed by AWS is progressing slowly. We do not yet know how AWS might change the situation on the ground in armed conflicts, because there is little real-world evidence. A 2021 UN report, drawing on a confidential source, stated that the Turkish-made STM Kargu-2, which its manufacturer describes as a type of AWS, was used in Libya; unfortunately, the report is short on details. There have also been reports that Azerbaijan used AWS against Armenia in the Nagorno-Karabakh region, but few facts are available. In these circumstances, the international community will gain nothing by remaining reactive and waiting to observe the effects of AWS before attempting to regulate them. A more proactive approach is needed.
Notably, militaries around the world do not claim that the use of AWS cannot comply with IHL. But however diligent the effort and however good the intentions, an AWS that can learn on its own and change over time may act in ways that violate IHL on the battlefield. There is also a significant risk that AWS could end up in the hands of non-State actors. Without adequate safeguards, the risks are far too high.
It would therefore be wise for bodies such as the UN and global civil society to act immediately, and with greater vigour, to ensure that, while AI and AWS are still at an early stage of development, the international community agrees on a set of rules within which AWS may be built and used. Otherwise, technology will outpace the law, and once a dangerous, uncontrollable weapon has been created, the situation may be beyond repair. The world failed to prevent the development of nuclear weapons when they were first made; that mistake should not be repeated.
Putting people in charge of the deployment of AWS
In an armed conflict, the final decision about whether or not to attack a person should be made by another person, not by a machine. Indeed, a person’s death at the hands of a machine “raises fundamental ethical concerns,” as the ICRC put it in its May 2021 position paper on AWS; it is an affront to human dignity. The ICRC also maintains that AWS must be subject to “human control” in order to comply with IHL. In its Directive on AWS, the US Department of Defense likewise requires that AWS be designed so that “appropriate levels of human judgement over the use of force” are maintained. Beyond the moral dimension, human control is also necessary to ensure that anyone who violates IHL can be held accountable. For these reasons, an AWS should never be fully autonomous, and human control must always be retained.
Notably, there are strong arguments that AI cannot reliably be built with algorithms capable of taking cultural sensitivities and on-the-ground realities into account, making context-dependent decisions, and reliably distinguishing civilians from combatants. Article 51(4) of Additional Protocol I can therefore be read to mean that an AWS not under human control is liable to strike military objectives and civilians or civilian objects without distinction, making it an inherently indiscriminate weapon. On that reading, IHL could prohibit this means of warfare.
In addition, the use of certain weapons is already restricted by the Convention on Certain Conventional Weapons. Given the growing use of AWS, the Convention could be amended, or a new Protocol negotiated and adopted, to make clear that an AWS always requires human supervision before it may attack a person.
A threat to all people and the need for a global agreement
At this point in human history, machines made by humans may soon reach a level of independence at which humans can no longer control them. AI is a very powerful tool, and the technology behind it is already far advanced: it can process huge amounts of data and teach itself to perform complex analyses in milliseconds with a high level of accuracy. A quick look at OpenAI’s ChatGPT and DALL·E 2 shows how advanced, and perhaps how unnerving, AI technology already is. As argued above, it is essential that the development and use of AWS follow globally agreed standards.
Even though a complete ban on AWS might be preferable, it is clear that they are here to stay. When building AWS, we must therefore ensure that, at the very least, human supervision and control outweigh the machines’ autonomy. Any AWS should have only limited autonomy, and it should be contrary to International Law for a weapon to be fully autonomous and to operate without human control. If the term “AWS” no longer fits in that situation, because a system cannot be truly autonomous while humans are in charge, it could instead be called a “Controlled-AWS” (CAWS). In practice, this means that a CAWS can find targets and make recommendations, but only a human can decide whether or not to attack, as sketched below.
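As a purely illustrative sketch of that division of roles (the class and method names are hypothetical and carry no operational detail), a CAWS could be structured so that the software can only recommend, while the use of force is impossible without an explicit human decision:

```python
# Minimal, hypothetical sketch of a human-in-the-loop gate for a
# "Controlled-AWS" (CAWS): the system may detect and recommend, but force
# cannot be used without an affirmative human authorisation.

from dataclasses import dataclass
from typing import Optional


@dataclass
class TargetRecommendation:
    target_id: str
    rationale: str


class ControlledAWS:
    def recommend(self, sensor_data: dict) -> Optional[TargetRecommendation]:
        """Analyse sensor data and propose (never execute) an engagement."""
        # ... hypothetical detection and analysis logic would go here ...
        return TargetRecommendation(target_id="T-001",
                                    rationale="matches a hostile signature")

    def engage(self, rec: TargetRecommendation, human_authorisation: bool) -> str:
        """The use of force is gated on an explicit human decision."""
        if not human_authorisation:
            return f"Engagement of {rec.target_id} withheld: no human authorisation."
        return f"Engagement of {rec.target_id} proceeds under human authorisation."


caws = ControlledAWS()
recommendation = caws.recommend(sensor_data={})
print(caws.engage(recommendation, human_authorisation=False))
```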
Ultimately, it is hoped that the global community can strike a sound balance between dogmatism and realism, and that concrete steps will be taken to ensure that people, not machines, remain in charge of civilization.