Ethics of autonomous weapons systems and its applicability to any AI systems

AI ethics Meaningful human control Autonomous weapons Explainability CCW Dual-use AI

Authors: Ángel Gómez de Ágreda

Year: 2020

Published in: Telecommunications Policy.

Read me: Preprint. DOI: https://doi.org/10.1016/j.telpol.2020.101953. Website. 🎥Video. 👩‍💻Replication package.

Abstract: Most artificial intelligence technologies are dual-use. They are incorporated into both peaceful civilian applications and military weapons systems. Most of the existing codes of conduct and ethical principles on artificial intelligence address the former while largely ignoring the latter. But when these technologies are used to power systems specifically designed to cause harm, the question must be asked as to whether the ethics applied to military autonomous systems should also be taken into account for all artificial intelligence technologies susceptible of being used for those purposes. However, while a freeze in investigations is neither possible nor desirable, neither is the maintenance of the current status quo. Comparison between general-purpose ethical codes and military ones concludes that most ethical principles apply to human use of artificial intelligence systems as long as two characteristics are met: that the way algorithms work is understood and that humans retain enough control. In this way, human agency is fully preserved and moral responsibility is retained independently of the potential dual-use of artificial intelligence technology.

Annotation

By Rutger Doting, Zeger Mouw. 🪧Slides.

This paper surveys existing codes of conduct for AI and argues that they fall short because they do not take the dual-use nature of AI technologies into account. The author therefore argues that the existing ethics for lethal autonomous weapon systems should also apply to all other AI systems.

Dual-use AI is AI that is used in both civilian and military contexts. The paper uses exoskeletons as an example: in civilian use, exoskeletons can enable people with disabilities to walk again, but the same technology has military applications, as an exoskeleton can also increase the strength and speed of soldiers on the battlefield. Connected with dual-use AI are lethal autonomous weapon systems, which the author defines as "any weapon system that has the ability to [identify, select and engage a target with lethal consequences] with limited human intervention". These systems are seen as more dangerous than ordinary AI, and thus have more ethical principles formulated for them.

To arrive at valuable insights into the ethics of AI, the author compared general-purpose and military ethical codes and found some common principles shared by the majority: beneficence, human dignity, privacy, human autonomy, fairness, and explainability. The following paragraphs explain each principle and how it differs between the two AI domains.

Beneficence means that AI should benefit the well-being of human beings. In the military domain, the benefit could be that systems predict targets better than humans, but a system that offers only a GO/NO-GO option could also escalate a situation.

Human dignity is the particular value that humans possess intrinsic to their humanity. With AI, humans become less involved in decisions. In the military domain, the concern is whether it breaches human dignity when technology gets to decide who lives and who dies.

Privacy is the base principle for all the others. Data should be accessed and processed with care.

Human autonomy concerns how much human decision-making is still involved in the technology. Even when a human is kept in the loop, the AI may still influence their judgment. In the military domain, this determines whether the user or a robot 'pulls the trigger'.

Fairness means that decisions are based on fair, unbiased data. In the military domain, fairness concerns whether weapons are used proportionally.

Explainability is very important in AI: a system should be able to explain why it made a specific decision. This applies in both the civilian and military domains.

The author of the paper draws a number of interesting conclusions based on the ethical principles that he surveyed. The first of these is that a freeze on AI research is neither possible nor desirable; instead, the author argues that the dual use of AI should be taken into account during development. Autonomy over AI systems seems most important to the author, as he sees it as an essential part of human dignity; in this light, he regards the coercion of humans by AI systems as a hostile act. From this it follows that principles formulated for LAWS should apply to other AI systems as well, since other AI systems also have the potential to remove autonomy. The author concludes by emphasising that we need to retain power over these systems as well as responsibility for them.

– 📖 –