
Humans must retain control of military AI systems

8 March, 2021

By Ataa Dabour – Research Assistant

Artificial intelligence (AI) is now part of our everyday lives. The term refers to the ability of a machine or computer to perform tasks commonly associated with human intelligence, and we apply it to the development of systems endowed with intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the 1940s, it has been shown that computers can be programmed to carry out very complex tasks, such as discovering proofs of mathematical theorems or playing chess with great mastery.

Some programs have reached the performance level of human experts and professionals in specific tasks, and the speed of data processing, as well as the memory capacity of computers, is constantly increasing. However, no program can yet match human flexibility. Artificial intelligence can be divided into four distinct categories:

  • Reactive – can only react to current situations and cannot draw on past experiences.
  • Limited memory – relies on stored data to learn from recent experiences to make decisions.
  • Theory of mind – is capable of comprehending conversational speech, emotions, non-verbal cues, and other intuitive elements.
  • Self-awareness – reaches human-level consciousness with its desires, goals, and objectives.

According to the RAND Corporation, AI systems that can learn, think, and be trained independently will likely dominate the field. In other words, technologies that train software to learn and think for itself are among the most productive areas of AI. Replacing static software that must be periodically refreshed with systems that learn on their own creates smart, nimble systems at a lower cost.

The pursuit of military-strategic superiority is creating mounting pressure and plunging states into an AI arms race, as happened with nuclear weapons in the 20th century. As the military use of AI becomes a focus of great-power competition, governments around the globe are increasingly investing in research projects to enhance their armed forces’ combat capabilities with brand-new technological equipment, including autonomous weapons systems (AWS).

The PAX report “State of AI”, published in 2019, gives an overview of the current AI arms programs, policies, and positions of seven key players: the US, China, Russia, the United Kingdom, France, Israel, and South Korea. The United States and China lead the way in shaping how militaries across the globe perceive the future military use of AI. Because this race pushes for rapid technological development on the basis of security arguments rather than ethical considerations, it poses a real potential threat to humanity.

Artificial Intelligence in the Military Field

In defense terms, AI is best understood as a cluster of enabling technologies that will be applied to most aspects of the military sphere, explains Niklas Masuhr, a researcher at the Center for Security Studies (CSS) at ETH Zurich. Therefore, the world’s military, defense, and intelligence organizations apply artificial intelligence to surveillance, cybersecurity, homeland security, logistics, autonomous vehicles, autonomous weapons, and targeting.

Surveillance – AI is used to observe, collect, record, and report all kinds of information about a situation or an individual, and to process data gathered from sources such as telephones, laptops, drones, satellite imagery, or social media. AI can also be used to recognize a person based on physical characteristics such as height, posture, and build, as well as activity patterns, and to identify and analyze any anomalies, relevant objects, or changes in a situation that it has been trained to flag.
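
To illustrate the idea of anomaly flagging on activity patterns, here is a minimal sketch in Python: a simple z-score test over a hypothetical series of daily movement counts at a monitored site. The data and the threshold are invented for illustration; real surveillance systems would use far richer features and models.

```python
# Minimal sketch: flag an anomalous activity pattern with a z-score test.
# The daily movement counts below are hypothetical.
import statistics

daily_movements = [14, 12, 15, 13, 16, 14, 13, 15, 41]  # last value is unusual

baseline = daily_movements[:-1]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
latest = daily_movements[-1]

z_score = (latest - mean) / stdev
if abs(z_score) > 3:  # a common rule of thumb for outliers
    print(f"Flag for analyst review: {latest} movements (z = {z_score:.1f})")
```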

Cybersecurity – AI can play an important role in preventive measures: software can identify and neutralize threatening digital objects – such as an email attachment that could be a trap or a vehicle for malware – before they become active. AI can also identify anomalies in network behavior caused by security intrusions. AI defense companies use machine learning to provide security products that can identify and predict threats before they affect networks.
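
As a concrete illustration of the machine-learning approach described above, the following is a minimal sketch in Python using scikit-learn: a classifier trained to score email attachments before they are opened. The features (size, byte entropy, extension/MIME mismatch, embedded macros) and the training data are hypothetical stand-ins, not a description of any real product.

```python
# Minimal sketch: scoring email attachments with a learned classifier.
# Features and training data are hypothetical illustrations.
from sklearn.ensemble import RandomForestClassifier

# Each attachment: [size_kb, byte_entropy, extension_matches_mime (0/1), num_macros]
X_train = [
    [120.0, 4.2, 1, 0],   # ordinary document
    [850.0, 7.9, 0, 3],   # packed executable disguised as a document
    [45.0,  3.8, 1, 0],
    [600.0, 7.5, 0, 5],
]
y_train = [0, 1, 0, 1]    # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new attachment before the user can open it.
new_attachment = [[700.0, 7.8, 0, 2]]
risk = model.predict_proba(new_attachment)[0][1]
if risk > 0.5:
    print(f"Quarantine attachment (estimated risk: {risk:.0%})")
```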

Homeland Security – AI is used for predictive analysis – that is, for identifying trends and patterns within a dataset and then predicting whether and when that trend will (re)occur. Predictive analysis software can be used to predict potential crime suspects based on a variety of environmental factors and criminal record data, or to correlate signs of readiness to engage in illegal activities. Predictive analysis software can help reduce or avoid potential threats.
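
The core of predictive analysis, as described above, is fitting a trend to historical data and extrapolating it forward. A minimal sketch in Python, using an invented series of weekly incident counts and a plain linear regression, shows the shape of the technique; operational systems would combine many more variables and more sophisticated models.

```python
# Minimal sketch of predictive analysis: fit a trend to past weekly
# incident counts and extrapolate it forward. The counts are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

weeks = np.arange(12).reshape(-1, 1)                           # weeks 0..11
incidents = np.array([3, 4, 4, 6, 5, 7, 8, 7, 9, 10, 9, 11])  # hypothetical

trend = LinearRegression().fit(weeks, incidents)
forecast = trend.predict([[12], [13]])                         # weeks 12 and 13

print(f"Expected incidents over the next two weeks: {forecast.round(1)}")
```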

Logistics – AI improves the speed at which decisions are made and executed. Conversational interfaces can increase the speed and effectiveness of decision-making. Artificial intelligence also makes the maintenance and repair of military vehicles, craft, and equipment more efficient than maintenance technicians can achieve alone.

Autonomous Vehicles – Unmanned automated vehicles with a relatively simple AI driving algorithm can perform logistical support operations and increase productivity and human operator safety. They can patrol a secure area and investigate signs of intruders by focusing their cameras on places of possible disturbance and alerting human security forces to intrusions, significantly reducing the need for human patrol personnel.
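
The patrol-and-alert logic described above can be sketched in a few lines. The Python below is a hypothetical illustration: the sensor reading and alert channel are stand-ins, and, in keeping with the argument of this article, the loop only flags a possible intrusion and leaves the response decision to a human operator.

```python
# Minimal sketch of a patrol loop that detects anomalies but leaves the
# response to a human operator. Sensor and alert interfaces are stand-ins.
import random
import time

PATROL_WAYPOINTS = ["gate_north", "fence_east", "depot", "fence_west"]

def read_camera(waypoint: str) -> float:
    """Hypothetical sensor: returns an anomaly score between 0 and 1."""
    return random.random()

def alert_human_operator(waypoint: str, score: float) -> None:
    """Stand-in for a real alert channel (radio, console, message queue)."""
    print(f"ALERT: possible intrusion at {waypoint} "
          f"(score {score:.2f}) - awaiting human decision")

for waypoint in PATROL_WAYPOINTS:   # one patrol circuit
    score = read_camera(waypoint)
    if score > 0.8:                 # alert threshold set by human operators
        alert_human_operator(waypoint, score)
    time.sleep(0.1)                 # stand-in for travel time between waypoints
```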

Autonomous weapons and targeting – Autonomous weapon platforms use computer vision to identify and track targets. For example, the ever-vigilant “eyes” of artificial vision can be trained to prevent surprise rocket attacks by targeting and shooting down enemy rockets in the air before they can explode over a populated area. Autonomous weapons systems increase strength and effectiveness, can reduce casualties by requiring fewer deployed combatants, and can reach previously inaccessible areas.

The use of artificial intelligence in the military field not only changes the future of warfare but also offers three main advantages in the race for military-strategic superiority.

The first advantage concerns the training and organization of armed forces. Training could be personalized, with fair assessments and promotions, and more realistic as well as more creative exercises and crisis simulations could be designed and run. Second, artificial intelligence enables more precise and efficient strategic decision-making: situational assessments and analyses are produced faster, without human emotions or prejudices, which encourages rational behavior in crises. Finally, in military operations, artificial intelligence reduces risks for troops by automating logistics as well as administrative and staff work, and it improves support and reconnaissance systems and the effectiveness of data processing from different sources.

Human-Centric Approach to Artificial Intelligence

The emergence of new technologies that could replace human participation in warfare with machines raises serious concerns about the moral implications of the increasing autonomy of weapons, and of military systems more generally, as well as ethical questions under international humanitarian law (IHL).

Under international humanitarian law, the notions of responsibility, legitimacy, and accountability are of great importance, and only humans possess the moral capacity to justify taking the life of another human. Additionally, under the 1907 Hague Convention (IV), combatants must be commanded by a person responsible for their subordinates.

Moreover, the Martens Clause of the 1899 Hague Convention, also enshrined in Additional Protocol I to the Geneva Conventions, states that even in cases not covered by other laws and treaties, civilians and combatants remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity, and from the dictates of public conscience.

If machines replace human beings in making life and death decisions, how are actions legitimized, how can responsibility or a chain of accountability for violations be determined, and how can the principles of distinction, proportionality, and humanity be ensured?

Technology can replace neither human decision-making nor human contact. With this in mind, the Geneva Academy of International Humanitarian Law and Human Rights recommended in 2017 that humans:

  • Exercise the control necessary to determine, in a timely manner, what legal rules govern an application of force employing an AWS, and adapt operations as required.
  • Remain involved in algorithmic targeting processes in a manner that enables them to explain the reasoning underlying algorithmic decisions in concrete circumstances.
  • Be continuously and actively (personally) engaged in every instance of force application outside of the conduct of hostilities.
  • Exercise active and constant human control over every individual attack in the conduct of hostilities; they must appropriately bound every attack in spatio-temporal terms to enable them to recognize changing circumstances and adjust operations promptly.

While the great powers race to deploy AI systems mainly for security and strategic purposes, the EU set out its position on the military use of artificial intelligence in a report adopted in January 2021. In it, Members of the European Parliament (MEPs) call for an EU legal framework on AI with definitions and ethical principles.

Although one can question the EU’s real motives, beyond ethical and legal considerations, for wanting to play an important role in ensuring a responsible and human-centric approach to AI and in leading the way on best practices, a focus on preserving humanity, ethics, and legality is urgent today. Several recommendations could help the EU assume this leadership:

  • EU member states could contribute to the reflection on the responsible military use of AI by being more open and transparent about their national and intra-European deliberations related to the opportunities and risks of AI for the military, as well as by increasing awareness of the European Defence Agency’s work on the responsible use of AI.
  • The Council of the European Union can use its preparatory bodies – COJUR, EUMCWG, CODUN, CONOP – to foster deeper intra-EU discussion of the legal, ethical, and technical bases for the responsible military use of AI, and to organize special meetings on the interpretation and application of IHL to military uses of AI, on the ethical principles and safety guidelines for the military use of AI by European armed forces, and on the EU’s position on human control.
  • The European Defence Agency (EDA) could engage more and expand its collaboration with academics, think tanks, civil society, and the business sector on issues related to the development, use, and control of military AI.
  • The European Commission could support research on ethical and safety challenges by using the European Defence Fund to fund projects on ethical technology design; on the transparency, explainability, and traceability of military AI systems; on the development of a European framework for testing and evaluating military AI systems; and on methodologies for pooling data at the EU level according to key ethical principles.
  • The European Parliament can provide an open forum for the democratic exchange of perspectives on legal and ethical issues related to the military use of AI.

Within the first quarter of 2021, the European Commission will put forward a regulatory proposal that aims to safeguard fundamental EU values, rights, and user safety by subjecting high-risk AI systems to mandatory requirements related to their trustworthiness. As part of its AI Strategy, the EU will focus in the coming months on a legislative proposal on AI and an updated coordinated plan on AI.

As this overview suggests, the reflections of the EU, its agencies, and its member states on the military use of AI should focus mainly on legal compliance, ethical acceptability, and safety, while making this work more public and more collaborative. Furthermore, much of the work on AI governance in the civilian sector could provide a useful basis for future EU and member state projects on the responsible military use of AI.

Image: the Campaign to Stop Killer Robots, a transnational effort to ban autonomous weapons systems (Source: Campaign to Stop Killer Robots via CC BY 2.0)

About Ataa Dabour

Ataa Dabour is the founding President of the Security and Human Rights Association, based in Geneva. She obtained a first Master’s degree in Humanities and Transnational History from the University of Geneva, followed by a Master of Advanced Studies in Global Security and Conflict Resolution at the Global Studies Institute. She is also certified in Human Rights by Leiden University. Her professional career has allowed her to specialize in contemporary issues relating to armed conflict, defense, security, new technologies, and international humanitarian law. In 2018, she became an editor for the Swiss Military Review (SMR).