AI and autonomous weapons arms transfers

Image of a drone. Source: Emmanuel Granatello / Flickr.

The Convention on Certain Conventional Weapons (CCW) Group of Governmental Experts (GGE) on Lethal Autonomous Weapons Systems held its much-anticipated second session on July 25–29, 2022.

Before the talks even began, they were predicted to fail due to the lack of consensus among states on how to regulate AI weapons. Although there has been some progress, human rights organizations have expressed frustration over the slow pace of the meetings, Russia's many objections, and states' reluctance to impose rules stricter than the existing 11 Guiding Principles on lethal autonomous weapons systems (LAWS), which seek to apply international humanitarian law (IHL) to all weapon systems.

In the prior session, the U.S., with support from the U.S. Department of State, promoted its Joint Proposal, reaffirming that the use of AI weapons must comply with existing IHL. However, the country's stance on "killer robots," a category that includes LAWS, autonomous weapons systems (AWS), and AI-enabled weapons, does not call for a total ban on these weapons. The Joint Proposal, which is not legally binding, instead emphasizes risk assessment and mitigation.

Another group of states instead seeks implementation of its Written Commentary Calling for a Legally-Binding Instrument on Autonomous Weapon Systems. The Commentary calls for legally binding rules and principles governing the production, development, possession, acquisition, deployment, transfer, and use of LAWS and AWS.

AI technology, some argue, is outpacing the international community’s efforts to control, regulate, and ban these weapons. In unregulated and protracted conflict zones, there are concerns that these technologies in the hands of repressive regimes, proxies, and terrorist networks will lead to more civilian harm and grave human rights abuses.

The data is clear that in the Southwest Asia and North Africa (SWANA) region, civilians are targeted by proxy warfare. In Yemen, 13,635 civilians have died since 2018 as a result of the conflict. In the SWANA region from July 9 to July 11, there were 1,187 reported fatalities from all weapon types, with a reported 100 drone and air strikes occurring on July 11 alone. The UN Panel of Experts on Libya reported in its 2021 letter to the Security Council that an unmanned drone "hunted down" retreating Haftar Armed Forces, who were "remotely engaged by the unmanned combat aerial vehicles of the lethal autonomous weapons."

Even before this report, human rights organizations drew attention to the potential devastation that AI weapons could cause, including indiscriminate attacks, the lack of human emotion in close-call circumstances, misidentification, and bias within AI algorithms.

While these instances stand out, the data on exactly how many civilians have already been killed or maimed by LAWS, AWS, and AI-powered technology is murky. Part of this uncertainty stems from the lack of a clear definition of AI weapons, which makes tracking violations more difficult: there is confusion over which attacks should be attributed to AI weapons rather than to more conventional means.


The U.S. is no stranger to technological errors that bump up against IHL principles. For example, in 2015 the U.S. misidentified a civilian hospital as a Taliban target due to "avoidable human error" and system and equipment failures, including faulty coordinates and the failure of video imaging systems. The New York Times has also reported that the Pentagon released detailed records documenting how the U.S. use of drones in aerial attacks in the SWANA region has led to misidentification and civilian deaths. The article notes that the U.S. has still not been held internationally liable for these attacks.

The lack of a clear definition can also generate gaps in accountability, since there are no criteria establishing when or how an AI weapon has been used in violation of international or domestic law. The U.S. Department of Defense's most recent directive on autonomy in weapon systems, Directive 3000.09, has been criticized for not taking the opportunity to define "AI-enabled" in its text.

Not all states at the CCW advocated for a clear definition of AI weapons. The United Kingdom voiced concerns about focusing too heavily on definitions and instead advocated analyzing the "effect" of the weapons to determine international regulation. By contrast, China advocated distinguishing between several key terms, such as "automation" and "autonomy."

Some rights groups want to go beyond current IHL. Some observers argue in favor of AI weapons, claiming that the technology curbs civilian deaths through improved target identification and the absence of human emotion. Rights groups, however, counter these claims, arguing that the risk of potential harm outweighs any benefits from these systems. Some even advocate a total ban on autonomous weapons. Stop Killer Robots' Statement at the 2021 CCW highlights specific weapon system effects that it seeks to prohibit, such as systems that use sensors to automatically target human beings.

There are several gaps in existing international and U.S. domestic law. Internationally, there are concerns about a lack of clarity regarding international jurisdiction, state responsibility, and individual criminal responsibility.

In the United States, the White House's Office of Science and Technology Policy (OSTP) has pushed for an AI Bill of Rights. The purpose of this bill is to protect against potentially harmful aspects and effects of AI technology, such as discrimination, particularly in biometrics. The focus of the bill remains domestic, however, and without codification or any clear text, it is not clear that it would address issues affecting AI weapons transferred outside of the U.S.

In terms of legal issues, the U.S. must also contend with the current lack of corporate accountability on the part of arms exporters. While the Leahy Laws were designed to prohibit sales to gross violators of human rights, the Government Accountability Office (GAO) has found that they have been applied inconsistently.

Without clear laws that define AI weapons, govern their use and sale, attribute legal responsibility, and track their transfer, the risk of additional human rights violations remains. The same issues that affect traditional arms transfers from defense companies and state governments also arise in AI weapons transfers. Possible legal challenges to underregulated AI weapons transfers could include liability for war crimes, crimes against humanity, and genocide before the International Criminal Court (ICC) or an ad hoc tribunal; sanctions; or domestic prosecution.

As advocates have already found, implementation of robust Human Rights Due Diligence could help curb human rights violations by defense manufacturers. The new age of AI weapons requires specialized rules to contain these systems and updated language that accurately reflects the types of weapons being used.

Disclaimer: This commentary was prepared by the staff of the American Bar Association Center for Human Rights and reflects their own views. It has not been approved or reviewed by the House of Delegates or the Board of Governors of the American Bar Association and therefore should not be construed as representing the policy of the American Bar Association as a whole. Further, nothing in this piece should be considered as legal advice in a specific case.