Professionalism/The 2015 Open Letter on Autonomous Weapons

A case study of the professional ethics behind the 2015 Open Letter on Autonomous Weapons.

Written for STS 4600 at the University of Virginia.

Introduction
The letter specifically addresses “offensive autonomous weapons” that “select and engage targets without human intervention.” This casebook will not deeply explore other uses of A.I. in combat or defense; it focuses on the main argument of the open letter, the current status of autonomous weapons, and historical examples that may inform future action on this subject.

The terms autonomous weapons and artificial intelligence (A.I.) weapons are used interchangeably, as they are comparable for this analysis of human-independent weaponry.

The Letter
''“In summary, we believe that A.I. has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military A.I. arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”''

Announced in July 2015 at the opening of the International Joint Conference on Artificial Intelligence (IJCAI), the letter campaigns for a total ban on the development of autonomous weapons. To date, the letter has been signed by 4,502 A.I./robotics researchers and over 26,000 others, including professors at major universities, professionals at major computer science firms, and prominent figures such as Stephen Hawking, Elon Musk, and Steve Wozniak. The letter's main argument is that autonomous weapons will be viable within years, and that while they offer major advantages in battle, they are so easy to use that abuse is inevitable. The letter compares the weapons to several analogous historical cases and warns that, if developed, A.I. weapons will be cheaper than automatic guns and more effective than nuclear weapons.

How the Technology Works
There are many different types of autonomous weapons: drones, firearms, tanks, etc. What unifies these weapons is their ability to operate without direct human interaction.

Image classification
The base technology required for weapons to become autonomous is the ability to take visual input and identify targets. This is accomplished by using cameras to capture real-time video and running that video through image-recognition algorithms to find potential targets. For these algorithms, identifying any given human is easy, but distinguishing combatants from non-combatants is far more difficult, because that distinction depends on contextual and situational information that can change from case to case.
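
The pipeline described above can be sketched in a few lines. This is a minimal illustration, not any real weapon system: the detector is a stand-in (a real system would run each video frame through a trained neural network), and the contextual cues and thresholds are invented for the example. The point it shows is that spotting a person is the easy step, while the combatant/non-combatant decision hinges on extra context.

```python
from dataclasses import dataclass

# Stand-in for a trained image-recognition model's output on one video
# frame. A real system would produce these by running the frame through
# a neural network; the fields here are assumed contextual cues.
@dataclass
class Detection:
    label: str                    # what the model saw, e.g. "person"
    carrying_weapon: bool         # contextual cue (assumed)
    near_military_vehicle: bool   # contextual cue (assumed)
    confidence: float             # model confidence, 0..1

def classify(d: Detection) -> str:
    """Toy combatant/non-combatant rule on top of a detection.

    Identifying a human is easy; deciding combatant status requires
    situational context, which is why the real problem is hard.
    """
    if d.confidence < 0.9:
        return "unknown"              # too uncertain to decide
    if d.carrying_weapon and d.near_military_vehicle:
        return "possible combatant"
    return "non-combatant"

# One detection per captured frame, run through the classifier:
for d in [
    Detection("person", True, True, 0.97),
    Detection("person", False, False, 0.95),
    Detection("person", True, False, 0.55),
]:
    print(classify(d))
```

Even in this toy version, most of the logic is about context rather than recognition, mirroring the difficulty described above.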

State of Autonomous Weapons
 “Artificial Intelligence technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades” 

Since the theory behind rudimentary A.I. weapons is so simple, there are many examples of home-made versions. A student in a high-school introductory programming and robotics course developed a pan-and-tilt turreted NERF gun that could aim and shoot using only facial recognition. Other YouTube videos offer schematics and assembly instructions for do-it-yourself NERF and airsoft turrets. These examples were all implemented using cheap, easily accessible circuit boards, motors, and programming platforms.
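
To see how little code such a hobby turret needs, here is an illustrative aiming step: given a face bounding box from an off-the-shelf recognition library, compute how far to swing pan and tilt servos to center the target. The frame size and field-of-view numbers are assumptions, not taken from any particular build.

```python
# Camera parameters (assumed for illustration).
FRAME_W, FRAME_H = 640, 480   # camera resolution in pixels
FOV_X, FOV_Y = 60.0, 45.0     # horizontal/vertical field of view, degrees

def aim_offsets(box):
    """Return (pan, tilt) angle corrections in degrees for a face
    bounding box (x, y, w, h) given in pixel coordinates."""
    x, y, w, h = box
    cx, cy = x + w / 2, y + h / 2   # center of the detected face
    # Fraction of the frame the target is off-center, scaled to degrees.
    pan = (cx - FRAME_W / 2) / FRAME_W * FOV_X
    tilt = (cy - FRAME_H / 2) / FRAME_H * FOV_Y
    return pan, tilt

# A face detected left of and above the frame center yields negative
# pan and tilt: turn the turret left and up.
print(aim_offsets((100, 100, 80, 80)))
```

Everything else in such a build is glue: feed camera frames to a face detector, pass each bounding box through a function like this, and drive the servos, which is exactly why the letter's authors consider the barrier to entry so low.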

Higher-level A.I. weapons research has also continued since the release of the open letter. From 2016 to 2019 the Pentagon budgeted $18 million for autonomous weapons technology, and it has ongoing contracts with Amazon and Microsoft to develop A.I. as the “centerpiece of its weapons strategy”. Robert Work, former Deputy Secretary of Defense, is a strong advocate of A.I. in warfare. He encouraged the Department of Defense to invest in A.I. as a way to “have an advantage as we start the competition”, language uncannily reminiscent of the Cold War and a new era of arms racing. As early as 2016, military tests of drone-mounted facial recognition outperformed humans at identifying non-combatants in specific scenarios. While the drone was not given the authority to engage those targets, implementing that aspect of autonomous weapons is remarkably simple.

Historic Comparisons: Technology
''“Autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms…. [they] will become the Kalashnikovs of tomorrow.”''

The 2015 letter provides multiple historical cases that give insight into the potential impact of autonomous weapons as the “third revolution in warfare”. Paul Scharre, author of Army of None, states that these weapon systems could create “flash wars” in which combat between the weapons unfolds too quickly for humans to consider the implications of each attack. This mirrors the second revolution in warfare, nuclear weapons, which made it possible to destroy entire cities in an instant: roughly 140,000 people died after the uranium bomb was dropped on Hiroshima, and 75,000 at Nagasaki. The speed of “flash wars” could quickly become as destructive as nuclear weapons.

Another similarity to nuclear weapons is the potential for an arms race. After World War II, the invention of nuclear weapons resulted in a competition for supremacy, with massive increases in both development and production of nuclear weapons. The signatories of the 2015 letter believe the arms-race potential for autonomous weapons is even greater: “virtually inevitable”. They suggest that while the nuclear arms race mainly involved just the US and USSR, an autonomous weapons arms race could involve many more countries, since the barriers to entry are much lower: “unlike nuclear weapons, [autonomous weapons] require no costly or hard-to-obtain raw materials.”

The letter also references Kalashnikov rifles, the most popular gun design in the world. Of the estimated 500 million firearms worldwide, 100 million are Kalashnikovs, about 75 million of them AK-47s. The gun's advantages include a simple design and a service life of anywhere from 20 to 40 years; it is easy to manufacture, use, and repair, and it is reliable and cheap. In his “Weaponomics” paper, Killicoat found AK-47s trading for as little as $40 in Eastern Europe and Asia and $12 in Africa and the Middle East. Originally developed for the Soviet army, the rifle's low cost and huge supply have led to its use by revolutionaries, terrorists, cartels, and criminals. The letter asserts that this problem will be replicated with autonomous weapons: their small cost will lead to an endless supply, and their creators will not be able to keep them out of nefarious hands. Given the potential power of these weapons, the danger of their use in terrorism is enormous.

Benefits and Drawbacks
 “The key question for humanity today is whether to start a global A.I. arms race or to prevent it from starting.” 

The open letter provides one example of the double-edged effect of autonomous weapons. Replacing soldiers with machines reduces casualties and costs for the owner. The latent effect, however, is a lowered threshold for entering battle: with no American lives at risk, a nation could be drawn not only into more conflicts but also into more unsavory ones.

Autonomous weapons might also reduce or remove certain human biases. A lack of racism or cultural bias could matter in combat, as could a lack of survival instinct: a machine would not have a “shoot-first, ask questions later” mentality and would not be affected by adrenaline or fear when making decisions. These benefits could apply, to a lesser degree, even to systems that are not weaponized. And if the global arms race has already begun, investment in the technology would also produce better A.I. weapon defense systems and prevent a future technological disadvantage.

The downsides of developing these weapons are more numerous. A.I. weapons' low cost makes them easy to mass-produce and easy for unsavory parties to acquire. A single weaponized drone could be deployed by an individual with the right contacts, let alone by terrorist organizations or hostile governments. Such weapons, drones in particular, are also ideally suited for acts of terrorism, dictatorial control, ethnic cleansing, and assassination. Software is also fallible and may carry built-in biases from its producers. Amazon's facial recognition software has notably worse accuracy when identifying women and people of color, and while Amazon recommends double-checking results against a confidence threshold, the Washington County Sheriff's Office in Oregon, an identified customer of Rekognition, said “we do not set nor do we utilize a confidence threshold”. Using code to govern weapons also increases the risk of coding errors and hacking. And when A.I. fails, the issue of accountability arises, as discussed later.

Historic Comparisons: Regulation
“Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.”

London Naval Treaty 1930
The U.S. helped negotiate the London Naval Treaty after World War I to ban unrestricted submarine warfare against civilian ships. The practice has parallels to autonomous weapons: at the time of the treaty, the submarine was considered a futuristic weapon with devastating effects. The treaty was signed by all major powers. Yet after the Pearl Harbor attack, it took only six hours for the United States to violate the 11-year-old treaty and attack Japan's merchant fleet, and the practice was generally used by all combatants in World War II. For autonomous weapons, would a treaty remain practical once the technology is no longer futuristic? Would countries continue to respect a ban if their adversaries, including non-government groups, used these weapons?

Asilomar conference on Recombinant DNA 1975
The Asilomar conference is a more successful example of international regulation. In 1975, a group of 140 professionals, including biologists, lawyers, and physicians, came together to draft guidelines to ensure that recombinant DNA technology was used safely. Prior to the conference, many biologists had halted experiments out of fear of the potential dangers. The guidelines created at the conference allowed scientists to continue their research safely, which increased both public interest in and knowledge about life processes. The conference set a precedent for the people creating a technology being involved in developing its regulation.

Ethical Considerations
The key ethical issue regarding autonomous weapons is who to blame when things go wrong, “go wrong” here meaning erroneously attacking or killing a target, or attacking a civilian.

Responsibility
There is a complicated chain of command from the beginning of development to the final firing of an autonomous weapon. For human-in-the-loop or human-on-the-loop weapons, a person oversees the weapon: if it misfires and they fail to prevent it, is it their fault? Or is it the fault of the programmers who built the faulty recognition algorithm that caused the weapon to fire erroneously? What if the company developing the algorithm was under time or money constraints that pushed the programmers' manager to demand a shoddy algorithm?

Further questions of responsibility come from just how easily autonomous weapons can be replicated once they are out in public. If the researchers' predictions are to be trusted, autonomous weapons will be the next revolution in warfare, following gunpowder and nuclear weapons. Having such an easily copied and dangerous weapon heightens the question of who to blame.

Because responsibility is so blurred regarding autonomous weapons, the A.I. researchers co-signing the letter have pledged to take no part in further research on them.

Professional Ethics
Engineers entering this space may wonder what they can do to prevent escalation. Google's engineers provide one example. When it was leaked that Google was helping the U.S. government with “Project Maven,” a project applying Google's image recognition software to military drone footage, engineers within the company mounted a massive protest, which eventually pressured Google to pull out of the deal.

Conclusion
As a novel technology, autonomous weapons and the A.I. research behind them have the potential to greatly benefit humanity or greatly harm it. The authors and signatories of the Open Letter on Autonomous Weapons believe that unrestricted A.I. weapons development will be disastrous on a global scale no matter the intentions behind it. They campaign for a ban on offensive autonomous weapons, but the ease of manufacturing these weapons and the effectiveness of the lethal tactics they enable may make such a ban impossible.

Future exploration could include examination of the potential for these weapons to be hacked or stolen, the future of autonomous weapon technology including defense, and the current global status of the technology.