National Security
National Security Commission on Artificial Intelligence (2021)
“The Department of Defense (DoD) should [...] establish the foundations for widespread integration of AI by 2025. This includes building a common digital infrastructure, developing a digitally-literate workforce, and instituting more agile acquisition, budget, and oversight processes.” [emphasis added]
Autonomous Weapons Systems
Losing Humanity: The Case against Killer Robots (Human Rights Watch and Harvard Law School’s International Human Rights Clinic, 2012):
A relatively small community of specialists has hotly debated the benefits and dangers of fully autonomous weapons. Military personnel, scientists, ethicists, philosophers, and lawyers have contributed to the discussion. They have evaluated autonomous weapons from a range of perspectives, including military utility, cost, politics, and the ethics of delegating life-and-death decisions to a machine. According to Philip Alston, then UN special rapporteur on extrajudicial, summary or arbitrary executions, however, “the rapid growth of these technologies, especially those with lethal capacities and those with decreased levels of human control, raise serious concerns that have been almost entirely unexamined by human rights or humanitarian actors.” It is time for the broader public to consider the potential advantages and threats of fully autonomous weapons.
Pros and Cons of Autonomous Weapons Systems (Amitai Etzioni & Oren Etzioni, Military Review, 2017):
We find it hard to imagine nations agreeing to return to a world in which weapons had no measure of autonomy. On the contrary, development in AI leads one to expect that more and more machines and instruments of all kinds will become more autonomous. Bombers and fighter aircraft having no human pilot seem inevitable. Although it is true that any level of autonomy entails, by definition, some loss of human control, this genie has left the bottle and we see no way to put it back again.
The Moral Case for the Development and Use of Autonomous Weapon Systems (Erich Riesen, Journal of Military Ethics, 2022):
In this article, I provide the positive moral case for the development and use of supervised and fully autonomous weapons that can reliably adhere to the laws of war. Two strong, prima facie obligations make up the positive case. First, we have a strong moral reason to deploy AWS (in an otherwise just war) because such systems decrease the psychological and moral risk of soldiers and would-be soldiers. Drones protect against lethal risk, AWS protect against psychological and moral risk in addition to lethal risk. Second, we have a prima facie obligation to develop such technologies because, once developed, we could employ forms of non-lethal warfare that would substantially reduce the risk of suffering and death for enemy combatants and civilians alike. These two arguments, covering both sides of a conflict, represent the normative hill that those in favor of a ban on autonomous weapons must overcome. Finally, I demonstrate that two recent objections to AWS fail because they misconstrue the way in which technology is used and conceptualized in modern warfare.