AI on the battlefield is coming. That's guaranteed. Before autonomous "slaughterbots" arrive, we need to think through the moral and ethical implications of such powerful technology in warfare. What does it mean for a weapon to be fail-safe? Is a human in the loop necessary, or even desirable?
by Dr. Gordon Cooke
We live in an era of rapid technological advancement, in which yesterday’s pure fiction is today’s widely adopted consumer product. Such technologies have created a highly interconnected present. They portend an even more connected and automated future, in which the children who grew up asking Alexa why the sky is blue will be far more comfortable with artificial intelligence than we are today. And they bring with them a host of moral and ethical questions far more complex than any science fiction story.
Gaming out the effects of technology is notoriously difficult. Artificial intelligence (AI) already surrounds us in our devices, cars and homes. We accumulate capabilities and take them for granted as their benefits accrue. But now and again, it’s a good idea to stop and try to think about the potential for harm that comes with these technologies. To do that, we have to look at what we have, where it is and where it could go.
Weapons controlled by AI will appear on the battlefields of the future. Despite the protests (more on those in a moment), this is going to happen. Making a cheap, fully automated system that can detect, track and engage a human with lethal fires is trivial and can be done in a home garage with hobbyist-level skill. This isn’t science fiction. It’s fact. (Need more proof? Just watch the last episode of “Breaking Bad.”)
A variety of instructions, how-to videos and even off-the-shelf, pretrained AI software is readily available online and can be easily adapted to available weapons. Automated gun turrets used by hobbyists for paintball and airsoft guns have demonstrated the ability to hit more than 70 percent of moving targets.
To put that capability into perspective, the Army rifle qualification course requires a Soldier to hit only 58 percent of stationary targets to qualify as a marksman. Soldiers who hit 75 percent of stationary targets earn a sharpshooter qualification. It would take only some basic engineering, or enough tinkering, to build a heavier-duty turret with off-the-shelf software, a zoom camera and a fine-control pan/tilt mechanism that holds a lethal firearm.
AI FOR DECISION-MAKING
In the near term, AI is going to be used in military applications to aid decision-makers. The automotive industry is already integrating AI into vehicles to analyze driving situations and provide augmented reality to drivers via heads-up displays that can help avoid accidents.
Such systems work by judging the deceleration of nearby vehicles, analyzing the context of roadway markings, or using additional sensors to enhance navigation in low-visibility fog. Automakers have even integrated fail-safe technology that can brake the car to avoid collisions if the driver fails to act. This same type of technology will be deployed by the military to aid Soldiers’ decision-making.
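To make the fail-safe concept concrete, here is a minimal sketch in Python of the kind of decision rule such an automatic braking system might use. It is purely illustrative; the function names, thresholds and sensor inputs are assumptions, not any automaker's actual implementation.

# Illustrative sketch of an automatic emergency braking decision.
# All names and thresholds here are hypothetical, for explanation only.

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:  # not closing on the vehicle ahead
        return float("inf")
    return gap_m / closing_speed_mps

def should_auto_brake(gap_m: float, closing_speed_mps: float,
                      driver_braking: bool, ttc_threshold_s: float = 1.5) -> bool:
    """Fail-safe rule: intervene only if a collision is imminent
    and the human driver has not already acted."""
    ttc = time_to_collision(gap_m, closing_speed_mps)
    return ttc < ttc_threshold_s and not driver_braking

# Example: 12-meter gap, closing at 10 m/s, driver not braking: intervene.
print(should_auto_brake(12.0, 10.0, driver_braking=False))  # True

The pattern matters more than the numbers: the machine monitors continuously but intervenes only when the human fails to act in time. That same template underlies the military fail-safes discussed later in this article.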
AI will be used to analyze the battlefield and provide augmented reality information to Soldiers via heads-up displays and weapon control systems. These systems will identify and classify threats, prioritize targets, and show the location of friendly troops and safe distances around them. They will fuse data from multiple sensors across the battlefield to generate a picture built on information that Soldiers today would not even have access to. Human Soldiers will still control the majority of military actions in the near term, but AI will provide easy-to-understand analysis and recommendations based on datasets far too large for unaided humans to comprehend.
AI IS EVERYWHERE
AI-based systems already permeate our daily lives. The list of the world's biggest companies is dominated by corporations that are built on or rely heavily on AI, such as Apple, Google, Microsoft, Amazon and Facebook. Amazon recently released Rekognition, an image and video analysis tool that anyone can add to a software application. In fact, police departments are already using its facial recognition capability.
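To illustrate how little code "adding it to a software application" actually takes, here is a hedged sketch using Amazon's Python SDK, boto3. The bucket and file names are placeholders and the confidence threshold is arbitrary; the call requires an AWS account with credentials configured.

# Minimal sketch: label detection with Amazon Rekognition via boto3.
# The S3 bucket and object names below are placeholders, not real resources.
import boto3

rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "photo.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")

A handful of lines turns a generic photo into a list of labeled objects with confidence scores, which is exactly why this class of capability spreads so quickly.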
The AI market was worth more than $21 billion in 2018 and is expected to grow almost ninefold by 2025. AI systems provide predictive analysis to interpret human inputs, determine what we most likely want, and then provide us with highly relevant information.
AI is no longer a technology reserved for a handful of multimillion-dollar fighter jets. Advances in hardware technology provide cheaper, smaller, more powerful processors that can be integrated affordably into individual Soldier equipment and fielded by the hundreds of thousands. These advances in hardware are what enable the “internet of things,” and what will become the internet of battlefield things.
The U.S. Army Combat Capabilities Development Command (CCDC) Armaments Center is developing smart weapon sights that can provide targeting information to aid riflemen and machine gunners. Soldiers will have an aiming display that helps identify targets by classifying people in view as threats or nonthreats, as well as indicating the relative location of "friendlies" and mission objectives.
Networking capabilities will further allow automated coordination to assign priority targets to individual Soldiers so that all targets are engaged as efficiently as possible and time is not wasted by having multiple Soldiers fire at the same target. Networked smart weapons will also allow logistics systems to automatically initiate resupply actions as soon as combat begins, providing just-in-time logistics all the way to the forward edge. Supply and transportation assets will be able to begin rerouting truckloads of supplies across the battlespace to the point of need. At the tactical level, small robots will be able to bring loaded magazines to individual Soldiers as they expend their basic combat load.
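As a notional illustration of the simplest version of that resupply trigger, the Python sketch below generates a request when reported ammunition falls below a threshold. The unit identifier, thresholds and data format are invented for illustration and are not drawn from any fielded Army system.

# Notional sketch of an automated resupply trigger.
# Identifiers, thresholds and the request format are hypothetical.
from typing import Optional

BASIC_LOAD = 210          # rounds in a notional rifle basic combat load
RESUPPLY_THRESHOLD = 0.5  # request resupply at 50 percent remaining

def check_resupply(soldier_id: str, rounds_remaining: int) -> Optional[dict]:
    """Return a resupply request once reported ammunition drops below
    the threshold; otherwise return None and take no action."""
    if rounds_remaining < BASIC_LOAD * RESUPPLY_THRESHOLD:
        return {
            "soldier": soldier_id,
            "item": "5.56 mm ball",
            "quantity": BASIC_LOAD - rounds_remaining,
        }
    return None

print(check_resupply("A-1-2", rounds_remaining=90))    # triggers a request
print(check_resupply("A-1-3", rounds_remaining=180))   # None; no action yet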
TOUGH ETHICAL QUESTIONS
All the above is coming in the next 10 to 20 years. The technology exists, and it is simply a matter of time, development effort and cost-benefit ratios.
Even more automation is possible in the future. DOD and society at large will face complex questions as this technology continues to grow. For example, it is already possible to include AI safety features that prevent a weapon from firing at certain "wrong" targets (that is, blocking shots at anything the AI system does not classify as an "enemy") to decrease collateral damage or to prevent enemy use of friendly weapons. This, however, raises interesting questions: What does it mean for a weapon to be fail-safe? What error rate makes it "safe" for a weapon to potentially not fire when a Soldier pulls the trigger?
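One hedged way to frame that question, with every symbol below an assumption chosen for illustration rather than an established DOD metric, is as a tradeoff between two error rates:

E[harm] = p_block × C_miss + p_pass × C_collateral

Here p_block is the probability the safety feature wrongly blocks a shot at a legitimate target, C_miss is the cost of that blocked engagement (including the added risk to the Soldier who pulled the trigger), p_pass is the probability the feature wrongly allows a shot at a protected target, and C_collateral is the cost of the resulting collateral damage. Calling a weapon "fail-safe" amounts to deciding which of those two terms must be driven toward zero, and at what price in the other.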
Some have raised concerns about increasing autonomy in weapon systems. Groups such as the Campaign to Stop Killer Robots and the International Committee for Robot Arms Control have called for total bans on the research and development of autonomous weapons and for limiting AI research to civilian uses only.
Such calls for a ban on development of autonomous lethal weapons, however well-meaning, seem to ignore the fact that the technology they most seek to prevent (autonomous machines that indiscriminately kill humans) already exists. Autonomous armaments that can find and kill humans will appear on the battlefield, even if not introduced by the United States or another major state, because the required technology is already available.
The reason major armies do not yet deploy such systems is that the systems lack the ability to discriminate between legitimate and illegitimate targets. Research and development in this area is in its infancy and is intertwined with needed policy decisions about how to precisely define a legitimate military target. Stopping research into autonomous weapons now will not prevent "slaughterbots" that indiscriminately kill; it will only prevent responsible governments from developing systems that can differentiate legitimate military targets from noncombatants and protect innocent lives.
WHAT ABOUT HUMAN ERROR?
We must consider the fact that humans make mistakes when using lethal weapons in combat. The U.S. bombing of the Doctors Without Borders hospital in Kunduz, Afghanistan, in October 2015 and the hundreds of thousands of civilian casualties in Iraq and Afghanistan attest to this reality.
We essentially still have the same “version 1.0” human that has existed for roughly 200,000 years, and capability development in humans is relatively flat. Our decision-making error rate in life-or-death situations is likely to be constant. Machine accuracy, on the other hand, is improving at an exponential rate. At some time in the future, machine accuracy at making combat-kill decisions will surpass human accuracy. When that occurs, it raises a host of interesting questions: Is it ethical to keep a human in the loop for weapon systems when a machine is less error-prone? Does the idea that only humans should be allowed to kill humans trump the desire to minimize civilian deaths? Are we willing to accept additional, avoidable deaths in order to keep humans in absolute control of lethal decisions? Is our human need to have someone to blame, someone to “hold accountable” and exact retribution from, more important than rational interest balancing that minimizes suffering?
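To make that crossover concrete with a deliberately toy model, in which the starting rates and the halving assumption are illustrative rather than measured, suppose the human error rate e_h stays constant while the machine error rate starts at e_0 and halves every d years:

e_machine(t) = e_0 × 2^(−t/d),  so the machine becomes the more accurate decision-maker once  t > d × log2(e_0 / e_h).

If, hypothetically, e_0 were 10 percent, e_h were 1 percent and d were five years, the crossover would arrive in roughly 5 × log2(10), or about 17 years. The specific numbers are invented; the point is that a flat line and an exponentially falling one must eventually cross.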
This desire to keep humans in control and the current distrust in autonomous systems mean that the next systems to come in the mid-term, perhaps the next 30 to 50 years, will most likely continue to be semi-autonomous. The underlying technology will continue to improve, allowing human operators to place more and more trust in these systems.
Over time, we should expect the automated portions to become more capable and the human-machine interfaces to improve. This will enable human operators to increase their control over multiple systems while decreasing the level of detail the human has to control directly.
CONTROLLING LETHALITY
Future semi-autonomous systems will evolve through three levels of human control over lethality. We currently operate at the first level, where every individual trigger pull of a lethal weapon requires human approval.
At the second level, the person operating the weapon becomes more like a small-unit leader; the human decides when and where to open fire and the weapon then picks out individual targets and engages them. The human retains the ability to order a cease-fire.
The third and most abstract level is like a battalion-or-above commander exercising command and control. Here, the human decides on the mission parameters (such as left and right boundaries, movement corridors, desired outcomes, sequence of events or constraints), selects the engagement area, and designates weapon-control measures throughout the mission (e.g., firing only at identified enemies who have fired first while moving to the target area, firing at all targets not identified as friendly inside the engagement area boundaries, or not firing within 10 meters of friendly locations). The weapon system then executes the mission orders, finds and selects targets, and reacts within its parameters without further guidance as events unfold.
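A minimal sketch of how those three levels might be represented in software follows; the class names, fields and example values are hypothetical and are not drawn from any fielded or planned system.

# Hypothetical representation of the three levels of human control.
# Names, fields and example values are illustrative only.
from dataclasses import dataclass
from enum import Enum

class ControlLevel(Enum):
    PER_SHOT_APPROVAL = 1   # human approves every individual trigger pull
    UNIT_LEADER = 2         # human decides when and where to fire, can cease fire
    MISSION_COMMAND = 3     # human sets mission parameters and control measures

@dataclass
class MissionParameters:
    """Level-three constraints a human commander would set before execution."""
    engagement_area: str           # a named, bounded geographic area
    weapon_control_measure: str    # e.g., "return fire only while en route"
    no_fire_radius_m: float        # stand-off distance from friendly locations

orders = MissionParameters("EA BLUE", "return fire only while en route", 10.0)
print(ControlLevel.MISSION_COMMAND, orders)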
All three levels of control retain a human in the loop and allow humans to decide and define what a valid target is. Whether each level is deemed acceptable depends on how broadly we interpret the requirement to have a person selecting “specified target groups,” which is the language about semi-autonomous weapons used in current DOD policy.
Is it adequate to say that all persons in a designated geographic area are part of the specified target group? Does it matter whether the human has direct observation of the targeted area to see and decide that all persons in the area are legitimate combatants, and can cease fire if that changes? Is it enough to specify that anyone wearing an enemy uniform is part of the specified target group, if sensors are good enough at differentiating uniforms and clothing? How specific does the target description need to be, considering sensor and automation capabilities, to meet the standard for saying the human was in control?
ATTITUDES AND GENERATIONS CHANGE
We should also consider how policy might evolve as society's confidence in AI increases. Today's policies reflect the nascent state of current automated systems. Yet AI-based systems are improving and proliferating throughout society. Cameras no longer simply snap photos when we press the shutter-release button; rather, we trust the AI software to decide when everyone is smiling and to record the best image. AI systems target us with individually tailored advertising. AI systems make million-dollar trades on stock exchanges around the world without human approval.
Our children are growing up in a world where they can ask an AI-powered device a question and not only get a correct answer, but the device recognizes them and addresses them by name when giving that answer. In only 20 years, some of these children will be the generals on the battlefield. In less than a generation, we should expect societal attitudes toward artificial intelligence to adjust to the demonstrated reliability that comes from improvement in the technology.
At what point does the human in the loop on a weapon system stop deciding whether a weapon should be used and start clicking the “approve” button because the AI sensor system assessed the proposed target as a threat? If a family court judge rejected the results of a DNA paternity test because he didn’t think the child resembled the father, there would be shock in the courtroom, followed by a quick appeal. What happens when faith in the performance of a technology is high enough that disagreeing with what the system tells you becomes unthinkable? What happens when we reach the point where we court-martial weapon operators for placing friendly units at risk when they override weapon systems? At that point, why is the human part of the process and what role do they serve? Societal attitudes toward autonomous systems are going to change. It is highly likely we will eventually see fully autonomous weapons on the battlefield.
CONCLUSION
The technologies that enable AI weapon systems are inevitable, if not already here. It is no longer possible to prevent research unique to AI weapons while allowing research into helpful civilian applications to continue, because the remaining research areas are all dual-use. Furthermore, rudimentary but functional autonomous weapon systems can already be created with existing technology. The horse is out of the barn.
What we need to do now is have a serious discussion about the moral and ethical implications of AI technology. But it must be a discussion that starts from the reality of the current state of the technology and the capabilities that already exist, and that recognizes bad actors will misuse any technology in the future. We should consider not just our current morals and ethics, but also how society's norms will shift over time, as they always do.
What we do about the ethical and moral implications of AI will say a great deal to future generations about how we balanced rational and emotional concerns, and what kind of character and values we had.
For more information, contact the author at Gordon.cooke@westpoint.edu or visit https://westpoint.edu/military/department-of-military-instruction/simulation-center, https://www.pica.army.mil/tbrl/ or https://www.ardec.army.mil/.
DR. GORDON COOKE is director of the West Point Simulation Center and an associate professor in the Department of Military Instruction at the United States Military Academy at West Point. He holds a Ph.D. in biomechanics and an M.S. in mechanical engineering from Stevens Institute of Technology, as well as graduate certificates in ordnance engineering and biomedical engineering from Stevens. After graduating from West Point with a B.S. in mechanical engineering, he served as a combat engineer officer in the 11th Armored Cavalry Regiment, then spent 12 years as a civilian research engineer at the U.S. Army Armament Research, Development and Engineering Center (ARDEC), now known as the U.S. Army Combat Capabilities Development Command Armaments Center. During his time at ARDEC, he spent five years on the faculty of the Armaments Graduate School. Cooke was selected for Junior and Senior Science Fellowships, was awarded the Kurt H. Weil Award for master’s candidates, and received the U.S. Army Research and Development Achievement Award twice. He is an Acquisition Corps member and is Level III certified in production, quality and manufacturing.
This article is published in the Summer issue of Army AL&T magazine.