A back-and-forth interplay of government and commercial funding and research has brought AI to the edge of a breakthrough.
by Dr. Alexander Kott
Few fields of technology are as paradoxical as artificial intelligence (AI). For one thing, since its official inception in the mid-1950s, AI has experienced multiple cycles of boom and bust. Time and again, AI has been proclaimed a miracle technology; intense hype builds up and lasts for a decade or so, only to be followed by equally intense disappointment and a sense of abandonment. Similarly, human emotions around AI tend to run to extremes.
Back in the 1950s, many a life was changed by fascinating visions of the future depicted in the robot stories of Isaac Asimov. Sixty years later, science and technology experts, including astrophysicist Stephen Hawking, Microsoft’s Bill Gates, Apple co-founder Steve Wozniak and Tesla’s Elon Musk, have warned that humankind could be extinguished by AI. It is hard to imagine more passionate attitudes toward what is, after all, merely software.
This brings to mind yet another paradox: As soon as a research topic in AI achieves practical maturity, it is invariably demoted to “just a computer program.” Thirty years ago, finding an efficient route on a complex, realistic map while taking into account traffic conditions and road closures was considered a major topic of AI research. Today, it is merely a GPS app on your smartphone, and nobody calls it AI anymore.
While no definition of AI seems quite adequate for such an unconventional field of endeavor, one way to describe AI is as the ability of computer-enabled agents (e.g., robots) to perceive the world, reason and learn about it, and propose actions that meet the agent's goals. Equipped with AI, agents—whether purely computer-resident, like a highly sophisticated version of Amazon's Alexa, or physical robots—become capable of autonomy: the ability of a system to perform highly variable tasks with limited human supervision (e.g., dealing with unpredicted obstacles and threats). Another often-heard term, machine learning, refers to a subfield of AI concerned with improving machine knowledge and performance through interactions with the environment, data, people and so on.
The last few years have seen dramatic yet uneven advances in AI as applied to both physical robots and software-only intelligent agents. Some capabilities, like answering questions (IBM's Watson), "deep learning" (Google's TensorFlow machine learning) and self-driving cars, have achieved significant breakthroughs. Others see ongoing exploration without any dramatic advances—yet. Almost all of the initial breakthroughs, including all of those named above, came to a large extent from the government's pioneering research funding. Only later, when the research efforts showed commercial potential, were they picked up by industry, which then invested far more in these technologies than the initial government funding.
Considering the recent, enormous growth of interest in AI shown by both the public and industry, the interplay between government and commercial investments is interesting and complex. Published estimates of global commercial investment in AI (including autonomy) vary widely, between $20 billion and $50 billion per year. The major commercial markets include retail, telecommunications, finance, automotive and industrial assembly robots. In comparison, the Army's science and technology (S&T) investment in AI and autonomy is two to three orders of magnitude lower. Given that disparity, why should the Army bother? Why not let industry take the lead, and wait until its enormous investments produce the AI technologies the Army wants?
First, the Army S&T community is well aware of industry efforts and products; it uses these products extensively in Army-focused research, often tailoring them as needed. In autonomy research, for example, Army scientists and engineers use industrial or industry-supported robotic platforms, such as iRobot's widely used small unmanned ground vehicle PackBot and the popular Robot Operating System (ROS)—open-source middleware supported by a number of corporations. Computers and processors also come from industry: NVIDIA Corp.'s graphics processing units, which help accelerate deep learning, are one example, as is IBM's TrueNorth chip, which emulates brain neurons for power-efficient computation. For machine learning, Army S&T uses well-developed software tools such as TensorFlow.
At the same time, the Army S&T community focuses on problems that are quite distinct and are not going to be addressed by commercial applications. For example, much of the Army's research and development (R&D) investment in autonomy focuses on autonomous convoys traveling in adversarial environments over terrain other than conventional roads; on robotics for manned-unmanned teams performing reconnaissance, surveillance, target acquisition and breaching; and on AI for military intelligence data analysis. These are not yet areas of significant interest to commercial developers, who focus on lucrative consumer markets.
Furthermore, Army-specific AI problems present deep, foundational scientific and technical challenges that are not typical of—or at least not a high priority for—the problems targeted by commercial investment. For example, AI and machine learning for self-driving cars, although initially spurred by the Defense Advanced Research Projects Agency's Grand Challenge competitions, are currently being developed by industry and optimized for relatively orderly, stable, rule-driven, predictable environments, such as the highways and streets of modern cities. Nothing could be further from the environments where Army-specific AI will have to operate: unstructured, unstable, chaotic, rubble-filled urban combat.
As another example, the recent explosion of successes in machine learning has been connected with the availability of very large, accurate, well-labeled data sets, which can be used for training and validating machine learning algorithms and, over lengthy periods of time, for the learning process itself. Army-relevant machine learning must work with data sets that are dramatically different: often observed and learned in real time, under extreme time constraints, from only a few observations (e.g., of enemy techniques or materiel); potentially erroneous, of uncertain accuracy and meaning; or even intentionally misleading and deceptive. In other words, some of the very foundations of commercial AI algorithms diverge strongly from what the Army needs.
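To make the contrast concrete, here is a minimal, purely hypothetical sketch (not an ARL algorithm) of learning from a handful of unreliable reports: a Bayesian update that discounts each observation by an assumed error rate, rather than assuming thousands of clean labels.

```python
# Minimal sketch: updating belief about an enemy behavior from a few
# noisy observations. Purely illustrative; not an ARL algorithm.
# Assumption: each report is wrong with probability `error_rate`.

def update_belief(prior: float, report: bool, error_rate: float) -> float:
    """Bayesian update of P(behavior present) given one unreliable report."""
    # Likelihood of this report if the behavior is present vs. absent.
    p_if_present = (1 - error_rate) if report else error_rate
    p_if_absent = error_rate if report else (1 - error_rate)
    numerator = p_if_present * prior
    return numerator / (numerator + p_if_absent * (1 - prior))

belief = 0.5  # uninformative prior: no idea whether the behavior is present
for report in [True, True, False, True]:  # only four observations available
    belief = update_belief(belief, report, error_rate=0.2)
    print(f"report={report}  belief={belief:.3f}")
```

After four conflicting reports, the belief has shifted but remains appropriately uncertain; a commercial training pipeline, by contrast, typically assumes abundant, correctly labeled examples and plenty of time to learn from them.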
MANNED-UNMANNED TEAMING
Human-agent teams—Soldiers teamed with robots and other intelligent systems operating with varying degrees of autonomy—will be ubiquitous on the future battlefield. These systems will selectively collect and process information, help Soldiers make sense of the environment they’re in, and—with appropriate human oversight—undertake coordinated offensive and defensive actions.
Many will resemble more compact, mobile and capable versions of current systems such as unattended ground sensors, unmanned aerial vehicles (drones) and fire-and-forget missiles. Such systems could carry out individual actions, either autonomously or under human control; collectively provide persistent and complete battlefield coverage as a defensive shield or sensing field; or function as a swarm or "wolf pack" to unleash a powerful coordinated attack.
In this vision of future ground warfare, a key challenge is to enable autonomous systems and Soldiers to interact effectively and naturally across a broad range of warfighting functions. Human-agent collaboration is an active research area that addresses calibrated trust and transparency, common understanding of shared perceptions, and human-agent dialogue and collaboration. Army S&T is focused on the fundamental understanding and methods to design and develop future Army autonomous systems that will interact seamlessly with Soldiers.
One capability that has relied on a foundation of government research is question answering: a system's ability to respond to a clearly stated question with relevant, correct information. The recent question-answering successes of commercial technologies like IBM's Watson and Apple's Siri are built on several decades of government leadership in related research fields.
These systems work well for very large, stable and fairly accurate bodies of data, like encyclopedias. But they break down on rapidly changing battlefield data, which can be distorted by adversaries' concealment and deception. Nor can commercial question-answering systems support a continuous, meaningful dialogue in which Soldiers and artificially intelligent agents develop shared situational awareness and a shared understanding of intent. The Army is performing research to develop human-robot dialogue technology for warfighting tasks, using natural voice, which is critical for reliable battlefield teaming.
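To illustrate the static-corpus assumption behind encyclopedia-style question answering, here is a deliberately simple retrieval sketch (a toy, not how Watson or Siri work): it can only surface whatever a fixed corpus already says, so stale or deceptive entries yield confidently wrong answers.

```python
# Toy sketch of retrieval-style question answering over a FIXED corpus.
# Illustrative only; commercial QA systems are far more sophisticated.
import math
import re
from collections import Counter

STOPWORDS = {"the", "is", "of", "a", "at", "in", "has", "what"}

def tokens(text: str) -> Counter:
    words = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(w for w in words if w not in STOPWORDS)

def score(question: str, passage: str) -> float:
    """Cosine similarity between bag-of-words vectors."""
    q, p = tokens(question), tokens(passage)
    dot = sum(q[w] * p[w] for w in q.keys() & p.keys())
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in p.values()))
    return dot / norm if norm else 0.0

corpus = [
    "The bridge at grid NK1234 has a 40-ton load limit.",
    "The river is fordable north of the bridge in the dry season.",
]
question = "What is the load limit of the bridge?"
print(max(corpus, key=lambda passage: score(question, passage)))
# Correct here, but only because the corpus is current and truthful.
```

The sketch answers correctly only while its corpus stays accurate; on a battlefield, the corpus itself is contested, which is one reason the Army's dialogue research cannot simply reuse commercial question answering.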
Also critical is the self-organization of robotic team members. Leveraging available commercial technologies like the Robot Operating System and commercial robotic platforms, Army scientists are researching Soldier-robot teaming on complex ground terrain. For example, the Army recently demonstrated leader-follower driving of resupply trucks, in which several unmanned vehicles autonomously follow a human-driven truck on narrow forest roads under tree canopy, at tactically appropriate speed and with long gaps between the trucks.
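The control idea at the heart of such convoying can be sketched in a few lines: each follower matches the leader's speed, corrected by the error between the measured and desired gap. The gains and distances below are assumptions for illustration, not parameters from the demonstration.

```python
# Minimal sketch of leader-follower gap keeping for a convoy. All numbers
# are illustrative assumptions; a fielded system adds perception, path
# tracking and safety monitors on top of this inner loop.

DESIRED_GAP_M = 50.0   # assumed tactically appropriate spacing
KP = 0.5               # proportional gain on the gap error (assumed)
MAX_SPEED = 12.0       # m/s, assumed cap for a narrow forest road

def follower_speed(gap_m: float, leader_speed: float) -> float:
    """Match the leader's speed, corrected by the gap error."""
    error = gap_m - DESIRED_GAP_M           # positive: falling behind
    command = leader_speed + KP * error     # speed up to close the gap
    return max(0.0, min(MAX_SPEED, command))

# Simulate a follower that starts 8 m too far back, 1-second timesteps.
gap, leader_v = 58.0, 8.0
for t in range(5):
    v = follower_speed(gap, leader_v)
    gap += (leader_v - v) * 1.0
    print(f"t={t}s  speed={v:.2f} m/s  gap={gap:.2f} m")
```

The gap converges back toward 50 meters within a few steps; the hard part in practice is not this inner loop but reliably perceiving the leader on degraded terrain.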
When a team includes multiple artificial agents, or when multiple teams must work together, new challenges arise: decentralized mission-level task allocation; self-organization, adaptation and collaboration; space management operations; and joint sensing and perception. Commercial efforts to date have largely been limited to single platforms in benign settings. Within the Army, programs like the U.S. Army Research Laboratory's (ARL's) Micro Autonomous Systems and Technology Collaborative Technology Alliance (MAST CTA) have developed collaborative behaviors for unmanned aerial vehicles; ground vehicle collaboration is harder and remains largely at the basic research level. The Army's long-term focus is on enabling collaboration among large numbers of highly dissimilar entities, such as large and small teams of air and ground robots as well as human Soldiers, distributed over a large, contested environment. To address these challenges, ARL has started the Distributed and Collaborative Intelligent Systems and Technology (DCIST) Collaborative Research Alliance, pairing academic scientists with ARL government scientists.
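One standard approach to decentralized task allocation is an auction: every robot broadcasts its cost for each task, and each robot then computes the same assignment locally, with no central commander. The toy sketch below (hypothetical, not an ARL algorithm) uses straight-line distance as a stand-in for mission cost.

```python
# Toy sketch of decentralized task allocation by sealed-bid auction.
# Every robot can run this same deterministic loop over the broadcast
# bids and reach the identical assignment, so no central node is needed.
import math

robots = {"uav1": (0, 0), "uav2": (9, 0), "ugv1": (5, 5)}   # positions
tasks = {"recon_A": (1, 1), "recon_B": (8, 1), "relay": (5, 4)}

def bid(robot_xy, task_xy):
    """Cost = straight-line distance (a stand-in for real mission cost)."""
    return math.dist(robot_xy, task_xy)

assignment = {}
unassigned = dict(robots)
for task, txy in tasks.items():
    # Winner is the cheapest remaining robot; ties broken by name.
    winner = min(unassigned, key=lambda r: (bid(unassigned[r], txy), r))
    assignment[task] = winner
    del unassigned[winner]

print(assignment)  # {'recon_A': 'uav1', 'recon_B': 'uav2', 'relay': 'ugv1'}
```

Because every robot applies the same rule to the same bids, all arrive at the same answer; the research challenge is doing this over lossy communications, with robots joining or dropping out and task costs changing mid-mission.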
MACHINE LEARNING
Machine learning is a key precondition for human-agent teaming on the battlefield, because agents will be neither intelligent nor useful unless they can learn from experience and adapt what they know while acting on the battlefield. For example, ARL has been working on learning algorithms that let small ground robots learn the conditions of the ground (wet, slippery, sandy, etc.) and adjust the turning and speed of their tracks accordingly. In another example, academic scientists collaborating with ARL under the recently completed MAST CTA developed a small rotorcraft that can execute aggressive maneuvers while flying through unfamiliar, highly cluttered indoor environments. The rotorcraft does so by continually learning the probability of collision directly from an onboard video camera, recognizing new scenes and updating its knowledge as it flies.
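The core of such a system is a model that maps a camera frame to a collision probability. A minimal sketch in TensorFlow follows; the architecture, input size and training details are assumptions for illustration, not the published MAST CTA model.

```python
# Minimal sketch: predict P(collision) from a single camera frame.
# Architecture and sizes are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, 5, strides=2, activation="relu",
                           input_shape=(120, 160, 3)),  # small onboard frame
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),     # collision probability
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Continual update as new labeled frames arrive (e.g., near-miss events):
# model.fit(new_frames, collision_labels, epochs=1)
```

At flight time, the vehicle would threshold the predicted probability to decide whether a maneuver is safe to attempt, updating the model as new scenes accumulate.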
Machine learning, although not yet capable of addressing the complexities of battle, has seen dramatic advances driven by "deep learning" algorithms known as deep neural networks. To address the unique nature of Army-specific machine learning, ARL is researching specialized extensions to commercial tools such as the TensorFlow software toolkit.
Yet another challenge uniquely exacerbated by battlefield conditions is the constraint on available electrical power. Commercial AI relies on vast computing and electrical power resources, including reachback to cloud computing when necessary. Battlefield AI, on the other hand, must operate within the constraints of edge devices: Computer processors must be relatively light and small, and as frugal as possible in their use of electrical power. Additionally, the enemy's inevitable interference with friendly networks will limit opportunities to use reachback computational resources.
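One concrete way to fit a trained model onto a light, power-frugal processor is post-training quantization. The sketch below uses TensorFlow Lite on a stand-in model; it illustrates the general technique, not any fielded Army system.

```python
# Minimal sketch: shrink a trained model for a power-constrained edge
# device using TensorFlow Lite post-training quantization.
import tensorflow as tf

# A one-layer stand-in keeps the sketch runnable; any trained tf.keras
# model (e.g., the collision predictor sketched above) would do.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # 8-bit weight quantization
tflite_bytes = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_bytes)  # typically ~4x smaller and cheaper to run
```

Techniques like this trade a small amount of accuracy for large savings in size and power, a favorable exchange when every watt carried into the field matters.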
HUMAN LEARNING
Human learning and training for the complex battlefield of the future require AI for building realistic, intelligent entities in immersive simulations. The Army principle of "train as you fight" places a premium on training experiences whose realism matches operational demands. Immersive training simulations must offer physical and sociocultural interactions with enough fidelity to meet the training demands of strategic and operational planning and execution. Modeling and simulation capabilities must likewise match the complexity of the operational environment, so that simulated interactions transfer skills and knowledge effectively to real operations.
Game-based training provides cost-effective development of immersive training experiences, but it is not a silver bullet. Mismatches between the gaming environment and the real world can produce unintended effects, such as giving users an unrealistic mental model of combat. Army training simulations therefore need to include realistic sociocultural interactions between trainees and simulated intelligent agents, because humans teaming with robots and other intelligent agents will be pervasive in the complex operational environments of the future.
Army training simulations build on advances in commercial game engines like Unreal, which powers the game “Kingdom Hearts III,” and adapt that kind of action role-playing to meet the unique needs of the Army in programs like the $50 million Games for Training, overseen by the Program Executive Office for Simulation, Training and Instrumentation.
ARL is also at the cutting edge of computer-generating the realistic virtual characters needed for sociocultural interactions in future Army training applications. More than once, Hollywood studios have turned to the ARL-sponsored Institute for Creative Technologies at the University of Southern California for technologies to create realistic avatars of actors. These technologies let filmmakers digitally insert an actor into scenes even when that actor is unavailable, much older or younger, or deceased. That is how actor Paul Walker was able to appear in "Furious 7" even though he died partway through filming.
CONCLUSION
That is a glimpse of perhaps the greatest paradox of AI: its looming power to erase the divide between the real and the imaginary, the natural and the created; to defy, indeed, the very notion of the artificial.
For more information, contact the author at alexander.kott1.civ@mail.mil.
DR. ALEXANDER KOTT is chief scientist at ARL. From 2009 to 2016, he was chief of ARL’s Network Science Division. He holds a Ph.D. in mechanical engineering from the University of Pittsburgh and a Master of Engineering from Leningrad Polytechnic Institute.