Building on recent breakthroughs in autonomous cyber systems and formal methods, DARPA today announced a new research program called Assured Autonomy that aims to advance the ways computing systems can learn and evolve to better manage variations in the environment and enhance the predictability of autonomous systems like driverless vehicles and unmanned aerial vehicles (UAVs).
“Tremendous advances have been made in the last decade in constructing autonomy systems, as evidenced by the proliferation of a variety of unmanned vehicles. These advances have been driven by innovations in several areas, including sensing and actuation, computing, control theory, design methods, and modeling and simulation,” said Sandeep Neema, program manager at DARPA. “In spite of these advances, deployment and broader adoption of such systems in safety-critical DoD applications remains challenging and controversial.”
The Defense Science Board Report on Autonomy, released in 2016, heavily emphasizes the need for a strong degree of trust in autonomous systems. Assuring that systems operate safely and perform as expected, the report notes, is integral to trust, especially in a military context. But systems must also be designed so that, once a system has been deployed, operators can determine whether it is operating reliably and, if not, take appropriate action. Assured Autonomy aims to establish trustworthiness at the design stage and incorporate sufficient capabilities so that inevitable variations in operational trustworthiness can be measured and addressed appropriately.
“Historically, assurance has been approached through design processes following rigorous safety standards in development, and demonstrated compliance through system testing,” said Neema. “However, these standards have been developed primarily for human-in-the-loop systems, and don’t extend to learning-enabled systems with advanced levels of autonomy. The assurance approaches today are predicated on the assumption that the systems, once deployed, do not learn and evolve.”
One approach to assurance of autonomous systems that has recently garnered attention, particularly in the context of self-driving vehicles, is based on the idea of "equivalent levels of safety," i.e., the autonomous system must be at least as safe as the comparable human-in-the-loop system it replaces. The approach takes known rates of safety incidents for manned systems, such as the number of accidents per thousand miles driven, and conducts physical trials to determine the corresponding incident rate for autonomous systems. Studies and analyses indicate, however, that assuring the safety of autonomous systems in this manner alone is prohibitive, requiring millions of physical trials, perhaps spanning decades. Simulation techniques have been advanced to reduce the needed number of physical trials, but they offer very little confidence, particularly with respect to low-probability, high-consequence events.
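To see why physical trials alone become prohibitive, a rough back-of-the-envelope estimate helps. The sketch below is illustrative only and not drawn from the announcement: it assumes a human baseline of roughly one fatal incident per 100 million miles, a 95 percent confidence target, and no failures observed during testing.

```python
import math

# Assumed baseline (hypothetical figure for illustration): fatal incidents per mile
# for human-driven vehicles, roughly 1 per 100 million miles.
human_incident_rate = 1 / 100_000_000
confidence = 0.95

# If an autonomous fleet drives n failure-free miles, a one-sided statistical bound
# on its incident rate gives: n >= -ln(1 - confidence) / rate_to_demonstrate.
required_miles = -math.log(1 - confidence) / human_incident_rate

# Roughly 300 million failure-free miles would be needed just to match the assumed
# human rate at this confidence level; any observed failure pushes the number higher.
print(f"Failure-free test miles required: {required_miles:,.0f}")
```

Even under these generous assumptions, the required mileage runs to hundreds of millions of miles, which is why trials of this kind are estimated to span decades.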
In contrast to prescriptive, process-oriented standards for safety and assurance, a goal-oriented approach, such as the one espoused by Neema, is arguably more suitable for systems that learn, evolve, and encounter operational variations. Over the course of the Assured Autonomy program, researchers will aim to develop tools that provide foundational evidence that a system can satisfy explicitly stated functional and safety goals, resulting in a measure of assurance that can also evolve with the system.
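As a loose illustration of what "goal-oriented" means here, the sketch below (a hypothetical representation, not a DARPA tool) shows how explicitly stated goals might be decomposed into sub-goals and backed by evidence, so the assurance argument can be re-evaluated as the system learns and new evidence accumulates.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    claim: str                                            # explicitly stated functional or safety goal
    evidence: List[str] = field(default_factory=list)     # e.g., proofs, test results, runtime monitors
    subgoals: List["Goal"] = field(default_factory=list)  # decomposition into supporting goals

    def supported(self) -> bool:
        """A goal is supported if it has direct evidence or all of its sub-goals are supported."""
        if self.evidence:
            return True
        return bool(self.subgoals) and all(g.supported() for g in self.subgoals)

# Hypothetical example: a top-level safety goal for a UAV, partly backed by design-time
# evidence, with one sub-goal still awaiting evidence as the learned components evolve.
top = Goal(
    claim="UAV maintains safe separation from obstacles",
    subgoals=[
        Goal(claim="Perception meets detection-range requirement",
             evidence=["design-time verification report"]),
        Goal(claim="Planner respects separation margin after learned-policy updates",
             evidence=[]),
    ],
)
print(top.supported())  # False until the remaining sub-goal is backed by evidence
```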