NOT APPROVED FOR PUBLIC RELEASE
INTELLIGENT SYSTEMS ROADMAP
Topic Area: Trust and Verification and Validation (V&V) of Intelligent Systems
Stephen Cook, The MITRE Corporation
Introduction
This contribution to the Roadmap for Intelligent Systems will focus on the need to
develop trust in intelligent systems to perform aviation safety-critical functions.
Intelligent systems are characterized by non-deterministic processes, adaptive learning,
and highly complex software. Traditional methods for establishing trust in aviation
systems involve verification (i.e., is this system right?) and validation (i.e., is it the right
system?) and certification (i.e., does a trusted third party believe it is right?). Traditional
methods in this discipline rely on repeatable experimental results and exercising fast time
simulation coupled with flight tests. An intelligent system may produce non-repeatable
results in tests or may be so complex that the V&V process is impractical. New V&V
and certification approaches are needed to establish trust in these systems.
Capabilities and Roles
Description of Trust in Intelligent Systems
Perspectives on trust differ depending on a person’s role in interacting with an intelligent system; lives, livelihoods, or reputations may be at stake. Independent certification is one way to increase trust in a system. Introducing a trusted third party with a stake in the relationship can provide oversight, regulation, or enforcement of a contract (social, legal, or both) between the intelligent system and the end user. Certification typically depends on defining a standard of performance, building evidence to show compliance with that standard, and identifying the means of V&V (e.g., analysis, simulation, flight test). Highly complex intelligent systems may perform differently
under different circumstances. For example, an intelligent system that “learns” may
produce different outputs given the exact same inputs depending on the level of training
of the system. It is presumed that traditional methods such as exhaustive testing,
stressing case analysis, and Monte Carlo simulation will not be sufficient to establish
trust in intelligent systems. Therefore, methods are needed to establish trust in these
systems, either through enhancement of existing certification paradigms or development
of new paradigms.
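The non-repeatability concern can be made concrete with a small sketch. The following Python toy (the AdaptiveEstimator class, its update rule, and all values are illustrative assumptions, not any fielded system) shows an online-learning component returning different outputs for an identical input once additional training arrives between test runs, which is precisely the behavior that undermines repeatable-test V&V.

# Toy illustration (hypothetical): an online-learning component gives
# different answers to the same input as its training state changes,
# defeating V&V approaches that assume repeatable input/output behavior.

class AdaptiveEstimator:
    """Running-mean estimator that keeps learning from observations."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def train(self, observation: float) -> None:
        # Incremental mean update: internal state changes with every sample.
        self.count += 1
        self.mean += (observation - self.mean) / self.count

    def predict(self, x: float) -> float:
        # Output depends on both the input and the learned state.
        return x * self.mean

est = AdaptiveEstimator()
est.train(1.0)
first = est.predict(2.0)   # prediction early in training -> 2.0
est.train(5.0)             # more experience arrives between test runs
second = est.predict(2.0)  # identical input, different output -> 6.0
print(first, second)       # the "same test" is not repeatable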
Intelligent Systems V&V Example Applications
- Formal methods seek to mathematically prove that an intelligent system will not exceed the bounds of a specific solution set. Formal methods analyses examine the algorithms and formally prove that an intelligent system cannot produce an unsafe output. (A minimal solver-based sketch follows this list.)
- Runtime assurance methods seek to monitor the behavior of an intelligent system in real time. Runtime assurance programs, sometimes called “wrappers,” can detect when an intelligent system is going to produce an unsafe result and either revert to an alternate pre-programmed safe behavior or revert control to a human. (A wrapper sketch follows this list.)
- Bayesian analysis methods examine the outputs from an intelligent system and determine a level of confidence that the system will perform safely. This is analogous to a qualified instructor pilot determining that a student is ready to fly solo. The instructor cannot and does not test every possible circumstance the student may encounter, but infers from a variety of parameters that the student is safe. Bayesian methods extend this approach to technical systems. (A Beta-Binomial sketch follows this list.)
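As one illustration of the formal-methods approach, the sketch below uses the Z3 SMT solver (a real, freely available tool; the command law, gain, and envelope here are hypothetical stand-ins). The proof pattern is to assert the negation of the safety property and show it is unsatisfiable, meaning no admissible input can drive the output outside the certified envelope.

# Formal-methods sketch using the Z3 SMT solver (pip install z3-solver).
# The controller and limits are hypothetical; the point is the pattern:
# assert the negation of the safety property and show it is unsatisfiable,
# i.e., no input in the assumed domain can produce an unsafe output.
from z3 import Real, Solver, If, And, Or, unsat

error = Real('error')   # tracking error fed to the controller
gain = 0.8              # fixed control gain (assumed)
limit = 10.0            # certified actuator envelope: |cmd| <= 10 (assumed)

raw = gain * error
# Saturating command law: clamp the raw command to [-limit, +limit].
cmd = If(raw > limit, limit, If(raw < -limit, -limit, raw))

s = Solver()
s.add(And(error >= -100.0, error <= 100.0))  # assumed input domain
s.add(Or(cmd > limit, cmd < -limit))         # negated safety property

if s.check() == unsat:
    print("Proved: the command stays within the envelope for all inputs.")
else:
    print("Counterexample found:", s.model())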
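The wrapper idea can likewise be sketched as a Simplex-style switch: a simple, independently verifiable monitor checks each command from the untrusted intelligent controller against a safety envelope and substitutes a pre-programmed recovery command when the check fails. All controller names and thresholds below are illustrative assumptions.

# Runtime assurance "wrapper" sketch (Simplex-style architecture).
# The intelligent controller is untrusted; the monitor and fallback are
# simple, verifiable components. Names and limits are hypothetical.
from typing import Callable

def make_wrapper(intelligent: Callable[[float], float],
                 fallback: Callable[[float], float],
                 limit: float) -> Callable[[float], float]:
    """Defer to `intelligent` only while its command stays inside the
    verified envelope [-limit, +limit]; otherwise use `fallback`."""
    def wrapped(state: float) -> float:
        cmd = intelligent(state)
        if abs(cmd) <= limit:
            return cmd          # monitored command is safe: pass it through
        return fallback(state)  # unsafe: revert to pre-programmed behavior
    return wrapped

# Hypothetical controllers for illustration only.
def aggressive_ai(state: float) -> float:
    return 5.0 * state          # may command outside the envelope

def safe_hold(state: float) -> float:
    return 0.0                  # trivially safe recovery command

controller = make_wrapper(aggressive_ai, safe_hold, limit=10.0)
print(controller(1.0))  # 5.0 -> AI command accepted
print(controller(4.0))  # 0.0 -> AI would command 20.0; wrapper reverts

Reversion to a human, rather than to a pre-programmed behavior, could be modeled the same way by making the fallback an operator hand-off.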
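Finally, the instructor-pilot analogy can be made concrete with a Beta-Binomial sketch: treat each monitored trial as safe/unsafe evidence, maintain a Beta posterior over the unsafe-event rate, and extend trust only when the posterior probability that the rate is below a target exceeds a confidence threshold. The prior, trial counts, and thresholds below are illustrative assumptions, not recommended values.

# Bayesian confidence sketch using a Beta-Binomial model (requires scipy).
# Each monitored trial is evidence about the system's unsafe-event rate p;
# the Beta posterior summarizes confidence that p is acceptably low.
from scipy.stats import beta

alpha0, beta0 = 1.0, 1.0   # uniform Beta(1, 1) prior over p (assumed)
unsafe_trials = 2          # monitored trials with unsafe outcomes (assumed)
safe_trials = 4980         # monitored trials with safe outcomes (assumed)

# Conjugate update: posterior over p is Beta(alpha0 + unsafe, beta0 + safe).
a_post = alpha0 + unsafe_trials
b_post = beta0 + safe_trials

target_rate = 1e-3         # required: under 1 unsafe event per 1,000 trials
confidence = beta.cdf(target_rate, a_post, b_post)  # P(p < target | evidence)

print(f"P(unsafe rate < {target_rate}) = {confidence:.3f}")
if confidence >= 0.95:     # assumed release threshold, like a solo sign-off
    print("Extend trust for the safety-critical function.")
else:
    print("Withhold trust; gather more evidence.")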
Technical Challenges and Technology Barriers
Technical Challenges
- Establishing methods and metrics to infer when an intelligent system can be relied on for safety-critical functions.
- Runtime assurance methodologies that are robust enough to restore unsafe intelligent system behavior to a safe state.
- Understanding the human factors implications of part-time monitoring of a trusted intelligent system.
- Adapting existing software assurance methods, or developing new ones, for non-deterministic systems.
- Expanding formal methods to highly complex systems.
- Development of human factors standards to address part-time monitoring of safety-critical functions (e.g., how to rapidly provide situation awareness to a disengaged pilot as the intelligent system returns system control in an unsafe state).
Technical Barriers
- Advanced formal methods techniques.
- Robust wrapper technology.
- Human factors alerting methodologies.
- Advanced prognostics and health management systems.
Policy and Regulatory Barriers
US leadership in autonomous systems development does not necessarily translate to
leadership in the enabling technology associated with establishing trustworthy systems.
While both are extremely important, most of the attention in the autonomy community is
focused on systems development. Far less attention is being paid, worldwide and nationally, to the methods, metrics, and enablers associated with determining
the trustworthiness of autonomous systems. Historically, regulatory agencies have been
slow to approve new aviation technologies. With the advance of unmanned aircraft
systems throughout the world, US leadership in this area may hinge on the nation’s ability to rapidly establish the trustworthiness and safety of these systems.
Impact to Aerospace Domains and Intelligent Systems Vision
For intelligent systems to see widespread use in safety-critical aviation roles, the development of certification, V&V, and other means of establishing their trustworthiness is paramount.
Research Needs to Overcome Technology Barriers
Research Gaps
- Establishing methods and metrics to infer when an intelligent system can be relied on for safety-critical functions.
- Runtime assurance methodologies that are robust enough to restore unsafe intelligent system behavior to a safe state.
- Understanding the human factors implications of part-time monitoring of a trusted intelligent system.
- Adapting existing software assurance methods, or developing new ones, for non-deterministic systems.
- Expanding formal methods to highly complex systems.
- Development of human factors standards to address part-time monitoring of safety-critical functions (e.g., how to rapidly provide situation awareness to a disengaged pilot as the intelligent system returns system control in an unsafe state).
Operational Gaps
- Processes/methods for querying an intelligent system to understand the basis for an action.
- Cost-effective approaches to certification that allow flexibility and control costs.
Research Needs and Technical Approaches
- Advanced formal methods techniques.
- Robust wrapper technology.
- Human factors alerting methodologies.
- Advanced prognostics and health management systems.
Prioritization
TBD