AIAA Roadmap for Intelligent Systems in Aerospace

DRAFT - NOT YET APPROVED FOR PUBLIC RELEASE
AMERICAN INSTITUTE OF AERONAUTICS AND ASTRONAUTICS (AIAA)
INTELLIGENT SYSTEMS TECHNICAL COMMITTEE
ROADMAP FOR INTELLIGENT SYSTEMS
IN AEROSPACE
First Edition
January 8, 2016
Disclaimer: This technical document represents the views of AIAA Intelligent Systems Technical
Committee members, but does not necessarily represent the institutional views of the AIAA.
Prepared by the AIAA Intelligent Systems Technical Committee.
Editors:
Christopher Tschan, The Aerospace Corporation
Adnan Yucel, Lockheed Martin Aeronautics Company
Nhan Nguyen, NASA Ames Research Center
Collaborators:
Sam Adhikari, Sysoft Corporation
Ella Atkins, University of Michigan
Christine Belcastro, NASA Langley Research Center
Christopher Bowman, Data Fusion & Neural Networks
David Casbeer, Air Force Research Laboratory
Girish Chowdhary, Oklahoma State University
Kelly Cohen, University of Cincinnati
Steve Cook, Northrop Grumman
Nick Ernest, Psibernetix Inc
Fernando Figueroa, NASA Stennis Space Center
Lorraine Fesq, NASA Jet Propulsion Laboratory
Marcus Johnson, NASA Ames Research Center
Elad Kivelevitch, MathWorks
Chetan Kulkarni, NASA Ames Research Center / SGT Inc
Catharine McGhan, California Institute of Technology
Kevin Melcher, NASA Glenn Research Center
Ann Patterson-Hine, NASA Ames Research Center
Daniel Selva, Cornell University
Julie Shah, Massachusetts Institute of Technology
Yan Wan, University of North Texas
Paul Zetocha, Air Force Research Laboratory
TABLE OF CONTENTS
LIST OF FIGURES
ACKNOWLEDGEMENTS
EXECUTIVE SUMMARY
1. INTRODUCTION
2. VISION FOR INTELLIGENT SYSTEMS IN AEROSPACE
3. ADAPTIVE AND NON-DETERMINISTIC SYSTEMS
   3.1 Roles and Capabilities
      Resilience Under Uncertain, Unexpected and Hazardous Conditions
      Operational Efficiency
      Ultra-Performance
   3.2 Technical Challenges and Technology Barriers
      Resilience Under Uncertain, Unexpected and Hazardous Conditions
      Operational Efficiency
      Ultra-Performance
   3.3 Research Needs to Accomplish Technical Challenges and Overcome Technology Barriers
      Multidisciplinary Methods
      Simplified Adaptive Systems
      Real-time Self-Optimization
      Real-Time Monitoring and Safety Assurance
      Verification and Validation
      A priori Performance Guarantee
      Dynamic Effective Teaming
      Additional Capabilities
      Research Investment Areas
4. AUTONOMY
   4.1 Introduction
   4.2 Key Autonomy Challenges Facing the Aerospace Community
      What is Autonomy
      Fundamental Challenges
      Systems Engineering Challenges
      Safety Challenges
   4.3 Algorithm and Architecture Design Challenges
      Knowledge-based Autonomy
      Autonomy under Uncertainty
      Autonomy with Online Adaptation / Learning
      Multi-agent Autonomy
      Real-time Autonomy
   4.4 Roadmap to Success
   4.5 Supplement
5. COMPUTATIONAL INTELLIGENCE
   5.1 Introduction
   5.2 Computational Intelligence Capabilities and Roles
   5.3 Technical Challenges and Technology Barriers
      Technical Challenges
      Technical Barriers
      Impact to Aerospace Domains and Intelligent Systems Vision
   5.4 Research Needs to Overcome Technology Barriers
      Research Gaps
      Operational Gaps
      Research Needs and Technical Approaches
      Prioritization
6. TRUST
   6.1 Introduction
   6.2 Capabilities and Roles
      Description of Trust in Intelligent Systems
   6.3 Technical Challenges and Technology Barriers
      Technical Challenges
      Technical Barriers
      Policy and Regulatory Barriers
      Impact to Aerospace Domains and Intelligent Systems Vision
   6.4 Research Needs to Overcome Technology Barriers
      Research Gaps
      Operational Gaps
      Research Needs and Technical Approaches
      Prioritization
7. UNMANNED AIRCRAFT SYSTEMS INTEGRATION IN THE NATIONAL AIRSPACE AT LOW ALTITUDES
   7.1 Introduction
   7.2 Intelligent Systems Capabilities and Roles
      Description of Intelligent Systems Capabilities
   7.3 Technical Challenges and Technology Barriers
      Technical Challenges
      Technical Barriers
      Policy and Regulatory Barriers
      Impact to Aerospace Domains and Intelligent Systems Vision
   7.4 Research Needs to Overcome Technology Barriers
      Research Gaps
      Operational Gaps
      Research Needs and Technical Approaches
      Prioritization
8. AIR TRAFFIC MANAGEMENT
   8.1 Introduction
   8.2 Technical Challenges and Technology Barriers
      Technical Challenges
      Technical Barriers
      Impact to Aerospace Domain and Intelligent Systems Vision
   8.3 Research Needs to Overcome Technology Barriers
      Research Gaps
      Operational Gaps
      Research Needs and Technical Approaches
      Prioritization
9. BIG DATA
   9.1 Roles and Capabilities
      Aircraft Engine Diagnostics
      Airline Operations
      Computational Fluid Dynamics
      Corporate Business Intelligence
   9.2 Technical Challenges and Technology Barriers
   9.3 Research Needs to Overcome Technology Barriers
10. HUMAN-MACHINE INTEGRATION
   10.1 Introduction
   10.2 Roles and Capabilities
   10.3 Technical Challenges and Technology Barriers
   10.4 Research Needs to Overcome Technology Barriers
      Research Gaps
      Operational Gaps
      Research Needs and Technical Approaches
      Prioritization
11. INTELLIGENT INTEGRATED SYSTEM HEALTH MANAGEMENT
   11.1 Introduction
   11.2 Roles and Capabilities
   11.3 Technical Challenges and Technology Barriers
      Technical Challenges
      Technical Barriers
   11.4 Research Needs to Overcome Technology Barriers
      Research Gaps
      Operational Gaps
   11.5 Roadmap for i-ISHM
      1-5 year Goals
      5-10 year Goals
      10 years and beyond Goals
12. IMPROVING ADOPTION OF INTELLIGENT SYSTEMS ACROSS ROBOTICS
   12.1 Introduction
   12.2 Capabilities and Roles for Intelligent Systems in Robotics
      Description of Intelligent Systems Capabilities
      Intelligent Systems Roles and Example Applications
   12.3 Technical Challenges and Technology Barriers
      Technical Challenges
      Technical Barriers
      Policy and Regulatory Barriers
      Impact to Aerospace Domains and Intelligent Systems Vision
   12.4 Research Needs to Overcome Technology Barriers
      Research Gaps
      Operational Gaps
      Research Needs and Technical Approaches
      Prioritization
13. GROUND SYSTEMS FOR SPACE OPERATIONS
   13.1 Introduction
   13.2 Intelligent Systems Capabilities and Roles
      Description of Intelligent Systems Capabilities
      Intelligent Systems Roles and Example Applications
      Desired Outcomes
   13.3 Technical Challenges and Technology Barriers
      Technical Challenges
      Technical Barriers
      Policy and Regulatory Barriers
      Impact to Aerospace Domains and Intelligent Systems Vision
   13.4 Research Needs to Overcome Technology Barriers
      Operational Gaps
      Research Needs and Technical Approaches
      Prioritization
14. OBSERVATIONS
   14.1 Positive Attributes of Intelligent Systems for Aerospace
   14.2 Societal Challenges to Intelligent Systems for Aerospace
      Acceptance and Trust of Intelligent Systems
      Fear of Intelligent Systems Technology
      Policies Directed toward Intelligent Systems
   14.3 Technological Gaps Impeding Intelligent Systems for Aerospace
   14.4 Path for Enabling Intelligent Systems for Aerospace
15. RECOMMENDATIONS
16. SUMMARY
17. GLOSSARY
   Intelligent Systems Terminology
18. ACRONYMS AND ABBREVIATIONS
LIST OF FIGURES
Figure 1. The AIAA Intelligent Systems Technical Committee’s logo titled “Brains in Planes” illustrates a desired end state goal for intelligent systems in aerospace
Figure 2. Levels of Autonomy Integration with Human Operators
Figure 3. Illustration of Adaptive Systems Multi-Level Role for Aircraft
Figure 4. Research Needs for Improved Safety via Resilient, Semi-Autonomous and Fully Autonomous Systems
Figure 5. Research Needs for Addressing the Certification of Resilient, Semi-Autonomous and Fully Autonomous System Technologies
Figure 6. Autonomy Decision-Making Layers
Figure 7. Learning-based Autonomy Architecture
ACKNOWLEDGEMENTS
The editors would like to acknowledge the contributions of the numerous organizations and individuals whose suggestions and insight helped guide the development of the first edition of the AIAA Roadmap for Intelligent Systems between 2013 and 2015. One of our goals with this roadmap was to ensure we did not limit ourselves to input from AIAA; we wanted to make this document as inclusive as possible of viewpoints outside the AIAA and aerospace communities. As a result, we reached out to other professional organizations with complementary technical expertise, such as the Institute of Electrical and Electronics Engineers (IEEE) Computer Society, which publishes the IEEE Intelligent Systems journal. We want to specifically thank Dr. Robert Hoffman from the Florida Institute for Human and Machine Cognition (IHMC), who is also an editor of the IEEE Intelligent Systems journal. Dr. Hoffman provided substantial insight and numerous IEEE journal articles related to human-centered computing that served as background and references for the ground systems for space operations section of the roadmap. We hope the positive interaction with IEEE, as well as with other organizations, continues to expand as we embark on an expanded second edition of this roadmap in the near future.
This roadmap would not have been as complete without the first and second AIAA Intelligent Systems workshops, organized by the ISTC as technical events to gather and vet ideas for the roadmap. The first workshop took place in Dayton, OH in August 2014. The second workshop occurred in August 2015 at NASA Ames Research Center. The organizers of the workshops included David Casbeer, Nick Ernest, Kelly Cohen, and Nhan Nguyen. Many others put in long hours to organize these events and ensure they were superbly run. A third ISTC Workshop on Intelligent Systems is planned for August 2016 at NASA Langley Research Center.
Finally, we would like to thank the AIAA ISTC leadership and general membership for their perseverance and unique contributions. We would be remiss if we did not mention the friends of the ISTC who volunteered and contributed to this roadmap. There were numerous back-channel discussions, and many of the priceless thoughts expressed during those discussions were incorporated into this edition of the roadmap. We close with a final thought: “Never underestimate the value of being a member of a great technical committee, and the ISTC certainly qualifies.”
January 2016
Christopher Tschan
Adnan Yucel
Nhan Nguyen
EXECUTIVE SUMMARY
Welcome to the first edition of the American Institute of Aeronautics and Astronautics (AIAA) Roadmap
for Intelligent Systems. The roadmap represents a sustained effort of more than two years and
incorporates feedback collected from the AIAA Intelligent Systems workshops held in Dayton, OH in
August 2014 and at NASA Ames Research Center in Mountain View, CA in August 2015.
There is no doubt that aerospace systems are becoming more intelligent. However, the potential capabilities of intelligent systems in aerospace still far exceed their implementations, and changes are needed to unleash their full potential. So, while reading this document, the reader should not think of intelligent systems as a “hammer looking for a nail” in the aerospace domain. Instead, consider the perspective that intelligent systems technologies have been advancing and have matured to the point that they can readily be applied to a multitude of practical aerospace applications. What may be needed now is a breakout event, such as an Intelligent Systems Challenge to boldly demonstrate multiple intelligent systems technologies for aerospace, in a fashion similar to how the DARPA Grand Challenges of 2004, 2005, and 2007 popularized and matured technologies needed for autonomous driving vehicles. This roadmap is designed to start the dialog that could help precipitate such a breakout event for intelligent systems in aerospace.
There are 11 technical sections in this roadmap, written by subject matter experts in intelligent systems who are members of the AIAA Intelligent Systems Technical Committee (ISTC) as well as outside collaborators. The technical sections provide self-contained perspectives on intelligent systems capabilities as well as technical challenges and needs for specific aerospace domains. Selected top-level observations from each technical section, describing how intelligent systems can contribute, are shown below. The technical sections are loosely aligned with either aviation or general aerospace domains to help orient readers with an affiliation to one domain or the other, but the case can be made that many of the technical sections contribute to both.
Aviation-themed intelligent systems technical sections:
• Aerospace systems with adaptive features can improve efficiency, enhance performance and safety, better manage system uncertainty, as well as learn and optimize both short-term and long-term system behaviors (Section 3).
• Increasingly autonomous systems contribute to new levels of aerospace system efficiency, capability, and resilience, such as “refuse-to-crash” through software-based sense-decide-act cycles (Section 4).
• Computational intelligence techniques can efficiently explore large solution spaces and provide real-time decision-making and replanning capabilities for complex problems that are computationally intractable for traditional approaches (Section 5).
• Methodologies exist that can help establish trust of non-deterministic, adaptive, and complex intelligent systems algorithms for certification of aviation systems (Section 6).
• Integration of intelligent systems into unmanned aerospace systems in low-altitude uncontrolled airspace will improve vehicle automation, airspace management automation, and human decision support (Section 7).
• Intelligent systems can contribute to real-time solutions that facilitate not only air traffic control, but also strategic air traffic flow management, especially during and after disruptions (Section 8).
General aerospace-themed intelligent systems technical sections:
• Coupling intelligent systems applications with big data will help the aerospace industry become increasingly cost-effective, self-sustaining, and productive (Section 9).
• Human-Machine Integration (HMI) will be exploited to ensure that intelligent systems work in a way that is compatible with people, by promoting predictability and transparency in action, and supporting human situational awareness (Section 10).
• Aerospace systems using Intelligent Integrated System Health Management (i-ISHM) promise to provide system-of-systems monitoring, anomaly detection, diagnostics, prognostics, and more in a systematic and affordable manner (Section 11).
• The coupling of intelligent systems with robotics promises faster, more efficient decision-making and increased proficiency in physical activities (Section 12).
• Increasing the level of intelligent automation in ground systems for domains such as space operations can help reduce human errors, help avoid spacecraft anomalies, extend mission life, increase mission productivity, and reduce space system operating expenses (Section 13).
An important takeaway is that intelligent systems for aerospace are not necessarily about replacing humans. Instead, the goal is to find the sweet spot where humans and intelligent systems work effectively together, are safer, make decisions more quickly, and achieve a higher success rate as a human-machine team than either humans or machines could by themselves.
The path for enabling intelligent systems for aerospace and the recommendations at the end of the
roadmap provide high-level insight, but are not a substitute for specific intelligent systems business-case
analyses. The contributors to this roadmap are ready to help facilitate business-case analyses and develop
specific business plans, as needed.
The ISTC sincerely hopes your expectations of this document are met or exceeded. We value your feedback. Please address feedback, comments, questions, and suggestions to the contributors whose contact information is included in Section 16 of this roadmap. We also welcome your contributions to this roadmap to further enhance its appeal to your specific area of research and application. We look forward to hearing from you.
What may be needed now is a breakout event such as an Intelligent
Systems Grand Challenge to boldly demonstrate intelligent systems
technologies for aerospace, the way the DARPA Grand Challenge did to
mature technologies needed for autonomous driving vehicles.
1. INTRODUCTION
A significant number of intelligent systems technologies have recently started to affect the lives of
everyday people. These improvements range from an ever-increasing number of smart safety features in
cars (e.g., anti-lock braking systems, automatic stability systems, obstacle avoidance systems, and
automated driving) to smart home appliances (e.g., washers controlled by fuzzy logic and thermostats
that learn user preferences) and voice-commanded capabilities on phone and computers (e.g., Apple Siri
and Microsoft Cortana). These advancements were facilitated by several factors:
• Increasing processing power, data, and communications networks.
• Increased consumer demand for easy-to-use devices and safety systems.
• The desire to push the boundaries of technological innovation.
• Establishment of standards that enable interoperability and faster infusion of new technologies in the market.
Supply and demand for intelligent systems consumer technologies are driven by fundamental enabling
technologies, evolving consumer preferences, and newly discovered possibilities for future developments
that emerge from research and development done in academia, national research laboratories, and
corporations. For background, a high-level overview of initiatives, standards, and groups involved in the creation of intelligent homes can be found in a recent review in The Institute, a publication of the Institute of Electrical and Electronics Engineers (IEEE).1 The IEEE vision for smart homes is aggressive, and so is the AIAA ISTC vision for
intelligent systems for aerospace. IEEE wants to connect all the smart sensors and aggregate the
intelligence. Many of the IEEE activities can be used as templates for the technical activities of intelligent
systems for aerospace.
In contrast to intelligent systems for consumer applications, there are fewer purchasers of intelligent system technologies within aerospace. Further complicating the acquisition of intelligent system technologies for aerospace systems, a lack of awareness or a misunderstanding of intelligent systems, and of how they can potentially create game-changing capabilities, often results in disparities among technology developers, funding organizations, and end users. These disparities can come from system requirements that do not adequately articulate the desired incorporation of intelligent system technologies, and from differing expectations of system capabilities and performance with intelligent systems. This roadmap can assist technology developers, funding organizations, and end users by increasing awareness and improving communication of intelligent system technologies when establishing system requirements and planning technology development.
Although incorporation of intelligent systems has been slow in aerospace, there are a few success stories.
Currently, intelligent systems are in use or are in development for specific aerospace applications. Among
these are systems that monitor the health of aerospace systems, systems that allow the remote operation
of spacecraft and Mars rovers, systems that augment the abilities of piloted vehicles, and the increasing
autonomous capabilities of remotely operated vehicles.
1 “Laying the Foundation for Smarter Homes,” The Institute, Volume 39, Issue 4, pp. 4-6 and 8-9, Dec. 2015. [Online]. Available: http://theinstitute.ieee.org/ns/quarterly_issues/tidec15.pdf
While intelligent system technical developments have contributed useful capabilities, such as safety
improvements, there is a much greater potential for intelligent systems in aerospace. To unlock this
potential, there is a need to integrate intelligent systems with other more “traditional” aerospace
disciplines, to blur the boundaries between aerospace and other domains (e.g., the automotive industry)
in a way that will allow easier exchanges in capabilities between domains, and to invest in basic and
applied research in intelligent systems.
The motivation for this roadmap comes from the recognition of the need to grow awareness of the increasingly important roles of intelligent systems in aerospace domains. In response, the AIAA ISTC has been developing this roadmap with inspiration and contributions from many ISTC members who desire to better articulate how intelligent systems can benefit the wider aerospace community. The specific objectives of this roadmap are the following:
• Provide insight into what an intelligent system is and what it can do for the aerospace domain,
• Examine why intelligent systems are critical to the future of aerospace and autonomous systems,
• Identify key technical challenges and technology barriers in the implementation of intelligent systems,
• Present a set of recommendations for how the research community, end users, and government organizations should advance these systems, and
• Propose a timeline for when key milestones in intelligent systems capabilities could be reached.
Since the earliest days of aviation and space travel, the human has played the primary role in determining
the success and safety of the mission. The piloting skill of the Wright brothers and the ability of the Apollo
13 astronauts to adapt to contingencies are notable examples. Now we seek intelligent systems for
aerospace that appropriately team with humans to provide performance that exceeds what is possible
with humans or machines by themselves. While intelligent systems advocates want to put “brains in
planes” (Figure 1), there is a lot to be done before that becomes a reality.
Figure 1. The AIAA Intelligent Systems Technical Committee’s logo titled “Brains in Planes”
illustrates a desired end state goal for intelligent systems in aerospace
As advances have been made in automation, we can now envision the future of aerospace where
machines are allocated more authority for safety and decision-making. In 2002 an intelligent aerospace
system was defined as a “nature-inspired, mathematically sound, computationally intensive problem-
solving tool that performs an aerospace function.”2 Over the past decade, the attributes of an intelligent system have broadened to include more than simply mathematically sound and computationally intensive problem-solving tools. Moreover, many of the elements of intelligent systems in the aerospace domain also have cross-cutting applications to many other domains, such as the automotive domain, where intelligent systems are being developed at a rapid pace for self-driving cars. The reverse is also true.
2 K. Krishnakumar, “Intelligent Systems for Aerospace Engineering—An Overview,” 2003. [Online]. Available: http://ti.arc.nasa.gov/m/pub-archive/364h/0364%20%28Krishna%29.pdf
This roadmap advocates for intelligent systems that form a coherent human-machine team; a team that
is more efficient and safer than either a human or an intelligent system individually. The intelligent
systems portion of the human-machine team should be optimized for activities where intelligent systems
best complement human strengths. Moreover, the roadmap describes specific intelligent system
technologies and application domains that can contribute to improved operational efficiency, enhanced
systems performance, and increased safety of many manned and unmanned aerospace systems. These
technologies and application domains include, but are not limited to, adaptive and non-deterministic
systems, autonomy, big data, computational intelligence, human-machine integration, integrated system health management, trust, verification and validation, space, unmanned aerial systems, and air traffic management. This is by no means a complete list of intelligent systems technologies, but it serves as a basis
for the roadmap.
Development of intelligent systems is critical for the United States and its allies to maintain a competitive
advantage in the aerospace domain. Prudent and prompt development of intelligent systems will lead to
increased safety, as well as decreased manufacturing and operational costs. Aerospace systems that
incorporate these features of intelligent systems will be in higher demand and are expected to have
superior performance and better capabilities than current aerospace systems not assisted by intelligent
systems. The aerospace domains where intelligent systems could be applied to make aerospace systems
more competitive include research and development, manufacturing, testing, manned systems
operations, remotely piloted operations, aerospace ground systems and space systems operations.
In order to advance intelligent systems, this roadmap recommends a focused way of thinking and
investing in aerospace systems. The lines between domains that have shaped the aerospace community
for decades are blurred by intelligent systems. For example, the traditional domains of “software”,
“structures”, “materials”, and “flight controls” are indistinct when considering an intelligent aerospace
system that can change its shape in response to demands for increased performance, or changes to
system characteristics such as due to damage or unanticipated environmental conditions. Additionally,
we expect new generations of intelligent systems to feature designs that incorporate lessons learned from
both the intelligent systems engineering as well as the human-centered computing communities. This
roadmap works to eliminate stovepipes between traditional aerospace domains and human effectiveness
communities. Elimination of stovepipes is more likely to happen if common platforms that address the
needs of both communities can be used for data collection/decision making as well as hosting/testing
intelligent systems prototypes. The roadmap provides specific recommendations for investment in the
areas deemed critical to advancement of intelligent systems for aerospace.
Our near-term (now through 5 year) ISTC objective is to raise the level of awareness of intelligent systems in aerospace by reaching out to the aerospace domain, as we are doing with this roadmap, and by leveraging
intelligent systems technologies from industries that are already fielding them. We expect the outcome of this objective to be a better understanding of the use of, and an increased demand for, intelligent systems in aerospace. As the level of awareness increases among all stakeholders in the aerospace community, we expect mid-term (5 to 10 year) objectives to include establishing research goals and securing funding for specific intelligent system challenges from government, academia, and the aerospace industry. Our long-term intelligent system objectives (10 to 20 years), such as intuitive human-machine systems or ultra-reliable autonomy, are likely dependent on the aggressiveness of short-term investments and the careful implementation of intelligent systems in pertinent systems to gain real-world insights and overcome societal stigmas. This roadmap provides a set of recommendations on key research areas in intelligent systems disciplines that need funding in order for the aerospace enterprise to maintain competitiveness. The benefits of this investment will not only be reaped by the aerospace enterprise, but will also be widely shared with other economic sectors of society.
The existence of this roadmap for intelligent systems for aerospace, and its content, help to illustrate that the intelligent systems community believes intelligent systems are ready to take on more important roles in multiple aerospace domains. Some applied technology development is needed, and contributors to this roadmap believe the roadmap could help precipitate a watershed event that will change the course of intelligent systems for aerospace applications. For technical application areas such as autonomous driving vehicles, the DARPA Grand Challenges of 2004, 2005, and 2007 proved to be similar watershed events. The reader should consider whether intelligent systems would benefit from a grand challenge type of event.
This roadmap is the first of its kind generated by the AIAA intelligent systems community. Subject matter experts were asked to contribute technical sections, which represent the bulk of the content of the roadmap and support the main ideas in the introduction and summary sections. A reader may find that the perspectives of some contributors differ from those presented in other sections of the roadmap. Since the contributors are experts in their technical areas, no attempt was made to resolve differences in perspective, harmonize these sections, eliminate duplicative thoughts, or address specific gaps. Instead, the editors used these sections to find common themes, challenges, and areas of needed research in order to produce the roadmap recommendations. Take time to look over these detailed intelligent systems contributions and note how they complement each other.
This document is organized as follows: Section 2 describes the overall vision for intelligent systems in aerospace, followed by 11 technical sections. The technical sections are grouped into aviation-themed intelligent systems topics (Sections 3 through 8) and general aerospace-themed intelligent systems topics (Sections 9 through 13). These technical sections cover individual areas of intelligent systems and how
each could be developed. Section 14 makes some key observations and provides the foundation to the
recommendations in Section 15. The roadmap is summarized and contact details are provided in Section
16.
This roadmap does not purport to cover all areas where intelligent systems are relevant to aerospace.
Follow-on editions of this roadmap are likely to contain additional sections specifically tailored to topics such as the role of intelligent systems in guidance, navigation, and control (GNC), as well as intelligent systems for cyber security in aerospace.
2. VISION FOR INTELLIGENT SYSTEMS IN AEROSPACE
Although there are many aerospace domains in which intelligent systems can contribute, the vision articulated in this roadmap is straightforward. A brief description of an intelligent-systems-enabled aerospace community in the near future follows.
The near future holds promise for a broad community of government,
academia, and industry that understands the contributions and
capabilities of intelligent systems for aerospace. As a result, new
aerospace systems will include requirements for intelligent systems
components. Breakthroughs in performance, safety, and efficiency
for aviation and other aerospace systems due to intelligent systems
will be common. New records will be regularly established. Students
and faculty at universities with aerospace departments will be
familiar with intelligent systems. Intelligent systems will also be
routinely created and their safety and reliability easily validated prior
to operational use. Intelligent systems will be responsible for driving
up revenue and profits of aerospace companies. Humans who just a few years ago were fearful of intelligent systems will no longer be able to imagine doing their jobs without their assistance.
Achieving the AIAA ISTC vision for intelligent systems requires a number of activities, and several of these activities may take time to be effective. We need to establish a new paradigm that describes what intelligent systems for aerospace are capable of. This roadmap is a start in that direction. We also envision the need to broadly increase awareness of intelligent systems through a period of socialization and positive impressions of early intelligent systems capabilities and performance. In addition, we need to establish the desire for collaboration between humans and intelligent systems, an approach sometimes referred to as human-centered computing.
To proliferate the use of intelligent systems for aerospace, we need to establish an easy-to-use toolbox with many ready-to-use intelligent system modules. We envision a common-interface, open version of the intelligent systems toolbox available for secondary school and university work. This would include the capability to validate and document intelligent systems performance. A more sophisticated and secure version of the toolbox would be used for creating a hierarchy of intelligent systems for operational applications that could be applied to many different aerospace technical functions.
To get there, we need to look for opportunities to fund applied development of intelligent systems. Some of this funding to push the state of the art for intelligent systems can come from demand to incorporate intelligent systems into new aerospace systems. However, more basic opportunities may come from competitions such as the Intelligent Systems Grand Challenge mentioned earlier. With an intelligent systems vision now stated, it is time to move on to the technical sections of this roadmap.
3. ADAPTIVE AND NON-DETERMINISTIC SYSTEMS
3.1 ROLES AND CAPABILITIES
As demands for aerospace accessibility increase and become more complex, intelligent systems technologies can play many important roles in improving operational efficiency, mission performance, and safety for current and future aerospace systems and operations. Future intelligent systems technologies can provide increased adaptive and autonomous capabilities at all levels, as illustrated in Figure 2. At the lowest level of autonomy, adaptation through closed-loop control and prognostics enables aerospace systems to be more resilient and intelligent by automatically adjusting system operations to cope with unanticipated changes in system performance and operating environment. At the mid-level of autonomy, planning and scheduling provide capabilities to perform automatic task allocation and contingency management, reducing human operator workloads and improving situational awareness and mission planning. At high levels of autonomy, automated reasoning and decision support systems provide higher degrees of intelligence to enable aerospace systems to achieve autonomous operations without direct human supervision in the loop.
Figure 2. Levels of Autonomy Integration with Human Operators
Adaptive systems are an important enabling feature common to all these levels of autonomy. Adaptability is a fundamental requirement of autonomous systems, one that enables a wide range of capabilities at the foundational level. Adaptive systems can learn and optimize system behaviors to improve system performance and safety. Adaptive systems can also enable efficient, intelligent use of resources in aerospace processes. Furthermore, adaptive systems can predict and estimate an aerospace system’s long-term and short-term behaviors via strategic learning and tactical adaptation. As a result, adaptive systems can achieve performance and safety improvements through resiliency and adaptability: they can automatically perform self-optimization to accommodate changes in operating environments and can detect and mitigate uncertain, unanticipated, and hazardous conditions, thereby enabling real-time safety assurance.
As aerospace systems become increasingly complex, uncertainty can degrade system performance and operation. Uncertainty will always exist in all aerospace systems, no matter how small, due to imperfect knowledge of system behaviors, which are usually modeled mathematically or empirically. Uncertainty can be managed but cannot be eliminated. Typical risk management of uncertainty in aerospace systems requires: a) improved system knowledge through better system modeling, which can be very expensive; and b) built-in safety margins and operational restrictions, which can adversely impact performance if the safety margins are unnecessarily large or the operational restrictions are not well established.
Adaptive systems can better manage uncertainty in aerospace systems if they are properly designed. Uncertainty is managed by adaptation, which adjusts system behaviors to a changing environment through learning and adopting new behaviors to cope with changes. Adaptive systems provide learning mechanisms to internally adjust system performance and operation to achieve desired system behaviors while suppressing undesired responses, and to seek optimal system behaviors over a long time horizon. Adaptive systems achieve adaptation through short-term tactical adaptation and long-term strategic learning and self-optimization. Tactical adaptation usually involves the need to adjust system behaviors to cope with rapid changes in operating environments that could cause safety concerns. Model-reference adaptive control is an example of a tactical adaptation strategy that holds much promise for future aerospace systems; a minimal sketch is shown below.
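As a concrete illustration, the following minimal sketch (in Python) implements model-reference adaptive control for a first-order plant with unknown parameters, using the standard Lyapunov-rule gain updates. It is a sketch under simplifying assumptions, not a flight-qualified design: the plant model, reference model, adaptation rate, and the assumption that only the sign of the control effectiveness b is known are all illustrative choices, not values from this roadmap.

    # Minimal MRAC sketch: scalar plant xdot = a*x + b*u (a, b unknown) is made
    # to track a stable reference model xmdot = am*xm + bm*r. The control law
    # u = kx*x + kr*r adapts its gains online from the tracking error e = x - xm.
    import numpy as np

    dt, T = 0.001, 20.0          # integration step and horizon [s]
    a, b = 1.0, 3.0              # true plant parameters (unknown to the controller)
    am, bm = -4.0, 4.0           # stable reference model
    gamma = 2.0                  # adaptation rate
    kx, kr = 0.0, 0.0            # adaptive feedback and feedforward gains
    x, xm = 0.0, 0.0             # plant and reference-model states

    for n in range(int(T / dt)):
        t = n * dt
        r = 1.0 if (t % 10.0) < 5.0 else -1.0   # square-wave command
        u = kx * x + kr * r                     # adaptive control law
        e = x - xm                              # tracking error
        # Lyapunov-rule gain updates; only sign(b) is assumed known, as is typical
        kx += dt * (-gamma * e * x * np.sign(b))
        kr += dt * (-gamma * e * r * np.sign(b))
        # Euler integration of the plant and the reference model
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r)

    # Ideal gains are kx = (am - a)/b and kr = bm/b; adaptation drives the gains
    # toward these values without ever identifying a and b explicitly.
    print(f"tracking error {x - xm:+.5f}, kx={kx:.2f}, kr={kr:.2f}")

The point of the sketch is the tactical-adaptation theme above: the controller never identifies the plant; it simply adjusts its gains fast enough that the closed-loop response follows a safe, well-understood reference model.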
Strategic learning and self-optimization are learning mechanisms of adaptive systems that can take place over a longer time horizon. This adaptation mechanism usually addresses the need to adjust system behaviors to optimize system performance in the presence of uncertainty. Examples of strategic learning are reinforcement learning and extremum-seeking self-optimization in aerospace systems. Air and space vehicles can leverage extremum-seeking self-optimization to adjust their flight trajectories, vehicle configurations, and performance characteristics to achieve energy savings or other mission requirements such as noise abatement and reduced emissions; a sketch of the basic scheme follows.
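The following minimal sketch shows classical perturbation-based extremum seeking: a small sinusoidal dither probes an unknown performance map, and demodulating the measured response yields a gradient estimate that steers the operating parameter toward the optimum. This is illustrative only; the quadratic map J and all numerical constants are hypothetical stand-ins for a real measured quantity such as drag.

    # Minimal extremum-seeking sketch: dither + washout filter + demodulation
    # estimate the local gradient of an unknown performance map J and descend it.
    import numpy as np

    def J(theta):
        return 2.0 + (theta - 1.5) ** 2          # unknown cost; true optimum at 1.5

    dt, T = 0.01, 200.0                          # step [s] and horizon [s]
    omega, amp = 5.0, 0.2                        # dither frequency [rad/s] and amplitude
    k, wh = 0.5, 1.0                             # adaptation gain, washout cutoff [rad/s]
    theta_hat, lp = 0.0, J(0.0)                  # parameter estimate, low-pass state

    for n in range(int(T / dt)):
        t = n * dt
        y = J(theta_hat + amp * np.sin(omega * t))   # measured performance with dither
        lp += dt * wh * (y - lp)                     # low-pass filter; (y - lp) removes the slow level
        grad_est = (y - lp) * np.sin(omega * t)      # demodulation ~ (amp/2) * dJ/dtheta
        theta_hat -= dt * k * grad_est               # descend the estimated gradient

    print(f"theta converged to {theta_hat:.3f} (true optimum 1.5)")

Because the method needs only the measured performance signal, not a model of J, it fits the strategic-learning role described above: the dither is chosen faster than the parameter dynamics, so probing and optimizing proceed simultaneously.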
Adaptive systems can provide many useful applications in aerospace systems. Adaptive flight control for safety resiliency, to maintain stability of aircraft with structural and/or actuator failures, has been well studied. Real-time drag optimization of future transport aircraft is an example of extremum-seeking self-optimization that can potentially improve the fuel efficiency of aircraft. Adaptive traction control of surface-mobility planetary rovers can be applied to improve vehicle traction on different types of terrain. Adaptive planning and scheduling can play a role in air traffic management by performing weather routing or traffic congestion planning of aircraft in the National Airspace System. Adaptive systems could also be used to supplement analytical and experimental models with real-time adaptive parameter estimation to reduce modeling cost in the design and development of aerospace systems.
Machine learning techniques are commonly used in many adaptive systems. These techniques sometimes
employ neural networks to model complex system behaviors. The use of multi-layer neural networks can
result in non-determinism due to random weight initialization. Non-deterministic behaviors of these
adaptive systems can cause many issues for safety assurance and verification and validation. Neural
networks are not the only source of non-determinism. Stochastic processes such as atmospheric
turbulence and process noise, as well as reasoning processes such as diagnostics/prognostics, can also be
sources of non-determinism.
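The reproducibility issue can be seen in a toy setting. In the hypothetical sketch below, retraining the
same tiny network with different random seeds yields different models, while pinning the seed restores
repeatability; this is the initialization-driven non-determinism referred to above:

```python
# Small illustration (an assumption-level sketch, not from the roadmap) of
# non-determinism from random weight initialization: two trainings of the
# same tiny network differ unless the random seed is pinned.
import numpy as np

def train_tiny_net(seed):
    rng = np.random.default_rng(seed)
    X = np.linspace(-1, 1, 50).reshape(-1, 1)
    y = np.sin(3 * X)
    W1 = rng.normal(size=(1, 8))          # random initialization (the culprit)
    W2 = rng.normal(size=(8, 1))
    for _ in range(500):                  # plain batch gradient descent
        h = np.tanh(X @ W1)
        err = h @ W2 - y
        W2 -= 0.05 * h.T @ err / len(X)
        W1 -= 0.05 * X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)
    return float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))

print(train_tiny_net(0), train_tiny_net(1))    # different seeds -> different models
print(train_tiny_net(7) == train_tiny_net(7))  # pinned seed -> reproducible
```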
For the purpose of the roadmap discussion, we categorize the roles of adaptive systems under three broad
capabilities: 1) safety enhancement by resilience under uncertain, unexpected and hazardous conditions;
2) operational efficiency for aerospace systems; and 3) performance improvement of ultra-performance
adaptive systems.
RESILIENCE UNDER UNCERTAIN, UNEXPECTED AND HAZARDOUS CONDITIONS
Roles
• Enables resilient control and mission management
  o Detection and mitigation of uncertain, unanticipated, and hazardous conditions
  o Improved situational awareness, guidance, and mission planning
• Provides graceful degradation
• Enables real-time safety assurance
OPERATIONAL EFFICIENCY
Roles
• Reduces human pilot / operator workloads
• Enables energy efficiency for fuel economy
• Automatically adjusts system operations to cope with changes in system performance and operating
  environment
ULTRA-PERFORMANCE
Roles
• Adaptive guidance and mission planning for optimizing aerodynamic and propulsion performance
  o Learns and optimizes system behaviors to improve system performance
  o Predicts and estimates aerospace system's long-term and short-term behaviors via strategic
    learning and tactical adaptation
  o Enables efficient, intelligent use of resources in aerospace processes
• Mission-adaptive control for morphing vehicles and structures
• Adaptive systems that enable envelope expansion
While adaptive systems offer promising technologies for future aerospace systems, many technical
challenges exist that prevent their potential benefits from being fully realized. These technical challenges
present technology barriers that must be addressed in order to enable intelligent systems technologies
in future aerospace systems.
3.2 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
Technical challenges and technology barriers must be defined at all levels of adaptive system integration
with human operators, vehicle dynamics and operations, as well as external environments for ensuring
safety, operational efficiency, and improved performance. Figure 3 illustrates this concept for aircraft
systems, with levels of adaptive system integration associated with a potential timeframe for
implementation. At the lowest level of integration, adaptive and reasoning systems can improve
performance through self-optimization and safety through resilience under widely varying, uncertain,
unexpected and/or hazardous conditions by providing the ability to reconfigure vehicle characteristics for
mission-adaptive performance or improved situation awareness, guidance, and temporary interventions
under emergency conditions. At a mid-level of integration, semi-autonomous systems can enable real-time
trajectory optimization for strategic planning between vehicle and ground controllers to improve mission
performance, and synergistic dynamic teaming between human operators and intelligent systems to
improve safety and operational efficiency. At the highest level of integration, fully autonomous systems
can ensure safety and self-optimize for operational efficiency and performance, while keeping a (possibly
remote) human operator informed of current status and future potential risks.
Figure 3. Illustration of Adaptive Systems Multi-Level Role for Aircraft. (The figure maps levels of adaptive
system integration to implementation timeframes: baseline technology that automates routine operations
under nominal conditions and provides information and alerts in current operations, including remotely
piloted UAS; resilient systems in the 1 – 5 year term that provide safety augmentation, guidance, and
emergency intervention to support baseline systems and the human operator; variable autonomy systems
in the 5 – 10 year term that enable synergistic dynamic teaming between human and intelligent systems,
including single-pilot operations; and ultra-reliable fully autonomous systems in the 10 – 20 year term,
including pilot-optional aircraft, that enable safety-assured operations at all NAS levels across vehicles,
infrastructure, and operations. The key technology impediment is certification of safety-assured autonomy
for reliable operation under uncertainties and hazards.)
Technical challenges and technology impediments for adaptive systems are summarized below for
improving safety, operational efficiency, and performance at all integration levels.
RESILIENCE UNDER UNCERTAIN, UNEXPECTED AND HAZARDOUS CONDITIONS
Key technical challenges and technology impediments for achieving resilience of safety-critical aerospace
systems at all integration levels under uncertain, unexpected, and hazardous conditions are summarized
below.
Technical Challenges
• Development and validation of resilient systems technologies for multiple hazards
  o Reliable contingency management (control & routing) for unexpected events
  o Accurate and fast situation assessment, prediction, and prioritization
  o Fast decision-making and appropriate response
  o Real-time sensor and information integrity assurance
• Development and validation of variable autonomy systems that enable effective teaming between
  automation and humans
  o Standard, effective, and robust multiple-modality interface systems
  o Common real-time situation understanding between human and automation (including standard
    taxonomies and lexicon of terms)
  o Real-time dynamic effective task allocation and decision monitoring
• Development and validation of ultra-reliable safety-assured autonomy technologies
  o Common real-time situation understanding between human and automation (including standard
    taxonomies and lexicon of terms)
  o Universal metrics and requirements for ultra-reliable safety-assured autonomy
  o Hierarchical integration and compositional analysis between control and planning
• Continuous certification of evolving adaptive systems with evolving behaviors
• Design and validation of adaptive systems that only get better (not worse) with experience
Technology Impediments
• Certification of safety-assured autonomy systems for reliable operation under uncertain, unexpected,
  and hazardous conditions
• Integration into existing flight deck equipment and operational systems, e.g., the air traffic
  management (ATM) system
• Public and policy perceptions associated with a lack of trust in autonomy technologies
• Cyber security (both a challenge and an impediment)
• Lack of alignment and integration between the control, artificial intelligence (AI), and software
  validation and verification (V&V) communities
• Interface development that promotes pilot / user training, acceptance, improved situational
  awareness, and teaming
OPERATIONAL EFFICIENCY
Key technical challenges and technology impediments for improving operational efficiency for
semi-autonomous and fully autonomous aerospace systems are summarized below.
Technical Challenges
• Real-time decision support and mission planning for single-pilot operations
• Pilot monitoring and decision-making for an impaired pilot (or human operator)
• Methods for determining the intents and actions of adaptive systems for online visualization,
  querying, and recording as well as post-flight reconstruction
Technology Impediments
• Certification of adaptive systems and automatic takeover of control authority under pilot impairment
• Integration into existing flight deck equipment and the ATM system
• Public and policy perceptions associated with a lack of trust in autonomy technologies
• Cyber security (both a challenge and an impediment)
• Lack of alignment and integration between the control, AI, and software V&V communities
• Interface development that promotes pilot / user training, acceptance, improved situational
  awareness, and teaming
ULTRA-PERFORMANCE
Key technical challenges and technology impediments for improving ultra-performance for
semi-autonomous and fully autonomous aerospace systems are summarized below.
Technical Challenges
• Real-time drag/aerodynamic optimization
• Real-time optimization, convergence, and computational intensity
• Sensor technology limitations
• Data/information fusion limitations
• Risk of over-optimization of performance at the expense of safety
• Ensuring robust performance
Technology Impediments
• Closely coupled physics-based multidisciplinary solutions to address complex vehicle interactions with
  adaptive systems
• Certification of adaptive systems for aeroelastically or statically unstable aerospace vehicles
• Lack of distributed sensor technologies to enable adaptive systems for improved performance by
  self-optimization
• Integration into existing flight deck equipment and the ATM system
• Public and policy perceptions associated with a lack of trust in autonomy technologies
• Cyber security (both a challenge and an impediment)
• Lack of alignment and integration between the vehicle performance and dynamics, control, AI, and
  software V&V communities
3.3 RESEARCH NEEDS TO ACCOMPLISH TECHNICAL CHALLENGES AND
OVERCOME TECHNOLOGY BARRIERS
These technical challenges and technology impediments define research needs that must be addressed in
a number of areas.
MULTIDISCIPLINARY METHODS
Despite many recent advances, adaptive systems remain at Technology Readiness Level (TRL) 5. The
furthest advancement of this technology has been flight demonstrations on piloted and subscale research
aircraft under simulated high-risk conditions; no production safety-critical aerospace system has yet
employed adaptive systems. The existing approach to adaptive control synthesis generally lacks the ability
to deal with the integrated effects of many different (multidisciplinary) flight physics. In the presence of
vehicle hazards such as damage or failures, or in highly complex current and future flight vehicle
configurations such as aircraft with highly flexible aerodynamic surfaces, flight vehicles can exhibit
numerous coupled effects that impose a considerable degree of uncertainty on vehicle performance and
safety. To adequately deal with these coupled effects, an integrated approach to adaptive systems
research should be taken, requiring new fundamental multidisciplinary methods in adaptive control and
modeling. These multidisciplinary methods would develop a fundamental understanding of the complex
system interactions that manifest themselves as system uncertainty affecting performance and safety.
With an improved understanding of the system uncertainty, effective adaptive systems could be
developed to improve performance while ensuring robustness in the presence of uncertainty.
SIMPLIFIED ADAPTIVE SYSTEMS
Another future research goal is to develop simplified adaptive systems that reduce the introduction of
non-determinism. Despite the potential benefits of neural network applications in adaptive systems,
real-world experience from recent flight research programs suggests that simplified adaptive systems
without neural networks may perform better in practice than those with neural networks. Simplified
adaptive systems may have other advantages: they may be easier to verify and validate, and some existing
adaptive control methods can be applied to assess their stability margins and performance.
REAL-TIME SELF-OPTIMIZATION
Applications of real-time self-optimization systems are still very limited, but the potential benefits of these
systems can be enormous. Aircraft with self-optimization can potentially achieve significant fuel savings
when equipped with suitable control systems and distributed sensors that enable self-optimization.
Research in methods of real-time extremum-seeking self-optimization is needed to advance the
technology to a level where it can consistently demonstrate reliability and effectiveness. For complex
flight vehicle configurations, highly integrated methods for adaptive systems should be developed to
address complex vehicle performance characteristics. Reliable methods for model-based machine
learning for system identification of performance metrics should be developed to estimate the
performance characteristics of flight vehicles from distributed sensors and flight data. This information
can be used to synthesize appropriate performance-enhancement actions to be executed by adaptive
systems.
Research in multi-objective optimization is needed to address multiple goals of vehicle performance
simultaneously. These goals, such as fuel efficiency and structural load alleviation for flexible flight
vehicles, can sometimes compete with one another. Multi-objective optimization for adaptive systems
can address complex systems with competing objectives to enable effective adaptation strategies.
Supervisory adaptive systems can provide autonomous decision-making and task allocation to local
adaptive systems that manage individual performance objectives.
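A minimal sketch of the weighted-sum approach to such trade-offs appears below; the two quadratic
objectives are invented stand-ins for fuel burn and structural load, and sweeping the weight traces
candidate Pareto-optimal settings:

```python
# Minimal sketch of a weighted-sum multi-objective trade-off between two
# competing, illustrative objectives (assumed analytic stand-ins for fuel
# burn and structural load); sweeping the weight traces a Pareto front.
import numpy as np

u = np.linspace(0.0, 1.0, 501)        # candidate control setting
fuel = (u - 0.8) ** 2                 # fuel burn minimized near u = 0.8
load = (u - 0.2) ** 2                 # structural load minimized near u = 0.2

for w in (0.1, 0.5, 0.9):             # weight placed on the fuel objective
    J = w * fuel + (1.0 - w) * load   # scalarized cost
    i = np.argmin(J)
    print(f"w={w:.1f}: u*={u[i]:.2f}, fuel={fuel[i]:.3f}, load={load[i]:.3f}")
```

As the weight shifts, the optimizer moves between the two single-objective optima, which is the basic
mechanism a supervisory layer could use to re-balance local objectives in flight.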
REAL-TIME MONITORING AND SAFETY ASSURANCE
Research is needed in the development and validation of resilient control and mission management
systems that enable real-time detection, identification, mitigation, recovery, and mission planning under
multiple hazards. These hazards include vehicle impairment and system malfunction or failures, external
and environmental disturbances, human operator errors, the sudden and unexpected appearance of fixed
and moving obstacles, safety risks imposed by security threats, and combinations of these hazards.
Resilient control and mission management functions include adverse condition sensing, detection, and
impacts assessment, dynamic envelope estimation and protection, resilient control under off-nominal and
hazardous conditions, upset detection and recovery, automatic obstacle detection and collision
avoidance, and mission re-planning and emergency landing planning. Metrics and realistic current and
future hazards-based scenarios are also needed for resilience testing of these systems, including a means
of generating these hazards with an element of surprise during testing with human operators.
Research is also needed for the development and validation of supervisory and management systems that
enable real-time safety assurance of resilient, semi-autonomous, and fully autonomous systems operating
under uncertain, unexpected, and hazardous conditions. These systems would monitor current vehicle
and environmental conditions, all information provided by and actions taken (or not taken) by human
operators and integrated intelligent systems (including vehicle health management, resilient control and
mission planning), and assess the current and future safety state and associated safety risks in terms of
multiple safety factors (including vehicle health and airworthiness, remaining margin prior to entering a
loss-of-control condition, and time remaining for recovery). This capability would require deterministic
and stochastic reasoning processes as well as an ability to reliably and temporarily intervene, when
necessary, over both human operators and intelligent automation systems, while providing situational
awareness and guidance to both.
VERIFICATION AND VALIDATION
Verification and Validation (V&V) research is viewed as a key enabler for making adaptive systems
operational in future flight vehicles. V&V processes are designed to ensure that adaptive systems function
as intended and that the consequences of all possible outcomes of adaptive control are verified to be
acceptable. Currently, software V&V research is being conducted at a disciplinary level but is not aimed
at adaptive systems or at the functional validation of these systems at the algorithm and integrated
disciplinary levels. To effectively develop verifiable adaptive systems, adaptive systems theory and
aerospace system modeling should be tightly integrated with V&V methods. Otherwise, V&V research
could become stove-piped, resulting in implementation challenges.
A PRIORI PERFORMANCE GUARANTEE
One of the fundamental needs for safe and resilient adaptive flight control is to achieve a level of a priori
guaranteed performance when dealing with anomalies resulting from imperfect aircraft modeling,
degraded modes of operation, abrupt changes in aerodynamics, damaged control surfaces, and sensor
failures. A major issue is the lack of a priori, user-defined performance guarantees to preserve a given safe
operating envelope of a flight vehicle. To address this challenging issue, current practice relies heavily on:
1) exhaustive, hence costly, simulations as a means of performing verification, or 2) validation tools for
existing adaptive algorithms. The drawback of exhaustive simulations is that they provide limited
performance guarantees with respect to a set of initial conditions, pilot commands, and failure profiles.
The drawback of validation tools is that such tools can only provide guarantees if there exists a priori
structural and behavioral knowledge regarding any anomalies that might occur. While such knowledge
may be available for some specific applications, the structural and behavioral characteristics of the
anomalies can change during flight (e.g., when an aircraft is subject to unexpected turbulence or
undergoes a sudden change in dynamics), and the safe flight envelope guarantees provided by those tools
may no longer be valid. There is a need to develop adaptive control methods with a priori performance
guarantees that can preserve a given, user-defined safe operating envelope through formal analysis and
synthesis, without requiring exhaustive simulations.
DYNAMIC EFFECTIVE TEAMING
Research into variable autonomy systems is needed to facilitate dynamic effective teaming between the
automation and human operators. Specific areas of research include real-time dynamic function allocation
and interface systems, resilient guidance and autonomous control systems for loss-of-control prevention
and recovery, as well as diagnostic and prognostic systems that enable information fusion, guidance, and
decision support under complex off-nominal and hazardous conditions.
ADDITIONAL CAPABILITIES
Additional research needs related to all of the above capabilities include the ability to model and simulate
highly complex and multidisciplinary vehicle dynamics effects (e.g., associated with multiple hazards),
sensor and information integrity management to ensure that faulty data is not being used by human
operators or intelligent systems in decision-making and actions taken (or not taken), as well as improved
cost-effective methodologies for evaluating (through analysis, simulation, and experimental testing)
safety-critical integrated autonomous systems operating under uncertain, unexpected, and hazardous
conditions. These capabilities are needed for both the development and validation of advanced integrated
resilient and autonomous systems technologies at all levels of implementation, as well as in gaining trust
in their effective response under uncertain, unexpected, and hazardous conditions.
Figure 4 provides a detailed assessment of enabling technologies and research needs associated with
improved aircraft safety at all levels of implementation over the near, mid, and far term. Figure 5
summarizes the research needed to address a key technology impediment for fielding these systems –
certification.
Figure 4. Research Needs for Improved Safety via Resilient, Semi-Autonomous and Fully Autonomous
Systems. (The figure details enabling technologies and research needs by implementation level and
timeframe: baseline capabilities such as altitude hold, autoland, nominal envelope protection, TCAS, and
EGPWS, with no significant warnings or guidance under LOC hazards; resilient systems in the 1 – 5 year
term, including adverse condition sensing, detection and impacts assessment, dynamic envelope
protection, resilient control under off-nominal conditions, upset detection and recovery, automatic
obstacle sensing and collision avoidance, emergency landing planning, improved situation awareness and
guidance, and sensor and information integrity management; variable autonomy systems in the 5 – 10
year term, including real-time dynamic function allocation and interfaces, resilient control under LOC
hazard sequences, LOC prediction, prevention, and recovery, resilient mission planning,
diagnostics/prognostics and decision support, and information fusion and complex situation
assessment/prediction; and ultra-reliable fully autonomous systems in the 10 – 20 year term, including
real-time safety assurance, resilient control and mission management, and integrated vehicle health
management.)
Figure 5. Research Needs for Addressing the Certification of Resilient, Semi-Autonomous and Fully
Autonomous System Technologies. (The figure maps validation research needs against the same
timeframes: in the 1 – 5 year term, validation of resilient systems via nonlinear analysis methods and tools
(e.g., bifurcation), robustness analysis for nonlinear systems, uncertainty quantification, stability analysis
for stochastic filters and sensor fusion systems, multidisciplinary vehicle dynamics simulation modeling
for characterizing hazards effects, hazards analysis and test scenarios for resilience testing, and
experimental test methods for high-risk operational conditions; in the 5 – 10 year term, validation of
complex integrated systems at the functional / algorithm level, including error propagation and
containment between subsystems; and in the 10 – 20 year term, validation of complex integrated
safety-assured autonomous and semi-autonomous systems with deterministic and non-deterministic
components, including integrated validation processes, analysis methods for non-deterministic /
reasoning systems, and level-of-confidence assessment methods. The baseline comprises standard V&V
techniques supporting current certification requirements: linear analysis methods, gain and phase
margins for SISO systems, Monte Carlo simulations, and structured singular value robustness analysis for
MIMO linear systems.)
RESEARCH INVESTMENT AREAS
Research in adaptive systems is considered a fundamental key enabler in the technology development of
next-generation intelligent systems and autonomy. Achieving critical-mass research for transitioning new
ideas into future aerospace systems requires strategic investments that could provide near-, mid-, and
far-term benefits. Some proposed research investment areas are listed below:
Multidisciplinary Modeling & Simulation Technologies
• Experimental databases, variable-fidelity models with multi-physics interactions, and realistic flight
  test scenarios for evaluating adaptive systems
Vehicle Performance-Driven Adaptive Systems
• Adaptive guidance and mission planning for optimization of aerodynamic and propulsion performance
  for next-generation transport aircraft
• Mission-adaptive control for morphing structures
• Adaptive control that can enable envelope expansion, or maintain the existing envelope with reduced
  stability margins, including the flutter boundary
• Integrated multidisciplinary design optimization with active adaptive systems in the loop to achieve
  load reduction for improved flight vehicle aerodynamic-structural design
• Multi-objective control and optimization for managing aerodynamic performance and flight loads for
  energy and structural efficiency
Resilient Multidisciplinary Control System Technologies
• Integrated adaptive flight-propulsion-structure control
• Detection and mitigation of multiple key loss-of-control hazards and their combinations
• Supervisory and hierarchical system architectures and technologies
• Fail-safe operation with graceful degradation
• Real-time multidisciplinary system identification technologies for coupled effects of hazards and
  failures
Safety Monitoring, Assessment, & Management
• Offline and real-time information fusion and integrity management technologies
• Infrastructure for continual collection, storage, and mining of data
• Sensor integrity management
• Offline and real-time safety and risk metrics, assessment, and prediction technologies
• Offline and real-time reliable decision process and reasoning technologies
Validation Technologies for Complex Integrated Deterministic and Stochastic Systems
• Realistic (current and future) hazards analysis and hazards-based test scenarios for resilience
  evaluation
• Coordinated and correlated analysis, simulation, and experimental testing
• Evaluation of system response under unexpected hazards
• Real-time monitoring and continuous certification of evolving adaptive systems
• Safety case development and level-of-confidence assessment technologies for integrated complex and
  adaptive systems
4. AUTONOMY
4.1 INTRODUCTION
Autonomy can help us achieve new levels of efficiency, capability, and resilience through software-based
sense-decide-act cycles. Autonomy, however, can have a wide variety of definitions and interpretations.
Merriam-Webster defines automation as "the method of making a machine, a process, or a system work
without being directly controlled by a person"3 and autonomy as "the quality or state of being
self-governing."4 When do today's "automation aids", designed to provide information for human
pilot/operator decision-making, become "autonomous systems" capable of decision-making without
constant human supervision?
In aerospace, we tend to think of automation in terms of improved situational awareness and reduced
pilot workloads, which in turn lead to better collaborative human-machine decision-making, e.g., for air
traffic management (ATM).5 Currently, we program automation aids with explicit purposes: maintaining
stable vehicle control, or detecting and warning a crew about potential collision with other aircraft and
terrain.6 Automation aids are valuable for humans and machines: they augment perception,
decision-making, and control (action) capabilities, but they lack direct decision-making authority or
"self-governance" and must be monitored and managed by human supervisors.
Automation aids become autonomy by Merriam-Webster's definition when they "make a process or
system work" and offer "self-governance" without [regular] human supervision or operation. For
example, a Mars rover might plan and execute its mission for the next day with only "acceptance" from
earth-based operators, rendering it "self-governing" unless the operators intervene. Similarly, an
envelope protection system7 8 that prevents a pilot from stalling the aircraft "self-governs" with respect
to stall and represents a basic form of autonomy, as do software-based controlled flight into terrain (CFIT)
avoidance and detect-and-avoid systems that override the pilot rather than merely warning and providing
recommendations.
Two complementary decision-making skills underpin autonomy: 1) the ability to succeed given
complexity, uncertainty, and risk; and 2) knowledge and learning. Knowledge can be compiled in onboard or cloud-
3 http://www.merriam-webster.com/dictionary/automation
4 http://www.merriam-webster.com/dictionary/autonomy
5 U. Metzger and R. Parasuraman, "Automation in future air traffic management: Effects of decision aid
reliability on controller performance and mental workload," Human Factors: The Journal of the Human
Factors and Ergonomics Society, 47, no. 1, pp. 35-49, 2005.
6 J. J. Arthur III, J. Lawrence, J. L. Prinzel III, J. Kramer, R. E. Bailey, and R. V. Parrish, "CFIT prevention using
synthetic vision," Proceedings of SPIE, vol. 5081, pp. 146-157, 2003.
7 C. Tomlin, J. Lygeros, and S. Sastry, "Aerodynamic envelope protection using hybrid control,"
Proceedings of the American Control Conference, IEEE, vol. 3, pp. 1793-1796, 1998.
8 I. Yavrucuk, S. Unnikrishnan, and J. V. R. Prasad, "Envelope protection for autonomous unmanned aerial
vehicles," Journal of Guidance, Control, and Dynamics, 32, no. 1, pp. 248-261, 2009.
based data that can be effectively accessed in real-time as needed. Knowledge for autonomy is analogous
to long-term training and testing (licensing) which we require pilots and drivers to complete before we
“trust” them with self-governance of a plane or a car. For autonomy, knowledge can take the form of
system models including analytical, empirical, and experimental data that can capture and efficiently
retrieve safe and appropriate actions given sensed and communicated state as well as mission objectives.
A combination of cloud-based and onboard sensing, data, and computational resources can be exploited
given reliable network access. Next-generation knowledge-based systems can allow autonomous vehicles
to execute the appropriate algorithms to make decisions based on long-term data analysis/mining as well
as real-time observations.
In new situations that have not been adequately captured by existing knowledge, an autonomous agent
must be capable of adapting or learning models and/or decision-making algorithms to meet goals in a
dynamically changing, uncertain environment. Changes to vehicle dynamics can be accommodated
with parameter adaptation via system identification or adaptive control. Changes to mission goals or the
environment can be modeled by adapting higher-level models and algorithms using techniques such as
reinforcement learning. Data-intensive observations of the environment and vehicle behaviors within that
environment also provide rich data for subsequent mining, which in turn can improve the autonomous
system knowledge base for future missions.
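As one concrete (and deliberately simplified) illustration of parameter adaptation via system
identification, the sketch below uses recursive least squares with a forgetting factor to track a change in
the dynamics of an assumed scalar discrete-time model:

```python
# Minimal sketch of online parameter adaptation via recursive least squares
# (RLS), identifying a discrete-time model x[k+1] = a*x[k] + b*u[k] whose
# true parameters change mid-run. All values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(2)            # estimate of [a, b]
P = np.eye(2) * 100.0          # estimate covariance
lam = 0.98                     # forgetting factor (enables tracking of changes)

x = 0.0
for k in range(400):
    a, b = (0.9, 0.5) if k < 200 else (0.7, 0.8)   # dynamics change at k = 200
    u = rng.normal()
    x_next = a * x + b * u + 0.01 * rng.normal()   # noisy plant response
    phi = np.array([x, u])                         # regressor
    K = P @ phi / (lam + phi @ P @ phi)            # RLS gain
    theta += K * (x_next - phi @ theta)            # parameter update
    P = (P - np.outer(K, phi @ P)) / lam           # covariance update
    x = x_next

print("estimated [a, b]:", np.round(theta, 2))     # tracks the new (0.7, 0.8)
```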
Learning and knowledge-based systems are synergistic. Knowledge-based systems can be
comprehensively tested through certification, but they will be unable to handle new situations in
real-time. Data storage, processing, and retrieval complexity and costs present a tradeoff between
increasing content in a knowledge base and adapting online. Adaptive systems will require new models
of licensing given that they cannot be proven correct over all possible stimuli, but they can reduce
automation rigidity.
Most autonomy infused to date has focused on achieving safe, efficient aerospace vehicle and payload
operation, in some cases enabling unmanned air and space missions that could not otherwise be achieved
due to limited data throughput, delays, and human situational awareness. Yet there are fundamental
questions about how we ensure that an autonomous system is trustworthy. Indeed, one can view
autonomy in many different ways. Most aerospace engineering applications today have focused on
autonomy that keeps an aircraft from crashing while providing high-quality science and surveillance data,
capabilities we will eventually depend on as much as we do on GPS and real-time traffic maps today. Yet
active research in artificial intelligence, human-machine interaction, and control and decision-making is
leading to algorithms and architectures that may one day enable Unmanned Aircraft Systems (UAS) to
carry out difficult missions alongside their human counterparts.
Given the deep interest in this field from industry, policy makers, and the research community, the
purpose of this autonomy roadmap is twofold. First, our goal is to present a high-level overview of
autonomy and the key challenges that must be overcome in the near future. Accordingly, fundamental
challenges to autonomy, systems engineering challenges, and safety challenges are outlined in
Section 4.2. In Section 4.3, we seek to highlight specific algorithmic and architectural challenges relevant
to aerospace autonomy, and to highlight crucial research directions that are being actively tackled by the
research communities. We close the roadmap with a forward-looking view in Section 4.4.
4.2 KEY AUTONOMY CHALLENGES FACING THE AEROSPACE COMMUNITY
WHAT IS AUTONOMY
What is autonomy, and why is it important? As outlined above, a distinction between autonomy and
automation is related to the level of authority or "self-governance" endowed to human operator(s) versus
the machine. Autonomy will enable new missions and holds promise to make existing missions even more
robust, efficient, and safe. Commercial transport aircraft are an extremely safe means of transit, and
people reasonably assume that GPS data will always be available in open areas. Malicious operator
actions, such as those taken by terrorists on 9/11 and those taken more recently by the co-pilot on
Germanwings Flight 9525, suggest the infusion of refuse-to-crash autonomy with override capability into
future transport aircraft. The Germanwings accident could have been averted with refuse-to-crash
autonomy that activates just-in-time to avoid flight into terrain. At the most fundamental level, all
autonomous systems should be endowed with life-preserving autonomy that engages when threats are
perceived. Risk management in other cases might require refuse-to-crash autonomy intervention much
earlier, particularly when factors are expected to progressively increase risk over time. Predictions of
future risk are based on system health, environmental conditions, and crew input effectiveness.
Autonomy can and should intervene well before an aircraft might become unrecoverable, particularly
when past crew responses are inappropriate, rendering predicted risk levels unacceptable in part because
the inappropriate inputs are likely to continue.
While most of this roadmap section discusses autonomy in the context of fully-autonomous systems, it is
important to also recognize that autonomy in manned aerospace platforms can enhance safety by
transferring authority as needed. Software and hardware systems will be imperfect and potentially
insecure, so any transfer of authority must be thoroughly analyzed to ensure overall risk is constant or
reduced. Similarly, autonomy has so far seen limited application in space missions, primarily due to risk,
since unlike aviation there are few opportunities to service spacecraft. The limited ability to service
spacecraft is an excellent reason for having a trusted ability to adapt with self-diagnostics to address
changing dynamics and environmental conditions. The space ground system section of this roadmap
provides examples of where increasing autonomy could be prudently introduced for satellite
command and control ground systems. While many autonomous applications tend to address safety and
operations, autonomy can also enable improved aircraft performance. Future advanced transport aircraft
could benefit from autonomy by means of autonomous decision-making using distributed sensor suites,
reconfigurable aerodynamic control technologies, and system knowledge to achieve improved fuel
efficiency and reduced structural loads, noise, and emissions.
A 2014 National Research Council (NRC) report entitled “Autonomy Research for Civil Aviation: Toward a
New Era of Flight” 9 intentionally used the term “increasingly autonomous” (or IA) without explicitly
defining autonomy, to avoid the inevitable debate over finding one "true" definition of autonomy. IA
systems were viewed as a progressively sophisticated suite of capabilities with "the potential to improve
safety and reliability, reduce costs, and enable new missions", providing focus on barriers and research
needs as opposed to a more controversial focus on "authority shift". The NRC report's barriers and
high-priority research projects are listed in Appendix A, with more information available in the NRC report. This
intelligent systems roadmap effort certainly does not seek to replicate the NRC process. It instead focuses
on presenting areas of autonomy research identified by our technical committee members and
participants in autonomy breakout sessions at the AIAA Intelligent Systems workshops held in August
2014 in Dayton, OH and in August 2015 at NASA Ames Research Center. This roadmap section is more
specific than the NRC report in that it represents the AIAA intelligent systems constituency primarily, yet
9 National Research Council. (2014) Autonomy Research for Civil Aviation: Toward a New Era of Flight.
[Online]. http://www.nap.edu/openbook.php?record_id=18815
it is broader in that it extends beyond civil aviation to also include government and university researchers
as well as space applications.
To build an enduring roadmap to autonomy research, this report focuses on identifying autonomy
challenges rather than proposing projects, since specific autonomy research projects of interest to
different research groups and funding agencies would likely encompass several of the below challenges
in an application-oriented framework, e.g., aircraft autonomy, spacecraft autonomy, cooperative control
or system-wide management as in air traffic control, etc. Autonomy challenges are divided into three
categories: fundamental challenges that underpin almost any aerospace system endowed with autonomy,
systems engineering challenges, and challenges in minimizing risk / ensuring safe operation. This roadmap
section closes with a discussion on autonomy infusion opportunities that might lead to successful
development, testing, and acceptance of autonomy in future aerospace systems.
FUNDAMENTAL CHALLENGES
Autonomy will be embedded in complex systems that execute multiple local system, and system-ofsystems-wide sense-decide-act cycles. To act with authority rather than constant backup from a human
supervisor, the autonomy must be capable of achieving a level of situational awareness, adaptability, and
indeed “cleverness” that has not yet been realized in automation aids. Specific cross-cutting autonomy
challenges are summarized below:







Handling rare events: What strategies will succeed, and what tests can we perform to assure such a
system?
Handling unmodeled events: How does an autonomous system detect events that are not modeled,
and deal with such events in a manner that avoids disaster at least, and accomplishes the mission at
best?
Adapting to dynamic changes in the environment, mission, and the platform: Robust operation and
performance efficiency in the real-world will require the autonomous system to dynamically adapt its
control and decision-making policies in response to unforeseen or unmodeled changes. How do we
ensure that an autonomous system is robust to dynamic changes in the environment, mission
expectations, or itself?
“Creative” exploration and exploitation of sensed data: Sensors such as cameras, radar, lidar, and
sonar/ultrasonic augment traditional inertial and global positioning sensors with a new level of
complex data. An autonomous system must be capable of interpreting, fusing, and acting on incoming
information, not just feed it back to the user. This requires autonomy capable of acquiring and
processing sensor data in real-time to go from effective data representations to decisions.
New information-rich sensors: Sensors themselves still do not provide the diverse and comprehensive
dataset comparable to the human sensor system. Autonomy therefore can also benefit from new
sensor mechanisms to generate data which can be transformed into knowledge.
Advanced knowledge representations: Autonomous systems must be capable of capturing complex
environment properties with effective multidimensional knowledge representations. Once
representations are formulated, knowledge engineering is required offline to endow the autonomy
with a baseline capability to make accurate and “wise” (optimal) decisions. System-wide adaptation
of engineered knowledge will also be essential in cases where the environment is poorly modeled or
understood or deviated significantly from the baseline knowledge.
Intent prediction: Autonomous systems will ultimately interact with people as well as other
autonomous vehicles/agents. The autonomous system must not only act in a logical and transparent
23
DRAFT - NOT YET APPROVED FOR PUBLIC RELEASE
DRAFT - NOT YET APPROVED FOR PUBLIC RELEASE

manner; it also must be capable of predicting human intent to the extent required for effective
communication and co-habitation in a common workspace.
Tools and algorithms for multi-vehicle cooperation: Autonomous vehicles must cooperate with each
other, particularly when operating in close proximity to each other in highly-dynamic, poorlymodeled, or hazardous environments. Research must extend past assumptions that platforms are
homogeneous. Indeed, vehicles may have distinct and potentially non-overlapping capabilities with
respect to motion (e.g., travel speeds, range/endurance), sensing, and onboard storage and
processing capacity. Autonomous teams must optimize behaviors to achieve new levels of capability
and efficiency in group sensing and action.
SYSTEMS ENGINEERING CHALLENGES
• Establishing a common design tool/language base: Traditional V or Vee models of systems engineering
  have proven difficult to apply to complex safety-critical systems such as modern aircraft and
  spacecraft. Model-based engineering shows promise, but protocols are not yet mature and accepted
  across the different disciplines contributing to system design. Autonomy will add to existing system
  complexity due to the need for adaptability and complexity in most cases.
• Validation, verification, and accreditation (VV&A): V&V of complex systems with unknowns has posed
  substantial challenges, particularly when budget constraints are tight. Autonomy will be particularly
  difficult to V&V because in past systems we have relied on human operators, not software, to provide
  a "backup", and we have been tolerant of "imperfect human response". For autonomy, we must
  establish systems that incorporate probabilistic or uncertain models into V&V to ensure a sufficient
  level of probabilistic validation and verification, as the complexity of the system and its environment
  will prohibit guarantees of V&V. To this end, future autonomy will likely need to incorporate
  procedures for accreditation and licensing currently available for human operators, who cannot be
  comprehensively evaluated for 100% correct behaviors. We also need the right rules and abstractions
  to make full VV&A possible.
• Robust handling of different integrity levels in requirements specifications: Integrity levels have
  typically been specified manually by system designers, with levels such as those indicated by the FAA
  in DO-178B leading to different levels of tolerance to risk. It is costly to require all elements of a
  tightly-coupled complex system to obtain the highest level of integrity required for any component in
  the system. Automatic and robust techniques to specify and manage integrity levels are needed.
• System engineering for the worst case: Nominally, automation and autonomy can be shown to
  function efficiently and safely. However, rare events can cascade into a worst-case scenario that can
  produce responses much worse than expected in the design. Research is needed to ensure
  autonomous systems can be guaranteed not to make a worst-case scenario much worse by, for
  example, engaging humans or constraining adaptation in a manner that reins in the probability of
  catastrophic failure.
SAFETY CHALLENGES
• Risk assessment: Calculation of risk is not straightforward in a complex system. Endowing autonomy
  with a high level of decision-making authority and the ability to adapt compounds the risk assessment.
  How can component-level, vehicle-level, and system-level risks be computed in a highly-autonomous
  system, and what is the impact of false positives and negatives on the environment and other actors?
• Risk bound specification: The FAA has established a simple bound on "risk of safety violation per hour
  of flight", but it is not clear this single number is the final word, nor is it clear this number translates
  to different applications such as unmanned operations, flights over populated areas, or missions with
  such high value that risk is tolerated. A major safety challenge is therefore calculating, negotiating,
  and accepting/establishing bounds on risk/safety for different systems, platforms, and scenarios. To
  this end, "safe" test scenarios as well as constrained tests that exercise high-risk cases may be
  beneficial to consider.
• Level of safety with rogue / hostile vehicles: While assessing autonomous system safety with a single
  vehicle or cooperative team is difficult, this challenge is compounded when rogue or adversarial
  vehicles are nearby. Safety challenges may be faced due to the potential for collision with other
  vehicles, attack by munitions, or more generally adversarial actions that compromise targets, jam
  signals, etc.
• Reliable fault (or exception) detection and handling: Fault and failure management is a challenge in
  any complex aerospace system, regardless of level of autonomy. Today's systems, however, rely
  heavily on a human operator assessing the exception and dictating a recovery process. Autonomy is
  beginning to handle faults/failures on a case-by-case basis, but failures that have not been explicitly
  considered by system designers remain difficult to handle through detection and reliable/safe
  adaptation of models and responses. This problem is compounded for software-enabled autonomy
  due to the potential for computing system failures, network outages, signal spoofing, and
  cybersecurity violations.
• Autonomy-Human Transitions: A major autonomy challenge is to ensure transitions of authority from
  autonomy to human (and vice versa) are unsurprising, informative, and safe. This challenge is
  motivated by numerous documented "mode confusion" cases in flight decks and by accidents where
  automation "shut down" in the most difficult high-workload scenarios without providing warning or
  any type of gradual authority transition. Autonomy may initiate actions to "buy time" in cases where
  transitions would otherwise be necessarily abrupt.
4.3 ALGORITHM AND ARCHITECTURE DESIGN CHALLENGES
Fully-autonomous operation of aerospace platforms requires a seamless integration of sensing,
environment perception, decision-making, and control algorithms, effectively performing Boyd's Observe
Orient Decide Act (OODA) loop, widely considered to be a reasonable abstraction of human, machine, and
collaborative decision systems.10 Traditionally, aerospace platforms have needed to perform OODA
functions onboard, but we anticipate pervasive and reliable network connectivity will also support
vehicle-to-cloud-to-vehicle (V2C2V) autonomy implementation models. Regardless of where
computations are performed and data are collected and stored, an autonomous vehicle needs to perform
OODA functions accurately and in time under normal and anomalous situations.
Some of the key challenges in autonomy come from the fact that agents need to operate in a real-world
environment with a structure that may be unknown a priori or that may dynamically change. Changes in
the environment’s static or mobile entities, degradation or loss of capabilities in the agent platform, or
higher-level changes in the mission objectives are but some examples of changes that an autonomous
agent may need to handle. Changes to the mission, environment, and platform will be capably handled
through a combination of two strategies: database (knowledge) retrieval and online adaptation. A series
of related challenges is highlighted below.
10 D. K. Von Lubitz, J. Beakley, and F. Patricelli, "'All hazards approach' to disaster management: the role
of information and knowledge management, Boyd's OODA Loop, and network-centricity," Disasters,
32(4), pp. 561-585, 2008.
KNOWLEDGE-BASED AUTONOMY
Model-based state estimation, planning, and control algorithms can provide a suite of pre-computed
functions that can capably (and verifiably) cover the vast majority of scenarios that any autonomous
vehicle or more generally agent might encounter. Well-trained operators and pilots also rely on
knowledge-based autonomy through checklists, repetition and recall of specific responses that have been
successful, etc. "Instructions" and a priori training have proven valuable to autonomous, human, and
collaborative decision systems alike. Deterministic state estimation, planning, and control algorithms can
provide a strong base for autonomy that can be carefully analyzed, validated, and verified a priori. While
appreciable data and functionality can be stored onboard, autonomous vehicles supporting complex
missions may also rely on cloud-based storage and computational resources.
Knowledge-based OODA requires multiple decision-making layers. A typical layered autonomy
architecture11 is depicted in Figure 6. At the lowest level, middleware and operating system software
interfaces with available data storage, network, sensor, and actuation hardware. The next two layers
provide functions for tasks including payload handling, vehicle health management, and guidance,
navigation, and control (GNC). Activity planning/scheduling translates mission goals to task execution
schedules/sequences, as well as waypoint goals to motion plans. "Automation" typically stops at the task
execution or activity scheduling layer, relying on human supervisors/operators to specify and update
mission goals. Fully-autonomous operation requires an additional top layer to determine and evolve
mission objectives in response to changes in the environment, vehicle health, and information retrieved
from the cloud and other vehicles.
Figure 6. Autonomy Decision-Making Layers.
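A schematic sketch of such a layered stack is given below. The class and method names are illustrative
assumptions loosely following Figure 6, not a standard API; each layer consumes the abstraction provided
by the layer beneath it, and the top mission layer can evolve its own goals:

```python
# Schematic sketch (assumed structure, loosely following Figure 6) of a
# layered autonomy stack. Class and method names are illustrative only.
class HardwareLayer:                  # middleware / OS: sensors and actuators
    def read_sensors(self):
        return {"pos": (0.0, 0.0), "battery": 0.82}
    def command(self, actuation):
        print("actuating:", actuation)

class GncLayer:                       # guidance, navigation, and control
    def __init__(self, hw):
        self.hw = hw
    def track(self, waypoint):
        state = self.hw.read_sensors()
        error = tuple(w - p for w, p in zip(waypoint, state["pos"]))
        self.hw.command({"thrust_vector": error})
        return state

class PlanningLayer:                  # activity planning / scheduling
    def plan(self, goal):
        return [(1.0, 0.0), (1.0, 1.0), goal]   # waypoint sequence to the goal

class MissionLayer:                   # top layer: owns and evolves objectives
    def __init__(self):
        self.gnc = GncLayer(HardwareLayer())
        self.planner = PlanningLayer()
    def run(self, goal):
        for wp in self.planner.plan(goal):
            state = self.gnc.track(wp)
            if state["battery"] < 0.2:          # evolve the mission on low energy,
                goal = (0.0, 0.0)               # e.g., return to base
                break

MissionLayer().run(goal=(2.0, 2.0))
```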
A careful software and model-based system engineering process can adequately capture knowledge and
functionality to effectively implement all autonomy layers in Figure 6. Predictive models of vehicle system
11 R. Alami, R. Chatila, S. Fleury, M. Ghallab, and F. Ingrand, "An architecture for autonomy," The
International Journal of Robotics Research, 17(4), pp. 315-337, 1998.
performance and the environment can be exploited to plan policies that maximize current and future
expected mission reward while meeting imposed constraints related to safety and available resources. If
stored models and methods are reasonable approximations of the actual system and its environment,
deterministic planning and control solutions can be sufficient, albeit costly and error-prone in large-scale
complex systems. Given scalability issues, along with the potential for incomplete and uncertain
information, a combination of knowledge-based, uncertain-reasoning, and learning systems may be
required.
AUTONOMY UNDER UNCERTAINTY
Resilient autonomy must be capable of recognizing and reacting to anomalies and uncertainties in the
environment, in onboard system capabilities and performance, and in other agents' instructions and
behaviors. A variety of uncertain reasoning algorithms have been developed, including path planning
algorithms such as the RRT (rapidly-exploring random tree) and the Markov Decision Process, also known
as Stochastic Dynamic Programming. These techniques have demonstrated an ability to capture and
handle unknown and uncertain environments and outcomes. However, two of the most fundamental
assumptions in decision-making-under-uncertainty paradigms are often violated: stationarity and
ergodicity. Stationarity requires time-independent state transition and reward models, while ergodicity
essentially guarantees the non-existence of irrecoverable mistakes and the ability to repeat every
experience infinitely many times. Furthermore, in the presence of dynamic changes, scripted or stationary
mission policies will lead to brittle mission performance. In these cases, the ability to adapt the mission
policy during execution can prove critical. The real-world environment is noisy and uncertain. When the
uncertainty in the transition or reward model (which can be viewed as process noise) or in sensing
(measurement noise) is Gaussian, strong and elegant results for optimal decision-making and control are
available, including the Kalman filter and the Linear Quadratic Gaussian regulator. However, rare events
or outliers in sensing measurements are often non-Gaussian in nature. Ignoring these events can be very
costly; on the other hand, conservative Gaussian approximations can lead to overly conservative policies
and reduced mission rewards. The accommodation of non-Gaussian uncertainties in learning and
decision-making is an important challenge facing the autonomy community.
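One simple mitigation, sketched below under assumed noise levels, is chi-square innovation gating on a
Kalman filter: measurements whose innovations are statistically implausible under the Gaussian model
are rejected rather than fused, limiting the damage from non-Gaussian outliers:

```python
# Minimal sketch of a scalar Kalman filter with chi-square innovation gating,
# one simple way to keep non-Gaussian measurement outliers (rare events)
# from corrupting the estimate. All noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
q, r = 0.01, 0.1               # process / measurement noise variances
x_true, x_hat, P = 0.0, 0.0, 1.0

for k in range(200):
    x_true += rng.normal(scale=np.sqrt(q))   # random-walk truth
    z = x_true + rng.normal(scale=np.sqrt(r))
    if k % 50 == 25:
        z += 5.0                             # occasional gross, non-Gaussian outlier
    P += q                                   # predict step
    nu, S = z - x_hat, P + r                 # innovation and its variance
    if nu ** 2 / S < 9.0:                    # ~3-sigma gate: accept measurement
        K = P / S
        x_hat += K * nu
        P *= (1 - K)
    # else: reject the measurement and keep the prediction

print(f"final estimation error: {x_hat - x_true:+.3f}")
```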
AUTONOMY WITH ONLINE ADAPTATION / LEARNING
When a new situation is encountered, an autonomous system will not be able to recall data or a case to
directly apply. Learning or adaptation is then required. Figure 7 depicts an example of a learning-based
autonomy architecture. In this architecture, a predictive model of the environment takes the center-stage.
This architecture is designed to enable an Autonomous Utility-Driven Agent to utilize learning and
inference algorithms to continuously update the predictive model as data become available, and utilize
the predictive ability to make decisions that satisfy a higher-level objective. Learning and inference
algorithms are needed to make sense of the available data and update the predictive model of the nature,
intent, and transitions of entities in the environment. Learning and inference algorithms, typically
classified according to their functionality, include regression algorithms that are designed to learn
patterns from noisy data; classification algorithms (e.g. linear classifiers, Support Vector Machines) that
provide labels for entities in the environment; and clustering algorithms that seek to provide structure by
grouping together behaviors of the entities. Actions, rewards, probabilities, and model or control
parameters are among the many quantities that might need to be updated or discovered through online
learning. Once learned through exploration, new or adapted entities can be stored, shared, and later
exploited as part of a knowledge base available to the particular platform, the team, or ultimately the
cloud-based “internet of things”. Knowledge retention and update are two important elements of a
learning-based autonomy architecture. The ability to retain information from past learning for future
exploitation ensures that the system knowledge continues to grow with learning in order to handle a wide
variety of scenarios that an autonomous system might encounter. Past system knowledge needs to be
updated from time to time by learning to reflect changes in the operating environment and system
dynamics. What might be an optimal policy for a healthy aircraft might no longer constitute a suitable
action for an impaired vehicle. Verifiable learning is an important challenge to learning-based autonomy.
Ensuring that machine-learning algorithms learn the correct behaviors for an autonomous system
is particularly challenging. Thus, verification and probabilistic assessment of learning algorithms are
needed.
Figure 7. Learning-based Autonomy Architecture
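As one concrete way to update a predictive model online, the following minimal Python sketch implements recursive least squares with a forgetting factor; the linear model structure, forgetting factor, and synthetic drifting data are assumptions made purely for illustration.

    import numpy as np

    # Minimal recursive least squares (RLS): one way to update a linear
    # predictive model online as new (x, y) samples arrive. The forgetting
    # factor lam < 1 discounts old data so the model can track drift.
    class RLS:
        def __init__(self, dim, lam=0.99):
            self.w = np.zeros(dim)        # model parameters
            self.P = np.eye(dim) * 1e3    # inverse-covariance-like matrix
            self.lam = lam

        def update(self, x, y):
            Px = self.P @ x
            k = Px / (self.lam + x @ Px)     # gain vector
            self.w += k * (y - x @ self.w)   # correct the prediction error
            self.P = (self.P - np.outer(k, Px)) / self.lam

    rls = RLS(dim=2)
    for t in range(200):                  # synthetic system that changes at t=100
        x = np.array([1.0, t * 0.01])
        y = 0.5 + (2.0 if t < 100 else -1.0) * x[1]
        rls.update(x, y)
    print(rls.w)                          # approaches the post-change parameters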
MULTI-AGENT AUTONOMY
A strong case has been argued that collaborative operation with multiple smaller autonomous agents - as
opposed to a single larger agent - can be more robust. Additionally, sensing and acting can be
simultaneously performed in different geographic locations with a collaborative team promising
improvements in situational awareness and mission execution efficiency. In an autonomous multi-agent
system, each agent (vehicle) must still plan its own path, reliably execute this path, and collect pertinent
observations related to OODA functions as well as to the mission itself. Additionally, a team must
be coordinated. Team coordination requires, at a minimum, a consistent understanding of the mission and team member
roles, which in turn requires a reliable and potentially high-bandwidth communication path between
team members in situations where the mission cannot be executed per the original (a priori) plan.
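As a minimal illustration of one standard coordination primitive, the following Python sketch runs discrete-time average consensus over a small team; the four-agent ring topology, step size, and initial estimates are invented for illustration.

    import numpy as np

    # Each agent repeatedly averages with its neighbors so the team converges
    # to a common value (e.g., a shared estimate of a mission quantity).
    A = np.array([[0, 1, 0, 1],           # adjacency matrix of a 4-agent ring
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
    x = np.array([10.0, 2.0, 6.0, 4.0])   # each agent's local estimate
    eps = 0.25                            # step size, below 1/max_degree

    for _ in range(50):
        # x_i += eps * sum_j a_ij * (x_j - x_i)
        x = x + eps * (A @ x - A.sum(axis=1) * x)
    print(x)                              # every entry approaches the mean, 5.5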
REAL-TIME AUTONOMY
Current autopilot systems for platforms ranging from small UAS to commercial aircraft are manually
validated and verified using time-intensive analyses. For commercial FMS (flight management systems),
hard real-time task allocation and scheduling along with associated schedule validation and verification
have been accomplished effectively but have required significant time and effort. Small UAS autopilot code
tends to be deployed in single-core, single-thread implementations that can be more easily analyzed, yet
it is unclear whether community-developed code bases ever undergo the rigorous real-time analyses required
for certified manned aviation products. Emerging autonomy capabilities will require significantly more
computational power than do current autopilots. Inevitably, processing will be performed in a multi-core,
distributed computational environment that includes GPUs (Graphics Processing Units) and cloud-based
resources. Reliable autonomy will therefore be critically dependent on improving autonomous real-time
software modeling, validation, and verification tool chains, as costs will be otherwise unmanageable.
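To give a flavor of the hard real-time schedulability analysis involved, the following Python sketch applies the classical Liu-Layland utilization bound for rate-monotonic scheduling; the task set is invented for illustration, and the bound is sufficient but not necessary.

    # Rate-monotonic schedulability check via the Liu-Layland utilization
    # bound. Task parameters below are invented for illustration.
    def rm_schedulable(tasks):
        """tasks: list of (worst_case_exec_time, period) pairs."""
        n = len(tasks)
        u = sum(c / t for c, t in tasks)      # total processor utilization
        bound = n * (2 ** (1.0 / n) - 1)      # sufficient (not necessary) test
        return u, bound, u <= bound

    u, bound, ok = rm_schedulable([(1, 4), (2, 8), (1, 20)])
    print(f"utilization={u:.3f}, bound={bound:.3f}, schedulable={ok}")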
4.4 ROADMAP TO SUCCESS
Autonomy research is currently “on the radar” of most major funding agencies, but setbacks and changes
in leadership could compromise the momentum present today. The ISTC strongly encourages the aerospace
autonomy research community to heed lessons learned in other fields to enable long-term progress
toward autonomy that will truly advance our aerospace platform and system-wide capabilities. Below is
a brief list of “rules” that we believe will promote a successful, collaborative community-based effort
toward future aerospace autonomy goals.
• Always identify clear and tangible benefits of “new” autonomy; motivate the effort. Autonomy
research must be pursued because it is of potential benefit.
• Be honest about challenges and tradeoffs and discover how to avoid them. Do not just advertise
autonomy by showing the “one demo that worked properly”.
• Be cognizant of the depth, strength, and limitations of the work being done by the machine learning
and artificial intelligence communities in the areas of planning, decision-making, and knowledge
representation. This should help avoid reinventing the wheel and ensure that findings from
those communities are extended and integrated with an engineering perspective.
• Talk to people in other fields to ensure that the “engineering autonomy” viewpoint, which stresses safety
and robustness above all, grows to be more comprehensive. Autonomy has the potential to
benefit business, education, and other use cases related to air and space flight systems.
• Develop and capitalize on collaborations, open source, standards (including for representing data), grand
challenges (a recurring theme), policy changes, and crowd sourcing. To this end, we recommend the
community create grand challenges to motivate and evaluate autonomy, and develop benchmarks and
metrics.
• Remember regulatory, legal, and social challenges (public education and trust). These must be kept in
mind particularly when proposing autonomous systems that will carry or otherwise interact with the
public.
• Leverage system knowledge through model-based approaches as widely as possible in the development
of autonomous systems. System knowledge will provide robust and effective autonomous solutions
while reducing the burden that otherwise could be placed on machine learning.
• Recognize that autonomy can occur at many different levels depending on the application. The end goal for some
autonomous systems could be complete autonomous operation without human supervision, while
for other autonomous systems it could be a synergistic, cooperative operation between machine and
human.
• Educate funding agencies about, and advocate for, the importance of autonomy research in enabling new
capabilities that cannot be realized without research support.
• Pursue education and outreach as essential elements of long-term success in developing and infusing
aerospace autonomy technology. To that end we recommend the following:
o Develop aerospace autonomy tutorials.
o Educate through online interactive demos that are fun. Autonomy researchers can gain trust in
the community by helping all understand how autonomy can improve both mission capabilities
and safety.
o Find places to "easily" transition autonomy in aerospace to demonstrate safety improvements.
Autonomy infusion opportunities include emergency auto-land for civil aircraft in “simple” cases
(e.g., engine-out) and maturing detect-and-avoid capabilities such as the ground collision
avoidance system at AFRL.
o Encourage co-design of autonomy and human factors to enable interfaces to be informative,
unsurprising, and safe.
4.5 SUPPLEMENT
SUMMARY OF NRC AUTONOMY RESEARCH FOR CIVIL AVIATION REPORT
BARRIERS AND RESEARCH AGENDA
The NRC report9 on autonomy research for civil aviation is heavily cited in this roadmap because of its analogous focus
on autonomy or “increasingly autonomous” (IA) systems and because it represents a consensus view among community
experts. Note that the NRC report focuses on civil aviation, so our roadmap aims to address other use cases, e.g., DoD
and commercial, as well as considering autonomy research needs for space applications.
Barriers were divided into three groups: technology barriers, regulation and certification barriers, and additional
barriers. The full list is presented below for completeness. Most of these technology and regulatory barriers have
unambiguous meanings. Legal and social issues focused on liability, fear/trust, as well as safety and privacy concerns
associated with deploying increasingly autonomous (IA) crewed and un-crewed aircraft into public airspace over
populated areas. The committee called out certification, adaptive/nondeterministic systems, trust, and validation and
verification as particularly challenging barriers to overcome.
TECHNOLOGY BARRIERS
1. Communications and data acquisition
2. Cyber physical security
3. Decision making by adaptive/nondeterministic systems
4. Diversity of aircraft
5. Human–machine integration
6. Sensing, perception, and cognition
7. System complexity and resilience
8. Verification and validation (V&V)
REGULATION AND CERTIFICATION BARRIERS
1. Airspace access for unmanned aircraft
2. Certification process
3. Equivalent level of safety
4. Trust in adaptive/nondeterministic IA systems
ADDITIONAL BARRIERS
1. Legal issues
2. Social issues
The NRC committee identified eight high-priority research agenda topics for civil aviation autonomy. These were further
classified into “most urgent and difficult” and “other high priority” categories. These projects are listed below with the
verbatim summary description of each topic.
MOST URGENT AND DIFFICULT RESEARCH PROJECTS
1. Behavior of Adaptive/Nondeterministic Systems: Develop methodologies to characterize and bound the behavior
of adaptive/nondeterministic systems over their complete life cycle.
2. Operation without Continuous Human Oversight: Develop the system architectures and technologies that would
enable increasingly sophisticated IA systems and unmanned aircraft to operate for extended periods of time
without real-time human cognizance and control.
3. Modeling and Simulation: Develop the theoretical basis and methodologies for using modeling and simulation to
accelerate the development and maturation of advanced IA systems and aircraft.
4. Verification, Validation, and Certification: Develop standards and procedures for the verification, validation,
and certification of IA systems and determine their implications for design.
ADDITIONAL HIGH-PRIORITY RESEARCH PROJECTS
1. Nontraditional Methodologies and Technologies: Develop methodologies for accepting technologies not
traditionally used in civil aviation (e.g., open-source software and consumer electronic products) in IA systems.
2. Role of Personnel and Systems: Determine how the roles of key personnel and systems, as well as related human-machine interfaces, should evolve to enable the operation of advanced IA systems.
3. Safety and Efficiency: Determine how IA systems could enhance the safety and efficiency of civil aviation.
4. Stakeholder Trust: Develop processes to engender broad stakeholder trust in IA systems in the civil aviation
system.
5. COMPUTATIONAL INTELLIGENCE
“If we knew what it was we were doing, it would not be called research, would it?”
Albert Einstein
5.1 INTRODUCTION
This contribution to the Roadmap for Intelligent Systems will focus on Computational Intelligence (CI).
“Computational intelligence is the study of the design of intelligent agents,” where an agent is an entity
that reacts and interacts with its environment. An intelligent agent refers to an agent that adapts to its
environment by changing its strategies and actions to meet its shifting goals and objectives. “Just as the
goal of aerodynamics isn’t to synthesize birds, but to understand the phenomenon of flying by building
flying machines, CI’s ultimate goal isn’t necessarily the full-scale simulation of human intelligence.” As
[aerospace] engineers we seek to utilize the science of intelligence as learned through the study of CI, not
for “psychological validity but with the more practical desire to create programs that solve real
problems”.12
The methodologies making up Computational Intelligence mostly fall under the areas of fuzzy logic, rough
sets, neural networks, evolutionary computation, and swarm intelligence.13 Each of these methodology
categories has varying sub-methods such as Mamdani fuzzy systems, recurrent neural networks, and
particle swarm optimization. Additionally, numerous hybrid methodologies are utilized including genetic
fuzzy systems and ant colony optimized neural networks. Again, these methodologies are a “broad and
diverse collection of nature inspired computational methodologies and approaches, and tools and
techniques that are meant to be used to model and solve complex real-world problems in various areas
of science and technology in which the traditional approaches based on strict and well-defined tools and
techniques, exemplified by hard mathematical modeling, optimization, control theory, stochastic
analyses, etc., are either not feasible or not efficient.”13
Computational Intelligence is a non-traditional aerospace science, yet it has been found useful in
numerous aerospace applications, such as remote sensing, scheduling plans for unmanned aerial vehicles,
improving aerodynamic design (e.g. airfoil and vehicle shape), optimizing structures, improving the
control of aerospace vehicles, regulating air traffic, etc.14 Traditional aerospace sciences such as
propulsion, fluid dynamics, thermodynamics, stability and control, structures, and aeroelasticity utilize
first principles or statistical models to understand the system in question, and then use mathematical or
computational tools to construct the desired outcome. Naturally, to build these complex systems, a deep
12 D. Poole, A. Mackworth, and R. Goebel, Computational Intelligence: A Logical Approach. Oxford University Press, 1998.
13 J. Kacprzyk and W. Pedrycz, Eds., Springer Handbook of Computational Intelligence. Springer-Verlag, Berlin, 2015.
14 D. J. Lary, "Artificial Intelligence in Aerospace," in Aerospace Technologies Advancements, T. T. Arif, Ed. InTech, 2010. [Online]. http://www.intechopen.com/books/aerospace-technologies-advancements/artificial-intelligence-in-aerospace.
understanding of the underlying physics is required. Years of research by many people were needed to
develop this theoretical foundation, upon which these systems could be built.
These traditional methods cannot solve all the problems confronting aerospace engineers. First,
there is considerable uncertainty in developing accurate models for simulation purposes. Analytical
approaches are generally limited to small-scale problems, and further research in utilizing fundamental
principles is desirable but may prove elusive. Second, the problem might be intractable given today's
computational tools and hardware. Computational approaches in general can demand a great amount of
high-performance computing resources. As such, only a small subset of the solution space can be
explored efficiently with the available computing power. Computational Intelligence
can provide a way to effectively manage and utilize the data and information created by the traditional
methods to reduce the design and analysis cycle of a complex aerospace problem. For example, genetic
algorithms and response surface (surrogate) modeling have been frequently used in design optimization
of complex aerodynamic configurations modeled by CFD (Computational Fluid Dynamics). Therefore, we
propose that the knowledge gained from the field of computational intelligence can find practical
solutions to some of these problems, and that in the future it will become increasingly useful for aerospace
systems.
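A minimal Python sketch of this surrogate-based optimization pattern follows; the "expensive" objective standing in for a CFD analysis, the polynomial response surface, and the simple genetic-algorithm settings are all assumptions for illustration, not a production workflow.

    import numpy as np

    rng = np.random.default_rng(0)

    def expensive_analysis(x):            # stand-in for a costly CFD-like run
        return (x - 0.3) ** 2 + 0.1 * np.sin(8 * x)

    # Fit a cheap response-surface (surrogate) model to a few samples.
    xs = np.linspace(-1, 1, 12)
    coef = np.polyfit(xs, expensive_analysis(xs), 4)
    surrogate = lambda x: np.polyval(coef, x)

    # Search the surrogate with a simple genetic algorithm.
    pop = rng.uniform(-1, 1, 40)
    for _ in range(60):
        fit = surrogate(pop)
        parents = pop[np.argsort(fit)[:20]]                           # best half
        children = np.clip(parents + rng.normal(0, 0.05, 20), -1, 1)  # mutate
        pop = np.concatenate([parents, children])
    best = pop[np.argmin(surrogate(pop))]
    print(best, expensive_analysis(best))  # near the true optimum region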
5.2 COMPUTATIONAL INTELLIGENCE CAPABILITIES AND ROLES
Computational Intelligence methods, including evolutionary computing, fuzzy logic, bio-inspired
computing, artificial neural networks, swarm intelligence as well as various combinations of these
techniques such as genetic fuzzy systems, have demonstrated potential for providing effective
solutions to large-scale, meaningful, and increasingly complex aerospace problems involving learning,
adaptation, decision-making and optimization. These methods provide the potential to solve certain
aerospace problems that we cannot solve today using traditional approaches, e.g., aircraft with uncertain
models (e.g., hypersonics), missions where objectives are given in linguistic/fuzzy terms, planning robustly
for high-dimensional complex/nonlinear systems with uncertainty.
Furthermore, as the complexity and uncertainty in future aerospace applications increase, the need to
make effective, real-time (or near real-time) decisions while exploring very large solution spaces
is essential. The salient figures of merit in this class of applications are the quality of the decision
made, typically based on the minimization of a cost function, and the computational cost, while adhering to a
very large number of system-level and subsystem-level constraints, including the safety and security of
operations. It is envisioned that the tools discovered, developed, and improved through research in
computational intelligence will improve modern aerospace capabilities.
To illustrate the benefits of computational intelligence in solving problems such as those mentioned, we
reference an example involving a new CI tool, called genetic fuzzy trees, which has shown remarkable
promise. Applied to an autonomous Unmanned Combat Aerial Vehicle (UCAV) mission scenario,
cascading genetic fuzzy trees have, despite an incredibly large solution space, demonstrated remarkable
effectiveness in training intelligent controllers for a UCAV squadron. Simulations confirmed that the UCAVs,
equipped with numerous defensive systems, could navigate a mission space, counter enemy threats,
cope with losses in communications, and destroy mission-critical targets when intelligently controlled.15 Even
15 N. Ernest, "Genetic Fuzzy Trees for Intelligent Control of Unmanned Combat Aerial Vehicles," PhD Dissertation, Department of Aerospace and Engineering Mechanics, University of Cincinnati, Cincinnati, 2015.
while faced with a solution space so large that many alternative methods would be computationally
intractable, this example method utilizing a new type of genetic fuzzy control has shown robustness to
drastically changing states, uncertainty, and limited information while maintaining extreme levels of
computational efficiency.16 A focus needs to be placed on these types of highly scalable Computational Intelligence
methods, given the need to control problems with hundreds, and potentially
thousands, of inputs and outputs.
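For readers unfamiliar with the mechanics, the following minimal Python sketch evaluates the kind of small fuzzy rule base that a genetic fuzzy system would tune; the membership functions, rules, and consequent values are invented for illustration and are not taken from the cited work.

    # Triangular memberships with weighted-average (Sugeno-style)
    # defuzzification; in a genetic fuzzy system, a GA would tune the
    # breakpoints and consequents below.
    def tri(x, a, b, c):
        """Triangular membership function peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzy_turn_cmd(threat_range_km):
        near = tri(threat_range_km, -1.0, 0.0, 5.0)
        mid = tri(threat_range_km, 2.0, 6.0, 10.0)
        far = tri(threat_range_km, 8.0, 15.0, 30.0)
        # Rules: hard turn if near, mild turn if mid, none if far (degrees)
        num = near * 45.0 + mid * 15.0 + far * 0.0
        den = near + mid + far
        return num / den if den > 0 else 0.0

    for r in (1.0, 6.0, 20.0):
        print(r, fuzzy_turn_cmd(r))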
5.3 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
TECHNICAL CHALLENGES
Traditionalists see the work being done by aerospace computational intelligence researchers and reject
this approach because the foundation does not appear sound. Their argument is valid: how can we solve
something without really understanding the problem? To address this challenge, a tight integration of
computational intelligence theory, which is based in computer science, with traditional aerospace
sciences is the only way forward. The application of CI to aerospace problems must only happen when
there is a true understanding of the problem and when CI offers tools that have the potential to overcome
the limitations of traditional solutions. Furthermore, certain CI tools are more amenable to incorporating
subject-matter expertise; these tools are likely to prove more useful because they incorporate
experience and knowledge. From this, we can see that one way to improve the practical use
of computational intelligence tools (and other intelligent systems) in aerospace is by including such topics
in the education of aerospace engineers, along with a solid foundation in the basic science. This gives
them new ways to solve problems while still understanding the fundamental science.
In addition to bringing expertise and CI together, there must be a focus on producing implementable
systems. Certain applications require that a successful system exhibit deep learning, be computationally
efficient, be resilient to changes and unknown environments, and ultimately be highly effective. Many
problems such as these are “solved” by assuming away the complexity and converting the problem to
simpler scenarios where more traditional mathematical or game theory methods can be applied. Often
the results of these studies on simplified cases with many assumptions produce an attractive technical
report and nothing else. CI methods that can produce implementable systems must be the focus.
TECHNICAL BARRIERS
Closely related to the previous argument is the barrier that many CI tools come in the form of a “black
box.” The output and learning of such tools offer little intuition to the user. In this regard, it is necessary
to fully understand the problem, before applying “black box” tools. Doing so will help alleviate some of
this concern. However, there is a need to develop CI tools and understanding that allow us to gain an
intuition into the result. Similarly, applying the appropriate tool to the specific problem is important.
Knowledge of the aerospace problem is required, as well as an understanding of the CI tools. This gives
the researcher the best ability to practically solve the problem. More importantly, in order to develop the
necessary level of trust that the end-users of intelligent aerospace systems have in the results of the CI
tools, the CI tools will have to demonstrate a level of transparency that sheds light into that “black box”
16 N. Ernest, K. Cohen, C. Schumacher, and D. Casbeer, "Learning of Intelligent Controllers for Autonomous Unmanned Combat Aerial Vehicles By Genetic Cascading Fuzzy Methods," SAE Aerospace Systems Technology Conference, Cincinnati, 2014.
and allows the users to understand why a certain decision has been made by the CI tools. Some CI tools,
like fuzzy logic, are more transparent, and some, e.g., neural networks, will require additional work.
A major concern is that many CI tools do not offer analytic performance guarantees. Evolutionary based
methods cannot indicate how close they are to optimal. Learning methods do not provide bounds to
indicate how far the current solution is from the truth. Currently, these guarantees are given through
extensive testing and evolution of a prototype system. This method is not out of the norm. Typical
airplanes today must pass certain testing thresholds to validate their performance. However, we can and
should do better. Thought must be devoted to the development of methodology to validate and verify
performance bounds in a more rigorous manner.
IMPACT TO AEROSPACE DOMAINS AND INTELLIGENT SYSTEMS VISION
The highlight of CI tools is their ability to bring high-performance, efficient control to difficult problems
with a far less intimate study of the physics behind the system, and thus with fewer, if any, unrealistic
mathematical assumptions and constraints. This may lead to the counter-intuitive opportunity for CI
methods to help first-principles approaches more quickly increase their accuracy.
The success of CI tools in limited applications opens up the imagination and enables us to boldly envision
a wide variety of future aerospace applications involving numerous interactions between teams of
humans and increasingly autonomous systems. An additional advantage of this class of hybrid CI
approaches is that while the exploration of the solution space utilizes stochastic parameters during the
learning process, once the learning system converges to a solution, the subsequent decision-making is
deterministic, which lends itself far better to verification and validation.
5.4 RESEARCH NEEDS TO OVERCOME TECHNOLOGY BARRIERS
RESEARCH GAPS
The ability and potential of CI to efficiently explore large solution spaces and provide real-time
decision-making for a scenario of collaborating UCAVs has been demonstrated. The generality of CI techniques for
a wider range of applications needs to be explored, and comparisons made with alternative approaches for
large-scale complex problems. We feel that potential users need to see more evidence of the applicability
and suitability of CI and the role it may play in the systems they have in mind. Furthermore, research is needed
in developing verification and validation techniques that will set the stage for implementing CI-based
solutions and incorporating them into the full-scale development programs of future aerospace systems.
When looking at the broader problem of system design, verification, and validation, there is the potential
for CI methods to ensure guarantees for system specifications in early design stages of a large complex
project. How can developmental risks associated with performance, robustness, adaptability and
scalability be assessed early on? What is the nature of the tasks required during the conceptual design
phase as we compare alternative approaches?
OPERATIONAL GAPS
Traditional aerospace missions or scenarios tend to limit themselves when it comes to concerns about
autonomous decision-making in uncertain large-scale problems. Operational doctrine development and
technology advancement need to go hand-in-hand as they are far more coupled in an increasingly complex
aerospace environment. A simulation-based spiral effort may be required to enhance the “daring” and to
develop the confidence in the development of operational doctrines. This calls for interaction between
the user and engineering communities that traditionally do not exchange much in terms of early research
and exploration of ideas and exploitation of potentially powerful computational intelligence tools.
RESEARCH NEEDS AND TECHNICAL APPROACHES
The following describes the desired features we seek using CI approaches:
• Develop missions or scenarios that involve large-scale complex aerospace applications with inherent
uncertainty and incomplete information.
• Develop a simulation-based environment to explore the missions or scenarios and establish figures of
merit for specifying tasks to be performed by CI agents.
• Explore the potential of different CI approaches and hybrids in the above-mentioned simulated
environment.
• Quantitatively evaluate the effectiveness of the developed CI approaches and hybrids, examining
strengths, weaknesses, and the application areas to which they best lend themselves.
• Develop V&V techniques that will establish trust in CI approaches across the aerospace community.
• Develop a CI repository of missions or scenarios, approaches, results, and recommendations to be
shared by the community.
• Implement the ability to integrate CI with hardware/software architectures to enhance the
intelligence of future aerospace applications.
• Educate (aerospace) engineers in broader fields to give them an understanding of new tools to solve
fundamental science problems.
A key to success will be the ability to imagine technologically achievable (from a hardware perspective)
future missions or scenarios and quantify the impact of verifiable CI approaches.
PRIORITIZATION
As with several other areas in the field of Intelligent Systems, our first priority and the main impediment
is not technical, but rather policy and research priority as viewed by funding agencies. This often results
in insufficient investment levels to mature many potentially promising CI applications. CI tools have shown
promise in limited settings and this needs to be further explored and then exploited to the fullest in
making future aerospace applications that much more intelligent.
Secondly, we need more involvement from DoD and non-DoD funding agencies to develop
appropriate challenge problems that engage our community and allow for a more open discussion and
comparison of CI approaches and their ability to be implemented in meaningful aerospace applications.
6. TRUST
6.1 INTRODUCTION
This contribution to the Roadmap for Intelligent Systems will focus on the need to develop trust in
intelligent systems to perform aviation safety-critical functions. Trust involves some level of human assent
to the ability of the intelligent system to make correct safety-critical decisions. Intelligent systems are
characterized by non-deterministic processes, adaptive learning, and highly complex software-driven
processes. Traditional methods for establishing trust in aviation systems involve verification (i.e., is the
system built right?), validation (i.e., is it the right system?), and certification (i.e., does a trusted third party
believe it is right?). Traditional validation and verification (V&V) methods in this discipline generally rely
on repeatable experimental results and exhaustive fast-time simulations coupled with flight
tests. An intelligent system may produce non-repeatable results in tests or may be so complex that the
traditional V&V process is impractical. New V&V and certification approaches are needed to establish
trust in these systems.
6.2 CAPABILITIES AND ROLES
DESCRIPTION OF TRUST IN INTELLIGENT SYSTEMS
There are many different perspectives on trust depending upon a person’s role in interacting with an
intelligent system. Their lives, livelihoods, or reputation may be at stake. Independent certification is one
way to increase trust in a system. The introduction of a trusted third party with some investment in
the relationship between the two parties may provide oversight, regulation, or enforcement of a contract
(social, legal, or both) between the intelligent system and the end user. Certification typically depends on
defining a standard for performance, building evidence to show compliance to that standard, and
identification of the means of V&V (e.g., analysis, simulation, flight test, etc.). Highly complex intelligent
systems may perform differently under different circumstances. For example, an intelligent system that
“learns” may produce different outputs given the exact same inputs depending on the level of training of
the system. It is presumed that traditional methods such as exhaustive testing, stressing case analysis,
and Monte Carlo simulation will not be sufficient to establish trust in intelligent systems. Therefore,
methods are needed to establish trust in these systems, either through enhancement of existing
certification paradigms or development of new paradigms.
METHODS TO ESTABLISH TRUST IN INTELLIGENT SYSTEMS
There are a number of existing methods to establish trust in intelligent systems, some of which are:
• Formal methods seek to mathematically prove that an intelligent system will not exceed the bounds
of a specific solution set. Formal methods analyses examine the algorithms and formally prove that
an intelligent system cannot produce an unsafe output.
• Runtime assurance methods seek to monitor the behavior of an intelligent system in real time.
Runtime assurance programs, sometimes called "wrappers", can detect when an intelligent system
is going to produce an unsafe result, and either revert to an alternate pre-programmed safe behavior
or yield control to a human (a minimal sketch of this pattern follows this list).
• Bayesian analysis methods examine the outputs from an intelligent system and determine a level of
confidence that the system will perform safely. This is analogous to a qualified instructor pilot making
a determination that a student is ready to fly solo. The instructor cannot and does not test every
possible circumstance the student may encounter, but infers from a variety of parameters that the
student is safe. Bayesian methods extend this approach to technical systems.
• Simulation will continue to play a major role in the V&V of intelligent systems with adaptive learning.
Many aspects of adaptive learning systems, in particular convergence and stability, can only be
analyzed with simulation runs that provide enough detail and fidelity to model significant nonlinear
dynamics. Simulation provides a fairly rapid way to accomplish the following tasks:
o Evaluation and comparison of different learning algorithms
o Determination of how much learning is actually accomplished at each step
o Evaluation of the effect of process and measurement noise on learning convergence rate
o Determination of learning stability boundaries
o Testing algorithm execution speed on actual flight computer hardware
o Conducting piloted evaluation of the learning system in a flight simulator
o Simulating ad-hoc techniques of improving the learning process, such as adding persistent
excitation to improve identification and convergence, or stopping the learning process after the error
falls below a specified tolerance, or after a specified number of iterations
The current approach is to verify an adaptive learning system over an exhaustive state space using
the Monte Carlo simulation method. The state space must be carefully designed to include all possible
effects that an adaptive learning system can encounter in operation. A problem encountered in
performing simulation is proving adequate test coverage. Coverage concerns the program execution
of the software that implements an adaptive learning system, ensuring that its functionality is
properly exercised. Just because an adaptive learning system performs satisfactorily in simulation
does not necessarily mean that it would perform the same in real-world situations. Thus, simulation
could aid the V&V process but it is not sufficient by itself as a V&V tool.
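The runtime-assurance "wrapper" pattern referenced in the list above can be sketched in a few lines of Python; the safety envelope, stand-in controllers, and threshold are invented for illustration.

    # A monitor checks each command from an untrusted intelligent controller
    # against a safety envelope and reverts to a pre-programmed fallback.
    SAFE_BANK_DEG = 30.0                     # illustrative envelope limit

    def intelligent_controller(state):
        return state["error"] * 2.5          # stand-in for a learned policy

    def safe_fallback(state):
        return 0.0                           # e.g., wings-level recovery

    def wrapped_command(state):
        cmd = intelligent_controller(state)
        if abs(cmd) > SAFE_BANK_DEG:         # predicted envelope violation
            return safe_fallback(state), "REVERTED"
        return cmd, "NOMINAL"

    print(wrapped_command({"error": 5.0}))   # (12.5, 'NOMINAL')
    print(wrapped_command({"error": 20.0}))  # (0.0, 'REVERTED')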
6.3 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
TECHNICAL CHALLENGES
V&V of intelligent systems is highly challenging. To date, this effort is still evolving. Fundamental technical
challenges in establishing trust are plentiful. Below is a limited set of technical challenges that need to be
addressed in order to advance intelligent systems towards trustworthy systems.
• Runtime assurance methodologies that are robust enough to identify unsafe intent or to restore an
intelligent system exhibiting unsafe behavior to a safe state
• Establishing methods and metrics to infer when an intelligent system can be relied on for safety-critical
functions
• Adapting existing software assurance methods or developing new ones for non-deterministic systems
• Expanding formal methods to highly complex systems
• Understanding the human factors implications of part-time monitoring of a trusted intelligent system
• Development of human factors standards to address part-time monitoring of safety-critical functions
(e.g., how to rapidly provide situation awareness to a disengaged pilot as the intelligent system
returns system control in an unsafe state)
• Lack of integration with other disciplines, such as adaptive systems, to produce feasible and
implementable trusted systems.
TECHNICAL BARRIERS
Creating trustworthy intelligent systems represents a major technology barrier to overcome. Intelligent
systems with adaptive learning algorithms will never become part of the future unless it can be proven
that such systems are highly safe and reliable. Rigorous methods for adaptive software verification and
validation must therefore be developed to ensure that software failures will not occur, to verify that the
intelligent system functions as required, to eliminate unintended functionality, and to demonstrate that
FAA (Federal Aviation Administration) certification requirements can be satisfied. To overcome the
technology barrier for trustworthy intelligent systems, research in the following areas is needed:
• Advanced formal methods techniques
• Robust wrapper technology
• Human factors alerting methodologies
• Advanced adaptive systems
• Advanced prognostics and health management systems
POLICY AND REGULATORY BARRIERS
US leadership in autonomous systems development does not necessarily translate to leadership in the
enabling technology associated with establishing trustworthy systems. While both are extremely
important, most of the attention in the autonomy community is focused on systems development. There
is far less attention being paid, worldwide and nationally, to addressing the methods, metrics, and enablers
associated with determining the trustworthiness of autonomous systems. Historically, regulatory agencies
have been slow to approve new aviation technologies. With the advance of unmanned aircraft systems
throughout the world, US leadership in this area may hinge on its ability to rapidly establish trust and
safety of these systems. It is critical that researchers engage with the certification authorities early and
often to overcome these barriers.
IMPACT TO AEROSPACE DOMAINS AND INTELLIGENT SYSTEMS VISION
For widespread use of intelligent systems in aviation in safety-critical roles, development of certification,
V&V, and other means of establishing trustworthiness of these systems is paramount. Examples exist in
both the military and civilian aviation sectors where safety-critical intelligent features were “turned off”
prior to deployment due to regulator concern over the trustworthiness of the system.
6.4 RESEARCH NEEDS TO OVERCOME TECHNOLOGY BARRIERS
V&V research is viewed as a key enabler for intelligent systems to become operational in future
aviation. V&V processes are designed to ensure that intelligent systems function as intended and that the
consequences of all possible outcomes of the adaptive learning process are verified to be acceptable.
Software certification is a major issue that V&V research is currently addressing. Understanding gaps in
the software certification process for adaptive learning systems will provide the basis for formulating a
comprehensive strategy to address research needs to close the certification gaps.
RESEARCH GAPS
Certification of adaptive systems is a major technology barrier that prevents the adoption of intelligent
systems in safety-critical aviation. To date, no adaptive learning systems have been certified for use in the
commercial airspace. The certification process as defined by the FAA requires that all flight-critical software
meet RTCA DO-178B guidelines or other methods accepted by the FAA. However, RTCA DO-178B
guidelines in general do not address flight-critical software for adaptive learning systems, although this
may be changing as the use of adaptive learning systems in prototype or non-safety-critical systems is on
the increase. Therefore, there exist certification gaps for adaptive learning systems. Research to address
these certification gaps needs to be conducted in order to realize future intelligent systems certified for
operation in the national airspace.
• Learning system requirements: A critical gap which needs to be closed to facilitate certification is to
develop procedures and methodologies to completely and correctly specify the design requirements
of adaptive learning systems. These software requirements define as precisely as possible what the
software is supposed to do. These requirements could include performance and stability metrics,
precision, accuracy, and timing constraints. For adaptive learning systems, a particular challenge is
how to define requirements for performance and stability using quantifiable and well-accepted
metrics. This will require fundamental research in adaptive systems (see the adaptive systems section
of the roadmap).
• Simulation standards: Intelligent systems with adaptive learning are usually tested in simulation, but
rarely are the requirements themselves integrated into the testing. Since DO-178B presently allows
certification credit to be obtained for both simulation and flight testing, it is highly likely that
simulation will become an important part of the certification process for adaptive learning systems. A
difficulty for certification, however, is the lack of standards for simulation methodologies for
non-deterministic systems with adaptive learning.
• Stability and convergence: Stability is a fundamental requirement of any adaptive learning system.
For systems with high assurance requirements, such as human-rated or mission-critical flight vehicles,
stability of adaptive learning systems is of paramount importance. Without guaranteed stability, such
adaptive learning algorithms cannot be certified for operation in high-assurance systems. Convergence
determines the accuracy of an adaptive learning system. It is conceivable that even though a learning
algorithm is stable, the adaptive parameters may not converge to correct values. Thus, accurate
convergence is also important since it is directly related to the performance of an adaptive learning
system (a minimal sketch of a bounded adaptive law follows this list).
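As a minimal illustration of the kind of bounded adaptive law such stability requirements would govern, the following Python sketch simulates a scalar model-reference adaptive controller with sigma modification, one standard mechanism for keeping adaptive parameters bounded; the plant, gains, and noise level are invented for illustration.

    import numpy as np

    dt, a, b = 0.01, 1.0, 1.0      # unstable plant: xdot = a*x + b*u
    am = -2.0                      # stable reference model: xmdot = am*xm + r
    gamma, sigma = 2.0, 0.05       # adaptation gain and leakage (sigma-mod)
    x = xm = 0.0
    kx = kr = 0.0                  # adaptive feedback/feedforward gains
    rng = np.random.default_rng(0)
    for i in range(5000):
        r = np.sin(0.01 * i)       # reference command
        u = kx * x + kr * r
        e = x - xm                 # tracking error
        # sigma modification keeps kx, kr bounded under noise/disturbance
        kx += dt * (-gamma * e * x - sigma * kx)
        kr += dt * (-gamma * e * r - sigma * kr)
        x += dt * (a * x + b * u + 0.01 * rng.standard_normal())
        xm += dt * (am * xm + r)
    print(kx, kr)                  # should settle near the ideal gains (-3, 1)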
OPERATIONAL GAPS
The following are identified as operational gaps for the implementation of trustworthy intelligent
systems:
• Processes/methods for querying an intelligent system to understand the basis for an action.
• Cost-effective approaches to certification that allow flexibility and control costs.
RESEARCH NEEDS AND TECHNICAL APPROACHES
Some of the future research needs in software certification for adaptive learning systems to address the
above research gaps and operational gaps could include the following:
• Model checking for hybrid adaptive systems: The formal method of model checking has become an
important tool for V&V of adaptive learning systems. Model checkers have found considerable
application for outer-loop mission-planning adaptive system verification. Inner-loop adaptive
systems are usually controlled by an autonomous agent mission planner and scheduler using a finite
state machine. The continuous variables in inner-loop adaptive systems could assume an infinite
number of values, thereby presenting a state explosion problem for the model checker. A hybrid
approach could be developed by using an approximation function to convert the continuous variables
into finite state variables that take on relatively few values. This abstraction could allow for an
efficient exploration of the continuous model-checking space.
• Tools for on-line software assurance: Although simulation test cases may discover problems, testing
can never reveal the absence of all problems, no matter how many high-fidelity simulations are
performed. It is for this reason that undiscovered failure modes may lurk in the adaptive learning
system or be found at a test condition previously not simulated. To safeguard against these failures,
tools for verifying in-flight software assurance should be developed. Such tools would combine
mathematical analysis with dynamic monitoring to compute the probability density function of
adaptive system outputs during the learning process. These tools could produce a real-time estimate
of the variance of the adaptive system outputs that indicates whether good performance of the adaptive
system software can be expected or whether learning is not working as intended, so that the learning process
could be stopped before the system reaches an unrecoverable unsafe state. The tools could be used
for pre-deployment verification as well as a software harness to monitor the quality of the adaptive
system during operation. The outputs of the tools might be used as a signal to stop and start the
adaptive learning process or to provide a guarantee of the maximum error for certification
purposes (a minimal monitor of this kind is sketched after this list).
• Stability and convergence: For complex adaptive learning systems with non-deterministic processes such
as neural networks, stability and convergence are generally difficult to guarantee. Development of
methods that can reduce or entirely eliminate non-determinism and provide quantifiable metrics for
performance and stability can greatly help the certification process, since these metrics could be used
to produce certificates for certification. Any certifiable adaptive learning system should demonstrate
evidence of robustness to a wide variety of real-world situations, which include the following:
o Endogenous and exogenous disturbance inputs such as communication glitches or turbulence
o Time latency due to computational processes, sensor measurements, or communication delay
o Unmodeled behaviors to the extent possible by capturing as accurately as possible the known
system behaviors by modeling or measurements
o Interaction with the human pilot or operator, who could be viewed as another
adaptive learning system operating in parallel and whose conflicting responses could lead
to incorrect learning
o Capability to self-diagnose and prevent incorrect adaptive learning processes by monitoring
output signals from some reference signals and terminating the adaptive learning processes as
necessary
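The on-line software-assurance monitor sketched in the list above could, in its simplest form, track the running variance of the adaptive system's outputs and halt learning when a bound is exceeded, as in the following illustrative Python sketch (the threshold and signal are invented).

    import random

    # Welford's online variance estimate over adaptive-system outputs; a
    # variance above the limit signals that learning should be stopped.
    class VarianceMonitor:
        def __init__(self, limit):
            self.n, self.mean, self.m2, self.limit = 0, 0.0, 0.0, limit

        def observe(self, y):
            self.n += 1
            d = y - self.mean
            self.mean += d / self.n
            self.m2 += d * (y - self.mean)
            var = self.m2 / (self.n - 1) if self.n > 1 else 0.0
            return var <= self.limit       # False -> halt the learning process

    mon = VarianceMonitor(limit=0.5)
    random.seed(1)
    for t in range(300):
        scale = 0.1 if t < 200 else 2.0    # learning "goes bad" at t = 200
        if not mon.observe(random.gauss(0.0, scale)):
            print("halting adaptation at step", t)
            break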
PRIORITIZATION
The highest-priority action in this discipline is for researchers to understand the requirements and
perspectives of the regulator to achieve third-party trust in intelligent systems for aerospace applications.
Researchers should factor in the needs of the certification authority as they mature intelligent systems
technologies.
7. UNMANNED AIRCRAFT SYSTEMS INTEGRATION IN THE
NATIONAL AIRSPACE AT LOW ALTITUDES
7.1 INTRODUCTION
The use of unmanned aircraft systems (UAS) is projected to rise dramatically in the next several decades.
Ongoing research has focused on the safe integration of UAS into the National Airspace System (NAS); however, a
number of technological, regulatory, operational, and social challenges have delayed the widespread use
of UAS in the NAS. This contribution to the Roadmap for Intelligent Systems will focus on the areas where
intelligent systems can address several of the technological challenges hindering safe integration of UAS
in the NAS.
The types of operations that will be addressed in this section are specifically UAS that are conducting
missions within visual line of sight (VLOS) and beyond visual line of sight (BVLOS) at low altitudes in
uncontrolled airspace (e.g., Class G). Operations in controlled airspace introduce a different set of
technological, regulatory, operational, and social challenges and are beyond the scope of this document.
This section will focus on the areas where vehicle automation, airspace management automation, and
human decision-support tools face technical challenges to which intelligent systems can contribute
solutions. Low-altitude UAS operations are relatively non-existent in the US NAS today; thus,
many technical challenges arise as these vehicles perform missions with increased levels
of automation in areas with more frequent interaction with humans, man-made structures,
and terrain than is common in today's operations. Intelligent systems can contribute to the following areas
in low-altitude UAS operations:
• Human-system collaboration, situation awareness, and decision support tools
• UAS vehicle and ground support automation
• Airspace management automation, security, safety, efficiency, and equitability
• Mission planning and contingency management
The wide range of vehicle equipage, performance, mission, and challenges related to geography implies
that additional intelligent systems applications not included in the list above may be realized in the future
as UAS operations become fully integrated into the NAS.
7.2 INTELLIGENT SYSTEMS CAPABILITIES AND ROLES
DESCRIPTION OF INTELLIGENT SYSTEMS CAPABILITIES
Limited commercial UAS operations are currently allowed in the airspace. This limitation is largely driven
by policy. As a result, there are relatively few examples of intelligent systems technologies being used in
commercial UAS operations. Many potential applications for which small UAS operating
at low altitudes would be relevant are currently served by terrestrial systems or
manned aircraft operations. In these current technical solutions, there is a strong demand to drive down
operational costs, to reduce risk of damaging equipment or loss of life, and to lower the potential for
human errors.
Recently, there has been increasing acceptance of intelligent systems technologies in other industries,
such as automotive self-driving technologies. The rise in acceptance and success of these
technologies may increase the likelihood of social acceptance for small UAS. A path forward in integrating
intelligent systems into this domain is to demonstrate a variety of applications where intelligent systems
can increase the reliability, safety, and efficiency of systems, procedures, and operations. The
goal is to increase the use of intelligent automation to allow numerous operations with vehicles of
limited size, weight, and power operating in complex environments.
A short list of desired intelligent systems capabilities for each identified area of intelligent system
contribution includes the following:
• Human system
o Reduce the probability of human commanding error
o Improve the situation awareness of the operator
o Increase automation on the vehicle such that the operator tasks are manage-by-exception
o Enable decision support tools for emergency situations and contingency management
o Enable a single operator to command and control multiple vehicles
• UAS vehicle and ground support automation
o Provide onboard and/or ground-based separation assurance from other airborne traffic, terrain
and natural obstacles, man-made obstacles, and people on the ground
o Fault-tolerant systems to reduce the risk in emergency situations (lost link, hardware failure, etc.)
o Path planning in complex environments (GPS-denied environments, variable weather conditions
and obstructions, man-made structure and terrain avoidance, etc.)
o Vehicle health monitoring and diagnostics
• Airspace management
o Spectrum allocation and management
o Airspace management system health monitoring
o Flight monitoring and conformance monitoring
o Flight planning, scheduling, demand management, and separation assurance
o Contingency management
o Providing information to various communities that are connected to the airspace (other ATM
systems, UAS operators, the general aviation community, the public, law enforcement, etc.)
• Mission planning and contingency management
o Risk-based operational planning and contingency management
o Using vehicle performance modeling to determine operation feasibility and contingencies
7.3 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
TECHNICAL CHALLENGES
Due to the introduction of new vehicles into under-utilized low-altitude airspace, the state of practice for
new technologies in this area tends to err on the side of risk aversion. Most emerging
technologies to support small UAS operations are at a low technology readiness level (TRL) and have not
been tested in a variety of environments over a myriad of conditions. Several vehicle technologies
(automated take-off/landing, detect-and-avoid systems, lost link systems, etc.) have been lab-tested and
field-tested in a limited capacity, but few with intelligent system capabilities have made it to small UAS
operations.
Intelligent systems are considered a path towards increasingly automated and safe UAS operations;
however, the technology carries a stigma of being unreliable and of potentially generating hazardous situations under
the right conditions. To overcome this, the intelligent systems community needs to demonstrate the
technical ability to perform increasingly automated functions and to show tangible improvements in
safety and reliability of the vehicle systems and the airspace.
Another technical challenge may come from the human-centric mentality of most engineering systems
today. The incorporation of human-system teaming, collaboration and even human-assisted autonomous
control of the vehicles and airspace is the direction that small UAS business cases are moving towards. To
enable operations with operators that have limited training, with vehicles that have a wide range of
performance and equipage, and in potentially substantial traffic densities, the human-centric model is not
scalable and thus alternative architectures for managing vehicles and airspace should be explored. These
architectures will require a more automation-centric framework and the role of the human will change as
more automation is introduced into the system.
There are a number of technical challenges associated with successfully achieving the vision articulated
above. These include:
• Demonstration of certifiable detect-and-avoid solutions to provide separation assurance (a minimal
geometric conflict check is sketched after this list)
• Development of a UAS Traffic Management System
• Demonstration of a reliable command and control link for beyond-line-of-sight operations
• Convergence on a universally accessible intelligent automation framework
• Development of a risk-based safety framework for evaluating missions and managing contingencies
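As a minimal illustration of the geometry inside a detect-and-avoid function, the following Python sketch computes the closest point of approach (CPA) for two straight-line tracks; the separation threshold and trajectories are invented for illustration.

    import numpy as np

    def cpa(p_own, v_own, p_int, v_int):
        """Time and distance of closest approach for straight-line tracks."""
        dp, dv = p_int - p_own, v_int - v_own
        t = -(dp @ dv) / (dv @ dv) if dv @ dv > 0 else 0.0
        t = max(t, 0.0)                      # only look forward in time
        return t, float(np.linalg.norm(dp + t * dv))

    p_own = np.array([0.0, 0.0]);     v_own = np.array([30.0, 0.0])    # m, m/s
    p_int = np.array([600.0, 300.0]); v_int = np.array([-30.0, -15.0])
    t_cpa, d_cpa = cpa(p_own, v_own, p_int, v_int)
    print(t_cpa, d_cpa, "conflict" if d_cpa < 150.0 else "clear")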
Laying the framework for evaluating safety, and developing appropriate metrics for increasing levels of
automation and changing human roles, are essential to articulating and overcoming
the reliability stigma that limits the use of intelligent systems technologies. As more sophisticated
algorithms and technologies are developed, having performance-based standards to determine safety and
interoperability with current airspace operations would yield a faster path towards adoption of new
technologies.
TECHNICAL BARRIERS
Many of the technologies needed to achieve the desired intelligent automation vision are in development
or do not exist today. The largest barrier for intelligent systems in this domain is to demonstrate the
reliability and safety of various technologies on a vehicle/ground platform, as well as demonstrating that
a technology will not degrade the safety of the airspace when it is introduced and
interoperates with current airspace operations.
POLICY AND REGULATORY BARRIERS
While Class G airspace is currently uncontrolled, meaning that air traffic controllers do not provide
separation services, every aspect of operations in Class G airspace remains governed by the Federal
Aviation Regulations (FAR). Today, the FAA grants waivers to specific FARs on a case-by-case basis as a
result of in-depth safety reviews. A key challenge for enabling a high volume of small UAS operations in
Class G airspace, especially small low-cost vehicles across a wide range of missions, is to determine the
minimal set of regulatory requirements coupled with advanced traffic management tools and procedures
that ensures the continued safety of the NAS. Leveraging the existing FARs when appropriate is
advantageous because it allows new operations to be treated similarly to existing operations with a
proven safety record. At the same time, many of the existing FARs will not cost-effectively support and promote
the wide variety of missions being considered. Ultimately, the regulatory requirements governing small
UAS operations will be a combination of both existing and new FARs.
IMPACT TO AEROSPACE DOMAINS AND INTELLIGENT SYSTEMS VISION
This contribution to the Roadmap for Intelligent Systems may provide a higher-level perspective to the
vision for intelligent systems for aerospace applications that is not addressed in other aspects of the
roadmap. For instance, technologies that are addressed using intelligent systems for low-altitude small
UAS operations may have relevance for advances in manned aviation operating in the air traffic
management system.
7.4 RESEARCH NEEDS TO OVERCOME TECHNOLOGY BARRIERS
RESEARCH GAPS
Significant research gaps lie in four areas: human-system interaction, UAS vehicle and ground support
automation, airspace management, and mission planning and contingency management. With limitations
on UAV payloads and costs, the roles of autonomy and human operators are not clear. What would be
the best sensor to use for sense-and-avoid? When should control switch between the human and the onboard autopilot?
Furthermore, UAS rely significantly on the environment for safety and efficiency. How can evolving
weather information be disseminated quickly? How does the UAV traffic system interact with infrastructure
systems, such as energy, communication, and possibly also ground transportation systems for applications
such as last-mile delivery? In general, knowledge is lacking on a modeling framework that allows us to
evaluate and design UAS integration solutions. Would "highways in the sky" be a potential solution? Could
advances in networked self-driving vehicles be carried over to UAS traffic management? All these research gaps need
to be filled quickly to meet the needs of the fast-growing UAV industry.
OPERATIONAL GAPS
Despite fast-growing commercial UAV applications, social acceptance of UAVs is still in question. As
commercial UAVs mostly use low-altitude airspace, they are prone to significant interference with
human life. Privacy and security concerns are also barriers to social acceptance. Studies need to be
conducted on a number of operational issues to foster the smooth integration of UAS into the NAS. Beyond
risk perception, knowledge gaps need to be filled on the ownership of low-altitude airspace, flight
procedures that meet the flexible on-demand use of UAVs, and environmental impacts such as noise
levels, disturbance to birds and other wildlife, and pollution caused by UAV energy use.
RESEARCH NEEDS AND TECHNICAL APPROACHES
Some research directions and approaches in the four identified areas of intelligent system contribution
are described below:
• Human-System Interaction
o Identifying failure modes which result from non-collocation of pilot and aircraft and approaches
to circumvent them
o Human-in-the-loop interactions, including UAV pilot, aircraft, and air traffic management
o Human-driven adaptive automation
o Interactive automation with human reasoning
• UAS vehicle and ground support automation
o Weather prediction
o Geo-fencing (a minimal boundary-check sketch follows this list)
o Effective information sharing
o Automated separation managed by exception, sense and avoid, and collision avoidance
o Physical system level readiness and protection
o Security and authentication
• Airspace Management
o UAS integration models and simulation tools that consider the heterogeneous missions of UAS
o Centralized versus decentralized responsibility; layers of responsibility
o Free flight versus more predictable highways in the sky
o Intelligent networked systems and decentralized optimization
o Spectrum allocation and management
o Ground infrastructure communication for beyond-visual-range operations and charging/
maintenance systems
o Framework that considers the willingness to share data among competitors in the UAS industry
• Mission planning and contingency management
o Agreed contingency planning
o Adapting contingency management solutions from traditional air traffic management to UAV
traffic management
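As a concrete illustration of the geo-fencing item above, the sketch below implements a standard ray-casting point-in-polygon test in local planar coordinates. It is a minimal sketch: a fielded geofence would also handle altitude limits, position uncertainty, and buffer distances, and would trigger a defined contingency when a breach is predicted rather than merely detected.

```python
def inside_geofence(point, polygon):
    """Ray-casting point-in-polygon test for a 2-D geofence boundary.

    `point` is (x, y); `polygon` is a list of (x, y) vertices in order.
    Counts how many polygon edges a horizontal ray from `point` crosses:
    an odd count means the point is inside.
    """
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's y-coordinate
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Illustrative rectangular keep-in area (local coordinates, meters)
fence = [(0, 0), (1000, 0), (1000, 600), (0, 600)]
print(inside_geofence((450, 300), fence))   # True: within the fence
print(inside_geofence((1200, 300), fence))  # False: breach, trigger contingency
```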
PRIORITIZATION
Within 1-5 years, we anticipate the gradual integration of UAVs into the NAS in positively controlled
airspace with appropriate sense-and-avoid, separation, and redundant systems (large fixed-wing
aircraft) to ensure safety of operations in the air and on the ground (takeoff/landing mainly from small
airports only). In Class G airspace, we envision the implementation of limited UAV traffic management
solutions in a few rural areas with homogeneous aircraft. Performance- and mission-based certification
is to be conducted. Intelligent systems will contribute low-cost solutions that enhance mission
effectiveness, UAS information management, and V&V of flight-critical systems.
In the next 5-10 years, we anticipate increased capabilities and growth of areas for UAS integration.
Specifically, we will have solutions that enable dynamic airspace allocation, better weather prediction,
traffic management in urban environments, more ground-based sensors, and increased volume of UAVs
in a given UAV traffic management (UTM) controlled airspace. We will also have interacting UAV traffic
management systems which interface with positively controlled airspace.
In the next 10-20 years, we anticipate new UAV traffic management architectures which permit
seamless interactions between manned and unmanned aircraft. These architectures will feature
scalability, robustness, and adaptivity to uncertainties. Intelligent logistics and supply chain
management will also be developed.
8. AIR TRAFFIC MANAGEMENT
8.1 INTRODUCTION
Air traffic management (ATM) is concerned with planning and managing airspace resources (e.g., routes
and airports) to ensure the safety and efficiency of air traffic operations in the National Airspace System
(NAS). ATM can be classified into two categories: 1) air traffic flow management (ATFM), which deals with
balancing air traffic demand with available resource capacities, typically with longer look-ahead horizons
(2-15 hours), and 2) air traffic control (ATC), which is concerned with the tactical guidance and control of
aircraft within the NAS, typically with shorter look-ahead horizons of up to 2 hours.
Today’s air transportation system has many inefficiencies, as reflected by frequent delays, especially
during days with significant weather impacts. The Next Generation Air Transportation System (NextGen)
aims at improving the efficiency of ATM through smart technologies and new procedures. Intelligent
systems, which integrate technologies from artificial intelligence, adaptive control, operations research,
and data mining, are envisioned to play an important role in improving ATM and providing significant
contributions to NextGen.
Significant research efforts have been conducted over the years to improve ATM, with the development
of tools and procedures for en route collision avoidance, departure and arrival runway and taxiway
management, and Terminal Radar Approach Control (TRACON) automation among others. Some of the
advances have been successfully tested and implemented, among which the most significant is the
automatic dependent surveillance-broadcast (ADS-B), which establishes the foundation for enhanced
communication and navigation capabilities.
Significant studies on ATFM are needed to optimize resource allocation in the NAS over the strategic
timeframe. The daily coordination of flow and resource management is currently implemented through
a conference call between the Air Traffic Control System Command Center (ATCSCC) and other
stakeholders, including the airlines, Air Route Traffic Control Centers (ARTCCs), and others. Decisions are
made based on human experience and subjective judgment, which are effective overall but leave room
for improvement. Significant research is needed to understand human intelligence in high-level resource
planning and to provide automation tools to support the decision-making.
8.2 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
TECHNICAL CHALLENGES
Robust Management Solutions under Uncertainties
The NAS is subject to a variety of uncertainties, such as take-off time delays, unscheduled demand,
inaccurate weather forecasts, and other types of off-nominal events. When traffic demands are close to
available resource capacities, these uncertainties can significantly disrupt the performance of the airspace
system. Among these impacts, convective weather and the uncertainty associated with it are the leading
cause of large delays in the NAS. In order to best allocate resources to address the uncertainties, strategic
ATFM is considered critical. Robust ATFM design is highly challenging due to the large scale of the
problem, strict safety requirements, heavy dependence on human decision-making, and the
unknown nature of some of these uncertainties. Intelligent ATFM solutions integrated into human
decision making processes and procedures are required to address these issues in real time.
Growing Heterogeneity
With the potential for an increasing number of unmanned aerial vehicles (UAVs) entering the airspace,
ATM is facing the challenge of growing heterogeneity and integrating manned and unmanned operations
into the NAS while maintaining current safety standards. In addition to traditional manned flights, the
airspace will also be shared with UAVs which fulfill a variety of military and civilian missions. The diverse
aircraft characteristics, missions, and communication capabilities complicate the management
procedures. The limited resources of human traffic controllers will not meet the management needs of
such diverse traffic types and heavy traffic loads. As such, diverging from the traditional ATM led by
centralized human controllers, part of the airspace may be dedicated to self-separating traffic with aircraft
equipped with the intelligence to sense and coordinate.
Cyber Security and Incident Recovery
ATM systems are prone to cyberattacks and more generally cyber-related failures. Cyber security issues
are becoming increasingly important to consider, with growing reliance of ATM solutions on automation,
software, and networks, and the switch of voice communication to data communication between
controllers and pilots. Recently, a number of cyber-related incidents have been observed. In October 2014,
an insider attack on the Chicago Air Route Traffic Control Center (ZAU) communication equipment led to
the cancellation of thousands of flights over almost two weeks, with airline losses estimated at $350
million. In June 2015, a network connection problem (which may not have been caused by an attack)
caused a two-hour ground stop of all United Airlines flights and affected half a million passengers. Two
challenging
directions need to be addressed: first, how to design an intelligent ATM system robust to cyberattacks
and failures, and second, how to restore normal operations within a short span following an emergency.
TECHNICAL BARRIERS
Automation is at the core of NextGen's effort to improve the efficiency and safety of the NAS. At the
system level, practical issues are also critical to the successful implementation of automation solutions.
• Multiple stakeholders of the NAS (e.g., dispatchers, traffic controllers, and airport operators) may
have conflicting objectives. Automation solutions must be fair to individual stakeholders in order to
be accepted, which makes it hard to quantify and validate the optimization goals. Quantifying
fairness and including it in system-wide planning needs to be addressed.
• Human operators (pilots, controllers, and ground operators) may be reluctant to accept new
automation solutions for multiple reasons, including limited trust in automation, the learning curve
of working with automation tools, and job security. Human factors in the automation process need to
be better understood.
• Due to the safety concerns of implementing any new process, significant costs in time and budget are
required before any new ATM automation solution can be put into practice. New methods to quickly
verify and validate potential technologies are needed. This becomes even more crucial as
information technology (IT) companies eagerly move into aerospace businesses.
IMPACT TO AEROSPACE DOMAIN AND INTELLIGENT SYSTEMS VISION
The aforementioned technology barriers adversely impact the intelligent systems vision, as they often
result in delays, sometimes indefinite, in implementing automation solutions that have the potential to
improve the safety and efficiency of the airspace system. Only if research expenditures on the automation
of the air transportation system result in real implementations and visible performance improvements
will more investment in research and development be possible. The public is waiting to see a plan for
development and modernization of the air transportation system they rely on. This plan should include
the implementation of tested and validated intelligent systems.
The technical challenges also lead to policy and regulation barriers. For example, the challenge of
heterogeneity delays the issuance of clear regulations for UAVs integrated into the airspace. Cyber
security issues create policy and regulation barriers as well. Due to the profound impact of air
transportation incidents, the air transportation system is always a target for potential attacks. Unless the
resilience of the air transportation system to cyberattacks is addressed, the public will not make air
transportation their first choice when other modes of transportation are available.
8.3 RESEARCH NEEDS TO OVERCOME TECHNOLOGY BARRIERS
Intelligent systems are playing a crucial role in automating air traffic management procedures to improve
the safety and efficiency of operations in the NAS. Some potential future developments are listed here.
Research efforts in these areas have so far been limited, and further development is needed to eventually
lead to their implementation.
RESEARCH GAPS
The air transportation system is a highly complicated large-scale system that involves both cyber
components (computation, communication, and control) and physical components (resources, traffic
dynamics, and physical infrastructure). As such, many research gaps reside in the domain of decision-making
for large-scale cyber-physical systems with humans in the loop. Specific research gaps include
decision-making under high-dimensional uncertainty, decentralized optimization, distributed control,
big data, human intelligence, human-machine interaction, modeling of cyberattacks, fault detection and
control, and verification and validation.
OPERATIONAL GAPS
The lack of test beds and tools to evaluate and validate ATM solutions at the NAS scale delays the
implementation of ATM automation solutions. The large costs associated with identifying/detecting a
wide variety of cyberattacks and building backup systems that can operate safely during and after
cyberattacks also create gaps in achieving a safe and resilient air transportation system.
RESEARCH NEEDS AND TECHNICAL APPROACHES
Strategic Air Traffic Flow Management under Uncertainties
Strategic ATFM identifies critical regions of resource-demand imbalance and robustly redistributes
resources to resolve such imbalances under uncertainty. While computers are excellent at tuning precise
optimized values in reduced-scale problems, they are not good at finding the global patterns and
prominent features in large-scale problems that are critical to strategic ATFM. Machine learning and data
mining techniques will help mimic human vision and intelligence to facilitate robust strategic ATFM under
uncertainties. These technologies will be able to look at historical data and correlate airline schedules,
weather, control tower interactions and possibly identify patterns to enable efficient pre-planning and
predict trouble spots.
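As a hedged illustration of this kind of pattern mining, the sketch below clusters daily NAS condition features into recurring "day types" with scikit-learn. The features and data here are synthetic stand-ins; an operational tool would learn from years of real schedule, weather, and delay records.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical daily features: [convective coverage %, demand ratio,
# mean hub wind (kt), prior-day delay (min)] for one year of operations.
rng = np.random.default_rng(0)
days = rng.normal(loc=[20, 1.0, 15, 40], scale=[15, 0.1, 8, 30], size=(365, 4))

features = StandardScaler().fit_transform(days)
day_types = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)

# Each cluster is a recurring "day type"; planners could attach a vetted
# playbook (reroutes, ground delay programs) to each type for pre-planning.
print("day-type label for today:", day_types.predict(features[:1])[0])
```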
Multiple Stakeholder Decision-Making
Ensuring fairness in automation solutions is critical for their successful implementation. Artificial
intelligence, game theory, and reinforcement learning techniques will be valuable in capturing the current
negotiation process among the multiple stakeholders of the NAS when implementing ATFM and ATC plans.
Such understanding will help to define and implement equity objectives in automation solutions that are
acceptable to multiple stakeholders.
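One simple formal notion of equity that such work might build on is proportional fairness, which maximizes the sum of log-utilities across stakeholders. The sketch below applies its closed-form solution to a hypothetical arrival-slot allocation; actual ATFM negotiation involves far richer constraints and preferences.

```python
def proportional_fair_allocation(weights, capacity):
    """Split `capacity` arrival slots to maximize sum_i w_i * log(x_i)
    subject to sum_i x_i = capacity. The Lagrangian stationarity condition
    w_i / x_i = lambda for all i yields x_i = capacity * w_i / sum(w).
    """
    total = sum(weights.values())
    return {k: capacity * w / total for k, w in weights.items()}

# Hypothetical demand weights for three airlines during an airspace flow program
alloc = proportional_fair_allocation({"A": 50, "B": 30, "C": 20}, capacity=60)
print(alloc)  # {'A': 30.0, 'B': 18.0, 'C': 12.0}
```

The log-utility objective is attractive where acceptance matters: no stakeholder can be allocated zero, and each receives a share proportional to its stated weight.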
Human-Machine Interaction
Automation solutions are not aimed at replacing humans, but at assisting them with information and
algorithms to make better decisions. Intelligent systems techniques will help us understand how humans
and machines interact and evaluate the performance of human-machine interaction. Ultimately, such
studies will help to improve the usability of automation interfaces and to improve human training
programs to facilitate seamless human-machine interaction.
Decentralized Air Traffic Management in Heterogeneous Airspace
Decentralized air traffic management is a major research direction in NextGen. Equipping UAVs, and flights
more generally, with the intelligence to sense, coordinate, and control will significantly reduce the
workload of human controllers on the ground and improve the efficiency of resource usage in the NAS.
However, safety requirements are challenging to achieve, considering the complicated NAS environment
and the decentralized nature of such management solutions. Innovative intelligent system algorithms that
mesh advances from multiple disciplines will be essential for NextGen to tackle this complex problem.
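The sketch below conveys the flavor of decentralized self-separation: every aircraft runs the same deterministic pairwise rule, so consistent, conflict-free decisions emerge without a central controller. The one-minute lookahead, separation threshold, and "higher ID yields" tiebreak are illustrative assumptions, not an actual NextGen protocol.

```python
import numpy as np

def decentralized_resolution(states, separation=5.0, lookahead_s=60.0):
    """Each aircraft runs this identical rule onboard. `states` maps
    aircraft ID -> (position, velocity) as 2-D numpy arrays (km, km/s).
    If a predicted loss of separation exists, the higher-ID aircraft
    yields (symbolically here), so all agents decide consistently.
    """
    actions = {ac: "maintain" for ac in states}
    ids = sorted(states)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            pa, va = states[a]
            pb, vb = states[b]
            # Constant-velocity prediction over the lookahead window
            gap = (pb + lookahead_s * vb) - (pa + lookahead_s * va)
            if np.linalg.norm(gap) < separation:
                actions[b] = "yield"  # agreed rule: higher ID gives way
    return actions

states = {
    1: (np.array([0.0, 0.0]),  np.array([0.1, 0.0])),
    2: (np.array([12.0, 0.0]), np.array([-0.1, 0.0])),
}
print(decentralized_resolution(states))  # {1: 'maintain', 2: 'yield'}
```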
Securing the Air Transportation System
Cyber security has been studied mostly for systems like the Internet. In air transportation, this field is
largely unexplored, and significant research is needed in multiple domains. Air traffic researchers need to
work closely with cyber security experts to develop a cyber security framework for air transportation
systems. Example research topics include: 1) measuring risk levels and creating alerts for potential attacks,
2) enabling human operators to respond to attacks effectively, 3) building a database that allows quick
identification of attacks and the best recovery solutions, 4) creating backup operation systems without
incurring additional vulnerability to attacks, and 5) verifying and validating the effectiveness of security
countermeasures.
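As a minimal instance of topic 1 (measuring risk and creating alerts), the sketch below flags a surveillance or communication feed whose message rate deviates sharply from its recent baseline. The z-score test is only a toy; a real framework would fuse many indicators and model adversarial behavior explicitly.

```python
import numpy as np

def alert_on_anomaly(history, current, z_threshold=4.0):
    """Flag a feed whose current message rate is a statistical outlier
    relative to its recent baseline (simple z-score test)."""
    mu, sigma = np.mean(history), np.std(history)
    if sigma == 0:
        return False
    return abs(current - mu) / sigma > z_threshold

# Hypothetical ADS-B messages/second over the past hour, then a sudden drop
baseline = np.random.default_rng(1).normal(200, 10, size=3600)
print(alert_on_anomaly(baseline, current=40))   # True: raise an alert
print(alert_on_anomaly(baseline, current=195))  # False: nominal
```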
Miscellaneous New Directions
Rich new directions are enabled by new technologies in multiple domains. Examples include: 1) advanced
planning solutions integrated with aircraft to support trajectory-based operations, 2) integration of large
sensing networks into future decision support tools, including aircraft-based sensor data, and 3) airborne
networks that transmit data over multiple hops. Intelligent systems concepts and tools will find new
applications in these areas.
PRIORITIZATION
In the near term (0 to 5 years), we anticipate air traffic control advances such as Automatic Dependent
Surveillance-Broadcast (ADS-B), GPS Navigation, and Metroplex development to be fully implemented. In
addition, we expect initial automation solutions for air traffic management concepts such as the
Collaborative Trajectory Options Program (CTOP), Airspace Flow Program (AFP), and Time-Based Flow
Management (TBFM) to be developed and fully tested.
In the mid term (5 to 10 years), we envision that the implementation focus will shift from the automation
of air traffic control to the automation of strategic air traffic management. In particular, automatic
decision-support for strategic air traffic management that considers the benefits of multiple stakeholders
will be implemented. To enable that, a good understanding of the roles of humans and automation, and
human-machine interaction in air transportation systems will be developed.
In the far term (10 to 20 years and beyond), we expect fully automated air traffic management solutions
to be developed. The traffic management system will rely less on centralized decision-making, and will be
largely decentralized, with built-in intelligence to sense risks, resolve congestion, and optimize the
allocation of resources.
9. BIG DATA
9.1 ROLES AND CAPABILITIES
As aerospace systems become more complex, large-scale integrated intelligent systems technologies that
draw on multidimensional data from heterogeneous networked environments can play many important
roles in increasing the manufacturing, maintenance, and operational efficiency, mission performance, and
safety of current and future aerospace systems. Future intelligent systems technologies can provide
increased intelligence and autonomous capabilities at all levels, thereby reducing cost and increasing
predictive capabilities.
The amount of business and technical data available to aerospace and defense companies is exploding.
For any major aerospace product, the identities and attributes of thousands of suppliers in a chain
spanning from materials to components can now be tracked. The fine details of manufacturing logistics,
including tallies of which vendors have how much of a given product and their projected availabilities, can
be recorded. The challenge of harnessing this enormous amount of information, called big data, for
operational decision-making and strategic insight can at times seem overwhelming. The very point of
looking at big data is to analyze and spot patterns that answer questions you did not know to ask: Is a
vendor deep in the supply chain going out of business? Is there a developing pattern of critical component
failures? Big data can do that and more. What if you could evaluate, analyze, and interpret every
transaction? What if you could capture insights from unstructured data, or detect changing patterns in
best-value supply channels? What if you did not have to wait hours or days for information?
Forward-looking aerospace and defense companies are fast adopting in-memory high-performance
computing, a relatively new technology that allows the processing of massive quantities of real-time data
in the main memory of a company’s computer system to provide immediate results from analyses and
transactions. Big data analytics also enables optimal decision-making in complex systems that are dynamic
and dependent on real-time data. Engineers can use big data in their design work as valuable guidance.
Spotting patterns of success and failure from the past data in a dynamic real-time environment brings a
new dimension in design optimization. A computer in a rocket using big data can autonomously decide its
next course of action by matching patterns from the past that worked. Cybersecurity applications in
aviation can use big data predictive analytics to initiate preventive actions to protect an aircraft. Using
predictive patterns from the past, an autonomous system can make intelligent decisions in a challenging
dynamic environment. Big data analytics can crunch massive quantities of real-time data and reliably
balance safety, security and efficiency. Airlines are adopting big data analytics to maximize operational
efficiency, minimize cost, and enhance security. Computational fluid dynamics organizations must manage
the vast amounts of data generated by current and future large-scale simulations. Aerospace industry,
research, and development are profoundly impacted by the big data revolution.
AIRCRAFT ENGINE DIAGNOSTICS
Pratt & Whitney, for example, is collaborating with IBM to use big data predictive analytics to analyze
data from thousands of commercial aircraft engines.17 The data are used to predict and interpret
problems before they occur. Huge amounts of data generated by aircraft engines are analyzed and
interpreted with the help of big data analytics, enabling discrepancies and early signs of malfunction to
be foreseen. Insights like these can help companies alert their customers with maintenance intelligence
and provide intuitive flight operational data at the right time. Reducing customers’ costs, a major strategic
goal of any company, is accomplished through this proactive real-time monitoring of the state and
robustness of customers’ engines. In addition, it provides sustained visibility for planning optimized fleet
operations. Applying real-time predictive analytics to the huge streams of structured and unstructured
data generated by aircraft engines empowers companies to maintain proactive communication between
service networks and customers, resulting in critical guidance at the right time. Pratt & Whitney
anticipates an increase of up to six years in engine life with the help of big data predictive analytics,
according to Progressive Digital Media Technology News. The company also forecasts a 20 percent
reduction in its maintenance costs.
AIRLINE OPERATIONS
Generally, an airline depends on its pilots to provide estimated times of arrival. If a plane lands later
than expected, the cost of operating the airline goes up enormously because staff sit idle, adding
to overhead costs. On the other hand, if a plane lands ahead of the estimated arrival time,
before the ground staff is ready for it, the passengers and crew are effectively trapped in a taxiing state,
resulting in customer dissatisfaction and operational chaos. Andrew McAfee and Erik
Brynjolfsson, writing in the Harvard Business Review in October 2012, described how a major U.S. airline
decided to use big data predictive analytics after determining that approximately 10 percent of flights into
its major hub were arriving 10 minutes before or after the estimated time of arrival.18
Today, airlines are using decision-support technologies and predictive analytics to determine more
accurate estimated arrival times. Using big data analytics tools and collecting a wide range of information
about every plane every few seconds, airlines and airport authorities are virtually eliminating gaps
between estimated and actual arrival times. This requires handling a huge, constant flow of data
gathered from diverse sources across various networks. A company can keep all the data it has
gathered over a long period of time, giving it a colossal amount of multidimensional information. This
allows sophisticated predictive analytics and the deployment of pattern-matching algorithms with the
help of data mining tools, machine learning technologies, and neural networks. The pattern-predicting,
supervised, and unsupervised learning algorithms answer the question: “What was the actual arrival time
of an aircraft that approached this airport under similar conditions? Given that the current conditions are
slightly different, when will this aircraft really land?”
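A hedged sketch of the idea is shown below: a regression model learns the deviation from the nominal ETA as a function of conditions on approach. The features and the synthetic "ground truth" are hypothetical; an airline system would train on years of actual surveillance and operational data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for historical approaches: [headwind (kt), arrival
# queue length, convective index 0-1, runway config] -> ETA deviation (min)
rng = np.random.default_rng(7)
n = 5000
X = np.column_stack([
    rng.normal(10, 15, n),     # headwind component on final
    rng.integers(0, 12, n),    # aircraft ahead in the arrival queue
    rng.uniform(0, 1, n),      # convective activity index
    rng.integers(0, 3, n),     # runway configuration in use
])
# Toy ground truth: wind and queueing dominate the deviation from nominal
y = 0.12 * X[:, 0] + 0.9 * X[:, 1] + 6.0 * X[:, 2] + rng.normal(0, 1.5, n)

eta_model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print("predicted ETA deviation (min):",
      round(eta_model.predict([[25, 8, 0.7, 1]])[0], 1))
```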
COMPUTATIONAL FLUID DYNAMICS
17 CIO Review, "IBM Helps Pratt & Whitney to Enhance Their Aircraft Engine Performance." [Online]. http://aerospace-defense.cioreview.com/news/ibm-helps-pratt-whitney-to-enhance-their-aircraft-engine-performance-nid-2774-cid-5.html
18 A. McAfee and E. Brynjolfsson, "Big Data: The Management Revolution," Harvard Business Review, Oct. 2012.
According to NASA’s “CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences,” 19
effective use of very large amounts of data generated by computational fluid dynamics will be critical to
advancing aerospace technologies. Big data predictive analytic tools have already started analyzing large
CFD-generated data sets to immensely improve the overall aerodynamic design and analysis process. With
the advent of more powerful computing systems, big data predictive analytics will enable a single CFD
simulation to solve for the flow about complete aerospace systems, including simulations of space vehicle
launch sequences, aircraft with full engines and aircraft in flight maneuvering environments.
CORPORATE BUSINESS INTELLIGENCE
Today’s businesses require fast and accurate analytical data in a real-time, dynamic environment.
Traditional database technologies cannot cope with these demands for increased complexity and speed.
The new computing trend supporting big data analytics in corporate environments is to process massive
quantities of real-time data in the main memory of a server to provide immediate results from analyses
and transactions. This new technology is in-memory computing. It removes predictive and analytical
bottlenecks and enables companies to access existing, newly generated, or acquired granular, accurate,
trend-predicting large data sets. A real-time enterprise computing infrastructure with in-memory business
application modules enables business processes to analyze large quantities of data from virtually any
source in real time with fast response times.
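The sketch below conveys why this matters operationally: with a large transaction table held entirely in memory (here via pandas as a stand-in for an in-memory platform), supply-chain aggregations return interactively rather than as overnight batch jobs. The data and column names are invented for illustration.

```python
import numpy as np
import pandas as pd

# Hypothetical supplier transactions held entirely in RAM
rng = np.random.default_rng(3)
n = 1_000_000
tx = pd.DataFrame({
    "vendor": rng.choice(["V1", "V2", "V3"], size=n),
    "part":   rng.choice(["fastener", "actuator", "sensor"], size=n),
    "qty":    rng.integers(1, 100, size=n),
})

# Interactive, ad hoc aggregation over a million rows in memory
on_hand = tx.groupby(["vendor", "part"])["qty"].sum()
print(on_hand.head())
```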
Big data predictive analytics combined with in-memory computing has had a massive impact on program
management, manufacturing, procurement, supply chain management and planning, operations, and
aftermarket services. The biggest corporate headaches today are reduced customer insight and
familiarity, missed revenue opportunities, blind spots in the supply chain, and increased exposure to
regulatory risk resulting from distributed processes, disparate information, and unmanageable amounts
of data from diverse sources. Companies are gaining sustainable competitive advantages by effectively
managing their big data and associated analytics. Excellence in big data management and analytics
enables an organization to better sense changes in the business environment and react quickly, in real
time, to changes in trends and data.
9.2 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
Unfortunately, the potential of big data analytics has not been fully realized. Some executives and
managers do not understand how to apply statistical and predictive analytical tools and machine-learning
algorithms. In addition, the process of collecting multidimensional data from many sources affects the
quality of massive data sets. The real potential of big data analytics comes from harnessing data sets from
diverse sources with unpredictable data quality. Techniques for pre-processing the data to achieve high
quality are critical to the success of any big data implementation. We are seeing early pioneers attempt
to implement predictive analytics by using big data to improve technical and business processes.
9.3 RESEARCH NEEDS TO OVERCOME TECHNOLOGY BARRIERS
19 J. Slotnick, et al., "CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences," NASA/CR–2014-218178, 2014. [Online]. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20140003093.pdf
Big data has all the characteristics of small data. Data becomes information when it becomes effectively
usable. Big data, like any other data, needs to be clean and consistent. If the data is unstructured, it can be
processed into structured data sets with the help of natural language processing and text mining tools.
The biggest challenge in big data analytics is dealing with missing or corrupted elements, rows, columns,
and dimensions. Modern applied statistical data mining tools are employed to remove these anomalies,
readying the data for predictive analytics. Assuming the right choices are made, the next few decades will
see enormous big data applications in medicine, business, engineering, and science. Aerospace will
become intelligent, cost-effective, self-sustaining, and productive with big data applications.
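A minimal sketch of the pre-processing step described above, assuming a hypothetical telemetry table in which a sentinel value marks corrupted readings:

```python
import numpy as np
import pandas as pd

# Hypothetical raw telemetry with a gap and a corrupted (sentinel) reading
raw = pd.DataFrame({
    "t":   [0, 1, 2, 3, 4, 5],
    "egt": [612.0, 615.0, np.nan, 618.0, -9999.0, 621.0],  # -9999 = corrupted
    "vib": [0.31, 0.30, 0.33, np.nan, 0.32, 0.34],
})

clean = (
    raw.replace(-9999.0, np.nan)      # corrupted sentinel values -> missing
       .set_index("t")
       .interpolate(method="linear")  # fill short gaps from neighbors
       .dropna()                      # drop rows that cannot be recovered
)
print(clean)
```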
10. HUMAN-MACHINE INTEGRATION
10.1 INTRODUCTION
A key aspect of utilizing intelligent systems in aerospace is designing the methods by which humans
interact with them: that is the purpose of the field of human-machine integration or human-machine
interaction (HMI). This section describes the efforts in HMI to maximize the effectiveness of these
interactions by reaching an optimal balance between functionality and usability.
Human-machine interactions can be of different types: a) physical, dealing with the actual mechanics of
the interaction; b) cognitive, dealing mostly with communication and understanding between user and
machine; and c) affective, dealing with the user’s emotions. Classical work on HMI has dealt primarily with
the design of uni-modal physical interfaces interacting through a single human sense: a) visual (facial
expression recognition, body movement and gesture recognition, eye gaze detection), b) auditory (speech
and speaker recognition, noise/signal detection, musical interaction), or c) touch (pen, mouse/keyboard,
joystick, motion tracking, haptics). However, more recent work has emphasized multi-modal HMI, in which
interactions occur simultaneously over two or more senses (e.g., lip movement tracking to improve speech
recognition, or dual commands using voice and finger pointing). Perhaps the two most noteworthy
examples of holistic HMI are ubiquitous computing, related to the Internet of Things revolution,20 and
brain-computer interfaces, which are being studied primarily as a means to assist disabled people.21
Much progress is being made in cognitive aspects of HMI, especially in the robotics community. This body
of work is attempting to design effective communication protocols between humans and robots, as well
as methods for the machine to explain its internal state and the rationale behind its actions in a way that
is useful and clearly understandable to the user. Another important body of work in this area is the study
of the mental models that humans have of machines when interacting with them.
Affective aspects have grown in importance in recent years, mostly due to technological advances, but
also due to the realization that an interface that ignores the user’s emotional state can dramatically
impede performance and risks being perceived as cold, socially inept, or, perhaps more importantly,
incompetent and untrustworthy. Hence, visual (face) and auditory (voice) emotion analyses are currently
being used to assess the emotional state of the user and adapt the interaction accordingly.
The remainder of this section reviews the state of the art of HMI, with an emphasis on the roles and
capabilities this technology can provide for aerospace intelligent systems, identifies the technical and
non-technical challenges, and proposes a research agenda.
Note that HMI is strongly related to other aspects of intelligent systems, such as autonomy (Section 4),
computational intelligence (Section 5), and trust (Section 6).
20 M. Weiser, "Ubiquitous Computing," IEEE Computer, pp. 71–72, 1993.
21 J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller and T. M. Vaughan, "Brain–computer interfaces for communication and control," Clinical Neurophysiology, vol. 113, pp. 767–791, 2002.
10.2 ROLES AND CAPABILITIES
Intelligent systems have become a pervasive component of aerospace systems as well as supporting
systems used during the design, fabrication, testing, and operation of the system. These intelligent
systems span a wide range of applications and autonomy/automation levels, from completely
autonomous systems with little human intervention (e.g. Deep Space spacecraft, autopilot during cruise
flight of airliners, multidisciplinary design optimization) to partially automated decision-support systems
offering visualization capabilities or providing alternatives to the human (e.g., Shuttle landing, advanced
concurrent mission design at JPL Team X).
However, the introduction of these intelligent systems has also introduced new complexities in design
and validation to support effective interactions with people. In many cases the intelligent system does not
simply replace the role of a person; it fundamentally changes the nature of human work. This raises
important questions as to how we design and validate intelligent systems to work compatibly with
humans who remain in- or on- the decision-making or control loop in some fashion. The goal is to gain the
potential performance benefits of intelligent systems without adversely impacting system safety or
human well-being.
The human-machine integration (HMI) research topic area aims to provide design guidance through
empirical study, modeling, and simulation to ensure that intelligent systems work in a way that is
compatible with people and enhances their performance, e.g., by promoting predictability and
transparency in action and supporting human situational awareness. A successful human-machine
interface is a technology that supports a human operator, supervisor, or teammate in effectively
understanding the state of the system and environment at an appropriate level of abstraction, and allows
the person to effectively direct attention and select a course of action if and when necessary. Examples
include the following:
• Automation and decision-support for pilots within cockpits
• Remote pilot collaboration with onboard autonomy for aircraft
• Human-robot collaboration within cockpits
• Human augmentation for Intelligence, Surveillance, and Reconnaissance (ISR) analysis and
exploitation
• Coordination of distributed manned-unmanned systems involving air, ground, and sea assets22
• Human-robot collaboration in replenishment and maintenance for military operations
• Astronaut-robot interaction on the International Space Station (ISS) and beyond
• Human-machine collaboration during mission and vehicle design23
• Human-machine collaboration in the operation of constellations of satellites (see Section 13)
The user interface has long been identified as a major bottleneck in utilizing intelligent, robotic, and
semi-autonomous systems to their full potential. As a result, significant research efforts have been aimed at
22 J. Y. C. Chen, M. J. Barnes and M. Harper-Sciarini, "Supervisory control of multiple robots: Human-performance issues and user-interface design," IEEE Transactions on Systems, Man and Cybernetics Part C: Applications and Reviews, vol. 41, pp. 435–454, 2011.
23 J. Olson, J. Cagan and K. Kotovsky, "Unlocking Organizational Potential: A Computational Platform for Investigating Structural Interdependence in Design," Journal of Mechanical Design, vol. 131, pp. 031001–1–13, 2009.
easing the use of these systems in the field, including careful design and validation of supervisory and
control interfaces. However, the increasing complexity of human-machine systems requires that these
systems support more sophisticated coordination across multiple humans and machines, requiring efforts
beyond traditional interface design. Ultimately, this requires the architecture, design, and validation of an
integrated human-machine system, in which the role of the person is incorporated explicitly into all phases
of the process, to support richer and more seamless human-machine cooperation. The next subsection
discusses the various facets of HMI, and the last subsection addresses open challenges in the effective
design of HMI systems.
10.3 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
The key challenges in HMI involve the measurement, testing, and design of solutions that manage the
risks of intelligent systems and support human performance. These solutions include experimental
methods, computational tools, visualizations, haptics, computer programs, and interaction and training
protocols aimed at supporting single human operators as well as multi-member teams of people and
intelligent systems. For example, in the near future, we may see situations in which remote pilots
collaborate with onboard cockpit autonomy, or collaborate with an onboard team composed of both
autonomy and a human pilot. These new architectures for cooperative work require careful study to
ensure that human cognitive performance is maintained, and to support the remote pilot’s situational
awareness.
Transparency and predictability of the system must be ensured for the operator. This requires that the
intelligent system support the person in building an accurate mental model of its behavior, and that
protocols be developed for instruction and training. Human behavioral models, such as intent recognition,
that support system adaptation to the operator can be specified or learned. Simulations, models and
experiments are then used to investigate what level of system adaptation is acceptable to the human
operator. The ability to communicate intent and use intent to adapt plans to new situations is
fundamental to effective collaboration, and the communication channel ultimately mediates all
interactions.
Multi-modal interactions offer potential human performance benefits in effectively conveying state and
directing action. When situations change, the interfaces and communication protocols must effectively
convey information and analysis to the user in a manner that supports their decision-making.
Supervisory control models, in which one operator directs or controls a large set of intelligent systems
such as UAVs or ground vehicles, open the door to new opportunities and also new challenges. Effective
multitasking of the human operator promises substantial gains in performance through efficient use of the
operator’s time and resources. However, challenges remain relating to operator situational awareness,
attention, workload, fatigue, and boredom. The level of autonomy, system performance and error
measures, and individual differences among operators substantially influence these factors. Flat
peer-to-peer architectures for coordination mitigate the human information processing bottleneck, but require
alternate architectures and protocols for supporting complex, potentially distributed networks of
exchanges among humans and machines, for example, for collaborative data sharing, analysis,
negotiation, and direction of action.
Finally, intelligent systems must be designed to build trust with the operator through continued
interactions and calibration to the user’s needs and capabilities. The system must also be validated as
trustworthy, in that it acts and communicates in a manner that is compatible with the operator’s calibration
of the system’s capabilities.
10.4 RESEARCH NEEDS TO OVERCOME TECHNOLOGY BARRIERS
RESEARCH GAPS
Measuring and modeling human performance: Work in HMI has been based on the human information
processing framework, by which humans perform three basic functions: stimulus identification, response
selection, and response programming.24 Much research has been devoted to studying the cognitive
processes that underlie perception and action, including the effects of attention, memory, learning, and
emotions, particularly as applied to perceptual-motor behavior. Examples of classical results used in
aerospace systems include Fitts’ law25 and the Hick-Hyman law26 used in cockpit design. While classical
work is mostly based on simple chronometric measures of reaction time, newer techniques such as
advanced computer vision, motion capture, and medical measurement, recording, and imaging have
provided researchers with the ability to obtain large quantities of high-quality data that can be used to
estimate quantities of interest such as pupil dilation, eye gaze, and brain blood flow, which have been
shown to be good predictors of key attributes such as cognitive workload, fatigue, level of attention, or
emotional state.27,28
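As a worked example of how such classical models are applied quantitatively, the sketch below computes Fitts' law predictions of target-acquisition time, using the common Shannon formulation MT = a + b * log2(D/W + 1). The coefficients a and b are hypothetical; in practice they are fit empirically for a given input device and task.

```python
import math

def fitts_movement_time(distance, width, a=0.10, b=0.15):
    """Predicted time (s) to acquire a target of size `width` at `distance`.
    `a` and `b` are device- and task-specific regression coefficients
    (hypothetical values here); the log term is the index of difficulty.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# Two cockpit touch targets at the same distance: halving target size
# measurably slows selection, a classic input to display layout trades.
print(round(fitts_movement_time(300, 40), 3))  # larger button
print(round(fitts_movement_time(300, 20), 3))  # smaller button
```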
These techniques are enabling the development of new cognitive models that predict the performance of
humans in complex problem-solving and decision-making tasks in aerospace systems. The utility of such
models is three-fold: a) if we understand the limitations of human performance, we can design
computational tools to compensate or alleviate those limitations; b) if we can model human performance,
we can measure the impact of different computational tools and determine which ones are more
promising; and c) if we understand the strategies people use to tackle tasks at which they excel, we can
attempt to mimic those strategies in intelligent systems.
Enabling mixed-initiative systems: It has been pointed out multiple times in this report that future
aerospace systems will require true cooperation and collaboration of humans and computers to perform
complex tasks effectively and efficiently. For example, Section 5 describes opportunities to bring
computational intelligence into the ground systems that are used to operate our fleet of satellites. The
recognition that the intelligent system does not simply replace the role of a person, but fundamentally
24 J. A. Jacko, Human Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications, CRC Press, 2012.
25 P. M. Fitts, "The information capacity of the human motor system in controlling the amplitude of movement," Journal of Experimental Psychology, vol. 47, pp. 381–391, 1954.
26 R. Hyman, "Stimulus information as a determinant of reaction time," Journal of Experimental Psychology, vol. 45, pp. 188–196, 1953.
27 S. P. Marshall, "The Index of Cognitive Activity: measuring cognitive workload," Proceedings of the IEEE 7th Conference on Human Factors and Power Plants, pp. 5–9, 2002.
28 A. Gevins and M. E. Smith, "Neurophysiological measures of cognitive workload during human-computer interaction," Theoretical Issues in Ergonomics Science, vol. 4, pp. 113–131, 2003.
changes the nature of human work, leads to new questions. What new roles will emerge as we re-architect
the interactions of people and machines?
Mental models, transparency and explanations: How do we ensure the system remains predictable in its
behavior as situations change and conditions deviate from expected operating parameters? What
methods do we use for providing transparency regarding intelligent system behavior and system “mental
state,” in addition to physical state? How do we elicit the mental models that the users have of the system
or the problem at hand?
Enabling robust decision-making: Future aerospace systems must achieve high levels of performance
under a wide range of dynamic, uncertain and adversarial scenarios by implementing multiple flexibility
strategies including adaptation, resilience and self-repair among others. From the HMI perspective, how
do we design and model the human’s role in a way that preserves the human operator’s flexibility to
intervene to “save the day” when necessary? Human pilots demonstrate resilience in the face of
off-normal and high-demand situations, and our air transportation system relies on this capability to achieve
safe operations. In new hybrid manned-unmanned systems, how do we determine which human
capabilities remain necessary and add value, when and where?
Control and delegation protocols: Mechanisms for transfer of control and delegation must be designed
carefully to avoid situations in which implicit mode changes reduce situational awareness. It remains an
open question how these modes of interaction may need to change with new circumstances or varying
temporal demands. New architectures for communication and coordination are also required to support
complex, distributed networks of collaborating humans and machines. Interfaces and protocols must be
designed and validated for effectively managing uncertainty and failure in communication. How do we
appraise the robustness of a particular human-machine system, for example, to certify its effective
operation in response to failures in communication or failure in capability of an agent?
OPERATIONAL GAPS
Trust: Trust in the intelligent system remains a primary barrier to wider adoption of the technology.29 We
still have open questions regarding the psychological and physiological components and factors that affect
trust. We lack general and accepted methods for testing and validation of HMI systems.
Transition to new HMI methods: Intelligent systems must be deployed and integrated over time. It is still
unclear how to support the transition from current systems to new HMI models, and how to ensure
graceful degradation of capability when the work performed by the intelligent system must be transferred
back to a human counterpart.
RESEARCH NEEDS AND TECHNICAL APPROACHES
The research and operational gaps identified in the previous subsection can be addressed by an approach
based on three axes: applied research, multidisciplinary research, and dissemination of results.
Applied research: While fundamental research is needed to advance the state of the art of HMI, we believe
that the most fruitful and impactful approach to improving HMI in aerospace systems is to conduct applied
29 S. D. Ramchurn, T. D. Huynh and N. R. Jennings, "Trust in Multiagent Systems," The Knowledge Engineering Review, vol. 19, pp. 1–25, 2004.
HMI research in the context of relevant applications such as cockpit design, human-robot interaction, and
satellite operations. For example, research on measuring human performance should be conducted in the
context of domain-specific tasks that are relevant and important to the community, such as operating a
satellite.
Multi-disciplinary research: The best results in HMI research are likely to come from collaborations
between industry, government, and academia that include experts in multiple disciplines such as
aerospace engineering, cognitive psychology, and computer science. For example, NASA operations
engineers can team up with faculty in aerospace engineering and/or cognitive psychology to derive new
models of the performance of satellite operators performing specific tasks.
Dissemination of results: Research in HMI tends to be scattered across multiple venues due to its applied
nature. This can hinder progress. Therefore, we recommend that the results of new HMI research in
applied contexts be shared with HMI experts doing research in other applications in order to maximize
synergies and avoid reinventing the wheel.
PRIORITIZATION
Current HMI research is often driven by an urge to develop new tools, methods, or interactions that
incorporate newly available technologies, at the expense of validating them in different contexts, i.e.,
measuring how good they are or how much they actually enhance human performance compared to the
state of the art. While it is desirable to develop new tools that make the most of technological advances,
these tools are not useful if we cannot compare their effectiveness to that of the ones we have now.
Therefore, we argue that research exploring ways of validating new HMI methods should be high on the
priority list. How else can we advance the state of the art of HMI in aerospace systems if we cannot even
agree on what exactly constitutes good HMI? This is, of course, a wide area of research, including the
development and validation of objective and quantitative models and metrics of human performance.
Building on top of a solid foundation of validity in HMI, we can go on to address the other issues. While
specific applications may have different priorities for HMI research, two fundamental HMI issues are
repeatedly cited in this report as bottleneck problems in many applications, namely: 1) trust and 2) the
transition from the current state of practice, in which most systems sit at one of the extremes of the
autonomy/automation continuum, to a new paradigm driven by mixed-initiative human-computer teams.
11. INTELLIGENT INTEGRATED SYSTEM HEALTH MANAGEMENT
11.1 INTRODUCTION
The purpose of this section on Intelligent Integrated System Health Management (i-ISHM) is to motivate
the development of ISHM technologies that are critical to advancing the state of the art in intelligent
systems for the aerospace domain. Here, “management” broadens the scope from a strictly monitoring
function to include: (1) the analysis required to support i-ISHM design and operation and (2) i-ISHM-specific
responses designed to mitigate a system’s loss of function due to a degraded health state.
Further, “integrated” implies that an i-ISHM system understands the integrated effects of critical failures
that propagate across system boundaries. When appropriate, knowledge of the integrated effects
is then used: (a) to identify anomalies; (b) to appropriately determine the root cause of failures that
originate in one system and manifest themselves in another; (c) to analyze data at various hierarchical
levels to provide increasingly higher-level knowledge about the system’s health state; and (d) to support
the prioritization of responses required to compensate for or mitigate loss of functionality.
For the purposes of this roadmap, i-ISHM includes the following functional capabilities: Health State
Awareness (HSA), Failure Response, and Design and Operations Support. These functional capabilities are
briefly described as follows:
Health State Awareness: HSA is a comprehensive understanding of the system health state during
both nominal and off-nominal operation. HSA may use system state information from onboard
controllers, ground systems commands, measurement data, and analytical estimates of unmeasured
states that are derived from measurements or other parameters. Further, HSA analyzes this system
state information to generate actionable knowledge about the system health state. These analyses
include, but are not limited to, the detection, diagnosis, and/or prognosis of performance
degradation, anomalies, and system failures in both hardware and software portions of the system.
The analyses may be applied at various levels of a system while also considering interactions due to
integration: individual components (e.g., sensors, data systems, actuators), subsystems (e.g., avionics,
propulsion, telemetry), systems (e.g., aircraft, satellites, launch vehicles, ground support), and
potentially systems of systems.
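A minimal sketch of one common HSA building block, residual-based fault detection with a persistence filter, follows. The threshold, persistence count, and tank-pressure scenario are illustrative assumptions; real HSA implementations layer many such detectors beneath diagnosis and prognosis logic.

```python
import numpy as np

def detect_fault(measured, estimated, threshold, persist=3):
    """Declare a fault when the measurement-vs-model residual exceeds
    `threshold` for `persist` consecutive samples (filters transients).
    Returns the sample index of the declaration, or None."""
    exceed = np.abs(np.asarray(measured) - np.asarray(estimated)) > threshold
    run = 0
    for i, flag in enumerate(exceed):
        run = run + 1 if flag else 0
        if run >= persist:
            return i
    return None

# Hypothetical tank-pressure channel: model tracks until a bias fault appears
model_est = [100.0] * 12
sensor = [100.2, 99.8, 100.1, 99.9, 100.0, 100.1,
          103.2, 103.5, 103.1, 103.4, 103.0, 103.3]
print("fault declared at sample:", detect_fault(sensor, model_est, threshold=2.0))
```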
Failure Response: Also known as redundancy management, accommodation, and by other monikers,
failure response comprises onboard or off-board actions taken to preserve system function by mitigating
the effects of failures that reduce system health and performance. To perform its function, failure
response relies on data provided by the HSA function. Failure response is particularly important for
failures that, without mitigation, may ultimately result in loss of mission (LOM) or, for human missions,
loss of crew (LOC).
Design and Operations Support: This element encompasses the large contingent of models, analytical
capabilities, systems engineering processes, and standards required to support the adequate
implementation and verification of i-ISHM-specific requirements (e.g., failure detectability, detection
latency, line replaceable unit) levied on a system during the design process. It also includes the concepts
of operation and user interfaces that deliver the benefits of an i-ISHM capability over the life cycle of a
system.
It is very important to stress that the functional i-ISHM capabilities described here may be implemented
with various degrees of maturity or completeness, with the intent to augment them over time. Therefore,
i-ISHM should be implemented in an evolvable architecture that enables higher levels of i-ISHM capability
through systematic progression of knowledge and information. Technologies and tools for i-ISHM must
enable this process of augmentation and evolution. Any implementation will start at a basic level, and
improve through usage and advances in technology.
These functional capabilities are intended to provide a structure supporting the discussion of an i-ISHM
roadmap and to help bound the scope of the discussion without intentionally over-constraining it. Here,
boundaries between the elements and, indeed, between i-ISHM and other functions are intended to be
somewhat gray or fuzzy. Many i-ISHM practitioners would use somewhat different descriptions and draw
the boundaries differently, owing to the large variation in i-ISHM architectures, capabilities, and levels of
maturity across an abundance of aerospace applications and communities. In an attempt to achieve broad
support for this roadmap, the structure presented is intended to be general enough to represent the
majority of these views without specifically representing any particular one. Further, the words used to
define the i-ISHM functional capabilities are intended to provide a common terminology that can be used
to link the i-ISHM roadmap to other sections of this document.
Motivating this roadmap discussion is the long-term vision of an intelligent system that is capable of
autonomous mission operation. This includes but is not limited to the following i-ISHM functionality:
• Aware of its health state and the functional capabilities associated with that health state
• Able to identify or predict the degradation of that health state, the cause of a particular degradation, and the resulting loss of function
• Able to decide and act autonomously (a) to mitigate effects of health state degradation so that existing goals may be achieved or (b) to provide resources to autonomous operation plans in order to select a different goal that can be achieved with the limited functional capabilities of the reduced health state.
Additionally, it is important to recognize that, as systems become more integrated and complex, intelligence and autonomy will be required not just for systems, but for the processes used to design,
develop, analyze and certify those systems. This is particularly true for i-ISHM where the development of
intelligent and/or automated processes from model building to verification, validation, and accreditation
has the potential to increase the accuracy of the final product and reduce development time and life cycle
cost of the i-ISHM capability by several orders of magnitude.
Further, as systems increase in intelligence and autonomy, new intelligent i-ISHM technologies that allow
designers to more efficiently build on previous work will be required to reduce development time and
keep costs manageable. For example, with the proper algorithms, i-ISHM could be implemented using
higher levels of conceptual abstraction for reasoning and decision-making. Rather than targeting one-of-a-kind solutions, this approach could provide for more efficient implementation by allowing i-ISHM
designers to use generic models and strategies developed for application to broad classes of systems and
processes. This requires that i-ISHM systems be “intelligent” and embody scripting of i-ISHM strategies at
conceptual levels, as opposed to application-specific cases.
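One way to picture such conceptual-level scripting is a generic strategy written once against an abstract system interface and then reused across broad classes of systems. The sketch below is an assumption-laden illustration; the MonitoredSystem interface, the limit-check strategy, and the Valve example are invented, not an established i-ISHM API.

    # Hedged sketch of "scripting i-ISHM strategies at conceptual levels":
    # a generic strategy written once against an abstract interface, then
    # reused across classes of systems. All names here are invented.

    from abc import ABC, abstractmethod

    class MonitoredSystem(ABC):
        """Abstract view of any system that exposes health-relevant state."""
        @abstractmethod
        def read_parameters(self) -> dict: ...
        @abstractmethod
        def nominal_ranges(self) -> dict: ...

    def generic_limit_check(system: MonitoredSystem) -> list:
        """A strategy written once, applicable to any MonitoredSystem."""
        readings = system.read_parameters()
        ranges = system.nominal_ranges()
        return [name for name, value in readings.items()
                if not ranges[name][0] <= value <= ranges[name][1]]

    class Valve(MonitoredSystem):
        def read_parameters(self): return {"position_pct": 104.0}
        def nominal_ranges(self): return {"position_pct": (0.0, 100.0)}

    print(generic_limit_check(Valve()))  # ['position_pct']

The design point is that generic_limit_check never mentions valves; the same strategy could be applied to an avionics box or a ground pump by supplying a different concrete system.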
The goal of this section is to identify the short-term, mid-term, and long-term technology needs as a tool to help policy makers and funding organizations appropriately prioritize their future i-ISHM investments.
The i-ISHM Roadmap discussion is organized in a manner similar to the other topics within this document.
In Section 11.2 i-ISHM roles and capabilities are described with the intent of capturing, at a high level, the
current state-of-the-art as well as the role of i-ISHM in future intelligent systems and the i-ISHM
capabilities required to support that role. Section 11.3 presents the envisioned technical challenges and
technical barriers associated with implementing i-ISHM for future intelligent systems. Here, the technical
challenges and barriers are intended to represent difficult, and as-yet-undeveloped, technologies that are
required to move from the current state-of-the-art to the future vision. The discussion in Section 11.4
attempts to identify research needed to overcome the previously identified technical challenges and
technical barriers as a means of realizing the future vision. Finally, Section 11.5 Roadmap is intended to
fit future research and technology development needs into a timeline with bins of 1 to 5 years, 5 to 10
years, and 10 plus years.
Note that the various technologies identified in this roadmap for development, and the associated
timeframes for that development, have typically been proposed by i-ISHM subject matter experts from an application-specific perspective that is not explicitly identified in this document. It is left to the reader to
determine whether or not these technologies and timeframes apply to a specific context.
11.2 ROLES AND CAPABILITIES
This section describes the role of i-ISHM and the broad spectrum of capabilities that are encompassed.
i-ISHM is an enabling capability for intelligent aerospace systems. As an example, in aeronautics, the main
motivators for i-ISHM are increasing safety and lowering the cost of operations. The condition-based
maintenance of commercial aircraft allows maintenance to be scheduled at the earliest indication of
degraded performance or impending failure. Data-driven methods employed by fleet supportability
programs increase aircraft availability and reduce maintenance costs. Autonomous space missions employ
techniques such as redundancy management that enable continuous operation for long durations when
maintenance operations would be impossible. Crewed space missions depend on i-ISHM for abort system
design and implementation, increasing the safety of those missions. Ground systems for aeronautics and
space applications mirror the roles of their flight counterparts, lowering maintenance costs and assuring
flight readiness.
i-ISHM is akin to having a team of experts who are all individually and collectively observing and analyzing
a complex system, and communicating effectively with each other in order to arrive at an accurate and
reliable assessment of its health.
Simple examples of health state awareness in everyday life are check engine lights in automobiles (and
more advanced health status indicators in modern vehicles), error codes in home appliances, and pre-flight checkout for commercial aircraft. Other advanced health state awareness applications are rotorcraft health and usage monitoring systems (HUMS) and health monitoring for high-performance race cars.
A key concept in i-ISHM is the notion that the way a system can fail should be considered during the design
phase along with nominal functional requirements. For complex systems involving highly integrated
subsystems, an analysis must be performed across subsystem boundaries so that all interdependencies
are identified and understood.
In order to achieve credible i-ISHM capability, technologies in relevant areas must reach a critical level,
and must be integrated in a seamless manner. The integration must be done according to models that support analysis at the system-of-systems level, with many types of interactions. The
technology areas for i-ISHM must cover the functional capabilities described in Section 11.1.
11.3 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
TECHNICAL CHALLENGES
Here, “technical challenges” implies challenges to fielding an operational i-ISHM system that are of a
technical nature and that are difficult to overcome without the development of new technologies.
Currently identified i-ISHM technical challenges are:
• Architectures for integrating multiple i-ISHM algorithms, including different algorithm types, and for scaling i-ISHM from a single subsystem to a system or a system-of-systems.
• Software environments that enable i-ISHM capability that is generic rather than application-specific.
• Creating a software environment that supports integration of the variety of algorithms required to identify the system health state and the functions associated with that state, and that defines parameter lattices to compare and contrast state estimates from more than one algorithm (at different levels of abstraction) in order to determine consistency (a minimal sketch of such a check follows this list).
• Understanding failure mechanisms and the physics of failure needed to support prognostics.
• Under-sensing and fault ambiguity.
• Measurement and parameter uncertainty.
• Mitigation of the effects of latent failures - failures that exist but are not apparent until the associated system is activated.
• Infrastructure that provides continuous, dynamic feedback on all systems from design tools, deployment, missions performed, operational conditions, environmental conditions, maintenance, and health management, through to retirement of the system.
• The integration of i-ISHM goals with higher-level system goals, which are often defined without i-ISHM in mind.
• Support for efficient integration of system models into a system-of-systems model, lacking in many of the existing i-ISHM design tools.
• Efficient and effective verification and validation (V&V) strategies for intelligent systems, particularly those that adapt or learn over time.
• Gaining sufficient confidence in sensors, sensor data, and i-ISHM algorithms to warrant the implementation of onboard critical decision-making prior to system operation or performance impacts.
• Obtaining sufficient data during off-nominal operation to meet requirements for V&V of i-ISHM capabilities.
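For the consistency challenge noted in the list above, the following minimal sketch compares diagnoses produced by two algorithms operating at different levels of abstraction, using a hand-built mapping between levels. All component and subsystem names are invented for illustration.

    # Hedged sketch: two diagnosis algorithms report candidate faults at
    # different levels of abstraction; a mapping lifts the finer diagnosis
    # to the coarser level so the two can be compared for consistency.

    COMPONENT_TO_SUBSYSTEM = {
        "fuel_pump": "propulsion",
        "oxidizer_valve": "propulsion",
        "star_tracker": "gnc",
    }

    def consistent(component_faults: set, subsystem_faults: set) -> bool:
        """True when the component-level diagnosis, lifted to subsystem
        level, agrees with the subsystem-level diagnosis."""
        lifted = {COMPONENT_TO_SUBSYSTEM[c] for c in component_faults}
        return lifted == subsystem_faults

    print(consistent({"fuel_pump"}, {"propulsion"}))      # True
    print(consistent({"star_tracker"}, {"propulsion"}))   # False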
TECHNICAL BARRIERS
Here, “technical barriers” implies technical issues associated with development, implementation, and
operation that cannot be overcome without the development of new technologies. Currently identified
i-ISHM technical barriers are:
• Integration with the flight architecture of the system. A major barrier to the entry of i-ISHM is an inability to define clean interfaces with the baseline architecture.
• Enabling intelligent and integrated capability. Software, architectures, concepts of operations (Con-Ops), and paradigms are needed to meet this challenge.
• i-ISHM is localized and prescribed at specific application levels.
• Knowledge applied is specific to a small part of a system.
• Evolution and expandability are difficult and costly.
• Integration of various tools into a capable i-ISHM system is done in an ad-hoc manner.
• Issues associated with limited telemetry bandwidth.
• Linking i-ISHM development to early design for cost benefit, then leveraging that to benefit operations.
• There is a lack of consistent methodologies for assessing risk based on the impact/consequence and probability of performance degradation, anomalies, or failure (a minimal sketch of one such scoring scheme follows this list).
• i-ISHM models with large failure spaces (i.e., thousands or tens of thousands of failure modes) are relatively new. Consequently, it is not yet clear whether the models can perform efficiently enough to provide failure mode detection and isolation in a time-critical system.
• The development of formal and automated methods to support the VV&A (Verification, Validation, and Accreditation) of non-deterministic (i.e., typically artificial intelligence-based) i-ISHM systems is in its infancy. This is a major barrier for the use of i-ISHM in time-critical systems, particularly human space flight systems.
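To make the risk-methodology gap above concrete, the sketch below implements one common likelihood-by-consequence scoring scheme. The scales, bin edges, and category thresholds are assumptions; the fact that a different practitioner would choose different ones is precisely the inconsistency this barrier describes.

    # Illustrative sketch of one common (but not standardized) risk
    # methodology: a likelihood-by-consequence scoring scheme. All
    # scales and bin edges here are assumptions.

    def risk_score(probability: float, consequence: int) -> str:
        """probability in [0, 1]; consequence on a 1 (negligible) to
        5 (loss of mission/crew) ordinal scale."""
        likelihood = min(5, max(1, int(probability * 5) + 1))
        score = likelihood * consequence
        if score >= 15:
            return "high"
        if score >= 6:
            return "medium"
        return "low"

    print(risk_score(0.7, 5))   # high
    print(risk_score(0.1, 2))   # low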
11.4 RESEARCH NEEDS TO OVERCOME TECHNOLOGY BARRIERS
As used in this section, the term “gaps” implies the difference between the future vision and the current
state-of-the-art.
RESEARCH GAPS
• Need for an integrated information system that contains the dynamic health state based on inspections, operation, environment, and estimated life and usage per the system's design. Most systems are designed for a finite life based on assumptions made during the initial design. These original design assumptions need to be defined and adjusted with actual usage measurements.
• Intelligent sensors and components to support distributed health state awareness.
• Integration of prognostics into i-ISHM system design and operation.
• Integration of detailed physics-based models (or results thereof) into the reasoning process.
• Integrated reasoning across inter-related systems or subsystems.
• Development of formal and automated methods to support the verification, validation, and accreditation of i-ISHM algorithms (e.g., neural nets, knowledge-based systems, probabilistic methods), which is in its infancy. This is a significant barrier for the broad acceptance of i-ISHM.
OPERATIONAL GAPS
• Requirements for legacy systems often do not include i-ISHM requirements with enough definition to guide the development of an i-ISHM capability.
• Interfaces that provide accurate knowledge about the system state (including available functionality) to onboard and/or off-board decision makers or algorithms, including knowledge navigation tools that can rapidly focus the information for the user.
• Common system architecture paradigms that are designed to support i-ISHM from the perspectives of both integration and operations.
• Evolution of i-ISHM technologies from design to operations support, including verification, validation, and certification.
• Methodology for partitioning between onboard and ground-based i-ISHM.
• Automation of i-ISHM processes, including but not limited to the development of i-ISHM-relevant models from specifications and schematics, and VV&A of models and i-ISHM designs.
• A handbook/guidebook on i-ISHM systems implementation.
11.5 ROADMAP FOR i-ISHM
In this section, research goals corresponding to the previously described challenges, barriers, and gaps
are listed. The goals are allocated to timeframes of 1 to 5 years, 5 to 10 years, and 10 plus years based
on the opinion of contributing i-ISHM subject matter experts.
1-5 YEAR GOALS
• Evolutionary/augmentative technology insertion, enabled by defining clean interfaces with baseline avionics architectures.
• Ground-based operational implementations.
• i-ISHM sensor systems that provide information about the confidence in their data and their own health state (a minimal sketch of such a self-reporting sensor follows this list).
• A paradigm shift to a system-level concept of operations that includes an i-ISHM capability.
• Operator-oriented user interface screens for integrated awareness, with the ability to navigate both functional and structural hierarchies of the system.
• Detection of loss of redundancy and the identification of other functionality that may be lost as a result.
• Capability to rapidly develop a system prototype that includes critical i-ISHM elements as a means of supporting systems studies, preliminary performance assessments, and requirements verification.
• Tools to support the verification of system-level integrated hardware/software models.
• Standards to support intelligent system and i-ISHM hardware and software interfaces.
• Development of a preliminary library of standard i-ISHM functions to support the consistent implementation of i-ISHM capabilities and their integration across various subsystems, systems, and systems-of-systems.
• Tools that enable the integration of knowledge across systems to achieve i-ISHM as an evolutionary and scalable capability.
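The self-reporting sensor goal above can be pictured as each reading carrying the sensor's own confidence and health assessment, so that downstream i-ISHM logic can weight or discard it. The field names and threshold in the sketch below are invented for illustration.

    # Hedged sketch of a self-reporting sensor: each reading carries the
    # sensor's own confidence and health state. Fields are invented.

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        value: float
        confidence: float     # 0.0 (no trust) .. 1.0 (full trust)
        sensor_health: str    # e.g., "nominal", "degraded", "failed"

    def usable(reading: SensorReading, min_confidence: float = 0.8) -> bool:
        """Accept a reading only from a healthy, confident sensor."""
        return (reading.sensor_health == "nominal"
                and reading.confidence >= min_confidence)

    r = SensorReading(value=3.2, confidence=0.95, sensor_health="nominal")
    print(usable(r))  # True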
5-10 YEAR GOALS
• Flight demonstrations of on-board i-ISHM.
• Technology demonstrations of non-deterministic i-ISHM algorithms.
• Integration of ground and vehicle information.
• Software environments and tools that enable the reusability of i-ISHM building blocks in different applications.
• Intelligent sensors/components, and standards/architectures to integrate them into the i-ISHM capability; distribute processing to sensors and components (physical and virtual).
• High-data-rate, high-bandwidth vehicle telemetry for system-of-systems i-ISHM, and precision pointing capability.
• Broad testing to support the development of physics-of-failure databases, particularly for prognostics.
• Clear entry/exit criteria for i-ISHM products at critical milestones in the systems engineering process to enable i-ISHM to become a standard part of the early design process.
10 YEARS AND BEYOND GOALS
• Significant and scalable demonstrations that include i-ISHM as part of autonomous operations.
• Evolvable system models that adapt to degradation.
• Inexpensive secure communications.
• i-ISHM solutions that incorporate uncertainty methodologies into aerospace engineering design, development, and certification processes.
• Verification and validation methodologies that keep pace with advanced i-ISHM algorithms and processes.
12. ROBOTICS AND IMPROVING ADOPTION OF INTELLIGENT
SYSTEMS IN PRACTICE
12.1 INTRODUCTION
This section of the Roadmap focuses on the introduction of improved automation and intelligent system
capabilities for robots across all domains: land, sea, air, and space, with and without nearby human
presence. In particular, we focus our discussion on the needs for robotic technologies enabled by
intelligent systems and enabling applications that could open the door to the wider use of robotics. Our
goal is to articulate which intelligent system technologies would be required to dramatically increase
robotics capabilities and the research contributions necessary to build these enablers. These technologies
could enable the broad use of robotics – all robotics – across all aerospace domains to achieve operational
capabilities without human presence, such as mobile robotic platforms for planetary exploration in
deep space. The broad adoption of robotics could occur in the future when advanced intelligent systems
technologies coupled with advanced capabilities in software and hardware will become available.
Computer and mobile phone technologies illustrate what we might expect in terms of the dramatic
increase in technology adoption over time that could also be applicable to robotics in the future. Increases
in robotic capabilities made possible by intelligent systems may stimulate the demands for robotic
applications in the consumer market as well as in industrial, business, and academic settings.
Development of intelligent systems technologies for robotics including tool suite development and testing
capability is already happening in many industrial sectors such as the automotive industry. Google's self-driving car is a good example of an intelligent robotic platform. Technology development for robotics in the
aerospace domain is also advancing, but the pace of advancement varies depending on the level of
mission criticality. Aerospace robotic systems generally must demonstrate a high level of reliability for
extended operations without human presence, or must be designed to provide highly specialized
functions to reduce operational or mission risks to the human operator. System faults or failures could
mean a complete loss of the mission or the system. Mission criticality in aerospace robotic applications
thus requires a higher degree of intelligent systems technologies than perhaps consumer or industrial
robotic applications. Nonetheless, certain robotic technologies could have cross-cutting applications that
could be leveraged for the aerospace domain. For example, machine-learning perception technologies
being developed for self-driving cars could be applied to aerospace robotic platforms for feature detection
of stationary and moving obstacles, and for vision-based navigation.
In the context of this discussion, a robotic system refers to a physical robot that comprises mechanisms
and hardware, sensors and sensing systems, actuators and interfaces, computer systems and software,
and processing capabilities. The latter would include automation components that could, for example,
use sensory information to learn and perform onboard decision-making. The environment, or space, in
which the robot operates may be shared with the human during robotic operations, or the robot may
otherwise have to interface with humans via some remote connection or tele-operation. Thus, the robot
may need to sense, take direction from, interact with, collaborate with, and cohabitate with the human,
depending on the particular domain and specific applications.
There are some common technical challenges that exist in all robotic application domains. For example,
one common technical challenge in machine-learning perception technologies is the ability to accurately
detect subtle changes in geometric features of moving objects and the operating environment in which a
robot operates in order to perform correct decision-making. Other technical challenges may be
application and domain-specific. These technical challenges will be further discussed in the following subsections.
12.2 CAPABILITIES AND ROLES FOR INTELLIGENT SYSTEMS IN ROBOTICS
DESCRIPTION OF INTELLIGENT SYSTEMS CAPABILITIES
Robotics technology has classically been applied to dull, dirty, and dangerous tasks that a human would otherwise have to perform. More recently, this has come to include repetitive tasks that would otherwise fatigue a human quickly (leading to a lack of quality control), and tasks that must be carried out over very short or exceedingly long timescales with consistent accuracy and precision. Intelligent systems can contribute by extending robotic applications from repetitive tasks to problems that require judgment calls or tradeoffs that are today made by humans, on platforms with sensing and actuation that can support such operations. Whereas a current robotic system could remove a control
panel and make a repair, a robot with intelligent systems technology could recognize that a fault has
occurred, run through a series of diagnoses to determine the root source of the issue if one exists,
determine what repair actions need to take place, and then remove that control panel and make the
necessary repair, with perhaps some follow-up testing afterwards to make sure the problem has been
resolved. An intelligent robotic system with advanced monitoring and decision-making capabilities could
even determine when to schedule such a task, e.g. during a specific ‘after-hours’ time block when it
‘knows’ that the repair work will not negatively impact other operations, if the repair is not of vital and
overriding importance, or vice-versa. This is not science fiction; these examples are within our capabilities
to do today.
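The repair scenario above reduces to a detect-diagnose-plan-schedule loop. The following is a minimal sketch under stated assumptions: every function body is a stand-in for a far more capable component, and the symptom, cause, and action names are invented.

    # Minimal sketch of the repair scenario: detect a fault, diagnose a
    # root cause, plan repair actions, and defer non-urgent work to an
    # off-hours window. Every function body here is a stand-in.

    def diagnose(symptom: str) -> str:
        # stand-in for a real diagnostic reasoner
        return {"low_voltage": "corroded_connector"}.get(symptom, "unknown")

    def plan_repair(root_cause: str) -> list:
        return {"corroded_connector": ["remove_panel", "replace_connector",
                                       "verify_voltage", "refit_panel"]}.get(root_cause, [])

    def schedule(actions: list, urgent: bool) -> str:
        return "execute now" if urgent else "queue for after-hours window"

    cause = diagnose("low_voltage")
    print(cause, "->", schedule(plan_repair(cause), urgent=False))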
Today, state-of-the-art intelligent systems for robotics are rarely seen outside of a research lab
environment. Take, for example, the symbolic reasoning architectures such as the University of Michigan’s
SOAR and MIT’s Enterprise that can perform ‘human-like’ analyses and determine a list of step-by-step
procedures for what the robot would need to do in that domain, even taking into account probabilistic
measures and uncertainties in sensing and action-outcome. Highly capable robotic systems with high
levels of intelligent systems capability can also be found, primarily in industrial and in some cases military settings,
in large part due to system development cost and safety issues. There are also examples of robotic
systems ‘out on the street’ that display a high level of intelligent systems and autonomy – such as Tesla’s
Autopilot system (low- to mid-level), Google’s self-driving car project (high-level), and some of the ground
mobile systems developed for the DARPA Grand Challenges (mid- to high-level) – but these do not
illustrate widespread use of intelligent systems in robotics.
Increasingly, robotic systems are being developed to have expanded intelligent systems capabilities. The
military has had a long interest in increasing autonomy for rover and UAV platforms. This is driven by the
needs to carry out Intelligence, Surveillance, and Reconnaissance (ISR) without risks to the human operator, to reduce operator workload, and to prevent situational awareness issues associated with tele-operation. There have also been similar pushes for robotics with intelligent systems capabilities in other industrial fields, such as:
• Industrial shipping (aircraft, trucking, loading and unloading of container ships)
• Warehousing (storage and retrieval)
• Automobiles (safety features for accident minimization)
• Oceanic science applications (long-term collection of data; costly multi-sensory single vehicles giving way to cheaper, more autonomous multi-vehicle groups)
Robotic applications generally are developed to solve very specific, targeted problems. Intelligent systems
capabilities in some robotic applications, such as those in industrial settings, generally tend to be limited in scope. This is because technology development for intelligent robotics is still at an early stage. The lack of common standards or specifications for intelligent systems technologies
for robotics makes development of robotic systems highly customized, driving up development costs.
For robots operating outside of a structured setting, control of the robotic system has remained tele-operation-based, mainly due to issues with sensing and the amount of processing needed to evaluate trajectories in real time versus the computing power available. This is demonstrably solvable in research settings, but the transfer of this technology to non-specialized settings and other platforms still poses a technology barrier. There is also the matter of safe operation of these
systems, and how to handle responsibility and accountability of the system-under-operation, at whatever
level(s) of autonomy the system is capable of.
To enable the quick adoption of new intelligent systems technologies for robotic applications, we propose
the creation of a better development, integration, testing/V&V, and deployment chain for those new
decision-making, modeling, prediction, and risk-analysis (online safety) technologies. We can speed the
process of integrating new components by providing an established, flexible, already-implemented architecture that supports a variety of existing toolsets and includes a known-stable, functional platform with analysis tools, pre-evaluated baselines, and pre-defined scenarios ready for testing. This
will significantly reduce the development cost required to integrate and use those new algorithms and
techniques.
INTELLIGENT SYSTEMS ROLES AND EXAMPLE APPLICATIONS
The primary avenues for helping intelligent systems technologies make better inroads into more
widespread use are: human-assistance technologies (including physical robots in human spaces) and semi-autonomous operations (moving from tele-operation to human-selected levels of automation, with
decreasing need for direct oversight, by increasing system capabilities).
We need to move past having a “safe mode” be the only acceptable option for onboard autonomy.
Increasing the role of the robot in the decision-making process does not necessarily mean decreasing the
role of the human, but it does shift the human’s role to supervisory. In implementation, we should strive
to make the human's job easier and less stressful while:
• Providing automated robots with more control over the higher-level aspects of the problem (e.g., one or more humans able to issue commands to a fleet of robots working in conjunction with each other, rather than many humans coordinating with each other and commanding a single robot each)
• Allowing automated robots to concentrate more on their individual work effort (e.g., the robots require little to no oversight and communication happens only when necessary)
• Allowing automated robots to discover, learn, and remember more efficient, accurate, and safer methods and sequences for performing repetitive, physical manipulation or exploration tasks, and (eventually) allowing automated robots to learn, build, and change the structure of their own models themselves - not just the parameters - because a truly "intelligent" system requires this capability.
Desired intelligent systems for robotics roles include the following:
• Automated UAV deployment for surveying and environmental testing in rural areas (farmland, pipelines, power lines, sewage plants, forestry, etc.) and in oceans, lakes, and rivers (marine wildlife tracking, NOAA weather survey, measurement of pH and algae levels, etc.)
• Automated vehicular transport systems (cars, taxis, buses, trucks – could start with designated lanes that only automated systems can use)
• Automated commercial transport systems (cargo ships, trains, planes – could concentrate on automated transit capability, monitoring and notifying the human and asking for oversight only when off-nominal conditions occur)
• Upcoming space missions that require a significant increase in onboard autonomy, due to environmental uncertainty and communication delays (e.g., the Kuiper Belt and Venus lander missions)
• Smarter on-orbit operations in human spaces, such as:
o Worker unit, taking over repetitive tasks (e.g., Robonaut2 could do automated checklists, SPHERES could take environmental measurements or act as an in-situ sensor platform for Robonaut2)
o Collaboration in work assignment (assistant or enabler supporting human operations)
• Distributed robots, e.g., the Internet of Things (e.g., in a smart house, even a microwave could be considered a robot, when given a reporting mechanism and coupled with other platforms)
• Smarter factories (e.g., combined human-robot work situations with networked components, including factory robots that could learn to handle new situations on their own)
• Robots that build and repair themselves, from factory floor to self-reconfiguration during operations.
Desired intelligent systems for robotics capabilities to satisfy the roles listed above include the following:
• Facilitate human-provided, high-level specification of goals that the robot can automate and accomplish without the continuous need for human tele-operation.
• Help with scheduling of collaborative human and robotic tasks.
• Integrator/aggregator of data for human consumption in an understandable form (translation for more effective oversight/overwatch).
• Pinch-hitter for the fastest or slowest periods of operation (hand-off of control to the robot for either emergency situations or normal operations, depending on the domain, e.g., an imminent car crash to an onboard computer, steady level flight to an autopilot, or repetitive tasks for Robonaut 2).
• 'Meta-human' capability during long time-delays or communication blackout periods (ability to work within given constraints to perform 'extra' tasks without unduly jeopardizing the robot platform, while waiting for the next set of 'major' goals or instructions).
• Learning from a template, up to meta-learning (learning the templates for the models it uses and needs), and learning when it is safe to do this (bounds on learning).
12.3 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
TECHNICAL CHALLENGES
Technical challenges that need to be overcome to improve intelligent systems for robotics are:
• No known working method for auto-definition of problem domains for the robot - e.g., goals and constraints, scope of the problem, and priorities for tradeoffs (humans still need to do this for each and every new case, problem class, and scenario).
• Lack of agreed-upon domain- and vehicle-type-specific risk metrics of sufficient scope for interesting use cases.
• Lack of common safety metrics (recommended: grouped sets appropriate to function via type of robotic platform, human-interaction rating, safety-critical operational level, and autonomy level).
• Lack of common techniques for maintaining and enforcing consistency in the models and constraints across levels of abstraction, and no formal methods for guaranteeing appropriate overlap in capability (no gaps).
• Lack of common methods and APIs defined for connecting 'higher-level' decision-making processes to the lower-level trajectory planners and controllers.
• Lack of an ontology to describe the general functionality of algorithms for robotics use.
• Lack of test procedures (exhaustive, non-exhaustive) that give quantitative confidence in the systems-under-test performing to spec in their operational environment(s) (without learning).
• Lack of trusted techniques for V&V of learning systems (e.g., learned models, or model-learning techniques) for safety-critical use.
• No optimization procedure for choosing the 'best' algorithms to use in an architectural implementation for a robot, or the update rate of data, or the acceptable level of uncertainty of sensor data, etc. (these choices are currently made at design time and implicitly encoded within the decision-making structure of the robot).
• No explicit determination method for choosing the level of abstraction used by each algorithm or in the problem definition and models used.
• No rigorous procedures for deriving models from a central model at a given level of abstraction.
• No known way to encode a "central model" from which specific, useful models can be derived.
TECHNICAL BARRIERS
Many of the fundamental technologies needed to achieve the desired capabilities described above exist
today. Achieving the vision of using intelligent systems to advance robotics for aerospace and other
domains requires both fundamental and applied research. Demands for technologies need to be
established by the end-users and the robotics community.
At the system level for robotic systems, there are technology barriers that need to be overcome in order
to progress towards the widespread adoption of intelligent systems technologies. Some technology
barriers include:
• No known, easy end-to-end procedure (development, integration, testing/V&V, and deployment chain) for the export, sharing, and use of decision-making, modeling, prediction, and risk-analysis (online safety) technologies.
• Lack of common safety certifications and procedures for physical robotic systems (industrial and medical settings have some agreed-upon guidelines and ISO standards to meet; NASA has its own internal guidelines; etc.).
• Stovepiping in sub-disciplines associated with robotics (tunnel vision is almost required to dive deeply enough into an individual problem to solve it, which can lead to issues where not enough overlap occurs with other areas to fully cover the problem being solved; e.g., some assumptions may not be valid, or, conversely, may not be recognized or considered when they are actually crucial to the outcome).
• Lack of safety certification criteria and/or procedures for testing and certifying learning systems (e.g., controllers, estimators/models), both offline and online.
• The current tendency towards closed-source (hardware and software) model development, which stifles innovation, research, and scholarship (e.g., proprietary hardware technology that cannot be serviced or debugged by end-users; proprietary software that has a rigid API and available I/O that cannot be extended and is not feature-rich enough for developers to leverage for alternate use).
• No rigorous approach for determining what processes or algorithms should occur at which times, in which order, within what bounds (e.g., duration, flops, accuracy), at what update rates, for real-time operations (e.g., when should sensor updates occur in the course of calculations? Exactly what data is necessary to supply to each level of operations, and with what level of uncertainty, to guarantee stability of the entire robotic system?).
• Lack of definition or understanding of what total system stability or robustness means at the systems level (e.g., robustness to failure? what types of failures? robustness to bad input? at what parts of the process?).
• Widespread misuse of the term "unknown" for "unexpected" when discussing short-term required IS capabilities – meta-learning for robots is at least 20 years away (even humans have difficulty trying to "handle unknown situations"; e.g., if aliens landed on Earth, humans wouldn't know what to do).
Without lowering or removing the above technology barriers, we can make few guarantees about a
robotic system’s operations, which in turn makes it very difficult to convince others to use such new
technology – and for good reason.
POLICY AND REGULATORY BARRIERS
We advocate a mixed human and intelligent robotics environment for aerospace and other domains with
user-adjustable levels of automation. A mix of humans and intelligent robotics is expected to demonstrate both increased efficiency and safety, given the adoption of a carefully developed framework within which operations could guarantee such safety. As a result, we hope to avoid some of the policy and regulatory
issues currently associated with autonomy.
However, even separated spaces (in time or space) where robots simply ‘stay out of the way’ are not
necessarily a guarantee of safety, or of pre-emptive regulatory compliance. For instance, there are
many policy and regulation barriers that still stand in the way of UAS, even UAS outside
the national shared airspace (see that topic in the roadmap). For the other realms (land, sea, space), the
problem in a sense is that we have no real policy for the inclusion of advanced robotics in public spaces
yet and, admittedly, this is because the technology has not been ready to be included in these spaces until
recently, primarily due to safety concerns, secondarily due to sensing and estimation issues (level of
uncertainty, etc.).
Current policy and regulation barriers include:
• Lack of rules for 'social etiquette' between humans and robots (and vice-versa) in public spaces where robots move.
• Lack of safety rules or guidelines for robots in non-industrial settings (new rules exist for shared human-robot workspaces in factory settings, but not elsewhere), and a lack of good, universally-applicable safety metrics for general spaces (stores, offices, city streets, county roads, rivers, coastlines, international waters, low-Earth orbit (LEO), near the International Space Station (ISS) or near a space shuttle, geosynchronous Earth orbit (GEO), open plains, etc., and how these might change according to the human population, such as people in hospitals, or people in pressure suits or spacesuits who are more endangered).
• Lack of formal, unified rules for testing robots that are meant to operate within human-shared spaces (independent review boards and researchers have no universal guidelines; currently, the spirit of IRB testing might require that all testing of highly-capable robots that are strong enough to hurt a person occur in virtual reality simulation before testing interactions between humans and the actual robot hardware, and this would make testing much more difficult).
• Policy on environmental impact and testing should be summarized and made available to a wider audience (the converse of the previous item – robots being tested to make sure that they do not contaminate or degrade the (space) environments to which they are sent).
• Lack of set or widely understood rules for the distribution of control and responsibility for what robotic platforms do (e.g., when an error of a particular type occurs, is it primarily the 'fault' of the end-user due to operation? The manufacturer due to a hardware fault? The programmer due to a software fault? All three, to varying degrees?).
Possible approaches or solutions:
• Work with policymakers to determine what appropriate regulations could be put in place for varying types and levels of robotic systems that can be (a) phased in over time and (b) changed as capabilities increase.
• Requirements on the information made available to other human-controlled actors in nearby spaces (e.g., planned trajectories that could be displayed on a heads-up display to a human, so they can more easily determine what the robot will do next, or set methods for showing 'intent' that a human can parse, like 'body language' and 'eye motion' for humanoid platforms).
• Work on reasonable guidelines for control hand-off for centralized and distributed control, with sliding-mode autonomy.
• Work on guidelines for shared responsibility of robots being operated in certain spaces, for both tele-operation and robots in semi-autonomous and fully-autonomous modes.
• Develop recommended virtual reality and augmented reality simulation environments, sensors, tracking units, haptic devices, and associated sets of hardware and software that can be put together for independent review board-approved human-subject testing.
IMPACT TO AEROSPACE DOMAINS AND INTELLIGENT SYSTEMS VISION
Robotics has always fascinated the general public, but the public has also been somewhat disappointed in the level of remote human direction necessary for robots to perform, as was demonstrated in 2013 during the DARPA Robotics Challenge Trials.30 While the robot operates remotely from the human, the human operator's tele-presence is necessary. When common intelligent systems tools for robotics are
developed, it should be faster and easier to develop and implement variable levels of automation for
robots. With the ability to demonstrate this to the public, demand for intelligent systems that add to
robotic automation should increase dramatically.
The recognition of the potential importance of intelligent systems to the robotics community will
influence where the intelligent systems community puts its emphasis for applied research and
development.
30 (2013) DARPA Robotics Challenge Trials. [Online]. http://archive.darpa.mil/roboticschallengetrialsarchive/gallery/
If not resolved, these technology barriers could lead to the following impacts:
• Stovepiping / lack of common open-source baseline tooling for integration:
o The process of technology transfer is much more difficult.
o Lack of collaboration between the many different specialized groups needed to create an entire working robotic system with the required functionality.
o Companies that specialize in creating platforms explicitly designed for robotics research for academic and other research labs could have a significant, possibly negative impact on intelligent systems development (proprietary hardware and software can impede progress, as can a lack of basic functionality for supporting intelligent systems techniques).
o Stifled development and unnecessarily slow progress in innovation, research, and scholarship.
• Lack of safety metrics and certification criteria:
o Operations of physical robotic systems will be much reduced in scope.
o State-of-the-art adaptive and learning systems will not be able to be implemented on advanced robotic systems, restricting use to known, well-characterized environments.
12.4 RESEARCH NEEDS TO OVERCOME TECHNOLOGY BARRIERS
RESEARCH GAPS
To institutionalize the use of intelligent systems in robotics, an applied and coordinated national research
program is needed to create a common architectural framework to allow modular development and
testing of intelligent systems technologies. This should include modular components for general robotic
domain functions such as perception, situation assessment, activity planning, movement coordination,
feedback, outcome experience archival, learning for agility and dexterity, safety assessment and evaluation, fault detection, mitigation, and recovery, etc. The software modules should be created for
easy integration with other like modules using to-be-developed standard interfaces between intelligent
systems components.
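As a hedged sketch of what such a modular framework might look like, the example below defines one small interface that every intelligent systems module implements, so that modules can be chained, swapped, and tested independently. The interface, module names, and pipeline are assumptions, not an existing standard.

    # Hedged sketch of a modular IS framework: every module implements
    # one small interface so modules can be swapped and tested alone.
    # The interface, names, and pipeline here are invented.

    from abc import ABC, abstractmethod

    class ISModule(ABC):
        name: str
        @abstractmethod
        def step(self, inputs: dict) -> dict:
            """Consume upstream outputs; produce this module's outputs."""

    class Perception(ISModule):
        name = "perception"
        def step(self, inputs):
            return {"obstacles": []}  # stand-in for real sensing

    class Planner(ISModule):
        name = "planning"
        def step(self, inputs):
            return {"plan": "hold" if inputs.get("obstacles") else "proceed"}

    def run_pipeline(modules, inputs=None):
        data = inputs or {}
        for m in modules:
            data.update(m.step(data))
        return data

    print(run_pipeline([Perception(), Planner()]))

Because each module sees only a dictionary of named inputs and outputs, a new planner can be dropped in and verified against the same interface without re-validating the perception module.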
This separation of software modules might result in some loss of optimality; however, conversely this
decomposition of the larger problem of intelligent control should allow us to tackle problems that would otherwise be intractable or infeasible to solve. If the interfaces and framework are set up in a
rigorous manner, and are able to be characterized and thus well-understood, verification and validation
of the individual components and the general system should be easier to manage. Further, the addition
of new modules might then only require V&V of the individual component itself, and also its (limited)
impact as-situated within the rest of the already-verified existing framework. Modularizing the
components and ‘robotics OS’ could also allow for robotics to extend in ways that we don’t normally
consider – for instance, many biological systems have distributed brains and control mechanisms.
However, more research needs to be done in this area: the lack of non-exhaustive testing methods, and of commonly known and used formal V&V analysis methods, for these more intelligent systems is currently a bottleneck for the development and widespread adoption of intelligent systems in robotics. We need to foster further collaboration with the relevant experts in the field.
A glaring gap in intelligent systems research is the pressing need for software security. Currently, the problems we face are difficult enough that we have thus far ignored the need for security,
as we are still trying to solve what we consider the basic, fundamental problems in the field, especially
V&V, trust, and certification. However, software security is a vital piece of this that must be addressed,
sooner rather than later. Most engineers not specializing in the field generally think of software security
and similar issues as something to be solved in application, in the final deployment, as an add-on to the
functionality we want. However, with the very complex intelligent systems, we may need to take this into
account in a more integrated manner as part of the early design and analysis process. Those working with
the broader field of cyber-physical systems have already begun to show how, in distributed robotics, this
is a very important concern; for instance, if we need to take into account attack signals or ‘untrustworthy’
sensors, this can have a significant impact on the overall design of our systems, such as the number and
placement of those sensors on our platforms. We need to start a dialog with the community on this topic,
and more research needs to be done. But these are very real concerns, and they impact the
trustworthiness of the systems as much as a lack of V&V analysis would. No end-user would want to have
to worry about the possibility of someone else accessing or, worse, controlling their robot, car, or smart
house, just as they would not want to worry about someone hacking their cellphone or laptop computer.
In some ways, the hacking of robotic platforms is more of a safety concern, as robots are physical systems
that by their nature can have a physical impact on anything, or anyone, in their immediate surroundings.
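As one concrete illustration of a basic defense against the "untrustworthy sensor" concern, the sketch below authenticates each sensor message with an HMAC so that spoofed readings can be rejected. The message format is invented, and key provisioning and management, which dominate real designs, are out of scope here.

    # Illustrative sketch: authenticate each sensor message with an HMAC
    # so spoofed or tampered readings are rejected. The key below is a
    # placeholder; real key handling is a design problem of its own.

    import hmac, hashlib

    KEY = b"shared-secret-provisioned-at-integration"  # placeholder

    def sign(payload: bytes) -> bytes:
        return hmac.new(KEY, payload, hashlib.sha256).digest()

    def verify(payload: bytes, tag: bytes) -> bool:
        return hmac.compare_digest(sign(payload), tag)

    msg = b"lidar:range=4.2"
    tag = sign(msg)
    print(verify(msg, tag))                   # True: authentic reading
    print(verify(b"lidar:range=0.1", tag))    # False: tampered reading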
OPERATIONAL GAPS
The longer the intelligent systems and robotics communities delay the development of a standardized suite of intelligent systems modules for robotics, the longer it will be before we can enjoy the business efficiency and increases in human safety that we could achieve by transferring the most
dangerous jobs to robot hands. Promoting open standards and standardized tools will make it easier for
new researchers in related fields to enter the community and contribute their own ideas, and will also
allow laymen to leverage these advances and explore their use in alternate settings, leading to an increase
in the use of intelligent robots. Being able to label ‘safe-use’ bundles of components, with some guarantee
when run within specified conditions and with clear explanations of the limits of the system, will also
promote the expansion of (and safety record of) collaborative human and robotic environments.
The lack of open APIs and a closed-source model for advanced technologies also hampers development.
Lack of data-sharing between components and arbitrary omission of internal dependencies can also
hamper development. Modularization only goes so far; differing levels of data-sharing between
components is necessary for whole-system stability. One example of this is the DARPA Robotics Challenge Atlas platform by Boston Dynamics, the legs of which had limited interfaces available and were
meant to operate ‘separately’ from the top half of the robot. However, there was restricted dataflow in
terms of what could be ‘told’ to the legs from the upper half / torso, and thus the system could become
unstable, i.e. the robot could fall over, if the arms or torso moved too quickly or the forces were too high.
Characterizing these limits of the system was likely difficult, and the limits severely restricted what could be done with the Atlas robot, imposing unnecessary restrictions on motion and general capability.
Restricting dataflow unnecessarily between components should be avoided, if not actively discouraged.
Cyber-physical systems concepts could be folded in here, e.g. having ‘heads-up’ messages that could be
sent from the ‘arms’ to the ‘legs’, to give the ‘legs’ some idea of what forces they may need to
counterbalance or offset, and/or perhaps prompt a speed-up of sensor readings and processing power in
the lower portion of the unit – not working in a vacuum – since the upper and lower halves of the robot
are not truly independent. There are also the issues of the inability of a research team to service or debug
problems with a closed-source platform, or a platform where parts of the platform use proprietary
technology, since this can cause long development lead times.
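The "heads-up" dataflow suggested above might be sketched as follows; the message type, its fields, and the publish/subscribe mechanism are all invented for illustration.

    # Hedged sketch of "heads-up" dataflow: the arm controller publishes
    # expected reaction forces so the leg controller can anticipate them
    # rather than discovering them as disturbances. All names invented.

    from dataclasses import dataclass

    @dataclass
    class HeadsUp:
        source: str
        expected_force_n: float   # anticipated reaction force, newtons
        horizon_s: float          # how far ahead the warning applies

    subscribers = []

    def publish(msg: HeadsUp):
        for callback in subscribers:
            callback(msg)

    def leg_controller(msg: HeadsUp):
        # stand-in: pre-stiffen the stance in anticipation of the load
        print(f"legs: bracing for {msg.expected_force_n} N from {msg.source}")

    subscribers.append(leg_controller)
    publish(HeadsUp(source="right_arm", expected_force_n=120.0, horizon_s=0.5))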
RESEARCH NEEDS AND TECHNICAL APPROACHES
Research needs and technical approaches to advance intelligent systems technologies in robotics include:
• Survey existing intelligent systems for robotics toolsets and propose known, stable, functional intelligent systems components for use with the overarching architectural framework.
• Consider standards for intelligent systems module interfaces, so that the intelligent systems modules become plug-and-play.
• Ensure community validation of the processes above and publish success stories of the implementation of intelligent systems for robotics.
• Advocate for research and educational platforms that incorporate pre-loaded advanced planning and scheduling algorithms that are open-source, in an easily extensible software ecosystem.
o There is currently a lack of structural support for the easy addition and integration of advanced intelligent systems algorithms and techniques on top of the current state-of-the-art capabilities that a platform offers.
o The research robots available, their capabilities, and their ease of use could have a significant impact on the directions of the research.
o 'Solving' this is an ongoing challenge, but would go a long way towards lessening the impact of discipline stovepiping.
• Support the creation of architectural frameworks for intelligent systems in robotics that more easily allow for the development, testing, V&V, and implementation of intelligent systems technology (e.g., such as a Robot Operating System for multiple-task decision-making under sloppy, distracting, real-world conditions):
o Identify common sets of core intelligent systems capabilities necessary for certain classes of robotic operations (e.g., learning, fault detection, intent prediction, risk analysis, safety margin) to help drive development of missing capabilities.
o Evaluate which current intelligent systems technologies apply to multiple domains.
o Develop standard interfaces between core intelligent systems capabilities (e.g., inputs and outputs of classes of algorithms, model representations).
o Develop verification and validation procedures for the architectural connections between modular intelligent systems components (e.g., no deadlock, sufficiency of functional coverage for real-time operations).
• Identify better metrics and analysis procedures for evaluating these intelligent systems, especially during the development and testing stages:
o Define new metrics to quantify, and techniques to evaluate, the risk and safety associated with a particular system implementation, relative to its ability to achieve goals in a given domain.
o Develop analysis procedures to determine whether a set of given algorithms will support robot operations for a particular use case (e.g., uncertainty in sensor data and time delay is low enough that the entire system can be considered stable).
• Develop methods for defining a general model and constraints, and methods for its abstraction or refinement (problem consistency), for intelligent systems use:
o Identify types of failures common to specific problem domains and/or classes of robotic systems, and start building a database of these for common use/reference.
o Determine general methods for encoding domain knowledge that could more easily allow for the automated construction of problem domains.
o Develop common ontologies or descriptive languages to encode domain knowledge across problem domains.
PRIORITIZATION
The following is a brief outline of priorities for research to overcome technical challenges that could slow the adoption of intelligent systems technologies for robotics:
• Consider forming a small technical group of subject matter experts (SMEs) from both the robotics and intelligent systems communities. Ensure that the government, industry, and academic communities are equally represented.
• Prioritize opportunities for the injection of intelligent systems technologies into robotics.
• Solicit funding from targeted organizations for research opportunities and challenges.
• Use the funding not only to provide a demonstration of an intelligent system capability for robotics, but also to follow through on establishing a flexible overarching architectural framework for intelligent systems in robotics.
For the injection of intelligent systems technologies into robotics, specifically, we should:
• Connect with the companies that produce research robots and work with them to produce systems that provide a top-shelf set of core attributes.
• Work on a better open-source model for doing state-of-the-art robotics research. Avoid closed-sourcing new technologies prematurely. Encourage the use of common models and frameworks for faster development, implementation, and testing of new technologies.
• Encourage and support the development and use of existing open-source software to avoid having to "reinvent the wheel" (e.g., the Robot Operating System (ROS)), and help identify, popularize, and rank what alternatives are available and for which purposes.
• Work on better structures for intelligent systems development support.
o Agree upon what functionality is most necessary to support in the short- and long-term, and the interfaces that are necessary between most components (algorithms and technologies that supply information useful to each other, that could theoretically be chained together, and that require and/or could supply the data necessary for a set of components to function). This is important, as common API interfaces are helpful for integrating different pieces and for testing.
o Determine what benchmarks are necessary and relevant for each type of module/functionality in the global structure, and identify common metrics for evaluating an intelligent system as a whole.
o Encourage the development of support code (implemented code structures) that can act as the "glue" between disparate components, from the highest- to the lowest-level control code.
• Determine "killer applications" that would benefit highly from the introduction of intelligent systems technologies, and "advertise" these to a wide audience.
o Ideally, this would be a highly desirable application that would require advanced algorithms to work at all, or with any reasonable efficiency. Some examples might include:
- "Drones"/UAVs that do crop-dusting and/or survey herds of livestock
- A personal or home assistant robot
o The "killer app" platform should be safe for use in the intended environment and should also be expendable. The platform should promote interactivity with the user to provide useful feedback to the intelligent systems community. To do this, we need to, at a minimum:
- Make the robots physically more robust (e.g., hardware-hardened, waterproof)
- Make the robots low-cost and replaceable/easily serviceable
o As more people use these robotic systems, the more likely it is that they will be used in unexpected ways. By pushing the current boundaries, the robotics community can learn from those applications and the user feedback; we can iterate on the designs and expand them into
new applications and domains. This process will help speed development of acceptable practice
both socially and ethically where robots would be viewed as friends of the human.
• Start a discussion on the necessary enablers for widespread use (e.g., V&V and certification, “user-friendliness”, social acceptance, and laws and/or responsibility contracts that cover accidents and intentional misuse).
o One way to help overcome the social inertia is to showcase a desired capability that generates great demand for the “product” being offered, up to the point that people will accept the risk; flying, for example.
o Further, identifying the benefits that will come from the inclusion of intelligent systems technologies will show, for each “killer app” use case, that it is worth the risk of implementing/introducing intelligence.
o Another way to grow acceptance, and to allow social inertia to shift naturally, is to make a system very good at minimum capabilities but provide a way to gradually add or upgrade onboard autonomy that can be tailored as trust is gained. There can also be training and trials where people can try out the system beforehand and decide what autonomous capabilities, and how much autonomy, they want to have. The drawback is that this could introduce difficulty in managing different elements of autonomy.
o For aerospace platforms, before deploying intelligent systems in space or in aircraft, test and verify new intelligent systems technologies on the ground first via real-world use (e.g., baggage handling, real-time goal-following crowd/obstacle navigation, target survey, self-reconfiguration and repair, etc.).
• Leverage current human-human interaction knowledge more heavily, as some of this is already known to transfer to human-robot and robot-robot interactions. Studying what can and cannot be leveraged is also important, and may help better distinguish the (evolving) boundaries of what robots can/should and cannot/should-not do.
• Stay aware of the development drivers and capability bottlenecks for widespread robotics adoption.
o Develop a good robot capability / “killer app” use case now to help drive intelligent systems development.
o Collaborate with robot manufacturers to provide certain advanced capabilities out of the box.
o Increase modularity and Application Program Interfaces (APIs) to help development.
o Develop reliable communications, especially for large, decentralized groups of robots.
• Encourage systems-of-systems thinking, and help advance systems engineering and system-of-systems engineering. Maturing these fields is crucial to being able to evaluate and analyze these complex robotic systems properly.
• We should also attempt to transfer control systems thinking and concepts to interested individuals, as the tools and rigor of that field are useful in a broader context and can and should be extended to systems-level analysis of these complex robotic systems (and their general interconnected architectures).
• Solicit funding from government organizations and the robotics industry for applied research opportunities and challenges. This significantly helps boost robotics and intelligent systems development.
13. GROUND SYSTEMS FOR SPACE OPERATIONS
13.1 INTRODUCTION
For over 20 years, both the space systems and computer science research communities have been
addressing satellite ground system automation.31 Human-machine teaming coupled with hierarchical intelligent systems may lead to more interdependent man-machine systems that make space operations faster, safer, and less costly. A key objective of this converged technology is to
reduce the probability of near-miss catastrophes.32 This section focuses on the approaches available and
technologies needed for increased man-machine interdependence as well as intelligent automation of
ground systems that support space operations. Space operations examples are used when needed to
illustrate potential implementation. An abbreviated history and perspective on a path forward for
automation of ground systems for space operations is provided below.
Early ground system automation efforts were supported by NASA Goddard Space Flight Center (GSFC).33 Numerous papers document the efforts to achieve “lights-out” payload and satellite operations for NASA science missions.34,35,36,37 Many of the early ground system automation efforts took advantage of things
that were easy to automate. For example, several of the instantiations were rule-based and alerted
satellite operators via pager, text message, or e-mail and could execute authorized, well-understood
procedures when key variables were trending toward set limits. If a satellite and payload were well
behaved, then alerts were infrequent. Despite papers touting success, automation of ground systems for
space operations is not yet as widespread as anticipated.
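As a rough illustration of the rule-based pattern described above, the sketch below watches a telemetry channel and, when a simple linear trend projects a limit crossing, notifies operators and runs a pre-authorized procedure. The channel name, threshold, and procedure are hypothetical, and the notification functions are stand-ins for mission-specific pager/e-mail gateways:

```python
# Minimal sketch of an early-style rule-based limit monitor: project the
# recent trend of a telemetry channel, and if a limit crossing is
# projected, notify operators and run a pre-authorized procedure.
# Channel name, limit, and procedure name are hypothetical.

def trend_projection(samples, horizon):
    """Project the channel value `horizon` steps ahead using a simple
    linear fit over the recent samples."""
    n = len(samples)
    mean_x = (n - 1) / 2.0
    mean_y = sum(samples) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(samples))
    den = sum((i - mean_x) ** 2 for i in range(n)) or 1.0
    return samples[-1] + (num / den) * horizon

def alert_operators(message):
    print("PAGE/E-MAIL:", message)        # stand-in for a pager/e-mail gateway

def run_authorized_procedure(name):
    print("EXECUTING PROCEDURE:", name)   # stand-in for a command procedure

def check_battery_temp(samples, limit_c=45.0, horizon=10):
    projected = trend_projection(samples, horizon)
    if projected >= limit_c:
        alert_operators(f"Battery temperature trending toward {limit_c} C "
                        f"(projected {projected:.1f} C)")
        run_authorized_procedure("reduce_charge_rate")

check_battery_temp([40.0, 40.6, 41.3, 41.9, 42.8])
```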
During this same period, a separate technical development area focused on common ground systems for
space operations began to receive emphasis. According to a NASA GSFC website,38 the “Goddard Mission
Services Evolution Center (GMSEC) provides mission enabling, cost and risk reducing data system solutions
applicable to current and future missions managed by GSFC. GMSEC was established in 2001 to coordinate
ground and flight system data systems development and services.” The United States Air Force is
31 P. Zetocha, R. Statsinger and D. Frostman, “Towards Autonomous Space Systems,” Software Technology for Space Systems Autonomy Workshop, Albuquerque, NM, 22-25 Jun 1993.
32 R. L. Dillon, E. W. Rogers, P. Madsen and C. H. Tinsley, “Improving the Recognition of Near-Miss Events on NASA Missions,” IEEE Aerospace Conference, Big Sky, MT, 2-9 Mar 2013.
33 J. B. Hartley and P. M. Hughes, “Automation of Satellite Operations: Experiences and Future Directions at NASA GSFC,” Fourth International Symposium on Space Mission Operations and Ground Data Systems, Vol. 3, Nov 1, 1996.
34 J. Catena, L. Frank, R. Saylor and C. Weikel, “Satellite Ground Operations Automation: Lessons Learned and Future Approaches,” International Telemetering Conference, Las Vegas, NV, 23 Oct 2001.
35 R. Burley, G. Gouler, M. Slater, W. Huey, L. Bassford and L. Dunham, “Automation of Hubble Space Telescope Mission Operations,” AIAA SpaceOps, 2012.
36 A. Sanders, “Achieving Lights-Out Operation of SMAP Using Ground Data System Automation,” Ground Systems Architecture Workshop (GSAW), Los Angeles, CA, Mar 20, 2013.
37 A. Johns, K. Walyus and C. Fatig, “Ground System Automation: JWST Future Needs and HST Lessons Learned,” AIAA Infotech@Aerospace 2007 Conference, Rohnert Park, CA, 7-10 May 2007.
38 https://gmsec.gsfc.nasa.gov
reportedly pursuing a similar Enterprise Ground Services (EGS) concept.39 The European Space Agency
(ESA) has similar intentions.40
Emphasis on the common satellite ground system combined with a desire for “lights-out” operations
provides an excellent opportunity for intelligent systems to contribute to development of an appropriate
level of human-machine teaming, and automation of ground systems for space operations. But there are
acknowledged pitfalls. For example, user-centered design lessons learned41 were detailed in the Autonomy Paradox.42 Researchers found that “the very systems designed to reduce the need for human operators require more manpower to support them.” The cognitive engineering, human effectiveness, and human-centered computing communities attribute the lack of manpower savings associated with autonomy primarily to the practice of adding a human interface to autonomy programs as one of the last steps in development. Most people believe that adding a graphical user interface (GUI) is easy, so the interface can be added last, but after-the-fact engineering of the human-machine interface does not create an effective man-machine team.43 To establish proper human-machine teaming, the man-machine work environment should be designed first. Applications are then implemented to work within a natural man-machine teaming environment. This avoids the commonly encountered situation in which operators must learn multiple interfaces and translate results between programs.
Ground systems for space operations perform functions such as space vehicle commanding, mission
planning, state of health monitoring, and anomaly resolution, as well as the collection, processing, and
distribution of space systems payload data. Ground systems for space operations may also include
functions such as the tracking of space objects, collision avoidance, rendezvous and proximity operations.
Traditionally, the ground systems segment of most space programs has received less emphasis than the development of on-orbit space vehicle technologies, which has delayed the advancement of the ground segment. This is one of the reasons why ground system functionality for maintaining safe spacecraft operations, maneuvering, and responding to anomalies has changed little in recent years. In
particular, the core anomaly detection and reporting technique of limit checking in the current space
command and control ground infrastructure has not advanced substantially over the past several decades.
The primary advance has been that space ground systems now run on commodity workstations rather
than mainframes. While there are efforts to create an enterprise ground service across satellite
constellations, there will still be issues with space data interoperability.
A few additional issues with legacy space command and control systems include:
• Substantial amounts of spacecraft state-of-health and payload telemetry data are brought down, but underutilized.
• Primitive anomaly detection and reporting techniques miss important abnormal signatures.
39 http://www.reuters.com/article/2015/04/16/us-usa-military-space-ground-idUSKBN0N72QO20150416
40 http://www.esa.int/About_Us/ESOC/Europe_teams_up_for_next-gen_mission_control_software
41 J. Fox, J. Breed, K. Moe, R. Pfister, W. Truszkowski, D. Uehling, A. Donkers and E. Murphy, “User-Centered Design of Spacecraft Ground Data Systems at NASA’s Goddard Space Flight Center,” 2nd International Symposium on Spacecraft Ground Control and Data Systems, 1999.
42 J. L. Blackhurst, J. S. Gresham and M. O. Stone, “The Autonomy Paradox,” The Armed Forces Journal, http://www.armedforcesjournal.com/the-autonomy-paradox/, Oct 2011.
43 M. Johnson, J. M. Bradshaw, R. R. Hoffman, P. J. Feltovich and D. D. Woods, “Seven Cardinal Virtues of Human-Machine Teamwork: Examples from the DARPA Robotics Challenge,” IEEE Intelligent Systems, pp. 74-80, Nov-Dec 2014.
• Abnormal and anomalous event signatures are not autonomously archived and aggregated to provide real-time context during future events.
• Human subject matter expert (SME) technical expertise and decision-making are not being archived and retained.
The issues above result in the operator’s inability to consistently and objectively put current space
operations events into the context of success or failure related to all previous spacecraft events, decisions,
and possible causes. Keeping track of this has become a “big data” problem. Big data problems often
require big data solutions, not just simple extrapolations of the current methodologies. The continuation
of legacy satellite operations approaches is not a recipe for success. Development, testing, and
evaluation of a more top-down holistic man-machine teaming approach to introduce more
interdependent man-machine management of space command and control across multiple constellations
appears to be a worthy alternative. It is preferable to consider intelligent systems contributions to ground
system automation now, while space operators are converging on common ground systems.
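As one small piece of such a big-data solution, the sketch below (a hypothetical schema, not an existing system) shows how abnormal-event signatures and their resolutions could be archived and then retrieved by similarity, so that a current event is automatically placed in the context of previous spacecraft events, decisions, and causes:

```python
import math

# Hypothetical schema for an abnormal-event archive: store each event
# signature with its description and resolution, then retrieve the most
# similar past events to give operators real-time historical context.

def _distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class EventArchive:
    def __init__(self):
        self.events = []   # list of (feature_vector, description, outcome)

    def record(self, features, description, outcome):
        self.events.append((list(features), description, outcome))

    def similar(self, features, k=3):
        """Return the k archived events closest to the current signature."""
        return sorted(self.events, key=lambda e: _distance(e[0], features))[:k]

archive = EventArchive()
archive.record([0.9, 0.1], "reaction wheel speed spike", "benign; thermal snap")
archive.record([0.2, 0.8], "bus undervoltage", "battery cell degradation")

for feats, desc, outcome in archive.similar([0.85, 0.15], k=1):
    print(f"Most similar past event: {desc} -> {outcome}")
```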
Few intelligent systems are used today in ground systems for space operations, due in part to the risk-averse nature of space programs. The result is that the number of people generally
required to support manual space operations is larger than it needs to be and the cost for space operations
remains higher than it should be. There is demand to drive down space system operations costs, to reduce
the response time to the detection and resolution of anomalies, and to reduce the potential for human
errors. Appropriate combinations of human-machine interdependence and the application of intelligent
systems below the Human-Machine Integration (HMI) layer to achieve operator goals and objectives,
manage ground system resources, as well as capture and assess human subject matter expertise should
be appealing to both the common ground system and “lights-out” communities. The authors believe that
an applied research and development effort could substantially advance the technology readiness level
(TRL).
There are strong synergies between this section and the HMI, ISHM, Big Data, and Robotics sections of
this roadmap. Our desire is to build on these synergies and collaborate rather than have each domain
exist and compete for resources independently. Solutions developed for effective human-machine
teaming and ground systems for space operations automation can be applied to other complex technical
operations.
13.2 INTELLIGENT SYSTEMS CAPABILITIES AND ROLES
DESCRIPTION OF INTELLIGENT SYSTEMS CAPABILITIES
Recently there has been increasing acceptance of intelligent systems technologies performing “big data”
evaluation of satellite states for abnormality detection and reporting.44 There is demand for more
comprehensive detection and reporting, but feedback from space operators is that the interface to “big
data” techniques has to be intuitive. This presents opportunities for higher-level intelligent systems to
contribute to the management of “big data” techniques. A likely place to start is to focus on human-machine teaming and intelligent automation. A path forward is to develop and demonstrate tools that facilitate reliable, trustworthy human-machine teaming and intelligent automation of technical tasks currently performed solely by competent ground system operators.
44 C. Bowman, G. Haith and C. Tschan, “Goal-Driven Automated Dynamic Retraining for Space Weather Abnormality Detection,” AIAA Space 2013 Conference Proceedings, May 2013.
A short list of desired intelligent systems capabilities in ground systems for space operations includes the following:
• Optimize human-machine teaming to augment human operator performance.
• Reduce the probability of human commanding error.
• Increase situational awareness of potential threats to satellite/mission health by fusing data from all relevant sources.
• Reduce the elapsed time to detect problems and make decisions.
• Avoid or minimize space system anomalies due to interference from other space systems, internal equipment, or the natural environment.
• Optimize spacecraft operations to extend mission life.
• Increase mission productivity and data return.
• Automatically maintain the viability of intelligent systems based on user goals and performance feedback, even as space system behavior changes with age.
INTELLIGENT SYSTEMS ROLES AND EXAMPLE APPLICATIONS
Intelligent systems could perform the following roles during a phased increase of human-machine teaming and ground system automation for space operations:
• Efficient management of ground system resources to achieve operator-specified goals and objectives, including automation of low-level technical activities at a ground system.
• Automated detection and reporting of abnormal spacecraft states based on comprehensive evaluation of spacecraft and payload telemetry (a minimal sketch follows this list).
• Archival, analysis, and quantification of the technical skills of human Subject Matter Experts (SMEs) currently performing technical tasks on ground systems.
• Monitoring of operator actions, advising SMEs when an action proposed for a ground system has previously produced undesirable results.
• A hot backup available to take over limited ground system control from human operators, if needed.
• Optimized mission planning.
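The following is a minimal sketch of the automated abnormal-state detection role above: learn a per-channel statistical baseline from nominal telemetry, then flag samples that deviate beyond k standard deviations. Channel names and the threshold are hypothetical; an operational system would use far richer models and trend context:

```python
import statistics

# Learn per-channel (mean, stdev) from nominal telemetry, then flag
# samples that deviate by more than k standard deviations.

def learn_baseline(history):
    return {ch: (statistics.mean(v), statistics.stdev(v))
            for ch, v in history.items()}

def abnormal_channels(baseline, sample, k=4.0):
    flags = []
    for ch, value in sample.items():
        mean, std = baseline[ch]
        if std > 0 and abs(value - mean) > k * std:
            flags.append((ch, value))
    return flags

history = {"bus_voltage": [28.1, 28.0, 28.2, 27.9, 28.1],
           "wheel_rpm":   [1500, 1505, 1498, 1502, 1501]}
baseline = learn_baseline(history)
# Flags the undervoltage, leaves the nominal wheel speed alone.
print(abnormal_channels(baseline, {"bus_voltage": 26.0, "wheel_rpm": 1503}))
```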
DESIRED OUTCOMES
The proper introduction of human-machine teaming for space operations will have been successful if the following top-level metrics are achieved:
• The space command and control mission requires fewer personnel over time to satisfactorily accomplish more work than is possible today.
• The system actively helps operators stay at a high technical skill level so that they do not become complacent or dependent.
• Management is able to assess the skill and accuracy of each human space operator, as well as of human-machine teams, on specific technical tasks, and how that skill level varies over time.
• The skill of the machine portion of the system can be quantified and improves over time.
• The machine portion of the system can reliably perform an increasing number of tasks over time.
• The human-machine teaming is configurable on the fly, and changes are automatically tracked so that there is full accountability for system configuration and tasking.
• The system can revert to complete human control instantaneously, if needed.
• Technical skills for accomplishing specific command and control tasks become embedded and retained by the system so that the details of these skills are available to all future space operators.
13.3 TECHNICAL CHALLENGES AND TECHNOLOGY BARRIERS
TECHNICAL CHALLENGES
Due to the risk-averse nature of space programs, the state of practice for intelligent system technologies, human-machine teaming, and ground system automation is generally at a low TRL. Exceptions were sampled in Section 13.1, but these examples have generally been low-level automation developments, such as rule-based scripts for one-of-a-kind spacecraft and ground systems, written by knowledgeable ground system engineers who exploited low-hanging-fruit opportunities. Ground- and flight-based intelligent systems prototypes have been developed within various laboratories and in many cases have been demonstrated in limited shadow-mode operations. However, far fewer intelligent systems tools have made their way into spacecraft operations. Intelligent systems are not automatically considered the technology best suited to providing increased ground system automation for domains such as space operations. To overcome this, the intelligent systems community needs to demonstrate the technical ability to perform these functions and to quantitatively show improved response times, reduced costs, and increased system performance. Our goal is to raise generic, easy-to-use, hierarchical intelligent automation and user-configurable human-machine teaming to TRL 6.
Another technical challenge may come from the traditional automation community. Traditional
automation development usually involves an outside organization studying current operations, analyzing
work flow, decomposing human tasks, followed by the recommendation to conduct first-principles
software development that creates custom automation for that specific operation. While that process
works, we propose an intelligent systems alternative here that complements the traditional approach. This
intelligent automation approach may prove to be faster, less costly and more easily trusted. This approach
involves expanding on the concept of an intuitive application that uses human SMEs for mentoring,
feedback, and goal establishment as the basis for human-machine teaming and intelligent automation.
There are a number of technical challenges associated with successfully achieving the vision articulated above. Several of them are listed below.
• Converge on a generic human-machine teaming and intelligent automation framework.
• Establish extreme ease-of-use capability.
• Establish the ability to archive human actions, decisions, and outcomes in order to provide thought-process traceability.
• Archive human system operator activity, so that human technical expertise is never lost.
• Establish the ability to score the success of individual humans, human-machine teams, and the intelligent automation on specific tasks or ensembles of tasks, as well as how those scores evolve over time (a minimal sketch follows this list).
• Establish the capability for the intelligent automation to learn and adapt in order to attempt to improve its success rate.
• Establish the ability for human reviewers to easily review specific actions and provide constructive feedback both to humans and to the intelligent automation.
• Ensure the intelligent automation has the ability to access, use, run, and manage lower-level intelligent systems.
• Determine innovative operator training methods so operators can provide goals and feedback and retain the ability to step in and take over operations, if needed.
• Establish the ability for the intelligent automation to serve as a test bed for both intelligent and non-intelligent techniques, so that the platform can be used to evaluate the suitability of various techniques for performing a task.
• Determine how to characterize and adapt to uncertainty in reasoning systems that perform space operations.
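As a minimal sketch of the scoring capability called out above (the recency weighting is an illustrative assumption, not a prescribed method), task outcomes can be logged per agent, whether a human, the intelligent automation, or a human-machine team, and summarized as a recency-weighted success rate so that skill trends are visible over time:

```python
from collections import defaultdict

# Log each task outcome per agent and compute a recency-weighted success
# rate, so newer outcomes count more than older ones. The decay factor
# is an assumption chosen for illustration.

class SkillLedger:
    def __init__(self, decay=0.9):
        self.decay = decay                  # weight multiplier on older outcomes
        self.outcomes = defaultdict(list)   # (agent, task) -> [True/False, ...]

    def log(self, agent, task, success):
        self.outcomes[(agent, task)].append(bool(success))

    def score(self, agent, task):
        """Recency-weighted success rate in [0, 1]; newest outcomes count most."""
        results = self.outcomes[(agent, task)]
        if not results:
            return None
        weight, num, den = 1.0, 0.0, 0.0
        for success in reversed(results):   # newest first
            num += weight * success
            den += weight
            weight *= self.decay
        return num / den

ledger = SkillLedger()
for ok in (True, True, False, True):
    ledger.log("operator_7", "orbit_adjust", ok)
print(round(ledger.score("operator_7", "orbit_adjust"), 2))  # -> 0.74
```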
Along the way, we desire the ability to evaluate more sophisticated aspects of intelligent automation as feedback for iterative development and in order to make accurate recommendations for technology adaptation. We expect to conduct experiments to document skill in performing deterministic versus non-deterministic tasks and long-term versus short-term tasks, as well as success rates for adaptations on systems that are dynamically stable and on systems that have stability issues. As the practical ability of the intelligent automation matures in the long term, we anticipate not having to specify the algorithms used to intelligently automate a task. Instead, we anticipate the intelligent automation having the ability to test several solutions and converge on the algorithm or suite of algorithms best suited to perform technical tasks.
TECHNICAL BARRIERS
Many of the technologies needed to achieve the desired intelligent automation vision exist today. Achieving this vision is therefore less an exercise in fundamental research and more an applied, collaborative development activity without substantial technology barriers.
POLICY AND REGULATORY BARRIERS
Our vision is for increased human-machine teaming and human-on-the-loop automation, not autonomy, so we do not expect regulatory barriers. There will, however, be information assurance and cyber security barriers to overcome, since this capability is a suite of algorithms performing functions, potentially across multiple domains, that previously were performed entirely by humans. It would be desirable for the parties responsible for information assurance and cyber security policy to be thinking now about methods for successful certification of intelligent automation software.
IMPACT TO AEROSPACE DOMAINS AND INTELLIGENT SYSTEMS VISION
If successful, this easy-to-deploy and easy-to-use intelligent automation capability may be relevant to numerous aerospace domains, along a spectrum that runs from automated conduct of long-term research and development to many more instances of automated day-to-day aerospace operations.
13.4 RESEARCH NEEDS TO OVERCOME TECHNOLOGY BARRIERS
OPERATIONAL GAPS
The lack of tools for easy automation of ground system activities leads to the continuation of highly
manual and expensive status quo operations. Further, there is no ability to capture and comprehensively
quantify the skill of the humans performing these operations. As a result, we do not really know how good
they are, or when the next human error could result in the loss of control of a billion-dollar space system. Tools and methods are needed to help quantify the benefits of automated systems over traditional methods and to increase trust in them.
RESEARCH NEEDS AND TECHNICAL APPROACHES
Without getting into technical design aspects of intelligent automation software development, the following describes the desired functionality of the software suite:
• Embrace the concepts and lessons learned from the human-centered computing community and the Autonomy Paradox. Exploit these lessons learned and use them to create practical applications that facilitate easy automation of ground systems for space operations.
• Develop and implement a generic software framework that is capable of autonomously executing and managing all or nearly all existing intelligent automation and intelligent systems Data Fusion and Resource Management (DF&RM) algorithms.
• Develop an intuitive capability for organizations to easily monitor and archive the activities and decisions, including outcomes, of human SME system operators performing specific technical tasks.
• Develop the capability for management to easily review, evaluate, and establish a quantified skill level based on individual and aggregated sequences of archived decisions made by SMEs, the intelligent automation, and human-machine teams, in conjunction with the current and historical information available at the time each decision was made.
• Develop the capability for goal-driven intelligent automation to learn from the archives of human action sequences and skill levels to create modified timing and sequences of actions that may raise the intelligent automation’s skill level above that of individual human SMEs. When such archives are not available, the intelligent system needs to discover how to manage the system processes to meet user goals and respond to feedback.
• Enable the intelligent system to discover and compare unforeseen relevant data sources against baseline situation assessments and recommend responses.
• Allow intelligent automation with the capabilities above to continue monitoring SMEs as a safety net, notifying them if an action they are taking could result in an adverse outcome (a minimal sketch follows this list).
• Implement the ability for the intelligent automation to be certified to conduct specific tasks with a human-on-the-loop, either as a hot backup or as the primary.
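As a minimal sketch of the safety-net behavior in the list above (command names, state tags, and the matching rule are all hypothetical), a proposed command can be checked against archived outcomes before execution:

```python
# Before a proposed command runs, look it up against archived outcomes
# and warn the SME if similar past actions in a similar spacecraft state
# ended badly. All names and the matching rule are hypothetical.

ARCHIVE = [
    # (command, state_tag, outcome_was_adverse)
    ("thruster_burn", "low_battery", True),
    ("thruster_burn", "nominal", False),
    ("payload_reboot", "eclipse", False),
]

def safety_net(command, state_tag):
    matches = [adverse for cmd, tag, adverse in ARCHIVE
               if cmd == command and tag == state_tag]
    if matches and any(matches):
        rate = sum(matches) / len(matches)
        return (f"WARNING: '{command}' in state '{state_tag}' was adverse "
                f"in {rate:.0%} of {len(matches)} archived cases")
    return None

warning = safety_net("thruster_burn", "low_battery")
if warning:
    print(warning)   # surface to the operator before execution
```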
Our technical approach to testing this intelligent automation is to begin with simple, individual serial tasks performed on a space operations ground system and evaluate human-only performance on those tasks. Then we anticipate evaluating intelligent automation of human-machine teaming on parallel tasks. Assuming positive results, we would follow this with evaluation of the new capability on more complex, hybrid (serial and parallel) tasks. Finally, we desire to evaluate the ability of intelligent automation to manage hierarchical tasks, especially instances where the intelligent automation manages lower-level serial, parallel, and hybrid tasks. A key to success will be accurately quantifying the improvements over traditional methods based upon mission requirements.
PRIORITIZATION
Our first priority addresses the main impediment, which is not technical but rather insufficient funding. We have proposals, concepts, and designs, but we do not currently have the funding to fully pursue them.
Our second priority is securing a small technical development team with the proper skills. To be successful, we do not just need developers; we need the right developers. Access to cooperative ground operation facilities and ground operators is also critical.
Third, having seen DARPA funding during the past decade lead to Apple’s Siri, and the DARPA Grand Challenge ultimately lead to Google’s self-driving cars, we advocate a similar event for intelligent systems. Consider encouraging DARPA to hold an Intelligent Systems Challenge focused on space or on another system that monitors or performs complex technical operations (e.g., air traffic management, physical security, integrated system health management, or intelligence analysis).
14. OBSERVATIONS
Sections 3 through 13 above have discussed how the use of intelligent systems can complement current
applications and drive new applications in specific aerospace domains. A central theme running through these discussions is that aerospace systems will become more intelligent over time. The integration of
intelligent aerospace systems will help the US and its allies stay competitive from multiple perspectives
such as increased safety, increased operational efficiency, improved performance, lower manufacturing
and operating costs, improved system health monitoring, as well as improved situational awareness and
accelerated data-to-decision cycles.
Other common themes for Intelligent Systems for Aerospace are clustered into several broad categories
below.
14.1 POSITIVE ATTRIBUTES OF INTELLIGENT SYSTEMS FOR AEROSPACE
There are numerous positive expectations for intelligent systems mentioned in Sections 3 through 13. A sample of those expectations, extracted from the individual sections, is summarized here:
• Aerospace systems with adaptive features can improve efficiency, performance, and safety; better manage aerospace system uncertainty; and learn and optimize both short-term and long-term system behavior (Section 3).
• Increasingly autonomous systems also contribute to new levels of aerospace system efficiency, capability, and resilience, such as “refuse-to-crash” behavior achieved through software-based sense-decide-act cycles (Section 4).
• Adapting non-traditional computational intelligence approaches to multiple aerospace domains promises to help us solve aerospace-related problems that we previously could not solve (Section 5).
• Development of new intelligent systems and solution methodologies can help establish trust of non-deterministic, adaptive, and complex systems for aviation (Section 6).
• Integration of intelligent systems into unmanned aerospace systems in low-altitude uncontrolled airspace will improve vehicle automation, airspace management automation, and human decision support (Section 7).
• Intelligent systems can contribute to real-time solutions that facilitate not only air traffic control but also strategic air traffic flow management, especially during and after disruptions (Section 8).
• Coupling intelligent systems applications with big data will help the aerospace industry become increasingly cost-effective, self-sustaining, and productive (Section 9).
• Human-Machine Integration will be exploited to ensure that intelligent systems work in a way that is compatible with people, by promoting predictability and transparency in action and supporting human situational awareness (Section 10).
• Aerospace systems using Integrated System Health Management (ISHM) promise to provide system-of-systems monitoring, anomaly detection, diagnostics, prognostics, and more in a systematic and affordable manner (Section 11).
• The coupling of intelligent systems with robotics promises faster, more efficient decision-making and increased proficiency in physical activities (Section 12).
• Increasing the level of intelligent automation in ground systems for domains such as space operations can help reduce human errors, help avoid spacecraft anomalies, extend mission life, increase mission productivity, and reduce space system operating expenses (Section 13).
The intelligent system attributes above indicate the potential for commonality as well as synergy between aerospace domains. The timely, productive, and safe development of intelligent systems for aerospace requires collaboration between the intelligent systems community and aerospace partners. The cross-domain activities necessary to overcome the techno-social challenges and pave the way for industrial-scale deployment of intelligent systems are summarized below.
14.2 SOCIETAL CHALLENGES TO INTELLIGENT SYSTEMS FOR AEROSPACE
The implementation of intelligent systems for aerospace domains faces societal challenges in addition to
the technical challenges and barriers. In this section we note common observations that are societal
challenges to the use of intelligent systems for aerospace, extracted from the preceding domain-specific
sections.
ACCEPTANCE AND TRUST OF INTELLIGENT SYSTEMS
In spite of the vision projected in this roadmap for teaming of humans and intelligent systems, the media and the public often consider worst-case scenarios. While worst-case, rare events can happen, intelligent systems for aerospace need to be designed so that potential worst-case scenarios are minimized, by engaging humans or by constraining the adaptation of increasingly autonomous intelligent systems in a manner that reins in the probability of catastrophic failure (Section 4).
Data should be collected to quantify the benefits of human-machine teaming to safety as compared to
human decision-making alone. For example, we can reference the data that Google and other developers of autonomous vehicles have collected, and how they compare it to the safety records of human drivers. New methodologies for validation and verification of intelligent systems (Section 6) that
are coupled with extensive data should provide the evidence needed for rational confidence in intelligent
systems for aerospace. In addition, intelligent systems should be phased in first for aerospace applications
where human safety is not at risk.
FEAR OF INTELLIGENT SYSTEMS TECHNOLOGY
Technological progress has changed the world substantially, in ways that many would say are for the better: more people are prosperous, and people live longer than ever before. However, there is still fear of new technologies, such as intelligent systems. To overcome fear of new intelligent systems technologies among the aerospace community, the general public, and outspoken critics, we should develop
education and outreach initiatives. One specific suggestion is the development of interactive tutorials
(Section 4) that help everyone from aerospace researchers to the general public understand how
intelligent systems work, what they can do when teamed with humans, and provide perspective on the
safety record of humans, human-machine teams and autonomous systems.
In these outreach initiatives, intelligent systems should be presented explicitly as a means of establishing better collaborative human-machine decision-making (Section 10) when that is the most prudent course. Intelligent systems should be created to fulfill specific, focused aerospace-related
activities. Intelligent systems for aerospace are designed to accomplish missions and tasks; they are not
being designed to replace the human attributes of free will, imagination, conscience, and self-awareness
(Section 5).
Intelligent systems are likely to take on dull, dirty, and dangerous jobs that are hazardous to humans. For
jobs where human safety is not an issue, it is expected that human-machine teaming will be established
with increased safety. The intent is to help aerospace system operators and the general public understand
and accept the benefits of intelligent systems capabilities (Section 5). Additionally, some of the work that intelligent systems will do will be completely new (not currently done by humans) and will open up more opportunities for humans, much as the Internet brought about the jobs associated with today’s Information Age. To further overcome barriers, it is suggested that pilot implementations be constructed so that the public can tangibly experience what intelligent systems can do. In addition, the aerospace community has to get better at quantifying the benefits of intelligent systems, so that decision makers and the public have evidence of the bottom-line benefits.
Additionally, it is worth noting that the NRC Autonomy Research for Civil Aviation report identified several high-priority research projects related to overcoming fear of intelligent systems technologies, including:
• Determining how the roles of key aerospace personnel and aerospace systems, as well as human-machine interfaces, should evolve to enable the operation of increasingly autonomous intelligent systems
• Determining how increasingly autonomous systems could enhance the safety and efficiency of civil aviation
• Developing processes to engender broad trust in increasingly autonomous intelligent systems for civil aviation
POLICIES DIRECTED TOWARD INTELLIGENT SYSTEMS
The aerospace community would like to avoid overreaction to intelligent systems that could result in
preemptive regulations and policies. In order to preclude negative policies, the intelligent systems
community should be proactive in proposing progressive policies for the successful and safe
implementation of intelligent systems. This could be extremely useful for implementations involving
information assurance and cyber security (Section 6).
Allowing regulatory events to negatively affect intelligent systems development and implementation could translate into a loss of world leadership in this technical area (Section 7). Using the integration of low-altitude unmanned aircraft systems into the national airspace as an example, the intelligent systems community should help determine the minimal set of regulatory requirements, coupled with advanced air traffic management tools and procedures, that ensures the continued safety of the National Airspace System (Sections 7-8). The same advice applies to the intelligent systems community assisting the government in establishing certification of intelligent systems for aerospace systems operating in uncertain, unexpected, and hazardous conditions and for adaptive systems in unstable aerospace vehicles (Section 3).
Overall, the societal barriers to intelligent systems are not insurmountable. The average traveler thinks
nothing of boarding an unmanned train at the airport to transit between terminals; while not a direct
analogy to aerospace, this supports the idea that society will embrace progress if it is shown to be safe.
The general public appears to be open-minded toward self-driving cars, something not anticipated a decade ago. However, the societal barriers mentioned above should be anticipated and addressed early.
Proactive actions could reduce the time needed for intelligent system implementation as opposed to
reacting after-the-fact to restrictive intelligent systems regulations and policies.
14.3 TECHNOLOGICAL GAPS IMPEDING INTELLIGENT SYSTEMS FOR AEROSPACE
The list of technology gaps holding back the development, implementation, and proliferation of intelligent systems in aerospace domains from Sections 3-13 is substantial. Since those lists exist for a multitude of diverse aerospace systems, such as adaptive and non-deterministic systems (Section 3), they are not repeated here in detail. Instead, we attempt to establish higher-level technology gap summaries, building on the trends seen in Sections 3-13.
High-level intelligent aerospace system technology development needs include the following:
• Develop and validate ultra-reliable, safety-assured, and resilient intelligent systems technologies for adaptive command and control of aerospace systems that facilitate:
o Contingency management, situation assessment, impact prediction, and prioritization when faced with multiple hazards
o Effective teaming between human operators and intelligent system automation, including transparent real-time situation understanding between humans and intelligent systems
o The ability to certify intelligent aerospace systems with anticipated operation under uncertain, unexpected, and hazardous conditions
o The ability to adapt to dynamically changing operating environments to improve performance and operational efficiency of advanced aerospace vehicles
• Develop and validate intelligent systems for aerospace autonomy that address:
o Handling of rare and un-modeled events
o Adaptation to dynamic changes in the environment, mission, and platform
o Exploitation of new sources of sensed data
o Exploitation of knowledge gained from newer sensors
o Capture of multi-dimensional knowledge representations and knowledge engineering to improve decision options
o Ability to handle heterogeneous, multi-vehicle cooperation
o Ability to correctly predict human intent in order to operate in a common workspace
• Develop or adapt easy-to-use big data applications tailored for aerospace, and use these big-data applications to enable additional intelligent aerospace system functionality
• Harness computational intelligence techniques that efficiently explore large solution spaces to provide real-time decision-making for intelligent aerospace systems
• Develop intelligent system software standards focused on affordability to realize the benefits of integrated system health management (ISHM)
• Establish an environment for more rapid development, integration, metrics, testing, and deployment of intelligent systems for robotics, including a focus on enabling applications that make new levels of robotic functionality available to aerospace domains
• Develop human-centered technologies for management of multiple mission-oriented intelligent aerospace systems that can learn and adapt from human decision-making
• Establish non-traditional methods of testing intelligent systems for aerospace that develop trust, such as:
o Advanced formal methods for highly complex, non-deterministic systems
o Runtime assurance methods that detect and avert unsafe results (a minimal sketch follows this list)
o Systems that continually assess the level of confidence of safe operations
• Demonstrate the reliability and safety of intelligent system technologies for small unmanned aerospace systems, such as:
o Automated takeoff and landing
o Detection and avoidance
o Autonomous operation during lost-link conditions
o An objective framework of safety metrics for increasing levels of automation
Although the summary above represents a partial list of technology development needs for intelligent systems extracted from the individual roadmap contributions in Sections 3-13, it reflects the aerospace community’s collective thinking on the scope of the multi-domain commitment needed to push intelligent systems forward.
14.4 PATH FOR ENABLING INTELLIGENT SYSTEMS FOR AEROSPACE
In this section we reflect on (a) the positive attributes of intelligent systems for aerospace, (b) the societal challenges, and (c) the technological gaps from the previous sections, and use that insight to establish a list of common objectives needed to enable intelligent systems for aerospace domains.
• Demystify intelligent systems through education, outreach, and aerospace tutorials, both for the aerospace community and the general public
• Encourage brainstorming between intelligent systems visionaries and potential government users of intelligent systems. Create specific government and industry requirements for intelligent systems, both for new aerospace system development and for adaptation of intelligent systems technology to existing aerospace systems
• Create an environment where there is positive demand for intelligent systems technologies and expertise. A desirable outcome is that intelligent systems buy their way into new aerospace development programs because of their overwhelming benefits
• Encourage technically oriented intelligent systems communities of interest for aerospace domains
• Formulate a logical technology development timeline for progression from basic research to advanced applications of intelligent systems for aerospace domains
• Apply intelligent systems first in domains where human safety and/or mission success is not at risk, and use these experiences to build experience, trust, and confidence. Document both successes and failures
• Ensure intelligent systems are developed with a vision for managing aerospace operations, not simply for working specific low-level technical challenges
• To the extent possible, avoid developing custom, one-of-a-kind intelligent systems that are not reusable, or intelligent systems that focus on solving specific problems, in favor of intelligent systems that can be used for multiple, general problems
• Ensure that general intelligent system tools are developed that can be easily applied to multiple aerospace domains. Establish intelligent system interface standards so that intelligent systems can be modularized along the lines of “plug and play.” In addition, ensure that intelligent systems complying with these new interface standards can also connect and interface with non-standard, legacy aerospace systems
• Consider establishing a plan for an open architecture for intelligent systems that encourages interoperability between systems and reuse of proven capabilities from one generation to the next
• Consider establishing a common human and intelligent system data/decision collection, archival, performance assessment, and intelligent system hosting infrastructure that encourages productive collaboration between the intelligent systems engineering and human effectiveness communities
• Establish long-term data sets that contain decision-making timing, accuracy, and outcome records for humans, intelligent systems, and human-intelligent system teams to help indicate which are best suited for specific aerospace activities
• Establish long-term data sets that quantify the efficiency and cost savings of intelligent systems and human-machine teams over humans alone
• Eliminate traditional barriers and establish cooperation between the aerospace controls community, the aerospace intelligent systems community, and the software validation and verification community. Incentivize and reward successful collaboration
• Similarly, eliminate traditional barriers and establish cooperation between the aerospace intelligent systems community and non-aerospace intelligent systems communities, such as the automotive, computer, and science communities
• Explore opportunities for collaboration with traditional aerospace communities such as aerodynamics, propulsion, and structures on new intelligent systems methodologies that could benefit these communities through a reduced engineering development life cycle
• Interact with and leverage work done by non-aerospace communities developing and validating intelligent systems
• Develop both strong multidisciplinary modeling and simulation of intelligent systems and strong validation and verification
• Consider ways to use intelligent systems that have human experience embedded in them as force multipliers that can be deployed more quickly than humans
• Enable intelligent systems researchers to understand the requirements and perspectives of the regulator in order to achieve third-party trust in intelligent systems for aerospace applications
• Ensure that all of the above do not overlook information assurance and cyber security considerations
• Communicate to government agencies the value of intelligent systems in aerospace and advocate for funding support for research and development of intelligent systems technologies
All of the intelligent system objectives above point to a commitment to, and strategic financial investment in, both basic and applied research. Some of these common objectives form the basis for the recommendations in the next section.
15. RECOMMENDATIONS
If increased use of intelligent systems in aerospace is the desired end state, then this roadmap needs to propose logical paths to achieve that end state through prudent proliferation of intelligent systems for aerospace. Below is a multi-year timeline for a commitment to developing several generations of intelligent systems for aerospace.
Year 0:
1. Identify an expanded list of aerospace-community intelligent systems stakeholders
2. Start brainstorming between intelligent systems visionaries and potential government users of
intelligent systems
3. Establish and publish government requirements for intelligent systems both for new aerospace
system development and for adaptation of intelligent systems technology for existing aerospace
systems
4. Identify immediate, short-term, and long-term opportunities for intelligent systems and
intelligent systems infrastructure development
5. Identify funding sources. Establish intelligent systems research programs that provide funding
through competitive source selections and internal research programs
6. Create a dialogue between IS researchers and the certification authorities regarding paths to
certification for non-deterministic systems
7. Survey intelligent systems work done by non-aerospace communities developing and validating
intelligent systems
Years 1 to 5:
1. Prioritize requirements for intelligent systems development, testing, and deployment
2. Determine whether intelligent systems contests, such as a DARPA Intelligent Systems Challenge, are desirable
3. Develop intelligent system education, outreach, and tutorials
4. Start technically oriented intelligent systems communities of interest
5. Establish standard intelligent system taxonomies and terminology lexicon
6. Develop a plan for open architecture and interface standards for intelligent systems to facilitate
modular and system-level interoperability
7. Develop draft guidance on certification of intelligent systems for safety-critical aerospace
applications
8. Form an information assurance and cyber security team that focuses on intelligent systems
9. Complete intelligent systems development for specific short-term objectives, such as:
a. Create government and industry requirements for intelligent systems both for new
aerospace system development and for adaptation of intelligent systems technology for
existing aerospace systems
b. Focus on applied technology development for domains such as ISHM, low-altitude UAVs
c. Establish cross-domain multidisciplinary modeling and simulations of advanced concepts
for autonomous vehicles, as well as strong validation and verification of intelligent
systems by addressing interactions between intelligent systems disciplines and software
engineering
10. Start development for basic intelligent systems technologies
a. Develop higher-level intelligent systems for managing aerospace operations
b. Develop general intelligent system tools that can be re-used or easily applied to multiple
aerospace domains
c. Establish a common human and intelligent system data/decision collection, archival, performance assessment, and intelligent system hosting infrastructure for collaboration between the intelligent systems engineering and human effectiveness communities
11. Start development of longer-term advanced intelligent systems technologies
12. Update AIAA Roadmap for Intelligent Systems, as necessary
Years 6 to 10:
1. Pursue intelligent systems development for specific mid-term basic and advanced technologies
2. Publish standards for certification of intelligent systems for aerospace applications
3. Consider the idea of building an X-vehicle as an Intelligent Systems technology demonstrator
Years 11 to 15:
1. Pursue intelligent systems development for specific longer-term basic and advanced technologies
Years 16 to 20:
1. Pursue intelligent systems development for the longest-term basic and advanced technologies
envisioned
Through this timeline, we will continue to seek communication with stakeholders and the aerospace
community to share progress and advancements made in the development of intelligent systems
technologies. It is hoped that industry, government organizations, and universities will become increasingly attuned to the benefits of intelligent systems technologies in aerospace domains and thereby provide broad support for the research, development, and deployment of intelligent systems technologies.
16. SUMMARY
The authors sincerely hope you have gained insight from the first edition of the AIAA Roadmap for
Intelligent Systems. Intelligent systems can be added to multiple domains to improve aerospace efficiency
and safety as well as to create new capabilities. The recommendations above provide a prudent path to
accelerate technology development and implementation of intelligent systems in aerospace domains.
The authors value your ideas, comments, and feedback. The authors are also willing to participate in the development of business case analyses or business plans for the development and testing of applied
intelligent systems for aerospace domains. Feel free to contact the roadmap collaborators using the
contact information below:
Section 3: Adaptive and Non-Deterministic Systems
Christine Belcastro, NASA Langley Research Center, christine.m.belcastro@nasa.gov
Nhan Nguyen, NASA Ames Research Center, nhan.t.nguyen@nasa.gov
Section 4: Autonomy
Ella Atkins, University of Michigan, ematkins@umich.edu
Girish Chowdhary, Oklahoma State University, girish.chowdhary@okstate.edu
Section 5: Computational Intelligence
Nick Ernest, Psibernetix Inc, nick.ernest@psibernetix.com
David Casbeer, Air Force Research Laboratory, david.casbeer@us.af.mil
Kelly Cohen, University of Cincinnati, cohenky@ucmail.uc.edu
Elad Kivelevitch, MathWorks, elad.kivelevitch@gmail.com
Section 6: Trust
Steve Cook, Northrop Grumman, stephen.cook@ngc.com
Section 7: Unmanned Aircraft Systems Integration into the National Airspace System at Low-altitudes
Marcus Johnson, NASA Ames Research Center, marcus1518@yahoo.com
Section 8: Air Traffic Management
Yan Wan, University of North Texas, yan.wan@unt.edu
Kamesh Subbarao, University of Texas at Arlington, subbarao@uta.edu
Rafal Kicinger, Metron Aviation, kicinger@metronaviation.com
Section 9: Big Data
Sam Adhikari, Sysoft Corporation, sadhikari@sysoft.com
Section 10: Human-Machine Integration
Julie Shah, Massachusetts Institute of Technology, julie_a_shah@csail.mit.edu
Daniel Selva, Cornell University, ds925@cornell.edu
Section 11: Intelligent Integrated System Health Management
Fernando Figueroa, NASA Stennis Space Center, fernando.figueroa-1@nasa.gov
Kevin Melcher, NASA Glenn Research Center, kevin.j.melcher@nasa.gov
Ann Patterson-Hine, NASA Ames Research Center, ann.patterson-hine@nasa.gov
Chetan Kulkarni, NASA Ames Research Center / SGT Inc, chetan.s.kulkarni@nasa.gov
Section 12: Improving Adoption of Intelligent Systems across Robotics
Catharine McGhan, California Institute of Technology, cmcghan@umich.edu
Lorraine Fesq, NASA Jet Propulsion Laboratory, lorraine.m.fesq@jpl.nasa.gov
Section 13: Ground Systems for Space Operations
Christopher Tschan, The Aerospace Corporation, chris.tschan@aero.org
Paul Zetocha, Air Force Research Laboratory, paul.zetocha@us.af.mil
Christopher Bowman, Data Fusion & Neural Networks, cbowman@df-nn.com
17. GLOSSARY
INTELLIGENT SYSTEMS TERMINOLOGY
While there may be differences in the meaning of terms and the context in which they are used, the broader concept is that these technologies enable users of aerospace systems to delegate tasks and important operational decisions to intelligent systems. Some of the intelligent systems terminology is defined below.
Adaptive: the quality of being able to respond to unanticipated changes in the operating environment or
unforeseen external stimuli in a self-adjusting manner to improve performance of the system.
Automated: the quality of performing execution and control of a narrowly defined set of tasks without
human intervention in a highly structured and well-defined environment.
Autonomous: the quality of performing tasks without human intervention in a more unstructured
environment which requires (a) self-sufficiency, the ability to take care of itself, and (b) self-directedness,
the ability to act without outside control.
Computational intelligence: the study of the design of intelligent agents that can adapt to changes in their environment.
Intuitive: the quality of establishing knowledge or agreement to a given expectation without proof or evidence.
Learning: the ability to acquire knowledge from internal response to external stimuli by probing, data
collection, and inference.
Non-deterministic: having characteristics or behavior that cannot be pre-determined from a given set of
starting conditions or input from the operating environment.
Self-optimization: the ability to seek an optimal design or operating condition by learning in real time from
data and input as well as system knowledge.
18. ACRONYMS AND ABBREVIATIONS
AFRL      Air Force Research Laboratory
AI        Artificial Intelligence
AIAA      American Institute of Aeronautics and Astronautics
API       Application Programming Interface
AR        Augmented Reality
ARTCC     Air Route Traffic Control Center
ATC       Air Traffic Control
ATCSCC    Air Traffic Control System Command Center
ATFM      Air Traffic Flow Management
ATM       Air Traffic Management
BVLOS     Beyond Visual Line of Sight
CALCE     Center for Advanced Life Cycle Engineering
CBM       Condition-Based Maintenance
CI        Computational Intelligence
ConOps    Concept of Operations
DARPA     Defense Advanced Research Projects Agency
DIaK      Data, Information, and Knowledge
DM        Domain Model
DoD       Department of Defense
EGPWS     Enhanced Ground Proximity Warning System
EULA      End User License Agreement
FAA       Federal Aviation Administration
FAR       Federal Aviation Regulation
FMEA      Failure Modes and Effects Analysis
GP        Gaussian Process
GP-GPU    General Purpose Graphics Processing Unit
GPS       Global Positioning System
GUI       Graphical User Interface
HMI       Human-Machine Integration
HMI       Human-Machine Interface
HUMS      Health and Usage Monitoring System
IA        Increasingly Autonomous
IEEE      Institute of Electrical and Electronics Engineers
IRB       Institutional Review Board
IS        Intelligent System
ISHM      Integrated System Health Management
i-ISHM    Intelligent Integrated System Health Management
ISO       International Standards Organization
ISR       Intelligence, Surveillance and Reconnaissance
ISTC      Intelligent Systems Technical Committee
IVHM      Integrated Vehicle Health Management
LOC       Loss of Control
MDP       Markov Decision Process
MIMO      Multiple Input Multiple Output
NASA      National Aeronautics and Space Administration
NextGen   Next Generation Air Transportation System
NOAA      National Oceanic and Atmospheric Administration
NRC       National Research Council
OSA       Open Systems Architecture
ROI       Return on Investment
SISO      Single Input Single Output
SME       Subject Matter Expert
TCAS      Traffic Collision Avoidance System
TRACON    Terminal Radar Approach Control
TRL       Technology Readiness Level
UAS       Unmanned Aircraft System
UAV       Unmanned Aerial Vehicle
UCAV      Unmanned Combat Aerial Vehicle
V&V       Verification and Validation
VLOS      Visual Line of Sight
VR        Virtual Reality
VV&A      Verification, Validation and Accreditation