A strategic framework for program managers to improve
Command and Control system interoperability
by
Steven E. Frey
B.S. Mechanical Engineering
Rensselaer Polytechnic Institute, 1991
Submitted to the System Design and Management Program
in Partial Fulfillment of the Requirements for the Degree of
Master of Science in Engineering and Business Management
at the
Massachusetts Institute of Technology
May 2002
The author hereby grants MIT permission to reproduce and distribute publicly paper and electronic
copies of this thesis document in whole or in part.
Signature of Author
Steven E. Frey
System Design and Management Program
Certified by
Daniel Whitney
Thesis Supervisor
Center for Technology, Policy & Development
Accepted by
Steven D. Eppinger
Co-Director, LFM/SDM
Co-Director, CIPD
LFM Professor of Management Science and Engineering Systems
Accepted by
Paul A. Lagace
Co-Director, LFM/SDM
Professor of Aeronautics & Astronautics and Engineering System
A strategic framework for program managers to improve
Command and Control system interoperability
by
Steven E. Frey
Submitted to the System Design and Management Program
in Partial Fulfillment of the Requirements for the Degree of Master of Science
in Engineering and Management
Abstract
An emphasis on fielding interoperable Command and Control systems to support military
operations increases the burden on system program managers to define and coordinate interfaces
with external partners. This thesis studies barriers to increasing the interoperability of C2
systems presented by system complexity and acquisition culture and makes recommendations to
assist program and enterprise managers in reducing and overcoming these barriers.
Architectural analyses of C2 systems using Design Structure Matrices (DSM) to model the
internal and external information flows were developed. The number of interfaces defined by
the system specification and represented in the DSM was found to be an excellent indicator of
the risk for successful integration based on C2 system development case studies. The DSM was
then used to evaluate the level of control a program office has over interfaces which were part of
the architecture. This analysis revealed that only 34% of interfaces were under the direct control
of a case study C2 system's program office, and it provides a map of interface risks which need to
be actively managed. Additionally, the DSM suggests the types of mechanisms, relationships, and agreements that should be
put in place to manage the interfaces based on differing levels of management influence and
control. Further analysis using the DSM allows the program office to develop isolation layers
and information hiding strategies to reduce the complexity of system development activities and
reduce the disruption caused by technology refresh of highly inter-dependent sub-systems.
Seventy-three interviews were conducted with program managers, system developers and users to
develop the DSM, confirm analysis findings, and identify perceived barriers to C2 systems
development. Additionally, a questionnaire was developed and circulated to investigate the
beliefs and mental models held throughout the acquisition community. The thesis goes on to
explore the acquisition culture and mental models held by personnel involved in C2 systems
acquisition and how the culture creates barriers to improving interoperability. Aggressive
decision making on program requirements, technical specifications, and high risk schedules,
coupled with a lack of accountability and a poor reputation/relationship with the user community
are significant problems. Building a culture which recognizes and rewards excellence in
delivering interoperable systems to military warfighters should be a goal for the C2 acquisition
community. Creating awards for achievement in interoperability excellence based on the model
of the Malcolm Baldrige National Quality Award is a first step. Increasing accountability and reducing
program manager turn-over during system development cycles can be accomplished through
pursuit of 'micro-development projects' which are small focused developments using less than 6
developers and delivering finished products in less than 6 months. The limited scope of these
projects has the added benefit of reducing complexity. Finally, greater involvement and
interaction with the user community is recommended to identify and leverage lead user
innovations and deliver desired capabilities back to them as part of the formal systems.
Using the recommendations and tools developed in this thesis as a template, C2 programs can
generate common simple DSM representations of system interfaces and leverage them for risk
management, interoperability metrics and reporting, system design decisions and management of
technology transition efforts. Furthermore, the incentives and micro project approach can help
reduce the cycle time for delivering tailored capabilities to end users while minimizing the
learning curve of the developers. Overall, the thesis presents recommendations which are
designed to stimulate the shift in thinking and practice required to meet the demands of
improved interoperability.
Thesis Supervisor:
Daniel Whitney
Senior Research Scientist
Center for Technology, Policy & Development
Acknowledgements
As I reflect on what it took to complete this thesis, I realize that the number of people who
directly or indirectly helped me was tremendous. There are too many to list individually, but I
feel compelled to mention some of the folks who made it possible for me to do this work. The
experience was definitely one of the most challenging during my time at MIT.
At the top of the list is Dan Whitney whose wisdom, encouragement and constant challenges
to explore ideas and dive deeper into the subject made the thesis a better product. Dan's
guidance really helped bring me through the process. Thanks Dan.
To the LAI office at MIT, and Hugh McManus in particular, for their support and guidance
during my first two semesters at school and for making the transition back to school possible. To
the U.S. Air Force and the outstanding military members who are enduring hardships to protect
and defend our nation, a tremendous amount of what I learned came from the community of C2
systems developers and active duty military members who are directly involved with the
systems. To everyone who took the time to explain the issues and problems they faced and share
their thoughts and perspectives on the systems, thank you. And a special thanks to Col Al
Baker, who is the finest individual I've had the privilege of working for in my career. He,
more than anyone, helped to put me in a position to embark on the 2-year journey through
the System Design and Management Program. Thank You Sir!
Many of my friends and co-workers were involved in brainstorming and acted as a
sounding board to develop and refine ideas. Rich Hubbard, Pet Robson, Deb Schuh, Russ
Graves, Marco Serra, Art Faint, Mike Ripley, Dave Humphrey, and many others let me try out
ideas and gave me sanity checks as I went along. They also picked up the slack at work and
pushed me along as the thesis progressed. Thanks to all of them, and to the many others
there just isn't enough room to recognize by name.
To my Mom, Dad, brothers and sister who are a constant source of encouragement and
support. You helped make me the person I am and gave me the courage and ambition to strive to become
a better person every day. Thank you for giving me such a strong start in life, and the security of
knowing there's always someone there for me.
And last, but definitely not least, to my wife Lynnette, daughter Brittney, and son Kelly, for
their understanding and support throughout the process. I owe you a deep and heartfelt thank
you. You shared in the sacrifice required to complete the degree and now we can look forward
to sharing in the rewards. I was often pre-occupied with this thesis and didn't have the time I
wanted to devote to you and what's really important in my life. Thanks for your love,
patience, understanding, and support.
Table of Contents

Abstract
Acknowledgements
Table of Contents
List of Figures
List of Tables
Introduction
    Problem Statement
    Research Methodology
Background
    What are Command and Control systems?
    Operational Aspects of C2
    What is interoperability and why is it important?
    How C2 Systems are developed and fielded
Research & Case Studies
    Case Study of an AOC System
    Defense Information Infrastructure Common Operating Environment (DII COE)
    Modernized Intelligence Database (MIDB)
    United States Message Text Format (USMTF)
    Joint Mapping Toolkit (JMTK)
    Targeting and Weaponeering (TW)
    Time Critical Targeting (TCT)
    Situational Awareness and Assessment (SAA)
    Execution Management (EM)
    Case Study Summary
Barriers to Enhancing Interoperability
    Complexity
        Architectural representations
        Stability & technology turnover
        Emergence
        The testing treadmill
        Effects in a closed system
        Summary
    Culture
        Incentives reward aggressiveness
        Turn-over eliminates accountability
        View from the user/customer's perspective
        Summary
Recommendations
    DSMs and Enterprise Architectures
    Introducing enterprise interoperability metrics
    Collaboration for enterprise decisions
    Interfaces, isolation layers, and information hiding
    Incentives
    Creating accountability
Conclusions
Future Work
Appendix A - Questionnaire
Bibliography
List of Figures

Figure 1 - Hierarchy of C2 Enterprise
Figure 2 - C2 Nodes
Figure 3 - Current DMPI Process
Figure 4 - Air Force Task List 7 for Command and Control
Figure 5 - DoD System Acquisition Process
Figure 6 - C2 Enterprise Integration Management Model
Figure 7 - CAOC Node Operational View
Figure 8 - New DoD Systems Acquisition Process
Figure 9 - AOC System Code Growth
Figure 10 - AOC Process and Organizational Divisions
Figure 11 - Basic AOC Process, Divisions, and Data Flows
Figure 12 - Joint Technical Architecture
Figure 13 - Interfaces between Mission Applications
Figure 14 - AOC System Data Flow Diagram
Figure 15 - Simple DSM Example
Figure 16 - DSM of System Database Flow
Figure 17 - History of Targeting Application Development
Figure 18 - Results from survey question
Figure 19 - AOC System Data Flow Diagram
Figure 20 - Results from survey question
Figure 21 - DSM of System Database Flow
Figure 22 - Problem Resolution Rates
Figure 23 - Cost Reporting Under PM 1
Figure 24 - Cost Reporting Under PM 2
Figure 25 - Cost Reporting Under PM 3
Figure 26 - Cost Reporting Under PM 4
Figure 27 - Comparison with and without rebaselining
Figure 28 - Evolution of the programming system product [4]
Figure 29 - Operational Feedback Delay
Figure 30 - DSM for Interoperability Metrics
Figure 31 - DMPI Flow
Figure 32 - ...
Figure 33 - DSM Interface Trace
Figure 34 - Introduction of TW-MIDB Design Rule
List of Tables

Table 1 - DII COE Software Lines of Code
Table 2 - Cost Reporting Definitions
Introduction
Problem Statement
Program managers at the Electronic Systems Center (ESC) are responsible for
acquiring Command and Control systems for the Air Force and Joint military services.
Individually these Command and Control (C2) systems are complex, software intensive
collections of functions, equipment and processes used by military commanders to
maintain awareness of the battlefield situation, arrive at decisions, direct actions, and see that
their orders are carried out. The collective set of C2 systems interoperating forms the
nervous system of our nation's military muscle coordinating the actions of troops,
weaponry and intelligence sensors. One of the main goals for C2 systems is to improve
interoperability by allowing them to provide dynamic interactive information and data
exchange for planning, coordination, integration, and execution of a commander's
mission.
Historically, the acquisition of C2 systems has treated them as 'stovepipes', where
individual systems are developed by separate program offices managing their own budget
and maximizing performance for their user's environment. By themselves, these systems
have a track record of cost over-runs, schedule slips, and trouble meeting performance
goals. Increasing the emphasis on interoperability increases the inter-dependencies
among multiple systems and the overall complexity of the systems themselves. Program
managers will have to coordinate between independently managed programs on issues of
schedule, interface conventions, specifications, configuration management, integrated
testing, and so on. Program managers and system program offices will lose a degree of
autonomy and must work together to negotiate trade-offs and make group decisions based
on technical input from multiple domains. Already complex system development efforts
will have the additional challenge of interoperating with other programs and systems in
the C2 domain. Interoperability increases the likelihood that changes in individual
systems will cause cascades or ripples throughout the system of systems. This presents a
new acquisition management challenge which the current culture and processes have had
limited success handling.
A shift in thinking and practice will be required by the acquisition community to meet
the demands of improved interoperability. Better tools and methods are needed for
defining interdependencies between C2 systems, for creating shared boundary objects,
representations and nomenclatures, for anticipating and minimizing downstream impact
of changes or upgrades, for negotiating trade-offs and implementation decisions between
programs, and for allocating tasks and costs to support interoperability requirements
among individual systems. Achieving improved interoperability in a C2 System of
Systems will require an acquisition approach which takes these issues and others into
account.
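The dependency tracing implied above, anticipating the downstream impact of a change in one system, can be sketched with a small design structure matrix (DSM), the tool applied later in this thesis. The systems and interface matrix below are hypothetical illustrations, not data from any real program.

```python
# Sketch: modeling C2 system interfaces as a design structure matrix (DSM)
# and tracing how a change in one system ripples through its dependents.
# The system names and interface matrix are invented for illustration.

SYSTEMS = ["TBMCS", "IntelDB", "WeatherSvr", "TargetingApp"]

# DSM[i][j] == 1 means system i consumes data from system j.
DSM = [
    # TBMCS IntelDB Weather Targeting
    [0, 1, 1, 1],   # TBMCS depends on the other three
    [0, 0, 0, 0],   # IntelDB is a pure producer
    [0, 0, 0, 0],   # WeatherSvr is a pure producer
    [0, 1, 0, 0],   # TargetingApp reads from IntelDB
]

def ripple(changed: int) -> set[int]:
    """Return indices of every system affected, directly or
    transitively, by a change to `changed`."""
    affected, frontier = set(), {changed}
    while frontier:
        nxt = set()
        for j in frontier:
            for i, row in enumerate(DSM):
                if row[j] and i not in affected:
                    affected.add(i)
                    nxt.add(i)
        frontier = nxt
    return affected

# A change to IntelDB (index 1) cascades to its consumers.
print(sorted(SYSTEMS[i] for i in ripple(1)))  # ['TBMCS', 'TargetingApp']
```

Even this toy matrix shows why stovepiped management breaks down: the program office for IntelDB can trigger rework in two other programs it does not control.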
This thesis attempts to answer the following question:
Given the system of systems paradigm, what are the barriers to improving
interoperability and what methods, tools and technologies can be used to
improve the efficiency and outcomes of individual program deliveries?
Research Methodology
"We have a habit in writing articles published in scientific journals to
make the work as finished as possible, to cover up all the tracks, to not
worry about the blind alleys or describe how you had the wrong idea first,
and so on. So there isn't any place to publish, in a dignified manner, what
you actually did in order to get to do the work."
Richard Feynman, American physicist, Nobel Lecture, 1966.
Research supporting this thesis was conducted over a one year period and relies
heavily on direct interaction with command and control program office staff,
development contractors, and operational users. The thesis started with an interest in
collaborative decision making. A research question was proposed, and investigation into
how collaborative decision making could be applied in the C2 domain began. As the
research progressed, we found that the cost, schedule and performance of C2 systems
under development were often affected by decisions made by external systems
developers. The focus of the thesis then shifted to understanding decision making in a
system of systems development environment and what methods, tools and technologies
could be utilized to improve the outcomes of decisions. As the research continued, the
goal of increasing interoperability between C2 systems and the inter-dependencies it
creates in the system of systems emerged as an issue program managers need to
understand and manage more effectively. Ultimately, the thesis settled on the topic of
identifying barriers to improving interoperability and methods, tools and technologies
needed by program managers to improve the performance of individual programs.
The thesis research included: (1) literature reviews of C2 program documents,
acquisition regulations and directives, software development practices, complexity
management, and collaborative decision making; (2) case studies of C2 programs,
including analysis of architectures using Design Structure Matrices and review of cost
performance reporting; (3) 73 personal interviews with program office personnel,
development contractors, and operational users; (4) analysis of data collected through a
questionnaire on the beliefs and mental models held throughout the acquisition
community; and (5) prior personal experience as a military officer and current experience
supporting defense acquisitions. The research findings identified barriers to improving
interoperability in the areas of acquisition culture, system complexity, and program
management practices. Finally, approaches to overcoming the identified barriers and sets
of recommendations were developed.
Background
What are Command and Control systems?
Command and Control (C2) systems are complex, software intensive collections of
functions, equipment and processes used by military commanders to maintain awareness
of the battlefield situation, arrive at decisions, direct actions, and see that their orders are
carried out. The collective set of C2 systems interoperating forms the nervous system of
our nation's military muscle coordinating the actions of troops, weaponry and
intelligence sensors. In essence these are decision support systems which provide data
and automation services to commanders and their staffs. They include computers,
communications networks, sensors, input/output, displays, storage databases, and
business logic designed to provide the "right information, at the right time, in the right
place, in the right format" [1] to support a quick decision.
The hierarchy for Command and Control systems starts with the C2 Enterprise (Figure
1). The Enterprise is made up of C2 nodes, which are collections of C2 systems. Each of
the C2 systems is an integrated set of mission applications. The mission applications
themselves are made up of software modules or components. Operational users employ
mission applications to perform tasks. The applications retrieve, manipulate, display,
generate and store data using a common infrastructure. Interactions and dependencies
between mission applications and software modules are not depicted in the hierarchy for
clarity, but are an important part of C2 systems.
Figure 1 - Hierarchy of C2 Enterprise
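The containment hierarchy just described (enterprise, nodes, systems, mission applications, software modules) can be sketched as a simple data model. The class names and example instances below are illustrative placeholders, not structure taken from any real program.

```python
# Sketch of the C2 Enterprise hierarchy described above:
# Enterprise -> Nodes -> Systems -> Mission Applications -> Software Modules.
# All names below are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class MissionApplication:
    name: str
    modules: list[str] = field(default_factory=list)  # software modules/components

@dataclass
class C2System:
    name: str
    applications: list[MissionApplication] = field(default_factory=list)

@dataclass
class C2Node:
    name: str  # e.g. a command post or operations center
    systems: list[C2System] = field(default_factory=list)

@dataclass
class C2Enterprise:
    nodes: list[C2Node] = field(default_factory=list)

    def all_applications(self):
        """Walk the containment hierarchy down to mission applications."""
        for node in self.nodes:
            for system in node.systems:
                yield from system.applications

enterprise = C2Enterprise(nodes=[
    C2Node("AOC", systems=[
        C2System("TBMCS", applications=[
            MissionApplication("ATO Planner", modules=["parser", "scheduler"]),
        ]),
    ]),
])
print([app.name for app in enterprise.all_applications()])  # ['ATO Planner']
```

Note that, as the text points out, a pure containment model like this deliberately omits the interactions and dependencies that cut across branches of the hierarchy; those are what the DSM analyses later in the thesis capture.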
The goal of an integrated and interoperable C2 Enterprise is to allow
people and machines to seamlessly exchange information and transact business across the
enterprise. One of the most important aspects of the C2 system is the ability to transmit
tasking orders and control military forces. Air Force Instruction 10-207 provides the
following definitions:
Command and Control-The exercise of authority and direction by a
properly designated commander over assigned forces in the
accomplishment of the mission. Command and control functions are
performed through an arrangement of personnel, equipment,
communications, facilities, and procedures employed by a commander in
planning, directing, coordinating, and controlling forces and operations in
the accomplishment of the mission.
Command and Control System-The facilities, equipment,
communications, procedures, and personnel essential to a commander for
planning, directing, and controlling operations of assigned forces pursuant
to the mission assigned.
The concentrations of systems which make up a C2 node are generally located in
facilities like command posts and operation centers. Few people have an appreciation for
the diversity of systems which make up a modern command post, the functions they
perform and the information they deliver unless they have been exposed to the systems in
use during operations. A partial list of C2 systems similar to what would have been used
to control the Air War for operations in Kosovo and Afghanistan is instructive.
- Space Battle Management Core System - provides satellite track data and
analysis tools (what's orbiting in space, and what it can do)
- Tactical Intelligence Broadcast System - provides track data to generate a theater
air picture (what's flying in the air, if it's friendly or hostile, status and intentions)
- Worldwide Origin Threat System - provides launch warning for Theater Missile
Defense
- Theater Battle Management Core System - used for Air Tasking Order
production and execution (plans for, monitors and directs air operations)
- Theater Weather Server - provides theater weather information to support
operations
- Time Critical Target Functionality - provides capabilities required to find and
direct attacks on time critical targets (what targets are emerging on the battlefield,
how important are they, and what's available to engage them)
Each of these systems provides unique data and functional capability to allow an
operational commander to direct and control the war effort.
Operational Aspects of C2
Central to the Air Force's ability to effectively employ combat power is the ability to
react to changing circumstances and direct a wide variety of activities, equipment, and
personnel across geographic boundaries. Prior to the explosion in computer and network
technology, C2 systems were far less complex. Up through the 1980s, wall-mounted
grease boards, file cabinets, and standard forms were the primary methods used to store
and track essential information with decisions and orders being transmitted via
messenger, radio, field phone, or formal message. Advances in computers,
communications and networking throughout the 1990s allow commanders to store,
retrieve and operate on data faster and in more complex ways. This gave rise to an
explosion of C2 systems which support individual commanders' needs. These systems are
often referred to as 'stovepipes' because they did not communicate with other systems or
inject information into other data channels.
The stovepipes created and distributed data and information more rapidly, but the
interfaces between systems were more or less non-existent. Human beings would print
out and re-enter data to move it from one system to the next. As the systems evolved,
individuals began to refine their data needs and define standards for how information
should be produced so that it could be re-used. An early example of data standards
emerging is United States Message Text Format (USMTF) which is a set of highly
formatted messages. These messages capture and transmit essential information to
commanders over secure channels. The highly formatted nature of the messages allows
machines to 'read' the message and automatically transfer or parse the message
information into mission applications or databases. Efforts to parse the USMTF
messages into 'stovepiped' systems had mixed success because these messages are
difficult to work with, and small human errors in formatting often corrupt the import,
forcing users to re-enter the data by hand. The overall trend among C2 systems has
been the need/desire for more automated interchanges of data (a.k.a. increased
interoperability).
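The fragility of these automated imports can be illustrated with a toy parser for a line-oriented, highly formatted message. The field layout below (MSGID, TGTLOC, TOT) is invented for illustration and is not the actual USMTF specification; it only shows how a single malformed line can corrupt an entire import.

```python
# Toy parser for a highly formatted, line-oriented message, illustrating how
# machines 'read' structured text and how one small formatting error breaks
# the automated import. The field layout is invented, NOT real USMTF.

def parse_message(text: str) -> dict[str, str]:
    """Parse 'KEY/VALUE' lines into a dict, rejecting malformed lines."""
    fields = {}
    for lineno, line in enumerate(text.strip().splitlines(), start=1):
        if "/" not in line:
            # One malformed line corrupts the import; in the stovepipe era
            # this meant a human re-entered the data by hand.
            raise ValueError(f"line {lineno} is malformed: {line!r}")
        key, _, value = line.partition("/")
        fields[key.strip()] = value.strip()
    return fields

good = "MSGID/AIRTASK\nTGTLOC/123456N0543210E\nTOT/140500Z"
print(parse_message(good))

bad = "MSGID/AIRTASK\nTGTLOC 123456N0543210E"  # missing the '/' separator
try:
    parse_message(bad)
except ValueError as err:
    print("import failed:", err)
```

The design choice matters: a format strict enough for machines to parse reliably is also strict enough that small human errors in composing a message defeat the automation, which is exactly the mixed result the stovepiped systems experienced.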
From an operational perspective, modern command posts and operation centers are
where the action is, and commanders are under constant time pressure to make decisions
and direct activities. Decision deadlines are generally dictated by the operational
situation. Consequently, decisions are made as early as possible to keep things moving,
or based on the best information available when a deadline is reached. For instance,
suppose an intelligence tip off comes in that a meeting between key leaders of the
adversary forces is taking place and will disband in the next 2 hours. The meeting
subject is reported to be planning for a weapons of mass destruction (Chemical,
Biological, or Nuclear) attack. This scenario would characterize the meeting location as
a Time Sensitive Target and a fleeting opportunity. To capitalize on the opportunity, the
operational commander needs to know if the tip off is reliable, if the facility where the
meeting is being held is vulnerable to attack, if a suitable military asset is available and
can reach the target in time, if there are collateral damage risks in the target area, what
the risks to friendly forces are, and what the opportunity costs of diverting forces are. Then
the commander must be able to issue orders. All questions must be resolved, a decision
made, orders issued and received, and forces positioned within the 2 hour timeline.
This stressing and very specialized type of military operation draws on C2 systems to
orchestrate activities and has its own specialized set of terms and language. How the
commander and his battle staff think about and resolve the problem is to draw on
information from battlefield sensors, intelligence databases, and operational databases.
They determine which weapons have a capability against the target, check the availability
and status of the weapons, and check the availability of support assets required. They
review the current plans, determine if retasking the assets is advantageous, and if so they
provide authentication for dynamic retasking. For this scenario an air component
commander might use the Space Battle Management Core System to check the status of
space based sensors and the Global Positioning System constellation needed to support
the engagement, the Tactical Intelligence Broadcast System to maintain situational
awareness of real time air tracks and threats, Worldwide Origin Threat System to monitor
for missile launches, the Theater Battle Management Core System to analyze threats and
task assets to form a strike package, the Theater Weather Server to provide climatological
data for mission planning and predictions of sensors ability to acquire targets, and the
Time Critical Target Functionality to task intelligence sensors, positively identify the
target, determine the target priority in relation to other pre-planned targets, and determine
an optimal weapon target pairing.
Without integrated C2 systems the information needed to make a decision cannot be
assimilated in time and the commander is forced to make a judgment call under greater
uncertainty. The opportunity may be lost. With appropriately integrated C2 systems
providing critical information in a timely, useable format the commander has an option
and can capitalize on the opportunity.
The set of C2 nodes, each with a unique configuration of C2 systems, that a
commander may want to draw on is illustrated in Figure 2. The air component
commander is located at the node labeled CAOC (Combined Air Operations Center) and
draws on information from other nodes and the ring of assets through the
communications infrastructure available to the CAOC.
Figure 2 - C2 Nodes
These C2 systems must support a commander's information and control needs across a
broad spectrum of conflict from relatively benign peacetime situations, to the highly
dynamic unstructured situations encountered during war. The ultimate objective of the
C2 system is to produce unity of effort by allowing input of multiple commanders and
key warfighters to be brought to bear on a given task, exploit total force capabilities by
allowing individuals to be effective during high-tempo operations while under great
stress, properly position critical information and quickly present requested information
where it is needed, and achieve information fusion by reducing information to the
minimum essentials and putting it in a form that people can act on.
The available C2 systems have the capability to completely inundate a commander
with data. Commanders don't need mountains of data, they need specific relevant pieces
of information to support decisions. In a tactical situation it would be far more valuable
to know the time and place of a leadership meeting planning the use of weapons of mass
destruction, than to have a mountain of data about the leadership personalities, weapons
development programs, facilities, technical capabilities and researchers who built the
weapons. The point is that increasing system interoperability and facilitating the rapid
exchange and transmission of data does not necessarily create value for the commander.
Decisions about what information not to provide and under what conditions to actively
suppress information may be even more important to an effective C2 system than the
capability to generate a comprehensive data set.
What is interoperability and why is it important?
Interoperability of C2 systems is a key enabler needed to unify the efforts of military
forces, achieve higher military effectiveness, and exploit and coordinate the capabilities
available to a commander. When interoperability works it is transparent. When
interoperability is broken there are often multiple systems performing the same or similar
functions to ensure the data is available within the system, and information is often
manually entered and processed multiple times. Users are forced to find work-arounds to
gather and input the data they need to do their jobs. They accept voice data, hand written
notes, faxes, emails, hardcopy documents... and must rekey the information into their
systems. Not only is this inefficient, but it is error prone.
For example, many Precision Guided Munitions (PGMs) require accurate target
coordinates to guide the bomb to the target. The coordinates are needed in
Latitude/Longitude/Elevation in Degrees, Minutes, Seconds format down to
thousandths of a second (DD MM SS.SSS) and appear as:
022 44 37.456 N 122 33 27.123 W, 126.4 ft MSL
This represents a precise point in space that the weapon should fly to and is known as
a Desired Mean Point of Impact (DMPI). The current process to get the DMPI from its
point of origin to the weapon's guidance seeker is a good example of poor interoperability.
Multiple systems, multiple steps, and human intervention are involved in moving the
DMPI through the system. The process starts with a precision point being derived from
imagery at an external intelligence organization. The information is annotated onto a
reference image and the annotated image is sent to an Air Operations Center. Within the
Air Operations Center the precise point is manually entered into a database for use in
creating an Air Tasking Order. At another position within the Air Operations Center the
DMPI record is retrieved from the database and transferred to an Excel spreadsheet to
send it to an operational unit to initiate mission planning. At the operational unit the
DMPI is manually entered into an imagery exploitation system to verify that it is correct,
and visually compared with the original reference image. The DMPI is then manually
entered into a Mission Planning System where it is exported to a data transfer device to
program the aircraft and weapon guidance seeker. The steps in the process are illustrated
in Figure 3 below.
Figure 3 - Current DMPI Process
The process involves four manual entries of the same 22 character lat/long string and
must be done exactly right each time. A typing or transposition error in the seconds digit
(DD MM SS.SSS) represents 100 ft on the ground. Imagine the tedium and potential for
error while generating 80 individual DMPIs for a single B2 mission. If we assume every
DMPI entry takes 60 seconds to type and verify during a manual process, then the four
manual entries for each of the 80 DMPIs represent a potential saving of 5 hours and 20
minutes of effort for every B2 mission. Enhancing system interoperability can have
significant operational payoffs in accuracy, efficiency and timeliness.
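The format check and the arithmetic above can be sketched in a few lines. This is a sketch under stated assumptions: the regular expression below simply mirrors the sample DMPI string shown earlier and is not an official DMPI schema, and the arcsecond calculation uses a mean Earth radius.

```python
import math
import re

# Illustrative validator patterned on the sample DMPI string above;
# not an official schema.
DMPI_RE = re.compile(
    r"\d{3} \d{2} \d{2}\.\d{3} [NS] "
    r"\d{3} \d{2} \d{2}\.\d{3} [EW], "
    r"\d+(\.\d+)? ft MSL"
)

def is_valid_dmpi(s):
    return DMPI_RE.fullmatch(s) is not None

ok = is_valid_dmpi("022 44 37.456 N 122 33 27.123 W, 126.4 ft MSL")

# One arcsecond of latitude on the ground, using a mean Earth radius of
# 6,371 km, is about 101 ft -- hence an error in the seconds digit is
# roughly 100 ft of miss distance.
ft_per_arcsec = 6_371_000 * math.radians(1 / 3600) / 0.3048

# Four manual entries per DMPI, 80 DMPIs per B2 mission, 60 s each:
total_seconds = 4 * 80 * 60
hours, minutes = divmod(total_seconds // 60, 60)   # 5 h 20 min
```
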
For interoperability to work in this example, every system along the chain must agree
on data formats, geodetic coordinate conventions, units of measure, communications
protocols... When approached from a systems perspective, interoperability is primarily a
technical question. Are there physical or electronic connections between the systems?
Are the communications protocols compatible? Are there shared data conventions or
conversion utilities? These are technical interoperability issues, but interoperability
doesn't stop with a technical implementation. Operational commanders must have the
authority and capability to task or request support from other nodes and systems. They
must have processes and coordination measures in place to take advantage of the
information and services being provided. Hence there is an aspect of operational
interoperability which is defined as:
The ability of systems, units, or forces to provide services to and accept
services from other systems, units, or forces and to use the services so
exchanged to enable them to operate effectively together.
[2]
True interoperability requires both technical and operational aspects to be addressed
and achieving interoperability is more than a systems design and engineering effort. It
requires an interaction between operational processes and technology designed to assist
or automate those processes. Often a trade-off must be negotiated to modify an
operational process to take advantage of available technology, in addition to the
systems engineering and design work needed to apply the technology to the operational
domain.
How C2 Systems are developed and fielded
Historically, the acquisition of C2 systems has treated them as 'stovepipes', where
individual systems are developed by separate program offices to maximize performance
for a particular user's environment. Little consideration was given to interoperability (as
highlighted by the PGM example). Individual system program offices pursued capability
upgrades and migration to the latest technologies based on their cost, schedule and
performance goals. Interdependencies and inter-relationships between programs in the
C2 domain exist, but are not under the direct control of the individual program managers.
Part of the reason for the 'stovepiped' nature of the development had to do with the
requirements, budgeting, and program definition processes. Requirements for a system
start with operational user needs and shortfalls of the current systems and processes used to
perform an assigned mission. The missions are based on a universal set of high level Air
Force Tasks. Commanders develop Mission Essential Tasks Lists which they are
responsible for, and develop procedures to accomplish those mission essential tasks. An
example of Air Force Tasks for Command and Control is given in Figure 4.
[3]
Figure 4 - Air Force Task List 7 for Command and Control
These needs and shortfalls for performing Mission Essential Tasks are documented in
a Mission Needs Statement. New concepts for performing the mission and applying
technology to improve the mission are explored and result in recommendations for either
material solutions (systems), or non-material solutions (new processes or additional
manpower). The need for a material solution spawns an Operational Requirements
Document (ORD) which is intended to state user requirements in performance-based
operational terms (what the system must do) rather than specifications or engineering
design goals (how the capability is achieved). The ORDs are critical documents because
they are used to establish programs and budgets in the System Program Offices (SPO).
The ORD is the reason for building a system.
Once an ORD has been reviewed, coordinated, and validated, a SPO is assigned the
responsibility to develop a system which meets the operational requirement. The SPO
develops a Program Objective Memorandum (POM) which is a 5 year projection of what
it should cost to build, field, and support a system. The POM submission goes into the DoD
Planning, Programming, and Budgeting System and ultimately budget authority is
delegated down to the SPO. Then the SPO has a 5 year spending plan to build a system
which meets the user requirements set out in the ORD. At this point there is a huge
knowledge gap between user needs and potential systems solutions, but the direction and
commitment to build a system is made. The System Program Office generates Requests
for Proposals, commercial firms submit proposals, a source selection board reviews and
chooses the most promising proposal and a program is launched.
New ORDs and program starts are rare occurrences and are long term commitments to
meeting a continuing user need. Therefore cancellation of a program would require
another program to be stood up in its place. Additionally, the process which delegates
budget authority down to programs places restrictions on how the money can be spent.
The program offices are legally obliged to spend budget dollars on the ORD requirements
which justified the budget. When changes to another system are required to improve
interoperability, the other system has little incentive to spend its budget, and transferring
funds between programs to pay for work is a gray area which is likely to incur scrutiny
from auditors. The process for system acquisition as it was defined in the Federal
Acquisition Regulation DoD 5000 series is shown in Figure 5 below. Milestones (MS 0,
I, II, III) are points at which decisions to continue, redefine, or kill a program are made.
The actual program launch starts at MS I, a full system design based on risk reduction
prototyping is ready at MS II, and by MS III a fully tested and certified system is ready
for fielding. Long term sustainment, operations and maintenance of the system follow.
Figure 5 - DoD System Acquisition Process
C2 systems are significantly different from most weapon system acquisitions. As in
many product development projects, users do not always know what they want until they
see it, therefore the requirements are incomplete and prone to change as the system
matures. To complicate matters, the technology life-cycle for IT is much shorter than for
most technologies, and as users are exposed to the latest advances in commercial IT, they
expect to see similar advances in fielded C2 systems. This leads to further requirements
instability. C2 systems require the integration of many independently developed software
modules, many on independently managed delivery schedules, where changes and
upgrades to versions are almost continuous, and module to module version compatibility
must be established and tested. Getting all the module versions to align for an integrated
delivery is a significant challenge. The likelihood that they will all work
together the first time is slim. Add the probability of a late delivery or deferred piece of
functionality in a module version, and the likelihood of successful system integration and
delivery starts to look vanishingly small.
To help cope with the rapidly changing environment the Air Force established a new
approach to acquisitions for C2 systems in April 2000. Air Force Instruction 63-123,
Evolutionary Acquisition For C2 Systems, treats C2 systems as one integrated weapon
system, and creates a more flexible acquisition strategy based on spiral development to
adapt to evolving requirements and shorter technology life-cycles. Under the new C2
systems acquisition strategy incremental deliveries of capability are packaged in spirals
and requirements and system design evolve based on user feedback and testing results.
The goal is to deliver spiral increments to the field in less than 18 months, realizing that
they will be "80% solutions" and will be enhanced in subsequent deliveries.
The April 2000 policy change at the Air Force level filtered down to the Electronic
Systems Center and direction for organizational changes to support the Evolutionary
Acquisition approach began approximately 1 year later. In Jun 2001 the Designated
Acquisition Commander (DAC) for the C2 Enterprise released Directive 001 which
introduced the concept of managing C2 Nodes. In Oct 2001 DAC Enterprise Directive
003 established C2 Node Responsibilities and created an initial set of 72 C2 nodes
grouped into 10 functional areas. The new multi-tiered acquisition management structure
defined by the DAC Directives mirrors the operational employment of the systems. The
tiers represent a logical decomposition of responsibilities starting with the establishment
of a C2 Enterprise Integration commander. Below the enterprise level are a series of C2
nodes and node managers, and below the nodes are individual C2 programs and program
managers. At the lowest level C2 programs are composed of individual software
segments and projects which are either contracted for development or purchased as
Commercial Off-The-Shelf (COTS) software. An overview graphic of the C2 enterprise
management structure is presented in Figure 6 below.
[Figure content: Tier-0, Command-level (Policy; Direction); Tier-1, EIM-level (Plan, Budget; Architect; Assess, Analyze); Tier-2, Execution-level (Plan, Budget, Architect; Integrate, Test, Certify; Deliver, Sustain) and Execution-level (Plan, Budget; Design; Acquire)]
Figure 6 - C2 Enterprise Integration Management Model
A year after the organizational changes, individual system program offices are starting
to overcome the resistance to change and are grappling with what it means to do spiral
development. To realize the benefits of C2 enterprise management and improve
interoperability, cooperation and collaboration will have to occur between managers at
the C2 node and program levels and these relationships are still forming.
As an example, let's consider the scope of the C2 node known as the Combined Air
Operations Center (CAOC). The node manager has responsibility for planning, delivery,
testing, fielding and sustainment of a certified 'weapon system' designated the
AN/USQ-163 Falconer (a.k.a. the CAOC), which is itself a system-of-systems. The 142 systems
which make up the CAOC node are either commercially purchased and integrated, or
developed and delivered by individual program managers. The CAOC node manager has
primary responsibility for ensuring intra-nodal interoperability between the component
systems. Additionally, inter-nodal interoperability issues must be resolved by mutual
agreement between interfacing node managers. Figure 7 shows the inter-nodal interfaces
which must be resolved by mutual agreement.
Figure 7 - CAOC Node Operational View
Program Managers (PM) are responsible for acquiring the systems, subsystems,
components, and/or services utilized by the C2 Nodes. These systems are themselves
often large and complex with many mission applications developed by sub-contractors
and delivered to a prime or integrating contractor to assemble and test. The integrating
contractor is often tasked with the responsibility of supply chain management for 3rd
party deliveries, schedule coordination, integration and testing, which adds to the overall
management complexity of delivering a certified program to a node manager. Cost
growth, schedule slips, discrepancies and rework on 3rd party deliveries can directly
affect the critical path of the larger program. In some cases the 3rd party developers have
more than one customer, and the integrating contractors and program managers have
limited leverage over the sub-contractors to meet their schedule or performance
requirements.
Accommodating shorter technology life-cycles and evolving requirements in this
process is an ongoing challenge for program managers. Greater flexibility in the
acquisition strategy for funding, and incremental deliveries using a spiral development
process are enablers for program managers to meet these challenges. The DoD revised
the Federal Acquisition Regulation DoD 5000, to include a new acquisition process
model. See Figure 8 below. Based on this model a commercial or experimental
technology can be inserted at Milestone A, B, C, or directly into operations depending on
the maturity and suitability to the mission area. Additionally, future block releases are
shown to the lower left which indicates evolutionary development. Ultimately, the
development and fielding of C2 systems follows a relatively simple path from mission
needs, to operational requirements definition, to system development and testing, to
fielding and operational use.
[Figure content: Pre-Systems Acquisition; Systems Acquisition (Engineering and Manufacturing Development, Demonstration, LRIP, Production); Sustainment; Milestones A, B, C; MNS; ORD; FOC; Relationship to Requirements Process]
Figure 8 - New DoD Systems Acquisition Process
The strategic and organizational changes for C2 systems acquisition are well
underway. The acquisitions community is now entering a period where new tools and
methodologies to deal with the additional complexity required by interoperability must be
introduced. New relationships and processes must be forged to negotiate changes
between inter-dependent systems. And finally, incentives should be developed to
encourage cultural changes better suited to the new paradigms of interoperability and
spiral development.
Research & Case Studies
The research and case study analysis for this thesis centered on C2 systems developed
for use in an Air Operations Center (AOC). The case study description draws on a
review of program documentation, cost performance reporting, acquisition regulations
and directives, software development practices, and personal interviews with program
office personnel, development contractors, and operational users. Analysis of system
architectures using Design Structure Matrices pointed to a set of 'high risk' sub-systems
and mission applications which are dealt with in greater detail to provide background on
how issues with development and system integration can create barriers to
interoperability.
Case Study of an AOC System
The case study of an AOC System and its relationship to a number of sub-systems is
not meant to be exhaustive. Instead the case focuses on program management and
interoperability aspects from an internal and external standpoint. The analysis starts with
an overview of the chronological progression of the program development and then moves
into AOC process and systems details. The AOC system being studied was not a new or
radical concept, but was a migration and merging of functionality contained in three
legacy systems. The legacy systems were 'stove-pipes' that did not interchange
information, and the effort to combine them into one common system was an attempt to
break down the walls of the existing stove-pipes. The requirements and budgets of the
legacy systems were merged with some additions, and analysis of the AOC processes
was conducted.
The program was approved in Oct 1995 with software development starting in Dec 95.
The initial program schedule was 36 months with a $95M budget to deliver a
version 1.0 in Oct 98. The estimates for code development costs were based on the total
System Lines of Code (SLOC) from the legacy systems with minor adjustments. The
original baseline was estimated at 181,422 SLOC. Total SLOC divided by a productivity
factor of 69 Lines of Code/Staff Month (LOC/SM) for difficult software was used to
determine that 2,629 staff months of development effort would be required. Assuming a
$12k/staff month rate, the software development estimate was ~$31.5M with the
remaining $63.5M covering hardware, software licenses, configuration
management, training, documentation, overhead... The underlying assumption was that
the majority of the code from the legacy systems could be re-written and integrated into a
single system. The staffing profile was to ramp up to 100 software developers and ramp
down or shift staff to a version 2.0 project as version 1.0 approached test and delivery.
By Dec 99 the total SLOC had grown to 1,142,000, over 6 times the original estimate,
the staffing level had grown to 350 developers, and version 1.0 still had not been delivered
(see Figure 9). Interviews and written correspondence from program office staff
attributed the code growth to an underestimation of the complexity of developing both the
infrastructure and mission applications software required for the system. This initial
underestimate of the complexity resulted in the need for increased staffing levels, and
extended the development schedule to complete the system. A rough projection based on
the 1,142,000 SLOC in Dec 99 would require 16,550 staff months of development effort
and $198M in software development costs to complete. This is consistent with the overall
staffing profile and costs actually incurred to complete the system. In the end it took 65
months to deliver version 1.0 to the field and the cost at completion was ~$260.1M. It
should also be noted that many of the original requirements were omitted to save time
and money and actually deliver a functioning system.
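The estimating relationship used above is simple enough to reproduce directly. This is a sketch; the productivity factor and loaded rate are the case-study figures quoted in the text, not general-purpose constants.

```python
# Reproduce the case study's cost model: SLOC divided by a productivity
# factor gives staff months of effort; staff months times a loaded rate
# gives software development cost.
def sw_estimate(sloc, loc_per_staff_month=69, dollars_per_staff_month=12_000):
    staff_months = sloc / loc_per_staff_month
    return staff_months, staff_months * dollars_per_staff_month

# Original baseline: 181,422 SLOC -> ~2,629 staff months, ~$31.5M
sm_orig, cost_orig = sw_estimate(181_422)

# Dec 99 code growth: 1,142,000 SLOC -> ~16,550 staff months, ~$198M
sm_dec99, cost_dec99 = sw_estimate(1_142_000)
```
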
[Chart: AOC System Code Growth - Applications, Infrastructure, and Total SLOC plotted from May-96 through Mar-00]
Figure 9 - AOC System Code Growth
The AOC system was developed to provide automated decision support tools to
improve planning, preparation, and execution of joint air combat operations and was
designed to operate in a distributed fashion linking together a number of C2 nodes. The
process which the AOC system was designed to automate is shown in Figure 10. The
personnel who use the system are organized into four functional divisions: Intelligence,
Strategy, Combat Plans, and Combat Operations. The AOC process is largely cyclical
starting with the Strategy division establishing the major objectives for an air war and
selecting targets for attack to meet those objectives. The Strategy division documents the
objectives and targets in an Air Operations Directive (AOD) and hands it to the Combat
Plans division where resources (fighter, bomber, tanker aircraft... ) are assigned to attack
the targets. The Combat Plans division documents their resource assignments in an Air
Battle Plan (ABP) and hands it to the Combat Operations division for execution. The
Combat Operations division creates an Air Tasking Order (ATO) which it distributes to
air forces and this becomes the basis for pilots to plan their combat missions. The
Combat Operations division then monitors the mission execution, manages changes in a
dynamic manner as execution progresses, and provides reporting of the results and Battle
Damage Assessment for the day's operations. Reporting and analysis from operations
are fed back into the Strategy division and the process continues until the objectives are
met and the war is ended. While the basic process appears almost serial, the reality is
that activities are running in parallel, with Strategy working on the war two days from
now, Plans working on the war for tomorrow, and Operations working on today's war. In
short, three plans are in parallel development at different stages of maturity at any given time
within the AOC. Intelligence is considered a supporting function and is integrated into
all three divisions simultaneously.
Figure 10 - AOC Process and Organizational Divisions
The information flows between divisions are better represented by Figure 11 which
shows information products being exchanged between divisions internal to the AOC and
external organizations tasked to fly combat missions or to collect and deliver intelligence
information. The more complete process starts with Step 1 when raw intelligence
information is delivered to the AOC. Step 2 has the intelligence division generating
requests for information (RFIs) to determine the status of adversary forces. The RFI is
converted by an external organization where a collection manager tasks battlefield
sensors and other sources to collect the required information for Step 3. Step 1 completes
the first loop with information to answer the RFI delivered to the intelligence division
for exploitation. In Step 4 a tailored intelligence product describing the current
battlefield situation is passed to the strategy division to initiate strategy development.
This often results in more detailed intelligence questions, so Step 5 is another RFI
generated, which runs through the loop to collect, analyze, and build a tailored product
for the strategy division. The process continues with strategy feeding plans, plans
feeding operations, operations tasking external combat assets at the flying units, and
intelligence supporting the information needs of all three divisions simultaneously.
Figure 11 - Basic AOC Process, Divisions, and Data Flows
1. Intelligence & Sensor information
2. Intel RFIs
3. Sensor Tasking (Collection Requirements)
4. Tailored Intel support to Strat
5. Strat RFIs
6. Finished Strategy Information (AOD)
7. Tailored Intel support to Plans
8. Plans RFIs
9. Finished Plans Information (ABP)
10. Operations RFIs
11. Tailored Intel support to Operations
12. Sensor Operations Coordination
13. Operations Execution Coordination
14. Execution Tasking (ATO)
Creating a software system to automate and support the information flows between
organizational divisions within the AOC and external organizations like the intelligence
community and flying units was the objective of the AOC system development. The
system was based on a client-server configuration with multiple client workstations
accessing shared database servers. The architecture for the system software was to be
based on the Joint Technical Architecture which is a layered software architecture (see
Figure 12) consisting of Mission Applications, Application Services, and Infrastructure
Services. Interfaces between mission applications and databases were to be provided by
the application services layer. This was meant to provide isolation and decouple
databases and mission applications thereby allowing greater flexibility in modifying the
system. The AOC system was decomposed into 13 mission application areas, which are:
" Air Campaign Planning (ACP)
" Airspace Deconfliction (AD)
" Defensive Planning (DP)
" Execution Management (EM)
"
Intelligence Data Management (IDM)
"
Spectrum Management (SM)
32
0
Resource Management (RM)
" Situational Awareness and Assessment (SAA)
" Targeting and Weaponeering (TW)
=
Theater Air Planning (TAP)
" Threat Evaluation (TE)
" Time Critical Targeting (TCT)
-
Weather (Wx)
[Figure content: Mission Applications (User Interface, Application Logic); Application Services (Data Access Agents, Alerts, Message Services, etc.); Infrastructure Services (Operating System, DII COE Kernel, Databases)]
Figure 12 - Joint Technical Architecture
For reasons which are not entirely clear, the Joint Technical Architecture was violated
early in the system design. Interviewees attribute this to a desire to start development
quickly and show progress. Since many of the specifics of the data interfaces had not
been defined, it was easier to establish direct interfaces between mission applications and
other components of the system and work out the specifics as they went. In this way, one
to one interfaces could be built and modified without the need to redesign the application
services layer in addition to the application and infrastructure layers. An example of an
interface between two mission applications as described in System/Segment Specification
text is
"EM - RM Changes to Mission Tasking Data. There shall be an internal
interfacefrom EM to RM to provide mission re-tasking, mission tasking
changes and support data related to that tasking."
No diagrams were prepared showing the interfaces between mission applications and
system components in the specification. The diagram in Figure 13 was generated for this
thesis from the details in the system specification. The diagram maps the relationship of
the software segments to one another and shows where they would be used within the
organizational structure of the AOC. The mission application interfaces bear little
resemblance to the AOC process shown in Figure 11. Looking at the information flows
across the division boundaries, it becomes apparent that the Intelligence systems are
highly interfaced with all the other divisions.
[Figure omitted: diagram, generated for this thesis, of the specified
internal interfaces between mission applications (AD, EM, TAP, ACP, Wx,
TE, TW, TCT, IDM, SAA) grouped into the Strategy, Combat Plans, Combat
Operations, and Intelligence divisions of the AOC.]
Figure 13 - Interfaces between Mission Applications
As system development began, a number of pictures were created to help staff
describe and understand the system. The early architecture diagrams lacked the detail
needed to understand the system interfaces and reason about them to make decisions. The
System Version Requirements Document (SVRD) for version 1.0 had a total of
approximately 1012 documented requirements of which 472 (or 47%) are related to
mission application functionality that provide users with features important to completing
C2 tasks. The remainder of the requirements were common support functions like disk
access, system administration, networking and communications, etc...needed to build a
working system. Of the 472, approximately 193 (or 41%) dealt with sharing information
between C2 nodes or other mission applications. Examples of mission application
functional requirements which supported interoperability are:
"The ATO/ACO tool shall provide the capability to receive the ATO and
the ACO messages at the unit and remote sites via an external interface"
and
"TW shall provide the capability to transfer post strike reconnaissance
requirements to collection managers."
To assist in understanding the complex relationships which were specified, the
program management office created their own architecture representations. The products
used by the program managers to guide the development of the C2 system were primarily
PowerPoint diagrams. Figure 14 below was used extensively by program managers over
a three year period. Elements of the Joint Technical Architecture are present in the
diagrams, but the layered nature of the architecture is not evident. In the diagram,
cylinders represent databases, rectangles represent mission applications, ovals represent
application services, the areas with dashed lines are external interfaces, and hexagons are
externally developed hardware systems.
[Figure omitted: AOC system data flow diagram showing databases, mission
applications, application services, externally developed systems, and
external interfaces including ATO/ACO distribution over secure WAN,
TADIL A/B/J and NATO Link 1 feeds, TDDS, weather feeds (AWN, AWDS),
GCCS-A, ASAS, and AFATDS.]
Figure 14 - AOC System Data Flow Diagram
A Design Structure Matrix (DSM) was generated for the thesis to perform additional
analysis. Further details on DSMs can be found online at http://web.mit.edu/dsm. To
build a DSM, the process steps or system components are used to populate the row and
column headings of a square matrix. In this adaptation of a DSM, a square matrix is
populated with system components, one per row/column and ones marked in the matrix
denote an interface or data flow between two components. Reading across the rows
allows a developer to see all the inputs to the component. Reading down the column
allows the developer to see all the outputs from a component. Figure 15 is an example of a
simple DSM with three components A, B, and C. A has an output to B and an input
from C; B has an input from A and an output to C; and C has an input from B and an
output to A. The example system has a feedback loop, A to B to C to A.
[Figure omitted: simple 3x3 DSM with components A, B, and C. A "1" in the
matrix shows that component A provides data to component B, or,
equivalently, that component B receives data from component A.]
Figure 15 - Simple DSM Example
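The reading convention above can be sketched in code. This is a minimal illustration of the three-component example, not tooling from the thesis:

```python
# Minimal DSM for the three-component example above. A 1 in row r,
# column c means component r receives data from component c, so rows
# give inputs and columns give outputs.
components = ["A", "B", "C"]
dsm = [
    [0, 0, 1],  # A receives from C
    [1, 0, 0],  # B receives from A
    [0, 1, 0],  # C receives from B
]

def inputs(name):
    """Read across the row: every component that feeds `name`."""
    r = components.index(name)
    return [components[c] for c in range(len(components)) if dsm[r][c]]

def outputs(name):
    """Read down the column: every component `name` feeds."""
    c = components.index(name)
    return [components[r] for r in range(len(components)) if dsm[r][c]]

print(inputs("A"), outputs("A"))  # ['C'] ['B']
```

Following the chain of outputs reproduces the feedback loop described in the text: A feeds B, B feeds C, and C feeds A.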
The DSM in Figure 16 below is an alternative representation of the system dataflow
diagram in Figure 14 (with minor modifications and additional interfaces that were
omitted from the original diagram for clarity).
[Figure omitted: DSM of the system data flow, a square matrix of the 42
system components with 1s marking data interfaces, organized by
architectural layer.]
Figure 16 - DSM of System Database Flow
This DSM representation of the system organizes the components by type, which is
more consistent with the layered nature of the architecture the software is intended to have. The
software layers are color coded showing the Defense Information Infrastructure Common
Operating Environment (DII COE) on top (row 1), databases next (rows 2-5), application
services (rows 6-11), mission applications (rows 12-32), externally developed systems
(rows 33-35) and external systems (rows 36-42). The DSM quickly reveals that many
applications are directly accessing the database for both reads and writes and shows the
interdependence of mission applications. The DSM also allows one to quickly identify
external interfaces and the mission applications with the greatest number of dependencies
(both internal and external). The benefits of using a DSM representation of C2 software
architectures will be explored in greater detail later.
By quickly summing the rows and columns of the DSM we can see which system
components have more than 10 data input or output interfaces. The list of components
with greater than 10 interfaces could be considered high risk and includes: DII COE,
AODB, MIDB, Message Processing (USMTF), Joint Mapping Tool Kit (JMTK), TW,
TCT, SAA, TAP, and EM. This list of components identified by interface count using
the DSM corresponds almost exactly to components which presented the greatest
challenge during development. Each of the high risk components will be discussed in
greater detail. TAP and AODB were not included in the detailed discussions for brevity.
Both of these components were under the direct control of the prime contractor and
while difficult to implement were successful in the fielded system.
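The row/column summing described above might look like the following sketch. The threshold matches the text, but the small matrix and component names are illustrative, not the actual 42-component AOC DSM:

```python
# Sketch of the row/column summing used above to flag high-interface
# components. The small matrix below is illustrative only.
def interface_counts(components, dsm):
    """Interfaces per component = row sum (inputs) + column sum (outputs)."""
    n = len(components)
    return {
        name: sum(dsm[i]) + sum(dsm[r][i] for r in range(n))
        for i, name in enumerate(components)
    }

def high_risk(components, dsm, threshold=10):
    """Components whose total interface count exceeds the threshold."""
    counts = interface_counts(components, dsm)
    return sorted(name for name, c in counts.items() if c > threshold)

comps = ["DB", "App1", "App2"]
matrix = [[0, 1, 1],
          [1, 0, 0],
          [1, 0, 0]]
print(high_risk(comps, matrix, threshold=3))  # ['DB']
```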
Defense Information Infrastructure Common Operating Environment (DII COE)
Compliance with the Defense Information Infrastructure Common Operating
Environment (DII COE) was a downward directed requirement and mandated for all C2
systems. This meant that every system component had to be built to DII COE
specifications, and every time DII COE was changed it affected every component in the
system. The rework associated with compliance was significant.
38
What is DII COE? DII COE is a software infrastructure for supporting mission
applications, and a set of guidelines and standards. It includes a kernel, segment installer,
and a library of common segments for functions like alert services, mapping, common
operational picture and overlays. The intent was to provide an approach for building
interoperable systems by creating a collection of reusable software components. Under
the DII COE concept system functionality can be added to or removed in small
manageable units called segments. These segments are the building blocks of mission
applications and are essentially software modules that provide system capability.
Structuring mission applications into segments is meant to provide flexibility in
configuring the system because segments are meant to be individually replaceable. In
fact the DII COE installer can add or remove individual segments quite easily allowing
rapid upgrades. The guidelines and standards specify how to reuse existing software, and
how to properly build new software so that integration is seamless. DII COE is a "plug
and play" open architecture and a foundation for building an open system. The shortfall
is that segment dependencies are not addressed by DII COE. While DII COE makes it
easier to remove and install segments by supplying its version of an install wizard, it does
not explicitly consider the potential conflicts, or incompatibilities in new configurations.
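The missing dependency handling could, in principle, look something like the sketch below. The segment names, version strings, and `check_install` helper are all hypothetical illustrations, not part of DII COE:

```python
# Hypothetical sketch of the dependency check the DII COE installer lacked:
# before installing a segment, verify its declared dependencies are present
# at compatible versions. Segment names and versions are invented.
def check_install(segment, requires, installed):
    """Return the list of conflicts that would block installing `segment`."""
    problems = []
    for dep, min_version in requires.items():
        if dep not in installed:
            problems.append(f"{segment}: missing dependency {dep}")
        elif installed[dep] < min_version:  # naive string compare for the sketch
            problems.append(
                f"{segment}: needs {dep} >= {min_version}, found {installed[dep]}"
            )
    return problems

print(check_install("TW", {"JMTK": "3.0"}, {"JMTK": "2.0"}))
# ['TW: needs JMTK >= 3.0, found 2.0']
```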
Experience with DII COE in the AOC system was not entirely positive. DII COE was
the major infrastructure component that all mission applications were built to work on.
New releases of the DII COE kernel and common segments are available every six
months and an array of C2 program offices are constantly developing DII COE compliant
segments for reuse. For the AOC system, every mission application that was developed
had to be segmented for DII COE and reuse of DII COE common segments was required.
One of the DII COE common segments that was problematic was the Joint Mapping
Toolkit (JMTK) which is discussed in more detail later. The expense involved in
achieving DII COE compliance was high and the payoff was questionable. The table
below provides a breakout of coding effort associated with building the initial release of
the AOC system software. DII COE compliance affected 48% of total developed SLOC
and segments, and 42% of total software engineering effort. DII COE was associated
with $43M - $51 M of a $106M total development effort.
                   Infrastructure  Application  DII COE   Release
                   Services        Services     Total     Total
Developed SLOC     238,000         287,000      525,000   1,100,000
Segments           173             66           239       500
Table 1 - DII COE Software Lines of Code
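As a quick arithmetic check, the 48% figures quoted above follow directly from Table 1's totals:

```python
# Quick arithmetic check of the shares quoted above, from Table 1's totals.
dii_coe_sloc, total_sloc = 525_000, 1_100_000
dii_coe_segs, total_segs = 239, 500
sloc_share = dii_coe_sloc / total_sloc  # ~0.477, i.e. 48%
seg_share = dii_coe_segs / total_segs   # 0.478, i.e. 48%
```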
From a development standpoint DII COE suffers from many of the same problems that
are encountered with other third-party software deliveries. Overall the integration took
longer than planned, costs associated with upgrading were extremely high because all
segments must be retested following an upgrade, and fixes in new versions came with
new problems & bugs. Future systems may benefit from DII COE segmentation work
done as part of this development, but the opinion of most program managers and
technical staff is that DII COE standards are already outdated and commercial standards
are more useful for improving interoperability. The requirement to adhere to DII COE
standards is viewed as a burden which increases cost and schedule without providing
direct benefits to the program.
While DII COE is an attempt to create a more flexible architecture, the need for an
architecture which plans for and anticipates variation is not addressed. The dilemma
faced by program managers is managing and synchronizing upgrade cycles under
conditions of cost and schedule uncertainty. DII COE segmentation allows program
managers greater flexibility in recombining segments to create a system, but simply
determining what combination of components or upgrades to include in a version release
to meet the required functionality can be a challenge given the range of choices available.
Tools for understanding interdependencies between software segments and the
functionality achieved in differing combinations are sorely needed. The DSM is an
excellent tool for documenting and tracing component dependencies or interfaces.
Modernized Intelligence Database (MIDB)
The MIDB is a relational database populated with worldwide intelligence information.
MIDB is managed and developed by an external organization with their own set of
contractors and sub-contractors. As the DSM shows, 13 different mission applications
are making calls on the database. Individually, the database can respond to those calls
and provide data, so development work proceeded to build and test each interface
individually. The MIDB development cycle releases updates to tables, elements and
rules every 6 months. These periodic updates force rework and retesting of the interfaces
to multiple mission applications. In conjunction with DII COE changes, the
rework cycles were excessive and development progress was slowed.
The database itself uses a highly complex set of tie tables to reduce the amount of data
stored, but this has adverse effects on performance because changes and queries must go
through a large number of tie tables to retrieve or update a record. When a query or store
action is initiated on a record in the database, all the tie tables associated with that record
are effectively locked until the query or store action is completed. In a database where
thousands of records are associated with a single tie table these locks are required to
ensure updates are completed properly and the relationships are preserved. When the
database is shared by multiple users, contention for access to the database tables becomes
a problem. Within the AOC environment where as many as 450 users are on the system
simultaneously and 40 may be running complex queries or store procedures against the
database, contention within the database became a significant problem when database
query responses of 1,000 records took on the order of 20 minutes. This
performance issue was not adequately addressed during design.
The highly complex nature of the database required a significant learning curve on the
part of the prime contractor to integrate as part of the system. A number of specialized
stored procedures and shadow tables were created to improve performance. With the
stored procedures and shadow tables, similar query response times were reduced to
approximately 2 minutes. This was a laudable achievement on the part of the
development contractor and absolutely essential to system usability, but the database still
had not been placed under operational loads. The focus was on the unique
implementation necessary to meet system performance and usability needs. This unique
implementation carried with it an impact on the ability to make changes and upgrade to
new database releases available every six months. The rework required to accommodate
MIDB upgrades was extremely difficult and costly. Since MIDB was on a 6 month
update cycle, the integration was consuming a large amount of staff effort and causing
cost growth. As a result a decision was made to freeze the MIDB baseline for a two year
period. The MIDB configuration freeze allowed development of the AOC system to
progress more smoothly.
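The shadow-table optimization can be illustrated with a toy schema. The tables and data below are invented for the sketch; MIDB's actual schema is far more complex:

```python
import sqlite3

# Invented miniature of the tie-table pattern: facilities link to equipment
# through a tie table, so a lookup needs joins; a denormalized "shadow"
# table precomputes the join so reads hit a single table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE facility (fac_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE equipment (eq_id INTEGER PRIMARY KEY, type TEXT);
CREATE TABLE fac_eq_tie (fac_id INTEGER, eq_id INTEGER);
-- Shadow table: one flat row per facility/equipment pair.
CREATE TABLE fac_eq_shadow (fac_name TEXT, eq_type TEXT);
""")
con.execute("INSERT INTO facility VALUES (1, 'Airfield A')")
con.execute("INSERT INTO equipment VALUES (10, 'radar')")
con.execute("INSERT INTO fac_eq_tie VALUES (1, 10)")

# Refresh the shadow table from the normalized tables: the costly join is
# paid once at write time instead of on every read.
con.execute("""
INSERT INTO fac_eq_shadow
SELECT f.name, e.type
FROM facility f JOIN fac_eq_tie t ON f.fac_id = t.fac_id
                JOIN equipment e ON e.eq_id = t.eq_id
""")
rows = con.execute(
    "SELECT eq_type FROM fac_eq_shadow WHERE fac_name = 'Airfield A'"
).fetchall()
print(rows)  # [('radar',)]
```

The trade-off shown here is the same one the text describes: reads get faster, but every upstream schema change now forces rework of the refresh logic as well.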
When the system went through component testing, performance was acceptable. When
integration testing began with limited load on the database, the performance continued to
be stable but poor. When the system went to full scale testing under operationally
representative loads, database contention caused performance to drop. As the operational
load on the database increased, row and table level locking caused delays in database
responses. When the delays exceeded a preset limit, the database generated a timeout
message which the mission applications had not been designed to handle. This caused
the AOC system client workstations to freeze, but provided no outward indication of the
error. The errors were only detectable by the fact that the client workstation remained
frozen or hung indefinitely. Ultimately, this single MIDB performance problem caused
the system to fail test. Database specialists were brought in to 'tune' the database server
and optimize performance. The originally specified dual processor server was upgraded
to a faster quad processor, and a number of AOC procedural changes were made to
deconflict high load activities on the database. Ultimately the changes to address this one
issue resulted in $1.4M of unanticipated costs and a 4 month schedule delay. Other cost
increases due to MIDB problems were not included in this case study.
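A hedged sketch of the client-side timeout handling the mission applications lacked follows; the `DatabaseTimeout` exception and `run_query` wrapper are invented for illustration:

```python
# Hypothetical sketch of client-side timeout handling; the mission
# applications in the case study hung instead. `DatabaseTimeout` and
# `run_query` are invented for illustration.
class DatabaseTimeout(Exception):
    """Raised when the database exceeds its preset response limit."""

def run_query(execute, retries=2):
    """Run `execute()`, surfacing timeouts to the user instead of hanging."""
    for attempt in range(retries + 1):
        try:
            return execute()
        except DatabaseTimeout:
            if attempt == retries:
                # Give the operator a visible error; the UI stays responsive.
                return {"error": "Database busy; query abandoned."}

def always_times_out():
    raise DatabaseTimeout()

result = run_query(always_times_out)  # {'error': 'Database busy; query abandoned.'}
```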
United States Message Text Format (USMTF)
USMTF is a highly structured set of text message formats. The structured nature of
the messages allows machines to 'read' the data contained within them and pull out the
relevant data fields for automated processing. The ability to receive a structured or
delimited text message and use it to populate fields in databases or mission applications is
extremely valuable because it eliminates the manpower intensive and error prone task of
retyping data into an application. The value of USMTF made it an integral part of the
system as a whole and there were many mission application, database, and external
dependencies which shared USMTF formats.
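The machine-readable structure can be illustrated with a minimal parser for a slash-delimited, USMTF-style line. The set layout shown is invented for the sketch and is not an actual USMTF set definition:

```python
# Illustrative parser for a slash-delimited, USMTF-style line. The set
# layout ("MSGID/...") is invented and not an actual USMTF set definition.
def parse_set(line):
    """Split a structured text line into its set identifier and fields."""
    fields = line.strip().split("/")
    return {"set": fields[0], "fields": fields[1:]}

parsed = parse_set("MSGID/ATO/EXAMPLE001")
# {'set': 'MSGID', 'fields': ['ATO', 'EXAMPLE001']}
```

Because parsing is purely positional, one misplaced delimiter shifts every subsequent field, which is exactly why format changes described below force full revalidation.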
There are two drawbacks to USMTF. First, the message formats are changed and
updated on an annual basis. This requires the entire set of message formats used to be
reviewed and retested in the system to ensure the data automatically transfers as
designed. Small changes of even one misplaced or changed character can cause the data
to be transposed in the databases or mission applications and result in completely
corrupted data. Validation scripts were developed by the AOC system to protect against
this problem. USMTF format updates impact the validation scripts, mission applications,
and databases behind them (MIDB and AODB). The second drawback is centered on the
humans who input data into the systems. In some cases humans prepare the USMTF
messages directly. Small formatting errors can cause the messages to fail validation, so
users receiving the messages are forced to print the messages and retype the data into the
systems on the distant end. In other cases, the systems generate the USMTF messages
automatically, and system operators inputting data must follow strict data formatting
conventions. The differences between "F15" and "F-15" or "F-15E" and "F-15e" are
subtle, but critical to the mission applications. For the systems to interoperate properly
all the mission applications and databases must agree on common formats. Every change
to a USMTF format must be traced through the system to ensure consistency with the
change. When human operators fail to follow the established conventions, subtle
problems and bugs emerge which can be difficult to trace and fix. This requires the
mission applications to enforce consistency by using data-masks or pick lists, which
generally makes the systems less user friendly. For example, when a mission application
requires multiple data fields to be input by an operator and then processed, if one of the
fields is incorrect the application must handle the error gracefully. There is nothing more
frustrating than to have the system reset all the fields without an explanation of what was
wrong. While these implementation details are not technically challenging, they are
absolutely essential to system interoperability and usability.
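The canonicalization the text argues for might look like this sketch; the pick list and `normalize_designator` helper are illustrative assumptions, not part of the fielded system:

```python
# Sketch of the canonicalization argued for above: map free-text aircraft
# designators onto a pick list before they reach the databases. The
# canonical list and helper are illustrative assumptions.
CANONICAL = {"F-15", "F-15E", "F-16"}

def normalize_designator(text):
    """Return the canonical designator, or None if there is no unique match."""
    key = text.strip().upper().replace("-", "")
    matches = [c for c in CANONICAL if c.replace("-", "") == key]
    return matches[0] if len(matches) == 1 else None

print(normalize_designator("f15"))    # F-15
print(normalize_designator("F-15e"))  # F-15E
print(normalize_designator("B-2"))    # None
```

Returning None for an unknown designator gives the application a chance to prompt the operator, rather than silently storing an inconsistent value.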
The annual upgrades to USMTF were too costly to maintain within the AOC system.
A biennial upgrade strategy was adopted, using even-year USMTF formats (i.e. USMTF
'98, USMTF '00, USMTF '02). The cost to make the USMTF upgrades was high and
delivered no discernable functional improvements to the users. Additionally, external
systems which implement odd year USMTF formats have the potential to be
incompatible with the AOC system. Overall, new strategies and approaches using XML
translators may provide isolation and improve interoperability, but they have not been
implemented yet.
Joint Mapping Toolkit (JMTK)
JMTK is a DII COE common segment and was required to meet compliance
mandates. While the mapping tool did not cause system conflicts, it did cause usability
problems. During recent operations in Afghanistan where the AOC system was
deployed, the JMTK was deemed unsuitable for operational employment and emergency
patches and work arounds were required to complete the mission.
"The graphical nature of the targeting mission, intended to be instituted
on the provided JMTK, failed. This specific requirement was not
recognized from targets early in the development of the AOC system, but
had been levied from other functional areas... JMTK had been identified
in testing since 1997 as being unable to support warfighter
requirements... It is clear that the acquisition rules levied via third
parties had influence on the suitability of the system for operational
use and that the testing environment was not of high enough fidelity to
ensure operational viability of the applications."
Interim Report - AOC Intelligence Capability,
ENDURING FREEDOM, 1 Oct 01 - 1 Dec 01
The fact that JMTK was unsuitable for operational use was primarily because the
graphics, map data, and geospatial information products available on JMTK were
inferior. It is reasoned by a number of interviewees that the high level of interfaces (25
identified in the DSM in Figure 16) and required compatibility with other mission
applications limited the choices and development effort devoted to visualization and
mapping data options. Since the real value of the mapping tool is in the visualization
capability it provides, this trade-off made it an inferior choice. Commercially available
mapping products have far surpassed JMTK in terms of performance and provide 3D
visualization and navigation capabilities. JMTK is a 2D tool with limited data sets
available. DII COE has not yet migrated to the next generation of mapping products due
to the expense involved with segmentation. This may seem contradictory since
segmentation is intended to be beneficial to interoperability, but there is a high initial cost
to properly segment a software application for DII COE. The requirement to segment
therefore means additional cost, but allows for better code reuse, sharing and
commonality later. The fact that commercial products have standard Application
Program Interfaces (APIs), which make them easy to integrate with other systems, leads
some architects interviewed to believe DII COE is becoming obsolete as a standard
for interoperability.
Targeting and Weaponeering (TW)
TW is a critical area of the AOC system and has been the nexus of a great deal of
controversy and competition. Over the last decade a number of targeting applications
have emerged on the scene. The first software system focused on the targeting function
was RAAP (Rapid Application of Air Power). It was developed and received a limited
fielding in conjunction with a legacy intelligence system known as CIS (Combat
Intelligence System). When the AOC system was approved, the program migrated a
great deal of the CIS functionality into the new system, but chose to re-engineer the
targeting capability. The new TW application implemented all the previous RAAP
functional requirements in the new architecture, but provided no real enhancements. The
original RAAP developer initiated a new program JTT (Joint Targeting Toolbox) in
parallel with the AOC system development which promised vastly improved capabilities.
The JTT developers leveraged their previous experience, requirements, and contacts
throughout the targeting community to gain early buy in and commitment to the new
concept. JTT aggressively marketed their plans and solicited feedback on requirements
for an enhanced set of capabilities through a series of JTT User Group meetings. These
meetings provided overview briefings and story boards of the Graphical User Interfaces.
This effectively educated the user community on what JTT planned to do and gave JTT
insight into how the users wanted to see the application implemented.
By mid 1998 JTT had broad buy in and backing from the targeting community as a
whole. The fact that the legacy system (RAAP) was no longer being supported and that
the new AOC system with TW was having problems delivering due to schedule slips
increased the emphasis and desire to field JTT quickly. The AOC system made a
commitment in mid 1999 to freeze further development on its own TW application in
favor of working with JTT to integrate the improved capabilities. A commitment to fund
$1M a year to support integration of JTT into the AOC was made in addition to the $9M
per year JTT development effort already underway.
As the AOC development schedule continued to slip the targeting community was
growing impatient for a solution and a series of user initiated solutions began to spring
up. The first delivery of the AOC system version 1.0 in Apr 00, with an already outdated
TW application failed to meet user expectations. This was due to the aggressive JTT
marketing and freeze on TW development in mid 1999. The user community expected to
see functionality promised by JTT developers, but received the equivalent of the RAAP
functionality they had 5 years earlier.
JTT relied on the Modernized Intelligence Database (MIDB) for targeting data and ran
into performance problems similar to the AOC system. The promise of JTT
enhancements was broken when JTT development activities slipped and the JTT delivery
of version 1.0 was delayed into mid 2000. Initial attempts to integrate JTT 1.0 in the
AOC system encountered further difficulties due to differing approaches to interfacing
with the MIDB. JTT had opted to develop a proprietary set of SQL queries which were
incompatible with the AOC stored procedures and shadow tables. Despite the $1M per
year integration funding, the AOC system did not have sufficient influence over the JTT
program to alter their implementation. Incompatibilities have made every integration
attempt to date unsuccessful.
Figure 17 shows a history of targeting application development over the last 5 years.
The current fielded system in the AOC is based on the original RAAP with no upgrades.
Progress in fielding enhanced targeting applications has been effectively stagnant for 5
years with the notable exception of user developed initiatives. JTT integration efforts
continue, and the first successful use of JTT with the AOC system was implemented as
an emergency patch for operation Enduring Freedom at a cost of over $3M. It is a unique
implementation which is tailored for one operational command.
[Figure omitted: timeline of targeting application development, 1996-2003,
showing RAAP on the Combat Intelligence System (CIS), AOC System versions
1.0 through 1.2, JTT versions 1.0 through 3.0, and user-developed tools
(TCT beta, TCTF, Vulture, FATE, APEX, Quiver, ITS) built on MS Excel and
MS Access databases, alongside hard copy target folders (collections of
imagery, Basic Target Graphics, and intel reporting with target history
and details).]
Figure 17 - History of Targeting Application Development
The user developed initiatives by contrast have been highly successful and continue to
deliver enhanced capabilities building on previous successes. The best of breed of the
tools are adopted and absorbed into the next generation of tools. These user initiatives
are based on commercial technology and use web-based HTML front-end interfaces and
simplified databases in the back-end. These applications opted to take a small sub-set of
the larger MIDB database and included only relevant data fields, tables, records, and
relationships. The user applications thereby avoided the majority of the complexity and
difficulty of implementing an MIDB based solution and stuck to short development
cycles done in conjunction with teams of operational targeteers. Many of the
development cycles are done in preparation for an operational exercise where the
application is used and lessons learned provide the basis for the next set of enhancements.
As a result, the tools are well suited to the users' needs and the constant prototyping,
feedback, and design changes increase the capabilities faster than standard acquisition
processes can react to and plan for changes.
ITS (Interim Targeting Solution) was developed, tested and fielded in approximately 5
months with a small team of 5 developers and 5 targeting users. The total cost for
development, testing, travel, hardware, and software licenses was less than $1.2M and the
system delivered more functional capability than is planned for JTT 3.0. The 'interim'
designation is viewed as part political necessity and part face saving. The politics of JTT
have to do with the Joint designation and fact that there are multiple customers/investors
paying for the development. Senior decision makers in the Air Force have committed to
building JTT for Army, Navy, and Marine customers in addition to internal Air Force
customers. Killing the JTT program with over $36M invested over 4 years is viewed as a
failure. The fact that ITS was able to deliver more capability in a rapid cycle is viewed
by some as a new development model where software can be considered a disposable
commodity developed for a purpose, and used until it is overcome by the next generation
of technology and enhancements. Others view ITS as a prototype to base larger, long
term developments on and therefore characterize it as 'interim' and unsupportable. No
matter which group individual opinions fall into, there seems to be a universal agreement
on the value of user involvement in the development process, and the fact that small
projects are free of many of the political and bureaucratic burdens which slow progress
on larger projects.
Time Critical Targeting (TCT)
Time Critical Targeting started with a need identified during operation Desert Storm
and the problem of SCUD hunting. Most people remember the SCUD missiles that were
used as terror weapons, launched by Iraq into Saudi Arabia and Israel. The record of
engagement against SCUD Transporter Erector Launchers (TELs) is 84 misses and 1
success according to some sources. The only successful engagement occurred when
special operations forces happened on the SCUD by chance. Early concepts for how to
identify SCUD operating locations focused on terrain delimitation, a process by which
areas too rough for the SCUD TELs to traverse are eliminated from the search areas.
Better intelligence analysis of the battle space to identify support facilities and probable
operating locations was added. The need to task intelligence collection sensors to
positively identify emergent targets, and the capability to identify and task the right strike
asset to engage the target were all identified as needs which had to be met in a 30 minute
timeline for target engagement. The result was a basic set of functional requirements and
the initiation of a system acquisition.
The process for identifying and engaging time critical targets was very immature.
When the program was launched in 1994, SCUD hunting was the focus of the effort
and justification for the funding. Delivery of an operational capability was expected in
1998. The time critical targeting system, which is still in development and has not been
operationally tested or fielded, now consists of four primary components: a terrain
delimitation tool, an Intelligence Preparation of the Battlespace (IPB) tool, a
weapon-target pairing tool, and an Intelligence Surveillance and Reconnaissance
management tool. Issues stemming from the fact that the time critical process was immature when the
program was launched include the fact that the tool does not deal well with targets other
than SCUDs. There was a recognition that the target set was much broader, but without a
clear concept or process for how to solve the SCUD problem there was little effort
devoted to other classes of targets.
Problems with implementing the TCT system can be tied to the fact that it is a
relatively new concept and the supporting infrastructure and procedures are not in place.
For example, the terrain delimitation tool requires detailed information on the target
classes' (i.e. SCUD, tank, truck, artillery, SUV, radars...) capability to traverse or operate
on different types of terrain. The tool primarily delimits terrain based on the grade or
slope of the terrain. For example, if a SCUD cannot drive over an incline of greater than
10 degrees, or a radar cannot be set up on an area with greater than 5 degrees of slope,
then those areas would be eliminated from the search area. The terrain delimitation tool
produces marginally valuable results but does not limit the search areas to a great extent.
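The slope-threshold logic described above can be sketched in a few lines; the grid values and per-class slope limits below are illustrative assumptions, not data from the actual tool.

```python
# Hypothetical sketch of slope-based terrain delimitation: cells whose
# slope exceeds a target class's traversal limit are removed from the
# search area. Grid values and thresholds are illustrative only.

# Slope of each terrain cell in degrees (toy 3x3 grid).
slope_grid = [
    [2.0,  8.5, 14.0],
    [4.5, 11.0,  3.0],
    [9.9,  1.0,  6.5],
]

# Maximum traversable/operable slope per target class (assumed values).
MAX_SLOPE = {"SCUD": 10.0, "radar": 5.0}

def delimit(grid, target_class):
    """Return the set of (row, col) cells still in the search area."""
    limit = MAX_SLOPE[target_class]
    return {(r, c)
            for r, row in enumerate(grid)
            for c, slope in enumerate(row)
            if slope <= limit}

scud_area = delimit(slope_grid, "SCUD")    # keeps cells with slope <= 10
radar_area = delimit(slope_grid, "radar")  # keeps cells with slope <= 5
```

Note how little the filter removes for a 10-degree vehicle like the SCUD, which matches the marginal value observed in practice.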
The IPB tool requires extensive background data to populate its graphical models and
displays. Current estimates would require two staff years of effort to build a model for a
small theater of operations and the intelligence community does not have production
requirements or formats to produce products to support the IPB. Ties into the databases
to support weapon-target pairings have not been established, and the operations databases
have latency issues, meaning the data does not support the timeliness requirements of
the tool. Finally, the intelligence collection management community is able to retask
sensors, but the exploitation systems tend to treat incoming information as a first come
first served queue, so time critical responses are significantly delayed. The overall
picture is that the time critical targeting system must interoperate with a variety of other
systems, but due to immature process definition the development has been hindered. The
TCT process falls into the class of "and then a miracle happens, and we have the
information to make a time critical decision."
The time critical targeting system example is a valid requirement to pursue, and
funding will continue to be added as the system overcomes its integration and
interoperability issues. Some unrealistic assumptions early on have arguably made the
system less capable, less supportable, and less cost effective than it could have been.
Major changes in production and exploitation of intelligence will have to occur and
current intelligence architectures do not support the new requirements well.
Situational Awareness and Assessment (SAA)
SAA focuses on the real time fusion and correlation of multiple intelligence sources.
The system receives feeds from ground based radars, airborne radar, and Identify Friend
or Foe (IFF/SIF) aircraft transponders to create a comprehensive picture of what is flying
in the airspace. The airspace data is then merged with multiple other intelligence sources
which provide the locations of ground threats based on electronic signatures,
communications intercepts and alerts, thermal signatures from missile launches or aircraft
afterburners... By plotting the composite set of information sources on a common map
display, a Common Operational Picture (COP) is born. One of the limiting factors is that
many sensors are seeing and reporting the same events, so the display becomes a
cluttered mess where 5 tracks may be plotted simultaneously for the same object. This is
where a correlation engine is needed to eliminate duplicate reporting of the same tracks.
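A minimal sketch of such a correlation step, assuming a simple distance threshold and centroid fusion (real correlators also weigh time, kinematics, and sensor accuracy; the threshold here is illustrative):

```python
# Toy correlation engine: sensor reports closer than a threshold are
# greedily merged into a single fused track via centroid averaging.
import math

def correlate(tracks, threshold_km=1.0):
    """Greedily merge tracks closer than threshold_km; return fused tracks."""
    fused = []  # list of (x, y, count) fused tracks
    for x, y in tracks:
        for i, (fx, fy, n) in enumerate(fused):
            if math.hypot(x - fx, y - fy) <= threshold_km:
                # Fold the new report into the fused track (running centroid).
                fused[i] = ((fx * n + x) / (n + 1), (fy * n + y) / (n + 1), n + 1)
                break
        else:
            fused.append((x, y, 1))  # no match: a new track
    return [(x, y) for x, y, _ in fused]

# Five sensor reports, three of which are the same aircraft seen by
# different sensors; the fused picture shows three tracks, not five.
reports = [(10.0, 10.0), (10.2, 9.9), (9.8, 10.1), (50.0, 50.0), (80.0, 20.0)]
picture = correlate(reports)
```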
The complexity of handling multiple inputs from a wide variety of sensor systems and
displaying them on the same map was the initial step and required a fair amount of
engineering work. This work started in the 1950's and 60's with ground radar units
linking their radar pictures together to extend the coverage of the airspace. As network
technology came into being within the intelligence community multiple feeds were
integrated into a tactical intelligence broadcast system that could provide the picture as
seen by sensors anywhere in the world. Depending on the sensors available, fields of
view, and filtering established by the user, a picture could be tailored for a particular
geographic area or purpose. The task of actually correlating data from multiple sensors,
in different reporting formats and accommodating time delays was one of the goals of the
SAA mission application.
The first deliveries of SAA in an integrated environment did not work. The feeds
from multiple sources overwhelmed the workstation and the system crashed when too
many tracks were displayed. Additionally, the monitor would become so cluttered that
the display would become meaningless. Human operators or COP managers in the AOC
intervened to set up filter criteria and eliminate multiple tracks manually. The picture
they create is shared across the AOC and can be further refined at individual workstations
to answer specific questions. Improvements in correlator capability continue to be made,
but the initial set of requirements was de-scoped or reduced to simply providing the
capability to simultaneously plot 3,000 tracks on a map display within 15 seconds.
Execution Management (EM)
The original EM mission application was developed by a third party subcontractor and
delivered to the prime contractor for integration. It was highly interfaced with the Air
Operations Database (AODB), ACP, TW, TCT, AD, TAP, and external C2 nodes to allow for
dynamic monitoring and retasking of air assets during execution of an air campaign. The
highly coupled nature of the original EM application made it extremely difficult to
integrate and there were numerous problems throughout development. It was so
problematic that an acquisition officer was assigned to monitor its development full time.
Despite the extraordinary effort to make this highly coupled application work, the fact
that it was being developed outside the control of the prime contractor meant that there were
fewer opportunities for coordination and discussion between mission application
developers that had interfaces and dependencies with the EM application. Many required
changes and discrepancies were not identified until integration of EM with other mission
applications and the determination of the cause of the problems became politically
charged since neither contractor wanted to take responsibility for the problem or shoulder
the cost for fixes. Negotiations over how to implement fixes generally met with wide
differences of opinion as to the causes and best approach for solution with the program
manager caught in the middle of a dispute. Subsequent changes led to discovery of the
next level of incompatibilities and another round of tough negotiations. Progress toward
a working solution was very slow.
After a number of in plant integration tests, failures, and fixes, the cost growth and
disruption caused by the EM application was becoming excessive. A dual path approach
was initiated, and second application was introduced which duplicated key functions of
EM, but with drastically reduced functionality and interfaces. The second application
development was launched within the prime contractor's development facility. After a
three month development cycle the two applications were placed side by side in a
demonstration to users and the scaled down version of EM was selected as the path
ahead. The first EM application development contract was killed, and $25M in
development effort was scrapped in favor of a $2M minimalist version that was less
disruptive to the baseline, had fewer features, and as a result was easier to understand and
learn.
Case Study Summary
The AOC system and sub-systems present a significant software engineering
challenge and this system is one of the most complex ever attempted by the C2
acquisition community. The individual components of the system are complex
undertakings and many are driven by requirements outside of the direct control or
influence of the AOC system program manager. The fact that the program was launched
based on crude estimates of the scope and complexity of the integration task and a
minimal understanding of the interfaces and architecture is surprising. As with any large
project, detailed design can not be completed up front, and many of the design challenges
can only be solved after building critical portions of the overall project. In the case of the
AOC system, some of the underlying components like DII COE and MIDB which were
critical to the design and equivalent to the load bearing walls of a structure, were not
sufficiently understood by the system architects and engineers at the outset. Additionally,
these load bearing walls were undergoing constant changes and updates, but there was no
way to replace them without causing major rework and re-engineering of the entire
system. It is roughly equivalent to replacing the foundation of a building periodically as
construction progresses.
Hopefully the case study description has provided a basis for understanding the scope
of the development and integration effort faced by program managers at the C2 node and
system level. An exhaustive description is not possible in the context of this thesis and
the purpose is to highlight the major themes and challenges. Increased emphasis on
interoperability between systems only serves to increase the difficulty of planning for and
synchronizing the multiple changes which are ongoing in the system of systems context.
To be successful in delivering such large complex undertakings on time, on budget, and
meeting all the functional performance requirements set at the beginning of a project
requires a degree of collaboration and sophistication which does not appear to be present
in the current practice. The rest of this thesis will go on to explore barriers to increasing
systems interoperability which underlie current program management practice and
propose approaches to minimize their effects.
Barriers to Enhancing Interoperability
Complexity
No discussion of software intensive projects would be complete without some
discussion of complexity. Many of the recognized leaders in the software field point to
complexity as a major stumbling block for developers and integrators. Frederick Brooks
stated, "software entities are more complex for their size than perhaps any other
human construct..." [4]. Ubiquity Magazine interviewed Paul Gross, Vice President in
charge of the Developer Tools Division at Microsoft, in September 1998. When asked
"What's holding back most developers today? What are their biggest problems?" Gross
answered, "I think the biggest problems developers face today center around complexity,
the speed of change, and the ability to keep their skills up to date." The following
sections will explore some of the dimensions of complexity which present barriers to
improving C2 systems interoperability.
Architectural representations
For C2 systems and other large software development projects, dealing with system
complexity is a major challenge. Architectures define the arrangements and relationships
between components in the system design, and ideally hide system complexity inside
components while presenting simple, understandable interfaces to external components.
A good architecture therefore relieves component developers from the burden of
understanding the entire system and allows them to focus on their piece of the
development task. In a C2 system where there are many hundreds of software modules
built by teams of developers and interconnected by a tangle of interfaces, the full system
complexity is beyond human cognitive capacity. Architectures allow developers to take
part in a complex undertaking, but limit the amount of complexity they must deal with to
their component and its immediate interfaces.
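The information-hiding idea can be illustrated with a toy sketch; the `TrackSource` interface and component names below are hypothetical, not taken from any actual C2 design.

```python
# Sketch of interface hiding: a display component depends only on a
# narrow, declared interface, never on the data source's internals.
from abc import ABC, abstractmethod

class TrackSource(ABC):
    """The only surface that other components are allowed to see."""
    @abstractmethod
    def current_tracks(self) -> list:
        ...

class RadarFeed(TrackSource):
    # Internal complexity (parsing, buffering, sensor quirks) stays
    # hidden inside the component, behind the interface.
    def __init__(self):
        self._raw = [("F-16", 10.0, 10.0), ("MiG-29", 50.0, 50.0)]

    def current_tracks(self):
        return [{"id": i, "x": x, "y": y} for i, x, y in self._raw]

def plot(source: TrackSource):
    """A display component that knows only the TrackSource interface."""
    return [t["id"] for t in source.current_tracks()]

ids = plot(RadarFeed())
```

The `plot` developer never needs to understand `RadarFeed` internals, which is exactly the burden a good architecture lifts from component developers.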
The importance of good software architectures and their usefulness in managing and
reducing system complexity is a generally accepted principle. However, in the survey
conducted as part of this thesis, 73% of respondents felt that architectures were only
rarely or occasionally well documented and widely understood. See Figure 18 below.
[Figure 18 - Results from survey question. Bar chart titled "Are architectures well defined and widely understood?" showing response counts for survey question Q3, "Please provide your opinion on the following statements as they relate to acquisition programs. System architectures are well documented and widely understood," with response options N/A, Rarely, Occasionally, Usually, Often, and Always.]
This is a startling finding, since no complex development project should begin without
a clear architecture in hand. You might build a dog house without the benefit of plans or
a formal architecture, but only a fool would start construction of a house without a set of
blueprints to organize the work of masons, carpenters, electricians, plumbers... C2
systems are the software equivalent of sky-scrapers and without formal architectures to
guide the construction and engineering work, the project would be doomed to fail. Follow-on
interviews revealed that formal architecture products as defined by the DoD
Architecture Framework are largely an afterthought. The framework includes 23
different architecture views divided into three categories (operational views, system
views, and technical views). While the set of architecture views is fairly comprehensive
and potentially useful, they are also time consuming to generate and maintain. These
architecture views are still considered more of a resource drain than a tool to improve
interoperability. Furthermore, the framework is silent on graphical representation and
naming conventions and there is no formal validation or coordination process between
interdependent programs.
The type of architecture products used by program managers to guide the
development of C2 systems are primarily PowerPoint diagrams. Recalling the case study
example of the AOC system we can gain a sense of the complexity of the system under
development in Figure 19.
[Figure 19 - AOC System Data Flow Diagram. The diagram shows the AOC databases, applications, data/message services, external interfaces, and externally developed components, linked by secure WAN connections to external units and C2 nodes.]
Managing the development of such a system is a complex undertaking with many
components and interfaces to develop, integrate and test. The diagram above captures the
high level relationships, but it does not provide a good way of thinking about how to
organize tasks or manage changes. Simply finding all the instances of the AAT
application on the diagram is difficult.
... real software systems have many such components, and there is no
repetitive structure to simplify the analysis. Even in highly structured
systems, surprises and unreliability occur because the human mind is not
able to fully comprehend the many conditions that can arise because of the
interaction of these components. Moreover, finding the right structure has
proved to be very difficult. Well-structured real software systems are still
rare." [5]
The inability to manage complexity and support change decisions within the C2
domain manifests itself in the form of frustration and decisions which reduce the overall
complexity of the system by eliminating desired features. In the case study of the AOC
system, program managers had to contend with requirements to meet 914 operational
functions implemented in 42 mission applications made up of 387 software segments,
providing 669 external information exchanges. Ignoring for a moment the implications
of the larger enterprise, let's focus on the reaction to a level of system complexity that
was unachievable given the budget and schedule constraints. When the AOC program
was in imminent danger of being killed due to an inability to deliver the system with the
full set of capabilities, the program managers in conjunction with the operational
requirements managers established a smaller set of 'key performance parameters.' The
set of key performance parameters allowed the program to ignore problems with functions
that were not deemed critical. A large number of problem reports that previously
required fixes before the system could be fielded were now deferred and would be revisited in
future versions. Some capabilities were removed from the baseline altogether. For
example, in Feb 99 $3.5M of planned capability was deferred, and in Aug 99 another
$7.4M in planned capability was deferred to cover other costs. A $25M
investment in the Execution Management (EM) mission application development was
scrapped in favor of a greatly simplified EM alternative solution prototyped and
demonstrated in just 3 months.
In principle, the architecture products should have identified the 'load bearing walls'
of the system and limited changes in those areas to ensure the overall system was stable.
The design structure matrix developed for the case study (see Figure 16) and the joint
technical architecture (see Figure 12) pointed to these load bearing walls, but they were
not available or leveraged during the early phases of the program. Efforts to build
comprehensive sets of architecture views as prescribed by the DoD Architecture
Framework are potentially beneficial, but have yet to be fully embraced by program
offices because of the expense and detail required to build and maintain them. In many
ways, the gap between known best practices and actual program management practice is
increasing as more ambitious projects are conceived and launched.
Architecture representations are a great tool for enhancing interoperability. In practice
they are not effectively leveraged by the program offices for decision making. Part of the
reason program staff do not take the time to really dig into the architectures and
understand them is that the current architecture products are too detailed. This
contributes to the cost to build and maintain them and reduces the likelihood anyone will
actually use them. Rechtin contends that, "It is the responsibility of the architect to know
and concentrate on the critical few details and interfaces that really matter and not to
become overloaded with the rest." [6].
Stability & technology turnover
The C2 systems being used by today's military are subject to the same forces driving
technology turnover in the commercial sector. Upgrades to operating systems,
commercial software applications, mission applications, middleware, services, database
schema modifications, improvements in processors, specialty cards and drivers, better
displays, faster communications, new processes and methods... To take advantage of
advances in technology or to simply fix existing bugs and defects in the current system
requires turnovers and updates to components. Modifying or re-engineering processes
and procedures can drive component redesign and turnover as well. Depending on the
level of dependency in the system, upgrades can be as simple as a single component
swap, or for highly integrated designs, may require a complete system redesign. The
complexity and the cost of changes can be reduced with appropriate up front planning.
The goal should be to prepare the system for the inevitable changes that will take place
[7]. During the case studies of C2 system architectures there were attempts to plan for
changes in the systems, and these were documented in roadmaps and way-ahead plans as
well as architecture system views (SV-8s). These plans were focused on how to migrate
the current architecture to some future state. There was little evidence of attempts to
create an architecture that accommodates modifications. There was no focus on
providing hooks, stubs, or other architecture constructs to allow for future variation. An
architecture that easily accommodates unplanned-for changes is referred to as flexible.
Rigid architectures on the other hand require more work and effort to accommodate
changes. C2 systems architectures tend to be rigid. This perception is supported by the
survey where 58% of respondents felt that rigid or inflexible architectures were a
significant or primary cause contributing to cost and schedule growth as shown in Figure
20.
[Figure 20 - Results from survey question. Bar chart titled "Rigid architectures contribute to cost/schedule growth" showing response counts for survey question Q4, "What factors contribute to cost and schedule growth?" for the factor "Rigid/inflexible architectures," with response options N/A, Small Contributor, Moderate Contributor, Significant Contributor, Heavy Contributor, and Primary Cause.]
The case study discussion on Defense Information Infrastructure Common Operating
Environment (DII COE) points to the impact and cost growth that can occur under a set
of architecture mandates which require additional work. DII COE was a significant
contributor to the sense of inflexible architecture felt by staff working on the AOC
systems development. The ability of the architecture to accommodate changes without
requiring a great deal of rework is highly desirable, but should not be considered a
panacea.
Let's consider the simple case of a system that has been logically decomposed into
three functions, each of which will be performed by an independent component. If there
are three options available for each component (i.e. different versions of software, or
different vendors or products to choose from), then how many possible ways can the
system be built? Applying the Product Rule from the field of combinatorics, we can
consider each function as a task (Ti) and each option for a different component to do the
task as (ni). The product rule states: if task Ti can be done in ni ways after tasks T1, T2, ...
Ti-1 have been done, then there are n1 x n2 x ... x nm ways to carry out the procedure. In the
case of the simple system we have T1, T2, and T3 where n1=3, n2=3, and n3=3; therefore
there are 3 x 3 x 3 = 27 ways available to build the system.
Since the components are independent we can assume all the possible configurations
are valid and would work. The process of deciding which combination of upgrades to
implement in the next version would be a trade off between the benefits of an upgrade
and the cost/work to upgrade to a particular option. A fixed budget would potentially
eliminate certain options. In reality, subtle dependencies between options are likely to
exist and must be evaluated to make an informed decision. If component 1 were an
operating system, component 2 were office automation applications, and component 3
were personal organizer software, we can get a sense of the possible incompatibilities:
Windows 2000 might be required to run the latest MS Office suite, which includes
interfaces to the handheld being purchased in bulk for a corporate workforce. Choosing
Linux and Lotus Notes might meet the functional requirements of the system, but could
force the selection of a different handheld or require special modifications.
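The combined effect of the product rule and compatibility constraints can be sketched as follows; the option names and the dependency rule are illustrative assumptions, not real product requirements.

```python
# Enumerate the 3 x 3 x 3 = 27 configurations from the product rule,
# then prune them with an assumed compatibility constraint.
from itertools import product

os_opts       = ["Windows 2000", "Windows NT", "Linux"]
office_opts   = ["MS Office", "Lotus Notes", "StarOffice"]
handheld_opts = ["Palm", "PocketPC", "Psion"]

all_configs = list(product(os_opts, office_opts, handheld_opts))
assert len(all_configs) == 27  # product rule: 3 x 3 x 3

def compatible(cfg):
    os_, office, handheld = cfg
    # Assumed dependency: the PocketPC sync suite needs a Windows OS
    # and the MS Office interfaces (illustrative rule only).
    if handheld == "PocketPC" and (os_ == "Linux" or office != "MS Office"):
        return False
    return True

valid = [c for c in all_configs if compatible(c)]
```

Even one subtle dependency removes a quarter of the PocketPC configurations from consideration, which is the trade-off evaluation described above.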
The example above is only meant to make the point that a small set of components and
options can result in a large set of considerations and trade-offs. This example is trivial
when compared to the real world system represented in the DSM in Figure 21 from the
case study where there are 42 system components. If in version planning there is a binary
choice for each component to either leave it unchanged or upgrade it, then all ni's = 2. The
space of combinations of components to build the system would be 2^42 = 4.398 x 10^12. It
is far beyond human capacity to evaluate all the possible combinations. Even a computer
evaluating 1 million configurations per second would require 51 days to exhaustively
evaluate all the possible combinations. Simply increasing the number of system
components to 49 yields 2^49 = 5.6295 x 10^14 possible combinations, and the same
computer would take an amazing 17.85 years to evaluate all the configurations.
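The growth figures quoted above can be checked with a few lines of arithmetic:

```python
# Time to exhaustively evaluate 2**n upgrade configurations at a
# given evaluation rate (configurations per second).
def eval_days(n_components, rate_per_sec=1_000_000):
    seconds = 2 ** n_components / rate_per_sec
    return seconds / 86_400  # convert seconds to days

d42 = eval_days(42)  # about 51 days for 42 components
d49 = eval_days(49)  # about 6,516 days, i.e. roughly 17.85 years
```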
[Figure 21 - DSM of System Database Flow. Design structure matrix of database flow dependencies among the system components.]
Clearly there are no attempts to make exhaustive evaluations of potential upgrade
spaces. Budget limitations, deficiencies and user priorities narrow the range of options
quite effectively. A more realistic set of upgrade decisions may involve 7 system
components, each with 3 upgrade options. This yields a more modest 3^7 = 2,187 possible
new configurations, assuming the other 35 components are left unchanged. This space is still too big
and the real nuances of component interactions have not been addressed. The reality of
version planning comes down to a number of upgrades or changes being dictated by
external forces (i.e. a component version is no longer supported) and recommendations
for upgrades made on the remaining system components in play. The cost for the
upgrades is totaled, and then the budget line is drawn. Any initiatives falling below the
line are deferred until the next version. Some re-evaluation and 'horse-trading' can be
done at this point, but the basic decision is in place and the impact of system interactions
and dependencies associated with the changes are tenuously understood at this point. As
development proceeds on the new version, the impact of dependencies becomes apparent
and modifications/fixes are made. Given the difficulty of making upgrades and
understanding dependencies between components, tight configuration management and
code freezes must be enforced or the system would go into a perpetual change cycle and
never complete a test or fielding.
Architectural representations like the DSM Figure 21 are excellent tools for
understanding and evaluating component dependencies. The relationships between
components are instantly identifiable, and investigation of how a change will affect the
rest of the system components can proceed quickly. With some early consideration given
to which components are most likely to change, the DSM can be used to evaluate where
isolation layers, services, middleware layers or new system decompositions can be added
to isolate those components and facilitate variability. These concepts will be explored in
greater detail in the recommendations section.
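A DSM is essentially a dependency matrix, so change-impact evaluation reduces to a transitive reachability search. The sketch below uses a small hypothetical dependency map loosely named after components from the case study; the actual dependencies are far more numerous.

```python
# Change-impact analysis on a DSM-style dependency map: find every
# component that (directly or transitively) depends on a changed one.
deps = {                      # consumer -> components it depends on
    "EM":   ["AODB", "ACP", "TCT"],
    "TCT":  ["MIDB"],
    "ACP":  ["AODB"],
    "AODB": ["DII COE"],
    "MIDB": ["DII COE"],
}

def impacted_by(changed, deps):
    """Return all components transitively affected by changing `changed`."""
    hit, frontier = set(), {changed}
    while frontier:
        # Consumers of anything in the frontier are newly impacted.
        nxt = {c for c, ds in deps.items()
               if any(d in frontier for d in ds) and c not in hit}
        hit |= nxt
        frontier = nxt
    return hit

# A DII COE update ripples through everything built on it,
# illustrating why it behaves like a 'load bearing wall'.
ripple = impacted_by("DII COE", deps)
```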
Emergence
Emergence deals with the idea of collective behavior arising from the interaction of
component parts. The emergent behavior is not a function of any one component, but
instead is a large scale property of the system. The complete specifications for all the
components do not necessarily capture or fully express the behavior of the system.
Another way of thinking about emergence is the lock and key metaphor. A complete
specification of a key, its geometry, materials, production process... does not convey the
fact that it unlocks a door. The properties of the key when combined with the properties
of a lock give rise to the functions of locking and unlocking. Interestingly, properties of
the systems which can not be assigned to individual components are often only achieved
through the correct relationships or combinations of components. Now in a simple
system like a lock and key emergence is not a very interesting topic, but in complex
systems made up of many independent yet interconnected parts the emergent properties
can be quite unexpected and counter intuitive. Swarm theory and artificial life are
exploring the collective behavior of unsophisticated agents interacting locally. Ant
colonies, bee hives and bird flocks are examples. In a complex system of heterogeneous
components (or agents) following complex rules (or programs) the emergent behaviors
can be highly unpredictable. The basic precursors for emergent behavior are that there is
no centralized control, the subsystems are autonomous, there is a high level of
connectivity or communication, and causality in the system is nonlinear.
How does this relate to C2 systems? To start, C2 systems possess the precursor
properties which allow emergent behavior to occur. There is no central control residing
over the C2 systems domain; the subsystems are autonomous and perform a variety of
functions independently; and the systems are highly interconnected, with interoperability
driving greater interconnectivity. The only question is whether or not there exists non-linear
causality across the system. Early investigation points to the fact that there is a
level of non-linear causality, and emergent behavior does exist within the C2 system.
During early integration testing, the AOC system from the case study experienced
a phenomenon characterized as a 'network storm'. The network storm caused the system
to fail the test. Prior to the integration test, individual components had passed lab tests and
the system had passed an integration test in the lab environment. AODB, TAP, and
USMTF, which were all characterized as 'high risk' in the case study, are believed to
have been major contributors to the network storm problems. When the system was
integrated for the first time in a more operationally realistic environment with distributed
communications and a full complement of users, the network performance became an
issue. When multiple users began generating data on the system and interacting with
other systems, updates in the form of information flows began to occur. These included
mission application write backs to databases, data base exchanges, database change
notifications, and message traffic from live feeds which updated the databases. The
distributed nature of the system allowed updates to occur at multiple locations and on
multiple systems simultaneously (i.e. no centralized control). The locally generated
updates were propagated through the system to re-synchronize the system with slight
delays. Each update required a comparison of stored data with the update received, and
notifications of any changes made were propagated. As the level of user- and
automatically-generated changes on the system increased, the system went unstable:
updates began cascading through the system, similar to feedback over a public
address system, hence the network storm. The network storms were intermittent and
often resolved themselves in a matter of minutes, but during the network storms, latency
in the communications network and increased CPU usage effectively stopped all work
on individual client workstations. The overall effect was that the system could not meet the
mission requirements.
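The feedback dynamic behind the storm can be caricatured with a one-line recurrence: traffic in flight each tick equals the fresh updates plus a gain factor representing the notifications each message triggers. The parameters are illustrative, not measured from the real system.

```python
# Toy model of the network-storm feedback loop. With gain >= 1 each
# message spawns at least one follow-on notification and traffic grows
# without bound; damping the gain below 1 lets the system settle.
def simulate(fresh_per_tick, gain, ticks=20):
    traffic, in_flight = [], 0.0
    for _ in range(ticks):
        in_flight = fresh_per_tick + gain * in_flight
        traffic.append(in_flight)
    return traffic

storm  = simulate(fresh_per_tick=100, gain=1.2)  # diverges: a storm
stable = simulate(fresh_per_tick=100, gain=0.5)  # settles near 200
```

This mirrors the fix actually applied: tuning update and refresh rates (the gain) rather than the network itself stopped the storms.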
Initially, development contractors pointed to problems with the underlying
communications and networking infrastructure used for the test. The problem did not lie
in the network, but in the combination of multiple uncoordinated updates occurring
simultaneously with delays. Experimentation with database update and refresh rates
during the test reduced the network storms and the system remained stable. The system
went back into development to fix problems identified during the testing and
experimentation and steps were taken to eliminate multiple updates and centralize update
transmission. While all the component systems and business logic behind them
functioned as designed and worked individually, the combination and interaction of the
components resulted in emergent behavior that was not anticipated by the system
engineers. The real challenge with emergent behavior is that it is not predictable, and our
understanding of and ability to model the dynamic interactions of a highly complex,
heterogeneous system are very limited. Simulation based acquisition is one of the more
promising approaches with a chance of identifying risks of undesirable emergent
behavior early.
The testing treadmill
Closely related to emergence is the problem of repeated integration testing of
complex arrangements of sub-systems and components where multiple changes are being
made simultaneously. This makes it more or less impossible to isolate variables and
perform troubleshooting. Decision making about what to fix and how the fixes will
affect one another becomes an exercise of judgment and experience, where the next large
test will provide feedback on the changes. Additionally, there tend to be layers of hidden
software bugs and problems. As the first-level problems are resolved, like the network
storms encountered during the early integration test, deeper problems like the MIDB
performance load problems are encountered. The general trend is a reduction in the
number and severity of problem reports, with intermittent surprise serious emergent
problems. There also tends to be an over-optimism about the ability of the system to
'pass' the next test. This over-optimism is due to a tendency by program managers to
focus and track on work to fix known problems and a reluctance to make estimates about
new problems that will emerge during the next round of tests. The program managers
receive closure reports for problems as they are fixed and tend to base their projections
for the project completion on those closure rates. The tendency is to ignore the fact that
additional problems and bugs will emerge as other problems are fixed.
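The dynamic can be sketched numerically: a projection that counts only the closure rate of known problems looks far better than one that also accounts for newly discovered problems. The figures below are illustrative only, not actual program data, and the function name is the author's own.

```python
# Sketch: why closure-rate-only projections are over-optimistic.
# All numbers are illustrative, not actual program data.

def project_open_problems(open_now, closure_per_month, discovery_per_month, months):
    """Project the count of open problem reports month by month."""
    counts = [open_now]
    for _ in range(months):
        counts.append(max(0, counts[-1] - closure_per_month + discovery_per_month))
    return counts

# Program-office view: only track fixes to known problems.
optimistic = project_open_problems(364, closure_per_month=60,
                                   discovery_per_month=0, months=6)

# Independent-team view: new problems keep emerging as old ones are fixed.
realistic = project_open_problems(364, closure_per_month=60,
                                  discovery_per_month=45, months=6)

print(optimistic)  # falls steadily toward zero
print(realistic)   # remains far from zero
```

The two projections diverge quickly even with a modest discovery rate, which is the essence of the testing treadmill.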
During testing of the AOC system the program manager was under extreme pressure to deliver a system without Priority 1 or 2 problems. Priority 1 problems are characterized as extremely serious: they will result in mission failure or loss of life. Priority 2 problems are characterized as causing serious mission impact with no acceptable workaround. An operational system cannot be fielded with Priority 1 or 2 problems. The AOC system began its testing with government partners in the development plant in Oct 98, and that test identified ~364 Priority 1 & 2 problems. Based on the number of problems identified and the ability of the contractor to fix them, the next test was expected to start on a system with few bugs and no serious problems. See Figure 22. The high number of problems (312 Priority 1 & 2s) at the end of the next test was somewhat surprising to the program management staff. Negotiations with the contractor to fix the new
problems were quickly completed, and direction was issued to prepare the system for a third test. The third test was intended to provide the basis for a fielding decision. Prior to the third test an independent team of engineers, working directly for the program manager and without ties to the development contractor, compiled data on problem open and closure rates and projected that ~150 Priority 1 & 2 problems would surface during the third test, and that as a result the system would not receive a favorable fielding decision. When the engineers presented the data to the program manager, he stated that the development contractor had assured him the problems were known, were the highest priority, would be fixed, and the system would pass the test. The third test resulted in 250 Priority 1 & 2 problems being opened! This was higher than the independent engineers' projection and completely out of line with the contractor's assurances. Part of the reason the engineers' projection was off was an assumption that the problems would become easier to fix as the system matured and that the contractor would get better at fixing them as its processes became established. These hoped-for improvements never materialized.
[Figure: 'Problem Resolution Rates Drive Over-optimistic Finish Projections', a chart of problem reports (0 to 400) over test dates, with series for Priority 1 problems, Priority 2 problems, and cumulative Priority 1 & 2 problems.]
Figure 22 - Problem Resolution Rates
The pressure to deliver a working system prior to the Y2K roll-over continued, but now drastic cuts in functionality and a strict redefinition of what would be counted as a Priority 1 or 2 problem were implemented. The redefinition of the system delivery removed many of the Priority 1 & 2 problems from the system by recasting them as future enhancements and deferred functionality. The AOC system did not field until Apr 2001, after having gone through a Government In-plant test, a Functional Development Test (FDT&E), a second functional development test (FDT&E 2), a joint factory acceptance test (JFAT), a Development/Operational Test and Evaluation (DT/OT), a second development/operational test (DT/OT 2), a Multi-service Operational Test and Evaluation (MOT&E), and a second multi-service test and evaluation (MOT&E 2). In total, seven major test events, each involving 250+ users plus engineering and contractor support, were required to field the system. The original acquisition plan called for only three formal tests.
While most of the system components worked well in the labs, the integration of the system gave rise to a new set of emergent problems that was not planned for or managed effectively. The inter-dependence of the components makes fixes and changes risky, since they are likely to have effects on other components, and the need to make multiple changes simultaneously makes isolation and troubleshooting very difficult. The process of successive fixes, integration, and retesting becomes a very expensive approach to system development and fielding. The testing treadmill has significant effects on the operational community as well, since they have a military mission to perform in addition to supporting test events. Deployment requirements to support operational C2 centers and real-world missions have already exceeded the Air Force's manpower capacity. The additional burden of testing C2 systems in an almost haphazard manner is viewed with extreme disfavor by operational commanders. The Commander of Air Combat Command was quoted as saying, "We will make improvements where we can at the expense of long term modernization."
Perhaps the only way to identify the emergent system properties and apply fixes is to do full-scale testing. Improvements in this area of acquisition management practice are needed, and architectures which do a better job of reducing system complexity and inter-dependence can help limit the emergent system properties and improve test/fix cycles. A more regimented approach of building and integrating in small increments, adding capability to a baseline AOC system, is the likely future path. Arriving at a baseline system to build from is a major hurdle, and still requires improvements in methods to support future systems development.
Effects in a closed system
As costs on a program grow, there are limited options for how to deal with the problem. A program manager can kill the project as infeasible, but this is extremely rare, since it is a black mark on the program manager's career and the requirement would be left unanswered. Additionally, the program incurs close-out costs on the contract, which are often close to the projected costs to complete the project. Program managers also feel a sense of responsibility for the development team, who will have to look for other work if the contract is killed. This makes killing even a bad program very unlikely. An alternative is to request additional funding for the program to cover cost growth. The DoD planning, programming, and budgeting process allocates budget authority in five-year blocks, as described in the background section. This budgeting process allocates all the available funds to projects, and there is no ready source of additional funds to draw from for cost growth. Most programs have a contingency pool to allow some internal flexibility, and occasionally a program manager can get funds from another program's contingency funds, but this is another undesirable practice, viewed as poor planning by the program manager in need. The reality in practice is that program managers reduce functionality for a given build. This is within the program manager's discretion, and simply moves functionality to future versions of the system. It also provides a means to increase requests for future budget allocations to cover the increased costs, so in some regards it grows the program and can be viewed positively.
In the realm of inter-operable systems, programs with external dependencies must have some level of insight into, and influence over, changes in external programs' content.
Ideally, the program managers have established relationships with external partners and are actively managing the risks due to inter-dependencies. Cascading effects due to deferred functionality are not generally catastrophic, but they can be very disruptive and costly. As an example, an imagery exploitation application was being developed for the government by a third-party commercial contractor. It was planned for delivery to a number of C2 systems which had a functional requirement for the capability. One of the receiving C2 systems fully committed to the application and assisted in the development over a two-year period; a second C2 system chose to use a commercially available product in the interim and planned to evaluate the new tool when it was ready. When the application developer failed to deliver the imagery tool, the first C2 system was forced to initiate a short-notice acquisition strategy to replace the tool. The cost of the new plan was approximately $4M, and $2+M in sunk development costs were lost. In the end the C2 system program manager had incurred $4M in additional costs, which displaced other planned functionality and delayed fielding of capabilities to the users. The impact of deferring other capabilities has not been traced, but can be expected to affect functionality related to interoperability with external systems, since these are generally 'lower' priority requirements to a program. The second C2 system, on the other hand, invested ~$750K in fielding the commercial application, provided limited functionality two years early, and avoided the major disruption altogether.
Maintaining cognizance of the development activities of external partners is an
additional burden on program managers, but is absolutely essential to managing risks and
improving interoperability.
Summary
Complexity is a major issue which needs to be addressed and managed as part of C2 systems acquisitions. The complexity of the development task faced by program managers is significant, and external dependencies play a growing role as interoperability increases. When the complexity of a system design is too great, it undermines confidence and responsibility, and systems thinking is an antidote [8]. Better, more usable architectural representations using DSMs are one way to facilitate systems thinking and
reduce complexity. Unfortunately, architectural representations are developed independently, inconsistent architecture and interface views exist between interdependent programs, and creating a shared understanding of dependencies is still problematic. Sub-system and component changes are not sufficiently isolated, and have downstream effects on external programs, thereby increasing the need for tight configuration management and schedule coordination while decreasing system flexibility. Attempts to automate immature operational processes require greater-than-planned degrees of discovery and iteration and can be expected to cause architecture instability. Emergent behavior during integrated testing results in new problems which are difficult or impossible to predict and must be resolved before the systems can be certified and fielded.
Culture
"Talent and genius operate outside the rules, and theory conflicts with
practice."
Major General Carl von Clausewitz, On War
Incentives reward aggressiveness
The Air Force officer corps is a highly competitive group, and the importance of contributing to the mission and making an impact are strongly held values. This is reflected in the motivation and dedication of the service members, and it is reflected in the rating and evaluation system. The Air Force Pamphlet (AFPAM 36-2404) on how to write an officer job description for an evaluation is an enlightening example. The pamphlet provides weak and strong examples of how to write a job description. The job descriptions are very important because they are the opening paragraph of the officer's annual evaluation, which is compiled and reviewed by promotion boards. [9]
" Weak Job Description:
o Duty Title: Chief, Resources and Requirements
o Key Duties, Tasks, Responsibilities: Responsible for numerous aspects of
civil engineering operations, including construction and repairs of base
facilities and grounds maintenance. Supervises three personnel and
oversees work force. Manages large supply account for unit. Responsible
for unit vehicles.
" Strong Job Description:
70
o Duty Title: Chief, Resources and Requirements
o Key Duties, Tasks, Responsibilities: Plans, requisitions material, and
schedules civil engineering operations, maintenance, and repairs for base
facilities, including housing and over 5,000 acres of grounds. Directly
supervises three personnel and oversees a 37-person work force. Manages
multi-million dollar account for supplies and equipment. Also responsible
for civil engineering vehicles.
• Strongest Job Description:
o Duty Title: Chief, Resources and Requirements
o Key Duties, Tasks, Responsibilities: Responsible for receiving, planning,
programming, material requisition, and scheduling of all civil engineering
in-service operations, maintenance, repair, and minor construction work
on 279 base facilities, 790 family housing units, and 5,100 acres of
grounds valued at $72 million. Oversees a work force of 37 people and
directly supervises 3 section chiefs. Manages the expenditure of $2.5
million for supplies and equipment to accomplish work. Also responsible
for all civil engineering vehicles.
When promotion boards review evaluations, the size of budgets and span of control
become easy metrics to use in deciding who gets promoted and who doesn't. The
officers who are rated by this system know the underlying criteria and mechanics of the
evaluation system. The message sent to officers is that bigger programs are better from a
career standpoint. Occasionally, competition can encourage dysfunctional behavior. One anecdote which illustrates the point comes from an instructor at the Defense Acquisition University.
"...when I was an Air Force major at Hanscom AFB, Mass. One of my
friends came back afterfinishing the ProgramManagement Course.
When I asked about the course, he said it was great (the course always
had a top reputationfrom the overwhelming majority of its graduates), but
that it was really competitive. He indicatedthat there was a lot of pressure
and competitionfor grades. He said students were expected to help their
work group and to work together on cases, so one had to be very clever to
providejust enough good help to get by, but keep others a bit confused on
the nuances. By giving or allowingjust enough misinformation in his
area of expertise, he could do better on the exams and have a better shot
at "A's"and top graduatedesignation. I was disappointedto hear the
system discouragedcooperationand encourageddysfunctional behavior,
which sounded like "cheating" other classmatesfrom optimal learning."
[10]
This should not be taken as a blanket condemnation of the system, but it is important to recognize that incentives like 'top graduate designation' can result in undesirable behavior. Aggressive officers will manage their careers according to their best interests and seek opportunities to extend their span of responsibility and control. As a result, more aggressive officers are rewarded with stronger written evaluations and ultimately have greater promotion potential. In some circles 'aggressive' has a negative connotation, but in the military culture aggressiveness is a positive and desirable trait. Aggressive officers are go-getters: they are leaders, they create opportunities and make things happen, they make tough decisions, they are not afraid, timid, or overly analytical, they don't wring their hands and worry, they take action and live with the outcome. Anthony G. Bohannan, a former senior British Army officer, stated in a speech, "Military history tells us time and again that it is the commander who has the courage to defy conventional logic and precedence and who reacts to his instincts contrary to the available information and consensus, who more often surprises his opponent and wins." While these are positive attributes for a battlefield environment and well matched to a military culture, they are not necessarily the attributes desirable in military program managers who are part of that culture. Program managers should be striving to adopt and implement acquisition best practices which bring order, predictability, and higher levels of productivity and quality to their programs.
The implications for program managers, who are part of both the military culture and the acquisition community, are potentially counter-productive and have negative effects on the DoD's capability to acquire systems at the best price. If managing a large program that delivers more functionality and has a larger budget is perceived as more career-enhancing than managing a small, focused program on a short schedule, then officers will naturally compete for large programs. Smaller, focused programs are less likely to be proposed or pursued within the culture. This runs directly counter to the findings of the Standish Group, which found that the probability of success for a software program is inversely proportional to its size and approaches zero for projects costing $10 million or more. [11] Small projects tend to succeed because they reduce confusion, complexity, and cost. The incentive to have a greater mission impact and control large budgets can be a driver in the opposite direction,
encouraging project managers to take on additional functional requirements, pursue advanced technologies, compress schedules for fast deliveries, and adopt high-risk acquisition strategies. A General Accounting Office report on Major Management Challenges and Program Risks stated:
"We continue to find that the desire of program sponsors to keep cost estimates as low as possible and to present attractive milestone schedules encourages the use of unreasonable assumptions about the pace and magnitude of the technical effort, material costs, production rates, savings from competition, and other factors. For example... the original schedule for developing the Joint Air-to-Surface Standoff Missile was ambitiously set at about half of what previous missile programs required. The schedule was later delayed by 22 months, and total program costs increased by $500 million." [12]
Aggressive program management and competition among program managers is entirely undesirable within the AF acquisition career field, where huge budgets are concerned and where decisions early in a program's lifecycle have long-term implications for affordability, maintainability, scalability, supportability... It is generally accepted that changes early in the product development process are much less expensive to deal with than changes late in the process. Following a standard waterfall development model, changes during requirements definition have almost no cost associated with them. Changes during the design phase are relatively inexpensive, but changes made during implementation and unit test cost approximately 10 times what they would have if caught during the design phase, and the cost rises to 100 times if changes are made during system test or after fielding. Unfortunately, the incentive structure suppresses early identification of potential downstream problems and encourages aggressive and ambitious planning to launch large programs.
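The escalating cost-of-change rule of thumb can be captured in a small sketch. The multipliers below are the rough factors of 10 and 100 cited above; the function name and dollar figure are illustrative, not a precise model.

```python
# Rough cost-of-change multipliers by waterfall phase, using the rule-of-thumb
# factors discussed above (illustrative, not a precise model).
CHANGE_COST_MULTIPLIER = {
    "requirements": 0,     # almost no cost associated
    "design": 1,           # baseline
    "implementation": 10,  # ~10x the design-phase cost
    "system_test": 100,    # ~100x
    "fielded": 100,        # ~100x
}

def change_cost(design_phase_cost, phase):
    """Estimated cost of a change, scaled by the phase in which it is made."""
    return design_phase_cost * CHANGE_COST_MULTIPLIER[phase]

# A change costing $5K if caught during design, deferred to system test:
print(change_cost(5_000, "system_test"))  # 500000
```

The same change deferred from design to system test grows two orders of magnitude, which is why suppressing early problem identification is so costly.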
"The competition forfunding when a program is launched encourages
aspiringDOD program managers to include performancefeatures and
design characteristicsthat rely on immature technologies. In this
environment, risks in the form of ambitious technology advancements and
tight cost and schedule estimates are accepted as necessaryfor a
successful launch. Problems or indicationsthat the estimates are
decaying do not help sustainprograms in lateryears, and thus admission
73
of them is implicitly discouraged. There arefew rewardsfor discovering
and recognizing potentialproblems early in program development." [131
Ultimately, the officer evaluation system emphasizes "Impact on Mission" as a primary rating criterion and should self-correct if aggressive program management leads to failed programs with negative impacts on the mission. This leads into the next topic: program manager turn-over virtually guarantees that the program managers who launch the programs are not held accountable for the outcome.
Turn-over eliminates accountability
Interviews with program managers and questionnaire responses showed that military program managers are assigned to a program for an average of 2.4 years. When compared to the five-year spending plans generated by program managers as part of the DoD planning, programming, and budgeting process, we can immediately see that the sitting program manager sets the budgetary stage for the next one to two program managers to follow. The lack of the management continuity needed to oversee large-scale, long-term software developments has adverse effects on program focus and progress. It also reduces or eliminates accountability for decisions made during a tenure as program manager. Following the cost performance reporting data and program management changes over a five-year period for a major C2 system acquisition is instructive.
The cost and schedule performance was based on discrete tasks defined in a Work
Breakdown Structure. The contractor provided the following data in a monthly Cost
Performance Report (CPR):
BCWS   Budgeted Cost of Work Scheduled   How much effort was scheduled for the period
BCWP   Budgeted Cost of Work Performed   How much effort was accomplished for the period
ACWP   Actual Cost of Work Performed     How much the BCWP eventually cost
Table 2 - Cost Reporting Definitions
By comparing BCWS and BCWP, the contractor's schedule performance can be established. For example, if BCWP < BCWS, the contractor is behind schedule. Similarly, BCWP and ACWP are compared to identify cost performance. If ACWP > BCWP, the contractor is over planned cost. Schedule and Cost Performance Indices (SPI and CPI) are developed from this data.
SPI = BCWP / BCWS
If SPI = 1, contractor is on schedule
If SPI < 1, contractor is behind schedule
If SPI > 1, contractor is ahead of schedule
CPI = BCWP / ACWP
If CPI = 1, contractor is on cost
If CPI < 1, contractor is over cost (overrun)
If CPI > 1, contractor is under cost (underrun)
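The index definitions translate directly into code. A minimal sketch, using illustrative CPR figures in $K rather than actual program data:

```python
# Minimal earned-value index calculations from the CPR quantities defined above.

def spi(bcwp, bcws):
    """Schedule Performance Index: <1 behind schedule, >1 ahead of schedule."""
    return bcwp / bcws

def cpi(bcwp, acwp):
    """Cost Performance Index: <1 over cost (overrun), >1 under cost (underrun)."""
    return bcwp / acwp

# Illustrative month: $20M of work scheduled, $18M accomplished, at a cost of $21M.
bcws, bcwp, acwp = 20_000, 18_000, 21_000  # in $K, as in the CPR data

print(round(spi(bcwp, bcws), 3))  # 0.9   -> behind schedule
print(round(cpi(bcwp, acwp), 3))  # 0.857 -> over cost
```

Both indices sit below 1.0 for this month, signaling a program that is simultaneously behind schedule and over cost.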
This cost and schedule performance data is only a trend indicator. Unrealistic contractor front-loading of planned effort, inaccurate reporting, and program rebaselining can adversely impact the value of this tool. Even if the contractor has performed well to date, there is no guarantee that future efforts, such as system testing, will not encounter cost/schedule performance problems. And while this is only a trend indicator, findings from a study of DoD programs behind schedule and over cost reported that no such programs recovered.
The cost and schedule data from program launch through the tenure of the first program manager for the C2 initiative studied is shown in the graphs in Figure 23 below. The original program plan was for a $95M development effort starting in Dec 96 and delivering its first version by Oct 98. Numbers in the CPR are in $K and the timeline is in months.
[Figure: three charts of CPR data for months 1 through 14 under PM 1: cumulative cost performance (BCWS, BCWP, ACWP), cumulative schedule and cost variance, and cumulative SPI and CPI (roughly 0.75 to 1.0), annotated at the point where Program Manager 2 takes over.]
Figure 23 - Cost Reporting Under PM 1
It is worth noting that in month 11 'end of year money' was added to the contract to put it back on track. The incoming program manager took over a program that appeared to be recovering after some initial instability in standing up the project. Only five months after PM 2 took over, the program was announced as an "Acquisition Reform Success Story" by the Assistant Secretary of the Air Force (Acquisition), projecting a $6M savings.
"The xxx (name deleted) program consolidated the effort of three separate program offices and over 10 development organizations into a single integrated program office with a single integration and development contractor and companion hardware contract (doing it better). This consolidation reduced the manpower necessary to manage the program (doing it cheaper) and increased the ability of the contractor to accomplish development and integration--resulting in a fully integrated system provided to Air Force combat units in FY 98 (doing it faster)."
Air Force Acquisition homepage, http://www.safaq.hq.af.mil
Program manager 2 was in place for 20 months, and toward the end of PM 2's tenure the program was 'rebaselined.' The practice of rebaselining adjusts for errors in the original estimates for budgeted and actual work by increasing the BCWP. This has an instant impact on cost and schedule variance; see the graphs below. As program manager 3 takes over, the program is in its 33rd of 36 months overall and was expected to deliver version 1 at a total cost of $95M. The program had already expended $119M, a 25.3% over-run, yet this is not apparent from the cost performance reporting graphs in Figure 24 below. The cost over-runs are partly due to contractor performance and partly due to requirements growth and promises of additional functionality.
[Figure: three charts of CPR data under PM 2: cumulative cost performance (BCWS, BCWP, ACWP approaching $140,000K), cumulative cost and schedule variance (falling toward -$7,000K), and cumulative SPI and CPI, annotated with the original 1.0 delivery date, the start of component and integration testing, the Acquisition Success Story announcement, the program rebaselining, and the point where Program Manager 3 takes over.]
Figure 24 - Cost Reporting Under PM 2
As Program Manager 3 took over, the program was behind schedule and under incredible pressure to enter operational testing. The series of full-scale operational tests revealed multiple problems which made the software unacceptable until fixed. To bring the program back on track and actually deliver a product, PM 3 was forced to 'descope' the functionality delivered. Descoping essentially takes tasks off the contract and adjusts BCWS. This helps to reduce the workload on the contractor and focuses the team on critical functionality. Descoping also reduces the functionality that was promised to
operational users by previous program managers and used to justify increased budgets. Another effect of descoping is that less work has been completed for the actual costs incurred (ACWP), and cost variance grows. This in turn forced two rebaselinings to adjust BCWS and improve the cost variance. See the graphs in Figure 25 below.
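Using the standard variance measures (cost variance = BCWP - ACWP, schedule variance = BCWP - BCWS), the combined effect of descoping and rebaselining can be sketched. All figures below are illustrative, not the program's actual CPR data.

```python
# Sketch of how descoping and rebaselining move the standard variance measures.
# Figures are illustrative (in $K), not actual program data.

def variances(bcws, bcwp, acwp):
    """Cost variance (BCWP - ACWP) and schedule variance (BCWP - BCWS)."""
    return {"cost_var": bcwp - acwp, "sched_var": bcwp - bcws}

# Program state before any adjustment: behind schedule and over cost.
bcws, bcwp, acwp = 120_000, 110_000, 125_000
print(variances(bcws, bcwp, acwp))

# Descoping: remove tasks from the contract. Planned (BCWS) and earned (BCWP)
# value for the removed tasks disappear, but the money already spent (ACWP)
# does not, so cost variance worsens.
bcws -= 15_000
bcwp -= 10_000
print(variances(bcws, bcwp, acwp))

# Rebaselining: credit more earned value (raise BCWP), instantly improving
# both variances without any new work being performed.
bcwp += 20_000
print(variances(bcws, bcwp, acwp))
```

The last step illustrates why rebaselining masks trouble: both variance numbers improve on paper while the underlying cost and schedule position is unchanged.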
[Figure: three charts of CPR data under PM 3: cumulative cost performance (BCWS, BCWP, ACWP approaching $250,000K), cumulative cost and schedule variance (dipping below -$15,000K), and cumulative SPI and CPI, annotated with the program rebaselinings and the point where Program Manager 4 takes over.]
Figure 25 - Cost Reporting Under PM 3
In the end PM 3, who was considered a rising star in the acquisition career field, was forced to make some tough decisions, break promises to the operational users for additional functionality, and request additional funding to cover additional testing and rework. These actions put the program in a position to deliver a greatly reduced product at a higher-than-expected cost. The program manager was able to avoid cancellation of a program that had expended $196M against an original $95M budget and was now 16 months away from operational fielding, which would be 36 months later than the original estimate.
Program manager 4 took over the program, and the pattern of growing cost variance and rebaselining before a new program manager takes over continued. The product was delivered to the user community seven months after program manager 5 took over. See the graph in Figure 26 below.
[Figure: charts of CPR data for months 1 through 9 under PM 4: cumulative cost performance (BCWS, BCWP, ACWP between roughly $160,000K and $240,000K), cumulative cost and schedule variance, and cumulative SPI and CPI (roughly 0.97 to 1.0), annotated where the program is rebaselined and Program Manager 5 takes over.]
Figure 26 - Cost Reporting Under PM 4
Overall, the pattern of over-optimistic cost and schedule estimates, growth in promised functionality, increasing budgets, and slipping schedules is masked by the practice of rebaselining. Taking the same cost reporting data and removing the rebaselining adjustments gives a very different picture of the program's performance. See the graph in Figure 27 below.
[Figure: two charts of cumulative cost and schedule variance spanning PM 1 through PM 5, one with rebaselining included and one with rebaselining adjusted out, the latter showing variances growing toward -$60,000K.]
Figure 27 - Comparison with and without rebaselining
Looking at the graph with rebaselining adjusted out, we can see that cost growth really begins to occur as the program moves into testing. The most likely explanation is that the impact of poor design choices, over-optimistic estimates, coding defects, and bugs is not revealed to the government program managers until testing begins. The true scope of the problem can only be understood when the 380-plus software segments which make up the overall program are integrated and tested as a whole. Fixing first-order problems gives rise to new problems and reveals still-undiscovered second-order problems. The implementation decisions and acquisition approach established by PMs 1 and 2 have tremendous implications for the success of the program. Early in the program planning stages, faulty assumptions and emergent problems are difficult to anticipate until testing begins to reduce uncertainty and demonstrate the true capabilities of the developed product. By the time problems are revealed, the original program managers have usually moved on and are no longer accountable. Not only are they protected from accountability; they often miss the opportunity to learn from the experience of addressing problems they may have created. The incoming program managers are thrown into a heroic struggle to save the program.
View from the user/customer's perspective
The user community has been frustrated by the lack of responsiveness of ESC and its
ability to deliver systems. The systems generally take too long to reach the field, are too
costly, are difficult to support, and fail to meet user expectations. What is the cause of
this dissatisfaction? To start, technology holds the promise of improving operational
efficiency and program managers pursue opportunities to build those systems
aggressively. When program managers over promise and then have to slip schedules,
increase costs, and deliver less capability than originally promised, the organization's
credibility suffers.
In today's fast-clockspeed development environment the user community has come to expect the latest technology delivered now, and at a competitive price. Many times users are able to implement software solutions to local problems using the 'couple of guys in the garage' approach. User-developed solutions are often a source of key innovations which drive larger, more formal programs. Harvesting the innovative potential of lead users has been the subject of research by Prof. Eric von Hippel:
"contrary to conventional wisdom, successful innovations are often first developed and tested by the users themselves, "lead users," rather than by the firms that are first to bring those innovations to market." [14]
Leveraging user solutions can be a double-edged sword for program managers. First, the lead users tend to create solutions tailored to their way of thinking about and visualizing important information. They know the domain and its challenges far better than any team of software developers can hope to learn in a short development cycle. As a result, user-developed solutions are often an excellent match to their particular problem. Unfortunately, these solutions do not always scale well or translate into a generalizable solution. Second, the lead users can implement a solution for extremely modest costs. When 'professional' developers adopt the solution and deliver it to a wider user audience, large overhead costs are incurred, easily an order of magnitude or more. Frederick Brooks' explanation for this is enlightening. The user-developed 'garage solution' is generally a software program without documentation and without formal system interfaces and specifications. Generalizing the program so that other users can read, learn about, and use it increases the cost by a factor of three. Integrating the program and building interfaces can increase the cost by another factor of three. See Figure 28.
[Figure: a program grows into a programming product (generalization, testing, documentation, maintenance) at roughly 3x the cost, or into a programming system (interfaces, system integration) at roughly 3x the cost; doing both yields a programming systems product.]
Figure 28 - Evolution of the programming system product [4]
In the end, a solution developed and implemented at a user site for $200,000 can be expected to require over $2M to re-develop, and is likely to deliver a less capable (albeit supportable) solution. A program office taking on such a development effort runs the risk of criticism for inefficient and ineffective management of development dollars and for an apparent inability even to copy what was provided to them. The pressure to produce similar results by reusing a known solution can lead to unrealistic cost and schedule estimates.
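Brooks' multipliers compound, which is why the re-development cost lands roughly an order of magnitude above the garage version. A quick sketch of the arithmetic (the $200K baseline is the example from the text; the 3x factors are Brooks' estimates):

```python
# Brooks' cost factors: generalizing a one-off program into a product (~3x)
# and integrating it into a system with formal interfaces (~3x) compound.
GENERALIZATION_FACTOR = 3   # documentation, testing, maintenance
INTEGRATION_FACTOR = 3      # interfaces, system integration

def systems_product_cost(garage_cost):
    """Estimated cost to turn a garage program into a programming
    systems product, per Brooks' factors."""
    return garage_cost * GENERALIZATION_FACTOR * INTEGRATION_FACTOR

print(systems_product_cost(200_000))   # prints 1800000 -- roughly the $2M cited
```

The 9x factor is a rule of thumb, not a prediction; the text's $200K-to-$2M example simply reflects the same order of magnitude.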
One final aspect worth considering is the immediacy of the user's experience. Operational commanders are presented with highly dynamic situations and are required to make decisions and implement solutions on the fly. They see the world in very short decision cycles and are used to near-immediate feedback. When you connect the
operational community's fast feedback cycles to the acquisition community's product
definition and development process there is an immediate tension. Great ideas, concepts,
and innovations rising from the operational user community seem to take an inordinate
amount of time to find their way into operational systems. Causal loop diagrams are a tool for systems thinking and for understanding the dynamics of feedback [15]. Figure 29 below is a causal loop diagram that illustrates the point.
[Figure: causal loop diagram linking the operational community's immediate warfighting problems and ad hoc operational solutions (days or months) to the acquisition community's planning, strategy implementation, and fielding cycle (six months to years), with lessons learned feeding back into planning.]
Figure 29 - Operational Feedback Delay
Summary
The cultural tension between the operational user community and the military acquisition community is not conducive to building program management best practices. The culture creates incentives and competition for increased span of control and influence, recognition, rewards, and promotions, which can lead to the adoption of high risk acquisition strategies. The culture further shelters program managers from accountability for bad decisions and reduces the opportunity to learn from experience. Finally, the operational culture tends to hold a negative image of acquisition professionals (a.k.a. 'acquisition weenies') who do not understand the warfighters' needs, are inefficient managers of large budgets, and are unable to deliver the latest technical innovations in a timely manner. The reality of the situation is that there are many dedicated and conscientious program managers who are aggressively pursuing enhancements to the nation's warfighting capabilities. Efforts to improve incentive structures, create accountability, and manage user perceptions could have lasting positive effects on military acquisition and help facilitate the adoption of strategies more in line with commercial best practices.
Recommendations
DSMs and Enterprise Architectures
Software projects are more difficult to manage than construction projects [16], and C2 programs are among the most complex and difficult software projects in the world today. The human mind has a limited ability to deal with complex, probabilistic, or dynamic problems intuitively. C2 program management decisions tend to fall into the complex category, with many variables and relationships to consider. The sheer number of sub-systems and components in a C2 system makes them extremely complex. For example, a C2 system with 49 components and 2 options or choices for each component yields 2^49 = 5.6295 x 10^14 possible system configurations. Layer over that the unique functions and potential incompatibilities inherent in certain combinations, and the level of complexity is beyond a program manager's ability to manage exhaustively. C2 program managers are also forced to deal with the dynamics of schedules, trying to get the releases of multiple components to deliver on time so they can be integrated into a unified system. And there is always the probability that one or more of the systems will deliver late or create unexpected conflicts within the system. Clearly, C2 program managers need tools and strategies for managing and reducing the complexity of their task. This section provides some suggestions and recommendations to help enhance the ability to develop and deliver interoperable C2 systems.
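The combinatorial growth above is easy to verify; a minimal sketch using the example values from the text (49 components, 2 options each):

```python
# Number of distinct system configurations for n components,
# each with k independent options, is k ** n.
def configuration_count(components, options_per_component):
    return options_per_component ** components

# Example from the text: 49 components with 2 options each.
count = configuration_count(49, 2)
print(count)   # prints 562949953421312, i.e. about 5.63e14
```

Even this deliberately small example exceeds what any exhaustive manual analysis could cover, which is the point the text is making.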
Introducing enterprise interoperability metrics
If fielding interoperable systems is a goal for the military, then being able to measure progress on improving interoperability is necessary. There has been no systematic effort to measure interoperability for the AOC system. One of the primary reasons is that interoperability is an inherently multi-dimensional concept, and there is no single metric that will give a clear picture of the system's interoperability. Defining and applying a set of quantitative metrics would be extremely difficult, but the increasing reliance on C2 systems and the need for improved interoperability demand a more explicit attempt to
measure it. Rather than attempting a precise measure, which would create controversy, this thesis recommends a subjective interoperability measure based on practical assessment and judgment. The basic measures are Green/Yellow/Red for a Pass/Marginal/Fail assessment of the system's interoperability. This framework can be applied in three ways: first, for interoperability or compliance assessment against standards like the DII COE or the Joint Technical Architecture; second, to measure information flows between applications or external systems as defined by the system architecture; and third, to trace a mission thread and determine whether the systems support end-to-end mission performance.
The first step in measuring interoperability is to determine where systems, sub-systems, and components are required to interoperate. Rather than 're-invent the wheel', program managers will want to use information that is readily available. Adapting the Design Structure Matrix (DSM) used earlier in the architecture representations and interface design would be a logical choice. The DSM in Figure 30 has been shaded to highlight where all the interfaces exist. Color coding the DSM squares where an interface exists as Green/Yellow/Red for Pass/Marginal/Fail is a quick way to visualize the status of the system. In this case the DSM includes DII COE and external systems and interfaces as well as internal system interfaces. Using the DSM as an interoperability metric tool is a great way to track progress and manage the development activities on the system.
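The color-coded DSM lends itself to simple tooling. The sketch below is illustrative only (the interface cells and statuses are hypothetical, not the AOC data): each required interface is stored with its Green/Yellow/Red status, and the summary reports the fraction assessed as passing.

```python
# Each required interface is a (row, column) cell in the DSM, mapped to
# its assessed status: "G" (pass), "Y" (marginal), or "R" (fail).
from collections import Counter

def interoperability_summary(interfaces):
    """Count interfaces by status and compute the percentage assessed Green."""
    counts = Counter(interfaces.values())
    total = sum(counts.values())
    percent_green = 100.0 * counts.get("G", 0) / total if total else 0.0
    return {"total": total, "counts": dict(counts), "percent_green": percent_green}

# Hypothetical DSM cells: (producing component, consuming component) -> status.
statuses = {
    ("TW", "MIDB"): "G",
    ("TW", "TAP"): "Y",
    ("TAP", "MIDB"): "R",
    ("EM", "TW"): "G",
}
summary = interoperability_summary(statuses)
print(summary["percent_green"])   # prints 50.0
```

Run against a full interface list, the same summary produces the kind of formally reportable metric the text describes.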
[Figure: DSM with shaded cells marking every required interface in the system.]
Figure 30 - DSM for Interoperability Metrics
The Green/Yellow/Red assessments can be used for mission threads as well. The overall assessment of the interoperability for a thread is equivalent to the worst interoperability rating for an information path along the thread. In other words, a chain is only as strong as its weakest link. The mission thread for a mensurated point or Desired Mean Point of Impact (DMPI) was discussed in the background section and provides an excellent example (Figure 3). A simplified data flow diagram is provided here in Figure 31. From a program management standpoint, five different program managers and six different systems are involved in providing systems to support this operational thread. Each program manager and system has a different set of requirements and priorities, so building consensus on the thread is difficult. It is also noteworthy that some of the interfaces are not under the control of any single program manager. This kind of simplified diagram becomes an interoperability tool and is extremely valuable for building common understanding and determining what negotiations on interfaces must take place.
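The weakest-link rule can be expressed directly. A small sketch, assuming the same Green/Yellow/Red scale (the thread contents below are hypothetical, not the actual DMPI assessment):

```python
# Rank statuses from worst to best; a thread's overall rating is its worst link.
RANK = {"R": 0, "Y": 1, "G": 2}

def thread_rating(link_statuses):
    """Return the overall rating of a mission thread: the worst-rated link."""
    return min(link_statuses, key=lambda s: RANK[s])

# Hypothetical seven-interface thread with only one link fully implemented.
dmpi_thread = ["G", "R", "Y", "R", "R", "Y", "R"]
print(thread_rating(dmpi_thread))   # prints R
```

A single failing interface anywhere along the path drags the whole thread to Red, which is exactly why end-to-end thread tracing catches problems that per-interface reporting can hide.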
[Figure: simplified DMPI data flow involving five program managers and six systems, passing an annotated image and spreadsheet data through the AOC and MPS to the aircraft/weapon.]
Figure 31 - DMPI Flow
The use of DSMs and thread diagrams as metrics tools is suggested here to evaluate system interoperability. These are simple yet powerful tools for program managers. The fact that the interfaces for a system architecture or mission thread can be formally identified and characterized allows for reporting on progress. The DSM in Figure 30 shows 246 total interfaces with 179 green, 40 yellow, and 27 red. The system has achieved 73% of its interoperability requirements and can formally report that metric. More importantly, it can focus on areas where interoperability has not been achieved and search for trends. Sub-systems and components which are having problems integrating are highlighted and can be added to a risk management watch list. Figure 31 tells a story of interfaces between systems that are not being managed. It also highlights the need for coordination and collaboration between programs across the enterprise. Of the seven interfaces along the mission thread, only one has been satisfactorily implemented. Here a C2 node manager or enterprise manager can assign responsibility for un-managed interfaces and provide direction to individual program managers to work together on solutions. Important organizational and teaming relationships can be formed, aligned with system interoperability requirements.
Collaboration for enterprise decisions
Simply establishing interoperability requirements and documenting them in systems architectures does not ensure interoperability will occur. A deeper level of cooperation
between sub-system and component developers may be required to achieve interoperability goals. This is particularly important when a program office does not have direct control over the interface, either because the boundary between sub-systems lies in a gray area where no explicit control or authority has been established, or because the interface is between two sub-systems which are outside the developer's control.
Using an abbreviated DSM to assess the AOC system interfaces as documented in the
system specification provides very important insights. The AOC architecture has
requirements for interfaces among sub-systems and therefore has a vested interest in their
interoperability, but a number of the sub-systems and components which are part of the
architecture are built and delivered by external program offices. In the DSM below,
names of sub-systems being delivered by external program offices are highlighted in light
blue. The interface requirements are marked by ones in the DSM cells and are based on
the AOC System Specification. The cells where interfaces are required can then be color
coded based on the level of control the AOC program office has over the interface.
Green indicates the program office has complete control over the development of both
sub-systems which are interfacing. Yellow indicates the program office is in control of
one of the systems, and can negotiate with the other sub-system developer to build to a
common interface. Red indicates the program office is in control of neither sub-system
development and therefore must rely on the delivered systems to work together or must
build a specialized interface after delivery. The red interfaces are extremely undesirable and can create problems.
[Figure: DSM of the 18 AOC sub-systems (DII COE, AODB, MIDB, USMTF, JMTK, Spectrum Management (SM), Threat Evaluation (TE), Weather (Wx), Airspace Deconfliction (AD), Defensive Planning (DP), Targeting and Weaponeering (TW), Air Campaign Planning (ACP), Intelligence Data Management (IDM), Theater Air Planning (TAP), Execution Management (EM), Resource Management (RM), Situational Awareness and Assessment (SAA), Time Critical Targeting (TCT)), with externally delivered sub-systems highlighted and each required interface cell color coded green, yellow, or red by the program office's level of control.]
Figure 32 - Interface Level of Control
The DSM above is a good representation of the AOC system, showing 18 sub-systems with a focus on the mission applications and critical infrastructure of the originally planned system. Of the 18 sub-systems, 10 were under AOC program office control. The breakdown of "Interface Level of Control" for all 59 interfaces is: 20 green (34%), 26 yellow (44%), and 13 red (22%). The program office had direct control of only one third of the interfaces required to build the system! This metric is a potential indicator for program success and likelihood of cost growth.
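The level-of-control classification follows mechanically from knowing which sub-systems the program office owns. A minimal sketch (the ownership set and interface list below are illustrative, not the actual AOC specification):

```python
# Classify a required interface by program office control: "G" if the
# office owns both endpoints, "Y" if it owns one, "R" if it owns neither.
def control_level(interface, owned):
    a, b = interface
    owned_count = (a in owned) + (b in owned)
    return {2: "G", 1: "Y", 0: "R"}[owned_count]

# Illustrative data: the office owns TAP and EM; MIDB and JMTK are external.
owned = {"TAP", "EM"}
interfaces = [("TAP", "EM"), ("TAP", "MIDB"), ("MIDB", "JMTK")]
levels = [control_level(i, owned) for i in interfaces]
print(levels)   # prints ['G', 'Y', 'R']
```

Tallying these levels across the full interface list yields the green/yellow/red percentages reported above, and the fraction of green interfaces serves as the early risk indicator the text proposes.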
This DSM provides a blueprint for collaboration on system interfaces. Green
interfaces can be handled through internal working groups. Yellow and Red interfaces
require a different level of collaboration and coordination. At a minimum, the program
office should have memorandums of agreement with the programs for all interfaces in the
yellow category, and requirements or funding lines established with programs for
interfaces in the red category. Logical sets of interfaces based on a partitioning of the
DSM could be used to establish focused interface control working groups and funding
vehicles to meet these interoperability requirements.
The DSM for interface level of control (Figure 32) and the mission thread diagram (Figure 31) highlight the decision space where collaboration and negotiation must occur. The DSM and thread diagram aid the process of generating common understanding and building consensus to resolve interoperability issues. There is ample literature available on processes for organizing and running effective meetings. The real value to a program manager is knowing which meetings are important and where to invest limited time and resources. The DSM for "Interface Level of Control" provides an important tool for identifying and managing interface collaboration.
Interfaces, isolation layers, and information hiding
As early as 1972, David Parnas introduced the idea of information hiding as a way to modularize software designs and improve adaptability [17]. The concept was
based on anticipating where changes are most likely to occur in the software architecture
and modularizing around software components that would undergo change. Papers
produced as recently as 2001 by the Software Engineering Institute support the concept
of incorporating mechanisms in software architecture to accommodate change and allow
for future upgrade options. What is not addressed by these approaches is the fact that
modularization can have significant cost implications for the software development, can
delay the initiation of coding until the standardized interfaces are designed, and can
become constraining and costly to change. The approach recommended in this thesis is
to leverage the DSM of a software system to evaluate where modularization mitigates
risks due to change, and provide a tool for selecting and implementing information hiding
strategies.
The DSM describing Interface Level of Control (Figure 32) can be rearranged to
place externally developed software components at the top of the matrix. This provides
a better organization of the interfaces in terms of green, yellow or red levels of control.
Knowing what percentage of external interfaces are not under your direct control is a
critical measure and should be assessed and monitored as a part of any software project
[18]. Following the rearrangement, the interface between Targeting and Weaponeering (TW) and the Modernized Integrated Database (MIDB) is traced to determine how decisions on that implementation can affect other interfaces within the AOC system. Figure 33 shows the trace; inspection shows that TW must negotiate three interfaces for data output to the AOC components for TAP, EM, and ACP, and one input interface from ACP. Additionally, there are common data needs where ACP and TAP are building direct interfaces with MIDB as an additional path to receive targeting data. The overall result is six specialized or integrated interfaces within the AOC system for targeting data exchange.
[Figure: DSM rearranged with the externally developed components (DII COE, JMTK, MIDB, USMTF) at the top, tracing the TW-MIDB interface and the dependent targeting data interfaces with TAP, EM, and ACP.]
Figure 33 - DSM Interface Trace
By introducing a set of design rules for target data exchange in the form of a standardized data structure, the set of six unique interfaces can be combined into a single common interface for input/output of target data. The design rule does not eliminate the requirement for data to flow; however, it does introduce a level of isolation between the independently managed programs. See Figure 34. With the design rule in place, changes on either side of the isolation layer can take place without the need for formal coordination. Upgrades to components are simplified and require only abbreviated regression testing to ensure the design rules have not been violated. Furthermore, the design rule effectively hides the detail and complexity of components on the other side of the isolation layer and allows developers to build to an interface which should remain stable over time. The DSM can be used to evaluate how the risks associated with interfaces which are not under the direct control of the program office can be mitigated through the implementation of new design rules. Provided the design rules are implemented correctly and adhered to, they provide isolation between component changes [19].
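The effect of such a design rule can be sketched in DSM terms as a graph rewrite: direct component-to-component edges are rerouted through a single rule-set node. The example below is illustrative (component names follow the text, but the adjacency data and the `RuleSet` node are hypothetical):

```python
# A design rule replaces N specialized bilateral interfaces with one
# standardized data structure that every component builds to.
def apply_design_rule(edges, source, consumers, rule):
    """Reroute all source<->consumer edges through a common rule-set node."""
    kept = {e for e in edges
            if not (source in e and (e[0] in consumers or e[1] in consumers))}
    kept.add((source, rule))                 # source writes to the rule set
    kept |= {(rule, c) for c in consumers}   # consumers read from the rule set
    return kept

# Hypothetical targeting-data exchanges around MIDB.
edges = {("MIDB", "TW"), ("MIDB", "TAP"), ("MIDB", "ACP")}
new_edges = apply_design_rule(edges, "MIDB", {"TW", "TAP", "ACP"}, "RuleSet")
# Three bilateral agreements collapse into one standard every side codes to.
print(sorted(new_edges))
```

The edge count need not shrink; the gain is that only the rule-set specification must be negotiated and kept stable, so either side can change without re-coordinating every pairwise interface.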
[Figure: the rearranged DSM with an MIDB-TW interface rule set inserted as an isolation layer between MIDB and the AOC mission applications, replacing the six specialized targeting data interfaces with a single common interface.]
Figure 34 - Introduction of TW-MIDB Design Rule
The design rules require significant input and coordination from multiple subsystem
developers, but once established become stable and provide a standard for new external
systems to build to as part of a growing enterprise. The design structure matrix provides
a means to identify and evaluate where the introduction of design rules is likely to have the greatest impact in reducing risk due to change and improving system flexibility.
Incentives
Producing a cultural shift and establishing an organizational mindset that values
interoperability and provides individuals with strong incentives to pursue interoperability
is an important undertaking. The Total Quality Management movement was extremely strong in the 1980s and 1990s, largely due to then Secretary of Commerce Malcolm Baldrige's support and championing of quality initiatives. His commitment to quality was formalized with the creation of the Malcolm Baldrige Quality Award. The idea was to promote total quality management practice and to encourage American businesses to practice effective quality control in the provision of their goods and services. The award was enacted by Congress and signed by President Reagan on August 20, 1987 [20]. The Malcolm Baldrige National Quality Improvement Act of 1987 helped lead a shift in U.S. business practice and nurtured the quality movement. The name of the award and the formal act of Congress demonstrated buy-in and support for quality at the highest levels of the government.
There is no reason why a similar effort could not be pursued at ESC or the DoD level
to lead a shift for C2 program development in the pursuit of interoperability excellence.
C2 systems are a critical component of our nation's military capability and the goal of
increased interoperability is a key to the military transformation being pursued by
Secretary of Defense Rumsfeld. In her State of ESC address, Lt. Gen. Leslie Kenne stated,

"Integrated C2 is one of Secretary Rumsfeld's top priorities. He recognizes transformation won't be successful without integrated command, control, intelligence, surveillance and reconnaissance. We're going to help him achieve that here at ESC...."

The State of ESC address, 2001
Lt. Gen. Leslie F. Kenne
Electronic Systems Center Commander
By establishing the 'Rumsfeld C2 Interoperability Excellence Award' a strong
message would be sent to the acquisition community on the importance of
interoperability. The creation of the award would help improve C2 system
interoperability by:
* stimulating program managers to improve the interoperability of their systems for the pride of recognition while increasing the capability they deliver to the warfighter
* recognizing the achievements of programs which find ways to improve interoperability for their military customers and provide an example to other programs
* establishing guidelines and criteria that can be used to evaluate the interoperability of C2 systems and the processes used to achieve results
* providing specific guidance to program offices on how to manage C2 programs to achieve interoperability excellence, and making available details of how winning programs were able to change their cultures and processes
The award process would require C2 programs to submit a formal write-up and nomination package that provides an overview of the program and describes the management practices, processes, teaming relationships, etc. used and how they contributed to the successful outcome. Categories of C2 systems based on the size of the program and the nature of the system (i.e. communications, decision support, visualization, analysis and correlation, sensor systems...) would be established, and an annual review board would review packages and select winners.
Possibly the most valuable aspect of such an awards program is that it becomes a self-populating collection of interoperability challenges and alternative approaches, and a source of best practices for future programs to learn from.
Creating accountability
One of the barriers to delivering programs on time, on budget, and with the functionality promised is the fact that most C2 system development programs are launched by one program manager and developed over a number of years under a series of managers rotating onto and off of the program. The case study program had an average program manager tenure of 14.5 months. When a program finally delivers a system to the field, it has undergone a number of changes and quite probably significant technology upgrades. The program managers who sign up to the functional requirements, select the contractors, establish the budgets and schedules, and review the initial designs are virtually absolved of responsibility to see the project through to completion.
The idea of keeping a program manager assigned to a program is not part of the current thinking, and runs counter to many of the evaluation criteria used in promotion boards. This would be potentially career damaging to officers in the program management field. An alternative is to make program life cycles shorter. This is actually a very positive move and consistent with the fast clockspeed development cycles discussed by Charles Fine [21]. The Standish Group has a long history of investigating the causes of software project failures and has developed a framework for project success; their data show the trend improving.
"The reason most of these projects failed was not for lack of money or technology; most failed for lack of skilled project management and executive support." [22]
              1994    1996    1998    2000
Succeeded      16%     27%     26%     28%
Failed         31%     40%     28%     23%
Challenged     53%     33%     40%     49%
While the overall number of projects being launched is increasing, the average size of software projects is declining, and success rates are much higher on small projects. According to the Standish Group's findings, a 1998 project with labor costs under $750K had a 55% chance of delivering on time, on budget, with all the features and functions originally specified. As labor costs rose above $750K the success rate dropped to 33%, and as costs approached $10M the success rate went to zero. This does not mean that a large project cannot be built and delivered; it simply means it is virtually impossible to build it to the original specifications on the budget and timeline planned.
The idea of incorporating 'micro-projects' into the C2 system acquisition process helps to solve a number of problems. Let's define a micro-project in the C2 domain as a project that must be completed with six or fewer people and delivered in six or fewer months. The small size of the projects has an immediate impact on reducing the complexity of the programs and solves the accountability problem for program managers, since they can potentially propose, develop, and deliver multiple projects in the course of a single assignment. The cost, schedule, and performance metrics for a micro-project can be established by a single program manager, and feedback from users can be garnered as part of a program manager's evaluation. Incorporating a micro-project approach as part of a larger system program office has many interesting implications. The six person, six month development cycle at $20K per staff-month sets a standard project at $720K. A mid-sized system program office with 10 program managers could plan for 20 projects a year on a $14.4M annual budget. Foreknowledge of user needs and requirements is not necessary to submit a budget request, and the system program office has the flexibility to respond rapidly to the emergent and changing needs of the user community. Close relationships with the user community could be used to generate concepts for and select projects, and to provide tailored solutions to meet specialized mission needs while educating the program managers on the mission areas and problems faced by end users.
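The budget arithmetic above is easy to parameterize; a small sketch using the figures from the text (the $20K loaded labor rate and 10-manager office are the text's planning assumptions):

```python
# Micro-project cost model: team size x duration x loaded labor rate.
def micro_project_cost(people, months, rate_per_staff_month):
    return people * months * rate_per_staff_month

project = micro_project_cost(6, 6, 20_000)   # $720,000 per standard project
annual = 20 * project                        # 20 projects per year
print(project, annual)                       # prints 720000 14400000
```

Changing any one assumption (say, a higher labor rate or four-month cycles) immediately reprices both the standard project and the office's annual budget request.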
Additionally, the small scope of the projects allows them to be treated as virtually disposable commodities. Since there is not a massive capital investment in micro-project development, the need for long term sustainment becomes obsolete in favor of redesign and migration to newer technology. As a track record of small programs builds, performance trends for individual program managers, program development teams, mission areas, and technical approaches form the basis for evaluations, promotions, and risk management.
Supporting a micro-project acquisition approach requires a consistent infrastructure on which to deploy the micro-projects. A solid architecture and a set of development and interface standards would be needed to support micro-project development. This is itself a large complex system that cannot be effectively broken into micro-projects. Experienced program managers and long term civilian and contractor support should be employed to develop and maintain the infrastructure. The micro-projects become a training ground for new program managers, giving them time to gain experience building micro-projects on the infrastructure, with the most successful program managers moving up to manage the infrastructure migration and sustainment. While micro-projects are not a panacea, they do provide an interesting alternative to massive multiyear program developments with integrated sub-systems and components as part of the delivery.
Conclusions
The track record of C2 system deliveries has been plagued by cost over-runs, schedule slips, and reductions in delivered performance. While there are a number of commercial best practices that have been demonstrated to improve results when applied, the gap between industry best practices, which are usually demonstrated on small projects, and what is actually practiced on large complex projects is tremendous. The introduction of a focused set of tools and methods for use in large complex C2 system developments to improve success rates is a worthwhile effort. A relatively simple set of program management tools that can be used to help manage the complex interdependencies between systems and build cross-program understanding is needed to improve the cost/schedule/performance metrics and overall interoperability of the C2 systems under development.
The research results point to Design Structure Matrices as an effective tool for modeling information flows within Command and Control systems and representing system architectures in a more useful format. The DSMs then become a tool for analyzing and identifying high risk components in the system, measuring progress on interoperability requirements, evaluating the program office's level of control over interfaces, and determining where to introduce design rules to decouple external interfaces and mitigate the effects of changes on the system. When coupled with the recommendations to provide incentives for cooperation in delivering interoperable systems and to introduce micro-projects as part of an evolutionary acquisition management approach, the potential to improve the success rate of delivering interoperable C2 system components appears promising.
Future Work
There is a great deal of research work left in the area of Command and Control system
interoperability. This thesis focused on the system complexity and acquisition culture
barriers to interoperability, and is far from an exhaustive analysis of the subject. There
are clearly a number of other barriers and approaches that should be explored. The
logical next step to derive value from this work lies in applying the recommendations on
real C2 programs and testing their validity and utility. Developing and using Design Structure Matrices for current programs as part of an ongoing, dynamic program will be extremely instructive. Changes in C2 programs are frequent and difficult to keep up with. The thesis was based on historical data and took a snapshot in time of the system architecture to use for analysis. Building DSMs under the often chaotic, changing circumstances of a C2 program may prove difficult. The lessons learned from practical application of DSMs to represent information flows and define interfaces need to be researched and documented. It remains to be determined how useful DSMs are in supporting architecture decisions, where and when in the development process they are most useful, how understandable and presentable they are to large groups, and how the benefits of improved understanding and management of system interfaces translate into improved interoperability.
A more detailed quantitative analysis of the survey results would be useful to look for correlations and derive value from the survey information collected. Additionally, some of the preliminary research into emergence leads in an interesting direction for C2 systems themselves. The application of theories and technologies in the areas of self-organizing networks, complex adaptive systems, swarm theory, and swarm intelligence could profoundly change the definition of C2 system interoperability. One of the tenets of airpower is centralized control and decentralized execution [3]. The commonly held mental model of command and control treats the military as a large organism where C2 systems form the nervous system. Swarm theory and self-organizing networks provide an alternative paradigm where weapons platforms and sensors would be treated more like ants in an ant colony or bees in a hive. The platforms would act as autonomous agents which follow simple rules and exchange information directly. The
99
system then becomes a swarm of independent agents vs. a large complex organism. The
swarms then seeks and arrives at locally optimal solutions through a process of mutation,
evolution, and survival of the fittest instead of centralizing and processing a massive
amount of information, making decisions and providing instructions. This concept leads
to a radically different view of interoperability and is an interesting area for future
research.
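The swarm paradigm can be illustrated with a minimal sketch. The agents, the rule they follow, and the numbers below are hypothetical illustrations, not part of the thesis: each agent follows one simple rule and exchanges information only with a few randomly chosen peers, yet the group converges toward the best locally known solution without any central controller.

```python
# Minimal sketch of decentralized, swarm-style information sharing.
# All agents, rules, and values here are hypothetical illustrations.
import random

random.seed(1)  # fixed seed so the run is repeatable

class Agent:
    """An autonomous platform following one simple rule: adopt the
    best (lowest-cost) target estimate seen among its neighbors."""
    def __init__(self, estimate):
        self.estimate = estimate  # local cost estimate of a candidate solution

    def exchange(self, neighbors):
        # Direct peer-to-peer exchange: no central node is involved.
        best = min(neighbors, key=lambda a: a.estimate)
        if best.estimate < self.estimate:
            self.estimate = best.estimate

agents = [Agent(random.uniform(0, 100)) for _ in range(20)]
initial = [a.estimate for a in agents]

# A few rounds of purely local exchanges drive the swarm toward a
# shared, locally optimal solution without a central controller.
for _ in range(5):
    for a in agents:
        a.exchange(random.sample(agents, 3))

print(min(a.estimate for a in agents), max(a.estimate for a in agents))
```

Because the rule only ever lowers an agent's estimate toward a value some peer already holds, the best estimate in the swarm is preserved while the spread across agents shrinks; the information "flows" through local exchanges rather than through a central processing node.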
The primary goal of this thesis is to lead a shift in the thinking and practice of the
acquisition community to meet the demands of improved interoperability. Applying the
recommendations in a measured fashion to create that shift, and monitoring the results, is
the next phase of work planned by the author.
Appendix A - Questionnaire
Note: Compiled data from the survey are available from the thesis author; however,
personal data from respondents and contact information will be masked for
confidentiality reasons. Send data requests to sfrey@mitre.org.
C2 Program Acquisition Questionnaire
The following questions are designed to explore the values, beliefs, and mental models held by individuals
within the C2 acquisition community. Personal contact information is for follow-up purposes only. All your
answers will be held in the strictest confidence, and personal data will be removed from the compiled results.
Your participation is voluntary and deeply appreciated.
Instructions:
Use the Tab and Shift+Tab keys to move between fields. You may also use the mouse and click on the
shaded area to select it. When you have completed the survey, please save the file by selecting Save As on
the File menu. Please name the file using the convention Lastname-survey.doc (e.g., Frey-survey.doc).
Please return the saved file as an email attachment to sfrey@mit.edu.
Personal Background Data (will be held confidential)
B1. Name
Rank/Grade
Work Phone
e-mail address
B2. Who do you work for?
[ ] Civilian US Government
[ ] Military US Government
[ ] Commercial Contractor
[ ] Other (please specify):
Organization name:
B3. Which best describes your current position?
[ ] Senior Manager
[ ] Program Manager
[ ] Team Leader
[ ] Program Management Staff
[ ] Contractor Support Staff
[ ] Technical Staff
[ ] Other (please specify):
B4. On what activities do you currently work? (Please mark all that apply)
[ ] Program Management
[ ] Requirements Management
[ ] System Architecture
[ ] System Engineering
[ ] Software Design
[ ] Software Coding
[ ] Testing and Integration
[ ] Quality Assurance
[ ] Fielding, Training, and Support
[ ] Configuration Management
[ ] Security
[ ] Budgeting & Finances
[ ] Administrative Tasks
[ ] Other (please specify):
B5. What is your total experience in program management and systems acquisition?
[ ] Less than 1 year
[ ] Between 1 and 3 years
[ ] Between 3 and 5 years
[ ] Between 5 and 10 years
[ ] Between 10 and 15 years
[ ] Between 15 and 20 years
[ ] Over 20 years
Questions
Q1. What aspects of a C2 program make it preferable/career enhancing to work on?
(For each pair below, mark a scale running from Strong Preference for the left item, through No Preference,
to Strong Preference for the right item.)

Large budget vs. Small budget
Long program lifecycle vs. Short program lifecycle
Uses proven & commercially available technology vs. Employs leading edge technology
Based on established/mature operational concepts & methods vs. Defining revolutionary operational concepts & methods
Incremental enhancements to common processes vs. First attempt to solve a unique problem
Limited number of external interfaces vs. Complex engineering and system integration challenges
Clearly defined system performance metrics vs. Loosely defined system performance metrics
Tight accountability and schedule for product delivery vs. Minimal accountability and flexible delivery schedule
Q2. On average, how many years do you spend working on a single acquisition program before moving to
a new program?
Enter number:
Q3. Please provide your opinion on the following statements as they relate to acquisition programs.
(Rate each statement: N/A, Rarely, Occasionally, Often, Usually, Always.)

Cost deviations are avoidable
Schedule deviations are avoidable
Performance deviations are avoidable
SW development metrics are valuable
Risk management is used effectively
Requirements are stable
System architectures are well documented and widely understood
System complexity is manageable
It is important to maintain awareness of the status of other programs within the enterprise
It is important to coordinate program decisions with other programs in the enterprise
Programs share common enterprise integration goals & objectives
Program managers focus resources and effort on their program's goals
Program managers have the flexibility and authority to support C2 enterprise integration goals
Individual programs cooperate to achieve enterprise goals
Acquisition policy provides incentives for enterprise cooperation
Q4. What factors contribute to cost and schedule growth?
(Rate each factor: N/A, Small Contributor, Moderate Contributor, Significant Contributor, Heavy
Contributor, Primary Cause.)

Overly optimistic estimates
Low developer productivity
Requirements instability
Discrepancies, bugs & rework
Feature escalation
Low technical maturity/high technical risk of components
Changes to incorporate new/better technology
Downward directed changes
Rigid/inflexible architectures
Deficiencies/problems with third party deliveries
Re-planning/re-work to accommodate third party variations
Slow/bureaucratic decision making processes
Poor decisions based on inadequate information
Management/staff turn-over and lack of continuity
Communication barriers / lack of common understanding of issues
Other (please specify):
Other (please specify):
Q7. What factors contribute to reductions in delivered functionality?
(Rate each factor: N/A, Small Contributor, Moderate Contributor, Significant Contributor, Heavy
Contributor, Primary Cause.)

Cost over-runs
Schedule slips
Requirements instability and changes
Unanticipated technical problems
High risk components failed to deliver
Architecture limitations encountered
Budget cuts
Program re-baselining
External partner program budget cuts or re-baselining
Other (please specify):
Other (please specify):
Conclusion
C1. Please use this space for any additional comments.
Thank you for taking the time to complete this survey. If you are interested in receiving a copy of the
survey results, please indicate so below.
[ ] Yes, please send a copy to my e-mail address provided above.
[ ] No thank you.
Steven E. Frey
MIT System Design & Management Fellow
Bibliography
1. Coakley, T., Command and Control for War and Peace. 1991, Washington, DC:
National Defense University Press.
2. Joint Chiefs of Staff, DoD Dictionary of Military and Associated Terms, JP 1-02,
S.A. Fry, Editor. 12 April 2001.
3. Secretary of the Air Force, Air Force Doctrine Document 1, AFDD 1-1, M.E. Ryan,
Editor. 12 Aug 1998.
4. Brooks, F., The Mythical Man-Month. 1995 ed. 1995: Addison-Wesley.
5. Parnas, D., Software Aspects of the Strategic Defense Initiative. Communications
of the ACM, 1985. 28(12): p. 1326.
6. Rechtin, E., The Art of Systems Architecting. 1997: CRC Press.
7. Bachman, F., Managing variability in software architectures, in Symposium on
Software Reusability. 2001. Toronto, Canada.
8. Senge, P., The Fifth Discipline. 1990, New York: Currency & Doubleday.
9. HQ AFPC/DPP, Secretary of the Air Force, Guide to USAF Officer Evaluation
System, AFPAM 36-2404, Lerum, Editor. 1 December 1996.
10. Beck, D.A.W., Probing the "It Depends" Variables, in Program Management.
2001. p. 22.
11. Johnson, J., Turning Chaos into Success. 1999, The Standish Group.
12. Walker, D.M., Major Management Challenges and Program Risks, Department
of Defense. 2001, US General Accounting Office.
13. Walker, D.M., Major Challenges and Program Management Risks, Department
of Defense. 1999, US General Accounting Office.
14. von Hippel, E., Innovation by User Communities: Learning From Open-Source
Software, in MIT Sloan Management Review. 2001. p. 82-86.
15. Sterman, J., Business Dynamics. 2000: McGraw-Hill.
16. Roetzheim, W., Turning Around Troubled Software Projects. 1999.
17. Parnas, D., On the Criteria to Be Used in Decomposing Systems into Modules.
Communications of the ACM, 1972. 15(12): p. 1053-1058.
18. Brown, N., The Program Manager's Guide to Software Acquisition Best
Practices. 1995: DoD.
19. Sullivan, K., The Structure and Value of Modularity in Software Design, in Joint
International Conference on Software Engineering. 2001. Vienna.
20. US Congress, The Malcolm Baldrige National Quality Improvement Act of 1987,
HR 812. Aug 1987.
21. Fine, C., Clockspeed. 1998, New York, NY: Perseus Books.
22. Johnson, J.H., Micro Projects Cause Constant Change. 2001, The Standish
Group: West Yarmouth, Mass.