Analysis of Alternatives (AoA) Handbook

A Practical Guide to the Analysis of Alternatives
10 June 2013
Office of Aerospace Studies
Air Force Materiel Command (AFMC) OAS/A5
1655 1st Street SE
Kirtland AFB, NM 87117-5522
(505) 846-8322, DSN 246-8322
www.oas.kirtland.af.mil
For public release. Distribution unlimited.
377ABW-2013-0453
Table of Contents
PREFACE .................................................................................................................................... 6
1 INTRODUCTION ..................................................................................................................... 7
   1.1 PURPOSE OF THE AOA ................................................................................................... 8
   1.2 AOA ENTRY CRITERIA ................................................................................................... 8
   1.3 AOA STUDY GUIDANCE ............................................................................................... 10
   1.4 AOA PRODUCTS ............................................................................................................ 11
   1.5 DECISION MAKER EXPECTATIONS OF AN AOA ...................................................... 13
   1.6 REFERENCE INFORMATION ........................................................................................ 15
2 DECISION MAKING AND HOW THE AOA FITS ............................................................. 17
   2.1 MAJOR PROCESSES ....................................................................................................... 17
   2.2 WHAT DECISIONS MUST BE MADE/SUPPORTED? .................................................. 19
   2.3 ROLE OF ANALYSIS IN THE MATERIEL SOLUTION ANALYSIS PHASE ............. 22
   2.4 WHO USES THE ANALYSIS .......................................................................................... 22
   2.5 RELATIONSHIP BETWEEN AOA AND OTHER ACTIVITIES ................................... 25
3 PLANNING THE ANALYTICAL EFFORT .......................................................................... 31
   3.1 SCOPING THE ANALYSIS ............................................................................................. 31
   3.2 DEFINING THE ALTERNATIVE CONCEPTS .............................................................. 35
   3.3 IDENTIFYING STAKEHOLDER COMMUNITY .......................................................... 35
   3.4 DETERMINING LEVEL OF EFFORT ............................................................................ 36
   3.5 ESTABLISHING THE STUDY TEAM ............................................................................ 38
   3.6 STUDY PLAN PREPARATION AND REVIEW ............................................................ 41
4 PERFORMING THE EFFECTIVENESS ANALYSIS .......................................................... 43
   4.1 EFFECTIVENESS METHODOLOGY ............................................................................ 43
   4.2 EFFECTIVENESS ANALYSIS METHODOLOGY ........................................................ 45
   4.3 LEVELS OF ANALYSIS .................................................................................................. 53
   4.4 SENSITIVITY ANALYSIS .............................................................................................. 57
   4.5 EFFECTIVENESS ANALYSIS RESULTS PRESENTATION ....................................... 58
5 PERFORMING COST ANALYSIS ........................................................................................ 60
   5.1 GENERAL COST ESTIMATING .................................................................................... 60
   5.2 AOA COST ESTIMATING .............................................................................................. 60
   5.3 LIFE CYCLE COST CONSIDERATIONS ...................................................................... 62
   5.4 COST ANALYSIS RESPONSIBILITY ............................................................................ 65
   5.5 COST ANALYSIS METHODOLOGY ............................................................................ 67
   5.6 COST RESULTS PRESENTATION ................................................................................ 77
   5.7 COST DOCUMENTATION ............................................................................................. 78
6 PERFORMING THE RISK ANALYSIS ................................................................................ 80
   6.1 RISK ASSESSMENT FRAMEWORK ............................................................................. 80
   6.2 RISK IDENTIFICATION ................................................................................................. 83
   6.3 USING PREVIOUS ANALYSES ..................................................................................... 85
7 ASSESSING SUSTAINABILITY IN THE ANALYSIS OF ALTERNATIVES STUDY ..... 86
   7.1 INTRODUCTION ............................................................................................................. 86
   7.2 WHAT IS SUSTAINABILITY? ........................................................................................ 86
   7.3 DEFINING THE MAINTENANCE CONCEPT AND PRODUCT SUPPORT STRATEGY ... 86
   7.4 SUSTAINABILITY PERFORMANCE, COST, AND RISK ............................................ 87
   7.5 RELIABILITY, AVAILABILITY, MAINTAINABILITY AND COST RATIONALE REPORT ... 94
   7.6 SUSTAINMENT KEY PERFORMANCE PARAMETER ............................................... 95
8 ALTERNATIVE COMPARISONS ......................................................................................... 97
   8.1 ALTERNATIVE COMPARISON METHODOLOGY ..................................................... 97
9 DOCUMENTING ANALYTICAL FINDINGS .................................................................... 104
APPENDIX A: ACRONYMS ................................................................................................... 107
APPENDIX B: REFERENCES AND INFORMATION SOURCES ....................................... 113
APPENDIX C: STUDY PLAN TEMPLATE ........................................................................... 114
APPENDIX D: FINAL REPORT TEMPLATE ........................................................................ 121
APPENDIX E: STUDY PLAN ASSESSMENT ....................................................................... 126
APPENDIX F: FINAL REPORT ASSESSMENT .................................................................... 127
APPENDIX G: LESSONS LEARNED .................................................................................... 129
APPENDIX H: OAS REVIEW OF DOCUMENTS FOR AFROC .......................................... 130
APPENDIX I: JOINT DOD-DOE NUCLEAR WEAPONS ACQUISITION ACTIVITIES .... 131
APPENDIX J: HUMAN SYSTEMS INTEGRATION (HSI) ................................................... 140
APPENDIX K: ACQUISITION INTELLIGENCE IN THE AOA PROCESS ........................ 148
APPENDIX L: MISSION TASKS, MEASURES DEVELOPMENT, AND DATA IN DETAIL ... 154
APPENDIX M: GAO CEAG, TABLE 2 ................................................................................... 167
APPENDIX N: DEVELOPING A POINT ESTIMATE ........................................................... 170
APPENDIX O: CAPE AOA STUDY GUIDANCE TEMPLATE ............................................ 182
List of Figures
Figure 1-1: Air Force AoA Activities Overview ........................................................................ 12
Figure 2-1: Capabilities Based Planning (CBP) Process ............................................................ 18
Figure 2-2: Decision Framework ................................................................................................ 19
Figure 3-1: Example Study Team Structure ............................................................................... 40
Figure 4-1: General Approach for Effectiveness Analysis ......................................................... 44
Figure 4-2: Hierarchy of Analysis .............................................................................................. 54
Figure 4-3: Notional Example of Tool and Measure Linkage .................................................... 56
Figure 4-4: Effectiveness Analysis Results Presentation ........................................................... 59
Figure 5-1: Comparing All Alternatives Across the Same Life Cycle ....................................... 65
Figure 5-2: Cost by Fiscal Year and Appropriation .................................................................... 77
Figure 5-3: General LCC Summary (By Alternative) ................................................................ 78
Figure 6-1: Standard Air Force Risk Scale Definitions .............................................................. 81
Figure 7-1: Sustainment Key Performance Parameter ............................................................... 95
Figure 8-1: Aircraft Survivability System Cost/Capability Tradeoff Example .......................... 99
Figure 8-2: Target Defeat Weapon Cost/Capability Tradeoff Example ................................... 101
Figure 8-3: Example of Critical MOE Results ......................................................................... 102
Figure 8-4: Example of Comparing Alternatives by Effectiveness, Risk, and Cost ................ 103
List of Tables
Table 4-1: Weighting Measures ................................................................................................... 45
Table 4-2: JCIDS JCAs ................................................................................................................ 48
Table 5-1: GAO’s Basic Characteristics of Credible Cost Estimates .......................................... 61
Table 5-2: Most Common Cost Estimating Methods ....................... Error! Bookmark not defined.
Table 5-3: Cost As an Independent Variable (CAIV) .................................................................. 73
Table 5-4: A Hardware Risk Scoring Matrix ............................................................................... 75
Table 5-5: A Software Risk Scoring Matrix ................................................................................. 76
Table 7-1: Sustainability Concepts/Attributes ............................................................................. 88
Table 7-2: Measure of Suitability Description Example ............................................................. 90
Table 7-3: Operations and Support Cost Element Categories ..................................................... 92
Preface
This is the 2013 version of the OAS AoA Handbook. It has undergone a major rewrite based
upon recent changes to OSD and Air Force policy and guidance. It incorporates a number of
recommendations we received from the AoA community about the 2010 version. This
handbook is intended to make you think; it is not a one-approach-fits-all recipe. It is also not
intended to be a standalone self-help manual. OAS has a companion handbook, the Pre-MDD
Handbook, which addresses key analytic issues that precede the AoA.
Since every AoA is different, the handbook emphasizes the whats and whys of requirements
analysis more than the hows. The details of how are highly problem-specific and are best worked
one-on-one with an OAS advisor. Philosophically, we have focused on broadly addressing the right
requirements questions to the level of detail that the senior decision makers need. We advocate
frequent and open communication both to understand what the senior decision makers need and
to convey what the analysis uncovers. We advocate sound analytic processes, not specific tools.
While detailed analytic tools are often necessary for key parts of an AoA, much can be learned
with good use of simpler approaches such as parametric analyses and expert elicitation. We
encourage you to read through Chapters 1 through 3 to get a general understanding of how the
AoA relates to the requirements decision processes. More specific information regarding the
AoA process can be found in Chapters 4 through 9.
This handbook is grounded in over twenty years of providing analytic advice on Air Force and
DoD AoAs. It has been shaped by best practices we have gathered from well over two hundred
AoAs, and by what we have observed to be the expectations of Air Force and OSD senior
decision makers. Those expectations keep evolving, and in response so will this handbook. If
you have questions regarding the currency of your AoA handbook version, please contact OAS
at OAS.DIR@kirtland.af.mil to ensure that you have the most recent version.
We encourage you to contact us and ask questions if parts of the handbook are not clear, or you
are not sure how they apply to your situation, or if you have suggestions on how to improve the
document. We always appreciate feedback.
Jeff Erikson
Director, Office of Aerospace Studies
1 Introduction
The Analysis of Alternatives (AoA) is an analytical comparison of the operational effectiveness,
suitability, risk, and life cycle cost (or total ownership cost, if applicable) of alternatives that
satisfy validated capability needs (usually stipulated in an approved Initial Capabilities
Document (ICD)). An AoA typically occurs during the Materiel Solution Analysis (MSA)
phase and applies to all Acquisition Category (ACAT) initiatives in accordance with the
Weapon Systems Acquisition Reform Act (WSARA) of 2009, Department of Defense
Instruction (DoDI) 5000.02, Air Force Instruction (AFI) 10-601 and AFI 63-101 direction. The
AoA must make a compelling statement about the capabilities and military worth that the
alternatives provide. In short, the AoA must provide decision-quality information that enables
senior decision makers to debate and assess a potential program's operational capability and
affordability, and maximize its investment.
AoAs are essential elements of three Department of Defense (DoD) processes that work in
concert to deliver the capabilities required by warfighters: the requirements process, the
acquisition process, and the Planning, Programming, Budgeting, and Execution (PPBE) process.
Details of how AoAs support and interact with these processes are discussed in Chapter 2 of this
handbook.
Other Services/DoD Components have their own processes for executing AoAs. When the Air
Force is directed to support an AoA led by another Service/DoD Component, the Air Force will
follow the lead organization’s procedures and guidance. The Air Force’s direct involvement in
the lead organization’s process will ensure that Air Force interests are considered and addressed
in the AoA. Likewise, for Air Force-led AoAs, it is imperative that the Air Force represent,
address, and analyze the supporting organizations’ issues and concerns. Finally, all AoAs must
be conducted in accordance with the Acquisition Decision Memorandum (ADM) issued by the
Milestone Decision Authority (MDA) at the Materiel Development Decision (MDD) and any
additional guidance provided by appropriate requirements and acquisition authorities.
The processes outlined in this handbook apply to all AoAs regardless of ACAT level or Joint
Staffing Designator (JSD), formerly known as Joint Potential Designator (JPD) (for additional
insight on the JSD definitions, see the JCIDS Manual). They ensure that the recommendations
from the AoA represent credible, defensible results. The only difference between AoAs of
different ACAT levels or JSDs is the level of effort, oversight, and approval required.
According to the Weapon Systems Acquisition Reform Act (WSARA) of 2009, the Director, Cost
Assessment and Program Evaluation (CAPE) has responsibility for all efforts designated JROC
Interest, regardless of ACAT level. Approval processes are therefore dictated more by the JSD than
by the ACAT level, since any effort may have JROC or OSD interest.
The AoA process consists of the following primary activities, which are described throughout
the remainder of this handbook: study guidance development, planning, execution, and
reporting.
1.1 Purpose of the AoA
There are two primary goals of the AoA. The first is to provide decision-quality analysis and
results to inform the Milestone Decision Authority (MDA) and other stakeholders at the next
milestone or decision point. The AoA should shape and scope courses of action (COA) for new
materiel to satisfy operational capability needs and the Request for Proposal (RFP) for the next
acquisition phase. The AoA provides the analytic basis for performance parameters
documented in the appropriate requirements documents (e.g., Joint DCR, AF Form 1067,
Capability Development Document (CDD), and/or Capability Production Document (CPD)).
Additionally, the AoA should use the capability gaps from the ICD(s) as starting points rather
than as minimum standards to disqualify alternatives. The AoA should provide feedback to the
requirements process of any recommended changes to validated capability requirements that
appear unachievable and/or undesirable from a cost, schedule, performance, and/or risk point of
view. The AoA results enable decision-makers to have the appropriate cost, schedule,
performance, and/or risk tradeoff discussions.
The second goal of the AoA is to provide an understanding of why alternatives do well or
poorly. The AoA results should characterize the circumstances in which a given alternative
appears superior and the conditions under which its outcome degrades. The AoA should
explain cost drivers or factors that impact ratings of assessment metrics such as Concept of
Operations (CONOPS), manpower, and performance parameters of alternatives.
AoAs are useful in determining the appropriate investment strategy for validated, prioritized,
operational needs. In addition to considering operational effectiveness, suitability, risk, and life
cycle cost, the AoA should also highlight the impact of each alternative with respect to domestic
and foreign policy, technical maturity, industrial base, environment, treaties, etc. AoAs also
provide a foundation for the development of documents required at the next acquisition
milestone, such as the Technology Development Strategy (TDS), Test and Evaluation Strategy
(TES), and Systems Engineering Plan (SEP).
1.2 AoA Entry Criteria
Air Force policy requires that the following information be presented to the Air Force
Requirements Review Group (AFRRG) for approval before a team may proceed in initiating an
AoA study plan:
• A completed CBA that has been approved by the sponsoring command
• An Air Force Requirements Oversight Council (AFROC) approved ICD
• An operational risk assessment of not filling the capability gap, approved by the sponsoring command and the AFROC
• A list of the most promising alternatives to address the gap
• Intelligence Support Considerations approved by AF/A2
• AoA study guidance signed by the Air Force and, when appropriate, by Cost Assessment and Program Evaluation (CAPE)
The list of promising alternatives may include representatives of the following solution
categories:
• Legacy systems
• Modified legacy systems
• Modified commercial/government/allied off-the-shelf systems
• Dual-use items
• Additional production of previously developed U.S./allied systems
• New development alternatives, which can include:
  o Cooperative development with allies
  o New Joint Service development
  o New DoD component-unique development
For programs that have OSD (AT&L) as the MDA, the AoA entry criteria items discussed
above, as well as an AFROC-validated and CAPE-approved study plan, are required at MDD.
According to the Defense Acquisition Board (DAB) template, OSD (AT&L) requires that the
following additional information be presented before an AoA will be approved at MDD:
• CONOPS summary - provides operational context for understanding the need and solution tradespace
  o Desired operational outcome
  o Effects produced to achieve outcome
  o How capability complements joint forces
  o Enabling capabilities
• Development Planning (DP)
  o Non-materiel solution approaches to mitigate the capability gap
    - Review of what was considered
      • Includes buying more of existing systems
      • Includes using existing systems differently than their current employment
      • Includes tolerating the gap
    - Evidence of how well (or not) they mitigate the gap
  o Materiel solution approaches which could address the capability gap
    - Review of what was considered
    - Evidence that these approaches provide the desired operational performance parameters
  o Which approaches are included in AoA guidance and/or study plan
    - Evidence of technical feasibility to meet the needed timeframe
    - Basic capabilities the solution has to fill the capability gap within the needed timeframe (mission effectiveness)
    - For each alternative, what are the implications or dependencies
      • Includes portfolio implications
      • Includes existing system impacts
      • Includes related ICDs
      • Includes additional capabilities needed to address the gap (System of Systems (SoS)/Family of Systems (FoS))
    - How are dependencies factored into the AoA study plan
  o When is the capability needed
    - Is the proposed materiel solution expected to be available
    - What is being done to address the gap until the materiel solution becomes available
• Affordability constraints (describe how much DoD is willing to spend to fill the gap)
In addition to the information required by policy, the following may be available from previous
analytical efforts and early systems engineering activities and can be used in the AoA:
• Scenarios and threats utilized in the CBA
• Analysis measures (mission tasks, measures of effectiveness, etc.) utilized as evaluation criteria in the CBA
• Definition of baseline capabilities from the CBA
• Core team members from the ICD High Performance Team (HPT) membership
• Initial Concept Characterization and Technical Descriptions (CCTDs) from Early Systems Engineering (ESE) and DP activities
This information is discussed further in Section 3.1.1 “Using Previous Analysis as the
Foundation.”
1.3 AoA Study Guidance
AoA study guidance is required prior to the MDD. It is developed to address the critical areas
that the senior decision makers want explored during the AoA. For Air Force-led AoAs, the
study guidance will build upon the initial input identified by the ICD High Performance Team
(HPT). A “Best Practice” is to maintain continuity of HPT membership from the ICD through
the AoA (and beyond). Enduring HPT membership greatly facilitates AoA planning and ensures
the stakeholder communities are properly represented.
AF/A5R requires the sponsoring organization to notify AF/A5R-P for approval to proceed prior
to drafting AoA study guidance. The notification must also identify which specific AFROC-
validated/prioritized topic and the associated ICD(s) this analysis will address. Additionally, an
HPT is required to develop AoA study guidance and is usually conducted in conjunction with
the ICD HPT. AF/A5R-P will review and approve the HPT membership prior to approving the
HPT. Air Force policy has identified core organizations that have enduring membership for all
HPTs. The sponsoring organization develops and coordinates the guidance and submits the
document to AF/A5R-P. AF/A5R will approve all Air Force-sponsored AoA study guidance
before it is submitted to CAPE. For those AoAs where the Director, CAPE elects not to provide
AoA study guidance, AF/A5R will serve as the approval authority.
The AoA study guidance provides direction and insight to the team to plan and execute the
study. CAPE-issued AoA study guidance should be drafted using the AoA Study Guidance
Template included in Appendix O of this handbook. There may be additional sections and
major considerations required by the Air Force (either supplemental to CAPE guidance or when
there is no CAPE guidance).
1.4 AoA Products
Most AoAs produce four major products:
1. A study plan which outlines the background, key questions, guidance, methodologies,
tools, data, schedule, and other elements of the AoA
2. An interim progress briefing to summarize early findings (this often results in
refinement to the direction of the AoA)
3. A final report to document the AoA results in detail
a. Includes identification of the tradespace explored
b. Includes cost/performance/schedule tradespace findings
4. A final briefing, including the Requirements Correlation Table (RCT), to summarize the
final results of the AoA
Figure 1-1 illustrates the review process and sequence of events for these products beginning
with the ICD HPT/draft AoA study guidance through the decisions supported by the AoA final
report and briefing.
Figure 1-1: Air Force AoA Activities Overview
The study plan is critical to the AoA process because it defines what will be accomplished
during the AoA and how it will be done. As shown in Figure 1-1, the AoA study plan must be
completed and approved prior to the MDD. The study plan must be reviewed by the AFRRG
and validated by the AFROC. For those efforts with OSD oversight, the study plan must also
be approved by CAPE prior to the MDD. The study plan should clearly identify how the effort
will address all AoA guidance received from the Air Force and OSD. [Note: this requires the
study team to develop and obtain AFROC validation of the study plan before the formal MDA
decision is made to conduct an AoA]. Appendix C contains a recommended template for the
study plan.
The interim progress briefing is designed to provide interim results and to permit redirection of
the AoA by senior reviewers. The most common reasons for interim progress briefings include:
• Changes to key assumptions and constraints
• Significant modification of the mission tasks and/or measures
• Knowledge gained in the analysis to date that would impact requirements decisions, such as:
  o Alternatives showing insufficient mitigation of the gaps
  o Alternatives demonstrating unaffordability
  o Recommendations for the focus of remaining analysis
  o Early identification of areas requiring sensitivity analysis [Note: this is critical to end results because it identifies the key areas of the tradespace that need to be explored sufficiently.]
The final report is the repository for AoA information describing what was done, how it was
accomplished, and the results/findings. The final report requires significant time and effort to
produce and staff. It should include detailed descriptions of the analysis and results of the AoA
effort. Since team members may disperse quickly after their parts of the study are completed, it
is important to continuously document the process and results throughout the study. If the final
report is not finalized shortly after the end of the study, there may be little to show for what was
accomplished during the AoA. A study not documented is a study not done.
The final briefing is generated from the final report and is the mechanism to illustrate the
answers to important questions and issues, and summarize the findings for the decision makers.
[Note: the briefing is usually the item referred to most frequently. Therefore, the team must
ensure that it is an appropriate representation of the final report.] It is important that both the
final report and final briefing address the following:
• Enablers such as logistics, intelligence, Human Systems Integration, and communications, and their impact on the cost, risk, and effectiveness of the study alternatives
• Key study questions, sufficiently to inform the decision makers
• Appropriate operational performance parameters and threshold/objective values for the RCT
• Alternatives’ capabilities to mitigate gaps
• Performance, cost, and risk drivers
• Tradespace explored:
  o Trade-offs between performance, cost, and risk
  o Sensitivity analysis results (e.g., dependency of results on assumptions, scenarios, CONOPS/CONEMP, technology maturity, etc.)
• Affordability constraints identified at MDD
Both the final report and final briefing should follow the entire review/oversight process to the
AFROC and OSD for approval as outlined in AFI 10-601.
As the products are created and briefed, the Study Director should consider the classification
and proprietary nature of the information. The Study Director should also ensure the team
understands the AoA products may contain pre-decisional information and should not be
released outside the AoA study team or the approval chain. During the creation and briefing of
each of these products, OAS stands ready to assist the study team.
1.5 Decision Maker Expectations of an AoA
Senior leaders and decision makers continually refine their expectations of an AoA. CAPE and
the AFROC have identified the following key expectations for an AoA:
• Unbiased inquiry into the costs and capabilities of options (identify the strengths and weaknesses of all options analyzed)
• Identification of key trades among cost, schedule, and performance using the capability requirements (e.g., ICD and CDD gaps) as reference points
• Identification of potential KPP/KSAs and an assessment of the consequence of not meeting them
• Explanation of how key assumptions drive results, focused on the rationale for the assumption
• Explanation of WHY alternatives do or do not meet requirements and close capability gaps
• Identification of the best value alternatives based on results of sensitivity analysis
• Increased emphasis on affordability assessments (conditions and assumptions under which a program may or may not be affordable)
• Increased emphasis on legacy upgrades and non-developmental solutions versus new starts
  o Explore how to better use existing capabilities
  o Explore lower cost alternatives that sufficiently mitigate capability gaps but may not provide full capability
• Increased emphasis on expanding cost analysis to focus beyond investment, for example, O&S across the force beyond the alternatives being analyzed
• Exploration of the impact of a range of legacy and future force mixes on the alternatives
• Increased emphasis on exploring an operationally realistic range of scenarios to determine impact on performance capabilities and affordability
CAPE and the AFROC have also identified what is NOT valued in an AoA:
• Building a case for a preferred solution by attempting to validate a pre-selected option instead of allowing results to inform the decision.
• Over-emphasis on meeting KPP thresholds by automatically disqualifying alternatives instead of exploring what is needed to sufficiently mitigate the gap and achieve an acceptable level of operational risk.
• Over-emphasis on performance capability without adequately addressing cost and schedule risks.
• Focus on the best system versus the best value for the investment.
• Focus on assumptions, data, and scenarios that unfairly skew the results toward a preferred alternative to the detriment of not understanding the actual need.
In summary, the AoA must identify what is the best value, not what is the best system.
1.6 Reference Information
The following terms and definitions are provided to facilitate the use of this handbook and the
AoA process:
• Cost driver - any element within the cost work breakdown structure (WBS) that has a noticeable or significant effect on the overall cost.
• Decision framework - the series of informal and formal investment decisions supported by operational requirements analysis.
• Enablers - any element, such as a system, process, or information, that is required for the success of an assigned task or mission in support of an operational capability.
• Problem space - the area to be explored to determine how well AoA options mitigate the specific identified gaps associated with a specific mission or core function.
• Solution space - the range of alternatives that adequately address the capability gaps defined by the problem space and whose characteristics/performance parameters will define the tradespace to be analyzed.
• Stakeholders - any organization, agency, or Service with a vested interest (a stake) in the outcome of the pre-acquisition analyses. A stakeholder may contribute directly or indirectly to these pre-acquisition activities. A stakeholder usually stands to gain or lose depending on the decisions made from these pre-acquisition activities.
• Tradespace - the range of options to address operational requirements that explore trade-offs among performance, risk, cost, and schedule.
• Tradespace analysis - analysis to highlight key trades among performance, risk, cost, and schedule, if they exist. These trades highlight where the tradespace exists to determine what level of required capability should be acquired, when, and at “what cost.” This analysis should also identify where there is no tradespace.
The following references provide more detailed information regarding AoA processes and
products:
• AFI 10-601
• AFI 63-101
• AT&L’s MDD Defense Acquisition Board (DAB) Template
• AT&L’s MDD Checklist
• CAPE’s AoA Guidance Template
• Defense Acquisition Guidebook (DAG) (https://acc.dau.mil/CommunityBrowser.aspx?id=526151)
• Defense Acquisition Portal (https://dap.dau.mil/Pages/Default.aspx)
• DoDI 5000.02
• JCIDS Manual
2 Decision Making and How the AoA Fits
The purpose of this chapter is to describe the operational capability requirements development
process. It defines the decisions supported by various analytical efforts, the role of analysis, and
the appropriate decision makers. Additionally, it shows the interfaces between the analysis and
other activities.
2.1 Major Processes
Capability Based Planning (CBP) supports operational capability requirements development.
Figure 2-1 shows the six key processes that make up CBP. The six processes include the
following, of which the first three have the greatest impact on the AoA:
• Strategic guidance is issued by the Office of the Secretary of Defense (OSD) to provide strategic direction for all subsequent decisions and to provide planning and programming guidance for the building of the Program Objective Memorandum (POM) and the development of acquisition programs. This information is the foundational guidance for all analysis.
• Support for Strategic Analysis (formerly known as the Analytic Agenda) is driven by guidance from OSD; the analysis, modeling, and simulation are executed by the JS/J-8 to identify potential force structure issues and to provide detail on the Defense Planning Scenarios (DPSs) and Integrated Security Constructs (ISCs) used for identifying capability needs.
• The JCIDS process identifies capability needs based on input from the concepts and the Strategic Analysis and feeds the results to the acquisition and budgeting processes.
• The Planning, Programming, Budgeting and Execution (PPBE) process is directed by the Comptroller and the OSD Director, Cost Assessment and Program Evaluation to ensure appropriate funding for the Department’s efforts.
• AT&L provides policy guidance and oversight on the acquisition process, makes acquisition decisions on Major Defense Acquisition Programs (MDAP) and Major Automated Information Systems (MAIS), and coordinates program decisions through use of capability roadmaps.
• The J7 manages the Joint Concepts development and approval process. These top-down identified concepts become the baseline for developing capability needs.
Figure 2-1: Capabilities Based Planning (CBP) Process
For the Air Force, AFI 10-601, Operational Capability Requirements Development, outlines the
processes for development of operational requirements documents (what JS/J8 calls JCIDS
documents). There are several categories of operational requirements development efforts that
may or may not require an AoA study for mitigating or closing gaps. The Air Force
Requirements Review Group (AFRRG) reviews and approves the initial Requirements Strategy
Review which defines the strategy for mitigating the identified gaps and the requirement for an
AoA. Two categories where an AoA is not required are:
• Efforts that support technology refreshment of existing systems but provide no new capability. An AoA is not required since there will be no new acquisition effort (e.g., AF Form 1067).
• Efforts to address current urgent operational needs, which are handled through the Urgent Operational Need (UON)/Joint Urgent Operational Need (JUON) process.
If the effort addresses future new capabilities (including sustainment efforts that contain
requirements for new capabilities) and requires articulated capability needs via JCIDS (i.e.,
ICD/CDD), then an AoA is required.
2.2 What Decisions Must Be Made/Supported?
It is important to understand how the AoA fits into the decision framework and the decisions
made prior to and following the AoA. As illustrated in Figure 2-2, there are four major decision
points within the decision framework.
Figure 2-2: Decision Framework
The focus of Decision Point 1 is to examine the “problem space” and decide if additional
analysis is required. The problems to study may be downward directed (e.g., Chief of Staff of
the Air Force) or identified by the Core Function Lead Integrator (CFLI). This decision often
identifies the need to conduct a Capabilities Based Assessment (CBA) or other analysis (e.g.,
DOTmLPF-P analysis). The analysis resulting from this decision point should answer:
• What parts of the problem are already understood?
• What is the gap(s)? What is the cause(s)? What will happen if we stay on the present course?
• What is the operational risk(s) caused by the gap(s)?
• How can the community better use what they already have? What gaps and risks remain?
• How much can be solved through modifications to existing systems? What gaps and risks remain?
• Is a new materiel option even feasible or is S&T investment needed?
The results from this analysis inform Decision Point 2. At this point, a decision is made to
develop (or not develop) an ICD. AF Policy requires the Air Force Requirements Review
Group (AFRRG) to review and approve an initial requirements strategy review (RSR) prior to
“Decision Point 2.” The purpose of this review is to conduct a strategic requirements overview
that will accomplish the following:
• Examine previous analysis (Is the CBA or other analyses sound?)
• Assess operational risk (Is there compelling operational risk to drive a need for a materiel solution?)
• Evaluate affordability (Do the costs merit an investment decision?)
• Validate capability gap(s) (Are the gaps still operationally relevant and a priority?)
• Examine materiel and non-materiel solutions (Will the solutions proposed for investigation likely close or sufficiently mitigate the identified gaps?)
Information presented at the RSR and previous analysis informs Decision Point 2. The decision
may include:
• Determining which gaps will not be addressed (risk is acceptable)
• Identifying DOTmLPF-P changes to better use current systems
• Deciding where to invest in Science and Technology (S&T) or modify current systems
• Approving development of an ICD when a new materiel solution is likely needed
• Identifying, scoping, and prioritizing the additional analysis needed before the Air Force will be ready to request a Materiel Development Decision (MDD)
Note that Decision Point 2 does not authorize the start of an AoA. After Decision Point 2, ICD
development, Development Planning (DP), and associated pre-MDD analysis are the primary
activities that occur in preparation for the MDD (Decision Point 3). The focus of these
activities is to identify the following information:
• Which gaps can be mitigated by non-developmental materiel solutions?
• What are the COTS/GOTS solution types?
• What are the broad cross-capability solution types?
  o Air, Space, Surface, Subsurface, Cyber?
  o Manned/unmanned? Sensor/shooter?
• How would these solutions be employed? What are the implied capabilities needed to make them work (logistics, command and control, communications, intelligence, etc.)?
• What are the risks (technical, operations, integration, political, etc.) with each of these solutions?
• Which solutions demonstrate a capability to address the appropriate gap(s)? Which solutions are affordable? Which meet the scheduled need-date?
The results from these activities (ICD development, DP, pre-MDD Analysis) become inputs for
determining if any of the viable (existing and new) materiel solutions are likely to be affordable
in the AF budget (funded through next milestone), mitigate enough of the gap(s) to be worth the
cost and risks, and have an acceptable level of impact on other systems, functions and enablers.
The following are several significant policy requirements that must be addressed prior to the
start of an AoA:
• The AFROC requires that each lead command present bi-annually a list of ongoing and forecasted AoAs with traceability to the appropriate Core Function Master Plans (CFMPs). The AFROC reviews these lists for validation, de-confliction, and prioritization. As a result of this requirement, the lead command must identify the specific AFROC validated/prioritized AoA topic to analyze and its associated ICD(s).
• AF policy requires that the entry criteria identified in Section 1.2 be presented for approval prior to proceeding to AoA planning and development of the AoA study plan.
• AT&L policy and the AT&L MDD DAB template require the following information be presented at the MDD for approval to execute the AoA:
  o Approved ICD (Joint Staff/Service Sponsor)
    - Definition of the CONOPS
    - Identification of capability gap and operational risk of not filling the gap
  o Development Planning (Service Sponsor)
    - Range of materiel solution approach(es) which could address the gap [Note: this is where the CCTD content is discussed. CCTDs must be approved by SAF/AQR prior to submission of the AoA Study Plan to the AFROC (see Section 3.1.1)]
    - Evidence of technical feasibility and external implications of the alternatives in the AoA
    - Timeliness to capability need
  o Approved AoA study guidance and plan (CAPE/Service Sponsor)
  o Acquisition Plans (Service Sponsor)
    - Materiel solution analysis phase funding and staffing
    - Program schedule, affordability constraints
    - Entry criteria for next milestone/ADM content
The following are the types of questions asked by the AFRRG/AFROC to determine if an AoA
is required:
• Is this an AF priority now?
• Can we afford it now?
• What is the capability gap(s)?
  o Is it still valid in this budget reality?
• What is the operational impact/risk?
• What is the risk of maintaining the baseline?
• What is the risk of divesting this capability/system?
AoAs contribute significantly to the acquisition process by providing the MDA critical
information to inform milestone decisions. The MDA may authorize entry into the process at
any point consistent with phase-specific entrance criteria and statutory requirements. AoAs are
typically accomplished in the Materiel Solution Analysis phase, but may be accomplished in any
subsequent acquisition phase to answer questions not addressed by a previous AoA or that
require updating. Results from the AoA provide information that allows the MDA to make an
informed decision on whether an acquisition program is appropriate and at which milestone the
program should begin. It also allows the Program Manager (PM) to structure a tailored,
responsive, and innovative program.
OAS can assist the analysis team in ensuring they have a defensible need to conduct an AoA.
2.3 Role of Analysis in the Materiel Solution Analysis Phase
According to the JCIDS Manual, analysis executed during the Materiel Solution Analysis Phase
supports the following:
• Assessment of potential materiel solutions to mitigate validated capability gaps identified in an ICD
• Identification of required DOTmLPF-P changes
• Identification of best course of action to inform the MDA on how to mitigate prioritized capability gaps
• Development of the Capability Development Document and Technology Development Strategy
According to the Defense Acquisition Guidebook, analysis executed during the Materiel
Solution Analysis Phase supports the following:
• Assessment of industrial and manufacturing capability for each evaluated alternative in the AoA. This information should be used when developing the TDS.
• Identification of new or high-risk manufacturing capability or capacity risks, if applicable. This should include any risks associated with production scale-up efforts and/or potential supply chain issues.
• Consideration of possible trade-offs among cost, schedule, and performance objectives for each materiel solution analyzed.
• Assessment of whether or not the operational capability requirement can be met consistent with the cost and schedule objectives identified by the JROC [Note: if the operational capability requirement cannot be met consistent with the JROC objectives, the operational impact must be identified.]
2.4 Who Uses the Analysis
AoA study plans and results are usually briefed at high levels in the Air Force and the OSD.
These products inform the decision making process to potentially change doctrine, tactics,
techniques and procedures and, if appropriate, support acquisition of new capabilities. AoA
results influence the investment of significant DoD resources. Therefore, AoAs receive multi-
layered direction and oversight from start to finish. This direction and oversight is necessary to
achieve agreement on the results and findings by all stakeholders.
For all AoA efforts, the following organizations will typically review the analysis:
• Stakeholders
• AFRRG/AFROC
• Senior Review Group (SRG)/Senior Advisory Group (SAG)
• Approval authority of AoA Study Guidance (AF/A5R and CAPE, where appropriate)
• MDA
• OAS
• A5R functionals
• SAF/AQ
• SAF/A8
Stakeholders are representatives from any organization, Agency or Service with a vested
interest in the outcome of the AoA. They include representatives from the requirements
community, appropriate operational communities, engineers, logisticians, intelligence analysts,
maintainers, etc. The primary role of the stakeholders is to represent the mission
area/community associated with the problem being studied in the AoA and those who will
design, support, and maintain these capabilities. They review the analysis to ensure that the
trade-offs being examined inform the degree to which the gaps can be mitigated and the
operational risk reduced.
The AFRRG/AFROC, Senior Advisory Group, Guidance Approval Authority representatives
and MDA representatives ensure the study team accomplishes what it planned to complete.
They review the analysis to determine if the key questions are being answered and to provide
redirection, if necessary. They are also the principal oversight organizations for the execution
of the AoA. Their primary objective is to determine the degree to which each solution can
mitigate the gap(s) and reduce the operational risk. The analysis used to inform the decision
makers includes cost, effectiveness, risk, sensitivity, and tradespace. This information will also
be used to inform AF and DoD investment decisions in order to resolve/mitigate the identified
gap(s). The following are key areas that are scrutinized throughout the study:
• What enablers (e.g., logistics, intelligence, communications, and Human Systems Integration) are addressed? How well are their interdependencies understood? What are the costs associated with the enablers?
• Are the key questions answered sufficiently for the decision makers?
• How well does the analysis determine each alternative’s capability to mitigate each gap?
• How well does the analysis determine potential key parameters and threshold and objective values to inform development of the RCT?
• How well does the analysis explore the tradespace? What sensitivity analysis is accomplished to refine the threshold and objective values?
• How sensitive is each solution to the analysis assumptions?
• How sensitive are the threshold and objective values to cost and schedule? What are the cost, schedule, and performance drivers?
• How do the costs compare with any affordability constraints identified at the MDD (based on rough cost estimate)?
• What questions are still unanswered? What information is still needed?
• What are the risks?
• Which parts of the analysis are sound and which are the best that could be done, but introduce more error?
The exact questions and issues change over time and are based upon
personalities and politics. OAS attends every AFROC and has a
representative co-located with AF/A5R, giving us the ability to quickly
adjust to new questions being asked and assist teams and sponsoring
organizations to be better prepared.
For AoAs where AT&L is the MDA, CAPE may also identify a Study Advisory Group (SAG)
in the AoA study guidance. AT&L may use the Overarching Integrated Product Team (OIPT)
and/or the Cost Performance IPT (CPIPT) to support their oversight of the AoA. See the
Defense Acquisition Guidebook for information concerning these panels.
AoAs that are Joint Requirements Oversight Council (JROC) Interest or Joint Capabilities
Board (JCB) Interest must be presented to the Functional Control Board (FCB), JCB, and
JROC. Their primary role is to validate the threshold and objective values in the RCT.
Additionally, they provide informed advice to the MDA on the best course of action to mitigate
the specified prioritized capability gaps. During the Technology Development phase, the
program office will explore the tradespace using the threshold and objective values.
Before the final report is presented to the JROC, CAPE also has the responsibility of
accomplishing an AoA sufficiency review at the completion of the AoA. This is accomplished
for all efforts that are JROC Interest, JCB Interest and/or have OSD oversight. CAPE’s
sufficiency review is primarily focused on the following:
• Were the key questions answered?
• Does the analysis support the proposed acquisition investment strategy?
• Can the Joint military requirement be met in a manner consistent with the cost and schedule objectives recommended by the JROC?
OAS assists the study team in ensuring the analysis results are presented in a clear and
comprehensive manner that addresses the questions and issues identified above. OAS conducts
a risk assessment of the study plan, interim results, and final report for the AFRRG and AFROC
principals. This assessment assists the Air Force in determining the investment course of action
based on the analysis results.
In addition to these roles, OAS provides assistance to the AoA Study Director in identifying
each of these organizations before the initiation of the AoA, working with the stakeholder
community, and preparing for each of the appropriate reviews and analytical decision points.
2.5 Relationship between AoA and Other Activities
This section outlines the specific relationships between the AoA and other requirements related
activities.
2.5.1 Activities that shape the AoA
The main activities that lay the foundation for and provide a starting point for the AoA are:
• Capability Based Planning (which includes the CBA)
• Doctrine, Organization, Training, materiel, Leadership/Education, Personnel, Facilities, and Policy (DOTmLPF-P) Analysis
• Early Systems Engineering and Development Planning (DP)
• Materiel Development Decision (MDD)
Depending on the AoA, there may be activities in non-DoD organizations (e.g., other Federal
Departments, the Intelligence Community, etc.) to consider. If so, contact OAS for assistance
in making the right contacts with the appropriate agencies for analytical support.
2.5.1.1 Capability Based Planning Contributions
The primary information that Capability Based Planning contributes to the AoA is:
• Definition of the existing/programmed capabilities (also known as the baseline)
• Threats and scenarios utilized in the conduct of the CBA
• Measures and metrics utilized in the conduct of the CBA
  o Tasks, conditions, and standards
  o Evaluation criteria utilized in the CBA that demonstrate a gap exists
• Capability gaps identified during the conduct of the CBA
  o Assessment of the operational risk if the gap remains
  o Identification of the cause of the gap
• Results of the CBA documented in a CBA final report and, where appropriate, an Initial Capabilities Document (ICD)
• Identification of the measures (or other metrics) that demonstrate that a capability gap exists, from sources other than the CBA
2.5.1.2 DOTmLPF-P Analysis Contributions
Prior DOTmLPF-P analyses, conducted as part of the CBA or as separate studies, focus on
whether non-materiel approaches sufficiently mitigate any of the capability gaps by
recommending changes to one or more DOTmLPF-P areas. The small “m” refers to existing
materiel solutions, not the initiation of a new program of record.
According to the JCIDS Manual, the most common non-materiel approaches are:
• Alternative doctrinal approaches and alternative CONOPS. Investigating alternative CONOPS is a JCIDS requirement. Where applicable, alternatives should also consider CONOPS involving allied/partner nation or interagency participation.
• Policy alternatives. When considering policy alternatives, the CBA must document which policies are contributing to capability gaps and under which circumstances. A policy change that allows new applications of existing capabilities or modifies force posture to increase deterrence is always of interest and should be considered.
• Organizational and personnel alternatives. This means examining ways in which certain functions can be strengthened to eliminate gaps and point out mismatches between force availability and force needs.
A Joint DCR is generated when the DOTmLPF-P analysis shows that the capability gap can be
sufficiently addressed by one of the three approaches below. In these situations, an ICD and
AoA are not required.
• New non-materiel solution
• Recommending changes to existing capabilities of the Joint force in one or more of the eight DOTmLPF-P areas
• Increased quantities of existing capability solutions
This DOTmLPF-P analysis should also identify any interdependencies between any potential
solutions and S&T and/or experimentation recommendations.
Refer to Chapter 4 of OAS’s Pre-MDD Analysis Handbook, dated June 2010, for additional
guidance on the execution of this analysis.
2.5.1.3 Early Systems Engineering and Development Planning Contributions
The focus of Early Systems Engineering is not to engineer a system but to better understand the
systems engineering aspects of the solution space and the technical feasibility. The goal is to
determine the most viable, affordable solutions to be explored in post-MDD activities and
processes, such as the AoA or S&T activities. This process is used to investigate types or
categories of potential solutions (e.g., satellite, armed airframe, ground-launched weapon, cyber
solutions) vice specific systems (e.g., B-2 with weapons modifications).
Early Systems Engineering should also identify architectures appropriate to the capability areas
being explored. These architectures enable a better understanding of the complexity of each of
the materiel solutions. Since architectures integrate visions, requirements, and capabilities, they
help provide unique insights for the MDA about the discriminating differences between the
potential solutions. Some solutions may have multiple, complex interdependencies which must
be identified to the decision makers. Architectures can also provide insight into the logistics
and sustainment requirements of each of the solutions as well as the ease of improving them
(such as technology insertions) over their individual life cycles.
The final consideration for use of architectures is to illustrate how the capabilities provided by
existing and “to-be” architectures address human system integration issues. This information
will be critical to the MDD presentation and AoA planning efforts, especially:
• Understanding the impacts of each solution on other parts of the architecture
• Developing ROM costs across the architecture
• Gaining insights into the affordability aspects of MDD
This will provide information to the decision makers about the external implications of the
alternatives. [Note: when considering how architectures will be utilized, the objective is not to
accomplish an architecture study at this time, but rather to use them to aid in the illustration of
complex materiel solutions (i.e., family-of-systems (FoS) and system-of-systems (SoS) solutions)
and interdependencies. Additionally, each solution should be examined to see how it impacts
the baseline architecture and, in some cases, may result in a future architecture study.]
In support of SAF/AQR’s Early Systems Engineering initiative, Air Force Materiel Command
(AFMC) and Air Force Space Command (AFSPC) established DP with the overall objective of
ensuring the launch of high-confidence programs capable of delivering warfighting systems
with required capabilities on time and within budget. While it applies across the entire life cycle
of a given capability, simply stated, DP is the process by which the AoA solution space is
defined and CCTDs are developed prior to MDD. DP support is obtained by submitting a DP
effort request to either AFMC or AFSPC, as appropriate.
As stated in the SAF/AQ Early Systems Engineering Guidebook, the intent of early systems
engineering is to enhance the quality and fidelity of proposed future military system concepts
that may eventually be considered in AoAs. The primary means of meeting this intent is via
development of a CCTD for each conceptual materiel approach being considered to fill the
related capability gaps/shortfalls. The CCTD provides a mechanism for documenting and
communicating the data and information associated with the prospective materiel solution
analyzed during the Pre-MDD analysis. This enables the analysis to produce increasing levels
of detail regarding the materiel concepts under consideration. Chapter 3 of the SAF/AQ guide
describes how this analysis effort and identification of the tradespace is supported. CCTDs are
required at the Air Force Review Board (AFRB) that is conducted prior to the MDD. They are
included as an appendix to the AoA study plan and final report.
A “Best Practice” is to capture other information about the solution space in addition to that
found in the CCTD or DCR (for non-materiel solutions). Some examples of other information
to help define the solution space include:
• Overarching assumptions (these are the assumptions that are specific to the problem and apply to all potential solutions)
• Overarching operational concept/employment concept (this is what is defined in the ICD and refined as the overarching concepts independent of the individual solutions)
• Overarching operational considerations (this is problem specific and applies to all potential solutions equally)
• Overall DOTmLPF-P implications (these are the implications that apply regardless of solution)
2.5.1.4 MDD Contributions
MDD is the formal decision to conduct an AoA; it is the point at which the MDA accepts the
approved AoA study guidance and AoA study plan. Current requirements development and acquisition
policies identify the following list of recommended approaches in preferred order:
1. Implementation of DOTmLPF-P changes which do not require development and
procurement of a new materiel capability solution.
2. Procurement or modification of commercially available products, services, and
technologies, from domestic or international sources, or the development of dual-use
technologies.
3. The additional production or modification of previously-developed U.S. and/or allied
military or interagency systems or equipment.
4. A cooperative development program with one or more allied nations.
5. A new, joint, DoD component or interagency development program.
6. A new DoD component-unique development program.
If sponsors select a less preferred approach from the list above (for example, the preferred
solution is #1 but the sponsoring organization selects #2), they must explain and provide
supporting evidence to justify the decision. It is critical to ensure that the range of alternatives
includes non-Air Force solutions and U.S. government solution options. Since the Joint Staff
now requires that all studies be identified in a central repository, the J8 study repository
(currently known as Knowledge Management/Decision Support (KM/DS)) is a great resource
to review previously assessed solutions and other studies in the same mission area. It is also
critical to ensure that pre-MDD activities include sufficient time to identify and explore these
non-Air Force solutions.
2.5.2 Activities shaped by the AoA
The AoA is a primary input to:
• Development of the Technology Development Strategy (TDS)
• Development of the Capability Development Document (CDD)
The AoA provides information for the development of the TDS in the following manner:
• Informs technology maturation activities such as the building and evaluation of competitive prototypes and refinements of the user capability requirements leading up to a preliminary design review
• Informs development of an RCT which will identify the operational performance parameters, thresholds, and objectives to be explored during the TD phase
• Aids in identifying which performance parameters require further refinement during the TD phase. The RCT must be included in the AoA final report.
• Informs assessments of the technical maturity of the proposed solution and when an MDA should consider abbreviating or eliminating the TD phase based on these assessments
• Informs development of the TDS and RFPs for the TD phase following the MS A decision where appropriate
With respect to updates to a JCIDS document such as a CDD, the sponsor must review the
AoA to determine if it continues to be relevant. If a CDD update invalidates the previous
AoA, the sponsor will update or initiate a new AoA to support the CDD update. A CDD
may not be submitted for staffing and validation until the AoA is completed.
The AoA provides information for the development of the CDD in the following manner:
• Provides sufficient information to define Key Performance Parameters (KPPs) and Key System Attributes (KSAs) for multiple capability increments. A single CDD may be validated to support the MS B decisions for all of the described increments. Therefore, the CDD must clearly articulate the unique set of KPPs and KSAs for each increment and identify any KPPs and KSAs that apply to all increments.
• Provides a summary of the alternatives, performance criteria, assumptions, recommendations, and conclusions.
• Identifies where relevant training criteria and alternatives were evaluated. This information provides the analytical foundation for establishing the Training KPP. Training aspects should be part of the cost, schedule, and performance trade-offs examined during the AoA.
3 Planning the Analytical Effort
This section describes AoA planning activities. Since AoAs are used to support the decisions
identified in Section 2.2, it is important to understand how to scope the AoA, utilize previous
efforts as a foundation for analysis, identify the stakeholders, and form the team. Additionally,
it is important to determine the level of effort required for the AoA. This section also describes
use of the enduring HPT membership discussed earlier in the handbook. Lastly, this section
addresses how to capture all planning details in an AoA study plan and the review/approval
processes required for the plan.
3.1 Scoping the Analysis
As identified in the JCIDS Manual, the study guidance and study plan should build on
appropriate prior analysis including that conducted as part of the JCIDS process. AoAs are
designed to provide decision quality information to inform investment decisions. As a result, it
is important to tailor the AoA appropriately to focus on the information required for those
decisions. Senior leaders are not looking to “reinvent the wheel” and repeat analysis that has
already been accomplished, but are instead focused on identifying what additional analysis is needed before
the next milestone decision. It is important to understand what information the MDA needs to
make an informed decision.
Since planning for the AoA begins before there is any formal guidance or direction, it is critical
that scoping discussions occur with decision makers early in the planning stage. This
discussion assists in shaping the MDD Acquisition Decision Memorandum (ADM) and AoA
study guidance. This also helps ensure that the AoA effort is tailored to address only those
issues that the decision makers need for their next decision. Therefore, it is essential that the
Study Director have frequent interaction with the CAPE and MDA staff.
A “Best Practice” is to contact OAS as soon as there is discussion about an upcoming AoA.
OAS can facilitate and provide introductions, where necessary, with the appropriate CAPE,
MDA and Air Staff functional representatives for the effort. This can help ensure that the AoA
is properly scoped and tailored to meet AF and DoD needs.
Many of the items that define the scope of the AoA will come from the CBA and/or Doctrine,
Organization, Training, materiel, Leadership, Personnel, Facilities, and Policy (DOTmLPF-P)
analysis that precedes the AoA. The following are typically used to establish the scope of the
AoA:
• Capability gaps and any identified prioritization
• Mission areas and tasks
• Operational concepts and environment
• Threats and scenarios
• Measures and standards
• Approaches and alternative concepts, including the baseline
• Maturity of the technologies
• Operational risk
• Timeframes
• Ground rules, constraints, and assumptions
• Science and Technology (S&T) activities and DOTmLPF-P Change Recommendations (DCRs)
The following are examples of key overarching questions decision makers ask:
• How well does each alternative close the capability gaps?
• How does each alternative compare to the baseline (current capability)?
• What are all the enabling capabilities (C3, ISR, HSI, logistics, etc.)?
• What are the risks (technical, operational, integration, political, etc.)?
• What is the life cycle cost estimate (LCCE) for each alternative?
• What are the significant performance parameters?
• What are the trade-offs between effectiveness, cost, risk, and schedule for each alternative?
3.1.1 Using Previous Analysis as the Foundation
It is important to understand what corporate knowledge exists within the relevant stakeholder
organizations. Other potential sources for information include:
• AF/A9
• Institute for Defense Analysis (IDA)
• RAND
• Defense Technical Information Center (DTIC)
• AF Knowledge Now/Intelink
• J8 Study Repository (currently known as KM/DS)
• CAPE
• Army Training and Doctrine Command’s (TRADOC) Army Experiment and Study Information System (https://cac.arcicportal.army.mil/ext/aesis/aesis/default.aspx)
• AT&L’s Acquisition Information Repository (https://www.dodtechipedia.mil/AIR)
OAS can also assist in identifying relevant studies for the effort.
This research is focused on identifying the following:
• Where there is extensive knowledge
• Where there is little knowledge
• Where there is no knowledge
The next step is to determine the applicability of the previous analysis to the effort. Previous
analyses are useful for identifying the baseline and other alternatives, and refining or developing
scenarios and measures. Just because the title seems to fit does not mean that it is applicable.
Contact the office associated with the previous analyses for assistance in determining
applicability. It is important to understand the objectives of the previous analyses and the
conditions under which they were executed to determine their relevance to the current study. By
reviewing the previous analyses, the study team will be better prepared to conduct the AoA.
3.1.1.1 Using Scenarios from Previous Analyses
AoA alternatives must be studied in realistic operational settings to provide reasonable comparisons
of their relative performances. The AoA does this by adopting or developing one or more
appropriate military scenarios. Scenarios define operational locations, the enemy order of battle,
and the corresponding enemy strategy and tactics ("the threat"). Scenarios are chosen with
consideration of AoA mission need, constraints and assumptions, and the physical environments
expected. The scenarios selected from previous analyses should be considered first when
determining which scenarios should be used in the AoA.
Threats and scenarios determine the nature of the physical environment in which the alternatives
operate. However, there is often a need to operate in a range of physical environments and this can
drive the selection of scenarios. The environment reflects both man-made and natural conditions.
Natural conditions include weather, climate, terrain, vegetation, geology, etc. Depending on the
alternative, these conditions can impact the target selection process, the aircraft and munitions
selection process, aircraft sortie rate, aircraft survivability, navigation and communications
capabilities, logistics, etc. Man-made conditions such as jamming and chemical/biological warfare,
have their own impacts. Chemical or biological warfare, for example, may impact the working
environment for operational crews and logistics support personnel. This can impact the results of the
war or how it is executed. Such real or potential threats may in turn affect aircraft basing decisions
and sortie rates.
The threat is most often developed and defined by the AoA study team working in conjunction with
the intelligence community. Engagement of the intelligence community should begin early in the
AoA process. MAJCOM intelligence organizations, DIA, and other intelligence organizations can
provide detailed threat and target information. If System Threat Assessment Reports (STARs or
STAs) are available, they could serve as the basis for the AoA threat description.
The Defense Planning Guidance/Illustrative Planning Scenario (DPG/IPS) provides broad context
for a limited number of scenarios and should be used as a starting point for scenario development.
The DPG contains a strategic framework and general description of potential military operations in
several areas of the world and for various contingencies. Variance from the DPG/IPS (called
scenario excursions) must be identified, explained, and approved by DIA after sponsoring command
A2 review.
The Multi-Service Force Deployment (MSFD) or other digital force projections are resources
providing details on enemy, friendly, and non-aligned forces in these areas. In Joint AoAs, Army,
Navy, and Marine forces must be considered, as well as the Air Force. The order of battle and roles
of allied and non-aligned forces must also be considered. Environmental factors that impact
operations (e.g., climate, atmospherics, vegetation and terrain) are important as well.
Typical threat elements addressed in an AoA are:
• The enemy order of battle
• Limitations on threat effectiveness, such as logistics, command and control, operational capabilities, strategy or tactics, and technology
• Countermeasures and changes in enemy strategy and tactics in response to the new system's capabilities (i.e., reactive threats)
• A range of threats to account for uncertainties in the estimates
• A target set representing a cross section of all possible targets
• Threat laydown showing potential threat systems and their location
In summary, scenarios must portray realistic operational environments. A range of scenarios may be
needed to investigate the full potential of the alternatives and their sensitivities to variations in
constraints and assumptions, particularly with regard to threats.
Refer to Section 3.1 of the OAS Pre-MDD Analysis Handbook for additional guidance on
scenario selection.
3.1.2 Identifying Ground Rules, Constraints, and Assumptions (GRC&A)
GRC&As help scope the AoA and must be carefully documented and coordinated with senior
decision makers. Some GRC&As will be general in nature and encompass the entire study,
while other GRC&As will be more specific and cover only a portion of the analysis. Many of
these assumptions will be described in the AoA study guidance provided to the team prior to
creation of the study plan.
In this context, the specific definitions are:
• Ground rules – broadly stated procedures that govern the general process, conduct, and scope of the study. An example is: the working group leads will be members of the risk review board.
• Constraints – imposed limitations that can be physical or programmatic. Human physical or cognitive limitations or a specific operating frequency range are examples of physical constraints. Specifying the latest acceptable initial operational capability (IOC) date illustrates a programmatic constraint.
• Assumptions – conditions that apply to the analysis. Examples include specific manpower levels, inclusion of a target type that will proliferate in the future thus forcing consideration of a specific threat system, or that certain infrastructure or architectures will be provided by another program.
GRC&A arise from many sources. IOC time constraints, for example, may be imposed by an
estimated fielding date of a new threat or by the need to replace an aging system. Net-centricity
or interoperability with the Global Information Grid (GIG), for example, may be dictated in the
ADM. Regardless of the source, each GRC&A must be explicitly identified, checked for
consistency, fully documented, and then accounted for in the scope of the AoA. Later they will
need to be accounted for in the analytical methodologies. The source of and rationale for the
GRC&A should also be noted, if known.
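For teams that want a lightweight way to keep this bookkeeping consistent, the following is a minimal sketch of how each GRC&A might be recorded with its type, source, rationale, and scope. It is not a mandated format; the field names and example entries are hypothetical.

```python
# Minimal sketch (illustrative only) of a GRC&A log entry with source and rationale,
# so each item can be checked for consistency and traced into the analytical methodology.
from dataclasses import dataclass

@dataclass
class GRCA:
    kind: str        # "ground rule", "constraint", or "assumption"
    statement: str   # the GRC&A itself
    source: str      # where it came from (e.g., ADM, study guidance, stakeholder)
    rationale: str   # why it applies, if known
    scope: str       # "entire study" or the portion of the analysis it covers

grca_log = [
    GRCA("constraint", "IOC no later than FY30", "MDD ADM", "replaces aging system", "entire study"),
    GRCA("assumption", "Current manpower levels are maintained", "study guidance", "baseline plan", "entire study"),
]

for item in grca_log:
    # Print a one-line summary suitable for a study plan appendix or review briefing.
    print(f"[{item.kind}] {item.statement} (source: {item.source}; scope: {item.scope})")
```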
The GRC&A are subject to scrutiny, particularly if not reviewed with the MDA, AF/A5R,
CAPE and critical stakeholders early in the process. It is critical that the team thoroughly
document each GRC&A. The study plan will contain an initial set of GRC&A, which may change
as the study progresses. Any changes to the GRC&A should be vetted with stakeholders and
decision makers and documented in the final report.
3.2 Defining the Alternative Concepts
The AT&L MDD DAB template requires that alternatives be identified, fully understood, and
presented at MDD. In addition, the AoA study guidance will identify a minimum set of
alternatives that must be included in the AoA. The Air Force uses CCTD documents to
describe the technical and operational aspects of each alternative. The CCTDs should be
created during Development Planning (DP) and Early System Engineering. The AoA study
team will refine the CCTDs to ensure they have sufficient information to support the
effectiveness, cost, and risk analyses. SAF/AQR is responsible for approving the CCTDs prior
to the MDD.
At a minimum, the AoA must include the following alternatives:
• The baseline, which represents the existing, currently programmed system funded and operated according to current plans
• Alternatives based on potential, yet unfunded, improvements to the baseline, generally referred to as the baseline+ or modified baseline. [Note: it is not always best to include all potential improvements to the baseline in one alternative; consider having multiple alternatives in this category.]
• Alternatives identified in the AoA study guidance (for example, COTS/GOTS, allied systems, etc.)
3.3 Identifying Stakeholder Community
Stakeholder is defined as any agency, Service, or organization with a vested interest (a stake) in
the outcome of the pre-acquisition analyses. A stakeholder may contribute directly or indirectly
to the pre-acquisition activities and is usually affected by decisions made as a result of these
activities. Asking the following questions can help identify members of the stakeholder
community:
• Who are the end-users (e.g., COCOMs, warfighters, etc.) of the capability?
• What enablers (intelligence, HSI, logistics, communications, etc.) have interdependencies within the solution space being analyzed in the AoA?
• How do the other Services, DoD agencies, and other government agencies fit into the mission area being explored in the AoA?
The stakeholder community can assist the AoA study team in identifying other solutions available
from other Services or agencies (within or outside DoD). Additionally, allied and partner
nations may offer possible solutions.
OAS can assist in identifying the stakeholder community.
3.4 Determining Level of Effort
The level of effort (LoE) for the analysis will depend on various factors such as the study
questions, complexity of the problem, time constraints, manpower and resource constraints, and
type of analysis methodology. By controlling the scope of the study, the LoE is more likely to
remain manageable over the course of the analysis. All study scoping decisions should be
coordinated with stakeholders to ensure that expectations are managed. This ensures that the
LoE and resources required are understood for each scoping decision. The results of these
discussions should be documented so that everyone understands what is within scope and what
is not.
Answers to the following questions will aid in determining the LoE:
• How much analysis has been accomplished to date? (See Section 3.1.1)
• What remaining information needs to be learned from the AoA? (See Section 3.1)
• Who in the stakeholder community is available to participate in the effort?
• Are the right experts available and can they participate?
• How much government expertise is available? Contractor support?
• What data and tools are needed to execute the AoA?
• How much time and funding is available to execute the AoA?
• What level of analytic rigor is required?
• Where and what amount of analytic risk is acceptable to the decision makers?
There is a relationship between the level of effort and study risks. When defining the level of
effort, it is important to identify areas of risk associated with the time and resources allotted to
conduct the study. The answers to the above questions will aid in identifying LoE and study risks.
There are other risks associated with uncertainties inherent in the study process such as the
effectiveness and cost analysis methodologies, funding and resources limitations, and
insufficient time to conduct the study. For example, a study with limited time and resources
may reach different conclusions compared to a similar study with less constrained time and
resources. In this example, the less constrained study could utilize more empirically based
research methods that enhance confidence in the study findings. It is important that the team
recognizes and documents these uncertainties, identifies the potential impacts, and provides this
information to the decision makers.
Once the LoE and study risks are identified, the team should discuss the implications with the
senior decision makers. This discussion should include courses of action which identify
possible tradeoffs to mitigate the risk (e.g., providing more resources and/or reducing scope to
meet an aggressive study schedule). This discussion will ensure the LoE and risks are
acceptable to senior decision makers. These agreed upon risk areas should be included in
presentations to the AFRRG and AFROC.
OAS, AF/A5R and SAF/AQ can aid in determining the appropriate LoE and study risks.
3.4.1 Joint Staffing Designator (JSD) and Acquisition Category (ACAT)
Determination
The JSD and ACAT level will also influence the LoE required. An effort that is expected to be
designated as JROC Interest will have the same level of scrutiny as an ACAT I program effort
or Major Defense Acquisition Program (MDAP). The following types of efforts will have OSD
oversight and CAPE-issued guidance: ACAT ID, ACAT IAM, JROC Interest, and efforts labeled as
“special interest.”
OSD and JROC determine the classification using the following criteria:
• DoDI 5000.02 specifies: “The USD(AT&L) shall designate programs as ACAT ID or IAM when the program has special interest based on one or more of the following factors: technological complexity; Congressional interest; a large commitment of resources; the program is critical to achievement of a capability or set of capabilities; or a program is a joint program. Exhibiting one or more of these characteristics, however, shall not automatically lead to an ACAT ID or IAM designation.”
• Capabilities in Battlespace Awareness (BA), Command & Control (C2), Logistics and Net-Centric are initially considered JROC Interest because the capabilities are enablers that cut across Service boundaries
• If not a Special Interest, ACAT I, or JROC Interest, capabilities in Force Application (FA) and Protection are initially considered Independent or Joint Information
• If not Special Interest, capabilities in Force Support, Building Partnerships, and Corporate Management & Support are initially considered Independent
• Revised definition of MDAP based on implementation of WSARA (DTM 09-027):
  o An MDAP is a DoD acquisition program that is not a highly sensitive classified program and:
    1. That is designated by the USD(AT&L) as a MDAP; or
    2. That is estimated to require an eventual total expenditure for RDT&E, INCLUDING ALL PLANNED INCREMENTS, of more than $365 million (based on FY 2000 constant dollars) or an eventual total expenditure for procurement, INCLUDING ALL PLANNED INCREMENTS, of more than $2.19 billion (based on FY 2000 constant dollars).
  o This revised definition may result in a change in Milestone Decision Authority (MDA).
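As a simple illustration of the dollar-threshold portion of this definition, the sketch below applies the $365 million RDT&E and $2.19 billion procurement tests quoted above. It assumes the totals have already been expressed in FY 2000 constant dollars (the conversion itself is not shown), and the example program figures are hypothetical.

```python
# Sketch of the MDAP dollar-threshold test from the revised WSARA definition above.
# Inputs must already be in FY 2000 constant dollars; example amounts are hypothetical.
MDAP_RDTE_THRESHOLD = 365e6    # total RDT&E, all planned increments, FY 2000 $
MDAP_PROC_THRESHOLD = 2.19e9   # total procurement, all planned increments, FY 2000 $

def is_mdap_by_dollars(rdte_fy00_dollars, procurement_fy00_dollars):
    """True if either estimated total exceeds its MDAP threshold.
    (USD(AT&L) designation can also make a program an MDAP regardless of cost.)"""
    return (rdte_fy00_dollars > MDAP_RDTE_THRESHOLD
            or procurement_fy00_dollars > MDAP_PROC_THRESHOLD)

print(is_mdap_by_dollars(400e6, 1.5e9))   # True  (RDT&E exceeds $365M)
print(is_mdap_by_dollars(200e6, 1.0e9))   # False (neither threshold exceeded)
```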
3.4.2 Contract Support
Technical support contractors often conduct substantial parts of the analysis. It is important to
understand the study objectives before making contract support arrangements. This will
increase the likelihood that the chosen contractor is well suited to perform the required tasks.
Know the needs first, and then contract. It is important to remember that the responsibility for
the AoA rests with the lead command and this responsibility should not be delegated to the
contractor. Questions to answer to determine contractor support requirements:
• Is there adequate expertise available within the government?
• Are sources of funding available?
• For which study areas do I need contract support?
• Which contractors are qualified?
• What are the available contract vehicles?
• How will the contract be administered?
Experienced and qualified contractors are often obtained through the Air Force product centers
and program offices. For most product centers, access to technical support contractors is
available through scientific, engineering, technical, and analytical (SETA) contracts. Also,
Federally Funded Research and Development Centers (FFRDC) are available to some product
centers. Use of an existing contract for the best-qualified contractor can reduce the AoA
initiation and development time considerably.
3.5 Establishing the Study Team
The Study Director leads the study team in conducting the AoA. The Study Director is
normally appointed by the sponsor (most often the operational user) designated as the lead for
the AoA.
Management and integration of the information/products from each working group is
undertaken by a core team of government representatives usually comprised of the Study
Director, Deputy Study Director, lead and deputy lead from each working group, and the OAS
representative. The enduring HPT membership should serve as the foundation of this core team
membership to maintain continuity of the effort. Ideally, this team also includes members from
previous applicable studies. Finally, the study team should include appropriate members of the
stakeholder community (sponsoring command/ organization, other Air Force commands and
agencies, Army, Navy and Marines, DoD, Joint Staff, and civilian government agencies).
OAS and AF/A5R facilitate the AoA study guidance and study plan HPTs. OAS provides an
advisor to the Study Director. The advisor assists in training, planning, executing, and
facilitating the accomplishment of the AoA. The level of assistance from OAS is determined by
the scope of the AoA and where the AoA fits in the overall Air Force prioritization. OAS is
focused on ensuring quality, consistency, and value in AoAs.
A “Best Practice” is to organize in a way that meets the study needs. The structure of the AoA
study team depends upon the scope of the AoA and the level of effort required. Not all study
teams are identical, but are instead tailored in size and skill sets to meet the objectives of the
AoA. Team membership may include operators, logisticians, intelligence analysts, cost
estimators, and other specialists. Depending on the scope of the AoA, the team is usually
organized along functional lines to conduct the effectiveness, risk, and cost analyses.
If the AoA is only focused on conducting sensitivity analysis of the assumptions from previous
analysis and updating the cost estimates, the AoA study team will consist primarily of those
members needed to conduct those specific tasks. In other words, each study team structure is
dependent upon the questions the effort must answer and the specific scope of the AoA. Small
AoA teams with dedicated members are often better able to react to the timeline demands of the
AoA and may be more productive.
Early and proper organization is the key to a successful study. Ideally, the working group leads
and their deputies should be subject matter experts able to lead people, manage multiple
situations, and facilitate their groups. It can be difficult to find individuals with all of these
abilities. If unable to find a working group leader (or deputy) who can facilitate a group, OAS
can assist.
After the core team members have been identified, OAS can provide training to the team. The
training will be tailored to the specific analytic effort and is best accomplished prior to the AoA
study plan HPT.
Figure 3-1 illustrates an example study team structure and various oversight and support
organizations. In situations when stakeholder organizations have conflicting interests, consider
selecting working group co-leads from those organizations to facilitate their buy-in. Ad hoc
working groups are formed to accomplish specific tasks in support of the other working groups.
For example, the Alternative Comparison working group may be an ad hoc group because it is
formed from members of other working groups to synthesize all of the analysis results to
compare the alternatives (this is described further in Chapter 8 of this handbook).
Figure 3-1: Example Study Team Structure
Once the team is established, the working groups meet separately to address their fundamental
issues. They also meet with other working groups and/or the entire study team to exchange
information. Frequent and open exchanges of ideas and data are essential to a successful AoA.
When the team is geographically dispersed, maintaining frequent and open communication is
usually more challenging. Documenting questions, answers, and decisions made in the various
work groups facilitates clear and effective communication. This can be done through taking
and distributing minutes of study group meetings. Frequent interaction via telephone and e-mail
at all levels should also take place. If possible, keep the study team intact throughout the AoA.
A changing membership adversely impacts continuity and may create delays as new personnel
are integrated into the effort.
3.6 Study Plan Preparation and Review
An approved study plan is required prior to convening the Materiel Development Decision
(MDD). The study plan should illustrate with sufficient detail how the team will execute the
AoA to ensure the critical areas identified in the AoA study guidance are addressed. Appendix
C of this handbook contains the template for the study plan.
According to Air Force policy, prior to initiating a study plan, the sponsor will present the
information associated with the entry criteria identified in Section 1.2 to the AFRRG for
approval to proceed.
An HPT is required for development of the study plan. AF/A5R-P must review and approve the
membership prior to convening the HPT. The membership of the study guidance HPT should
be the foundation for this HPT and the core membership of the study team. The study plan HPT
membership can be altered at the discretion of the AF/A5R-P.
The AF/A5R process for review and staffing of the study plan is:
After the study plan has been prepared and coordinated with AoA stakeholders,
sponsors will provide the AoA study plan and AFROC briefing to the AF/A5R
Functional Division Chief and AFMC/OAS for assessment, simultaneously. The AF/A5R
Functional Division Chief will forward the AoA study plan and AFROC briefing to
AF/A5R-P with an AFMC/OAS assessment, simultaneously. AF/A5R-P will review
(allow for five working days) the study plan and determine if the AoA study plan is ready
to be submitted to the AFRRG for approval. Once the AFRRG concurs with the AoA
study plan, the study plan and AFROC briefing will be submitted to the AFROC for
validation.
A widespread review of the plan is useful in improving the plan and ensuring stakeholder
support for its execution. The review should start within the originating command and key team
member organizations. The external review should be solicited from a variety of agencies,
including OAS, appropriate AF/A5R functional divisions, AFMC/A3, other Services, and
CAPE (for ACAT I and JROC Interest programs).
According to AFI 10-601, the study plan should include the following to ensure approval:
• Identification of the specific gaps that are being addressed in the AoA
• Definition of the baseline (existing and planned) capability
• Identification of the stakeholders and their roles/responsibilities in the AoA
• Identification of the key questions identified in the study guidance
• Identification of the alternatives identified by the study guidance. This includes discussion about the implications and/or dependencies identified about the alternative and how those dependencies will be factored into the analysis.
• Description of the methodologies to be utilized, which must include the following:
  o Measures of effectiveness, performance, and suitability
  o Decomposition of the gaps and key questions
  o Traceability to measures used to establish minimum values in the ICD (from the CBA)
  o Cost work breakdown structure
  o Methodology to determine the alternatives’ ability to mitigate gaps
  o Methodology to explore the tradespace and a description of what sensitivity analysis will be done to determine key performance parameters and threshold and objective values for the RCT
  o Methodology to conduct the cost/capability tradeoff analysis
  o Methodology for factoring in the dependencies identified for each alternative
  o Scenarios to represent the operational environment
An OAS assessment of the study plan and its associated briefing is required prior to submission
to AF/A5R-P. Appendix E contains the study plan assessment criteria used by OAS in their
independent assessment of a study plan and associated briefing. This assessment is presented in
bullet fashion, highlighting the risk areas associated with the credibility and defensibility of the analysis
results. OAS will provide an initial assessment and get-well plan after the initial review to
determine readiness for submission to AF/A5R.
4 Performing the Effectiveness Analysis
Effectiveness analysis is normally the most complex element of the AoA and consumes a
significant amount of AoA resources. The effectiveness analysis working group (EAWG) is
responsible for accomplishing the effectiveness analysis tasks. The goal of the effectiveness
analysis is to determine the military worth of the alternatives in performing Mission Tasks
(MTs). The MTs are typically derived from the capabilities identified in the Initial Capabilities
Document (ICD). A Capability Development Document (CDD), Capability Production
Document (CPD), or Concept Characterization Technical Description (CCTD) may exist for the
current baseline, and can be useful in determining MTs and measures for the EA effort.
However, while there may be existing requirements documents, the team should use whatever
documents provide the best, most current information. Avoid using information from sources
that are superseded by or do not accurately reflect the current capabilities or required mission
tasks. The ability to satisfy the MTs is determined from estimates of alternatives' performance
with respect to measures of effectiveness (MOEs), measures of performance (MOPs), and
measures of suitability (MOSs). Additionally, AoAs and other supporting analyses can provide
the analytical foundation for determining the appropriate thresholds and objectives for system
attributes and aid in determining which of these attributes should be KPPs or KSAs.
4.1 Effectiveness Methodology
The effectiveness methodology is the sum of the processes used to conduct the EA even if some
pieces are done by other parts of the larger AoA team. The development of the effectiveness
methodology is almost always iterative: a methodology will be suggested, evaluated against the
resources and data available to support it, and then modified to correspond to what is both
possible and adequate. As the AoA progresses, this development sequence may be repeated as
more is understood about the nature of the alternatives, the models or analysis tools, and what is
necessary to support the AoA decision. Analysis continues throughout the conduct of the AoA
and based on what the team learns as it progresses, methodologies may be refined. Figure 4-1,
General Approach for Effectiveness Analysis, shows the flow of analysis tasks discussed in this chapter.
Figure 4-1: General Approach for Effectiveness Analysis
OAS does not recommend the use of the Analytical Hierarchy Process (AHP) or similar
methods which implement weighting schemes as part of AoA effectiveness methodology.
Typically, employing AHP/weighting adds complexities to the study results which are difficult
to understand and difficult to explain to decision makers. OAS suggests keeping the
effectiveness methodology as simple as possible in order to evaluate and present accurate,
informative results.
Measure weighting schemes can oversimplify the results and potentially mask important
information. Table 4-1 below illustrates how measure weighting is dependent on the group
determining the weighting and may not be representative of what senior leaders, stakeholders,
or decision makers would consider important.
Table 4-1: Weighting Measures
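To illustrate the point with notional numbers (the scores and weights below are hypothetical and not drawn from Table 4-1), the short sketch below shows how two equally defensible weight sets can reverse which alternative appears "best," even though the underlying measure scores never change.

```python
# Illustrative only: how the choice of weights alone can flip which alternative "wins,"
# masking the underlying measure scores. All values are notional.
measures = ["survivability", "range", "sortie_rate"]

# Normalized (0-1) effectiveness scores for two notional alternatives.
scores = {
    "Alt A": {"survivability": 0.9, "range": 0.4, "sortie_rate": 0.5},
    "Alt B": {"survivability": 0.5, "range": 0.8, "sortie_rate": 0.7},
}

# Two weight sets from two different (hypothetical) stakeholder groups.
weights_group_1 = {"survivability": 0.6, "range": 0.2, "sortie_rate": 0.2}
weights_group_2 = {"survivability": 0.2, "range": 0.4, "sortie_rate": 0.4}

def weighted_score(alt_scores, weights):
    """Sum of weight * score across all measures."""
    return sum(weights[m] * alt_scores[m] for m in measures)

for name, w in [("Group 1 weights", weights_group_1), ("Group 2 weights", weights_group_2)]:
    ranked = sorted(scores, key=lambda alt: weighted_score(scores[alt], w), reverse=True)
    totals = {alt: round(weighted_score(scores[alt], w), 2) for alt in scores}
    print(name, totals, "->", ranked[0])
# Group 1 weights: Alt A (0.72) edges Alt B (0.60); Group 2 weights: Alt B (0.70) edges Alt A (0.54).
```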
4.2 Effectiveness Analysis Methodology
Discussion of the EA methodology must begin very early in the process, even before the AoA
study officially begins. In fact, since the study team is required to present its study plan along
with the guidance at MDD, it is very important to provide a well-developed and comprehensive
plan at MDD. The plan must at a minimum identify the actual alternatives to be studied, the
relevant mission tasks, gaps, and measures, and include specific information regarding the
analysis tools and methodologies to be used to conduct the analysis. There should be clear logic
linking the tasks, gaps, measures, and methodologies.
The EA methodology is designed to compare the effectiveness of the alternatives based on
military and operational worth. It encompasses and is influenced by the MTs, measures
(MOEs, MOPs, MOSs), alternatives, threats, scenarios, operations concept, prior analysis, study
schedule, and available analysis resources. The methodology must be systematic and logical. It
must be executable and repeatable, and it must not be biased for or against any alternative.
It is important that the team determine the appropriate level of detail required in the analysis.
Because the methodology depends on many factors, it can approach its final form only after the
above factors are defined. The identification and selection of suitable analysis tools and input
data sources must await development of the MTs, measures, selection of the alternatives, and
determination of analysis level of detail. It is important to note though that, before measures
can be developed, there must be agreement among the decision makers and stakeholders
regarding which capability gaps to address first, followed by agreement on which are the
appropriate mission tasks associated with the capability gaps. Finally, once the appropriate
level of detail is determined and suitable analysis tools are identified, the team must be sure to
secure the buy-in of the senior decision makers.
4.2.1 Terms and Definitions
While there are certainly several other definitions in use by many different organizations, the
following terms and definitions are those used by OAS to describe parameters associated with
capabilities, mission tasks, and measures.
• Capability – the ability to achieve a desired effect under specified standards and conditions through combinations of means and ways across the DOTMLPF-P to perform a set of tasks to execute a specified course of action. (JCIDS Manual)
• Mission Task – tasks a system will be expected to perform; the effectiveness of system alternatives is measured in terms of the degree to which the tasks would be attained.
• Attribute – a quality or feature of something. Attributes of mission tasks (e.g., survivability, persistence, availability, accuracy, etc.) form the basis for identifying and drafting measures.
• Measure – a device designed to convey information about an entity being addressed. It is the dimensions, capacity, or amount of an attribute an entity possesses. A measure is used to provide the basis for comparison or for describing varying levels of an attribute.
• Metric – a unit of measure that coincides with a specific method, procedure, or analysis (e.g., function or algorithm). Examples include: mean, median, mode, percentage, and percentile.
• Criteria – the acceptable levels or standards of performance for a metric. Criteria are often expressed as a minimum acceptable level of performance (threshold) and a desired acceptable level of performance (objective).
• Data – an individual measurement used to compute the metric for a measure.
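One way to see how these terms fit together is a small data-structure sketch. This is not an OAS-prescribed format; the field names and the example values (a hypothetical message-delivery measure) are illustrative only.

```python
# Minimal sketch of the mission task / attribute / measure / metric / criteria / data
# relationship defined above. All names and values are hypothetical.
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Measure:
    mission_task: str          # the MT this measure supports
    attribute: str             # quality of the MT being measured (e.g., timeliness)
    description: str           # what is being measured (what data to collect)
    metric: str                # method applied to the data (here, the mean)
    threshold: float           # minimum acceptable level of performance (criteria)
    objective: float           # desired level of performance (criteria)
    data: list = field(default_factory=list)  # individual measurements

    def value(self):
        """Apply the metric (the mean in this sketch) to the collected data."""
        return mean(self.data)

    def meets_threshold(self):
        # "Smaller is better" for a delivery-time measure.
        return self.value() <= self.threshold

msg_time = Measure(
    mission_task="Deliver tasking message",
    attribute="timeliness",
    description="time to deliver message (seconds)",
    metric="mean",
    threshold=60.0,
    objective=30.0,
    data=[42.0, 55.0, 61.0, 38.0],
)
print(msg_time.value(), msg_time.meets_threshold())  # 49.0 True
```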
4.2.2 Mission Tasks (MTs)
Because the goal of the AoA is to identify the most promising solution(s), MTs must not be
stated in solution-specific language. Each MT will have at least one measure supporting it. In
general, measures should not call for optimizing aspects of a task or effect, because this often
has unintended impacts on cost or other aspects of the alternatives’ performance. For example,
one solution to minimizing aircraft attrition could be not flying missions at all; however, this
solution would hardly be conducive to placing targets at risk. Similarly, maximizing targets
destroyed may result in unacceptable attrition. There may be other cases, however, where
optimization is desirable, such as ensuring maximum personnel survivability. Regardless,
measures must be grounded in requirements documents or through decision maker questions
contained in the guidance.
While the alternatives’ performance will be compared to each other, the team should resist rank
ordering them or making recommendations based on a rank order. Remember, the alternatives
should be evaluated on their capability to accomplish mission tasks and meet established
requirements. It is possible that all the alternatives might not meet some or all requirements.
Conversely, while all the alternatives might meet all requirements, the highest performer might
also be the most costly and/or most risky. Recommending the lowest performer still meeting all
the requirements might be the preferred solution given its associated cost and risk.
4.2.3 Attributes
Once the MTs are well defined and understood, the next step is to identify the necessary
attributes of successful mission tasks. An attribute is essentially a property or characteristic of
an entity – some desired characteristics of the entity. An entity may have many attributes, not
all of which are of interest. Attributes should be problem specific and should be used insofar
as they enlighten decision makers, answer key questions, and respond to guidance. The key
should be to keep it logical, identify the desired attributes first, and then craft the measures to
address them.
The January 2012 JCIDS manual briefly describes attributes and provides several examples
although this list is neither exhaustive nor directive. According to the manual:
“The Capabilities Based Assessment (CBA) produces a set of tasks and measures used
to assess the programmed capabilities of the force. These measures should be based on
the list of capability attributes outlined in Appendix A to Enclosure A. The Enclosure
provides examples of appropriate attributes which should be used where applicable,
although other attributes may be identified and used when those in Appendix A to this
Enclosure are not appropriate.”
Additionally, an excerpt from the Air Force Operational Test and Evaluation Center (AFOTEC)
Measures Primer, May 2007 identifies several characteristics of attributes that are useful to
study teams when developing measures. AFOTEC defines an attribute as:
“A property or characteristic of an entity that can be distinguished quantitatively or
qualitatively by human or automated means. An entity may have many attributes, only
some of which may be of interest for the information needs. Some of these attributes or
characteristics might likely be found in ICDs or other requirements documents. A given
attribute may be incorporated in multiple measurement constructs supporting different
information needs. Measurement is the process of assigning numbers to the attributes of
an entity in such a way that relationships of the numbers reflect relationships of the
attribute being measured.”
A measure specifically addresses one or more attributes. Further, an attribute may have more
than one measure associated with it.
Since the AoA should trace its MTs back to the Joint Capability Areas (JCAs), it is useful to
link the study measures back using those attributes and their associated JCAs found in
Appendix A, Enclosure A of the JCIDS Manual. [Note: these attributes are not
comprehensive and not all JCAs are yet represented in the manual; however, these examples
illustrate a variety of attributes that a team may identify.] In general, teams should not feel they
are tied to any or all of these. Other attributes, not included in these examples, may be
appropriate for certain mission tasks. Nevertheless, the team should, at a minimum, link its
mission tasks back to the applicable JCAs as this linkage is required in the to-be-developed
CDD, which includes the AoA-developed Requirements Correlation Table (RCT) with identified
may not apply to non-DoD mission tasks. As identified in the January 2012 JCIDS manual, the
attributes for four of the JCAs are provided below.
Table 4-2: JCIDS JCAs
4.2.4 Measures
Measures are a central element when conducting an AoA. Without them, there is no way to
determine the effectiveness and suitability of an alternative or its ability to close gaps either
partially or completely. Properly formed and explicitly stated, measures will:
• Specify what to measure (what data to collect, e.g., time to deliver message)
• Determine the type of data to collect (e.g., transmit start and stop times)
• Identify the source of the data (e.g., human observation)
• Establish personnel and equipment required to perform data collection
• Identify how the data can be analyzed and interpreted
• Provide the basis for the assessment and conclusions drawn from the assessment
There is no universal definition for a measure within the analytic and test communities. Each
organization and developer, as well as academia and industry, defines the concept of a measure
slightly differently. While there is no universal definition, there are certain tenets that apply to all
measures. Measures are not requirements although they are developed from requirements.
Measures are typically not conditions such as altitude, temperature, or terrain but they will be
measured under various conditions. In some situations, however, certain conditions may be
measures of interest. For instance, altitude may be something a team might want to measure if it
is critical to platform survivability. Finally, measures are not criteria although they will be
evaluated against established criteria. Remember, measures should be framed by the tasks,
conditions, and standards (criteria).
Results from measures not only make it possible to compare alternatives, they also can be used
to investigate performance sensitivities to variations of key assumptions and measure values.
Such analyses help define input to follow-on requirements and acquisition documents such as
the CDD, CPD, and TDS.
There are a variety of terms used to describe the value of a capability to the operator/user and
measures should be stated in terms of their capability to provide this value. Frequently used
terms include military worth, military utility, operational utility and operational significance.
Success can be measured relative to the immediate goals of the system (attack, communicate,
detect, etc.) or relative to high-level goals related to "winning the war." However, in many
cases, this determination is much more difficult and attributing “winning the war” to the
performance of one particular system may not be possible. Nevertheless, some examples of
measures demonstrating military worth are:
• Reduction in fratricide
• Loss/exchange ratio
• Targets held at risk
• Targets defeated
• Level of collateral damage
• Attrition rate
• Quantity (and types) of resources consumed
• Number of operating locations needed
Measures may come from a variety of sources. For some Air Force AoAs, the operational
utility may be expressed in terms of the Air Force’s end customer which may be other
departments and organizations such as U.S. Army, DHS, DOS, etc. The team should consider
potential future studies and testing and attempt to craft measures that link to and are relevant for
these events.
4.2.5 Types of Measures
There are several different types of measures:
• Measures of Effectiveness (MOEs)
• Measures of Suitability (MOSs)
• Measures of Performance (MOPs)
4.2.5.1 Measures of Effectiveness (MOEs)
Measures associated with attributes of operational effectiveness are referred to as MOEs.
• Operational Effectiveness: The overall degree of mission accomplishment of a system when used by representative personnel in the environment planned or expected for operational employment of the system considering organization, doctrine, tactics, survivability, vulnerability, and threat.
• Measure of Effectiveness: A measure of operational success that must be closely related to the objective of the mission or operation being evaluated.
MOEs are a qualitative or quantitative measure of an alternative’s performance or characteristic
that indicates the degree to which it performs the task or meets a requirement under specified
conditions. They are a measure of operational success that must be closely related to the
objective of the mission or operation being evaluated. There will be at least one MOE to support
each MT. Each alternative is evaluated against each MOE criteria (requirement), and the results
are used to differentiate performance and capability among the alternatives. MOEs should be
focused on operational outcomes and closing the operational gaps rather than specific technical
performance parameters.
MOEs are usually developed by the study team. If possible, MOEs should be chosen to provide
suitable assessment criteria for use during later developmental and operational testing. The
team should look to the CBA, earlier analytic activities, requirements documents, and the
testing community to help identify these criteria. This linking of the AoA to testing is valuable
to the test community and the decision maker. Involvement of the testing community is
extremely helpful when developing “testable” measures.
MOEs should be reviewed by principal stakeholders during development of the AoA study plan.
Suitable selection of MOEs helps later independent review and evaluation of the AoA study
plan and results.
MOEs should be as independent of the alternatives as possible. The measures selected should
not bias the alternatives in some way and all alternatives should be evaluated using all MOEs.
Additionally, the team should be cautious of measures that are strongly correlated with one
another to avoid overemphasizing particular aspects of the alternatives. In these cases, the team
must be cognizant of the relationship among the measures to clearly understand the capabilities
and limitations of the alternatives.
Finally, MOEs should normally represent raw quantities like numbers of something or
frequencies of occurrence. Attempts to disguise these quantities through a mathematical
transformation (for example, through normalization), no matter how well meaning, may reduce
the information content and might be regarded as tampering with the data. Although ratios are
typically used for presenting information such as attrition rates and loss/exchange ratios, one
should still use caution as a ratio can also essentially hide both quantities. This can be
particularly misleading when sample sizes are small. It is generally better to identify the
proportion (e.g. 4 of 5).
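A tiny worked illustration (with made-up numbers): both results below share the same 0.80 ratio, but only the proportion preserves the sample size that determines how much confidence the ratio deserves.

```python
# Illustrative only: identical ratios from very different sample sizes.
# Reporting the proportion (successes of attempts) keeps the information the bare ratio hides.
results = [("Alt A", 4, 5), ("Alt B", 40, 50)]  # (alternative, successes, attempts)

for alt, hits, tries in results:
    print(f"{alt}: ratio = {hits / tries:.2f}, proportion = {hits} of {tries}")
# Both print a ratio of 0.80, but confidence in the Alt B estimate is far higher
# because it rests on ten times as many trials.
```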
4.2.5.2 Measures of Suitability (MOSs)
Measures associated with attributes of operational suitability are referred to as MOSs.
• Operational Suitability: The degree to which a system can be placed satisfactorily in field use with consideration given to availability, compatibility, transportability, interoperability, reliability, wartime usage rates, maintainability, safety, Human Systems Integration, manpower supportability, logistics supportability, natural environmental effects and impacts, documentation, and training requirements.
• Measure of Suitability: A measure of a system’s ability to support mission/task accomplishment with respect to reliability, availability, maintainability, transportability, supportability, and training.
It is important for the study team to consider suitability areas when evaluating operational
effectiveness. Suitability issues such as reliability, availability, maintainability (RAM), and
deployability can be significant force effectiveness multipliers.
A suitable system results in increased combat capability with smaller, more responsive
deployable systems requiring fewer spare parts and people and less specialized equipment. In
addition to significantly impacting mission capability, an alternative’s suitability performance
could be a major factor in its life cycle cost. Maintainability issues could dramatically increase
the number of maintainers needed to sustain a system. Major Human Systems Integration (HSI)
issues might increase operator workload. Additionally, significant reliability issues could result
in low operational availability.
In developing requirements, lead commands must identify RAM and deployability performance
parameters. Support requirements should relate to a system’s operational effectiveness,
operational suitability, and total ownership cost. And, in fact, all AoAs are required to address
these measures for all alternatives considered.
Finally, sustainment is a mandatory Key Performance Parameter (KPP) for all ACAT I
programs (for ACAT II and below programs, the sponsor will determine the applicability of the
KPP). MOSs and the importance of examining sustainability during the AoA are discussed in
much more detail in the Sustainability section.
As will be discussed later in the Alternative Comparison section, the study team can and should
identify not only trades among overall operational effectiveness, cost and risk, but also between
effectiveness and suitability. For instance, can improvements in reliability be achieved if some
requirements for performance are relaxed?
4.2.5.3 Measures of Performance (MOPs)
Measures associated with a quantitative measure of physical performance or physical
characteristics are MOPs.
• Measure of Performance: A measure of the lowest level of physical performance (e.g., range, velocity, throughput, etc.) or physical characteristic (e.g., height, weight, volume, frequency, etc.).
MOPs are chosen to support the assessment of one or more MOEs. MOPs will support the
MOEs by providing causal explanation for the MOE and/or highlighting high-interest aspects or
contributors of the MOE. MOPs may apply universally to all alternatives or, unlike MOEs,
they may be system specific in some instances. In order to determine how well an alternative
performs, each MOP should have an initial minimally acceptable value of performance (often
the “threshold” value). In addition to a minimum performance value, each MOP might also
have an initial, more demanding value (often the “objective” value). While these values may
come from existing requirements documents, there will be some cases where these documents
and requirements simply do not exist prior to AoA initiation. In these cases, the team might rely
on subject matter experts (SMEs), search Combat Air Force (CAF) standards, CONOPS,
Concepts of Employment (CONEMP), and Tactics, Techniques, and Procedures (TTP), or use
some combination of sources to help define performance standards. However, if these
documents (or sources) are dated, have been superseded, or do not reflect current capabilities,
the team should find other legitimate sources for defining required performance parameters. In
some cases where no legitimate source(s) can be found, one of the purposes of the analysis may
be to determine what those required values should be. Regardless of the source, these initial
values and the rationale for their selection should be well documented as the MOPs and their
performance criteria may later be directly or indirectly reflected in system performance
parameters in the ICD/CDD/CPD or other documents. It is possible that the lack of identified
performance values could signify that valid capability gaps have not been established. In this
case, the team should look to earlier analysis (if any exists) such as the CBA to ensure
capability gaps exist, a materiel solution is warranted, and the AoA is the next prudent path to
pursue.
Keep in mind that not all measures of interest for the study (MOEs, MOSs, and MOPs and their
minimum performance values) will necessarily be explicitly identified in any source document.
It is up to the team to identify what needs to be measured to adequately evaluate the
alternatives’ capability to accomplish the required mission tasks and close the capability gaps.
Finally, as with MOEs, the MOPs should be linked (where possible) to future testing
requirements.
As stated earlier, there is no universal definition of measures within the analytic and test
communities. As will be discussed in Appendix L, different organizations not only use MOPs
in different ways, but also use the term MOP to refer to different items or factors. Appendix L
also contains more detailed information regarding the mechanics of developing mission tasks,
measures, criteria, and conducting data analysis.
4.3 Levels of Analysis
In the world of military operations analysis, levels of effectiveness analysis are characterized by
the number and types of alternatives, threat elements, and the levels of fidelity needed for the
study. A typical four-level classification for model selection is shown in Figure 4-2.
At the base of the triangle is the engineering analysis performed on individual components of an
alternative or threat system. One level up, engagement analysis can model the interaction
between a single element of the alternative and a single threat. An example of this analysis is
weapon versus target, or aircraft versus aircraft. Engagement analysis also looks at interactions
of larger quantities of the same elements, or few-on-few.
At the top two levels, mission/battle and theater/campaign (many on many), the analysis
becomes very complex involving the modeling of most or all of the forces in a specific,
complex scenario. At these higher levels the focus of the analysis changes. The applicable
models and simulations (M&S) will also change, as does the complexity of the analysis.
Analysis at higher levels may require inputs from supporting analysis at lower levels.
While the supporting analysis may come from sources outside the AoA, it will often be
performed by the AoA team. MOP values tend to be produced from engineering and one-on-one
analyses. MOE values tend to come from higher levels of analyses. MOS values may come
from either source. There are no hard and fast rules, though, because of the range of issues
considered in AoAs.
Given the increasing complexity of the analysis encountered in moving up the pyramid, every
effort must be made to use the appropriate level needed to answer the AoA's questions. In some
cases, a team may need to use several levels of analysis to adequately address all AoA issues.
Figure 4-2 depicts the analysis hierarchy.
Figure 4-2: Hierarchy of Analysis
Once measures have been identified and the methodologies to be used for each analytical effort
determined, it is time to determine what “tools” will be used to develop measure data. The term
“tools” is defined as spreadsheets, SMEs, methods, processes, and Modeling & Simulation
(M&S). The analysis tools are the heart and soul of analysis and can consist of everything from
hand-written steps executed with a "stubby pencil" to elegant mathematical formulations
represented by thousands of lines of computer code. In some cases, they may include person-in-the-loop simulations or the informed judgment of SMEs. Whatever their complexity or form,
there comes a point when the AoA team must decide which tools to use to generate measure
data for alternative comparisons.
The measures developed for the analysis should dictate which tools are needed. Never develop
measures based on the availability or familiarity of a particular analysis tool. Doing so (for
example, because of easy accessibility to a particular M&S) may result in the wrong issues
being investigated and the wrong alternatives being identified as promising. Once the measures
are identified, the necessary level(s) of analysis can be determined and a search conducted for
tools suitable for those measure calculations. [Note: the study questions and the methodology
to address those questions should always drive tool selection and who should do the analysis,
not the other way around.]
When selecting analysis tools consider the following:
• Information or input data requirements and the quality of the data sources
• Credibility and acceptance of the tool output or process results (e.g., SME assessments)
• Who is available to run the M&S, develop/manipulate the spreadsheets, or participate in SME assessments
• Whether or not the tool can be applied to support the analysis within time and funding constraints
• Cost of running M&S
Tool inputs come from all aspects of the AoA: threats and scenarios, alternative definitions,
employment concepts, constraints and assumptions, etc. These may also be derived from the
outputs of other tools. Before selecting an M&S tool, the sources of all inputs should be
identifiable and credible. Where the best available tools fall short, the team must communicate
these shortfalls to decision makers. Information regarding some commonly accepted models can be
obtained from the Air Force Standard Analysis Toolkit (AFSAT) located at the HAF/A9 portal
page.
Before deciding on a final integrated set of tools, it is useful to check that the toolset is adequate
for evaluating all measures in the AoA. Constructing a linkage diagram as illustrated in Figure
4-3 may be useful for this.
As shown, this diagram depicts the source of data to resolve the measure data values and
provides a system level diagram of how the selected analysis tools are expected to work
together. It should also show what information is expected to flow from one tool (or process) to
another. A review of the linkage diagram should also ensure that a common set of assumptions
is made across all the tools. Including a linkage diagram in the Study Plan should also enhance
the understanding of those reading or reviewing the plan.
Figure 4-3: Notional Example of Tool and Measure Linkage
4.3.1 M&S Accreditation
The DODI 5000 series requires that digital M&S used in support of acquisition decisions be
formally accredited for use by an Accreditation Authority. Additionally, AFI 16-1001
Verification, Validation, and Accreditation (VV&A) establishes policy, procedures, and
responsibilities for the VV&A of Air Force owned or managed M&S. MIL-STD-3022, DoD
Standard Practice: Documentation of VV&A for Models and Simulations, outlines
the templates for the M&S accreditation plan and report.
Accreditation is an official determination by the accreditation authority that a model (or
methodology, tools, data) is acceptable for a specific purpose and identifies risks associated
with using that model. Accreditation provides credibility to the study by demonstrating the
pedigree of the model, offering evidence that the model is credible, and establishing that it is
appropriate for its use within the study. The study team should allow time for the M&S
accreditation process within the AoA schedule; this process should be discussed in the study
plan and the accreditation plan should be included as an appendix to the study plan. OAS can
help tailor an appropriate accreditation plan.
Model accreditation begins with development of the accreditation plan. The plan contains
criteria for model assessment based on the ability of the model to accept the required input data
and to provide appropriate output information to resolve the MOEs. All data used for model
input and scenario configuration should also be accredited to ensure credibility of the output.
While accreditation is important, the study team must balance the extent of work required to do
the accreditation with the study questions at hand. In other words, the accreditation authority
must determine what the appropriate level of accreditation is for this problem. Once the model
assessment is complete, a final accreditation report is prepared.
4.3.2 Study Risk
Fundamentally, AoAs consist of three primary analysis components: effectiveness, cost, and
risk. Risk, in this sense, refers to the operational, technical, and programmatic risks associated
with the alternative solutions. Various factors, such as technical maturity, survivability, and
dependency on other programs, are considered in determining these risks.
There are other risks (uncertainties) in AoAs associated with the conduct of the study rather
than the alternatives themselves. Generally, these uncertainties pertain to factors that could
impact the conduct of the study as described in Section 3.4, such as time and resource
constraints.
In terms of uncertainties associated with the effectiveness analysis, consider, for instance, a
situation where a team is evaluating both existing, established, mature systems and newer,
cutting edge technologies for which little historical data exists. While the team has access to
sufficient, credible information (maintenance records, prior test results, historical performance
data, etc.) regarding the operational capabilities of the mature technologies, it will need to make
certain assumptions regarding the newer technologies that may or may not actually be true.
Additionally, the team may only have a limited set of data, and/or subject matter expertise to
rely on for analysis. While the SMEs may come to the conclusion that the newer alternative
should be capable of performing the required tasks to the required standards, they do not have
any hard evidence to unequivocally support this conclusion. Due to the uncertainty associated
with this data, the team will have less confidence in the conclusions drawn for the newer
system. While the SMEs believe it will be operationally effective, it may not.
These areas of uncertainty are excellent starting points for sensitivity analysis. Given the
uncertainty of the information used to form some conclusion, what is the operational impact if
the team is wrong? It is important that the team recognizes and documents these uncertainties,
identifies the operational impact if an unanticipated outcome occurs, and provides this
information to the decision makers.
4.4 Sensitivity Analysis
Alternatives whose effectiveness is stable over a range of conditions provide greater utility and
less risk than those lacking such stability. Alternatives in an AoA are typically defined with
certain appropriate assumptions made about their performance parameters: weight, volume,
power consumption, speed, accuracy, impact angle, etc. These alternatives are then assessed
against AoA-defined threats and scenarios under a set of AoA-defined assumptions. This
provides very specific cost and performance estimates, but does little to assess the stability of
alternative performance to changes in system parameters or AoA threats, scenarios,
employment, and other assumptions.
Stability can only be investigated through sensitivity analyses in which the most likely critical
parameters are varied; for instance: reduced speed or increased weight, greater or less accuracy,
different basing options, reduced enemy radar cross section, or when overarching assumptions
are changed. This form of parametric analysis can often reveal strengths and weaknesses in
alternative performance that are valuable in making decisions to keep or eliminate alternatives
from further consideration. Sensitivity analyses should always be performed with an emphasis
on alternatives that survived early screening processes. It should be budgeted for in the original
plan; however, the specific sensitivity analysis to be conducted usually will not be known until
well into the AoA. Sensitivity analysis can also add credibility to the information developed
during the effectiveness analysis. Of course, it is always necessary to balance the amount of
sensitivity analysis against its potential value and the available resources.
In addition to sensitivity analysis, the team may want to consider examining various excursions
from the original scenarios and other "what if" analyses to provide a more thorough and robust
evaluation of the capabilities and limitations of the alternatives in differing operational
environments and when employment situations change.
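For illustration only, the following minimal Python sketch shows the mechanics of one-at-a-time excursions against a stand-in effectiveness model; the model, parameter names, and values are invented assumptions, not part of this handbook or any AoA toolset:

# One-at-a-time excursions against a stand-in effectiveness model.
# The model and all parameter values below are hypothetical placeholders.

def effectiveness(speed_kts, weight_lb, accuracy_m):
    """Stand-in for the AoA toolset (M&S runs, SME scoring, etc.)."""
    return (speed_kts / 500.0) * (10.0 / accuracy_m) * (40000.0 / weight_lb)

baseline = {"speed_kts": 450, "weight_lb": 42000, "accuracy_m": 12}
excursions = {"speed_kts": [400, 500], "weight_lb": [38000, 46000], "accuracy_m": [8, 16]}

base = effectiveness(**baseline)
for param, values in excursions.items():
    for value in values:
        case = dict(baseline, **{param: value})
        delta = effectiveness(**case) - base
        print(f"{param} = {value}: change in measure vs. baseline = {delta:+.3f}")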
4.5 Effectiveness Analysis Results Presentation
Once the effectiveness analysis has been completed, the most important task for the team is to
provide a concise, cogent, and clear picture of the effectiveness of each alternative in relation to
the requirements. The team must determine how to convey the critical information learned. In
most AoAs, this is an art form far more than a science and requires serious operational and
military judgment. One method to do this (similar to the figure below) is to present the values
for the measures of each alternative using a color scheme indicating how well each measure was
accomplished. This is only one example and may not be suitable in every case, particularly if
the analysis includes numerous measures. If a presentation such as this is used, a methodology
needs to be developed to map measured values to the colors displayed. Any method chosen,
however, should map measure values in relation to the threshold value and the associated changes
in military utility and reduction in the gaps, not in relation to one another. As discussed in
section 4.1 above, OAS discourages roll-up, aggregation, and weighting schemes that tend to
mask important information and potentially provide misleading results. Therefore, for studies
with an abundant amount of measures information, a balance must be achieved between
providing credible results with sufficient clarity and overwhelming or confusing the audience.
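As a purely illustrative sketch of one such mapping (the function name, breakpoints, and the assumption that higher measure values are better are all invented for the example), a measure value can be rated against its own threshold and objective criteria rather than against the other alternatives:

# Map a measure value to a stoplight rating relative to its own threshold and
# objective criteria (not relative to the other alternatives). Illustrative only.

def rate_measure(value, threshold, objective):
    if value >= objective:
        return "green"    # meets or exceeds the objective value
    if value >= threshold:
        return "yellow"   # meets the minimum acceptable (threshold) value
    return "red"          # fails to meet the threshold

for alt, value in {"Alternative 1": 0.97, "Alternative 2": 0.88, "Baseline": 0.71}.items():
    print(alt, rate_measure(value, threshold=0.85, objective=0.95))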
Figure 4-4: Effectiveness Analysis Results Presentation
5 Performing Cost Analysis
5.1 General Cost Estimating
Generally, cost estimates are required for government acquisition programs, as they are used to
support funding decisions. Developing a sound cost estimate requires stable program
requirements, access to detailed documentation and historical data, and well-trained,
experienced cost analysts. Cost estimating combines concepts from such disciplines as
accounting, budgeting, economics, engineering, mathematics, and statistics. Establishing
realistic estimates for projected costs supports effective resource allocation and increases the
probability of a program’s success. In addition, cost estimates are used to develop annual
budget requests, evaluate resource requirements at key decision points, and to develop
performance measurement baselines.
Cost estimating is defined as the process of collecting and analyzing historical data and
applying quantitative models, techniques, and tools to predict the future cost of an item,
product, program, or task. Cost estimating is an integral part of the AoA and is used to support
the following activities:
• Evaluating program or sponsor viability, structure, and resource requirements
• Supporting a program's or sponsor's planning, programming, budgeting, and execution process (PPBE)
• Predicting future costs based on known historical technology and manpower requirements
• Evaluating alternative courses of action
• Supporting milestone decisions and reviews
• Forming the basis for budget requests to Congress
5.2 AoA Cost Estimating
The Life Cycle Cost Estimate (LCCE) includes more than just the procurement cost of the
system. Although procurement cost is important, it is often not the largest portion of the overall
cost of an alternative. The LCCE can provide the following insights to inform acquisition
decisions:
• Total cost to the Federal Government (to the U.S. Treasury) of developing, procuring, fielding, and sustaining operations for each alternative for its expected life cycle
• The annual breakdown of costs expected for the alternative by funding categories (e.g., 3300 Military Construction, 3400 O&M, 3500 Military Personnel, 3600 RDT&E, etc.)
• Trade-off analysis/Cost As an Independent Variable (CAIV) to identify solutions that, given a fixed cost, provide the greatest (may be less than 100% solution) capability (CAIV is discussed in paragraph 5.5.3.2 below)
• The cost drivers of alternatives (i.e., those items having the greatest impact on the overall costs)
• Cost of enablers and operational support for the capability being evaluated
• Estimated life cycle costs that represent what is necessary to deliver the predicted operational effectiveness for each alternative
• Projected costs associated with various operational, basing, fielding, or programmatic decisions expected for each alternative evaluated
• Uncertainty and risk associated with the cost estimate
It is critical that all cost estimates included in AoAs be credible and clearly documented. Table
5-1 describes characteristics of credible cost estimates from the Government Accountability
Office (GAO) Cost Estimating guide. This guide has been referenced in several recent reports
to Congress on how to improve DoD’s acquisition process. It is provided to help study teams
understand what is necessary to produce credible cost estimates.
Table 5-1: GAO’s Basic Characteristics of Credible Cost Estimates
Clear identification of task: Estimator must be provided with the system description, ground rules and assumptions, and technical and performance characteristics. Estimate's constraints and conditions must be clearly identified to ensure the preparation of a well-documented estimate.
Broad participation in preparing estimates: All stakeholders should be involved in deciding mission need and requirements and in defining system parameters and other characteristics.
Availability of valid data: Numerous sources of suitable, relevant, and available data should be used from similar systems to project costs of new systems; these data should be directly related to the system's performance characteristics.
Standardized structure for the estimate: A standard work breakdown structure, as detailed as possible, should be used. It should be refined as the cost estimate matures and the system becomes more defined. The work breakdown structure ensures that no portions of the estimate are omitted and allows comparisons to similar systems and programs.
Provision for program uncertainties: Uncertainties should be identified and allowance developed to cover the cost effect.
Recognition of inflation: The estimator should ensure that economic changes, such as inflation, are properly and realistically reflected in the life cycle cost estimate.
Recognition of excluded costs: All costs associated with a system should be included; any excluded costs should be disclosed and given a rationale.
Independent review of estimates: Conducting an independent review of an estimate is crucial to establishing confidence in the estimate; the independent reviewer should verify, modify, and correct an estimate to ensure realism, completeness, and consistency.
The life cycle cost in an AoA captures the total cost of each alternative over its expected life
and includes costs incurred for research and development, investment, operations and support,
and end of life disposal. Sunk costs (funds already spent or obligated) are not included in the
LCCEs; however, they may be of interest to decision makers and should be identified
separately. All AoA LCCEs are based on peacetime operations and do not include any war-related costs such as replacement of expended or destroyed assets or increased costs associated
with wartime operational tempo.
The study team should determine what is included in peacetime operations and what is included
in contingency operations. For example, airlift operations during peacetime may entail
scheduled flights in commercial airspace and operating at various established commercial and
military airfields. On the other hand, some Special Operations missions during peacetime may
appear to be contingency operations, such as flying limited sorties into areas not serviced by
commercial airspace and landing in unimproved areas. It will be the study team’s responsibility
to determine where the defining line between peacetime and contingency operations falls for
their study, and to obtain WIPT, SRG, SAG, and AFROC concurrence with this critical
assumption.
5.3 Life Cycle Cost Considerations
5.3.1 Sunk Costs
Sunk costs are those that have either already occurred or will be incurred before the AoA can inform
any decisions on their expenditure. The best method of determining the cutoff for sunk costs is
to use the fiscal year in which the AoA is to be completed. Any costs that are expected to be
incurred after that fiscal year should be included in the AoA LCCEs.
5.3.2 Research and Development (R&D) Costs
The costs of all R&D phases, including Advanced Technology Demonstration (including
Concept Development), Technology Development, and Engineering and Manufacturing
Development, are included in this cost element. There are many types of R&D costs:
prototypes, engineering development, equipment, test hardware, contractor system test and
evaluation, and government support to the test program. Engineering costs for environmental
safety, supportability, reliability, and maintainability efforts are also included, as are support
equipment, training, and data acquisition supporting R&D efforts.
5.3.3 Investment Costs
The cost of investment (low rate initial production, full rate production, and fielding) includes
the cost of procuring the prime mission equipment and its support. This includes training, data,
initial spares, support equipment, integration, pre-planned product improvement (P3I) items,
and military construction (MILCON). MILCON cost is the cost of acquisition, construction, or
modification of facilities (barracks, mess halls, maintenance bays, hangars, training facilities,
etc.) necessary to accommodate an alternative. The disposal of this infrastructure should be
captured in the disposal costs (discussed in paragraph 5.3.5). The cost of all related
procurement (including transportation, training, support equipment, etc.) is included in the
investment phase.
5.3.4 Operations and Support (O&S) Costs
O&S costs are those program costs necessary to operate, maintain, and support system
capability through its operational life. These costs include all direct and indirect elements of a
defense program and encompass costs for personnel, consumable and repairable materiel, and
all appropriate levels of maintenance, facilities, and sustaining investment. Manpower
estimates should be consistent with the Manpower Estimate Report (MER), which is produced
by the operating command’s manpower office. For more information, refer to the OSD Cost
Analysis Improvement Group's Operations and Support Cost Estimating Guide, October 2007.
5.3.5 Disposal Costs
Disposal costs represent the cost of removing excess or surplus property (to include MILCON)
or materiel from the inventory. It may include costs of demilitarization, detoxification,
divestiture, demolition, redistribution, transfer, donation, sales, salvage, destruction, or long
term storage.
It may also reflect the costs of hazardous waste disposition, storage, and environmental
cleanup. Disposal costs may occur during any phase of the acquisition cycle. If during
development or testing some form of environmentally unsafe materials are created, the costs to
dispose of those materials are captured here.
5.3.6 Baseline Extension Costs
The baseline is the existing, currently programmed system funded and operated according to
current plans. Baseline extension costs are those costs associated with maintaining the current
capabilities (i.e., the baseline alternative) through the life cycle identified in the study. Only
improvements that are included in the POM are part of the baseline. This may require Service
Life Extension Program (SLEP) efforts, additional procurement, additional maintenance, or
other efforts to continue to provide the baseline level of capability. Capabilities that may be
provided by other alternatives but are not provided by the baseline alternative should be
addressed as continued shortfalls in the baseline capability. For other study alternatives, these
costs must be continued until such time as an alternative providing that additional capability is
fielded and operational (Full Operational Capability (FOC), which will be based upon the study
assumptions).
5.3.7 Life Cycle Time Frame
The cost of each alternative (baseline and all proposed alternatives) must be evaluated for the
same life cycle time frame. The time frame should span from the end of the AoA to the end of
the life cycle as defined in the study (e.g., 20 year life cycle). This allows for a fair comparison
of each alternative and may require service life extension efforts for alternatives (including
the baseline) with shorter expected useful lives, or the calculation of residual values for
alternatives that may continue to provide capability past the study cutoff dates. It is important
to estimate the costs associated with providing a capability (albeit possibly at different levels for
different alternatives) for the same period of time.
Figure 5-1 below illustrates the concept of comparing all alternatives across the same life cycle.
In this example all alternatives provide their evaluated capability from FY02 through FY38.
The assumption is that alternative 1 has the longest life and ends its useful life (and incurs
disposal costs) in FY38. Each alternative has a different Initial Operational Capability (IOC)
date where it becomes an operational asset and requires at least one Service Life Extension
Program (SLEP) effort during its life. Alternative 2 may have some residual value at the end of
the analysis time frame which should be included in the LCCE. The baseline alternative is
shown incurring costs until such time as its capabilities are replaced by the new alternatives.
There will likely be a ramp-down in baseline costs from IOC to FOC for each new alternative
along with a corresponding ramp-up of alternative operational costs for the alternative being
evaluated.
Figure 5-1: Comparing All Alternatives Across the Same Life Cycle
5.3.8 Pre-fielding Costs
Pre-fielding costs are those associated with maintaining the capabilities being analyzed in the
AoA until a specific alternative can be fielded to provide them. Pre-fielding costs must include
the costs of maintaining the current baseline alternative (or capability) until such time as the
other alternatives can be fielded (FOC). There may be ramp-up of new alternatives and a
corresponding ramp-down of baseline capabilities from IOC to FOC depending on the study
and its assumptions.
5.4 Cost Analysis Responsibility
The working group created to evaluate costs for an AoA should be led by a government cost
analyst (also referred to as cost estimator in this handbook) familiar with the type of capability
being studied. This group should also include representatives from operating and implementing
command organizations (stakeholders) with expertise in cost analysis and knowledge of the
system alternatives. Additionally, other specialists can assist the team in assessing the cost
implications of enablers (e.g., logisticians, intelligence analysts, Human Systems Integration
practitioners, and communications specialists). OAS will serve as an advisor and assist the cost
team throughout the AoA. As one of its first official duties, the cost analysis working group
should request support from the Air Force Cost Analysis Agency (AFCAA). Specifically, this
support should include AFCAA participation in the cost analysis working group, review and
validation of the methodologies, and an independent review of the final cost estimates. In
response to this request, AFCAA may provide a representative to support the working group in
developing the cost analysis methodology. If this is not possible, AFCAA should respond to the
team’s request and identify what, if any, involvement they will have in the AoA. Their
involvement may include providing regulatory guidance, reviewing and approving proposed
cost analysis methodologies, and performing a sufficiency review, which is a form of Non-Advocate Cost Assessment (NACA), per AFPD 65-5 (August 2008).
The cost group is responsible for the following cost analysis tasks:
• Request AFCAA support for the cost analysis
• Identify key cost analysis team support (stakeholders, modelers, etc.) requirements
• Develop appropriate cost analysis ground rules and assumptions and ensure they are consistent with other ground rules and assumptions in the study
• Develop the Work Breakdown Structure (WBS) to be used in the cost analysis; the WBS is a hierarchical organization of the items to be costed
• Develop cost analysis approaches and methodologies
• Locate and determine the suitability and availability of cost models and data required
• Define the enabling (logistics, intelligence, Human Systems Integration, etc.) elements necessary to create the cost analysis
• Prepare point estimates and confidence ranges for the baseline and each viable alternative, as determined by the screening process
• Bound the LCCE point estimates with uncertainty ranges (or cumulative distribution functions) specifically identifying the 50th and 80th percentile points
• Document the cost analysis so that a qualified cost analyst can reconstruct the estimate using only the documentation and references provided in the final report
• Crosscheck the estimates to ensure the methodology and the ground rules and assumptions are consistent across all alternatives and that the LCCE is complete
• Include programmatic data in the cost analysis documentation, such as quantities and delivery schedules (whether known or developed by the cost team)
• Identify cost drivers (those elements to which estimates are most sensitive) and perform sensitivity analyses on significant cost drivers demonstrating the impact of changing assumptions on the overall LCCE
• Coordinate with the effectiveness analysis working group to evaluate and identify any possible relationships between cost drivers and aspects of the alternatives that may drive capability delivery
• Address any funding and affordability constraints and specify schedule limitations
• Assuming such constraints are identified, provide necessary cost data to perform CAIV analyses
• Provide support to the Core Function Lead Integrator (CFLI) for an affordability assessment of the alternatives' impact on the entire mission area in accordance with DAG Section 3.2
• Present all costs in base-year dollars (BY$) and then-year dollars (TY$) (a notional conversion sketch follows this list)
• Identify and use the appropriate inflation indices for creating TY$ estimates (the most current OSD indices are published on the SAF/FMC web page)
• Separately identify sunk costs for each alternative
• Address manpower implications (government and contract manpower) to include all costs associated with employing each person for each alternative in the O&S cost
• Address appropriate environmental regulations, treaties, risk mitigation, etc. in determining disposal costs
• Address sources that are driving cost risk and uncertainty for each alternative and provide mitigation plans where possible
• Write the cost section of the study plan, final report, and review group (WIPT, SRG, AFROC, etc.) briefings
• Participate in the alternative comparison and risk analysis efforts to ensure LCCE data is appropriately used and interpreted
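The following minimal Python sketch illustrates the base-year to then-year conversion referenced in the list above. The fiscal years, amounts, and index values are invented placeholders, not the published OSD indices:

# Convert a base-year (BY$) cost stream to then-year (TY$) dollars using
# notional inflation indices. Index values are placeholders for illustration;
# real estimates use the current OSD indices published on the SAF/FMC page.

by_costs = {2025: 120.0, 2026: 150.0, 2027: 140.0}       # BY2025 $M by fiscal year
index_to_ty = {2025: 1.000, 2026: 1.021, 2027: 1.043}    # notional BY2025-to-TY factors

ty_costs = {fy: by_costs[fy] * index_to_ty[fy] for fy in by_costs}
for fy in sorted(ty_costs):
    print(f"FY{fy}: BY$ {by_costs[fy]:6.1f}M  ->  TY$ {ty_costs[fy]:6.1f}M")
print(f"Total: BY$ {sum(by_costs.values()):.1f}M, TY$ {sum(ty_costs.values()):.1f}M")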
5.5 Cost Analysis Methodology
Cost analysis allows alternatives to be compared to the baseline system using their relative
estimated costs. The cost methodologies to be used are initially outlined in the study plan and
updated as the AoA proceeds. A recommended approach for structuring the LCCE process
during AoAs is outlined in Table 2 (“The Twelve Steps of a High-Quality Cost Estimating
Process”) of the GAO Cost Estimating and Assessment Guide (GAO CEAG), March 2009 (See
Appendix M). This guides cost estimators of all levels of experience in developing the LCCE.
The cost analysis group will use the same general work breakdown structure (WBS) to compute
cost estimates for all viable alternatives. See Section 5.5.1 for a description of WBS. The level
of alternative description available and the fidelity of the cost estimate will vary depending on
the detail of alternative definition and its technological maturity. The definition of each
alternative in the CCTD will serve as the foundation for the cost, effectiveness, and risk analysis
efforts during the AoA. It is crucial that the same version of the CCTD be used as the basis for
all analysis. As part of the cost methodology, the AoA study plan should identify general cost
ground rules and assumptions underlying the analysis (for example: all maintenance will be
provided with military personnel) as well as those specific to particular cost elements or life
cycle phases (for example: System Engineering/Project Management (SEPM) will be estimated
at 5% of green aircraft cost). At a minimum, the preliminary list of cost ground rules and
assumptions should address the following:
• Cost basis of the estimate (specified in BY$)
• Duration (years) alternatives are to be operational (life cycle) for costing purposes
• Specific inflation indices used (OSD unless otherwise justified)
• Definition of sunk costs (date separating costs expended or contractually committed from those to be included in the estimate)
• Schedule issues, including major milestones and significant events (IOC and FOC dates, production schedules and quantities)
• Basing, logistics, and maintenance concepts for each alternative
• Fully Burdened Cost of Energy (FBCE)
• MILCON requirements
• Intelligence, Human Systems Integration, and other enabler support requirements
• Environmental costs
• Personnel requirements and constraints
• Affordability constraints
5.5.1 Work Breakdown Structure (WBS)
The cost estimating methodology is generally based on a WBS. A WBS is a product-oriented
(as opposed to functionally-oriented) tree composed of hardware, software, services, data, and
facilities that define the product to be developed and produced. The following is a notional
WBS for an aircraft system; it illustrates the typical elements found at the first three WBS levels
(succeeding levels contain greater detail).
Aircraft System
• Air Vehicle
  - Airframe
  - Propulsion
  - Air vehicle software
  - Armament
  - Weapons delivery
• Systems Engineering and Program Management
  - (no Level 3 breakdown)
• System Test & Evaluation (T&E)
  - Development T&E
  - Operational T&E
  - T&E support
  - Test facilities
• Training
  - Equipment
  - Services
  - Facilities
• Data
  - Technical publications
  - Engineering data
  - Management data
  - Support data
• Peculiar Support Equipment
  - Test & measurement equipment
  - Support & handling equipment
• Common Support Equipment
  - Test and measurement equipment
  - Support and handling equipment
• Operational/Site Activation
  - System assembly, installation and checkout
  - Contractor technical support
  - Site construction
• Industrial Facilities
  - Construction, conversion, or expansion
  - Equipment acquisition or modernization
  - Maintenance (industrial facilities)
• Initial Spares and Repair Parts
  - (no Level 3 breakdown)
Once the WBS has been created, cost estimates are collected for the WBS elements and then
used to develop an overall point estimate for each alternative. It is recommended that study
teams include a WBS to at least level 3 in their AoA study plans. This demonstrates to decision
makers that the team understands each alternative. The CCTD is the best source of information
to use in developing the WBS. Although the CCTD may not be complete when the study plan
development effort begins, there should be enough information available to initiate development
of the level 3 WBS. Each alternative’s WBS will be further defined and lower levels added
during the analysis. For further information on WBS, refer to MIL-STD-881 Revision C, Work
Breakdown Structures for Defense Materiel Items (3 October 2011).
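As a minimal sketch of how element-level estimates roll up through a WBS to an alternative's point estimate, the following Python fragment uses a notional level-2/level-3 structure and invented dollar values (neither the structure nor the values come from MIL-STD-881 or this handbook):

# Notional WBS fragment with element estimates ($M) rolled up to a point estimate.
# Structure and values are illustrative assumptions only.

wbs = {
    "Air Vehicle": {"Airframe": 310.0, "Propulsion": 95.0, "Air vehicle software": 60.0},
    "System Test & Evaluation": {"Development T&E": 40.0, "Operational T&E": 25.0},
    "Training": {"Equipment": 18.0, "Services": 7.0},
}

point_estimate = 0.0
for level2, elements in wbs.items():
    subtotal = sum(elements.values())   # level-3 estimates sum to the level-2 element
    point_estimate += subtotal
    print(f"{level2}: ${subtotal:.1f}M")
print(f"Alternative point estimate: ${point_estimate:.1f}M")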
5.5.2 Cost Estimating Methodologies
Once the cost estimating team has developed the WBS, the next step is to determine how to
develop cost estimates for each element of the WBS. These individual estimates will form the
basis of the overall point estimate. There are multiple cost estimating methods available which
span the Acquisition Life Cycle to facilitate the cost estimating process.
Depending on project scope, estimate purpose, project maturity, and availability of cost
estimating resources, the estimator may use one, or a combination, of these techniques.
Generally speaking, the estimating team should identify an overarching methodology that will
frame the entire estimating effort, and also identify the specific methodology that is most
appropriate for estimating each individual WBS element. As the level of project definition
increases, the estimating methodology tends to progress from conceptual techniques to
deterministic and definitive techniques.
The cost team must choose the appropriate methodology for where the program
or effort is in its life cycle. Early in the program, definitions may be somewhat limited and
actual costs may not have been accrued. Once a program is in production, cost and technical
data from the development phase can be used to estimate the remainder of the program. DoD
5000.4‐M, Cost and Software Data Reporting (CSDR) Manual, identifies five analytical cost
estimating methods and techniques commonly used to develop cost estimates for DoD
acquisition systems:
1. Analogy
2. Engineering build-up
3. Parametric
4. Extrapolation from actual costs
5. Expert opinion
For definitions and explanations of the analogy, engineering build-up, and parametric methods, refer
to Appendix N which is an excerpt from Chapter 11 in the GAO Cost Estimating and
Assessment Guide.
Table 5-2 compares the most common cost estimating methods:
Table 5-2: Comparison of Common Cost Estimating Methods
Further information or details on applying any of these methods can be obtained through
discussions with the Office of Aerospace Studies or by consulting the GAO Cost Estimating and
Assessment Guide (March 2009).
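As one hedged illustration of the parametric method listed above, a simple cost estimating relationship (CER) can be fit to historical data and applied to a new element; the data points, the power-law form, and the weight driver below are invented for the example, not drawn from any Air Force database:

# Fit a simple power-law CER (cost = a * weight^b) to notional historical data
# and apply it to a new element. Data and functional form are illustrative only.
import math

history = [(12000, 210.0), (18000, 285.0), (26000, 360.0), (34000, 425.0)]  # (lb, $M)

# Ordinary least squares in log space: ln(cost) = ln(a) + b * ln(weight)
xs = [math.log(w) for w, _ in history]
ys = [math.log(c) for _, c in history]
n = len(history)
b = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / \
    (n * sum(x * x for x in xs) - sum(xs) ** 2)
a = math.exp((sum(ys) - b * sum(xs)) / n)

new_weight = 22000
print(f"CER: cost = {a:.2f} * weight^{b:.3f}")
print(f"Estimated cost at {new_weight} lb: ${a * new_weight ** b:.0f}M")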
5.5.3 Sensitivity Analysis
Sensitivity analysis reveals how the cost estimate is affected by changes in assumptions, ground
rules, and cost drivers. The cost estimator must examine the effect of changing one assumption,
ground rule, or cost driver at a time while holding all other variables constant. By doing so, it is
easier to understand which variable most affects the cost estimate. In some cases, a sensitivity
analysis can be conducted to examine the effect of multiple assumptions changing in relation to
a specific scenario.
Since estimates are built on a number of predicted technologies and assumptions, it is necessary
to determine the sensitivity of the cost elements to changes in assumptions, ground rules, and
cost drivers. If possible, cost estimators should quantify the risks they identify. This can be
done through both a sensitivity analysis and an uncertainty analysis (discussed in paragraph 5.5.5).
Uncertainty about the values of some, if not most, of the technical parameters is common early
in an alternative’s design and development. Many assumptions made at the start of a study may
prove to be inaccurate. Therefore, once the point estimate has been developed, it is important to
determine how sensitive the total cost estimate is to changes in the study assumptions, ground
rules, and cost drivers.
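A minimal sketch of the one-at-a-time variation described above follows; the cost function, drivers, and values are invented placeholders rather than anything prescribed here:

# Vary one cost driver at a time while holding the others at their point-estimate
# values, and report the change in total cost. All numbers are illustrative.

def total_cost(unit_cost, quantity, labor_rate, support_years):
    return unit_cost * quantity + labor_rate * 2000 * support_years / 1e6  # $M

point = {"unit_cost": 45.0, "quantity": 80, "labor_rate": 95.0, "support_years": 20}
ranges = {"unit_cost": (40.0, 55.0), "quantity": (60, 100), "support_years": (15, 25)}

base = total_cost(**point)
for driver, (low, high) in ranges.items():
    for value in (low, high):
        case = dict(point, **{driver: value})
        print(f"{driver} = {value}: total cost changes {total_cost(**case) - base:+.1f} $M")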
5.5.3.1 Sensitivity Factors
Some factors that are often varied in a sensitivity analysis are:
• Duration of life cycle
• Volume, mix, or pattern of workload
• Threshold/objective criteria
• Operational requirements
• Hardware, software, or facilities configurations
• Assumptions about program operations, fielding strategy, inflation rate, technology heritage savings, and development time
• Learning curves
• Performance characteristics
• Testing requirements
• Acquisition strategy (multiyear procurement, dual sourcing, etc.)
• Labor rates
• Software lines of code or amount of software reuse
• Scope of the program
• Manpower levels and personnel types
• Occupational health issues
• Quantity planned for procurement
• Purchase schedule
Many of these are usually cost drivers in AoAs and are responsible for sizable changes in early
cost estimates.
5.5.3.2 Cost as an Independent Variable (CAIV)
CAIV is one of the most common types of sensitivity analysis. CAIV is a technique for varying
the expected cost of the alternative(s) and changing performance and schedule to determine the
impact of funding limitations. This technique allows the cost team to perform “what if”
analysis with funding levels even before such levels have been determined or included in
budgets. It is good practice for the cost team to fluctuate the point estimate they have developed
by decrements (for example, 0, 10, and 25 percent) and then, with the alternative development
team, derive the number of units, performance characteristics, and schedules that such reduced
funding levels would represent. It is likely this effort will identify a point at which it is not
advisable to proceed with one or more alternatives. These results can provide important
information to the decision maker.
There are no set levels by which the cost should be varied, nor are there any set formats for
displaying this information. Table 5-4 shows a recommended way to display CAIV results in
the AoA; however, teams may have other methods that provide greater insights.
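For illustration only, the following sketch applies the decrement approach described above; the point estimate, decrements, and unit price are invented placeholders, and in practice the alternative development team would also rederive performance and schedule at each funding level:

# Decrement a notional point estimate by fixed percentages and show the buy
# quantity each reduced funding level would support. Values are illustrative.

point_estimate = 4800.0   # $M, notional alternative point estimate
unit_price = 52.0         # $M per unit, notional
for decrement in (0.00, 0.10, 0.25):
    funding = point_estimate * (1 - decrement)
    units = int(funding // unit_price)
    print(f"-{decrement:.0%} funding: ${funding:.0f}M buys about {units} units")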
Table 5-4: Cost As an Independent Variable (CAIV)
5.5.4 Cost Models and Data
Cost models incorporating the five methodologies are available to assist the cost analyst in
developing the LCC estimates. The models and data intended for use in the AoA should be
identified and described in the study plan. Cost models and data generally accepted by the Air
Force cost analysis community should be used. AFCAA and CAPE can provide a
comprehensive list of acceptable cost models and databases. Cost models frequently used
include:
• ACEIT (integrated)
• COCOMO (software)
• CRYSTAL BALL (risk)
• LSC (logistics)
• SEER (software/hardware)
• SEM (software)
• PRICE-H (hardware)
• PRICE-S (software)
5.5.5 Cost Risk and Uncertainty
Because the LCCEs may be used as estimates for future program costs, it is important to
determine the amount of uncertainty associated with the estimate. For example, data from the
past may not always be relevant in the future, because new manufacturing processes may
change a learning curve slope or new composite materials may change the relationship between
weight and cost. Moreover, a cost estimate is usually composed of many lower-level WBS
elements, each of which comes with its own source of error. Once these elements are added
together, the resulting cost estimate can contain a great deal of uncertainty.
5.5.5.1 The Difference Between Risk and Uncertainty (GAO-09-3SP, GAO Cost
Estimating Guide)
Risk and uncertainty refer to the fact that because a cost estimate is a forecast, there is always a
chance that the actual cost will differ from the estimate. Moreover, lack of knowledge about the
future is only one possible reason for the difference. Another equally important reason is the
error resulting from historical data inconsistencies, assumptions, cost estimating equations, and
factors typically used to develop an estimate.
In addition, biases are often found in estimating program costs and developing program
schedules. The biases may be cognitive—often based on estimators’ inexperience—or
motivational, where management intentionally reduces the estimate or shortens the schedule to
make the project look good to stakeholders.
Recognizing the potential for error, and deciding how best to quantify it, is the purpose of both
risk and uncertainty analysis.
It is inaccurate to add up the most likely WBS elements to derive a program cost estimate, since
their sum is not usually the most likely estimate for the total program, even if they are estimated
without bias.
Quantifying risk and uncertainty is a cost estimating Best Practice addressed in many guides
and references. DOD specifically directs that uncertainty be identified and quantified. The
Clinger-Cohen Act requires agencies to assess and manage the risks of major information
systems, including the application of the risk-adjusted return on investment criterion in deciding
whether to undertake particular investments.
While risk and uncertainty are often used interchangeably, in statistics their definitions are
distinct:
• Risk is the chance of loss or injury. In a situation that includes favorable and unfavorable events, risk is the probability that an unfavorable event will occur.
• Uncertainty is the indefiniteness about the outcome of a situation. It is assessed in cost estimate models to estimate the risk (or probability) that a specific funding level will be exceeded.
Therefore, while both risk and uncertainty can affect a program’s cost estimate, enough data
will never be available in most situations to develop a known frequency distribution. Cost
estimating is analyzed more often for uncertainty than risk, although many textbooks use both
terms to describe the effort.
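The point that summing the most likely WBS element values does not yield the most likely program total can be illustrated with a small Monte Carlo sketch; the triangular distributions below are invented, and the 50th and 80th percentiles printed correspond to the confidence points the cost group is asked to identify:

# Monte Carlo roll-up of notional WBS element uncertainties (triangular
# low/most-likely/high distributions). Distributions are illustrative only.
import random
random.seed(1)

elements = {  # (low, most likely, high) in $M
    "Air Vehicle": (400.0, 465.0, 620.0),
    "System T&E": (50.0, 65.0, 110.0),
    "Training": (20.0, 25.0, 45.0),
}

sum_of_most_likely = sum(ml for _, ml, _ in elements.values())
totals = sorted(
    sum(random.triangular(lo, hi, ml) for lo, ml, hi in elements.values())
    for _ in range(20000)
)
print(f"Sum of most likely values: {sum_of_most_likely:.0f} $M")
print(f"50th percentile total:     {totals[len(totals) // 2]:.0f} $M")
print(f"80th percentile total:     {totals[int(0.8 * len(totals))]:.0f} $M")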
5.5.5.2 Technology Readiness Levels (TRLs)
Technology Readiness Levels (TRLs) are often used in early analysis to determine potential
costs, uncertainties, and risks. However, given the early stages of some alternative development
there may not be credible TRL scores available. Table 5-5, from the 2003 Society of Cost
Estimating and Analysis (SCEA) "Cost Risk Analysis" paper, may be of some assistance in
identifying the risk areas associated with technology and hardware components of alternatives.
Using the risk category (relates somewhat to phase of the life cycle) and the descriptors of the
current state of the alternative or subsystem, a risk score (0-10) can be assigned which will help
to identify relative risks amongst alternatives.
For example, in the Technology Development phase of the life cycle (roughly equivalent to
"Technology advancement" in the table), if the alternative being considered represents the
state of the art (i.e., is in use today), then the risk of requiring significant investment dollars
for R&D would be low (or "0" in the table). On the other hand, in the Engineering and
Manufacturing Development phase (roughly equivalent to "Engineering development" in the
table), if the alternative has only a concept defined, the risk of having to invest sizable
amounts of funding into the development and testing of that concept is high (or "10" in the table).
This is provided as a tool to help teams evaluate potential cost risks and uncertainty as they
apply to AoA alternatives; other methods may be appropriate as well.
Table 5-5: A Hardware Risk Scoring Matrix
Risk score: 0 = low, 5 = medium, 10 = high
1. Technology advancement
   0: Completed, state of the art
   1–2: Minimum advancement required
   3–5: Modest advancement required
   6–8: Significant advancement required
   9–10: New technology
2. Engineering development
   0: Completed, fully tested
   1–2: Prototype
   3–5: Hardware and software development
   6–8: Detailed design
   9–10: Concept defined
3. Reliability
   0: Historically high for same system
   1–2: Historically high on similar systems
   3–5: Modest problems known
   6–8: Serious problems known
   9–10: Unknown
4. Producibility
   0: Production and yield shown on same system
   1–2: Production and yield shown on similar system
   3–5: Production and yield feasible
   6–8: Production feasible and yield problems
   9–10: No known production experience
5. Alternative item
   0: Exists or availability on other items not important
   1–2: Exists or availability on other items somewhat important
   3–5: Potential alternative in development
   6–8: Potential alternative in design
   9–10: Alternative does not exist and is required
6. Schedule
   0: Easily achieved
   1–2: Achievable
   3–5: Somewhat challenging
   6–8: Challenging
   9–10: Very challenging
Source: © 2003, Society of Cost Estimating and Analysis (SCEA), "Cost Risk Analysis."
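As a small illustration of applying the matrix above, an alternative can be scored in each risk category and the results compared across alternatives; the scores below are invented for the sake of the example and do not reflect any real systems:

# Record notional hardware risk scores (0-10) for two alternatives against the
# six categories in the matrix above, then compare them. Scores are invented.

scores = {
    "Alternative 1": {"Technology advancement": 2, "Engineering development": 4,
                      "Reliability": 3, "Producibility": 2, "Alternative item": 1,
                      "Schedule": 3},
    "Alternative 2": {"Technology advancement": 7, "Engineering development": 8,
                      "Reliability": 9, "Producibility": 6, "Alternative item": 4,
                      "Schedule": 7},
}

for alt, by_category in scores.items():
    worst = max(by_category, key=by_category.get)
    print(f"{alt}: scores {sorted(by_category.values())}, highest-risk area: {worst}")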
5.5.5.3 Software Cost Risk
Another source of cost risk to alternatives is software development, modification, and
integration. These aspects are evaluated in a fashion similar to the hardware aspects described in the
previous section. Table 5-6 is a guide to help ensure that estimates reflect what is known and
not known about the development effort required for the software portion of the
estimate. Like other sources of risk, this needs to be addressed in the LCCE. Table 5-6,
the Software Risk Scoring Matrix, was developed by the Air Force and published in the GAO Cost
Estimating Guide. It helps the study team decide where software cost risk is most
prevalent. As an example of how to use the matrix, a subject matter
expert (SME) would determine whether Design Engineering is scored as "0" (design
complete and validated) or as high as "10" (requirements partly defined).
Table 5-6: A Software Risk Scoring Matrix
Risk score: 0 = low, 5 = medium, 10 = high
1. Technology Advancement
   0: Proven conventional analytic approach, standard methods
   1–2: Undemonstrated conventional approach, standard methods
   3–5: Emerging approaches, new applications
   6–8: Unconventional approach, concept in development
   9–10: Unconventional approach, concept unproven
2. Design Engineering
   0: Design complete and validated
   1–2: Specifications defined and validated
   3–5: Specifications defined
   6–8: Requirements defined
   9–10: Requirements partly defined
3. Coding
   0: Fully integrated code available and validated
   1–2: Fully integrated code available
   3–5: Modules integrated
   6–8: Modules exist but not integrated
   9–10: Wholly new design, no modules exist
4. Integrated
   0: Thousands of instructions
   1–2: Tens of thousands of instructions
   3–5: Hundreds of thousands of instructions
   6–8: Millions of instructions
   9–10: Tens of millions of instructions
5. Testing
   0: Tested with system
   1–2: Tested by simulation
   3–5: Structured walk-throughs conducted
   6–8: Modules tested but not as a system
   9–10: Untested modules
6. Alternatives
   0: Alternatives exist; alternative design not important
   1–2: Alternatives exist; design somewhat important
   3–5: Potential for alternatives in development
   6–8: Potential alternatives being considered
   9–10: Alternative does not exist but is required
7. Schedule and management
   0: Relaxed schedule, serial activities, high review cycle frequency, early first review
   1–2: Modest schedule, few concurrent activities, review cycle reasonable
   3–5: Modest schedule, many concurrent activities, occasional reviews, late first review
   6–8: Fast track on schedule, many concurrent activities
   9–10: Fast track, missed milestones, review at demonstrations only, no periodic reviews
Source: U.S. Air Force.
5.6 Cost Results Presentation
The format illustrated in Figure 5-2 is used to display the AoA cost analysis results; it allows
the costs for each alternative and LCC element to be directly compared. This format can be
used to present both Base Year (BY$) and Then Year (TY$) costs.
Figure 5-2: General LCC Summary (By Alternative)
Figure 5-3 presents each alternative's cost in terms of fiscal year spread and appropriation.
Again, this format can be used for both BY$ and TY$. The results should be graphically
displayed for presentation. Notice sunk costs are excluded from the estimates in both examples.
Figure 5-3: Cost by Fiscal Year and Appropriation
5.7 Cost Documentation
A complete set of cost documentation is an essential part of the AoA cost analysis. Without an
explanation of the data sources and methodology used for each element of the estimates, the
costs cannot be replicated and therefore may lack credibility. Chapter 3 of AFI 65-508, Cost
Analysis Guidance and Procedures, provides guidance on the level of documentation required.
Attachment 5 to the same instruction contains a cost documentation checklist useful in
determining the completeness of the cost documentation.
5.7.1 Tradespace Analysis during Alternative Comparison
Once the team determines the format and data requirements for use in developing tradespace
analysis during the alternative comparison phase of the study, cost analysis inputs will need to
be developed to feed that process. The actual format of the data required will vary from study
to study, so it will be incumbent upon the Study Director to identify both the cost and
effectiveness data required for production of the tradespace analysis early enough to make the analysis
useful. Refer to chapter 8 for more detail.
5.7.2 Cost Reviews
The AoA study team reviews the cost estimates for consistency and completeness. OAS also
reviews the cost section of the study plan and the final results as part of the overall AoA
assessment provided to the AFROC. Recent GAO reviews and AFROC requests have
reinforced the preference for an independent review of the cost estimates. This review should
be performed by an organization which has not been involved in creating the estimate.
All AoAs, regardless of Acquisition Category (ACAT) level or Joint Staffing Designator (JSD),
should have their cost analyses independently reviewed at the end of the study. An AFCAA
independent review is the most desirable as it is the most robust and includes a sufficiency
memorandum. However, resource realities and priorities mean AFCAA is not always able to
perform the independent review of all AoAs. When AFCAA is unable to provide a complete
independent review, the team needs to identify the level of independent review they can
achieve.
The recommended process for obtaining an independent review of an AoA study cost analyses
is as follows:
1. Study Directors must contact AFCAA and determine if they will conduct the
independent review. This is best done by the lead command sending a memorandum to
AFCAA requesting they participate in the study and conduct the independent review. If
AFCAA can provide this support, they will be included as part of the Cost Analysis
Working Group (CAWG) and allowed to provide real-time assistance and guidance in
order to shorten the “review” process at the completion of the study.
2. If AFCAA is not able to conduct the review they still may be willing to participate in the
cost methodology development and planning, but not commit to the more formal
independent review.
3. Once the level of AFCAA participation is known, the study team needs to consider whether
to approach another organization to do the independent review. The options may
include OAS, Product Center Financial Management (FM) office or a MAJCOM FM.
The “correct” choice will be based in part upon issues of independence and resource
availability. In Joint AoAs it may even be an appropriate financial management office
from another Service or Agency.
4. In the absence of a government led independent review, Study Directors may choose to
find a contractor organization to perform the review.
5. In all cases in which AFCAA is not able to perform the sufficiency/non-advocate
reviews, the sufficiency review will only address the sufficiency and completeness of
the costing. It is not considered a Service Cost Position (SCP), as only the Service Cost
Agency can certify an SCP.
6 Performing the Risk Analysis
In addition to analyzing the operational effectiveness and life cycle cost, the study team
examines the risks associated with the various alternatives using the Risk Assessment
Framework (RAF). The RAF is a scalable Air Force enterprise-wide risk assessment approach
that fosters consistency and uniformity in the use of risk-related terminology within and across
the Air Force. The RAF is linked to the Chairman’s Risk Assessment definitions and the CJCS
Integrated Risk Matrix.
This chapter provides a brief overview of the RAF to help study teams identify and rate risks
associated with the alternatives. This risk assessment does not address the risks of conducting
the AoA; those risks are addressed in Section 3.3.
The application of the RAF to the AoA is new and still being defined. OAS is exploring
approaches for developing an implementation methodology to apply the RAF to the AoA; more
explicit details regarding how to use the RAF will be provided in future materials.
6.1 Risk Assessment Framework
The RAF provides a structured way for identifying and translating risks into a consistent and
comparable format. The RAF is based on a tree structure where the base of the tree represents
the aggregation of Service Core Functional objectives. Branches of the tree connect to nodes
representing activities that are vital to the accomplishment of the objectives. Finally, the
activities are connected to metrics that are designed to measure resource, schedule, or other
performance factors that impact the activities. The following describes how risk assessments
are accomplished up to the activity level. A similar assessment approach is used for levels
above the activity level (see the HAF/A9 website on the Air Force portal for additional details).
The RAF requires development of metrics with specific threshold values to assess risk. The
metrics are associated with activities that are impacted by resource, schedule, or other
performance factors as measured by the risk metrics. In the AoA, the activities may be
associated with the capability gaps, mission tasks, or measures of effectiveness and suitability.
Each risk metric for an activity is defined with two points (typically the success and failure
endpoints) and successive levels between the two endpoints. The lowest point of risk for a
metric is set such that the activity is assured of success as far as that metric is concerned. In
other words, no additional improvement in that metric will increase the activity’s chance of
success. Similarly, the highest point of risk for a metric is set such that the activity is assured to
fail as a result of the critical factor associated with that metric. In other words, no further
degradation in that metric can worsen the outcome beyond that assured failure. In between the low and high risk
points, there are thresholds marking risk assessment transitions from low to moderate to
significant to high (see Figure 6-1).
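To make the scale concrete, the following minimal sketch (illustrative only; the metric, its threshold values, and the rating boundaries are hypothetical and would come from the study team's own metric definitions) maps a metric value onto the low/moderate/significant/high scale described above.

```python
# Illustrative sketch only: the metric name and threshold values are hypothetical,
# not taken from the RAF or from this handbook.

def rate_metric(value, thresholds):
    """Map a metric value to a risk rating using ordered thresholds.

    `thresholds` is a list of (upper_bound, rating) pairs ordered from the
    low-risk (assured success) endpoint toward the high-risk (assured failure) endpoint.
    """
    for upper_bound, rating in thresholds:
        if value <= upper_bound:
            return rating
    return "high"  # beyond the last threshold, failure is assured for this metric

# Hypothetical metric: days of delay against a required delivery date.
delay_thresholds = [(30, "low"), (90, "moderate"), (180, "significant")]
print(rate_metric(45, delay_thresholds))   # -> "moderate"
print(rate_metric(400, delay_thresholds))  # -> "high"
```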
Once the metrics have been defined, the study team can use various techniques such as
professional military judgment, modeling and simulation, and data analysis to determine the risk
rating. The analysis is conducted to determine where on the scale the particular metric falls for
any given time frame and set of scenarios. The study team should explore the impact of
changes to assumptions, criteria, scenarios, force structures, and time frames on the risk ratings.
Those changes should be highlighted when discussing the results. The AoA risk assessment
should address the following questions:
• What defines success and failure in the scenario context? This should build upon the operational effectiveness analysis results.
o Which scenarios, timeframes, and force structure assumptions were used?
o How were the success and failure points determined for each scenario/timeframe, etc.?
• What is being done or recommended in the future to mitigate the identified risks?
o For operational – answer how well the gap can be mitigated by each alternative and to what level the operational risk is reduced. This enables decision makers to determine if that is an acceptable level.
o For schedule and technical/manufacturing – identify mitigation strategies that should be considered if there is a follow-on acquisition.
Using the metrics associated with each activity, the study team assesses the risk level for each
activity as low, moderate, significant, or high (see Figure 6-1). Typically, the risk assessment of
the activity is the same as the highest (worst) risk for the supporting metrics. If the worst-case
is not appropriate, professional military judgment may be applied, but the rationale should be
explained.
Figure 6-1: Standard Air Force Risk Scale Definitions
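Building on the hypothetical rate_metric sketch above, the following minimal illustration shows the default worst-case roll-up from metric ratings to an activity rating; the metric names and ratings are invented for the example.

```python
# Illustrative only: the metric names and ratings below are hypothetical.
RISK_ORDER = ["low", "moderate", "significant", "high"]

def activity_risk(metric_ratings):
    """Default activity rating: the worst (highest) of the supporting metric ratings.
    Any professional-military-judgment override of this default should be documented."""
    return max(metric_ratings, key=RISK_ORDER.index)

metric_ratings = {
    "schedule_delay": "moderate",
    "crew_workload": "significant",
    "range_shortfall": "low",
}
print(activity_risk(metric_ratings.values()))  # -> "significant"
```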
Presentation of the risk assessment results is expected to utilize a common format risk
statement. A risk statement is required for each of the risks identified during the risk
assessment. The study team will also need to identify the scenario(s), timeline(s) and force
structure(s) utilized for the AoA and their relationship to the identified risk element. The format
for the risk statement is:
“According to (organization), the (type) risk of (activity) is (assessment) with an (analytical
rigor level) for (context/timeframe/force structure) assuming (mitigation measures/authority).”
The key terms in the risk statement are defined as follows:
• Organization - organization accomplishing the risk assessment (study team)
• Type of risk – Operational, schedule, or technology/manufacturing (see Section 6.2 for
definitions of these risks)
• Activity – actions that are impacted by resource, schedule, or other performance factors
as measured by the risk metrics (for AoAs, activities could be associated with the
capability gaps, mission tasks, or measures)
• Assessment - defined risk levels of low, moderate, significant, and high (see Figure 6-1
for risk level definitions). This is done for each of the risks associated with each
activity. Each activity's assessment will be the same as the highest (worst) risk assessed
for its supporting metrics. If the "worst-case" risk level is not appropriate for the activity,
professional military judgment may be applied but must be documented and
substantiated for traceability and defensibility. This is usually the situation for AoAs
due to the limited knowledge about the alternatives and risks at this point in the process.
• Analytic Rigor Level - gives leadership a quick understanding of how well the
assessment embodies the desired attributes (defendable, measurable, repeatable,
traceable, linkable, implementable, scalable, and incorporates military judgment). Each
activity’s analytic rigor level will be set at the lowest rigor level of the metrics driving
the activity level risk assessment. Levels 1-3 are the most appropriate for an AoA. The
assessment levels are defined as:
o Level 1 - findings are based heavily on subject matter expertise. The assessment
process was not documented and lacks a tree structure and metrics. As a result, a
different set of subject matter experts could reasonably develop different results.
o Level 2 - assessment has limited structure. Discrete metrics are in place prior to
execution of the assessment. There is some ability to trace metrics to the core
function risk assessments via a tree structure.
o Level 3 - assessment process has a fully developed tree structure. There is a
traceable understanding of the linkages between vital objectives, activities and
metrics that compose the assessment.
o Level 4 - maximum level achievable to support an AFROC Risk Assessment.
Prior to the assessment, fully defensible linkages between the assessed metrics
and user (planning requirements) have been presented to the “Assessment
Requestor” for validation. Assessors have explored cross-functional mitigation
options and have included results in their assessment. Metric assessments are
conducted via documented and analytically rigorous methods. This level is rare
for an AoA; it will only be used when the AoA results are combined with other
analysis results in order to depict findings across one or more Service Core
Functions.
• Scenario - provides the additional information needed to frame the specific
environment in which the activity is assessed.
• Timeframe - timeframe for each assessment must be provided in guidance since it will
drive both friendly and hostile force assumptions.
• Force Structure - provides the force structure assumption behind the assessment (e.g.,
programmed force or programmed force extended).
• Mitigation/Measures/Authority - identifies mitigation actions already taken or assumed
across the areas of DOTMLPF-P by the organization making the assessment. This
information is essential to aid decision makers in understanding what actions have been
taken to date in order to best evaluate the situation and explore their risk management
options.
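Putting the format and the key terms above together, a minimal sketch of assembling a risk statement is shown below; every field value is hypothetical and does not describe any actual AoA.

```python
# Illustrative only: every field value below is hypothetical.
def risk_statement(organization, risk_type, activity, assessment, rigor_level,
                   context, mitigation):
    """Assemble a risk statement in the common format described above."""
    return (f"According to {organization}, the {risk_type} risk of {activity} is "
            f"{assessment} with an analytic rigor level of {rigor_level} for "
            f"{context} assuming {mitigation}.")

print(risk_statement(
    organization="the AoA study team",
    risk_type="schedule",
    activity="delivering the required capability by IOC",
    assessment="moderate",
    rigor_level=2,
    context="Scenario X, 2025 timeframe, programmed force",
    mitigation="no mitigation actions beyond those already programmed"))
```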
6.2 Risk Identification
Although many types of risks may exist (e.g., political, interoperability, etc.), senior decision
makers expect, at a minimum, the following risk assessments to be conducted in the AoA:
• Operational risk assessment - the degree to which the operational risk associated with
the specified gap could be mitigated if the alternative were implemented.
• Schedule and technology/manufacturing risk assessment - an assessment of the
Technology Readiness Levels (TRLs)/Manufacturing Readiness Levels (MRLs) for an alternative's
critical technology elements (CTEs) which could impact the likelihood of delivering the
required capability on schedule and within budget.
The following lists some areas for the study team to consider when identifying operational,
schedule, and technology/manufacturing risks:
• Determine the operational impact, if any, of revised thresholds based on the sensitivity
analysis performed during the effectiveness analysis. In other words, does the threshold value
need further adjustment based on the risk identified?
• Consider how threat capabilities might evolve, either before or in response to our
fielding of a potential alternative.
• Examine current and proposed IOC/FOC schedules, design, suppliers, operational
employment, resources, dependencies, etc.
• Identify testing requirements and their impacts on the various alternative timelines.
• Analyze negative trends in the industry or suppliers.
• Determine impact of interdependencies on other programs/efforts to provide the full
capability needed to appropriately mitigate the specified gap
• Determine the level of coalition force support needed and the probability of getting that support.
A “Best Practice” is to recognize that risk identification is the responsibility of every member
of the AoA team, and should occur throughout the conduct of the study.
The study team should consider the following when identifying sources of risk:
• Threat - The sensitivity of the alternatives to uncertainty in the threat description, the
degree to which the alternative or its employment would have to change if the threat's
parameters change, or the vulnerability of the alternative to foreign intelligence
collection efforts (sensitivity to threat countermeasure).
• Test and Evaluation - The adequacy and capability of the test and evaluation process
and community to assess attainment of performance parameters and determine whether
the alternative is operationally effective, operationally suitable, and interoperable.
[Note: this requires T&E membership on the study team.]
• Modeling and Simulation (M&S) - The adequacy and capability of M&S to support all
life cycle phases of an alternative using verified, validated, and accredited models and
simulations.
• Technology - The degree to which the technology proposed for the alternative has
demonstrated sufficient maturity (TRL) to be realistically capable of providing the
required capability.
• Logistics - The ability of the alternative's support concepts to achieve the sustainment
KPP thresholds based on the alternative technical description, maintenance concept,
expected availability of support data and resources, and the ability of the associated
maintenance concept to handle the expected workload.
• Concurrency - The sensitivity of the alternative to uncertainty resulting from the
combining or overlapping of life cycle phases or activities.
• Industrial Capabilities - The degree to which the manufacturing/industrial base has
demonstrated sufficient maturity (MRL) to be realistically capable of providing the
required capability.
• Schedule - The sufficiency of the time allocated by the estimated schedule to deliver the
required capability by IOC/FOC.
• Command and Control - The ability of the alternative to work within the existing C2
environment as well as the ability of alternatives being evaluated to perform C2
functions in the operational environment, if appropriate.
• Interoperability - The ability of alternatives being evaluated to work with existing or
planned systems in the operational environment. This may be C2 interoperability, the
ability to coordinate fires from another weapon system, or the ability of a new
component in an existing system to operate with the remaining subsystems.
• CONOPS - The impact of various aspects of the operational concept for an alternative
on its mission effectiveness. For example, will basing in certain areas impact targets
held at risk? What risk does that represent in operational or political terms?
• Intelligence - The ability of resources expected to be available at IOC/FOC to provide
the intelligence data required by the alternative, in the right format, in a timely fashion
to allow the alternative to function as envisioned.
6.3 Using Previous Analyses
The completed CBA(s) or other analyses that identified the specific gaps to be analyzed in the
AoA also should have identified the operational risk associated with not filling that gap. This
information should be used as a starting point for determining risks associated with the
alternatives. The information may also reduce the requirement for additional analysis to support
the risk assessment.
The following resources will aid in conducting the risk assessment:
• Chairman's Risk Assessment
• CJCS Integrated Risk Matrix and associated AF/A9 Risk Assessment Framework (RAF)
• DoD Technology Readiness Assessment (TRA) Deskbook, July 2009, prepared by DDR&E
• Defense Acquisition Guidebook
• Risk Management Guide for DoD Acquisition
• SAF/AQ Guidance Memorandum on Life Cycle Risk Management (as based on the Risk Management Guide for DoD Acquisition)
7 Assessing Sustainability in the Analysis of Alternatives Study
7.1 Introduction
Acquiring systems that are both effective in meeting mission requirements and sustainable at
lower total ownership costs continues to be a top priority in the Air Force. Early decisions in the
acquisition life cycle have long-term sustainability implications that impact costs and mission
effectiveness. Since most of the life cycle costs of a program are locked-in early during the
technology development phase, it is important to address sustainability early in the acquisition
process. The early stages of the acquisition process provide the best opportunity to maximize
potential sustainability and mission capability. Accordingly, sustainability should be addressed in
the AoA study to ensure Air Force senior leaders make informed decisions that result in
sustainable and effective systems that meet mission requirements.
7.2 What is Sustainability?
Sustainability is a system’s capability to maintain the necessary level and duration of operations
to achieve military objectives. Sustainability depends on ready forces, materiel, and consumables
in enough quantities and working order to support military efforts. Sustainability encompasses a
wide range of elements such as systems, spare parts, personnel, facilities, documentation, and
data. Sustainability performance not only impacts mission capability, but is also a major factor
that drives the life cycle cost of a system. Maintainability issues, for example, could considerably
increase life cycle costs by increasing the number of maintainers needed to sustain a system in the
field. In other situations, significant Human Systems Integration (HSI) issues may increase an
operator's workload, or poor reliability performance could result in low operational availability.
7.3 Defining the Maintenance Concept and Product Support Strategy
Defining how alternatives will be employed in the operational environment is an essential step in
conducting the sustainability analysis in the AoA study. The concept of employment (CONEMP)
for each alternative should be defined in the CCTD document and include descriptions of the
projected maintenance concept and product support strategy. Given that the alternatives are
primarily developmental or conceptual at this early stage of the life cycle, defining the
maintenance concept and product support strategy can be challenging and may require the
assistance of system engineers and acquisition logistics, maintenance, supply, and transportation
specialists. In some situations, the maintenance concept and product support strategy may be
based on similar existing systems that are relevant to the alternatives being considered in the AoA
study. In situations where the alternative systems are new concepts, there may not be any existing
systems that are sufficiently similar to use in defining the maintenance concept and product
support strategy. In these cases, assistance from system engineers and other logistics specialists
to help define the maintenance concept and product support strategy is particularly important.
The maintenance concept is a general description of the maintenance tasks required in support of
a given system or equipment and the designation of the maintenance level for performing each
task. The maintenance concept is eventually implemented through a Life Cycle Sustainment
Plan. As an example, assume the “system” is a computer, with a CPU, keyboard, and mouse.
The maintenance concept for this system is a two-level concept, organizational and depot. The
organizational level maintenance will restore the computer to service by the removal and
replacement of the Line Replaceable Units (LRU) (e.g., the CPU, mouse, and keyboard). The
organizational level will forward the failed LRU to the depot for repair by removal or replacement
of failed assemblies, subassemblies, or parts based on economic criteria (i.e., repair or discard).
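As a minimal sketch of the two-level computer example above (illustrative only; the data structure and field names are hypothetical, not a prescribed format), a maintenance concept can be captured as a simple mapping of each LRU to the level that removes and replaces it and the level that repairs it:

```python
# Illustrative only: structure and values are hypothetical, not a prescribed format.
maintenance_concept = {
    "system": "computer",
    "levels": ["organizational", "depot"],
    "lrus": {
        # LRU: (level that removes/replaces it, level that repairs or discards it)
        "CPU":      ("organizational", "depot"),
        "keyboard": ("organizational", "depot"),
        "mouse":    ("organizational", "depot"),
    },
}

for lru, (replace_level, repair_level) in maintenance_concept["lrus"].items():
    print(f"{lru}: removed/replaced at the {replace_level} level, "
          f"repaired or discarded at the {repair_level} level")
```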
Product support consists of the management and technical activities and resources needed to
implement the maintenance concept, and establish and maintain the readiness and operational
capability of a weapon system, its subsystems, and its sustainment infrastructure. Product support
encompasses materiel management, distribution, technical data management, maintenance,
training, cataloging, configuration management, engineering support, repair parts management,
failure reporting and analyses, and independent logistics assessments.
Product support is implemented by the Performance-based Logistics (PBL) strategy which seeks
to optimize system availability while minimizing cost and the logistics footprint. The PBL
strategy should be tailored to fit the individual system in the intended operational environment for
the duration of its projected service life. The PBL strategy defines performance in terms of
military objectives using criteria such as operational availability, operational reliability, total cost,
logistics footprint, and logistics response time. PBL applies to both retail (base or organizational
level) logistics operations and wholesale (depot) logistics operations. While the provider of the
support may be public, private, or a public-private partnership, the focus is to achieve maximum
weapon system availability at the lowest Total Ownership Cost (TOC).
7.4 Sustainability Performance, Cost, and Risk
Sustainability of materiel solutions should be analyzed in the AoA study in terms of performance,
cost, and risk. The following provides key methodological insights into the analysis of
sustainability with respect to performance, cost, and risk. More detailed information can be found
in the reference sources listed at the end of this section.
7.4.1 Sustainability Performance Analysis
The AoA study provides the analytic basis for establishing an initial set of performance measures
associated with concepts of sustainability such as reliability, availability, and maintainability.
These measures are referred to as measures of suitability (MOS) and are designed to measure a
system’s capability to support mission accomplishment. MOS’s are essential for conducting the
sustainability analysis and should address sustainability related performance requirements
identified or implied in previous studies such as Capabilities-Based Assessments (CBAs) and
requirements documents such as the Initial Capabilities Document (ICD). The analyst should
consider the sustainment concepts and attributes described in Table 7-1 in developing the MOSs.
Table 7-1: Sustainability Concepts/Attributes
Concept/Attribute | Description
Availability
A measure of the degree to which an item is in an operable and committable state at the
start of a mission when the mission is called for at an unknown (random) time. (MIL-HDBK-502, 30 May 1997)
Reliability
The ability of a system and its parts to perform its mission without failure, degradation,
or demand on the support system. (AFI63-101, 8 April 2009)
Maintainability
The ability of an item to be retained in, or restored to, a specified condition when
maintenance is performed by personnel having specified skills using prescribed
procedures and resources at each prescribed level of maintenance and repair. (AFI63-101,
8 April 2009)
Deployability
The inherent ability of resources to be moved, used, sustained, and recovered with ease,
speed, and flexibility to meet mission requirements. (AFPAM63-128, 5 October 2009)
Supportability
The degree to which system design characteristics and planned logistics resources,
including manpower, meet system peacetime readiness and wartime utilization
requirements. (AFI63-101, 8 April 2009)
Interoperability
The ability of U.S. and coalition partner systems, units, or forces to provide data,
information, materiel, and services to and accept the same from other systems, units, or
forces, and to use the data, information, materiel, and services so exchanged to enable
them to operate effectively together. (JCIDS Manual, 31 January 2011)
Compatibility
The capability of two or more items or components of equipment or material to exist or
function in the same system or environment without mutual interference. Common types
of compatibility include electrical, electromagnetic, human-systems interface, and
physical. (Human Systems Integration Requirements Pocket Guide, USAF Human
Systems Integration Office, September 2009)
Transportability
The capability of material to be moved by towing, self-propulsion, or carrier through any
means such as railways, highways, waterways, pipelines, oceans, space, and airways.
(Joint Publication 1-02, DoD Dictionary of Military and Associated Terms, 8 November
2010)
Environment
Air, water, land, space, cyberspace, markets, organizations, living things, built
infrastructure, cultural resources, and the interrelationships that exist among them.
Environmental considerations may affect the concept of operations and requirements to
protect systems from the environment and to protect the environment from system
design, manufacturing, operations, sustainment, and disposal activities. (Human Systems
Integration Requirements Pocket Guide, USAF Human Systems Integration Office,
September 2009)
Human Systems Integration
The integrated, comprehensive analysis, design and assessment of requirements, concepts
and resources for system Manpower, Personnel, Training, Environment, Safety,
Occupational Health, Habitability, Survivability and Human Factors. (AFI10-601)
System Training
All training methodologies (embedded, institutional, Mobile Training Team, computer,
and web-based) that can be used to train and educate operator and maintainer personnel
in the proper technical employment and repair of the equipment and components of a
system and to educate and train the commanders and staffs in the doctrinal tactics,
techniques, and procedures for employing the system in operations and missions. (JCIDS
Manual, 31 January 2011)
Safety
Promotes system design characteristics and procedures to minimize the potential for
accidents or mishaps that: cause death or injury to operators, maintainers, and support
personnel; threaten the operation of a system or cause cascading failures in other systems.
(Human Systems Integration Requirements Pocket Guide, USAF Human Systems
Integration Office, September 2009)
Occupational Health
Promotes system design features and procedures that serve to minimize the risk of injury,
acute or chronic illness or disability, and enhance job performance of personnel who
operate, maintain, or support the system. (Human Systems Integration Requirements
Pocket Guide, USAF Human Systems Integration Office, September 2009)
Utilization Rate
The average life units expended or missions attempted (launched and airborne) per
system or subsystem during a specified interval of time. (AFPAM63-128, 5 October
2009)
Documentation
Operator and maintenance instructions, repair parts lists, and support manuals, as well as
manuals related to computer programs and system software such as the software load
instruction, user manuals, and system administrator manuals. (AFOTECPAM99-104, 9
November 2010)
The analyst must consider various factors such as the study questions and objectives, the maturity
of the alternative concepts, and data availability when selecting measures for the analysis. For
example, emerging or developmental systems may not have sufficient data to measure certain
aspects of sustainability. Given these factors, the analyst must use some judgment in determining
whether the selected measures are sufficient for conducting the sustainability performance
analysis.
As stated in Chapter 4, the description of the MOSs should include the supported mission task,
attribute, measure statement, criteria, and data information. Table 7-2 provides an example of a
sustainability task and its associated measure parameters. At a minimum, the measure criteria
should identify the threshold standard (i.e., the minimum acceptable operational value of a system
capability or characteristic below which the utility of the system becomes questionable) and if
necessary, an objective standard (i.e., an operationally significant increment above the threshold).
An objective value may be the same as the threshold when an operationally significant increment
above the threshold is not identifiable.
Table 7-2: Measure of Suitability Description Example
Analysts typically rely on a combination of study methods to collect and analyze data and assess
the sustainability of alternative systems. Selection of the study method depends largely on the
data requirements, availability of applicable tools or techniques, and the maturity and specificity
of the alternatives. Several commonly used methods are described below:
Modeling and Simulation (M&S): A model is a physical, mathematical, or logical representation
of a system, entity, phenomenon, or process that allows for investigation of the properties of the
system. A simulation is a method for implementing a model over time. M&S offers several
advantages such as repeatability and control since events can be replicated under controlled
conditions.
An example of M&S that has been used to analyze sustainability of systems is the Logistics
Composite Model (LCOM). LCOM is an Air Force Standard Analysis Toolkit (AFSAT) model
used to identify the best mix of logistical resources to support a given weapon system under
certain operational constraints (e.g., aircraft sortie rates, maintenance and supply policies,
manpower levels, and spare part quantities). Logistics resources include manpower, spare parts,
support equipment, and facilities. The supportability of design alternatives can be evaluated by
varying the reliability and maintainability characteristics of the components and tasks contained
in the database. The impact of policy decisions (e.g., organizational, maintenance concepts, and
personnel) upon resource requirements or sortie generation capability can be analyzed as well.
Concept Characterization: Also referred to as “alternative characterization”, this method uses
data and information gleaned from CCTD documents, Requests for Information (RFI), and other
documents (e.g., reports, studies, and analyses). Once verified by the analyst, the data and
information can be used in various ways. For example, data may be used as inputs to parametric,
statistical, or simulation models (e.g., altitude and range parameters are used along with other
variables as inputs to a model to determine survivability of a system). Other possible uses of the
data and information include resolving measures (e.g., the number of 463L pallet positions
required for transport of an alternative identified in the CCTD is used to determine whether the
alternative meets the two pallet position threshold standard for transport) as well as identifying
operational, technical, and programmatic risks associated with sustainability.
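As a minimal sketch of resolving a measure from concept characterization data (using the 463L pallet example above; the alternative names, pallet counts, and threshold value are hypothetical):

```python
# Illustrative only: alternative names, pallet counts, and the threshold are hypothetical.
PALLET_THRESHOLD = 2  # notional maximum 463L pallet positions allowed for transport

pallet_positions = {"Alternative 1": 2, "Alternative 2": 3, "Alternative 3": 1}

for name, pallets in pallet_positions.items():
    verdict = "meets" if pallets <= PALLET_THRESHOLD else "does not meet"
    print(f"{name}: {pallets} pallet positions; {verdict} the transport threshold")
```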
Expert Elicitation: Expert elicitation is a structured approach of gathering subject matter expert
judgment and answering questions concerning issues or problems of interest in a study. Since
expert judgment is affected by the approach used to gather it, a specially designed process is
required that includes procedures for developing questions, conducting the elicitation, and
handling biases that may arise. Although the process is formal and structured, it can differ in
terms of the degree of interaction between experts, level of detail in information elicited, number
of meetings, type of communication method, and degree of structure in the elicitation process.
Individual or group interviews are commonly used to elicit the information.
Expert elicitation is particularly useful for collecting information from subject matter experts
regarding the deployability, transportability, and maintainability of alternatives. For example,
after reviewing technical and design information associated with each alternative, maintenance
experts are asked to answer a series of questions on the ease of maintainability of critical
components of each alternative.
Comparative Analysis: The purpose of the comparative analysis is to select or develop a Baseline
Comparison System (BCS) that represents characteristics of the new system for projecting
supportability related parameters, making judgments concerning the feasibility of the new system
supportability parameters, and determining the supportability, cost, and readiness drivers of the
new system.
A BCS may be developed using a composite of elements from different existing systems when a
composite most closely represents the design, operation, and support characteristics of a new
system alternative. The analysis requires the use of experience and historical data on similar
existing systems that are relevant to the materiel solutions being considered in the AoA study. If
support parameters (e.g., resupply time, turnaround times, transportation times, and personnel
constraints) are to be projected, then current systems (support systems) which are similar to the
new system's support concept must be identified. This may be a support system completely
different from the one supporting systems of similar design characteristics.
The level of detail required in describing comparative systems will vary depending on the amount
of detail known about the new system's design, operational, and support characteristics and the
accuracy required in the estimates for new system parameters. Early in the system life cycle,
when the design concept for the new system is very general, only a general level comparative
system description should be established. For this preliminary analysis, the analyst should
identify existing systems and subsystems (hardware, operational, and support) useful for
comparative purposes with new system alternatives. The results of the analyses can help identify
supportability, cost, and readiness drivers of each significantly different new system alternative.
7.4.2 Operations and Support Cost Analysis
Operations and Support (O&S) cost is the cost element associated with sustainability. In
determining O&S cost, the cost analysis should include the support resources necessary to
achieve specified levels of readiness for a range of assumptions regarding various aspects such as
system reliability, maintainability, usage rates, and operating scenarios. Because of their potential
impact on product performance, readiness, and cost, all manpower and personnel requirements
(i.e., quantities, skills, and skill levels) should be identified and evaluated early. Due to the
uncertainty in estimating resource costs such as manpower and energy, sensitivity analyses should
be performed to help identify the various factors which drive life cycle costs.
The O&S cost element structure is divided into six major categories (Table 7-3). If a cost applies
to a system, the cost structure identifies where a specific type of cost should appear in the
estimate. Some cost elements refer to expenses that may not apply to every system. For example,
ground radar systems do not have training munitions or expendable stores. In this case, the O&S
estimate for the radar system would omit (or record as zero) that portion of the cost structure.
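As a minimal sketch of how an estimate can be organized around the six-category structure in Table 7-3 below (illustrative only; the dollar values are hypothetical, and elements that do not apply are simply carried as zero):

```python
# Illustrative only: notional then-year $M figures for a hypothetical ground radar system.
# Elements that do not apply (e.g., training munitions) are simply carried as zero.
o_and_s_estimate = {
    "1.0 Unit Personnel": 420.0,
    "2.0 Unit Operations": 150.0,
    "3.0 Maintenance": 310.0,
    "4.0 Sustaining Support": 95.0,
    "5.0 Continuing System Improvement": 60.0,
    "6.0 Indirect Support": 40.0,
}

for element, cost in o_and_s_estimate.items():
    print(f"{element}: ${cost:,.1f}M")
print(f"Total O&S: ${sum(o_and_s_estimate.values()):,.1f}M")
```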
Table 7-3: Operations and Support Cost Element Categories
Cost Element | Description
1.0 Unit Personnel
Cost of operators, maintainers, and other support personnel assigned to operating units.
Includes active and reserve military, government civilian, and contractor personnel costs.
While the cost elements in this category separate operators, maintainers, and other direct
support personnel, unit personnel sometimes serve in more than one capacity. If this
occurs, ensure that all three types are accounted in one of the categories and group the
personnel specialties using their predominant responsibility. To the extent possible,
government personnel costs will be based on personnel grades and skill categories. Costs
of military, government civilian and contractor personnel will be separately shown in the
estimate of unit personnel costs.
2.0 Unit Operations
Cost of unit-level consumption of operating materials such as fuel, POL, electricity,
expendable stores, training munitions and other operating materials. Also included are
any unit-funded support activities; training devices or simulator operations that uniquely
support an operational unit; temporary additional duty/temporary duty (TAD/TDY)
associated with the unit’s normal concept of operations; and other unit funded services.
Unit-funded service contracts for administrative equipment as well as unit-funded
equipment and software leases are included in this portion of the estimate. Unit operating
costs provided through a system support contract will be separately identified from those
provided organically.
3.0 Maintenance
Cost of all maintenance other than maintenance personnel assigned to operating units
(includes contractor maintenance). Includes the costs of labor above the organizational
level and materials at all levels of maintenance in support of the primary system,
simulators, training devices, and associated support equipment. Where costs cannot be
separately identified to separate levels of maintenance, use the category that represents the
predominant costs. All maintenance costs provided through a system support contract will
be separately identified within the appropriate cost element.
4.0 Sustaining Support
Cost of support activities other than maintenance that can be attributed to a system and are
provided by organizations other than operating units. Includes support services provided
by centrally managed support activities not funded by the units that own the operating
systems. It is intended that costs included in this category represent costs that can be
identified to a specific system and exclude costs that must be arbitrarily allocated. Where
a single cost element includes multiple types of support, each should be separately
identified in the cost estimate.
5.0 Continuing System Improvement
Cost of hardware and software modifications to keep the system operating and
operationally current. Includes the costs of hardware and software updates that occur
after deployment of a system that improve a system's safety, reliability, maintainability, or
performance characteristics to enable the system to meet its basic operational
requirements throughout its life. These costs include government and contract labor,
materials, and overhead costs. Costs will be separated into government and contractor
costs within each cost element.
6.0 Indirect Support
Cost of support activities that provide general services that cannot be directly attributed to
a system. Indirect support is generally provided by centrally managed activities that
support a wide range of activities. Indirect support costs are those installation and
personnel support costs that cannot be directly related to the units and personnel that
operate and support the system being analyzed. O&S cost analyses should include
marginal indirect costs. The intention is to include only the costs that would likely result
in changes to DoD budgets if the action being analyzed (e.g., new system development,
etc.) occurs.
The Department of Defense is increasingly using contract support for many aspects of system
operations and support, including functions that have historically been provided by government
organizations. Knowing the maintenance concept and product support strategy for each O&S
function is important to cost estimators. O&S cost estimates should clearly identify the expected
source of support for each element of an O&S cost estimate.
Interim contractor support (ICS) provides logistics support on a temporary basis until a
government support capability is established. The scope and duration of ICS varies depending on
acquisition strategy and other management decisions. ICS costs are normally included in the
O&S estimate, unless explicitly covered in the production/investment cost estimate.
7.4.3 Sustainability Risk Assessment
The design, maintenance concept, product support strategy, support system design, and
availability of support data and resources are significant sources of risk to the sustainability of a
system. Risks associated with sustainability should be assessed early in the acquisition since
failing to do so could have significant consequences in the program's later phases.
The risk assessment of sustainability constraints and concepts should be an integral part of the
sustainability analysis. The assessments should identify risk drivers, determine the sensitivity of
interrelated risks, and quantify risk impacts. Again, the analyst should rely on experience and
historical data to help identify risk factors.
For more information, refer to the following sources of information:
• Air Force Analysis of Alternatives Measure Development Process and Guidelines. July 2011. Office of Aerospace Studies, Kirtland AFB, NM.
• Air Force Analysis of Alternatives Suitability Measures. July 2011. Office of Aerospace Studies, Kirtland AFB, NM.
• AFPAM 63-128, Guide to Acquisition and Sustainment Life Cycle Management. October 5, 2009.
• AFI63-101, Acquisition and Sustainment Life Cycle Management. April 8, 2009.
• Department of Defense Risk Management Guide for DoD Acquisition, Sixth Edition. August 2006.
• Department of Defense Acquisition Logistics Handbook. MIL-HDBK-502. May 30, 1997. USAMC Logistics Support Activity, Redstone Arsenal, AL.
7.5 Reliability, Availability, Maintainability and Cost Rationale Report
For efforts designated as JROC Interest, an initial Reliability, Availability, Maintainability and
Cost Rationale Report (RAM-C Report) should be developed as part of the AoA study. For all
other efforts, the required RAM-C Report is determined by the DoD component. The report is
designed to ensure effective collaboration between the requirements and acquisition communities
in the establishment of RAM requirements for the Milestone A decision. Although this report
may be limited in scope due to the many unknowns at this stage, it will still describe the
reliability, availability, and maintainability requirements, assumptions, rationale, and ownership
costs to ensure that effective sustainment is addressed early in the life cycle for all systems.
For more information, refer to the following sources of information:
• Department of Defense Reliability, Availability, Maintainability, and Cost Rationale Report Manual. June 1, 2009. Washington, DC, Office of the Secretary of Defense.
• Department of Defense Guide for Achieving Reliability, Availability, and Maintainability. August 3, 2005.
• USD AT&L Directive-Type Memorandum (DTM) 11-003 – Reliability Analysis, Planning, Tracking and Reporting. March 21, 2011.
7.6 Sustainment Key Performance Parameter
For all projected Acquisition Category (ACAT) I programs, sustainment is a mandatory Key
Performance Parameter (KPP) that should be addressed in the AoA study and the associated
initial RAM-C Report. For all other programs, the sponsoring command will determine the
applicability of the sustainment KPP. See Appendix L for additional information on KPPs.
As shown in Figure 7-1, the sustainment KPP consists of an availability KPP and two supporting
Key System Attributes (KSAs), reliability and ownership cost (see Appendix L for more
information on KSAs). It is important to note that AoA studies and other supporting analyses
provide the analytic foundation for determining appropriate threshold and objective values of
system attributes and aid in determining which attributes should be KPPs or KSAs.
Figure 7-1: Sustainment Key Performance Parameter
The availability KPP has two components, materiel availability and operational availability.
Materiel availability (MA) is the measure of the total inventory of a system operationally capable
(ready for tasking) of performing an assigned mission at a given time, based on materiel
condition. Development of this measure is a program manager responsibility and is determined
later in the acquisition cycle. Consequently, the measure would not be addressed in an AoA study
in the pre-Milestone A phase.
Operational availability (AO) is the percentage of time that a system or group of systems within a
unit is operationally capable of performing an assigned mission. Development of this measure is
the requirements manager responsibility and requires an analysis of the projected system and
planned use as identified in the CONEMP.
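As a point of reference only (a commonly used formulation rather than a definition prescribed by this handbook), operational availability is often expressed as the fraction of total time a system is up:

\[
A_O = \frac{\text{Uptime}}{\text{Uptime} + \text{Downtime}}
\]

where downtime typically includes both maintenance time and logistics (supply and administrative) delay time.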
Reliability is the ability of a system and its parts to perform its mission without failure,
degradation, or demand on the support system. The basic concept of reliability is that the system
performs satisfactorily, where satisfactorily implies a lack of a broad variety of undesirable events
and subsequent impact. Development of the measure is a requirements manager responsibility.
There are two aspects of reliability, mission reliability and materiel or logistics reliability.
Mission reliability is the capability of a system to perform its required function for the stated
mission duration or for a specified time into the mission. The mission reliability calculation
depends on the system and may include numbers of operating hours, critical failures, successful
missions, and sorties flown. Typical measures of mission reliability include break rate (BR),
mean time between critical failure (MTBCF), and weapon system reliability (WSR).
Materiel (logistics) reliability is the capability of a system to perform failure free, under specified
conditions and time without demand on the support system. All incidents that require a response
from the logistics system (i.e., both maintenance and supply systems) are addressed in the
measure. The materiel (logistics) reliability calculation depends on the system and may include
number of flight hours, maintenance events, operating hours, and possessed hours. A typical
measure of materiel (logistics) reliability is the mean time between maintenance (MTBM).
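For illustration only, the measures named above are commonly computed as simple ratios over a reporting period; the exact counting rules vary by system and data source:

\[
\text{MTBCF} = \frac{\text{total operating hours}}{\text{number of critical failures}}
\qquad
\text{MTBM} = \frac{\text{total operating hours}}{\text{number of maintenance events}}
\]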
Finally, ownership cost is the operations and support (O&S) cost associated with the availability
KPP. Ownership cost provides balance to the sustainment solution by ensuring that the O&S
costs associated with availability are considered in making decisions. The cost includes energy
(e.g., fuel, petroleum, oil, lubricants, and electricity), maintenance, manpower/personnel costs,
sustaining support, and continuing system improvements regardless of funding source. All
costs cover the planned life cycle timeframe. Fuel costs are based on the fully burdened cost of
fuel. The analysis should identify the associated sources of reference data, cost models, and
parametric cost estimating techniques or tools used to create the cost estimates. The ownership
cost is included as part of the total life cycle cost estimate in the AoA study.
For more information, refer to the following sources of information:
• Air Force Analysis of Alternatives Suitability Measures. July 2011. Office of Aerospace Studies, Kirtland AFB, NM.
• Manual for the Operation of the Joint Capabilities Integration and Development System (JCIDS Manual). January 19, 2012.
• Chairman of the Joint Chiefs of Staff Instruction (CJCSI 3170.01G), Joint Capabilities Integration and Development System. January 10, 2012.
8 Alternative Comparisons
The AoA must explore tradespace in performance, cost, risk and schedule across a full range of
alternatives to address validated capability requirements. Therefore, once the operational
effectiveness analysis results, life cycle cost estimates, and risk assessments are completed, it is
time to bring that information together and address overall sensitivities and tradeoffs through
comparative analysis.
Comparing the alternatives involves the simultaneous consideration of the alternatives' cost,
operational effectiveness, and associated risks; the outcome of this comparison highlights the factors
that influence the tradespace. Consumers are familiar with the concept of comparing alternatives,
whether buying laundry detergent, a new car, or a home. They collect data on costs and make
assessments on how well the alternatives will meet their needs (the effectiveness of the
alternatives) and any potential risks associated with each option. With data in hand, consumers
make comparisons and identify the tradespace to consider before buying the product or service.
In an AoA, the process is essentially the same. Keep in mind that there is rarely a clear-cut single
answer.
8.1 Alternative Comparison Methodology
8.1.1 Sensitivity Analysis during Alternative Comparison
Sensitivity analysis continues during this phase. It should leverage sensitivity analysis
accomplished in the operational effectiveness analysis, cost analysis, and risk assessments. The
previous sensitivity analyses should have identified the cost, schedule, risk and performance
drivers to be considered as part of the tradespace analysis being conducted during this phase. The
sensitivity analysis associated with this comparative analysis must accomplish the following to
ensure meeting the decision makers’ expectations and requirements for AFROC and CAPE
sufficiency review.
• Identify the proposed parameters for the RCT, along with recommended threshold/objective values for further exploration in the tradespace.
• Identify why those parameters are proposed for the RCT.
• Identify the assumptions and variables highlighted by the sensitivity analysis.
• Explore the sensitivity of the RCT values by addressing the impact of changes to cost, effectiveness, and performance on the alternative's ability to mitigate gaps.
• Identify key assumptions that drive results.
• Identify the conditions and assumptions for which an alternative is or is not affordable.
• Identify the conditions and assumptions for which an alternative does or does not adequately mitigate the operational gap.
• Identify how legacy forces complement the alternatives.
• Examine the robustness of the results, addressing the effectiveness, cost, and risk changes that alter the comparative relationships among the alternatives.
• Examine variations of cost elements identified as significant drivers. This is intended to identify the point at which further expenditure provides little additional value.
• Identify performance parameters that drive significant changes in mission effectiveness or are most likely to influence development and/or production cost.
8.1.2 Cost/Capability Tradeoff Analysis
The study team uses the cost/capability tradeoff analysis to determine the best value alternative
that provides acceptable capability to the warfighter. In conducting the analysis, the study team
should consider the affordability constraints expressed in the AoA guidance or ADM.
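One common way to frame this tradespace (an illustrative technique, not one mandated by this handbook) is to flag the non-dominated alternatives, i.e., those not beaten on both cost and effectiveness by any other option; the alternative names, costs, and effectiveness values below are hypothetical.

```python
# Illustrative only: names, LCCE values ($B), and effectiveness values (e.g., probability
# of survival) are hypothetical.
alternatives = {
    "Alt 1 (basic)": (2.1, 0.58),
    "Alt 1 + A":     (2.3, 0.68),
    "Alt 1 + B":     (2.7, 0.69),
    "Alt 2 (basic)": (2.4, 0.70),
    "Alt 2 + X":     (3.3, 0.81),
}

def non_dominated(alts):
    """Return the options not dominated by another option that is at least as cheap,
    at least as effective, and strictly better in at least one of the two."""
    keep = {}
    for name, (cost, eff) in alts.items():
        dominated = any(c <= cost and e >= eff and (c, e) != (cost, eff)
                        for other, (c, e) in alts.items() if other != name)
        if not dominated:
            keep[name] = (cost, eff)
    return keep

for name, (cost, eff) in non_dominated(alternatives).items():
    print(f"{name}: ${cost}B LCCE, effectiveness {eff}")
```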
Figure 8-1 shows an example presentation of the cost/capability tradeoff analysis results for a
notional Aircraft Survivability System. Alternatives 1 and 2 are the most viable of the
alternatives analyzed and are shown in the figure (note that non-viable alternatives are not
displayed). The life cycle cost estimates are shown in $B along the x-axis. The y-axis shows the
probability of survival for a specific ISC and vignette. The results from other scenarios and
vignettes can be shown in separate charts to help the decision makers understand how robust the
alternatives are in different scenarios/vignettes. Alternatively, the results associated with all the
scenarios and vignettes analyzed in the study can be combined and presented in one chart.
Probability of survival was selected since it will be a Key Performance Parameter (note that the
threshold and objective values are highlighted on the chart). Other possibilities for the y-axis
include reduction in lethality and loss exchange rate.
The table below the graph provides a summary showing the probability of survival and LCCE
values as well as the overall risk rating of the alternative for the increments of capability for each
alternative. The color rating for the probability of survival is based on whether the alternative
meets the threshold/objective value.
• Red: Did not meet threshold, significant shortfall
• Yellow: Did not meet threshold, not a significant shortfall
• Green: Met threshold
• Blue: Met objective
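A minimal sketch of this rating scheme follows (illustrative only; the threshold, objective, and the margin used to decide whether a shortfall is "significant" are hypothetical and would be set by the study team):

```python
# Illustrative only: threshold/objective values and the margin used to call a miss
# "significant" are hypothetical and would be set by the study team.
def color_rating(value, threshold, objective, significant_margin=0.10):
    """Rate a measure result against threshold/objective values (higher is better)."""
    if value >= objective:
        return "Blue"    # met objective
    if value >= threshold:
        return "Green"   # met threshold
    if value >= threshold * (1 - significant_margin):
        return "Yellow"  # missed threshold, but not a significant shortfall
    return "Red"         # significant shortfall

print(color_rating(0.62, threshold=0.70, objective=0.85))  # -> "Red"
print(color_rating(0.68, threshold=0.70, objective=0.85))  # -> "Yellow"
print(color_rating(0.74, threshold=0.70, objective=0.85))  # -> "Green"
print(color_rating(0.90, threshold=0.70, objective=0.85))  # -> "Blue"
```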
Figure 8-1: Aircraft Survivability System Cost/Capability Tradeoff Example
Alternative 1 with the basic capability is significantly below the threshold value and is therefore
rated red, whereas alternative 2 with the basic capability meets the threshold and is rated green.
Alternative 1 with the A and B increments of capability meet the threshold and are rated green,
while alternative 2 with the X and Y increments of capability meet the objective value, and are
therefore rated blue. In situations where there is no objective value (threshold = objective),
only the red, yellow, and green ratings should be used. In other situations where threshold and
objective values do not exist, the team will need to explain the difference in performance without
referencing these values. In this example, Alternative 1 with the A increment and Alternative 2
with the basic capability (circled in red) may be the best value options. Alternative 2 with the X
and Y increments (circled in blue) are the high performance, cost, and risk options.
Figure 8-2 shows another example presentation of the cost/capability tradeoff analysis results for
a notional Target Defeat Weapon. Alternatives 1, 2, and 3 are the most viable of the alternatives
analyzed and are shown in the chart. The life cycle cost estimates are shown in $B along the x-axis. The y-axis shows the probability of functional kill for two ISC vignettes. The vertical bars
show the Target Template Sets (TTS) analyzed in the study. TTS range from very simple to
extremely complex and are defined in terms of hardness, depth, construction design, and function
(e.g., command and control, operations, storage, leadership, etc.). The current baseline
performance is shown on the chart (probability of functional kill = .55).
Alternative 1 provides increased probability of functional kill (+.11 over the current baseline
systems) and is capable of functional kills in the TTS-F and G that are not possible with the
existing baseline weapons. LCCE is $3B and the overall risk was rated moderate. Alternative 2
provides additional functional kill capability (+.17 over the current baseline systems) and is
capable of functional kills in the TTS-F, G, H, I, and J that are not possible with the existing
baseline weapons. LCCE is $4.2B and the overall risk was rated high. Finally, alternative 3
provides the most functional kill capability (+.22 over current baseline systems) and is capable of
functional kills in the TTS-F, G, H, I, J, and K that are not possible with existing baseline
weapons. LCCE is $5.3B and the overall risk was rated high.
It is important to note that none of the alternatives are capable of functional kills in the TTS-L, N,
O, and Q. If TTS-L, N, O, and Q include targets that are the most critical to the warfighter, the
determination of whether any of the alternatives are a best value option becomes more difficult
despite the additional capability each of the alternatives provide over the baseline.
Figure 8-2: Target Defeat Weapon Cost/Capability Tradeoff Example
There may be other ways to present cost/capability trade information, but regardless of the
method, the message must be clear and cogent. It is important to avoid rolling up or aggregating
effectiveness results since it can hide important information.
8.1.3 Alternative Comparison Presentation
The objective of the comparative analysis presentation is to show how the capabilities of each
alternative close or mitigate the capability gap(s) and present the associated tradespace to the
senior decision makers. Typically, there may be several viable alternatives, each with different
costs, effectiveness, and risks. There is no requirement for an AoA to identify a single solution.
The study team must answer the high-level concerns/questions from the guidance and those that
arise during the course of the AoA. The study team must also address the capabilities of the
alternatives to close or mitigate the capability gap(s) and the associated reduction in operational
risk (as identified in the CBA, ICD, and appropriate Core Function Master Plans (CFMPs)).
The study team should illustrate the operational impact of failing to meet threshold values. The
presentation should show why each key performance parameter in the RCT was chosen and the
tradespace analysis that resulted in the threshold values. The RCT should contain those key
parameters that are so critical that failure to meet them brings the military utility of the solution
into question and risks appropriately mitigating the gap. Finally, the study team should identify
and discuss the key items that discriminate the alternatives. This will aid in the presentation of
the critical findings.
Once all of the analysis is complete and understood, the study team must determine the most
appropriate way to present the critical findings of the AoA. Decision makers expect this
information to be presented for the capability gap(s) illustrating the trade-offs identified. There
are several ways to display this as shown in the following examples.
In addition to the presentation examples discussed in the cost/capability tradeoff analysis section,
Figure 8-3 shows a notional example of an alternative comparison. In this illustration, MOEs a,
b, and c are all critical, enabling the study team to show important differences in performance by
alternative.
Figure 8-3: Example of Critical MOE Results
Figure 8-4 shows a second example of presenting findings from the analysis. The example
presents the effectiveness, operational risk, technical maturity, other risk factors, and costs of each
alternative. If this approach is utilized, it is important to define the color rating scheme. The
color ratings for the measures are based on the capability of the alternatives to meet the measure
criteria (threshold, objective values). In this example, the operational risk and technical maturity
color rating methods should be discussed and agreed upon by stakeholders and decision makers
prior to executing the alternative comparisons. Once the study team applies the rating method,
they should conduct a review of the results to determine whether the method is sound or must be
revised.
Figure 8-4: Example of Comparing Alternatives by Effectiveness, Risk, and Cost
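For illustration, one possible rating rule (not the only acceptable one, and the measure names and numbers below are hypothetical) is to assign green when an alternative meets the objective, yellow when it meets the threshold but not the objective, and red when it falls below the threshold:

# Hypothetical stoplight rating against threshold/objective criteria for
# measures where larger values are better. Real rating rules must be agreed
# with stakeholders and decision makers before the comparison is run.

def rate(value, threshold, objective):
    if value >= objective:
        return "green"
    if value >= threshold:
        return "yellow"
    return "red"

criteria = {"MOE a": (0.60, 0.75), "MOE b": (0.70, 0.85)}   # (threshold, objective)
results = {"Alt 1": {"MOE a": 0.66, "MOE b": 0.58},
           "Alt 2": {"MOE a": 0.78, "MOE b": 0.72}}

for alt, scores in results.items():
    ratings = {moe: rate(scores[moe], *criteria[moe]) for moe in criteria}
    print(alt, ratings)

Measures where smaller values are better need the comparisons reversed, which is one more reason the rating method should be documented and agreed upon before it is applied.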
It is important to ensure the information presented is clear, concise, cogent, and unbiased. The
presentation should accurately depict the analysis results, present understandable interpretations,
and support recommendations. The more straightforwardly and clearly the results are presented, the
easier it is to understand the differences among the alternatives. The study team's job is to
help the decision makers understand those differences.
The Study Director should determine the best way to tell the story of what was learned in the
AoA. OAS and the A9 community can assist the team in developing the story. Every effort is
different, but OAS and the A9 community will be able to share examples to aid in this
development. Finally, the Study Director should plan sufficient time for stakeholder and senior
decision maker review of the results.
9 Documenting Analytical Findings
With analysis complete, the final report and final results briefing document what was learned
concerning the ability to solve or mitigate the examined capability gaps. This documentation
contains answers to key questions from decision makers. Appendix D contains the template for
the final report.
The primary purpose of the documentation is to illustrate, for decision makers, the cost, schedule,
performance, and risk implications of tradespace around the validated capability requirements.
The final report should also provide feedback to the requirements process. This feedback
addresses those validated capability requirements that, upon further study, appear unachievable
and/or undesirable from cost, schedule, performance, and risk points of view.
The team provides the documents to the AFRRG and AFROC, which inform Air Force
investment decisions, prior to delivery to the MDA and any other appropriate oversight groups.
The JSD associated with the effort determines which decision makers, senior leaders, and review
groups receive the documents. For JROC Interest efforts, the documents will also be provided to
CAPE (for sufficiency review), the JROC, JCB, and FCB, the AT&L-led OIPT, the DAB, and the Study
Advisory Group.
According to Air Force AoA final report approval criteria, the report should include:
 Assumptions and rating criteria used for evaluation in each of the analyses.
 Answers to the key questions outlined in the study guidance. These must be answered sufficiently for decision makers to support the upcoming decisions.
 Identification of enablers and how they align with those outlined at the MDD and in the AoA guidance.
 Identification of the effectiveness, cost, and risk drivers and how they were fully explored in sensitivity analysis.
 Discussion of the tradespace through cost, effectiveness, and risk analysis. This must clearly identify for the decision makers where the trade-offs exist, the operational risk associated with the performance, and the degree to which the capability gap(s) have been mitigated.
 Identification of the key parameters in the RCT and analytical evidence to support the thresholds and objectives identified. This must include identifying what the associated cost drivers are for those values and how sensitive the cost is to those values.
 Identification of the sensitivity of each alternative to the analysis assumptions and its sensitivity to specific scenarios (a notional one-at-a-time sweep is sketched after this list).
 Identification of technical feasibility of thresholds and objectives identified in the RCT based on the affordability constraints identified.
 Identification and scope of additional information/analysis required prior to initiation of acquisition activities (e.g., requesting a milestone decision).
 Identification of how the cost of each alternative aligns with the affordability constraints identified at MDD and in the study guidance.
 Identification of sustainability considerations for the operational environment.
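As noted in the sensitivity item above, a simple way to show this is a one-at-a-time excursion around the baseline assumptions. The sketch below is purely notional: the response function, parameter names, and ranges are invented stand-ins for the study's accredited models and data.

# Notional one-at-a-time sensitivity sweep. The response function and ranges
# are invented; a real AoA would rerun its accredited effectiveness models.

def toy_effectiveness(detection_range_km, sortie_rate, pk):
    return min(1.0, 0.002 * detection_range_km + 0.05 * sortie_rate + 0.5 * pk)

baseline = {"detection_range_km": 150, "sortie_rate": 2.0, "pk": 0.6}
ranges = {"detection_range_km": (100, 200), "sortie_rate": (1.0, 3.0), "pk": (0.4, 0.8)}

base_score = toy_effectiveness(**baseline)
print(f"Baseline effectiveness: {base_score:.2f}")
for param, (lo, hi) in ranges.items():
    for value in (lo, hi):
        delta = toy_effectiveness(**{**baseline, param: value}) - base_score
        print(f"  {param} = {value}: delta {delta:+.2f}")

Presenting the excursions this way makes it easy to see which assumptions drive the results and whether a particular scenario is doing most of the work.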
To facilitate a successful CAPE sufficiency review for the MDA, the team should provide the following:
 Identification of the measures evaluated, including any cost, performance, and schedule trade-off analyses conducted.
 Evaluation of benefit versus risk. Risks should include an examination of technical, cost, and schedule risks in addition to operational risks. It is important for the team to address the non-operational benefits and risks with the same level of fidelity/rigor as the operational benefits and risks, because non-operational risks can be significant contributors to program failure.
 Explanation of why alternatives do well or poorly. This must include rationale for the results.
 Explanation of how variations to CONOPS or performance parameters might mitigate cost or change effectiveness ratings. This should include characterizing the circumstances in which each alternative appears superior and the conditions under which it degrades.
 Identification of estimated schedules for each alternative. This should include an assessment of existing TRLs/MRLs for critical technologies, the impact of not completing development, integration, and operational testing on schedule and within budget, and the likelihood of achieving the proposed schedule.
 Identification of practical risk mitigation strategies (if they exist) to minimize impact to delivering operational capability, and any potential workarounds that may be applied if a risk comes to fruition.
 Identification of all DOTMLPF-P implications for each alternative.
 Identification and rationale for any questions not answered or analysis that remains incomplete, and recommendations to address these in the future.
An HPT is not required for development of the final report; however, key stakeholders and
representatives should be involved. Members of the enduring HPT are expected to develop the
RCT and review the final report.
A widespread review of the report is useful in ensuring it appropriately covers the
areas identified above. The review should start within the originating command. Outside review
can be solicited from a variety of agencies, including OAS, the appropriate AF/A5R functional
divisions, stakeholders, other Services as appropriate, and CAPE (for ACAT I and JROC Interest
programs).
According to AF/A5R procedures, the following process is to be followed for review and staffing
of the final report:
 The MAJCOM provides the final report and briefing simultaneously to the AF/A5R Functional Division Chief and OAS for assessment.
 The AF/A5R Functional Division Chief forwards the report and briefing (including the OAS risk assessment) simultaneously to AF/A5R-P.
 After AF/A5R-P Gate Keeper review, the final report is submitted to the AFRRG.
 If the AFRRG concurs with the final report, it is submitted to the AFROC for validation or approval, depending on JSD.
The OAS assessment criteria are applied to evaluate the final report's credibility and completeness in light of
the requirements outlined above and the study guidance. Appendix F of this handbook contains
the OAS assessment criteria for the final report. See Appendix H for the recommended timelines
for OAS review of documents prior to submission.
Appendix A: Acronyms
ACAT – Acquisition Category
ACEIT – Automated Cost Estimating Integrated Tools
ACTD – Advanced Concept Technology Demonstration
ADM – Acquisition Decision Memorandum
AF – Air Force
AF/A2 – Air Force Assistant Chief of Staff for Intelligence
AF/A5R – Air Force Director of Requirements
AF/A5R-P – Directorate of Operational Capability Requirements, Chief of Requirements Policy and Process Division
AF/A9 – Director, Studies & Analyses, Assessments and Lessons Learned
AFCAA – Air Force Cost Analysis Agency
AFI – Air Force Instruction
AFMC – Air Force Materiel Command
AFOTEC – Air Force Operational Test & Evaluation Center
AFP – Air Force Pamphlet
AoA – Analysis of Alternatives
BA – Battlespace Awareness
BCS – Baseline Comparison System
BR – Break Rate
BY$ – Base Year Dollars
CAIV – Cost As an Independent Variable
CAPE – Cost Assessment and Program Evaluation (OSD)
CAWG – Cost Analysis Working Group
CBA – Capabilities Based Assessment
CBP – Capabilities Based Planning
CCTD – Concept Characterization and Technical Description
CDD – Capability Development Document
CJCSI – Chairman of the Joint Chiefs of Staff Instruction
CONEMP – Concept of Employment
CONOPS – Concept of Operations
CPD – Capability Production Document
CPIPT – Cost Performance Integrated Product Team
DAB – Defense Acquisition Board
DAG – Defense Acquisition Guidebook
DAP – Defense Acquisition Process
DCAPE – Director, CAPE
DCR – Doctrine Change Request
DOE – Department of Energy
DoD – Department of Defense
DODD – Department of Defense Directive
DOTMLPF-P – Doctrine, Organization, Training, Materiel, Leadership and Education, Personnel, Facilities, and Policy
DOTmLPF-P* – *Note: in this version of the acronym, "m" refers to existing materiel in the inventory (Commercial Off the Shelf (COTS) or Government Off the Shelf (GOTS))
DP – Development Planning
EA – Effectiveness Analysis
EAWG – Effectiveness Analysis Working Group
ECWG – Employment Concepts Working Group
FCB – Functional Capabilities Board
FFRDC – Federally Funded Research and Development Center
FM – Financial Management
FoS – Family of Systems
FOC – Full Operational Capability
GAO – Government Accountability Office
GAO CEAG – GAO Cost Estimating Assessment Guide
GIG – Global Information Grid
GRC&A – Ground Rules, Constraints & Assumptions
HPT – High Performance Team
HSI – Human Systems Integration
IC – Implementing Command
ICD – Initial Capabilities Document
ICS – Interim Contractor Support
IDA – Institute for Defense Analyses
IIPT – Integrating Integrated Product Team
IOC – Initial Operational Capability
IPT – Integrated Product Team
ISA – Intelligence Supportability Analysis
ISR – Intelligence, Surveillance and Reconnaissance
ISWG – Intelligence Supportability Working Group
IT – Information Technology
JCA – Joint Capability Area
JCB – Joint Capabilities Board
JCD – Joint Capabilities Document
JCIDS – Joint Capabilities Integration and Development System
JCTD – Joint Concept Technology Demonstration
JFACC – Joint Force Air Component Commander
JROC – Joint Requirements Oversight Council
JS – Joint Staff
JSD – Joint Staffing Designator
KM/DS – Knowledge Management/Decision Support
KPP – Key Performance Parameter
KSA – Key System Attribute
LCOM – Logistics Composite Model
LC – Lead Command
LCC – Life Cycle Cost
LCCE – Life Cycle Cost Estimate
LCMC – Life Cycle Management Center
LoE – Level of Effort
LRU – Line Replaceable Unit
LSC – Logistics Support Cost
M&S – Modeling & Simulation
MA – Materiel Availability
MAJCOM – Major Command
MDA – Milestone Decision Authority
MDAP – Major Defense Acquisition Program
MDD – Materiel Development Decision
MER – Manpower Estimate Report
MILCON – Military Construction
MOE – Measure of Effectiveness
MOP – Measure of Performance
MOS – Measure of Suitability
MOU – Memorandum of Understanding
MRL – Manufacturing Readiness Level
MS – Milestone
MSFD – Multi-Service Force Deployment
MT – Mission Task
MTBCF – Mean Time Between Critical Failure
MTBM – Mean Time Between Maintenance
NACA – Non-Advocate Cost Assessment
NSSA – National Security Space Acquisition
O&S – Operations and Support
O&M – Operations and Maintenance
OAS – Office of Aerospace Studies
OCWG – Operations Concepts Working Group
OIPT – Overarching Integrated Product Team
OSD – Office of the Secretary of Defense
OSD/AT&L – Office of the Secretary of Defense for Acquisition, Technology & Logistics
OSD/CAPE – Office of the Secretary of Defense/Cost Assessment and Program Evaluation
P3I – Pre-Planned Product Improvement
PBL – Performance-based Logistics
PM – Program Manager
POL – Petroleum, Oils, and Lubricants
POM – Program Objective Memorandum
PPBE – Planning, Programming, Budgeting, and Execution
R&D – Research and Development
RAF – Risk Assessment Framework
RCT – Requirements Correlation Table
RDT&E – Research, Development, Test & Evaluation
RFI – Request for Information
RFP – Request for Proposal
S&T – Science and Technology
SAF – Secretary of the AF
SAF/AQ – Assistant Secretary of the AF for Acquisition
SAF/FMC – Deputy Assistant Secretary of the AF for Cost and Economics
SAG – Study Advisory Group
SCEA – Society of Cost Estimating and Analysis
SEER – Systems/Software Estimating and Evaluation of Resources
SEM – Software Estimating Model
SEP – System Engineering Plan
SETA – Scientific, Engineering, Technical, and Analytical
SLEP – Service Life Extension Program
SME – Subject Matter Expert
SoS – System of Systems
SRG – Senior Review Group
STINFO – Scientific & Technical Information
T&E – Test and Evaluation
TAWG – Technology & Alternatives Working Group
TDS – Technology Development Strategy
TEMP – Test and Evaluation Master Plan
TES – Test and Evaluation Strategy
TOC – Total Ownership Cost
TRL – Technology Readiness Level
TSWG – Threats and Scenarios Working Group
TY$ – Then-Year Dollars
USD (AT&L) – Under Secretary of Defense for Acquisition, Technology and Logistics
USAF – United States Air Force
VCSAF – Vice Chief of Staff of the Air Force
WBS – Work Breakdown Structure
WG – Working Group
WIPT – Working-Level Integrated Product Team
WSARA – Weapon Systems Acquisition Reform Act
WSR – Weapon System Reliability
Appendix B: References and Information Sources
A. Joint Capabilities Integration and Development System (JCIDS) Manual
B. CJCSI 3170.01H, JCIDS Instruction
C. Capabilities-Based Assessment (CBA) User's Guide
D. DODD 5000.01, The Defense Acquisition System
E. DODI 5000.02, Operation of the Defense Acquisition System
F. Defense Acquisition Guidebook
G. DODD 5101.2, DoD Executive Agent for Space
H. National Security Space Acquisition (NSSA) Policy, Interim Guidance
I. DOD 5000.4-M, Cost Analysis Guidance & Procedures
J. Risk Management Guide for DoD Acquisition (and AF-specific implementation)
K. AFPD 63-1, Capability-Based Acquisition System
L. AFI 10-601, Capabilities-Based Requirements Development
M. AFI 10-604, Capabilities-Based Planning
N. Information Technology (IT) Related Policies
O. Clinger-Cohen Act of 1996
P. CJCSI 6212.01C, Interoperability and Supportability of IT and NSS
Q. DODD 4630.5, Interoperability and Supportability of IT and NSS
R. DODI 4630.8, Procedures for Interoperability and Supportability of IT and NSS
S. DODD 8100.1, Global Information Grid (GIG) Overarching Policy
T. Joint Pub 6-0, Doctrine for C4 Systems Support to Joint Operations
U. MIL-HDBK-881B, Work Breakdown Structures for Defense Materiel Items (3 October 2011)
V. DoD Reliability, Availability, Maintainability, and Cost (RAM-C) Rationale Report Manual
W. Weapon Systems Acquisition Reform Act (WSARA) of 2009
X. OSD Operating and Support Cost-Estimating Guide, May 1992
Appendix C: Study Plan Template
This appendix contains the AoA Study Plan template required for the AoA. (CAPE desires a
maximum of ten to fifteen pages.)
-----------------------------Cover Page -----------------------------

<Name of Project Here>

Analysis of Alternatives (AoA)
Study Plan

<Lead MAJCOM>
<Date>

Distribution Statement

Refer to these sources for more information:
1. Department of Defense Directive (DODD) 5230.24, "Distribution Statements on Technical Documents"
2. Air Force Pamphlet (AFP) 80-30, "Marking Documents with Export-Control and Distribution-Limitation Statements" (to be reissued as Air Force Instruction (AFI) 61-204)

Ask the Scientific & Technical Information (STINFO) Officer for help in choosing which of the available statements best fits the AoA.

REMEMBER -- AoA information may be PROPRIETARY, SOURCE SELECTION SENSITIVE, OR CLASSIFIED
-----------------------Table of Contents---------------------
1. Introduction
1.1. Background
1.2. Purpose and Scope
1.3. Study Guidance
1.4. Capability Gaps
1.5. Stakeholders
1.6. Ground Rules, Constraints, and Assumptions

2. Alternatives
2.1. Description of Alternatives
2.2. Operational Concepts
2.3. Scenarios and Operational Environment

3. Effectiveness Analysis
3.1. Effectiveness Methodology
3.2. Measures
3.3. Sensitivity Analysis Methodology
3.4. Analysis Tools and Data
3.5. Modeling and Simulation Accreditation

4. Cost Analysis
4.1. Life Cycle Cost Methodology
4.2. Work Breakdown Structure
4.3. Cost Tools and Data
4.4. Cost Sensitivity and Risk Methodology

5. Risk Assessment
5.1. Risk Assessment Methodology
5.2. Risk Assessment Tools

6. Alternative Comparison
6.1. Alternative Comparison Methodology and Presentations
6.2. Cost/Capability Tradeoff Analysis Methodology

7. Organization and Management
7.1. Study Team Organization
7.2. AoA Review Process
7.3. Schedule

Appendices
A. Acronyms
B. References
C. CCTD(s)
D. Modeling and Simulation Accreditation Plan
E. Other appendices as necessary
---------------------Plan Section Contents-----------------------
1. Introduction
1.1. Background
• Briefly describe the history of the effort and related programs. Summarize relevant
analyses that preceded this study such as applicable Joint Concept Technology
Demonstrations (JCTDs) or Advanced Concept Technology Demonstrations (ACTDs).
This should include any lessons learned from previous efforts, especially those that were
cancelled.
• Explain why the study is being conducted now and the key decisions that have been made
to this point.
1.2. Purpose and Scope
• Describe the scope and purpose of the AoA. Describe any tailoring or streamlining used
to focus the study.
• Identify potential areas of risk and/or roadblocks pertinent to the study (particularly
schedule, lack of required data, lack of stakeholder participation, etc.)
• Identify the key acquisition or other issues that will be addressed in the analysis. Also
explain why any key issues will not be considered or addressed in the analysis.
• Identify the milestone decision the analysis will inform.
1.3. Study Guidance
• Summarize the AoA study guidance from the Air Force and/or CAPE, as appropriate.
• Identify the key questions in the guidance.
1.4. Capability Gaps
• Identify and describe the specific AFROC or JROC approved capability gaps that will be
addressed in the AoA. Identify the validated sources of these gaps.
• Identify the threshold/objective requirement values in the ICD and how they will be
treated as reference points to explore the tradespace.
• Identify the timeframe for the operational need.
1.5. Stakeholders
• Identify the stakeholders for this AoA and explain their roles/responsibilities in the AoA.
• Describe how methodologies, alternatives, evaluation criteria, and results will be reviewed
by the stakeholders and oversight groups (e.g., Senior Review Group, Study Advisory
Group, etc.).
1.6. Ground Rules, Constraints, and Assumptions
• Identify the AoA ground rules, constraints, and assumptions. Describe the implications of
the ground rules, constraints, and assumptions. Reference appropriate assumptions
identified in the ICD or AoA guidance and describe their implications to the study.
• Identify the projected Initial Operating Capability (IOC) and Full Operating Capability
(FOC) milestones.
2. Alternatives
2.1. Description of Alternatives
• Describe the baseline (existing and planned systems) capability.
• Describe the alternatives specified in the AoA study guidance and how the alternatives
will be employed in the operational environment. Explain the rationale for including them
in the study. Explain the rationale for excluding any specific types of alternatives in the
study.
• Discuss dependencies associated with each alternative and how the dependencies will be
addressed in the analysis.
• Identify the appendix that contains the CCTD(s) for baseline and each alternative.
2.2. Operational Concepts
• Identify organizational functions and operations performed during the mission. This
includes describing logistics and maintenance concepts.
• Describe what enablers exist and how they interface with the alternatives. This includes
identifying the dependencies of each alternative.
• Discuss significant tactics, techniques, procedures, and doctrine used.
• Discuss significant interfaces with other systems.
• Identify any peacetime and contingency operation implications. Describe any deployment
issues.
2.3. Scenarios and Operational Environment
• Describe the scenarios that will be used in the AoA and rationale for their selection. This
includes an explanation of how the scenarios represent the operational environment.
• Describe the expected operational environment, including terrain, weather, location, and
altitude. Describe how the environment will impact the alternatives.
• Describe the enemy tactics (include potential countermeasures).
3. Effectiveness Analysis
3.1. Effectiveness Methodology
• Describe the effectiveness methodology, including the types of analysis (e.g., parametric,
expert elicitation, modeling and simulation, etc.). This includes describing how
performance drivers will be identified and fully explored in the sensitivity analysis.
• Describe how the methodology and associated measures will be reviewed by the
appropriate stakeholder and oversight groups (e.g., Senior Review Group, Study Advisory
Group, etc.).
• Describe how the dependencies identified for each alternative will be addressed in the
analysis.
• Describe the decomposition of the capability gaps and how they will be addressed in the
analysis.
• Describe the methodology to explore the tradespace and give a brief description of what
sensitivity analysis will be accomplished to determine Key Performance Parameters/Key
System Attributes and threshold/objective (T/O) values for the Requirements Correlation
Table (RCT). This includes describing how the tradespace around the capability threshold
values will be explored to determine if adjustments need to be recommended based on the
results.
• Describe the methodology to assess sustainability concepts such as reliability, availability,
and maintainability.
3.2. Measures
• Identify the Measures of Effectiveness, Suitability, and Performance.
• Describe the traceability of the AoA measures to the requirements and associated
minimum values identified in the ICD (from the CBA).
• Describe the traceability of the AoA measures to the capability gaps and mission tasks.
• Discuss how the measures are measurable and will support the development of the post-AoA documents (e.g., CDD, CPD, TES, TEMP).
3.3. Sensitivity Analysis Methodology
• Describe the sensitivity analysis that will be conducted to determine key performance
parameters/key system attributes and threshold/objective values for the RCT.
3.4. Analysis Tools and Data
• Describe the analysis methods and tools that will be used to conduct the analysis and the
rationale for selection. Describe the input data to be used and corresponding sources.
• Discuss how the data for the scenarios, threats, and each of the alternatives will be current,
accurate, and unbiased (technically sound and doctrinally correct).
• Describe how the analysis methods and tools will provide data to address the measures.
Illustrate how the analysis methods and tools are linked (suggest using the confederation
of tools diagram described in Chapter 4 of this handbook).
3.5. Modeling and Simulation Accreditation
• Describe the modeling and simulation accreditation plan.
• Discuss any potential model biases, such as “man-in-the-loop” biases.
4. Cost Analysis
4.1. Life Cycle Cost Methodology
• Describe the cost analysis methodology. Describe how the cost drivers will be identified
and fully explored in sensitivity analysis.
• Describe how the cost analysis methodology will be reviewed by the stakeholders and
oversight groups (e.g., Senior Review Group, Study Advisory Group, etc.).
• Describe how the dependencies identified for each alternative will be addressed in the
analysis.
• Identify the economic operating life of the alternatives (e.g., 10 year, 20 year, 25 year
Operations and Support cost).
• Describe the methodology for costing Research and Development (R&D), Investment,
Operations and Support (O&S), Disposal, and total LCC for each alternative (a minimal roll-up sketch follows this list).
• Identify the sunk costs for information purposes only.
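As referenced in the costing bullet above, a minimal roll-up sketch (the phase names follow the bullet; the dollar values are placeholders) shows the intended relationship between the costed phases, the total LCC, and sunk costs:

# Minimal LCC roll-up sketch with placeholder values in base-year $M.
# Sunk costs are tracked separately and reported for information only.

phases = {"R&D": 850.0, "Investment": 2400.0, "O&S": 5100.0, "Disposal": 150.0}
sunk_costs = 120.0  # informational only; not included in the LCC

total_lcc = sum(phases.values())
for phase, cost in phases.items():
    print(f"{phase}: ${cost:,.0f}M")
print(f"Total LCC: ${total_lcc:,.0f}M (sunk costs reported separately: ${sunk_costs:,.0f}M)")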
4.2. Work Breakdown Structure
• Describe the cost work breakdown structure.
4.3. Cost Tools and Data
• Describe the cost analysis methods (e.g., analogy, expert opinion, etc.) and models (e.g.,
ACEIT, CRYSTAL BALL, etc.) that will be used and the reason for their selection.
Describe the input data to be used and corresponding sources.
• Discuss any potential model shortfalls.
4.4. Cost Sensitivity and Risk Methodology
• Describe the methodology to identify the cost drivers.
• Describe the methodology for determining the level of uncertainty for each element of
LCC and each cost driver (an illustrative Monte Carlo sketch follows this list).
• Describe how the cost of each alternative will be assessed with respect to the affordability
constraints identified at MDD and in the AoA study guidance.
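One common way to characterize that uncertainty, sketched here with made-up triangular ranges purely for illustration, is a Monte Carlo roll-up that reports percentiles of total LCC; the study plan should still describe whatever method the cost working group actually intends to use.

# Illustrative Monte Carlo cost-uncertainty roll-up (not a prescribed method).
# Each LCC element is given a notional (low, most likely, high) range in $M.
import random

elements = {
    "R&D": (700, 850, 1100),
    "Investment": (2000, 2400, 3100),
    "O&S": (4300, 5100, 6500),
    "Disposal": (100, 150, 250),
}

random.seed(1)
trials = 10_000
totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in elements.values())
    for _ in range(trials)
)
for pct in (20, 50, 80):
    print(f"{pct}th percentile total LCC: ${totals[trials * pct // 100]:,.0f}M")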
5. Risk Assessment
5.1. Risk Assessment Methodology
• Describe the methodology for identifying risk (operational, technical, cost, and
schedule). Discuss how empirical data will be used to assess technical risk, especially in
the area of integration risk.
• Describe the methodology to identify schedule drivers.
5.2. Risk Assessment Tools
• Describe the risk assessment tools or models that will be used in the analysis.
6. Alternative Comparison
6.1. Alternative Comparison Methodology and Presentations
• Describe the alternative comparison methodology. If using a color scheme (e.g., red,
yellow, green), describe how the color rating will be determined from the values.
• Describe how the alternative comparison methodology will be reviewed by the
stakeholders and oversight groups (e.g., SAG).
• Describe the methodology for performing the sensitivity tradeoff analysis. This includes
describing how knees in the curves for cost drivers will be determined to identify cost-effective
solutions rather than single point solutions (a notional knee-screening sketch follows this list).
• Describe the methodology for identifying the assumptions and variables that, when changed,
will significantly change the schedule, performance, and/or cost-effectiveness of the
alternatives.
• Describe the methodology for identifying the performance parameters that, when changed, will
significantly change operational effectiveness. Also identify the performance parameters that, if
fixed as performance specifications, are most likely to influence development and
production cost.
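As referenced in the sensitivity tradeoff bullet above, the notional sketch below shows one crude way to screen for a knee: compute the marginal effectiveness gained per additional dollar between adjacent points on the cost/capability curve and flag where that return drops sharply. The data points are invented; in practice the curve comes from the AoA's own effectiveness and cost results, and the knee is a matter of analyst and stakeholder judgment rather than a single rule.

# Notional knee-in-the-curve screening on an invented cost/capability curve.

points = [  # (cost in $B, effectiveness score), sorted by increasing cost
    (1.0, 0.40), (2.0, 0.62), (3.0, 0.75), (4.2, 0.80), (5.3, 0.82),
]

marginal = []  # (effectiveness gained per $B, cost at the end of the step)
for (c0, e0), (c1, e1) in zip(points, points[1:]):
    marginal.append(((e1 - e0) / (c1 - c0), c1))
    print(f"Up to ${c1:.1f}B: {marginal[-1][0]:.3f} effectiveness per $B")

# Flag the first step where the marginal return falls below half the prior step's.
drop_off = next((cost for (prev, _), (curr, cost) in zip(marginal, marginal[1:])
                 if curr < 0.5 * prev), None)
print("Marginal return drops off near:", f"${drop_off:.1f}B" if drop_off else "not detected")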
6.2. Cost/Capability Tradeoff Analysis Methodology
• Describe the cost/capability tradeoff analysis methodology to determine the best value
alternative(s) that provide acceptable capability to the warfighter.
7. Organization and Management
7.1. Study Team Organization
• Identify how the team is organized and a general description of the responsibilities of each
working group.
• Describe the stakeholders and oversight groups (e.g., Senior Review Group, Study
Advisory Group, etc.) and their roles.
7.2. AoA Review Process
• Describe the review process and the oversight groups involved (e.g., Senior Review
Group, Study Advisory Group, Milestone Decision Authority, etc.).
7.3. Schedule
• Describe the AoA schedule (a chart of the timeline with key decision points and events is
suggested). Discuss the ability of the study team to execute the study plan according to
the schedule. Identify potential schedule risk pertinent to the study.
APPENDICES
A. Acronyms
B. References
C. CCTD(s)
D. Modeling and Simulation Accreditation Plan
E. Other appendices as necessary
Appendix D: Final Report Template
This appendix contains the AoA Final Report template required for the AoA.
-----------------------------Cover Page -----------------------------

<Name of Project Here>

Analysis of Alternatives (AoA)
Final Report

<Lead MAJCOM>
<Date>

Distribution Statement

Refer to these sources for more information:
1. Department of Defense Directive (DODD) 5230.24, "Distribution Statements on Technical Documents"
2. Air Force Pamphlet (AFP) 80-30, "Marking Documents with Export-Control and Distribution-Limitation Statements" (to be reissued as Air Force Instruction (AFI) 61-204)

Ask the Scientific & Technical Information (STINFO) Officer for help in choosing which of the available statements best fits the AoA.

REMEMBER -- AoA information may be PROPRIETARY, SOURCE SELECTION SENSITIVE, OR CLASSIFIED
-----------------------Table of Contents---------------------

Executive Summary

1. Introduction
1.1. Purpose and Scope
1.2. Study Guidance
1.3. Capability Gaps
1.4. Stakeholders
1.5. Ground Rules, Constraints, and Assumptions
1.6. Description of Alternatives

2. Operational Effectiveness Analysis Results
2.1. Operational Effectiveness Analysis Results
2.2. Operational Effectiveness Sensitivity Analysis Results
2.3. Requirements Correlation Table (RCT)

3. Cost Analysis
3.1. Life Cycle Cost Results
3.2. Cost Sensitivity and Risk Results

4. Risk Assessment
4.1. Risk Assessment Results

5. Alternative Comparison
5.1. Alternative Comparison Results
5.2. Sensitivity Analysis Results
5.3. Conclusions and Recommendations

Appendices
A. Acronyms
B. References
C. CCTD(s)
D. Analysis Methodology Details
E. Modeling and Simulation Accreditation Final Report
F. RAM-C Report (JROC interest and other ACAT I efforts)
G. Intelligence Supportability Analysis (ISA)
H. Other appendices as necessary
--------------------- Report Section Contents-----------------------
Executive Summary
• Describe the purpose of the study.
• Identify key organizations associated with the study.
• Summarize the results of the study. This must include a summary of the answers to the
key questions in the AoA study guidance and identification of where the trade-offs exist,
the operational risk associated with the performance, and the degree to which the capability gap(s)
have been mitigated by each alternative.
• Summarize the key parameters in the RCT and the analytical evidence to support them.
1. Introduction
1.1. Purpose and Scope
• Describe the scope and purpose of the AoA. Discuss how the AoA scope was tailored to
address the AoA study guidance and ADM. Explain the reason for any incomplete
analysis and the plan to complete any remaining analysis.
• Identify any key MDA or other issues that were not considered or addressed in the
analysis. Explain the reason for any unanswered questions and the plan to address them.
• Identify the Milestone Decision the analysis results will inform.
1.2. Study Guidance
• Summarize the AoA study guidance from the AF and CAPE, as appropriate.
• Identify the key questions in the guidance.
1.3. Capability Gaps
• Identify and describe the specific AFROC or JROC approved capability gaps that were
addressed in the AoA. Identify the validated source of these gaps.
1.4. Stakeholders
• Identify the stakeholders for the AoA and explain their roles/responsibilities in the AoA.
• Describe how the methodologies, alternatives, evaluation criteria, and results were
reviewed and accepted by the stakeholders and oversight groups (e.g., Study Advisory
Group).
1.5. Ground Rules, Constraints, and Assumptions for the AoA
• Summarize the overarching AoA ground rules, constraints, and assumptions.
• Describe the expected need timeframe.
1.6. Description of Alternatives
• Describe the baseline (existing and planned systems) capability.
• Describe each of the alternatives assessed in the AoA (include any discriminating
features).
• Describe what enablers were addressed and how they align with those identified at MDD
and in the AoA guidance.
• Identify all DOTmLPF-P implications for each alternative.
2. Operational Effectiveness Analysis
2.1. Operational Effectiveness Analysis Results
• Describe the results of the effectiveness and sustainability analysis.
2.2. Operational Effectiveness Sensitivity Analysis Results
• Describe the sensitivity analysis conducted.
• Identify the key parameters highlighted by the sensitivity analysis (performance drivers)
and how they were fully explored.
2.3. Requirements Correlation Table (RCT)
• Identify the key parameters in the RCT and analytical evidence to support the thresholds
and objectives identified.
3. Cost Analysis
3.1. Life Cycle Cost Results
• Describe the results of the cost analysis. This includes presentation of the life cycle cost
estimates (LCCEs) and any total ownership costs.
3.2. Cost Sensitivity and Risk Results
• Identify the cost drivers highlighted by the sensitivity analysis and how they were fully
explored.
• Identify the level of uncertainty for each cost driver.
• Identify how the cost of each alternative aligns with the affordability constraints identified
at MDD and in the AoA study guidance.
4. Risk Assessment
4.1. Risk Analysis Results
• Describe the results of the risk analysis. Identify operational and non-operational (e.g.,
technical, cost, schedule) risks.
• Describe the initial acquisition schedule for each alternative and the assessment of existing
TRLs/MRLs for critical technologies that may impact the likelihood of completing
development, integration, and operational testing on schedule and within budget. This should
include an assessment of the likelihood of achieving the proposed schedule.
• For significant risks, identify practical mitigation strategies to minimize impact to
delivering operational capability and, if applicable, potential workarounds in the event
risks are realized.
5. Alternative Comparison
5.1. Alternative Comparison Results
• Describe the results of the alternative comparison.
• Explain the rationale for disqualifying any alternatives from further consideration.
• If appropriate, identify recommended changes to validated capability requirements for
consideration if changes would result in acceptable tradeoffs.
• Explain why alternatives do well or poorly (include rationale for the results).
• Describe the results of the cost/capability tradeoff analysis. This must clearly identify
where the tradeoffs exist and to what degree the capability gap(s) have been mitigated.
5.2. Sensitivity Analysis Results
• Identify the performance, cost, and risk drivers and how they were fully explored in the
sensitivity analysis.
• Identify how sensitive the alternatives are to changes in assumptions and to changes in
specific scenarios.
• Identify how sensitive the alternatives are to changes in the threshold and objective values
identified in the RCT (include the associated cost drivers for the values and how sensitive
the cost is to the values).
• Explain how variations to CONOPS could mitigate cost drivers or effectiveness
(performance) shortfalls. This should include characterizing the conditions under which
the performance of each alternative improves and degrades.
5.3. Conclusions and Recommendations
• Provide conclusions and recommendations based on the analysis.
• Provide answers to the key questions identified in the AoA study guidance (must be
answered sufficiently to inform the upcoming decision).
• Identify what additional information/analysis is needed prior to initiation of future
acquisition activities and milestone decisions.
APPENDICES
A. Acronyms
B. References
C. CCTD(s)
D. Detailed Description of the AoA methodologies
E. Lessons Learned
F. Modeling and Simulation Accreditation Final Report
G. RAM-C Report (JROC interest and other ACAT I efforts)
H. Intelligence Supportability Analysis (ISA)
I. Other appendices as necessary
Appendix E: Study Plan Assessment
This appendix contains the assessment criteria used by OAS in its independent
assessment of an AoA Study Plan and associated briefing for presentation to the AFROC and
OSD/CAPE. The assessment is presented in bullet fashion, highlighting risk areas associated with
the credibility and defensibility of the analysis results as the study progresses outside of the AF to the
decision makers. OAS will provide an initial assessment and get-well plan after the initial review
to determine readiness for submission to AF/A5R.
1. AoA purpose, definition and scope consistent with guidance
 Identification of the specific gaps that are being addressed in the AoA.
 Identification of the key questions identified in the AoA study guidance.
 Definition of the baseline (existing and planned systems) capability.
 Identification of the alternatives identified by the AoA study guidance. This includes
discussion of the implications and/or dependencies identified for each alternative and
how the dependencies will be addressed in the analysis.
 Discussion of previous related studies and their relevance to this study.
2. Appropriate stakeholders, issues, constraints addressed
 Identification of the stakeholders and their roles/responsibilities in the AoA.
 Identification of how each part of the stakeholder and oversight communities will
participate in the study and review processes.
 Addresses all assumptions and constraints in guidance. Additional assumptions and
constraints are reasonable and do not artificially constrain the outcome of the study.
3. Analytic Methodology
 Measures of Effectiveness, Suitability, and Performance identified.
 Modeling and Simulation Accreditation Plan is acceptable.
 Decomposition of the capability gaps.
 Traceability of the AoA measures to the requirements and associated minimum values
identified in the ICD (from the CBA).
 Cost work breakdown structure.
 Methodology to determine capability of alternatives to close or mitigate gaps.
 Methodology to explore tradespace and description of what sensitivity analysis will be
accomplished to determine key parameters and T/O values for RCT.
 Methodology to conduct the cost/capability tradeoff analysis.
 Methodology for addressing the dependencies identified for each alternative.
 Scenarios to represent the operational environment.
4. Level of effort and schedule is reasonable
 Includes a schedule for AoA activities.
 Addresses potential milestones that are driving the AoA.
 Addresses the ability of the AoA study team to execute the study plan.
 Identifies potential areas of risk and/or roadblocks pertinent to the study (particularly
schedule risk, lack of required data, lack of stakeholder participation, etc.).
Appendix F: Final Report Assessment
This appendix contains the assessment criteria used by OAS for its independent
assessment of AoA Final Reports and associated briefings for presentation to the AFROC. The
assessment is presented in bullet fashion, highlighting risk areas associated with the completeness,
credibility, and defensibility of the analysis results as the study progresses outside of the AF to the
decision makers. OAS will provide an initial assessment and get-well plan after the initial review
to determine readiness for submission to AF/A5R.
1. Scope and problem definition consistent with guidance
 Description of the scope and purpose of the AoA. Demonstrated consistency with
guidance. Discussed how AoA scope was “tailored” to address the AoA study guidance
and ADM
 Identified any key MDA or other issues that were not considered or addressed in the
analysis (if applicable). This included identification and rationale for any unanswered
questions and/or incomplete analysis and description of the recommended plan to answer
these questions and to bring any remaining analysis to closure.
2. Appropriate stakeholders, issues, constraints addressed
 Identification of stakeholder and oversight communities and explanation of their
roles/responsibilities in the AoA
 Description of how methodologies, evaluation criteria, and results were reviewed and
accepted by stakeholder and oversight communities
3. Analytic Execution
 Assumptions and rating criteria used in the evaluation
 Identification of which enablers were addressed and how they align with those outlined at
the MDD and in the AoA guidance
 Identification of the performance, cost, and risk drivers and how they were fully explored
in sensitivity analysis.
Identification of how sensitive each of the alternatives is to the analysis assumptions and
whether they are sensitive to specific scenarios.
 Identification of the key parameters in the RCT and analytical evidence to support the
thresholds and objectives identified. This must include identifying what the associated
cost drivers are for those values and how sensitive the cost is to those values.
 Identification of technical feasibility of thresholds and objectives identified in the RCT
based on the affordability constraints identified.
 Identification and scoping of what additional information/analysis is needed prior to
initiation of any acquisition activities; to include requesting a milestone decision.
 Identification of how the cost of each alternative lines up with the affordability constraints
identified at MDD and in the AoA study guidance.
 Identification of Measures of Suitability and how they are intended to be supported in the
intended operational environment.
 Identification of the metrics used, any weighting factors applied, and the rationale for applying each weighting factor. Analysis should illustrate the interrelationship between the metrics and cost to facilitate cost/capability/risk/schedule tradespace discussions.
 Identification of the operational and non-operational (e.g., technical, cost, schedule) risks. It is important that the study team address the non-operational risks with the same level of fidelity/rigor as the operational risks. Non-operational risks can be significant contributors to future program failure.
 Identification of all DOTmLPF-P implications for each alternative.
 Description of each alternative under consideration, including discriminating features.
4. Recommendations and Conclusions Supported by AoA Findings
 Answers to the key questions identified in the AoA study guidance. These must be
answered sufficiently for decision makers to support the upcoming decisions.
 Illustration of the cost/capability/risk tradespace. This must clearly identify for the
decision makers where the trade-offs exist, operational risk associated with the
performance and to what degree the capability gap(s) have been mitigated.
 Rationale for disqualifying any alternatives from further consideration.
 If appropriate, recommended changes to validated capability requirements for
consideration if changes would enable a more appropriate tradespace.
 Explanation of why alternatives do well or poorly. This must include rationale for the
results.
 Explanation of how variations to CONOPS or attributes might mitigate cost drivers or low
ratings on assessment metrics. This should include characterizing the circumstances in
which each alternative appears superior and the conditions under which it degrades.
 Identification of estimated schedules for each alternative and assessment of existing
TRLs/MRLs for critical technologies that may impact the likelihood of completing
development, integration, and operational testing on schedule and within budget. This should
include an assessment of the likelihood of achieving the proposed schedule based on DoD
experience.
Appendix G: Lessons Learned
This appendix provides rationale and guidance for capturing and documenting lessons learned.
Lessons learned provide current and future AoA study teams with valuable knowledge derived
from past and present AoA efforts. This knowledge includes information about the strengths and
weaknesses of initiating, planning, and executing an AoA. Lessons learned from the beginning of
the AoA to completion of the AoA process should be thoroughly documented. By capturing and
documenting lessons learned, each AoA team can add to and benefit from the collective wisdom
and “Best Practices” related to the AoA process.
Some of the most commonly recurring Study Team lessons learned include:
 Meet regularly, either in person or virtually.
 Team composition of both Air Force and contractor personnel provides good complementary technical support.
 Study Advisory Groups provide guidance, support, and sanity checks.
 The Study Director and the core team must lead the entire effort.
 Small numbers of people meeting are more productive.
 Buy-in of the senior leaders at all levels is critical.
 Things will change – documentation and communication are critical.
 Utilization of High Performance Teams can increase efficiency and has the potential to shorten timelines. They are especially useful when a team is faced with a very aggressive schedule.
Appendix H: OAS Review of Documents for AFROC
This appendix provides a general timeline to follow for review of AoA related documents (study
plans, final reports, and interim status briefings) in preparation for presentation to the AFROC.
This timeline applies a staffing process that begins with the AoA document delivery to OAS and
concludes with the presentation of the document at the AFROC. The staffing process may
conclude prior to AFROC presentation if an intermediate body (AFRRG, MAJCOM, etc.)
recommends that the document either 1) does not contain sufficient information or 2) is not
appropriate for presentation at the next scheduled AFROC. This schedule is not fixed, but it does
define the recommended minimum timeline necessary for satisfactory review and staffing by the
organizations with an interest in AoA documentation. The first two weeks of review are designed
to assist the study team in understanding OAS' assessment and provide the team with the time
needed to make any adjustments they see fit in preparation for the remainder of the staffing
process.
This timeline only applies to efforts that have had an OAS member embedded with the team
throughout the effort. For studies in which an OAS member has not been embedded, the AoA
team should plan for a lengthier review process in order for OAS to become familiarized with the
study, its objectives and the products subject to review.
Suspense – Activity
6 weeks prior to AFROC – AoA Team submits presentation and document to OAS. OAS works with teams to refine products.
5 weeks prior to AFROC – OAS provides assessment of presentation and document to AoA Team & MAJCOM/Lead Command. OAS works with teams to refine products.
4 weeks prior to AFROC – AoA Team submits documents to AF Functional for review. OAS submits assessment to AF Functional.
3 weeks prior to AFROC – AF Functional submits document to AF/A5R-P in preparation for AFRRG.
2 weeks prior to AFROC – AoA Team presents document at AFRRG. OAS submits assessment to AFRRG.
1 week prior to AFROC – AoA Team revises document based on feedback/direction from AFRRG.
Week of AFROC – AoA Team presents document to AFROC. OAS submits assessment to AFROC.
Figure H-1. Example Timeline for Review of Documents and Briefings to the AFROC
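For planning convenience only, the suspenses in Figure H-1 can be generated from a target AFROC date; the sketch below uses a placeholder date and the same week offsets as the figure.

# Convenience sketch: derive suspense dates from a placeholder AFROC date.
from datetime import date, timedelta

afroc_date = date(2014, 3, 17)  # placeholder target AFROC date

milestones = [
    (6, "AoA Team submits presentation and document to OAS"),
    (5, "OAS provides assessment to AoA Team and MAJCOM/Lead Command"),
    (4, "AoA Team submits documents to AF Functional; OAS submits assessment"),
    (3, "AF Functional submits document to AF/A5R-P for the AFRRG"),
    (2, "AoA Team presents document at AFRRG"),
    (1, "AoA Team revises document based on AFRRG feedback/direction"),
    (0, "AoA Team presents document to AFROC"),
]

for weeks_prior, activity in milestones:
    suspense = afroc_date - timedelta(weeks=weeks_prior)
    print(f"{suspense.isoformat()} ({weeks_prior} weeks prior): {activity}")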
Appendix I: Joint DoD-DOE Nuclear Weapons Acquisition Activities
This appendix provides additional information for those Department of Defense (DoD)/Air Force
Acquisition efforts which are part of the nuclear enterprise. It provides information regarding
important stakeholders, councils, and committees which should be included in the AoA processes.
It also points out the significantly longer process that is used by the Department of Energy (DOE)
during the acquisition of nuclear weapons. These timelines can greatly impact the fielding of a
new capability to the warfighter and close coordination is vital for program success.
Complementary DoD and DOE Departmental Responsibilities
Although there is a dual-agency division of responsibilities between the DoD and the DOE, these
responsibilities are complementary. These complementary responsibilities are based on law and
formal agreements to provide a safe, secure, and militarily effective nuclear weapons stockpile.
All nuclear weapon development, production, sustainment, and retirement projects shall be
coordinated fully between the DoD and the DOE, and shall consider total weapon cost and
performance (including DOE costs and other resource requirements) in establishing military
requirements and design objectives. The DoD and DOE will jointly determine the classification of
developmental systems.
Councils and Committees
In addition to establishing roles and responsibilities, two primary entities were established to
provide support and advice in matters involving oversight, guidance, and day-to-day matters
concerning nuclear stockpile activities: the Nuclear Weapon Council and the Nuclear Weapons
Council Standing and Safety Committee.
Nuclear Weapon Council (NWC)
The DoDD 3150.1, E2.1.5. defines the Nuclear Weapon Council (NWC) as: “An
advisory/approval body established under DoD-DOE MOU (reference (f)) and 10 U.S.C. 179
(reference (g)) to provide high-level oversight, coordination and guidance to nuclear weapons
stockpile activities. It is chaired by USD (AT&L), with the Vice Chairman of the Joint Chiefs of
Staff and a senior representative from the DOE as members.” All communication between DoD
and DOE should be transmitted through the NWC.
The Council provides an inter-agency forum for reaching consensus and establishing priorities
between the two Departments. It also provides policy guidance and oversight of the nuclear
stockpile management process to ensure high confidence in the safety, security, and reliability of
U.S. nuclear weapons.
The NWC serves as an oversight and reporting body and is accountable to both the Legislative
and Executive branches of the government. The NWC meets regularly to raise and resolve issues
between the DoD and the NNSA regarding concerns and strategies for stockpile management.
The Council is also required to report regularly to the President regarding the safety and reliability
of the U.S. stockpile.
Air Force Nuclear Weapons Center (AFNWC)
The Air Force Nuclear Weapons Center (AFNWC) was established with the following Strategic
Nuclear Goals in mind:
a. Maintain nuclear weapons system surety through timely and credible processes.
b. Enable the development and implementation of nuclear technology to maintain air, space and information dominance.
c. Sustain war-winning nuclear capabilities to ensure operational readiness and effectiveness.
Highlighted below are the individual and joint responsibilities of the DoD and DOE. AFNWC has
the DoD lead responsibility to coordinate nuclear weapon arsenal requirements with the DOE for
refurbishments to be acquired within the Joint DOE-DoD 6.X Life Cycle process.
National Nuclear Security Administration (NNSA)
The National Nuclear Security Administration (NNSA) is a separately organized and funded
agency within the Department of Energy (DOE) and has a multi-billion dollar per year budget to
maintain the safety and reliability of the nation’s nuclear weapon stockpile. NNSA manages life
extension efforts using a multipart nuclear weapon refurbishment process, referred to as the 6.X
Process, which separates the life extension process into phases.
Nuclear weapons are developed, produced, maintained in the stockpile, and then retired and
dismantled. This sequence of events is known as the nuclear weapons life cycle. As a part of
nuclear weapons management, the DoD and the National Nuclear Security Administration
(NNSA) have specific responsibilities related to nuclear weapons life cycle activities. Therefore,
the DoD and the NNSA share responsibility for all U.S. nuclear weapons/warheads.
The following chart depicts the Joint DoD and DOE relationships. The left side of the diagram
includes the DoD organizational flow, beginning with the PM to the PEO, followed by the
SAF/AQ, AFROC, JROC, up to the Under Secretary of Defense Acquisition, Technology and
Logistics (USD(AT&L)). The USD(AT&L) coordinates with the NWC.
On the DOE side, the flow begins with the NNSA PM to the NA-10 Defense Programs, and up to
the NNSA N-1. The NNSA N-1 coordinates with the NWC. Joint DoD/DOE activity involves
coordination between the DoD PM and the NNSA PM.
DoD Responsibilities
The DoD is responsible for:
 Participating in approved feasibility studies
 Developing requirements documents that specify operational characteristics for each warhead-type and the environments in which the warhead must perform or remain safe
 Participating in the coordination of engineering interface requirements between the warhead and the delivery system
 Determining design acceptability
 Specifying military/national security requirements for specific quantities of warheads
 Receiving, transporting, storing, securing, maintaining, and (if directed by the President) employing fielded warheads
 Accounting for individual warheads in DoD custody
 Participating in the joint nuclear weapons decision process (including working groups, the warhead Project Officer Group (POG), the NWC Standing & Safety Committee (NWCSSC), and the NWC)
 Developing and acquiring the delivery vehicle and launch platform for a warhead; and storing retired warheads awaiting dismantlement in accordance with jointly approved plans.
DOE Responsibilities
The DOE is responsible for:
• Participating in approved feasibility studies
• Evaluating and selecting the baseline warhead design approach
• Determining the resources (funding, nuclear and non-nuclear materials, facilities, etc.) required for the program
• Performing development engineering to establish and refine the warhead design
• Engineering and establishing the required production lines
• Producing or acquiring required materials and components
• Assembling components and sub-assemblies into stockpile warheads (if approved by the President)
• Providing secure transport within the U.S.
• Developing maintenance procedures and producing replacement limited-life components (LLCs)
• Conducting a jointly-approved quality assurance program
• Developing a refurbishment plan—when required—for sustained stockpile shelf-life
• Securing warheads, components, and materials while at DOE facilities
• Accounting for individual warheads in DOE custody
• Participating in the joint nuclear weapons decision process
• Receiving and dismantling retired warheads; and disposing of components and materials from retired warheads.
Joint Nuclear Acquisition Process
The Joint DoD-NNSA Nuclear Weapons Life Cycle consists of seven phases from initial research
and Concept Design through Retirement, Dismantlement, and Disposal. This process is similar to,
and has many parallels with, the Defense Acquisition System described in DoD 5000 directives
and instructions.
The United States has not conducted a weapon test involving a nuclear yield since 1992. This
prohibition against nuclear testing is based on diplomatic and political considerations including
the international Comprehensive Test Ban Treaty of 1996, which the United States follows even though the treaty was not formally ratified. Although entirely new nuclear weapons could be
developed without testing involving a nuclear yield, the United States has refrained from
developing new devices and instead has emphasized stockpile management including programs to
upgrade, update, and extend the service life of our current weapons. This sustainment effort is
called the 6.X Process.
The Phase 6.X Process
The Nuclear Weapons Council (NWC) has a major role in the refurbishment and maintenance of
the enduring nuclear weapons stockpile. To manage and facilitate the refurbishment process, the
NWC approved the Phase 6.X Procedural Guideline in April 2000.
Phase 6.1 – Concept Assessment (On-going)
The Concept and Assessment Phase consists of continuing studies by the DoD, the NNSA, and
the Project Officer Group (POG). A continuous exchange of information, both formal and
informal, is conducted among various individuals and groups. This exchange results in the
focusing of sufficient interest on an idea for a nuclear weapon or component refurbishment to
warrant a Program Study. During the 6.1 Phase, the NWC must be informed in writing before the
onset of any activity jointly conducted by the DoD and the NNSA.
Phase 6.2 – Feasibility Study (9-18 months)
After the NWC approves entry into Phase 6.2, the DoD and the NNSA embark on a Phase 6.2
Study, which is managed by the POG for that weapon system. In a Phase 6.2 Study, design
options are developed and the feasibility of a Phase 6.X refurbishment program for that particular
nuclear weapon is evaluated. The NNSA tasks the appropriate DOE laboratories to identify
various design options to refurbish the nuclear weapon. The POG performs an in-depth analysis
of each design option. At a minimum, this analysis considers the following:
• Nuclear safety
• System design, trade-offs, and technical risk analyses
• Life expectancy issues
• Research and development requirements and capabilities
• Qualification and certification requirements
• Production capabilities and capacities
• Life cycle maintenance and logistics issues
• Delivery system and platform issues
• Rationale for replacing or not replacing components during the refurbishment
The Phase 6.2 Study includes a detailed review of the fielded and planned support equipment
(handling gear, test gear, use control equipment, trainers, etc.) and the technical publications
associated with the weapon system. This evaluation is performed to ensure that logistics support
programs can provide the materials and equipment needed during the planned refurbishment time
period.
Military considerations, which are evaluated in tandem with design factors, include (at a
minimum):
• Operational impacts and/or benefits that would be derived from the design options
• Physical and operational security measures
• Requirements for joint non-nuclear testing
Refurbishment options are developed in preparation for the development of the option down-select package. This package includes any major impacts on the NNSA nuclear weapons complex.
Phase 6.2A – Design Definition and Cost Study (3-6 months)
The NNSA will work with the National Nuclear Laboratories to identify production issues and to
develop process development plans and proposed workload structures for the refurbishment. The
National Nuclear Laboratories will continue to refine the design and to identify qualification
testing and analysis in order to verify that the design meets the specified requirements. Cost
estimates are developed for the design, testing, production, and maintenance activities for the
projected life of the Life Extension Program (LEP) refurbishment.
Phase 6.3 – Development Engineering (1 - 3 years)
Phase 6.3 begins when the NWC prepares a Phase 6.3 letter requesting joint DoD and NNSA development engineering.
The NNSA, in coordination with the DoD, conducts experiments, tests, and analyses to validate
the design option(s). Also at this time, the production facilities assess the producibility of the
proposed design, initiate process development activities, and produce test hardware as required.
At the end of Phase 6.3, the weapon refurbishment design is demonstrated to be feasible in terms
of:
• Safety
• Use control
• Performance
• Reliability
• Producibility
The design is thereby ready to be released to the production facilities for stockpile production
preparation activities. These activities are coordinated with parallel DoD activities.
The Lead Service may decide that a Preliminary Safety Study of the system is required in order to
examine design features, hardware, and procedures as well as aspects of the concept of operation
that affect the safety of the weapon system. During this Study, the Nuclear Weapon System
Safety Group (NWSSG) identifies safety-related concerns and deficiencies so that timely and
cost-efficient corrections can be made during this Phase.
Phase 6.4 – Production Engineering (1-3 years)
When development engineering is sufficiently mature, the NNSA authorizes the initiation of
Phase 6.4. This Phase includes activities to adapt the developmental design into a producible
design as well as activities that prepare the production facilities for refurbishment component
production.
Generally, Phase 6.4 ends after the completion of production engineering, basic tooling, layout,
and adoption of fundamental assembly procedures, and when NNSA engineering releases indicate
that the production processes, components, subassemblies, and assemblies are qualified.
Phase 6.5 – First Production (3-6 months)
When sufficient progress has been made in Phase 6.4, the NNSA initiates Phase 6.5. During this
Phase, the production facilities begin production of the first refurbished weapons. These weapons
are evaluated by the DoD and the NNSA. At this time, the NNSA preliminarily evaluates the
refurbished weapon for suitability and acceptability. A final evaluation is made by the NNSA and
the Labs after the completion of an engineering evaluation program for the weapon.
If the DoD requires components, circuits, or software for test or training purposes prior to final
approval by the NNSA, the weapons or items would be utilized with the understanding that the
NNSA has not made its final evaluation. The POG coordinates specific weapons requirements for
test or training purposes.
The POG informs the NWCSSC that the Life Extension Program (LEP) refurbishment program
is ready to proceed to IOC and full deployment of the refurbished weapon. The Lead Service
conducts a Pre-Operational Safety Study at a time when specific weapon system safety rules can
be coordinated, approved, promulgated, and implemented 60 days before Initial Operational
Capability (IOC) or first weapon delivery.
During this Study, the NWSSG examines system design features, hardware, procedures, and
aspects of the concept of operation that affect the safety of the weapon system to determine if the
DoD nuclear weapon system safety standards can be met. If safety procedures or rules must be
revised, the NWSSG recommends draft revised weapon system safety rules to the appropriate
Military Departments.
Phase 6.6 – Full-Scale Production (Varies)
Upon NWC approval to initiate Phase 6.6, the NNSA undertakes the necessary full-scale
production of refurbished weapons for entry into the stockpile. Phase 6.6 ends when all planned
refurbishment activities, certifications, and reports are complete.
Aligning the DOE Phase 6.X processes with DoD acquisition activities requires extensive planning and coordination. The timelines for the DOE process are significantly longer than current DoD projections, and data flow must be tightly coordinated to ensure successful program execution that meets warfighter timelines and requirements.
Appendix J: Human Systems Integration (HSI)
OPR: SAF/AQ-AFHSIO
hsi.workflow@pentagon.af.mil
Introduction
System hardware and software components are defined and transformed as technology evolves,
but the human being is the one (and often the only) known factor as we begin to define a materiel
solution to a capability gap. To begin planning and developing a new system, it is important to
consider the human first. This is called Human Systems Integration (HSI).
HSI defined:
Human Systems Integration (HSI): “interdisciplinary technical and management processes
for integrating human considerations within and across all system elements; an essential
enabler to systems engineering practice” (International Council on Systems Engineering
(INCOSE), 2007)
[The goal of HSI is] “to optimize total system performance, minimize total ownership
costs, and ensure that the system is built to accommodate the characteristics of the user
population that will operate, maintain, and support the system.” DoDI 5000.02
“The integrated, comprehensive analysis, design and assessment of requirements, concepts
and resources for system Manpower, Personnel, Training, Environment, Safety,
Occupational Health, Habitability, Survivability and Human Factors” AFI 10-601
Air Force HSI is frequently broken out into nine elements, known as “domains,” to help people
think about and manage various aspects of human involvement/impacts. The AF HSI domains
are: Manpower, Personnel, Training, Environment, Safety, Occupational Health, Human Factors
Engineering, Survivability, and Habitability. These domains are explained later in this appendix.
Integrating the human when developing alternatives
The Department of Defense (DoD) places a high priority on our people, and the policies reflect
that. The Air Force priority of “Develop and care for Airmen” is a guiding tenet for this work.
The earlier humans are considered in the Integrated Life Cycle, and the more consistently they are considered throughout all phases, the better the system will be and the better it will be for the people who use it. In fact, the DoD requires that acquisition programs give the human equal
treatment to hardware and software as systems are developed:
“The human and ever increasingly complex defense systems are inextricably
linked. Systems, composed of hardware and software, enable the ability of
humans to perform tasks that successfully project combat power in difficult and
lethal environments. High levels of human effectiveness are typically required for
a system to achieve its desired effectiveness. The synergistic interaction between
the human and the system is key to attaining improvements in total system
performance and minimizing total ownership costs. Therefore, to realize the full
and intended potential that complex systems offer, the Department must apply
continuous and rigorous approaches to HSI to ensure that the human capabilities
are addressed throughout every aspect of system acquisition….The DoD has embraced HSI
as a systemic approach. The concept of HSI embraces the total human involvement with
the system throughout its life cycle…. In summary, this means that the human in
acquisition programs is given equal treatment to hardware and software.” (FY11
Department of Defense Human Systems Integration Management Plan, 2011. Washington,
DC: DDRE, Director, Mission Assurance, and Director of Human Performance, Training
& Biosystems.)
These principles apply to unmanned platforms as well:
“…unmanned systems are unmanned in name only. While there may be no Airman
onboard the actual vehicle, there indeed are airmen involved in every step of the process,
including the pilots who operate the vehicles’ remote controls and sensors and
maintenance personnel.” (Gen Fraser, VCSAF, 23 July 2009)
We have learned through the years that because of maintenance and data processing requirements,
“unmanned” systems often require MORE people to operate and maintain them than traditional
platforms.
The importance of HSI
HSI ensures that the people who touch the system in any way are accommodated and provided
with effective, usable, maintainable, safe equipment, which is integrated and designed properly.
The earlier humans are considered in CONOPS, capability gaps, capability-based analysis assumptions and constraints, and JCIDS documents, the better the chance that the equipment will be designed and funded for development with human considerations intact.
• Human performance (HP) capabilities and limitations often become serious design constraints – they should be factored into analytical assumptions and constraints
• Effective HSI results in systems with excellent usability, availability, safety, suitability, accessibility, and maintainability
• Early recognition and consideration of the human element may preclude some deficiencies often found in OT&E that may be very costly to redesign
HSI responsibilities
DODI 5000.02, Enclosure 8 (new draft 5000.02 will be Enclosure 2-6) stipulates that the
Program Manager (PM) is responsible for doing HSI during the development of the program. As
a member of the Analysis of Alternatives study team, consider the human when addressing study questions and when developing the study's assumptions and constraints. Determine costs related to mission tasks, threats, environments, manpower requirements, skill levels, effectiveness, system performance, usability, accessibility, maintainability, safety, etc. The analysis will tell PMs and System Engineers (SEs) about the necessary characteristics and performance parameters of the new system. In turn, this information will be used to make decisions impacting the lives of Airmen. It is important, therefore, to provide a thorough basis for analytic decisions.
The HSI process
"…A knowledgeable, interdisciplinary HSI team is generally required to address the full
spectrum of human considerations, and the systems engineer is key to ensuring that HSI is
included throughout the system’s life cycle…" (INCOSE Systems Engineering Handbook,
Para 9.12.1 “HSI is Integral to the SE Process”, Version 3.2, 2010)
“The objective of Human Systems Integration is to appropriately integrate the human into
the system design to optimize total system effectiveness. To accomplish this, all aspects of
human interaction with the system must be understood to the highest degree possible to
optimize the system and human capabilities. For each program or system, different
emphasis is placed on the human interaction depending on the type of system and its
mission and environment." (FY11 Department of Defense Human Systems Integration
Management Plan, 2011. Washington, DC: DDRE, Director, Mission Assurance, and
Director of Human Performance, Training & Biosystems.)
Human roles, needs and impacts in maintenance, logistics, training, intelligence, security and
other support functions can and should be accounted for in the HSI process – but only if they are
first addressed in the CBA and AoA. Starting with the human as the integrating focus is a rational
methodology to inform early systems engineering, development planning and concept
characterization efforts.
How HSI traces through the JCIDS documents – if the correct HSI language does not appear in
any one document, it cannot be inserted in the next stage or put onto contract:
1. HSI in CBA: cites inability to safely perform mission, and unacceptable O&S costs due to
long maintenance times.
2. HSI in ICD: cites need for operator safety and accessibility of components for
maintenance.
3. HSI in AoA: MOEs/MOPs for safety, usability, accessibility
4. HSI in CDD: KPP for Force Protection, KPP for Safety, KPP for Sustainment, KSA for
Maintainability, attributes for accessibility
5. HSI in CPD: KPP for Force Protection, KPP for Safety, KPP for Sustainment, KSA for
Maintainability, attributes for accessibility
6. (after JCIDS) HSI in SRD: cites applicable parts of HSI DID and MIL STD 1472 for
safety, usability, accessibility
[NOTE: A simple tool for considering human involvement, impacts, constraints, and trade-offs
can be found at the end of this appendix.]
Domains defined
The HSI domains represent a diverse set of human resources-related issues to stakeholders in the
acquisition process, many of whom are primary “bill payers” during the operations and support
and disposal phases of the system life cycle. Since changes in one domain will likely impact
another, domains should not be considered as separate stove-pipes. An integrated perspective,
balancing the equities of all stakeholders, needs to start prior to MS A and continue through
development and sustainment.
Below are the AF HSI domains and some human considerations that might influence decisions
and thoughts about effectiveness, usability, costs, etc. Notice that some human considerations
appear in more than one domain. The placement of a consideration will often be overlapping, and
it may be contextual (for example: vehicle exhaust may cause an Occupational Health issue, or an
Environment issue, or both):
• Manpower: Wartime/peacetime manning requirements; Deployment considerations; Future technology and human aptitudes; System manpower estimates; Force structure; Maintenance and logistics concepts; Operating strength; BRAC considerations; Manning concepts; Life Cycle Cost implications of manpower decisions; Manpower policies
• Personnel: Selection and classification; Personnel/training pipeline; Demographics; Qualified personnel; Knowledge, Skills and Abilities; Projected user population/recruiting; Accession/attrition; Career progression/retention; Cognitive, physical, educational profiles; Life Cycle Cost implications of personnel decisions; Promotion flow
• Training: Training strategy and concepts; Training development and methods; Impact on school house or resources; Simulation/embedded/emulation; Virtual applications; Operational tempo; Trainer currency; Training vs. job aids; Training system costs and Operational Safety, Suitability, and Effectiveness (OSS&E) efficiency; Refresher and certification training; Required lead time for training, and timeliness of delivery; Manpower and Personnel policy implications for training flow and costs
• Environment: System hazards that affect or impact the human or earth; Natural resources (air, water, earth, noise, wildlife); Local communities/political/cultural; Pollution prevention; Exhaust/toxic emissions; Disposal; Recycle/reuse
• Safety: Safety types (Human, Flight, Weapon, Ground, CBRNE/NBC, Equipment); Design safety; Procedures (Normal, Emergency); System risk reduction; Human error; Redundancy of systems; Total System Reliability; Fault reduction; Communication
• Occupational Health: Operational health; Temperature; Hazards; Humidity/salt spray; Operational environments; Weather; Acoustics; Shock and vibration; Biological and chemical; Laser protection; Radiation; Ballistic spall/fragmentation; Oxygen deficiency; Exhaust/toxic emissions; Air/water pressure; Health care; Lighting; Medications; Weight/load weight distribution; Diagnose, treat and manage illness and trauma; Heat, cold, hydration; Stress; Exercise & fitness – mission readiness; Disease prevention (vaccines/hygiene)
• Human Factors Engineering: Human-centered design; Human-system interface (aka Human-Machine Interface (HMI)); Design impact on skill, knowledge, and aptitudes; Cognitive load, workload; Personal protection; Fatigue; Implications of design on performance; Human performance; Simplicity of operation, maintenance, and support; Design-driven human effectiveness, efficiency, safety and survivability; Costs of design-driven human error, inefficiency, or effectiveness
• Personnel Survivability/Force Protection: Situational awareness; Threats; Ergonomics; Fratricide and Identification Friend, Foe, Neutral; Operational arenas; Potential damage to crew compartment; Camouflage/concealment; Protective equipment; Sensors; Medical injury (self, buddy, etc.); Fatigue and stress; Degraded operations (long/short term)
• Habitability: Living/work environment; Impact on sustained mission effectiveness (ergonomics, bed, toilet, bath, food prep, rest and eating areas); Support services (food, medical, cleaning, recreation, etc.); Impact on recruitment, retention; Ingress/egress (normal, emergency, while wearing/carrying equipment); Evacuation of casualties; Security for personnel/personal items; Storage space for equipment, spares, food, water, supplies; Power requirements for food safety, lighting, temperature control, safety/security
HSI practitioners can assist the AoA teams by providing information and focusing thoughts on
effectiveness, usability, task analysis, safety considerations, habitability/facility needs,
occupational health impacts, etc.
HSI Support throughout the AoA
The Air Force Human Systems Integration Office (SAF/AQ-AFHSIO)
hsi.workflow@pentagon.af.mil should be the first call for HSI support. They can coordinate the
appropriate expertise from across the Air Force including AFLCMC, AFNWC, AFSPC/SMC, the
MAJCOM HSI cells, and the 711 Human Performance Wing.
Human Systems Integration Tool. Below is a simple matrix to help remember the domains and consider the many users of a program. It is populated with a few examples to provide an idea of how it is used to keep all the humans using or touching a system (and the costs associated with them) in mind when planning a new system. Use this when incorporating data into the assessment. Keep in mind that considerations in one domain may cause significant risks or trade-offs in another.
[Matrix not fully reproduced. The rows are the HSI domains (Manpower, Personnel, Training, Human Factors Engineering, Environment, Safety, Occupational Health, Personnel Survivability (aka Force Protection), and Habitability). The columns are the humans using or touching the system: Operators, Maintainers, Logisticians, Security, Trainers, Supporting 1 (e.g., intelligence supports the system), Supporting 2 (e.g., weather supports the system), Supported 1 (e.g., the system provides sensor intelligence), Supported 2 (e.g., the system provides transport), and Other (e.g., medical, ground station, fuel, mess). Example cell entries include wartime/peacetime manning numbers; numbers of guards, gates, shifts, analysts, and loggies; personnel types, skill levels, and contract support; pilot/simulator/SERE, maintenance, and train-the-trainer schooling; cockpit design, accessibility, and lift/stretch constraints; sending threat, weather, and signature data to the cockpit or intelligence squadron; exhaust/HAZMAT; checklists and emergency shut-off valves; temperature, pressurization, back strain, and flight-line ear protection; ejection, restraints, beacon, and chaff; and elimination packs, cockpit temperatures, and space for tools, workbenches, seating, and moving large equipment.]
• Example 1: Reducing Manpower in maintainers may cause a Safety or Occupational Health problem for the operator.
• Example 2: Decreasing the personnel skill levels of operators may cause a need for longer and more intense training, more trainers, or more advanced technical design to make up for the shortfall.
Appendix K: Acquisition Intelligence in the AoA Process
Analysis of Alternatives (AoAs) that are intelligence sensitive (i.e. either produce intelligence
products or consume intelligence products during development and/or operation) require
acquisition intelligence support and Intelligence Supportability Analysis (ISA).
Acquisition Intelligence is the process of planning for and implementing the intelligence
information and infrastructure necessary to successfully acquire and employ future Air Force
capabilities. Acquisition Intelligence has two primary goals: 1) to identify intelligence requirements early in the life cycle to minimize cost, schedule, and performance risks; and 2) to support programs/initiatives throughout their life cycle with high-quality intelligence material through a set of standard acquisition intelligence processes and tools.
Background
Intelligence integration in support of Air Force and Joint systems development has never been
more important or challenging than it is in today's environment. When intelligence is not fully
integrated into the Air Force's acquisition and sustainment processes, the results often include
costly work-arounds or modifications, scheduling delays, unplanned adjustments to
operations/maintenance, and/or delivery of a weapon system that has vulnerabilities to or is less
effective against emerging threats. As future systems become more intelligence-dependent, the
cost of omitting intelligence integration will increase significantly. Late identification of
requirements hampers the ability of the intelligence community to conduct the long-term
planning, funding and development of collection and production capability needed to support
users' requirements. Intelligence customers, in turn, are forced to use existing intelligence products or to contract out intelligence production, significantly impacting weapon capabilities and/or increasing program costs.
The expanded role of intelligence in the acquisition and sustainment processes is intended to
minimize program cost, schedule, technical, and performance risk by enabling long term support
planning by the intelligence community. Recent changes to DoDI 5000.02, CJCSI 3170.01, and
AFI 63-101 and the publishing of the new DoDD 5250.01 have significantly increased the
intelligence supportability requirements for weapon system programs. Specifically, program
managers are responsible to ensure an ISA is conducted in collaboration with the local
AFLCMC/IN intelligence office (also referred to as the local Senior Intelligence Officer-SIO,
throughout this document) to establish program intelligence sensitivity, document intelligence
requirements, and ensure current, authoritative threat data is used for analysis throughout the
program life cycle.
Intelligence Supportability Analysis (ISA) for AoAs
After establishing that an AoA is intelligence sensitive, i.e. either producing intelligence products
or consuming intelligence products during development and/or operation, analysis is conducted to
fill in the intelligence infrastructure details for capability enablers, their DOTmLPF implications,
and associated costs. That established framework is then applied to the proposed alternatives
resulting in a more complete and accurate picture of each proposal's intelligence support requirements and associated intelligence supportability shortfalls. Ideally, ISA will already have been conducted on the baseline alternative (consult with the local SIO), requiring analysis only on the alternatives to identify and document additional intelligence supportability needs.
ISA is the process by which AF intelligence, acquisition, and operations analysts identify,
document and plan for requirements, needs, and supporting intelligence infrastructure necessary
to successfully acquire and employ AF capabilities, thereby ensuring intelligence supportability.
The ISA results will provide the stakeholders with the needed info to compare a capability’s
stated or derived intelligence (data and infrastructure) support requirements with the intelligence
support capabilities expected throughout a capability’s life cycle. ISA results in the identification
of derived intelligence requirements (DIRs) and deficiencies, along with associated impacts to
both acquisition and operational capability if the required intelligence is not provided. Through
ISA, stakeholders identify, document, and plan for derived requirements and supporting
intelligence infrastructure necessary to successfully acquire and field Air Force capabilities.
Several major types of intelligence products and services are needed by weapon systems (see
Figure 1).
[Figure not reproduced. Figure K-1 groups the intelligence products and services needed by weapon systems into four areas: Threat (examples include the System Threat Assessment Report (STAR) and its SAP annex, system descriptions and FME reports, jammer studies, EWIRDB, TMAP models, IR/RF signatures, scenarios, threat assessments, order of battle, platform fit data, and characteristics and performance data); Geospatial Information & Services (GI&S) and Targeting (examples include Digital Terrain Elevation Data (DTED), the Digital Point Positioning Database (DPPDB), Controlled Image Base (CIB), paper and electronic maps, images, terrain and vector data, signature data, dynamic and CAD models, database access, targeting data, and custom products); Intelligence Infrastructure (examples include fighter squadron and acquisition intelligence personnel, SAR clearances, TTPs, SCI facilities and tools, SIPRNET, manpower, clearances, training, procedures, facilities, computer systems, and connectivity); and Other. Coverage is mostly adversary systems but also includes cooperative, neutral, commercial, coalition, and US systems. Some products are periodic; most are aperiodic.]
Figure K-1: Intelligence Products and Services
Acquisition Intelligence Analysts’ Support to AoAs
AoA is an evaluation of the performance, operational effectiveness, operational suitability,
and estimated costs of alternative systems to meet a mission capability. The analysis assesses
the advantages and disadvantages of alternatives versus the baseline capability, including the
sensitivity of each alternative in the available tradespace. Acquisition intelligence has a role
in all of the AoA working groups (WGs), identifying intelligence infrastructure requirements and implications.
Threats and Scenarios Working Group (TSWG). The TSWG is responsible for identifying and
providing the scenario(s) to be used during an AoA to assess the military utility and operational
effectiveness of solutions being considered for possible AF acquisition to meet a valid
requirement. Additionally, the TSWG provides threat performance and characteristic information
from intelligence sources to enable the AoA's Effectiveness Analysis Working Group (EAWG) to
simulate potential threats to mission effectiveness. The TSWG will be staffed primarily with
acquisition intelligence professionals and SMEs. Members support the TSWG by providing
relevant intelligence information to sustain TSWG decisions. The TSWG is the forum tasked to
track, anticipate, and mitigate issues potentially impacting the identification, selection and
recommendation of scenarios to the AoA WIPT. Other members may be added on an ad hoc basis
to resolve issues, as they arise.
Technology and Alternatives Working Group (TAWG). The TAWG acts as the interface
with alternative providers, crafting the requirements request, receiving alternative data, and
resolving questions between the providers and the rest of the AoA WGs. The acquisition intelligence specialist's role as a TAWG member is to ensure an ISA is conducted on all of the alternatives, that intelligence needs are identified, and that potential intelligence shortfalls are highlighted, including inputs from the acquisition intelligence cost analyst detailing what data is required to frame the ISR infrastructure costing analysis report. An example of decomposing
how Alternative A navigates: Alternative A navigates to a target using a seeker. What type of
data is needed to develop or operate the seeker (i.e. electro-optical signatures)? Does the IC
produce that type of data? If not, identify this as a potential intelligence shortfall.
The table below illustrates a typical AoA set of alternatives, from an intelligence supportability
perspective. Each column represents an area of intelligence infrastructure that would be required
for system development or operation. Refer to Figure K-1 for intelligence products and services.
Alternative  | Comm | Signatures | Training | Facilities
Baseline     |      |            |          |
Baseline +   | X    |            |          |
A            | X    | Shortfall  | X        | X
B            | Y    | X          | X        | X
C            | Z    | X          | X        | X
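As an illustration only (the alternative names, infrastructure categories, and data below are hypothetical and not drawn from any actual AoA), a short Python sketch of how a study team might record each alternative's intelligence infrastructure needs and flag potential shortfalls, following the seeker decomposition example above:

# Hypothetical sketch: track intelligence infrastructure needs per alternative
# and flag potential shortfalls (illustrative only; not an official ISA tool).
requirements = {
    "Baseline": [],
    "Alternative A": ["comm", "eo_signatures", "training", "facilities"],
    "Alternative B": ["comm", "rf_signatures", "training", "facilities"],
}

# Products and services the intelligence community is expected to provide.
ic_provides = {"comm", "rf_signatures", "training", "facilities"}

# Any required item the IC does not produce is a potential intelligence shortfall.
for alternative, needs in requirements.items():
    shortfalls = [need for need in needs if need not in ic_provides]
    print(f"{alternative}: shortfalls = {shortfalls or 'none'}")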
Operating Concept WG. The acquisition intelligence specialist's role is to review the CONOPS from an intelligence perspective to ensure intelligence supportability issues/needs are noted. This WG may also be called the Enabling Concept Working Group (ECWG).
Effectiveness Analysis Working Group. The acquisition intelligence specialist participates in
the creation of the analysis assumptions from the perspective of valid intelligence supportability
and aids in the identification/supply of required data.
Cost Analysis Working Group. Acquisition intelligence cost analysts, in coordination with members of the other working groups, support the AoA by providing cost data on
intelligence support-related activities external to the proposed solutions/alternatives (i.e.
DOTMLPF).
Acquisition Intelligence Analysts’ Support to Decision-makers
When an AoA (or AoA Study Plan) is scheduled to go before the AFROC, acquisition
intelligence analysts are asked to assess the AoA and highlight any ISA concerns. This is done
via an Intelligence Health Assessment (IHA) Memorandum for Record (MFR). In the MFR, the acquisition intelligence analyst identifies any potential concerns associated with the alternatives, or with the AoA Study Plan if the Plan is being reviewed by the AFROC. Sample IHA MFRs are below.
Contact Information
AFLCMC/IN, Ms. Mary Knight, 21st IS, DSN 986-7604, mary.knight@us.af.mil
MEMORANDUM FOR RECORD

FROM: AFLCMC/IN
Building 556, Area B
2450 D Street
Wright-Patterson AFB, OH 45433

SUBJECT: Deployable Tactical Radar Replacement (DTR2) Analysis of Alternatives (AoA) Update
1. Intelligence Supportability Analysis for the DTR2 (YELLOW): AFLCMC/IN is supporting
the development of the 3D Expeditionary Long Range Radar (3DELRR), one of the possible solutions in the DTR2 AoA. However, the AoA's tradespace did not include the Combined
Reporting Center (CRC) so intelligence (mission data needs) were not considered. This
approach has led to intelligence supportability issues in previous AF ISR systems, such as
Global Hawk’s integration with DCGS.
2. The intelligence community’s capabilities to support probable mission data needs are
(YELLOW): The solutions to two of the primary operational capability gaps require
information that has either been identified as a gap from other programs or will require
information in a different format or fidelity than what is currently being provided by the
intelligence community. These are:
a. (YELLOW) Gap: Does not detect and track stressing air breathing targets and/or
theater ballistic missiles. The intelligence data required to develop, test, and provide
updates for the mission data to support identification for the CRC are not likely to be
available to the fidelity necessary (new requirement over legacy system).
b. (YELLOW) Gap: Does not have the ability to conduct non-cooperative combat target
recognition (NCTR). NCTR will require signature data for the air breathing and
ballistic targets that will be classified by DTR2. Since tactical radars have not needed
this information in the past, signatures will likely be needed in different frequencies or
fidelities than is currently regularly produced by the intelligence community.
3. Questions can be addressed to the undersigned, DSN XXX-XXXX, @wpafb.af.mil.

//Signed//
MEMORANDUM FOR RECORD
FROM: AFLCMC/IN
Building 556, Area B
2450 D Street
Wright-Patterson AFB, OH 45433
SUBJECT: Acquisition Intelligence Input for Synthetic Aperture Radar (SAR)/Moving Target
Indicator (MTI) JSTARS Mission Area (JMA) AoA Topic at 14-15 Sep 11 AFROC
1. ISA performed (YELLOW): AFLCMC/IN performed ISA for the SAR/MTI JMA AoA. The
Intelligence Supportability Working Group (ISWG) worked directly with AoA members and
leadership during the final six months of the AoA. ISA was not included in the tradespace but
was considered parallel analysis. The ISA results were captured as an appendix within the
classified AoA Final Report completed in August 2011 but not integrated into the main
analysis. They were briefed to AoA Leadership during the July 2011 pre-AFROC AoA
Executive WIPT. The results also informed ACC/A2X of recommended planning
considerations associated with alternatives.
2. ISA Results (YELLOW): The ISA resulted in identification of Intel requirements for five
combinations of platforms/sensors with SME-based assessment of risks and cost estimates for
AoA discriminators, with manpower being the greatest based on large amounts of new sensor
data.
- Intel concerns are (YELLOW): The focus of the AoA was exclusively on sensors and platforms modeled against limited BMC2 missions and simulated against “watch box” data. This constraint limited the ability to perform meaningful ISA. The results for the “Target ID” Measure of Effectiveness were among the key indicators that further ISA will be needed if alternatives are narrowed down for acquisition consideration. Lack of existing or emerging cohesive SAR/MTI AF Doctrine (BMC2 vs. ISR), coupled with immature technology for SAR/MTI Processing, Exploitation, Analysis, and Dissemination (PED), are areas of potential future concern.
3. Address questions to the AFLCMC/IN POC, DSN XXX-XXXX.
//Signed//
Appendix L: Mission Tasks, Measures Development, and
Data in Detail
1. Data Categories and Levels of Measurement
Figure L-1 describes data categories and the various levels of measurement associated with each
category. Study teams must understand these different levels to determine the type of data to
collect, decide how to interpret the data for a measure, and determine what analysis is appropriate
for the measure. These levels are nominal, ordinal, interval, and ratio. Understanding the levels
of measurement guides how to interpret the data. It helps prevent illogical statements, especially
when comparing results to criteria.
Figure L-1: Categories and Levels of Measurement
Some organizations include an “absolute” category, generally described as data requiring absolute counts such as “number of operators.” The four categories described above, however, are those most commonly found in analytical and statistical literature.
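A minimal Python sketch (the statistic groupings below follow common statistical convention and are illustrative, not AoA policy) of how a study team might look up which summary statistics are appropriate for a measure once its level of measurement is known:

# Illustrative sketch: summary statistics commonly considered appropriate at each
# level of measurement (a common statistical convention, not AoA policy).
APPROPRIATE_STATS = {
    "nominal": ["mode", "frequency counts"],
    "ordinal": ["mode", "median", "percentiles"],
    "interval": ["mode", "median", "mean", "standard deviation"],
    "ratio": ["mode", "median", "mean", "standard deviation", "coefficient of variation"],
}

def appropriate_statistics(level):
    """Return the summary statistics generally valid for a measurement level."""
    return APPROPRIATE_STATS[level.lower()]

# Target Location Error is on a ratio scale, so means and ratios are meaningful;
# an operator rating on a Likert scale is ordinal, so medians are preferred over means.
print(appropriate_statistics("ratio"))
print(appropriate_statistics("ordinal"))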
2. Four Possible Combinations of Categories and Collection Methods
Measures are categorized using the terms quantitative or qualitative, and the terms objective or
subjective. While related, they are not the same thing, but are sometimes used incorrectly or
interchangeably. To clarify, quantitative and qualitative refer to the type of data we need to
collect while objective and subjective refer to the method used to collect the data. There are four
possible combinations of these terms:
Quantitative data with objective data collection
• Measure example: Target Location Error
• Rationale: The measure is quantitative since the measurement is on a ratio scale. The accuracy of a targeting pod is best answered by quantitative data (distance) with the data collected objectively (using a tape measure).

Quantitative data with subjective data collection
• Measure example: Operator estimate of the probability of survival
• Rationale: The measure is quantitative since the measurement is on a ratio scale (0.0 to 1.0 probability). The data is collected subjectively by first-person responses to a questionnaire item.

Qualitative data with objective data collection
• Measure example: Color of munitions
• Rationale: The measure is qualitative since the measurement is on a nominal scale (e.g., blue, red, green). The data is collected objectively by noting the color (human reading) or measuring the wavelength of light.

Qualitative data with subjective data collection
• Measure example: Operator rating of display
• Rationale: The measure is qualitative since the measurement is on an ordinal scale (e.g., Strongly Disagree, Disagree, Neither Agree nor Disagree, Agree, Strongly Agree). The data is collected subjectively (first-person reports) by asking operators via questionnaires for their opinion or perception.
As discussed above, sometimes these terms are used incorrectly or interchangeably. Quantitative
and objective are often used as synonyms for one another while qualitative and subjective tend to
be treated as synonyms. Figure L-2 clarifies their use and relationship to each other. In addition,
the table describes the appropriate statistical techniques to use for each data type/data collection
type combination.
Figure L-2: Data Types and Collection Methods
3. Data Collection Methods
Not all situations allow, or even call for, quantitative-objective data. Qualitative-subjective data is
appropriate in many cases:
• Someone’s judgment or perception of a system’s performance, capabilities, and/or characteristics is important in the assessment
• In some cases, data for attributes can only be obtained through judgment or perception of individuals (e.g., human mental states like workload or situational awareness)
• Qualitative-subjective data is needed to help explain quantitative-objective data
• Quantitative-objective data may not exist or it’s impossible to collect and/or analyze
Quantitative-objective data can be more difficult to collect than qualitative-subjective data, but
has many other advantages:
• Most informative and easiest to substantiate
• Sensitive analytical techniques (e.g., mean, correlation, regression, analysis of variance) can be applied to analyze data
• Errors and bias are easier to identify and quantify
In addition to measure data, other information not necessarily used to compute a metric should be collected, such as conditions or factors, stakeholder (user, operator, maintainer) and AoA team comments, and data from other reports or events.
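As a minimal sketch with made-up values (not related to any actual study data), the following Python example shows the different summaries that typically suit each end of the data-collection spectrum discussed in this section: a mean and standard deviation for a quantitative-objective measure, and a median and frequency counts for a qualitative-subjective (ordinal) measure:

# Illustrative sketch with made-up data (not from any actual AoA).
import statistics
from collections import Counter

# Quantitative-objective: target location error in meters (ratio scale).
tle_meters = [4.2, 3.8, 5.1, 4.6, 3.9, 4.4]
print("TLE mean:", round(statistics.mean(tle_meters), 2),
      "stdev:", round(statistics.stdev(tle_meters), 2))

# Qualitative-subjective: operator display ratings on a 5-point ordinal scale
# (1 = Strongly Disagree ... 5 = Strongly Agree); medians and counts are appropriate,
# while treating these as interval data to compute a mean is questionable.
ratings = [4, 5, 3, 4, 4, 2, 5]
print("Rating median:", statistics.median(ratings))
print("Rating counts:", Counter(ratings))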
4. Determining how to use data
Data can be used in several different ways:
• Compute metrics for measures
• Serve as inputs to models
• Describe factors or conditions
In past studies, some teams have referred to each of the above ways of using data as their MOPs.
It’s up to each study team to determine what data is important enough to be measured, and how
all other data should/should not be used and reported. How the data are used and what it may be
called will likely vary from study to study. Although significant amounts of data may exist, study
teams must consider a number of things in determining how to use data:
• Study objectives, questions, limitations, constraints, and guidance
• Characteristics of interest in the materiel alternatives being analyzed
• Availability of the data and confidence in the data
The example below shows how the altitude data element can be used in different ways:
Figure L-3: Altitude Data Element Example
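Because the referenced figure is not reproduced here, the following hypothetical Python sketch (names, values, and the notional model are invented) illustrates the three uses listed above, with the same altitude data element feeding a metric, a model input, and a condition label:

# Hypothetical sketch: one recorded data element (altitude) used three different ways.
sortie_altitudes_ft = [24000, 26500, 25200, 23800]

# 1. Compute a metric for a measure (e.g., average ingress altitude).
average_altitude_ft = sum(sortie_altitudes_ft) / len(sortie_altitudes_ft)

# 2. Serve as an input to a model (a purely notional sensor-range relationship).
def notional_detection_range_nm(altitude_ft):
    return 0.002 * altitude_ft  # placeholder relationship, illustration only

# 3. Describe a factor or condition used to bin the results.
condition = "high altitude" if average_altitude_ft > 25000 else "medium altitude"

print(average_altitude_ft, notional_detection_range_nm(average_altitude_ft), condition)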
5. Creating Mission Tasks and Measures
Figure L-4 provides an example of MTs and measures that can be derived from the following
Mission Statement in the ICD:
The Theater commander must provide moving target indicator support to maneuver and surface
forces across a Corps sized area. It must detect, track, and identify a wide range of potential
target categories and classes and communicate that information to enable the targeting and
prosecution of those targets.
Figure L-4: Notional MTs/Measures
5.1 Measure Criteria
Measure criteria represent a level of performance against which system characteristics and
capabilities are compared. Measure criteria are expressed as threshold and objective values or
standards:
• Threshold: A minimum acceptable operational value of a system capability or characteristic below which the utility of the system becomes questionable. (CJCSI 3170.01G and AFI 10-601)
• Objective: An operationally significant increment above the threshold. An objective value may be the same as the threshold when an operationally significant increment above the threshold is not identifiable. (CJCSI 3170.01G and AFI 10-601)
5.2 Types of Measure Criteria
User-established criteria are explicitly stated or implied in a requirements document such as the
ICD, CDD, CPD, and the TDS. They can be qualitative or quantitative.
An identified standard can be developed from requirements that describe system characteristics
and performance but have no explicitly stated or implied metrics and criteria standards. Identified
standards can be drawn from sources such as CONOPS, TTPs, SME input, studies, etc. They also
can be qualitative or quantitative.
Any perceived credibility concern can be mitigated by obtaining user concurrence. For both user-established criteria and identified standards, it is important to document source and rationale. It is also important to keep in mind that requirements may evolve over the course of the AoA, requiring updates to the measure criteria. Changes occurring after the publication of the study
plan must also be clearly documented in the final report.
5.3 Measures Development Guidelines
• Keep the measure as simple as possible – a simple measure requires only a single measurement
• Develop measures that are important to understanding and assessing the alternatives as well as measures that enable discrimination among alternatives
• Measures should not be listed more than once for a mission task, but the same measure may be listed under different mission tasks
• Focus on the outputs, results of performance, or the process to achieve the activity
• Check to ensure the units of the metric match the criteria values (see the sketch after this list)
• Understand the type of data being collected and the appropriate statistics that can be used in the analysis
• Do not apply weights to measures, although some measures may be more important than others
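A minimal Python sketch (field names are illustrative, not a prescribed AoA data standard) of recording a measure with its metric, units, and threshold/objective criteria so the units-consistency guideline above can be checked automatically:

# Illustrative sketch: a simple record for a measure and a units-consistency check
# (field names are hypothetical, not a prescribed AoA data standard).
from dataclasses import dataclass

@dataclass
class Measure:
    name: str
    metric: str          # what is computed, e.g., "median target location error"
    metric_units: str    # units of the metric
    threshold: float     # minimum acceptable value
    objective: float     # operationally significant increment beyond the threshold
    criteria_units: str  # units in which the criteria are stated

    def units_consistent(self):
        # Guideline check: the units of the metric must match the criteria values.
        return self.metric_units == self.criteria_units

tle = Measure("Target Location Error", "median location error", "meters",
              threshold=10.0, objective=5.0, criteria_units="meters")
assert tle.units_consistent()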
5.4 Measures Examples
Two examples of well-crafted measures and their associated attributes, metrics, and criteria are
provided below. For each example there are two different forms of the same measure. The first
shows a poorly developed measure, the second is the preferred form. These are examples of the
mechanics of building a measure and are provided for illustrative purposes only. They do not
cover the entire range of potential measures the study team may need. For development of the
specific criteria, the team must refer to initial requirements documents, study guidance, etc. For
more comprehensive information regarding building measures, OAS has developed a primer on
measures development and guidelines and can provide in-person training. A third example is
provided to illustrate a subjective/qualitative measure and its associated attribute and standard.
Example 1: Measure descriptions should not contain metrics, criteria, or conditions, although
metrics, criteria, and conditions are always associated with a measure.
Example 2: Do not define success with several attributes unless all must be met concurrently for
the process to be successful. Define separate measures for each.
Example 3: Measure is qualitative since it will be measured using an ordinal scale (e.g., Strongly
Disagree, Disagree, Neither Agree or Disagree, Agree, Strongly Agree). The data is collected
subjectively (first-person reports) by asking operators via questionnaires for their opinion or
perception.
5.5 Supporting Measures
While not required, there are cases when supporting measures are appropriate. Supporting
measures are used to explicitly show a relationship between measures and can be MOEs, MOSs,
or MOPs that refer to a “parent” measure (Example 1). They support a parent measure by
providing causal explanation for the parent measure and/or highlighting high-interest aspects or
contributors of the parent measure. A parent measure may have one or more supporting
measures.
As shown in Example 1, the three supporting measures provide more insights into the probability
of survival (parent measure). In this example, the capability to detect, identify, and jam threat emitters is critically important to aircraft survivability. By using supporting
measures, more information is collected to help explain the probability of aircraft survival. For
instance, low survivability may result from poor detection capability (i.e., the aircraft systems are
incapable of detecting a significant number of threat emitters, thereby making the aircraft
vulnerable to these threats). In other situations, performance in identification and/or jamming
capabilities may explain survivability performance.
Example 1:
It is important not to use supporting measures to roll-up or summarize to a parent measure
(Example 2). A parent measure should stand by itself; it should not be a placeholder or umbrella
for supporting measures. A parent measure should have its own metric and criteria. Aggregating
supporting measures in some mathematical or qualitative manner can introduce skepticism since
the aggregation approach used may not be acceptable to all readers. Although one aggregation
approach is shown in this example, there are really many possible ways to aggregate the results
that could produce different parent measure ratings. For example, one might assign a number to
the color code rating for each supporting measure and take an average to compute the parent
measure color code rating or one might use subject matter experts to review the supporting
measure results and assign a color code rating to the parent measure. Finally, the benefit of using
supporting measures to gain insights into the parent measure performance is lost since the parent
measure is not being measured.
Example 2:
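Because the example graphic is not reproduced here, the following small Python illustration (ratings and cut lines are invented) shows why such roll-ups are discouraged: averaging numeric equivalents of the supporting-measure color codes can hide a failing supporting measure.

# Illustrative sketch with invented ratings: averaging color-code values of
# supporting measures can mask a serious shortfall in one of them.
RATING_VALUES = {"green": 3, "yellow": 2, "red": 1}

supporting_ratings = {"Detection": "green", "Identification": "green", "Jamming": "red"}

average = sum(RATING_VALUES[r] for r in supporting_ratings.values()) / len(supporting_ratings)
rolled_up = "green" if average >= 2.5 else "yellow" if average >= 1.5 else "red"

# The roll-up reports "yellow" (or "green" with a slightly different cut line),
# even though the jamming shortfall may be operationally critical on its own.
print(round(average, 2), rolled_up)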
6. High Interest Measures – Key Performance Parameter (KPP) and Key System Attribute (KSA)
KPP: Attributes or characteristics of a system that are considered critical or essential to the
development of an effective military capability. Some KPPs are mandatory depending on the
program: Survivability, Net Ready (Interoperability, Information Assurance), Force Protection,
Sustainment (Availability). Additionally, some KPPs are selectively applied depending on the
program: System Training, Energy Efficiency (JCIDS Manual, Appendix A, Enclosure B).
KSA: System attributes considered critical or essential for an effective military capability but not
selected as KPPs. KSAs provide decision makers with an additional level of capability
prioritization below the KPP but with senior sponsor leadership control (generally 4-star level,
Defense Agency commander, or Principal Staff Assistant). For the Sustainment KPP
(Availability), there are two mandatory supporting KSAs: Reliability and Ownership Cost
Efficiency (JCIDS Manual, Appendix A, Enclosure B).
[Note: please refer to JCIDS Manual for explanation of the mandatory KPPs and KSAs]
CBAs, AoAs, and other supporting analyses provide the analytic foundation for determining the
appropriate thresholds and objectives for system attributes and aid in determining which attributes
should be KPPs or KSAs. In fact, one of the requirements of the AoA is to produce an initial
RCT which identifies an initial set of possible KPPs and KSAs. This RCT should be included in
the AoA final report and it, or a later refined version of it, will become an essential part of the
CDD.
7. KPP and KSA Development Guidelines:
 Determine which attributes are most critical or essential to a system and designate them as
KPPs or KSAs (see JCIDS Manual, Enclosure B for guidance in selecting KPPs and KSAs)
 The number of KPPs and KSAs beyond the mandatory ones should be kept to a minimum
to maintain program flexibility
– Must contain sufficient KPPs/KSAs to capture the minimum operational
effectiveness, suitability, and sustainment attributes needed to achieve the overall
desired capabilities for the system
– Some mission tasks may have more than one KPP and/or KSA; other mission tasks
may not have a KPP or KSA
 Develop KPP/KSA measures (MOEs, MOPs, MOSs) that measure the critical or essential
attribute
 Differences between the threshold and objective values set the tradespace for meeting the
thresholds of multiple KPPs and KSAs (see the sketch following this list)
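As a minimal sketch of how a study team might record KPP/KSA attributes with threshold and
objective values and compute the available tradespace, the following uses hypothetical attribute
names and values; it is illustrative only, not a prescribed format.

    # Hypothetical KPP/KSA records; all names and values are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class KeyAttribute:
        name: str
        kind: str         # "KPP" or "KSA"
        measure: str      # associated MOE/MOP/MOS
        threshold: float  # minimum acceptable value
        objective: float  # desired value

        def tradespace(self) -> float:
            """Objective minus threshold: the room available for trading
            performance against the thresholds of other KPPs and KSAs."""
            return self.objective - self.threshold

    attrs = [
        KeyAttribute("Availability", "KPP", "Operational availability (Ao)", 0.85, 0.95),
        KeyAttribute("Reliability", "KSA", "Mean time between critical failure (hr)", 200.0, 300.0),
    ]
    for a in attrs:
        print(f"{a.kind} {a.name}: tradespace = {a.tradespace():g}")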
8. Rating Measures and Mission Tasks
8.1 Measures
Once the effectiveness analysis has been completed, the values for the measures of each
alternative need to be presented in a comprehensive manner. The following section provides one
method for presenting each alternative using a color scheme to indicate how well each measure
and mission task was accomplished. This is certainly not the only way the results can be
displayed; there are many methods for presenting the information. Whatever method the team
chooses, keep in mind that OAS discourages “roll-up” and weighting schemes that tend to mask
important information or provide misleading results.
The assessment process begins by rating measures. Measures are the foundation for assessing
mission tasks since they are indicators of the successful (or unsuccessful) accomplishment of the
mission tasks. Measures are typically rated against their threshold evaluation criteria with four
possible measure ratings. As described earlier, these can be either user-established or an
identified standard. However, when objective values are absolutely required for performance
(i.e., not just to provide a “trade space”), an alternative rating scale which incorporates an
additional rating for objective evaluation criteria can be used. And as discussed previously, in
some cases the capability/cost/risk results space may be used to help identify the threshold and
objective values. In the following example, the measure would receive a blue code if it met or
exceeded the objective value:
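A minimal sketch of one such rating rule is shown below, assuming a higher-is-better measure with
hypothetical threshold and objective values; the color labels follow the conventions described in
this appendix, and the significance of any shortfall remains an operational judgment supplied by
the study team, not an arithmetic result.

    # Hypothetical rating rule for a single higher-is-better measure.
    # Threshold/objective values and the shortfall-significance flag are assumptions;
    # operational judgment, not arithmetic, determines significance in practice.
    def rate_measure(value, threshold, objective, shortfall_is_significant=False):
        if value >= objective:
            return "Blue (met or exceeded objective)"
        if value >= threshold:
            return "Green (met threshold)"
        if shortfall_is_significant:
            return "Red (significant shortfall)"
        return "Yellow (shortfall, not significant)"

    # Example: probability of survival with assumed criteria
    print(rate_measure(0.93, threshold=0.90, objective=0.95))  # Green
    print(rate_measure(0.88, threshold=0.90, objective=0.95,
                       shortfall_is_significant=True))         # Red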
When a measure does not meet the threshold evaluation criteria, operational significance becomes
a key consideration. Study teams should leverage operational experience to apply judgment and
determine the significance of the identified effectiveness and/or suitability shortfalls on the
mission task. Use all available information (e.g., CONOPS, OPLANS) to help in making a
determination. Key questions to consider when determining the significance of an operational or
suitability shortfall include:
 How close to the threshold value is the measure?
 What is the operational significance (i.e., what is the consequence or impact on the
mission task if the threshold criterion is missed by a certain amount)?
 If the shortfall occurs only under some operational conditions (e.g., adverse weather,
mountainous terrain), what is the significance of the impact?
The impact on the mission task ultimately determines whether the shortfall is significant or not.
When a shortfall has only minimal operational impact on the mission task, it should be assessed
as “not a significant shortfall.” However, when a shortfall has substantial or severe operational
impact on the mission task, it should be assessed as a “significant shortfall.” “Inconclusive”
ratings are used when there is insufficient information to assess a measure. On the other hand, a
measure should be rated as “not assessed” when there is no information to assess it, or a decision
is made by the team not to assess it for a specified reason. For instance, the assessment of a
particular measure might be delayed until a later phase in the study. In either case
(“inconclusive” or “not assessed”), the final report should explain the reason for the rating.
Consider the following guidelines when reporting results:
 State the measures supporting the mission task with the associated criteria
 Include ratings and visual indicators (Red, Yellow, Green) for the measures
 Include a narrative describing how the measures were rated
 Apply sound military and operational judgment in determining the significance of
capability shortfalls
Remember, the measures are the foundation for assessing mission tasks.
8.2 Mission Tasks
After all measures have been rated, the focus of the assessment shifts from individual shortfalls at
the measure level to the collective operational impact at the mission task level. It is often useful
to show how well the alternative solutions accomplish the designated mission tasks; frequently,
some alternatives will perform some mission tasks well but not others.
makers should be made aware of the capabilities and limitations of the alternatives as they relate
to accomplishing all mission tasks and how well they do so. However, while it is necessary to
rate the measures, rating the mission tasks is optional. Each study team must make a
determination as to the best way to present the information to the decision makers. Nevertheless,
as with measure assessment, teams should use operational experience and judgment to determine
the overall impact of any capability shortfalls on the mission tasks.
 There may be one or more prominent or critical measures (e.g., KPPs) which are very
influential on how well the mission task is achieved; such measures may drive the degree
of operational impact on the mission task
 There may be measures that have interdependencies (e.g., messages can be sent quickly,
but they are incomplete) which need to be understood and considered when determining
the significance of impact
 Do not simply rely on the preponderance of measure ratings (e.g., 3 out of 5 measures met
the criteria) to rate the mission task; use operational judgment and experience
 When there is insufficient information to assess a mission task, it should be rated
“inconclusive” and an accompanying explanation written
Four possible mission task ratings include the following:
Rating measures and mission tasks is not a simple task. Do not rely on mathematical or heuristic-
based measure roll-up or weighting schemes to derive a rating; although simple to use, they are
never the best way to communicate results.
Appendix M: GAO CEAG, Table 2
Following these 12 steps should result in reliable and valid cost estimates that management can
use for making informed decisions. The entire guide can be found on the GAO website:
http://www.gao.gov/new.items/d093sp.pdf
Appendix N: Developing a Point Estimate
Appendix O: CAPE AoA Study Guidance Template
The following is provided by CAPE as a template to begin drafting the AoA Study Guidance.
The word “draft” indicates that any study guidance developed from this template will be draft
guidance; the template itself is not a draft.
DRAFT (XXXXX PROGRAM NAME)
ANALYSIS OF ALTERNATIVES GUIDANCE
Month XX, 2xxx
Program Name (Abbreviation) Analysis of Alternatives Guidance
Purpose
The goal of Analysis of Alternatives (AoA) guidance is to facilitate high-caliber analysis, fair
treatment of options, and decision-quality outcomes to inform the Milestone Decision Authority
(MDA) at the next Milestone and shape/scope the Request For Proposal (RFP) for the next
acquisition phase. CAPE guidance should direct the AoA to explore tradespace in performance,
schedule, risk and cost across a full range of options to address validated capability
requirements. Additionally, the guidance should support an AoA feedback mechanism to the
requirements process of recommended changes to validated capability requirements that, upon
further study, appear unachievable and/or undesirable from a cost, schedule, risk and/or
performance point of view.
Background
The guidance should provide a brief background on why the AoA is being conducted and how we
got here. It should discuss the history of the effort and characterize related programs, to include
lessons learned from previous cancellations. This section should also include a discussion of the
Joint Requirements Oversight Council (JROC)-approved capability gaps and their role in the AoA
study. The guidance should make clear that the values of the capability gaps in the Initial
Capabilities Document (ICD) and draft Capability Development Document (CDD) should be
treated as reference points to frame decision space rather than minimum standards to disqualify
options. The AoA should illuminate the operational, schedule, risk and cost implications of
tradespace around the validated capability gaps.
Assumptions and Constraints
Defining and understanding key assumptions and constraints are important in properly scoping
the issue, defining excursions, and limiting institutional bias. Assumptions that are standard or
trivial and therefore provide limited insight on what is actually driving the answer are not of
interest. Since assumptions can determine outcomes, the guidance should direct the study team to
identify the key assumptions driving the AoA results. Significant assumptions can include U.S.-
to-enemy force ratios, threat characterization, CONOPS, etc. All major/key assumptions and
constraints should be validated by the Study Advisory Group (SAG) as they are developed, but
prior to beginning analysis.
Alternatives
This section should delineate the base case set of alternatives. These alternatives typically include
a baseline (legacy systems and their approved modifications through the current POM), modified
legacy systems, modified commercial/government/allied off the shelf systems, and new
development alternatives. The alternatives should be distinctly defined, with enough detail to
support the analytic approaches used. The alternatives should be grounded in industry, national
lab or other agency responses; the AoA should avoid contriving unrealistic, “idealized” options.
The guidance should direct the AoA to explore a full range of viable modifications to legacy
systems. For all alternatives, the AoA should assess features that appear to provide substantive
operational benefit and apply to all viable alternatives (e.g., if a type of sensor is found to provide
notably improved effectiveness for one alternative, the AoA should explore incorporating that
feature in all alternatives).
Alternatives should also consider variations or excursions for attributes that are significant cost
drivers. The intent is to find the “knee-in-the-curve” for the cost driver to ensure consideration of
cost effective solutions rather than single point solutions that turn out to be unaffordable.
Analysis
The analysis should be based on sound methodologies and data that are briefly outlined in the
Study Plan. The guidance should establish an early milestone/date for the AoA team to present
their detailed methodology and data approaches, tools, scenarios, metrics, and data in depth to
the SAG and other stakeholders.
The AoA should spell out the scenarios and CONOPS used and explain the rationale for the
inclusion of non-standard scenarios. If non-standard scenarios are employed the study team
should explain in depth outcomes unique to those scenarios. The guidance should direct that a
range of less stressing and more stressing scenarios be used, rather than using only highly
demanding scenarios.
The guidance should instruct the AoA to spell out the metrics used, any weighting factors applied
to these metrics, and the rationale for applying each weighting factor. Metrics should include
comparisons between the (weighted) metrics and cost to facilitate cost, performance and schedule
tradeoff discussions.
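Where the guidance calls for comparing weighted metrics against cost, a minimal sketch of such a
comparison is shown below; the alternatives, weights, scores, and costs are hypothetical, and any
weighting scheme used in an actual AoA should be documented with its rationale.

    # Hypothetical comparison of weighted effectiveness scores against cost.
    # Alternatives, weights, scores, and costs are illustrative assumptions only.
    weights = {"range": 0.4, "payload": 0.3, "availability": 0.3}

    alternatives = {
        "Baseline":        {"range": 0.6, "payload": 0.5, "availability": 0.8, "cost_M": 900},
        "Modified legacy": {"range": 0.7, "payload": 0.7, "availability": 0.8, "cost_M": 1200},
        "New development": {"range": 0.9, "payload": 0.9, "availability": 0.7, "cost_M": 2100},
    }

    for name, data in alternatives.items():
        score = sum(weights[m] * data[m] for m in weights)
        print(f"{name}: weighted score = {score:.2f}, cost = ${data['cost_M']}M, "
              f"score per $B = {score / (data['cost_M'] / 1000):.2f}")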
A problem with many legacy AoAs is that they have focused on operational benefits and
downplayed technical, schedule, and cost risk. To avoid this, the guidance should instruct the
AoA team to give full treatment to non-operational risks, since these factors have been a major
cause of failed programs in the past. Within the technical risk area, empirical data should guide
the AoA’s assessment, with particular focus on integration risk.
The guidance should direct the AoA team to explain the rationale for the results, which goes
well beyond simply presenting outcomes. The AoA team should understand that the value of
the analysis is in understanding why options do well or poorly. The study guidance should
require the AoA team to acknowledge the limitations and confidence in the results due to lack of
mature or reliable data at the time of the AoA. The team should also explain how/if variations to
CONOPS or attributes of alternatives might mitigate cost drivers or low ratings on assessment
metrics. Also, many AoAs have presented preferred options only for those cases advantageous to
the option. The guidance should instruct the AoA to characterize the circumstances in which a
given option appears superior and the conditions under which its outcomes degrade (a useful
example of this was in the AoA for the replacement of the M113 armored personnel carrier,
which showed how casualties varied according to the explosive weight of improvised explosive
devices).
Cost Analysis. Provide an analysis of life-cycle costs that includes estimates of development,
production, operating and support (O&S), and disposal costs. These estimates should be of
sufficient quality to support acquisition and investment decisions, but are not to be of budget
quality.
 O&S cost estimates will cover a common life-cycle period for the system under
consideration (for most, a 20-year period) for all alternatives, consistent with the Operating
and Support Cost-Estimating Guide (Cost Analysis Improvement Group, Office of the
Secretary of Defense, October 2007). The estimates shall include point estimates for the
Average Procurement Unit Cost (APUC), as well as total life-cycle cost.
 Life cycle estimates should be calculated as point estimates and also shown at the 50% and
80% confidence levels (a minimal numerical sketch follows this list).
 The cost analysis will identify APUC estimates for varying procurement quantities, if
applicable. Present-value discounting should be used in comparing the alternatives, in
accordance with OSD and Office of Management and Budget guidelines.
 Costs should be expressed in current-year dollars and, if appropriate in the context of
FYDP funding, in then-year dollars. Costs should be presented at the major appropriation
level with defined risk ranges to communicate the uncertainty associated with the estimates.
 The cost portion of the analysis should include an assessment of how varying the annual
procurement rate affects cost and manufacturing risk when appropriate (e.g., procuring items
faster to complete the total buy sooner vice buying them more slowly over a longer period of
time).
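A minimal numerical sketch of two of the calculations above, present-value discounting of a
life-cycle cost stream and percentile-based confidence levels from a simple cost-risk simulation,
follows; the cost stream, discount rate, and uncertainty range are assumptions, and actual
discount rates come from OMB/OSD guidance.

    # Hypothetical life-cycle cost sketch: present-value discounting and 50%/80%
    # confidence levels from a crude Monte Carlo. All inputs are assumed values.
    import random

    def present_value(costs_by_year, discount_rate):
        """Discount a stream of constant-dollar costs back to year 0."""
        return sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs_by_year))

    # Assumed development/production costs followed by a 20-year O&S stream ($M)
    point_estimate_stream = [150, 300, 400, 250] + [60] * 20
    print(f"Point estimate (PV): ${present_value(point_estimate_stream, 0.03):,.0f}M")

    # Crude cost-risk simulation: scale the whole stream by a sampled growth factor
    random.seed(1)
    samples = []
    for _ in range(5000):
        factor = random.triangular(0.9, 1.4, 1.05)  # assumed cost-growth uncertainty
        samples.append(present_value([c * factor for c in point_estimate_stream], 0.03))
    samples.sort()
    print(f"50% confidence level: ${samples[int(0.50 * len(samples))]:,.0f}M")
    print(f"80% confidence level: ${samples[int(0.80 * len(samples))]:,.0f}M")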
Schedule and Technology/Manufacturing Risk Assessment. The AoA should include
estimated schedules for each alternative, as well as an assessment of existing Technology Readiness
Levels (TRLs)/Manufacturing Readiness Levels (MRLs) for critical technologies which may impact
the likelihood of completing development, integration, and operational testing activities on
schedule and within budget. Since legacy AoAs have often proposed development and
procurement schedules that were more aggressive than we actually achieved, future AoAs should
include an assessment of the likelihood of achieving the proposed schedule based on our
experience. Where significant risks are identified, the assessment should outline practical
mitigation strategies to minimize impact to delivering the operational capability to the warfighter,
and if applicable, notional workarounds in the event the risks are realized.
Sensitivity Analysis. The AoA will identify assumptions, constraints, variables and metric
thresholds that when altered, may significantly change the relative schedule, performance, and/or
cost-effectiveness of the alternatives. The sensitivity analysis should identify cost, schedule, and
performance drivers to illuminate the trade space for decision makers. (e.g., identify performance
attributes that make the largest changes to the force’s mission effectiveness or are likely to most
influence development and/or production cost.)
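As a minimal sketch of the one-at-a-time sensitivity sweep this implies, the following uses a
hypothetical surrogate effectiveness model with assumed baseline values and excursions; in an
actual AoA the drivers would come from the effectiveness and cost analyses themselves.

    # Hypothetical one-at-a-time sensitivity sweep. The surrogate effectiveness
    # model, baseline values, and excursions are illustrative assumptions only.
    def mission_effectiveness(sensor_range_nm, sortie_rate, pk):
        """Toy surrogate model: effectiveness grows with each driver."""
        return (sensor_range_nm / 100) * sortie_rate * pk

    baseline = {"sensor_range_nm": 80, "sortie_rate": 2.0, "pk": 0.7}
    excursions = {
        "sensor_range_nm": [60, 100],
        "sortie_rate": [1.5, 2.5],
        "pk": [0.5, 0.9],
    }

    base_eff = mission_effectiveness(**baseline)
    for param, values in excursions.items():
        for v in values:
            case = dict(baseline, **{param: v})
            delta = mission_effectiveness(**case) - base_eff
            print(f"{param} = {v}: change in effectiveness = {delta:+.2f}")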
Other specified analysis as required:
 All mandatory Key Performance Parameters (KPPs) as noted in the Joint Capabilities
Integration and Development System (JCIDS) manual should be analyzed, as applicable.
Additionally, if a value has been specified within the requirements documents for these KPPs,
describe the risk incurred for failing to achieve these values.
 DOTmLPF-P Assessment. The AoA will evaluate the implications for doctrine,
organization, training, materiel, leadership and education, personnel, facilities, and policy
(DOTmLPF-P) for each alternative.
 Operational Energy Assessment. If applicable, the AoA will include an examination of
demand for fuel or alternative energies under each of the alternatives, using fully burdened
costs. The study director will:
o Ensure the Fully Burdened Cost of Energy (FBCE) method is used in
computing costs for the Life Cycle Cost Estimate (LCCE) and documented in
the final report (an illustrative sketch follows this list).
o Brief the SAG as to whether FBCE significantly differentiates between the
alternatives being considered.
o In cases where it does not significantly differentiate between alternatives, the
Service shall complete the FBCE work external to the AoA.
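For illustration, a minimal sketch of a fully burdened fuel-cost comparison is shown below; the
burden categories and every number are assumptions, and an actual estimate would follow the DoD
FBCE methodology referenced above.

    # Hypothetical Fully Burdened Cost of Energy (FBCE) illustration.
    # Burden categories, rates, and fuel demand are assumed values only.
    commodity_price_per_gal = 3.50    # standard fuel price ($/gal), assumed
    delivery_burden_per_gal = 6.00    # apportioned delivery/logistics cost ($/gal), assumed
    protection_burden_per_gal = 4.50  # apportioned force protection/infrastructure ($/gal), assumed

    fbce_per_gal = commodity_price_per_gal + delivery_burden_per_gal + protection_burden_per_gal

    annual_fuel_demand_gal = {"Alternative 1": 1_200_000, "Alternative 2": 800_000}  # assumed

    for alt, gallons in annual_fuel_demand_gal.items():
        unburdened = gallons * commodity_price_per_gal / 1e6
        burdened = gallons * fbce_per_gal / 1e6
        print(f"{alt}: unburdened fuel cost ${unburdened:.1f}M/yr, "
              f"fully burdened ${burdened:.1f}M/yr")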
Specific questions to be answered by the AoA
Additional program-specific questions should be included that do not repeat the
requirements described elsewhere in the guidance. Rather, these questions should probe issues
that are specific to the program – e.g., how a program would achieve high reliability; how a
program might mitigate risk if the technology required fails to materialize; how a program might
trade lethality versus survivability if cost (or weight) is a limiting factor. This section of the
guidance should be a description of ideas that are substantive to the specific program and pose
questions that, when answered, will highlight the truly important aspects of the tradespace for
the program.
Administrative Guidance
A SAG will oversee the conduct of the AoA and ensure that the study complies with CAPE
guidance. The group will be co-chaired by OSD CAPE and a Service representative and will
include representatives from OUSD(AT&L), OUSD(P), OUSD(C), OUSD(P&R), ASD(R&E),
ASD(OEPP), DOT&E, the Joint Staff, and the Services. The SAG is responsible for ensuring that
the study complies with this guidance. The SAG has the authority to change the study guidance.
The organization performing the AoA will present an AoA study plan (not to exceed 10 pages)
for CAPE approval 30 days after the issuance of the AoA Study Guidance or no less than 30 days
prior to the Materiel Development Decision. The organization performing the AoA will work
with OSD CAPE to develop a schedule for briefing the SAG on the AoA study team’s progress.
The briefings should be held bimonthly unless needed more frequently. In between briefings to
the SAG, the study lead will maintain dialogue with OSD CAPE.
The guidance should set strict time limits on the analysis timeline – shorter is better. If the AoA
analysis is expected to take longer than 6-9 months, the scope of work should be reconsidered to
ensure the analysis planned is truly necessary to inform the milestone decision.
The final deliverables will include a briefing to the SAG and a written report. The written AoA
report is due to D,CAPE at least 60 days prior to the Milestone Decision (to allow for sufficiency
review) and to the other SAG members to properly inform the stakeholders prior to the release of
the RFP for the next acquisition stage. The final report will provide a detailed written record of
the AoA’s results and findings and shall be on the order of no more than 50 pages in length, plus
the Executive Summary which should be no more than 10 pages in length.