SERC Effectiveness Measures Capability
Pilot Users' Guide
Introduction
This document provides a Pilot Users' Guide for experimental use of two EM capabilities
that are currently supported by general-use tools. These tools are extendable Excel
spreadsheets organized around a framework of systems engineering goals, contributing
critical success factor questions, and detailed metric questions. Each question can be
prioritized for relevance to the particular systems engineering effort, and assessed with
respect to the degree of evidence that it is being well addressed.
The two tools included in this pilot are intended for use at discrete assessment points
during a project’s lifetime. For example, the tools might be used to review SE plans or
preparations for major milestone reviews to assess any shortfalls in SE effectiveness.
One tool, the Systems Engineering Performance Assessment Tool (SEPAT), addresses the
evidence of thoroughness with which core systems engineering functions are being
performed. The second tool, the Systems Engineering Competence Assessment Tool
(SECAT), assesses the evidence of whether sufficient SE team personnel competence is
in place to carry out those functions. Both tools treat a shortfall in evidence as an
increased risk probability. This probability, multiplied by the relative impact of the item
on project success, produces a risk exposure quantity. Exposures are color-coded red,
yellow, and green to identify the risk level of each SE effectiveness item.
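To make that arithmetic concrete, the sketch below (a Python illustration, not the tools'
actual Excel macro code) shows one way evidence shortfall, impact weighting, and color
banding could combine. The 0-to-3 scales and the color thresholds are assumptions for
illustration; the tools themselves use a configurable mapping, described under Figure 3
below.

```python
# Illustrative sketch of the exposure arithmetic; scales and thresholds
# are assumptions, not the tools' internals.

def risk_exposure(impact: int, evidence: int, max_evidence: int = 3) -> int:
    """impact in 0..3; evidence in 0..max_evidence (more evidence = lower risk)."""
    p_risk = max_evidence - evidence   # shortfall in evidence -> risk probability
    return impact * p_risk             # exposure = impact * p(risk)

def color(exposure: int) -> str:
    """Illustrative red/yellow/green banding of the exposure value."""
    if exposure >= 6:
        return "red"
    if exposure >= 3:
        return "yellow"
    return "green"

print(color(risk_exposure(impact=3, evidence=0)))  # high impact, no evidence -> red
```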
Pilot Evaluation Objectives
The primary objectives of the pilot evaluations are to determine the degree of utility of
the tools and their frameworks at various points in a project's life cycle. This includes
both the cost in effort required to perform the assessments, and the value obtained from
performing them.
It is not an objective of the pilot evaluations for users to externally disclose shortfalls or
risks in the projects assessed, although we would appreciate any information you can
share on the effects of using the tools.
Tool Overview
The SEPAT and SECAT tools are Excel-based prototypes, created in Excel 2007. Users
with Excel 2003 retain most functionality, but the risk exposure color coding does not
work. Excel macros must be enabled for the tools to function. The SECAT competency
evaluation tool operates identically to the SEPAT tool described below, though its
critical success factors and evaluation questions differ.
Each tool identifies high-level Goals that must be met, and provides four or five Critical
Success Factors that support each goal. Questions then explore whether the critical
success factors are being met.
Each question is evaluated against two separate scales, Evidence and Impact. Less
evidence equates to higher risk. An Impact rating for each question allows the evaluator
to adjust the weighting of that question for the particular project.
It is recommended that the impact rating and evidence scores be determined by
independent reviewers. For instance, the impact ratings could be provided by the project
or program manager or their designate, and the evidence ratings provided by the project
chief engineer or chief systems engineer or their designate.
Figure 1. Detail of impact and evidence ratings.
Figure 1 illustrates the rating scale for impact and evidence on each question. In the
leftmost set of selections, the evaluator selects an appropriate weighting for the impact,
ranging from high-impact (red) to no-impact (gray). Similarly, the rightmost selection set
indicates the degree of evidence that supports the evaluation of each question, where red
implies no evidence has been found to support the conjecture, and blue implies that
external, independent experts have validated the evidence. Users make selections by
clicking on the appropriately colored boxes for each question.
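As a concrete, purely illustrative encoding of these two scales, the sketch below pins
down the endpoints named above; the numeric values and the intermediate labels are
assumptions, not the tools' internal representation.

```python
# Hypothetical encoding of the Impact and Evidence rating scales.
from enum import IntEnum

class Impact(IntEnum):
    NONE = 0    # gray: no impact
    LOW = 1     # hypothetical intermediate level
    MEDIUM = 2  # hypothetical intermediate level
    HIGH = 3    # red: high impact

class Evidence(IntEnum):
    NONE = 0       # red: no evidence found to support the conjecture
    PARTIAL = 1    # hypothetical intermediate level
    REVIEWED = 2   # hypothetical intermediate level
    VALIDATED = 3  # blue: validated by external, independent experts

print(Impact.HIGH, Evidence.NONE)  # worst-case combination on both scales
```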
Figure 2. Overall risk exposure rollup.
As seen in Figure 2, the impact and evidence scores for each critical success factor are
rolled up into an overall risk exposure, which again is represented as a simple red-yellow-green
indicator (for Excel 2003 users, risk exposure is shown as 3, 2, 1, respectively). The
overall risk exposure is the maximum of the risk exposures denoted by the responses to
the individual questions that support each critical success factor. The “rationale” column
may be used to record the source of evidence for later review. The “reset” button clears
the impact and evidence ratings for the entire document (note that Excel macros must be
enabled for the reset button to function).
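A small sketch of that rollup rule, reusing the illustrative impact-times-probability
product from the earlier example:

```python
# Sketch of the rollup rule: a critical success factor's overall exposure is
# the maximum of its questions' exposures. The product below repeats the
# illustrative exposure = impact * p(risk) formula from the earlier sketch.

def question_exposure(impact: int, evidence: int, max_evidence: int = 3) -> int:
    return impact * (max_evidence - evidence)

def csf_rollup(ratings: list[tuple[int, int]]) -> int:
    """ratings: one (impact, evidence) pair per question under the factor."""
    return max(question_exposure(i, e) for i, e in ratings)

# Three questions; the worst one (high impact, no evidence) drives the rollup.
print(csf_rollup([(1, 3), (2, 2), (3, 0)]))  # -> 9
```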
Figure 3. Risk exposure mapping.
Risk exposure is calculated by multiplying the risk impact by the probability of risk
(exposure = impact * p(risk)). Since the impact and probability of risk are represented
here as discrete quantities, however, a different approach was used to determine the risk
exposure. Figure 3 is an excerpt from the “RE Map” tab of the SEPAT and SECAT tool
spreadsheets. On this tab, each combination of impact and p(risk)—where zero
represents no-impact/no-risk, and three represents high-impact/high-risk—may be
assigned a value from one (green) to three (red) in the “color” column. The risk exposure
matrix resulting from these choices is automatically shown on the right, in a format
similar to the five-by-five representation commonly used in risk analysis. In this case, for
example, the combination of “no impact” (impact = 0) and the highest risk probability
(p(Risk) = 3, i.e., no supporting evidence) was assigned a low (green = 1) risk exposure.
These values may be altered to suit the needs of the program being evaluated.
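The sketch below illustrates this table-driven approach; apart from the impact = 0,
p(Risk) = 3 cell called out above, the cell assignments are arbitrary placeholders for
illustration.

```python
# Sketch of the configurable "RE Map": every (impact, p_risk) cell, each axis
# 0..3, is assigned an exposure color from 1 (green) to 3 (red). The specific
# assignments below are illustrative assumptions.

RE_MAP = {(impact, p_risk): 1                      # start with everything green
          for impact in range(4) for p_risk in range(4)}
RE_MAP.update({(2, 2): 2, (2, 3): 2, (3, 2): 2})   # mark some mid cells yellow
RE_MAP[(3, 3)] = 3                                 # worst cell: red

def mapped_exposure(impact: int, evidence: int, max_evidence: int = 3) -> int:
    """Look up the color-coded exposure rather than computing a raw product."""
    p_risk = max_evidence - evidence               # evidence shortfall -> p(risk)
    return RE_MAP[(impact, p_risk)]

print(mapped_exposure(impact=0, evidence=0))       # no impact, no evidence -> 1
```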
Pilot Evaluation Context and Feedback
We have provided a Web capability for pilot users to provide context information on the
pilot project and feedback from usage of one or both of the tools. It can be found at
http://uscviterbi.qualtrics.com/SE?SID=SV_0j0FeW2KIAMKyNK&SVID=Prod. We
would appreciate the opportunity to conduct a brief follow-up interview with the pilot
users. If you are willing to participate, you can provide contact information as part of the
evaluation.
We thank you for your time in supporting this pilot activity, and will gladly report the
results of the evaluations to those who participate. Should you have any questions, please
contact Dan Ingold (dingold@usc.edu) or Winsor Brown (awbrown@usc.edu).