47th AIAA Aerospace Sciences Meeting Including The New Horizons Forum and Aerospace Exposition
5 - 8 January 2009, Orlando, Florida
AIAA 2009-1006

A Proposed Systems Engineering Diagnostic Method

John C. Hsu*
The Boeing Company, Long Beach, California, 90846
S. Raghunathan†
M. Price+
Queen's University, Belfast BT9 5AH, Northern Ireland
R. Curran#
Delft University, Delft, The Netherlands

I. Introduction
Systems engineering was revitalized in the mid-1990s. Increased participation using systems engineering
processes and practices during the system development and demonstration phase is seen as key to implementing this
new approach. Engineering and critical thinking are increasingly important in these revitalization efforts in resource
management, skills management, and supplier management. The industry and government revitalization efforts
include publishing systems engineering processes, methods and templates to guide people in implementing systems
engineering.
In 2003 and 2004, the Department of Defense (DoD) issued a number of policies that placed renewed emphasis on
the application of rigorous Systems Engineering (SE), stating that it is essential to the Department's ability to meet
the challenge of developing and maintaining needed capability, and noting that this is especially true as systems become
more complex in a family-of-systems, system-of-systems, and net-centric warfare context. In these complex
systems, SE provides the integration/technical processes to define and balance performance, cost, schedule, and risk.
The Under Secretary of Defense (USD) stated in 2004, "We will select only contractors, regardless of acquisition
category, who apply a robust systems engineering approach." Michael W. Wynne, Acting USD for Acquisition,
Technology and Logistics (AT&L), and Mark D. Schaeffer, Principal Deputy, Defense Systems and Director,
Systems Engineering, Office of the USD AT&L (Ref. 1), called for the revitalization of systems engineering across
the Department of Defense. A systems engineering plan is key to the revitalization of systems engineering in the
business systems environment. The documented technical management approach within the business systems
engineering process addresses the risks and concerns surrounding business systems programs and provides the
tailored approach to effectively manage, design, test, and deploy critical business systems solutions.
An important element in revitalization has been the acquisition management education program (Ref. 1).
Leadership skills are also essential to developing successful systems engineering and acquisition management
professionals. The Acquisition Leadership Development Program (ALDP) began in January 2004 for Acquisition
Directorate systems engineers and program managers who demonstrated leadership potential.
The SE revitalization campaign led to extensive educational training in SE (Ref. 2) for customers,
contractors and universities. Contractors and customers work together to implement SE using the acquired SE
knowledge and processes. Implementing systems engineering is more challenging than understanding the
systems engineering process. The implementation of systems engineering requires a flawless interface between team
members working toward a common systems thinking to correctly execute the systems engineering process. But
program cost overruns, schedule slippage and even failures remained high after more than 10 years of systems
engineering revitalization efforts, although the situation gradually improved. The reason for the slow pace of improvement is
* Senior Staff, Systems Engineering, Advanced Systems, Integrated Defense Systems, The Boeing Company, MC
C076-0228, 3855 Lakewood Blvd, Long Beach, CA, 90846, Adjunct Professor of California State University at
Long Beach, Royal Academy Visiting Professor in Systems Engineering, Queen's University, Belfast, Northern
Ireland, and Associate Fellow.
† Retired Professor and Bombardier Chair in Aeronautical Engineering, Queen's University, Belfast BT9 5AH,
Northern Ireland, and Associate Fellow.
+ Professor of Aeronautical Engineering, Queen's University, Belfast BT9 5AH, Northern Ireland, Member.
# Professor and Chair of Aerospace Management & Operations, Aerospace Engineering, Delft University of
Technology, The Netherlands, and Senior Member.
American Institute of Aeronautics and Astronautics
Copyright © 2009 by John C Hsu. Published by the American Institute of Aeronautics and Astronautics, Inc., with permission.
not correctly implementing the systems engineering process. The SE implementation problems are discussed in
the next section.
II. Implementation Issues
SE knowledge and processes are not rocket science and are not hard to learn. But the implementation of SE is a
team effort requiring contractors, subcontractors and acquirers to work together seamlessly. Every team member has
to execute the common SE process and method correctly. This is a big challenge that may be harder than finding
solutions for rocket science.
Analyses of a sampling of major acquisition programs show a definite linkage between escalating costs and the
ineffective application of systems engineering. The United States Government Accountability Office (GAO) (Ref. 3)
found that problems related to quality have resulted in major impacts to the 11 DOD weapon systems GAO
reviewed: billions in cost overruns, years-long delays, and decreased capabilities for the warfighter. GAO's analysis
of the 11 DOD weapon systems illustrates that defense contractors' poor practices for systems engineering activities,
as well as manufacturing and supplier quality problems, contributed to these outcomes. DOD faces its own set of
challenges: setting achievable requirements for systems development and providing effective oversight during the
development process. The DoD Project Management Institute cited the top 10 reasons projects fail; a few of them
are changing requirements, inadequate specification, poor architectural design, etc. The most frequent SE
implementation issues are listed in Table 1.

Table 1. Tabulation of Systems Engineering Implementation Issues

Most frequent Systems Engineering Implementation Issues:
• Inadequate derivation of detailed requirements
• Inadequate SE review of analysis assumptions for quality
• Collaboration to assure solutions meeting their requirements
• Design integration problems
• Ineffective identification and management of risks
• Inadequate SE oversight of suppliers
• Lack of definition of RAAs for key SE leadership roles
• Poor requirements decomposition
• Over-reliance on specs and ICDs versus understanding of products
• Inadequate peer review
• Poor (loose) requirements
• Perception that "test comes later"
• Poor subcontract Statement of Work
• Inconsistent application/understanding of Systems Engineering
• Poor requirements up front
• Lack of System Architects
• Lack of recognized authority as "over-arching" integration function
• Mismatch between subcontractor requirements and requirements development process
• Program management and IPT leads unaware of importance of Requirements Management
• Risk Management Process Ownership and Execution Responsibility are separate
• Supplier Risk Management process not properly executed or estimated in the proposal process
• Integrated flowdown of process and requirements to Suppliers
• Lack of training of key Program personnel
• Inconsistent product documentation results in integrity and traceability issues
• Understanding Customer needs
• PMs/IPT leaders don't have SE experience
• SE should be an "over-arching" integrating function for programs

There are many challenges in overcoming the implementation issues. To quote Einstein, "Insanity is doing the
same thing over and over again and expecting different results." The SE implementation issues will have to be
overcome. The list of challenges is shown in Table 2.
III. Performance Measurements
No process is complete without tracking progress and measuring (Ref. 1) the extent to which objectives have
been achieved. The systems engineering-specific balanced scorecard metrics address such areas as posting
mandatory artifacts, sharing best practices, conducting independent technical assessments, and meeting schedules.
These and other related metrics are to be reported on a quarterly basis.
Capability Maturity Model Integration (CMMI) is a tool to determine the risk associated with suppliers' systems
engineering processes. This method helps Defense Contract Management Agency (DCMA) engineers to identify
and prioritize the most critical supplier processes; to evaluate those processes objectively relative to industry's best
practices as defined in the CMMI; to identify suppliers' process strengths and weaknesses and the impact on product
and program performance; and to assess program and product risk along with other measures, such as earned value
and technical performance measurements, and also to predict future program outcomes. The standard CMMI
measurement techniques focus on the application of systems engineering processes and on repeatability, assessing
consistency of processes, but do not address the effectiveness of the process application.

Table 2. Challenges in Implementing Systems Engineering

1. How can we highlight complexity and risk in early requirements development and the budgeting process?
2. What are the best SE indicators and metrics and what is their program effect?
3. How do we incorporate technology improvement into SE and SE planning?
4. What kind of contracting strategies are most "friendly" to help foster SE?
5. What new SE process areas will simplify SE implementation?
6. How can we best leverage the development of Modeling and Simulation tools?
7. What domain-specific studies can help implement SE?
8. No significant domain training or mentoring to make new, multidisciplinary Systems Engineers.
9. We need more qualified systems engineers.
10. How to contain program complexity and program control?
11. How to overcome cultural inhibitors?
12. Where does sustainment fit in? The old adage of "build now, fix later."
Design reviews were often used to assess systems engineering implementation within a program and across
interrelated programs with the customer, partners and subcontractors, and to verify compliance in both areas from an
end-to-end global information grid perspective. Technical assessments are conducted for each program/project to
identify best practices and recommend areas for improvement. Peer reviews were used to focus on requirements
noncompliance and design deficiencies. Systems engineering artifacts (Ref. 1) were required to be posted on the
intranet for review across the program. This practice improves efficiency by allowing managers and engineers to
review and re-use relevant data and methodologies. It can also lower risks by ensuring cross-program consistency
and preventing repetition of past mistakes.
Performance-based Earned Value concepts have been applied to measure systems engineering effectiveness
(Ref. 4). Focusing on SE work products provides a basis for ensuring that the closed-loop operation of the SE
management function provides measurements of the work products, not just the process.
In some cases "Lessons Learned" was used in an attempt to correct SE performance. The lessons learned
were recorded and stored in a database but seldom used and referenced; i.e., lessons were never learned, only recorded.
Knowledge Management was developed to capture the good SE processes, methods and examples of practices.
However, not many engineers will read the information and data in the Knowledge Management database, or the
database may not be complete enough.
Many large corporations conducted independent reviews of critical programs and projects. They helped correct
design problems or technical deficiencies, but the SE implementation findings faded away as time passed. This has
the same symptom as "Lessons Learned".
Metrics of Systems Engineering
1. Requirements
a. Requirements Volatility
b. Requirements Concurrence
c. Requirements Verification
d. Requirements Allocation
e. Requirements Traceability
f. Requirements Controlled
g. Interface Requirements
h. Requirements Scheduled Completion
i. Cost to Complete Requirements
j. System Trouble Reports
2. Technical Performance Measurement
a. Critical Parameters (CP) Identified
b. Validity of CP
c. Monitoring Frequency of CP
3. Trade Studies
a. Trade Study Criteria
b. Spantime of Trade Study
c. Trade Study Scheduled Completion
d. Cost to Complete Trade Study
4. Verification
a. Verification Plan
b. System Requirements Verification
c. Verification Scheduled Completion
5. Risk, Opportunity and Issue Management
a. Risk Identified
b. Risk Management Plan
c. Risk Burndown Rate
d. Monitoring Risks
e. Percentage of Risks Completed Prior to SRR
f. Opportunity Identified and Assessed
g. Issue volatility
6. Configuration Management
a. On-Time Data Delivery
b. Change Analysis Cycle Time
c. Configuration Identification & Status
Accounting
7. IMP/IMS
a. Change IMP Progress
8. Interface Management
a. Interface Control Documents
9. System Integration
a. System Integration Plan
Some organizations believed that SE performance could
be improved by tailoring the SE process, especially for small
projects. In the Value Stream Mapping it was agreed that
the SE process was more closely followed on large projects.
This raises the questions: How do you do SE on small
projects? How do you define "small"? What do you do, and
under what conditions?
It was suggested to improve the consistency of SE training
and process applications. This may imply training engineers on
how to apply the process. But is this at the heart of SE
performance? Engineers were trained, but how is the result measured?
Metrics have been defined in industry to measure SE performance
for a project/program. Typical metrics are
listed in Table 3. Some of the metrics are hard to measure
and some are not clearly defined. It is not the purpose of
this paper to defend and justify these metrics but simply to list
a few of them to show one way of measuring SE
performance.
Is there a better way to measure and capture the
corrective actions for systems engineering implementation
problems? The answer may be in the following sections.
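Some of the Table 3 metrics reduce to simple ratios. As an illustrative sketch of the first one, requirements volatility (the definition used here, churn over baseline size, is an assumption of this example, not a definition from the paper):

```python
def requirements_volatility(baseline_count, added, changed, deleted):
    """Fraction of the requirements baseline churned in a reporting period.

    Assumed definition: volatility = (added + changed + deleted) / baseline.
    """
    if baseline_count <= 0:
        raise ValueError("baseline_count must be positive")
    return (added + changed + deleted) / baseline_count

# Example: a 400-requirement baseline with 6 added, 10 changed, 4 deleted.
v = requirements_volatility(400, added=6, changed=10, deleted=4)
print(f"volatility = {v:.1%}")  # → volatility = 5.0%
```

Whether 5% per quarter is acceptable is a program judgment; the metric only makes the churn visible.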
IV. Why Is a More Effective Diagnostic Method Needed?
How do we measure systems engineering implementation deficiencies correctly? Quantity is easy to measure,
but measuring quality and technical content is a challenge. The number of drawings released and the number of
lines of code completed are meaningless if quality is not measured. Quantity is a direct measure and quality is a
derived measure. Efficiency is another measurement. These measurements cannot be assessed before the completion
of a program. Large corporations conducted many systems engineering independent assessments. Unfortunately,
these still do not effectively correct the implementation problems. The new diagnostic method proposes to assess
and analyze a program/project after the failure or poor performance, based on evidence and fact. This is a more
reliable method than prediction and audit of an on-going program/project based on interviews with workers.
Heinz Stoewer, then (2003) president of INCOSE (International Council on Systems Engineering), made a
speech in which he asserted that we needed to spend less time on the process and more time on the product. What
does that mean? Is it possible to do a process well and have a faulty product? Is it possible to have a good product
with faulty processes?

Table 3. List of Systems Engineering Metrics
Certainly, in the systems engineering world process is as important as product. No one would argue that a good
product could be derived from a bad process. Process effectiveness will affect the cost and availability of the
product. Of course, a good and sound process can produce a bad product if not executed well. It can be concluded
that the product is critically dependent on the process. On the other hand, the improvement of processes will be
based on the outcome of products. Therefore, SE performance measurement should be focused on the product, not
the process. One often hears questions such as "How accurate are the drawings?" and "Does the software work
correctly and meet customer demands?" This is an example that product performance is the final measurement.
Most of the current measurements discussed in the sections above were focused on process execution, or on a
portion of the product, since the program/project being audited was not completed.
V. Proposed Systems Engineering Diagnostic Method
The proposed SE diagnostic method will focus on measuring the final products. Successful change or
improvement begins with results (Ref. 5). The results from the product measurement will be used to improve the
process. The Six Sigma methodology (Refs. 6 and 7) was invented to measure and eliminate manufacturing
product defects. It is commonly known and has been used for more than ten years.
There are two Six Sigma sub-methodologies: DMAIC and DMADV. The Six Sigma DMAIC process (define,
measure, analyze, improve, control) is an improvement system for existing processes falling below specification and
looking for incremental improvement. The Six Sigma DMADV process (define, measure, analyze, design, verify) is
an improvement system used to develop new processes or products at Six Sigma quality levels. Since the purpose of
this paper is to improve the SE implementation and process, DMAIC will be adopted.
Six Sigma DMAIC shall be applied under the following conditions, assuming the SE process has been implemented:
• Customers expressed concern that product defects were causing frequent failures in the field.
• A program or a project failed to deliver.
• A program or a project had serious cost overrun and schedule slippage, especially near the end of the design life cycle.
• There were many redesigns or reworks associated with a program or project. Most of the time this condition is closely related to the above condition.

Figure 1. Identification of Stakeholders. The figure is a matrix of stakeholder roles (Engineer, Specialty Engineer, Quality Engineer, Systems Engineer, Verification Engineer, Manufacturing Engineer) versus level of commitment (Compliant, Enthusiastic Support, Help it work, Hesitant, Indifferent, Uncooperative, Opposed, Hostile), with X marking the was level of commitment and O the level of commitment necessary for success.

A Six Sigma team is formed to perform the Six Sigma DMAIC as follows:
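The Figure 1 matrix can be held as a small data structure so that commitment gaps fall out mechanically. The ordering of the commitment scale and the sample entries below are assumptions for illustration, not the paper's data:

```python
# Commitment scale from Figure 1, ordered worst to best (assumed ordering,
# with "Compliant" as the level marked necessary for success).
SCALE = ["Hostile", "Opposed", "Uncooperative", "Indifferent",
         "Hesitant", "Help it work", "Enthusiastic Support", "Compliant"]
RANK = {level: i for i, level in enumerate(SCALE)}

# "was" level (X in the figure) vs. level necessary for success (O),
# per stakeholder; these sample values are invented.
stakeholders = {
    "Systems Engineer":      {"was": "Hesitant",             "needed": "Compliant"},
    "Verification Engineer": {"was": "Indifferent",          "needed": "Compliant"},
    "Quality Engineer":      {"was": "Enthusiastic Support", "needed": "Compliant"},
}

def commitment_gaps(people):
    """Stakeholders whose 'was' commitment falls short of the level needed,
    with the size of the shortfall in scale steps."""
    return {name: RANK[p["needed"]] - RANK[p["was"]]
            for name, p in people.items()
            if RANK[p["was"]] < RANK[p["needed"]]}

print(commitment_gaps(stakeholders))
```

The gaps identify where the communication plan must concentrate effort.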
1. Define
Define the problems, scope and (external and internal) customers.
a. Study the "as-was" process map (AWPM) taken by the program/project team. The top-level process map
should show the SE tasks/steps utilized by the program/project team. The next level should show the
steps taken under each utilized SE task. For example, under the Requirements Task: how were the
requirements developed, from customer requirements, from another source, or from no source at all? As
another example: how were the risks assessed and mitigated, or was there no mitigation plan? Was the
risk real or a formality? What was the quality of the risk assessment, understandable or ambiguous? If
necessary, the next lower level map could be generated to show the sub-steps taken under each step of a
SE task. The AWPM can reveal any missing SE tasks not utilized by the program/project team.

Figure 2. The Critical To (CT) Matrix. The CT matrix structure links the CTX tree (process tree), forming the columns, with the CTY tree (product tree), forming the rows; process inputs feed the CT tree.

b. Identify stakeholders early to develop a communication plan based on the level of commitment required.
An example is shown in Figure 1. It shows the "Was Level" of the identified stakeholders and the
"Level of Commitment Necessary for Success". Each identified stakeholder can be further identified;
for example, engineer includes electrical, mechanical, etc., and specialty engineer includes reliability,
maintainability, etc.

Figure 3. DMAIC - Define. The five-step DMAIC cycle (1 Define, 2 Measure, 3 Analyze, 4 Improve, 5 Control) with the key Define documents: AWPM, Stakeholders Matrix and CT Matrix.
c. The "Critical To" (CT) matrix links the systems engineering process (columns of the matrix), or CTX
tree, and the product (rows), or CTY tree, as shown in Figure 2. All critical characteristics, Y, of
products are driven by factors, X, of processes. Understanding and controlling the causative factors, X,
is the real key to high quality with no product defects. This is mathematically expressed as Y = f(X).

Table 4. Process/Product Failure Modes and Effects Analysis. The PFMEA worksheet records, for each process step/part number: the potential failure mode, potential failure effects, severity (SEV), potential causes, occurrence (OCC), current controls, detection (DET), risk priority number (RPN), recommended actions, responsibility, actions taken, and the revised SEV, OCC, DET and RPN.

The "Define" step with its key documents is shown in Figure 3.
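The Y = f(X) relationship of the CT matrix can be made concrete as a mapping from each critical product characteristic Y to the process factors X believed to drive it; the characteristic and factor names below are invented for illustration:

```python
# CT matrix as a mapping from product characteristics (CTY rows) to the
# process factors (CTX columns) that drive them: Y = f(X).
# All names here are hypothetical examples, not the paper's data.
ct_matrix = {
    "requirement defects": ["requirements decomposition", "peer review"],
    "interface escapes":   ["ICD management", "design integration"],
    "late rework":         ["risk mitigation", "verification planning"],
}

def drivers_of(defect):
    """Look up which process factors X to examine for a given defect Y."""
    return ct_matrix.get(defect, [])

print(drivers_of("interface escapes"))
# → ['ICD management', 'design integration']
```

Controlling the listed X factors is then the lever for eliminating the corresponding defect Y.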
2. Measure
Measure the process to determine current performance, quantifying the problem. The purpose is to narrow the
range of potential causes and establish a baseline capability level.
a. Once the AWPM is completed, the team that put it together will analyze it. A checklist of things to Be
On the LookOut for (BOLO) will be provided for analyzing the process. The team will study "Why did
the defect Y happen?" and "What was the X contributing to the defect Y?" The study is no more than
considering the process activities and flow by looking at each process step for bottlenecks, sources of
delay, errors being fixed instead of prevented (rework), role ambiguity, duplications, unnecessary steps
and cycle time, authority ambiguity, decisions needed or not, possibly eliminating step(s) or doing them
in less time, trying to prevent defects, and value-added vs. non-value-added steps (from the customer's
point of view).
b. Build a Data Collection Plan. Clearly define the goals and objectives of the data collection. Reach
understanding and agreement on operational definitions and methodology for the data collection plan.
Ensure data collection (and measurement) repeatability, reproducibility, accuracy, and stability. Follow
through with the data collection process and the results.
c. Perform the Process/Product Failure Modes and Effects Analysis (PFMEA) shown in Table 4.
Determine and rank potential causes (X's) and their possible effect on the product problem (Y).
The "Measure" step with its key documents is shown in Figure 4.

Figure 4. DMAIC - Measure. The DMAIC cycle with the key Measure documents: Data Sampling, BOLO, PFMEA and Capability.
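Causes captured in the PFMEA are conventionally ranked by a Risk Priority Number, RPN = SEV × OCC × DET, each factor scored on a 1-10 scale. The rows below are invented for illustration; the ranking logic is the standard FMEA convention rather than anything specific to this paper:

```python
from dataclasses import dataclass

@dataclass
class FmeaRow:
    step: str
    failure_mode: str
    sev: int  # severity of the failure effect, 1-10
    occ: int  # likelihood of occurrence, 1-10
    det: int  # difficulty of detection, 1-10 (10 = hardest to detect)

    @property
    def rpn(self) -> int:
        """Risk Priority Number: severity x occurrence x detection."""
        return self.sev * self.occ * self.det

# Hypothetical PFMEA rows for an SE process.
rows = [
    FmeaRow("Requirements", "ambiguous requirement", sev=8, occ=6, det=7),
    FmeaRow("Risk Mgmt",    "no mitigation plan",    sev=9, occ=4, det=3),
    FmeaRow("Verification", "test deferred",         sev=7, occ=5, det=8),
]

# Highest RPN first: these causes get corrective-action priority.
for r in sorted(rows, key=lambda r: r.rpn, reverse=True):
    print(f"{r.step:13s} {r.failure_mode:22s} RPN={r.rpn}")
```

Recommended actions are then aimed at the top-ranked rows, and the revised SEV/OCC/DET columns record the post-action scores.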
3. Analyze
The purpose is to evaluate data/information for trends, patterns, causal relationships and "root causes".
a. A Pareto chart is used to graphically summarize and display the relative usage of the systems
engineering tasks that have been applied by a program/project. An example is shown in Figure 5. The
chart shows which systems engineering tasks were used more than others, and can also show the
ignorance or omission of systems engineering task(s).

Figure 5. Pareto Chart for Systems Engineering Implementation. A bar chart ranking SE tasks (requirements, risk management, trade studies, interfaces, etc.) by frequency of use.

b. When utilizing a team approach to problem solving, there are often many opinions as to the problem's
root cause. One way to capture these different ideas and stimulate the team's brainstorming on root
causes is the cause and effect diagram, commonly called a fishbone. The fishbone helps to visually
display the many potential causes of a specific problem or effect. It is particularly useful in a group
setting and in situations where little quantitative data is available for analysis.
c. 5 Whys Analysis is a problem-solving technique that allows you to get at the root cause of a problem
fairly quickly. The 5 Whys can be used individually or as a part of the fishbone cause and effect
diagram. Application of the strategy involves taking any problem and asking "Why - what caused this
problem?" By repeatedly asking the question "Why" (five is a good rule), it is possible to peel away the
layers of symptoms to identify the root cause of a problem. Although this technique is called "5 Whys,"
you may find that you need to ask the question fewer or more times than five before you find the issue
related to a problem. An example for the "Not Implementing Systems Engineering" problem is shown
in Figure 6. There could be many fishbone diagrams for analyzing different problems.
The "Analyze" step with its key documents is shown in Figure 7.
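The Pareto summary in step 3.a reduces to tallying task usage and sorting in descending order with a running cumulative percentage; the task names and counts below are illustrative assumptions, not data from the paper:

```python
from collections import Counter

# How often each SE task appears in the as-was process map (sample data).
task_usage = Counter({
    "Requirements Management": 14,
    "Interface Management": 6,
    "Risk Management": 3,
    "Trade Studies": 1,
    "Verification Planning": 0,   # zero usage exposes an omitted task
})

total = sum(task_usage.values())
cumulative = 0
for task, count in task_usage.most_common():
    cumulative += count
    print(f"{task:25s} {count:3d}  {100 * cumulative / total:5.1f}%")
```

A bar at zero is as informative as the tallest bar: it flags a task the program ignored outright.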
4. Improve
The purpose is to develop, implement and evaluate solutions targeted at identified root causes, improving the
process by eliminating defects.
a. A brainstorming session is a tool for generating as many ideas or solutions as possible to a problem or
issue. It is not a tool for determining the best solution to a problem or issue. Before beginning any
effective brainstorming session, ground rules must be set. Typical ground rules are: no dumb ideas, no
criticizing other people's ideas, building on other people's ideas, quality over quantity, etc.

Figure 6. Fishbone Cause and Effect Diagram. The example fishbone for the "Not Implementing Systems Engineering" problem has four main branches (No Management Support, Lack of Teamwork, Cultural Barrier, Not Understanding Customer), each expanded by repeated "Why?" questions.
b. The Pugh matrix can help determine which items or potential solutions are more important or "better"
than others. It is necessarily done after the CT matrix has been captured. The Pugh matrix is a scoring
matrix used for solution selection, in which options are assigned scores relative to criteria, and the
selection is made based on the consolidated scores. An example is shown in Table 5. If a criterion is
met the same as in the existing solution, an "S" is given; if it is met better, a "+" is given; and if it is met
worse, a "-" is given. The sums of the sames, pluses and minuses are then consolidated. The "+"s
indicate strengths and the "-"s indicate weaknesses. The solution with the most strengths is selected. In
this example, Solution 2, which had the most strengths, was selected.

Figure 7. DMAIC - Analyze. The DMAIC cycle with the key Analyze documents: Process Analysis, Data Analysis, Fishbone Cause & Effect, Pareto Chart and 5 Whys.

Table 5. Pugh Decision Matrix

Key Criteria                      | Solution 1 | Solution 2 | Solution 3
Organizational Compatibility      |     +      |     +      |     -
Customer Satisfaction             |     -      |     +      |     -
Support geographical distribution |     S      |     +      |     -
Scalable to size of program       |     S      |     S      |     S
Sum of Positives                  |     1      |     3      |     0
Sum of Negatives                  |     1      |     0      |     3
Sum of Sames                      |     2      |     1      |     1

c. Develop the Systems Engineering Implementation Improvement Plan (SEIIP) based on the results of the
"Analyze" step and the recommendation from the Pugh matrix. Use the AWPM to develop the new
"to-be" process map (TBPM) by referring to the CT matrix in
"Define", the BOLO list and the data collected in the "Measure" step. The Plan shall be thorough and
robust to correct all the identified deficiencies.
The "Improve" step with its key documents is shown in Figure 8.
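The Pugh scoring of step 4.b can be sketched mechanically. The marks below follow the Table 5 example, and the selection rule (most strengths) is the one stated above:

```python
# Pugh matrix scoring: each solution is rated per criterion against a
# baseline as "+" (better), "S" (same), or "-" (worse).  Marks follow the
# Table 5 example, in criterion order: Organizational Compatibility,
# Customer Satisfaction, Support geographical distribution, Scalability.
scores = {
    "Solution 1": ["+", "-", "S", "S"],
    "Solution 2": ["+", "+", "+", "S"],
    "Solution 3": ["-", "-", "-", "S"],
}

def tally(marks):
    """Consolidate the pluses, minuses and sames for one solution."""
    return {"+": marks.count("+"), "-": marks.count("-"), "S": marks.count("S")}

totals = {name: tally(marks) for name, marks in scores.items()}
# Select the solution with the most strengths, breaking ties on fewest weaknesses.
best = max(totals, key=lambda n: (totals[n]["+"], -totals[n]["-"]))
print(best, totals[best])  # → Solution 2 {'+': 3, '-': 0, 'S': 1}
```

The tie-break on fewest weaknesses is an assumption of this sketch; the paper only states that the solution with more strengths is selected.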
5. Control
The purpose is to make sure that the problem stays fixed and that new methods can be further improved over
time. It is how to control future process performance.
a. Develop a Control Plan to sustain the improvements. Identify the process owners who will track the
execution of the SEIIP. The variation from the improved process will be tracked and its cause should be
identified. Tools and charts are needed to assist the monitoring of variations. Measure the key inputs
and outputs on an ongoing basis. There is "common cause" variation; for example, people tend to work
in silos, not participating in teamwork as often as they should, or tend to forget the customer
requirements, especially when there were changes. In most processes, reducing common cause variation
saves money. When the "common cause" cannot be reduced, it is time to perform another DMAIC to
correct the causes.

Figure 8. DMAIC - Improve. The DMAIC cycle with the key Improve documents: Brainstorming Documents, Pugh Matrix, Solutions, TBPM and the SE Implementation Improvement Plan.
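Tracking variation from the improved process, as in step 5.a, is commonly done with control limits at the mean plus or minus three standard deviations; observations outside the limits point to a special cause rather than common-cause variation. A minimal sketch with invented sample data:

```python
import statistics

def control_limits(samples, k=3.0):
    """Return (lower, upper) control limits at mean +/- k standard deviations."""
    mean = statistics.mean(samples)
    sd = statistics.stdev(samples)
    return mean - k * sd, mean + k * sd

def out_of_control(samples, new_points, k=3.0):
    """Flag new observations outside the baseline control limits."""
    lo, hi = control_limits(samples, k)
    return [x for x in new_points if x < lo or x > hi]

# Baseline: weekly requirement-change counts from the improved process
# (hypothetical metric and data).
baseline = [4, 5, 6, 5, 4, 6, 5, 5]
print(out_of_control(baseline, [5, 6, 14]))  # → [14]
```

Points inside the limits are treated as common-cause noise; a flagged point triggers the root-cause investigation, or another DMAIC if the variation cannot be reduced.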
b. Perform a cost of quality analysis. Calculate the cost savings if the process cycle time is reduced, or the
re-design or rework is reduced or eliminated. This is a measurement of the degree of success or failure
of the DMAIC. If it creates savings, the ROI should be captured for justification and future reference.

Figure 9. DMAIC - Control. The DMAIC cycle with the key Control documents: Control Plan, Standardized Processes, Monitoring, Training, Cost of Quality and Key Learning.

c. The new process map and associated procedures should be published and distributed. Training may be
required for the new process and procedures. Transfer ownership and knowledge to the new process
owner and the project team tasked with the responsibilities, if necessary.
At this step the project is closed. Capture the knowledge gained from the DMAIC project. What is the key
learning? Communicate it to the rest of the organization for knowledge sharing and transfer. Archive it in
knowledge management repositories for future projects. The "Control" step with its key documents is shown in
Figure 9.
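The cost-of-quality calculation in step 5.b is simple arithmetic; the savings categories and figures below are invented for illustration:

```python
def dmaic_roi(rework_saved, cycle_time_saved, redesign_saved, project_cost):
    """ROI of a DMAIC project: total savings relative to the cost of running it."""
    savings = rework_saved + cycle_time_saved + redesign_saved
    return savings / project_cost

# Hypothetical figures, in dollars.
roi = dmaic_roi(rework_saved=120_000, cycle_time_saved=80_000,
                redesign_saved=50_000, project_cost=100_000)
print(f"ROI = {roi:.1f}x")  # → ROI = 2.5x
```

An ROI above 1.0 supports the justification and future-reference record the paper calls for; below 1.0 it measures the degree of failure of the DMAIC.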
VI. Conclusion
The systems engineering revitalization started more than ten years ago. There are still issues with systems
engineering implementation. Large corporations and government organizations conducted independent design
reviews for design deficiencies caused by incorrect systems engineering implementation. When an independent
design review was completed, the program/project design problems associated with systems engineering execution
might be corrected, but the comments from the review team were not captured for future improvements on other
programs/projects. When a program/project was assessed by a Best Practice Systems Engineering Independent
Assessment Team, the assessment was based on interviews with workers and observation of the partial results of an
on-going program/project. The findings from such an assessment were not totally accurate and definitely not based
on final products. Final products are the ultimate indicators of the degree of success of process implementation. A
good product does not need to be diagnosed, just as a healthy person does not need to see a doctor. The Six Sigma
DMAIC process is used to measure and evaluate the defects of an imperfect (quality) product. After thorough study,
evaluation and analysis using the DMAIC methodology, a SEIIP is generated. The systems engineering execution is
then monitored and controlled. If the variation from the improved process cannot be corrected, another DMAIC
may need to be performed.
The DMAIC methodology presented in this paper is not yet worked out in full detail, since it has not yet been
applied to a poorly performing program/project. Readers will have to explore and navigate the recommended path to
apply it to a real case.
Acknowledgments
The authors gratefully acknowledge the funding from the Royal Academy of Engineering UK for the work
reported in this paper.
References
1 Defense AT&L, "Revitalizing Systems Engineering", May-June 2005.
2 Hsu, J. C., Raghunathan, S. and Curran, R., "Effective Learning in Systems Engineering", Aerospace Sciences Meeting,
American Institute of Aeronautics and Astronautics, Reno, Nevada, 2008.
3 Government Accountability Office, "Increased Focus on Requirements and Oversight Needed to Improve DOD's Acquisition
Environment and Weapon System Quality", GAO-08-294, February 2008.
4 Carson, Ronald S., "Using Performance-Based Earned Value for Measuring Systems Engineering Effectiveness", INCOSE
International Symposium, Netherlands, 2008.
5 Schaffer, Robert H. and Thomson, Harvey A., "Successful Change Programs Begin with Results", Harvard Business Review,
Jan-Feb 1992.
6 Rath & Strong, "Six Sigma Pocket Guide", Rath & Strong Management Consultants, Lexington, MA, 2001.
7 Harry, Mikel J. and Schroeder, Richard, "Six Sigma: The Breakthrough Management Strategy Revolutionizing the World's
Top Corporations", Doubleday, December 1999.